id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0907.3924 | Andrea Barreiro | Andrea K. Barreiro, Eric Shea-Brown, Evan L. Thilo | Timescales of spike-train correlation for neural oscillators with common
drive | null | Physical Review E, vol. 81: 011916 (2010) | 10.1103/PhysRevE.81.011916 | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the effect of the phase-resetting curve (PRC) on the transfer of
correlated input signals into correlated output spikes in a class of neural
models receiving noisy, super-threshold stimulation. We use linear response
theory to approximate the spike correlation coefficient in terms of moments of
the associated exit time problem, and contrast the results for Type I vs. Type
II models and across the different timescales over which spike correlations can
be assessed. We find that, on long timescales, Type I oscillators transfer
correlations much more efficiently than Type II oscillators. On short
timescales this trend reverses, with the relative efficiency switching at a
timescale that depends on the mean and standard deviation of input currents.
This switch occurs over timescales that could be exploited by downstream
circuits.
| [
{
"created": "Wed, 22 Jul 2009 20:15:11 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Barreiro",
"Andrea K.",
""
],
[
"Shea-Brown",
"Eric",
""
],
[
"Thilo",
"Evan L.",
""
]
] | We examine the effect of the phase-resetting curve (PRC) on the transfer of correlated input signals into correlated output spikes in a class of neural models receiving noisy, super-threshold stimulation. We use linear response theory to approximate the spike correlation coefficient in terms of moments of the associated exit time problem, and contrast the results for Type I vs. Type II models and across the different timescales over which spike correlations can be assessed. We find that, on long timescales, Type I oscillators transfer correlations much more efficiently than Type II oscillators. On short timescales this trend reverses, with the relative efficiency switching at a timescale that depends on the mean and standard deviation of input currents. This switch occurs over timescales that could be exploited by downstream circuits. |
2105.07284 | Joseph Monaco | Joseph D. Monaco, Kanaka Rajan, Grace M. Hwang | A brain basis of dynamical intelligence for AI and computational
neuroscience | Perspective article: 24 pages, 3 figures, 1 display box | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | The deep neural nets of modern artificial intelligence (AI) have not achieved
defining features of biological intelligence, including abstraction, causal
learning, and energy-efficiency. While scaling to larger models has delivered
performance improvements for current applications, more brain-like capacities
may demand new theories, models, and methods for designing artificial learning
systems. Here, we argue that this opportunity to reassess insights from the
brain should stimulate cooperation between AI research and theory-driven
computational neuroscience (CN). To motivate a brain basis of neural
computation, we present a dynamical view of intelligence from which we
elaborate concepts of sparsity in network structure, temporal dynamics, and
interactive learning. In particular, we suggest that temporal dynamics, as
expressed through neural synchrony, nested oscillations, and flexible
sequences, provide a rich computational layer for reading and updating
hierarchical models distributed in long-term memory networks. Moreover,
embracing agent-centered paradigms in AI and CN will accelerate our
understanding of the complex dynamics and behaviors that build useful world
models. A convergence of AI/CN theories and objectives will reveal dynamical
principles of intelligence for brains and engineered learning systems. This
article was inspired by our symposium on dynamical neuroscience and machine
learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
| [
{
"created": "Sat, 15 May 2021 19:49:32 GMT",
"version": "v1"
},
{
"created": "Fri, 21 May 2021 13:29:51 GMT",
"version": "v2"
}
] | 2021-05-24 | [
[
"Monaco",
"Joseph D.",
""
],
[
"Rajan",
"Kanaka",
""
],
[
"Hwang",
"Grace M.",
""
]
] | The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting. |
2307.04109 | Anurabh Chakravarty | Anurabh Chakravarty, Gnanam Ramasamy | Profiling Of Volatiles In Tissues Of Salacia Reticulata Wight. With
Anti-Diabetic Potential Using GC-MS And Molecular Docking | null | null | null | null | q-bio.BM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Type 2 diabetes mellitus is a global pandemic: a chronic, progressive,
and incompletely understood metabolic condition. The disease is characterized
by elevated blood sugar levels caused either by insufficient production of
insulin or by insulin resistance. The major drugs used to treat the condition
are fraught with side effects. Hence, it is becoming necessary to look to
alternative agents with minimal adverse effects. An important source of such
agents is medicinal plants. Several plants have been positively identified as
showing anti-diabetic effects. The species Salacia reticulata Wight.,
belonging to the family Celastraceae and found in the forests of Southern
India, is one such promising plant for tackling type 2 diabetes. In this
study, numerous volatile compounds were identified from various tissues
through GC-MS analysis. Among these, the compounds possessing suitable ADMET
properties and high binding affinities were compared with two approved
{\alpha}-glucosidase inhibitors, Acarbose and Miglitol. The analysis
indicated that the compounds with PubChem IDs CID-240051 and CID-533471
exhibited potential as inhibitors of the Human Maltase-Glucoamylase enzyme.
| [
{
"created": "Sun, 9 Jul 2023 06:52:29 GMT",
"version": "v1"
}
] | 2023-07-11 | [
[
"Charkravarty",
"Anurabh",
""
],
[
"Ramasamy",
"Gnanam",
""
]
] | Type 2 diabetes mellitus is a global pandemic: a chronic, progressive, and incompletely understood metabolic condition. The disease is characterized by elevated blood sugar levels caused either by insufficient production of insulin or by insulin resistance. The major drugs used to treat the condition are fraught with side effects. Hence, it is becoming necessary to look to alternative agents with minimal adverse effects. An important source of such agents is medicinal plants. Several plants have been positively identified as showing anti-diabetic effects. The species Salacia reticulata Wight., belonging to the family Celastraceae and found in the forests of Southern India, is one such promising plant for tackling type 2 diabetes. In this study, numerous volatile compounds were identified from various tissues through GC-MS analysis. Among these, the compounds possessing suitable ADMET properties and high binding affinities were compared with two approved {\alpha}-glucosidase inhibitors, Acarbose and Miglitol. The analysis indicated that the compounds with PubChem IDs CID-240051 and CID-533471 exhibited potential as inhibitors of the Human Maltase-Glucoamylase enzyme. |
1011.4245 | Diego Fernandez Slezak | Diego Fernandez Slezak, Cecilia Suarez, Guillermo A. Cecchi, Guillermo
Marshall and Gustavo Stolovitzky | When the optimal is not the best: parameter estimation in complex
biological models | null | Fernandez Slezak D, Su\'arez C, Cecchi GA, Marshall G, Stolovitzky
G, 2010 When the Optimal Is Not the Best: Parameter Estimation in Complex
Biological Models. PLoS ONE 5(10): e13283 | 10.1371/journal.pone.0013283 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The vast computational resources that became available during the
past decade enabled the development and simulation of increasingly complex
mathematical models of cancer growth. These models typically involve many free
parameters whose determination is a substantial obstacle to model development.
Direct measurement of biochemical parameters in vivo is often difficult and
sometimes impracticable, while fitting them under data-poor conditions may
result in biologically implausible values.
Results: We discuss different methodological approaches to estimate
parameters in complex biological models. We make use of the high computational
power of the Blue Gene technology to perform an extensive study of the
parameter space in a model of avascular tumor growth. We explicitly show that
the landscape of the cost function used to optimize the model to the data has a
very rugged surface in parameter space. This cost function has many local
minima with unrealistic solutions, including the global minimum corresponding
to the best fit.
Conclusions: The case studied in this paper shows one example in which model
parameters that optimally fit the data are not necessarily the best ones from a
biological point of view. To avoid force-fitting a model to a dataset, we
propose that the best model parameters should be found by choosing, among
suboptimal parameters, those that match criteria other than the ones used to
fit the model. We also conclude that the model, data and optimization approach
form a new complex system, and point to the need of a theory that addresses
this problem more generally.
| [
{
"created": "Thu, 18 Nov 2010 17:58:20 GMT",
"version": "v1"
}
] | 2010-11-19 | [
[
"Slezak",
"Diego Fernandez",
""
],
[
"Suarez",
"Cecilia",
""
],
[
"Cecchi",
"Guillermo A.",
""
],
[
"Marshall",
"Guillermo",
""
],
[
"Stolovitzky",
"Gustavo",
""
]
] | Background: The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. Results: We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions: The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system, and point to the need of a theory that addresses this problem more generally. |
1508.04751 | Ehtibar Dzhafarov | Ehtibar N. Dzhafarov, Janne V. Kujala, Victor H. Cervantes, Ru Zhang,
and Matt Jones | On Contextuality in Behavioral Data | to be published in April 2016 in vol. 374 issue 2066 of Philosophical
Transactions of the Royal Society A | null | 10.1098/rsta.2015.0234 | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dzhafarov, Zhang, and Kujala (Phil. Trans. Roy. Soc. A 374, 20150099)
reviewed several behavioral data sets imitating the formal design of the
quantum-mechanical contextuality experiments. The conclusion was that none of
these data sets exhibited contextuality if understood in the generalized sense
proposed in Dzhafarov, Kujala, and Larsson (Found. Phys. 7, 762-782, 2015),
while the traditional definition of contextuality does not apply to these data
because they violate the condition of consistent connectedness (also known as
marginal selectivity, no-signaling condition, no-disturbance principle, etc.).
In this paper we clarify the relationship between (in)consistent connectedness
and (non)contextuality, as well as between the traditional and extended
definitions of (non)contextuality, using as an example the
Clauser-Horn-Shimony-Holt (CHSH) inequalities originally designed for detecting
contextuality in entangled particles.
| [
{
"created": "Tue, 18 Aug 2015 19:01:06 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Oct 2015 17:06:13 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Nov 2015 07:35:12 GMT",
"version": "v3"
},
{
"created": "Thu, 17 Dec 2015 03:20:25 GMT",
"version": "v4"
},
{
"created": "Thu, 24 Mar 2016 15:10:45 GMT",
"version": "v5"
}
] | 2016-09-28 | [
[
"Dzhafarov",
"Ehtibar N.",
""
],
[
"Kujala",
"Janne V.",
""
],
[
"Cervantes",
"Victor H.",
""
],
[
"Zhang",
"Ru",
""
],
[
"Jones",
"Matt",
""
]
] | Dzhafarov, Zhang, and Kujala (Phil. Trans. Roy. Soc. A 374, 20150099) reviewed several behavioral data sets imitating the formal design of the quantum-mechanical contextuality experiments. The conclusion was that none of these data sets exhibited contextuality if understood in the generalized sense proposed in Dzhafarov, Kujala, and Larsson (Found. Phys. 7, 762-782, 2015), while the traditional definition of contextuality does not apply to these data because they violate the condition of consistent connectedness (also known as marginal selectivity, no-signaling condition, no-disturbance principle, etc.). In this paper we clarify the relationship between (in)consistent connectedness and (non)contextuality, as well as between the traditional and extended definitions of (non)contextuality, using as an example the Clauser-Horn-Shimony-Holt (CHSH) inequalities originally designed for detecting contextuality in entangled particles. |
1003.3093 | A. E. Sitnitsky | A.E. Sitnitsky | Relationship between preexponent and distribution over activation
barrier energies for enzymatic reactions | 21 pages, 5 figures | Current Topics in Catalysis 2012, 10, 101-112 | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A relationship between the preexponent of the rate constant and the
distribution over activation barrier energies for enzymatic/protein reactions
is revealed. We consider an enzyme solution as an ensemble of individual
molecules with different values of the activation barrier energy described by
the distribution. From the solvent viscosity effect on the preexponent we
derive the integral equation for the distribution and find its approximate
solution. Our approach enables us to attain a twofold purpose. On the one hand
it yields a simple interpretation of the solvent viscosity dependence for
enzymatic/protein reactions that requires neither a modification of the
Kramers' theory nor that of the Stokes law. On the other hand our approach
enables us to deduce the form of the distribution over activation barrier
energies. The obtained function has a familiar bell-shaped form and is in
qualitative agreement with the results of single enzyme kinetics measurements.
General formalism is exemplified by the analysis of literature experimental
data.
| [
{
"created": "Tue, 16 Mar 2010 07:50:11 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Nov 2012 08:36:29 GMT",
"version": "v2"
}
] | 2012-11-16 | [
[
"Sitnitsky",
"A. E.",
""
]
] | A relationship between the preexponent of the rate constant and the distribution over activation barrier energies for enzymatic/protein reactions is revealed. We consider an enzyme solution as an ensemble of individual molecules with different values of the activation barrier energy described by the distribution. From the solvent viscosity effect on the preexponent we derive the integral equation for the distribution and find its approximate solution. Our approach enables us to attain a twofold purpose. On the one hand it yields a simple interpretation of the solvent viscosity dependence for enzymatic/protein reactions that requires neither a modification of the Kramers' theory nor that of the Stokes law. On the other hand our approach enables us to deduce the form of the distribution over activation barrier energies. The obtained function has a familiar bell-shaped form and is in qualitative agreement with the results of single enzyme kinetics measurements. General formalism is exemplified by the analysis of literature experimental data. |
1906.07211 | Junqi Wang | Junqi Wang, Li Xiao, Tony W. Wilson, Julia M. Stephen, Vince D.
Calhoun, Yu-Ping Wang | Brain Maturation Study during Adolescence Using Graph Laplacian Learning
Based Fourier Transform | 10 pages | null | null | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Longitudinal neuroimaging studies have demonstrated that
adolescence is the crucial developmental epoch of continued brain growth and
change. Many researchers are dedicated to uncovering the mechanisms of brain
maturation during adolescence. Motivated both by achievements in graph signal
processing and by recent evidence that some brain areas act as hubs
connecting functionally specialized systems, we proposed an approach to
detect these regions from a spectral analysis perspective. In particular, as the human brain
undergoes substantial development throughout adolescence, we addressed the
challenge by evaluating the functional network difference among age groups from
functional magnetic resonance imaging (fMRI) observations. Methods: We treated
these observations as graph signals defined on the parcellated functional brain
regions and applied graph Laplacian learning based Fourier Transform (GLFT) to
transform the original graph signals into the frequency domain. Eigen-analysis was
conducted afterwards to study the behavior of the corresponding brain regions,
which enables the characterization of brain maturation. Result: We first
evaluated our method on synthetic data and further applied it to resting- and
task-state fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC)
dataset, comprising normally developing adolescents aged 8 to 22. The model
achieved a highest accuracy of 95.69% in distinguishing different
adolescence stages. Conclusion: We detected 13 hubs from resting state fMRI and
16 hubs from task state fMRI that are highly related to brain maturation
process. Significance: The proposed GLFT method is powerful in extracting
brain connectivity patterns and identifying hub regions with high prediction
power.
| [
{
"created": "Mon, 17 Jun 2019 18:23:41 GMT",
"version": "v1"
}
] | 2019-06-19 | [
[
"Wang",
"Junqi",
""
],
[
"Xiao",
"Li",
""
],
[
"Wilson",
"Tony W.",
""
],
[
"Stephen",
"Julia M.",
""
],
[
"Calhoun",
"Vince D.",
""
],
[
"Wang",
"Yu-Ping",
""
]
] | Objective: Longitudinal neuroimaging studies have demonstrated that adolescence is the crucial developmental epoch of continued brain growth and change. Many researchers are dedicated to uncovering the mechanisms of brain maturation during adolescence. Motivated both by achievements in graph signal processing and by recent evidence that some brain areas act as hubs connecting functionally specialized systems, we proposed an approach to detect these regions from a spectral analysis perspective. In particular, as the human brain undergoes substantial development throughout adolescence, we addressed the challenge by evaluating the functional network difference among age groups from functional magnetic resonance imaging (fMRI) observations. Methods: We treated these observations as graph signals defined on the parcellated functional brain regions and applied graph Laplacian learning based Fourier Transform (GLFT) to transform the original graph signals into the frequency domain. Eigen-analysis was conducted afterwards to study the behavior of the corresponding brain regions, which enables the characterization of brain maturation. Result: We first evaluated our method on synthetic data and further applied it to resting- and task-state fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) dataset, comprising normally developing adolescents aged 8 to 22. The model achieved a highest accuracy of 95.69% in distinguishing different adolescence stages. Conclusion: We detected 13 hubs from resting state fMRI and 16 hubs from task state fMRI that are highly related to the brain maturation process. Significance: The proposed GLFT method is powerful in extracting brain connectivity patterns and identifying hub regions with high prediction power. |
1609.09036 | Vince Grolmusz | Bal\'azs Szalkai and Csaba Kerepesi and B\'alint Varga and Vince
Grolmusz | High-Resolution Directed Human Connectomes and the Consensus Connectome
Dynamics | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we show a method of directing the edges of the connectomes, prepared
from diffusion tensor imaging (DTI) datasets from the human brain. Before the
present work, no high-definition directed braingraphs (or connectomes) were
published, because the tractography methods in use are not capable of assigning
directions to the neural tracts discovered. Previous work on the functional
connectomes applied low-resolution functional MRI-detected statistical
causality for the assignment of directions of connectomes of typically several
dozens of vertices. Our method is based on the phenomenon of the "Consensus
Connectome Dynamics" (CCD), described earlier by our research group. In this
contribution, we apply the method to the 423 braingraphs, each with 1015
vertices, computed from the public release of the Human Connectome Project, and
we also made the directed connectomes publicly available at the site
\url{http://braingraph.org}. We also show the robustness of our edge directing
method in four independently chosen connectome datasets: we have found that
86\% of the edges, which were present in all four datasets, get the very same
directions in all datasets; therefore, the direction method is robust: it does
not depend on the particular choice of the dataset. We think that our present
contribution opens up new possibilities in the analysis of the high-definition
human connectome: from now on we can work with a robust assignment of
directions of the connections of the human brain.
| [
{
"created": "Wed, 28 Sep 2016 18:35:30 GMT",
"version": "v1"
}
] | 2016-09-29 | [
[
"Szalkai",
"Balázs",
""
],
[
"Kerepesi",
"Csaba",
""
],
[
"Varga",
"Bálint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | Here we show a method of directing the edges of the connectomes, prepared from diffusion tensor imaging (DTI) datasets from the human brain. Before the present work, no high-definition directed braingraphs (or connectomes) were published, because the tractography methods in use are not capable of assigning directions to the neural tracts discovered. Previous work on the functional connectomes applied low-resolution functional MRI-detected statistical causality for the assignment of directions of connectomes of typically several dozens of vertices. Our method is based on the phenomenon of the "Consensus Connectome Dynamics" (CCD), described earlier by our research group. In this contribution, we apply the method to the 423 braingraphs, each with 1015 vertices, computed from the public release of the Human Connectome Project, and we also made the directed connectomes publicly available at the site \url{http://braingraph.org}. We also show the robustness of our edge directing method in four independently chosen connectome datasets: we have found that 86\% of the edges, which were present in all four datasets, get the very same directions in all datasets; therefore the direction method is robust, it does not depend on the particular choice of the dataset. We think that our present contribution opens up new possibilities in the analysis of the high-definition human connectome: from now on we can work with a robust assignment of directions of the connections of the human brain. |
2009.08802 | Christoph M Augustin | Christoph M. Augustin, Matthias A. F. Gsell, Elias Karabelas, Erik
Willemen, Frits W. Prinzen, Joost Lumens, Edward J. Vigmond, Gernot Plank | A computationally efficient physiologically comprehensive 3D-0D
closed-loop model of the heart and circulation | This project has received funding from the EU's H2020 programme under
the Marie Sklodowska-Curie Action InsiliCardio, GA No. 750835 and under the
ERA-NET co-fund action No. 680969 (ERA-CVD SICVALVES) funded by the Austrian
Science Fund (FWF), GA I 4652-B to CMA. The research was supported by the
Grants F3210-N18 and I2760-B30 from the FWF and a BioTechMed Graz flagship
award ILearnHeart to GP | Computer Methods in Applied Mechanics and Engineering 386 (2021)
114092 | 10.1016/j.cma.2021.114092 | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Computer models of cardiac electro-mechanics (EM) show promise as an
effective means for quantitative analysis of clinical data and, potentially,
for predicting therapeutic responses. To realize such advanced applications,
key methodological challenges must be addressed. Enhanced computational
efficiency and robustness are crucial to facilitate, within tractable time
frames, model personalization and the simulation of prolonged observation
periods under a broad range of conditions, while physiological completeness
encompassing therapy-relevant mechanisms is needed to endow models with
predictive capabilities beyond the mere replication of observations. Here, we introduce a
universal feature-complete cardiac EM modeling framework that builds on a
flexible method for coupling a 3D model of bi-ventricular EM to the
physiologically comprehensive 0D CircAdapt model representing atrial mechanics
and closed-loop circulation. A detailed mathematical description is given,
and the efficiency, robustness, and accuracy of the numerical scheme and
solver implementation are evaluated. After parameterization and stabilization
of the coupled 3D-0D model to a limit cycle under baseline conditions, the
model's ability to replicate physiological behaviors is demonstrated by
simulating the
transient response to alterations in loading conditions and contractility, as
induced by experimental protocols used for assessing systolic and diastolic
ventricular properties. Mechanistic completeness and computational efficiency
of this novel model render advanced applications geared towards predicting
acute outcomes of EM therapies feasible.
| [
{
"created": "Wed, 16 Sep 2020 09:52:02 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 10:19:40 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Aug 2021 08:43:58 GMT",
"version": "v3"
}
] | 2021-08-20 | [
[
"Augustin",
"Christoph M.",
""
],
[
"Gsell",
"Matthias A. F.",
""
],
[
"Karabelas",
"Elias",
""
],
[
"Willemen",
"Erik",
""
],
[
"Prinzen",
"Frits W.",
""
],
[
"Lumens",
"Joost",
""
],
[
"Vigmond",
"Edward J.",
""
],
[
"Plank",
"Gernot",
""
]
] | Computer models of cardiac electro-mechanics (EM) show promise as an effective means for quantitative analysis of clinical data and, potentially, for predicting therapeutic responses. To realize such advanced applications, key methodological challenges must be addressed. Enhanced computational efficiency and robustness are crucial to facilitate, within tractable time frames, model personalization and the simulation of prolonged observation periods under a broad range of conditions, while physiological completeness encompassing therapy-relevant mechanisms is needed to endow models with predictive capabilities beyond the mere replication of observations. Here, we introduce a universal feature-complete cardiac EM modeling framework that builds on a flexible method for coupling a 3D model of bi-ventricular EM to the physiologically comprehensive 0D CircAdapt model representing atrial mechanics and closed-loop circulation. A detailed mathematical description is given, and the efficiency, robustness, and accuracy of the numerical scheme and solver implementation are evaluated. After parameterization and stabilization of the coupled 3D-0D model to a limit cycle under baseline conditions, the model's ability to replicate physiological behaviors is demonstrated by simulating the transient response to alterations in loading conditions and contractility, as induced by experimental protocols used for assessing systolic and diastolic ventricular properties. Mechanistic completeness and computational efficiency of this novel model render advanced applications geared towards predicting acute outcomes of EM therapies feasible. |
0706.3589 | Janos Locsei | J. T. Locsei | Persistence of direction increases the drift velocity of run and tumble
chemotaxis | 17 pages, 5 figures | J. Math. Biol. 55, 2007, 41-60 | 10.1007/s00285-007-0080-z | null | q-bio.QM | null | Escherichia coli is a motile bacterium that moves up a chemoattractant
gradient by performing a biased random walk composed of alternating runs and
tumbles. Previous models of run and tumble chemotaxis neglect one or more
features of the motion, namely (i) a cell cannot directly detect a
chemoattractant gradient but rather makes temporal comparisons of
chemoattractant concentration, (ii) rather than being entirely random, tumbles
exhibit persistence of direction, meaning that the new direction after a tumble
is more likely to be in the forward hemisphere, and (iii) rotational Brownian
motion makes it impossible for an E. coli cell to swim in a straight line
during a run. This paper presents an analytic calculation of the chemotactic
drift velocity taking account of (i), (ii) and (iii), for weak chemotaxis. The
analytic results are verified by Monte Carlo simulation. The results reveal a
synergy between temporal comparisons and persistence that enhances the drift
velocity, while rotational Brownian motion reduces the drift velocity.
| [
{
"created": "Mon, 25 Jun 2007 08:42:01 GMT",
"version": "v1"
}
] | 2007-06-26 | [
[
"Locsei",
"J. T.",
""
]
] | Escherichia coli is a motile bacterium that moves up a chemoattractant gradient by performing a biased random walk composed of alternating runs and tumbles. Previous models of run and tumble chemotaxis neglect one or more features of the motion, namely (i) a cell cannot directly detect a chemoattractant gradient but rather makes temporal comparisons of chemoattractant concentration, (ii) rather than being entirely random, tumbles exhibit persistence of direction, meaning that the new direction after a tumble is more likely to be in the forward hemisphere, and (iii) rotational Brownian motion makes it impossible for an E. coli cell to swim in a straight line during a run. This paper presents an analytic calculation of the chemotactic drift velocity taking account of (i), (ii) and (iii), for weak chemotaxis. The analytic results are verified by Monte Carlo simulation. The results reveal a synergy between temporal comparisons and persistence that enhances the drift velocity, while rotational Brownian motion reduces the drift velocity. |
2001.06135 | Borislav Hristov | Borislav H. Hristov, Bernard Chazelle and Mona Singh | A guided network propagation approach to identify disease genes that
combines prior and new information | RECOMB2020 | null | null | null | q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in biomedical data science is to identify the causal genes
underlying complex genetic diseases. Despite the massive influx of genome
sequencing data, identifying disease-relevant genes remains difficult as
individuals with the same disease may share very few, if any, genetic variants.
Protein-protein interaction networks provide a means to tackle this
heterogeneity, as genes causing the same disease tend to be proximal within
networks. Previously, network propagation approaches have spread signal across
the network from either known disease genes or genes that are newly putatively
implicated in the disease (e.g., found to be mutated in exome studies or linked
via genome-wide association studies). Here we introduce a general framework
that considers both sources of data within a network context. Specifically, we
use prior knowledge of disease-associated genes to guide random walks initiated
from genes that are newly identified as perhaps disease-relevant. In
large-scale testing across 24 cancer types, we demonstrate that our approach
for integrating both prior and new information not only better identifies
cancer driver genes than using either source of information alone but also
readily outperforms other state-of-the-art network-based approaches. To
demonstrate the versatility of our approach, we also apply it to genome-wide
association data to identify genes functionally relevant for several complex
diseases. Overall, our work suggests that guided network propagation approaches
that utilize both prior and new data are a powerful means to identify disease
genes.
| [
{
"created": "Fri, 17 Jan 2020 01:55:59 GMT",
"version": "v1"
}
] | 2020-01-20 | [
[
"Hristov",
"Borislav H.",
""
],
[
"Chazelle",
"Bernard",
""
],
[
"Singh",
"Mona",
""
]
] | A major challenge in biomedical data science is to identify the causal genes underlying complex genetic diseases. Despite the massive influx of genome sequencing data, identifying disease-relevant genes remains difficult as individuals with the same disease may share very few, if any, genetic variants. Protein-protein interaction networks provide a means to tackle this heterogeneity, as genes causing the same disease tend to be proximal within networks. Previously, network propagation approaches have spread signal across the network from either known disease genes or genes that are newly putatively implicated in the disease (e.g., found to be mutated in exome studies or linked via genome-wide association studies). Here we introduce a general framework that considers both sources of data within a network context. Specifically, we use prior knowledge of disease-associated genes to guide random walks initiated from genes that are newly identified as perhaps disease-relevant. In large-scale testing across 24 cancer types, we demonstrate that our approach for integrating both prior and new information not only better identifies cancer driver genes than using either source of information alone but also readily outperforms other state-of-the-art network-based approaches. To demonstrate the versatility of our approach, we also apply it to genome-wide association data to identify genes functionally relevant for several complex diseases. Overall, our work suggests that guided network propagation approaches that utilize both prior and new data are a powerful means to identify disease genes. |
1412.5909 | Peter Erdi | P\'eter \'Erdi | Teaching Computational Neuroscience | 8 pages | null | 10.1007/s11571-015-9340-6 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problems and beauty of teaching computational neuroscience are discussed
by reviewing three new textbooks.
| [
{
"created": "Thu, 18 Dec 2014 15:52:08 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Mar 2015 17:09:21 GMT",
"version": "v2"
}
] | 2015-03-24 | [
[
"Érdi",
"Péter",
""
]
] | The problems and beauty of teaching computational neuroscience are discussed by reviewing three new textbooks. |
1912.11870 | Sergiy Perepelytsya | Sergiy Perepelytsya | Positively and negatively hydrated counterions in molecular dynamics
simulations of DNA double helix | 18 pages, 6 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The DNA double helix is a polyanionic macromolecule that in water solutions
is neutralized by metal ions (counterions). The property of the counterions to
stabilize the water network (positive hydration) or to make it friable
(negative hydration) is important in terms of the physical mechanisms of
stabilization of the DNA double helix. In the present research, the effects of
positive hydration of Na$^{+}$ counterions and negative hydration of K$^{+}$
and Cs$^{+}$ counterions, incorporated into the hydration shell of the DNA
double helix have been studied using molecular dynamics simulations. The
results have shown that the dynamics of the hydration shell of counterions
depends on region of the double helix: minor groove, major groove, and outside
the macromolecule. The longest average residence time has been observed for
water molecules contacting with the counterions, localized in the minor groove
of the double helix (about 50 ps for Na$^{+}$, and lower than 10 ps for K$^{+}$
and Cs$^{+}$). The estimated potentials of mean force for the hydration shells
of the counterions show that the water molecules are constrained too strong,
and, consequently, the effect of negative hydration for K$^{+}$ and Cs$^{+}$
counterions has not been observed in the simulations. The analysis has shown
that the effects of counterion hydration can be described more accurately with
water models having lower dipole moments.
| [
{
"created": "Thu, 26 Dec 2019 14:04:45 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Mar 2020 08:11:29 GMT",
"version": "v2"
}
] | 2020-03-24 | [
[
"Perepelytsya",
"Sergiy",
""
]
] | The DNA double helix is a polyanionic macromolecule that in water solutions is neutralized by metal ions (counterions). The property of the counterions to stabilize the water network (positive hydration) or to make it friable (negative hydration) is important in terms of the physical mechanisms of stabilization of the DNA double helix. In the present research, the effects of positive hydration of Na$^{+}$ counterions and negative hydration of K$^{+}$ and Cs$^{+}$ counterions, incorporated into the hydration shell of the DNA double helix have been studied using molecular dynamics simulations. The results have shown that the dynamics of the hydration shell of counterions depends on region of the double helix: minor groove, major groove, and outside the macromolecule. The longest average residence time has been observed for water molecules contacting with the counterions, localized in the minor groove of the double helix (about 50 ps for Na$^{+}$, and lower than 10 ps for K$^{+}$ and Cs$^{+}$). The estimated potentials of mean force for the hydration shells of the counterions show that the water molecules are constrained too strong, and, consequently, the effect of negative hydration for K$^{+}$ and Cs$^{+}$ counterions has not been observed in the simulations. The analysis has shown that the effects of counterion hydration can be described more accurately with water models having lower dipole moments. |
1310.3754 | Zachary Kilpatrick PhD | Zachary P. Kilpatrick, Bard Ermentrout, and Brent Doiron | Optimizing working memory with heterogeneity of recurrent cortical
excitation | 42 pages, 9 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A neural correlate of parametric working memory is a stimulus specific rise
in neuron firing rate that persists long after the stimulus is removed. Network
models with local excitation and broad inhibition support persistent neural
activity, linking network architecture and parametric working memory. Cortical
neurons receive noisy input fluctuations which causes persistent activity to
diffusively wander about the network, degrading memory over time. We explore
how cortical architecture that supports parametric working memory affects the
diffusion of persistent neural activity. Studying both a spiking network and a
simplified potential well model, we show that spatially heterogeneous
excitatory coupling stabilizes a discrete number of persistent states, reducing
the diffusion of persistent activity over the network. However, heterogeneous
coupling also coarse-grains the stimulus representation space, limiting the
capacity of parametric working memory. The storage errors due to
coarse-graining and diffusion tradeoff so that information transfer between the
initial and recalled stimulus is optimized at a fixed network heterogeneity.
For sufficiently long delay times, the optimal number of attractors is less
than the number of possible stimuli, suggesting that memory networks can
under-represent stimulus space to optimize performance. Our results clearly
demonstrate the effects of network architecture and stochastic fluctuations on
parametric memory storage.
| [
{
"created": "Mon, 14 Oct 2013 17:30:00 GMT",
"version": "v1"
}
] | 2013-10-15 | [
[
"Kilpatrick",
"Zachary P.",
""
],
[
"Ermentrout",
"Bard",
""
],
[
"Doiron",
"Brent",
""
]
] | A neural correlate of parametric working memory is a stimulus specific rise in neuron firing rate that persists long after the stimulus is removed. Network models with local excitation and broad inhibition support persistent neural activity, linking network architecture and parametric working memory. Cortical neurons receive noisy input fluctuations which causes persistent activity to diffusively wander about the network, degrading memory over time. We explore how cortical architecture that supports parametric working memory affects the diffusion of persistent neural activity. Studying both a spiking network and a simplified potential well model, we show that spatially heterogeneous excitatory coupling stabilizes a discrete number of persistent states, reducing the diffusion of persistent activity over the network. However, heterogeneous coupling also coarse-grains the stimulus representation space, limiting the capacity of parametric working memory. The storage errors due to coarse-graining and diffusion tradeoff so that information transfer between the initial and recalled stimulus is optimized at a fixed network heterogeneity. For sufficiently long delay times, the optimal number of attractors is less than the number of possible stimuli, suggesting that memory networks can under-represent stimulus space to optimize performance. Our results clearly demonstrate the effects of network architecture and stochastic fluctuations on parametric memory storage. |
1908.06180 | Dongrui Wu | Zhenhua Shi, Xiaomo Chen, Changming Zhao, He He, Veit Stuphorn and
Dongrui Wu | Multi-View Broad Learning System for Primate Oculomotor Decision
Decoding | null | IEEE Transactions on Neural Systems and Rehabilitation
Engineering, 2020 | null | null | q-bio.NC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-view learning improves the learning performance by utilizing multi-view
data: data collected from multiple sources, or feature sets extracted from the
same data source. This approach is suitable for primate brain state decoding
using cortical neural signals. This is because the complementary components of
simultaneously recorded neural signals, local field potentials (LFPs) and
action potentials (spikes), can be treated as two views. In this paper, we
extended broad learning system (BLS), a recently proposed wide neural network
architecture, from single-view learning to multi-view learning, and validated
its performance in decoding monkeys' oculomotor decision from medial frontal
LFPs and spikes. We demonstrated that medial frontal LFPs and spikes in
non-human primate do contain complementary information about the oculomotor
decision, and that the proposed multi-view BLS is a more effective approach for
decoding the oculomotor decision than several classical and state-of-the-art
single-view and multi-view learning approaches.
| [
{
"created": "Fri, 16 Aug 2019 21:23:20 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Jan 2020 16:18:49 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Jul 2020 22:53:26 GMT",
"version": "v3"
}
] | 2020-07-06 | [
[
"Shi",
"Zhenhua",
""
],
[
"Chen",
"Xiaomo",
""
],
[
"Zhao",
"Changming",
""
],
[
"He",
"He",
""
],
[
"Stuphorn",
"Veit",
""
],
[
"Wu",
"Dongrui",
""
]
] | Multi-view learning improves the learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source. This approach is suitable for primate brain state decoding using cortical neural signals. This is because the complementary components of simultaneously recorded neural signals, local field potentials (LFPs) and action potentials (spikes), can be treated as two views. In this paper, we extended broad learning system (BLS), a recently proposed wide neural network architecture, from single-view learning to multi-view learning, and validated its performance in decoding monkeys' oculomotor decision from medial frontal LFPs and spikes. We demonstrated that medial frontal LFPs and spikes in non-human primate do contain complementary information about the oculomotor decision, and that the proposed multi-view BLS is a more effective approach for decoding the oculomotor decision than several classical and state-of-the-art single-view and multi-view learning approaches. |
1602.07146 | Ciprian Palaghianu Dr. | R. Cenusa, I. Biris, F. Clinovschi, I. Barnoaiea, C. Palaghianu, M.
Teodosiu | Variabilitatea structurala a padurii naturale. Studiu de caz: Calimani | 6 pages, 2 figures. Cenusa, R., Biris, I., Clinovschi, F., Barnoaiea,
I., Palaghianu, C., Teodosiu, M. (2008). Variabilitatea structurala a padurii
naturale. Studiu de caz: Calimani, Lucrarile Sesiunii Stiintifice MENER. UPB,
Sinaia, 4-7 septembrie 2008, 451-456 | Lucrarile Sesiunii Stiintifice MENER. UPB, Sinaia, 4-7 septembrie
2008, 451-456 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents the importance of research which characterizes the natural
forest structure for the forest management. The lessons learned in these
particular forest ecosystems can be integrated by the forest management
objectives, in order to increase the sustainability of this type of resources.
The project NATFORMAN was focused on the structure of the natural forest, thus
research methodologies and modern technology (such as Field-Map) investigation
and determination were used in order to record information on forest structural
parameters. The results obtained refer to these structural parameters and to
the possibility of transferring such information in practice, in order to
achieve forest sustainable management.
| [
{
"created": "Tue, 23 Feb 2016 13:33:07 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Feb 2016 13:24:42 GMT",
"version": "v2"
}
] | 2016-02-25 | [
[
"Cenusa",
"R.",
""
],
[
"Biris",
"I.",
""
],
[
"Clinovschi",
"F.",
""
],
[
"Barnoaiea",
"I.",
""
],
[
"Palaghianu",
"C.",
""
],
[
"Teodosiu",
"M.",
""
]
] | The paper presents the importance of research which characterizes the natural forest structure for the forest management. The lessons learned in these particular forest ecosystems can be integrated by the forest management objectives, in order to increase the sustainability of this type of resources. The project NATFORMAN was focused on the structure of the natural forest, thus research methodologies and modern technology (such as Field-Map) investigation and determination were used in order to record information on forest structural parameters. The results obtained refer to these structural parameters and to the possibility of transferring such information in practice, in order to achieve forest sustainable management. |
2308.06593 | Martin Bootsma | Martin Bootsma, Danny Chan, Odo Diekmann and Hisashi Inaba | The effect of host population heterogeneity on epidemic outbreaks | 36 pages, 5 figures | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | In the first part of this paper, we review old and new results about the
influence of host population heterogeneity on (various characteristics of)
epidemic outbreaks. In the second part we highlight a modelling issue that so
far has received little attention: how do contact patterns, and hence
transmission opportunities, depend on the size and the composition of the host
population? Without any claim on completeness, we offer a range of potential
(quasi-mechanistic) submodels. The overall aim of the paper is to describe the
state-of-the-art and to catalyse new work.
| [
{
"created": "Sat, 12 Aug 2023 15:16:50 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jan 2024 11:22:44 GMT",
"version": "v2"
}
] | 2024-01-17 | [
[
"Bootsma",
"Martin",
""
],
[
"Chan",
"Danny",
""
],
[
"Diekmann",
"Odo",
""
],
[
"Inaba",
"Hisashi",
""
]
] | In the first part of this paper, we review old and new results about the influence of host population heterogeneity on (various characteristics of) epidemic outbreaks. In the second part we highlight a modelling issue that so far has received little attention: how do contact patterns, and hence transmission opportunities, depend on the size and the composition of the host population? Without any claim on completeness, we offer a range of potential (quasi-mechanistic) submodels. The overall aim of the paper is to describe the state-of-the-art and to catalyse new work. |
0808.2322 | Anca Radulescu | Anca Radulescu | A multi-etiology model of systemic degeneration in schizophrenia | 11 pages, 6 figures, 4 page bibliography | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We discuss the possibility of multiple underlying etiologies of the condition
currently labeled as schizophrenia. We support this hypothesis with a
theoretical model of the prefrontal-limbic system. We show how the dynamical
behavior of this model depends on an entire set of physiological parameters,
representing synaptic strengths, vulnerability to stress-induced cortisol,
dopamine regulation and rates of autoantibody production. Malfunction of
different such parameters produces similar outward dysregulation of the system,
which may readily lead to diagnosis difficulties in a clinician's office. We
further place this paradigm within the contexts of pathophysiology and of
antipsychotic pharmacology. We finally propose brain profiling as the future
quantitative diagnostic toolbox that agrees with a multiple etiologies
hypothesis of schizophrenia.
| [
{
"created": "Sun, 17 Aug 2008 23:56:42 GMT",
"version": "v1"
}
] | 2008-08-19 | [
[
"Radulescu",
"Anca",
""
]
] | We discuss the possibility of multiple underlying etiologies of the condition currently labeled as schizophrenia. We support this hypothesis with a theoretical model of the prefrontal-limbic system. We show how the dynamical behavior of this model depends on an entire set of physiological parameters, representing synaptic strengths, vulnerability to stress-induced cortisol, dopamine regulation and rates of autoantibody production. Malfunction of different such parameters produces similar outward dysregulation of the system, which may readily lead to diagnosis difficulties in a clinician's office. We further place this paradigm within the contexts of pathophysiology and of antipsychotic pharmacology. We finally propose brain profiling as the future quantitative diagnostic toolbox that agrees with a multiple etiologies hypothesis of schizophrenia. |
1408.5628 | Jorge G. T. Za\~nudo | Jorge G. T. Za\~nudo and R\'eka Albert | Cell fate reprogramming by control of intracellular network dynamics | 61 pages (main text, 15 pages; supporting information, 46 pages) and
12 figures (main text, 6 figures; supporting information, 6 figures). In
review | null | 10.1371/journal.pcbi.1004193 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying control strategies for biological networks is paramount for
practical applications that involve reprogramming a cell's fate, such as
disease therapeutics and stem cell reprogramming. Here we develop a novel
network control framework that integrates the structural and functional
information available for intracellular networks to predict control targets.
Formulated in a logical dynamic scheme, our approach drives any initial state
to the target state with 100% effectiveness and needs to be applied only
transiently for the network to reach and stay in the desired state. We
illustrate our method's potential to find intervention targets for cancer
treatment and cell differentiation by applying it to a leukemia signaling
network and to the network controlling the differentiation of helper T cells.
We find that the predicted control targets are effective in a broad dynamic
framework. Moreover, several of the predicted interventions are supported by
experiments.
| [
{
"created": "Sun, 24 Aug 2014 19:27:34 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Dec 2014 23:39:06 GMT",
"version": "v2"
}
] | 2015-08-19 | [
[
"Zañudo",
"Jorge G. T.",
""
],
[
"Albert",
"Réka",
""
]
] | Identifying control strategies for biological networks is paramount for practical applications that involve reprogramming a cell's fate, such as disease therapeutics and stem cell reprogramming. Here we develop a novel network control framework that integrates the structural and functional information available for intracellular networks to predict control targets. Formulated in a logical dynamic scheme, our approach drives any initial state to the target state with 100% effectiveness and needs to be applied only transiently for the network to reach and stay in the desired state. We illustrate our method's potential to find intervention targets for cancer treatment and cell differentiation by applying it to a leukemia signaling network and to the network controlling the differentiation of helper T cells. We find that the predicted control targets are effective in a broad dynamic framework. Moreover, several of the predicted interventions are supported by experiments. |
1805.04164 | Momiao Xiong | Rong Jiao, Nan Lin, Zixin Hu, David A Bennett, Li Jin and Momiao Xiong | Bivariate Causal Discovery and its Applications to Gene Expression and
Imaging Data Analysis | null | null | null | null | q-bio.GN stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mainstream of research in genetics, epigenetics and imaging data analysis
focuses on statistical association or exploring statistical dependence between
variables. Despite their significant progresses in genetic research,
understanding the etiology and mechanism of complex phenotypes remains elusive.
Using association analysis as a major analytical platform for the complex data
analysis is a key issue that hampers the theoretic development of genomic
science and its application in practice. Causal inference is an essential
component for the discovery of mechanical relationships among complex
phenotypes. Many researchers suggest making the transition from association to
causation. Despite its fundamental role in science, engineering and
biomedicine, the traditional methods for causal inference require at least
three variables. However, quantitative genetic analysis such as QTL, eQTL,
mQTL, and genomic-imaging data analysis requires exploring the causal
relationships between two variables. This paper will focus on bivariate causal
discovery. We will introduce independence of cause and mechanism (ICM) as a
basic principle for causal inference, algorithmic information theory and
additive noise model (ANM) as major tools for bivariate causal discovery.
Large-scale simulations will be performed to evaluate the feasibility of the
ANM for bivariate causal discovery. To further evaluate their performance for
causal inference, the ANM will be applied to the construction of gene
regulatory networks. Also, the ANM will be applied to trait-imaging data
analysis to illustrate three scenarios: presence of both causation and
association, presence of association while absence of causation, and presence
of causation, while lack of association between two variables.
| [
{
"created": "Thu, 10 May 2018 20:27:13 GMT",
"version": "v1"
}
] | 2018-05-14 | [
[
"Jiao",
"Rong",
""
],
[
"Lin",
"Nan",
""
],
[
"Hu",
"Zixin",
""
],
[
"Bennett",
"David A",
""
],
[
"Jin",
"Li",
""
],
[
"Xiong",
"Momiao",
""
]
] | The mainstream of research in genetics, epigenetics and imaging data analysis focuses on statistical association or exploring statistical dependence between variables. Despite their significant progresses in genetic research, understanding the etiology and mechanism of complex phenotypes remains elusive. Using association analysis as a major analytical platform for the complex data analysis is a key issue that hampers the theoretic development of genomic science and its application in practice. Causal inference is an essential component for the discovery of mechanical relationships among complex phenotypes. Many researchers suggest making the transition from association to causation. Despite its fundamental role in science, engineering and biomedicine, the traditional methods for causal inference require at least three variables. However, quantitative genetic analysis such as QTL, eQTL, mQTL, and genomic-imaging data analysis requires exploring the causal relationships between two variables. This paper will focus on bivariate causal discovery. We will introduce independence of cause and mechanism (ICM) as a basic principle for causal inference, algorithmic information theory and additive noise model (ANM) as major tools for bivariate causal discovery. Large-scale simulations will be performed to evaluate the feasibility of the ANM for bivariate causal discovery. To further evaluate their performance for causal inference, the ANM will be applied to the construction of gene regulatory networks. Also, the ANM will be applied to trait-imaging data analysis to illustrate three scenarios: presence of both causation and association, presence of association while absence of causation, and presence of causation, while lack of association between two variables. |
1907.00694 | Danko Nikolic | Madeline E. Klinger, Christian A. Kell, Danko Nikolic | Quickly fading afterimages: hierarchical adaptations in human perception | 3 pages, 3 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Afterimages result from a prolonged exposure to still visual stimuli. They
are best detectable when viewed against uniform backgrounds and can persist for
multiple seconds. Consequently, the dynamics of afterimages appears to be slow
by their very nature. To the contrary, we report here that about 50% of an
afterimage intensity can be erased rapidly--within less than a second. The
prerequisite is that subjects view a rich visual content to erase the
afterimage; fast erasure of afterimages does not occur if subjects view a blank
screen. Moreover, we find evidence that fast removal of afterimages is a skill
learned with practice as our subjects were always more effective in cleaning up
afterimages in later parts of the experiment. These results can be explained by
a tri-level hierarchy of adaptive mechanisms, as has been proposed by the
theory of practopoiesis.
| [
{
"created": "Mon, 10 Jun 2019 06:22:18 GMT",
"version": "v1"
}
] | 2019-07-02 | [
[
"Klinger",
"Madeline E.",
""
],
[
"Kell",
"Christian A.",
""
],
[
"Nikolic",
"Danko",
""
]
] | Afterimages result from a prolonged exposure to still visual stimuli. They are best detectable when viewed against uniform backgrounds and can persist for multiple seconds. Consequently, the dynamics of afterimages appears to be slow by their very nature. To the contrary, we report here that about 50% of an afterimage intensity can be erased rapidly--within less than a second. The prerequisite is that subjects view a rich visual content to erase the afterimage; fast erasure of afterimages does not occur if subjects view a blank screen. Moreover, we find evidence that fast removal of afterimages is a skill learned with practice as our subjects were always more effective in cleaning up afterimages in later parts of the experiment. These results can be explained by a tri-level hierarchy of adaptive mechanisms, as has been proposed by the theory of practopoiesis. |
1904.05036 | Sabine Plancoulaine | Eve Reynaud (CRESS - U1153, EHESP), Marie-Fran\c{c}oise Vecchierini,
Barbara Heude (CRESS - U1153), Marie-Aline Charles (CRESS - U1153), Sabine
Plancoulaine (INSERM, CRESS - U1153) | Sleep and its relation to cognition and behaviour in preschool-aged
children of the general population: a systematic review | null | Journal of Sleep Research, Wiley, 2018, 27 (3), pp.e12636 | 10.1111/jsr.12636 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: While the relations between sleep, cognition and behavior have
been extensively studied in adolescents and school-aged children, very little
attention has been given to preschoolers. Objective: In this systematic review,
our aim was to survey articles that address the link between sleep and both
cognition and behavior in preschoolers (24 to 72 months old). Methods: Four
electronic databases were searched, namely Medline, Web of Science, PsycINFO
and ERIC, completed by forward and backward citation search. Results: Among the
1590 articles identified (minus duplicates), 26 met the inclusion criteria.
Globally, studies with the largest sample sizes (N=13) found that a greater
quantity or quality of sleep was associated with better behavioral and
cognitive outcomes, while the others were less consistent. Conclusion: Although
the current literature seems to indicate that sleep is related to behavioral
and cognitive development as early as preschool years, the strength of the
associations (i.e. effect sizes) was relatively small. In addition to taking
stock of the available data, this systematic review identifies potential
sources of improvement for future research.
| [
{
"created": "Wed, 10 Apr 2019 07:47:20 GMT",
"version": "v1"
}
] | 2019-04-11 | [
[
"Reynaud",
"Eve",
"",
"CRESS - U1153, EHESP"
],
[
"Vecchierini",
"Marie-Françoise",
"",
"CRESS - U1153"
],
[
"Heude",
"Barbara",
"",
"CRESS - U1153"
],
[
"Charles",
"Marie-Aline",
"",
"CRESS - U1153"
],
[
"Plancoulaine",
"Sabine",
"",
"INSERM, CRESS - U1153"
]
] | Background: While the relations between sleep, cognition and behavior have been extensively studied in adolescents and school-aged children, very little attention has been given to preschoolers. Objective: In this systematic review, our aim was to survey articles that address the link between sleep and both cognition and behavior in preschoolers (24 to 72 months old). Methods: Four electronic databases were searched, namely Medline, Web of Science, PsycINFO and ERIC, completed by forward and backward citation search. Results: Among the 1590 articles identified (minus duplicates), 26 met the inclusion criteria. Globally, studies with the largest sample sizes (N=13) found that a greater quantity or quality of sleep was associated with better behavioral and cognitive outcomes, while the others were less consistent. Conclusion: Although the current literature seems to indicate that sleep is related to behavioral and cognitive development as early as preschool years, the strength of the associations (i.e. effect sizes) was relatively small. In addition to taking stock of the available data, this systematic review identifies potential sources of improvement for future research. |
q-bio/0605001 | Alexei Vazquez | Alexei Vazquez | Spreading of infectious diseases on heterogeneous populations:
multi-type network approach | 28 pages, 4 figures | Phys. Rev. E 74, 066114 (2006) | 10.1103/PhysRevE.74.066114 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | I study the spreading of infectious diseases on heterogeneous populations. I
represent the population structure by a contact-graph where vertices represent
agents and edges represent disease transmission channels among them. The
population heterogeneity is taken into account by the agents' subdivision into
types and the mixing matrix among them. I introduce a type-network
representation for the mixing matrix allowing an intuitive understanding of the
mixing patterns and the analytical calculations. Using an iterative approach I
obtain recursive equations for the probability distribution of the outbreak
size as a function of time. I demonstrate that the expected outbreak size and
its progression in time are determined by the largest eigenvalue of the
reproductive number matrix and the characteristic distance between agents on
the contact-graph. Finally, I discuss the impact of intervention strategies to
halt epidemic outbreaks. This work provides both a qualitative understanding
and tools to obtain quantitative predictions for the spreading dynamics on
heterogeneous populations.
| [
{
"created": "Sat, 29 Apr 2006 19:48:37 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Vazquez",
"Alexei",
""
]
] | I study the spreading of infectious diseases on heterogeneous populations. I represent the population structure by a contact-graph where vertices represent agents and edges represent disease transmission channels among them. The population heterogeneity is taken into account by the agents' subdivision into types and the mixing matrix among them. I introduce a type-network representation for the mixing matrix allowing an intuitive understanding of the mixing patterns and the analytical calculations. Using an iterative approach I obtain recursive equations for the probability distribution of the outbreak size as a function of time. I demonstrate that the expected outbreak size and its progression in time are determined by the largest eigenvalue of the reproductive number matrix and the characteristic distance between agents on the contact-graph. Finally, I discuss the impact of intervention strategies to halt epidemic outbreaks. This work provides both a qualitative understanding and tools to obtain quantitative predictions for the spreading dynamics on heterogeneous populations. |
1408.1085 | Jesus Gomez-Gardenes | Alejandro Torres-Sanchez, Jesus Gomez-Gardenes and Fernando Falo | An integrative approach for modeling and simulation of Heterocyst
pattern formation in Cyanobacteria strands | 20 pages (including the supporting information), 8 figures | PLoS Comput Biol 11(3): e1004129 (2015) | 10.1371/journal.pcbi.1004129 | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A comprehensive approach to cellular differentiation in cyanobacteria is
developed. To this aim, the process of heterocyst cell formation is studied
under a systems biology point of view. By relying on statistical physics
techniques, we translate the essential ingredients and mechanisms of the
genetic circuit into a set of differential equations that describes the
continuous time evolution of combined nitrogen, PatS, HetR and NtcA
concentrations. The detailed analysis of these equations gives insight into the
single cell dynamics. On the other hand, the inclusion of diffusion and noisy
conditions allows simulating the formation of heterocyst patterns in
cyanobacteria strains. The time evolution of relevant component concentrations
is calculated, allowing for a comparison with experiments. Finally, we discuss
the validity and the possible improvements of the model.
| [
{
"created": "Tue, 5 Aug 2014 10:43:26 GMT",
"version": "v1"
}
] | 2015-04-30 | [
[
"Torres-Sanchez",
"Alejandro",
""
],
[
"Gomez-Gardenes",
"Jesus",
""
],
[
"Falo",
"Fernando",
""
]
] | A comprehensive approach to cellular differentiation in cyanobacteria is developed. To this aim, the process of heterocyst cell formation is studied under a systems biology point of view. By relying on statistical physics techniques, we translate the essential ingredients and mechanisms of the genetic circuit into a set of differential equations that describes the continuous time evolution of combined nitrogen, PatS, HetR and NtcA concentrations. The detailed analysis of these equations gives insight into the single cell dynamics. On the other hand, the inclusion of diffusion and noisy conditions allows simulating the formation of heterocyst patterns in cyanobacteria strains. The time evolution of relevant component concentrations is calculated, allowing for a comparison with experiments. Finally, we discuss the validity and the possible improvements of the model. |
1211.3133 | Alex Lang | Alex H. Lang, Hu Li, James J. Collins, and Pankaj Mehta | Epigenetic landscapes explain partially reprogrammed cells and identify
key reprogramming genes | 24 pages in main text with 11 pages in Supplementary Information, 6
Figures, 6 Data Files. v2 correctly attaches Data Files, no paper changes. v3
only change in Data File TF_Reprogramming_Candidates.xls so that
Overexpression / Knockout were in separate lists. v4 updates file to version
published in Plos Comp Bio | Lang AH, Li H, Collins JJ, Mehta P (2014) Epigenetic Landscapes
Explain Partially Reprogrammed Cells and Identify Key Reprogramming Genes.
PLoS Comput Biol 10(8): e1003734. doi:10.1371/journal.pcbi.1003734 | 10.1371/journal.pcbi.1003734 | null | q-bio.MN cond-mat.dis-nn cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common metaphor for describing development is a rugged "epigenetic
landscape" where cell fates are represented as attracting valleys resulting
from a complex regulatory network. Here, we introduce a framework for
explicitly constructing epigenetic landscapes that combines genomic data with
techniques from spin-glass physics. Each cell fate is a dynamic attractor, yet
cells can change fate in response to external signals. Our model suggests that
partially reprogrammed cells are a natural consequence of high-dimensional
landscapes, and predicts that partially reprogrammed cells should be hybrids
that co-express genes from multiple cell fates. We verify this prediction by
reanalyzing existing datasets. Our model reproduces known reprogramming
protocols and identifies candidate transcription factors for reprogramming to
novel cell fates, suggesting epigenetic landscapes are a powerful paradigm for
understanding cellular identity.
| [
{
"created": "Tue, 13 Nov 2012 21:07:27 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Nov 2012 01:27:46 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Nov 2012 20:43:03 GMT",
"version": "v3"
},
{
"created": "Wed, 10 Sep 2014 17:24:00 GMT",
"version": "v4"
}
] | 2014-09-11 | [
[
"Lang",
"Alex H.",
""
],
[
"Li",
"Hu",
""
],
[
"Collins",
"James J.",
""
],
[
"Mehta",
"Pankaj",
""
]
] | A common metaphor for describing development is a rugged "epigenetic landscape" where cell fates are represented as attracting valleys resulting from a complex regulatory network. Here, we introduce a framework for explicitly constructing epigenetic landscapes that combines genomic data with techniques from spin-glass physics. Each cell fate is a dynamic attractor, yet cells can change fate in response to external signals. Our model suggests that partially reprogrammed cells are a natural consequence of high-dimensional landscapes, and predicts that partially reprogrammed cells should be hybrids that co-express genes from multiple cell fates. We verify this prediction by reanalyzing existing datasets. Our model reproduces known reprogramming protocols and identifies candidate transcription factors for reprogramming to novel cell fates, suggesting epigenetic landscapes are a powerful paradigm for understanding cellular identity. |
q-bio/0312036 | Peng-Ye Wang | Wei Li, Shuo-Xing Dou, Peng-Ye Wang | Brownian Dynamics Simulation of Nucleosome Formation and Disruption
under Stretching | 16 pages, 11 figures | Journal of Theoretical Biology 230 (2004) 375-383 | null | null | q-bio.BM | null | Using a Brownian dynamics simulation, we numerically studied the interaction
of DNA with histone and proposed an octamer-rotation model to describe the
process of nucleosome formation. Nucleosome disruption under stretching was
also simulated. The theoretical curves of extension versus time as well as of
force versus extension are consistent with previous experimental results.
| [
{
"created": "Tue, 23 Dec 2003 00:57:12 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Nov 2004 04:56:57 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Li",
"Wei",
""
],
[
"Dou",
"Shuo-Xing",
""
],
[
"Wang",
"Peng-Ye",
""
]
] | Using a Brownian dynamics simulation, we numerically studied the interaction of DNA with histone and proposed an octamer-rotation model to describe the process of nucleosome formation. Nucleosome disruption under stretching was also simulated. The theoretical curves of extension versus time as well as of force versus extension are consistent with previous experimental results. |
2107.11440 | Mikl\'os Cs\H{u}r\"os | Miklos Csuros | Gain-loss-duplication models on a phylogeny: exact algorithms for
computing the likelihood and its gradient | null | null | null | null | q-bio.PE cs.DS | http://creativecommons.org/licenses/by/4.0/ | Gene gain-loss-duplication models are commonly based on continuous-time
birth-death processes. Employed in a phylogenetic context, such models have
been increasingly popular in studies of gene content evolution across multiple
genomes. While the applications are becoming more varied and demanding,
bioinformatics methods for probabilistic inference on copy numbers (or
integer-valued evolutionary characters, in general) are scarce. We describe a
flexible probabilistic framework for phylogenetic gain-loss-duplication models.
The framework is based on a novel elementary representation by dependent random
variables with well-characterized conditional distributions: binomial, P\'olya
(negative binomial), and Poisson. The corresponding graphical model yields
exact numerical procedures for computing the likelihood and the posterior
distribution of ancestral copy numbers. The resulting algorithms take quadratic
time in the total number of copies. In addition, we show how the likelihood
gradient can be computed by a linear-time algorithm.
| [
{
"created": "Fri, 23 Jul 2021 19:48:49 GMT",
"version": "v1"
}
] | 2021-07-27 | [
[
"Csuros",
"Miklos",
""
]
] | Gene gain-loss-duplication models are commonly based on continuous-time birth-death processes. Employed in a phylogenetic context, such models have been increasingly popular in studies of gene content evolution across multiple genomes. While the applications are becoming more varied and demanding, bioinformatics methods for probabilistic inference on copy numbers (or integer-valued evolutionary characters, in general) are scarce. We describe a flexible probabilistic framework for phylogenetic gain-loss-duplication models. The framework is based on a novel elementary representation by dependent random variables with well-characterized conditional distributions: binomial, P\'olya (negative binomial), and Poisson. The corresponding graphical model yields exact numerical procedures for computing the likelihood and the posterior distribution of ancestral copy numbers. The resulting algorithms take quadratic time in the total number of copies. In addition, we show how the likelihood gradient can be computed by a linear-time algorithm. |
1501.05731 | Mike Steel Prof. | Mike Steel | Self-sustaining autocatalytic networks within open-ended reaction
systems | 17 pages, 5 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given any finite and closed chemical reaction system, it is possible to
efficiently determine whether or not it contains a `self-sustaining and
collectively autocatalytic' subset of reactions, and to find such subsets when
they exist. However, for systems that are potentially open-ended (for example,
when no prescribed upper bound is placed on the complexity or size/length of
molecules types), the theory developed for the finite case breaks down. We
investigate a number of subtleties that arise in such systems that are absent
in the finite setting, and present several new results.
| [
{
"created": "Fri, 23 Jan 2015 07:53:43 GMT",
"version": "v1"
}
] | 2015-01-26 | [
[
"Steel",
"Mike",
""
]
] | Given any finite and closed chemical reaction system, it is possible to efficiently determine whether or not it contains a `self-sustaining and collectively autocatalytic' subset of reactions, and to find such subsets when they exist. However, for systems that are potentially open-ended (for example, when no prescribed upper bound is placed on the complexity or size/length of molecules types), the theory developed for the finite case breaks down. We investigate a number of subtleties that arise in such systems that are absent in the finite setting, and present several new results. |
2404.15634 | Saeed Mahdisoltani | Saeed Mahdisoltani, Pranav Murugan, Arup K Chakraborty, Mehran Kardar | A Minimal Framework for Optimizing Vaccination Protocols Targeting
Highly Mutable Pathogens | null | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A persistent public health challenge is finding immunization schemes that are
effective in combating highly mutable pathogens such as HIV and influenza
viruses. To address this, we analyze a simplified model of affinity maturation,
the Darwinian evolutionary process B cells undergo during immunization. The
vaccination protocol dictates selection forces that steer affinity maturation
to generate antibodies. We focus on determining the optimal selection forces
exerted by a generic time-dependent vaccination protocol to maximize production
of broadly neutralizing antibodies (bnAbs) that can protect against a broad
spectrum of pathogen strains. The model lends itself to a path integral
representation and operator approximations within a mean-field limit, providing
guiding principles for optimizing time-dependent vaccine-induced selection
forces to enhance bnAb generation. We compare our analytical mean-field results
with the outcomes of stochastic simulations and discuss their similarities and
differences.
| [
{
"created": "Wed, 24 Apr 2024 03:53:08 GMT",
"version": "v1"
}
] | 2024-04-25 | [
[
"Mahdisoltani",
"Saeed",
""
],
[
"Murugan",
"Pranav",
""
],
[
"Chakraborty",
"Arup K",
""
],
[
"Kardar",
"Mehran",
""
]
] | A persistent public health challenge is finding immunization schemes that are effective in combating highly mutable pathogens such as HIV and influenza viruses. To address this, we analyze a simplified model of affinity maturation, the Darwinian evolutionary process B cells undergo during immunization. The vaccination protocol dictates selection forces that steer affinity maturation to generate antibodies. We focus on determining the optimal selection forces exerted by a generic time-dependent vaccination protocol to maximize production of broadly neutralizing antibodies (bnAbs) that can protect against a broad spectrum of pathogen strains. The model lends itself to a path integral representation and operator approximations within a mean-field limit, providing guiding principles for optimizing time-dependent vaccine-induced selection forces to enhance bnAb generation. We compare our analytical mean-field results with the outcomes of stochastic simulations and discuss their similarities and differences. |
2211.12420 | Xiaoshan Zhou | Xiaoshan Zhou and Pin-Chao Liao | Brain informed transfer learning for categorizing construction hazards | null | null | 10.1111/mice.13078 | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A transfer learning paradigm is proposed for "knowledge" transfer between the
human brain and convolutional neural network (CNN) for a construction hazard
categorization task. Participants' brain activities are recorded using
electroencephalogram (EEG) measurements when viewing the same images (target
dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned
on the construction scene images. The results reveal that the EEG-pretrained
CNN achieves a 9% higher accuracy compared with a network with the same
architecture but randomly initialized parameters on a three-class
classification task. Brain activity from the left frontal cortex exhibits the
highest performance gains, thus indicating high-level cognitive processing
during hazard recognition. This work is a step toward improving machine
learning algorithms by learning from human-brain signals recorded via a
commercially available brain-computer interface. More generalized visual
recognition systems can be effectively developed based on this approach of
"keep human in the loop".
| [
{
"created": "Thu, 17 Nov 2022 19:41:04 GMT",
"version": "v1"
}
] | 2023-08-17 | [
[
"Zhou",
"Xiaoshan",
""
],
[
"Liao",
"Pin-Chao",
""
]
] | A transfer learning paradigm is proposed for "knowledge" transfer between the human brain and convolutional neural network (CNN) for a construction hazard categorization task. Participants' brain activities are recorded using electroencephalogram (EEG) measurements when viewing the same images (target dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned on the construction scene images. The results reveal that the EEG-pretrained CNN achieves a 9% higher accuracy compared with a network with the same architecture but randomly initialized parameters on a three-class classification task. Brain activity from the left frontal cortex exhibits the highest performance gains, thus indicating high-level cognitive processing during hazard recognition. This work is a step toward improving machine learning algorithms by learning from human-brain signals recorded via a commercially available brain-computer interface. More generalized visual recognition systems can be effectively developed based on this approach of "keep human in the loop". |
0804.4830 | Laurent Perrinet | Laurent Perrinet (INCM) | Sparse Spike Coding : applications of Neuroscience to the processing of
natural images | http://incm.cnrs-mrs.fr/LaurentPerrinet/Publications/Perrinet08spie | SPIE Photonics Europe, Strasbourg : France (2008) | 10.1117/12.787076 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If modern computers are sometimes superior to humans in some specialized
tasks such as playing chess or browsing a large database, they can't beat the
efficiency of biological vision for such simple tasks as recognizing and
following an object in a complex cluttered background. We present in this paper
our attempt at outlining the dynamical, parallel and event-based representation
for vision in the architecture of the central nervous system. We will
illustrate this on static natural images by showing that in a signal matching
framework, an L/LN (linear/non-linear) cascade may efficiently transform a
sensory signal into a neural spiking signal and we will apply this framework to
a model retina. However, this code gets redundant when using an over-complete
basis as is necessary for modeling the primary visual cortex: we therefore
optimize the efficiency cost by increasing the sparseness of the code. This is
implemented by propagating and canceling redundant information using lateral
interactions. We compare the efficiency of this representation in terms of
compression as well as reconstruction quality as a function of the coding length.
This will correspond to a modification of the Matching Pursuit algorithm where
the ArgMax function is optimized for competition, or Competition Optimized
Matching Pursuit (COMP). We will in particular focus on bridging neuroscience
and image processing and on the advantages of such an interdisciplinary
approach.
| [
{
"created": "Wed, 30 Apr 2008 15:15:32 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Jan 2009 12:02:33 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Perrinet",
"Laurent",
"",
"INCM"
]
] | If modern computers are sometimes superior to humans in some specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision for such simple tasks as recognizing and following an object in a complex cluttered background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We will illustrate this on static natural images by showing that in a signal matching framework, an L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal and we will apply this framework to a model retina. However, this code gets redundant when using an over-complete basis as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression as well as reconstruction quality as a function of the coding length. This will correspond to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will in particular focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach. |
2311.04419 | Fengfeng Zhou | Ruochi Zhang (1,2,3), Haoran Wu (3), Yuting Xiu (3), Kewei Li (1,4),
Ningning Chen (3), Yu Wang (3), Yan Wang (1,2,4), Xin Gao (5,6,7), Fengfeng
Zhou (1,4,7) ((1) Key Laboratory of Symbolic Computation and Knowledge
Engineering of Ministry of Education, Jilin University, Changchun, China. (2)
School of Artificial Intelligence, Jilin University, Changchun, China. (3)
Syneron Technology, Guangzhou, China. (4) College of Computer Science and
Technology, Jilin University, Changchun, China. (5) Computational Bioscience
Research Center, King Abdullah University of Science and Technology (KAUST),
Thuwal, Saudi Arabia. (6) Computer Science Program, Computer, Electrical and
Mathematical Sciences and Engineering Division, King Abdullah University of
Science and Technology (KAUST), Thuwal, Saudi Arabia. (7) Corresponding
Authors) | PepLand: a large-scale pre-trained peptide representation model for a
comprehensive landscape of both canonical and non-canonical amino acids | null | null | null | null | q-bio.BM cs.AI q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, the scientific community has become increasingly interested
in peptides with non-canonical amino acids due to their superior stability and
resistance to proteolytic degradation. These peptides present promising
modifications to biological, pharmacological, and physiochemical attributes in
both endogenous and engineered peptides. Notwithstanding their considerable
advantages, the scientific community exhibits a conspicuous absence of an
effective pre-trained model adept at distilling feature representations from
such complex peptide sequences. We herein propose PepLand, a novel pre-training
architecture for representation and property analysis of peptides spanning both
canonical and non-canonical amino acids. In essence, PepLand leverages a
comprehensive multi-view heterogeneous graph neural network tailored to unveil
the subtle structural representations of peptides. Empirical validations
underscore PepLand's effectiveness across an array of peptide property
predictions, encompassing protein-protein interactions, permeability,
solubility, and synthesizability. The rigorous evaluation confirms PepLand's
unparalleled capability in capturing salient synthetic peptide features,
thereby laying a robust foundation for transformative advances in
peptide-centric research domains. We have made all the source code utilized in
this study publicly accessible via GitHub at
https://github.com/zhangruochi/pepland
| [
{
"created": "Wed, 8 Nov 2023 01:18:32 GMT",
"version": "v1"
}
] | 2023-11-09 | [
[
"Zhang",
"Ruochi",
""
],
[
"Wu",
"Haoran",
""
],
[
"Xiu",
"Yuting",
""
],
[
"Li",
"Kewei",
""
],
[
"Chen",
"Ningning",
""
],
[
"Wang",
"Yu",
""
],
[
"Wang",
"Yan",
""
],
[
"Gao",
"Xin",
""
],
[
"Zhou",
"Fengfeng",
""
]
] | In recent years, the scientific community has become increasingly interested in peptides with non-canonical amino acids due to their superior stability and resistance to proteolytic degradation. These peptides present promising modifications to biological, pharmacological, and physiochemical attributes in both endogenous and engineered peptides. Notwithstanding their considerable advantages, the scientific community exhibits a conspicuous absence of an effective pre-trained model adept at distilling feature representations from such complex peptide sequences. We herein propose PepLand, a novel pre-training architecture for representation and property analysis of peptides spanning both canonical and non-canonical amino acids. In essence, PepLand leverages a comprehensive multi-view heterogeneous graph neural network tailored to unveil the subtle structural representations of peptides. Empirical validations underscore PepLand's effectiveness across an array of peptide property predictions, encompassing protein-protein interactions, permeability, solubility, and synthesizability. The rigorous evaluation confirms PepLand's unparalleled capability in capturing salient synthetic peptide features, thereby laying a robust foundation for transformative advances in peptide-centric research domains. We have made all the source code utilized in this study publicly accessible via GitHub at https://github.com/zhangruochi/pepland |
1306.1652 | Stephan Peischl | Stephan Peischl, Isabelle Dupanloup, Mark Kirkpatrick, and Laurent
Excoffier | On the accumulation of deleterious mutations during range expansions | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the effect of spatial range expansions on the evolution of
fitness when beneficial and deleterious mutations co-segregate. We perform
individual-based simulations of a uniform linear habitat and complement them
with analytical approximations for the evolution of mean fitness at the edge of
the expansion. We find that deleterious mutations accumulate steadily on the
wave front during range expansions, thus creating an expansion load. Reduced
fitness due to the expansion load is not restricted to the wave front but
occurs over a large proportion of newly colonized habitats. The expansion load
can persist and represent a major fraction of the total mutation load thousands
of generations after the expansion. Our results extend qualitatively and
quantitatively to two-dimensional expansions. The phenomenon of expansion load
may explain growing evidence that populations that have recently expanded,
including humans, show an excess of deleterious mutations. To test the
predictions of our model, we analyze patterns of neutral and non-neutral
genetic diversity in humans and find an excellent fit between theory and data.
| [
{
"created": "Fri, 7 Jun 2013 08:39:50 GMT",
"version": "v1"
}
] | 2013-06-10 | [
[
"Peischl",
"Stephan",
""
],
[
"Dupanloup",
"Isabelle",
""
],
[
"Kirkpatrick",
"Mark",
""
],
[
"Excoffier",
"Laurent",
""
]
] | We investigate the effect of spatial range expansions on the evolution of fitness when beneficial and deleterious mutations co-segregate. We perform individual-based simulations of a uniform linear habitat and complement them with analytical approximations for the evolution of mean fitness at the edge of the expansion. We find that deleterious mutations accumulate steadily on the wave front during range expansions, thus creating an expansion load. Reduced fitness due to the expansion load is not restricted to the wave front but occurs over a large proportion of newly colonized habitats. The expansion load can persist and represent a major fraction of the total mutation load thousands of generations after the expansion. Our results extend qualitatively and quantitatively to two-dimensional expansions. The phenomenon of expansion load may explain growing evidence that populations that have recently expanded, including humans, show an excess of deleterious mutations. To test the predictions of our model, we analyze patterns of neutral and non-neutral genetic diversity in humans and find an excellent fit between theory and data. |
q-bio/0603004 | Emily Stone | Emily Stone, John Goldes, Martha Garlick | A two stage model for quantitative PCR | 35 pages 11 figures. Submitted to the Bulletin of Mathematical
Biology March 2006 | null | null | null | q-bio.BM | null | PCR (Polymerase Chain Reaction), a method which replicates a selected
sequence of DNA, has revolutionized the study of genomic material, but
mathematical study of the process has been limited to simple deterministic
models or descriptions relying on stochastic processes. In this paper we
develop a suite of deterministic models for the reactions of quantitative PCR
(Polymerase Chain Reaction) based on the law of mass action. Maps are created
from DNA copy number in one cycle to the next, with ordinary differential
equations describing the evolution of different molecular species during each
cycle. Qualitative analysis is performed at each stage and parameters are
estimated by fitting each model to data from Roche LightCycler (TM) runs.
| [
{
"created": "Wed, 1 Mar 2006 21:03:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Stone",
"Emily",
""
],
[
"Goldes",
"John",
""
],
[
"Garlick",
"Martha",
""
]
] | PCR (Polymerase Chain Reaction), a method which replicates a selected sequence of DNA, has revolutionized the study of genomic material, but mathematical study of the process has been limited to simple deterministic models or descriptions relying on stochastic processes. In this paper we develop a suite of deterministic models for the reactions of quantitative PCR (Polymerase Chain Reaction) based on the law of mass action. Maps are created from DNA copy number in one cycle to the next, with ordinary differential equations describing the evolution of different molecular species during each cycle. Qualitative analysis is performed at each stage and parameters are estimated by fitting each model to data from Roche LightCycler (TM) runs. |
2002.04501 | Karl Friston | Karl Friston, Lancelot Da Costa and Thomas Parr | Some interesting observations on the free energy principle | A response to a technical critique [arXiv:2001.06408] of the free
energy principle as presented in "Life as we know it" | null | 10.3390/e23081076 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biehl et al (2020) present some interesting observations on an early
formulation of the free energy principle in (Friston, 2013). We use these
observations to scaffold a discussion of the technical arguments that
underwrite the free energy principle. This discussion focuses on solenoidal
coupling between various (subsets of) states in sparsely coupled systems that
possess a Markov blanket - and the distinction between exact and approximate
Bayesian inference, implied by the ensuing Bayesian mechanics.
| [
{
"created": "Wed, 5 Feb 2020 13:40:48 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Friston",
"Karl",
""
],
[
"Da Costa",
"Lancelot",
""
],
[
"Parr",
"Thomas",
""
]
] | Biehl et al (2020) present some interesting observations on an early formulation of the free energy principle in (Friston, 2013). We use these observations to scaffold a discussion of the technical arguments that underwrite the free energy principle. This discussion focuses on solenoidal coupling between various (subsets of) states in sparsely coupled systems that possess a Markov blanket - and the distinction between exact and approximate Bayesian inference, implied by the ensuing Bayesian mechanics. |
2209.03185 | Julien MARTINELLI | Julien Martinelli (Lifeware), Jeremy Grignard (IRS, Lifeware), Sylvain
Soliman (Lifeware), Annabelle Ballesta, Fran\c{c}ois Fages (Lifeware) | Reactmine: a statistical search algorithm for inferring chemical
reactions from time series data | null | null | null | null | q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring chemical reaction networks (CRN) from concentration time series is
a challenge encouraged by the growing availability of quantitative temporal
data at the cellular level. This motivates the design of algorithms to infer
the preponderant reactions between the molecular species observed in a given
biochemical process, and build CRN structure and kinetics models. Existing
ODE-based inference methods such as SINDy resort to least square regression
combined with sparsity-enforcing penalization, such as Lasso. However, we
observe that these methods fail to learn sparse models when the input time
series are only available in wild type conditions, i.e. without the
possibility to play with combinations of zeroes in the initial conditions. We
present a CRN inference algorithm which enforces sparsity by inferring
reactions in a sequential fashion within a search tree of bounded depth,
ranking the inferred reaction candidates according to the variance of their
kinetics on their supporting transitions, and re-optimizing the kinetic
parameters of the CRN candidates on the whole trace in a final pass. We show
that Reactmine succeeds both on simulation data by retrieving hidden CRNs
where SINDy fails, and on two real datasets, one of fluorescence
videomicroscopy of cell cycle and circadian clock markers, the other one of
biomedical measurements of systemic circadian biomarkers possibly acting on
clock gene expression in peripheral organs, by inferring preponderant
regulations in agreement with previous model-based analyses. The code is
available at https://gitlab.inria.fr/julmarti/crninf/ together with
introductory notebooks.
| [
{
"created": "Wed, 7 Sep 2022 14:30:33 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Feb 2023 16:17:57 GMT",
"version": "v2"
}
] | 2023-02-09 | [
[
"Martinelli",
"Julien",
"",
"Lifeware"
],
[
"Grignard",
"Jeremy",
"",
"IRS, Lifeware"
],
[
"Soliman",
"Sylvain",
"",
"Lifeware"
],
[
"Ballesta",
"Annabelle",
"",
"Lifeware"
],
[
"Fages",
"François",
"",
"Lifeware"
]
] | Inferring chemical reaction networks (CRN) from concentration time series is a challenge encouraged by the growing availability of quantitative temporal data at the cellular level. This motivates the design of algorithms to infer the preponderant reactions between the molecular species observed in a given biochemical process, and build CRN structure and kinetics models. Existing ODE-based inference methods such as SINDy resort to least square regression combined with sparsity-enforcing penalization, such as Lasso. However, we observe that these methods fail to learn sparse models when the input time series are only available in wild type conditions, i.e. without the possibility to play with combinations of zeroes in the initial conditions. We present a CRN inference algorithm which enforces sparsity by inferring reactions in a sequential fashion within a search tree of bounded depth, ranking the inferred reaction candidates according to the variance of their kinetics on their supporting transitions, and re-optimizing the kinetic parameters of the CRN candidates on the whole trace in a final pass. We show that Reactmine succeeds both on simulation data by retrieving hidden CRNs where SINDy fails, and on two real datasets, one of fluorescence videomicroscopy of cell cycle and circadian clock markers, the other one of biomedical measurements of systemic circadian biomarkers possibly acting on clock gene expression in peripheral organs, by inferring preponderant regulations in agreement with previous model-based analyses. The code is available at https://gitlab.inria.fr/julmarti/crninf/ together with introductory notebooks. |
1203.1953 | Sandip Ghosal | S. Ghosal and Z. Chen | Electromigration dispersion in a capillary in the presence of
electro-osmotic flow | 19 pages, 5 figures | null | 10.1017/jfm.2012.76 | null | q-bio.QM cond-mat.soft physics.bio-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The differential migration of ions in an applied electric field is the basis
for separation of chemical species by capillary electrophoresis. Axial
diffusion of the concentration peak limits the separation efficiency.
Electromigration dispersion is observed when the concentration of sample ions
is comparable to that of the background ions. Under such conditions, the local
electrical conductivity is significantly altered in the sample zone making the
electric field, and therefore, the ion migration velocity concentration
dependent. The resulting nonlinear wave exhibits shock like features, and,
under certain simplifying assumptions, is described by Burgers' equation (S.
Ghosal and Z. Chen Bull. Math. Biol. 2010, vol.72, pg. 2047). In this paper, we
consider the more general situation where the walls of the separation channel
may have a non-zero zeta potential and are therefore able to sustain an
electro-osmotic bulk flow. The main result is a one dimensional nonlinear
advection diffusion equation for the area averaged concentration. This
homogenized equation accounts for the Taylor-Aris dispersion resulting from the
variation in the electro-osmotic slip velocity along the wall. It is shown that
in a certain range of parameters, the electro-osmotic flow can actually reduce
the total dispersion by delaying the formation of a concentration shock.
However, if the electro-osmotic flow is sufficiently high, the total dispersion
is increased because of the Taylor-Aris contribution.
| [
{
"created": "Thu, 8 Mar 2012 22:38:51 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Ghosal",
"S.",
""
],
[
"Chen",
"Z.",
""
]
] | The differential migration of ions in an applied electric field is the basis for separation of chemical species by capillary electrophoresis. Axial diffusion of the concentration peak limits the separation efficiency. Electromigration dispersion is observed when the concentration of sample ions is comparable to that of the background ions. Under such conditions, the local electrical conductivity is significantly altered in the sample zone making the electric field, and therefore, the ion migration velocity concentration dependent. The resulting nonlinear wave exhibits shock like features, and, under certain simplifying assumptions, is described by Burgers' equation (S. Ghosal and Z. Chen Bull. Math. Biol. 2010, vol.72, pg. 2047). In this paper, we consider the more general situation where the walls of the separation channel may have a non-zero zeta potential and are therefore able to sustain an electro-osmotic bulk flow. The main result is a one dimensional nonlinear advection diffusion equation for the area averaged concentration. This homogenized equation accounts for the Taylor-Aris dispersion resulting from the variation in the electro-osmotic slip velocity along the wall. It is shown that in a certain range of parameters, the electro-osmotic flow can actually reduce the total dispersion by delaying the formation of a concentration shock. However, if the electro-osmotic flow is sufficiently high, the total dispersion is increased because of the Taylor-Aris contribution. |
1903.03588 | Micha{\l} {\L}epek | Mateusz Soli\'nski, Micha{\l} {\L}epek, {\L}ukasz Ko{\l}towski | Automatic cough detection based on airflow signals for portable
spirometry system | 18 pages, original work. Few improvements and some additional
analysis added in this version | Informatics in Medicine Unlocked 18 (2020) 100313 | 10.1016/j.imu.2020.100313 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a short introduction to cough detection efforts that were undertaken
during the last decade and we describe the solution for automatic cough
detection developed for the AioCare portable spirometry system. In contrast to
more popular analysis of sound and audio recordings, we fully based our
approach on airflow signals only. As the system is intended to be used in a
large variety of environments and different patients, we trained and validated
the algorithm using AioCare-collected data and the large database of spirometry
curves from the NHANES database by the American National Center for Health
Statistics. We trained different classifiers, such as logistic regression,
feed-forward artificial neural network, support vector machine, and random
forest to choose the one with the best performance. The ANN solution was
selected as the final classifier. The classification results on the test set
(AioCare data) are: 0.86 (sensitivity), 0.91 (specificity), 0.91 (accuracy) and
0.88 (F1 score). The classification methodology developed in this study is
robust for detecting cough events during spirometry measurements. As far as we
know, the solution presented in this work is the first fully reproducible
description of the automatic cough detection algorithm based totally on airflow
signals and the first cough detection implemented in a commercial spirometry
system that is to be published.
| [
{
"created": "Tue, 26 Feb 2019 15:55:09 GMT",
"version": "v1"
},
{
"created": "Thu, 23 May 2019 14:58:22 GMT",
"version": "v2"
},
{
"created": "Fri, 24 May 2019 08:36:00 GMT",
"version": "v3"
},
{
"created": "Wed, 25 Mar 2020 13:26:37 GMT",
"version": "v4"
}
] | 2020-03-26 | [
[
"Soliński",
"Mateusz",
""
],
[
"Łepek",
"Michał",
""
],
[
"Kołtowski",
"Łukasz",
""
]
] | We give a short introduction to cough detection efforts that were undertaken during the last decade and we describe the solution for automatic cough detection developed for the AioCare portable spirometry system. In contrast to more popular analysis of sound and audio recordings, we fully based our approach on airflow signals only. As the system is intended to be used in a large variety of environments and different patients, we trained and validated the algorithm using AioCare-collected data and the large database of spirometry curves from the NHANES database by the American National Center for Health Statistics. We trained different classifiers, such as logistic regression, feed-forward artificial neural network, support vector machine, and random forest to choose the one with the best performance. The ANN solution was selected as the final classifier. The classification results on the test set (AioCare data) are: 0.86 (sensitivity), 0.91 (specificity), 0.91 (accuracy) and 0.88 (F1 score). The classification methodology developed in this study is robust for detecting cough events during spirometry measurements. As far as we know, the solution presented in this work is the first fully reproducible description of the automatic cough detection algorithm based totally on airflow signals and the first cough detection implemented in a commercial spirometry system that is to be published. |
2012.01328 | Justin Jude | Justin Jude and Matthias H. Hennig | Hippocampal representations emerge when training recurrent neural
networks on a memory dependent maze navigation task | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Can neural networks learn goal-directed behaviour using similar strategies to
the brain, by combining the relationships between the current state of the
organism and the consequences of future actions? Recent work has shown that
recurrent neural networks trained on goal based tasks can develop
representations resembling those found in the brain, entorhinal cortex grid
cells, for instance. Here we explore the evolution of the dynamics of their
internal representations and compare this with experimental data. We observe
that once a recurrent network is trained to learn the structure of its
environment solely based on sensory prediction, an attractor based landscape
forms in the network's representation, which parallels hippocampal place cells
in structure and function. Next, we extend the predictive objective to include
Q-learning for a reward task, where rewarding actions are dependent on delayed
cue modulation. Mirroring experimental findings in hippocampus recordings in
rodents performing the same task, this training paradigm causes nonlocal neural
activity to sweep forward in space at decision points, anticipating the future
path to a rewarded location. Moreover, prevalent choice and cue-selective
neurons form in this network, again recapitulating experimental findings.
Together, these results indicate that combining predictive, unsupervised
learning of the structure of an environment with reinforcement learning can
help understand the formation of hippocampus-like representations containing
both spatial and task-relevant information.
| [
{
"created": "Wed, 2 Dec 2020 16:55:02 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jan 2021 17:36:54 GMT",
"version": "v2"
}
] | 2021-01-21 | [
[
"Jude",
"Justin",
""
],
[
"Hennig",
"Matthias H.",
""
]
] | Can neural networks learn goal-directed behaviour using similar strategies to the brain, by combining the relationships between the current state of the organism and the consequences of future actions? Recent work has shown that recurrent neural networks trained on goal based tasks can develop representations resembling those found in the brain, entorhinal cortex grid cells, for instance. Here we explore the evolution of the dynamics of their internal representations and compare this with experimental data. We observe that once a recurrent network is trained to learn the structure of its environment solely based on sensory prediction, an attractor based landscape forms in the network's representation, which parallels hippocampal place cells in structure and function. Next, we extend the predictive objective to include Q-learning for a reward task, where rewarding actions are dependent on delayed cue modulation. Mirroring experimental findings in hippocampus recordings in rodents performing the same task, this training paradigm causes nonlocal neural activity to sweep forward in space at decision points, anticipating the future path to a rewarded location. Moreover, prevalent choice and cue-selective neurons form in this network, again recapitulating experimental findings. Together, these results indicate that combining predictive, unsupervised learning of the structure of an environment with reinforcement learning can help understand the formation of hippocampus-like representations containing both spatial and task-relevant information. |
1708.04593 | Ruben Perez-Carrasco | Ruben Perez-Carrasco, Chris P. Barnes, Yolanda Schaerli, Mark Isalan,
James Briscoe, Karen M. Page | The power of the AC-DC circuit: Operating principles of a simple
multi-functional transcriptional network motif | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetically encoded regulatory circuits control biological function. A major
focus of systems biology is to understand these circuits by establishing the
relationship between specific structures and functions. Of special interest are
multifunctional circuits that are capable of performing distinct behaviors
without changing their topology. A particularly simple example of such a system
is the AC-DC circuit. Found in multiple regulatory processes, this circuit
consists of three genes connected in a combination of a toggle switch and a
repressilator. Using dynamical system theory we analyze the available dynamical
regimes to show that the AC-DC can exhibit both oscillations and bistability.
We found that both dynamical regimes can coexist robustly in the same region of
parameter space, generating novel emergent behaviors not available to the
individual subnetwork components. We demonstrate that the AC-DC circuit
provides a mechanism to rapidly switch between oscillations and steady
expression and, in the presence of noise, the multi-functionality of the
circuit offers the possibility to control the coherence of oscillations.
Additionally, we provide evidence that the availability of a bistable
oscillatory regime allows the AC-DC circuit to behave as an excitable system
capable of stochastic pulses and spatial signal propagation. Taken together
these results reveal how a system as simple as the AC-DC circuit can produce
multiple complex dynamical behaviors in the same parameter region and is well
suited for the construction of multifunctional synthetic genetic circuits.
Likewise, the analysis reveals the potential of this circuit to facilitate the
evolution of distinct patterning mechanisms.
| [
{
"created": "Tue, 15 Aug 2017 16:58:45 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Aug 2017 07:59:41 GMT",
"version": "v2"
}
] | 2017-08-17 | [
[
"Perez-Carrasco",
"Ruben",
""
],
[
"Barnes",
"Chris P.",
""
],
[
"Schaerli",
"Yolanda",
""
],
[
"Isalan",
"Mark",
""
],
[
"Briscoe",
"James",
""
],
[
"Page",
"Karen M.",
""
]
] | Genetically encoded regulatory circuits control biological function. A major focus of systems biology is to understand these circuits by establishing the relationship between specific structures and functions. Of special interest are multifunctional circuits that are capable of performing distinct behaviors without changing their topology. A particularly simple example of such a system is the AC-DC circuit. Found in multiple regulatory processes, this circuit consists of three genes connected in a combination of a toggle switch and a repressilator. Using dynamical system theory we analyze the available dynamical regimes to show that the AC-DC can exhibit both oscillations and bistability. We found that both dynamical regimes can coexist robustly in the same region of parameter space, generating novel emergent behaviors not available to the individual subnetwork components. We demonstrate that the AC-DC circuit provides a mechanism to rapidly switch between oscillations and steady expression and, in the presence of noise, the multi-functionality of the circuit offers the possibility to control the coherence of oscillations. Additionally, we provide evidence that the availability of a bistable oscillatory regime allows the AC-DC circuit to behave as an excitable system capable of stochastic pulses and spatial signal propagation. Taken together these results reveal how a system as simple as the AC-DC circuit can produce multiple complex dynamical behaviors in the same parameter region and is well suited for the construction of multifunctional synthetic genetic circuits. Likewise, the analysis reveals the potential of this circuit to facilitate the evolution of distinct patterning mechanisms. |
1201.2798 | Edoardo Milotti | Roberto Chignola and Edoardo Milotti | Bridging the gap between the micro- and the macro-world of tumors | 24 pages, 10 figures. Accepted for publication in AIP Advances, in
the special issue on the physics of cancer | null | null | null | q-bio.TO physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At present it is still quite difficult to match the vast knowledge on the
behavior of individual tumor cells with macroscopic measurements on clinical
tumors. On the modeling side, we already know how to deal with many molecular
pathways and cellular events, using systems of differential equations and other
modeling tools, and ideally, we should be able to extend such a mathematical
description up to the level of large tumor masses. An extended model should
thus help us forecast the behavior of large tumors from our basic knowledge of
microscopic processes. Unfortunately, the complexity of these processes makes
it very difficult -- probably impossible -- to develop comprehensive analytical
models. We try to bridge the gap with a simulation program which is based on
basic biochemical and biophysical processes -- thereby building an effective
computational model -- and in this paper we describe its structure, endeavoring
to make the description sufficiently detailed and yet understandable.
| [
{
"created": "Fri, 13 Jan 2012 10:54:56 GMT",
"version": "v1"
}
] | 2012-01-16 | [
[
"Chignola",
"Roberto",
""
],
[
"Milotti",
"Edoardo",
""
]
] | At present it is still quite difficult to match the vast knowledge on the behavior of individual tumor cells with macroscopic measurements on clinical tumors. On the modeling side, we already know how to deal with many molecular pathways and cellular events, using systems of differential equations and other modeling tools, and ideally, we should be able to extend such a mathematical description up to the level of large tumor masses. An extended model should thus help us forecast the behavior of large tumors from our basic knowledge of microscopic processes. Unfortunately, the complexity of these processes makes it very difficult -- probably impossible -- to develop comprehensive analytical models. We try to bridge the gap with a simulation program which is based on basic biochemical and biophysical processes -- thereby building an effective computational model -- and in this paper we describe its structure, endeavoring to make the description sufficiently detailed and yet understandable. |
2010.11836 | David Terman | David Terman and Yousef Hannawi | Using graph theory to compute Laplace operators arising in a model for
blood flow in capillary network | 20 pages, 8 figures | null | null | null | q-bio.TO math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maintaining cerebral blood flow is critical for adequate neuronal function.
Previous computational models of brain capillary networks have predicted that
heterogeneous cerebral capillary flow patterns result in lower brain tissue
partial oxygen pressures. It has been suggested that this may lead to a number of
diseases such as Alzheimer's disease, acute ischemic stroke, traumatic brain
injury and ischemic heart disease. We have previously developed a computational
model that was used to describe in detail the effect of flow heterogeneities on
tissue oxygen levels. The main result in that paper was that, for a general
class of capillary networks, perturbations of segment diameters or conductances
always lead to decreased oxygen levels. This result was verified using both
numerical simulations and mathematical analysis. However, the analysis depended
on a novel conjecture concerning the Laplace operator of functions related to
the segment flow rates and how they depend on the conductances. The goal of
this paper is to give a mathematically rigorous proof of the conjecture for a
general class of networks. The proof depends on determining the number of trees
and forests in certain graphs arising from the capillary network.
| [
{
"created": "Wed, 21 Oct 2020 14:23:25 GMT",
"version": "v1"
}
] | 2020-10-23 | [
[
"Terman",
"David",
""
],
[
"Hannawi",
"Yousef",
""
]
] | Maintaining cerebral blood flow is critical for adequate neuronal function. Previous computational models of brain capillary networks have predicted that heterogeneous cerebral capillary flow patterns result in lower brain tissue partial oxygen pressures. It has been suggested that this may lead to a number of diseases such as Alzheimer's disease, acute ischemic stroke, traumatic brain injury and ischemic heart disease. We have previously developed a computational model that was used to describe in detail the effect of flow heterogeneities on tissue oxygen levels. The main result in that paper was that, for a general class of capillary networks, perturbations of segment diameters or conductances always lead to decreased oxygen levels. This result was verified using both numerical simulations and mathematical analysis. However, the analysis depended on a novel conjecture concerning the Laplace operator of functions related to the segment flow rates and how they depend on the conductances. The goal of this paper is to give a mathematically rigorous proof of the conjecture for a general class of networks. The proof depends on determining the number of trees and forests in certain graphs arising from the capillary network. |
1902.01155 | Guido Tiana | G. Franco, M. Cagiada, G. Bussi, G. Tiana | Statistical mechanical properties of sequence space determine the
efficiency of the various algorithms to predict interaction energies and
native contacts from protein coevolution | null | null | 10.1088/1478-3975/ab1c15 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studying evolutionary correlations in alignments of homologous sequences by
means of an inverse Potts model has proven useful to obtain residue-residue
contact energies and to predict contacts in proteins. The quality of the
results depends strongly on several choices of the detailed model and on the
algorithms used. We built, in a very controlled way, synthetic alignments with
statistical properties similar to those of real proteins, and used them to
assess the performance of different inversion algorithms and of their variants.
Realistic synthetic alignments display typical features of low--temperature
phases of disordered systems, a feature that affects the inversion algorithms.
We showed that a Boltzmann--learning algorithm is computationally feasible and
performs well in predicting the energy of native contacts. However, all
algorithms suffer from false positives roughly equally, making the quality of
the prediction of native contacts with the different algorithms strongly
system--dependent.
| [
{
"created": "Mon, 4 Feb 2019 12:54:04 GMT",
"version": "v1"
}
] | 2019-09-04 | [
[
"Franco",
"G.",
""
],
[
"Cagiada",
"M.",
""
],
[
"Bussi",
"G.",
""
],
[
"Tiana",
"G.",
""
]
] | Studying evolutionary correlations in alignments of homologous sequences by means of an inverse Potts model has proven useful to obtain residue-residue contact energies and to predict contacts in proteins. The quality of the results depends strongly on several choices of the detailed model and on the algorithms used. We built, in a very controlled way, synthetic alignments with statistical properties similar to those of real proteins, and used them to assess the performance of different inversion algorithms and of their variants. Realistic synthetic alignments display typical features of low--temperature phases of disordered systems, a feature that affects the inversion algorithms. We showed that a Boltzmann--learning algorithm is computationally feasible and performs well in predicting the energy of native contacts. However, all algorithms suffer from false positives roughly equally, making the quality of the prediction of native contacts with the different algorithms strongly system--dependent. |
2206.14416 | Jingyi Bu | Jingyi Bu, Guojing Gan, Jiahao Chen, Yanxin Su, Mengjia Yuan, Yanchun
Gao, Francisco Domingo, Mirco Migliavacca, Tarek S. El-Madany, Pierre
Gentine, Monica Garcia | Dryland evapotranspiration from remote sensing solar-induced chlorophyll
fluorescence: constraining an optimal stomatal model within a two-source
energy balance model | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evapotranspiration (ET) represents the largest water loss flux in drylands,
but ET and its partition into plant transpiration (T) and soil evaporation (E)
are poorly quantified, especially at fine temporal scales. Physically-based
remote sensing models relying on sensible heat flux estimates, like the
two-source energy balance model, could benefit from considering more explicitly
the key effect of stomatal regulation on dryland ET. The objective of this
study is to assess the value of solar-induced chlorophyll fluorescence (SIF), a
proxy for photosynthesis, to constrain the canopy conductance (Gc) of an
optimal stomatal model within a two-source energy balance model in drylands. We
assessed our ET estimation using in situ eddy covariance GPP as a benchmark,
and compared with results from using the Contiguous solar-induced chlorophyll
fluorescence (CSIF) remote sensing product instead of GPP, with and without the
effect of root-zone soil moisture on the Gc. The estimated ET was robust across
four steppes and two tree-grass dryland ecosystems. Comparison of ET simulated
against in situ GPP yielded an average R2 of 0.73 (0.86) and RMSE of 0.031
(0.36) mm at half-hourly (daily) timescale. Including explicitly the soil
moisture effect on Gc increased the R2 to 0.76 (0.89). For the CSIF model, the
average R2 for ET estimates also improved when including the effect of soil
moisture: from 0.65 (0.79) to 0.71 (0.84), with RMSE ranging between 0.023
(0.22) and 0.043 (0.54) mm depending on the site. Our results demonstrate the
capacity of SIF to estimate subdaily and daily ET fluxes under very low ET
conditions. SIF can provide effective vegetation signals to constrain stomatal
conductance and partition ET into T and E in drylands. This approach could be
extended for regional estimates using remote sensing SIF estimates such as
CSIF, TROPOMI-SIF, or the upcoming FLEX mission, among others.
| [
{
"created": "Wed, 29 Jun 2022 06:04:28 GMT",
"version": "v1"
}
] | 2022-06-30 | [
[
"Bu",
"Jingyi",
""
],
[
"Gan",
"Guojing",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Su",
"Yanxin",
""
],
[
"Yuan",
"Mengjia",
""
],
[
"Gao",
"Yanchun",
""
],
[
"Domingo",
"Francisco",
""
],
[
"Migliavacca",
"Mirco",
""
],
[
"El-Madany",
"Tarek S.",
""
],
[
"Gentine",
"Pierre",
""
],
[
"Garcia",
"Monica",
""
]
] | Evapotranspiration (ET) represents the largest water loss flux in drylands, but ET and its partition into plant transpiration (T) and soil evaporation (E) are poorly quantified, especially at fine temporal scales. Physically-based remote sensing models relying on sensible heat flux estimates, like the two-source energy balance model, could benefit from considering more explicitly the key effect of stomatal regulation on dryland ET. The objective of this study is to assess the value of solar-induced chlorophyll fluorescence (SIF), a proxy for photosynthesis, to constrain the canopy conductance (Gc) of an optimal stomatal model within a two-source energy balance model in drylands. We assessed our ET estimation using in situ eddy covariance GPP as a benchmark, and compared with results from using the Contiguous solar-induced chlorophyll fluorescence (CSIF) remote sensing product instead of GPP, with and without the effect of root-zone soil moisture on the Gc. The estimated ET was robust across four steppes and two tree-grass dryland ecosystems. Comparison of ET simulated against in situ GPP yielded an average R2 of 0.73 (0.86) and RMSE of 0.031 (0.36) mm at half-hourly (daily) timescale. Including explicitly the soil moisture effect on Gc increased the R2 to 0.76 (0.89). For the CSIF model, the average R2 for ET estimates also improved when including the effect of soil moisture: from 0.65 (0.79) to 0.71 (0.84), with RMSE ranging between 0.023 (0.22) and 0.043 (0.54) mm depending on the site. Our results demonstrate the capacity of SIF to estimate subdaily and daily ET fluxes under very low ET conditions. SIF can provide effective vegetation signals to constrain stomatal conductance and partition ET into T and E in drylands. This approach could be extended for regional estimates using remote sensing SIF estimates such as CSIF, TROPOMI-SIF, or the upcoming FLEX mission, among others. |
2309.06402 | Christopher Versteeg | Christopher Versteeg, Andrew R. Sedler, Jonathan D. McCart, and
Chethan Pandarinath | Expressive dynamics models with nonlinear injective readouts enable
reliable recovery of latent features from neural activity | 11 pages, 6 figures, Submitted to NeurIPS 2023 | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The advent of large-scale neural recordings has enabled new methods to
discover the computational mechanisms of neural circuits by understanding the
rules that govern how their state evolves over time. While these \textit{neural
dynamics} cannot be directly measured, they can typically be approximated by
low-dimensional models in a latent space. How these models represent the
mapping from latent space to neural space can affect the interpretability of
the latent representation. We show that typical choices for this mapping (e.g.,
linear or MLP) often lack the property of injectivity, meaning that changes in
latent state are not obligated to affect activity in the neural space. During
training, non-injective readouts incentivize the invention of dynamics that
misrepresent the underlying system and the computation it performs. Combining
our injective Flow readout with prior work on interpretable latent dynamics
models, we created the Ordinary Differential equations autoencoder with
Injective Nonlinear readout (ODIN), which captures latent dynamical systems
that are nonlinearly embedded into observed neural activity via an
approximately injective nonlinear mapping. We show that ODIN can recover
nonlinearly embedded systems from simulated neural activity, even when the
nature of the system and embedding are unknown. Additionally, ODIN enables the
unsupervised recovery of underlying dynamical features (e.g., fixed points) and
embedding geometry. When applied to biological neural recordings, ODIN can
reconstruct neural activity with comparable accuracy to previous
state-of-the-art methods while using substantially fewer latent dimensions.
Overall, ODIN's accuracy in recovering ground-truth latent features and ability
to accurately reconstruct neural activity with low dimensionality make it a
promising method for distilling interpretable dynamics that can help explain
neural computation.
| [
{
"created": "Tue, 12 Sep 2023 17:03:50 GMT",
"version": "v1"
}
] | 2023-09-13 | [
[
"Versteeg",
"Christopher",
""
],
[
"Sedler",
"Andrew R.",
""
],
[
"McCart",
"Jonathan D.",
""
],
[
"Pandarinath",
"Chethan",
""
]
] | The advent of large-scale neural recordings has enabled new methods to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these \textit{neural dynamics} cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which captures latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation. |
1212.4788 | Dominik Grimm dg | Dominik Grimm, Bastian Greshake, Stefan Kleeberger, Christoph Lippert,
Oliver Stegle, Bernhard Sch\"olkopf, Detlef Weigel and Karsten Borgwardt | easyGWAS: An integrated interspecies platform for performing genome-wide
association studies | null | null | null | null | q-bio.GN cs.CE cs.DL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: The rapid growth in genome-wide association studies (GWAS) in
plants and animals has brought about the need for a central resource that
facilitates i) performing GWAS, ii) accessing data and results of other GWAS,
and iii) enabling all users regardless of their background to exploit the
latest statistical techniques without having to manage complex software and
computing resources.
Results: We present easyGWAS, a web platform that provides methods, tools and
dynamic visualizations to perform and analyze GWAS. In addition, easyGWAS makes
it simple to reproduce results of others, validate findings, and access larger
sample sizes through merging of public datasets.
Availability: Detailed method and data descriptions as well as tutorials are
available in the supplementary materials. easyGWAS is available at
http://easygwas.tuebingen.mpg.de/.
Contact: dominik.grimm@tuebingen.mpg.de
| [
{
"created": "Wed, 19 Dec 2012 18:39:06 GMT",
"version": "v1"
}
] | 2012-12-20 | [
[
"Grimm",
"Dominik",
""
],
[
"Greshake",
"Bastian",
""
],
[
"Kleeberger",
"Stefan",
""
],
[
"Lippert",
"Christoph",
""
],
[
"Stegle",
"Oliver",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Weigel",
"Detlef",
""
],
[
"Borgwardt",
"Karsten",
""
]
] | Motivation: The rapid growth in genome-wide association studies (GWAS) in plants and animals has brought about the need for a central resource that facilitates i) performing GWAS, ii) accessing data and results of other GWAS, and iii) enabling all users regardless of their background to exploit the latest statistical techniques without having to manage complex software and computing resources. Results: We present easyGWAS, a web platform that provides methods, tools and dynamic visualizations to perform and analyze GWAS. In addition, easyGWAS makes it simple to reproduce results of others, validate findings, and access larger sample sizes through merging of public datasets. Availability: Detailed method and data descriptions as well as tutorials are available in the supplementary materials. easyGWAS is available at http://easygwas.tuebingen.mpg.de/. Contact: dominik.grimm@tuebingen.mpg.de |
2005.00082 | Stephan Preibisch | Raghav Chhetri, Stephan Preibisch, Nico Stuurman | Software for Microscopy Workshop White Paper | 10 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microscopes have morphed from purely optical instruments into motorized,
robotic machines that form images on digital sensors rather than eyeballs. This
continuing trend towards automation and digitization enables many new
approaches to microscopy that would have been impossible or impractical without
computer interfaces. Accordingly, today's development of new microscopes most
often depends on concurrent software development to bring these custom-built
systems to life. This dependence on software brings opportunities and
challenges. Most importantly, a key challenge while developing new microscopes
is to develop the appropriate software. Despite the fact that software is
easily copied and distributed, remarkably few opportunities are available to
share experiences creating microscope control software. In turn, this brings
challenges in creating maintainable and flexible software code and writing User
Interfaces (UIs) that are easily used by researchers, who are primarily life
scientists. To start to address these challenges by identifying common problems
and shared solutions, we assembled a small group of researchers that develop or
use software to control their custom-built microscopes at the Janelia Research
Campus for a two-day workshop in February 2020. The outcome of the workshop was
the definition of clear milestones, as well as the recognition of an involved
community, much larger than the one assembled at the workshop. This community
encounters similar hurdles and shares a great desire to overcome these by
stronger, community-wide collaborations on Open Source Software. This White
Paper summarizes the major issues identified, proposes approaches to address
these as a community, and outlines the next steps that can be taken to develop
a framework facilitating shared microscope software development, significantly
speeding up development of new microscopy systems.
| [
{
"created": "Thu, 30 Apr 2020 20:10:45 GMT",
"version": "v1"
}
] | 2020-05-04 | [
[
"Chhetri",
"Raghav",
""
],
[
"Preibisch",
"Stephan",
""
],
[
"Stuurman",
"Nico",
""
]
] ] | Microscopes have morphed from purely optical instruments into motorized, robotic machines that form images on digital sensors rather than eyeballs. This continuing trend towards automation and digitization enables many new approaches to microscopy that would have been impossible or impractical without computer interfaces. Accordingly, today's development of new microscopes most often depends on concurrent software development to bring these custom-built systems to life. This dependence on software brings opportunities and challenges. Most importantly, a key challenge while developing new microscopes is to develop the appropriate software. Despite the fact that software is easily copied and distributed, remarkably few opportunities are available to share experiences creating microscope control software. In turn, this brings challenges in creating maintainable and flexible software code and writing User Interfaces (UIs) that are easily used by researchers, who are primarily life scientists. To start to address these challenges by identifying common problems and shared solutions, we assembled a small group of researchers that develop or use software to control their custom-built microscopes at the Janelia Research Campus for a two-day workshop in February 2020. The outcome of the workshop was the definition of clear milestones, as well as the recognition of an involved community, much larger than the one assembled at the workshop. This community encounters similar hurdles and shares a great desire to overcome these by stronger, community-wide collaborations on Open Source Software. This White Paper summarizes the major issues identified, proposes approaches to address these as a community, and outlines the next steps that can be taken to develop a framework facilitating shared microscope software development, significantly speeding up development of new microscopy systems.
2005.09786 | Jo\~ao AM Gondim | Jo\~ao A. M. Gondim, Larissa Machado | Optimal quarantine strategies for the COVID-19 pandemic in a population
with a discrete age structure | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this work is to study the optimal controls for the COVID-19
epidemic in Brazil. We consider an age-structured SEIRQ model with quarantine
compartment, where the controls are the quarantine entrance parameters. We then
compare the optimal controls for different quarantine lengths and distribution
of the total control cost by assessing their respective reductions in deaths in
comparison to the same period without quarantine. The best strategy provides a
calendar of when to relax the isolation measures for each age group. Finally,
we analyse how a delay in the beginning of the quarantine affects this calendar
by changing the initial conditions.
| [
{
"created": "Tue, 19 May 2020 23:07:09 GMT",
"version": "v1"
}
] | 2020-05-21 | [
[
"Gondim",
"João A. M.",
""
],
[
"Machado",
"Larissa",
""
]
] | The goal of this work is to study the optimal controls for the COVID-19 epidemic in Brazil. We consider an age-structured SEIRQ model with quarantine compartment, where the controls are the quarantine entrance parameters. We then compare the optimal controls for different quarantine lengths and distribution of the total control cost by assessing their respective reductions in deaths in comparison to the same period without quarantine. The best strategy provides a calendar of when to relax the isolation measures for each age group. Finally, we analyse how a delay in the beginning of the quarantine affects this calendar by changing the initial conditions. |
1502.03744 | Heng Li | Heng Li | Correcting Illumina sequencing errors for human data | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summary: We present a new tool to correct sequencing errors in Illumina data
produced from high-coverage whole-genome shotgun resequencing. It uses a
non-greedy algorithm and shows comparable performance and higher accuracy in an
evaluation on real human data. This evaluation has the most complete collection
of high-performance error correctors so far.
Availability and implementation: https://github.com/lh3/bfc
Contact: hengli@broadinstitute.org
| [
{
"created": "Thu, 12 Feb 2015 17:30:34 GMT",
"version": "v1"
}
] | 2015-02-13 | [
[
"Li",
"Heng",
""
]
] | Summary: We present a new tool to correct sequencing errors in Illumina data produced from high-coverage whole-genome shotgun resequencing. It uses a non-greedy algorithm and shows comparable performance and higher accuracy in an evaluation on real human data. This evaluation has the most complete collection of high-performance error correctors so far. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org |
2405.02767 | Srijit Seal | Srijit Seal, Maria-Anna Trapotsi, Ola Spjuth, Shantanu Singh, Jordi
Carreras-Puigvert, Nigel Greene, Andreas Bender, Anne E. Carpenter | A Decade in a Systematic Review: The Evolution and Impact of Cell
Painting | Supplementary Table/Code here:
https://github.com/srijitseal/CellPainting_SystematicReview | null | null | null | q-bio.SC q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | High-content image-based assays have fueled significant discoveries in the
life sciences in the past decade (2013-2023), including novel insights into
disease etiology, mechanism of action, new therapeutics, and toxicology
predictions. Here, we systematically review the substantial methodological
advancements and applications of Cell Painting. Advancements include
improvements in the Cell Painting protocol, assay adaptations for different
types of perturbations and applications, and improved methodologies for feature
extraction, quality control, and batch effect correction. Moreover, machine
learning methods recently surpassed classical approaches in their ability to
extract biologically useful information from Cell Painting images. Cell
Painting data have been used alone or in combination with other -omics data to
decipher the mechanism of action of a compound, its toxicity profile, and many
other biological effects. Overall, key methodological advances have expanded
the ability of Cell Painting to capture cellular responses to various
perturbations. Future advances will likely lie in advancing computational and
experimental techniques, developing new publicly available datasets, and
integrating them with other high-content data types.
| [
{
"created": "Sat, 4 May 2024 22:05:58 GMT",
"version": "v1"
}
] | 2024-05-07 | [
[
"Seal",
"Srijit",
""
],
[
"Trapotsi",
"Maria-Anna",
""
],
[
"Spjuth",
"Ola",
""
],
[
"Singh",
"Shantanu",
""
],
[
"Carreras-Puigvert",
"Jordi",
""
],
[
"Greene",
"Nigel",
""
],
[
"Bender",
"Andreas",
""
],
[
"Carpenter",
"Anne E.",
""
]
] | High-content image-based assays have fueled significant discoveries in the life sciences in the past decade (2013-2023), including novel insights into disease etiology, mechanism of action, new therapeutics, and toxicology predictions. Here, we systematically review the substantial methodological advancements and applications of Cell Painting. Advancements include improvements in the Cell Painting protocol, assay adaptations for different types of perturbations and applications, and improved methodologies for feature extraction, quality control, and batch effect correction. Moreover, machine learning methods recently surpassed classical approaches in their ability to extract biologically useful information from Cell Painting images. Cell Painting data have been used alone or in combination with other -omics data to decipher the mechanism of action of a compound, its toxicity profile, and many other biological effects. Overall, key methodological advances have expanded the ability of Cell Painting to capture cellular responses to various perturbations. Future advances will likely lie in advancing computational and experimental techniques, developing new publicly available datasets, and integrating them with other high-content data types. |
0902.0664 | Anirban Banerji | Anirban Banerji | A (possible) mathematical model to describe biological
"context-dependence" : case study with protein structure | 9 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The context-dependent nature of biological phenomena is well documented in every
branch of biology. While there have been few previous attempts to (implicitly)
model various facets of biological context-dependence, a formal and general
mathematical construct to model the wide spectrum of context-dependence, eludes
the students of biology. An objective and rigorous model, from both 'bottom-up'
as well as 'top-down' perspective, is proposed here to serve as the template to
describe the various kinds of context-dependence that we encounter in different
branches of biology. Interactions between biological contexts were found to be
transitive but non-commutative. It is found that a hierarchical nature of
dependence amongst the biological contexts models the emergent biological
properties efficiently. Reasons for these findings are provided with a general
model to describe biological reality. A scheme to algorithmically implement the
hierarchic structure of organization of biological contexts was achieved with a
construct named 'Context tree'. A 'Context tree' based analysis of context
interactions among biophysical factors influencing protein structure was
performed.
| [
{
"created": "Wed, 4 Feb 2009 08:54:08 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Nov 2011 05:32:00 GMT",
"version": "v2"
}
] | 2011-11-28 | [
[
"Banerji",
"Anirban",
""
]
] ] | The context-dependent nature of biological phenomena is well documented in every branch of biology. While there have been few previous attempts to (implicitly) model various facets of biological context-dependence, a formal and general mathematical construct to model the wide spectrum of context-dependence eludes the students of biology. An objective and rigorous model, from both 'bottom-up' as well as 'top-down' perspective, is proposed here to serve as the template to describe the various kinds of context-dependence that we encounter in different branches of biology. Interactions between biological contexts were found to be transitive but non-commutative. It is found that a hierarchical nature of dependence amongst the biological contexts models the emergent biological properties efficiently. Reasons for these findings are provided with a general model to describe biological reality. A scheme to algorithmically implement the hierarchic structure of organization of biological contexts was achieved with a construct named 'Context tree'. A 'Context tree' based analysis of context interactions among biophysical factors influencing protein structure was performed.
1201.2072 | Ajit Kumar | Ajit Kumar, Kre\v{s}imir Josi\'c | Piecewise linear models of chemical reaction networks | null | null | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that certain non-linear dynamical systems with non-linearities in the
form of Hill functions, can be approximated by piecewise linear dynamical
systems. The resulting piecewise systems have closed form solutions that can be
used to understand the behavior of the fully nonlinear system. We justify the
reduction using geometric singular perturbation theory, and illustrate the
results in networks modeling a genetic switch and a genetic oscillator.
| [
{
"created": "Tue, 10 Jan 2012 15:16:52 GMT",
"version": "v1"
}
] | 2012-01-11 | [
[
"Kumar",
"Ajit",
""
],
[
"Josić",
"Krešimir",
""
]
] | We show that certain non-linear dynamical systems with non-linearities in the form of Hill functions, can be approximated by piecewise linear dynamical systems. The resulting piecewise systems have closed form solutions that can be used to understand the behavior of the fully nonlinear system. We justify the reduction using geometric singular perturbation theory, and illustrate the results in networks modeling a genetic switch and a genetic oscillator. |
2407.09538 | Zhongju Yuan | Zhongju Yuan, Wannes Van Ransbeeck, Geraint Wiggins, Dick Botteldooren | A Dynamic Systems Approach to Modelling Human-Machine Rhythm Interaction | null | null | null | null | q-bio.NC cs.AI cs.HC cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In exploring the simulation of human rhythmic perception and synchronization
capabilities, this study introduces a computational model inspired by the
physical and biological processes underlying rhythm processing. Utilizing a
reservoir computing framework that simulates the function of the cerebellum, the
model features a dual-neuron classification and incorporates parameters to
modulate information transfer, reflecting biological neural network
characteristics. Our findings demonstrate the model's ability to accurately
perceive and adapt to rhythmic patterns within the human perceptible range,
exhibiting behavior closely aligned with human rhythm interaction. By
incorporating fine-tuning mechanisms and delay-feedback, the model enables
continuous learning and precise rhythm prediction. The introduction of
customized settings further enhances its capacity to simulate diverse human
rhythmic behaviors, underscoring the potential of this architecture in temporal
cognitive task modeling and the study of rhythm synchronization and prediction
in artificial and biological systems. Therefore, our model is capable of
transparently modelling cognitive theories that elucidate the dynamic processes
by which the brain generates rhythm-related behavior.
| [
{
"created": "Wed, 26 Jun 2024 10:07:20 GMT",
"version": "v1"
}
] | 2024-07-16 | [
[
"Yuan",
"Zhongju",
""
],
[
"Van Ransbeeck",
"Wannes",
""
],
[
"Wiggins",
"Geraint",
""
],
[
"Botteldooren",
"Dick",
""
]
] ] | In exploring the simulation of human rhythmic perception and synchronization capabilities, this study introduces a computational model inspired by the physical and biological processes underlying rhythm processing. Utilizing a reservoir computing framework that simulates the function of the cerebellum, the model features a dual-neuron classification and incorporates parameters to modulate information transfer, reflecting biological neural network characteristics. Our findings demonstrate the model's ability to accurately perceive and adapt to rhythmic patterns within the human perceptible range, exhibiting behavior closely aligned with human rhythm interaction. By incorporating fine-tuning mechanisms and delay-feedback, the model enables continuous learning and precise rhythm prediction. The introduction of customized settings further enhances its capacity to simulate diverse human rhythmic behaviors, underscoring the potential of this architecture in temporal cognitive task modeling and the study of rhythm synchronization and prediction in artificial and biological systems. Therefore, our model is capable of transparently modelling cognitive theories that elucidate the dynamic processes by which the brain generates rhythm-related behavior.
1312.2234 | Mark Alber | Ziheng Wu, Zhiliang Xu, Oleg Kim and Mark Alber | Three-dimensional Multiscale Model of Deformable Platelets Adhesion to
Vessel Wall in Blood Flow | 38 pages, 10 figures, (accepted for publication). Philosophical
Transactions of the Royal Society A, 2014 | null | 10.1098/rsta.2013.0380 | null | q-bio.TO math.NA physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When a blood vessel ruptures or gets inflamed, the human body responds by
rapidly forming a clot to restrict the loss of blood. Platelet aggregation at
the injury site of the blood vessel occurring via platelet-platelet adhesion,
tethering and rolling on the injured endothelium is a critical initial step in
blood clot formation. A novel three-dimensional multiscale model is introduced
and used in this paper to simulate receptor-mediated adhesion of deformable
platelets at the site of vascular injury under different shear rates of blood
flow. The novelty of the model is based on a new approach of coupling submodels
at three biological scales crucial for the early clot formation: novel hybrid
cell membrane submodel to represent physiological elastic properties of a
platelet, stochastic receptor-ligand binding submodel to describe cell adhesion
kinetics and Lattice Boltzmann submodel for simulating blood flow. The model
implementation on the GPUs cluster significantly improved simulation
performance. Predictive model simulations revealed that platelet deformation,
interactions between platelets in the vicinity of the vessel wall as well as
the number of functional GPIb{\alpha} platelet receptors played significant
roles in the platelet adhesion to the injury site. Variation of the number of
functional GPIb{\alpha} platelet receptors as well as changes of platelet
stiffness can represent effects of specific drugs reducing or enhancing
platelet activity. Therefore, predictive simulations can improve the search for
new drug targets and help to make treatment of thrombosis patient specific.
| [
{
"created": "Sun, 8 Dec 2013 16:55:47 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Apr 2014 23:51:04 GMT",
"version": "v2"
}
] | 2015-06-18 | [
[
"Wu",
"Ziheng",
""
],
[
"Xu",
"Zhiliang",
""
],
[
"Kim",
"Oleg",
""
],
[
"Alber",
"Mark",
""
]
] ] | When a blood vessel ruptures or gets inflamed, the human body responds by rapidly forming a clot to restrict the loss of blood. Platelet aggregation at the injury site of the blood vessel occurring via platelet-platelet adhesion, tethering and rolling on the injured endothelium is a critical initial step in blood clot formation. A novel three-dimensional multiscale model is introduced and used in this paper to simulate receptor-mediated adhesion of deformable platelets at the site of vascular injury under different shear rates of blood flow. The novelty of the model is based on a new approach of coupling submodels at three biological scales crucial for the early clot formation: novel hybrid cell membrane submodel to represent physiological elastic properties of a platelet, stochastic receptor-ligand binding submodel to describe cell adhesion kinetics and Lattice Boltzmann submodel for simulating blood flow. The model implementation on the GPUs cluster significantly improved simulation performance. Predictive model simulations revealed that platelet deformation, interactions between platelets in the vicinity of the vessel wall as well as the number of functional GPIb{\alpha} platelet receptors played significant roles in the platelet adhesion to the injury site. Variation of the number of functional GPIb{\alpha} platelet receptors as well as changes of platelet stiffness can represent effects of specific drugs reducing or enhancing platelet activity. Therefore, predictive simulations can improve the search for new drug targets and help to make treatment of thrombosis patient specific.
1001.4263 | W B Langdon | W. B. Langdon, Olivia Sanchez Graillet, A. P. Harrison | RNAnet a Map of Human Gene Expression | 1 page, 0 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNAnet provides a bridge between two widely used Human gene databases.
Ensembl describes DNA sequences and transcripts but not experimental gene
expression, whilst NCBI's GEO contains actual expression levels from Human
samples. RNAnet provides immediate access to thousands of Affymetrix HG-U133
2plus GeneChip measurements of Homo sapiens genes in most medically interesting
tissues.
Without RNAnet comparison across experiments in GEO is very labour intensive
requiring man-months of effort to download and clean data. With RNAnet anyone
can access cleaned quantile normalised data in seconds. It can be used to data
mine patterns of co-expression. The network of strongly correlated genes is
huge but sparse. Thousands of genes interact strongly with thousands of others.
Conversely there are tens of thousands of genes which interact strongly with
less than 100 others. That is, RNAnet gives new views for RNA Systems Biology.
| [
{
"created": "Sun, 24 Jan 2010 17:45:18 GMT",
"version": "v1"
}
] | 2010-01-26 | [
[
"Langdon",
"W. B.",
""
],
[
"Graillet",
"Olivia Sanchez",
""
],
[
"Harrison",
"A. P.",
""
]
] ] | RNAnet provides a bridge between two widely used Human gene databases. Ensembl describes DNA sequences and transcripts but not experimental gene expression, whilst NCBI's GEO contains actual expression levels from Human samples. RNAnet provides immediate access to thousands of Affymetrix HG-U133 2plus GeneChip measurements of Homo sapiens genes in most medically interesting tissues. Without RNAnet comparison across experiments in GEO is very labour intensive requiring man-months of effort to download and clean data. With RNAnet anyone can access cleaned quantile normalised data in seconds. It can be used to data mine patterns of co-expression. The network of strongly correlated genes is huge but sparse. Thousands of genes interact strongly with thousands of others. Conversely there are tens of thousands of genes which interact strongly with less than 100 others. That is, RNAnet gives new views for RNA Systems Biology.
1204.6538 | Dimitris Anastassiou | Wei-Yi Cheng and Dimitris Anastassiou | Biomolecular events in cancer revealed by attractor metagenes | 22 pages, 1 figure, 5 tables | null | 10.1371/journal.pcbi.1002920 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining gene expression profiles has proven valuable for identifying
metagenes, defined as linear combinations of individual genes, serving as
surrogates of biological phenotypes. Typically, such metagenes are jointly
generated as the result of an optimization process for dimensionality
reduction. Here we present an unconstrained method for individually generating
metagenes that can point to the core of the underlying biological mechanisms.
We use an iterative process that starts from any seed gene and converges to one
of several precise attractor metagenes representing biomolecular events, such
as cell transdifferentiation or the presence of an amplicon. By analyzing six
rich gene expression datasets from three different cancer types, we identified
many such biomolecular events, some of which are present in all tested cancer
types. We focus on several such events including a stage-associated mesenchymal
transition and a grade-associated mitotic chromosomal instability.
| [
{
"created": "Mon, 30 Apr 2012 02:49:03 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Cheng",
"Wei-Yi",
""
],
[
"Anastassiou",
"Dimitris",
""
]
] | Mining gene expression profiles has proven valuable for identifying metagenes, defined as linear combinations of individual genes, serving as surrogates of biological phenotypes. Typically, such metagenes are jointly generated as the result of an optimization process for dimensionality reduction. Here we present an unconstrained method for individually generating metagenes that can point to the core of the underlying biological mechanisms. We use an iterative process that starts from any seed gene and converges to one of several precise attractor metagenes representing biomolecular events, such as cell transdifferentiation or the presence of an amplicon. By analyzing six rich gene expression datasets from three different cancer types, we identified many such biomolecular events, some of which are present in all tested cancer types. We focus on several such events including a stage-associated mesenchymal transition and a grade-associated mitotic chromosomal instability. |
1209.5603 | Hiizu Nakanishi | Ryota Nishino, Takahiro Sakaue, and Hiizu Nakanishi | Transcription fluctuation effects on biochemical oscillations | with Supplementary material | PLoS ONE 8(4) (2013) e60938 | 10.1371/journal.pone.0060938 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biochemical oscillation systems often consist of negative feedback loops with
repressive transcription regulation. Such systems have distinctive
characteristics in comparison with ordinary chemical systems: i) the numbers of
molecules involved are small, ii) there are typically only a couple of genes in
a cell with a finite regulation time scale. Due to the fluctuations caused by
these features, the system behavior can be quite different from the one
obtained by rate equations, because the rate equations ignore molecular
fluctuations and thus are exact only in the infinite molecular number limit.
The molecular fluctuations on a free-running circadian system have been studied
by Gonze et al. (2002) by introducing a scale parameter $\Omega$ for the system
size. They consider, however, only the first effect, assuming that the gene
process is fast enough for the second effect to be ignored, but this has not
been examined systematically yet. In this work, we study fluctuation effects
due to the finite gene regulation time by introducing a new scale parameter
$\tau$, which we take as the unbinding time of a nuclear protein from the gene.
We focus on the case where the fluctuations due to small molecular numbers can
be ignored. In simulations on the same system studied by Gonze et al., we find
the system is sensitive to the fluctuation in the transcription regulation; the
period of oscillation fluctuates about 30 min even when the regulation time
scale $\tau$ is around 30 s, that is even smaller than 1/1000 of its circadian
period. We also demonstrate that the distribution width for the oscillation
period and the amplitude scales with $\sqrt\tau$, and the correlation time of
the oscillation scales with $1/\tau$ in the small $\tau$ regime. The relative
fluctuations for the period are about half of that for the amplitude, namely,
the periodicity is more stable than the amplitude.
| [
{
"created": "Tue, 25 Sep 2012 13:29:00 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Nishino",
"Ryota",
""
],
[
"Sakaue",
"Takahiro",
""
],
[
"Nakanishi",
"Hiizu",
""
]
] | Biochemical oscillation systems often consist of negative feedback loops with repressive transcription regulation. Such systems have distinctive characteristics in comparison with ordinary chemical systems: i) the numbers of molecules involved are small, ii) there are typically only a couple of genes in a cell with a finite regulation time scale. Due to the fluctuations caused by these features, the system behavior can be quite different from the one obtained by rate equations, because the rate equations ignore molecular fluctuations and thus are exact only in the infinite molecular number limit. The molecular fluctuations on a free-running circadian system have been studied by Gonze et al. (2002) by introducing a scale parameter $\Omega$ for the system size. They consider, however, only the first effect, assuming that the gene process is fast enough for the second effect to be ignored, but this has not been examined systematically yet. In this work, we study fluctuation effects due to the finite gene regulation time by introducing a new scale parameter $\tau$, which we take as the unbinding time of a nuclear protein from the gene. We focus on the case where the fluctuations due to small molecular numbers can be ignored. In simulations on the same system studied by Gonze et al., we find the system is sensitive to the fluctuation in the transcription regulation; the period of oscillation fluctuates about 30 min even when the regulation time scale $\tau$ is around 30 s, that is even smaller than 1/1000 of its circadian period. We also demonstrate that the distribution width for the oscillation period and the amplitude scales with $\sqrt\tau$, and the correlation time of the oscillation scales with $1/\tau$ in the small $\tau$ regime. The relative fluctuations for the period are about half of that for the amplitude, namely, the periodicity is more stable than the amplitude. |
1211.3053 | Anna DiRienzo | Gorka Alkorta-Aranburu, Cynthia M. Beall, David B. Witonsky, Amha
Gebremedhin, Jonathan K. Pritchard, Anna Di Rienzo | The genetic architecture of adaptations to high altitude in Ethiopia | to be published in PLOS Genetics | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although hypoxia is a major stress on physiological processes, several human
populations have survived for millennia at high altitudes, suggesting that they
have adapted to hypoxic conditions. This hypothesis was recently corroborated
by studies of Tibetan highlanders, which showed that polymorphisms in candidate
genes show signatures of natural selection as well as well-replicated
association signals for variation in hemoglobin levels. We extended genomic
analysis to two Ethiopian ethnic groups: Amhara and Oromo. For each ethnic
group, we sampled low and high altitude residents, thus allowing genetic and
phenotypic comparisons across altitudes and across ethnic groups. Genome-wide
SNP genotype data were collected in these samples by using Illumina arrays. We
find that variants associated with hemoglobin variation among Tibetans or other
variants at the same loci do not influence the trait in Ethiopians. However, in
the Amhara, SNP rs10803083 is associated with hemoglobin levels at genome-wide
levels of significance. No significant genotype association was observed for
oxygen saturation levels in either ethnic group. Approaches based on allele
frequency divergence did not detect outliers in candidate hypoxia genes, but
the most differentiated variants between high- and lowlanders have a clear role
in pathogen defense. Interestingly, a significant excess of allele frequency
divergence was consistently detected for genes involved in cell cycle control,
DNA damage and repair, thus pointing to new pathways for high altitude
adaptations. Finally, a comparison of CpG methylation levels between high- and
lowlanders found several significant signals at individual genes in the Oromo.
| [
{
"created": "Tue, 13 Nov 2012 17:15:58 GMT",
"version": "v1"
}
] | 2012-11-14 | [
[
"Alkorta-Aranburu",
"Gorka",
""
],
[
"Beall",
"Cynthia M.",
""
],
[
"Witonsky",
"David B.",
""
],
[
"Gebremedhin",
"Amha",
""
],
[
"Pritchard",
"Jonathan K.",
""
],
[
"Di Rienzo",
"Anna",
""
]
] | Although hypoxia is a major stress on physiological processes, several human populations have survived for millennia at high altitudes, suggesting that they have adapted to hypoxic conditions. This hypothesis was recently corroborated by studies of Tibetan highlanders, which showed that polymorphisms in candidate genes show signatures of natural selection as well as well-replicated association signals for variation in hemoglobin levels. We extended genomic analysis to two Ethiopian ethnic groups: Amhara and Oromo. For each ethnic group, we sampled low and high altitude residents, thus allowing genetic and phenotypic comparisons across altitudes and across ethnic groups. Genome-wide SNP genotype data were collected in these samples by using Illumina arrays. We find that variants associated with hemoglobin variation among Tibetans or other variants at the same loci do not influence the trait in Ethiopians. However, in the Amhara, SNP rs10803083 is associated with hemoglobin levels at genome-wide levels of significance. No significant genotype association was observed for oxygen saturation levels in either ethnic group. Approaches based on allele frequency divergence did not detect outliers in candidate hypoxia genes, but the most differentiated variants between high- and lowlanders have a clear role in pathogen defense. Interestingly, a significant excess of allele frequency divergence was consistently detected for genes involved in cell cycle control, DNA damage and repair, thus pointing to new pathways for high altitude adaptations. Finally, a comparison of CpG methylation levels between high- and lowlanders found several significant signals at individual genes in the Oromo. |
2301.00159 | Florian Nill | Florian Nill | Symmetries and normalization in 3-compartment epidemic models I: The
replacement number dynamics | 29 pages, 1 figure, 3 tables | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As shown recently by the author, constant population SI(R)S models map to
Hethcote's classic endemic model originally proposed in 1973. This unifies a
whole class of models with up to 10 parameters, all being isomorphic to a
simple 2-parameter master model for endemic bifurcation. In this work this
procedure is extended to a 14-parameter SSISS Model, including social behavior
parameters, a (diminished) susceptibility of the R-compartment and unbalanced
constant per capita birth and death rates, thus covering many prominent models
in the literature. Under mild conditions, in the dynamics for fractional
variables in this model all vital parameters become redundant at the cost of
possibly negative incidence rates. There is a symmetry group G acting on
parameter space A, such that systems with G-equivalent parameters are
isomorphic and map to the same normalized system. Using (Xrep,I) as canonical
coordinates, Xrep the replacement number, normalization reduces to parameter
space A/G with 5 parameters only. This approach reveals unexpected relations
between various models in the literature. Part two of this work will analyze
equilibria, stability and backward bifurcation and part three will further
reduce the number of essential parameters from 5 to 3.
| [
{
"created": "Sat, 31 Dec 2022 08:59:41 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jan 2023 18:24:25 GMT",
"version": "v2"
}
] | 2023-01-06 | [
[
"Nill",
"Florian",
""
]
] | As shown recently by the author, constant population SI(R)S models map to Hethcote's classic endemic model originally proposed in 1973. This unifies a whole class of models with up to 10 parameters, all being isomorphic to a simple 2-parameter master model for endemic bifurcation. In this work this procedure is extended to a 14-parameter SSISS Model, including social behavior parameters, a (diminished) susceptibility of the R-compartment and unbalanced constant per capita birth and death rates, thus covering many prominent models in the literature. Under mild conditions, in the dynamics for fractional variables in this model all vital parameters become redundant at the cost of possibly negative incidence rates. There is a symmetry group G acting on parameter space A, such that systems with G-equivalent parameters are isomorphic and map to the same normalized system. Using (Xrep,I) as canonical coordinates, Xrep the replacement number, normalization reduces to parameter space A/G with 5 parameters only. This approach reveals unexpected relations between various models in the literature. Part two of this work will analyze equilibria, stability and backward bifurcation and part three will further reduce the number of essential parameters from 5 to 3. |
1510.05495 | Oliver Gould Mr | Oliver Gould, Amy Smart, Norman Ratcliffe, Ben de Lacy Costello | Canine Olfactory Differentiation of Cancer: A Review of the Literature | Total of 23 pages including citations, with 1 embedded table | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous studies have attempted to demonstrate the olfactory ability of
canines to detect several common cancer types from human bodily fluids, breath
and tissue. Canines have been reported to detect bladder cancer (sensitivity of
0.63-0.73 and specificity of 0.64-0.92) and prostate cancer (sensitivity of
0.91-0.99 and specificity of 0.91-0.97) from urine; breast cancer (sensitivity
of 0.88 and specificity of 0.98) and lung cancer (sensitivity 0.56-0.99 and
specificity of 8.30-0.99) on breath and colorectal cancer from stools
(sensitivity of 0.91-0.97 and specificity of 0.97-0.99). The quoted figures of
sensitivity and specificity across differing studies demonstrate that in many
cases results are variable from study to study; this raises questions about the
reproducibility of methodology and study design which we have identified
herein. Furthermore in some studies the controls used have resulted in
differentiation of samples which are of limited use for clinical diagnosis.
These studies provide some evidence that cancer gives rise to different
volatile organic compounds (VOCs) compared to healthy samples. Whilst canine
detection may be unsuitable for clinical implementation they can, at least,
provide inspiration for more traditional laboratory investigations.
| [
{
"created": "Mon, 5 Oct 2015 15:32:06 GMT",
"version": "v1"
}
] | 2015-10-20 | [
[
"Gould",
"Oliver",
""
],
[
"Smart",
"Amy",
""
],
[
"Ratcliffe",
"Norman",
""
],
[
"Costello",
"Ben de Lacy",
""
]
] | Numerous studies have attempted to demonstrate the olfactory ability of canines to detect several common cancer types from human bodily fluids, breath and tissue. Canines have been reported to detect bladder cancer (sensitivity of 0.63-0.73 and specificity of 0.64-0.92) and prostate cancer (sensitivity of 0.91-0.99 and specificity of 0.91-0.97) from urine; breast cancer (sensitivity of 0.88 and specificity of 0.98) and lung cancer (sensitivity 0.56-0.99 and specificity of 8.30-0.99) on breath and colorectal cancer from stools (sensitivity of 0.91-0.97 and specificity of 0.97-0.99). The quoted figures of sensitivity and specificity across differing studies demonstrate that in many cases results are variable from study to study; this raises questions about the reproducibility of methodology and study design which we have identified herein. Furthermore in some studies the controls used have resulted in differentiation of samples which are of limited use for clinical diagnosis. These studies provide some evidence that cancer gives rise to different volatile organic compounds (VOCs) compared to healthy samples. Whilst canine detection may be unsuitable for clinical implementation they can, at least, provide inspiration for more traditional laboratory investigations. |
1512.07810 | Irem Altan | Irem Altan, Patrick Charbonneau, Edward H. Snell | Computational Crystallization | 9 pages, 3 figures | Arch. Biochem. Biophys. 602, 12-20 (2016) | 10.1016/j.abb.2016.01.004 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crystallization is a key step in macromolecular structure determination by
crystallography. While a robust theoretical treatment of the process is
available, due to the complexity of the system, the experimental process is
still largely one of trial and error. In this article, efforts in the field are
discussed together with a theoretical underpinning using a solubility phase
diagram. Prior knowledge has been used to develop tools that computationally
predict the crystallization outcome and define mutational approaches that
enhance the likelihood of crystallization. For the most part these tools are
based on binary outcomes (crystal or no crystal), and the full information
contained in an assembly of crystallization screening experiments is lost. The
potential of this additional information is illustrated by examples where new
biological knowledge can be obtained and where a target can be sub-categorized
to predict which class of reagents provides the crystallization driving force.
Computational analysis of crystallization requires complete and correctly
formatted data. While massive crystallization screening efforts are under way,
the data available from many of these studies are sparse. The potential for
this data and the steps needed to realize this potential are discussed.
| [
{
"created": "Thu, 24 Dec 2015 13:09:44 GMT",
"version": "v1"
}
] | 2016-08-02 | [
[
"Altan",
"Irem",
""
],
[
"Charbonneau",
"Patrick",
""
],
[
"Snell",
"Edward H.",
""
]
] | Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. |
2005.08861 | Josue Tchouanti Fotso | Carl Graham (CMAP, CNRS, Inria), J\'er\^ome Harmand (LBE, INRAE),
Sylvie M\'el\'eard (CMAP, IUF), Josu\'e Tchouanti (CMAP) | Bacterial Metabolic Heterogeneity: from Stochastic to Deterministic
Models | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the modeling of the diauxic growth of a pure microorganism on two
distinct sugars which was first described by Monod. Most available models are
deterministic and make the assumption that all cells of the microbial ecosystem
behave homogeneously with respect to both sugars, all consuming the first one
and then switching to the second when the first is exhausted. We propose here a
stochastic model which describes what is called "metabolic heterogeneity". It
allows one to consider small populations, as in microfluidics, as well as large
populations where billions of individuals coexist in the medium in a batch or
chemostat. We highlight the link between the stochastic model and the
deterministic behavior in real large cultures using a large population
approximation. Then the influence of model parameter values on model dynamics
is studied, notably with respect to the lag-phase observed in real systems
depending on the sugars on which the microorganism grows. It is shown that both
metabolic parameters as well as initial conditions play a crucial role on
system dynamics.
| [
{
"created": "Fri, 15 May 2020 14:24:02 GMT",
"version": "v1"
}
] | 2020-05-19 | [
[
"Graham",
"Carl",
"",
"CMAP, CNRS, Inria"
],
[
"Harmand",
"Jérôme",
"",
"LBE, INRAE"
],
[
"Méléard",
"Sylvie",
"",
"CMAP, IUF"
],
[
"Tchouanti",
"Josué",
"",
"CMAP"
]
] | We revisit the modeling of the diauxic growth of a pure microorganism on two distinct sugars which was first described by Monod. Most available models are deterministic and make the assumption that all cells of the microbial ecosystem behave homogeneously with respect to both sugars, all consuming the first one and then switching to the second when the first is exhausted. We propose here a stochastic model which describes what is called "metabolic heterogeneity". It allows to consider small populations as in microfluidics as well as large populations where billions of individuals coexist in the medium in a batch or chemostat. We highlight the link between the stochastic model and the deterministic behavior in real large cultures using a large population approximation. Then the influence of model parameter values on model dynamics is studied, notably with respect to the lag-phase observed in real systems depending on the sugars on which the microorganism grows. It is shown that both metabolic parameters as well as initial conditions play a crucial role on system dynamics. |
1905.11341 | Ahmed Fouad | Ahmed M. Fouad | Dynamical stability analysis of tumor growth and invasion: A
reaction-diffusion model | null | null | null | null | q-bio.TO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The acid-mediated tumor invasion hypothesis proposes that altered glucose
metabolism exhibited by the vast majority of tumors leads to increased acid (H+
ion) production which subsequently facilitates tumor invasion [1-3]. The
reaction-diffusion model [2] that captures the key elements of the hypothesis
shows how the densities of normal cells, tumor cells, and excess H+ ions change
with time due to both chemical reactions between these three populations and
density-dependent diffusion by which they spread out in three-dimensional
space. Moreover, it proposes that each cell has an optimal pH for survival;
that is, if the local pH deviates from the optimal value in either an acidic or
alkaline direction, the cells begin to die, and that the death rate saturates
at some maximum value when the microenvironment is extremely acidic or
alkaline. We have previously studied in detail how the death-rate functions of
the normal and tumor populations depend upon the H+ ion density [4]. Here, we
extend previous work by investigating how the equilibrium densities (at which
the time rates of change of the cellular densities are equal to zero) reached
by the normal and tumor populations in three-dimensional space are affected by
the presence of the H+ ions, and we present detailed analytical and
computational techniques to analyze the dynamical stability of these
equilibrium densities. For a sample set of biological input parameters and
within the acid-mediation hypothesis, our model predicts the transformation to
a malignant behavior, as indicated by the presence of unstable sets of
equilibrium densities.
| [
{
"created": "Fri, 10 May 2019 18:44:46 GMT",
"version": "v1"
}
] | 2019-06-10 | [
[
"Fouad",
"Ahmed M.",
""
]
] | The acid-mediated tumor invasion hypothesis proposes that altered glucose metabolism exhibited by the vast majority of tumors leads to increased acid (H+ ion) production which subsequently facilitates tumor invasion [1-3]. The reaction-diffusion model [2] that captures the key elements of the hypothesis shows how the densities of normal cells, tumor cells, and excess H+ ions change with time due to both chemical reactions between these three populations and density-dependent diffusion by which they spread out in three-dimensional space. Moreover, it proposes that each cell has an optimal pH for survival; that is, if the local pH deviates from the optimal value in either an acidic or alkaline direction, the cells begin to die, and that the death rate saturates at some maximum value when the microenvironment is extremely acidic or alkaline. We have previously studied in detail how the death-rate functions of the normal and tumor populations depend upon the H+ ion density [4]. Here, we extend previous work by investigating how the equilibrium densities (at which the time rates of change of the cellular densities are equal to zero) reached by the normal and tumor populations in three-dimensional space are affected by the presence of the H+ ions, and we present detailed analytical and computational techniques to analyze the dynamical stability of these equilibrium densities. For a sample set of biological input parameters and within the acid-mediation hypothesis, our model predicts the transformation to a malignant behavior, as indicated by the presence of unstable sets of equilibrium densities. |
2305.12175 | In\^es Hip\'olito | In\`es Hipolito | Psychotic Markov Blankets: Striking a Free Energy Balance for Complex
Adaptation | null | null | null | null | q-bio.NC nlin.AO | http://creativecommons.org/publicdomain/zero/1.0/ | This paper proposes a framework for optimising the adaptation and attunement
of a Complex Adaptive System (CAS) with its environments. The tendency towards
stability can be explained by minimising free energy but high variability,
noise, and over-specialized rigidity can lead to a "stuck state" in a CAS.
Without perturbation (increasing free energy), the system remains stuck and
unable to adapt to changing circumstances. The paper introduces the concept of
'psychotic' Markov blankets to understand and specify factors contributing to
maladjustment conditions moving away from the minimum stuck state. The paper
offers directions for optimising adaptation and attunement to be applied to
real-world problems, from cells to behaviour, societies and ecosystems.
| [
{
"created": "Sat, 20 May 2023 11:47:28 GMT",
"version": "v1"
}
] | 2023-05-23 | [
[
"Hipolito",
"Inès",
""
]
] | This paper proposes a framework for optimising the adaptation and attunement of a Complex Adaptive System (CAS) with its environments. The tendency towards stability can be explained by minimising free energy but high variability, noise, and over-specialized rigidity can lead to a "stuck state" in a CAS. Without perturbation (increasing free energy), the system remains stuck and unable to adapt to changing circumstances. The paper introduces the concept of 'psychotic' Markov blankets to understand and specify factors contributing to maladjustment conditions moving away from the minimum stuck state. The paper offers directions for optimising adaptation and attunement to be applied to real-world problems, from cells to behaviour, societies and ecosystems. |
1710.10508 | Nils Loewen | Susannah Waxman, Ralitsa T. Loewen, Yalong Dang, Simon C. Watkins,
Alan M. Watson, Nils A. Loewen | High-Resolution, Three-Dimensional Reconstruction of the Outflow Tract
Demonstrates Segmental Differences in Cleared Eyes | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: The rate of conventional aqueous humor outflow is the highest
nasally. We hypothesized that this is reflected in regionally different outflow
structures and analyzed the entire limbus by high-resolution, full-thickness
ribbon-scanning confocal microscopy (RSCM). Methods: We perfused pig eyes by
anterior chamber cannulation with eight lectin-fluorophore conjugates, followed
by optical clearance with benzyl alcohol benzyl benzoate (BABB). RSCM and an
advanced analysis software (Imaris) were used to reconstruct a
three-dimensional, whole-specimen rendering of the perilimbal outflow
structures. We performed morphometric analyses of the outflow tract from the
level of the trabecular meshwork (TM) to the scleral vascular plexus (SVP).
Results: Except for pigmented structures, BABB cleared the entire eye.
Rhodamine-conjugated Glycine max agglutinin (soybean, SBA) labeled the outflow
tract evenly and retained fluorescence for months. RSCM produced terabyte-sized
files allowing for in silico dissection of outflow tract vessels at a high
resolution and in 3D. Networks of interconnected lumina were traced from the TM
to downstream drainage structures. The collector channel (CC) volumes were ten
times smaller than the receiving SVP vessels, the largest of which were in the
inferior limbus. Proximal CC diameters were up to four times the size of distal
diameters and more elliptical at their proximal ends. The largest CCs were
found in the superonasal and inferonasal quadrants where the highest outflow
occurs. Conclusions: RSCM of cleared eyes enabled high-resolution, volumetric
analysis of the outflow tract. The proximal structures had greater diameters
nasally while the SVP was larger in the inferior limbus.
| [
{
"created": "Sat, 28 Oct 2017 17:52:15 GMT",
"version": "v1"
}
] | 2017-10-31 | [
[
"Waxman",
"Susannah",
""
],
[
"Loewen",
"Ralitsa T.",
""
],
[
"Dang",
"Yalong",
""
],
[
"Watkins",
"Simon C.",
""
],
[
"Watson",
"Alan M.",
""
],
[
"Loewen",
"Nils A.",
""
]
] | Purpose: The rate of conventional aqueous humor outflow is the highest nasally. We hypothesized that this is reflected in regionally different outflow structures and analyzed the entire limbus by high-resolution, full-thickness ribbon-scanning confocal microscopy (RSCM). Methods: We perfused pig eyes by anterior chamber cannulation with eight lectin-fluorophore conjugates, followed by optical clearance with benzyl alcohol benzyl benzoate (BABB). RSCM and an advanced analysis software (Imaris) were used to reconstruct a three-dimensional, whole-specimen rendering of the perilimbal outflow structures. We performed morphometric analyses of the outflow tract from the level of the trabecular meshwork (TM) to the scleral vascular plexus (SVP). Results: Except for pigmented structures, BABB cleared the entire eye. Rhodamine-conjugated Glycine max agglutinin (soybean, SBA) labeled the outflow tract evenly and retained fluorescence for months. RSCM produced terabyte-sized files allowing for in silico dissection of outflow tract vessel at a high resolution and in 3D. Networks of interconnected lumina were traced from the TM to downstream drainage structures. The collector channel (CC) volumes were ten times smaller than the receiving SVP vessels, the largest of which were in the inferior limbus. Proximal CC diameters were up to four times the size of distal diameters and more elliptical at their proximal ends. The largest CCs were found in the superonasal and inferonasal quadrants where the highest outflow occurs. Conclusions: RSCM of cleared eyes enabled high-resolution, volumetric analysis of the outflow tract. The proximal structures had greater diameters nasally while the SVP was larger in the inferior limbus. |
0710.4398 | Benjamin Audit | C\'edric Vaillant (SG), Benjamin Audit (Phys-ENS), Alain Arn\'eodo
(Phys-ENS) | Thermodynamics of DNA loops with long-range correlated structural
disorder | null | Physical Review Letters 95, 6 (2005) 068101 | null | null | q-bio.GN cond-mat.stat-mech | null | We study the influence of a structural disorder on the thermodynamical
properties of 2D-elastic chains subjected to mechanical/topological constraints
such as loops. The disorder is introduced via a spontaneous curvature whose
distribution along the chain presents either no correlation or long-range
correlations (LRC). The equilibrium properties of the one-loop system are
derived numerically and analytically for weak disorder. LRC are shown to favor
the formation of small loops: the larger the LRC, the smaller the loop size. We use the
mean first passage time formalism to show that the typical short time loop
dynamics is superdiffusive in the presence of LRC. Potential biological
implications on nucleosome positioning and dynamics in eukaryotic chromatin are
discussed.
| [
{
"created": "Wed, 24 Oct 2007 08:04:36 GMT",
"version": "v1"
}
] | 2007-10-25 | [
[
"Vaillant",
"Cédric",
"",
"SG"
],
[
"Audit",
"Benjamin",
"",
"Phys-ENS"
],
[
"Arnéodo",
"Alain",
"",
"Phys-ENS"
]
] | We study the influence of structural disorder on the thermodynamic properties of 2D elastic chains subjected to mechanical/topological constraints such as loops. The disorder is introduced via a spontaneous curvature whose distribution along the chain presents either no correlation or long-range correlations (LRC). The equilibrium properties of the one-loop system are derived numerically and analytically for weak disorder. LRC are shown to favor the formation of small loops: the larger the LRC, the smaller the loop size. We use the mean first passage time formalism to show that the typical short-time loop dynamics is superdiffusive in the presence of LRC. Potential biological implications for nucleosome positioning and dynamics in eukaryotic chromatin are discussed. |
2312.13478 | Shervin Safavi | Shervin Safavi | Brain as a complex system, harnessing systems neuroscience tools &
notions for an empirical approach | PhD thesis. Complete thesis is available here
https://tobias-lib.ub.uni-tuebingen.de/xmlui/handle/10900/128071 | null | 10.15496/publikation-69434 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Finding general principles underlying brain function has been appealing to
scientists. Indeed, in some branches of science like physics and chemistry (and
to some degree biology) a general theory often can capture the essence of a
wide range of phenomena. Whether we can find such principles in neuroscience,
and [assuming they do exist] what those principles are, are important
questions. Abstracting the brain as a complex system is one of the perspectives
that may help us answer this question.
While it is commonly accepted that the brain is a (or even the) prominent
example of a complex system, the far-reaching implications of this are still
arguably overlooked in our approaches to neuroscientific questions. One of the
reasons for the lack of attention could be the apparent difference in foci of
investigations in these two fields -- neuroscience and complex systems. This
thesis is an effort toward providing a bridge between systems neuroscience and
complex systems by harnessing systems neuroscience tools & notions for building
empirical approaches toward the brain as a complex system.
Perhaps, in the spirit of searching for principles, we should abstract and
approach the brain as a complex adaptive system as the more complete
perspective (rather than just a complex system). In the end, the brain, even
the most "complex system", need to survive in the environment. Indeed, in the
field of complex adaptive systems, the intention is understanding very similar
questions in nature. As an outlook, we also touch on some research directions
pertaining to the adaptivity of the brain as well.
| [
{
"created": "Wed, 20 Dec 2023 23:18:10 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Safavi",
"Shervin",
""
]
] | Finding general principles underlying brain function has been appealing to scientists. Indeed, in some branches of science like physics and chemistry (and to some degree biology) a general theory often can capture the essence of a wide range of phenomena. Whether we can find such principles in neuroscience, and [assuming they do exist] what those principles are, are important questions. Abstracting the brain as a complex system is one of the perspectives that may help us answer this question. While it is commonly accepted that the brain is a (or even the) prominent example of a complex system, the far-reaching implications of this are still arguably overlooked in our approaches to neuroscientific questions. One of the reasons for the lack of attention could be the apparent difference in foci of investigations in these two fields -- neuroscience and complex systems. This thesis is an effort toward providing a bridge between systems neuroscience and complex systems by harnessing systems neuroscience tools & notions for building empirical approaches toward the brain as a complex system. Perhaps, in the spirit of searching for principles, we should abstract and approach the brain as a complex adaptive system as the more complete perspective (rather than just a complex system). In the end, the brain, even the most "complex system", needs to survive in the environment. Indeed, in the field of complex adaptive systems, the intention is to understand very similar questions in nature. As an outlook, we also touch on some research directions pertaining to the adaptivity of the brain as well. |
1801.08447 | Michael Altenbuchinger | Franziska G\"ortler, Stefan Solbrig, Tilo Wettig, Peter J. Oefner,
Rainer Spang, Michael Altenbuchinger | Loss-function learning for digital tissue deconvolution | 13 pages, 7 figures | null | 10.1007/978-3-319-89929-9_5 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gene expression profile of a tissue averages the expression profiles of
all cells in this tissue. Digital tissue deconvolution (DTD) addresses the
following inverse problem: Given the expression profile $y$ of a tissue, what
is the cellular composition $c$ of that tissue? If $X$ is a matrix whose
columns are reference profiles of individual cell types, the composition $c$
can be computed by minimizing $\mathcal L(y-Xc)$ for a given loss function
$\mathcal L$. Current methods use predefined all-purpose loss functions. They
successfully quantify the dominating cells of a tissue, while often falling
short in detecting small cell populations.
Here we learn the loss function $\mathcal L$ along with the composition $c$.
This allows us to adapt to application-specific requirements such as focusing
on small cell populations or distinguishing phenotypically similar cell
populations. Our method quantifies large cell fractions as accurately as
existing methods and significantly improves the detection of small cell
populations and the distinction of similar cell types.
| [
{
"created": "Thu, 25 Jan 2018 15:17:31 GMT",
"version": "v1"
}
] | 2018-06-13 | [
[
"Görtler",
"Franziska",
""
],
[
"Solbrig",
"Stefan",
""
],
[
"Wettig",
"Tilo",
""
],
[
"Oefner",
"Peter J.",
""
],
[
"Spang",
"Rainer",
""
],
[
"Altenbuchinger",
"Michael",
""
]
] | The gene expression profile of a tissue averages the expression profiles of all cells in this tissue. Digital tissue deconvolution (DTD) addresses the following inverse problem: Given the expression profile $y$ of a tissue, what is the cellular composition $c$ of that tissue? If $X$ is a matrix whose columns are reference profiles of individual cell types, the composition $c$ can be computed by minimizing $\mathcal L(y-Xc)$ for a given loss function $\mathcal L$. Current methods use predefined all-purpose loss functions. They successfully quantify the dominating cells of a tissue, while often falling short in detecting small cell populations. Here we learn the loss function $\mathcal L$ along with the composition $c$. This allows us to adapt to application-specific requirements such as focusing on small cell populations or distinguishing phenotypically similar cell populations. Our method quantifies large cell fractions as accurately as existing methods and significantly improves the detection of small cell populations and the distinction of similar cell types. |
1310.8619 | James P. Brody | James P. Brody | The age specific incidence anomaly suggests that cancers originate
during development | 11 pages, 4 figures. Paper for a special issue of Biophysical Reviews
and Letters: "Research on the Physics of Cancer: A Global Perspective" | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Cancers are caused by the accumulation of genetic alterations. Since this
accumulation takes time, the incidence of most cancers is thought to increase
exponentially with age. However, careful measurements of the age-specific
incidence show that the specific incidence for many forms of cancer rises with
age to a maximum, then decreases. This decrease in the age-specific incidence
with age is an anomaly. Understanding this anomaly should lead to a better
understanding of how tumors develop and grow. Here I derive the shape of the
age-specific incidence, showing that it should follow the shape of a Weibull
distribution. Measurements indicate that the age-specific incidence for colon
cancer does indeed follow a Weibull distribution. This analysis leads to the
interpretation that for colon cancer two sub-populations exist in the general
population: a susceptible population and an immune population. Colon tumors
will only occur in the susceptible population. This analysis is consistent with
the developmental origins of disease hypothesis and generalizable to many other
common forms of cancer.
| [
{
"created": "Thu, 31 Oct 2013 17:57:27 GMT",
"version": "v1"
}
] | 2013-11-01 | [
[
"Brody",
"James P.",
""
]
] | Cancers are caused by the accumulation of genetic alterations. Since this accumulation takes time, the incidence of most cancers is thought to increase exponentially with age. However, careful measurements of the age-specific incidence show that the specific incidence for many forms of cancer rises with age to a maximum, then decreases. This decrease in the age-specific incidence with age is an anomaly. Understanding this anomaly should lead to a better understanding of how tumors develop and grow. Here I derive the shape of the age-specific incidence, showing that it should follow the shape of a Weibull distribution. Measurements indicate that the age-specific incidence for colon cancer does indeed follow a Weibull distribution. This analysis leads to the interpretation that for colon cancer two sub-populations exist in the general population: a susceptible population and an immune population. Colon tumors will only occur in the susceptible population. This analysis is consistent with the developmental origins of disease hypothesis and generalizable to many other common forms of cancer. |
2206.05059 | Jiahao Ma | Jiahao Ma | Simulation, Modeling and Prediction of a Pharmacodynamic Animal Tissue
Culture Compartment Model by Physical Informed Neural Network | 7 pages, 5 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compartment models of cell culture are widely used in cytology, pharmacology,
toxicology and other fields. Numerical simulation, data modeling and prediction
of compartment models can be realized by traditional differential equation
modeling methods. At the same time, with the development of software and
hardware, Physical Informed Neural Network (PINN) is widely used to solve
differential equation models. This work models, simulates and predicts the cell
culture compartment model based on the machine learning framework PyTorch,
using a neural network with 16 hidden layers, including 8 linear layers and 8 feedback
active layers. The results showed a loss value of 0.0004853 for three-component
four-parameter quantitative pharmacodynamic model predictions in this way,
which is evaluated by Mean Square Error (MSE). In summary, Physical Informed
Neural Network can serve as an effective tool to deal with cell culture
compartment models and may perform better in dealing with big datasets.
| [
{
"created": "Thu, 9 Jun 2022 12:21:11 GMT",
"version": "v1"
}
] | 2022-06-13 | [
[
"Ma",
"Jiahao",
""
]
] | Compartment models of cell culture are widely used in cytology, pharmacology, toxicology and other fields. Numerical simulation, data modeling and prediction of compartment models can be realized by traditional differential equation modeling methods. At the same time, with the development of software and hardware, Physical Informed Neural Network (PINN) is widely used to solve differential equation models. This work models, simulates and predicts the cell culture compartment model based on the machine learning framework PyTorch, using a neural network with 16 hidden layers, including 8 linear layers and 8 feedback active layers. The results showed a loss value of 0.0004853 for three-component four-parameter quantitative pharmacodynamic model predictions in this way, which is evaluated by Mean Square Error (MSE). In summary, Physical Informed Neural Network can serve as an effective tool to deal with cell culture compartment models and may perform better in dealing with big datasets. |
1003.1872 | Marc H. E. de Lussanet PhD | Marc H.E. de Lussanet and Jan W.M. Osse | An ancestral axial twist explains the contralateral forebrain and the
optic chiasm in vertebrates | 13 pages, 6 figures. A small correction is made (May 2014): see
footnote 2; a new stable link is provided for the Appendix | Animal Biol., 62(2):193-216 (2012) | 10.1163/157075611X617102 | null | q-bio.NC q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the best-known facts of the brain are the contralateral visual,
auditory, sensational, and motor mappings in the forebrain. How and why did
these evolve? The few theories to this question provide functional answers,
such as better networks for visuomotor control. However, these theories
contradict the data, as discussed here. Instead we propose that a 90-deg
left-turn around the body-axis evolved in a common ancestor of all vertebrates.
Compensatory migrations of the tissues during development restore body
symmetry. Eyes, nostrils and forebrain compensate in the direction of the turn,
whereas more caudal structures migrate in the opposite direction. As a result
of these opposite migrations the forebrain becomes crossed and inverted with
respect to the rest of the nervous system. We show that these compensatory
migratory movements can indeed be observed in the zebrafish (Danio rerio) and
the chick (Gallus gallus). With a model we show how the axial twist hypothesis
predicts that an optic chiasm should develop on the ventral side of the brain,
whereas the olfactory tract should be uncrossed. In addition, the hypothesis
explains the decussation of the trochlear nerve, why olfaction is non-crossed,
why the cerebellar hemispheres represent the ipsilateral bodyside, why in
sharks the forebrain halves each represent the ipsilateral eye, why the heart
and other inner organs are asymmetric in the body. Due to the poor fossil
record, the possible evolutionary scenarios remain speculative. Molecular
evidence does support the hypothesis. The findings may offer new insight into the
problematic structure of the forebrain.
| [
{
"created": "Tue, 9 Mar 2010 13:53:19 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Jul 2011 19:28:58 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Jul 2011 07:17:50 GMT",
"version": "v3"
},
{
"created": "Mon, 14 May 2012 07:23:39 GMT",
"version": "v4"
},
{
"created": "Sat, 10 May 2014 13:24:05 GMT",
"version": "v5"
},
{
"created": "Fri, 8 Sep 2023 03:26:40 GMT",
"version": "v6"
}
] | 2023-09-11 | [
[
"de Lussanet",
"Marc H. E.",
""
],
[
"Osse",
"Jan W. M.",
""
]
] | Among the best-known facts of the brain are the contralateral visual, auditory, sensational, and motor mappings in the forebrain. How and why did these evolve? The few theories addressing this question provide functional answers, such as better networks for visuomotor control. However, these theories contradict the data, as discussed here. Instead we propose that a 90-deg left-turn around the body-axis evolved in a common ancestor of all vertebrates. Compensatory migrations of the tissues during development restore body symmetry. Eyes, nostrils and forebrain compensate in the direction of the turn, whereas more caudal structures migrate in the opposite direction. As a result of these opposite migrations the forebrain becomes crossed and inverted with respect to the rest of the nervous system. We show that these compensatory migratory movements can indeed be observed in the zebrafish (Danio rerio) and the chick (Gallus gallus). With a model we show how the axial twist hypothesis predicts that an optic chiasm should develop on the ventral side of the brain, whereas the olfactory tract should be uncrossed. In addition, the hypothesis explains the decussation of the trochlear nerve, why olfaction is non-crossed, why the cerebellar hemispheres represent the ipsilateral bodyside, why in sharks the forebrain halves each represent the ipsilateral eye, why the heart and other inner organs are asymmetric in the body. Due to the poor fossil record, the possible evolutionary scenarios remain speculative. Molecular evidence does support the hypothesis. The findings may offer new insight into the problematic structure of the forebrain. |
2101.08651 | Vincent Huin | Vincent Huin (JPArc, ICM), Mathieu Barbier (ICM), Alexandra Durr
(ICM), Isabelle Le Ber (IM2A, ICM) | Reply: Early-onset phenotype of bi-allelic GRN mutations | Brain - A Journal of Neurology , Oxford University Press (OUP), 2020 | null | 10.1093/brain/awaa415 | null | q-bio.GN q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We would like to reply to Neuray et al. who report a series of five new
patients from four unrelated families with bi-allelic mutations of GRN. Their
work nicely complements the few existing reports of similar cases, and refers to
our recent publication describing six homozygous GRN pathogenic variant
carriers with divergent phenotypes and ages at onset (Huin et al., 2020). In
summary, the Letter from Neuray et al. reports valuable findings that help to
better define CLN11 due to bi-allelic GRN pathogenic variants. Despite the
small sample size, which does not allow statistical analysis, the authors
underlined the occurrence of cognitive deterioration and epilepsy. Further
study of the CLN11 families with functional brain imaging and
neuropsychological examinations may be highly informative for the understanding
and the clinical characterization of this rare disease.
| [
{
"created": "Wed, 20 Jan 2021 10:37:10 GMT",
"version": "v1"
}
] | 2021-01-22 | [
[
"Huin",
"Vincent",
"",
"JPArc, ICM"
],
[
"Barbier",
"Mathieu",
"",
"ICM"
],
[
"Durr",
"Alexandra",
"",
"ICM"
],
[
"Ber",
"Isabelle Le",
"",
"IM2A, ICM"
]
] | We would like to reply to Neuray et al. who report a series of five new patients from four unrelated families with bi-allelic mutations of GRN. Their work nicely complements the few existing reports of similar cases, and refers to our recent publication describing six homozygous GRN pathogenic variant carriers with divergent phenotypes and ages at onset (Huin et al., 2020). In summary, the Letter from Neuray et al. reports valuable findings that help to better define CLN11 due to bi-allelic GRN pathogenic variants. Despite the small sample size, which does not allow statistical analysis, the authors underlined the occurrence of cognitive deterioration and epilepsy. Further study of the CLN11 families with functional brain imaging and neuropsychological examinations may be highly informative for the understanding and the clinical characterization of this rare disease. |
2307.10768 | Ankur Sikarwar | Ankur Sikarwar and Mengmi Zhang | Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of
Working Memory | Conference on Neural Information Processing Systems (NeurIPS 2023) | null | null | null | q-bio.NC cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Working memory (WM), a fundamental cognitive process facilitating the
temporary storage, integration, manipulation, and retrieval of information,
plays a vital role in reasoning and decision-making tasks. Robust benchmark
datasets that capture the multifaceted nature of WM are crucial for the
effective development and evaluation of AI WM models. Here, we introduce a
comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM
comprises 10 tasks and a total of 1 million trials, assessing 4
functionalities, 3 domains, and 11 behavioral and neural characteristics of WM.
We jointly trained and tested state-of-the-art recurrent neural networks and
transformers on all these tasks. We also include human behavioral benchmarks as
an upper bound for comparison. Our results suggest that AI models replicate
some characteristics of WM in the brain, most notably primacy and recency
effects, and neural clusters and correlates specialized for different domains
and functionalities of WM. In the experiments, we also reveal some limitations
in existing models to approximate human behavior. This dataset serves as a
valuable resource for communities in cognitive psychology, neuroscience, and
AI, offering a standardized framework to compare and enhance WM models,
investigate WM's neural underpinnings, and develop WM models with human-like
capabilities. Our source code and data are available at
https://github.com/ZhangLab-DeepNeuroCogLab/WorM.
| [
{
"created": "Thu, 20 Jul 2023 10:57:02 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Nov 2023 07:09:39 GMT",
"version": "v2"
}
] | 2023-11-02 | [
[
"Sikarwar",
"Ankur",
""
],
[
"Zhang",
"Mengmi",
""
]
] | Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. In the experiments, we also reveal some limitations in existing models to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at https://github.com/ZhangLab-DeepNeuroCogLab/WorM. |
1902.11233 | Hideaki Shimazaki | Hideaki Shimazaki | The principles of adaptation in organisms and machines I: machine
learning, information theory, and thermodynamics | 22 pages, 6 figures | null | null | null | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How do organisms recognize their environment by acquiring knowledge about the
world, and what actions do they take based on this knowledge? This article
examines hypotheses about organisms' adaptation to the environment from machine
learning, information-theoretic, and thermodynamic perspectives. We start with
constructing a hierarchical model of the world as an internal model in the
brain, and review standard machine learning methods to infer causes by
approximately learning the model under the maximum likelihood principle. This
in turn provides an overview of the free energy principle for an organism, a
hypothesis to explain perception and action from the principle of least
surprise. Treating this statistical learning as communication between the world
and brain, learning is interpreted as a process to maximize information about
the world. We investigate how the classical theories of perception such as the
infomax principle relate to learning the hierarchical model. We then present
an approach to the recognition and learning based on thermodynamics, showing
that adaptation by causal learning results in the second law of thermodynamics
whereas inference dynamics that fuses observation with prior knowledge forms a
thermodynamic process. These provide a unified view on the adaptation of
organisms to the environment.
| [
{
"created": "Thu, 28 Feb 2019 17:30:46 GMT",
"version": "v1"
}
] | 2019-03-01 | [
[
"Shimazaki",
"Hideaki",
""
]
] | How do organisms recognize their environment by acquiring knowledge about the world, and what actions do they take based on this knowledge? This article examines hypotheses about organisms' adaptation to the environment from machine learning, information-theoretic, and thermodynamic perspectives. We start with constructing a hierarchical model of the world as an internal model in the brain, and review standard machine learning methods to infer causes by approximately learning the model under the maximum likelihood principle. This in turn provides an overview of the free energy principle for an organism, a hypothesis to explain perception and action from the principle of least surprise. Treating this statistical learning as communication between the world and brain, learning is interpreted as a process to maximize information about the world. We investigate how the classical theories of perception such as the infomax principle relate to learning the hierarchical model. We then present an approach to the recognition and learning based on thermodynamics, showing that adaptation by causal learning results in the second law of thermodynamics whereas inference dynamics that fuses observation with prior knowledge forms a thermodynamic process. These provide a unified view on the adaptation of organisms to the environment. |
q-bio/0401006 | Debashish Chowdhury | Debashish Chowdhury, Katsuhiro Nishinari, and Andreas Schadschneider | Self-organized patterns and traffic flow in colonies of organisms: from
bacteria and social insects to vertebrates | LATEX, 37 pages, 8 EPS figures, Titles of cited papers added | null | null | null | q-bio.PE cond-mat.stat-mech | null | Flocks of birds and schools of fish are familiar examples of spatial patterns
formed by living organisms. In contrast to the patterns on the skins of, say,
zebra and giraffe, the patterns of our interest are {\it transient} although
different patterns change over different time scales. The aesthetic beauty of
these patterns has attracted the attention of poets and philosophers for
centuries. Scientists from various disciplines, however, are in search of
common underlying principles that give rise to the transient patterns in
colonies of organisms. Such patterns are observed not only in colonies of
organisms as simple as single-cell bacteria, as interesting as social insects
like ants and termites as well as in colonies of vertebrates as complex as
birds and fish but also in human societies. In recent years, particularly over
the last decade, physicists have utilized the conceptual framework as well
as the methodological toolbox of statistical mechanics to unravel the mystery
of these patterns. In this article we present an overview emphasizing the
common trends that rely on theoretical modelling of these systems using the
so-called agent-based Lagrangian approach.
| [
{
"created": "Tue, 6 Jan 2004 21:47:23 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jan 2004 05:14:53 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Feb 2004 07:31:37 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Chowdhury",
"Debashish",
""
],
[
"Nishinari",
"Katsuhiro",
""
],
[
"Schadschneider",
"Andreas",
""
]
] | Flocks of birds and schools of fish are familiar examples of spatial patterns formed by living organisms. In contrast to the patterns on the skins of, say, zebra and giraffe, the patterns of our interest are {\it transient} although different patterns change over different time scales. The aesthetic beauty of these patterns has attracted the attention of poets and philosophers for centuries. Scientists from various disciplines, however, are in search of common underlying principles that give rise to the transient patterns in colonies of organisms. Such patterns are observed not only in colonies of organisms as simple as single-cell bacteria, as interesting as social insects like ants and termites as well as in colonies of vertebrates as complex as birds and fish but also in human societies. In recent years, particularly over the last decade, physicists have utilized the conceptual framework as well as the methodological toolbox of statistical mechanics to unravel the mystery of these patterns. In this article we present an overview emphasizing the common trends that rely on theoretical modelling of these systems using the so-called agent-based Lagrangian approach. |
1704.05861 | Sidney Redner | U. Bhat, S. Redner, O. Benichou | Starvation Dynamics of a Greedy Forager | 32 pages, 11 figures. Version 2: Various corrections in response to
referee reports. For publication in JSTAT | J. Stat. Mech. 073213 (2017) | 10.1088/1742-5468/aa7dfc | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the dynamics of a greedy forager that moves by random walking
in an environment where each site initially contains one unit of food. Upon
encountering a food-containing site, the forager eats all the food there and
can subsequently hop an additional $\mathcal{S}$ steps without food before
starving to death. Upon encountering an empty site, the forager goes hungry and
comes one time unit closer to starvation. We investigate the new feature of
forager greed; if the forager has a choice between hopping to an empty site or
to a food-containing site in its nearest neighborhood, it hops preferentially
towards food. If the neighboring sites all contain food or are all empty, the
forager hops equiprobably to one of these neighbors. Paradoxically, the
lifetime of the forager can depend non-monotonically on greed, and the sense of
the non-monotonicity is opposite in one and two dimensions. Even more
unexpectedly, the forager lifetime in one dimension is substantially enhanced
when the greed is negative; here the forager tends to avoid food in its local
neighborhood. We also determine the average amount of food consumed at the
instant when the forager starves. We present analytic, heuristic, and numerical
results to elucidate these intriguing phenomena.
| [
{
"created": "Wed, 19 Apr 2017 18:02:50 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Jun 2017 22:18:28 GMT",
"version": "v2"
}
] | 2017-07-31 | [
[
"Bhat",
"U.",
""
],
[
"Redner",
"S.",
""
],
[
"Benichou",
"O.",
""
]
] | We investigate the dynamics of a greedy forager that moves by random walking in an environment where each site initially contains one unit of food. Upon encountering a food-containing site, the forager eats all the food there and can subsequently hop an additional $\mathcal{S}$ steps without food before starving to death. Upon encountering an empty site, the forager goes hungry and comes one time unit closer to starvation. We investigate the new feature of forager greed; if the forager has a choice between hopping to an empty site or to a food-containing site in its nearest neighborhood, it hops preferentially towards food. If the neighboring sites all contain food or are all empty, the forager hops equiprobably to one of these neighbors. Paradoxically, the lifetime of the forager can depend non-monotonically on greed, and the sense of the non-monotonicity is opposite in one and two dimensions. Even more unexpectedly, the forager lifetime in one dimension is substantially enhanced when the greed is negative; here the forager tends to avoid food in its local neighborhood. We also determine the average amount of food consumed at the instant when the forager starves. We present analytic, heuristic, and numerical results to elucidate these intriguing phenomena. |
2301.11109 | J. C. Phillips | J. C. Phillips | Functional Differences of MIP (Lens Fiber Major Intrinsic Protein)
Between Animals and Birds | 6 pages, 1 Figure | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | The major intrinsic protein (MIP) of the lens fiber cell membrane plays a
role in lens biogenesis and maintenance. Its polypeptide chains span the
membrane six times, and the protein is naturally divided into two halves. We
use modern sequence analysis to identify differences between halves for humans
(common to animals) and chickens (common to birds).
| [
{
"created": "Tue, 24 Jan 2023 23:19:28 GMT",
"version": "v1"
}
] | 2023-01-27 | [
[
"Phillips",
"J. C.",
""
]
] | The major intrinsic protein (MIP) of the lens fiber cell membrane plays a role in lens biogenesis and maintenance. Its polypeptide chains span the membrane six times, and the protein is naturally divided into two halves. We use modern sequence analysis to identify differences between halves for humans (common to animals) and chickens (common to birds). |
1307.1736 | Sandro Meloni | Chiara Poletto, Sandro Meloni, Vittoria Colizza, Yamir Moreno and
Alessandro Vespignani | Host mobility drives pathogen competition in spatially structured
populations | 23 pages, 8 figures, 1 table. Final version to appear in PLoS Comp.
Bio | PLoS Comput Biol 9(8): e1003169 (2013) | 10.1371/journal.pcbi.1003169 | null | q-bio.PE physics.bio-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactions among multiple infectious agents are increasingly recognized as
a fundamental issue in the understanding of key questions in public health,
regarding pathogen emergence, maintenance, and evolution. The full description
of host-multipathogen systems is however challenged by the multiplicity of
factors affecting the interaction dynamics and the resulting competition that
may occur at different scales, from the within-host scale to the spatial
structure and mobility of the host population. Here we study the dynamics of
two competing pathogens in a structured host population and assess the impact
of the mobility pattern of hosts on the pathogen competition. We model the
spatial structure of the host population in terms of a metapopulation network
and focus on two strains imported locally in the system and having the same
transmission potential but different infectious periods. We find different
scenarios leading to competitive success of either one of the strains or to the
codominance of both strains in the system. The dominance of the strain
characterized by the shorter or longer infectious period depends exclusively on
the structure of the population and on the mobility of hosts across
patches. The proposed modeling framework allows the integration of other
relevant epidemiological, environmental and demographic factors opening the
path to further mathematical and computational studies of the dynamics of
multipathogen systems.
| [
{
"created": "Fri, 5 Jul 2013 23:04:36 GMT",
"version": "v1"
}
] | 2013-08-21 | [
[
"Poletto",
"Chiara",
""
],
[
"Meloni",
"Sandro",
""
],
[
"Colizza",
"Vittoria",
""
],
[
"Moreno",
"Yamir",
""
],
[
"Vespignani",
"Alessandro",
""
]
] | Interactions among multiple infectious agents are increasingly recognized as a fundamental issue in the understanding of key questions in public health, regarding pathogen emergence, maintenance, and evolution. The full description of host-multipathogen systems is however challenged by the multiplicity of factors affecting the interaction dynamics and the resulting competition that may occur at different scales, from the within-host scale to the spatial structure and mobility of the host population. Here we study the dynamics of two competing pathogens in a structured host population and assess the impact of the mobility pattern of hosts on the pathogen competition. We model the spatial structure of the host population in terms of a metapopulation network and focus on two strains imported locally in the system and having the same transmission potential but different infectious periods. We find different scenarios leading to competitive success of either one of the strains or to the codominance of both strains in the system. The dominance of the strain characterized by the shorter or longer infectious period depends exclusively on the structure of the population and on the mobility of hosts across patches. The proposed modeling framework allows the integration of other relevant epidemiological, environmental and demographic factors opening the path to further mathematical and computational studies of the dynamics of multipathogen systems. |
1312.7556 | Andrew Dhawan | Andrew Dhawan, Kamran Kaveh, Mohammad Kohandel, Siv Sivaloganathan | Stochastic Model for Tumor Control Probability: Effects of Cell Cycle
and (A)symmetric Proliferation | 12 pages, 5 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating the required dose in radiotherapy is of crucial importance since
the administered dose should be sufficient to eradicate the tumor and at the
same time should inflict minimal damage on normal cells. The probability that a
given dose and schedule of ionizing radiation eradicates all the tumor cells in
a given tissue is called the tumor control probability (TCP), and is often used
to compare various treatment strategies used in radiation therapy. In this
paper, we aim to investigate the effects of including cell-cycle phase on the
TCP by analyzing a stochastic model of a tumor comprised of actively dividing
cells and quiescent cells with different radiation sensitivities. We derive an
exact phase-diagram for the steady-state TCP of the model and show that at
high, clinically-relevant doses of radiation, the distinction between active
and quiescent tumor cells (i.e. accounting for cell-cycle effects) becomes of
negligible importance in terms of its effect on the TCP curve. However, for
very low doses of radiation, these proportions become significant determinants
of the TCP. Moreover, we use a novel numerical approach based on the method of
characteristics for partial differential equations, validated by the Gillespie
algorithm, to compute the TCP as a function of time. We observe that our
results differ from the results in the literature using similar existing
models, even though similar parameter values are used, and the reasons for
this are discussed.
| [
{
"created": "Sun, 29 Dec 2013 16:41:56 GMT",
"version": "v1"
}
] | 2013-12-31 | [
[
"Dhawan",
"Andrew",
""
],
[
"Kaveh",
"Kamran",
""
],
[
"Kohandel",
"Mohammad",
""
],
[
"Sivaloganathan",
"Siv",
""
]
] | Estimating the required dose in radiotherapy is of crucial importance since the administered dose should be sufficient to eradicate the tumor and at the same time should inflict minimal damage on normal cells. The probability that a given dose and schedule of ionizing radiation eradicates all the tumor cells in a given tissue is called the tumor control probability (TCP), and is often used to compare various treatment strategies used in radiation therapy. In this paper, we aim to investigate the effects of including cell-cycle phase on the TCP by analyzing a stochastic model of a tumor comprised of actively dividing cells and quiescent cells with different radiation sensitivities. We derive an exact phase-diagram for the steady-state TCP of the model and show that at high, clinically-relevant doses of radiation, the distinction between active and quiescent tumor cells (i.e. accounting for cell-cycle effects) becomes of negligible importance in terms of its effect on the TCP curve. However, for very low doses of radiation, these proportions become significant determinants of the TCP. Moreover, we use a novel numerical approach based on the method of characteristics for partial differential equations, validated by the Gillespie algorithm, to compute the TCP as a function of time. We observe that our results differ from the results in the literature using similar existing models, even though similar parameter values are used, and the reasons for this are discussed. |
2202.11222 | Niket Thakkar | Niket Thakkar | A modeling approach for estimating dynamic measles case detection rates | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The main idea in this paper is that the age associated with reported measles
cases can be used to estimate the number of undetected measles infections.
Somewhat surprisingly, even with age only to the nearest year, estimates of
underreporting can be generated at the much faster, 2 week time-scale
associated with measles transmission. I describe this idea by focusing on the
well-studied, 60 city United Kingdom data set, which covers the transition to
universal healthcare in 1948, and is, as a result, an interesting case study in
infectious disease surveillance. Finally, at the end of the paper, I comment
briefly on how the approach can be modified for application to modern contexts.
| [
{
"created": "Tue, 22 Feb 2022 22:43:40 GMT",
"version": "v1"
}
] | 2022-02-24 | [
[
"Thakkar",
"Niket",
""
]
] | The main idea in this paper is that the age associated with reported measles cases can be used to estimate the number of undetected measles infections. Somewhat surprisingly, even with age only to the nearest year, estimates of underreporting can be generated at the much faster, 2 week time-scale associated with measles transmission. I describe this idea by focusing on the well-studied, 60 city United Kingdom data set, which covers the transition to universal healthcare in 1948, and is, as a result, an interesting case study in infectious disease surveillance. Finally, at the end of the paper, I comment briefly on how the approach can be modified for application to modern contexts. |
2207.04333 | Konstantia Georgouli | Konstantia Georgouli, Helgi I Ing\'olfsson, Fikret Aydin, Mark
Heimann, Felice C Lightstone, Peer-Timo Bremer, Harsh Bhatia | Emerging Patterns in the Continuum Representation of Protein-Lipid
Fingerprints | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Capturing intricate biological phenomena often requires multiscale modeling
where coarse and inexpensive models are developed using limited components of
expensive and high-fidelity models. Here, we consider such a multiscale
framework in the context of cancer biology and address the challenge of
evaluating the descriptive capabilities of a continuum model developed using
1-dimensional statistics from a molecular dynamics model. Using deep learning,
we develop a highly predictive classification model that identifies complex and
emergent behavior from the continuum model. With over 99.9% accuracy
demonstrated for two simulations, our approach confirms the existence of
protein-specific "lipid fingerprints", i.e. spatial rearrangements of lipids in
response to proteins of interest. Through this demonstration, our model also
provides external validation of the continuum model, affirms the value of such
multiscale modeling, and can foster new insights through further analysis of
these fingerprints.
| [
{
"created": "Sat, 9 Jul 2022 20:07:49 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Georgouli",
"Konstantia",
""
],
[
"Ingólfsson",
"Helgi I",
""
],
[
"Aydin",
"Fikret",
""
],
[
"Heimann",
"Mark",
""
],
[
"Lightstone",
"Felice C",
""
],
[
"Bremer",
"Peer-Timo",
""
],
[
"Bhatia",
"Harsh",
""
]
] | Capturing intricate biological phenomena often requires multiscale modeling where coarse and inexpensive models are developed using limited components of expensive and high-fidelity models. Here, we consider such a multiscale framework in the context of cancer biology and address the challenge of evaluating the descriptive capabilities of a continuum model developed using 1-dimensional statistics from a molecular dynamics model. Using deep learning, we develop a highly predictive classification model that identifies complex and emergent behavior from the continuum model. With over 99.9% accuracy demonstrated for two simulations, our approach confirms the existence of protein-specific "lipid fingerprints", i.e. spatial rearrangements of lipids in response to proteins of interest. Through this demonstration, our model also provides external validation of the continuum model, affirms the value of such multiscale modeling, and can foster new insights through further analysis of these fingerprints. |
2005.08052 | Samaherni Dias | Samaherni M. Dias, Kurios I. P. de M. Queiroz, Allan de M. Martins | Controlling epidemic diseases based only on social distancing level | 11 pages, 3 figures | null | 10.1007/s40313-021-00745-6 | null | q-bio.PE cs.SY eess.SY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The World Health Organization (WHO) made the assessment that COVID-19 can be
characterized as a pandemic on March 11, 2020. For the COVID-19 outbreak, there
is no vaccination and no treatment. The only way to control the COVID-19
outbreak is sustained physical distancing. In this work, a simple control law
was proposed to keep the infected individuals during the COVID-19 outbreak
below the desired number. The proposed control law keeps the number of infected
individuals controlled by adjusting only the social distancing level. The
stability analysis of the proposed control law is done and the uncertainties in
the parameters are considered. A version of the proposed controller for daily
updates was developed. This is a very simple approach to the developed control
law and can be calculated in a spreadsheet. In the end, numerical simulations
were done to show the behavior of the number of infected individuals during an
epidemic disease when the proposed control law is used to adjust the social
distancing level.
| [
{
"created": "Sat, 16 May 2020 17:46:49 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Dias",
"Samaherni M.",
""
],
[
"Queiroz",
"Kurios I. P. de M.",
""
],
[
"Martins",
"Allan de M.",
""
]
] | The World Health Organization (WHO) made the assessment that COVID-19 can be characterized as a pandemic on March 11, 2020. For the COVID-19 outbreak, there is no vaccination and no treatment. The only way to control the COVID-19 outbreak is sustained physical distancing. In this work, a simple control law was proposed to keep the infected individuals during the COVID-19 outbreak below the desired number. The proposed control law keeps the number of infected individuals controlled by adjusting only the social distancing level. The stability analysis of the proposed control law is done and the uncertainties in the parameters are considered. A version of the proposed controller for daily updates was developed. This is a very simple approach to the developed control law and can be calculated in a spreadsheet. In the end, numerical simulations were done to show the behavior of the number of infected individuals during an epidemic disease when the proposed control law is used to adjust the social distancing level. |
0903.1979 | Claudius Gros | C. Gros, G. Kaczor | Semantic learning in autonomously active recurrent neural networks | Journal of Algorithms in Cognition, Informatics and Logic, special
issue on `Perspectives and Challenges for Recurrent Neural Networks', in
press | Logic Journal of the IGPL, Vol. 18, 686 (2010) | 10.1093/jigpal/jzp045 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brain is autonomously active, being characterized by a
self-sustained neural activity which would be present even in the absence of
external sensory stimuli. Here we study the interrelation between the
self-sustained activity in autonomously active recurrent neural nets and
external sensory stimuli.
There is no a priori semantical relation between the influx of external
stimuli and the patterns generated internally by the autonomous and ongoing
brain dynamics. The question then arises: when and how are semantic correlations
between internal and external dynamical processes learned and built up?
We study this problem within the paradigm of transient state dynamics for the
neural activity in recurrent neural nets, i.e. for an autonomous neural
activity characterized by an infinite time-series of transiently stable
attractor states. We propose that external stimuli will be relevant during the
sensitive periods, {\it viz} the transition period between one transient state
and the subsequent semi-stable attractor. A diffusive learning signal is
generated unsupervised whenever the stimulus influences the internal dynamics
qualitatively.
For testing we have presented to the model system stimuli corresponding to
the bars and stripes problem. We found that the system performs a non-linear
independent component analysis on its own, being continuously and autonomously
active. This emergent cognitive capability results here from a general
principle for the neural dynamics, the competition between neural ensembles.
| [
{
"created": "Wed, 11 Mar 2009 14:14:51 GMT",
"version": "v1"
}
] | 2017-11-27 | [
[
"Gros",
"C.",
""
],
[
"Kaczor",
"G.",
""
]
] | The human brain is autonomously active, being characterized by a self-sustained neural activity which would be present even in the absence of external sensory stimuli. Here we study the interrelation between the self-sustained activity in autonomously active recurrent neural nets and external sensory stimuli. There is no a priori semantical relation between the influx of external stimuli and the patterns generated internally by the autonomous and ongoing brain dynamics. The question then arises when and how are semantic correlations between internal and external dynamical processes learned and built up? We study this problem within the paradigm of transient state dynamics for the neural activity in recurrent neural nets, i.e. for an autonomous neural activity characterized by an infinite time-series of transiently stable attractor states. We propose that external stimuli will be relevant during the sensitive periods, {\it viz} the transition period between one transient state and the subsequent semi-stable attractor. A diffusive learning signal is generated unsupervised whenever the stimulus influences the internal dynamics qualitatively. For testing we have presented to the model system stimuli corresponding to the bars and stripes problem. We found that the system performs a non-linear independent component analysis on its own, being continuously and autonomously active. This emergent cognitive capability results here from a general principle for the neural dynamics, the competition between neural ensembles. |
2102.04219 | Larissa Albantakis | Larissa Albantakis and Giulio Tononi | What we are is more than what we do | 4 pages; German version of this article to appear as a contribution
to the anthology "Artificial Intelligence with Consciousness? Statements
2021" edited by the Karlsruhe Institute of Technology (KIT) | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | If we take the subjective character of consciousness seriously, consciousness
becomes a matter of "being" rather than "doing". Because "doing" can be
dissociated from "being", functional criteria alone are insufficient to decide
whether a system possesses the necessary requirements for being a physical
substrate of consciousness. The dissociation between "being" and "doing" is
most salient in artificial general intelligence, which may soon replicate any
human capacity: computers can perform complex functions (in the limit
resembling human behavior) in the absence of consciousness. Complex behavior
becomes meaningless if it is not performed by a conscious being.
| [
{
"created": "Thu, 21 Jan 2021 19:26:15 GMT",
"version": "v1"
}
] | 2021-02-10 | [
[
"Albantakis",
"Larissa",
""
],
[
"Tononi",
"Giulio",
""
]
] | If we take the subjective character of consciousness seriously, consciousness becomes a matter of "being" rather than "doing". Because "doing" can be dissociated from "being", functional criteria alone are insufficient to decide whether a system possesses the necessary requirements for being a physical substrate of consciousness. The dissociation between "being" and "doing" is most salient in artificial general intelligence, which may soon replicate any human capacity: computers can perform complex functions (in the limit resembling human behavior) in the absence of consciousness. Complex behavior becomes meaningless if it is not performed by a conscious being. |
1707.04582 | Gregory Barello | Takafumi Arakaki, G. Barello, Yashar Ahmadian | Capturing the diversity of biological tuning curves using generative
adversarial networks | null | null | null | null | q-bio.QM cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tuning curves characterizing the response selectivities of biological neurons
often exhibit large degrees of irregularity and diversity across neurons.
Theoretical network models that feature heterogeneous cell populations or
random connectivity also give rise to diverse tuning curves. However, a general
framework for fitting such models to experimentally measured tuning curves is
lacking. We address this problem by proposing to view mechanistic network
models as generative models whose parameters can be optimized to fit the
distribution of experimentally measured tuning curves. A major obstacle for
fitting such models is that their likelihood function is not explicitly
available or is highly intractable to compute. Recent advances in machine
learning provide ways for fitting generative models without the need to
evaluate the likelihood and its gradient. Generative Adversarial Networks (GAN)
provide one such framework which has been successful in traditional machine
learning tasks. We apply this approach in two separate experiments, showing how
GANs can be used to fit commonly used mechanistic models in theoretical
neuroscience to datasets of measured tuning curves. This fitting procedure
avoids the computationally expensive step of inferring latent variables, e.g.
the biophysical parameters of individual cells or the particular realization of
the full synaptic connectivity matrix, and directly learns model parameters
which characterize the statistics of connectivity or of single-cell properties.
Another strength of this approach is that it fits the entire, joint
distribution of experimental tuning curves, instead of matching a few summary
statistics picked a priori by the user. More generally, this framework opens
the door to fitting theoretically motivated dynamical network models directly
to simultaneously or non-simultaneously recorded neural responses.
| [
{
"created": "Fri, 14 Jul 2017 17:56:50 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jul 2017 18:44:38 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jul 2017 16:52:01 GMT",
"version": "v3"
}
] | 2017-07-20 | [
[
"Arakaki",
"Takafumi",
""
],
[
"Barello",
"G.",
""
],
[
"Ahmadian",
"Yashar",
""
]
] | Tuning curves characterizing the response selectivities of biological neurons often exhibit large degrees of irregularity and diversity across neurons. Theoretical network models that feature heterogeneous cell populations or random connectivity also give rise to diverse tuning curves. However, a general framework for fitting such models to experimentally measured tuning curves is lacking. We address this problem by proposing to view mechanistic network models as generative models whose parameters can be optimized to fit the distribution of experimentally measured tuning curves. A major obstacle for fitting such models is that their likelihood function is not explicitly available or is highly intractable to compute. Recent advances in machine learning provide ways for fitting generative models without the need to evaluate the likelihood and its gradient. Generative Adversarial Networks (GAN) provide one such framework which has been successful in traditional machine learning tasks. We apply this approach in two separate experiments, showing how GANs can be used to fit commonly used mechanistic models in theoretical neuroscience to datasets of measured tuning curves. This fitting procedure avoids the computationally expensive step of inferring latent variables, e.g. the biophysical parameters of individual cells or the particular realization of the full synaptic connectivity matrix, and directly learns model parameters which characterize the statistics of connectivity or of single-cell properties. Another strength of this approach is that it fits the entire, joint distribution of experimental tuning curves, instead of matching a few summary statistics picked a priori by the user. More generally, this framework opens the door to fitting theoretically motivated dynamical network models directly to simultaneously or non-simultaneously recorded neural responses. |
2303.00899 | William Fries | William Fries | What Motivated Mitigation Policies? A Network-Based Longitudinal
Analysis of State-Level Mitigation Strategies | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Understanding which factors informed pandemic response can help create a more
nuanced perspective on how each state of the United States handled the crisis.
To this end, we create various networks linking states together based on their
similarity in mitigation policies, politics, geographic proximity and COVID-19
case data. We use these networks to analyze the correlation between pandemic
policies and politics, location and case-load from January 2020 through March
2022. We show that the best predictor of a state's response is aggregate
political affiliation rather than solely governor affiliation, as others have
shown. Further, we illustrate that political affiliation is heavily correlated
with policy intensity from June 2020 through June 2021, but has little impact
on policy after June 2021. In contrast, geographic proximity and daily
incidence are not consistently correlated with states having similar
mitigation policies.
| [
{
"created": "Thu, 2 Mar 2023 01:46:48 GMT",
"version": "v1"
},
{
"created": "Wed, 10 May 2023 20:41:25 GMT",
"version": "v2"
}
] | 2023-05-12 | [
[
"Fries",
"William",
""
]
] | Understanding which factors informed pandemic response can help create a more nuanced perspective on how each state of the United States handled the crisis. To this end, we create various networks linking states together based on their similarity in mitigation policies, politics, geographic proximity and COVID-19 case data. We use these networks to analyze the correlation between pandemic policies and politics, location and case-load from January 2020 through March 2022. We show that the best predictor of a state's response is aggregate political affiliation rather than solely governor affiliation, as others have shown. Further, we illustrate that political affiliation is heavily correlated with policy intensity from June 2020 through June 2021, but has little impact on policy after June 2021. In contrast, geographic proximity and daily incidence are not consistently correlated with states having similar mitigation policies. |
1812.09362 | Logan Thrasher Collins | Logan Thrasher Collins | The case for emulating insect brains using anatomical "wiring diagrams"
equipped with biophysical models of neuronal activity | 25 pages, 2 figures. Biological Cybernetics | null | 10.1007/s00422-019-00810-z | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Developing whole-brain emulation (WBE) technology would provide immense
benefits across neuroscience, biomedicine, artificial intelligence, and
robotics. At this time, constructing a simulated human brain lacks feasibility
due to limited experimental data and limited computational resources. However,
I suggest that progress towards this goal might be accelerated by working
towards an intermediate objective, namely insect brain emulation (IBE). More
specifically, this would entail creating biologically realistic simulations of
entire insect nervous systems along with more approximate simulations of
non-neuronal insect physiology to make "virtual insects." I argue that this
could be realistically achievable within the next 20 years. I propose that
developing emulations of insect brains will galvanize the global community of
scientists, businesspeople, and policymakers towards pursuing the loftier goal
of emulating the human brain. By demonstrating that WBE is possible via IBE,
simulating mammalian brains and eventually the human brain may no longer be
viewed as too radically ambitious to deserve substantial funding and resources.
Furthermore, IBE will facilitate dramatic advances in cognitive neuroscience,
artificial intelligence, and robotics through studies performed using virtual
insects.
| [
{
"created": "Fri, 7 Dec 2018 06:00:34 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2019 18:15:04 GMT",
"version": "v2"
},
{
"created": "Sun, 27 Oct 2019 19:30:14 GMT",
"version": "v3"
}
] | 2019-11-07 | [
[
"Collins",
"Logan Thrasher",
""
]
] | Developing whole-brain emulation (WBE) technology would provide immense benefits across neuroscience, biomedicine, artificial intelligence, and robotics. At this time, constructing a simulated human brain lacks feasibility due to limited experimental data and limited computational resources. However, I suggest that progress towards this goal might be accelerated by working towards an intermediate objective, namely insect brain emulation (IBE). More specifically, this would entail creating biologically realistic simulations of entire insect nervous systems along with more approximate simulations of non-neuronal insect physiology to make "virtual insects." I argue that this could be realistically achievable within the next 20 years. I propose that developing emulations of insect brains will galvanize the global community of scientists, businesspeople, and policymakers towards pursuing the loftier goal of emulating the human brain. By demonstrating that WBE is possible via IBE, simulating mammalian brains and eventually the human brain may no longer be viewed as too radically ambitious to deserve substantial funding and resources. Furthermore, IBE will facilitate dramatic advances in cognitive neuroscience, artificial intelligence, and robotics through studies performed using virtual insects. |
1201.0913 | Na-Rae Kim | Na-Rae Kim and Chan-Byoung Chae | Novel Modulation Techniques using Isomers as Messenger Molecules for
Molecular Communication via Diffusion | 5 pages, 7 figures | null | null | null | q-bio.QM cs.CE cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose novel modulation techniques using isomers as
messenger molecules for nano communication via diffusion. To evaluate
achievable rate performance, we compare the proposed techniques with
concentration-based and molecular-type-based methods. Analytical and numerical
results confirm that the proposed modulation techniques achieve higher data
transmission rate performance than conventional insulin-based concepts.
| [
{
"created": "Tue, 3 Jan 2012 02:33:19 GMT",
"version": "v1"
}
] | 2012-01-05 | [
[
"Kim",
"Na-Rae",
""
],
[
"Chae",
"Chan-Byoung",
""
]
] | In this paper, we propose novel modulation techniques using isomers as messenger molecules for nano communication via diffusion. To evaluate achievable rate performance, we compare the proposed techniques with concentration-based and molecular-type-based methods. Analytical and numerical results confirm that the proposed modulation techniques achieve higher data transmission rate performance than conventional insulin based concepts. |
1301.1439 | Sarada Seetharaman | Sarada Seetharaman and Kavita Jain | Adaptive walks and distribution of beneficial fitness effects | Accepted in Evolution | Evolution 68, 965 (2014) | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the adaptation dynamics of a maladapted asexual population on rugged
fitness landscapes with many local fitness peaks. The distribution of
beneficial fitness effects is assumed to belong to one of the three extreme
value domains, viz. Weibull, Gumbel and Fr{\'e}chet. We work in the strong
selection-weak mutation regime in which beneficial mutations fix sequentially,
and the population performs an uphill walk on the fitness landscape until a
local fitness peak is reached. A striking prediction of our analysis is that
the fitness difference between successive steps follows a pattern of
diminishing returns in the Weibull domain and accelerating returns in the
Fr{\'e}chet domain, as the initial fitness of the population is increased.
These trends are found to be robust with respect to fitness correlations. We
believe that this result can be exploited in experiments to determine the
extreme value domain of the distribution of beneficial fitness effects. Our
work here differs significantly from the previous ones that assume the
selection coefficient to be small. On taking large effect mutations into
account, we find that the length of the walk shows different qualitative trends
from those derived using the small selection coefficient approximation.
| [
{
"created": "Tue, 8 Jan 2013 08:14:26 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Oct 2013 06:04:54 GMT",
"version": "v2"
}
] | 2016-01-13 | [
[
"Seetharaman",
"Sarada",
""
],
[
"Jain",
"Kavita",
""
]
] ] | We study the adaptation dynamics of a maladapted asexual population on rugged fitness landscapes with many local fitness peaks. The distribution of beneficial fitness effects is assumed to belong to one of the three extreme value domains, viz. Weibull, Gumbel and Fr{\'e}chet. We work in the strong selection-weak mutation regime in which beneficial mutations fix sequentially, and the population performs an uphill walk on the fitness landscape until a local fitness peak is reached. A striking prediction of our analysis is that the fitness difference between successive steps follows a pattern of diminishing returns in the Weibull domain and accelerating returns in the Fr{\'e}chet domain, as the initial fitness of the population is increased. These trends are found to be robust with respect to fitness correlations. We believe that this result can be exploited in experiments to determine the extreme value domain of the distribution of beneficial fitness effects. Our work here differs significantly from the previous ones that assume the selection coefficient to be small. On taking large effect mutations into account, we find that the length of the walk shows different qualitative trends from those derived using the small selection coefficient approximation. |
2101.04647 | Nadia Loy | Nadia Loy and Thomas Hillen and Kevin John Painter | Direction-Dependent Turning Leads to Anisotropic Diffusion and
Persistence | null | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells and organisms follow aligned structures in their environment, a process
that can generate persistent migration paths. Kinetic transport equations are a
popular modelling tool for describing biological movements at the mesoscopic
level, yet their formulations usually assume a constant turning rate. Here we
relax this simplification, extending to include a turning rate that varies
according to the anisotropy of a heterogeneous environment. We extend known
methods of parabolic and hyperbolic scaling and apply the results to cell
movement on micro-patterned domains. We show that inclusion of orientation
dependence in the turning rate can lead to persistence of motion in an
otherwise fully symmetric environment, and generate enhanced diffusion in
structured domains.
| [
{
"created": "Tue, 12 Jan 2021 18:11:14 GMT",
"version": "v1"
}
] | 2021-01-13 | [
[
"Loy",
"Nadia",
""
],
[
"Hillen",
"Thomas",
""
],
[
"Painter",
"Kevin John",
""
]
] | Cells and organisms follow aligned structures in their environment, a process that can generate persistent migration paths. Kinetic transport equations are a popular modelling tool for describing biological movements at the mesoscopic level, yet their formulations usually assume a constant turning rate. Here we relax this simplification, extending to include a turning rate that varies according to the anisotropy of a heterogeneous environment. We extend known methods of parabolic and hyperbolic scaling and apply the results to cell movement on micro-patterned domains. We show that inclusion of orientation dependence in the turning rate can lead to persistence of motion in an otherwise fully symmetric environment, and generate enhanced diffusion in structured domains. |
2009.04854 | Jingwen Zhang | Jingwen Zhang, Defu Yang, Wei He, Guorong Wu, Minghan Chen | A Network-Guided Reaction-Diffusion Model of AT[N] Biomarkers in
Alzheimer's Disease | 11 pages, 8 figures | null | null | null | q-bio.NC q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, many studies of Alzheimer's disease (AD) are investigating the
neurobiological factors behind the acquisition of beta-amyloid (A), pathologic
tau (T), and neurodegeneration ([N]) biomarkers from neuroimages. However, a
system-level mechanism of how these neuropathological burdens promote
neurodegeneration and why AD exhibits characteristic progression is largely
elusive. In this study, we combined the power of systems biology and network
neuroscience to understand the dynamic interaction and diffusion process of
AT[N] biomarkers from an unprecedented amount of longitudinal Amyloid PET scan,
MRI imaging, and DTI data. Specifically, we developed a network-guided
biochemical model to jointly (1) model the interaction of AT[N] biomarkers at
each brain region and (2) characterize their propagation pattern across the
fiber pathways in the structural brain network, where the brain resilience is
also considered as a moderator of cognitive decline. Our biochemical model
offers greater mathematical insight into the physiopathological
mechanism of AD progression by studying the system dynamics and stability.
Thus, an in-depth system-level analysis allows us to gain a new understanding
of how AT[N] biomarkers spread throughout the brain, capture the early sign of
cognitive decline, and predict the AD progression from the preclinical stage.
| [
{
"created": "Thu, 10 Sep 2020 13:39:56 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Sep 2020 13:34:03 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Oct 2020 06:56:01 GMT",
"version": "v3"
}
] | 2020-10-05 | [
[
"Zhang",
"Jingwen",
""
],
[
"Yang",
"Defu",
""
],
[
"He",
"Wei",
""
],
[
"Wu",
"Guorong",
""
],
[
"Chen",
"Minghan",
""
]
] ] | Currently, many studies of Alzheimer's disease (AD) are investigating the neurobiological factors behind the acquisition of beta-amyloid (A), pathologic tau (T), and neurodegeneration ([N]) biomarkers from neuroimages. However, a system-level mechanism of how these neuropathological burdens promote neurodegeneration and why AD exhibits characteristic progression is largely elusive. In this study, we combined the power of systems biology and network neuroscience to understand the dynamic interaction and diffusion process of AT[N] biomarkers from an unprecedented amount of longitudinal Amyloid PET scan, MRI imaging, and DTI data. Specifically, we developed a network-guided biochemical model to jointly (1) model the interaction of AT[N] biomarkers at each brain region and (2) characterize their propagation pattern across the fiber pathways in the structural brain network, where the brain resilience is also considered as a moderator of cognitive decline. Our biochemical model offers greater mathematical insight into the physiopathological mechanism of AD progression by studying the system dynamics and stability. Thus, an in-depth system-level analysis allows us to gain a new understanding of how AT[N] biomarkers spread throughout the brain, capture the early sign of cognitive decline, and predict the AD progression from the preclinical stage. |
2003.14078 | Marco Hartl | C. Corbella, M. Hartl, M. Fernandez-gatell, J. Puigagut (Group of
environmental engineering and microbiology (GEMMA), Universitat Politecnica
de Catalunya - BarcelonaTech) | MFC-based biosensor for domestic wastewater COD assessment in
constructed wetlands | 46 pages, 14 Figures | 660, 2019, 218-226 | 10.1016/j.scitotenv.2018.12.347 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the context of natural-based wastewater treatment technologies (such as
constructed wetlands - CW) the use of a low-cost, continuous-like biosensor
tool for the assessment of operational conditions is of key importance for
plant management optimization. The objective of the present study was to assess
the potential use of constructed wetland microbial fuel cells (CW-MFC) as a
domestic wastewater COD assessment tool. For the purpose of this work four
lab-scale CW-MFCs were set up and fed with pre-settled domestic wastewater at
different COD concentrations. Under laboratory conditions two different anodic
materials were tested (graphite rods and gravel). Furthermore, a pilot-plant
based experiment was also conducted to confirm the findings previously recorded
for lab-scale experiments. Results showed that in spite of the low coulombic
efficiencies recorded, either gravel or graphite-based anodes were suitable for
the purposes of domestic wastewater COD assessment. Significant linear
relationships could be established between inlet COD concentrations and CW-MFC
Ecell whenever contact time was above 10 hours. Results also showed that the
accuracy of the CW-MFC was greatly compromised after several weeks of
operation. Pilot experiments showed that CW-MFC presents a good bio-indication
response between week 3 and 7 of operation (equivalent to an accumulated
organic loading between 100 and 200 g COD/m2, respectively). The main conclusion
of this work is that the CW-MFC could be used as an "alarm-tool" for qualitative
continuous influent water quality assessment rather than a precise COD
assessment tool due to a loss of precision after several weeks of operation.
| [
{
"created": "Tue, 31 Mar 2020 10:30:42 GMT",
"version": "v1"
}
] | 2020-04-01 | [
[
"Corbella",
"C.",
"",
"Group of\n environmental engineering and microbiology"
],
[
"Hartl",
"M.",
"",
"Group of\n environmental engineering and microbiology"
],
[
"Fernandez-gatell",
"M.",
"",
"Group of\n environmental engineering and microbiology"
],
[
"Puigagut",
"J.",
"",
"Group of\n environmental engineering and microbiology"
]
] ] | In the context of natural-based wastewater treatment technologies (such as constructed wetlands - CW) the use of a low-cost, continuous-like biosensor tool for the assessment of operational conditions is of key importance for plant management optimization. The objective of the present study was to assess the potential use of constructed wetland microbial fuel cells (CW-MFC) as a domestic wastewater COD assessment tool. For the purpose of this work four lab-scale CW-MFCs were set up and fed with pre-settled domestic wastewater at different COD concentrations. Under laboratory conditions two different anodic materials were tested (graphite rods and gravel). Furthermore, a pilot-plant based experiment was also conducted to confirm the findings previously recorded for lab-scale experiments. Results showed that in spite of the low coulombic efficiencies recorded, either gravel or graphite-based anodes were suitable for the purposes of domestic wastewater COD assessment. Significant linear relationships could be established between inlet COD concentrations and CW-MFC Ecell whenever contact time was above 10 hours. Results also showed that the accuracy of the CW-MFC was greatly compromised after several weeks of operation. Pilot experiments showed that CW-MFC presents a good bio-indication response between week 3 and 7 of operation (equivalent to an accumulated organic loading between 100 and 200 g COD/m2, respectively). The main conclusion of this work is that the CW-MFC could be used as an "alarm-tool" for qualitative continuous influent water quality assessment rather than a precise COD assessment tool due to a loss of precision after several weeks of operation. |
2201.05464 | Zafeirios Fountas PhD | Zafeirios Fountas, Alexey Zakharov | Bayesian sense of time in biological and artificial brains | 27 pages, 3 figures, to appear in: Time and Science (ed. R. Lestienne
& P. Harris) | null | null | null | q-bio.NC cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enquiries concerning the underlying mechanisms and the emergent properties of
a biological brain have a long history of theoretical postulates and
experimental findings. Today, the scientific community tends to converge to a
single interpretation of the brain's cognitive underpinnings -- that it is a
Bayesian inference machine. This contemporary view has naturally been a strong
driving force in recent developments around computational and cognitive
neurosciences. Of particular interest is the brain's ability to process the
passage of time -- one of the fundamental dimensions of our experience. How can
we explain empirical data on human time perception using the Bayesian brain
hypothesis? Can we replicate human estimation biases using Bayesian models?
What insights can the agent-based machine learning models provide for the study
of this subject? In this chapter, we review some of the recent advancements in
the field of time perception and discuss the role of Bayesian processing in the
construction of temporal models.
| [
{
"created": "Fri, 14 Jan 2022 14:05:30 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Fountas",
"Zafeirios",
""
],
[
"Zakharov",
"Alexey",
""
]
] | Enquiries concerning the underlying mechanisms and the emergent properties of a biological brain have a long history of theoretical postulates and experimental findings. Today, the scientific community tends to converge to a single interpretation of the brain's cognitive underpinnings -- that it is a Bayesian inference machine. This contemporary view has naturally been a strong driving force in recent developments around computational and cognitive neurosciences. Of particular interest is the brain's ability to process the passage of time -- one of the fundamental dimensions of our experience. How can we explain empirical data on human time perception using the Bayesian brain hypothesis? Can we replicate human estimation biases using Bayesian models? What insights can the agent-based machine learning models provide for the study of this subject? In this chapter, we review some of the recent advancements in the field of time perception and discuss the role of Bayesian processing in the construction of temporal models. |
1405.1668 | Claus Metzner | Christoph Mark, Claus Metzner and Ben Fabry | Bayesian inference of time varying parameters in autoregressive
processes | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the autoregressive process of first order AR(1), a homogeneous correlated
time series $u_t$ is recursively constructed as $u_t = q\; u_{t-1} + \sigma
\;\epsilon_t$, using random Gaussian deviates $\epsilon_t$ and fixed values for
the correlation coefficient $q$ and for the noise amplitude $\sigma$. To model
temporally heterogeneous time series, the coefficients $q_t$ and $\sigma_t$ can
be regarded as time-dependend variables by themselves, leading to the
time-varying autoregressive processes TVAR(1). We assume here that the time
series $u_t$ is known and attempt to infer the temporal evolution of the
'superstatistical' parameters $q_t$ and $\sigma_t$. We present a sequential
Bayesian method of inference, which is conceptually related to the Hidden
Markov model, but takes into account the direct statistical dependence of
successively measured variables $u_t$. The method requires almost no prior
knowledge about the temporal dynamics of $q_t$ and $\sigma_t$ and can handle
gradual and abrupt changes of these superparameters simultaneously. We compare
our method with a Maximum Likelihood estimate based on a sliding window and
show that it is superior for a wide range of window sizes.
| [
{
"created": "Wed, 7 May 2014 16:52:58 GMT",
"version": "v1"
},
{
"created": "Mon, 12 May 2014 12:13:29 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Oct 2014 16:24:42 GMT",
"version": "v3"
}
] | 2014-10-10 | [
[
"Mark",
"Christoph",
""
],
[
"Metzner",
"Claus",
""
],
[
"Fabry",
"Ben",
""
]
] ] | In the autoregressive process of first order AR(1), a homogeneous correlated time series $u_t$ is recursively constructed as $u_t = q\; u_{t-1} + \sigma \;\epsilon_t$, using random Gaussian deviates $\epsilon_t$ and fixed values for the correlation coefficient $q$ and for the noise amplitude $\sigma$. To model temporally heterogeneous time series, the coefficients $q_t$ and $\sigma_t$ can be regarded as time-dependent variables by themselves, leading to the time-varying autoregressive processes TVAR(1). We assume here that the time series $u_t$ is known and attempt to infer the temporal evolution of the 'superstatistical' parameters $q_t$ and $\sigma_t$. We present a sequential Bayesian method of inference, which is conceptually related to the Hidden Markov model, but takes into account the direct statistical dependence of successively measured variables $u_t$. The method requires almost no prior knowledge about the temporal dynamics of $q_t$ and $\sigma_t$ and can handle gradual and abrupt changes of these superparameters simultaneously. We compare our method with a Maximum Likelihood estimate based on a sliding window and show that it is superior for a wide range of window sizes. |
2110.13690 | Jasmin Stein | Jasmin Stein, Katharina von Kriegstein, Alejandro Tabas | Frequency and frequency modulation share the same predictive encoding
mechanisms in human auditory cortex | null | null | 10.1093/texcom/tgac047 | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Expectations can substantially influence perception. Predictive coding is a
theory of sensory processing that aims to explain the neural mechanisms
underlying the effect of expectations in sensory processing. Its main
assumption is that sensory neurons encode prediction error with respect to
expected sensory input. Neural populations encoding prediction error have been
previously reported in the human auditory cortex (AC); however, most studies
focused on the encoding of pure tones and induced expectations by stimulus
repetition, potentially confounding prediction error with effects of neural
habituation. Here, we systematically studied prediction error to pure tones and
fast frequency modulated (FM) sweeps across different auditory cortical fields
in humans. We conducted two fMRI experiments, each using one type of stimulus.
We measured BOLD responses across the bilateral auditory cortical fields Te1.0,
Te1.1, Te1.2, and Te3 while participants listened to sequences of sounds. We
induced subjective expectations on the incoming sounds independently of
stimulus repetition using abstract rules. Our results indicate that pure tones
and FM-sweeps are encoded as prediction error with respect to the participants'
expectations across auditory cortical fields. The topographical distribution of
neural populations encoding prediction error to pure tones and FM-sweeps was
highly correlated in left Te1.1 and Te1.2, and in bilateral Te3, suggesting
that predictive coding is the general encoding mechanism in AC.
| [
{
"created": "Tue, 26 Oct 2021 13:28:12 GMT",
"version": "v1"
}
] | 2022-11-24 | [
[
"Stein",
"Jasmin",
""
],
[
"von Kriegstein",
"Katharina",
""
],
[
"Tabas",
"Alejandro",
""
]
] | Expectations can substantially influence perception. Predictive coding is a theory of sensory processing that aims to explain the neural mechanisms underlying the effect of expectations in sensory processing. Its main assumption is that sensory neurons encode prediction error with respect to expected sensory input. Neural populations encoding prediction error have been previously reported in the human auditory cortex (AC); however, most studies focused on the encoding of pure tones and induced expectations by stimulus repetition, potentially confounding prediction error with effects of neural habituation. Here, we systematically studied prediction error to pure tones and fast frequency modulated (FM) sweeps across different auditory cortical fields in humans. We conducted two fMRI experiments, each using one type of stimulus. We measured BOLD responses across the bilateral auditory cortical fields Te1.0, Te1.1, Te1.2, and Te3 while participants listened to sequences of sounds. We induced subjective expectations on the incoming sounds independently of stimulus repetition using abstract rules. Our results indicate that pure tones and FM-sweeps are encoded as prediction error with respect to the participants' expectations across auditory cortical fields. The topographical distribution of neural populations encoding prediction error to pure tones and FM-sweeps was highly correlated in left Te1.1 and Te1.2, and in bilateral Te3, suggesting that predictive coding is the general encoding mechanism in AC. |
q-bio/0611091 | Georgy Karev | Faina S Berezovskaya, Artem S Novozhilov, Georgy P Karev | Population models with singular equilibrium | 34 pages, 11 figures; submitted to Mathematical Bioscience | null | null | null | q-bio.QM q-bio.PE | null | A class of models of biological populations and communities with a singular
equilibrium at the origin is analyzed; it is shown that these models can
possess a dynamical regime of deterministic extinction, which is crucially
important from the biological standpoint. This regime corresponds to the
presence of a family of homoclinics to the origin, so-called elliptic sector.
The complete analysis of possible topological structures in a neighborhood of
the origin, as well as asymptotics of orbits tending to this point, is given.
An algorithmic approach to analyze system behavior with parameter changes is
presented. The developed methods and algorithm are applied to existing
mathematical models of biological systems. In particular, we analyze a model of
anticancer treatment with oncolytic viruses, a parasite-host interaction model,
and a model of Chagas' disease.
| [
{
"created": "Tue, 28 Nov 2006 17:41:30 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Berezovskaya",
"Faina S",
""
],
[
"Novozhilov",
"Artem S",
""
],
[
"Karev",
"Georgy P",
""
]
] ] | A class of models of biological populations and communities with a singular equilibrium at the origin is analyzed; it is shown that these models can possess a dynamical regime of deterministic extinction, which is crucially important from the biological standpoint. This regime corresponds to the presence of a family of homoclinics to the origin, so-called elliptic sector. The complete analysis of possible topological structures in a neighborhood of the origin, as well as asymptotics of orbits tending to this point, is given. An algorithmic approach to analyze system behavior with parameter changes is presented. The developed methods and algorithm are applied to existing mathematical models of biological systems. In particular, we analyze a model of anticancer treatment with oncolytic viruses, a parasite-host interaction model, and a model of Chagas' disease. |
1702.04999 | Konstantin Blyuss | G.O. Agaba, Y.N. Kyrychko, K.B. Blyuss | Mathematical model for the impact of awareness on the dynamics of
infectious diseases | 18 pages, 7 figures | Math. Biosci. 286, 22-30 (2017) | 10.1016/j.mbs.2017.01.009 | null | q-bio.PE nlin.CD q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyses an SIRS-type model for infectious diseases with account
for behavioural changes associated with the simultaneous spread of awareness in
the population. Two types of awareness are included into the model: private
awareness associated with direct contacts between unaware and aware
populations, and public information campaign. Stability analysis of different
steady states in the model provides information about potential spread of
disease in a population, as well as about how the disease dynamics are affected
by the two types of awareness. Numerical simulations are performed to
illustrate the behaviour of the system in different dynamical regimes.
| [
{
"created": "Thu, 16 Feb 2017 14:56:20 GMT",
"version": "v1"
}
] | 2017-02-17 | [
[
"Agaba",
"G. O.",
""
],
[
"Kyrychko",
"Y. N.",
""
],
[
"Blyuss",
"K. B.",
""
]
] ] | This paper analyses an SIRS-type model for infectious diseases with account for behavioural changes associated with the simultaneous spread of awareness in the population. Two types of awareness are included into the model: private awareness associated with direct contacts between unaware and aware populations, and public information campaign. Stability analysis of different steady states in the model provides information about potential spread of disease in a population, as well as about how the disease dynamics are affected by the two types of awareness. Numerical simulations are performed to illustrate the behaviour of the system in different dynamical regimes. |
2403.15500 | Haoyue Dai | Haoyue Dai, Ignavier Ng, Gongxu Luo, Peter Spirtes, Petar Stojanov,
Kun Zhang | Gene Regulatory Network Inference in the Presence of Dropouts: a Causal
View | Appears at ICLR 2024 (oral) | null | null | null | q-bio.QM cs.LG q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene regulatory network inference (GRNI) is a challenging problem,
particularly owing to the presence of zeros in single-cell RNA sequencing data:
some are biological zeros representing no gene expression, while some others
are technical zeros arising from the sequencing procedure (aka dropouts), which
may bias GRNI by distorting the joint distribution of the measured gene
expressions. Existing approaches typically handle dropout error via imputation,
which may introduce spurious relations as the true joint distribution is
generally unidentifiable. To tackle this issue, we introduce a causal graphical
model to characterize the dropout mechanism, namely, Causal Dropout Model. We
provide a simple yet effective theoretical result: interestingly, the
conditional independence (CI) relations in the data with dropouts, after
deleting the samples with zero values (regardless if technical or not) for the
conditioned variables, are asymptotically identical to the CI relations in the
original data without dropouts. This particular test-wise deletion procedure,
in which we perform CI tests on the samples without zeros for the conditioned
variables, can be seamlessly integrated with existing structure learning
approaches including constraint-based and greedy score-based methods, thus
giving rise to a principled framework for GRNI in the presence of dropouts. We
further show that the causal dropout model can be validated from data, and many
existing statistical models to handle dropouts fit into our model as specific
parametric instances. Empirical evaluation on synthetic, curated, and
real-world experimental transcriptomic data comprehensively demonstrates the
efficacy of our method.
| [
{
"created": "Thu, 21 Mar 2024 21:27:43 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Dai",
"Haoyue",
""
],
[
"Ng",
"Ignavier",
""
],
[
"Luo",
"Gongxu",
""
],
[
"Spirtes",
"Peter",
""
],
[
"Stojanov",
"Petar",
""
],
[
"Zhang",
"Kun",
""
]
] ] | Gene regulatory network inference (GRNI) is a challenging problem, particularly owing to the presence of zeros in single-cell RNA sequencing data: some are biological zeros representing no gene expression, while some others are technical zeros arising from the sequencing procedure (aka dropouts), which may bias GRNI by distorting the joint distribution of the measured gene expressions. Existing approaches typically handle dropout error via imputation, which may introduce spurious relations as the true joint distribution is generally unidentifiable. To tackle this issue, we introduce a causal graphical model to characterize the dropout mechanism, namely, Causal Dropout Model. We provide a simple yet effective theoretical result: interestingly, the conditional independence (CI) relations in the data with dropouts, after deleting the samples with zero values (regardless if technical or not) for the conditioned variables, are asymptotically identical to the CI relations in the original data without dropouts. This particular test-wise deletion procedure, in which we perform CI tests on the samples without zeros for the conditioned variables, can be seamlessly integrated with existing structure learning approaches including constraint-based and greedy score-based methods, thus giving rise to a principled framework for GRNI in the presence of dropouts. We further show that the causal dropout model can be validated from data, and many existing statistical models to handle dropouts fit into our model as specific parametric instances. Empirical evaluation on synthetic, curated, and real-world experimental transcriptomic data comprehensively demonstrates the efficacy of our method. |
1508.06523 | Ulrike Schl\"agel | Ulrike E. Schl\"agel and Mark A. Lewis | Robustness of movement models: can models bridge the gap between
temporal scales of data sets and behavioural processes? | 38 pages, 10 figures, submitted to Journal of Mathematical Biology | null | null | null | q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discrete-time random walks and their extensions are common tools for
analyzing animal movement data. In these analyses, resolution of temporal
discretization is a critical feature. Ideally, a model both mirrors the
relevant temporal scale of the biological process of interest and matches the
data sampling rate. Challenges arise when resolution of data is too coarse due
to technological constraints, or when we wish to extrapolate results or compare
results obtained from data with different resolutions. Drawing loosely on the
concept of robustness in statistics, we propose a rigorous mathematical
framework for studying movement models' robustness against changes in temporal
resolution. In this framework, we define varying levels of robustness as formal
model properties, focusing on random walk models with a spatially explicit
component. With the new framework, we can investigate whether models can
validly be applied to data across varying temporal resolutions and how we can
account for these different resolutions in statistical inference results. We
apply the new framework to movement-based resource selection models,
demonstrating both analytical and numerical calculations, as well as a Monte
Carlo simulation approach. While exact robustness is rare, the concept of
approximate robustness provides a promising new direction for analyzing
movement models.
| [
{
"created": "Wed, 26 Aug 2015 15:03:20 GMT",
"version": "v1"
}
] | 2015-08-27 | [
[
"Schlägel",
"Ulrike E.",
""
],
[
"Lewis",
"Mark A.",
""
]
] ] | Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when resolution of data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models. |
1308.2889 | Daniel S Calovi | Daniel S. Calovi, Ugo Lopez, Sandrine Ngo, Cl\'ement Sire, Hugues
Chat\'e, Guy Theraulaz | Swarming, Schooling, Milling: Phase diagram of a data-driven fish school
model | null | New J. Phys. 16 015026 (2014) | 10.1088/1367-2630/16/1/015026 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We determine the basic phase diagram of the fish school model derived from
data by Gautrais et al. (PLoS Comp. Biol. 8, e1002678 (2012)), exploring its
parameter space beyond the parameter values determined experimentally on groups
of barred flagtails (Kuhlia mugil) swimming in a shallow tank. A modified
model is studied alongside the original one, in which an additional frontal
preference is introduced in the stimulus/response function to account for the
angular weighting of interactions. Our study, mostly limited to groups of
moderate size (in the order of 100 individuals), focused not only on the
transition to schooling induced by increasing the swimming speed, but also on
the conditions under which a school can exhibit milling dynamics and the
corresponding behavioral transitions. We show the existence of a transition
region between milling and schooling, in which the school exhibits
multistability and intermittency between schooling and milling for the same
combination of individual parameters. We also show that milling does not occur
for arbitrarily large groups, mainly due to the distance-dependent interaction
of the model and information propagation delays in the school, which cause
conflicting reactions for large groups. We finally discuss the biological
significance of our findings, especially the dependence of behavioural
transitions on social interactions, which were reported by Gautrais et al. to be
adaptive in the experimental conditions.
| [
{
"created": "Mon, 12 Aug 2013 15:27:28 GMT",
"version": "v1"
}
] | 2015-02-03 | [
[
"Calovi",
"Daniel S.",
""
],
[
"Lopez",
"Ugo",
""
],
[
"Ngo",
"Sandrine",
""
],
[
"Sire",
"Clément",
""
],
[
"Chaté",
"Hugues",
""
],
[
"Theraulaz",
"Guy",
""
]
] | We determine the basic phase diagram of the fish school model derived from data by Gautrais et al. (PLoS Comp. Biol. 8, e1002678 (2012)), exploring its parameter space beyond the parameter values determined experimentally on groups of barred flagtails (Kuhlia mugil) swimming in a shallow tank. A modified model is studied alongside the original one, in which an additional frontal preference is introduced in the stimulus/response function to account for the angular weighting of interactions. Our study, mostly limited to groups of moderate size (in the order of 100 individuals), focused not only on the transition to schooling induced by increasing the swimming speed, but also on the conditions under which a school can exhibit milling dynamics and the corresponding behavioral transitions. We show the existence of a transition region between milling and schooling, in which the school exhibits multistability and intermittency between schooling and milling for the same combination of individual parameters. We also show that milling does not occur for arbitrarily large groups, mainly due to the distance-dependent interaction of the model and information propagation delays in the school, which cause conflicting reactions for large groups. We finally discuss the biological significance of our findings, especially the dependence of behavioural transitions on social interactions, which were reported by Gautrais et al. to be adaptive in the experimental conditions. |