arXiv metadata records, one per block below. Columns: id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed. (The orig_abstract and abstract columns are identical in every record here, so each record lists the abstract once.)
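A minimal sketch of how one of these records might be handled in Python, assuming each record is available as a dict with the fields above (the record shown is abbreviated from the first entry below; field names follow the column list):

```python
# One record with the columns listed above (abstract and nulls omitted for brevity).
record = {
    "id": "1210.0362",
    "submitter": "Leo van Iersel",
    "authors": "Leo van Iersel and Vincent Moulton",
    "title": "Trinets encode tree-child and level-2 phylogenetic networks",
    "categories": "q-bio.PE math.CO",
    "versions": [{"created": "Mon, 1 Oct 2012 12:14:50 GMT", "version": "v1"}],
    "update_date": "2012-10-02",
    "authors_parsed": [["van Iersel", "Leo", ""], ["Moulton", "Vincent", ""]],
}

# `categories` is a single space-separated string, not a list.
cats = record["categories"].split()

# `authors_parsed` entries are [last, first, suffix] triples.
names = [f"{first} {last}".strip() for last, first, _ in record["authors_parsed"]]

# `versions` is ordered oldest-first, so the last entry is the current version.
latest = record["versions"][-1]["version"]
```

Here `cats` becomes `["q-bio.PE", "math.CO"]` and `latest` is `"v1"`; null-valued columns (comments, journal-ref, doi, report-no) would simply carry `None`.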
id: 1210.0362
submitter: Leo van Iersel
authors: Leo van Iersel and Vincent Moulton
title: Trinets encode tree-child and level-2 phylogenetic networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.CO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Phylogenetic networks generalize evolutionary trees, and are commonly used to represent evolutionary histories of species that undergo reticulate evolutionary processes such as hybridization, recombination and lateral gene transfer. Recently, there has been great interest in trying to develop methods to construct rooted phylogenetic networks from triplets, that is rooted trees on three species. However, although triplets determine or encode rooted phylogenetic trees, they do not in general encode rooted phylogenetic networks, which is a potential issue for any such method. Motivated by this fact, Huber and Moulton recently introduced trinets as a natural extension of rooted triplets to networks. In particular, they showed that level-1 phylogenetic networks are encoded by their trinets, and also conjectured that all "recoverable" rooted phylogenetic networks are encoded by their trinets. Here we prove that recoverable binary level-2 networks and binary tree-child networks are also encoded by their trinets. To do this we prove two decomposition theorems based on trinets which hold for all recoverable binary rooted phylogenetic networks. Our results provide some additional evidence in support of the conjecture that trinets encode all recoverable rooted phylogenetic networks, and could also lead to new approaches to construct phylogenetic networks from trinets.
versions: [{"created": "Mon, 1 Oct 2012 12:14:50 GMT", "version": "v1"}]
update_date: 2012-10-02
authors_parsed: [["van Iersel", "Leo", ""], ["Moulton", "Vincent", ""]]

id: 1111.2998
submitter: David Lukatsky
authors: Itamar Sela and David B. Lukatsky
title: DNA sequence correlations shape nonspecific transcription factor-DNA binding affinity
comments: null
journal-ref: Biophys. J. 101(1), 160-166 (2011)
doi: 10.1016/j.bpj.2011.04.037
report-no: null
categories: q-bio.BM q-bio.GN q-bio.MN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Transcription factors (TFs) are regulatory proteins that bind DNA in promoter regions of the genome and either promote or repress gene expression. Here we predict analytically that enhanced homo-oligonucleotide sequence correlations, such as poly(dA:dT) and poly(dC:dG) tracts, statistically enhance non-specific TF-DNA binding affinity. This prediction is generic and qualitatively independent of microscopic parameters of the model. We show that non-specific TF binding affinity is universally controlled by the strength and symmetry of DNA sequence correlations. We perform correlation analysis of the yeast genome and show that DNA regions highly occupied by TFs exhibit stronger homo-oligonucleotide sequence correlations, and thus higher propensity for non-specific binding, as compared with poorly occupied regions. We suggest that this effect plays the role of an effective localization potential enhancing the quasi-one-dimensional diffusion of TFs in the vicinity of DNA, speeding up the stochastic search process for specific TF binding sites. The predicted effect also imposes an upper bound on the size of TF-DNA binding motifs.
versions: [{"created": "Sun, 13 Nov 2011 09:13:51 GMT", "version": "v1"}]
update_date: 2011-11-15
authors_parsed: [["Sela", "Itamar", ""], ["Lukatsky", "David B.", ""]]

id: 1309.0833
submitter: Ron Nielsen
authors: Ron W Nielsen aka Jan Nurzynski
title: Growth of human population in Australia, 1000-10,000 years BP
comments: 8 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Close analysis of the published interpretation of the number of rock-shelter sites in Australia provides further evidence that there was no intensification in the growth of human population between 1000 and 10,000 years BP. An alternative way of determining the time-dependent distribution of the size of human population between 1000 and 10,000 years BP is discussed.
versions: [{"created": "Tue, 3 Sep 2013 20:25:55 GMT", "version": "v1"}, {"created": "Mon, 21 Oct 2013 02:06:13 GMT", "version": "v2"}]
update_date: 2013-10-22
authors_parsed: [["Nurzynski", "Ron W Nielsen aka Jan", ""]]

id: 1007.4124
submitter: Tsvi Tlusty
authors: Tsvi Tlusty
title: A simple model for the evolution of molecular codes driven by the interplay of accuracy, diversity and cost
comments: Keywords: molecular codes, rate-distortion theory, biological information channels, stochastic maps, genetic code, genetic networks
journal-ref: Tsvi Tlusty 2008 Phys. Biol. 5 016001
doi: 10.1088/1478-3975/5/1/016001
report-no: null
categories: q-bio.QM cs.IT math.IT physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Molecular codes translate information written in one type of molecules into another molecular language. We introduce a simple model that treats molecular codes as noisy information channels. An optimal code is a channel that conveys information accurately and efficiently while keeping down the impact of errors. The equipoise of the three conflicting needs, for minimal error-load, minimal cost of resources and maximal diversity of vocabulary, defines the fitness of the code. The model suggests a mechanism for the emergence of a code when evolution varies the parameters that control this equipoise and the mapping between the two molecular languages becomes non-random. This mechanism is demonstrated by a simple toy model that is formally equivalent to a mean-field Ising magnet.
versions: [{"created": "Fri, 23 Jul 2010 13:20:35 GMT", "version": "v1"}]
update_date: 2010-07-26
authors_parsed: [["Tlusty", "Tsvi", ""]]

id: 1205.3739
submitter: Ivan Santamaria-Holek
authors: J. Lopez Alamilla, I. Santamaria-Holek
title: Reconstructing the free-energy landscape associated to molecular motors processivity
comments: To appear in Biophysical Chemistry, 12 pgs, 8 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.SC physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We propose a biochemical model providing the kinetic and energetic descriptions of the processivity dynamics of kinesin and dynein molecular motors. Our approach is a modified version of a well-known model describing kinesin dynamics and considers the presence of a competitive inhibition reaction by ADP. We first reconstruct a continuous free-energy landscape of the cycle catalyst process that allows us to calculate the number of steps given by a single molecular motor. Then, we calculate an analytical expression associated with the translational velocity and the stopping time of the molecular motor in terms of time and ATP concentration. An energetic interpretation of motor processivity is discussed in quantitative form by using experimental data. We also predict a time duration of collective processes that agrees with experimental reports.
versions: [{"created": "Wed, 16 May 2012 17:24:43 GMT", "version": "v1"}]
update_date: 2012-05-17
authors_parsed: [["Alamilla", "J. Lopez", ""], ["Santamaria-Holek", "I.", ""]]

id: 1808.05002
submitter: Arkady Zgonnikov
authors: Takashi Suzuki, Ihor Lubashevsky, Arkady Zgonnikov
title: Complexity of human response delay in intermittent control: The case of virtual stick balancing
comments: 12 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://creativecommons.org/licenses/by/4.0/
abstract: Response delay is an inherent and essential part of human actions. In the context of human balance control, the response delay is traditionally modeled using the formalism of delay-differential equations, which adopts the approximation of fixed delay. However, experimental studies revealing substantial variability, adaptive anticipation, and non-stationary dynamics of response delay provide evidence against this approximation. In this paper, we call for the development of a fundamentally new mathematical formalism describing human response delay. To support this, we present experimental data from a simple virtual stick balancing task. Our results demonstrate that human response delay is a widely distributed random variable with complex properties, which can exhibit oscillatory and adaptive dynamics characterized by long-range correlations. Given this, we argue that the fixed-delay approximation ignores essential properties of human response, and conclude with possible directions for future development of new mathematical notions describing human control.
versions: [{"created": "Wed, 15 Aug 2018 09:01:10 GMT", "version": "v1"}, {"created": "Tue, 21 Aug 2018 11:49:00 GMT", "version": "v2"}]
update_date: 2018-08-22
authors_parsed: [["Suzuki", "Takashi", ""], ["Lubashevsky", "Ihor", ""], ["Zgonnikov", "Arkady", ""]]

id: 1204.0574
submitter: Hamidreza Namazi Dr.
authors: Karthik Seetharaman, Hamidreza Namazi, Vladimir V. Kulish
title: Phase lagging model of brain response to external stimuli - modeling of single action potential
comments: 19 pages
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this paper we detail a phase lagging model of brain response to external stimuli. The model is derived using basic laws of physics, such as the law of conservation of energy. This model eliminates the paradox of instantaneous propagation of the action potential in the brain. The solution of this model is then presented. The model is further applied in the case of a single neuron and is verified by simulating a single action potential. The results of this modeling are useful not only for the fundamental understanding of single action potential generation, but also in the case of neuronal interactions, where the results can be verified against the real EEG signal.
versions: [{"created": "Tue, 3 Apr 2012 02:20:09 GMT", "version": "v1"}]
update_date: 2012-05-21
authors_parsed: [["Seetharaman", "Karthik", ""], ["Namazi", "Hamidreza", ""], ["Kulish", "Vladimir V.", ""]]

id: 1808.00868
submitter: Candy Abboud
authors: Candy Abboud, Olivier Bonnefon, Eric Parent, Samuel Soubeyrand
title: Dating and localizing an invasion from post-introduction data and a coupled reaction-diffusion-absorption model
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Invasion of new territories by alien organisms is of primary concern for environmental and health agencies and has been a core topic in mathematical modeling, in particular with the intent of reconstructing the past dynamics of the alien organisms and predicting their future spatial extents. Partial differential equations offer a rich and flexible modeling framework that has been applied to a large number of invasions. In this article, we are specifically interested in dating and localizing the introduction that led to an invasion using mathematical modeling, post-introduction data and an adequate statistical inference procedure. We adopt a mechanistic-statistical approach grounded on a coupled reaction-diffusion-absorption model representing the dynamics of an organism in a heterogeneous domain with respect to growth. Initial conditions (including the date and site of the introduction) and model parameters related to diffusion, reproduction and mortality are jointly estimated in the Bayesian framework by using an adaptive importance sampling algorithm. This framework is applied to the invasion of \textit{Xylella fastidiosa}, a phytopathogenic bacterium detected in South Corsica, France, in 2015.
versions: [{"created": "Wed, 1 Aug 2018 16:21:36 GMT", "version": "v1"}]
update_date: 2018-08-03
authors_parsed: [["Abboud", "Candy", ""], ["Bonnefon", "Olivier", ""], ["Parent", "Eric", ""], ["Soubeyrand", "Samuel", ""]]

id: 1007.1383
submitter: Anna Mummert
authors: Bonita Lawrence and Anna Mummert and Charles Somerville
title: A Model of the Number of Antibiotic Resistant Bacteria in Rivers
comments: 18 pages, 5 figures, 2 tables
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The large reservoir of antibiotic resistant bacteria in raw and treated water supplies is a matter of public health concern. Currently, the National Antimicrobial Resistance Monitoring Systems, a collaborative effort of the Centers for Disease Control, the US Department of Agriculture, and the US Food and Drug Administration, does not monitor antimicrobial resistance in surface waters. Given the serious nature of antibiotic resistance in clinical settings, and the likelihood that antibiotic resistant bacteria can be transmitted to humans from large environmental reservoirs via drinking water, explanations for the distribution of antibiotic resistant bacteria and tools for studying this distribution must be found. Here we focus on mathematical modeling of cultivable bacteria in a river, which will be used to study the distribution of antibiotic resistant bacteria in the environment. We consider both antibiotic resistant and non-antibiotic resistant bacteria in the model, and, taking into account the strong correlation between land use and antibiotic resistant bacteria in rivers, we include a function for the influx of bacteria into the river from the shore. We simulate the model for two different time scales and show that if too many bacteria from the land enter the river, the river entirely fills with antibiotic resistant bacteria, while less frequent influxes allow time for the bacteria to lose the antibiotic resistance gene. This mathematically verifies that reduction in antibiotic use near the banks of rivers will reduce the counts of antibiotic resistant bacteria in rivers.
versions: [{"created": "Thu, 8 Jul 2010 14:06:26 GMT", "version": "v1"}]
update_date: 2010-07-09
authors_parsed: [["Lawrence", "Bonita", ""], ["Mummert", "Anna", ""], ["Somerville", "Charles", ""]]

id: 1310.2357
submitter: Rodrigo Aldecoa
authors: Rodrigo Aldecoa and Ignacio Marín
title: SurpriseMe: an integrated tool for network community structure characterization using Surprise maximization
comments: 6 pages
journal-ref: Bioinformatics 30, 1041 (2014)
doi: 10.1093/bioinformatics/btt741
report-no: null
categories: q-bio.MN cs.SI physics.soc-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Detecting communities, that is, densely connected groups, may help to unravel the underlying relationships among the units present in diverse biological networks (e.g., interactome, coexpression networks, ecological networks, etc.). We recently showed that communities can be very precisely characterized by maximizing Surprise, a global network parameter. Here we present SurpriseMe, a tool that integrates the outputs of seven of the best algorithms available to estimate the maximum Surprise value. SurpriseMe also generates distance matrices that allow one to visualize the relationships among the solutions generated by the algorithms. We show that the communities present in small and medium-sized networks, with up to 10,000 nodes, can be easily characterized: on standard PC computers, these analyses take less than an hour. Also, four of the algorithms may quite rapidly analyze networks with up to 100,000 nodes, given enough memory resources. Because of its performance and simplicity, SurpriseMe is a reference tool for community structure characterization.
versions: [{"created": "Wed, 9 Oct 2013 05:44:11 GMT", "version": "v1"}]
update_date: 2014-04-11
authors_parsed: [["Aldecoa", "Rodrigo", ""], ["Marín", "Ignacio", ""]]

id: 1711.02454
submitter: Dave Thirumalai
authors: Ngo Min Toan and D. Thirumalai
title: Forced-rupture of Cell-Adhesion Complexes Reveals abrupt switch between two Brittle States
comments: 29 pages, 6 figures, Submitted to J. Chem. Phys
journal-ref: null
doi: 10.1063/1.5011056
report-no: null
categories: q-bio.BM cond-mat.stat-mech physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Cell adhesion complexes (CACs), which are activated by ligand binding, play key roles in many cellular functions ranging from cell cycle regulation to mediation of cell extracellular matrix adhesion. Inspired by single molecule pulling experiments on leukocyte function-associated antigen-1 (LFA-1), expressed in T-cells, bound to intercellular adhesion molecules (ICAM), we performed constant loading rate ($r_f$) and constant force ($F$) simulations using the Self-Organized Polymer (SOP) model to describe the mechanism of ligand rupture from CACs. The simulations reproduce the major experimental finding on the kinetics of the rupture process, namely, the dependence of the most probable rupture forces ($f^*$s) on $\ln r_f$ ($r_f$ is the loading rate) exhibits two distinct linear regimes. The first, at low $r_f$, has a shallow slope whereas the slope at high $r_f$ is much larger, especially for the LFA-1/ICAM-1 complex, with the transition between the two occurring over a narrow $r_f$ range. Locations of the two transition states (TSs), extracted from the simulations, show an abrupt change from a high value at low $r_f$ or $F$ to a low value at high $r_f$ or $F$. The unusual behavior in which the CACs switch from one brittle state (TS position is a constant over a range of forces) to another brittle state is not found in the forced rupture of other protein complexes. We explain this novel behavior by constructing the free energy profiles, $F(\Lambda)$s, as a function of a collective reaction coordinate ($\Lambda$), involving many key charged residues and a critical metal ion. The TS positions in $F(\Lambda)$ change abruptly at a critical force, demonstrating that it, rather than the molecular extension, is a good reaction coordinate. We reveal a new mechanism for the two loading regimes observed in the rupture kinetics in CACs.
versions: [{"created": "Tue, 7 Nov 2017 13:28:48 GMT", "version": "v1"}]
update_date: 2018-04-04
authors_parsed: [["Toan", "Ngo Min", ""], ["Thirumalai", "D.", ""]]

id: 1604.01317
submitter: Michael Sadovsky
authors: Sergey Tsarev, Michael Sadovsky
title: New Error Tolerant Method to Search Long Repeats in Symbol Sequences
comments: 13 pages, 4 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN cs.DS cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: A new method to identify all sufficiently long repeating substrings in one or several symbol sequences is proposed. The method is based on a specific gauge applied to symbol sequences that guarantees identification of the repeating substrings. It allows the matching of substrings to contain a given level of errors. The gauge is based on the development of a heavily sparse dictionary of repeats, thus drastically accelerating the search procedure. Some genomic applications illustrate the method. This paper is the extended and detailed version of the presentation at the third International Conference on Algorithms for Computational Biology to be held at Trujillo, Spain, June 21-22, 2016.
versions: [{"created": "Tue, 5 Apr 2016 16:18:33 GMT", "version": "v1"}]
update_date: 2016-04-07
authors_parsed: [["Tsarev", "Sergey", ""], ["Sadovsky", "Michael", ""]]

1511.03559
|
Jorge Hidalgo
|
Jorge Hidalgo, Rafael Rubio de Casas and Miguel A. Munoz
|
Environmental unpredictability and inbreeding depression select for
mixed dispersal syndromes
|
15 pages, 8 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mixed dispersal syndromes have historically been regarded as bet-hedging
mechanisms that enhance survival in unpredictable environments, ensuring that
some propagules stay in the maternal environment while others can potentially
colonize new sites. However, this entails paying the costs of both dispersal
and non-dispersal. Propagules that disperse are likely to encounter unfavorable
conditions for establishment, while non-dispersing propagules might form
populations of close relatives burdened with inbreeding. Here, we investigate
the conditions under which mixed dispersal syndromes emerge and are
evolutionarily stable, taking into account the risks of both environmental
unpredictability and inbreeding. Using mathematical and computational modeling
we show that high dispersal propensity is favored whenever temporal
environmental unpredictability is low and inbreeding depression high, whereas
mixed dispersal syndromes are adaptive under conditions of high environmental
unpredictability, particularly if inbreeding depression is also small.
Although pure dispersers can be selected for under some circumstances, mixed
dispersal provides the optimal strategy under most parameterizations of our
models, indicating that this strategy is likely to be favored under a wide
variety of conditions. Furthermore, populations exhibiting any single phenotype
go inevitably extinct when environmental and genetic costs are high, whilst
mixed strategies can maintain viable populations even under such conditions.
Our models support the hypothesis that the interplay between inbreeding
depression and environmental unpredictability shapes dispersal syndromes, often
resulting in mixed strategies. Moreover, mixed dispersal seems to facilitate
persistence whenever conditions are critical or nearly critical for survival.
|
[
{
"created": "Wed, 11 Nov 2015 16:34:48 GMT",
"version": "v1"
}
] |
2015-11-12
|
[
[
"Hidalgo",
"Jorge",
""
],
[
"de Casas",
"Rafael Rubio",
""
],
[
"Munoz",
"Miguel A.",
""
]
] |
Mixed dispersal syndromes have historically been regarded as bet-hedging mechanisms that enhance survival in unpredictable environments, ensuring that some propagules stay in the maternal environment while others can potentially colonize new sites. However, this entails paying the costs of both dispersal and non-dispersal. Propagules that disperse are likely to encounter unfavorable conditions for establishment, while non-dispersing propagules might form populations of close relatives burdened with inbreeding. Here, we investigate the conditions under which mixed dispersal syndromes emerge and are evolutionarily stable, taking into account the risks of both environmental unpredictability and inbreeding. Using mathematical and computational modeling we show that high dispersal propensity is favored whenever temporal environmental unpredictability is low and inbreeding depression high, whereas mixed dispersal syndromes are adaptive under conditions of high environmental unpredictability, particularly if inbreeding depression is also small. Although pure dispersers can be selected for under some circumstances, mixed dispersal provides the optimal strategy under most parameterizations of our models, indicating that this strategy is likely to be favored under a wide variety of conditions. Furthermore, populations exhibiting any single phenotype go inevitably extinct when environmental and genetic costs are high, whilst mixed strategies can maintain viable populations even under such conditions. Our models support the hypothesis that the interplay between inbreeding depression and environmental unpredictability shapes dispersal syndromes, often resulting in mixed strategies. Moreover, mixed dispersal seems to facilitate persistence whenever conditions are critical or nearly critical for survival.
|
1304.6952
|
Jay Newby
|
Jay M. Newby, Paul C. Bressloff, James P. Keener
|
Breakdown of fast-slow analysis in an excitable system with channel
noise
| null | null |
10.1103/PhysRevLett.111.128101
| null |
q-bio.NC cond-mat.stat-mech q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a stochastic version of an excitable system based on the
Morris-Lecar model of a neuron, in which the noise originates from stochastic
sodium and potassium ion channels opening and closing. One can analyze neural
excitability in the deterministic model by using a separation of time scales
involving a fast voltage variable and a slow recovery variable, which
represents the fraction of open potassium channels. In the stochastic setting,
spontaneous excitation is initiated by ion channel noise. If the recovery
variable is constant during initiation, the spontaneous activity rate can be
calculated using Kramers' rate theory. The validity of this assumption in the
stochastic model is examined using a systematic perturbation analysis. We find
that in most physically relevant cases, this assumption breaks down, requiring
an alternative to Kramers' theory for excitable systems with one deterministic
fixed point. We also show that an exit time problem can be formulated in an
excitable system by considering maximum likelihood trajectories of the
stochastic process.
|
[
{
"created": "Thu, 25 Apr 2013 16:11:33 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Aug 2013 21:09:59 GMT",
"version": "v2"
}
] |
2015-06-15
|
[
[
"Newby",
"Jay M.",
""
],
[
"Bressloff",
"Paul C.",
""
],
[
"Keener",
"James P.",
""
]
] |
We consider a stochastic version of an excitable system based on the Morris-Lecar model of a neuron, in which the noise originates from stochastic sodium and potassium ion channels opening and closing. One can analyze neural excitability in the deterministic model by using a separation of time scales involving a fast voltage variable and a slow recovery variable, which represents the fraction of open potassium channels. In the stochastic setting, spontaneous excitation is initiated by ion channel noise. If the recovery variable is constant during initiation, the spontaneous activity rate can be calculated using Kramers' rate theory. The validity of this assumption in the stochastic model is examined using a systematic perturbation analysis. We find that in most physically relevant cases, this assumption breaks down, requiring an alternative to Kramers' theory for excitable systems with one deterministic fixed point. We also show that an exit time problem can be formulated in an excitable system by considering maximum likelihood trajectories of the stochastic process.
|
1007.4098
|
Sudip Kundu
|
Dhriti Sengupta and Sudip Kundu
|
Protein contact networks at different length scales and role of
hydrophobic, hydrophilic and charged residues in protein's structural
organisation
| null | null | null | null |
q-bio.BM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The three-dimensional structure of a protein is an outcome of the
interactions of its constituent amino acids in 3D space. Considering the amino
acids as nodes and the interactions among them as edges we have constructed and
analyzed protein contact networks at different length scales, long and
short-range. While long and short-range interactions are determined by the
positions of amino acids in the primary chain, the contact networks are
constructed
based on the 3D spatial distances of amino acids. We have further divided these
networks into sub-networks of hydrophobic, hydrophilic and charged residues.
Our analysis reveals that a significantly higher percentage of assortative
sub-clusters of long-range hydrophobic networks helps a protein in
communicating the necessary information for protein folding on one hand; on
the other hand, the higher values of clustering coefficients of hydrophobic
sub-clusters play a major role in slowing down the process so that necessary
local and global stability can be achieved through intra connectivities of the
amino acid residues. Further, higher degrees of hydrophobic long-range
interactions suggest their greater role in protein folding and stability. The
small-range all-amino-acid networks have a signature of hierarchy. The present
analysis, together with other evidence, suggests that in a protein's 3D
conformational space, connectivity does not grow either through preferential
attachment or through random connections; rather, it follows a specific
guiding principle based on structural necessity, where some of the interactions
are primary while the others, generated as a consequence of these primary
interactions are secondary.
|
[
{
"created": "Fri, 23 Jul 2010 10:52:44 GMT",
"version": "v1"
}
] |
2010-07-26
|
[
[
"Sengupta",
"Dhriti",
""
],
[
"Kundu",
"Sudip",
""
]
] |
The three-dimensional structure of a protein is an outcome of the interactions of its constituent amino acids in 3D space. Considering the amino acids as nodes and the interactions among them as edges we have constructed and analyzed protein contact networks at different length scales, long and short-range. While long and short-range interactions are determined by the positions of amino acids in the primary chain, the contact networks are constructed based on the 3D spatial distances of amino acids. We have further divided these networks into sub-networks of hydrophobic, hydrophilic and charged residues. Our analysis reveals that a significantly higher percentage of assortative sub-clusters of long-range hydrophobic networks helps a protein in communicating the necessary information for protein folding on one hand; on the other hand, the higher values of clustering coefficients of hydrophobic sub-clusters play a major role in slowing down the process so that necessary local and global stability can be achieved through intra connectivities of the amino acid residues. Further, higher degrees of hydrophobic long-range interactions suggest their greater role in protein folding and stability. The small-range all-amino-acid networks have a signature of hierarchy. The present analysis, together with other evidence, suggests that in a protein's 3D conformational space, connectivity does not grow either through preferential attachment or through random connections; rather, it follows a specific guiding principle based on structural necessity, where some of the interactions are primary while the others, generated as a consequence of these primary interactions, are secondary.
|
1011.2605
|
Sergei Nechaev
|
S.K. Nechaev, M.V. Tamm, O.V. Valba
|
Sequence matching algorithms and pairing of noncoding RNAs
|
23 pages, 14 figures
| null | null | null |
q-bio.QM cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new statistical method of alignment of two heteropolymers which can form
hierarchical cloverleaf-like secondary structures is proposed. This offers a
new constructive algorithm for quantitative determination of binding free
energy of two noncoding RNAs with arbitrary primary sequences. The alignment of
ncRNAs differs from the complete alignment of two RNA sequences: in the ncRNA
case we align only the sequences of nucleotides which constitute pairs between
two different RNAs, while the secondary structure of each RNA comes into play
only through the combinatorial factors affecting the entropic contribution of
each
molecule to the total cost function. The proposed algorithm is based on two
observations: i) the standard alignment problem is considered as a
zero-temperature limit of a more general statistical problem of binding of two
associating heteropolymer chains; ii) this last problem is generalized onto the
sequences with hierarchical cloverleaf-like structures (i.e. of RNA-type).
Taking the zero-temperature limit at the very end, we arrive at the desired
"cost function" of the system, with account taken of the entropy of side
cactus-like loops. Moreover, we have demonstrated in detail how our algorithm
enables us to solve the "structure recovery" problem. Namely, in the
zero-temperature limit we can predict the cloverleaf-like (i.e. secondary)
structure of interacting ncRNAs by knowing
only their primary sequences.
|
[
{
"created": "Thu, 11 Nov 2010 10:26:42 GMT",
"version": "v1"
}
] |
2010-11-12
|
[
[
"Nechaev",
"S. K.",
""
],
[
"Tamm",
"M. V.",
""
],
[
"Valba",
"O. V.",
""
]
] |
A new statistical method of alignment of two heteropolymers which can form hierarchical cloverleaf-like secondary structures is proposed. This offers a new constructive algorithm for quantitative determination of binding free energy of two noncoding RNAs with arbitrary primary sequences. The alignment of ncRNAs differs from the complete alignment of two RNA sequences: in the ncRNA case we align only the sequences of nucleotides which constitute pairs between two different RNAs, while the secondary structure of each RNA comes into play only through the combinatorial factors affecting the entropic contribution of each molecule to the total cost function. The proposed algorithm is based on two observations: i) the standard alignment problem is considered as a zero-temperature limit of a more general statistical problem of binding of two associating heteropolymer chains; ii) this last problem is generalized onto the sequences with hierarchical cloverleaf-like structures (i.e. of RNA-type). Taking the zero-temperature limit at the very end, we arrive at the desired "cost function" of the system, with account taken of the entropy of side cactus-like loops. Moreover, we have demonstrated in detail how our algorithm enables us to solve the "structure recovery" problem. Namely, in the zero-temperature limit we can predict the cloverleaf-like (i.e. secondary) structure of interacting ncRNAs by knowing only their primary sequences.
|
2310.07543
|
Jonathan D. Victor
|
Jonathan D. Victor, Guillermo Aguilar, Suniyya A. Waraich
|
Ordinal Characterization of Similarity Judgments
|
Body: 44 pages; 6 figures; 3 appendices
| null | null | null |
q-bio.NC q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Characterizing judgments of similarity within a perceptual or semantic
domain, and making inferences about the underlying structure of this domain
from these judgments, has an increasingly important role in cognitive and
systems neuroscience. We present a new framework for this purpose that makes
very limited assumptions about how perceptual distances are converted into
similarity judgments. The approach starts from a dataset of empirical judgments
of relative similarities: the fraction of times that a subject chooses one of
two comparison stimuli to be more similar to a reference stimulus. These
empirical judgments provide Bayesian estimates of underlying choice
probabilities. From these estimates, we derive three indices that characterize
the set of judgments, measuring consistency with a symmetric dis-similarity,
consistency with an ultrametric space, and consistency with an additive tree.
We illustrate this approach with example psychophysical datasets of
dis-similarity judgments in several visual domains and provide code that
implements the analyses.
|
[
{
"created": "Wed, 11 Oct 2023 14:47:51 GMT",
"version": "v1"
}
] |
2023-10-12
|
[
[
"Victor",
"Jonathan D.",
""
],
[
"Aguilar",
"Guillermo",
""
],
[
"Waraich",
"Suniyya A.",
""
]
] |
Characterizing judgments of similarity within a perceptual or semantic domain, and making inferences about the underlying structure of this domain from these judgments, has an increasingly important role in cognitive and systems neuroscience. We present a new framework for this purpose that makes very limited assumptions about how perceptual distances are converted into similarity judgments. The approach starts from a dataset of empirical judgments of relative similarities: the fraction of times that a subject chooses one of two comparison stimuli to be more similar to a reference stimulus. These empirical judgments provide Bayesian estimates of underlying choice probabilities. From these estimates, we derive three indices that characterize the set of judgments, measuring consistency with a symmetric dis-similarity, consistency with an ultrametric space, and consistency with an additive tree. We illustrate this approach with example psychophysical datasets of dis-similarity judgments in several visual domains and provide code that implements the analyses.
|
1309.6479
|
Jannik Vollmer
|
Jannik Vollmer, Denis Menshykau, Dagmar Iber
|
Simulating Organogenesis in COMSOL: Cell-based Signaling Models
|
6 pages, 3 Figures, Proceedings of COMSOL Conference 2013
| null | null | null |
q-bio.QM q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most models of biological pattern formation are simulated on continuous
domains even though cells are discrete objects that provide internal boundaries
to the diffusion of regulatory components. In our previous papers on simulating
organogenesis in COMSOL (Germann et al., COMSOL Conf Proceedings 2011; Menshykau
and Iber, COMSOL Conf Proceedings 2012) we discussed methods to efficiently
solve signaling models on static and growing continuous domains. Here we
discuss COMSOL-based methods to study spatio-temporal signaling models at
cellular resolution with subcellular compartments, i.e. cell membrane,
cytoplasm, and nucleus.
|
[
{
"created": "Wed, 25 Sep 2013 12:23:41 GMT",
"version": "v1"
}
] |
2013-09-26
|
[
[
"Vollmer",
"Jannik",
""
],
[
"Menshykau",
"Denis",
""
],
[
"Iber",
"Dagmar",
""
]
] |
Most models of biological pattern formation are simulated on continuous domains even though cells are discrete objects that provide internal boundaries to the diffusion of regulatory components. In our previous papers on simulating organogenesis in COMSOL (Germann et al., COMSOL Conf Proceedings 2011; Menshykau and Iber, COMSOL Conf Proceedings 2012) we discussed methods to efficiently solve signaling models on static and growing continuous domains. Here we discuss COMSOL-based methods to study spatio-temporal signaling models at cellular resolution with subcellular compartments, i.e. cell membrane, cytoplasm, and nucleus.
|
2307.01602
|
Csenge Petak
|
Csenge Petak, Lapo Frati, Melissa H. Pespeni, Nick Cheney
|
Coping with seasons: evolutionary dynamics of gene networks in a
changing environment
|
Accepted at the Genetic and Evolutionary Computation Conference 2023
as a poster paper
| null |
10.1145/3583133.3590744
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In environments that vary frequently and unpredictably, bet-hedgers can
overtake the population. Diversifying bet-hedgers have a diverse set of
offspring so that, no matter the conditions they find themselves in, at least
some offspring will have high fitness. In contrast, conservative bet-hedgers
have a set of offspring that all have an in-between phenotype compared to the
specialists. Here, we use an evolutionary algorithm of gene regulatory networks
to de novo evolve the two strategies and investigate their relative success in
different parameter settings. We found that diversifying bet-hedgers almost
always evolved first, but then eventually got outcompeted by conservative
bet-hedgers. We argue that even though similar selection pressures apply to the
two bet-hedger strategies, conservative bet-hedgers could win due to the
robustness of their evolved networks, in contrast to the sensitive networks of
the diversifying bet-hedgers. These results reveal an unexplored aspect of the
evolution of bet-hedging that could shed more light on the principles of
biological adaptation in variable environmental conditions.
|
[
{
"created": "Tue, 4 Jul 2023 09:41:15 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Petak",
"Csenge",
""
],
[
"Frati",
"Lapo",
""
],
[
"Pespeni",
"Melissa H.",
""
],
[
"Cheney",
"Nick",
""
]
] |
In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers have a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers have a set of offspring that all have an in-between phenotype compared to the specialists. Here, we use an evolutionary algorithm of gene regulatory networks to de novo evolve the two strategies and investigate their relative success in different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but then eventually got outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedger strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions.
|
1703.09637
|
Joshua D. Salvi
|
Joshua D. Salvi, Daibhid O Maoileidigh, Brian A. Fabella, Melanie
Tobin, and A. J. Hudspeth
|
Control of a hair bundle's mechanosensory function by its mechanical
load
| null |
Proc Natl Acad Sci U S A. 2015 Mar 3;112(9):E1000-9
|
10.1073/pnas.1501453112
| null |
q-bio.NC physics.bio-ph q-bio.CB
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Hair cells, the sensory receptors of the internal ear, subserve different
functions in various receptor organs: they detect oscillatory stimuli in the
auditory system, but transduce constant and step stimuli in the vestibular and
lateral-line systems. We show that a hair cell's function can be controlled
experimentally by adjusting its mechanical load. By making bundles from a
single organ operate as any of four distinct types of signal detector, we
demonstrate that altering only a few key parameters can fundamentally change a
sensory cell's role. The motions of a single hair bundle can resemble those of
a bundle from the amphibian vestibular system, the reptilian auditory system,
or the mammalian auditory system, demonstrating an essential similarity of
bundles across species and receptor organs.
|
[
{
"created": "Tue, 28 Mar 2017 15:41:21 GMT",
"version": "v1"
}
] |
2017-03-29
|
[
[
"Salvi",
"Joshua D.",
""
],
[
"Maoileidigh",
"Daibhid O",
""
],
[
"Fabella",
"Brian A.",
""
],
[
"Tobin",
"Melanie",
""
],
[
"Hudspeth",
"A. J.",
""
]
] |
Hair cells, the sensory receptors of the internal ear, subserve different functions in various receptor organs: they detect oscillatory stimuli in the auditory system, but transduce constant and step stimuli in the vestibular and lateral-line systems. We show that a hair cell's function can be controlled experimentally by adjusting its mechanical load. By making bundles from a single organ operate as any of four distinct types of signal detector, we demonstrate that altering only a few key parameters can fundamentally change a sensory cell's role. The motions of a single hair bundle can resemble those of a bundle from the amphibian vestibular system, the reptilian auditory system, or the mammalian auditory system, demonstrating an essential similarity of bundles across species and receptor organs.
|
1406.0217
|
Mike Steel Prof.
|
Mareike Fischer, Michelle Galla, Lina Herbst and Mike Steel
|
The most parsimonious tree for random data
|
19 pages, 8 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Applying a method to reconstruct a phylogenetic tree from random data
provides a way to detect whether that method has an inherent bias towards
certain tree `shapes'. For maximum parsimony, applied to a sequence of random
2-state data, each possible binary phylogenetic tree has exactly the same
distribution for its parsimony score. Despite this pleasing and slightly
surprising symmetry, some binary phylogenetic trees are more likely than others
to be a most parsimonious (MP) tree for a sequence of $k$ such characters, as
we show. For $k=2$, and unrooted binary trees on six taxa, any tree with a
caterpillar shape has a higher chance of being an MP tree than any tree with a
symmetric shape. On the other hand, if we take any two binary trees, on any
number of taxa, we prove that this bias between the two trees vanishes as the
number of characters grows. However, again there is a twist: MP trees on six
taxa are more likely to have certain shapes than a uniform distribution on
binary phylogenetic trees predicts, and this difference does not appear to
dissipate as $k$ grows.
|
[
{
"created": "Sun, 1 Jun 2014 23:50:13 GMT",
"version": "v1"
}
] |
2014-06-03
|
[
[
"Fischer",
"Mareike",
""
],
[
"Galla",
"Michelle",
""
],
[
"Herbst",
"Lina",
""
],
[
"Steel",
"Mike",
""
]
] |
Applying a method to reconstruct a phylogenetic tree from random data provides a way to detect whether that method has an inherent bias towards certain tree `shapes'. For maximum parsimony, applied to a sequence of random 2-state data, each possible binary phylogenetic tree has exactly the same distribution for its parsimony score. Despite this pleasing and slightly surprising symmetry, some binary phylogenetic trees are more likely than others to be a most parsimonious (MP) tree for a sequence of $k$ such characters, as we show. For $k=2$, and unrooted binary trees on six taxa, any tree with a caterpillar shape has a higher chance of being an MP tree than any tree with a symmetric shape. On the other hand, if we take any two binary trees, on any number of taxa, we prove that this bias between the two trees vanishes as the number of characters grows. However, again there is a twist: MP trees on six taxa are more likely to have certain shapes than a uniform distribution on binary phylogenetic trees predicts, and this difference does not appear to dissipate as $k$ grows.
|
0806.4444
|
Christian Korn
|
C. B. Korn and U. S. Schwarz
|
Mean encounter times for cell adhesion in hydrodynamic flow: analytical
progress by dimensional reduction
|
Reftex, postscript figures included
|
EPL 83:28007 (2008)
|
10.1209/0295-5075/83/28007
| null |
q-bio.CB physics.bio-ph q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a cell moving in hydrodynamic flow above a wall, translational and
rotational degrees of freedom are coupled by the Stokes equation. In addition,
there is a close coupling of convection and diffusion due to the
position-dependent mobility. These couplings render calculation of the mean
encounter time between cell surface receptors and ligands on the substrate very
difficult. Here we show for a two-dimensional model system how analytical
progress can be achieved by treating motion in the vertical direction by an
effective reaction term in the mean first passage time equation for the
rotational degree of freedom. The strength of this reaction term can either be
estimated from equilibrium considerations or used as a fit parameter. Our
analytical results are confirmed by computer simulations and allow one to
assess
the relative roles of convection and diffusion for different scaling regimes of
interest.
|
[
{
"created": "Fri, 27 Jun 2008 07:48:44 GMT",
"version": "v1"
}
] |
2008-07-14
|
[
[
"Korn",
"C. B.",
""
],
[
"Schwarz",
"U. S.",
""
]
] |
For a cell moving in hydrodynamic flow above a wall, translational and rotational degrees of freedom are coupled by the Stokes equation. In addition, there is a close coupling of convection and diffusion due to the position-dependent mobility. These couplings render calculation of the mean encounter time between cell surface receptors and ligands on the substrate very difficult. Here we show for a two-dimensional model system how analytical progress can be achieved by treating motion in the vertical direction by an effective reaction term in the mean first passage time equation for the rotational degree of freedom. The strength of this reaction term can either be estimated from equilibrium considerations or used as a fit parameter. Our analytical results are confirmed by computer simulations and allow one to assess the relative roles of convection and diffusion for different scaling regimes of interest.
|
1103.6276
|
Stuart Borrett Stuart Borrett
|
Stuart R. Borrett, Michael A. Freeze, Andria K. Salas
|
Equivalence of the realized input and output oriented indirect effects
metrics in ecological network analysis
|
13 pages, 1 figure, 1 table
|
Borrett, S.R., M.A. Freeze, and A.K. Salas. 2011. Equivalence of
the realized input and output oriented indirect effects metrics in Ecological
Network Analysis. Ecological Modelling 222(13): 2142-2148
|
10.1016/j.ecolmodel.2011.04.003
| null |
q-bio.QM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new understanding of the consequences of how ecosystem elements are
interconnected is emerging from the development and application of Ecological
Network Analysis. The relative importance of indirect effects is central to
this understanding, and the ratio of indirect flow to direct flow (I/D) is one
indicator of their importance. Two methods have been proposed for calculating
this indicator. The unit approach shows what would happen if each system member
had a unit input or output, while the realized technique determines the ratio
using the observed system inputs or outputs. When using the unit method, the
input oriented and output oriented ratios can be different, potentially leading
to conflicting results. However, we show that the input and output oriented I/D
ratios are identical using the realized method when the system is at steady
state. This work is a step in the maturation of Ecological Network Analysis
that will make it more readily testable empirically and ultimately more
useful for environmental assessment and management.
|
[
{
"created": "Thu, 31 Mar 2011 19:38:14 GMT",
"version": "v1"
}
] |
2011-11-28
|
[
[
"Borrett",
"Stuart R.",
""
],
[
"Freeze",
"Michael A.",
""
],
[
"Salas",
"Andria K.",
""
]
] |
A new understanding of the consequences of how ecosystem elements are interconnected is emerging from the development and application of Ecological Network Analysis. The relative importance of indirect effects is central to this understanding, and the ratio of indirect flow to direct flow (I/D) is one indicator of their importance. Two methods have been proposed for calculating this indicator. The unit approach shows what would happen if each system member had a unit input or output, while the realized technique determines the ratio using the observed system inputs or outputs. When using the unit method, the input oriented and output oriented ratios can be different, potentially leading to conflicting results. However, we show that the input and output oriented I/D ratios are identical using the realized method when the system is at steady state. This work is a step in the maturation of Ecological Network Analysis that will make it more readily testable empirically and ultimately more useful for environmental assessment and management.
|
1810.04269
|
Isabel Irurzun Prof.
|
Isabel M. Irurzun, Magdalena M. Defeo, L. Garavaglia, Thomas Mailland,
E. E. Mola
|
Scaling behavior in the heart rate variability characteristics with age
|
6 figures
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we study the characteristics of the heart rate variability (HRV)
as a function of age and gender. The analyzed data include previous results
reported in the literature. The data obtained in this work expand the range of
age studied until now, revealing new behaviors not reported before. We analyze
some measurements in the time domain, in the frequency domain, and nonlinear
measurements. We report scaling behaviors and abrupt changes in some
measurements. There is also a progressive decrease, with increasing age, in the
dimensionality of the dynamic system governing the HRV, which is interpreted in
terms of autonomic regulation of cardiac activity.
|
[
{
"created": "Wed, 5 Sep 2018 14:00:41 GMT",
"version": "v1"
}
] |
2018-10-11
|
[
[
"Irurzun",
"Isabel M.",
""
],
[
"Defeo",
"Magdalena M.",
""
],
[
"Garavaglia",
"L.",
""
],
[
"Mailland",
"Thomas",
""
],
[
"Mola",
"E. E.",
""
]
] |
In this work we study the characteristics of the heart rate variability (HRV) as a function of age and gender. The analyzed data include previous results reported in the literature. The data obtained in this work expand the range of age studied until now, revealing new behaviors not reported before. We analyze some measurements in the time domain, in the frequency domain, and nonlinear measurements. We report scaling behaviors and abrupt changes in some measurements. There is also a progressive decrease, with increasing age, in the dimensionality of the dynamic system governing the HRV, which is interpreted in terms of autonomic regulation of cardiac activity.
|
2010.12706
|
Deepak K. Agrawal
|
Deepak. K. Agrawal, Bradford J. Smith, Peter D. Sottile, and David J.
Albers
|
A damaged-informed lung model for ventilator waveforms
|
22 pages, 7 figure, 1 table and Supplementary File
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The acute respiratory distress syndrome (ARDS) is characterized by the acute
development of diffuse alveolar damage (DAD) resulting in increased vascular
permeability and decreased alveolar gas exchange. Mechanical ventilation is a
potentially lifesaving intervention to improve oxygen exchange but has the
potential to cause ventilator-induced lung injury (VILI). A general strategy to
reduce VILI is to use low tidal volume and low-pressure ventilation, but
optimal ventilator settings for an individual patient are difficult for the
bedside physician to determine and mortality from ARDS remains unacceptably
high. Motivated by the need to minimize VILI, scientists have developed models
of varying complexity to understand diseased pulmonary physiology. However,
simple models often fail to capture real-world injury while complex models tend
not to be estimable with clinical data, limiting the clinical utility of
existing models. To address this gap, we present a physiologically anchored,
data-driven model to better represent lung injury. Our approach relies on
clinically relevant features of the ventilator waveform data that contain
information about pulmonary physiology, patient-ventilator interaction and
ventilator settings. Our lung model can reproduce essential physiological and
pathophysiological dynamics of differently damaged lungs for both controlled
mouse model data and uncontrolled human ICU data. The estimated parameter
values, which correlate with a known measure of lung physiology, agree with the
observed lung damage. In future work, this model could be used to
phenotype ventilator waveforms and serve as a basis for predicting the course
of ARDS and improving patient care.
|
[
{
"created": "Fri, 23 Oct 2020 23:23:31 GMT",
"version": "v1"
}
] |
2020-10-27
|
[
[
"Agrawal",
"Deepak. K.",
""
],
[
"Smith",
"Bradford J.",
""
],
[
"Sottile",
"Peter D.",
""
],
[
"Albers",
"David J.",
""
]
] |
The acute respiratory distress syndrome (ARDS) is characterized by the acute development of diffuse alveolar damage (DAD) resulting in increased vascular permeability and decreased alveolar gas exchange. Mechanical ventilation is a potentially lifesaving intervention to improve oxygen exchange but has the potential to cause ventilator-induced lung injury (VILI). A general strategy to reduce VILI is to use low tidal volume and low-pressure ventilation, but optimal ventilator settings for an individual patient are difficult for the bedside physician to determine and mortality from ARDS remains unacceptably high. Motivated by the need to minimize VILI, scientists have developed models of varying complexity to understand diseased pulmonary physiology. However, simple models often fail to capture real-world injury while complex models tend not to be estimable with clinical data, limiting the clinical utility of existing models. To address this gap, we present a physiologically anchored, data-driven model to better represent lung injury. Our approach relies on clinically relevant features of the ventilator waveform data that contain information about pulmonary physiology, patient-ventilator interaction and ventilator settings. Our lung model can reproduce essential physiological and pathophysiological dynamics of differently damaged lungs for both controlled mouse model data and uncontrolled human ICU data. The estimated parameter values, which correlate with a known measure of lung physiology, agree with the observed lung damage. In future work, this model could be used to phenotype ventilator waveforms and serve as a basis for predicting the course of ARDS and improving patient care.
|
1503.00183
|
Romulus Breban
|
Samit Bhattacharyya, Chris Bauch, Romulus Breban
|
Role of word-of-mouth for programs of voluntary vaccination: A
game-theoretic approach
|
10 pages, 2 figures
| null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a model describing the synergetic feedback between word-of-mouth
(WoM) and epidemic dynamics controlled by voluntary vaccination. We combine a
game-theoretic model for the spread of WoM and a compartmental model describing
$SIR$ disease dynamics in the presence of a program of voluntary vaccination.
We evaluate and compare two scenarios, depending on what WoM disseminates: (1)
vaccine advertising, which may occur whether or not an epidemic is ongoing and
(2) epidemic status, notably disease prevalence. Understanding the synergy
between the two strategies could be particularly important for organizing
voluntary vaccination campaigns. We find that, in the initial phase of an
epidemic, vaccination uptake is determined more by vaccine advertising than the
epidemic status. As the epidemic progresses, the epidemic status becomes
increasingly important, considerably accelerating vaccination uptake toward a
stable vaccination coverage.
|
[
{
"created": "Sat, 28 Feb 2015 21:33:41 GMT",
"version": "v1"
}
] |
2015-03-03
|
[
[
"Bhattacharyya",
"Samit",
""
],
[
"Bauch",
"Chris",
""
],
[
"Breban",
"Romulus",
""
]
] |
We propose a model describing the synergetic feedback between word-of-mouth (WoM) and epidemic dynamics controlled by voluntary vaccination. We combine a game-theoretic model for the spread of WoM and a compartmental model describing $SIR$ disease dynamics in the presence of a program of voluntary vaccination. We evaluate and compare two scenarios, depending on what WoM disseminates: (1) vaccine advertising, which may occur whether or not an epidemic is ongoing and (2) epidemic status, notably disease prevalence. Understanding the synergy between the two strategies could be particularly important for organizing voluntary vaccination campaigns. We find that, in the initial phase of an epidemic, vaccination uptake is determined more by vaccine advertising than the epidemic status. As the epidemic progresses, the epidemic status becomes increasingly important, considerably accelerating vaccination uptake toward a stable vaccination coverage.
|
1604.06121
|
Chathika Gunaratne
|
Chathika Gunaratne, Mustafa Ilhan Akbas, Ivan Garibay, Ozlem Ozmen
|
Evaluation of Zika Vector Control Strategies Using Agent-Based Modeling
|
14 pages, 6 figures, 1 table, conference
| null | null | null |
q-bio.PE cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aedes aegypti is the vector of several deadly diseases, including Zika.
Effective and sustainable vector control measures must be deployed to keep A.
aegypti numbers under control. The distribution of A. aegypti is subject to
spatial and climatic constraints. Using agent-based modeling, we model the
population dynamics of A. aegypti subjected to the spatial and climatic
constraints of a neighborhood in Key West. Satellite imagery was used to
identify vegetation and houses (CO2 zones), both critical to the mosquito
lifecycle. The model replicates the seasonal fluctuation of the adult
population sampled through field studies and approximates the population at a
high of 986 (95% CI: [979, 993]) females and 1031 (95% CI: [1024, 1039]) males
in the fall and a low of 316 (95% CI: [313, 319]) females and 333 (95% CI:
[330, 336]) males during the winter. We then simulate two biological vector
control strategies: 1) Wolbachia infection and 2) Release of Insects carrying a
Dominant Lethal gene (RIDL). Our results support the probability of sustained
Wolbachia infection within the population for two years after the year of
release. In addition to evaluating control strategies, our approach provides a
realistic simulation environment consisting of male and female Aedes aegypti,
breeding spots, vegetation and CO2 sources.
|
[
{
"created": "Wed, 20 Apr 2016 20:52:41 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Aug 2016 18:11:18 GMT",
"version": "v2"
}
] |
2016-08-05
|
[
[
"Gunaratne",
"Chathika",
""
],
[
"Akbas",
"Mustafa Ilhan",
""
],
[
"Garibay",
"Ivan",
""
],
[
"Ozmen",
"Ozlem",
""
]
] |
Aedes aegypti is the vector of several deadly diseases, including Zika. Effective and sustainable vector control measures must be deployed to keep A. aegypti numbers under control. The distribution of A. aegypti is subject to spatial and climatic constraints. Using agent-based modeling, we model the population dynamics of A. aegypti subjected to the spatial and climatic constraints of a neighborhood in Key West. Satellite imagery was used to identify vegetation and houses (CO2 zones), both critical to the mosquito lifecycle. The model replicates the seasonal fluctuation of the adult population sampled through field studies and approximates the population at a high of 986 (95% CI: [979, 993]) females and 1031 (95% CI: [1024, 1039]) males in the fall and a low of 316 (95% CI: [313, 319]) females and 333 (95% CI: [330, 336]) males during the winter. We then simulate two biological vector control strategies: 1) Wolbachia infection and 2) Release of Insects carrying a Dominant Lethal gene (RIDL). Our results support the probability of sustained Wolbachia infection within the population for two years after the year of release. In addition to evaluating control strategies, our approach provides a realistic simulation environment consisting of male and female Aedes aegypti, breeding spots, vegetation and CO2 sources.
|
2407.14556
|
Ralph Lano P
|
Ralph P. Lano
|
Mechanical Self-replication
| null | null | null | null |
q-bio.OT cs.CL physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study presents a theoretical model for a self-replicating mechanical
system inspired by biological processes within living cells and supported by
computer simulations. The model decomposes self-replication into core
components, each of which is executed by a single machine constructed from a
set of basic block types. Key functionalities such as sorting, copying, and
building are demonstrated. The model provides valuable insights into the
constraints of self-replicating systems. The discussion also addresses the
spatial and timing behavior of the system, as well as its efficiency and
complexity. This work provides a foundational framework for future studies on
self-replicating mechanisms and their information-processing applications.
|
[
{
"created": "Thu, 18 Jul 2024 09:49:50 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Lano",
"Ralph P.",
""
]
] |
This study presents a theoretical model for a self-replicating mechanical system inspired by biological processes within living cells and supported by computer simulations. The model decomposes self-replication into core components, each of which is executed by a single machine constructed from a set of basic block types. Key functionalities such as sorting, copying, and building are demonstrated. The model provides valuable insights into the constraints of self-replicating systems. The discussion also addresses the spatial and timing behavior of the system, as well as its efficiency and complexity. This work provides a foundational framework for future studies on self-replicating mechanisms and their information-processing applications.
|
1906.08365
|
Dushyant Sahoo
|
Dushyant Sahoo, Theodore D. Satterthwaite and Christos Davatzikos
|
Extraction of hierarchical functional connectivity components in human
brain using resting-state fMRI
| null | null |
10.1109/TMI.2020.3042873
| null |
q-bio.NC cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of hierarchy in networks of the human brain has been of significant
interest among researchers, as numerous studies have pointed towards a
functional hierarchical organization of the human brain. This paper provides a
novel method for the extraction of hierarchical connectivity components in the
human brain using resting-state fMRI. The method builds upon prior work of
Sparse Connectivity Patterns (SCPs) by introducing a hierarchy of sparse
overlapping patterns. The components are estimated by deep factorization of
correlation matrices generated from fMRI. The goal of the paper is to extract
interpretable hierarchical patterns using correlation matrices where a low rank
decomposition is formed by a linear combination of a high rank decomposition.
We formulate the decomposition as a non-convex optimization problem and solve
it using gradient descent algorithms with adaptive step size. We also provide a
method for the warm start of the gradient descent using singular value
decomposition. We demonstrate the effectiveness of the developed method on two
different real-world datasets by showing that multi-scale hierarchical SCPs are
reproducible between sub-samples and are more reproducible as compared to
single scale patterns. We also compare our method with existing hierarchical
community detection approaches. Our method also provides novel insight into the
functional organization of the human brain.
|
[
{
"created": "Wed, 19 Jun 2019 21:29:40 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2019 18:27:57 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Dec 2020 17:24:32 GMT",
"version": "v3"
}
] |
2021-03-02
|
[
[
"Sahoo",
"Dushyant",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Davatzikos",
"Christos",
""
]
] |
The study of hierarchy in networks of the human brain has been of significant interest among researchers, as numerous studies have pointed towards a functional hierarchical organization of the human brain. This paper provides a novel method for the extraction of hierarchical connectivity components in the human brain using resting-state fMRI. The method builds upon prior work of Sparse Connectivity Patterns (SCPs) by introducing a hierarchy of sparse overlapping patterns. The components are estimated by deep factorization of correlation matrices generated from fMRI. The goal of the paper is to extract interpretable hierarchical patterns using correlation matrices where a low rank decomposition is formed by a linear combination of a high rank decomposition. We formulate the decomposition as a non-convex optimization problem and solve it using gradient descent algorithms with adaptive step size. We also provide a method for the warm start of the gradient descent using singular value decomposition. We demonstrate the effectiveness of the developed method on two different real-world datasets by showing that multi-scale hierarchical SCPs are reproducible between sub-samples and are more reproducible as compared to single scale patterns. We also compare our method with existing hierarchical community detection approaches. Our method also provides novel insight into the functional organization of the human brain.
|
1205.0098
|
Jiao Sy
|
Shuyun Jiao, and Ping Ao
|
Absorbing Phenomena and Escaping Time for Muller's Ratchet in Adaptive
Landscape
|
published in BMC Systems Biology Special Issues
|
BMC Systems Biology 2012, 6(Suppl 1):S10
|
10.1186/1752-0509-6-S1-S10
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: The accumulation of deleterious mutations in a population directly
determines how long the population will persist, a process often described as
Muller's ratchet with the absorbing phenomenon. The key to understanding this
absorbing phenomenon is to characterize the decay time of the fittest class of
the population. The adaptive landscape introduced by Wright, a re-emerging
powerful concept in systems biology, is used as a tool to describe biological
processes. To our knowledge, the dynamical behaviors of Muller's ratchet over
the full parameter regimes have not been studied from the point of view of the
adaptive landscape, and the absorbing phenomenon has not yet been characterized
quantitatively without extraneous assumptions.
Results: We describe the dynamical behavior of a population subject to
Muller's ratchet in all parameter regimes through the adaptive landscape. The
adaptive landscape has rich structures, such as finite and infinite potentials
and real and imaginary fixed points. We give formulas for the single-click time
under finite and infinite potential, and we find that the single-click time
increases with increasing selection rate and population size and decreases with
increasing mutation rate. These results provide a new understanding of infinite
potential. We analytically demonstrate the adaptive and unadaptive states over
the whole parameter regimes. Interesting issues concerning the parameter
regions with imaginary fixed points are demonstrated, which can help in
understanding the biological meaning of critical points such as the
intersection points of regimes. Most importantly, we find that the absorbing
phenomenon is characterized by the adaptive landscape and the single-click time
without any extraneous assumptions. These results suggest a graphical and
quantitative framework for studying the absorbing phenomenon.
|
[
{
"created": "Tue, 1 May 2012 06:55:54 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Aug 2012 03:31:45 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Sep 2012 06:54:06 GMT",
"version": "v3"
}
] |
2012-09-11
|
[
[
"Jiao",
"Shuyun",
""
],
[
"Ao",
"Ping",
""
]
] |
Background: The accumulation of deleterious mutations in a population directly determines how long the population will persist, a process often described as Muller's ratchet with the absorbing phenomenon. The key to understanding this absorbing phenomenon is to characterize the decay time of the fittest class of the population. The adaptive landscape introduced by Wright, a re-emerging powerful concept in systems biology, is used as a tool to describe biological processes. To our knowledge, the dynamical behaviors of Muller's ratchet over the full parameter regimes have not been studied from the point of view of the adaptive landscape, and the absorbing phenomenon has not yet been characterized quantitatively without extraneous assumptions. Results: We describe the dynamical behavior of a population subject to Muller's ratchet in all parameter regimes through the adaptive landscape. The adaptive landscape has rich structures, such as finite and infinite potentials and real and imaginary fixed points. We give formulas for the single-click time under finite and infinite potential, and we find that the single-click time increases with increasing selection rate and population size and decreases with increasing mutation rate. These results provide a new understanding of infinite potential. We analytically demonstrate the adaptive and unadaptive states over the whole parameter regimes. Interesting issues concerning the parameter regions with imaginary fixed points are demonstrated, which can help in understanding the biological meaning of critical points such as the intersection points of regimes. Most importantly, we find that the absorbing phenomenon is characterized by the adaptive landscape and the single-click time without any extraneous assumptions. These results suggest a graphical and quantitative framework for studying the absorbing phenomenon.
|
2407.03380
|
Srivathsan Badrinarayanan
|
Srivathsan Badrinarayanan, Chakradhar Guntuboina, Parisa Mollaei, Amir
Barati Farimani
|
Multi-Peptide: Multimodality Leveraged Language-Graph Learning of
Peptide Properties
| null | null | null | null |
q-bio.QM cs.AI cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Peptides are essential in biological processes and therapeutics. In this
study, we introduce Multi-Peptide, an innovative approach that combines
transformer-based language models with Graph Neural Networks (GNNs) to predict
peptide properties. We combine PeptideBERT, a transformer model tailored for
peptide property prediction, with a GNN encoder to capture both sequence-based
and structural features. By employing Contrastive Language-Image Pre-training
(CLIP), Multi-Peptide aligns embeddings from both modalities into a shared
latent space, thereby enhancing the model's predictive accuracy. Evaluations on
hemolysis and nonfouling datasets demonstrate Multi-Peptide's robustness,
achieving state-of-the-art 86.185% accuracy in hemolysis prediction. This study
highlights the potential of multimodal learning in bioinformatics, paving the
way for accurate and reliable predictions in peptide-based research and
applications.
|
[
{
"created": "Tue, 2 Jul 2024 20:13:47 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Badrinarayanan",
"Srivathsan",
""
],
[
"Guntuboina",
"Chakradhar",
""
],
[
"Mollaei",
"Parisa",
""
],
[
"Farimani",
"Amir Barati",
""
]
] |
Peptides are essential in biological processes and therapeutics. In this study, we introduce Multi-Peptide, an innovative approach that combines transformer-based language models with Graph Neural Networks (GNNs) to predict peptide properties. We combine PeptideBERT, a transformer model tailored for peptide property prediction, with a GNN encoder to capture both sequence-based and structural features. By employing Contrastive Language-Image Pre-training (CLIP), Multi-Peptide aligns embeddings from both modalities into a shared latent space, thereby enhancing the model's predictive accuracy. Evaluations on hemolysis and nonfouling datasets demonstrate Multi-Peptide's robustness, achieving state-of-the-art 86.185% accuracy in hemolysis prediction. This study highlights the potential of multimodal learning in bioinformatics, paving the way for accurate and reliable predictions in peptide-based research and applications.
|
1805.05597
|
Francois Enault
|
Camille Loiseau, Victor Hatte, Charlotte Andrieu, Loic Barlet, Audric
Cologne, Romain De Oliveira, Lionel Ferrato-Berberian, H\'el\`ene Gardon
(LMGE), Damien Lauber, M\'elanie Molinier, St\'ephanie Monnerie, Kissi N
'Gou, Benjamin Penaud, Olivier Pereira, Justine Picarle, Amandine Septier,
Antoine Mahul, Jean-Christophe Charvy (LMGE), Fran\c{c}ois Enault (LMGE)
|
PanGeneHome : A Web Interface to Analyze Microbial Pangenomes
| null |
Journal of Bioinformatics, Computational and Systems Biology,
Elyns Group, 2017
| null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PanGeneHome is a web server dedicated to the analysis of available microbial
pangenomes. For any prokaryotic taxon with at least three sequenced genomes,
PanGeneHome provides (i) conservation level of genes, (ii) pangenome and
core-genome curves, estimated pangenome size and other metrics, (iii)
dendrograms based on gene content and average amino acid identity (AAI) for
these genomes, and (iv) functional categories and metabolic pathways
represented in the core, accessory and unique gene pools of the selected taxon.
In addition, the results for these different analyses can be compared for any
set of taxa. With the availability of 615 taxa, covering 182 species and 49
orders, PanGeneHome provides an easy way to get a glimpse of the pangenome of a
microbial group of interest. The server and its documentation are available at
http://pangenehome.lmge.uca.fr.
|
[
{
"created": "Tue, 15 May 2018 07:08:13 GMT",
"version": "v1"
}
] |
2018-05-16
|
[
[
"Loiseau",
"Camille",
"",
"LMGE"
],
[
"Hatte",
"Victor",
"",
"LMGE"
],
[
"Andrieu",
"Charlotte",
"",
"LMGE"
],
[
"Barlet",
"Loic",
"",
"LMGE"
],
[
"Cologne",
"Audric",
"",
"LMGE"
],
[
"De Oliveira",
"Romain",
"",
"LMGE"
],
[
"Ferrato-Berberian",
"Lionel",
"",
"LMGE"
],
[
"Gardon",
"Hélène",
"",
"LMGE"
],
[
"Lauber",
"Damien",
"",
"LMGE"
],
[
"Molinier",
"Mélanie",
"",
"LMGE"
],
[
"Monnerie",
"Stéphanie",
"",
"LMGE"
],
[
"'Gou",
"Kissi N",
"",
"LMGE"
],
[
"Penaud",
"Benjamin",
"",
"LMGE"
],
[
"Pereira",
"Olivier",
"",
"LMGE"
],
[
"Picarle",
"Justine",
"",
"LMGE"
],
[
"Septier",
"Amandine",
"",
"LMGE"
],
[
"Mahul",
"Antoine",
"",
"LMGE"
],
[
"Charvy",
"Jean-Christophe",
"",
"LMGE"
],
[
"Enault",
"François",
"",
"LMGE"
]
] |
PanGeneHome is a web server dedicated to the analysis of available microbial pangenomes. For any prokaryotic taxon with at least three sequenced genomes, PanGeneHome provides (i) conservation level of genes, (ii) pangenome and core-genome curves, estimated pangenome size and other metrics, (iii) dendrograms based on gene content and average amino acid identity (AAI) for these genomes, and (iv) functional categories and metabolic pathways represented in the core, accessory and unique gene pools of the selected taxon. In addition, the results for these different analyses can be compared for any set of taxa. With the availability of 615 taxa, covering 182 species and 49 orders, PanGeneHome provides an easy way to get a glimpse of the pangenome of a microbial group of interest. The server and its documentation are available at http://pangenehome.lmge.uca.fr.
|
1711.05351
|
Vanessa Haverd
|
Vanessa Haverd, Benjamin Smith, Lars Nieradzik, Peter R. Briggs,
William Woodgate, Cathy M. Trudinger, Josep G. Canadell
|
A new version of the CABLE land surface model, incorporating land-use
change, woody vegetation demography and a novel optimisation-based approach
to plant coordination of photosynthesis
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CABLE is a land surface model (LSM) that can be applied stand-alone, as well
as providing for land surface-atmosphere exchange within the Australian
Community Climate and Earth System Simulator (ACCESS). We describe critical new
developments that extend the applicability of CABLE for regional and global
carbon-climate simulations, accounting for vegetation response to biophysical
and anthropogenic forcings. A land-use and land-cover change module, driven by
gross land-use transitions and wood harvest area, was implemented, tailored to
the needs of the Coupled Model Intercomparison Project-6 (CMIP6). Novel aspects
include the treatment of secondary woody vegetation, which benefits from a
tight coupling between the land-use module and the Population Orders Physiology
(POP) module for woody demography and disturbance-mediated landscape
heterogeneity. Land-use transitions and harvest associated with secondary
forest tiles modify the annually-resolved patch age distribution within
secondary-vegetated tiles, in turn affecting biomass accumulation and turnover
rates and hence the magnitude of the secondary forest sink. Additionally, we
implemented a novel approach to constrain modelled GPP consistent with the
Co-ordination Hypothesis, predicted by evolutionary theory, which suggests that
electron transport and Rubisco-limited rates adjust seasonally and across
biomes to be co-limiting. We show that the default prior assumption - common to
CABLE and other LSMs - of a fixed ratio of electron transport to carboxylation
capacity is at odds with this hypothesis, and implement an alternative
algorithm for dynamic optimisation of this ratio, such that co-ordination is
achieved as an outcome of fitness maximisation. The results have significant
implications for the magnitude of the simulated CO2 fertilisation effect on
photosynthesis in comparison to alternative estimates and observational
proxies.
|
[
{
"created": "Tue, 14 Nov 2017 23:13:44 GMT",
"version": "v1"
}
] |
2017-11-16
|
[
[
"Haverd",
"Vanessa",
""
],
[
"Smith",
"Benjamin",
""
],
[
"Nieradzik",
"Lars",
""
],
[
"Briggs",
"Peter R.",
""
],
[
"Woodgate",
"William",
""
],
[
"Trudinger",
"Cathy M.",
""
],
[
"Canadell",
"Josep G.",
""
]
] |
CABLE is a land surface model (LSM) that can be applied stand-alone, as well as providing for land surface-atmosphere exchange within the Australian Community Climate and Earth System Simulator (ACCESS). We describe critical new developments that extend the applicability of CABLE for regional and global carbon-climate simulations, accounting for vegetation response to biophysical and anthropogenic forcings. A land-use and land-cover change module, driven by gross land-use transitions and wood harvest area, was implemented, tailored to the needs of the Coupled Model Intercomparison Project-6 (CMIP6). Novel aspects include the treatment of secondary woody vegetation, which benefits from a tight coupling between the land-use module and the Population Orders Physiology (POP) module for woody demography and disturbance-mediated landscape heterogeneity. Land-use transitions and harvest associated with secondary forest tiles modify the annually-resolved patch age distribution within secondary-vegetated tiles, in turn affecting biomass accumulation and turnover rates and hence the magnitude of the secondary forest sink. Additionally, we implemented a novel approach to constrain modelled GPP consistent with the Co-ordination Hypothesis, predicted by evolutionary theory, which suggests that electron transport and Rubisco-limited rates adjust seasonally and across biomes to be co-limiting. We show that the default prior assumption - common to CABLE and other LSMs - of a fixed ratio of electron transport to carboxylation capacity is at odds with this hypothesis, and implement an alternative algorithm for dynamic optimisation of this ratio, such that co-ordination is achieved as an outcome of fitness maximisation. The results have significant implications for the magnitude of the simulated CO2 fertilisation effect on photosynthesis in comparison to alternative estimates and observational proxies.
|
1003.5535
|
Max Little
|
Max A. Little and Nick S. Jones
|
Sparse bayesian step-filtering for high-throughput analysis of molecular
machine dynamics
|
4 pages, link to code available from author's website.
| null | null | null |
q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nature has evolved many molecular machines such as kinesin, myosin, and the
rotary flagellar motor powered by an ion current from the mitochondria. Direct
observation of the step-like motion of these machines with time series from
novel experimental assays has recently become possible. These time series are
corrupted by molecular and experimental noise that requires removal, but
classical signal processing is of limited use for recovering such step-like
dynamics. This paper reports simple, novel Bayesian filters that are robust to
step-like dynamics in noise, and introduce an L1-regularized, global filter
whose sparse solution can be rapidly obtained by standard convex optimization
methods. We show these techniques outperforming classical filters on simulated
time series in terms of their ability to accurately recover the underlying step
dynamics. To show the techniques in action, we extract step-like speed
transitions from Rhodobacter sphaeroides flagellar motor time series. Code
implementing these algorithms available from
http://www.eng.ox.ac.uk/samp/members/max/software/.
|
[
{
"created": "Mon, 29 Mar 2010 13:22:37 GMT",
"version": "v1"
}
] |
2010-03-30
|
[
[
"Little",
"Max A.",
""
],
[
"Jones",
"Nick S.",
""
]
] |
Nature has evolved many molecular machines such as kinesin, myosin, and the rotary flagellar motor powered by an ion current from the mitochondria. Direct observation of the step-like motion of these machines with time series from novel experimental assays has recently become possible. These time series are corrupted by molecular and experimental noise that requires removal, but classical signal processing is of limited use for recovering such step-like dynamics. This paper reports simple, novel Bayesian filters that are robust to step-like dynamics in noise, and introduces an L1-regularized, global filter whose sparse solution can be rapidly obtained by standard convex optimization methods. We show these techniques outperforming classical filters on simulated time series in terms of their ability to accurately recover the underlying step dynamics. To show the techniques in action, we extract step-like speed transitions from Rhodobacter sphaeroides flagellar motor time series. Code implementing these algorithms is available from http://www.eng.ox.ac.uk/samp/members/max/software/.
|
1405.2373
|
David Sivak
|
David A. Sivak and Matt Thomson
|
Environmental statistics and optimal regulation
|
21 pages, 7 figures
|
PLoS Comp Bio 10(9), e1003826, 2014
|
10.1016/j.bpj.2014.11.1999
| null |
q-bio.QM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Any organism is embedded in an environment that changes over time. The
timescale for and statistics of environmental change, the precision with which
the organism can detect its environment, and the costs and benefits of
particular protein expression levels all will affect the suitability of
different strategies-such as constitutive expression or graded response-for
regulating protein levels in response to environmental inputs. We propose a
general framework-here specifically applied to the enzymatic regulation of
metabolism in response to changing concentrations of a basic nutrient-to
predict the optimal regulatory strategy given the statistics of fluctuations in
the environment and measurement apparatus, respectively, and the costs
associated with enzyme production. We use this framework to address three
fundamental questions: (i) when a cell should prefer thresholding to a graded
response; (ii) when there is a fitness advantage to implementing a Bayesian
decision rule; and (iii) when retaining memory of the past provides a selective
advantage. We specifically find that: (i) relative convexity of enzyme
expression cost and benefit influences the fitness of thresholding or graded
responses; (ii) intermediate levels of measurement uncertainty call for a
sophisticated Bayesian decision rule; and (iii) in dynamic contexts,
intermediate levels of uncertainty call for retaining memory of the past.
Statistical properties of the environment, such as variability and correlation
times, set optimal biochemical parameters, such as thresholds and decay rates
in signaling pathways. Our framework provides a theoretical basis for
interpreting molecular signal processing algorithms and a classification scheme
that organizes known regulatory strategies and may help conceptualize
heretofore unknown ones.
|
[
{
"created": "Sat, 10 May 2014 01:12:47 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Aug 2014 23:40:30 GMT",
"version": "v2"
}
] |
2023-07-19
|
[
[
"Sivak",
"David A.",
""
],
[
"Thomson",
"Matt",
""
]
] |
Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies-such as constitutive expression or graded response-for regulating protein levels in response to environmental inputs. We propose a general framework-here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient-to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
|
q-bio/0501008
|
Binder Hans
|
Hans Binder, Stephan Preibisch, Toralf Kirsten
|
Base pair interactions and hybridization isotherms of matched and
mismatched oligonucleotide probes on microarrays
|
32 pages, 12 figures, 3 tables
| null | null | null |
q-bio.BM
| null |
The lack of specificity in microarray experiments due to non-specific
hybridization raises a serious problem for the analysis of microarray data
because the residual chemical background intensity is not related to the
expression degree of the gene of interest. We analyzed the concentration
dependence of the signal intensity of perfect match (PM) and mismatch (MM)
probes in terms of a microscopic binding model, using a combination of mean
hybridization isotherms and single-base-related affinity terms. The signal
intensities of the PM and MM probes and their difference are assessed with
regard to their sensitivity, specificity and resolution for gene expression
measures. The presented theory implies the refinement of existing algorithms of
probe level analysis to correct microarray data for non-specific background
intensities.
|
[
{
"created": "Thu, 6 Jan 2005 16:35:04 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2005 20:22:52 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Binder",
"Hans",
""
],
[
"Preibisch",
"Stephan",
""
],
[
"Kirsten",
"Toralf",
""
]
] |
The lack of specificity in microarray experiments due to non-specific hybridization raises a serious problem for the analysis of microarray data because the residual chemical background intensity is not related to the expression degree of the gene of interest. We analyzed the concentration dependence of the signal intensity of perfect match (PM) and mismatch (MM) probes in terms of a microscopic binding model, using a combination of mean hybridization isotherms and single-base-related affinity terms. The signal intensities of the PM and MM probes and their difference are assessed with regard to their sensitivity, specificity and resolution for gene expression measures. The presented theory implies the refinement of existing algorithms of probe level analysis to correct microarray data for non-specific background intensities.
|
2311.12045
|
Lauren Sanders
|
Kevin Li, Danko Nikoli\'c, Vjekoslav Nikoli\'c, Davor Andri\'c, Lauren
M. Sanders, Sylvain V. Costes
|
Using Guided Transfer Learning to Predispose AI Agent to Learn
Efficiently from Small RNA-sequencing Datasets
| null | null | null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Given the increasing availability of RNA-seq data and its complex and
heterogeneous nature, there has been growing interest in applying AI/machine
learning methodologies to work with such data modalities. However, because
omics data is characterized by high dimensionality and low sample size (HDLSS),
current attempts at integrating AI in this domain require significant human
guidance and expertise to mitigate overfitting. In this work we look at how
transfer learning can be improved to learn from small RNA-seq sample sizes
without significant human interference. The strategy is to gain general prior
knowledge about a particular domain of data (e.g. RNA-seq data) by pre-training
on a general task with a large aggregate of data, then fine-tuning to various
specific, downstream target tasks in the same domain. Because previous attempts
have shown traditional transfer learning failing on HDLSS, we propose to
improve performance by using Guided Transfer Learning (GTL). Collaborating with
Robots Go Mental, the AI we deploy here not only learns good initial parameters
during pre-training, but also learns inductive biases that affect how the AI
learns downstream tasks. In this approach, we first pre-trained on recount3
data, a collection of over 400,000 mouse RNA-seq samples sourced from thousands
of individual studies. With such a large collection, patterns of expression
between the ~30,000 genes in mammalian systems were pre-determined. Such
patterns were sufficient for the pre-trained AI agent to efficiently learn new
downstream tasks involving RNA-seq datasets with very low sample sizes and to
perform notably better on few-shot learning tasks compared to the same model
without pre-training.
|
[
{
"created": "Fri, 17 Nov 2023 22:47:46 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Li",
"Kevin",
""
],
[
"Nikolić",
"Danko",
""
],
[
"Nikolić",
"Vjekoslav",
""
],
[
"Andrić",
"Davor",
""
],
[
"Sanders",
"Lauren M.",
""
],
[
"Costes",
"Sylvain V.",
""
]
] |
Given the increasing availability of RNA-seq data and its complex and heterogeneous nature, there has been growing interest in applying AI/machine learning methodologies to work with such data modalities. However, because omics data is characterized by high dimensionality and low sample size (HDLSS), current attempts at integrating AI in this domain require significant human guidance and expertise to mitigate overfitting. In this work we look at how transfer learning can be improved to learn from small RNA-seq sample sizes without significant human interference. The strategy is to gain general prior knowledge about a particular domain of data (e.g. RNA-seq data) by pre-training on a general task with a large aggregate of data, then fine-tuning to various specific, downstream target tasks in the same domain. Because previous attempts have shown traditional transfer learning failing on HDLSS, we propose to improve performance by using Guided Transfer Learning (GTL). Collaborating with Robots Go Mental, the AI we deploy here not only learns good initial parameters during pre-training, but also learns inductive biases that affect how the AI learns downstream tasks. In this approach, we first pre-trained on recount3 data, a collection of over 400,000 mouse RNA-seq samples sourced from thousands of individual studies. With such a large collection, patterns of expression between the ~30,000 genes in mammalian systems were pre-determined. Such patterns were sufficient for the pre-trained AI agent to efficiently learn new downstream tasks involving RNA-seq datasets with very low sample sizes and to perform notably better on few-shot learning tasks compared to the same model without pre-training.
|
2103.02163
|
Zhaoxia Yu
|
Tong Shen, Gyorgy Lur, Xiangmin Xu, Zhaoxia Yu
|
To Deconvolve, or Not to Deconvolve: Inferences of Neuronal Activities
using Calcium Imaging Data
| null | null | null | null |
q-bio.NC stat.AP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the increasing popularity of calcium imaging data in neuroscience
research, methods for analyzing calcium trace data are critical to address
various questions. The observed calcium traces are either analyzed directly or
deconvolved to spike trains to infer neuronal activities. When both approaches
are applicable, it is unclear whether deconvolving calcium traces is a
necessary step. In this article, we compare the performance of using calcium
traces or their deconvolved spike trains for three common analyses: clustering,
principal component analysis (PCA), and population decoding. Our simulations
and applications to real data suggest that the estimated spike data outperform
calcium trace data for both clustering and PCA. Although calcium trace data
show higher predictability than spike data at each time point, spike history or
cumulative spike counts are comparable to or better than calcium traces in
population decoding.
|
[
{
"created": "Wed, 3 Mar 2021 03:57:18 GMT",
"version": "v1"
}
] |
2021-03-04
|
[
[
"Shen",
"Tong",
""
],
[
"Lur",
"Gyorgy",
""
],
[
"Xu",
"Xiangmin",
""
],
[
"Yu",
"Zhaoxia",
""
]
] |
With the increasing popularity of calcium imaging data in neuroscience research, methods for analyzing calcium trace data are critical to address various questions. The observed calcium traces are either analyzed directly or deconvolved to spike trains to infer neuronal activities. When both approaches are applicable, it is unclear whether deconvolving calcium traces is a necessary step. In this article, we compare the performance of using calcium traces or their deconvolved spike trains for three common analyses: clustering, principal component analysis (PCA), and population decoding. Our simulations and applications to real data suggest that the estimated spike data outperform calcium trace data for both clustering and PCA. Although calcium trace data show higher predictability than spike data at each time point, spike history or cumulative spike counts are comparable to or better than calcium traces in population decoding.
|
q-bio/0401033
|
Lior Pachter
|
Lior Pachter, Bernd Sturmfels
|
Parametric Inference for Biological Sequence Analysis
|
15 pages, 4 figures. See also companion paper "Tropical Geometry of
Statistical Models" (q-bio.QM/0311009)
| null |
10.1073/pnas.0406011101
| null |
q-bio.GN cs.LG math.ST stat.TH
| null |
One of the major successes in computational biology has been the unification,
using the graphical model formalism, of a multitude of algorithms for
annotating and comparing biological sequences. Graphical models that have been
applied towards these problems include hidden Markov models for annotation,
tree models for phylogenetics, and pair hidden Markov models for alignment. A
single algorithm, the sum-product algorithm, solves many of the inference
problems associated with different statistical models. This paper introduces
the \emph{polytope propagation algorithm} for computing the Newton polytope of
an observation from a graphical model. This algorithm is a geometric version of
the sum-product algorithm and is used to analyze the parametric behavior of
maximum a posteriori inference calculations for graphical models.
|
[
{
"created": "Mon, 26 Jan 2004 03:50:03 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Pachter",
"Lior",
""
],
[
"Sturmfels",
"Bernd",
""
]
] |
One of the major successes in computational biology has been the unification, using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied towards these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems associated with different statistical models. This paper introduces the \emph{polytope propagation algorithm} for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
|
2303.00394
|
Sebastien Lambert
|
S\'ebastien Lambert (IHAP), Billy Bauzile (IHAP), Am\'elie Mugnier
(NeoCare), Benoit Durand (ANSES), Timoth\'ee Vergne (IHAP), Mathilde C Paul
(IHAP)
|
A systematic review of mechanistic models used to study avian influenza
virus transmission and control
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The global spread of avian influenza A viruses in domestic birds is causing
dramatic economic and social losses. Various mechanistic models have been
developed in an attempt to better understand avian influenza transmission and
to evaluate the effectiveness of control measures. However, no comprehensive
review of the mechanistic approaches used currently exists. To help fill this
gap, we conducted a systematic review of mechanistic models applied to
real-world epidemics to (1) describe the type of models and their
epidemiological context, (2) synthesise estimated values of AIV transmission
parameters and (3) review the control strategies most frequently evaluated and
their outcome. Forty-five articles qualified for inclusion; these fitted the
model to data and estimated parameter values (n = 42) and/or evaluated the
effectiveness of control strategies (n = 21). The majority were
population-based models (n = 26), followed by individual-based models (n = 15)
and a few metapopulation models (n = 4). Estimated values for the transmission
rate varied substantially according to epidemiological settings, virus subtypes
and epidemiological units. Other parameters such as the durations of the latent
and infectious periods were more frequently assumed, limiting the insights
brought by mechanistic models on these. Concerning control strategies, many
models evaluated culling (n = 15), while vaccination received less attention (n
= 7). According to the reviewed articles, optimal control strategies varied
between virus subtypes and local conditions, and also depended on the
objective. For instance, vaccination was optimal when the objective was to
limit the overall number of culled flocks, while pre-emptive culling was
preferred for reducing the epidemic size and duration. Earlier implementation
of interventions consistently improved the efficacy of control strategies,
highlighting the need for effective surveillance and epidemic preparedness.
Potential improvements of mechanistic models include explicitly accounting for
various transmission routes, and distinguishing poultry populations according
to species and farm type. To provide insights to policy makers in a timely
manner, aspects of the evaluation of control strategies that could deserve
further attention include: economic evaluation, combination of strategies
including vaccination, the use of optimization algorithms instead of comparing a
limited set of scenarios, and real-time evaluation.
|
[
{
"created": "Wed, 1 Mar 2023 10:26:33 GMT",
"version": "v1"
}
] |
2023-03-02
|
[
[
"Lambert",
"Sébastien",
"",
"IHAP"
],
[
"Bauzile",
"Billy",
"",
"IHAP"
],
[
"Mugnier",
"Amélie",
"",
"NeoCare"
],
[
"Durand",
"Benoit",
"",
"ANSES"
],
[
"Vergne",
"Timothée",
"",
"IHAP"
],
[
"Paul",
"Mathilde C",
"",
"IHAP"
]
] |
The global spread of avian influenza A viruses in domestic birds is causing dramatic economic and social losses. Various mechanistic models have been developed in an attempt to better understand avian influenza transmission and to evaluate the effectiveness of control measures. However, no comprehensive review of the mechanistic approaches used currently exists. To help fill this gap, we conducted a systematic review of mechanistic models applied to real-world epidemics to (1) describe the type of models and their epidemiological context, (2) synthesise estimated values of AIV transmission parameters and (3) review the control strategies most frequently evaluated and their outcome. Forty-five articles qualified for inclusion; these fitted the model to data and estimated parameter values (n = 42) and/or evaluated the effectiveness of control strategies (n = 21). The majority were population-based models (n = 26), followed by individual-based models (n = 15) and a few metapopulation models (n = 4). Estimated values for the transmission rate varied substantially according to epidemiological settings, virus subtypes and epidemiological units. Other parameters such as the durations of the latent and infectious periods were more frequently assumed, limiting the insights brought by mechanistic models on these. Concerning control strategies, many models evaluated culling (n = 15), while vaccination received less attention (n = 7). According to the reviewed articles, optimal control strategies varied between virus subtypes and local conditions, and also depended on the objective. For instance, vaccination was optimal when the objective was to limit the overall number of culled flocks, while pre-emptive culling was preferred for reducing the epidemic size and duration. Earlier implementation of interventions consistently improved the efficacy of control strategies, highlighting the need for effective surveillance and epidemic preparedness.
Potential improvements of mechanistic models include explicitly accounting for various transmission routes, and distinguishing poultry populations according to species and farm type. To provide insights to policy makers in a timely manner, aspects of the evaluation of control strategies that could deserve further attention include: economic evaluation, combination of strategies including vaccination, the use of optimization algorithms instead of comparing a limited set of scenarios, and real-time evaluation.
|
1910.03936
|
Mohammed Alser
|
Mohammed H K Alser
|
Accelerating the Understanding of Life's Code Through Better Algorithms
and Hardware Design
| null | null | null | null |
q-bio.GN cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Calculating the similarities between a pair of genomic sequences is one of
the most fundamental computational steps in genomic analysis. This step --
called sequence alignment -- is the computational bottleneck because: (1) it is
implemented using quadratic-time dynamic programming algorithms and (2) the
majority of candidate locations in the reference genome do not align with a
given read due to high dissimilarity. Calculating the alignment of such
incorrect candidate locations consumes an overwhelming majority of a modern
read mapper's execution time.
In this thesis, we introduce four new algorithms (GateKeeper, Shouji, MAGNET,
and SneakySnake) that function as a pre-alignment step and aim to filter out
most incorrect candidate locations. The first key idea of our pre-alignment
filters is to provide high filtering accuracy by correctly detecting all
similar segments shared between two sequences. The second key idea is to
exploit the massively parallel architecture of modern FPGAs for accelerating
our filtering algorithms. We also develop an efficient CPU implementation of
the SneakySnake algorithm for commodity desktops and servers. We evaluate the
benefits and downsides of our pre-alignment filtering approach in detail using
12 real datasets. In our evaluation, we demonstrate that our hardware
pre-alignment filters show two to three orders of magnitude speedup over their
equivalent CPU implementations. We also demonstrate that integrating our
hardware pre-alignment filters with the state-of-the-art read aligners reduces
the aligner's execution time by up to 21.5x. Finally, we show that efficient
CPU implementation of pre-alignment filtering still provides significant
benefits. We show that SneakySnake on average reduces the execution time of the
best performing CPU-based read aligners Edlib and Parasail, by up to 43x and
57.9x, respectively.
|
[
{
"created": "Tue, 8 Oct 2019 15:52:53 GMT",
"version": "v1"
}
] |
2019-10-10
|
[
[
"Alser",
"Mohammed H K",
""
]
] |
Calculating the similarities between a pair of genomic sequences is one of the most fundamental computational steps in genomic analysis. This step -- called sequence alignment -- is the computational bottleneck because: (1) it is implemented using quadratic-time dynamic programming algorithms and (2) the majority of candidate locations in the reference genome do not align with a given read due to high dissimilarity. Calculating the alignment of such incorrect candidate locations consumes an overwhelming majority of a modern read mapper's execution time. In this thesis, we introduce four new algorithms (GateKeeper, Shouji, MAGNET, and SneakySnake) that function as a pre-alignment step and aim to filter out most incorrect candidate locations. The first key idea of our pre-alignment filters is to provide high filtering accuracy by correctly detecting all similar segments shared between two sequences. The second key idea is to exploit the massively parallel architecture of modern FPGAs for accelerating our filtering algorithms. We also develop an efficient CPU implementation of the SneakySnake algorithm for commodity desktops and servers. We evaluate the benefits and downsides of our pre-alignment filtering approach in detail using 12 real datasets. In our evaluation, we demonstrate that our hardware pre-alignment filters show two to three orders of magnitude speedup over their equivalent CPU implementations. We also demonstrate that integrating our hardware pre-alignment filters with the state-of-the-art read aligners reduces the aligner's execution time by up to 21.5x. Finally, we show that efficient CPU implementation of pre-alignment filtering still provides significant benefits. We show that SneakySnake on average reduces the execution time of the best performing CPU-based read aligners Edlib and Parasail, by up to 43x and 57.9x, respectively.
|
1811.05088
|
Alexei Goun
|
Zachary Quine, Alexei Goun, Herschel Rabitz
|
Suppression of the Spectral Cross Talk of Optogenetic Switching by
Stimulated Depletion Quenching. Theoretical Analysis
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optogenetics is a rapidly growing field of biotechnology, potentially
allowing a deeper understanding and control of complex biological networks. The
major challenge is the multiplexed control of several optogenetic components in
the presence of significant spectral cross talk. We propose and demonstrate
through simulations a new control approach of Stimulated Depletion Quenching.
This approach is applied to the phytochrome Cph8 bidirectional optogenetic
switch, and the results show significant improvement of its dynamic range.
|
[
{
"created": "Tue, 13 Nov 2018 03:34:08 GMT",
"version": "v1"
}
] |
2018-11-14
|
[
[
"Quine",
"Zachary",
""
],
[
"Goun",
"Alexei",
""
],
[
"Rabitz",
"Herschel",
""
]
] |
Optogenetics is a rapidly growing field of biotechnology, potentially allowing a deeper understanding and control of complex biological networks. The major challenge is the multiplexed control of several optogenetic components in the presence of significant spectral cross talk. We propose and demonstrate through simulations a new control approach of Stimulated Depletion Quenching. This approach is applied to the phytochrome Cph8 bidirectional optogenetic switch, and the results show significant improvement of its dynamic range.
|
1510.09138
|
Axel Blau
|
Aurel Vasile Martiniuc, Victor Boco\c{s}-Bin\c{t}in\c{t}an, Rouhollah
Habibey, Asiyeh Golabchi, Alois Knoll, Axel Blau
|
Paired spiking robustly shapes spontaneous activity in neural networks
in vitro
|
23 pages, 9 figures; 2 supplementary pages, 2 supplementary figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In vivo, neurons establish functional connections and preserve information
along their synaptic pathways from one information processing stage to the next
in a very efficient manner. Paired spiking (PS) enhancement plays a key role by
acting as a temporal filter that deletes less informative spikes. We analyzed
the spontaneous neural activity evolution in a hippocampal and a cortical
network over several weeks exploring whether the same PS coding mechanism
appears in neuronal cultures as well. We show that self-organized neural in
vitro networks not only develop characteristic bursting activity, but feature
robust in vivo-like PS activity. PS activity formed spatiotemporal patterns
that started at early days in vitro (DIVs) and lasted until the end of the
recording sessions. Initially random-like and sparse PS patterns became robust
after three weeks in vitro (WIVs). They were characterized by a high number of
occurrences and short inter-paired spike intervals (IPSIs). Spatially, the
degree of complexity increased by recruiting new neighboring sites in PS as a
culture matured. Moreover, PS activity participated in establishing functional
connectivity between different sites within the developing network. Employing
transfer entropy (TE) as an information transfer measure, we show that PS
activity is robustly involved in establishing effective connectivities. Spiking
activity at both individual sites and network level robustly followed each PS
within a short time interval. PS may thus be considered a spiking predictor.
These findings suggest that PS activity is preserved in spontaneously active in
vitro networks as part of a robust coding mechanism as encountered in vivo. We
suggest that, presumably in the absence of any external sensory stimuli, PS may
act as an internal surrogate stimulus to drive neural activity at different
developmental stages.
|
[
{
"created": "Fri, 30 Oct 2015 15:56:23 GMT",
"version": "v1"
}
] |
2015-11-02
|
[
[
"Martiniuc",
"Aurel Vasile",
""
],
[
"Bocoş-Binţinţan",
"Victor",
""
],
[
"Habibey",
"Rouhollah",
""
],
[
"Golabchi",
"Asiyeh",
""
],
[
"Knoll",
"Alois",
""
],
[
"Blau",
"Axel",
""
]
] |
In vivo, neurons establish functional connections and preserve information along their synaptic pathways from one information processing stage to the next in a very efficient manner. Paired spiking (PS) enhancement plays a key role by acting as a temporal filter that deletes less informative spikes. We analyzed the spontaneous neural activity evolution in a hippocampal and a cortical network over several weeks, exploring whether the same PS coding mechanism appears in neuronal cultures as well. We show that self-organized neural in vitro networks not only develop characteristic bursting activity, but feature robust in vivo-like PS activity. PS activity formed spatiotemporal patterns that started at early days in vitro (DIVs) and lasted until the end of the recording sessions. Initially random-like and sparse PS patterns became robust after three weeks in vitro (WIVs). They were characterized by a high number of occurrences and short inter-paired spike intervals (IPSIs). Spatially, the degree of complexity increased by recruiting new neighboring sites in PS as a culture matured. Moreover, PS activity participated in establishing functional connectivity between different sites within the developing network. Employing transfer entropy (TE) as an information transfer measure, we show that PS activity is robustly involved in establishing effective connectivities. Spiking activity at both individual sites and network level robustly followed each PS within a short time interval. PS may thus be considered a spiking predictor. These findings suggest that PS activity is preserved in spontaneously active in vitro networks as part of a robust coding mechanism as encountered in vivo. We suggest that, presumably in the absence of any external sensory stimuli, PS may act as an internal surrogate stimulus to drive neural activity at different developmental stages.
|
1811.03674
|
Serena Bradde
|
Serena Bradde and Thierry Mora and Aleksandra M. Walczak
|
Cost and benefits of CRISPR spacer acquisition
|
5 figures, 10 pages
| null | null | null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CRISPR-Cas mediated immunity in bacteria allows bacterial populations to
protect themselves against pathogens. However, it also exposes them to the
dangers of auto-immunity by developing protection that targets its own genome.
Using a simple model of the coupled dynamics of phage and bacterial
populations, we explore how acquisition rates affect the survival rate of the
bacterial colony. We find that the optimal strategy depends on the initial
population sizes of both viruses and bacteria. Additionally, certain
combinations of acquisition and dynamical rates and initial population sizes
guarantee protection, due to a dynamical balance between the evolving
population sizes, without relying on acquisition of viral spacers. Outside this
regime, the high cost of auto-immunity limits the acquisition rate. We discuss
these optimal survival strategies in terms of recent experiments.
|
[
{
"created": "Thu, 8 Nov 2018 20:54:32 GMT",
"version": "v1"
}
] |
2018-11-12
|
[
[
"Bradde",
"Serena",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
]
] |
CRISPR-Cas mediated immunity in bacteria allows bacterial populations to protect themselves against pathogens. However, it also exposes them to the dangers of auto-immunity by developing protection that targets its own genome. Using a simple model of the coupled dynamics of phage and bacterial populations, we explore how acquisition rates affect the survival rate of the bacterial colony. We find that the optimal strategy depends on the initial population sizes of both viruses and bacteria. Additionally, certain combinations of acquisition and dynamical rates and initial population sizes guarantee protection, due to a dynamical balance between the evolving population sizes, without relying on acquisition of viral spacers. Outside this regime, the high cost of auto-immunity limits the acquisition rate. We discuss these optimal survival strategies in terms of recent experiments.
|
1610.00611
|
Grzegorz Wojcik Prof.
|
Dominik S. Kufel, Grzegorz M. Wojcik
|
Analytical modelling of temperature effects on synapses
|
14 pages, 11 figures
|
Kufel, D. S., & Wojcik, G. M. (2018). Analytical modelling of
temperature effects on an AMPA-type synapse. Journal of computational
neuroscience, 44(3), 379-391
|
10.1007/s10827-018-0684-x
| null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It was previously reported that temperature may significantly influence
neural dynamics at different levels of brain modelling. For this reason, models
in computational neuroscience should ideally be scalable across a wide range of
brain temperatures. Currently, however, a lack of experimental data and the
absence of an analytical model describing the influence of temperature on
synapses make it impossible to include temperature effects at the multi-neuron
modelling level. In this paper, we propose a first step towards solving this
problem: a new analytical model of AMPA-type synaptic conductance that is able
to include temperature effects under low-frequency stimulation. It was
constructed on the basis of a Markov-model description of AMPA receptor
kinetics and a few simplifications motivated both experimentally and by Monte
Carlo simulation of synaptic transmission. The model may be used for an
efficient and accurate implementation of temperature effects on AMPA receptor
conductance in large-scale neural network simulations. This, in turn, opens a
wide range of new possibilities for studying the influence of temperature on
brain functioning.
|
[
{
"created": "Mon, 3 Oct 2016 16:12:27 GMT",
"version": "v1"
}
] |
2018-08-30
|
[
[
"Kufel",
"Dominik S.",
""
],
[
"Wojcik",
"Grzegorz M.",
""
]
] |
It was previously reported that temperature may significantly influence neural dynamics at different levels of brain modelling. For this reason, models in computational neuroscience should ideally be scalable across a wide range of brain temperatures. Currently, however, a lack of experimental data and the absence of an analytical model describing the influence of temperature on synapses make it impossible to include temperature effects at the multi-neuron modelling level. In this paper, we propose a first step towards solving this problem: a new analytical model of AMPA-type synaptic conductance that is able to include temperature effects under low-frequency stimulation. It was constructed on the basis of a Markov-model description of AMPA receptor kinetics and a few simplifications motivated both experimentally and by Monte Carlo simulation of synaptic transmission. The model may be used for an efficient and accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. This, in turn, opens a wide range of new possibilities for studying the influence of temperature on brain functioning.
|
1303.7006
|
Bhavin Khatri
|
Bhavin S. Khatri and Richard A. Goldstein
|
Evolutionary stochastic dynamics of speciation and a simple
genotype-phenotype map for protein binding DNA
|
5 pages, 2 figures
|
Journal of Theoretical Biology, 378 (2015), p56-64
|
10.1016/j.jtbi.2015.04.027
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speciation is of fundamental importance to understanding the huge diversity
of life on Earth. In contrast to current phenomenological models, we develop a
biophysically motivated approach to study speciation involving the co-evolution
of protein binding DNA for two geographically isolated populations. Our results
predict that, despite neutral diffusion of hybrids in trait space, smaller
populations have a higher rate of speciation, due to sequence entropy poising
populations more closely to incompatible regions of phenotype space. A key
lesson of this work is that non-trivial contributions of sequence entropy give
rise to a strong population size dependence on speciation rates.
|
[
{
"created": "Wed, 27 Mar 2013 23:32:59 GMT",
"version": "v1"
},
{
"created": "Sun, 12 May 2013 10:27:34 GMT",
"version": "v2"
}
] |
2015-05-18
|
[
[
"Khatri",
"Bhavin S.",
""
],
[
"Goldstein",
"Richard A.",
""
]
] |
Speciation is of fundamental importance to understanding the huge diversity of life on Earth. In contrast to current phenomenological models, we develop a biophysically motivated approach to study speciation involving the co-evolution of protein binding DNA for two geographically isolated populations. Our results predict that, despite neutral diffusion of hybrids in trait space, smaller populations have a higher rate of speciation, due to sequence entropy poising populations more closely to incompatible regions of phenotype space. A key lesson of this work is that non-trivial contributions of sequence entropy give rise to a strong population size dependence on speciation rates.
|
1411.2017
|
Kristina Crona
|
Kristina Crona
|
Recombination and peak jumping
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We find an advantage of recombination for a category of complex fitness
landscapes. Recent studies of empirical fitness landscapes reveal complex gene
interactions and multiple peaks, and recombination can be a powerful mechanism
for escaping suboptimal peaks. However, classical work on recombination largely
ignores the effect of complex gene interactions. The advantage we find has no
correspondence for 2-locus systems or for smooth landscapes. The effect is
sometimes extreme, in the sense that shutting off recombination could result in
the organism failing to adapt. A standard question about recombination is
whether the mechanism tends to accelerate or decelerate adaptation. However, we
argue that extreme effects may be more important than how the majority falls.
|
[
{
"created": "Fri, 7 Nov 2014 20:06:18 GMT",
"version": "v1"
}
] |
2014-11-10
|
[
[
"Crona",
"Kristina",
""
]
] |
We find an advantage of recombination for a category of complex fitness landscapes. Recent studies of empirical fitness landscapes reveal complex gene interactions and multiple peaks, and recombination can be a powerful mechanism for escaping suboptimal peaks. However, classical work on recombination largely ignores the effect of complex gene interactions. The advantage we find has no correspondence for 2-locus systems or for smooth landscapes. The effect is sometimes extreme, in the sense that shutting off recombination could result in the organism failing to adapt. A standard question about recombination is whether the mechanism tends to accelerate or decelerate adaptation. However, we argue that extreme effects may be more important than how the majority falls.
|
2301.10850
|
Mohammad Anwar Ul Alam
|
Mohammad Anwar Ul Alam, Aschalew Kassu, and Lamin Kassama
|
Effect of sonication time and surfactant concentration on improving the
bio-accessibility of lycopene synthesized in poly-lactic co-glycolic acid
nanoparticles
| null |
2022 ASABE Annual International Meeting 2200474
|
10.13031/aim.202200474
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of biodegradable polymers simplifies the development of therapeutic
devices with regard to transient implants and three-dimensional platforms
suitable for tissue engineering. Further advances have also occurred in the
controlled-release mechanisms of bioactive compounds encapsulated in
biodegradable polymers. This application requires an understanding of the
physicochemical properties of the polymeric materials and their inherent impact
on the delivery of the encapsulated bioactives. Hence, the objective of this
study was to evaluate the effect of surfactant and sonication time on the
bio-accessibility of lycopene-encapsulated polymeric nanoparticles. The
emulsion evaporation method was used to encapsulate lycopene in poly-lactic
co-glycolic acid (PLGA), with surfactant concentration, sonication time and
polymer concentration as independent variables. Physicochemical and
morphological characteristics were measured with a zetasizer and SEM, the
encapsulation efficiency and controlled-release kinetics were measured
spectrophotometrically, and the dialysis method was used to estimate
bioaccessibility. The results showed that sonication time significantly
(p < 0.05) influenced the encapsulation efficiency. A sonication time of 4 min
yielded an encapsulation efficiency of 78%, which increased to 97% with
increased sonication time (6 min). Increased sonication time had a decreasing
effect on the hydrodynamic diameter and stability of the encapsulated
nanoparticles. A slow release of lycopene was observed during the first 12
days, followed by a burst release of about 44% on the 13th day in vitro. The
study will have a significant impact on the manufacturing of functional foods
with encapsulated ingredients and provides an understanding of their inherent
controlled-release mechanism in the GIT.
|
[
{
"created": "Wed, 25 Jan 2023 22:18:44 GMT",
"version": "v1"
}
] |
2023-01-27
|
[
[
"Alam",
"Mohammad Anwar Ul",
""
],
[
"Kassu",
"Aschalew",
""
],
[
"Kassama",
"Lamin",
""
]
] |
The use of biodegradable polymers simplifies the development of therapeutic devices with regard to transient implants and three-dimensional platforms suitable for tissue engineering. Further advances have also occurred in the controlled-release mechanisms of bioactive compounds encapsulated in biodegradable polymers. This application requires an understanding of the physicochemical properties of the polymeric materials and their inherent impact on the delivery of the encapsulated bioactives. Hence, the objective of this study was to evaluate the effect of surfactant and sonication time on the bio-accessibility of lycopene-encapsulated polymeric nanoparticles. The emulsion evaporation method was used to encapsulate lycopene in poly-lactic co-glycolic acid (PLGA), with surfactant concentration, sonication time and polymer concentration as independent variables. Physicochemical and morphological characteristics were measured with a zetasizer and SEM, the encapsulation efficiency and controlled-release kinetics were measured spectrophotometrically, and the dialysis method was used to estimate bioaccessibility. The results showed that sonication time significantly (p < 0.05) influenced the encapsulation efficiency. A sonication time of 4 min yielded an encapsulation efficiency of 78%, which increased to 97% with increased sonication time (6 min). Increased sonication time had a decreasing effect on the hydrodynamic diameter and stability of the encapsulated nanoparticles. A slow release of lycopene was observed during the first 12 days, followed by a burst release of about 44% on the 13th day in vitro. The study will have a significant impact on the manufacturing of functional foods with encapsulated ingredients and provides an understanding of their inherent controlled-release mechanism in the GIT.
|
2204.09225
|
Xiaowei Yu
|
Xiaowei Yu, Lu Zhang, Lin Zhao, Yanjun Lyu, Tianming Liu, Dajiang Zhu
|
Disentangling Spatial-Temporal Functional Brain Networks via
Twin-Transformers
|
full pages
| null | null | null |
q-bio.NC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How to identify and characterize functional brain networks (BN) is
fundamental to gaining system-level insights into the mechanisms of brain
organizational architecture. Current functional magnetic resonance imaging
(fMRI) analysis relies heavily on prior knowledge of specific patterns in
either the spatial (e.g., resting-state network) or temporal (e.g., task
stimulus) domain. In addition, most approaches aim to find group-wise common
functional networks, while individual-specific functional networks have rarely
been studied. In this work, we propose a novel Twin-Transformers framework to
simultaneously infer common and individual functional networks in both spatial
and temporal space, in a self-supervised manner. The first transformer takes
space-divided information as input and generates spatial features, while the
second transformer takes time-related information as input and outputs temporal
features. The spatial and temporal features are further separated into common
and individual ones via interactions (weight sharing) and constraints between
the two transformers. We applied our Twin-Transformers to the Human Connectome
Project (HCP) motor task-fMRI dataset and identified multiple common brain
networks, including both task-related and resting-state networks (e.g., the
default mode network). Interestingly, we also successfully recovered a set of
individual-specific networks that are not related to the task stimulus and
only exist at the individual level.
|
[
{
"created": "Wed, 20 Apr 2022 04:57:53 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Jun 2022 18:46:03 GMT",
"version": "v2"
}
] |
2022-06-28
|
[
[
"Yu",
"Xiaowei",
""
],
[
"Zhang",
"Lu",
""
],
[
"Zhao",
"Lin",
""
],
[
"Lyu",
"Yanjun",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhu",
"Dajiang",
""
]
] |
How to identify and characterize functional brain networks (BN) is fundamental to gaining system-level insights into the mechanisms of brain organizational architecture. Current functional magnetic resonance imaging (fMRI) analysis relies heavily on prior knowledge of specific patterns in either the spatial (e.g., resting-state network) or temporal (e.g., task stimulus) domain. In addition, most approaches aim to find group-wise common functional networks, while individual-specific functional networks have rarely been studied. In this work, we propose a novel Twin-Transformers framework to simultaneously infer common and individual functional networks in both spatial and temporal space, in a self-supervised manner. The first transformer takes space-divided information as input and generates spatial features, while the second transformer takes time-related information as input and outputs temporal features. The spatial and temporal features are further separated into common and individual ones via interactions (weight sharing) and constraints between the two transformers. We applied our Twin-Transformers to the Human Connectome Project (HCP) motor task-fMRI dataset and identified multiple common brain networks, including both task-related and resting-state networks (e.g., the default mode network). Interestingly, we also successfully recovered a set of individual-specific networks that are not related to the task stimulus and only exist at the individual level.
|
2012.05294
|
Imelda Trejo
|
Imelda Trejo and Nicolas Hengartner
|
A modified Susceptible-Infected-Recovered model for observed
under-reported incidence data
| null | null | null | null |
q-bio.PE stat.AP stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Fitting Susceptible-Infected-Recovered (SIR) models to incidence data is
problematic when not all infected individuals are reported. Assuming an
underlying SIR model with general but known distribution for the time to
recovery, this paper derives the implied differential-integral equations for
observed incidence data when a fixed fraction of newly infected individuals are
not observed. The parameters of the resulting system of differential equations
are identifiable. Using these differential equations, we develop a stochastic
model for the conditional distribution of current disease incidence given the
entire past history of reported cases. We estimate the model parameters using
Bayesian Markov Chain Monte-Carlo sampling of the posterior distribution. We
use our model to estimate the transmission rate and fraction of asymptomatic
individuals for the current Coronavirus 2019 outbreak in eight American
countries: the United States of America, Brazil, Mexico, Argentina, Chile,
Colombia, Peru, and Panama, from January 2020 to May 2021. Our analysis reveals
that, consistently, about 40-60% of the infections were not observed in the
American outbreaks. The two exceptions are Mexico and Peru, with acute
under-reporting in Mexico.
|
[
{
"created": "Wed, 9 Dec 2020 20:05:49 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Aug 2021 18:39:39 GMT",
"version": "v2"
}
] |
2021-08-16
|
[
[
"Trejo",
"Imelda",
""
],
[
"Hengartner",
"Nicolas",
""
]
] |
Fitting Susceptible-Infected-Recovered (SIR) models to incidence data is problematic when not all infected individuals are reported. Assuming an underlying SIR model with general but known distribution for the time to recovery, this paper derives the implied differential-integral equations for observed incidence data when a fixed fraction of newly infected individuals are not observed. The parameters of the resulting system of differential equations are identifiable. Using these differential equations, we develop a stochastic model for the conditional distribution of current disease incidence given the entire past history of reported cases. We estimate the model parameters using Bayesian Markov Chain Monte-Carlo sampling of the posterior distribution. We use our model to estimate the transmission rate and fraction of asymptomatic individuals for the current Coronavirus 2019 outbreak in eight American countries: the United States of America, Brazil, Mexico, Argentina, Chile, Colombia, Peru, and Panama, from January 2020 to May 2021. Our analysis reveals that, consistently, about 40-60% of the infections were not observed in the American outbreaks. The two exceptions are Mexico and Peru, with acute under-reporting in Mexico.
|
q-bio/0509014
|
Mathieu Bouville
|
Mathieu Bouville
|
Fermentation kinetics including product and substrate inhibitions plus
biomass death: a mathematical analysis
|
4 pages, 4 figures
|
Biotechnology Letters 29(5), 737-741 (2007)
|
10.1007/s10529-006-9296-z
| null |
q-bio.QM q-bio.PE
| null |
Fermentation is generally modelled by kinetic equations giving the time
evolutions for biomass, substrate, and product concentrations. Although these
equations can be solved analytically in simple cases if substrate/product
inhibition and biomass death are included, they are typically solved
numerically. We propose an analytical treatment of the kinetic equations
--including cell death and an arbitrary number of inhibitions-- in which
constant yield need not be assumed. Equations are solved in phase space, i.e.
the biomass concentration is written explicitly as a function of the substrate
concentration.
|
[
{
"created": "Tue, 13 Sep 2005 15:58:08 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Oct 2005 10:54:29 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Bouville",
"Mathieu",
""
]
] |
Fermentation is generally modelled by kinetic equations giving the time evolutions for biomass, substrate, and product concentrations. Although these equations can be solved analytically in simple cases if substrate/product inhibition and biomass death are included, they are typically solved numerically. We propose an analytical treatment of the kinetic equations --including cell death and an arbitrary number of inhibitions-- in which constant yield need not be assumed. Equations are solved in phase space, i.e. the biomass concentration is written explicitly as a function of the substrate concentration.
|
1803.09005
|
Mathilde Badoual
|
Chlo\'e Gerin, Johan Pallud, Christophe Deroulers, Pascale Varlet,
Catherine Oppenheim, Fran\c{c}ois-Xavier Roux, Fabrice Chr\'etien, Stephen
Randy Thomas, Basile Grammaticos, Mathilde Badoual
|
Quantitative characterization of the imaging limits of diffuse low-grade
oligodendrogliomas
|
19 pages, 1 table, 6 figures, 1 supplementary figure
|
Neuro Oncology 2013 Oct;15(10):1379-88
|
10.1093/neuonc/not072
| null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background : Supratentorial diffuse low-grade gliomas in adults extend beyond
maximal visible MRI-defined abnormalities, and a gap exists between the imaging
signal changes and the actual tumor margins. Direct quantitative comparisons
between imaging and histological analyses are lacking to date. However, they
are of the utmost importance if one wishes to develop realistic models for
diffuse glioma growth.
Methods : In this study, we quantitatively compare the cell concentration and
the edema fraction from human histological biopsy samples (BSs) performed
inside and outside imaging abnormalities during serial imaging-based
stereotactic biopsy of diffuse low-grade gliomas.
Results : The cell concentration was significantly higher in BSs located
inside (1189 $\pm$ 378 cell/mm$^2$) than outside (740 $\pm$ 124 cell/mm$^2$)
MRI-defined abnormalities (p=0.0003). The edema fraction was significantly
higher in BSs located inside (mean, 45 $\pm$ 23%) than outside (mean, 5 $\pm$
9%) MRI-defined abnormalities (p<0.0001). At borders of the MRI-defined
abnormalities, 20% of the tissue surface area was occupied by edema, and only
3% by tumor cells. The cycling cell concentration was significantly higher in
BSs located inside (10 $\pm$ 12 cell/mm$^2$) compared to outside (0.5 $\pm$ 0.9
cell/mm$^2$) MRI-defined abnormalities (p=0.0001).
Conclusions : We show that the margins of T2-weighted signal changes are
mainly correlated with the edema fraction. In 62.5% of patients, the cycling
tumor cell fraction (defined as the ratio of the cycling tumor cell
concentration to the total number of tumor cells) was higher at the limits of
the MRI-defined abnormalities than closer to the center of the tumor. In the
remaining patients, the cycling tumor cell fraction increased towards the
center of the tumor.
|
[
{
"created": "Fri, 23 Mar 2018 22:49:04 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Gerin",
"Chloé",
""
],
[
"Pallud",
"Johan",
""
],
[
"Deroulers",
"Christophe",
""
],
[
"Varlet",
"Pascale",
""
],
[
"Oppenheim",
"Catherine",
""
],
[
"Roux",
"François-Xavier",
""
],
[
"Chrétien",
"Fabrice",
""
],
[
"Thomas",
"Stephen Randy",
""
],
[
"Grammaticos",
"Basile",
""
],
[
"Badoual",
"Mathilde",
""
]
] |
Background : Supratentorial diffuse low-grade gliomas in adults extend beyond maximal visible MRI-defined abnormalities, and a gap exists between the imaging signal changes and the actual tumor margins. Direct quantitative comparisons between imaging and histological analyses are lacking to date. However, they are of the utmost importance if one wishes to develop realistic models for diffuse glioma growth. Methods : In this study, we quantitatively compare the cell concentration and the edema fraction from human histological biopsy samples (BSs) performed inside and outside imaging abnormalities during serial imaging-based stereotactic biopsy of diffuse low-grade gliomas. Results : The cell concentration was significantly higher in BSs located inside (1189 $\pm$ 378 cell/mm$^2$) than outside (740 $\pm$ 124 cell/mm$^2$) MRI-defined abnormalities (p=0.0003). The edema fraction was significantly higher in BSs located inside (mean, 45 $\pm$ 23%) than outside (mean, 5 $\pm$ 9%) MRI-defined abnormalities (p<0.0001). At borders of the MRI-defined abnormalities, 20% of the tissue surface area was occupied by edema, and only 3% by tumor cells. The cycling cell concentration was significantly higher in BSs located inside (10 $\pm$ 12 cell/mm$^2$) compared to outside (0.5 $\pm$ 0.9 cell/mm$^2$) MRI-defined abnormalities (p=0.0001). Conclusions : We show that the margins of T2-weighted signal changes are mainly correlated with the edema fraction. In 62.5% of patients, the cycling tumor cell fraction (defined as the ratio of the cycling tumor cell concentration to the total number of tumor cells) was higher at the limits of the MRI-defined abnormalities than closer to the center of the tumor. In the remaining patients, the cycling tumor cell fraction increased towards the center of the tumor.
|
2201.04296
|
Diego Martinez
|
J. Chacon, M. Pimentel, A. Pedroso, A. Ferreira, D. Martinez, C.
Ruelas
|
In vitro evaluation of the effect of Ceftiofur Sodium and of a new
Gentamycin Sulfate formulation on the viability of Marek disease virus
|
in Spanish
|
Proceedings of the XX Latin American Poultry Congress. 2007. Porto
Alegre, Brazil
| null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The present study evaluated the in vitro effect of gentamicin sulfate and
ceftiofur sodium on the viability of Marek's disease virus. The titer of the
cell-associated turkey herpesvirus (HVT) vaccine was not appreciably reduced
when incubated with 50 mg/ml of gentamicin sulfate or ceftiofur sodium.
No statistically significant difference was found between the numbers of
plaque-forming units (PFU) of the reconstituted vaccine associated with either
antibiotic at 0, 15, 30 and 60 minutes after reconstitution of the vaccine. The
antibiotics did not considerably alter the pH values. There was a significant
decrease in the titer of all vaccinal solutions when they were inoculated 30
and 60 minutes after the reconstitution of the vaccine. Nevertheless, these
titers are higher than those required to protect against Marek's disease.
|
[
{
"created": "Wed, 12 Jan 2022 04:38:41 GMT",
"version": "v1"
}
] |
2022-01-13
|
[
[
"Chacon",
"J.",
""
],
[
"Pimentel",
"M.",
""
],
[
"Pedroso",
"A.",
""
],
[
"Ferreira",
"A.",
""
],
[
"Martinez",
"D.",
""
],
[
"Ruelas",
"C.",
""
]
] |
The present study evaluated the in vitro effect of gentamicin sulfate and ceftiofur sodium on the viability of Marek's disease virus. The titer of the cell-associated turkey herpesvirus (HVT) vaccine was not appreciably reduced when incubated with 50 mg/ml of gentamicin sulfate or ceftiofur sodium. No statistically significant difference was found between the numbers of plaque-forming units (PFU) of the reconstituted vaccine associated with either antibiotic at 0, 15, 30 and 60 minutes after reconstitution of the vaccine. The antibiotics did not considerably alter the pH values. There was a significant decrease in the titer of all vaccinal solutions when they were inoculated 30 and 60 minutes after the reconstitution of the vaccine. Nevertheless, these titers are higher than those required to protect against Marek's disease.
|
1712.08058
|
Christian Dansereau
|
Christian Dansereau, Angela Tam, AmanPreet Badhwar, Sebastian Urchs,
Pierre Orban, Pedro Rosa-Neto, Pierre Bellec
|
A brain signature highly predictive of future progression to Alzheimer's
dementia
| null | null | null | null |
q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Early prognosis of Alzheimer's dementia is hard. Mild cognitive impairment
(MCI) typically precedes Alzheimer's dementia, yet only a fraction of MCI
individuals will progress to dementia, even when screened using biomarkers. We
propose here to identify a subset of individuals who share a common brain
signature highly predictive of oncoming dementia. This signature was composed
of brain atrophy and functional dysconnectivity and discovered using a machine
learning model in patients suffering from dementia. The model recognized the
same brain signature in MCI individuals, 90% of whom progressed to dementia
within three years. This result is a marked improvement on the state-of-the-art
in prognostic precision, while the brain signature still identified 47% of all
MCI progressors. We thus discovered a sizable MCI subpopulation which
represents an excellent recruitment target for clinical trials at the prodromal
stage of Alzheimer's disease.
|
[
{
"created": "Thu, 21 Dec 2017 16:26:48 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Mar 2018 17:37:11 GMT",
"version": "v2"
}
] |
2018-03-05
|
[
[
"Dansereau",
"Christian",
""
],
[
"Tam",
"Angela",
""
],
[
"Badhwar",
"AmanPreet",
""
],
[
"Urchs",
"Sebastian",
""
],
[
"Orban",
"Pierre",
""
],
[
"Rosa-Neto",
"Pedro",
""
],
[
"Bellec",
"Pierre",
""
]
] |
Early prognosis of Alzheimer's dementia is hard. Mild cognitive impairment (MCI) typically precedes Alzheimer's dementia, yet only a fraction of MCI individuals will progress to dementia, even when screened using biomarkers. We propose here to identify a subset of individuals who share a common brain signature highly predictive of oncoming dementia. This signature was composed of brain atrophy and functional dysconnectivity and discovered using a machine learning model in patients suffering from dementia. The model recognized the same brain signature in MCI individuals, 90% of whom progressed to dementia within three years. This result is a marked improvement on the state-of-the-art in prognostic precision, while the brain signature still identified 47% of all MCI progressors. We thus discovered a sizable MCI subpopulation which represents an excellent recruitment target for clinical trials at the prodromal stage of Alzheimer's disease.
|
2308.02172
|
Haotian Zhang
|
Haotian Zhang, Huifeng Zhao, Xujun Zhang, Qun Su, Hongyan Du, Chao
Shen, Zhe Wang, Dan Li, Peichen Pan, Guangyong Chen, Yu Kang, Chang-yu Hsieh,
Tingjun Hou
|
Delete: Deep Lead Optimization Enveloped in Protein Pocket through
Unified Deleting Strategies and a Structure-aware Network
| null | null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Drug discovery is a highly complicated process, and it is unfeasible to fully
commit it to the recently developed molecular generation methods. Deep
learning-based lead optimization takes expert knowledge as a starting point,
learning from numerous historical cases about how to modify the structure for
better drug-forming properties. However, compared with the more established de
novo generation schemes, lead optimization is still an area that requires
further exploration. Previously developed models are often limited to resolving
one (or few) certain subtask(s) of lead optimization, and most of them can only
generate the two-dimensional structures of molecules while disregarding the
vital protein-ligand interactions based on the three-dimensional binding poses.
To address these challenges, we present a novel tool for lead optimization,
named Delete (Deep lead optimization enveloped in protein pocket). Our model
can handle all subtasks of lead optimization involving fragment growing,
linking, and replacement through a unified deleting (masking) strategy, and is
aware of the intricate pocket-ligand interactions through the geometric design
of networks. Statistical evaluations and case studies conducted on individual
subtasks demonstrate that Delete has a significant ability to produce molecules
with superior binding affinities to protein targets and reasonable
drug-likeness from given fragments or atoms. This feature may assist medicinal
chemists in developing not only me-too/me-better products from existing drugs
but also hit-to-lead for first-in-class drugs in a highly efficient manner.
|
[
{
"created": "Fri, 4 Aug 2023 07:18:08 GMT",
"version": "v1"
}
] |
2023-08-07
|
[
[
"Zhang",
"Haotian",
""
],
[
"Zhao",
"Huifeng",
""
],
[
"Zhang",
"Xujun",
""
],
[
"Su",
"Qun",
""
],
[
"Du",
"Hongyan",
""
],
[
"Shen",
"Chao",
""
],
[
"Wang",
"Zhe",
""
],
[
"Li",
"Dan",
""
],
[
"Pan",
"Peichen",
""
],
[
"Chen",
"Guangyong",
""
],
[
"Kang",
"Yu",
""
],
[
"Hsieh",
"Chang-yu",
""
],
[
"Hou",
"Tingjun",
""
]
] |
Drug discovery is a highly complicated process, and it is unfeasible to fully commit it to the recently developed molecular generation methods. Deep learning-based lead optimization takes expert knowledge as a starting point, learning from numerous historical cases about how to modify the structure for better drug-forming properties. However, compared with the more established de novo generation schemes, lead optimization is still an area that requires further exploration. Previously developed models are often limited to resolving one (or few) certain subtask(s) of lead optimization, and most of them can only generate the two-dimensional structures of molecules while disregarding the vital protein-ligand interactions based on the three-dimensional binding poses. To address these challenges, we present a novel tool for lead optimization, named Delete (Deep lead optimization enveloped in protein pocket). Our model can handle all subtasks of lead optimization involving fragment growing, linking, and replacement through a unified deleting (masking) strategy, and is aware of the intricate pocket-ligand interactions through the geometric design of networks. Statistical evaluations and case studies conducted on individual subtasks demonstrate that Delete has a significant ability to produce molecules with superior binding affinities to protein targets and reasonable drug-likeness from given fragments or atoms. This feature may assist medicinal chemists in developing not only me-too/me-better products from existing drugs but also hit-to-lead for first-in-class drugs in a highly efficient manner.
|
1303.0402
|
Mayya Miftakhova
|
V.V. Babenko, M.M. Miftakhova, D.V. Yavna
|
Organizational properties of second-order visual filters sensitive to
the orientation modulations
|
4 pages, 3 figures; this is the English version of paper; it's
published in Russian
|
Babenko V.V., Miftakhova M.M., Yavna D.V. Organizational
properties of second-order visual filters sensitive to the orientation
modulations // Proceedings of 10th International Conference "Applied Optics",
St. Petersburg. P. 331-334. 2012
| null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Here we present a psychophysical study of second-order visual mechanisms
sensitive to orientation modulations. Selectivity to orientation, phase and
spatial frequency of modulation is measured. Bandwidths for phase (0.5{\pi})
and orientation (33.75 deg) are defined, but there is no evidence for spatial
frequency selectivity.
|
[
{
"created": "Sat, 2 Mar 2013 16:50:14 GMT",
"version": "v1"
}
] |
2013-03-05
|
[
[
"Babenko",
"V. V.",
""
],
[
"Miftakhova",
"M. M.",
""
],
[
"Yavna",
"D. V.",
""
]
] |
Here we present a psychophysical study of second-order visual mechanisms sensitive to orientation modulations. Selectivity to orientation, phase and spatial frequency of modulation is measured. Bandwidths for phase (0.5{\pi}) and orientation (33.75 deg) are defined, but there is no evidence for spatial frequency selectivity.
|
1509.02745
|
Klaus Jaffe Dr
|
Klaus Jaffe
|
Extended Inclusive Fitness Theory bridges Economics and Biology through
a common understanding of Social Synergy
|
Bioeconomics, Synergy, Complexity
|
SpringerPlus. 5(1) 1-19, 2016, 5:1092
|
10.1186/s40064-016-2750-z
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inclusive Fitness Theory (IFT) was proposed half a century ago by W.D.
Hamilton to explain the emergence and maintenance of cooperation between
individuals that allows the existence of society. Contemporary evolutionary
ecology identified several factors that increase inclusive fitness, in addition
to kin-selection, such as assortation or homophily, and social synergies
triggered by cooperation. Here we propose an Extended Inclusive Fitness Theory
(EIFT) that includes in the fitness calculation all direct and indirect
benefits an agent obtains through its own actions, and through interactions
with kin and with genetically unrelated individuals. This formulation focuses
on the sustainable cost/benefit threshold ratio of cooperation and on the
probability of agents sharing mutually compatible memes or genes. This broader
description of the nature of social dynamics allows us to compare the evolution
of cooperation among kin and non-kin, intra- and inter-specific cooperation,
co-evolution, the emergence of symbioses, of social synergies, and the
emergence of division of labor. EIFT promotes interdisciplinary
cross-fertilization of ideas by allowing us to describe the role of division of
labor in the emergence of social synergies, providing an integrated framework
for the study of both biological evolution of social behavior and economic
market dynamics.
|
[
{
"created": "Wed, 9 Sep 2015 12:18:00 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Nov 2015 19:26:28 GMT",
"version": "v2"
}
] |
2017-03-07
|
[
[
"Jaffe",
"Klaus",
""
]
] |
Inclusive Fitness Theory (IFT) was proposed half a century ago by W.D. Hamilton to explain the emergence and maintenance of cooperation between individuals that allows the existence of society. Contemporary evolutionary ecology identified several factors that increase inclusive fitness, in addition to kin-selection, such as assortation or homophily, and social synergies triggered by cooperation. Here we propose an Extended Inclusive Fitness Theory (EIFT) that includes in the fitness calculation all direct and indirect benefits an agent obtains through its own actions, and through interactions with kin and with genetically unrelated individuals. This formulation focuses on the sustainable cost/benefit threshold ratio of cooperation and on the probability of agents sharing mutually compatible memes or genes. This broader description of the nature of social dynamics allows us to compare the evolution of cooperation among kin and non-kin, intra- and inter-specific cooperation, co-evolution, the emergence of symbioses, of social synergies, and the emergence of division of labor. EIFT promotes interdisciplinary cross-fertilization of ideas by allowing us to describe the role of division of labor in the emergence of social synergies, providing an integrated framework for the study of both biological evolution of social behavior and economic market dynamics.
|
2309.07082
|
Bel\'en Valenzuela
|
Bel\'en Valenzuela
|
Landau model to illustrate the process of learning and unlearning of
nociplastic pain
| null | null | null | null |
q-bio.TO physics.soc-ph q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in the comprehension of the consolidation of nociplastic pain
point to a complex nonconscious learnt process of threat perception.
Neurobiological education is emerging as a promising approach to unlearn
nociplastic pain, supported by biopsychosocial tools (exposure to movement,
mindfulness, sharing group format...). However, this approach is still poorly
known by clinicians and society in general, forming a communication problem
that, unfortunately, perpetuates the suffering of the patients. We propose a
Landau model to describe the process of learning and unlearning nociplastic
pain, to help clarify this complex situation and facilitate communication
between different sectors of society. Nociplastic pain corresponds to a
first-order transition with attention more likely in the alert-protection
state than in the trust-explore state. Two appealing results of the model are
that the perception of the critical context depends on the personal history of
the symptom, and that biopsychosocial loops are formed when alarming learnt
historic information about the symptom coexists with confused and
contradictory expert information, as in nocebo messages. Learning and
unlearning in the model correspond to a change in control parameters able to
give more weight to the alert-protection state, trust-explore state or neutral
state. This description makes clear why neurobiological education is the
ground therapy from which others must be built to embody pertinent, clear and
trustful information.
|
[
{
"created": "Mon, 11 Sep 2023 11:08:13 GMT",
"version": "v1"
}
] |
2023-09-14
|
[
[
"Valenzuela",
"Belén",
""
]
] |
Recent advances in the comprehension of the consolidation of nociplastic pain point to a complex nonconscious learnt process of threat perception. Neurobiological education is emerging as a promising approach to unlearn nociplastic pain, supported by biopsychosocial tools (exposure to movement, mindfulness, sharing group format...). However, this approach is still poorly known by clinicians and society in general, forming a communication problem that, unfortunately, perpetuates the suffering of the patients. We propose a Landau model to describe the process of learning and unlearning nociplastic pain, to help clarify this complex situation and facilitate communication between different sectors of society. Nociplastic pain corresponds to a first-order transition with attention more likely in the alert-protection state than in the trust-explore state. Two appealing results of the model are that the perception of the critical context depends on the personal history of the symptom, and that biopsychosocial loops are formed when alarming learnt historic information about the symptom coexists with confused and contradictory expert information, as in nocebo messages. Learning and unlearning in the model correspond to a change in control parameters able to give more weight to the alert-protection state, trust-explore state or neutral state. This description makes clear why neurobiological education is the ground therapy from which others must be built to embody pertinent, clear and trustful information.
|
1806.02171
|
Davide Fiore
|
Davide Fiore and Agostino Guarino and Mario di Bernardo
|
Analysis and control of genetic toggle switches subject to periodic
multi-input stimulation
|
Preprint accepted for publication on L-CSS (First submission
31.05.2018, accepted 21.08.2018)
|
IEEE Control Systems Letters, vol. 3, no. 2, pp. 278-283, 2019
|
10.1109/LCSYS.2018.2868925
| null |
q-bio.MN cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we analyze a genetic toggle switch recently studied in the
literature where the expression of two repressor proteins can be tuned by
controlling two different inputs, namely the concentration of two inducer
molecules in the growth medium of the cells. Specifically, we investigate the
dynamics of this system when subject to pulse-width modulated (PWM) input. We
provide an analytical model that captures qualitatively the experimental
observations reported in the literature and approximates its asymptotic
behavior. We also discuss the effect that the system parameters have on the
prediction accuracy of the model. Moreover, we propose a possible external
control strategy to regulate the mean value of the fluorescence of the reporter
proteins when the cells are subject to such periodic forcing.
|
[
{
"created": "Wed, 6 Jun 2018 13:28:29 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Sep 2018 08:17:21 GMT",
"version": "v2"
}
] |
2020-03-18
|
[
[
"Fiore",
"Davide",
""
],
[
"Guarino",
"Agostino",
""
],
[
"di Bernardo",
"Mario",
""
]
] |
In this letter, we analyze a genetic toggle switch recently studied in the literature where the expression of two repressor proteins can be tuned by controlling two different inputs, namely the concentration of two inducer molecules in the growth medium of the cells. Specifically, we investigate the dynamics of this system when subject to pulse-width modulated (PWM) input. We provide an analytical model that captures qualitatively the experimental observations reported in the literature and approximates its asymptotic behavior. We also discuss the effect that the system parameters have on the prediction accuracy of the model. Moreover, we propose a possible external control strategy to regulate the mean value of the fluorescence of the reporter proteins when the cells are subject to such periodic forcing.
|
1912.00960
|
Matteo Romandini
|
Matteo Romandini, Jacopo Crezzini, Eugenio Bortolini, Paolo Boscato,
Francesco Boschin, Lisa Carrera, Nicola Nannini, Antonio Tagliacozzo,
Gabriele Terlato, Simona Arrighi, Federica Badino, Carla Figus, Federico
Lugli, Giulia Marciani, Gregorio Oxilia, Adriana Moroni, Fabio Negrino, Marco
Peresani, Julien Riel-Salvatore, Annamaria Ronchitelli, Enza Elena
Spinapolice, Stefano Benazzi
|
Macromammal and bird assemblages across the Late Middle to Upper
Palaeolithic transition in Italy: an extended zooarchaeological review
| null | null |
10.1016/j.quaint.2019.11.008
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evidence of human activities during the Middle to Upper Palaeolithic
transition is well represented at rock shelters, caves and open-air sites
across Italy. Over the past decade, both the revision of taphonomic processes
affecting archaeological faunal assemblages and new zooarchaeological studies
have allowed archaeologists to better understand subsistence strategies and
cultural behaviors attributed to groups of Neandertal and modern humans living
in the region. This work presents the preliminary results of a 5-year research
programme (ERC n. 724046_SUCCESS) and offers a state-of-the-art synthesis of
archaeological faunal assemblages including mammals and birds uncovered in
Italy between 50 and 35 ky ago. The present data were recovered in primary Late
Mousterian, Uluzzian, and Protoaurignacian stratigraphic contexts from Northern
Italy (Grotta di Fumane, Riparo del Broion, Grotta Maggiore di San Bernardino,
Grotta del Rio Secco, Riparo Bombrini), and Southern Italy (Grotta di
Castelcivita, Grotta della Cala, Grotta del Cavallo, and Riparo
l'Oscurusciuto). The available Number of Identified Specimens (NISP) is
analysed through intra- and inter-site comparisons at a regional scale, while
aoristic analysis is applied to the sequence documented at Grotta di Fumane.
Results of qualitative comparisons suggest an increase in the number of hunted
taxa since the end of the Middle Palaeolithic, and a marked change in
ecological settings beginning with the Protoaurignacian, with a shift to lower
temperatures and humidity. The distribution of carnivore remains and taphonomic
analyses hint at a possible change in faunal exploitation and butchering
processing between the Middle and Upper Palaeolithic. A preliminary comparison
between bone frequencies and the distribution of burned bones poses interesting
questions concerning the management of fire. Eventually, the combined use
of...(continue)
|
[
{
"created": "Mon, 2 Dec 2019 17:47:11 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Dec 2019 18:20:26 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Jan 2020 18:35:39 GMT",
"version": "v3"
}
] |
2020-01-03
|
[
[
"Romandini",
"Matteo",
""
],
[
"Crezzini",
"Jacopo",
""
],
[
"Bortolini",
"Eugenio",
""
],
[
"Boscato",
"Paolo",
""
],
[
"Boschin",
"Francesco",
""
],
[
"Carrera",
"Lisa",
""
],
[
"Nannini",
"Nicola",
""
],
[
"Tagliacozzo",
"Antonio",
""
],
[
"Terlato",
"Gabriele",
""
],
[
"Arrighi",
"Simona",
""
],
[
"Badino",
"Federica",
""
],
[
"Figus",
"Carla",
""
],
[
"Lugli",
"Federico",
""
],
[
"Marciani",
"Giulia",
""
],
[
"Oxilia",
"Gregorio",
""
],
[
"Moroni",
"Adriana",
""
],
[
"Negrino",
"Fabio",
""
],
[
"Peresani",
"Marco",
""
],
[
"Riel-Salvatore",
"Julien",
""
],
[
"Ronchitelli",
"Annamaria",
""
],
[
"Spinapolice",
"Enza Elena",
""
],
[
"Benazzi",
"Stefano",
""
]
] |
Evidence of human activities during the Middle to Upper Palaeolithic transition is well represented at rock shelters, caves and open-air sites across Italy. Over the past decade, both the revision of taphonomic processes affecting archaeological faunal assemblages and new zooarchaeological studies have allowed archaeologists to better understand subsistence strategies and cultural behaviors attributed to groups of Neandertal and modern humans living in the region. This work presents the preliminary results of a 5-year research programme (ERC n. 724046_SUCCESS) and offers a state-of-the-art synthesis of archaeological faunal assemblages including mammals and birds uncovered in Italy between 50 and 35 ky ago. The present data were recovered in primary Late Mousterian, Uluzzian, and Protoaurignacian stratigraphic contexts from Northern Italy (Grotta di Fumane, Riparo del Broion, Grotta Maggiore di San Bernardino, Grotta del Rio Secco, Riparo Bombrini), and Southern Italy (Grotta di Castelcivita, Grotta della Cala, Grotta del Cavallo, and Riparo l'Oscurusciuto). The available Number of Identified Specimens (NISP) is analysed through intra- and inter-site comparisons at a regional scale, while aoristic analysis is applied to the sequence documented at Grotta di Fumane. Results of qualitative comparisons suggest an increase in the number of hunted taxa since the end of the Middle Palaeolithic, and a marked change in ecological settings beginning with the Protoaurignacian, with a shift to lower temperatures and humidity. The distribution of carnivore remains and taphonomic analyses hint at a possible change in faunal exploitation and butchering processing between the Middle and Upper Palaeolithic. A preliminary comparison between bone frequencies and the distribution of burned bones poses interesting questions concerning the management of fire. Eventually, the combined use of...(continue)
|
1806.07557
|
Heyrim Cho
|
Heyrim Cho and Doron Levy
|
Modeling continuous levels of resistance to multidrug therapy in cancer
|
42 pages
| null |
10.1016/j.apm.2018.07.025
| null |
q-bio.PE q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multidrug resistance consists of a series of genetic and epigenetic
alterations that involve multifactorial and complex processes, which are a
challenge to successful cancer treatments. Motivated by advances in
biotechnology and high-dimensional data analysis techniques that are bringing
new opportunities in modeling biological systems with continuous
phenotype-structured models, we study a cancer cell population model that considers a
multi-dimensional continuous resistance trait to multiple drugs to investigate
multidrug resistance. We compare our continuous resistance trait model with
classical models that assume a discrete resistance state and classify the cases
when the continuum and discrete models yield different dynamical patterns in
the emerging heterogeneity in response to drugs. We also compute the maximal
fitness resistance trait for various continuum models and study the effect of
epimutations. Finally, we demonstrate how our approach can be used to study
tumor growth regarding the turnover rate and the proliferating fraction, and
show that a continuous resistance level may result in different dynamics when
compared with the predictions of other discrete models.
|
[
{
"created": "Wed, 20 Jun 2018 05:22:16 GMT",
"version": "v1"
}
] |
2022-04-19
|
[
[
"Cho",
"Heyrim",
""
],
[
"Levy",
"Doron",
""
]
] |
Multidrug resistance consists of a series of genetic and epigenetic alterations that involve multifactorial and complex processes, which are a challenge to successful cancer treatments. Motivated by advances in biotechnology and high-dimensional data analysis techniques that are bringing new opportunities in modeling biological systems with continuous phenotype-structured models, we study a cancer cell population model that considers a multi-dimensional continuous resistance trait to multiple drugs to investigate multidrug resistance. We compare our continuous resistance trait model with classical models that assume a discrete resistance state and classify the cases when the continuum and discrete models yield different dynamical patterns in the emerging heterogeneity in response to drugs. We also compute the maximal fitness resistance trait for various continuum models and study the effect of epimutations. Finally, we demonstrate how our approach can be used to study tumor growth regarding the turnover rate and the proliferating fraction, and show that a continuous resistance level may result in different dynamics when compared with the predictions of other discrete models.
|
q-bio/0611029
|
Angelika Studeny
|
P. Pfaffelhuber and A. Studeny
|
Approximating genealogies for partially linked neutral loci under a
selective sweep
|
30 pages, 7 figures, submitted to Journal of Math. Biology
| null | null | null |
q-bio.PE math.PR
| null |
Consider a genetic locus carrying a strongly beneficial allele which has
recently fixed in a large population. As strongly beneficial alleles fix
quickly, sequence diversity at partially linked neutral loci is reduced. This
phenomenon is known as a selective sweep. The fixation of the beneficial allele
not only affects sequence diversity at single neutral loci but also the joint
allele distribution of several partially linked neutral loci. This distribution
can be studied using the ancestral recombination graph for samples of partially
linked neutral loci during the selective sweep. To approximate this graph, we
extend recent work by Schweinsberg & Durrett 2005 and Etheridge, Pfaffelhuber &
Wakolbinger 2006 using a marked Yule tree for the genealogy at a single neutral
locus linked to a strongly beneficial one. We focus on joint genealogies at two
partially linked neutral loci in the case of large selection coefficients
\alpha and recombination rates \rho = O(\alpha/\log\alpha) between loci. Our
approach leads to a full description of the genealogy with accuracy of O((\log
\alpha)^{-2}) in probability. As an application, we derive the expectation of
Lewontin's D as a measure for non-random association of alleles.
|
[
{
"created": "Wed, 8 Nov 2006 11:30:52 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Pfaffelhuber",
"P.",
""
],
[
"Studeny",
"A.",
""
]
] |
Consider a genetic locus carrying a strongly beneficial allele which has recently fixed in a large population. As strongly beneficial alleles fix quickly, sequence diversity at partially linked neutral loci is reduced. This phenomenon is known as a selective sweep. The fixation of the beneficial allele not only affects sequence diversity at single neutral loci but also the joint allele distribution of several partially linked neutral loci. This distribution can be studied using the ancestral recombination graph for samples of partially linked neutral loci during the selective sweep. To approximate this graph, we extend recent work by Schweinsberg & Durrett 2005 and Etheridge, Pfaffelhuber & Wakolbinger 2006 using a marked Yule tree for the genealogy at a single neutral locus linked to a strongly beneficial one. We focus on joint genealogies at two partially linked neutral loci in the case of large selection coefficients \alpha and recombination rates \rho = O(\alpha/\log\alpha) between loci. Our approach leads to a full description of the genealogy with accuracy of O((\log \alpha)^{-2}) in probability. As an application, we derive the expectation of Lewontin's D as a measure for non-random association of alleles.
|
1902.03429
|
Yu Zong Chen
|
Chu Qin, Ying Tan, Shang Ying Chen, Xian Zeng, Xingxing Qi, Tian Jin,
Huan Shi, Yiwei Wan, Yu Chen, Jingfeng Li, Weidong He, Yali Wang, Peng Zhang,
Feng Zhu, Hongping Zhao, Yuyang Jiang, Yuzong Chen
|
Clustering Bioactive Molecules in 3D Chemical Space with Unsupervised
Deep Learning
| null | null | null | null |
q-bio.BM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised clustering has broad applications in data stratification,
pattern investigation and new discovery beyond existing knowledge. In
particular, clustering of bioactive molecules facilitates chemical space
mapping, structure-activity studies, and drug discovery. These tasks,
conventionally conducted by similarity-based methods, are complicated by data
complexity and diversity. We explored the superior learning capability of deep
autoencoders for unsupervised clustering of 1.39 million bioactive molecules
into band-clusters in a 3-dimensional latent chemical space. These
band-clusters, displayed by space-navigation simulation software, band
molecules of selected bioactivity classes into individual band-clusters
possessing unique sets of common sub-structural features beyond structural
similarity. These sub-structural features form the frameworks of the
literature-reported pharmacophores and privileged fragments. Within each
band-cluster, molecules are further banded into selected sub-regions with
respect to their bioactivity target, sub-structural features and molecular
scaffolds. Our method is potentially applicable for big data clustering tasks
of different fields.
|
[
{
"created": "Sat, 9 Feb 2019 14:31:09 GMT",
"version": "v1"
}
] |
2019-02-12
|
[
[
"Qin",
"Chu",
""
],
[
"Tan",
"Ying",
""
],
[
"Chen",
"Shang Ying",
""
],
[
"Zeng",
"Xian",
""
],
[
"Qi",
"Xingxing",
""
],
[
"Jin",
"Tian",
""
],
[
"Shi",
"Huan",
""
],
[
"Wan",
"Yiwei",
""
],
[
"Chen",
"Yu",
""
],
[
"Li",
"Jingfeng",
""
],
[
"He",
"Weidong",
""
],
[
"Wang",
"Yali",
""
],
[
"Zhang",
"Peng",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Hongping",
""
],
[
"Jiang",
"Yuyang",
""
],
[
"Chen",
"Yuzong",
""
]
] |
Unsupervised clustering has broad applications in data stratification, pattern investigation and new discovery beyond existing knowledge. In particular, clustering of bioactive molecules facilitates chemical space mapping, structure-activity studies, and drug discovery. These tasks, conventionally conducted by similarity-based methods, are complicated by data complexity and diversity. We explored the superior learning capability of deep autoencoders for unsupervised clustering of 1.39 million bioactive molecules into band-clusters in a 3-dimensional latent chemical space. These band-clusters, displayed by space-navigation simulation software, band molecules of selected bioactivity classes into individual band-clusters possessing unique sets of common sub-structural features beyond structural similarity. These sub-structural features form the frameworks of the literature-reported pharmacophores and privileged fragments. Within each band-cluster, molecules are further banded into selected sub-regions with respect to their bioactivity target, sub-structural features and molecular scaffolds. Our method is potentially applicable for big data clustering tasks of different fields.
|
2010.14385
|
Sara Oliveira
|
Sara M Oliveira, Alice Gruppi, Marta V. Vieira, Gabriela M. Souza,
Ant\'onio A. Vicente, Jos\'e A.C. Teixeira, Pablo Fuci\~nos, Giorgia Spigno
and Lorenzo M. Pastrana
|
How additive manufacturing can boost the bioactivity of baked functional
foods
| null | null |
10.1016/j.jfoodeng.2020.110394
| null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The antioxidant activity of baked foods is of utmost interest when
envisioning enhancing their health benefits. Incorporating functional
ingredients is challenging since their bioactivity naturally declines during
baking. In this study, 3D food printing and design of experiments are employed
to clarify how the antioxidant activity of cookies enriched with encapsulated
polyphenols can be maximized. A synergistic effect between encapsulation, time,
temperature, number of layers, and infill of the printed cookies was observed
on the moisture and antioxidant activity. Four-layer cookies with 30 % infill
provided the highest bioactivity and phenolic content if baked for 10 min and
at 180 {\deg}C. The bioactivity and total phenolic content improved by 115 %
and 173 %, respectively, compared to free-extract cookies. Moreover, the
proper combination of the design and baking variables made it possible to vary
the bioactivity of cooked cookies (moisture 3-5 %) between 300 and 700
{\mu}molTR/gdry. The additive manufacture of foods with interconnected pores
could accelerate baking and browning, or reduce thermal degradation. This
represents a potential approach to enhance the functional and healthy
properties of cookies or other thermal treated bioactive food products.
|
[
{
"created": "Tue, 27 Oct 2020 15:49:30 GMT",
"version": "v1"
}
] |
2020-11-17
|
[
[
"Oliveira",
"Sara M",
""
],
[
"Gruppi",
"Alice",
""
],
[
"Vieira",
"Marta V.",
""
],
[
"Souza",
"Gabriela M.",
""
],
[
"Vicente",
"António A.",
""
],
[
"Teixeira",
"José A. C.",
""
],
[
"Fuciños",
"Pablo",
""
],
[
"Spigno",
"Giorgia",
""
],
[
"Pastrana",
"Lorenzo M.",
""
]
] |
The antioxidant activity of baked foods is of utmost interest when envisioning enhancing their health benefits. Incorporating functional ingredients is challenging since their bioactivity naturally declines during baking. In this study, 3D food printing and design of experiments are employed to clarify how the antioxidant activity of cookies enriched with encapsulated polyphenols can be maximized. A synergistic effect between encapsulation, time, temperature, number of layers, and infill of the printed cookies was observed on the moisture and antioxidant activity. Four-layer cookies with 30 % infill provided the highest bioactivity and phenolic content if baked for 10 min and at 180 {\deg}C. The bioactivity and total phenolic content improved by 115 % and 173 %, respectively, compared to free-extract cookies. Moreover, the proper combination of the design and baking variables made it possible to vary the bioactivity of cooked cookies (moisture 3-5 %) between 300 and 700 {\mu}molTR/gdry. The additive manufacture of foods with interconnected pores could accelerate baking and browning, or reduce thermal degradation. This represents a potential approach to enhance the functional and healthy properties of cookies or other thermal treated bioactive food products.
|
1810.04910
|
Annabelle Haudry
|
Annabelle Haudry, Stefan Laurent, Martin Kapun
|
Population genomics on the fly: recent advances in Drosophila
|
book chapter, 69 pages (ref included), 2 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drosophila melanogaster, a small dipteran of African origin, represents one
of the best-studied model organisms. Early work in this system has uniquely
shed light on the basic principles of genetics and resulted in a versatile
collection of genetic tools that allow researchers to uncover mechanistic
links between genotype and phenotype. Moreover, given its worldwide
distribution in diverse habitats and its moderate genome size, Drosophila has
proven very powerful for population genetics inference and was one of the
first eukaryotes whose genome was fully sequenced. In this book chapter, we
provide a brief historical
overview of research in Drosophila and then focus on recent advances during the
genomic era. After describing different types and sources of genomic data, we
discuss mechanisms of neutral evolution including the demographic history of
Drosophila and the effects of recombination and biased gene conversion. Then,
we review recent advances in detecting genome-wide signals of selection, such
as soft and hard selective sweeps. We further provide a brief introduction to
background selection, selection of non-coding DNA and codon usage and focus on
the role of structural variants, such as transposable elements and chromosomal
inversions, during the adaptive process. Finally, we discuss how genomic data
helps to dissect neutral and adaptive evolutionary mechanisms that shape
genetic and phenotypic variation in natural populations along environmental
gradients. In summary, this book chapter serves as a starting point for
Drosophila population genomics and provides an introduction to the system and
an overview of data sources, important population genetic concepts and recent
advances in the field.
|
[
{
"created": "Thu, 11 Oct 2018 09:15:03 GMT",
"version": "v1"
}
] |
2018-10-12
|
[
[
"Haudry",
"Annabelle",
""
],
[
"Laurent",
"Stefan",
""
],
[
"Kapun",
"Martin",
""
]
] |
Drosophila melanogaster, a small dipteran of African origin, represents one of the best-studied model organisms. Early work in this system has uniquely shed light on the basic principles of genetics and resulted in a versatile collection of genetic tools that allow researchers to uncover mechanistic links between genotype and phenotype. Moreover, given its worldwide distribution in diverse habitats and its moderate genome size, Drosophila has proven very powerful for population genetics inference and was one of the first eukaryotes whose genome was fully sequenced. In this book chapter, we provide a brief historical overview of research in Drosophila and then focus on recent advances during the genomic era. After describing different types and sources of genomic data, we discuss mechanisms of neutral evolution including the demographic history of Drosophila and the effects of recombination and biased gene conversion. Then, we review recent advances in detecting genome-wide signals of selection, such as soft and hard selective sweeps. We further provide a brief introduction to background selection, selection of non-coding DNA and codon usage and focus on the role of structural variants, such as transposable elements and chromosomal inversions, during the adaptive process. Finally, we discuss how genomic data helps to dissect neutral and adaptive evolutionary mechanisms that shape genetic and phenotypic variation in natural populations along environmental gradients. In summary, this book chapter serves as a starting point for Drosophila population genomics and provides an introduction to the system and an overview of data sources, important population genetic concepts and recent advances in the field.
|
1406.7185
|
Marijn van Dongen
|
M.N. van Dongen and F.E. Hoebeek and S.K.E. Koekkoek and C.I. De Zeeuw
and W.A. Serdijn
|
Efficacy of high frequency switched-mode stimulation in activating
Purkinje cells
|
13 pages, 8 figures
| null | null | null |
q-bio.NC q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the efficacy of high frequency switched-mode neural
stimulation. Instead of using a constant stimulation amplitude, the stimulus is
switched on and off repeatedly with a high frequency (up to 100kHz) duty cycled
signal. By means of tissue modeling that includes the dynamic properties of
both the tissue material as well as the axon membrane, it is first shown that
switched-mode stimulation depolarizes the cell membrane in a similar way as
classical constant amplitude stimulation. These findings are subsequently
verified using in vitro experiments in which the response of a Purkinje cell to
a stimulation signal in the molecular layer of the cerebellum of a mouse is
measured. For this purpose, a stimulator circuit is developed that is able to
produce a monophasic high frequency switched-mode stimulation signal. The
results confirm the modeling by showing that switched-mode stimulation is able
to induce similar responses in the Purkinje cell as classical stimulation using
a constant current source. This conclusion opens up possibilities for novel
stimulation designs that can improve the performance of the stimulator
circuitry. Care has to be taken to avoid losses in the system due to the higher
operating frequency.
|
[
{
"created": "Fri, 27 Jun 2014 14:04:03 GMT",
"version": "v1"
}
] |
2014-06-30
|
[
[
"van Dongen",
"M. N.",
""
],
[
"Hoebeek",
"F. E.",
""
],
[
"Koekkoek",
"S. K. E.",
""
],
[
"De Zeeuw",
"C. I.",
""
],
[
"Serdijn",
"W. A.",
""
]
] |
This paper investigates the efficacy of high frequency switched-mode neural stimulation. Instead of using a constant stimulation amplitude, the stimulus is switched on and off repeatedly with a high frequency (up to 100kHz) duty cycled signal. By means of tissue modeling that includes the dynamic properties of both the tissue material as well as the axon membrane, it is first shown that switched-mode stimulation depolarizes the cell membrane in a similar way as classical constant amplitude stimulation. These findings are subsequently verified using in vitro experiments in which the response of a Purkinje cell to a stimulation signal in the molecular layer of the cerebellum of a mouse is measured. For this purpose, a stimulator circuit is developed that is able to produce a monophasic high frequency switched-mode stimulation signal. The results confirm the modeling by showing that switched-mode stimulation is able to induce similar responses in the Purkinje cell as classical stimulation using a constant current source. This conclusion opens up possibilities for novel stimulation designs that can improve the performance of the stimulator circuitry. Care has to be taken to avoid losses in the system due to the higher operating frequency.
|
1912.02964
|
Jianyuan Deng
|
Jianyuan Deng, Fusheng Wang
|
An Informatics-based Approach to Identify Key Pharmacological Components
in Drug-Drug Interactions
|
Accepted to AMIA 2020 Informatics Summit
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drug-drug interactions (DDI) can cause severe adverse drug reactions and pose
a major challenge to medication therapy. Recently, informatics-based approaches
are emerging for DDI studies. In this paper, we aim to identify key
pharmacological components in DDI based on large-scale data from DrugBank, a
comprehensive DDI database. With pharmacological components as features,
logistic regression is used to perform DDI classification with a focus on
searching for the most predictive features, a process of identifying key
pharmacological components. Using univariate feature selection with the
chi-squared statistic as the ranking criterion, our study reveals that the top
10% of features can achieve classification performance comparable to that
using all features. These top 10% of features are identified as key
pharmacological components. Furthermore, their importance is quantified by
feature coefficients in the classifier, which measure the DDI potential and
provide a novel perspective for evaluating pharmacological components.
|
[
{
"created": "Fri, 6 Dec 2019 03:20:38 GMT",
"version": "v1"
}
] |
2019-12-09
|
[
[
"Deng",
"Jianyuan",
""
],
[
"Wang",
"Fusheng",
""
]
] |
Drug-drug interactions (DDI) can cause severe adverse drug reactions and pose a major challenge to medication therapy. Recently, informatics-based approaches are emerging for DDI studies. In this paper, we aim to identify key pharmacological components in DDI based on large-scale data from DrugBank, a comprehensive DDI database. With pharmacological components as features, logistic regression is used to perform DDI classification with a focus on searching for the most predictive features, a process of identifying key pharmacological components. Using univariate feature selection with the chi-squared statistic as the ranking criterion, our study reveals that the top 10% of features can achieve classification performance comparable to that using all features. These top 10% of features are identified as key pharmacological components. Furthermore, their importance is quantified by feature coefficients in the classifier, which measure the DDI potential and provide a novel perspective for evaluating pharmacological components.
|
1706.03836
|
Christian Meisel
|
Christian Meisel, Andreas Klaus, Vladyslav V. Vyazovskiy and Dietmar
Plenz
|
The interplay between long- and short-range temporal correlations shapes
cortex dynamics across vigilance states
| null | null | null | null |
q-bio.NC nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Increasing evidence suggests that cortical dynamics during wake exhibits
long-range temporal correlations suitable to integrate inputs over extended
periods of time to increase the signal-to-noise ratio in decision-making and
working memory tasks. Accordingly, sleep has been suggested as a state
characterized by a breakdown of long-range correlations; detailed measurements
of neuronal timescales that support this view, however, have so far been
lacking. Here we show that the long timescales measured at the individual
neuron level in freely-behaving rats during the awake state are abrogated
during non-REM (NREM) sleep. We provide evidence for the existence of two
distinct states in terms of timescale dynamics in cortex: one which is
characterized by long timescales which dominate during wake and REM sleep, and
a second one characterized by the absence of long-range temporal correlations
which characterizes NREM sleep. We observe that both timescale regimes can
co-exist and, in combination, lead to an apparent gradual decline of long
timescales during extended wake which is restored after sleep. Our results
provide a missing link between the observed long timescales in individual
neuron fluctuations during wake and the reported absence of long-term
correlations during deep sleep in EEG and fMRI studies. They furthermore
suggest a network-level function of sleep, to reorganize cortical networks
towards states governed by slow cortex dynamics to ensure optimal function for
the time awake.
|
[
{
"created": "Mon, 12 Jun 2017 20:16:42 GMT",
"version": "v1"
}
] |
2017-06-14
|
[
[
"Meisel",
"Christian",
""
],
[
"Klaus",
"Andreas",
""
],
[
"Vyazovskiy",
"Vladyslav V.",
""
],
[
"Plenz",
"Dietmar",
""
]
] |
Increasing evidence suggests that cortical dynamics during wake exhibits long-range temporal correlations suitable to integrate inputs over extended periods of time to increase the signal-to-noise ratio in decision-making and working memory tasks. Accordingly, sleep has been suggested as a state characterized by a breakdown of long-range correlations; detailed measurements of neuronal timescales that support this view, however, have so far been lacking. Here we show that the long timescales measured at the individual neuron level in freely-behaving rats during the awake state are abrogated during non-REM (NREM) sleep. We provide evidence for the existence of two distinct states in terms of timescale dynamics in cortex: one which is characterized by long timescales which dominate during wake and REM sleep, and a second one characterized by the absence of long-range temporal correlations which characterizes NREM sleep. We observe that both timescale regimes can co-exist and, in combination, lead to an apparent gradual decline of long timescales during extended wake which is restored after sleep. Our results provide a missing link between the observed long timescales in individual neuron fluctuations during wake and the reported absence of long-term correlations during deep sleep in EEG and fMRI studies. They furthermore suggest a network-level function of sleep, to reorganize cortical networks towards states governed by slow cortex dynamics to ensure optimal function for the time awake.
|
1212.0504
|
Michael Menden
|
Michael P. Menden, Francesco Iorio, Mathew Garnett, Ultan McDermott,
Cyril Benes, Pedro J. Ballester, Julio Saez-Rodriguez
|
Machine learning prediction of cancer cell sensitivity to drugs based on
genomic and chemical properties
|
26 pages, 7 figures, including supplemental information, presented by
Michael Menden at the 5th annual RECOMB Conference on Regulatory and Systems
Genomics with DREAM Challenges; accepted in PLOS ONE
| null |
10.1371/journal.pone.0061318
| null |
q-bio.GN cs.CE cs.LG q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting the response of a specific cancer to a therapy is a major goal in
modern oncology that should ultimately lead to a personalised treatment.
High-throughput screenings of potentially active compounds against a panel of
genomically heterogeneous cancer cell lines have unveiled multiple
relationships between genomic alterations and drug responses. Various
computational approaches have been proposed to predict sensitivity based on
genomic features, while others have used the chemical properties of the drugs
to ascertain their effect. In an effort to integrate these complementary
approaches, we developed machine learning models to predict the response of
cancer cell lines to drug treatment, quantified through IC50 values, based on
both the genomic features of the cell lines and the chemical properties of the
considered drugs. Models predicted IC50 values in an 8-fold cross-validation
and an independent blind test with coefficients of determination R2 of 0.72 and
0.64, respectively. Furthermore, models were able to predict with comparable
accuracy (R2 of 0.61) the IC50s of cell lines from a tissue not used in the
training stage. Our in silico models can be used to optimise the experimental
design of drug-cell screenings by estimating a large proportion of missing IC50
values rather than measuring them experimentally. The implications of our
results go
beyond virtual drug screening design: potentially thousands of drugs could be
probed in silico to systematically test their potential efficacy as anti-tumour
agents based on their structure, thus providing a computational framework to
identify new drug repositioning opportunities as well as ultimately be useful
for personalized medicine by linking the genomic traits of patients to drug
sensitivity.
|
[
{
"created": "Mon, 3 Dec 2012 19:38:09 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Dec 2012 15:10:37 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2013 18:07:47 GMT",
"version": "v3"
}
] |
2015-06-12
|
[
[
"Menden",
"Michael P.",
""
],
[
"Iorio",
"Francesco",
""
],
[
"Garnett",
"Mathew",
""
],
[
"McDermott",
"Ultan",
""
],
[
"Benes",
"Cyril",
""
],
[
"Ballester",
"Pedro J.",
""
],
[
"Saez-Rodriguez",
"Julio",
""
]
] |
Predicting the response of a specific cancer to a therapy is a major goal in modern oncology that should ultimately lead to a personalised treatment. High-throughput screenings of potentially active compounds against a panel of genomically heterogeneous cancer cell lines have unveiled multiple relationships between genomic alterations and drug responses. Various computational approaches have been proposed to predict sensitivity based on genomic features, while others have used the chemical properties of the drugs to ascertain their effect. In an effort to integrate these complementary approaches, we developed machine learning models to predict the response of cancer cell lines to drug treatment, quantified through IC50 values, based on both the genomic features of the cell lines and the chemical properties of the considered drugs. Models predicted IC50 values in an 8-fold cross-validation and an independent blind test with coefficients of determination R2 of 0.72 and 0.64, respectively. Furthermore, models were able to predict with comparable accuracy (R2 of 0.61) the IC50s of cell lines from a tissue not used in the training stage. Our in silico models can be used to optimise the experimental design of drug-cell screenings by estimating a large proportion of missing IC50 values rather than measuring them experimentally. The implications of our results go beyond virtual drug screening design: potentially thousands of drugs could be probed in silico to systematically test their potential efficacy as anti-tumour agents based on their structure, thus providing a computational framework to identify new drug repositioning opportunities as well as ultimately be useful for personalized medicine by linking the genomic traits of patients to drug sensitivity.
|
1805.01011
|
Ishtiaque Ahammad
|
Ishtiaque Ahammad
|
Identification of Key Proteins Involved in Axon Guidance Related
Disorders: A Systems Biology Approach
| null | null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Axon guidance is a crucial process for growth of the central and peripheral
nervous systems. In this study, three axon guidance related disorders, namely
Duane Retraction Syndrome (DRS), Horizontal Gaze Palsy with Progressive
Scoliosis (HGPPS) and Congenital fibrosis of the extraocular muscles type 3
(CFEOM3), were studied using various Systems Biology tools to identify the
genes and proteins involved with them and to better understand the underlying
molecular mechanisms, including the regulatory mechanisms. Based on the
analyses carried out, seven significant modules have been identified from the
PPI network. Five pathways/processes have been found to be significantly
associated with DRS-, HGPPS- and CFEOM3-associated genes. From the PPI network,
three proteins have been identified as hubs: DRD2, UBC and CUL3.
|
[
{
"created": "Tue, 17 Apr 2018 18:22:45 GMT",
"version": "v1"
}
] |
2018-05-04
|
[
[
"Ahammad",
"Ishtiaque",
""
]
] |
Axon guidance is a crucial process for growth of the central and peripheral nervous systems. In this study, three axon guidance related disorders, namely Duane Retraction Syndrome (DRS), Horizontal Gaze Palsy with Progressive Scoliosis (HGPPS) and Congenital fibrosis of the extraocular muscles type 3 (CFEOM3), were studied using various Systems Biology tools to identify the genes and proteins involved with them and to better understand the underlying molecular mechanisms, including the regulatory mechanisms. Based on the analyses carried out, seven significant modules have been identified from the PPI network. Five pathways/processes have been found to be significantly associated with DRS-, HGPPS- and CFEOM3-associated genes. From the PPI network, three proteins have been identified as hubs: DRD2, UBC and CUL3.
|
1807.02126
|
Kanika Bansal
|
Kanika Bansal, Javier O. Garcia, Steven H. Tompson, Timothy Verstynen,
Jean M. Vettel, and Sarah F. Muldoon
|
Cognitive chimera states in human brain networks
| null | null |
10.1126/sciadv.aau8535
| null |
q-bio.NC math.DS physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The human brain is a complex dynamical system that gives rise to cognition
through spatiotemporal patterns of coherent and incoherent activity between
brain regions. As different regions dynamically interact to perform cognitive
tasks, variable patterns of partial synchrony can be observed, forming chimera
states. We propose that the emergence of such states plays a fundamental role
in the cognitive organization of the brain, and present a novel
cognitively-informed, chimera-based framework to explore how large-scale brain
architecture affects brain dynamics and function. Using personalized brain
network models, we systematically study how regional brain stimulation produces
different patterns of synchronization across predefined cognitive systems. We
then analyze these emergent patterns within our novel framework to understand
the impact of subject-specific and region-specific structural variability on
brain dynamics. Our results suggest a classification of cognitive systems into
four groups with differing levels of subject and regional variability that
reflect their different functional roles.
|
[
{
"created": "Thu, 5 Jul 2018 18:03:19 GMT",
"version": "v1"
}
] |
2019-04-09
|
[
[
"Bansal",
"Kanika",
""
],
[
"Garcia",
"Javier O.",
""
],
[
"Tompson",
"Steven H.",
""
],
[
"Verstynen",
"Timothy",
""
],
[
"Vettel",
"Jean M.",
""
],
[
"Muldoon",
"Sarah F.",
""
]
] |
The human brain is a complex dynamical system that gives rise to cognition through spatiotemporal patterns of coherent and incoherent activity between brain regions. As different regions dynamically interact to perform cognitive tasks, variable patterns of partial synchrony can be observed, forming chimera states. We propose that the emergence of such states plays a fundamental role in the cognitive organization of the brain, and present a novel cognitively-informed, chimera-based framework to explore how large-scale brain architecture affects brain dynamics and function. Using personalized brain network models, we systematically study how regional brain stimulation produces different patterns of synchronization across predefined cognitive systems. We then analyze these emergent patterns within our novel framework to understand the impact of subject-specific and region-specific structural variability on brain dynamics. Our results suggest a classification of cognitive systems into four groups with differing levels of subject and regional variability that reflect their different functional roles.
|
1301.6470
|
Mike Steel Prof.
|
Elliott Sober and Mike Steel
|
Time and Knowability in Evolutionary Processes
|
22 pages, 4 figures
|
Philos. sci. 81 (2014) 558-579
|
10.1086/677954
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Historical sciences like evolutionary biology reconstruct past events by
using the traces that the past has bequeathed to the present. The Markov Chain
Convergence Theorem and the Data Processing Inequality describe how the mutual
information between present and past is affected by how much time there is in
between. These two results are very general; they pertain to any process, not
just to the biological processes that occur in evolution. To study the
specifically biological question of how the present state of a lineage provides
information about its evolutionary past, we use a Moran process framework and
consider how the kind of evolutionary process (drift, and selection of various
kinds) at work in a lineage affects the epistemological relation of present to
past.
|
[
{
"created": "Mon, 28 Jan 2013 08:23:42 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Feb 2013 02:19:31 GMT",
"version": "v2"
}
] |
2023-06-22
|
[
[
"Sober",
"Elliott",
""
],
[
"Steel",
"Mike",
""
]
] |
Historical sciences like evolutionary biology reconstruct past events by using the traces that the past has bequeathed to the present. The Markov Chain Convergence Theorem and the Data Processing Inequality describe how the mutual information between present and past is affected by how much time there is in between. These two results are very general; they pertain to any process, not just to the biological processes that occur in evolution. To study the specifically biological question of how the present state of a lineage provides information about its evolutionary past, we use a Moran process framework and consider how the kind of evolutionary process (drift, and selection of various kinds) at work in a lineage affects the epistemological relation of present to past.
|
2110.01191
|
Fang Wu
|
Fang Wu, Dragomir Radev, Stan Z. Li
|
Molformer: Motif-based Transformer on 3D Heterogeneous Molecular Graphs
| null | null | null | null |
q-bio.QM cs.CE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Procuring expressive molecular representations underpins AI-driven molecule
design and scientific discovery. The research mainly focuses on atom-level
homogeneous molecular graphs, ignoring the rich information in subgraphs or
motifs. However, it has been widely accepted that substructures play a dominant
role in identifying and determining molecular properties. To address such
issues, we formulate heterogeneous molecular graphs (HMGs), and introduce a
novel architecture to exploit both molecular motifs and 3D geometry. Precisely,
we extract functional groups as motifs for small molecules and employ
reinforcement learning to adaptively select quaternary amino acids as motif
candidates for proteins. Then HMGs are constructed with both atom-level and
motif-level nodes. To better accommodate those HMGs, we introduce a variant of
Transformer named Molformer, which adopts a heterogeneous self-attention layer
to distinguish the interactions between multi-level nodes. Besides, it is also
coupled with a multi-scale mechanism to capture fine-grained local patterns
with increasing contextual scales. An attentive farthest point sampling
algorithm is also proposed to obtain the molecular representations. We validate
Molformer across a broad range of domains, including quantum chemistry,
physiology, and biophysics. Extensive experiments show that Molformer
outperforms or matches the performance of several state-of-the-art baselines.
Our work provides a promising way to utilize informative motifs from
the perspective of multi-level graph construction.
|
[
{
"created": "Mon, 4 Oct 2021 05:11:23 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Oct 2021 05:59:37 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Dec 2021 03:17:08 GMT",
"version": "v3"
},
{
"created": "Sun, 15 May 2022 11:27:25 GMT",
"version": "v4"
},
{
"created": "Wed, 18 May 2022 08:10:34 GMT",
"version": "v5"
},
{
"created": "Tue, 20 Dec 2022 07:28:58 GMT",
"version": "v6"
},
{
"created": "Sat, 7 Jan 2023 05:33:24 GMT",
"version": "v7"
}
] |
2023-01-10
|
[
[
"Wu",
"Fang",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Li",
"Stan Z.",
""
]
] |
Procuring expressive molecular representations underpins AI-driven molecule design and scientific discovery. The research mainly focuses on atom-level homogeneous molecular graphs, ignoring the rich information in subgraphs or motifs. However, it has been widely accepted that substructures play a dominant role in identifying and determining molecular properties. To address such issues, we formulate heterogeneous molecular graphs (HMGs), and introduce a novel architecture to exploit both molecular motifs and 3D geometry. Precisely, we extract functional groups as motifs for small molecules and employ reinforcement learning to adaptively select quaternary amino acids as motif candidates for proteins. Then HMGs are constructed with both atom-level and motif-level nodes. To better accommodate those HMGs, we introduce a variant of Transformer named Molformer, which adopts a heterogeneous self-attention layer to distinguish the interactions between multi-level nodes. Besides, it is also coupled with a multi-scale mechanism to capture fine-grained local patterns with increasing contextual scales. An attentive farthest point sampling algorithm is also proposed to obtain the molecular representations. We validate Molformer across a broad range of domains, including quantum chemistry, physiology, and biophysics. Extensive experiments show that Molformer outperforms or matches the performance of several state-of-the-art baselines. Our work provides a promising way to utilize informative motifs from the perspective of multi-level graph construction.
|
q-bio/0509010
|
Georgy Karev
|
Artem S. Novozhilov, Georgy P. Karev, and Eugene V. Koonin
|
Mathematical modeling of evolution of horizontally transferred genes
|
36 pages, 7 figures; submitted to Mol. Biol. Evol
| null | null | null |
q-bio.GN q-bio.QM
| null |
We describe a stochastic birth-and-death model of evolution of horizontally
transferred genes in microbial populations. The model is a generalization of
the stochastic model described by Berg and Kurland and includes five
parameters: the rate of mutational inactivation, selection coefficient,
immigration rate (i.e., rate of arrival of a novel sequence from outside of the
recipient population), within-population horizontal transmission rate, and
population size. The model of Berg and Kurland included four parameters,
namely, mutational inactivation, selection coefficient, population size, and
transmission rate. However, the effect of transmission was disregarded in the
interpretation of the results, and the overall conclusion was that horizontally
acquired sequences can be fixed in a population only when they confer a
substantial selective advantage onto the recipient and therefore are subject to
strong positive selection. By contrast, analysis of the present model in
different domains of parameter values shows that, as long as the rate of
within-population horizontal transmission is comparable to the mutational
inactivation rate and there is even a low rate of immigration, horizontally
acquired sequences can be fixed in the population or at least persist for a
long time even when they are neutral or slightly deleterious. The available
biological data strongly suggest that intense within-population and even
between-populations gene flows are realistic for at least some prokaryotic
species and environments. Therefore our modeling results are compatible with
the notion of a pivotal role of horizontal gene transfer in the evolution of
prokaryotes.
|
[
{
"created": "Fri, 9 Sep 2005 15:16:24 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Novozhilov",
"Artem S.",
""
],
[
"Karev",
"Georgy P.",
""
],
[
"Koonin",
"Eugene V.",
""
]
] |
We describe a stochastic birth-and-death model of evolution of horizontally transferred genes in microbial populations. The model is a generalization of the stochastic model described by Berg and Kurland and includes five parameters: the rate of mutational inactivation, selection coefficient, immigration rate (i.e., rate of arrival of a novel sequence from outside of the recipient population), within-population horizontal transmission rate, and population size. The model of Berg and Kurland included four parameters, namely, mutational inactivation, selection coefficient, population size, and transmission rate. However, the effect of transmission was disregarded in the interpretation of the results, and the overall conclusion was that horizontally acquired sequences can be fixed in a population only when they confer a substantial selective advantage onto the recipient and therefore are subject to strong positive selection. By contrast, analysis of the present model in different domains of parameter values shows that, as long as the rate of within-population horizontal transmission is comparable to the mutational inactivation rate and there is even a low rate of immigration, horizontally acquired sequences can be fixed in the population or at least persist for a long time even when they are neutral or slightly deleterious. The available biological data strongly suggest that intense within-population and even between-populations gene flows are realistic for at least some prokaryotic species and environments. Therefore our modeling results are compatible with the notion of a pivotal role of horizontal gene transfer in the evolution of prokaryotes.
|
2001.08349
|
Satpreet Harcharan Singh
|
Satpreet H. Singh, Steven M. Peterson, Rajesh P. N. Rao, Bingni W.
Brunton
|
Investigating naturalistic hand movements by behavior mining in
long-term video and neural recordings
| null | null | null | null |
q-bio.NC cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent technological advances in brain recording and artificial intelligence
are propelling a new paradigm in neuroscience beyond the traditional controlled
experiment. Rather than focusing on cued, repeated trials, naturalistic
neuroscience studies neural processes underlying spontaneous behaviors
performed in unconstrained settings. However, analyzing such unstructured data
lacking a priori experimental design remains a significant challenge,
especially when the data is multi-modal and long-term. Here we describe an
automated approach for analyzing simultaneously recorded long-term,
naturalistic electrocorticography (ECoG) and naturalistic behavior video data.
We take a behavior-first approach to analyzing the long-term recordings. Using
a combination of computer vision, discrete latent-variable modeling, and string
pattern-matching on the behavioral video data, we find and annotate spontaneous
human upper-limb movement events. We show results from our approach applied to
data collected for 12 human subjects over 7--9 days for each subject. Our
pipeline discovers and annotates over 40,000 instances of naturalistic human
upper-limb movement events in the behavioral videos. Analysis of the
simultaneously recorded brain data reveals neural signatures of movement that
corroborate prior findings from traditional controlled experiments. We also
prototype a decoder for a movement initiation detection task to demonstrate the
efficacy of our pipeline as a source of training data for brain-computer
interfacing applications. Our work addresses the unique data analysis
challenges in studying naturalistic human behaviors, and contributes methods
that may generalize to other neural recording modalities beyond ECoG. We
publicly release our curated dataset, providing a resource to study
naturalistic neural and behavioral variability at a scale not previously
available.
|
[
{
"created": "Thu, 23 Jan 2020 02:41:35 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jun 2020 22:52:49 GMT",
"version": "v2"
}
] |
2020-06-23
|
[
[
"Singh",
"Satpreet H.",
""
],
[
"Peterson",
"Steven M.",
""
],
[
"Rao",
"Rajesh P. N.",
""
],
[
"Brunton",
"Bingni W.",
""
]
] |
Recent technological advances in brain recording and artificial intelligence are propelling a new paradigm in neuroscience beyond the traditional controlled experiment. Rather than focusing on cued, repeated trials, naturalistic neuroscience studies neural processes underlying spontaneous behaviors performed in unconstrained settings. However, analyzing such unstructured data lacking a priori experimental design remains a significant challenge, especially when the data is multi-modal and long-term. Here we describe an automated approach for analyzing simultaneously recorded long-term, naturalistic electrocorticography (ECoG) and naturalistic behavior video data. We take a behavior-first approach to analyzing the long-term recordings. Using a combination of computer vision, discrete latent-variable modeling, and string pattern-matching on the behavioral video data, we find and annotate spontaneous human upper-limb movement events. We show results from our approach applied to data collected for 12 human subjects over 7--9 days for each subject. Our pipeline discovers and annotates over 40,000 instances of naturalistic human upper-limb movement events in the behavioral videos. Analysis of the simultaneously recorded brain data reveals neural signatures of movement that corroborate prior findings from traditional controlled experiments. We also prototype a decoder for a movement initiation detection task to demonstrate the efficacy of our pipeline as a source of training data for brain-computer interfacing applications. Our work addresses the unique data analysis challenges in studying naturalistic human behaviors, and contributes methods that may generalize to other neural recording modalities beyond ECoG. We publicly release our curated dataset, providing a resource to study naturalistic neural and behavioral variability at a scale not previously available.
|
1610.00077
|
Sangeeta Bhatia
|
Sangeeta Bhatia, Pedro Feij\~ao, Andrew R. Francis
|
Position and content paradigms in genome rearrangements: the wild and
crazy world of permutations in genomics
| null | null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modellers of large scale genome rearrangement events, in which segments of
DNA are inverted, moved, swapped, or even inserted or deleted, have found a
natural syntax in the language of permutations. Despite this, there has been a
wide range of modelling choices, assumptions and interpretations that make
navigating the literature a significant challenge. Indeed, even authors of
papers that use permutations to model genome rearrangement can struggle to
interpret each others' work, because of subtle differences in basic assumptions
that are often deeply ingrained (and consequently sometimes not even
mentioned). In this paper, we describe the different ways in which permutations
have been used to model genomes and genome rearrangement events, presenting
some features and limitations of each approach, and show how the various models
are related. This paper will help researchers navigate the landscape of genome
rearrangement models, and make it easier for authors to present clear and
consistent models.
|
[
{
"created": "Sat, 1 Oct 2016 03:24:05 GMT",
"version": "v1"
}
] |
2016-10-04
|
[
[
"Bhatia",
"Sangeeta",
""
],
[
"Feijão",
"Pedro",
""
],
[
"Francis",
"Andrew R.",
""
]
] |
Modellers of large scale genome rearrangement events, in which segments of DNA are inverted, moved, swapped, or even inserted or deleted, have found a natural syntax in the language of permutations. Despite this, there has been a wide range of modelling choices, assumptions and interpretations that make navigating the literature a significant challenge. Indeed, even authors of papers that use permutations to model genome rearrangement can struggle to interpret each others' work, because of subtle differences in basic assumptions that are often deeply ingrained (and consequently sometimes not even mentioned). In this paper, we describe the different ways in which permutations have been used to model genomes and genome rearrangement events, presenting some features and limitations of each approach, and show how the various models are related. This paper will help researchers navigate the landscape of genome rearrangement models, and make it easier for authors to present clear and consistent models.
|
1207.3563
|
Joshua Chang
|
Joshua C. Chang and K.C. Brennan and Dongdong He and Huaxiong Huang
and Robert M. Miura and Phillip L. Wilson and Jonathan J. Wylie
|
A mathematical model of the metabolic and perfusion effects on cortical
spreading depression
|
17 pages including 9 figures, accepted by PLoS One
|
PLoS ONE 8(8) (2013)
|
10.1371/journal.pone.0070469
| null |
q-bio.NC q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cortical spreading depression (CSD) is a slow-moving ionic and metabolic
disturbance that propagates in cortical brain tissue. In addition to massive
cellular depolarization, CSD also involves significant changes in perfusion and
metabolism -- aspects of CSD that had not been modeled and are important to
traumatic brain injury, subarachnoid hemorrhage, stroke, and migraine.
In this study, we develop a mathematical model for CSD where we focus on
modeling the features essential to understanding the implications of
neurovascular coupling during CSD. In our model, the sodium-potassium--ATPase,
mainly responsible for ionic homeostasis and active during CSD, operates at a
rate that is dependent on the supply of oxygen. The supply of oxygen is
determined by modeling blood flow through a lumped vascular tree with an
effective local vessel radius that is controlled by the extracellular potassium
concentration. We show that during CSD, the metabolic demands of the cortex
exceed the physiological limits placed on oxygen delivery, regardless of
vascular constriction or dilation. However, vasoconstriction and vasodilation
play important roles in the propagation of CSD and its recovery. Our model
replicates the qualitative and quantitative behavior of CSD --
vasoconstriction, oxygen depletion, extracellular potassium elevation,
prolonged depolarization -- found in experimental studies.
We predict faster, longer duration CSD in vivo than in vitro due to the
contribution of the vasculature. Our results also help explain some of the
variability of CSD between species and even within the same animal. These
results have clinical and translational implications, as they allow for more
precise in vitro, in vivo, and in silico exploration of a phenomenon broadly
relevant to neurological disease.
|
[
{
"created": "Mon, 16 Jul 2012 02:28:23 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jun 2013 21:15:12 GMT",
"version": "v2"
}
] |
2020-06-05
|
[
[
"Chang",
"Joshua C.",
""
],
[
"Brennan",
"K. C.",
""
],
[
"He",
"Dongdong",
""
],
[
"Huang",
"Huaxiong",
""
],
[
"Miura",
"Robert M.",
""
],
[
"Wilson",
"Phillip L.",
""
],
[
"Wylie",
"Jonathan J.",
""
]
] |
Cortical spreading depression (CSD) is a slow-moving ionic and metabolic disturbance that propagates in cortical brain tissue. In addition to massive cellular depolarization, CSD also involves significant changes in perfusion and metabolism -- aspects of CSD that had not been modeled and are important to traumatic brain injury, subarachnoid hemorrhage, stroke, and migraine. In this study, we develop a mathematical model for CSD where we focus on modeling the features essential to understanding the implications of neurovascular coupling during CSD. In our model, the sodium-potassium--ATPase, mainly responsible for ionic homeostasis and active during CSD, operates at a rate that is dependent on the supply of oxygen. The supply of oxygen is determined by modeling blood flow through a lumped vascular tree with an effective local vessel radius that is controlled by the extracellular potassium concentration. We show that during CSD, the metabolic demands of the cortex exceed the physiological limits placed on oxygen delivery, regardless of vascular constriction or dilation. However, vasoconstriction and vasodilation play important roles in the propagation of CSD and its recovery. Our model replicates the qualitative and quantitative behavior of CSD -- vasoconstriction, oxygen depletion, extracellular potassium elevation, prolonged depolarization -- found in experimental studies. We predict faster, longer duration CSD in vivo than in vitro due to the contribution of the vasculature. Our results also help explain some of the variability of CSD between species and even within the same animal. These results have clinical and translational implications, as they allow for more precise in vitro, in vivo, and in silico exploration of a phenomenon broadly relevant to neurological disease.
|
1901.02483
|
Yana Safonova
|
Yana Safonova and Pavel A. Pevzner
|
De novo inference of diversity genes and analysis of non-canonical
V(DD)J recombination in immunoglobulins
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The V(D)J recombination forms the immunoglobulin genes by joining the
variable (V), diversity (D), and joining (J) germline genes. Since variations
in germline genes have been linked to various diseases, personalized
immunogenomics aims at finding alleles of germline genes across various
patients. Although recent studies described algorithms for de novo inference of
V and J genes from immunosequencing data, they stopped short of solving a more
difficult problem of reconstructing D genes that form the highly divergent CDR3
regions and provide the most important contribution to the antigen binding. We
present the IgScout algorithm for de novo D gene reconstruction and apply it to
reveal new alleles of human D genes and previously unknown D genes in camel, an
important model organism in immunology. We further analyze non-canonical V(DD)J
recombination that results in unusually long tandem CDR3s and thus expands the
diversity of the antibody repertoires. We demonstrate that tandem CDR3s
represent a consistent and functional feature of all analyzed immunosequencing
datasets, reveal ultra-long tandem CDR3s, and shed light on the mechanism
responsible for their formation.
|
[
{
"created": "Tue, 8 Jan 2019 19:41:18 GMT",
"version": "v1"
}
] |
2019-01-10
|
[
[
"Safonova",
"Yana",
""
],
[
"Pevzner",
"Pavel A.",
""
]
] |
The V(D)J recombination forms the immunoglobulin genes by joining the variable (V), diversity (D), and joining (J) germline genes. Since variations in germline genes have been linked to various diseases, personalized immunogenomics aims at finding alleles of germline genes across various patients. Although recent studies described algorithms for de novo inference of V and J genes from immunosequencing data, they stopped short of solving a more difficult problem of reconstructing D genes that form the highly divergent CDR3 regions and provide the most important contribution to the antigen binding. We present the IgScout algorithm for de novo D gene reconstruction and apply it to reveal new alleles of human D genes and previously unknown D genes in camel, an important model organism in immunology. We further analyze non-canonical V(DD)J recombination that results in unusually long tandem CDR3s and thus expands the diversity of the antibody repertoires. We demonstrate that tandem CDR3s represent a consistent and functional feature of all analyzed immunosequencing datasets, reveal ultra-long tandem CDR3s, and shed light on the mechanism responsible for their formation.
|
1304.4337
|
Song Xu
|
Song Xu and Shuyun Jiao and Pengyao Jiang and Ping Ao
|
Two-timescale evolution on a singular landscape
|
arXiv admin note: text overlap with arXiv:1108.1484
| null |
10.1103/PhysRevE.89.012724
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Under the effect of strong genetic drift, it is highly probable to observe
gene fixation or gene loss in a population, shown by infinite peaks on a
coherently constructed potential energy landscape. It is then important to ask
what such singular peaks imply, with or without the effects of other biological
factors. We studied the stochastic escape time from the infinite potential
peaks in the Wright-Fisher model, where the typical two-scale diffusion
dynamics was observed via computer simulations. We numerically found the
average escape time for all the bi-stable cases and analytically approximated
the results under weak mutations and selections by calculating the mean first
passage time (MFPT) in a singular potential peak. Our results showed that
Kramers' classical escape formula can be extended to the models with
non-Gaussian probability distributions, overcoming constraints in previous
methods. The constructed landscape provides a global and coherent description
for system's evolutionary dynamics, allowing new biological results to be
generated.
|
[
{
"created": "Tue, 16 Apr 2013 06:10:28 GMT",
"version": "v1"
}
] |
2015-06-15
|
[
[
"Xu",
"Song",
""
],
[
"Jiao",
"Shuyun",
""
],
[
"Jiang",
"Pengyao",
""
],
[
"Ao",
"Ping",
""
]
] |
Under the effect of strong genetic drift, it is highly probable to observe gene fixation or gene loss in a population, shown by infinite peaks on a coherently constructed potential energy landscape. It is then important to ask what such singular peaks imply, with or without the effects of other biological factors. We studied the stochastic escape time from the infinite potential peaks in the Wright-Fisher model, where the typical two-scale diffusion dynamics was observed via computer simulations. We numerically found the average escape time for all the bi-stable cases and analytically approximated the results under weak mutations and selections by calculating the mean first passage time (MFPT) in a singular potential peak. Our results showed that Kramers' classical escape formula can be extended to the models with non-Gaussian probability distributions, overcoming constraints in previous methods. The constructed landscape provides a global and coherent description for system's evolutionary dynamics, allowing new biological results to be generated.
|
2004.04819
|
Massimo Materassi Dr
|
Massimo Materassi
|
Some fractal thoughts about the COVID-19 infection outbreak
| null | null | null | null |
q-bio.PE math.DS physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some ideas are presented about the physical motivation of the apparent
capacity of generalized logistic equations to describe the outbreak of the
COVID-19 infection, and in general of quite many other epidemics. The main
focuses here are: the complex, possibly fractal, structure of the locus
describing the "contagion event set"; what can be learnt from the models of
trophic webs with "herd behaviour".
|
[
{
"created": "Wed, 8 Apr 2020 16:52:05 GMT",
"version": "v1"
}
] |
2020-04-13
|
[
[
"Materassi",
"Massimo",
""
]
] |
Some ideas are presented about the physical motivation of the apparent capacity of generalized logistic equations to describe the outbreak of the COVID-19 infection, and in general of quite many other epidemics. The main focuses here are: the complex, possibly fractal, structure of the locus describing the "contagion event set"; what can be learnt from the models of trophic webs with "herd behaviour".
|
1605.07479
|
Nadav M. Shnerb
|
Yael Fried, David A. Kessler and Nadav M. Shnerb
|
Communities as cliques
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-diversity assemblages are very common in nature, and yet the factors
allowing for the maintenance of biodiversity remain obscure. The competitive
exclusion principle and May's complexity-diversity puzzle both suggest that a
community can support only a small number of species, turning the spotlight on
the dynamics of local patches or islands, where stable and uninvadable (SU)
subsets of species play a crucial role. Here we map the community SUs question
to the geometric problem of finding maximal cliques of the corresponding graph.
We solve for the number of SUs as a function of the species richness in the
regional pool, $N$, showing that this growth is subexponential, contrary to
long-standing wisdom. We show that symmetric systems relax rapidly to an SU,
where the system stays until a regime shift takes place. In asymmetric systems
the relaxation time grows much faster with $N$, suggesting an excitable
dynamics under noise.
|
[
{
"created": "Tue, 24 May 2016 14:31:31 GMT",
"version": "v1"
}
] |
2016-05-25
|
[
[
"Fried",
"Yael",
""
],
[
"Kessler",
"David A.",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] |
High-diversity assemblages are very common in nature, and yet the factors allowing for the maintenance of biodiversity remain obscure. The competitive exclusion principle and May's complexity-diversity puzzle both suggest that a community can support only a small number of species, turning the spotlight on the dynamics of local patches or islands, where stable and uninvadable (SU) subsets of species play a crucial role. Here we map the community SUs question to the geometric problem of finding maximal cliques of the corresponding graph. We solve for the number of SUs as a function of the species richness in the regional pool, $N$, showing that this growth is subexponential, contrary to long-standing wisdom. We show that symmetric systems relax rapidly to an SU, where the system stays until a regime shift takes place. In asymmetric systems the relaxation time grows much faster with $N$, suggesting an excitable dynamics under noise.
|
2007.08262
|
Sayantari Ghosh
|
Priya Chakraborty, Sayantari Ghosh
|
Emergent Correlations in Gene Expression Dynamics as Footprints of
Resource Competition
|
15 pages, 7 figures
| null | null | null |
q-bio.QM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genetic circuits need a cellular environment to operate in, which naturally
couples the circuit function with the overall functionality of gene regulatory
network. To execute their functions all gene circuits draw resources in the
form of RNA polymerases, ribosomes, and tRNAs. Recent experiments pointed out
that the role of resource competition on synthetic circuit outputs could be
immense. However, the effect of complexity of the circuit architecture on
resource sharing dynamics is yet unexplored. In this paper, we employ
mathematical modelling and in-silico experiments to identify the sources of
resource trade-off and to quantify its impact on the function of a genetic
circuit, keeping our focus on regulation of immediate downstream proteins. We
take the example of the fluorescent reporters, which are often used as protein
read-outs. We show that estimating gene expression dynamics from readings of
downstream protein data might be unreliable when the resource is limited and
ribosome affinities are asymmetric. We focus on the impact of mRNA copy number
and RBS strength on the nonlinear isocline that emerges with two regimes,
prominently separated by a tipping point, and study how correlation and
competition dominate each other depending on various circuit parameters.
Focusing further on genetic toggle circuit, we have identified major effects of
resource competition in this model motif, and quantified the observations. The
observations are testable in wet-lab experiments, as all the parameters chosen
are experimentally relevant.
|
[
{
"created": "Thu, 16 Jul 2020 11:24:45 GMT",
"version": "v1"
}
] |
2020-07-17
|
[
[
"Chakraborty",
"Priya",
""
],
[
"Ghosh",
"Sayantari",
""
]
] |
Genetic circuits need a cellular environment to operate in, which naturally couples the circuit function with the overall functionality of gene regulatory network. To execute their functions all gene circuits draw resources in the form of RNA polymerases, ribosomes, and tRNAs. Recent experiments pointed out that the role of resource competition on synthetic circuit outputs could be immense. However, the effect of complexity of the circuit architecture on resource sharing dynamics is yet unexplored. In this paper, we employ mathematical modelling and in-silico experiments to identify the sources of resource trade-off and to quantify its impact on the function of a genetic circuit, keeping our focus on regulation of immediate downstream proteins. We take the example of the fluorescent reporters, which are often used as protein read-outs. We show that estimating gene expression dynamics from readings of downstream protein data might be unreliable when the resource is limited and ribosome affinities are asymmetric. We focus on the impact of mRNA copy number and RBS strength on the nonlinear isocline that emerges with two regimes, prominently separated by a tipping point, and study how correlation and competition dominate each other depending on various circuit parameters. Focusing further on genetic toggle circuit, we have identified major effects of resource competition in this model motif, and quantified the observations. The observations are testable in wet-lab experiments, as all the parameters chosen are experimentally relevant.
|
2305.19284
|
J. C. Phillips
|
J. C. Phillips
|
Sequence Evolution, Structure and Dynamics of Transmembrane Proteins:
Rhodopsin
|
11 pages, 5 figures, 1 table
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Rhodopsin is a G-protein coupled receptor found in retinal rod cells, where
it mediates monochromatic vision in dim light. It is one of the most studied
proteins with thousands of reviewed entries in UniProt. It has seven
transmembrane segments, here examined for their hydrophobic character, and how
that has evolved from chickens to humans. Elastic features associated with
Proline are also discussed. Finally, differences between rhodopsin and cone
opsins are also discussed.
|
[
{
"created": "Sun, 28 May 2023 14:56:03 GMT",
"version": "v1"
}
] |
2023-06-01
|
[
[
"Phillips",
"J. C.",
""
]
] |
Rhodopsin is a G-protein coupled receptor found in retinal rod cells, where it mediates monochromatic vision in dim light. It is one of the most studied proteins with thousands of reviewed entries in UniProt. It has seven transmembrane segments, here examined for their hydrophobic character, and how that has evolved from chickens to humans. Elastic features associated with Proline are also discussed. Finally, differences between rhodopsin and cone opsins are also discussed.
|
2201.08894
|
Lei Xie
|
Ryan K. Tan, Yang Liu, Lei Xie
|
Reinforcement Learning for Personalized Drug Discovery and Design for
Complex Diseases: A Systems Pharmacology Perspective
|
26 pages, 3 figures
| null | null | null |
q-bio.BM cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many multi-genic systemic diseases such as neurological disorders,
inflammatory diseases, and the majority of cancers do not have effective
treatments yet. Reinforcement learning powered systems pharmacology is a
potentially effective approach to design personalized therapies for untreatable
complex diseases. In this survey, state-of-the-art reinforcement learning
methods and their latest applications to drug design are reviewed. The
challenges on harnessing reinforcement learning for systems pharmacology and
personalized medicine are discussed. Potential solutions to overcome the
challenges are proposed. In spite of successful application of advanced
reinforcement learning techniques to target-based drug discovery, new
reinforcement learning strategies are needed to address systems
pharmacology-oriented personalized de novo drug design.
|
[
{
"created": "Fri, 21 Jan 2022 21:29:46 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Feb 2022 02:19:56 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Feb 2022 19:45:33 GMT",
"version": "v3"
}
] |
2022-02-25
|
[
[
"Tan",
"Ryan K.",
""
],
[
"Liu",
"Yang",
""
],
[
"Xie",
"Lei",
""
]
] |
Many multi-genic systemic diseases such as neurological disorders, inflammatory diseases, and the majority of cancers do not have effective treatments yet. Reinforcement learning powered systems pharmacology is a potentially effective approach to design personalized therapies for untreatable complex diseases. In this survey, state-of-the-art reinforcement learning methods and their latest applications to drug design are reviewed. The challenges on harnessing reinforcement learning for systems pharmacology and personalized medicine are discussed. Potential solutions to overcome the challenges are proposed. In spite of successful application of advanced reinforcement learning techniques to target-based drug discovery, new reinforcement learning strategies are needed to address systems pharmacology-oriented personalized de novo drug design.
|
q-bio/0505049
|
Karsten Suhre
|
Karsten Suhre (IGS)
|
Gene & Genome Duplication in Acanthamoeba Polyphaga Mimivirus
| null | null | null | null |
q-bio.GN
| null |
Gene duplication is key to molecular evolution in all three domains of life
and may be the first step in the emergence of new gene function. It is a well
recognized feature in large DNA viruses, but has not been studied extensively
in the largest known virus to date, the recently discovered Acanthamoeba
Polyphaga Mimivirus. Here we present a systematic analysis of gene and genome
duplication events in the Mimivirus genome. We find that one third of the
Mimivirus genes are related to at least one other gene in the Mimivirus genome,
either through a large segmental genome duplication event that occurred in the
more remote past, or through more recent gene duplication events, which
often occur in tandem. This shows that gene and genome duplication played a
major role in shaping the Mimivirus genome. Using multiple alignments together
with remote homology detection methods based on Hidden Markov Model comparison,
we assign putative functions to some of the paralogous gene families. We
suggest that a large part of the duplicated Mimivirus gene families are likely
to interfere with important host cell processes, such as transcription control,
protein degradation, and cell regulatory processes. Our findings support the
view that large DNA viruses are complex evolving organisms, possibly deeply
rooted within the tree of life, and oppose the paradigm that viral evolution is
dominated by lateral gene acquisition, at least in what concerns large DNA
viruses.
|
[
{
"created": "Wed, 25 May 2005 14:17:57 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jun 2005 14:27:37 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Jul 2005 12:34:14 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Suhre",
"Karsten",
"",
"IGS"
]
] |
Gene duplication is key to molecular evolution in all three domains of life and may be the first step in the emergence of new gene function. It is a well recognized feature in large DNA viruses, but has not been studied extensively in the largest known virus to date, the recently discovered Acanthamoeba Polyphaga Mimivirus. Here we present a systematic analysis of gene and genome duplication events in the Mimivirus genome. We find that one third of the Mimivirus genes are related to at least one other gene in the Mimivirus genome, either through a large segmental genome duplication event that occurred in the more remote past, or through more recent gene duplication events, which often occur in tandem. This shows that gene and genome duplication played a major role in shaping the Mimivirus genome. Using multiple alignments together with remote homology detection methods based on Hidden Markov Model comparison, we assign putative functions to some of the paralogous gene families. We suggest that a large part of the duplicated Mimivirus gene families are likely to interfere with important host cell processes, such as transcription control, protein degradation, and cell regulatory processes. Our findings support the view that large DNA viruses are complex evolving organisms, possibly deeply rooted within the tree of life, and oppose the paradigm that viral evolution is dominated by lateral gene acquisition, at least in what concerns large DNA viruses.
|
1208.5673
|
Alfred Bennun
|
Alfred Bennun
|
The dynamics of H-bonds of the hydration shells of ions, ATPase and
NE-activated adenylyl cyclase on the coupling of energy and signal
transduction
| null | null | null | null |
q-bio.OT q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Glycerol titration distinguished from free water the local hydration shell
involved in ATPase transition from active to inactive, with cooperativity for
water n=16. Rat brain cortex: NE-stimulated and its basal AC in the absence of
free Mg2+, allows a refractive state of AC with negative cooperativity for
MgATP and ATP4-. The erythrocyte-Hb system operates as a metabolic sensor to
match glucose availability with the release of Hb-carried, O2 and Mg2+ at CSF.
[Mg(H2O)6](H2O)122+ by chelating either a protein or ATP4- loses most of its
hydration shell. The ion pump ATPase by forming ADP3- releases an incompletely
hydrated Mg2+, which could capture H2O from either [Na.(H2O)6]+ or [K.(H2O)6]+.
Thus, sieve-sizing their hydration shells for fitting into Na+-pump channels.
An AC refractory period may participate in STM. Mg2+ with cooperativity n=3.7
activates NE-AC. CREB-generated receptors coupled by Mg2+ may modulate
hydration shells-dependent oscillations for retrieval of LTM
|
[
{
"created": "Tue, 17 Jul 2012 22:01:58 GMT",
"version": "v1"
}
] |
2012-08-29
|
[
[
"Bennun",
"Alfred",
""
]
] |
Glycerol titration distinguished from free water the local hydration shell involved in ATPase transition from active to inactive, with cooperativity for water n=16. Rat brain cortex: NE-stimulated and its basal AC in the absence of free Mg2+, allows a refractive state of AC with negative cooperativity for MgATP and ATP4-. The erythrocyte-Hb system operates as a metabolic sensor to match glucose availability with the release of Hb-carried, O2 and Mg2+ at CSF. [Mg(H2O)6](H2O)122+ by chelating either a protein or ATP4- loses most of its hydration shell. The ion pump ATPase by forming ADP3- releases an incompletely hydrated Mg2+, which could capture H2O from either [Na.(H2O)6]+ or [K.(H2O)6]+. Thus, sieve-sizing their hydration shells for fitting into Na+-pump channels. An AC refractory period may participate in STM. Mg2+ with cooperativity n=3.7 activates NE-AC. CREB-generated receptors coupled by Mg2+ may modulate hydration shells-dependent oscillations for retrieval of LTM
|
1609.02794
|
Fabio Nikolay
|
Fabio Nikolay, Marius Pesavento, George Kritikos, Nassos Typas
|
Learning Directed-Acyclic-Graphs from Large-Scale Genomics Data
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we consider the problem of learning the
genetic-interaction-map, i.e., the topology of a directed acyclic graph (DAG)
of genetic interactions from noisy double knockout (DK) data. Based on a set of
well established biological interaction models we detect and classify the
interactions between genes. We propose a novel linear integer optimization
program called the Genetic-Interactions-Detector (GENIE) to identify the
complex biological dependencies among genes and to compute the DAG topology
that matches the DK measurements best. Furthermore, we extend the GENIE-program
by incorporating genetic-interactions-profile (GI-profile) data to further
enhance the detection performance. In addition, we propose a sequential
scalability technique for large sets of genes under study, in order to provide
statistically stressable results for real measurement data. Finally, we show
via numeric simulations that the GENIE-program as well as the GI-profile data
extended GENIE (GI-GENIE)-program clearly outperform the conventional
techniques and present real data results for our proposed sequential
scalability technique.
|
[
{
"created": "Fri, 9 Sep 2016 13:48:14 GMT",
"version": "v1"
}
] |
2016-09-12
|
[
[
"Nikolay",
"Fabio",
""
],
[
"Pesavento",
"Marius",
""
],
[
"Kritikos",
"George",
""
],
[
"Typas",
"Nassos",
""
]
] |
In this paper we consider the problem of learning the genetic-interaction-map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions from noisy double knockout (DK) data. Based on a set of well established biological interaction models we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that matches the DK measurements best. Furthermore, we extend the GENIE-program by incorporating genetic-interactions-profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically stressable results for real measurement data. Finally, we show via numeric simulations that the GENIE-program as well as the GI-profile data extended GENIE (GI-GENIE)-program clearly outperform the conventional techniques and present real data results for our proposed sequential scalability technique.
|
1308.3690
|
Michael Retsky
|
M.W. Retsky, R. Demicheli, W.J.M. Hrushesky, P. Forget, M. DeKock, I.
Gukas, R.A. Rogers, M. Baum, V. Sukhatme, and J.S. Vaidya
|
Reduction of breast cancer relapses with perioperative non-steroidal
anti-inflammatory drugs: new findings and a review
|
25 pages, 10 figures, in press; Current Medicinal Chemistry, May 2013
| null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To explain a bimodal pattern of hazard of relapse among early stage breast
cancer patients identified in multiple databases, we proposed that late
relapses result from steady stochastic progressions from single dormant
malignant cells to avascular micrometastases and then on to growing deposits.
However in order to explain early relapses, we had to postulate that something
happens at about the time of surgery to provoke sudden exits from dormant
phases to active growth and then to detection. Most relapses in breast cancer
are in the early category. Recent data from Forget et al. suggest an unexpected
mechanism. They retrospectively studied results from 327 consecutive breast
cancer patients comparing various perioperative analgesics and anesthetics in
one Belgian hospital and one surgeon. Patients were treated with mastectomy and
conventional adjuvant therapy. Relapse hazards updated in September 2011 are presented.
A common Non-Steroidal Anti-Inflammatory Drug (NSAID) analgesic used in surgery
produced far superior disease-free survival in the first 5 years after surgery.
The expected prominent early relapse events in months 9-18 are reduced 5-fold.
If this observation holds up to further scrutiny, it could mean that the simple
use of this safe, inexpensive and effective anti-inflammatory agent at surgery
might eliminate early relapses. Transient systemic inflammation accompanying
surgery could facilitate angiogenesis of dormant micrometastases, proliferation
of dormant single cells, and seeding of circulating cancer stem cells (perhaps
in part released from bone marrow) resulting in early relapse and could have
been effectively blocked by the perioperative anti-inflammatory agent.
|
[
{
"created": "Fri, 16 Aug 2013 18:40:21 GMT",
"version": "v1"
}
] |
2013-08-19
|
[
[
"Retsky",
"M. W.",
""
],
[
"Demicheli",
"R.",
""
],
[
"Hrushesky",
"W. J. M.",
""
],
[
"Forget",
"P.",
""
],
[
"DeKock",
"M.",
""
],
[
"Gukas",
"I.",
""
],
[
"Rogers",
"R. A.",
""
],
[
"Baum",
"M.",
""
],
[
"Sukhatme",
"V.",
""
],
[
"Vaidya",
"J. S.",
""
]
] |
To explain a bimodal pattern of hazard of relapse among early stage breast cancer patients identified in multiple databases, we proposed that late relapses result from steady stochastic progressions from single dormant malignant cells to avascular micrometastases and then on to growing deposits. However in order to explain early relapses, we had to postulate that something happens at about the time of surgery to provoke sudden exits from dormant phases to active growth and then to detection. Most relapses in breast cancer are in the early category. Recent data from Forget et al. suggest an unexpected mechanism. They retrospectively studied results from 327 consecutive breast cancer patients comparing various perioperative analgesics and anesthetics in one Belgian hospital and one surgeon. Patients were treated with mastectomy and conventional adjuvant therapy. Relapse hazards updated in September 2011 are presented. A common Non-Steroidal Anti-Inflammatory Drug (NSAID) analgesic used in surgery produced far superior disease-free survival in the first 5 years after surgery. The expected prominent early relapse events in months 9-18 are reduced 5-fold. If this observation holds up to further scrutiny, it could mean that the simple use of this safe, inexpensive and effective anti-inflammatory agent at surgery might eliminate early relapses. Transient systemic inflammation accompanying surgery could facilitate angiogenesis of dormant micrometastases, proliferation of dormant single cells, and seeding of circulating cancer stem cells (perhaps in part released from bone marrow) resulting in early relapse and could have been effectively blocked by the perioperative anti-inflammatory agent.
|
0907.2043
|
Carla Sofia Carvalho
|
Ekaterini Vourvouhaki and C. Sofia Carvalho
|
A Bayesian approach to the probability of coronary heart disease subject
to the --308 tumor necrosis factor-$\alpha$ SNP
|
23 pages, 2 figures, version accepted for publication in J.Biosystems
| null |
10.1016/j.biosystems.2011.03.010
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the correlation of the occurrence of coronary heart disease (CHD)
with the presence of the single-nucleotide polymorphism (SNP) at the -308
position of the tumor necrosis factor alpha (TNF-$\alpha$) gene. We also
consider the influence of the occurrence of type 2 diabetes (t2DM). Using
Bayesian inference, we first pursue a bottom-up approach to compute the working
hypothesis and the probabilities derivable from the data. We then pursue a
top-down approach by modelling the signal pathway that causally connects the
SNP with the emergence of CHD. We compute the functional form of the
probability of CHD conditional on the presence of the SNP in terms of both the
statistical and biochemical properties of the system. From the probability of
occurrence of a disease conditional on a given risk factor, we explore the
possibility of extracting information on the pathways involved in the
occurrence of the disease. This is a first study that we want to systematise
into a comprehensive formalism to be applied to the inference of the mechanism
connecting the risk factors to the disease.
|
[
{
"created": "Sun, 12 Jul 2009 14:01:32 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2010 16:22:57 GMT",
"version": "v2"
},
{
"created": "Thu, 5 May 2011 14:50:35 GMT",
"version": "v3"
}
] |
2011-05-06
|
[
[
"Vourvouhaki",
"Ekaterini",
""
],
[
"Carvalho",
"C. Sofia",
""
]
] |
We study the correlation of the occurrence of coronary heart disease (CHD) with the presence of the single-nucleotide polymorphism (SNP) at the -308 position of the tumor necrosis factor alpha (TNF-$\alpha$) gene. We also consider the influence of the occurrence of type 2 diabetes (t2DM). Using Bayesian inference, we first pursue a bottom-up approach to compute the working hypothesis and the probabilities derivable from the data. We then pursue a top-down approach by modelling the signal pathway that causally connects the SNP with the emergence of CHD. We compute the functional form of the probability of CHD conditional on the presence of the SNP in terms of both the statistical and biochemical properties of the system. From the probability of occurrence of a disease conditional on a given risk factor, we explore the possibility of extracting information on the pathways involved in the occurrence of the disease. This is a first study that we want to systematise into a comprehensive formalism to be applied to the inference of the mechanism connecting the risk factors to the disease.
|
1606.02837
|
Nadav M. Shnerb
|
Matan Danino, Yahav Shem-Tov and Nadav M. Shnerb
|
Spatial neutral dynamics
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neutral models, in which individual agents with equal fitness undergo a
birth-death-mutation process, are very popular in population genetics and
community ecology. Usually these models are applied to populations and
communities with spatial structure, but the analytic results presented so far
are limited to well-mixed or mainland-island scenarios. Here we present a new
technique, based on interface dynamics analysis, and apply it to the neutral
dynamics in one, two and three spatial dimensions. New results are derived for
the correlation length and for the main characteristics of the community, such as
total biodiversity and the species abundance distribution above the correlation
length. Our results are supported by extensive numerical simulations, and
provide qualitative and quantitative insights that allow for a rigorous
comparison between model predictions and empirical data.
|
[
{
"created": "Thu, 9 Jun 2016 06:58:57 GMT",
"version": "v1"
}
] |
2016-06-10
|
[
[
"Danino",
"Matan",
""
],
[
"Shem-Tov",
"Yahav",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] |
Neutral models, in which individual agents with equal fitness undergo a birth-death-mutation process, are very popular in population genetics and community ecology. Usually these models are applied to populations and communities with spatial structure, but the analytic results presented so far are limited to well-mixed or mainland-island scenarios. Here we present a new technique, based on interface dynamics analysis, and apply it to the neutral dynamics in one, two and three spatial dimensions. New results are derived for the correlation length and for the main characteristics of the community, such as total biodiversity and the species abundance distribution above the correlation length. Our results are supported by extensive numerical simulations, and provide qualitative and quantitative insights that allow for a rigorous comparison between model predictions and empirical data.
|
1904.00117
|
Jean Feng
|
Jean Feng, William S DeWitt III, Aaron McKenna, Noah Simon, Amy
Willis, Frederick A Matsen IV
|
Estimation of cell lineage trees by maximum-likelihood phylogenetics
| null | null | null | null |
q-bio.QM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
CRISPR technology has enabled large-scale cell lineage tracing for complex
multicellular organisms by mutating synthetic genomic barcodes during
organismal development. However, these sophisticated biological tools currently
use ad-hoc and outmoded computational methods to reconstruct the cell lineage
tree from the mutated barcodes. Because these methods are agnostic to the
biological mechanism, they are unable to take full advantage of the data's
structure. We propose a statistical model for the mutation process and develop
a procedure to estimate the tree topology, branch lengths, and mutation
parameters by iteratively applying penalized maximum likelihood estimation. In
contrast to existing techniques, our method estimates time along each branch,
rather than number of mutation events, thus providing a detailed account of
tissue-type differentiation. Via simulations, we demonstrate that our method is
substantially more accurate than existing approaches. Our reconstructed trees
also better recapitulate known aspects of zebrafish development and reproduce
similar results across fish replicates.
|
[
{
"created": "Fri, 29 Mar 2019 23:27:36 GMT",
"version": "v1"
}
] |
2019-04-02
|
[
[
"Feng",
"Jean",
""
],
[
"DeWitt",
"William S",
"III"
],
[
"McKenna",
"Aaron",
""
],
[
"Simon",
"Noah",
""
],
[
"Willis",
"Amy",
""
],
[
"Matsen",
"Frederick A",
"IV"
]
] |
CRISPR technology has enabled large-scale cell lineage tracing for complex multicellular organisms by mutating synthetic genomic barcodes during organismal development. However, these sophisticated biological tools currently use ad-hoc and outmoded computational methods to reconstruct the cell lineage tree from the mutated barcodes. Because these methods are agnostic to the biological mechanism, they are unable to take full advantage of the data's structure. We propose a statistical model for the mutation process and develop a procedure to estimate the tree topology, branch lengths, and mutation parameters by iteratively applying penalized maximum likelihood estimation. In contrast to existing techniques, our method estimates time along each branch, rather than number of mutation events, thus providing a detailed account of tissue-type differentiation. Via simulations, we demonstrate that our method is substantially more accurate than existing approaches. Our reconstructed trees also better recapitulate known aspects of zebrafish development and reproduce similar results across fish replicates.
|
1601.07766
|
Luca Puviani
|
Luca Puviani, Sidita Rama, Giorgio Vitetta
|
Prediction Errors Drive UCS Revaluation and not Classical Conditioning:
Evidence and Neurophysiological Consequences
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, the experimental study of emotional learning is commonly based on
classical conditioning paradigms and models, which have been thoroughly
investigated in the last century. On the contrary, limited attention has been
paid to the revaluation of an unconditioned stimulus (UCS), which, as
experimentally observed by various researchers in the last four decades, occurs
outside of classical conditioning. For this reason, no analytical or quantitative
theory has been developed for this phenomenon and its dynamics. Unfortunately,
models based on classical conditioning are unable to explain or predict
important psychophysiological phenomena, such as the failure of the extinction
of emotional responses in certain circumstances. In this manuscript an
analytical representation of UCS revaluation learning is developed; this allows
us to identify the conditions determining the "inextinguishability" (or
resistant-to-extinction) property of emotional responses and reactions (such as
those observed in evaluative conditioning, in the nonreinforcement presentation
of a conditioned inhibitor, in post-traumatic stress disorders and in panic
attacks). Furthermore, an analysis of the causal relation existing between
classical conditioning and UCS revaluation is provided. Starting from this
result, a theory of implicit emotional learning and a novel interpretation of
classical conditioning are derived. Moreover, we discuss how the proposed
theory can lead to the development of new methodologies for the detection and
the treatment of undesired or pathological emotional responses, and can inspire
animal models for resistant-to-extinction responses and reactions.
|
[
{
"created": "Thu, 28 Jan 2016 14:12:22 GMT",
"version": "v1"
}
] |
2016-01-29
|
[
[
"Puviani",
"Luca",
""
],
[
"Rama",
"Sidita",
""
],
[
"Vitetta",
"Giorgio",
""
]
] |
Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. On the contrary, limited attention has been paid to the revaluation of an unconditioned stimulus (UCS), which, as experimentally observed by various researchers in the last four decades, occurs outside of classical conditioning. For this reason, no analytical or quantitative theory has been developed for this phenomenon and its dynamics. Unfortunately, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances. In this manuscript an analytical representation of UCS revaluation learning is developed; this allows us to identify the conditions determining the "inextinguishability" (or resistant-to-extinction) property of emotional responses and reactions (such as those observed in evaluative conditioning, in the nonreinforcement presentation of a conditioned inhibitor, in post-traumatic stress disorders and in panic attacks). Furthermore, an analysis of the causal relation existing between classical conditioning and UCS revaluation is provided. Starting from this result, a theory of implicit emotional learning and a novel interpretation of classical conditioning are derived. Moreover, we discuss how the proposed theory can lead to the development of new methodologies for the detection and the treatment of undesired or pathological emotional responses, and can inspire animal models for resistant-to-extinction responses and reactions.
|
1107.2330
|
Andrea De Martino
|
Daniele De Martino, Matteo Figliuzzi, Andrea De Martino, Enzo Marinari
|
Computing fluxes and chemical potential distributions in biochemical
networks: energy balance analysis of the human red blood cell
|
16 pages, 13 figures
| null | null | null |
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of non-equilibrium steady states of biochemical reaction
networks relies on finding the configurations of fluxes and chemical potentials
satisfying stoichiometric (mass balance) and thermodynamic (energy balance)
constraints. Efficient methods to explore such states are crucial to predict
reaction directionality, calculate physiologic ranges of variability, estimate
correlations, and reconstruct the overall energy balance of the network from
the underlying molecular processes. While different techniques for sampling the
space generated by mass balance constraints are currently available,
thermodynamics is generically harder to incorporate. Here we introduce a method
to sample the free energy landscape of a reaction network at steady state. In
its most general form, it allows one to calculate distributions of fluxes and
concentrations starting from trial functions that may contain prior biochemical
information. We apply our method to the human red blood cell's metabolic
network, whose space of mass-balanced flux states has been sampled extensively
in recent years. Specifically, we profile its thermodynamically feasible flux
configurations, characterizing in detail how fluctuations of fluxes and
potentials are correlated. Based on this, we derive the cell's energy balance
in terms of entropy production, chemical work done and thermodynamic
efficiency.
|
[
{
"created": "Tue, 12 Jul 2011 15:43:19 GMT",
"version": "v1"
}
] |
2011-07-13
|
[
[
"De Martino",
"Daniele",
""
],
[
"Figliuzzi",
"Matteo",
""
],
[
"De Martino",
"Andrea",
""
],
[
"Marinari",
"Enzo",
""
]
] |
The analysis of non-equilibrium steady states of biochemical reaction networks relies on finding the configurations of fluxes and chemical potentials satisfying stoichiometric (mass balance) and thermodynamic (energy balance) constraints. Efficient methods to explore such states are crucial to predict reaction directionality, calculate physiologic ranges of variability, estimate correlations, and reconstruct the overall energy balance of the network from the underlying molecular processes. While different techniques for sampling the space generated by mass balance constraints are currently available, thermodynamics is generically harder to incorporate. Here we introduce a method to sample the free energy landscape of a reaction network at steady state. In its most general form, it allows one to calculate distributions of fluxes and concentrations starting from trial functions that may contain prior biochemical information. We apply our method to the human red blood cell's metabolic network, whose space of mass-balanced flux states has been sampled extensively in recent years. Specifically, we profile its thermodynamically feasible flux configurations, characterizing in detail how fluctuations of fluxes and potentials are correlated. Based on this, we derive the cell's energy balance in terms of entropy production, chemical work done and thermodynamic efficiency.
|
1602.08417
|
Nicolas Gauvrit
|
Nicolas Gauvrit and Fabien Mathy
|
Mathematical transcription of the "Time-Based Resource Sharing" theory
of working memory
|
21 pages, 5 figures, 1 table
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The time-based resource sharing (TBRS) model is a prominent model of working
memory that is both predictive and simple. The TBRS is the mainstream
decay-based model and the most susceptible to competition with
interference-based models. A connectionist implementation of the TBRS, the
TBRS*, has recently been developed. However, the TBRS* is an enriched version
of the TBRS, making it difficult to test the general characteristics resulting
from the TBRS assumptions. Here, we describe a novel model, the TBRS2, built to
be more transparent and simple than the TBRS*. The TBRS2 is minimalist and
allows only a few parameters. It is a straightforward mathematical
transcription of the TBRS that focuses exclusively on the activation level of
memory items as a function of time. Its simplicity makes it possible to derive
several theorems from the original TBRS and allows several variants of the
refreshing process to be tested without relying on particular architectures.
|
[
{
"created": "Fri, 1 Jan 2016 16:05:53 GMT",
"version": "v1"
}
] |
2016-02-29
|
[
[
"Gauvrit",
"Nicolas",
""
],
[
"Mathy",
"Fabien",
""
]
] |
The time-based resource sharing (TBRS) model is a prominent model of working memory that is both predictive and simple. The TBRS is the mainstream decay-based model and the most susceptible to competition with interference-based models. A connectionist implementation of the TBRS, the TBRS*, has recently been developed. However, the TBRS* is an enriched version of the TBRS, making it difficult to test the general characteristics resulting from the TBRS assumptions. Here, we describe a novel model, the TBRS2, built to be more transparent and simple than the TBRS*. The TBRS2 is minimalist and allows only a few parameters. It is a straightforward mathematical transcription of the TBRS that focuses exclusively on the activation level of memory items as a function of time. Its simplicity makes it possible to derive several theorems from the original TBRS and allows several variants of the refreshing process to be tested without relying on particular architectures.
|
1307.7385
|
Bhaskar DasGupta
|
Reka Albert, Bhaskar DasGupta, Nasim Mobasheri
|
Some Perspectives on Network Modeling in Therapeutic Target Prediction
| null |
Biomedical Engineering and Computational Biology, 5, 17-24, 2013
|
10.4137/BECB.S10793
| null |
q-bio.MN cs.CE cs.DM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drug target identification is of significant commercial interest to
pharmaceutical companies, and there is a vast amount of research done related
to the topic of therapeutic target identification. Interdisciplinary research
in this area involves both the biological network community and the graph
algorithms community. Key steps of a typical therapeutic target identification
problem include synthesizing or inferring the complex network of interactions
relevant to the disease, connecting this network to the disease-specific
behavior, and predicting which components are key mediators of the behavior.
All of these steps involve graph theoretical or graph algorithmic aspects. In
this perspective, we provide modelling and algorithmic perspectives for
therapeutic target identification and highlight a number of algorithmic
advances, which have received relatively little attention so far, with the hope
of strengthening the ties between these two research communities.
|
[
{
"created": "Sun, 28 Jul 2013 17:42:01 GMT",
"version": "v1"
}
] |
2013-07-30
|
[
[
"Albert",
"Reka",
""
],
[
"DasGupta",
"Bhaskar",
""
],
[
"Mobasheri",
"Nasim",
""
]
] |
Drug target identification is of significant commercial interest to pharmaceutical companies, and there is a vast amount of research done related to the topic of therapeutic target identification. Interdisciplinary research in this area involves both the biological network community and the graph algorithms community. Key steps of a typical therapeutic target identification problem include synthesizing or inferring the complex network of interactions relevant to the disease, connecting this network to the disease-specific behavior, and predicting which components are key mediators of the behavior. All of these steps involve graph theoretical or graph algorithmic aspects. In this perspective, we provide modelling and algorithmic perspectives for therapeutic target identification and highlight a number of algorithmic advances, which have received relatively little attention so far, with the hope of strengthening the ties between these two research communities.
|
1405.3310
|
Jason Perlmutter
|
Jason D Perlmutter, Matthew R Perkett, Michael F Hagan
|
Pathways for virus assembly around nucleic acids
|
24 pages, 12 figures
| null | null | null |
q-bio.BM cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the pathways by which viral capsid proteins assemble around
their genomes could identify key intermediates as potential drug targets. In
this work we use computer simulations to characterize assembly over a wide
range of capsid protein-protein interaction strengths and solution ionic
strengths. We find that assembly pathways can be categorized into two classes,
in which intermediates are either predominantly ordered or disordered. Our
results suggest that estimating the protein-protein and the protein-genome
binding affinities may be sufficient to predict which pathway occurs.
Furthermore, the calculated phase diagrams suggest that knowledge of the
dominant assembly pathway and its relationship to control parameters could
identify optimal strategies to thwart or redirect assembly to block infection.
Finally, analysis of simulation trajectories suggests that the two classes of
assembly pathways can be distinguished in single molecule fluorescence
correlation spectroscopy or bulk time resolved small angle x-ray scattering
experiments.
|
[
{
"created": "Tue, 13 May 2014 21:22:11 GMT",
"version": "v1"
}
] |
2014-05-15
|
[
[
"Perlmutter",
"Jason D",
""
],
[
"Perkett",
"Matthew R",
""
],
[
"Hagan",
"Michael F",
""
]
] |
Understanding the pathways by which viral capsid proteins assemble around their genomes could identify key intermediates as potential drug targets. In this work we use computer simulations to characterize assembly over a wide range of capsid protein-protein interaction strengths and solution ionic strengths. We find that assembly pathways can be categorized into two classes, in which intermediates are either predominantly ordered or disordered. Our results suggest that estimating the protein-protein and the protein-genome binding affinities may be sufficient to predict which pathway occurs. Furthermore, the calculated phase diagrams suggest that knowledge of the dominant assembly pathway and its relationship to control parameters could identify optimal strategies to thwart or redirect assembly to block infection. Finally, analysis of simulation trajectories suggests that the two classes of assembly pathways can be distinguished in single molecule fluorescence correlation spectroscopy or bulk time resolved small angle x-ray scattering experiments.
|
1512.09133
|
Tomas Ros
|
Tomas Ros, Paul Frewen, Jean Theberge, Rosemarie Kluetsch, Andreas
Mueller, Gian Candrian, Rakesh Jetly, Patrik Vuilleumier, Ruth Lanius
|
Neurofeedback Tunes Scale-Free Dynamics in Spontaneous Brain Activity
| null |
Cerebral Cortex (2016)
|
10.1093/cercor/bhw285
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain oscillations exhibit long-range temporal correlations (LRTCs), which
reflect the regularity of their fluctuations: low values representing more
random (decorrelated) while high values more persistent (correlated) dynamics.
LRTCs constitute supporting evidence that the brain operates near criticality,
a state where neuronal activities are balanced between order and randomness.
Here, healthy adults used closed-loop brain training (neurofeedback, NFB) to
reduce the amplitude of alpha oscillations, producing a significant increase in
spontaneous LRTCs post-training. This effect was reproduced in patients with
post-traumatic stress disorder, where abnormally random dynamics were reversed
by NFB, correlating with significant improvements in hyperarousal. Notably,
regions manifesting abnormally low LRTCs (i.e., excessive randomness)
normalized toward healthy population levels, consistent with theoretical
predictions about self-organized criticality. Hence, when exposed to
appropriate training, spontaneous cortical activity reveals a residual capacity
for "self-tuning" its own temporal complexity, despite manifesting the abnormal
dynamics seen in individuals with psychiatric disorder. Lastly, we observed an
inverse-U relationship between strength of LRTC and oscillation amplitude,
suggesting a breakdown of long-range dependence at high/low synchronization
extremes, in line with recent computational models. Together, our findings
offer a broader mechanistic framework for motivating research and clinical
applications of NFB, encompassing disorders with perturbed LRTCs.
|
[
{
"created": "Wed, 30 Dec 2015 20:58:03 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Mar 2016 09:16:18 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Feb 2018 10:18:50 GMT",
"version": "v3"
}
] |
2018-02-05
|
[
[
"Ros",
"Tomas",
""
],
[
"Frewen",
"Paul",
""
],
[
"Theberge",
"Jean",
""
],
[
"Kluetsch",
"Rosemarie",
""
],
[
"Mueller",
"Andreas",
""
],
[
"Candrian",
"Gian",
""
],
[
"Jetly",
"Rakesh",
""
],
[
"Vuilleumier",
"Patrik",
""
],
[
"Lanius",
"Ruth",
""
]
] |
Brain oscillations exhibit long-range temporal correlations (LRTCs), which reflect the regularity of their fluctuations: low values representing more random (decorrelated) while high values more persistent (correlated) dynamics. LRTCs constitute supporting evidence that the brain operates near criticality, a state where neuronal activities are balanced between order and randomness. Here, healthy adults used closed-loop brain training (neurofeedback, NFB) to reduce the amplitude of alpha oscillations, producing a significant increase in spontaneous LRTCs post-training. This effect was reproduced in patients with post-traumatic stress disorder, where abnormally random dynamics were reversed by NFB, correlating with significant improvements in hyperarousal. Notably, regions manifesting abnormally low LRTCs (i.e., excessive randomness) normalized toward healthy population levels, consistent with theoretical predictions about self-organized criticality. Hence, when exposed to appropriate training, spontaneous cortical activity reveals a residual capacity for "self-tuning" its own temporal complexity, despite manifesting the abnormal dynamics seen in individuals with psychiatric disorder. Lastly, we observed an inverse-U relationship between strength of LRTC and oscillation amplitude, suggesting a breakdown of long-range dependence at high/low synchronization extremes, in line with recent computational models. Together, our findings offer a broader mechanistic framework for motivating research and clinical applications of NFB, encompassing disorders with perturbed LRTCs.
|
1707.00769
|
Emilio Gallicchio
|
Emilio Gallicchio
|
Alchemical Response Parameters from an Analytical Model of Molecular
Binding
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a parameterized analytical model of alchemical molecular binding.
The model describes accurately the free energy profiles of linear
single-decoupling alchemical binding free energy calculations. The parameters
of the model, which are physically motivated, are obtained by fitting model
predictions to numerical simulations. The validity of the model has been
assessed on a set of host-guest complexes. The model faithfully reproduces both
the binding free energy profiles and the probability densities of the
perturbation energy as a function of the alchemical progress parameter
$\lambda$. The model offers a rationalization for the characteristic shape of
the free energy profiles. The parameters obtained from the model are
potentially useful descriptors of the association equilibrium of molecular
complexes.
|
[
{
"created": "Mon, 3 Jul 2017 22:00:33 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jul 2017 17:09:19 GMT",
"version": "v2"
}
] |
2017-07-06
|
[
[
"Gallicchio",
"Emilio",
""
]
] |
We present a parameterized analytical model of alchemical molecular binding. The model describes accurately the free energy profiles of linear single-decoupling alchemical binding free energy calculations. The parameters of the model, which are physically motivated, are obtained by fitting model predictions to numerical simulations. The validity of the model has been assessed on a set of host-guest complexes. The model faithfully reproduces both the binding free energy profiles and the probability densities of the perturbation energy as a function of the alchemical progress parameter $\lambda$. The model offers a rationalization for the characteristic shape of the free energy profiles. The parameters obtained from the model are potentially useful descriptors of the association equilibrium of molecular complexes.
|
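The abstract above describes obtaining physically motivated model parameters by fitting model predictions to numerical simulations. That workflow can be sketched generically with a closed-form least-squares fit; the two-parameter functional form and the synthetic "simulation" data below are illustrative placeholders, not the paper's actual model:

```python
import random

def fit_two_param(xs, ys):
    """Least-squares fit of y ~ a*x + b*x**2 via the 2x2 normal equations."""
    s11 = sum(x * x for x in xs)
    s12 = sum(x ** 3 for x in xs)
    s22 = sum(x ** 4 for x in xs)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# synthetic "simulation" data along the alchemical progress parameter,
# generated with true parameters a = 2.0, b = -1.0 plus small noise
rng = random.Random(42)
lambdas = [i / 20 for i in range(1, 21)]
energies = [2.0 * l - 1.0 * l * l + rng.gauss(0, 0.01) for l in lambdas]
a, b = fit_two_param(lambdas, energies)
```

The fitted `(a, b)` recover the generating parameters to within the noise, which is the sense in which fitted parameters can serve as descriptors of the underlying data.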
1609.08213
|
Chandan Singh
|
Chandan Singh, William B. Levy
|
Complexity Leads to Simplicity: A Consensus Layer V Pyramidal Neuron Can
Sustain Interpulse-Interval Coding
|
submitted to PLOS Computational Biology
|
Published in PLOS One 2017
|
10.1371/journal.pone.0180839
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In terms of the long-distance communication of a single neuron, interpulse
intervals (IPIs) are a possible alternative to rate and binary codes. As a
proxy for IPI, the time-to-spike (TTS) for a neuron can be found in the
biophysical and experimental literature. Using the current consensus layer V
pyramidal neuron model, the present study examines the feasibility of IPI
coding and the noise sources that limit the information rate of such an
encoding. In descending order of intensity, the noise sources are (i) synaptic
variability, (ii) sodium channel shot noise, and (iii) thermal noise, with
synaptic noise much greater than the sodium channel noise. More
importantly, the biophysical model demonstrates a linear relationship between
input intensity and inverse TTS. This linear observation contradicts the
assumption that a neuron should be treated as a passive electronic circuit (an
RC circuit, as in the Stein model). Finally, the biophysical simulations allow
the calculation of mutual information, which is about 3.0 bits/spike.
|
[
{
"created": "Mon, 26 Sep 2016 22:31:25 GMT",
"version": "v1"
}
] |
2019-01-15
|
[
[
"Singh",
"Chandan",
""
],
[
"Levy",
"William B.",
""
]
] |
In terms of the long-distance communication of a single neuron, interpulse intervals (IPIs) are a possible alternative to rate and binary codes. As a proxy for IPI, the time-to-spike (TTS) for a neuron can be found in the biophysical and experimental literature. Using the current consensus layer V pyramidal neuron model, the present study examines the feasibility of IPI coding and the noise sources that limit the information rate of such an encoding. In descending order of intensity, the noise sources are (i) synaptic variability, (ii) sodium channel shot noise, and (iii) thermal noise, with synaptic noise much greater than the sodium channel noise. More importantly, the biophysical model demonstrates a linear relationship between input intensity and inverse TTS. This linear observation contradicts the assumption that a neuron should be treated as a passive electronic circuit (an RC circuit, as in the Stein model). Finally, the biophysical simulations allow the calculation of mutual information, which is about 3.0 bits/spike.
|
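The mutual-information figure quoted in the abstract above (about 3.0 bits/spike) comes from comparing stimulus intensity with the resulting spike timing. A generic plug-in estimator over a joint histogram of (stimulus, response) samples looks like this; the toy channel below is illustrative, not the paper's simulation data:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in mutual information estimate, in bits, from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of the stimulus
    py = Counter(y for _, y in pairs)    # marginal counts of the response
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts cancelled to c*n
        mi += (c / n) * math.log2((c * n) / (px[x] * py[y]))
    return mi

# noiseless toy channel: the response bin is fully determined by the stimulus,
# so MI equals the stimulus entropy (2 bits for 4 equiprobable stimuli)
samples = [(s, s * 10) for s in [0, 1, 2, 3] * 25]
mi = mutual_information(samples)  # -> 2.0 bits
```

With noisy responses (as in the biophysical simulations), the joint histogram spreads out and the estimate drops below the stimulus entropy accordingly.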
2102.11667
|
Larry Bull
|
Larry Bull
|
On Sexual Selection
|
arXiv admin note: substantial text overlap with arXiv:1808.03471,
arXiv:1903.07429, arXiv:1811.04073, arXiv:2004.10061
| null | null | null |
q-bio.PE cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sexual selection is a fundamental aspect of evolution for all eukaryotic
organisms with mating types. This paper suggests intersexual selection is best
viewed as a mechanism to compensate for the unavoidable dynamics of coevolution
between sexes that emerge with isogamy. Using the NK model of fitness
landscapes, the conditions under which allosomes emerge are first explored.
This extends previous work on the evolution of sex, in which the fitness-landscape
smoothing of a rudimentary form of the Baldwin effect is suggested as the
underlying cause.
varying fitness landscape size, ruggedness, and connectedness can vary the
conditions under which a very simple sexual selection mechanism proves
beneficial. This is found to be the case whether one or both sexes exploit
sexual selection.
|
[
{
"created": "Tue, 23 Feb 2021 12:47:58 GMT",
"version": "v1"
}
] |
2021-02-24
|
[
[
"Bull",
"Larry",
""
]
] |
Sexual selection is a fundamental aspect of evolution for all eukaryotic organisms with mating types. This paper suggests intersexual selection is best viewed as a mechanism to compensate for the unavoidable dynamics of coevolution between sexes that emerge with isogamy. Using the NK model of fitness landscapes, the conditions under which allosomes emerge are first explored. This extends previous work on the evolution of sex, in which the fitness-landscape smoothing of a rudimentary form of the Baldwin effect is suggested as the underlying cause. The NKCS model of coevolution is then used to show how varying fitness landscape size, ruggedness, and connectedness can vary the conditions under which a very simple sexual selection mechanism proves beneficial. This is found to be the case whether one or both sexes exploit sexual selection.
|
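The NK model referenced in the last abstract assigns each of N binary loci a fitness contribution that depends on its own allele plus K others, drawn from random lookup tables; tuning K tunes landscape ruggedness. A minimal sketch, assuming circular neighbourhoods and contributions averaged over loci (the paper's exact setup may differ):

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Random NK fitness landscape: locus i's contribution depends on its own
    bit and its K right-hand neighbours (circular), via a random lookup table."""
    rng = random.Random(seed)
    # one table per locus, indexed by the (K+1)-bit neighbourhood pattern
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | genome[(i + j) % n]
            total += tables[i][idx]
        return total / n  # mean contribution, in [0, 1)

    return fitness

f = make_nk_landscape(n=10, k=2)
base = f([0] * 10)
```

With K = 0 each locus contributes independently and the landscape has a single peak; increasing K couples loci and multiplies local optima, which is how the models above vary ruggedness.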