id
stringlengths 9
13
| submitter
stringlengths 4
48
| authors
stringlengths 4
9.62k
| title
stringlengths 4
343
| comments
stringlengths 2
480
⌀ | journal-ref
stringlengths 9
309
⌀ | doi
stringlengths 12
138
⌀ | report-no
stringclasses 277
values | categories
stringlengths 8
87
| license
stringclasses 9
values | orig_abstract
stringlengths 27
3.76k
| versions
listlengths 1
15
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
147
| abstract
stringlengths 24
3.75k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1202.6573
|
Francesc Rossell\'o
|
Gabriel Cardona, Arnau Mir, Francesc Rossello
|
Exact formulas for the variance of several balance indices under the
Yule model
|
32 pages. The final version will appear in the J. Comp. Biol. v2
covers the Colless index, which did not appear in v1
| null | null | null |
q-bio.PE math.PR q-bio.QM
|
http://creativecommons.org/licenses/publicdomain/
|
One of the main applications of balance indices is in tests of null models of
evolutionary processes. The knowledge of an exact formula for a statistic of a
balance index, holding for any number n of leaves, is necessary in order to use
this statistic in tests of this kind involving trees of any size. In this paper
we obtain exact formulas for the variance under the Yule model of the Sackin
index, the Colless index and the total cophenetic index of binary rooted
phylogenetic trees with n leaves. We also obtain the covariance of the Sackin
and the total cophenetic index.
|
[
{
"created": "Wed, 29 Feb 2012 15:28:47 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Oct 2012 07:55:16 GMT",
"version": "v2"
}
] |
2012-10-22
|
[
[
"Cardona",
"Gabriel",
""
],
[
"Mir",
"Arnau",
""
],
[
"Rossello",
"Francesc",
""
]
] |
One of the main applications of balance indices is in tests of null models of evolutionary processes. The knowledge of an exact formula for a statistic of a balance index, holding for any number n of leaves, is necessary in order to use this statistic in tests of this kind involving trees of any size. In this paper we obtain exact formulas for the variance under the Yule model of the Sackin index, the Colless index and the total cophenetic index of binary rooted phylogenetic trees with n leaves. We also obtain the covariance of the Sackin and the total cophenetic index.
|
1310.3316
|
Mike Steel Prof.
|
Benny Chor and Mike Steel
|
Tree split probabilities determine the branch lengths
|
12 pages, 1 figure
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The evolution of aligned DNA sequence sites is generally modeled by a Markov
process operating along the edges of a phylogenetic tree. It is well known that
the probability distribution on the site patterns at the tips of the tree
determines the tree and its branch lengths. However, the number of patterns is
typically much larger than the number of edges, suggesting considerable
redundancy in the branch length estimation. In this paper we ask whether the
probabilities of just the `edge-specific' patterns (the ones that correspond to
a change of state on a single edge) suffice to recover the branch lengths of
the tree, under a symmetric 2-state Markov process. We first show that this
holds provided the branch lengths are sufficiently short, by applying the
inverse function theorem. We then consider whether this restriction to short
branch lengths is necessary, and show that for trees with up to four leaves it
can be lifted. This leaves open the interesting question of whether this holds
in general.
|
[
{
"created": "Sat, 12 Oct 2013 00:13:38 GMT",
"version": "v1"
}
] |
2013-10-15
|
[
[
"Chor",
"Benny",
""
],
[
"Steel",
"Mike",
""
]
] |
The evolution of aligned DNA sequence sites is generally modeled by a Markov process operating along the edges of a phylogenetic tree. It is well known that the probability distribution on the site patterns at the tips of the tree determines the tree and its branch lengths. However, the number of patterns is typically much larger than the number of edges, suggesting considerable redundancy in the branch length estimation. In this paper we ask whether the probabilities of just the `edge-specific' patterns (the ones that correspond to a change of state on a single edge) suffice to recover the branch lengths of the tree, under a symmetric 2-state Markov process. We first show that this holds provided the branch lengths are sufficiently short, by applying the inverse function theorem. We then consider whether this restriction to short branch lengths is necessary, and show that for trees with up to four leaves it can be lifted. This leaves open the interesting question of whether this holds in general.
|
1611.03965
|
Reza Ebrahimpour
|
Farzaneh Olianezhad, Maryam Tohidi-Moghaddam, Sajjad Zabbah, Reza
Ebrahimpour
|
Residual Information of Previous Decision Affects Evidence Accumulation
in Current Decision
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bias in perceptual decisions comes to pass when the advance knowledge
colludes with the current sensory evidence in support of the final choice. The
literature on decision making suggests two main hypotheses to account for this
kind of bias: internal bias signals are derived from (a) the residual of motor
response-related signals, and (b) the sensory information residues of the
decisions that we made in the past. Beside these hypotheses, a credible
hypothesis proposed by this study to explain the cause of decision biasing,
suggests that the decision-related neuron can make use of the residual
information of the previous decision for the current decision. We demonstrate
the validity of this assumption, first by performing behavioral experiments
based on the two-alternative forced-choice (TAFC) discrimination of motion
direction paradigms and then, we modified the pure drift-diffusion model (DDM)
based on accumulation to the bound mechanism to account for the sequential
effect. In both cases, the trace of the previous trial influences the current
decision. Results indicate that the probability of being correct in a current
decision increases if it is in line with the previously made decision. Also,
the model that keeps the previous decision information provides a better fit to
the behavioral data. Our findings suggest that the state of a decision variable
which is represented in the activity of decision-related neurons after crossing
the bound (in the previous decision) can accumulate with the decision variable
for the current decision in consecutive trials.
|
[
{
"created": "Sat, 12 Nov 2016 07:33:13 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Oct 2017 11:26:03 GMT",
"version": "v2"
}
] |
2017-10-17
|
[
[
"Olianezhad",
"Farzaneh",
""
],
[
"Tohidi-Moghaddam",
"Maryam",
""
],
[
"Zabbah",
"Sajjad",
""
],
[
"Ebrahimpour",
"Reza",
""
]
] |
Bias in perceptual decisions comes to pass when the advance knowledge colludes with the current sensory evidence in support of the final choice. The literature on decision making suggests two main hypotheses to account for this kind of bias: internal bias signals are derived from (a) the residual of motor response-related signals, and (b) the sensory information residues of the decisions that we made in the past. Beside these hypotheses, a credible hypothesis proposed by this study to explain the cause of decision biasing, suggests that the decision-related neuron can make use of the residual information of the previous decision for the current decision. We demonstrate the validity of this assumption, first by performing behavioral experiments based on the two-alternative forced-choice (TAFC) discrimination of motion direction paradigms and then, we modified the pure drift-diffusion model (DDM) based on accumulation to the bound mechanism to account for the sequential effect. In both cases, the trace of the previous trial influences the current decision. Results indicate that the probability of being correct in a current decision increases if it is in line with the previously made decision. Also, the model that keeps the previous decision information provides a better fit to the behavioral data. Our findings suggest that the state of a decision variable which is represented in the activity of decision-related neurons after crossing the bound (in the previous decision) can accumulate with the decision variable for the current decision in consecutive trials.
|
1604.00385
|
Stephen Plaza
|
Stephen M. Plaza and Stuart E. Berg
|
Large-Scale Electron Microscopy Image Segmentation in Spark
| null | null | null | null |
q-bio.QM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging field of connectomics aims to unlock the mysteries of the brain
by understanding the connectivity between neurons. To map this connectivity, we
acquire thousands of electron microscopy (EM) images with nanometer-scale
resolution. After aligning these images, the resulting dataset has the
potential to reveal the shapes of neurons and the synaptic connections between
them. However, imaging the brain of even a tiny organism like the fruit fly
yields terabytes of data. It can take years of manual effort to examine such
image volumes and trace their neuronal connections. One solution is to apply
image segmentation algorithms to help automate the tracing tasks. In this
paper, we propose a novel strategy to apply such segmentation on very large
datasets that exceed the capacity of a single machine. Our solution is robust
to potential segmentation errors which could otherwise severely compromise the
quality of the overall segmentation, for example those due to poor classifier
generalizability or anomalies in the image dataset. We implement our algorithms
in a Spark application which minimizes disk I/O, and apply them to a few large
EM datasets, revealing both their effectiveness and scalability. We hope this
work will encourage external contributions to EM segmentation by providing 1) a
flexible plugin architecture that deploys easily on different cluster
environments and 2) an in-memory representation of segmentation that could be
conducive to new advances.
|
[
{
"created": "Fri, 1 Apr 2016 19:53:30 GMT",
"version": "v1"
}
] |
2016-04-04
|
[
[
"Plaza",
"Stephen M.",
""
],
[
"Berg",
"Stuart E.",
""
]
] |
The emerging field of connectomics aims to unlock the mysteries of the brain by understanding the connectivity between neurons. To map this connectivity, we acquire thousands of electron microscopy (EM) images with nanometer-scale resolution. After aligning these images, the resulting dataset has the potential to reveal the shapes of neurons and the synaptic connections between them. However, imaging the brain of even a tiny organism like the fruit fly yields terabytes of data. It can take years of manual effort to examine such image volumes and trace their neuronal connections. One solution is to apply image segmentation algorithms to help automate the tracing tasks. In this paper, we propose a novel strategy to apply such segmentation on very large datasets that exceed the capacity of a single machine. Our solution is robust to potential segmentation errors which could otherwise severely compromise the quality of the overall segmentation, for example those due to poor classifier generalizability or anomalies in the image dataset. We implement our algorithms in a Spark application which minimizes disk I/O, and apply them to a few large EM datasets, revealing both their effectiveness and scalability. We hope this work will encourage external contributions to EM segmentation by providing 1) a flexible plugin architecture that deploys easily on different cluster environments and 2) an in-memory representation of segmentation that could be conducive to new advances.
|
0908.0230
|
Marc Joyeux
|
Ana-Maria Florescu, Marc Joyeux and Benedicte Lafay
|
Modeling of two-dimensional DNA display
|
accepted in Electrophoresis
|
Electrophoresis 30 (2009) 3649
|
10.1002/elps.200900258
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
2D display is a fast and economical way of visualizing polymorphism and
comparing genomes, which is based on the separation of DNA fragments in two
steps, according first to their size and then to their sequence composition. In
this paper, we present an exhaustive study of the numerical issues associated
with a model aimed at predicting the final absolute locations of DNA fragments
in 2D display experiments. We show that simple expressions for the mobility of
DNA fragments in both dimensions allow one to reproduce experimental final
absolute locations to better than experimental uncertainties. On the other
hand, our simulations also point out that the results of 2D display experiments
are not sufficient to determine the best set of parameters for the modeling of
fragments separation in the second dimension and that additional detailed
measurements of the mobility of a few sequences are necessary to achieve this
goal. We hope that this work will help in establishing simulations as a
powerful tool to optimize experimental conditions without having to perform a
large number of preliminary experiments and to estimate whether 2D DNA display
is suited to identify a mutation or a genetic difference that is expected to
exist between the genomes of closely related organisms.
|
[
{
"created": "Mon, 3 Aug 2009 11:27:09 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Aug 2009 06:56:40 GMT",
"version": "v2"
}
] |
2009-10-29
|
[
[
"Florescu",
"Ana-Maria",
""
],
[
"Joyeux",
"Marc",
""
],
[
"Lafay",
"Benedicte",
""
]
] |
2D display is a fast and economical way of visualizing polymorphism and comparing genomes, which is based on the separation of DNA fragments in two steps, according first to their size and then to their sequence composition. In this paper, we present an exhaustive study of the numerical issues associated with a model aimed at predicting the final absolute locations of DNA fragments in 2D display experiments. We show that simple expressions for the mobility of DNA fragments in both dimensions allow one to reproduce experimental final absolute locations to better than experimental uncertainties. On the other hand, our simulations also point out that the results of 2D display experiments are not sufficient to determine the best set of parameters for the modeling of fragments separation in the second dimension and that additional detailed measurements of the mobility of a few sequences are necessary to achieve this goal. We hope that this work will help in establishing simulations as a powerful tool to optimize experimental conditions without having to perform a large number of preliminary experiments and to estimate whether 2D DNA display is suited to identify a mutation or a genetic difference that is expected to exist between the genomes of closely related organisms.
|
0710.2739
|
Dion Whitehead
|
Dion J. Whitehead, Claus O. Wilke, David Vernazobres, and Erich
Bornberg-Bauer
|
The look-ahead effect of phenotypic mutations
|
Submitted to "Genetics"
| null | null | null |
q-bio.PE
| null |
The evolution of complex molecular traits such as disulphide bridges often
requires multiple mutations. The intermediate steps in such evolutionary
trajectories are likely to be selectively neutral or deleterious. Therefore,
large populations and long times may be required to evolve such traits. We
propose that errors in transcription and translation may allow selection for
the intermediate mutations if the final trait provides a large enough selective
advantage. We test this hypothesis using a population based model of protein
evolution. If an individual acquires one of two mutations needed for a novel
trait, the second mutation can be introduced into the phenotype due to
transcription and translation errors. If the novel trait is advantageous
enough, the allele with only one mutation will spread through the population,
even though the gene sequence does not yet code for the complete trait. The
first mutation then has a higher frequency than expected without phenotypic
mutations giving the second mutation a higher probability of fixation. Thus,
errors allow protein sequences to ''look-ahead'' for a more direct path to a
complex trait.
|
[
{
"created": "Mon, 15 Oct 2007 08:47:00 GMT",
"version": "v1"
}
] |
2007-10-16
|
[
[
"Whitehead",
"Dion J.",
""
],
[
"Wilke",
"Claus O.",
""
],
[
"Vernazobres",
"David",
""
],
[
"Bornberg-Bauer",
"Erich",
""
]
] |
The evolution of complex molecular traits such as disulphide bridges often requires multiple mutations. The intermediate steps in such evolutionary trajectories are likely to be selectively neutral or deleterious. Therefore, large populations and long times may be required to evolve such traits. We propose that errors in transcription and translation may allow selection for the intermediate mutations if the final trait provides a large enough selective advantage. We test this hypothesis using a population based model of protein evolution. If an individual acquires one of two mutations needed for a novel trait, the second mutation can be introduced into the phenotype due to transcription and translation errors. If the novel trait is advantageous enough, the allele with only one mutation will spread through the population, even though the gene sequence does not yet code for the complete trait. The first mutation then has a higher frequency than expected without phenotypic mutations giving the second mutation a higher probability of fixation. Thus, errors allow protein sequences to ''look-ahead'' for a more direct path to a complex trait.
|
1510.04455
|
Masafumi Oizumi
|
Masafumi Oizumi, Naotsugu Tsuchiya, and Shun-ichi Amari
|
A unified framework for information integration based on information
geometry
| null | null |
10.1073/pnas.1603583113
| null |
q-bio.NC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a unified theoretical framework for quantifying spatio-temporal
interactions in a stochastic dynamical system based on information geometry. In
the proposed framework, the degree of interactions is quantified by the
divergence between the actual probability distribution of the system and a
constrained probability distribution where the interactions of interest are
disconnected. This framework provides novel geometric interpretations of
various information theoretic measures of interactions, such as mutual
information, transfer entropy, and stochastic interaction in terms of how
interactions are disconnected. The framework therefore provides an intuitive
understanding of the relationships between the various quantities. By extending
the concept of transfer entropy, we propose a novel measure of integrated
information which measures causal interactions between parts of a system.
Integrated information quantifies the extent to which the whole is more than
the sum of the parts and can be potentially used as a biological measure of the
levels of consciousness.
|
[
{
"created": "Thu, 15 Oct 2015 09:20:13 GMT",
"version": "v1"
}
] |
2016-12-08
|
[
[
"Oizumi",
"Masafumi",
""
],
[
"Tsuchiya",
"Naotsugu",
""
],
[
"Amari",
"Shun-ichi",
""
]
] |
We propose a unified theoretical framework for quantifying spatio-temporal interactions in a stochastic dynamical system based on information geometry. In the proposed framework, the degree of interactions is quantified by the divergence between the actual probability distribution of the system and a constrained probability distribution where the interactions of interest are disconnected. This framework provides novel geometric interpretations of various information theoretic measures of interactions, such as mutual information, transfer entropy, and stochastic interaction in terms of how interactions are disconnected. The framework therefore provides an intuitive understanding of the relationships between the various quantities. By extending the concept of transfer entropy, we propose a novel measure of integrated information which measures causal interactions between parts of a system. Integrated information quantifies the extent to which the whole is more than the sum of the parts and can be potentially used as a biological measure of the levels of consciousness.
|
1912.02991
|
Yuriy Shckorbatov G
|
Yuriy Shckorbatov
|
Chromatin Structure Changes in Human Disease: A Mini Review
|
5 pages
| null | null | null |
q-bio.CB q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many experimental data indicating the correlations of the changes
in high level of organization of chromatin in human cells and changes in the
state of the whole organism related to disease, state of tiredness or aging. In
our previous work: arXiv.org-2018 (1812.00186) we analyzed the publications on
the topic up to 2017. In this work we focused on works upon the problem of
connection of the state of chromatin with human diseases published in
2018-2019. In the modern literature the most attention is paid to problem of
chromatin transformations in different forms of cancer, Alzheimer'r disease,
and hereditary diseases. Summing up, the tendency of scientific research of
noncommunicable diseases is shifting towards investigation of aspects of
nuclear regulation of disease origin, connected with conformation of chromatin.
|
[
{
"created": "Fri, 6 Dec 2019 06:25:03 GMT",
"version": "v1"
}
] |
2019-12-09
|
[
[
"Shckorbatov",
"Yuriy",
""
]
] |
There are many experimental data indicating the correlations of the changes in high level of organization of chromatin in human cells and changes in the state of the whole organism related to disease, state of tiredness or aging. In our previous work: arXiv.org-2018 (1812.00186) we analyzed the publications on the topic up to 2017. In this work we focused on works upon the problem of connection of the state of chromatin with human diseases published in 2018-2019. In the modern literature the most attention is paid to problem of chromatin transformations in different forms of cancer, Alzheimer'r disease, and hereditary diseases. Summing up, the tendency of scientific research of noncommunicable diseases is shifting towards investigation of aspects of nuclear regulation of disease origin, connected with conformation of chromatin.
|
1304.0052
|
Jingzhi Pu Jingzhi Pu
|
Yan Zhou, Pedro Ojeda-May, and Jingzhi Pu
|
H-loop Histidine Catalyzes ATP Hydrolysis in the E. coli ABC-Transporter
HlyB
| null |
Phys. Chem. Chem. Phys. 2013, 15, 15811-15815
|
10.1039/C3CP50965F
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adenosine triphosphate (ATP)-binding cassette (ABC) transporters form a
family of molecular motor proteins that couple ATP hydrolysis to substrate
translocation across cell membranes. Each nucleotide binding domain of
ABC-transporters contains a highly conserved H-loop Histidine residue, whose
precise mechanistic role to motor functions has remained elusive. By using
combined quantum mechanical and molecular mechanical calculations, we showed
that the conserved H-loop residue H662 in E. coli HlyB, a bacterial ABC
transporter, can act first as a general acid and then as a general base to
facilitate proton transfers in ATP hydrolysis. Without the assistance of H662,
direct proton transfer from the lytic water to ATP results in a greatly
elevated barrier height. Our findings suggest that the essential function of
the H-loop residue H662 is to provide a "chemical linchpin" that shuttles
protons between reactants through a relay mechanism, thereby catalyzing ATP
hydrolysis in HlyB.
|
[
{
"created": "Sat, 30 Mar 2013 00:28:58 GMT",
"version": "v1"
}
] |
2017-09-13
|
[
[
"Zhou",
"Yan",
""
],
[
"Ojeda-May",
"Pedro",
""
],
[
"Pu",
"Jingzhi",
""
]
] |
Adenosine triphosphate (ATP)-binding cassette (ABC) transporters form a family of molecular motor proteins that couple ATP hydrolysis to substrate translocation across cell membranes. Each nucleotide binding domain of ABC-transporters contains a highly conserved H-loop Histidine residue, whose precise mechanistic role to motor functions has remained elusive. By using combined quantum mechanical and molecular mechanical calculations, we showed that the conserved H-loop residue H662 in E. coli HlyB, a bacterial ABC transporter, can act first as a general acid and then as a general base to facilitate proton transfers in ATP hydrolysis. Without the assistance of H662, direct proton transfer from the lytic water to ATP results in a greatly elevated barrier height. Our findings suggest that the essential function of the H-loop residue H662 is to provide a "chemical linchpin" that shuttles protons between reactants through a relay mechanism, thereby catalyzing ATP hydrolysis in HlyB.
|
1609.04856
|
Ilya Nemenman
|
Xinxian Shao, Bruce R. Levin, Ilya Nemenman
|
Single variant bottleneck in the early dynamics of H. influenzae
bacteremia in neonatal rats questions the theory of independent action
|
16 pages, 7 figures
| null |
10.1088/1478-3975/aa731b
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is an abundance of information about the genetic basis, physiological
and molecular mechanisms of bacterial pathogenesis. In contrast, relatively
little is known about population dynamic processes, by which bacteria colonize
hosts and invade tissues and cells and thereby cause disease. In an article
published in 1978, Moxon and Murphy presented evidence that, when inoculated
intranasally with a mixture streptomycin sensitive and resistant (Sm$^S$ and
Sm$^R$) and otherwise isogenic stains of Haemophilus influenzae type b (Hib),
neonatal rats develop a bacteremic infection that often is dominated by only
one strain, Sm$^S$ or Sm$^R$. After rulling out other possibilities through
years of related experiments, the field seems to have settled on a plausible
explanation for this phenomenon: the first bacterium to invade the host
activates the host immune response that `shuts the door' on the second invading
strain. To explore this hypothesis in a necessarily quantitative way, we
modeled this process with a set of mixed stochastic and deterministic
differential equations. Our analysis of the properties of this model with
realistic parameters suggests that this hypothesis cannot explain the
experimental results of Moxon and Murphy, and in particular the observed
relationship between the frequency of different types of blood infections
(bacteremias) and the inoculum size. We propose modifications to the model that
come closer to explaining these data. However, the modified and better fitting
model contradicts the common theory of independent action of individual
bacteria in establishing infections. We discuss the implications of these
results.
|
[
{
"created": "Thu, 15 Sep 2016 20:48:58 GMT",
"version": "v1"
}
] |
2017-06-28
|
[
[
"Shao",
"Xinxian",
""
],
[
"Levin",
"Bruce R.",
""
],
[
"Nemenman",
"Ilya",
""
]
] |
There is an abundance of information about the genetic basis, physiological and molecular mechanisms of bacterial pathogenesis. In contrast, relatively little is known about population dynamic processes, by which bacteria colonize hosts and invade tissues and cells and thereby cause disease. In an article published in 1978, Moxon and Murphy presented evidence that, when inoculated intranasally with a mixture streptomycin sensitive and resistant (Sm$^S$ and Sm$^R$) and otherwise isogenic stains of Haemophilus influenzae type b (Hib), neonatal rats develop a bacteremic infection that often is dominated by only one strain, Sm$^S$ or Sm$^R$. After rulling out other possibilities through years of related experiments, the field seems to have settled on a plausible explanation for this phenomenon: the first bacterium to invade the host activates the host immune response that `shuts the door' on the second invading strain. To explore this hypothesis in a necessarily quantitative way, we modeled this process with a set of mixed stochastic and deterministic differential equations. Our analysis of the properties of this model with realistic parameters suggests that this hypothesis cannot explain the experimental results of Moxon and Murphy, and in particular the observed relationship between the frequency of different types of blood infections (bacteremias) and the inoculum size. We propose modifications to the model that come closer to explaining these data. However, the modified and better fitting model contradicts the common theory of independent action of individual bacteria in establishing infections. We discuss the implications of these results.
|
1602.05847
|
Fernando Vericat
|
C. Manuel Carlevaro, Ramiro M. Irastorza and Fernando Vericat
|
Chirality in a quaternionic representation of the genetic code
|
17 pages, 9 figures. arXiv admin note: substantial text overlap with
arXiv:1505.04656
| null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A quaternionic representation of the genetic code, previously reported by the
authors, is updated in order to incorporate chirality of nucleotide bases and
amino acids. The original representation assigns to each nucleotide base a
prime integer quaternion of norm 7 and involves a function that associates with
each codon, represented by three of these quaternions, another integer
quaternion (amino acid type quaternion) in such a way that the essentials of
the standard genetic code (particulaty its degeneration) are preserved. To show
the advantages of such a quaternionic representation we have, in turn,
associated with each amino acid of a given protein, besides of the type
quaternion, another real one according to its order along the protein (order
quaternion) and have designed an algorithm to go from the primary to the
tertiary structure of the protein by using type and order quaternions. In this
context, we incorporate chirality in our representation by observing that the
set of eight integer quaternions of norm 7 can be partitioned into a pair of
subsets of cardinality four each with their elements mutually conjugates and by
putting they in correspondence one to one with the two sets of enantiomers (D
and L) of the four nucleotide bases adenine, cytosine, guanine and uracil,
respectively. Thus, guided by two diagrams proposed for the codes evolution, we
define functions that in each case assign a L- (D-) amino acid type integer
quaternion to the triplets of D- (L-) bases. The assignation is such that for a
given D-amino acid, the associated integer quaternion is the conjugate of that
one corresponding to the enantiomer L. The chiral type quaternions obtained for
the amino acids are used, together with a common set of order quaternions, to
describe the folding of the two classes, L and D, of homochiral proteins.
|
[
{
"created": "Wed, 30 Dec 2015 14:15:45 GMT",
"version": "v1"
}
] |
2016-02-19
|
[
[
"Carlevaro",
"C. Manuel",
""
],
[
"Irastorza",
"Ramiro M.",
""
],
[
"Vericat",
"Fernando",
""
]
] |
A quaternionic representation of the genetic code, previously reported by the authors, is updated in order to incorporate chirality of nucleotide bases and amino acids. The original representation assigns to each nucleotide base a prime integer quaternion of norm 7 and involves a function that associates with each codon, represented by three of these quaternions, another integer quaternion (amino acid type quaternion) in such a way that the essentials of the standard genetic code (particulaty its degeneration) are preserved. To show the advantages of such a quaternionic representation we have, in turn, associated with each amino acid of a given protein, besides of the type quaternion, another real one according to its order along the protein (order quaternion) and have designed an algorithm to go from the primary to the tertiary structure of the protein by using type and order quaternions. In this context, we incorporate chirality in our representation by observing that the set of eight integer quaternions of norm 7 can be partitioned into a pair of subsets of cardinality four each with their elements mutually conjugates and by putting they in correspondence one to one with the two sets of enantiomers (D and L) of the four nucleotide bases adenine, cytosine, guanine and uracil, respectively. Thus, guided by two diagrams proposed for the codes evolution, we define functions that in each case assign a L- (D-) amino acid type integer quaternion to the triplets of D- (L-) bases. The assignation is such that for a given D-amino acid, the associated integer quaternion is the conjugate of that one corresponding to the enantiomer L. The chiral type quaternions obtained for the amino acids are used, together with a common set of order quaternions, to describe the folding of the two classes, L and D, of homochiral proteins.
|
1310.1334
|
Yong Xu
|
Yong Xu, Xiaoqin Jin, Huiqing Zhang
|
The parallel logic gates in synthetic gene networks induced by
non-Gausssian noise
| null | null |
10.1103/PhysRevE.88.052721
| null |
q-bio.MN
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The newly appeared idea of Logical Stochastic Resonance (LSR) is verified in
synthetic gene networks induced by non-Gaussian noise. We realize the switching
between two kinds of logic gates under optimal moderate noise intensity by
varying two different tunable parameters in a single gene network. Furthermore,
in order to obtain more logic operations, thus providing additional information
processing capacity, we obtain in a two-dimensional toggle switch model two
complementary logic gates and realize the transformation between two logic
gates via the methods of changing different parameters. These simulated results
contribute to improve the computational power and functionality of the
networks.
|
[
{
"created": "Tue, 17 Sep 2013 05:30:19 GMT",
"version": "v1"
}
] |
2015-06-17
|
[
[
"Xu",
"Yong",
""
],
[
"Jin",
"Xiaoqin",
""
],
[
"Zhang",
"Huiqing",
""
]
] |
The newly appeared idea of Logical Stochastic Resonance (LSR) is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, thus providing additional information processing capacity, we obtain in a two-dimensional toggle switch model two complementary logic gates and realize the transformation between two logic gates via the methods of changing different parameters. These simulated results contribute to improve the computational power and functionality of the networks.
|
q-bio/0510036
|
Gleb Basalyga
|
Gleb Basalyga and Emilio Salinas
|
When Response Variability Increases Neural Network Robustness to
Synaptic Noise
|
26 pages, 7 figures, to appear in Neural Computation
| null | null | null |
q-bio.NC cond-mat.dis-nn
| null |
Cortical sensory neurons are known to be highly variable, in the sense that
responses evoked by identical stimuli often change dramatically from trial to
trial. The origin of this variability is uncertain, but it is usually
interpreted as detrimental noise that reduces the computational accuracy of
neural circuits. Here we investigate the possibility that such response
variability might, in fact, be beneficial, because it may partially compensate
for a decrease in accuracy due to stochastic changes in the synaptic strengths
of a network. We study the interplay between two kinds of noise, response (or
neuronal) noise and synaptic noise, by analyzing their joint influence on the
accuracy of neural networks trained to perform various tasks. We find an
interesting, generic interaction: when fluctuations in the synaptic connections
are proportional to their strengths (multiplicative noise), a certain amount of
response noise in the input neurons can significantly improve network
performance, compared to the same network without response noise. Performance
is enhanced because response noise and multiplicative synaptic noise are in
some ways equivalent. These results are demonstrated analytically for the most
basic network consisting of two input neurons and one output neuron performing
a simple classification task, but computer simulations show that the phenomenon
persists in a wide range of architectures, including recurrent (attractor)
networks and sensory-motor networks that perform coordinate transformations.
The results suggest that response variability could play an important dynamic
role in networks that continuously learn.
|
[
{
"created": "Tue, 18 Oct 2005 20:38:33 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Basalyga",
"Gleb",
""
],
[
"Salinas",
"Emilio",
""
]
] |
Cortical sensory neurons are known to be highly variable, in the sense that responses evoked by identical stimuli often change dramatically from trial to trial. The origin of this variability is uncertain, but it is usually interpreted as detrimental noise that reduces the computational accuracy of neural circuits. Here we investigate the possibility that such response variability might, in fact, be beneficial, because it may partially compensate for a decrease in accuracy due to stochastic changes in the synaptic strengths of a network. We study the interplay between two kinds of noise, response (or neuronal) noise and synaptic noise, by analyzing their joint influence on the accuracy of neural networks trained to perform various tasks. We find an interesting, generic interaction: when fluctuations in the synaptic connections are proportional to their strengths (multiplicative noise), a certain amount of response noise in the input neurons can significantly improve network performance, compared to the same network without response noise. Performance is enhanced because response noise and multiplicative synaptic noise are in some ways equivalent. These results are demonstrated analytically for the most basic network consisting of two input neurons and one output neuron performing a simple classification task, but computer simulations show that the phenomenon persists in a wide range of architectures, including recurrent (attractor) networks and sensory-motor networks that perform coordinate transformations. The results suggest that response variability could play an important dynamic role in networks that continuously learn.
|
1508.06232
|
Stephen Plaza
|
Ting Zhao, Shin-ya Takemura, Gary B. Huang, Jane Anne Horne, William
T. Katz, Kazunori Shinomiya, Louis K. Scheffer, Ian A. Meinertzhagen,
Patricia K. Rivlin, Stephen M. Plaza
|
Large-scale EM Analysis of the Drosophila Antennal Lobe with
Automatically Computed Synapse Point Clouds
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The promise of extracting connectomes and performing useful analysis on large
electron microscopy (EM) datasets has been an elusive dream for many years.
Tracing in even the smallest portions of neuropil requires copious human
annotation, the rate-limiting step for generating a connectome. While a
combination of improved imaging and automatic segmentation will lead to the
analysis of increasingly large volumes, machines still fail to reach the
quality of human tracers. Unfortunately, small errors in image segmentation can
lead to catastrophic distortions of the connectome.
In this paper, to analyze very large datasets, we explore different
mechanisms that are less sensitive to errors in automation. Namely, we advocate
and deploy extensive synapse detection on the entire antennal lobe (AL)
neuropil in the brain of the fruit fly Drosophila, a region much larger than
any densely annotated to date. The resulting synapse point cloud produced is
invaluable for determining compartment boundaries in the AL and choosing
specific regions for subsequent analysis. We introduce our methodology in this
paper for region selection and show both manual and automatic synapse
annotation results. Finally, we note the correspondence between image datasets
obtained using the synaptic marker, antibody nc82, and our datasets enabling
registration between light and EM image modalities.
|
[
{
"created": "Tue, 25 Aug 2015 17:48:55 GMT",
"version": "v1"
}
] |
2015-08-26
|
[
[
"Zhao",
"Ting",
""
],
[
"Takemura",
"Shin-ya",
""
],
[
"Huang",
"Gary B.",
""
],
[
"Horne",
"Jane Anne",
""
],
[
"Katz",
"William T.",
""
],
[
"Shinomiya",
"Kazunori",
""
],
[
"Scheffer",
"Louis K.",
""
],
[
"Meinertzhagen",
"Ian A.",
""
],
[
"Rivlin",
"Patricia K.",
""
],
[
"Plaza",
"Stephen M.",
""
]
] |
The promise of extracting connectomes and performing useful analysis on large electron microscopy (EM) datasets has been an elusive dream for many years. Tracing in even the smallest portions of neuropil requires copious human annotation, the rate-limiting step for generating a connectome. While a combination of improved imaging and automatic segmentation will lead to the analysis of increasingly large volumes, machines still fail to reach the quality of human tracers. Unfortunately, small errors in image segmentation can lead to catastrophic distortions of the connectome. In this paper, to analyze very large datasets, we explore different mechanisms that are less sensitive to errors in automation. Namely, we advocate and deploy extensive synapse detection on the entire antennal lobe (AL) neuropil in the brain of the fruit fly Drosophila, a region much larger than any densely annotated to date. The resulting synapse point cloud produced is invaluable for determining compartment boundaries in the AL and choosing specific regions for subsequent analysis. We introduce our methodology in this paper for region selection and show both manual and automatic synapse annotation results. Finally, we note the correspondence between image datasets obtained using the synaptic marker, antibody nc82, and our datasets enabling registration between light and EM image modalities.
|
2203.12101
|
Josinaldo Menezes
|
J. Menezes, M. Tenorio, E. Rangel
|
Adaptive movement strategy may promote biodiversity in the
rock-paper-scissors model
|
7 pages, 7 figures. arXiv admin note: text overlap with
arXiv:2203.06531
|
Europhysics Letters,139, 57002 (2022)
|
10.1209/0295-5075/ac817a
| null |
q-bio.PE nlin.AO nlin.PS physics.bio-ph physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the role of the adaptive movement strategy in promoting biodiversity
in cyclic models described by the rock-paper-scissors game rules. We assume
that individuals of one out of the species may adjust their movement to escape
hostile regions and stay longer in their comfort zones. Running a series of
stochastic simulations, we calculate the alterations in the spatial patterns
and population densities in scenarios where not all organisms are physically or
cognitively conditioned to perform the behavioural strategy. Although the
adaptive movement strategy is not profitable in terms of territorial dominance
for the species, it may promote biodiversity. Our findings show that if all
individuals are apt to move adaptively, coexistence probability increases for
intermediary mobility. The outcomes also show that even if not all individuals
can react to the signals received from the neighbourhood, biodiversity is still
benefited, but for a shorter mobility range. We find that the improvement in
the coexistence conditions is more accentuated if organisms adjust their
movement intensely and can receive sensory information from longer distances.
We also discover that biodiversity is slightly promoted for high mobility if
the proportion of individuals participating in the strategy is low. Our results
may be helpful for biologists and data scientists to understand adaptive
process learning in system biology.
|
[
{
"created": "Tue, 22 Mar 2022 23:50:55 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Dec 2022 19:31:20 GMT",
"version": "v2"
}
] |
2022-12-27
|
[
[
"Menezes",
"J.",
""
],
[
"Tenorio",
"M.",
""
],
[
"Rangel",
"E.",
""
]
] |
We study the role of the adaptive movement strategy in promoting biodiversity in cyclic models described by the rock-paper-scissors game rules. We assume that individuals of one out of the species may adjust their movement to escape hostile regions and stay longer in their comfort zones. Running a series of stochastic simulations, we calculate the alterations in the spatial patterns and population densities in scenarios where not all organisms are physically or cognitively conditioned to perform the behavioural strategy. Although the adaptive movement strategy is not profitable in terms of territorial dominance for the species, it may promote biodiversity. Our findings show that if all individuals are apt to move adaptively, coexistence probability increases for intermediary mobility. The outcomes also show that even if not all individuals can react to the signals received from the neighbourhood, biodiversity is still benefited, but for a shorter mobility range. We find that the improvement in the coexistence conditions is more accentuated if organisms adjust their movement intensely and can receive sensory information from longer distances. We also discover that biodiversity is slightly promoted for high mobility if the proportion of individuals participating in the strategy is low. Our results may be helpful for biologists and data scientists to understand adaptive process learning in system biology.
|
2005.09890
|
Lars Ailo Bongo
|
Tengel Ekrem Skar, Einar Holsb{\o}, Kristian Svendsen, Lars Ailo Bongo
|
Interactive exploration of population scale pharmacoepidemiology
datasets
| null | null |
10.1145/3388440.3414862
| null |
q-bio.QM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Population-scale drug prescription data linked with adverse drug reaction
(ADR) data supports the fitting of models large enough to detect drug use and
ADR patterns that are not detectable using traditional methods on smaller
datasets. However, detecting ADR patterns in large datasets requires tools for
scalable data processing, machine learning for data analysis, and interactive
visualization. To our knowledge no existing pharmacoepidemiology tool supports
all three requirements. We have therefore created a tool for interactive
exploration of patterns in prescription datasets with millions of samples. We
use Spark to preprocess the data for machine learning and for analyses using
SQL queries. We have implemented models in Keras and the scikit-learn
framework. The model results are visualized and interpreted using live Python
coding in Jupyter. We apply our tool to explore a 384 million prescription data
set from the Norwegian Prescription Database combined with a 62 million
prescriptions for elders that were hospitalized. We preprocess the data in two
minutes, train models in seconds, and plot the results in milliseconds. Our
results show the power of combining computational power, short computation
times, and ease of use for analysis of population scale pharmacoepidemiology
datasets. The code is open source and available at:
https://github.com/uit-hdl/norpd_prescription_analyses
|
[
{
"created": "Wed, 20 May 2020 07:34:50 GMT",
"version": "v1"
}
] |
2021-06-02
|
[
[
"Skar",
"Tengel Ekrem",
""
],
[
"Holsbø",
"Einar",
""
],
[
"Svendsen",
"Kristian",
""
],
[
"Bongo",
"Lars Ailo",
""
]
] |
Population-scale drug prescription data linked with adverse drug reaction (ADR) data supports the fitting of models large enough to detect drug use and ADR patterns that are not detectable using traditional methods on smaller datasets. However, detecting ADR patterns in large datasets requires tools for scalable data processing, machine learning for data analysis, and interactive visualization. To our knowledge no existing pharmacoepidemiology tool supports all three requirements. We have therefore created a tool for interactive exploration of patterns in prescription datasets with millions of samples. We use Spark to preprocess the data for machine learning and for analyses using SQL queries. We have implemented models in Keras and the scikit-learn framework. The model results are visualized and interpreted using live Python coding in Jupyter. We apply our tool to explore a 384 million prescription data set from the Norwegian Prescription Database combined with a 62 million prescriptions for elders that were hospitalized. We preprocess the data in two minutes, train models in seconds, and plot the results in milliseconds. Our results show the power of combining computational power, short computation times, and ease of use for analysis of population scale pharmacoepidemiology datasets. The code is open source and available at: https://github.com/uit-hdl/norpd_prescription_analyses
|
2306.15279
|
Fabrice SARLEGNA
|
Najib Abi Chebel, Florence Gaunet, Pascale Chavet, Christine Assaiante
(LNC), Christophe Bourdin (ISM), Fabrice Sarlegna
|
Does visual experience influence arm proprioception and its
lateralization? Evidence from passive matching performance in
congenitally-blind and sighted adults
| null |
Neuroscience Letters, 2023, 137335, pp.137335
|
10.1016/j.neulet.2023.137335
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In humans, body segments' position and movement can be estimated from
multiple senses such as vision and proprioception. It has been suggested that
vision and proprioception can influence each other and that upper-limb
proprioception is asymmetrical, with proprioception of the non-dominant arm
being more accurate and/or precise than proprioception of the dominant arm.
However, the mechanisms underlying the lateralization of proprioceptive
perception are not yet understood. Here we tested the hypothesis that early
visual experience influences the lateralization of arm proprioceptive
perception by comparing 8 congenitally-blind and 8 matched, sighted
right-handed adults. Their proprioceptive perception was assessed at the elbow
and wrist joints of both arms using an ipsilateral passive matching task.
Results support and extend the view that proprioceptive precision is better at
the non-dominant arm for blindfolded sighted individuals. While this finding
was rather systematic across sighted individuals, proprioceptive precision of
congenitally-blind individuals was not lateralized as systematically,
suggesting that lack of visual experience during ontogenesis influences the
lateralization of arm proprioception.
|
[
{
"created": "Tue, 27 Jun 2023 08:10:52 GMT",
"version": "v1"
}
] |
2023-06-28
|
[
[
"Chebel",
"Najib Abi",
"",
"LNC"
],
[
"Gaunet",
"Florence",
"",
"LNC"
],
[
"Chavet",
"Pascale",
"",
"LNC"
],
[
"Assaiante",
"Christine",
"",
"LNC"
],
[
"Bourdin",
"Christophe",
"",
"ISM"
],
[
"Sarlegna",
"Fabrice",
""
]
] |
In humans, body segments' position and movement can be estimated from multiple senses such as vision and proprioception. It has been suggested that vision and proprioception can influence each other and that upper-limb proprioception is asymmetrical, with proprioception of the non-dominant arm being more accurate and/or precise than proprioception of the dominant arm. However, the mechanisms underlying the lateralization of proprioceptive perception are not yet understood. Here we tested the hypothesis that early visual experience influences the lateralization of arm proprioceptive perception by comparing 8 congenitally-blind and 8 matched, sighted right-handed adults. Their proprioceptive perception was assessed at the elbow and wrist joints of both arms using an ipsilateral passive matching task. Results support and extend the view that proprioceptive precision is better at the non-dominant arm for blindfolded sighted individuals. While this finding was rather systematic across sighted individuals, proprioceptive precision of congenitally-blind individuals was not lateralized as systematically, suggesting that lack of visual experience during ontogenesis influences the lateralization of arm proprioception.
|
1911.00930
|
Kaifu Gao
|
Kaifu Gao, Duc Duy Nguyen, Vishnu Sresht, Alan M. Mathiowetz, Meihua
Tu, and Guo-Wei Wei
|
Are 2D fingerprints still valuable for drug discovery?
| null | null |
10.1039/D0CP00305K
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, molecular fingerprints extracted from three-dimensional (3D)
structures using advanced mathematics, such as algebraic topology, differential
geometry, and graph theory have been paired with efficient machine learning,
especially deep learning algorithms to outperform other methods in drug
discovery applications and competitions. This raises the question of whether
classical 2D fingerprints are still valuable in computer-aided drug discovery.
This work considers 23 datasets associated with four typical problems, namely
protein-ligand binding, toxicity, solubility and partition coefficient to
assess the performance of eight 2D fingerprints. Advanced machine learning
algorithms including random forest, gradient boosted decision tree, single-task
deep neural network and multitask deep neural network are employed to construct
efficient 2D-fingerprint based models. Additionally, appropriate consensus
models are built to further enhance the performance of 2D-fingerprintbased
methods. It is demonstrated that 2D-fingerprint-based models perform as well as
the state-of-the-art 3D structure-based models for the predictions of toxicity,
solubility, partition coefficient and protein-ligand binding affinity based on
only ligand information. However, 3D structure-based models outperform 2D
fingerprint-based methods in complex-based protein-ligand binding affinity
predictions.
|
[
{
"created": "Sun, 3 Nov 2019 17:06:26 GMT",
"version": "v1"
}
] |
2020-06-24
|
[
[
"Gao",
"Kaifu",
""
],
[
"Nguyen",
"Duc Duy",
""
],
[
"Sresht",
"Vishnu",
""
],
[
"Mathiowetz",
"Alan M.",
""
],
[
"Tu",
"Meihua",
""
],
[
"Wei",
"Guo-Wei",
""
]
] |
Recently, molecular fingerprints extracted from three-dimensional (3D) structures using advanced mathematics, such as algebraic topology, differential geometry, and graph theory have been paired with efficient machine learning, especially deep learning algorithms to outperform other methods in drug discovery applications and competitions. This raises the question of whether classical 2D fingerprints are still valuable in computer-aided drug discovery. This work considers 23 datasets associated with four typical problems, namely protein-ligand binding, toxicity, solubility and partition coefficient to assess the performance of eight 2D fingerprints. Advanced machine learning algorithms including random forest, gradient boosted decision tree, single-task deep neural network and multitask deep neural network are employed to construct efficient 2D-fingerprint based models. Additionally, appropriate consensus models are built to further enhance the performance of 2D-fingerprintbased methods. It is demonstrated that 2D-fingerprint-based models perform as well as the state-of-the-art 3D structure-based models for the predictions of toxicity, solubility, partition coefficient and protein-ligand binding affinity based on only ligand information. However, 3D structure-based models outperform 2D fingerprint-based methods in complex-based protein-ligand binding affinity predictions.
|
1609.08696
|
David K. Lubensky
|
Meryl A. Spencer, Zahera Jabeen, David K. Lubensky
|
Vertex stability and topological transitions in vertex models of foams
and epithelia
|
20 pages, 7 figures
| null | null | null |
q-bio.TO cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In computer simulations of dry foams and of epithelial tissues, vertex models
are often used to describe the shape and motion of individual cells. Although
these models have been widely adopted, relatively little is known about their
basic theoretical properties. For example, while fourfold vertices in real
foams are always unstable, it remains unclear whether a simplified vertex model
description has the same behavior. Here, we study vertex stability and the
dynamics of T1 topological transitions in vertex models. We show that, when all
edges have the same tension, stationary fourfold vertices in these models do
indeed always break up. In contrast, when tensions are allowed to depend on
edge orientation, fourfold vertices can become stable, as is observed in some
biological systems. More generally, our formulation of vertex stability leads
to an improved treatment of T1 transitions in simulations and paves the way for
studies of more biologically realistic models that couple topological
transitions to the dynamics of regulatory proteins.
|
[
{
"created": "Tue, 27 Sep 2016 22:37:38 GMT",
"version": "v1"
}
] |
2016-09-29
|
[
[
"Spencer",
"Meryl A.",
""
],
[
"Jabeen",
"Zahera",
""
],
[
"Lubensky",
"David K.",
""
]
] |
In computer simulations of dry foams and of epithelial tissues, vertex models are often used to describe the shape and motion of individual cells. Although these models have been widely adopted, relatively little is known about their basic theoretical properties. For example, while fourfold vertices in real foams are always unstable, it remains unclear whether a simplified vertex model description has the same behavior. Here, we study vertex stability and the dynamics of T1 topological transitions in vertex models. We show that, when all edges have the same tension, stationary fourfold vertices in these models do indeed always break up. In contrast, when tensions are allowed to depend on edge orientation, fourfold vertices can become stable, as is observed in some biological systems. More generally, our formulation of vertex stability leads to an improved treatment of T1 transitions in simulations and paves the way for studies of more biologically realistic models that couple topological transitions to the dynamics of regulatory proteins.
|
2008.12106
|
Irina Volinsky
|
Irina Volinsky, Alexander Domoshnitsky, Marina Bershadsky and Roman
Shklyar
|
Marchuk's models of infection diseases: new developments
| null | null | null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider mathematical models of infection diseases built by G.I. Marchuk
in his well known book on immunology. These models are in the form of systems
of ordinary delay differential equations. We add a distributed control in one
of the equations describing the dynamics of the antibody concentration rate.
Distributed control looks here naturally since the change of this concentration
rather depends on the corresponding average value of the difference of the
current and normal antibody concentrations on the time interval than on their
difference at the point t only.
|
[
{
"created": "Thu, 23 Jul 2020 18:47:53 GMT",
"version": "v1"
}
] |
2020-08-28
|
[
[
"Volinsky",
"Irina",
""
],
[
"Domoshnitsky",
"Alexander",
""
],
[
"Bershadsky",
"Marina",
""
],
[
"Shklyar",
"Roman",
""
]
] |
We consider mathematical models of infection diseases built by G.I. Marchuk in his well known book on immunology. These models are in the form of systems of ordinary delay differential equations. We add a distributed control in one of the equations describing the dynamics of the antibody concentration rate. Distributed control looks here naturally since the change of this concentration rather depends on the corresponding average value of the difference of the current and normal antibody concentrations on the time interval than on their difference at the point t only.
|
2312.11107
|
Paul Sorba
|
Antonino Sciarrino and Paul Sorba
|
Hierarchy of codon usage frequencies from codon-anticodon interaction in
the crystal basis model
| null | null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Analyzing the codon usage frequencies of a specimen of 20 plants, for which
the codon-anticodon pattern is known, we have remarked that the hierarchy of
the usage frequencies present an almost "universal" behavior. Searching to
explain this behavior, we assume that the codon usage probability results from
the sum of two contributions: the first dominant term is an almost "universal"
one and it depends on the codon-anticodon interaction; the second term is a
local one, i.e. depends on the biological species. The codon-anticodon
interaction is written as a spin-spin plus a z-spin term in the formalism of
the crystal basis model. From general considerations, in particular from the
choice of the signs and some constraints on the parameters defining the
interaction, we are able to explain most of the observed data.
|
[
{
"created": "Mon, 18 Dec 2023 11:12:46 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Sciarrino",
"Antonino",
""
],
[
"Sorba",
"Paul",
""
]
] |
Analyzing the codon usage frequencies of a specimen of 20 plants, for which the codon-anticodon pattern is known, we have remarked that the hierarchy of the usage frequencies present an almost "universal" behavior. Searching to explain this behavior, we assume that the codon usage probability results from the sum of two contributions: the first dominant term is an almost "universal" one and it depends on the codon-anticodon interaction; the second term is a local one, i.e. depends on the biological species. The codon-anticodon interaction is written as a spin-spin plus a z-spin term in the formalism of the crystal basis model. From general considerations, in particular from the choice of the signs and some constraints on the parameters defining the interaction, we are able to explain most of the observed data.
|
2104.10771
|
Joceline Lega
|
Adrienne C. Kinney, Sean Current, Joceline Lega
|
Aedes-AI: Neural Network Models of Mosquito Abundance
| null |
PLoS Comput Biol 17(11): e1009467 (2021)
|
10.1371/journal.pcbi.1009467
| null |
q-bio.PE cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present artificial neural networks as a feasible replacement for a
mechanistic model of mosquito abundance. We develop a feed-forward neural
network, a long short-term memory recurrent neural network, and a gated
recurrent unit network. We evaluate the networks in their ability to replicate
the spatiotemporal features of mosquito populations predicted by the
mechanistic model, and discuss how augmenting the training data with time
series that emphasize specific dynamical behaviors affects model performance.
We conclude with an outlook on how such equation-free models may facilitate
vector control or the estimation of disease risk at arbitrary spatial scales.
|
[
{
"created": "Wed, 21 Apr 2021 21:28:03 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Aug 2021 21:38:14 GMT",
"version": "v2"
}
] |
2021-12-09
|
[
[
"Kinney",
"Adrienne C.",
""
],
[
"Current",
"Sean",
""
],
[
"Lega",
"Joceline",
""
]
] |
We present artificial neural networks as a feasible replacement for a mechanistic model of mosquito abundance. We develop a feed-forward neural network, a long short-term memory recurrent neural network, and a gated recurrent unit network. We evaluate the networks in their ability to replicate the spatiotemporal features of mosquito populations predicted by the mechanistic model, and discuss how augmenting the training data with time series that emphasize specific dynamical behaviors affects model performance. We conclude with an outlook on how such equation-free models may facilitate vector control or the estimation of disease risk at arbitrary spatial scales.
|
q-bio/0507005
|
Michael Stumpf
|
Michael P.H. Stumpf, Piers J. Ingram
|
Probability Models for Degree Distributions of Protein Interaction
Networks
| null |
Europhys.Lett., 71 (1), pp. 152-158 (2005)
|
10.1209/epl/i2004-10531-8
| null |
q-bio.MN
| null |
The degree distribution of many biological and technological networks has
been described as a power-law distribution. While the degree distribution does
not capture all aspects of a network, it has often been suggested that its
functional form contains important clues as to underlying evolutionary
processes that have shaped the network. Generally, the functional form for the
degree distribution has been determined in an ad-hoc fashion, with clear
power-law like behaviour often only extending over a limited range of
connectivities. Here we apply formal model selection techniques to decide which
probability distribution best describes the degree distributions of protein
interaction networks. Contrary to previous studies this well defined approach
suggests that the degree distribution of many molecular networks is often
better described by distributions other than the popular power-law
distribution. This, in turn, suggests that simple, if elegant, models may not
necessarily help in the quantitative understanding of complex biological
processes.\
|
[
{
"created": "Sun, 3 Jul 2005 08:40:55 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Stumpf",
"Michael P. H.",
""
],
[
"Ingram",
"Piers J.",
""
]
] |
The degree distribution of many biological and technological networks has been described as a power-law distribution. While the degree distribution does not capture all aspects of a network, it has often been suggested that its functional form contains important clues as to underlying evolutionary processes that have shaped the network. Generally, the functional form for the degree distribution has been determined in an ad-hoc fashion, with clear power-law like behaviour often only extending over a limited range of connectivities. Here we apply formal model selection techniques to decide which probability distribution best describes the degree distributions of protein interaction networks. Contrary to previous studies this well defined approach suggests that the degree distribution of many molecular networks is often better described by distributions other than the popular power-law distribution. This, in turn, suggests that simple, if elegant, models may not necessarily help in the quantitative understanding of complex biological processes.\
|
1209.0128
|
Casey Bergman
|
Martin Carr, Douda Bensasson, Casey M. Bergman
|
Evolutionary genomics of transposable elements in Saccharomyces
cerevisiae
|
34 pages, 7 figures
|
Carr M, Bensasson D, Bergman CM (2012) Evolutionary Genomics of
Transposable Elements in Saccharomyces cerevisiae. PLoS ONE 7(11): e50978
|
10.1371/journal.pone.0050978
| null |
q-bio.PE q-bio.GN
|
http://creativecommons.org/licenses/by/3.0/
|
Saccharomyces cerevisiae is one of the premier model systems for studying the
genomics and evolution of transposable elements. The availability of the S.
cerevisiae genome led to many insights into its five known transposable element
families (Ty1-Ty5) in the years shortly after its completion. However,
subsequent advances in bioinformatics tools for analysing transposable elements
and the recent availability of genome sequences for multiple strains and
species of yeast motivates new investigations into Ty evolution in S.
cerevisiae. Here we provide a comprehensive phylogenetic and population genetic
analysis of Ty families in S. cerevisiae based on a reannotation of Ty elements
in the S288c reference genome. We show that previous annotation efforts have
underestimated the total copy number of Ty elements for all known families. In
addition, we identify a new family of Ty3-like elements related to the S.
paradoxus Ty3p which is composed entirely of degenerate solo LTRs. Phylogenetic
analyses of LTR sequences identified three families with short-branch, recently
active clades nested among long branch, inactive insertions (Ty1, Ty3, Ty4),
one family with essentially all recently active elements (Ty2) and two families
with only inactive elements (Ty3p and Ty5). Population genomic data from 38
additional strains of S. cerevisiae show that elements present in active clades
are predominantly polymorphic, whereas most of the inactive elements are fixed.
Finally, we use comparative genomic data to provide evidence that the Ty2 and
Ty3p families have arisen in the S. cerevisiae genome by horizontal transfer.
Our results demonstrate that the genome of a single individual contains
important information about the state of TE population dynamics within a
species and suggest that horizontal transfer may play an important role in
shaping the diversity of transposable elements in unicellular eukaryotes.
|
[
{
"created": "Sat, 1 Sep 2012 19:56:01 GMT",
"version": "v1"
}
] |
2012-12-05
|
[
[
"Carr",
"Martin",
""
],
[
"Bensasson",
"Douda",
""
],
[
"Bergman",
"Casey M.",
""
]
] |
Saccharomyces cerevisiae is one of the premier model systems for studying the genomics and evolution of transposable elements. The availability of the S. cerevisiae genome led to many insights into its five known transposable element families (Ty1-Ty5) in the years shortly after its completion. However, subsequent advances in bioinformatics tools for analysing transposable elements and the recent availability of genome sequences for multiple strains and species of yeast motivates new investigations into Ty evolution in S. cerevisiae. Here we provide a comprehensive phylogenetic and population genetic analysis of Ty families in S. cerevisiae based on a reannotation of Ty elements in the S288c reference genome. We show that previous annotation efforts have underestimated the total copy number of Ty elements for all known families. In addition, we identify a new family of Ty3-like elements related to the S. paradoxus Ty3p which is composed entirely of degenerate solo LTRs. Phylogenetic analyses of LTR sequences identified three families with short-branch, recently active clades nested among long branch, inactive insertions (Ty1, Ty3, Ty4), one family with essentially all recently active elements (Ty2) and two families with only inactive elements (Ty3p and Ty5). Population genomic data from 38 additional strains of S. cerevisiae show that elements present in active clades are predominantly polymorphic, whereas most of the inactive elements are fixed. Finally, we use comparative genomic data to provide evidence that the Ty2 and Ty3p families have arisen in the S. cerevisiae genome by horizontal transfer. Our results demonstrate that the genome of a single individual contains important information about the state of TE population dynamics within a species and suggest that horizontal transfer may play an important role in shaping the diversity of transposable elements in unicellular eukaryotes.
|
2403.01678
|
Michael Plank
|
Michael J. Plank and Matthew J. Simpson
|
Structured methods for parameter inference and uncertainty
quantification for mechanistic models in the life sciences
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Parameter inference and uncertainty quantification are important steps when
relating mathematical models to real-world observations, and when estimating
uncertainty in model predictions. However, methods for doing this can be
computationally expensive, particularly when the number of unknown model
parameters is large. The aim of this study is to develop and test an efficient
profile likelihood-based method, which takes advantage of the structure of the
mathematical model being used. We do this by identifying specific parameters
that affect model output in a known way, such as a linear scaling. We
illustrate the method by applying it to three caricature models from different
areas of the life sciences: (i) a predator-prey model from ecology; (ii) a
compartment-based epidemic model from health sciences; and, (iii) an
advection-diffusion-reaction model describing transport of dissolved solutes
from environmental science. We show that the new method produces results of
comparable accuracy to existing profile likelihood methods, but with
substantially fewer evaluations of the forward model. We conclude that our
method could provide a much more efficient approach to parameter inference for
models where a structured approach is feasible. Code to apply the new method to
user-supplied models and data is provided via a publicly accessible repository.
|
[
{
"created": "Mon, 4 Mar 2024 02:13:58 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2024 23:51:08 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jul 2024 10:39:39 GMT",
"version": "v3"
}
] |
2024-07-09
|
[
[
"Plank",
"Michael J.",
""
],
[
"Simpson",
"Matthew J.",
""
]
] |
Parameter inference and uncertainty quantification are important steps when relating mathematical models to real-world observations, and when estimating uncertainty in model predictions. However, methods for doing this can be computationally expensive, particularly when the number of unknown model parameters is large. The aim of this study is to develop and test an efficient profile likelihood-based method, which takes advantage of the structure of the mathematical model being used. We do this by identifying specific parameters that affect model output in a known way, such as a linear scaling. We illustrate the method by applying it to three caricature models from different areas of the life sciences: (i) a predator-prey model from ecology; (ii) a compartment-based epidemic model from health sciences; and, (iii) an advection-diffusion-reaction model describing transport of dissolved solutes from environmental science. We show that the new method produces results of comparable accuracy to existing profile likelihood methods, but with substantially fewer evaluations of the forward model. We conclude that our method could provide a much more efficient approach to parameter inference for models where a structured approach is feasible. Code to apply the new method to user-supplied models and data is provided via a publicly accessible repository.
|
2212.07492
|
Gianni De Fabritiis
|
Maciej Majewski, Adri\`a P\'erez, Philipp Th\"olke, Stefan Doerr,
Nicholas E. Charron, Toni Giorgino, Brooke E. Husic, Cecilia Clementi, Frank
No\'e and Gianni De Fabritiis
|
Machine Learning Coarse-Grained Potentials of Protein Thermodynamics
| null | null | null | null |
q-bio.BM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A generalized understanding of protein dynamics is an unsolved scientific
problem, the solution of which is critical to the interpretation of the
structure-function relationships that govern essential biological processes.
Here, we approach this problem by constructing coarse-grained molecular
potentials based on artificial neural networks and grounded in statistical
mechanics. For training, we build a unique dataset of unbiased all-atom
molecular dynamics simulations of approximately 9 ms for twelve different
proteins with multiple secondary structure arrangements. The coarse-grained
models are capable of accelerating the dynamics by more than three orders of
magnitude while preserving the thermodynamics of the systems. Coarse-grained
simulations identify relevant structural states in the ensemble with comparable
energetics to the all-atom systems. Furthermore, we show that a single
coarse-grained potential can integrate all twelve proteins and can capture
experimental structural features of mutated proteins. These results indicate
that machine learning coarse-grained potentials could provide a feasible
approach to simulate and understand protein dynamics.
|
[
{
"created": "Wed, 14 Dec 2022 20:23:11 GMT",
"version": "v1"
}
] |
2022-12-16
|
[
[
"Majewski",
"Maciej",
""
],
[
"Pérez",
"Adrià",
""
],
[
"Thölke",
"Philipp",
""
],
[
"Doerr",
"Stefan",
""
],
[
"Charron",
"Nicholas E.",
""
],
[
"Giorgino",
"Toni",
""
],
[
"Husic",
"Brooke E.",
""
],
[
"Clementi",
"Cecilia",
""
],
[
"Noé",
"Frank",
""
],
[
"De Fabritiis",
"Gianni",
""
]
] |
A generalized understanding of protein dynamics is an unsolved scientific problem, the solution of which is critical to the interpretation of the structure-function relationships that govern essential biological processes. Here, we approach this problem by constructing coarse-grained molecular potentials based on artificial neural networks and grounded in statistical mechanics. For training, we build a unique dataset of unbiased all-atom molecular dynamics simulations of approximately 9 ms for twelve different proteins with multiple secondary structure arrangements. The coarse-grained models are capable of accelerating the dynamics by more than three orders of magnitude while preserving the thermodynamics of the systems. Coarse-grained simulations identify relevant structural states in the ensemble with comparable energetics to the all-atom systems. Furthermore, we show that a single coarse-grained potential can integrate all twelve proteins and can capture experimental structural features of mutated proteins. These results indicate that machine learning coarse-grained potentials could provide a feasible approach to simulate and understand protein dynamics.
|
1505.03785
|
Silvia Bartolucci
|
Silvia Bartolucci, Alessia Annibale
|
A dynamical model of the adaptive immune system: effects of cells
promiscuity, antigens and B-B interactions
|
40 pages, 26 figures
| null |
10.1088/1742-5468/2015/08/P08017
| null |
q-bio.CB cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyse a minimal model for the primary response in the adaptive immune
system comprising three different players: antigens, T and B cells. We assume
B-T interactions to be diluted and sampled locally from heterogeneous degree
distributions, which mimic B cells receptors' promiscuity. We derive dynamical
equations for the order parameters quantifying the B cells activation and study
the nature and stability of the stationary solutions using linear stability
analysis and Monte Carlo simulations.The system's behaviour is studied in
different scaling regimes of the number of B cells, dilution in the
interactions and number of antigens. Our analysis shows that: (i) B cells
activation depends on the number of receptors in such a way that cells with an
insufficient number of triggered receptors cannot be activated; (ii) idiotypic
(i.e. B-B) interactions enhance parallel activation of multiple clones,
improving the system's ability to fight different pathogens in parallel; (iii)
the higher the fraction of antigens within the host the harder is for the
system to sustain parallel signalling to B cells, crucial for the homeostatic
control of cell numbers.
|
[
{
"created": "Thu, 14 May 2015 16:25:22 GMT",
"version": "v1"
}
] |
2015-09-30
|
[
[
"Bartolucci",
"Silvia",
""
],
[
"Annibale",
"Alessia",
""
]
] |
We analyse a minimal model for the primary response in the adaptive immune system comprising three different players: antigens, T and B cells. We assume B-T interactions to be diluted and sampled locally from heterogeneous degree distributions, which mimic B cells receptors' promiscuity. We derive dynamical equations for the order parameters quantifying the B cells activation and study the nature and stability of the stationary solutions using linear stability analysis and Monte Carlo simulations.The system's behaviour is studied in different scaling regimes of the number of B cells, dilution in the interactions and number of antigens. Our analysis shows that: (i) B cells activation depends on the number of receptors in such a way that cells with an insufficient number of triggered receptors cannot be activated; (ii) idiotypic (i.e. B-B) interactions enhance parallel activation of multiple clones, improving the system's ability to fight different pathogens in parallel; (iii) the higher the fraction of antigens within the host the harder is for the system to sustain parallel signalling to B cells, crucial for the homeostatic control of cell numbers.
|
1302.3869
|
Daniele Marinazzo
|
Daniele Marinazzo, Mario Pellicoro, Guorong Wu, Leonardo Angelini,
Jesus M Cortes, Sebastiano Stramaglia
|
Information transfer of an Ising model on a brain network
| null | null | null | null |
q-bio.NC cond-mat.dis-nn physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We implement the Ising model on a structural connectivity matrix describing
the brain at a coarse scale. Tuning the model temperature to its critical
value, i.e. at the susceptibility peak, we find a maximal amount of total
information transfer between the spin variables. At this point the amount of
information that can be redistributed by some nodes reaches a limit and the net
dynamics exhibits signature of the law of diminishing marginal returns, a
fundamental principle connected to saturated levels of production. Our results
extend the recent analysis of dynamical oscillators models on the connectome
structure, taking into account lagged and directional influences, focusing only
on the nodes that are more prone to became bottlenecks of information. The
ratio between the outgoing and the incoming information at each node is related
to the number of incoming links.
|
[
{
"created": "Fri, 15 Feb 2013 20:32:22 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2013 20:45:14 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Jul 2013 15:39:50 GMT",
"version": "v3"
},
{
"created": "Sun, 1 Sep 2013 14:38:18 GMT",
"version": "v4"
}
] |
2013-09-03
|
[
[
"Marinazzo",
"Daniele",
""
],
[
"Pellicoro",
"Mario",
""
],
[
"Wu",
"Guorong",
""
],
[
"Angelini",
"Leonardo",
""
],
[
"Cortes",
"Jesus M",
""
],
[
"Stramaglia",
"Sebastiano",
""
]
] |
We implement the Ising model on a structural connectivity matrix describing the brain at a coarse scale. Tuning the model temperature to its critical value, i.e. at the susceptibility peak, we find a maximal amount of total information transfer between the spin variables. At this point the amount of information that can be redistributed by some nodes reaches a limit and the net dynamics exhibits signature of the law of diminishing marginal returns, a fundamental principle connected to saturated levels of production. Our results extend the recent analysis of dynamical oscillators models on the connectome structure, taking into account lagged and directional influences, focusing only on the nodes that are more prone to became bottlenecks of information. The ratio between the outgoing and the incoming information at each node is related to the number of incoming links.
|
1508.06549
|
Alexander Teplukhin
|
V. I. Poltev, E. Rodriguez, T. I. Grokhlina, A. V. Teplukhin, A.
Deriabina, and E. Gonzalez
|
Computational Study of Molecular Mechanisms of Caffeine Actions
|
Proceedings of the International Conference on Applied Computer
Science, Malta, September 15-17, 2010, pp. 51-55
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Caffeine (CAF) is one of the most widely and regularly consumed biologically
active substances. We use computer simulation approach to the study of CAF
activity by searching for its possible complexes with biopolymer fragments. The
principal CAF target at physiologically important concentrations refers to
adenosine receptors. It is a common opinion that CAF is a competitive
antagonist of adenosine. At the first step to molecular level elucidation of
CAF action, we have found a set of the minima of the interaction energy between
CAF and the fragments of human A1 adenosine receptor. Molecular mechanics is
the main method for the calculations of the energy of interactions between CAF
and the biopolymer fragments. We use the Monte Carlo simulation to follow
various mutual arrangements of CAF molecule near the receptor. It appears that
the deepest energy minima refer to hydrogen-bond formation of CAF with amino
acid residues involved in interactions with adenosine, its agonists and
antagonists. The results suggest that the formation of such CAF-receptor
complexes enforced by a close packing of CAF and the receptor fragments is the
reason of CAF actions on nervous system. CAF can block the atomic groups of the
adenosine repressors responsible for the interactions with adenosine, not
necessary by the formation of H bonds with them, but simply hide these groups
from the interactions with adenosine.
|
[
{
"created": "Wed, 26 Aug 2015 16:08:24 GMT",
"version": "v1"
}
] |
2015-08-27
|
[
[
"Poltev",
"V. I.",
""
],
[
"Rodriguez",
"E.",
""
],
[
"Grokhlina",
"T. I.",
""
],
[
"Teplukhin",
"A. V.",
""
],
[
"Deriabina",
"A.",
""
],
[
"Gonzalez",
"E.",
""
]
] |
Caffeine (CAF) is one of the most widely and regularly consumed biologically active substances. We use computer simulation approach to the study of CAF activity by searching for its possible complexes with biopolymer fragments. The principal CAF target at physiologically important concentrations refers to adenosine receptors. It is a common opinion that CAF is a competitive antagonist of adenosine. At the first step to molecular level elucidation of CAF action, we have found a set of the minima of the interaction energy between CAF and the fragments of human A1 adenosine receptor. Molecular mechanics is the main method for the calculations of the energy of interactions between CAF and the biopolymer fragments. We use the Monte Carlo simulation to follow various mutual arrangements of CAF molecule near the receptor. It appears that the deepest energy minima refer to hydrogen-bond formation of CAF with amino acid residues involved in interactions with adenosine, its agonists and antagonists. The results suggest that the formation of such CAF-receptor complexes enforced by a close packing of CAF and the receptor fragments is the reason of CAF actions on nervous system. CAF can block the atomic groups of the adenosine repressors responsible for the interactions with adenosine, not necessary by the formation of H bonds with them, but simply hide these groups from the interactions with adenosine.
|
2004.14920
|
Cesar Castilho
|
Marcilio Ferreira dos Santos and Cesar Castilho
|
Deterministic Critical Community Size For The SIR System and Viral
Strain Selection
| null | null | null | null |
q-bio.PE math.DS physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper the concept of Critical Community Size (CCS) for the
deterministic SIR model is introduced and its consequences for the disease
dynamics are stressed. The disease can fade out after an outburst. Also the
principle of competitive exclusion holds no longer true. This is exemplified
for the dynamics of two competing virus strains. The virus with higher R0 can
be eradicated from the population.
|
[
{
"created": "Wed, 29 Apr 2020 17:05:57 GMT",
"version": "v1"
}
] |
2020-05-01
|
[
[
"Santos",
"Marcilio Ferreira dos",
""
],
[
"Castilho",
"Cesar",
""
]
] |
In this paper the concept of Critical Community Size (CCS) for the deterministic SIR model is introduced and its consequences for the disease dynamics are stressed. The disease can fade out after an outburst. Also the principle of competitive exclusion holds no longer true. This is exemplified for the dynamics of two competing virus strains. The virus with higher R0 can be eradicated from the population.
|
2305.09189
|
Christopher Miles
|
Pei Tan, Christopher E Miles
|
Intrinsic statistical separation of subpopulations in heterogeneous
collective motion via dimensionality reduction
|
9 pages, 7 figures; v2: based on a great reviewer's suggestion, all
numerical experiments to start after equilibration and figures updated
accordingly
| null |
10.1103/PhysRevE.109.014403
| null |
q-bio.QM nlin.AO physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Collective motion of locally interacting agents is found ubiquitously
throughout nature. The inability to probe individuals has driven longstanding
interest in the development of methods for inferring the underlying
interactions. In the context of heterogeneous collectives, where the population
consists of individuals driven by different interactions, existing approaches
require some knowledge about the heterogeneities or underlying interactions.
Here, we investigate the feasibility of identifying the identities in a
heterogeneous collective without such prior knowledge. We numerically explore
the behavior of a heterogeneous Vicsek model and find sufficiently long
trajectories intrinsically cluster in a PCA-based dimensionally reduced
model-agnostic description of the data. We identify how heterogeneities in each
parameter in the model (interaction radius, noise, population proportions)
dictate this clustering. Finally, we show the generality of this phenomenon by
finding similar behavior in a heterogeneous D'Orsogna model. Altogether, our
results establish and quantify the intrinsic model-agnostic statistical
disentanglement of identities in heterogeneous collectives.
|
[
{
"created": "Tue, 16 May 2023 05:55:00 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2023 18:38:53 GMT",
"version": "v2"
}
] |
2024-01-23
|
[
[
"Tan",
"Pei",
""
],
[
"Miles",
"Christopher E",
""
]
] |
Collective motion of locally interacting agents is found ubiquitously throughout nature. The inability to probe individuals has driven longstanding interest in the development of methods for inferring the underlying interactions. In the context of heterogeneous collectives, where the population consists of individuals driven by different interactions, existing approaches require some knowledge about the heterogeneities or underlying interactions. Here, we investigate the feasibility of identifying the identities in a heterogeneous collective without such prior knowledge. We numerically explore the behavior of a heterogeneous Vicsek model and find sufficiently long trajectories intrinsically cluster in a PCA-based dimensionally reduced model-agnostic description of the data. We identify how heterogeneities in each parameter in the model (interaction radius, noise, population proportions) dictate this clustering. Finally, we show the generality of this phenomenon by finding similar behavior in a heterogeneous D'Orsogna model. Altogether, our results establish and quantify the intrinsic model-agnostic statistical disentanglement of identities in heterogeneous collectives.
|
1505.00452
|
Loren Coquille
|
Martina Baar, Loren Coquille, Hannah Mayer, Michael H\"olzel, Meri
Rogava, Thomas T\"uting, Anton Bovier
|
A stochastic individual-based model for immunotherapy of cancer
| null |
Scientific Reports, 6, 24169 (2016)
|
10.1038/srep24169
| null |
q-bio.PE math.PR q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an extension of a standard stochastic individual-based model in
population dynamics which broadens the range of biological applications. Our
primary motivation is modelling of immunotherapy of malignant tumours. In this
context the different actors, T-cells, cytokines or cancer cells, are modelled
as single particles (individuals) in the stochastic system. The main expansions
of the model are distinguishing cancer cells by phenotype and genotype,
including environment-dependent phenotypic plasticity that does not affect the
genotype, taking into account the effects of therapy and introducing a
competition term which lowers the reproduction rate of an individual in
addition to the usual term that increases its death rate. We illustrate the new
setup by using it to model various phenomena arising in immunotherapy. Our aim
is twofold: on the one hand, we show that the interplay of genetic mutations
and phenotypic switches on different timescales as well as the occurrence of
metastability phenomena raise new mathematical challenges. On the other hand,
we argue why understanding purely stochastic events (which cannot be obtained
with deterministic models) may help to understand the resistance of tumours to
therapeutic approaches and may have non-trivial consequences on tumour
treatment protocols. This is supported through numerical simulations.
|
[
{
"created": "Sun, 3 May 2015 17:05:18 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2015 11:20:06 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Apr 2016 14:13:34 GMT",
"version": "v3"
}
] |
2016-04-18
|
[
[
"Baar",
"Martina",
""
],
[
"Coquille",
"Loren",
""
],
[
"Mayer",
"Hannah",
""
],
[
"Hölzel",
"Michael",
""
],
[
"Rogava",
"Meri",
""
],
[
"Tüting",
"Thomas",
""
],
[
"Bovier",
"Anton",
""
]
] |
We propose an extension of a standard stochastic individual-based model in population dynamics which broadens the range of biological applications. Our primary motivation is modelling of immunotherapy of malignant tumours. In this context the different actors, T-cells, cytokines or cancer cells, are modelled as single particles (individuals) in the stochastic system. The main expansions of the model are distinguishing cancer cells by phenotype and genotype, including environment-dependent phenotypic plasticity that does not affect the genotype, taking into account the effects of therapy and introducing a competition term which lowers the reproduction rate of an individual in addition to the usual term that increases its death rate. We illustrate the new setup by using it to model various phenomena arising in immunotherapy. Our aim is twofold: on the one hand, we show that the interplay of genetic mutations and phenotypic switches on different timescales as well as the occurrence of metastability phenomena raise new mathematical challenges. On the other hand, we argue why understanding purely stochastic events (which cannot be obtained with deterministic models) may help to understand the resistance of tumours to therapeutic approaches and may have non-trivial consequences on tumour treatment protocols. This is supported through numerical simulations.
|
2406.18531
|
Nghi Nguyen
|
Duy Duong-Tran, Nghi Nguyen, Shizhuo Mu, Jiong Chen, Jingxuan Bao,
Frederick Xu, Sumita Garai, Jose Cadena-Pico, Alan David Kaplan, Tianlong
Chen, Yize Zhao, Li Shen, and Joaqu\'in Go\~ni
|
A principled framework to assess the information-theoretic fitness of
brain functional sub-circuits
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
In systems and network neuroscience, many common practices in brain
connectomic analysis are often not properly scrutinized. One such practice is
mapping a predetermined set of sub-circuits, like functional networks (FNs),
onto subjects' functional connectomes (FCs) without adequately assessing the
information-theoretic appropriateness of the partition. Another practice that
goes unchallenged is thresholding weighted FCs to remove spurious connections
without justifying the chosen threshold. This paper leverages recent
theoretical advances in Stochastic Block Models (SBMs) to formally define and
quantify the information-theoretic fitness (e.g., prominence) of a
predetermined set of FNs when mapped to individual FCs under different fMRI
task conditions. Our framework allows for evaluating any combination of FC
granularity, FN partition, and thresholding strategy, thereby optimizing these
choices to preserve important topological features of the human brain
connectomes. By applying to the Human Connectome Project with Schaefer
parcellations at multiple levels of granularity, the framework showed that the
common thresholding value of 0.25 was indeed information-theoretically valid
for group-average FCs despite its previous lack of justification. Our results
pave the way for the proper use of FNs and thresholding methods and provide
insights for future research in individualized parcellations.
|
[
{
"created": "Wed, 26 Jun 2024 17:57:27 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jul 2024 23:01:08 GMT",
"version": "v2"
}
] |
2024-07-25
|
[
[
"Duong-Tran",
"Duy",
""
],
[
"Nguyen",
"Nghi",
""
],
[
"Mu",
"Shizhuo",
""
],
[
"Chen",
"Jiong",
""
],
[
"Bao",
"Jingxuan",
""
],
[
"Xu",
"Frederick",
""
],
[
"Garai",
"Sumita",
""
],
[
"Cadena-Pico",
"Jose",
""
],
[
"Kaplan",
"Alan David",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Zhao",
"Yize",
""
],
[
"Shen",
"Li",
""
],
[
"Goñi",
"Joaquín",
""
]
] |
In systems and network neuroscience, many common practices in brain connectomic analysis are often not properly scrutinized. One such practice is mapping a predetermined set of sub-circuits, like functional networks (FNs), onto subjects' functional connectomes (FCs) without adequately assessing the information-theoretic appropriateness of the partition. Another practice that goes unchallenged is thresholding weighted FCs to remove spurious connections without justifying the chosen threshold. This paper leverages recent theoretical advances in Stochastic Block Models (SBMs) to formally define and quantify the information-theoretic fitness (e.g., prominence) of a predetermined set of FNs when mapped to individual FCs under different fMRI task conditions. Our framework allows for evaluating any combination of FC granularity, FN partition, and thresholding strategy, thereby optimizing these choices to preserve important topological features of the human brain connectomes. By applying to the Human Connectome Project with Schaefer parcellations at multiple levels of granularity, the framework showed that the common thresholding value of 0.25 was indeed information-theoretically valid for group-average FCs despite its previous lack of justification. Our results pave the way for the proper use of FNs and thresholding methods and provide insights for future research in individualized parcellations.
|
2205.05014
|
Cristian Staii
|
Ilya Yurchenko, Matthew Farwell, Donovan D. Brady and Cristian Staii
|
Neuronal Growth and Formation of Neuron Networks on Directional Surfaces
|
34 pages, 10 figures
|
Biomimetics 6(2), 41 (2021)
| null | null |
q-bio.CB physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The formation of neuron networks is a process of fundamental importance for
understanding the development of the nervous system and for creating biomimetic
devices for tissue engineering and neural repair. The basic process that
controls the network formation is the growth of an axon from the cell body and
its extension towards target neurons. Axonal growth is directed by
environmental stimuli that include intercellular interactions, biochemical
cues, and the mechanical and geometrical properties of the growth substrate.
Despite significant recent progress, the steering of the growing axon remains
poorly understood. In this paper, we develop a model of axonal motility, which
incorporates substrate-geometry sensing. We combine experimental data with
theoretical analysis to measure the parameters that describe axonal growth on
micropatterned surfaces: diffusion (cell motility) coefficients, speed and
angular distributions, and cell-substrate interactions. Experiments performed
on neurons treated with inhibitors for microtubules (Taxol) and actin filaments
(Y-27632) indicate that cytoskeletal dynamics play a critical role in the
steering mechanism. Our results demonstrate that axons follow geometrical
patterns through a contact-guidance mechanism, in which geometrical patterns
impart high traction forces to the growth cone. These results have important
implications for bioengineering novel substrates to guide neuronal growth and
promote nerve repair.
|
[
{
"created": "Tue, 10 May 2022 16:31:12 GMT",
"version": "v1"
},
{
"created": "Fri, 13 May 2022 02:42:17 GMT",
"version": "v2"
}
] |
2022-05-16
|
[
[
"Yurchenko",
"Ilya",
""
],
[
"Farwell",
"Matthew",
""
],
[
"Brady",
"Donovan D.",
""
],
[
"Staii",
"Cristian",
""
]
] |
The formation of neuron networks is a process of fundamental importance for understanding the development of the nervous system and for creating biomimetic devices for tissue engineering and neural repair. The basic process that controls the network formation is the growth of an axon from the cell body and its extension towards target neurons. Axonal growth is directed by environmental stimuli that include intercellular interactions, biochemical cues, and the mechanical and geometrical properties of the growth substrate. Despite significant recent progress, the steering of the growing axon remains poorly understood. In this paper, we develop a model of axonal motility, which incorporates substrate-geometry sensing. We combine experimental data with theoretical analysis to measure the parameters that describe axonal growth on micropatterned surfaces: diffusion (cell motility) coefficients, speed and angular distributions, and cell-substrate interactions. Experiments performed on neurons treated with inhibitors for microtubules (Taxol) and actin filaments (Y-27632) indicate that cytoskeletal dynamics play a critical role in the steering mechanism. Our results demonstrate that axons follow geometrical patterns through a contact-guidance mechanism, in which geometrical patterns impart high traction forces to the growth cone. These results have important implications for bioengineering novel substrates to guide neuronal growth and promote nerve repair.
|
2204.11026
|
Jiahao Ma
|
Jiahao Ma, Guotong Xu, Le Ao, Siqi Chen, Jingze Liu
|
Bioinformatic analysis for structure and function of Glutamine
synthetase(GS)
|
8 pages, 8 figures
| null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Objective: To predict structure and function of Glutamine synthetase (GS)
from Pseudoalteromonas sp. by bioinformatics technology, and to provide a
theoretical basis for further study. Methods: Open reading frame (ORF) of GS
sequence from Pseudoalteromonas sp. was obtained by ORF finder and was
translated into amino acid residue. The structure domain was analyzed by Blast.
By the method of analysis tools: Protparam, ProtScale, SignalP-4.0, TMHMM,
SOPMA, SWISS-MODEL, NCBI SMART-BLAST and MAGA 7.0, the structure and function
of the protein were predicted and analyzed. Results: The results showed that
the sequence was GS with 468 amino acid residues, theoretical molecular weight
was 51986.64 Da. The protein has the closest evolutionary status with
Shewanella oneidensis. Then it had no signal peptide site and transmembrane
domain. Secondary structure of GS contained 35.04% alpha-helix, 16.67% Extended
chain, 5.34% beta-turn, 42.95% RandomCoil. Conclusions: This GU was a variety
of biological functions of protein that may be used as a molecular samples of
microbial nitrogen metabolism in extreme environments.
|
[
{
"created": "Sat, 23 Apr 2022 09:02:20 GMT",
"version": "v1"
}
] |
2022-04-26
|
[
[
"Ma",
"Jiahao",
""
],
[
"Xu",
"Guotong",
""
],
[
"Ao",
"Le",
""
],
[
"Chen",
"Siqi",
""
],
[
"Liu",
"Jingze",
""
]
] |
Objective: To predict structure and function of Glutamine synthetase (GS) from Pseudoalteromonas sp. by bioinformatics technology, and to provide a theoretical basis for further study. Methods: Open reading frame (ORF) of GS sequence from Pseudoalteromonas sp. was obtained by ORF finder and was translated into amino acid residue. The structure domain was analyzed by Blast. By the method of analysis tools: Protparam, ProtScale, SignalP-4.0, TMHMM, SOPMA, SWISS-MODEL, NCBI SMART-BLAST and MAGA 7.0, the structure and function of the protein were predicted and analyzed. Results: The results showed that the sequence was GS with 468 amino acid residues, theoretical molecular weight was 51986.64 Da. The protein has the closest evolutionary status with Shewanella oneidensis. Then it had no signal peptide site and transmembrane domain. Secondary structure of GS contained 35.04% alpha-helix, 16.67% Extended chain, 5.34% beta-turn, 42.95% RandomCoil. Conclusions: This GU was a variety of biological functions of protein that may be used as a molecular samples of microbial nitrogen metabolism in extreme environments.
|
1310.0889
|
Nobu C. Shirai
|
Nobu C. Shirai and Macoto Kikuchi
|
Structural flexibility of intrinsically disordered proteins induces
stepwise target recognition
|
9 pages, 14 figures, 1 table
|
J. Chem. Phys. 139, 225103 (2013)
|
10.1063/1.4838476
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An intrinsically disordered protein (IDP) lacks a stable three-dimensional
structure, while it folds into a specific structure when it binds to a target
molecule. In some IDP-target complexes, not all target binding surfaces are
exposed on the outside, and intermediate states are observed in their binding
processes. We consider that stepwise target recognition via intermediate states
is a characteristic of IDP binding to targets with "hidden" binding sites. To
investigate IDP binding to hidden target binding sites, we constructed an IDP
lattice model based on the HP model. In our model, the IDP is modeled as a
chain and the target is modeled as a highly coarse-grained object. We
introduced motion and internal interactions to the target to hide its binding
sites. In the case of unhidden binding sites, a two-state transition between
the free states and a bound state is observed, and we consider that this
represents coupled folding and binding. Introducing hidden binding sites, we
found an intermediate bound state in which the IDP forms various structures to
temporarily stabilize the complex. The intermediate state provides a scaffold
for the IDP to access the hidden binding site. We call this process multiform
binding. We conclude that structural flexibility of IDPs enables them to access
hidden binding sites, and this is a functional advantage of IDPs.
|
[
{
"created": "Thu, 3 Oct 2013 03:45:18 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Nov 2013 14:05:10 GMT",
"version": "v2"
}
] |
2013-12-12
|
[
[
"Shirai",
"Nobu C.",
""
],
[
"Kikuchi",
"Macoto",
""
]
] |
An intrinsically disordered protein (IDP) lacks a stable three-dimensional structure, while it folds into a specific structure when it binds to a target molecule. In some IDP-target complexes, not all target binding surfaces are exposed on the outside, and intermediate states are observed in their binding processes. We consider that stepwise target recognition via intermediate states is a characteristic of IDP binding to targets with "hidden" binding sites. To investigate IDP binding to hidden target binding sites, we constructed an IDP lattice model based on the HP model. In our model, the IDP is modeled as a chain and the target is modeled as a highly coarse-grained object. We introduced motion and internal interactions to the target to hide its binding sites. In the case of unhidden binding sites, a two-state transition between the free states and a bound state is observed, and we consider that this represents coupled folding and binding. Introducing hidden binding sites, we found an intermediate bound state in which the IDP forms various structures to temporarily stabilize the complex. The intermediate state provides a scaffold for the IDP to access the hidden binding site. We call this process multiform binding. We conclude that structural flexibility of IDPs enables them to access hidden binding sites, and this is a functional advantage of IDPs.
|
1211.6198
|
Chao Yang Mr.
|
Chao Yang, Zengyou He and Weichuan Yu
|
Running PeptideProphet Separately on Replicates Improves Peptide
Identification Results
|
Due to an error
| null | null | null |
q-bio.QM q-bio.GN stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Limited spectrum coverage is a problem in shotgun proteomics. Replicates are
generated to improve the spectrum coverage. When integrating peptide
identification results obtained from replicates, the state-of-the-art algorithm
PeptideProphet combines Peptide-Spectrum Matches (PSMs) before building the
statistical model to calculate peptide probabilities.
In this paper, we find the connection between merging results of replicates
and Bagging, which is a standard routine to improve the power of statistical
methods. Following Bagging's philosophy, we propose to run PeptideProphet
separately on each replicate and combine the outputs to obtain the final
peptide probabilities. In our experiments, we show that the proposed routine
can improve PeptideProphet consistently on a standard protein dataset, a Human
dataset and a Yeast dataset.
|
[
{
"created": "Tue, 27 Nov 2012 02:42:15 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Nov 2012 01:49:07 GMT",
"version": "v2"
},
{
"created": "Sun, 2 Dec 2012 06:14:12 GMT",
"version": "v3"
}
] |
2012-12-04
|
[
[
"Yang",
"Chao",
""
],
[
"He",
"Zengyou",
""
],
[
"Yu",
"Weichuan",
""
]
] |
Limited spectrum coverage is a problem in shotgun proteomics. Replicates are generated to improve the spectrum coverage. When integrating peptide identification results obtained from replicates, the state-of-the-art algorithm PeptideProphet combines Peptide-Spectrum Matches (PSMs) before building the statistical model to calculate peptide probabilities. In this paper, we find the connection between merging results of replicates and Bagging, which is a standard routine to improve the power of statistical methods. Following Bagging's philosophy, we propose to run PeptideProphet separately on each replicate and combine the outputs to obtain the final peptide probabilities. In our experiments, we show that the proposed routine can improve PeptideProphet consistently on a standard protein dataset, a Human dataset and a Yeast dataset.
|
1106.0236
|
Alessia Annibale
|
A. Annibale and A.C.C. Coolen
|
What you see is not what you get: how sampling affects macroscopic
features of biological networks
|
26 pages, 8 figures
| null | null | null |
q-bio.QM cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use mathematical methods from the theory of tailored random graphs to
study systematically the effects of sampling on topological features of large
biological signalling networks. Our aim in doing so is to increase our
quantitative understanding of the relation between true biological networks and
the imperfect and often biased samples of these networks that are reported in
public data repositories and used by biomedical scientists. We derive exact
explicit formulae for degree distributions and degree correlation kernels of
sampled networks, in terms of the degree distributions and degree correlation
kernels of the underlying true network, for a broad family of sampling
protocols that include (un-)biased node and/or link undersampling as well as
(un-)biased link oversampling. Our predictions are in excellent agreement with
numerical simulations.
|
[
{
"created": "Wed, 1 Jun 2011 16:35:14 GMT",
"version": "v1"
}
] |
2011-06-02
|
[
[
"Annibale",
"A.",
""
],
[
"Coolen",
"A. C. C.",
""
]
] |
We use mathematical methods from the theory of tailored random graphs to study systematically the effects of sampling on topological features of large biological signalling networks. Our aim in doing so is to increase our quantitative understanding of the relation between true biological networks and the imperfect and often biased samples of these networks that are reported in public data repositories and used by biomedical scientists. We derive exact explicit formulae for degree distributions and degree correlation kernels of sampled networks, in terms of the degree distributions and degree correlation kernels of the underlying true network, for a broad family of sampling protocols that include (un-)biased node and/or link undersampling as well as (un-)biased link oversampling. Our predictions are in excellent agreement with numerical simulations.
|
2003.05666
|
Liu Hong
|
Wuyue Yang, Dongyan Zhang, Liangrong Peng, Changjing Zhuge, Liu Hong
|
Rational evaluation of various epidemic models based on the COVID-19
data of China
|
25 pages, 5 figures, 2 tables
| null | null | null |
q-bio.PE math.DS q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, based on the Akaike information criterion, root mean square
error and robustness coefficient, a rational evaluation of various epidemic
models/methods, including seven empirical functions, four statistical inference
methods and five dynamical models, on their forecasting abilities is carried
out. With respect to the outbreak data of COVID-19 epidemics in China, we find
that before the inflection point, all models fail to make a reliable
prediction. The Logistic function consistently underestimates the final
epidemic size, while the Gompertz's function makes an overestimation in all
cases. Towards statistical inference methods, the methods of sequential
Bayesian and time-dependent reproduction number are more accurate at the late
stage of an epidemic. And the transition-like behavior of exponential growth
method from underestimation to overestimation with respect to the inflection
point might be useful for constructing a more reliable forecast. Compared to
ODE-based SIR, SEIR and SEIR-AHQ models, the SEIR-QD and SEIR-PO models
generally show a better performance on studying the COVID-19 epidemics, whose
success we believe could be attributed to a proper trade-off between model
complexity and fitting accuracy. Our findings not only are crucial for the
forecast of COVID-19 epidemics, but also may apply to other infectious
diseases.
|
[
{
"created": "Thu, 12 Mar 2020 08:58:39 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 03:45:26 GMT",
"version": "v2"
}
] |
2021-09-16
|
[
[
"Yang",
"Wuyue",
""
],
[
"Zhang",
"Dongyan",
""
],
[
"Peng",
"Liangrong",
""
],
[
"Zhuge",
"Changjing",
""
],
[
"Hong",
"Liu",
""
]
] |
In this paper, based on the Akaike information criterion, root mean square error and robustness coefficient, a rational evaluation of various epidemic models/methods, including seven empirical functions, four statistical inference methods and five dynamical models, on their forecasting abilities is carried out. With respect to the outbreak data of COVID-19 epidemics in China, we find that before the inflection point, all models fail to make a reliable prediction. The Logistic function consistently underestimates the final epidemic size, while the Gompertz's function makes an overestimation in all cases. Towards statistical inference methods, the methods of sequential Bayesian and time-dependent reproduction number are more accurate at the late stage of an epidemic. And the transition-like behavior of exponential growth method from underestimation to overestimation with respect to the inflection point might be useful for constructing a more reliable forecast. Compared to ODE-based SIR, SEIR and SEIR-AHQ models, the SEIR-QD and SEIR-PO models generally show a better performance on studying the COVID-19 epidemics, whose success we believe could be attributed to a proper trade-off between model complexity and fitting accuracy. Our findings not only are crucial for the forecast of COVID-19 epidemics, but also may apply to other infectious diseases.
|
1402.6397
|
Jamie Oaks
|
Jamie R. Oaks, Charles W. Linkem and Jeet Sukumaran
|
Implications of uniformly distributed, empirically informed priors for
phylogeographical model selection: A reply to Hickerson et al
|
24 pages, 4 figures, 1 table, 14 pages of supporting information with
10 supporting figures
| null |
10.1111/evo.12523
| null |
q-bio.PE stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing that a set of population-splitting events occurred at the same
time can be a potentially persuasive argument that a common process affected
the populations. Oaks et al. (2013) assessed the ability of an
approximate-Bayesian method (msBayes) to estimate such a pattern of
simultaneous divergence across taxa, to which Hickerson et al. (2014)
responded. Both papers agree the method is sensitive to prior assumptions and
often erroneously supports shared divergences; the papers differ about the
explanation and solution. Oaks et al. (2013) suggested the method's behavior is
caused by the strong weight of uniform priors on divergence times leading to
smaller marginal likelihoods of models with more divergence-time parameters
(Hypothesis 1); they proposed alternative priors to avoid strongly weighted
posteriors. Hickerson et al. (2014) suggested numerical approximation error
causes msBayes analyses to be biased toward models of clustered divergences
(Hypothesis 2); they proposed using narrow, empirical uniform priors. Here, we
demonstrate that the approach of Hickerson et al. (2014) does not mitigate the
method's tendency to erroneously support models of clustered divergences, and
often excludes the true parameter values. Our results also show that the
tendency of msBayes analyses to support models of shared divergences is
primarily due to Hypothesis 1. This series of papers demonstrate that if our
prior assumptions place too much weight in unlikely regions of parameter space
such that the exact posterior supports the wrong model of evolutionary history,
no amount of computation can rescue our inference. Fortunately, more flexible
distributions that accommodate prior uncertainty about parameters without
placing excessive weight in vast regions of parameter space with low likelihood
increase the method's robustness and power to detect temporal variation in
divergences.
|
[
{
"created": "Wed, 26 Feb 2014 02:29:29 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Nov 2014 22:41:08 GMT",
"version": "v2"
}
] |
2014-11-26
|
[
[
"Oaks",
"Jamie R.",
""
],
[
"Linkem",
"Charles W.",
""
],
[
"Sukumaran",
"Jeet",
""
]
] |
Establishing that a set of population-splitting events occurred at the same time can be a potentially persuasive argument that a common process affected the populations. Oaks et al. (2013) assessed the ability of an approximate-Bayesian method (msBayes) to estimate such a pattern of simultaneous divergence across taxa, to which Hickerson et al. (2014) responded. Both papers agree the method is sensitive to prior assumptions and often erroneously supports shared divergences; the papers differ about the explanation and solution. Oaks et al. (2013) suggested the method's behavior is caused by the strong weight of uniform priors on divergence times leading to smaller marginal likelihoods of models with more divergence-time parameters (Hypothesis 1); they proposed alternative priors to avoid strongly weighted posteriors. Hickerson et al. (2014) suggested numerical approximation error causes msBayes analyses to be biased toward models of clustered divergences (Hypothesis 2); they proposed using narrow, empirical uniform priors. Here, we demonstrate that the approach of Hickerson et al. (2014) does not mitigate the method's tendency to erroneously support models of clustered divergences, and often excludes the true parameter values. Our results also show that the tendency of msBayes analyses to support models of shared divergences is primarily due to Hypothesis 1. This series of papers demonstrate that if our prior assumptions place too much weight in unlikely regions of parameter space such that the exact posterior supports the wrong model of evolutionary history, no amount of computation can rescue our inference. Fortunately, more flexible distributions that accommodate prior uncertainty about parameters without placing excessive weight in vast regions of parameter space with low likelihood increase the method's robustness and power to detect temporal variation in divergences.
|
2408.05579
|
Claus Metzner
|
Claus Metzner, Achim Schilling, Andreas Maier and Patrick Krauss
|
Recurrence Resonance -- Noise-Enhanced Dynamics in Recurrent Neural
Networks
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
In specific motifs of three recurrently connected neurons with probabilistic
response, the spontaneous information flux, defined as the mutual information
between subsequent states, has been shown to increase by adding ongoing white
noise of some optimal strength to each of the neurons
\cite{krauss2019recurrence}. However, the precise conditions for and mechanisms
of this phenomenon called 'recurrence resonance' (RR) remain largely
unexplored. Using Boltzmann machines of different sizes and with various types
of weight matrices, we show that RR can generally occur when a system has
multiple dynamical attractors, but is trapped in one or a few of them. In
probabilistic networks, the phenomenon is bound to a suitable observation time
scale, as the system could autonomously access its entire attractor landscape
even without the help of external noise, given enough time. Yet, even in large
systems, where time scales for observing RR in the full network become too
long, the resonance can still be detected in small subsets of neurons. Finally,
we show that short noise pulses can be used to transfer recurrent neural
networks, both probabilistic and deterministic, between their dynamical
attractors. Our results are relevant to the fields of reservoir computing and
neuroscience, where controlled noise may turn out a key factor for efficient
information processing leading to more robust and adaptable systems.
|
[
{
"created": "Sat, 10 Aug 2024 15:14:41 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Metzner",
"Claus",
""
],
[
"Schilling",
"Achim",
""
],
[
"Maier",
"Andreas",
""
],
[
"Krauss",
"Patrick",
""
]
] |
In specific motifs of three recurrently connected neurons with probabilistic response, the spontaneous information flux, defined as the mutual information between subsequent states, has been shown to increase by adding ongoing white noise of some optimal strength to each of the neurons \cite{krauss2019recurrence}. However, the precise conditions for and mechanisms of this phenomenon called 'recurrence resonance' (RR) remain largely unexplored. Using Boltzmann machines of different sizes and with various types of weight matrices, we show that RR can generally occur when a system has multiple dynamical attractors, but is trapped in one or a few of them. In probabilistic networks, the phenomenon is bound to a suitable observation time scale, as the system could autonomously access its entire attractor landscape even without the help of external noise, given enough time. Yet, even in large systems, where time scales for observing RR in the full network become too long, the resonance can still be detected in small subsets of neurons. Finally, we show that short noise pulses can be used to transfer recurrent neural networks, both probabilistic and deterministic, between their dynamical attractors. Our results are relevant to the fields of reservoir computing and neuroscience, where controlled noise may turn out a key factor for efficient information processing leading to more robust and adaptable systems.
|
2107.10901
|
Saba Moeinizade
|
Saba Moeinizade, Guiping Hu, Lizhi Wang
|
A reinforcement learning approach to resource allocation in genomic
selection
|
18 pages,5 figures
| null | null | null |
q-bio.GN cs.AI cs.LG math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Genomic selection (GS) is a technique that plant breeders use to select
individuals to mate and produce new generations of species. Allocation of
resources is a key factor in GS. At each selection cycle, breeders are facing
the choice of budget allocation to make crosses and produce the next generation
of breeding parents. Inspired by recent advances in reinforcement learning for
AI problems, we develop a reinforcement learning-based algorithm to
automatically learn to allocate limited resources across different generations
of breeding. We mathematically formulate the problem in the framework of Markov
Decision Process (MDP) by defining state and action spaces. To avoid the
explosion of the state space, an integer linear program is proposed that
quantifies the trade-off between resources and time. Finally, we propose a
value function approximation method to estimate the action-value function and
then develop a greedy policy improvement technique to find the optimal
resources. We demonstrate the effectiveness of the proposed method in enhancing
genetic gain using a case study with realistic data.
|
[
{
"created": "Thu, 22 Jul 2021 19:55:16 GMT",
"version": "v1"
}
] |
2021-07-26
|
[
[
"Moeinizade",
"Saba",
""
],
[
"Hu",
"Guiping",
""
],
[
"Wang",
"Lizhi",
""
]
] |
Genomic selection (GS) is a technique that plant breeders use to select individuals to mate and produce new generations of species. Allocation of resources is a key factor in GS. At each selection cycle, breeders are facing the choice of budget allocation to make crosses and produce the next generation of breeding parents. Inspired by recent advances in reinforcement learning for AI problems, we develop a reinforcement learning-based algorithm to automatically learn to allocate limited resources across different generations of breeding. We mathematically formulate the problem in the framework of Markov Decision Process (MDP) by defining state and action spaces. To avoid the explosion of the state space, an integer linear program is proposed that quantifies the trade-off between resources and time. Finally, we propose a value function approximation method to estimate the action-value function and then develop a greedy policy improvement technique to find the optimal resources. We demonstrate the effectiveness of the proposed method in enhancing genetic gain using a case study with realistic data.
|
1907.13472
|
Jean-Marc Ginoux
|
Jean-Marc Ginoux (LIS), Heikki Ruskeep\"a\"a, Matja\v{z} Perc, Roomila
Naeck, V\'eronique Di Costanzo, Moez Bouchouicha (LIS), Farhat Fnaiech,
Mounir Sayadi, Takoua Hamdi
|
Is type 1 diabetes a chaotic phenomenon?
| null |
Chaos, Solitons & Fractals 111, 198-205 (2018)
|
10.1016/j.chaos.2018.03.033
| null |
q-bio.OT nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A database of ten type 1 diabetes patients wearing a continuous glucose
monitoring device has enabled to record their blood glucose continuous
variations every minute all day long during fourteen consecutive days. These
recordings represent, for each patient, a time series consisting of 1 value of
glycaemia per minute during 24 hours and 14 days, i.e., 20,160 data point.
Thus, while using numerical methods, these time series have been anonymously
analyzed. Nevertheless, because of the stochastic inputs induced by daily
activities of any human being, it has not been possible to discriminate chaos
from noise. So, we have decided to keep only the 14 nights of these ten
patients. Then, the determination of the time delay and embedding dimension
according to the delay coordinate embedding method has allowed us to estimate
for each patient the correlation dimension and the maximal Lyapunov exponent.
This has led us to show that type 1 diabetes could indeed be a chaotic
phenomenon. Once this result has been confirmed by the determinism test, we
have computed the Lyapunov time and found that the limit of predictability of
this phenomenon is nearly equal to half the 90-minutes sleep-dream cycle. We
hope that our results will prove to be useful to characterize and predict blood
glucose variations.
|
[
{
"created": "Mon, 29 Jul 2019 13:09:02 GMT",
"version": "v1"
}
] |
2019-08-07
|
[
[
"Ginoux",
"Jean-Marc",
"",
"LIS"
],
[
"Ruskeepää",
"Heikki",
"",
"LIS"
],
[
"Perc",
"Matjaž",
"",
"LIS"
],
[
"Naeck",
"Roomila",
"",
"LIS"
],
[
"Di Costanzo",
"Véronique",
"",
"LIS"
],
[
"Bouchouicha",
"Moez",
"",
"LIS"
],
[
"Fnaiech",
"Farhat",
""
],
[
"Sayadi",
"Mounir",
""
],
[
"Hamdi",
"Takoua",
""
]
] |
A database of ten type 1 diabetes patients wearing a continuous glucose monitoring device has enabled to record their blood glucose continuous variations every minute all day long during fourteen consecutive days. These recordings represent, for each patient, a time series consisting of 1 value of glycaemia per minute during 24 hours and 14 days, i.e., 20,160 data point. Thus, while using numerical methods, these time series have been anonymously analyzed. Nevertheless, because of the stochastic inputs induced by daily activities of any human being, it has not been possible to discriminate chaos from noise. So, we have decided to keep only the 14 nights of these ten patients. Then, the determination of the time delay and embedding dimension according to the delay coordinate embedding method has allowed us to estimate for each patient the correlation dimension and the maximal Lyapunov exponent. This has led us to show that type 1 diabetes could indeed be a chaotic phenomenon. Once this result has been confirmed by the determinism test, we have computed the Lyapunov time and found that the limit of predictability of this phenomenon is nearly equal to half the 90-minutes sleep-dream cycle. We hope that our results will prove to be useful to characterize and predict blood glucose variations.
|
1906.08238
|
Sebastian Salassi
|
Sebastian Salassi, Ester Canepa, Riccardo Ferrando and Giulia Rossi
|
Anionic nanoparticle-lipid membrane interactions: the protonation of
anionic ligands at the membrane surface reduces membrane disruption
| null |
RSC Adv., 2019, 9, 13992
|
10.1039/c9ra02462j
| null |
q-bio.BM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monolayer-protected gold nanoparticles (Au NPs) are promising biomedical
tools with applications to diagnosis and therapy, thanks to their
biocompatibility and versatility. Here we show how the NP surface
functionalization can drive the mechanism of interaction with lipid membranes.
In particular, we show that the spontaneous protonation of anionic carboxylic
groups on the NP surface can make the NP-membrane interaction faster and less
disruptive.
|
[
{
"created": "Thu, 30 May 2019 13:14:28 GMT",
"version": "v1"
}
] |
2019-06-20
|
[
[
"Salassi",
"Sebastian",
""
],
[
"Canepa",
"Ester",
""
],
[
"Ferrando",
"Riccardo",
""
],
[
"Rossi",
"Giulia",
""
]
] |
Monolayer-protected gold nanoparticles (Au NPs) are promising biomedical tools with applications to diagnosis and therapy, thanks to their biocompatibility and versatility. Here we show how the NP surface functionalization can drive the mechanism of interaction with lipid membranes. In particular, we show that the spontaneous protonation of anionic carboxylic groups on the NP surface can make the NP-membrane interaction faster and less disruptive.
|
q-bio/0605003
|
Zhao Jing
|
Jing Zhao, Hong Yu, Jian-Hua Luo, Zhi-Wei Cao, Yi-Xue Li
|
Hierarchical modularity of nested bow-ties in metabolic networks
|
26 pages, 9 figures
|
BMC Bioinformatics 2006, 7:386
| null | null |
q-bio.MN
| null |
The exploration of the structural topology and the organizing principles of
genome-based large-scale metabolic networks is essential for studying possible
relations between structure and functionality of metabolic networks.
Topological analysis of graph models has often been applied to study the
structural characteristics of complex metabolic networks.In this work,
metabolic networks of 75 organisms were investigated from a topological point
of view. Network decomposition of three microbes (Escherichia coli, Aeropyrum
pernix and Saccharomyces cerevisiae) shows that almost all of the sub-networks
exhibit a highly modularized bow-tie topological pattern similar to that of the
global metabolic networks. Moreover, these small bow-ties are hierarchically
nested into larger ones and collectively integrated into a large metabolic
network, and important features of this modularity are not observed in the
random shuffled network. In addition, such a bow-tie pattern appears to be
present in certain chemically isolated functional modules and spatially
separated modules including carbohydrate metabolism, cytosol and mitochondrion
respectively. The highly modularized bow-tie pattern is present at different
levels and scales, and in different chemical and spatial modules of metabolic
networks, which is likely the result of the evolutionary process rather than a
random accident. Identification and analysis of such a pattern is helpful for
understanding the design principles and facilitate the modelling of metabolic
networks.
|
[
{
"created": "Sun, 30 Apr 2006 05:43:36 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Aug 2006 12:20:53 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Zhao",
"Jing",
""
],
[
"Yu",
"Hong",
""
],
[
"Luo",
"Jian-Hua",
""
],
[
"Cao",
"Zhi-Wei",
""
],
[
"Li",
"Yi-Xue",
""
]
] |
The exploration of the structural topology and the organizing principles of genome-based large-scale metabolic networks is essential for studying possible relations between structure and functionality of metabolic networks. Topological analysis of graph models has often been applied to study the structural characteristics of complex metabolic networks.In this work, metabolic networks of 75 organisms were investigated from a topological point of view. Network decomposition of three microbes (Escherichia coli, Aeropyrum pernix and Saccharomyces cerevisiae) shows that almost all of the sub-networks exhibit a highly modularized bow-tie topological pattern similar to that of the global metabolic networks. Moreover, these small bow-ties are hierarchically nested into larger ones and collectively integrated into a large metabolic network, and important features of this modularity are not observed in the random shuffled network. In addition, such a bow-tie pattern appears to be present in certain chemically isolated functional modules and spatially separated modules including carbohydrate metabolism, cytosol and mitochondrion respectively. The highly modularized bow-tie pattern is present at different levels and scales, and in different chemical and spatial modules of metabolic networks, which is likely the result of the evolutionary process rather than a random accident. Identification and analysis of such a pattern is helpful for understanding the design principles and facilitate the modelling of metabolic networks.
|
1308.5342
|
Michael Harvey
|
Brian Tilston Smith, Michael G. Harvey, Brant C. Faircloth, Travis C.
Glenn, Robb T. Brumfield
|
Target capture and massively parallel sequencing of ultraconserved
elements (UCEs) for comparative studies at shallow evolutionary time scales
|
53 pages, 2 tables, 4 figures, 5 supplemental tables, 11 supplemental
figures
|
(2014) Systematic Biology 63: 83-95
|
10.1093/sysbio/syt061
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Comparative genetic studies of non-model organisms are transforming rapidly
due to major advances in sequencing technology. A limiting factor in these
studies has been the identification and screening of orthologous loci across an
evolutionarily distant set of taxa. Here, we evaluate the efficacy of genomic
markers targeting ultraconserved DNA elements (UCEs) for analyses at shallow
evolutionary timescales. Using sequence capture and massively parallel
sequencing to generate UCE data for five co-distributed Neotropical rainforest
bird species, we recovered 776-1,516 UCE loci across the five species. Across
species, 53-77 percent of the loci were polymorphic, containing between 2.0 and
3.2 variable sites per polymorphic locus, on average. We performed species tree
construction, coalescent modeling, and species delimitation, and we found that
the five co-distributed species exhibited discordant phylogeographic histories.
We also found that species trees and divergence times estimated from UCEs were
similar to those obtained from mtDNA. The species that inhabit the understory
had older divergence times across barriers, contained a higher number of
cryptic species, and exhibited larger effective population sizes relative to
species inhabiting the canopy. Because orthologous UCEs can be obtained from a
wide array of taxa, are polymorphic at shallow evolutionary time scales, and
can be generated rapidly at low cost, they are effective genetic markers for
studies investigating evolutionary patterns and processes at shallow time
scales.
|
[
{
"created": "Sat, 24 Aug 2013 14:56:21 GMT",
"version": "v1"
}
] |
2017-03-28
|
[
[
"Smith",
"Brian Tilston",
""
],
[
"Harvey",
"Michael G.",
""
],
[
"Faircloth",
"Brant C.",
""
],
[
"Glenn",
"Travis C.",
""
],
[
"Brumfield",
"Robb T.",
""
]
] |
Comparative genetic studies of non-model organisms are transforming rapidly due to major advances in sequencing technology. A limiting factor in these studies has been the identification and screening of orthologous loci across an evolutionarily distant set of taxa. Here, we evaluate the efficacy of genomic markers targeting ultraconserved DNA elements (UCEs) for analyses at shallow evolutionary timescales. Using sequence capture and massively parallel sequencing to generate UCE data for five co-distributed Neotropical rainforest bird species, we recovered 776-1,516 UCE loci across the five species. Across species, 53-77 percent of the loci were polymorphic, containing between 2.0 and 3.2 variable sites per polymorphic locus, on average. We performed species tree construction, coalescent modeling, and species delimitation, and we found that the five co-distributed species exhibited discordant phylogeographic histories. We also found that species trees and divergence times estimated from UCEs were similar to those obtained from mtDNA. The species that inhabit the understory had older divergence times across barriers, contained a higher number of cryptic species, and exhibited larger effective population sizes relative to species inhabiting the canopy. Because orthologous UCEs can be obtained from a wide array of taxa, are polymorphic at shallow evolutionary time scales, and can be generated rapidly at low cost, they are effective genetic markers for studies investigating evolutionary patterns and processes at shallow time scales.
|
1611.04425
|
Yuan Zhi Pan
|
Xue Lan Liao, Guo Qin Wen, Qing Lin Liu, Xue Yang Li Meng Xi Wu and
Yuan Zhi Pan
|
The Drought-Stress Response of a Drought Resistant Impatiens Processed
in PEG-6000 Solution in a Simulation Test
|
19 pages,6 figures,1 table
| null | null | null |
q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Premise of the Study: Impatiens is a commonly seen garden flower, renowned
for its strong adaptability and long history of cultivation. However, seldom
has any research touched on its physiological resistance mechanism. In this
experiment, the impatiens is selected from those which experienced aerospace
mutation and thereafter 12 years of cultivation and breeding. Therefore, it is
superior to the non-mutagenized impatiens in terms of drought resistance and
demonstrates tremendous differences from the normal impatiens in physiology,
which intrigues scholars to search for the underlying reasons . Methods: By
reference to Impatiens balsamina L,this experiment uses mutagenized impatiens
seeds, processed by PEG-6000 in different solution concentration, to measure
the germination rate of impatiens, its relative enzymatic activity and
expression differences between gene SoS2 and gene RD29b in the drought lower
reaches. Key results: Under simulated drought stress, there is no distinct
difference between the mutagenized impatiens and the normal impatiens in terms
of germination rate. But by measuring the root tillers and the length, the
relative enzymatic activity, MDA, and the expression differences between gene
SoS2 and gene RD29b in the drought lower reaches, it is verified that the
mutagenized impatiens has more advantages than the normal impatiens, and it Can
further cultivate become drought resistance varieties impatiens. Conclusions:
In this experiment,which is a positive mutation, the mutagenized impatiens
improves drought resistance through radiative mutation. The so obtained
impatiens is more pleasing in sight in terms of color and shape and has higher
application value in garden virescence.
|
[
{
"created": "Mon, 14 Nov 2016 15:46:26 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Liao",
"Xue Lan",
""
],
[
"Wen",
"Guo Qin",
""
],
[
"Liu",
"Qing Lin",
""
],
[
"Wu",
"Xue Yang Li Meng Xi",
""
],
[
"Pan",
"Yuan Zhi",
""
]
] |
Premise of the Study: Impatiens is a commonly seen garden flower, renowned for its strong adaptability and long history of cultivation. However, seldom has any research touched on its physiological resistance mechanism. In this experiment, the impatiens is selected from those which experienced aerospace mutation and thereafter 12 years of cultivation and breeding. Therefore, it is superior to the non-mutagenized impatiens in terms of drought resistance and demonstrates tremendous differences from the normal impatiens in physiology, which intrigues scholars to search for the underlying reasons . Methods: By reference to Impatiens balsamina L,this experiment uses mutagenized impatiens seeds, processed by PEG-6000 in different solution concentration, to measure the germination rate of impatiens, its relative enzymatic activity and expression differences between gene SoS2 and gene RD29b in the drought lower reaches. Key results: Under simulated drought stress, there is no distinct difference between the mutagenized impatiens and the normal impatiens in terms of germination rate. But by measuring the root tillers and the length, the relative enzymatic activity, MDA, and the expression differences between gene SoS2 and gene RD29b in the drought lower reaches, it is verified that the mutagenized impatiens has more advantages than the normal impatiens, and it Can further cultivate become drought resistance varieties impatiens. Conclusions: In this experiment,which is a positive mutation, the mutagenized impatiens improves drought resistance through radiative mutation. The so obtained impatiens is more pleasing in sight in terms of color and shape and has higher application value in garden virescence.
|
2404.16769
|
Martina Conte
|
Giulia Chiari, Martina Conte, and Marcello Delitala
|
Multi-scale modeling of Snail-mediated response to hypoxia in tumor
progression
|
30 pages, 8 figures
| null | null | null |
q-bio.CB
|
http://creativecommons.org/licenses/by/4.0/
|
Tumor cell migration within the microenvironment is a crucial aspect for
cancer progression and, in this context, hypoxia has a significant role. An
inadequate oxygen supply acts as an environmental stressor inducing migratory
bias and phenotypic changes. In this paper, we propose a novel multi-scale
mathematical model to analyze the pivotal role of Snail protein expression in
the cellular responses to hypoxia. Starting from the description of single-cell
dynamics driven by the Snail protein, we construct the corresponding kinetic
transport equation that describes the evolution of the cell distribution.
Subsequently, we employ proper scaling arguments to formally derive the
equations for the statistical moments of the cell distribution, which govern
the macroscopic tumor dynamics. Numerical simulations of the model are
performed in various scenarios with biological relevance to provide insights
into the role of the multiple tactic terms, the impact of Snail expression on
cell proliferation, and the emergence of hypoxia-induced migration patterns.
Moreover, quantitative comparison with experimental data shows the model's
reliability in measuring the impact of Snail transcription on cell migratory
potential. Through our findings, we shed light on the potential of our
mathematical framework in advancing the understanding of the biological
mechanisms driving tumor progression.
|
[
{
"created": "Thu, 25 Apr 2024 17:23:29 GMT",
"version": "v1"
}
] |
2024-04-26
|
[
[
"Chiari",
"Giulia",
""
],
[
"Conte",
"Martina",
""
],
[
"Delitala",
"Marcello",
""
]
] |
Tumor cell migration within the microenvironment is a crucial aspect for cancer progression and, in this context, hypoxia has a significant role. An inadequate oxygen supply acts as an environmental stressor inducing migratory bias and phenotypic changes. In this paper, we propose a novel multi-scale mathematical model to analyze the pivotal role of Snail protein expression in the cellular responses to hypoxia. Starting from the description of single-cell dynamics driven by the Snail protein, we construct the corresponding kinetic transport equation that describes the evolution of the cell distribution. Subsequently, we employ proper scaling arguments to formally derive the equations for the statistical moments of the cell distribution, which govern the macroscopic tumor dynamics. Numerical simulations of the model are performed in various scenarios with biological relevance to provide insights into the role of the multiple tactic terms, the impact of Snail expression on cell proliferation, and the emergence of hypoxia-induced migration patterns. Moreover, quantitative comparison with experimental data shows the model's reliability in measuring the impact of Snail transcription on cell migratory potential. Through our findings, we shed light on the potential of our mathematical framework in advancing the understanding of the biological mechanisms driving tumor progression.
|
1704.05407
|
Sergei Maslov
|
Sergei Maslov and Kim Sneppen
|
Severe population collapses and species extinctions in multi-host
epidemic dynamics
| null |
Phys. Rev. E 96, 022412 (2017)
|
10.1103/PhysRevE.96.022412
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most infectious diseases including more than half of known human pathogens
are not restricted to just one host, yet much of the mathematical modeling of
infections has been limited to a single species. We investigate consequences of
a single epidemic propagating in multiple species and compare and contrast it
with the endemic steady state of the disease. We use the two-species
Susceptible-Infected-Recovered (SIR) model to calculate the severity of
post-epidemic collapses in populations of two host species as a function of
their initial population sizes, the times individuals remain infectious, and
the matrix of infection rates. We derive the criteria for a very large,
extinction-level, population collapse in one or both of the species. The main
conclusion of our study is that a single epidemic could drive a species with
high mortality rate to local or even global extinction provided that it is
co-infected with an abundant species. Such collapse-driven extinctions depend
on factors different than those in the endemic steady state of the disease.
|
[
{
"created": "Tue, 18 Apr 2017 16:12:25 GMT",
"version": "v1"
}
] |
2017-08-30
|
[
[
"Maslov",
"Sergei",
""
],
[
"Sneppen",
"Kim",
""
]
] |
Most infectious diseases including more than half of known human pathogens are not restricted to just one host, yet much of the mathematical modeling of infections has been limited to a single species. We investigate consequences of a single epidemic propagating in multiple species and compare and contrast it with the endemic steady state of the disease. We use the two-species Susceptible-Infected-Recovered (SIR) model to calculate the severity of post-epidemic collapses in populations of two host species as a function of their initial population sizes, the times individuals remain infectious, and the matrix of infection rates. We derive the criteria for a very large, extinction-level, population collapse in one or both of the species. The main conclusion of our study is that a single epidemic could drive a species with high mortality rate to local or even global extinction provided that it is co-infected with an abundant species. Such collapse-driven extinctions depend on factors different than those in the endemic steady state of the disease.
|
2303.00933
|
Fabian Filipp
|
Fabian V. Filipp
|
Spatial cancer systems biology resolves heterotypic interactions and
identifies disruption of spatial hierarchy as a pathological driver event
|
7 pages, 2 figures
| null | null | null |
q-bio.GN q-bio.CB q-bio.MN q-bio.TO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Spatially annotated single-cell datasets provide unprecedented opportunities
to dissect cell-cell communication in development and disease. Heterotypic
signaling includes interactions between different cell types and is well
established in tissue development and spatial organization. Epithelial
organization requires several different programs that are tightly regulated.
Planar cell polarity is the organization of epithelial cells along the planar
axis orthogonal to the apical-basal axis. In this study, we investigate planar
cell polarity factors and explore the implications of developmental regulators
as malignant drivers. Utilizing cancer systems biology analysis, we derive gene
expression network for WNT-ligands (WNT) and their cognate frizzled (FZD)
receptors in skin cutaneous melanoma. The profiles supported by unsupervised
clustering of multiple-sequence alignments identify ligand-independent
signaling and implications for metastatic progression based on the underpinning
developmental spatial program. Omics studies and spatial biology connect
developmental programs with oncological events and explain key spatial features
of metastatic aggressiveness. Dysregulation of prominent planar cell polarity
factors such specific representative of the WNT and FZD families in malignant
melanoma recapitulates the development program of normal melanocytes but in an
uncontrolled and disorganized fashion.
|
[
{
"created": "Thu, 2 Mar 2023 03:14:06 GMT",
"version": "v1"
}
] |
2023-03-03
|
[
[
"Filipp",
"Fabian V.",
""
]
] |
Spatially annotated single-cell datasets provide unprecedented opportunities to dissect cell-cell communication in development and disease. Heterotypic signaling includes interactions between different cell types and is well established in tissue development and spatial organization. Epithelial organization requires several different programs that are tightly regulated. Planar cell polarity is the organization of epithelial cells along the planar axis orthogonal to the apical-basal axis. In this study, we investigate planar cell polarity factors and explore the implications of developmental regulators as malignant drivers. Utilizing cancer systems biology analysis, we derive gene expression network for WNT-ligands (WNT) and their cognate frizzled (FZD) receptors in skin cutaneous melanoma. The profiles supported by unsupervised clustering of multiple-sequence alignments identify ligand-independent signaling and implications for metastatic progression based on the underpinning developmental spatial program. Omics studies and spatial biology connect developmental programs with oncological events and explain key spatial features of metastatic aggressiveness. Dysregulation of prominent planar cell polarity factors such specific representative of the WNT and FZD families in malignant melanoma recapitulates the development program of normal melanocytes but in an uncontrolled and disorganized fashion.
|
1912.05370
|
Yves Dumont
|
Michael Chapwanya, Yves Dumont (UMR AMAP)
|
Application of Mathematical Epidemiology to crop vector-borne diseases.
The cassava mosaic virus disease case
|
Infectious Diseases and our Planet, In press
| null | null | null |
q-bio.PE math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this chapter, an application of Mathematical Epidemiology to crop
vector-borne diseases is presented to investigate the interactions between
crops, vectors, and virus. The main illustrative example is the cassava mosaic
disease (CMD). The CMD virus has two routes of infection: through vectors and
also through infected crops. In the field, the main tool to control CMD
spreading is roguing. The presented biological model is sufficiently generic
and the same methodology can be adapted to other crops or crop vector-borne
diseases. After an introduction where a brief history of crop diseases and
useful information on Cassava and CMD is given, we develop and study a
compartmental temporal model, taking into account the crop growth and the
vector dynamics. A brief qualitative analysis of the model is provided,i.e.,
existence and uniqueness of a solution,existence of a disease-free equilibrium
and existence of an endemic equilibrium. We also provide conditions for local
(global) asymptotic stability and show that a Hopf Bifurcation may occur, for
instance, when diseased plants are removed. Numerical simulations are provided
to illustrate all possible behaviors. Finally, we discuss the theoretical and
numerical outputs in terms of crop protection.
|
[
{
"created": "Wed, 11 Dec 2019 14:56:29 GMT",
"version": "v1"
}
] |
2019-12-12
|
[
[
"Chapwanya",
"Michael",
"",
"UMR AMAP"
],
[
"Dumont",
"Yves",
"",
"UMR AMAP"
]
] |
In this chapter, an application of Mathematical Epidemiology to crop vector-borne diseases is presented to investigate the interactions between crops, vectors, and virus. The main illustrative example is the cassava mosaic disease (CMD). The CMD virus has two routes of infection: through vectors and also through infected crops. In the field, the main tool to control CMD spreading is roguing. The presented biological model is sufficiently generic and the same methodology can be adapted to other crops or crop vector-borne diseases. After an introduction where a brief history of crop diseases and useful information on Cassava and CMD is given, we develop and study a compartmental temporal model, taking into account the crop growth and the vector dynamics. A brief qualitative analysis of the model is provided,i.e., existence and uniqueness of a solution,existence of a disease-free equilibrium and existence of an endemic equilibrium. We also provide conditions for local (global) asymptotic stability and show that a Hopf Bifurcation may occur, for instance, when diseased plants are removed. Numerical simulations are provided to illustrate all possible behaviors. Finally, we discuss the theoretical and numerical outputs in terms of crop protection.
|
2112.10068
|
Somali Chaterji
|
Atul Sharma, Pranjal Jain, Ashraf Mahgoub, Zihan Zhou, Kanak Mahadik,
and Somali Chaterji
|
Lerna: Transformer Architectures for Configuring Error Correction Tools
for Short- and Long-Read Genome Sequencing
|
26 pages, 5 figures, 10 tables. Accepted to BMC Bioinformatics
| null | null | null |
q-bio.GN cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sequencing technologies are prone to errors, making error correction (EC)
necessary for downstream applications. EC tools need to be manually configured
for optimal performance. We find that the optimal parameters (e.g., k-mer size)
are both tool- and dataset-dependent. Moreover, evaluating the performance
(i.e., Alignment-rate or Gain) of a given tool usually relies on a reference
genome, but quality reference genomes are not always available. We introduce
Lerna for the automated configuration of k-mer-based EC tools. Lerna first
creates a language model (LM) of the uncorrected genomic reads; then,
calculates the perplexity metric to evaluate the corrected reads for different
parameter choices. Next, it finds the one that produces the highest alignment
rate without using a reference genome. The fundamental intuition of our
approach is that the perplexity metric is inversely correlated with the quality
of the assembly after error correction. Results: First, we show that the best
k-mer value can vary for different datasets, even for the same EC tool. Second,
we show the gains of our LM using its component attention-based transformers.
We show the model's estimation of the perplexity metric before and after error
correction. The lower the perplexity after correction, the better the k-mer
size. We also show that the alignment rate and assembly quality computed for
the corrected reads are strongly negatively correlated with the perplexity,
enabling the automated selection of k-mer values for better error correction,
and hence, improved assembly quality. Additionally, we show that our
attention-based models have significant runtime improvement for the entire
pipeline -- 18X faster than previous works, due to parallelizing the attention
mechanism and the use of JIT compilation for GPU inferencing.
|
[
{
"created": "Sun, 19 Dec 2021 05:59:26 GMT",
"version": "v1"
}
] |
2021-12-21
|
[
[
"Sharma",
"Atul",
""
],
[
"Jain",
"Pranjal",
""
],
[
"Mahgoub",
"Ashraf",
""
],
[
"Zhou",
"Zihan",
""
],
[
"Mahadik",
"Kanak",
""
],
[
"Chaterji",
"Somali",
""
]
] |
Sequencing technologies are prone to errors, making error correction (EC) necessary for downstream applications. EC tools need to be manually configured for optimal performance. We find that the optimal parameters (e.g., k-mer size) are both tool- and dataset-dependent. Moreover, evaluating the performance (i.e., Alignment-rate or Gain) of a given tool usually relies on a reference genome, but quality reference genomes are not always available. We introduce Lerna for the automated configuration of k-mer-based EC tools. Lerna first creates a language model (LM) of the uncorrected genomic reads; then, calculates the perplexity metric to evaluate the corrected reads for different parameter choices. Next, it finds the one that produces the highest alignment rate without using a reference genome. The fundamental intuition of our approach is that the perplexity metric is inversely correlated with the quality of the assembly after error correction. Results: First, we show that the best k-mer value can vary for different datasets, even for the same EC tool. Second, we show the gains of our LM using its component attention-based transformers. We show the model's estimation of the perplexity metric before and after error correction. The lower the perplexity after correction, the better the k-mer size. We also show that the alignment rate and assembly quality computed for the corrected reads are strongly negatively correlated with the perplexity, enabling the automated selection of k-mer values for better error correction, and hence, improved assembly quality. Additionally, we show that our attention-based models have significant runtime improvement for the entire pipeline -- 18X faster than previous works, due to parallelizing the attention mechanism and the use of JIT compilation for GPU inferencing.
|
1210.5497
|
Mary Rorick
|
David V. Foster, Mary M. Rorick, Tanja Gesell, Laura Feeney and Jacob
G. Foster
|
Dynamic Landscapes: A Model of Context and Contingency in Evolution
| null | null | null | null |
q-bio.PE q-bio.MN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The basic mechanics of evolution have been understood since Darwin. But
debate continues over whether macroevolutionary phenomena are driven primary by
the fitness structure of genotype space or by ecological interaction. In this
paper we propose a simple, abstract model capturing some key features of
fitness-landscape and ecological models of evolution. Our model describes
evolutionary dynamics in a high-dimensional, structured genotype space with a
significant role for interspecific interaction. We find some promising
qualitative similarity with the empirical facts about macroevolution, including
broadly distributed extinction sizes and realistic exploration of the genotype
space. The abstraction of our model permits numerous interpretations and
applications beyond macroevolution, including molecular evolution and
technological innovation.
|
[
{
"created": "Fri, 19 Oct 2012 18:49:51 GMT",
"version": "v1"
}
] |
2012-10-22
|
[
[
"Foster",
"David V.",
""
],
[
"Rorick",
"Mary M.",
""
],
[
"Gesell",
"Tanja",
""
],
[
"Feeney",
"Laura",
""
],
[
"Foster",
"Jacob G.",
""
]
] |
The basic mechanics of evolution have been understood since Darwin. But debate continues over whether macroevolutionary phenomena are driven primary by the fitness structure of genotype space or by ecological interaction. In this paper we propose a simple, abstract model capturing some key features of fitness-landscape and ecological models of evolution. Our model describes evolutionary dynamics in a high-dimensional, structured genotype space with a significant role for interspecific interaction. We find some promising qualitative similarity with the empirical facts about macroevolution, including broadly distributed extinction sizes and realistic exploration of the genotype space. The abstraction of our model permits numerous interpretations and applications beyond macroevolution, including molecular evolution and technological innovation.
|
2003.09348
|
Pradeep Sarin
|
Ishant Tiwari, Pradeep Sarin, Punit Parmananda
|
Predictive modelling of disease propagation in a mobile, connected
community
|
Updated with full model calculations. Submited to PRL
| null |
10.1063/5.0021113
| null |
q-bio.PE nlin.CG physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present numerical results obtained from the modelling of a stochastic,
highly connected and mobile community. The spread of attributes like health,
disease among the community members is simulated using cellular automata on a
planar 2 dimensional bounded space. With remarkably few assumptions, we are
able to predict the future course of propagation of such disease as a function
of time and the fine-tuning of parameters related to the interaction among the
automata.
|
[
{
"created": "Thu, 19 Mar 2020 16:08:29 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2020 09:13:04 GMT",
"version": "v2"
}
] |
2020-08-26
|
[
[
"Tiwari",
"Ishant",
""
],
[
"Sarin",
"Pradeep",
""
],
[
"Parmananda",
"Punit",
""
]
] |
We present numerical results obtained from the modelling of a stochastic, highly connected and mobile community. The spread of attributes like health, disease among the community members is simulated using cellular automata on a planar 2 dimensional bounded space. With remarkably few assumptions, we are able to predict the future course of propagation of such disease as a function of time and the fine-tuning of parameters related to the interaction among the automata.
|
2302.05789
|
Ivana Pajic-Lijakovic Dr.
|
Ivana Pajic-Lijakovic and Milan Milivojevic
|
Physics of collective cell migration
|
21 pages, 3 figures, 2 tables
| null | null | null |
q-bio.CB q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
Movement of cell clusters along extracellular matrices (ECM) during tissue
development, wound healing, and early stage of cancer invasion involve various
inter-connected migration modes such as: (1) cell movement within clusters, (2)
cluster extension (wetting) and compression (de-wetting), and (3) directional
cluster movement. It has become increasingly evident that dilational and
volumetric viscoelasticity of cell clusters and their surrounding substrate
significantly influence these migration modes through physical parameters such
as: cell and matrix surface tensions, interfacial tension between cells and
substrate, gradients of surface and interfacial tensions, as well as, the
accumulation of cell and matrix residual stresses. Inhomogeneous distribution
of cell surface tension along migrating cell cluster can appear as a
consequence of different strength of cell-cell adhesion contacts and cell
contractility between leader and follower cells. While the directional cell
migration caused by the matrix stiffness gradient (i.e. durotaxis) has been
widely elaborated, the structural changes of matrix surface caused by cell
tractions which lead to the generation of the matrix surface tension gradient
has not been considered yet. The main goal of this theoretical consideration is
to clarify the roles of various physical parameters in collective cell
migration based on the formulating biophysical model. This complex phenomenon
is discussed on the model systems such as the movement of cell clusters on the
collagen I gel matrix by simultaneously reviewing various experimental data
with and without cells.
|
[
{
"created": "Sat, 11 Feb 2023 21:47:57 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Pajic-Lijakovic",
"Ivana",
""
],
[
"Milivojevic",
"Milan",
""
]
] |
Movement of cell clusters along extracellular matrices (ECM) during tissue development, wound healing, and early stage of cancer invasion involve various inter-connected migration modes such as: (1) cell movement within clusters, (2) cluster extension (wetting) and compression (de-wetting), and (3) directional cluster movement. It has become increasingly evident that dilational and volumetric viscoelasticity of cell clusters and their surrounding substrate significantly influence these migration modes through physical parameters such as: cell and matrix surface tensions, interfacial tension between cells and substrate, gradients of surface and interfacial tensions, as well as, the accumulation of cell and matrix residual stresses. Inhomogeneous distribution of cell surface tension along migrating cell cluster can appear as a consequence of different strength of cell-cell adhesion contacts and cell contractility between leader and follower cells. While the directional cell migration caused by the matrix stiffness gradient (i.e. durotaxis) has been widely elaborated, the structural changes of matrix surface caused by cell tractions which lead to the generation of the matrix surface tension gradient has not been considered yet. The main goal of this theoretical consideration is to clarify the roles of various physical parameters in collective cell migration based on the formulating biophysical model. This complex phenomenon is discussed on the model systems such as the movement of cell clusters on the collagen I gel matrix by simultaneously reviewing various experimental data with and without cells.
|
1903.10478
|
Benjamin Allen
|
Benjamin Allen, Gabor Lippner, Martin A. Nowak
|
Evolutionary Games on Isothermal Graphs
|
40 pages, 6 figures
| null |
10.1038/s41467-019-13006-7
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Population structure affects the outcome of natural selection. Static
population structures can be described by graphs, where individuals occupy the
nodes, and interactions occur along the edges. General conditions for
evolutionary success on any weighted graph were recently derived, for weak
selection, in terms of coalescence times of random walks. Here we show that for
a special class of graphs, the conditions for success take a particularly
simple form, in which all effects of graph structure are described by the
graph's "effective degree"---a measure of the effective number of neighbors per
individual. This result holds for all weighted graphs that are isothermal,
meaning that the sum of edge weights is the same at each node. Isothermal
graphs encompass a wide variety of underlying topologies, and arise naturally
from supposing that each individual devotes the same amount of time to
interaction. Cooperative behavior is favored on a large isothermal graph if the
benefit-to-cost ratio exceeds the effective degree. We relate the effective
degree of a graph to its spectral gap, thereby providing a link between
evolutionary dynamics and the theory of expander graphs. As a surprising
example, we report graphs of infinite average degree that are nonetheless
highly conducive for promoting cooperation.
|
[
{
"created": "Mon, 25 Mar 2019 17:26:34 GMT",
"version": "v1"
}
] |
2020-01-08
|
[
[
"Allen",
"Benjamin",
""
],
[
"Lippner",
"Gabor",
""
],
[
"Nowak",
"Martin A.",
""
]
] |
Population structure affects the outcome of natural selection. Static population structures can be described by graphs, where individuals occupy the nodes, and interactions occur along the edges. General conditions for evolutionary success on any weighted graph were recently derived, for weak selection, in terms of coalescence times of random walks. Here we show that for a special class of graphs, the conditions for success take a particularly simple form, in which all effects of graph structure are described by the graph's "effective degree"---a measure of the effective number of neighbors per individual. This result holds for all weighted graphs that are isothermal, meaning that the sum of edge weights is the same at each node. Isothermal graphs encompass a wide variety of underlying topologies, and arise naturally from supposing that each individual devotes the same amount of time to interaction. Cooperative behavior is favored on a large isothermal graph if the benefit-to-cost ratio exceeds the effective degree. We relate the effective degree of a graph to its spectral gap, thereby providing a link between evolutionary dynamics and the theory of expander graphs. As a surprising example, we report graphs of infinite average degree that are nonetheless highly conducive for promoting cooperation.
|
2311.07791
|
Jesse Meyer
|
Yuming Jiang, Devasahayam Arokia Balaya Rex, Dina Schuster, Benjamin
A. Neely, Germ\'an L. Rosano, Norbert Volkmar, Amanda Momenzadeh, Trenton M.
Peters-Clarke, Susan B. Egbert, Simion Kreimer, Emma H. Doud, Oliver M.
Crook, Amit Kumar Yadav, Muralidharan Vanuopadath, Mart\'in L. Mayta, Anna G.
Duboff, Nicholas M. Riley, Robert L. Moritz, Jesse G. Meyer
|
Comprehensive Overview of Bottom-up Proteomics using Mass Spectrometry
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Proteomics is the large scale study of protein structure and function from
biological systems through protein identification and quantification. "Shotgun
proteomics" or "bottom-up proteomics" is the prevailing strategy, in which
proteins are hydrolyzed into peptides that are analyzed by mass spectrometry.
Proteomics studies can be applied to diverse studies ranging from simple
protein identification to studies of proteoforms, protein-protein interactions,
protein structural alterations, absolute and relative protein quantification,
post-translational modifications, and protein stability. To enable this range
of different experiments, there are diverse strategies for proteome analysis.
The nuances of how proteomic workflows differ may be challenging to understand
for new practitioners. Here, we provide a comprehensive overview of different
proteomics methods to aid the novice and experienced researcher. We cover from
biochemistry basics and protein extraction to biological interpretation and
orthogonal validation. We expect this work to serve as a basic resource for new
practitioners in the field of shotgun or bottom-up proteomics.
|
[
{
"created": "Mon, 13 Nov 2023 22:58:30 GMT",
"version": "v1"
}
] |
2023-11-15
|
[
[
"Jiang",
"Yuming",
""
],
[
"Rex",
"Devasahayam Arokia Balaya",
""
],
[
"Schuster",
"Dina",
""
],
[
"Neely",
"Benjamin A.",
""
],
[
"Rosano",
"Germán L.",
""
],
[
"Volkmar",
"Norbert",
""
],
[
"Momenzadeh",
"Amanda",
""
],
[
"Peters-Clarke",
"Trenton M.",
""
],
[
"Egbert",
"Susan B.",
""
],
[
"Kreimer",
"Simion",
""
],
[
"Doud",
"Emma H.",
""
],
[
"Crook",
"Oliver M.",
""
],
[
"Yadav",
"Amit Kumar",
""
],
[
"Vanuopadath",
"Muralidharan",
""
],
[
"Mayta",
"Martín L.",
""
],
[
"Duboff",
"Anna G.",
""
],
[
"Riley",
"Nicholas M.",
""
],
[
"Moritz",
"Robert L.",
""
],
[
"Meyer",
"Jesse G.",
""
]
] |
Proteomics is the large scale study of protein structure and function from biological systems through protein identification and quantification. "Shotgun proteomics" or "bottom-up proteomics" is the prevailing strategy, in which proteins are hydrolyzed into peptides that are analyzed by mass spectrometry. Proteomics studies can be applied to diverse studies ranging from simple protein identification to studies of proteoforms, protein-protein interactions, protein structural alterations, absolute and relative protein quantification, post-translational modifications, and protein stability. To enable this range of different experiments, there are diverse strategies for proteome analysis. The nuances of how proteomic workflows differ may be challenging to understand for new practitioners. Here, we provide a comprehensive overview of different proteomics methods to aid the novice and experienced researcher. We cover from biochemistry basics and protein extraction to biological interpretation and orthogonal validation. We expect this work to serve as a basic resource for new practitioners in the field of shotgun or bottom-up proteomics.
|
1807.00215
|
Peter Clote
|
Peter Clote
|
On the scale-free nature of RNA secondary structure networks
|
4 tables, 11 figures, 26 pages
| null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A network is scale-free if its connectivity density function is proportional
to a power-law distribution. Scale-free networks may provide an explanation for
the robustness observed in certain physical and biological phenomena, since the
presence of a few highly connected hub nodes and a large number of small-
degree nodes may provide alternate paths between any two nodes on average --
such robustness has been suggested in studies of metabolic networks, gene
interaction networks and protein folding. A theoretical justification for why
biological networks are often found to be scale-free may lie in the well-known
fact that expanding networks in which new nodes are preferentially attached to
highly connected nodes tend to be scale-free. In this paper, we provide the
first efficient algorithm to compute the connectivity density function for the
ensemble of all secondary structures of a user-specified length, and show both
by computational and theoretical arguments that preferential attachment holds
when expanding the network from length n to length n + 1 structures. Since
existent power-law fitting software, such as powerlaw, cannot be used to
determine a power-law fit for our exponentially large RNA connectivity data, we
also implement efficient code to compute the maximum likelihood estimate for
the power-law scaling factor and associated Kolmogorov-Smirnov p-value.
Statistical goodness-of-fit tests indicate that one must reject the hypothesis
that RNA connectivity data follows a power-law distribution. Nevertheless, the
power-law fit is visually a good approximation for the tail of connectivity
data, and provides a rationale for investigation of preferential attachment in
the context of macromolecular folding.
|
[
{
"created": "Sat, 30 Jun 2018 18:12:14 GMT",
"version": "v1"
}
] |
2018-07-03
|
[
[
"Clote",
"Peter",
""
]
] |
A network is scale-free if its connectivity density function is proportional to a power-law distribution. Scale-free networks may provide an explanation for the robustness observed in certain physical and biological phenomena, since the presence of a few highly connected hub nodes and a large number of small- degree nodes may provide alternate paths between any two nodes on average -- such robustness has been suggested in studies of metabolic networks, gene interaction networks and protein folding. A theoretical justification for why biological networks are often found to be scale-free may lie in the well-known fact that expanding networks in which new nodes are preferentially attached to highly connected nodes tend to be scale-free. In this paper, we provide the first efficient algorithm to compute the connectivity density function for the ensemble of all secondary structures of a user-specified length, and show both by computational and theoretical arguments that preferential attachment holds when expanding the network from length n to length n + 1 structures. Since existent power-law fitting software, such as powerlaw, cannot be used to determine a power-law fit for our exponentially large RNA connectivity data, we also implement efficient code to compute the maximum likelihood estimate for the power-law scaling factor and associated Kolmogorov-Smirnov p-value. Statistical goodness-of-fit tests indicate that one must reject the hypothesis that RNA connectivity data follows a power-law distribution. Nevertheless, the power-law fit is visually a good approximation for the tail of connectivity data, and provides a rationale for investigation of preferential attachment in the context of macromolecular folding.
|
2403.00033
|
Jun-En Ding
|
Jun-En Ding, Shihao Yang, Anna Zilverstand, and Feng Liu
|
Identification of Craving Maps among Marijuana Users via the Analysis of
Functional Brain Networks with High-Order Attention Graph Neural Networks
| null | null | null | null |
q-bio.NC cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The excessive consumption of marijuana can induce substantial psychological
and social consequences. In this investigation, we propose an elucidative
framework termed high-order graph attention neural networks (HOGANN) for the
classification of Marijuana addiction, coupled with an analysis of localized
brain network communities exhibiting abnormal activities among chronic
marijuana users. HOGANN integrates dynamic intrinsic functional brain networks,
estimated from resting-state functional magnetic resonance imaging (rs-fMRI),
using long short-term memory (LSTM) to capture temporal network dynamics. We
employ a high-order attention module for information fusion and message passing
among neighboring nodes, enhancing the network community analysis. Our model is
validated across two distinct data cohorts, yielding substantially higher
classification accuracy than benchmark algorithms. Furthermore, we discern the
most pertinent subnetworks and cognitive regions affected by persistent
marijuana consumption, indicating adverse effects on functional brain networks,
particularly within the dorsal attention and frontoparietal networks.
Intriguingly, our model demonstrates superior performance in cohorts exhibiting
prolonged dependence, implying that prolonged marijuana usage induces more
pronounced alterations in brain networks. The model proficiently identifies
craving brain maps, thereby delineating critical brain regions for analysis.
|
[
{
"created": "Thu, 29 Feb 2024 04:01:38 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2024 15:00:58 GMT",
"version": "v2"
},
{
"created": "Sun, 17 Mar 2024 03:59:44 GMT",
"version": "v3"
},
{
"created": "Tue, 26 Mar 2024 04:58:01 GMT",
"version": "v4"
}
] |
2024-03-27
|
[
[
"Ding",
"Jun-En",
""
],
[
"Yang",
"Shihao",
""
],
[
"Zilverstand",
"Anna",
""
],
[
"Liu",
"Feng",
""
]
] |
The excessive consumption of marijuana can induce substantial psychological and social consequences. In this investigation, we propose an elucidative framework termed high-order graph attention neural networks (HOGANN) for the classification of Marijuana addiction, coupled with an analysis of localized brain network communities exhibiting abnormal activities among chronic marijuana users. HOGANN integrates dynamic intrinsic functional brain networks, estimated from resting-state functional magnetic resonance imaging (rs-fMRI), using long short-term memory (LSTM) to capture temporal network dynamics. We employ a high-order attention module for information fusion and message passing among neighboring nodes, enhancing the network community analysis. Our model is validated across two distinct data cohorts, yielding substantially higher classification accuracy than benchmark algorithms. Furthermore, we discern the most pertinent subnetworks and cognitive regions affected by persistent marijuana consumption, indicating adverse effects on functional brain networks, particularly within the dorsal attention and frontoparietal networks. Intriguingly, our model demonstrates superior performance in cohorts exhibiting prolonged dependence, implying that prolonged marijuana usage induces more pronounced alterations in brain networks. The model proficiently identifies craving brain maps, thereby delineating critical brain regions for analysis.
|
1109.4160
|
Monica Skoge
|
Monica Skoge, Yigal Meir, and Ned S. Wingreen
|
Dynamics of cooperativity in chemical sensing among cell-surface
receptors
|
5 pages, 2 figures
| null |
10.1103/PhysRevLett.107.178101
|
Phys. Rev. Lett. 107 (2011) 178101
|
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative interactions among sensory receptors provide a general mechanism
to increase the sensitivity of signal transduction. In particular, bacterial
chemotaxis receptors interact cooperatively to produce an ultrasensitive
response to chemoeffector concentrations. However, cooperativity between
receptors in large macromolecular complexes is necessarily based on local
interactions and consequently is fundamentally connected to slowing of receptor
conformational dynamics, which increases intrinsic noise. Therefore, it is not
clear whether or under what conditions cooperativity actually increases the
precision of the concentration measurement. We explictly calculate the
signal-to-noise ratio (SNR) for sensing a concentration change using a simple,
Ising-type model of receptor-receptor interactions, generalized via scaling
arguments, and find that the optimal SNR is always achieved by independent
receptors.
|
[
{
"created": "Mon, 19 Sep 2011 20:11:36 GMT",
"version": "v1"
}
] |
2011-10-19
|
[
[
"Skoge",
"Monica",
""
],
[
"Meir",
"Yigal",
""
],
[
"Wingreen",
"Ned S.",
""
]
] |
Cooperative interactions among sensory receptors provide a general mechanism to increase the sensitivity of signal transduction. In particular, bacterial chemotaxis receptors interact cooperatively to produce an ultrasensitive response to chemoeffector concentrations. However, cooperativity between receptors in large macromolecular complexes is necessarily based on local interactions and consequently is fundamentally connected to slowing of receptor conformational dynamics, which increases intrinsic noise. Therefore, it is not clear whether or under what conditions cooperativity actually increases the precision of the concentration measurement. We explictly calculate the signal-to-noise ratio (SNR) for sensing a concentration change using a simple, Ising-type model of receptor-receptor interactions, generalized via scaling arguments, and find that the optimal SNR is always achieved by independent receptors.
|
2012.02113
|
Tom Leinster
|
Tom Leinster
|
Entropy and Diversity: The Axiomatic Approach
|
Book, viii + 442 pages. Version 3: small number of minor corrections
|
Cambridge University Press 2021, ISBN 9781108965576 (paperback),
9781108832700 (hardback)
| null | null |
q-bio.PE cs.IT math.CA math.CT math.IT q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This book brings new mathematical rigour to the ongoing vigorous debate on
how to quantify biological diversity. The question "what is diversity?" has
surprising mathematical depth, and breadth too: this book involves parts of
mathematics ranging from information theory, functional equations and
probability theory to category theory, geometric measure theory and number
theory. It applies the power of the axiomatic method to a biological problem of
pressing concern, but the new concepts and theorems are also motivated from a
purely mathematical perspective.
The main narrative thread requires no more than an undergraduate course in
analysis. No familiarity with entropy or diversity is assumed.
|
[
{
"created": "Thu, 3 Dec 2020 17:47:44 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jun 2021 18:51:20 GMT",
"version": "v2"
},
{
"created": "Sat, 22 Oct 2022 22:46:41 GMT",
"version": "v3"
}
] |
2022-10-25
|
[
[
"Leinster",
"Tom",
""
]
] |
This book brings new mathematical rigour to the ongoing vigorous debate on how to quantify biological diversity. The question "what is diversity?" has surprising mathematical depth, and breadth too: this book involves parts of mathematics ranging from information theory, functional equations and probability theory to category theory, geometric measure theory and number theory. It applies the power of the axiomatic method to a biological problem of pressing concern, but the new concepts and theorems are also motivated from a purely mathematical perspective. The main narrative thread requires no more than an undergraduate course in analysis. No familiarity with entropy or diversity is assumed.
|
2005.11201
|
Jos\'e-Miguel Ponciano PhD
|
Jos\'e Miguel Ponciano and Juan Adolfo Ponciano and Juan Pablo G\'omez
and Robert D. Holt and Jason K. Blackburn
|
Poverty levels, societal and individual heterogeneities explain the
SARS-CoV-2 pandemic growth in Latin America
|
22 pages, 5 figures, Supplementary Material not uploaded
| null | null | null |
q-bio.PE physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Latin America is experiencing severe impacts of the SARS-CoV-2 pandemic, but
poverty and weak public health institutions hamper gathering the kind of
refined data needed to inform classical SEIR models of epidemics. We present an
alternative approach that draws on advances in statistical ecology and
conservation biology to enhance the value of sparse data in projecting and
ameliorating epidemics. Our approach, leading to what we call a Stochastic
Epidemic Gompertz model, with few parameters can flexibly incorporate
heterogeneity in transmission within populations and across time. We
demonstrate that poverty has a large impact on the course of the pandemic,
across fourteen Latin American countries, and show how our approach provides
flexible, time-varying projections of disease risk that can be used to refine
public health strategies.
|
[
{
"created": "Fri, 22 May 2020 14:23:42 GMT",
"version": "v1"
}
] |
2020-05-25
|
[
[
"Ponciano",
"José Miguel",
""
],
[
"Ponciano",
"Juan Adolfo",
""
],
[
"Gómez",
"Juan Pablo",
""
],
[
"Holt",
"Robert D.",
""
],
[
"Blackburn",
"Jason K.",
""
]
] |
Latin America is experiencing severe impacts of the SARS-CoV-2 pandemic, but poverty and weak public health institutions hamper gathering the kind of refined data needed to inform classical SEIR models of epidemics. We present an alternative approach that draws on advances in statistical ecology and conservation biology to enhance the value of sparse data in projecting and ameliorating epidemics. Our approach, leading to what we call a Stochastic Epidemic Gompertz model, with few parameters can flexibly incorporate heterogeneity in transmission within populations and across time. We demonstrate that poverty has a large impact on the course of the pandemic, across fourteen Latin American countries, and show how our approach provides flexible, time-varying projections of disease risk that can be used to refine public health strategies.
|
2010.15055
|
Sara Zein
|
Sara A. Zein, Marie-Claude Bordage, Ziad Francis, Giovanni Macetti,
Alessandro Genoni, Claude Dal Cappello, Wook-Geun Shin, Sebastien Incerti
|
Electron transport in DNA bases: An extension of the Geant4-DNA Monte
Carlo toolkit
|
23 pages, 12 figures, 1 table
| null |
10.1016/j.nimb.2020.11.021
| null |
q-bio.BM physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this work is to extend the Geant4-DNA Monte Carlo toolkit to
include electron interactions with the four DNA bases using a set of cross
sections recently implemented in Geant-DNA CPA100 models and available for
liquid water. Electron interaction cross sections for elastic scattering,
ionisation, and electronic excitation were calculated in the four DNA bases
adenine, thymine, guanine and cytosine. The electron energy range is extended
to include relativistic electrons. Elastic scattering cross sections were
calculated using the independent atom model with amplitude derived from ELSEPA
code. Relativistic Binary Encounter Bethe Vriens model was used to calculate
ionisation cross sections. The electronic excitation cross sections
calculations were based on the water cross sections following the same strategy
used in CPA100 code. These were implemented within the Geant4-DNA option6
physics constructor to extend its capability of tracking electrons in DNA
material in addition to liquid water. Since DNA nucleobases have different
molecular structure than water it is important to perform more accurate
simulations especially because DNA is considered the most radiosensitive
structure in cells. Differential and integrated cross sections calculations
were in good agreement with data from the literature for all DNA bases.
Stopping power, range and inelastic mean free path calculations in the four DNA
bases using this new extension of Geant4-DNA option6 are in good agreement with
calculations done by other studies, especially for high energy electrons. Some
deviations are shown at the low electron energy range, which could be
attributed to the different interaction models. Comparison with water
simulations shows obvious difference which emphasizes the need to include DNA
bases cross sections in track structure codes for better estimation of
radiation effects on biological material.
|
[
{
"created": "Wed, 21 Oct 2020 17:39:06 GMT",
"version": "v1"
}
] |
2021-02-24
|
[
[
"Zein",
"Sara A.",
""
],
[
"Bordage",
"Marie-Claude",
""
],
[
"Francis",
"Ziad",
""
],
[
"Macetti",
"Giovanni",
""
],
[
"Genoni",
"Alessandro",
""
],
[
"Cappello",
"Claude Dal",
""
],
[
"Shin",
"Wook-Geun",
""
],
[
"Incerti",
"Sebastien",
""
]
] |
The purpose of this work is to extend the Geant4-DNA Monte Carlo toolkit to include electron interactions with the four DNA bases using a set of cross sections recently implemented in Geant-DNA CPA100 models and available for liquid water. Electron interaction cross sections for elastic scattering, ionisation, and electronic excitation were calculated in the four DNA bases adenine, thymine, guanine and cytosine. The electron energy range is extended to include relativistic electrons. Elastic scattering cross sections were calculated using the independent atom model with amplitude derived from ELSEPA code. Relativistic Binary Encounter Bethe Vriens model was used to calculate ionisation cross sections. The electronic excitation cross sections calculations were based on the water cross sections following the same strategy used in CPA100 code. These were implemented within the Geant4-DNA option6 physics constructor to extend its capability of tracking electrons in DNA material in addition to liquid water. Since DNA nucleobases have different molecular structure than water it is important to perform more accurate simulations especially because DNA is considered the most radiosensitive structure in cells. Differential and integrated cross sections calculations were in good agreement with data from the literature for all DNA bases. Stopping power, range and inelastic mean free path calculations in the four DNA bases using this new extension of Geant4-DNA option6 are in good agreement with calculations done by other studies, especially for high energy electrons. Some deviations are shown at the low electron energy range, which could be attributed to the different interaction models. Comparison with water simulations shows obvious difference which emphasizes the need to include DNA bases cross sections in track structure codes for better estimation of radiation effects on biological material.
|
1210.0104
|
Andrew Mugler
|
Andrew Mugler, Pieter Rein ten Wolde
|
The macroscopic effects of microscopic heterogeneity
|
20 pages, 5 figures. To appear in Advances in Chemical Physics
|
Advances in Chemical Physics (2013) 153, 373-396
|
10.1002/9781118571767.ch5
| null |
q-bio.MN q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past decade, advances in super-resolution microscopy and
particle-based modeling have driven an intense interest in investigating
spatial heterogeneity at the level of single molecules in cells. Remarkably, it
is becoming clear that spatiotemporal correlations between just a few molecules
can have profound effects on the signaling behavior of the entire cell. While
such correlations are often explicitly imposed by molecular structures such as
rafts, clusters, or scaffolds, they also arise intrinsically, due strictly to
the small numbers of molecules involved, the finite speed of diffusion, and the
effects of macromolecular crowding. In this chapter we review examples of both
explicitly imposed and intrinsic correlations, focusing on the mechanisms by
which microscopic heterogeneity is amplified to macroscopic effect.
|
[
{
"created": "Sat, 29 Sep 2012 13:22:21 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Oct 2012 09:41:24 GMT",
"version": "v2"
}
] |
2013-11-19
|
[
[
"Mugler",
"Andrew",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] |
Over the past decade, advances in super-resolution microscopy and particle-based modeling have driven an intense interest in investigating spatial heterogeneity at the level of single molecules in cells. Remarkably, it is becoming clear that spatiotemporal correlations between just a few molecules can have profound effects on the signaling behavior of the entire cell. While such correlations are often explicitly imposed by molecular structures such as rafts, clusters, or scaffolds, they also arise intrinsically, due strictly to the small numbers of molecules involved, the finite speed of diffusion, and the effects of macromolecular crowding. In this chapter we review examples of both explicitly imposed and intrinsic correlations, focusing on the mechanisms by which microscopic heterogeneity is amplified to macroscopic effect.
|
2310.12005
|
Morten Gram Pedersen
|
Francesco Montefusco and Morten Gram Pedersen
|
Geometric slow-fast analysis of a hybrid pituitary cell model with
stochastic ion channel dynamics
|
15 pages, 8 figures
| null | null | null |
q-bio.QM math.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To obtain explicit understanding of the behavior of dynamical systems,
geometrical methods and slow-fast analysis have proved to be highly useful.
Such methods are standard for smooth dynamical systems, and increasingly used
for continuous, non-smooth dynamical systems. However, they are much less used
for random dynamical systems, in particular for hybrid models with discrete,
random dynamics. Indeed, the analysis of such systems has typically been done
by studying the corresponding deterministic system and considering how noise
perturbs the deterministic geometrical structures. Here we propose a
geometrical method that works directly with the hybrid system. We illustrate
our approach through an application to a hybrid pituitary cell model in which
the stochastic dynamics of very few active large-conductance potassium (BK)
channels is coupled to a deterministic model of the other ion channels and
calcium dynamics. To employ our geometric approach, we exploit the slow-fast
structure of the model. The random fast subsystem is analyzed by considering
discrete phase planes, corresponding to the discrete number of open BK
channels, and stochastic events correspond to jumps between these planes. The
evolution within each plane can be understood from nullclines and limit cycles,
and the overall dynamics, e.g., whether the model produces a spike or a burst,
is determined by the location at which the system jumps from one plane to
another. Our approach is generally applicable to other scenarios to study
discrete random dynamical systems defined by hybrid stochastic-deterministic
models.
|
[
{
"created": "Wed, 18 Oct 2023 14:39:34 GMT",
"version": "v1"
}
] |
2023-10-19
|
[
[
"Montefusco",
"Francesco",
""
],
[
"Pedersen",
"Morten Gram",
""
]
] |
To obtain explicit understanding of the behavior of dynamical systems, geometrical methods and slow-fast analysis have proved to be highly useful. Such methods are standard for smooth dynamical systems, and increasingly used for continuous, non-smooth dynamical systems. However, they are much less used for random dynamical systems, in particular for hybrid models with discrete, random dynamics. Indeed, the analysis of such systems has typically been done by studying the corresponding deterministic system and considering how noise perturbs the deterministic geometrical structures. Here we propose a geometrical method that works directly with the hybrid system. We illustrate our approach through an application to a hybrid pituitary cell model in which the stochastic dynamics of very few active large-conductance potassium (BK) channels is coupled to a deterministic model of the other ion channels and calcium dynamics. To employ our geometric approach, we exploit the slow-fast structure of the model. The random fast subsystem is analyzed by considering discrete phase planes, corresponding to the discrete number of open BK channels, and stochastic events correspond to jumps between these planes. The evolution within each plane can be understood from nullclines and limit cycles, and the overall dynamics, e.g., whether the model produces a spike or a burst, is determined by the location at which the system jumps from one plane to another. Our approach is generally applicable to other scenarios to study discrete random dynamical systems defined by hybrid stochastic-deterministic models.
|
q-bio/0411014
|
Arnaud Buhot
|
Avraham Halperin, Arnaud Buhot, and Ekaterina B. Zhulina
|
Hybridization Isotherms of DNA Microarrays and the Quantification of
Mutation Studies
|
To be published in Clinical Chemistry Nov. 2004. 15 pages and 8
figures
|
Clin. Chem. 50, 2254-2262, (2004).
| null | null |
q-bio.BM cond-mat.soft q-bio.GN
| null |
Background: Diagnostic DNA arrays for detection of point mutations as markers
for cancer usually function in the presence of a large excess of wild type DNA.
This excess can give rise to false positives due to competitive hybridization
of the wild type target at the mutation spot. The analysis of the DNA array
data is typically qualitative aiming to establish the presence or absence of a
particular point mutation. Our theoretical approach yields methods for
quantifying the analysis so as to obtain the ratio of concentrations of mutated
and wild type DNA. Method: The theory is formulated in terms of the
hybridization isotherms relating the hybridization fraction at the spot to the
composition of the sample solutions at thermodynamic equilibrium. It focuses on
samples containing an excess of single stranded DNA and on DNA arrays with low
surface density of probes. The hybridization equilibrium constants can be
obtained by the nearest neighbor method. Results: Two approaches allow us to
obtain quantitative results from the DNA array data. In one the signal of the
mutation spot is compared with that of the wild type spot. The implementation
requires knowledge of the saturation intensity of the two spots. The second
approach requires comparison of the intensity of the mutation spot at two
different temperatures. In this case knowledge of the saturation signal is not
always necessary. Conclusions: DNA arrays can be used to obtain quantitative
results on the concentration ratio of mutated DNA to wild type DNA in studies
of somatic point mutations.
|
[
{
"created": "Wed, 3 Nov 2004 14:19:34 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Halperin",
"Avraham",
""
],
[
"Buhot",
"Arnaud",
""
],
[
"Zhulina",
"Ekaterina B.",
""
]
] |
Background: Diagnostic DNA arrays for detection of point mutations as markers for cancer usually function in the presence of a large excess of wild type DNA. This excess can give rise to false positives due to competitive hybridization of the wild type target at the mutation spot. The analysis of the DNA array data is typically qualitative aiming to establish the presence or absence of a particular point mutation. Our theoretical approach yields methods for quantifying the analysis so as to obtain the ratio of concentrations of mutated and wild type DNA. Method: The theory is formulated in terms of the hybridization isotherms relating the hybridization fraction at the spot to the composition of the sample solutions at thermodynamic equilibrium. It focuses on samples containing an excess of single stranded DNA and on DNA arrays with low surface density of probes. The hybridization equilibrium constants can be obtained by the nearest neighbor method. Results: Two approaches allow us to obtain quantitative results from the DNA array data. In one the signal of the mutation spot is compared with that of the wild type spot. The implementation requires knowledge of the saturation intensity of the two spots. The second approach requires comparison of the intensity of the mutation spot at two different temperatures. In this case knowledge of the saturation signal is not always necessary. Conclusions: DNA arrays can be used to obtain quantitative results on the concentration ratio of mutated DNA to wild type DNA in studies of somatic point mutations.
|
2312.01272
|
Hongyan Du
|
Hongyan Du, Guo-Wei Wei, Tingjun Hou
|
Multiscale Topology in Interactomic Network: From Transcriptome to
Antiaddiction Drug Repurposing
| null | null | null | null |
q-bio.BM cs.LG q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The escalating drug addiction crisis in the United States underscores the
urgent need for innovative therapeutic strategies. This study embarked on an
innovative and rigorous strategy to unearth potential drug repurposing
candidates for opioid and cocaine addiction treatment, bridging the gap between
transcriptomic data analysis and drug discovery. We initiated our approach by
conducting differential gene expression analysis on addiction-related
transcriptomic data to identify key genes. We propose a novel topological
differentiation to identify key genes from a protein-protein interaction (PPI)
network derived from DEGs. This method utilizes persistent Laplacians to
accurately single out pivotal nodes within the network, conducting this
analysis in a multiscale manner to ensure high reliability. Through rigorous
literature validation, pathway analysis, and data-availability scrutiny, we
identified three pivotal molecular targets, mTOR, mGluR5, and NMDAR, for drug
repurposing from DrugBank. We crafted machine learning models employing two
natural language processing (NLP)-based embeddings and a traditional 2D
fingerprint, which demonstrated robust predictive ability in gauging binding
affinities of DrugBank compounds to selected targets. Furthermore, we
elucidated the interactions of promising drugs with the targets and evaluated
their drug-likeness. This study delineates a multi-faceted and comprehensive
analytical framework, amalgamating bioinformatics, topological data analysis
and machine learning, for drug repurposing in addiction treatment, setting the
stage for subsequent experimental validation. The versatility of the methods we
developed allows for applications across a range of diseases and transcriptomic
datasets.
|
[
{
"created": "Sun, 3 Dec 2023 04:01:38 GMT",
"version": "v1"
}
] |
2023-12-05
|
[
[
"Du",
"Hongyan",
""
],
[
"Wei",
"Guo-Wei",
""
],
[
"Hou",
"Tingjun",
""
]
] |
The escalating drug addiction crisis in the United States underscores the urgent need for innovative therapeutic strategies. This study embarked on an innovative and rigorous strategy to unearth potential drug repurposing candidates for opioid and cocaine addiction treatment, bridging the gap between transcriptomic data analysis and drug discovery. We initiated our approach by conducting differential gene expression analysis on addiction-related transcriptomic data to identify key genes. We propose a novel topological differentiation to identify key genes from a protein-protein interaction (PPI) network derived from DEGs. This method utilizes persistent Laplacians to accurately single out pivotal nodes within the network, conducting this analysis in a multiscale manner to ensure high reliability. Through rigorous literature validation, pathway analysis, and data-availability scrutiny, we identified three pivotal molecular targets, mTOR, mGluR5, and NMDAR, for drug repurposing from DrugBank. We crafted machine learning models employing two natural language processing (NLP)-based embeddings and a traditional 2D fingerprint, which demonstrated robust predictive ability in gauging binding affinities of DrugBank compounds to selected targets. Furthermore, we elucidated the interactions of promising drugs with the targets and evaluated their drug-likeness. This study delineates a multi-faceted and comprehensive analytical framework, amalgamating bioinformatics, topological data analysis and machine learning, for drug repurposing in addiction treatment, setting the stage for subsequent experimental validation. The versatility of the methods we developed allows for applications across a range of diseases and transcriptomic datasets.
|
2308.12354
|
Nikolai Schapin
|
Nikolai Schapin, Maciej Majewski, Alejandro Varela, Carlos Arroniz,
Gianni De Fabritiis
|
Machine Learning Small Molecule Properties in Drug Discovery
|
46 pages, 1 figure
| null | null | null |
q-bio.BM cs.LG q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) is a promising approach for predicting small molecule
properties in drug discovery. Here, we provide a comprehensive overview of
various ML methods introduced for this purpose in recent years. We review a
wide range of properties, including binding affinities, solubility, and ADMET
(Absorption, Distribution, Metabolism, Excretion, and Toxicity). We discuss
existing popular datasets and molecular descriptors and embeddings, such as
chemical fingerprints and graph-based neural networks. We highlight also
challenges of predicting and optimizing multiple properties during hit-to-lead
and lead optimization stages of drug discovery and explore briefly possible
multi-objective optimization techniques that can be used to balance diverse
properties while optimizing lead candidates. Finally, techniques to provide an
understanding of model predictions, especially for critical decision-making in
drug discovery are assessed. Overall, this review provides insights into the
landscape of ML models for small molecule property predictions in drug
discovery. So far, there are multiple diverse approaches, but their
performances are often comparable. Neural networks, while more flexible, do not
always outperform simpler models. This shows that the availability of
high-quality training data remains crucial for training accurate models and
there is a need for standardized benchmarks, additional performance metrics,
and best practices to enable richer comparisons between the different
techniques and models that can shed a better light on the differences between
the many techniques.
|
[
{
"created": "Wed, 2 Aug 2023 22:18:41 GMT",
"version": "v1"
}
] |
2023-08-25
|
[
[
"Schapin",
"Nikolai",
""
],
[
"Majewski",
"Maciej",
""
],
[
"Varela",
"Alejandro",
""
],
[
"Arroniz",
"Carlos",
""
],
[
"De Fabritiis",
"Gianni",
""
]
] |
Machine learning (ML) is a promising approach for predicting small molecule properties in drug discovery. Here, we provide a comprehensive overview of various ML methods introduced for this purpose in recent years. We review a wide range of properties, including binding affinities, solubility, and ADMET (Absorption, Distribution, Metabolism, Excretion, and Toxicity). We discuss existing popular datasets and molecular descriptors and embeddings, such as chemical fingerprints and graph-based neural networks. We highlight also challenges of predicting and optimizing multiple properties during hit-to-lead and lead optimization stages of drug discovery and explore briefly possible multi-objective optimization techniques that can be used to balance diverse properties while optimizing lead candidates. Finally, techniques to provide an understanding of model predictions, especially for critical decision-making in drug discovery are assessed. Overall, this review provides insights into the landscape of ML models for small molecule property predictions in drug discovery. So far, there are multiple diverse approaches, but their performances are often comparable. Neural networks, while more flexible, do not always outperform simpler models. This shows that the availability of high-quality training data remains crucial for training accurate models and there is a need for standardized benchmarks, additional performance metrics, and best practices to enable richer comparisons between the different techniques and models that can shed a better light on the differences between the many techniques.
|
q-bio/0512005
|
Wilfred Ndifon
|
W. Ndifon and A. Nkwanta
|
An RNA foldability metric; implications for the design of rapidly
foldable RNA sequences
|
To appear in Biophysical Chemistry
| null |
10.1016/j.bpc.2005.11.012
| null |
q-bio.BM physics.bio-ph
| null |
Evidence is presented suggesting, for the first time, that the protein
foldability metric sigma=(T_theta - T_f)/T_theta, where T_theta and T_f are,
respectively, the collapse and folding transition temperatures, could be used
also to measure the foldability of RNA sequences. The importance of sigma is
discussed in the context of the in silico design of rapidly foldable RNA
sequences.
|
[
{
"created": "Fri, 2 Dec 2005 00:54:57 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Ndifon",
"W.",
""
],
[
"Nkwanta",
"A.",
""
]
] |
Evidence is presented suggesting, for the first time, that the protein foldability metric sigma=(T_theta - T_f)/T_theta, where T_theta and T_f are, respectively, the collapse and folding transition temperatures, could be used also to measure the foldability of RNA sequences. The importance of sigma is discussed in the context of the in silico design of rapidly foldable RNA sequences.
|
1902.05235
|
Shunfu Mao
|
Shunfu Mao and Yihan Jiang and Edwin Basil Mathew and Sreeram Kannan
|
BOAssembler: a Bayesian Optimization Framework to Improve RNA-Seq
Assembly Performance
| null | null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High throughput sequencing of RNA (RNA-Seq) can provide us with millions of
short fragments of RNA transcripts from a sample. How to better recover the
original RNA transcripts from those fragments (RNA-Seq assembly) is still a
difficult task. For example, RNA-Seq assembly tools typically require
hyper-parameter tuning to achieve good performance for particular datasets.
This kind of tuning is usually unintuitive and time-consuming. Consequently,
users often resort to default parameters, which do not guarantee consistent
good performance for various datasets.
Here we propose BOAssembler (https://github.com/olivomao/boassembler), a
framework that enables end-to-end automatic tuning of RNA-Seq assemblers, based
on Bayesian Optimization principles. Experiments show this data-driven approach
is effective to improve the overall assembly performance. The approach would be
helpful for downstream (e.g. gene, protein, cell) analysis, and more broadly,
for future bioinformatics benchmark studies.
|
[
{
"created": "Thu, 14 Feb 2019 06:18:34 GMT",
"version": "v1"
}
] |
2019-02-15
|
[
[
"Mao",
"Shunfu",
""
],
[
"Jiang",
"Yihan",
""
],
[
"Mathew",
"Edwin Basil",
""
],
[
"Kannan",
"Sreeram",
""
]
] |
High throughput sequencing of RNA (RNA-Seq) can provide us with millions of short fragments of RNA transcripts from a sample. How to better recover the original RNA transcripts from those fragments (RNA-Seq assembly) is still a difficult task. For example, RNA-Seq assembly tools typically require hyper-parameter tuning to achieve good performance for particular datasets. This kind of tuning is usually unintuitive and time-consuming. Consequently, users often resort to default parameters, which do not guarantee consistent good performance for various datasets. Here we propose BOAssembler (https://github.com/olivomao/boassembler), a framework that enables end-to-end automatic tuning of RNA-Seq assemblers, based on Bayesian Optimization principles. Experiments show this data-driven approach is effective to improve the overall assembly performance. The approach would be helpful for downstream (e.g. gene, protein, cell) analysis, and more broadly, for future bioinformatics benchmark studies.
|
1206.1401
|
Jesus Fernandez
|
Jes\'us Fern\'andez-S\'anchez, Jeremy G. Sumner, Peter D. Jarvis, and
Michael D. Woodhams
|
Lie Markov models with purine/pyrimidine symmetry
|
32 pages
| null | null | null |
q-bio.PE math.GR math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continuous-time Markov chains are a standard tool in phylogenetic inference.
If homogeneity is assumed, the chain is formulated by specifying
time-independent rates of substitutions between states in the chain. In
applications, there are usually extra constraints on the rates, depending on
the situation. If a model is formulated in this way, it is possible to
generalise it and allow for an inhomogeneous process, with time-dependent rates
satisfying the same constraints. It is then useful to require that there exists
a homogeneous average of this inhomogeneous process within the same model. This
leads to the definition of "Lie Markov models", which are precisely the class
of models where such an average exists. These models form Lie algebras and
hence concepts from Lie group theory are central to their derivation. In this
paper, we concentrate on applications to phylogenetics and nucleotide
evolution, and derive the complete hierarchy of Lie Markov models that respect
the grouping of nucleotides into purines and pyrimidines -- that is, models
with purine/pyrimidine symmetry. We also discuss how to handle the subtleties
of applying Lie group methods, most naturally defined over the complex field,
to the stochastic case of a Markov process, where parameter values are
restricted to be real and positive. In particular, we explore the geometric
embedding of the cone of stochastic rate matrices within the ambient space of
the associated complex Lie algebra.
The whole list of Lie Markov models with purine/pyrimidine symmetry is
available at http://www.pagines.ma1.upc.edu/~jfernandez/LMNR.pdf.
|
[
{
"created": "Thu, 7 Jun 2012 05:08:38 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jun 2013 15:14:45 GMT",
"version": "v2"
}
] |
2013-06-26
|
[
[
"Fernández-Sánchez",
"Jesús",
""
],
[
"Sumner",
"Jeremy G.",
""
],
[
"Jarvis",
"Peter D.",
""
],
[
"Woodhams",
"Michael D.",
""
]
] |
Continuous-time Markov chains are a standard tool in phylogenetic inference. If homogeneity is assumed, the chain is formulated by specifying time-independent rates of substitutions between states in the chain. In applications, there are usually extra constraints on the rates, depending on the situation. If a model is formulated in this way, it is possible to generalise it and allow for an inhomogeneous process, with time-dependent rates satisfying the same constraints. It is then useful to require that there exists a homogeneous average of this inhomogeneous process within the same model. This leads to the definition of "Lie Markov models", which are precisely the class of models where such an average exists. These models form Lie algebras and hence concepts from Lie group theory are central to their derivation. In this paper, we concentrate on applications to phylogenetics and nucleotide evolution, and derive the complete hierarchy of Lie Markov models that respect the grouping of nucleotides into purines and pyrimidines -- that is, models with purine/pyrimidine symmetry. We also discuss how to handle the subtleties of applying Lie group methods, most naturally defined over the complex field, to the stochastic case of a Markov process, where parameter values are restricted to be real and positive. In particular, we explore the geometric embedding of the cone of stochastic rate matrices within the ambient space of the associated complex Lie algebra. The whole list of Lie Markov models with purine/pyrimidine symmetry is available at http://www.pagines.ma1.upc.edu/~jfernandez/LMNR.pdf.
|
1605.08001
|
Hande Topa
|
Hande Topa and Antti Honkela
|
Analysis of differential splicing suggests different modes of short-term
splicing regulation
|
20 pages, 5 figures. To be published in the conference proceedings of
Intelligent Systems for Molecular Biology (ISMB) 2016
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Motivation: Alternative splicing is an important mechanism in which the
regions of pre-mRNAs are differentially joined in order to form different
transcript isoforms. Alternative splicing is involved in the regulation of
normal physiological functions but also linked to the development of diseases
such as cancer. We analyse differential expression and splicing using RNA-seq
time series in three different settings: overall gene expression levels,
absolute transcript expression levels and relative transcript expression
levels.
Results: Using estrogen receptor $\alpha$ signalling response as a model
system, our Gaussian process (GP)-based test identifies genes with differential
splicing and/or differentially expressed transcripts. We discover genes with
consistent changes in alternative splicing independent of changes in absolute
expression and genes where some transcripts change while others stay constant
in absolute level. The results suggest classes of genes with different modes of
alternative splicing regulation during the experiment.
Availability: R and Matlab codes implementing the method are available at
https://github.com/PROBIC/diffsplicing . An interactive browser for viewing all
model fits is available at http://users.ics.aalto.fi/hande/splicingGP/ .
|
[
{
"created": "Wed, 25 May 2016 18:43:54 GMT",
"version": "v1"
}
] |
2016-05-26
|
[
[
"Topa",
"Hande",
""
],
[
"Honkela",
"Antti",
""
]
] |
Motivation: Alternative splicing is an important mechanism in which the regions of pre-mRNAs are differentially joined in order to form different transcript isoforms. Alternative splicing is involved in the regulation of normal physiological functions but also linked to the development of diseases such as cancer. We analyse differential expression and splicing using RNA-seq time series in three different settings: overall gene expression levels, absolute transcript expression levels and relative transcript expression levels. Results: Using estrogen receptor $\alpha$ signalling response as a model system, our Gaussian process (GP)-based test identifies genes with differential splicing and/or differentially expressed transcripts. We discover genes with consistent changes in alternative splicing independent of changes in absolute expression and genes where some transcripts change while others stay constant in absolute level. The results suggest classes of genes with different modes of alternative splicing regulation during the experiment. Availability: R and Matlab codes implementing the method are available at https://github.com/PROBIC/diffsplicing . An interactive browser for viewing all model fits is available at http://users.ics.aalto.fi/hande/splicingGP/ .
|
2102.12860
|
Laura Wadkin PhD
|
L E Wadkin, S Orozco-Fuentes, I Neganova, M Lako, N G Parker, A
Shukurov
|
A mathematical modelling framework for the regulation of intra-cellular
OCT4 in human pluripotent stem cells
| null | null | null | null |
q-bio.SC physics.bio-ph q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pluripotent stem cells (hPSCs) have promising clinical applications in
regenerative medicine, drug-discovery and personalised medicine due to their
potential to differentiate into all cell types, a property know as
pluripotency. A deeper understanding of how pluripotency is regulated is
required to assist in controlling pluripotency and differentiation trajectories
experimentally. Mathematical modelling provides a non-invasive tool through
which to explore, characterise and replicate the regulation of pluripotency and
the consequences on cell fate. Here we use experimental data of the expression
of the pluripotency transcription factor OCT4 in a growing hPSC colony to
develop and evaluate mathematical models for temporal pluripotency regulation.
We consider fractional Brownian motion and the stochastic logistic equation and
explore the effects of both additive and multiplicative noise. We illustrate
the use of time-dependent carrying capacities and the introduction of Allee
effects to the stochastic logistic equation to describe cell differentiation.
This mathematical framework for describing intra-cellular OCT4 regulation can
be extended to other transcription factors and developed into sophisticated
predictive models.
|
[
{
"created": "Thu, 25 Feb 2021 13:58:12 GMT",
"version": "v1"
}
] |
2021-02-26
|
[
[
"Wadkin",
"L E",
""
],
[
"Orozco-Fuentes",
"S",
""
],
[
"Neganova",
"I",
""
],
[
"Lako",
"M",
""
],
[
"Parker",
"N G",
""
],
[
"Shukurov",
"A",
""
]
] |
Human pluripotent stem cells (hPSCs) have promising clinical applications in regenerative medicine, drug-discovery and personalised medicine due to their potential to differentiate into all cell types, a property know as pluripotency. A deeper understanding of how pluripotency is regulated is required to assist in controlling pluripotency and differentiation trajectories experimentally. Mathematical modelling provides a non-invasive tool through which to explore, characterise and replicate the regulation of pluripotency and the consequences on cell fate. Here we use experimental data of the expression of the pluripotency transcription factor OCT4 in a growing hPSC colony to develop and evaluate mathematical models for temporal pluripotency regulation. We consider fractional Brownian motion and the stochastic logistic equation and explore the effects of both additive and multiplicative noise. We illustrate the use of time-dependent carrying capacities and the introduction of Allee effects to the stochastic logistic equation to describe cell differentiation. This mathematical framework for describing intra-cellular OCT4 regulation can be extended to other transcription factors and developed into sophisticated predictive models.
|
1602.08802
|
Randal Olson
|
Randal S. Olson, Arend Hintze, Fred C. Dyer, Jason H. Moore, Christoph
Adami
|
Exploring the coevolution of predator and prey morphology and behavior
|
8 pages, 8 figures, submitted to Artificial Life 2016 conference
|
Proceedings Artificial Life 15 (C. Gershenson, T. Froese, J.M.
Sisqueiros, W. Aguilar, E.J. Izquierdo, H. Sayama, eds.) MIT Press
(Cambridge, MA, 2016), pp. 250-258
|
10.7551/978-0-262-33936-0-ch045
| null |
q-bio.PE cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A common idiom in biology education states, "Eyes in the front, the animal
hunts. Eyes on the side, the animal hides." In this paper, we explore one
possible explanation for why predators tend to have forward-facing, high-acuity
visual systems. We do so using an agent-based computational model of evolution,
where predators and prey interact and adapt their behavior and morphology to
one another over successive generations of evolution. In this model, we observe
a coevolutionary cycle between prey swarming behavior and the predator's visual
system, where the predator and prey continually adapt their visual system and
behavior, respectively, over evolutionary time in reaction to one another due
to the well-known "predator confusion effect." Furthermore, we provide evidence
that the predator visual system is what drives this coevolutionary cycle, and
suggest that the cycle could be closed if the predator evolves a hybrid visual
system capable of narrow, high-acuity vision for tracking prey as well as
broad, coarse vision for prey discovery. Thus, the conflicting demands imposed
on a predator's visual system by the predator confusion effect could have led
to the evolution of complex eyes in many predators.
|
[
{
"created": "Mon, 29 Feb 2016 02:48:19 GMT",
"version": "v1"
}
] |
2016-12-07
|
[
[
"Olson",
"Randal S.",
""
],
[
"Hintze",
"Arend",
""
],
[
"Dyer",
"Fred C.",
""
],
[
"Moore",
"Jason H.",
""
],
[
"Adami",
"Christoph",
""
]
] |
A common idiom in biology education states, "Eyes in the front, the animal hunts. Eyes on the side, the animal hides." In this paper, we explore one possible explanation for why predators tend to have forward-facing, high-acuity visual systems. We do so using an agent-based computational model of evolution, where predators and prey interact and adapt their behavior and morphology to one another over successive generations of evolution. In this model, we observe a coevolutionary cycle between prey swarming behavior and the predator's visual system, where the predator and prey continually adapt their visual system and behavior, respectively, over evolutionary time in reaction to one another due to the well-known "predator confusion effect." Furthermore, we provide evidence that the predator visual system is what drives this coevolutionary cycle, and suggest that the cycle could be closed if the predator evolves a hybrid visual system capable of narrow, high-acuity vision for tracking prey as well as broad, coarse vision for prey discovery. Thus, the conflicting demands imposed on a predator's visual system by the predator confusion effect could have led to the evolution of complex eyes in many predators.
|
2106.03663
|
Raffaella Mulas
|
Raffaella Mulas and Michael J. Casey
|
Estimating cellular redundancy in networks of genetic expression
| null |
Mathematical Biosciences (2021)
|
10.1016/j.mbs.2021.108713
| null |
q-bio.MN math.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networks of genetic expression can be modelled by hypergraphs with the
additional structure that real coefficients are given to each vertex-edge
incidence. The spectra, i.e. the multiset of the eigenvalues, of such
hypergraphs, are known to encode structural information of the data. We show
how these spectra can be used, in particular, in order to give an estimation of
cellular redundancy, a novel measure of gene expression heterogeneity, of the
network. We analyze some simulated and real data sets of gene expression for
illustrating the new method proposed here.
|
[
{
"created": "Mon, 7 Jun 2021 14:41:23 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Sep 2021 06:34:41 GMT",
"version": "v2"
}
] |
2021-09-24
|
[
[
"Mulas",
"Raffaella",
""
],
[
"Casey",
"Michael J.",
""
]
] |
Networks of genetic expression can be modelled by hypergraphs with the additional structure that real coefficients are given to each vertex-edge incidence. The spectra, i.e. the multiset of the eigenvalues, of such hypergraphs, are known to encode structural information of the data. We show how these spectra can be used, in particular, in order to give an estimation of cellular redundancy, a novel measure of gene expression heterogeneity, of the network. We analyze some simulated and real data sets of gene expression for illustrating the new method proposed here.
|
1109.3351
|
Ulrich Gerland
|
Nico Geisel and Ulrich Gerland
|
Physical limits on cooperative protein-DNA binding and the kinetics of
combinatorial transcription regulation
|
manuscript and supplementary material combined into a single
document; to be published in Biophysical Journal
| null |
10.1016/j.bpj.2011.08.041
| null |
q-bio.BM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Much of the complexity observed in gene regulation originates from
cooperative protein-DNA binding. While studies of the target search of proteins
for their specific binding sites on the DNA have revealed design principles for
the quantitative characteristics of protein-DNA interactions, no such
principles are known for the cooperative interactions between DNA-binding
proteins. We consider a simple theoretical model for two interacting
transcription factor (TF) species, searching for and binding to two adjacent
target sites hidden in the genomic background. We study the kinetic competition
of a dimer search pathway and a monomer search pathway, as well as the
steady-state regulation function mediated by the two TFs over a broad range of
TF-TF interaction strengths. Using a transcriptional AND-logic as exemplary
functional context, we identify the functionally desirable regime for the
interaction. We find that both weak and very strong TF-TF interactions are
favorable, albeit with different characteristics. However, there is also an
unfavorable regime of intermediate interactions where the genetic response is
prohibitively slow.
|
[
{
"created": "Thu, 15 Sep 2011 13:37:40 GMT",
"version": "v1"
}
] |
2015-05-30
|
[
[
"Geisel",
"Nico",
""
],
[
"Gerland",
"Ulrich",
""
]
] |
Much of the complexity observed in gene regulation originates from cooperative protein-DNA binding. While studies of the target search of proteins for their specific binding sites on the DNA have revealed design principles for the quantitative characteristics of protein-DNA interactions, no such principles are known for the cooperative interactions between DNA-binding proteins. We consider a simple theoretical model for two interacting transcription factor (TF) species, searching for and binding to two adjacent target sites hidden in the genomic background. We study the kinetic competition of a dimer search pathway and a monomer search pathway, as well as the steady-state regulation function mediated by the two TFs over a broad range of TF-TF interaction strengths. Using a transcriptional AND-logic as exemplary functional context, we identify the functionally desirable regime for the interaction. We find that both weak and very strong TF-TF interactions are favorable, albeit with different characteristics. However, there is also an unfavorable regime of intermediate interactions where the genetic response is prohibitively slow.
|
1604.04883
|
Steven Frank
|
Steven A. Frank
|
The invariances of power law size distributions
|
Added appendix discussing the lognormal distribution, updated to
match version 2 of published version at F1000Research
|
F1000Research 2016, 5:2074
|
10.12688/f1000research.9452.2
| null |
q-bio.PE math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
Size varies. Small things are typically more frequent than large things. The
logarithm of frequency often declines linearly with the logarithm of size. That
power law relation forms one of the common patterns of nature. Why does the
complexity of nature reduce to such a simple pattern? Why do things as
different as tree size and enzyme rate follow similarly simple patterns? Here I
analyze such patterns by their invariant properties. For example, a common
pattern should not change when adding a constant value to all observations.
That shift is essentially the renumbering of the points on a ruler without
changing the metric information provided by the ruler. A ruler is shift
invariant only when its scale is properly calibrated to the pattern being
measured. Stretch invariance corresponds to the conservation of the total
amount of something, such as the total biomass and consequently the average
size. Rotational invariance corresponds to pattern that does not depend on the
order in which underlying processes occur, for example, a scale that additively
combines the component processes leading to observed values. I use tree size as
an example to illustrate how the key invariances shape pattern. A simple
interpretation of common pattern follows. That simple interpretation connects
the normal distribution to a wide variety of other common patterns through the
transformations of scale set by the fundamental invariances.
|
[
{
"created": "Sun, 17 Apr 2016 15:23:53 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Aug 2016 20:23:55 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Nov 2016 18:34:25 GMT",
"version": "v3"
}
] |
2016-11-08
|
[
[
"Frank",
"Steven A.",
""
]
] |
Size varies. Small things are typically more frequent than large things. The logarithm of frequency often declines linearly with the logarithm of size. That power law relation forms one of the common patterns of nature. Why does the complexity of nature reduce to such a simple pattern? Why do things as different as tree size and enzyme rate follow similarly simple patterns? Here I analyze such patterns by their invariant properties. For example, a common pattern should not change when adding a constant value to all observations. That shift is essentially the renumbering of the points on a ruler without changing the metric information provided by the ruler. A ruler is shift invariant only when its scale is properly calibrated to the pattern being measured. Stretch invariance corresponds to the conservation of the total amount of something, such as the total biomass and consequently the average size. Rotational invariance corresponds to pattern that does not depend on the order in which underlying processes occur, for example, a scale that additively combines the component processes leading to observed values. I use tree size as an example to illustrate how the key invariances shape pattern. A simple interpretation of common pattern follows. That simple interpretation connects the normal distribution to a wide variety of other common patterns through the transformations of scale set by the fundamental invariances.
|
1805.04329
|
Matteo Cinelli
|
M. Cinelli, and I. Echegoyen, and M. Oliveira, and S. Orellana, and T.
Gili
|
Altered Modularity and Disproportional Integration in Functional
Networks are Markers of Abnormal Brain Organization in Schizophrenia
|
This work is the output of the Complexity72h workshop, held at IMT
School in Lucca, 7-11 May 2018. https://complexity72h.weebly.com/
| null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modularity plays an important role in brain networks' architecture and
influences its dynamics and the ability to integrate and segregate different
modules of cerebral regions. Alterations in community structure are associated
with several clinical disorders, specially schizophrenia, although its time
evolution is not clear yet. In the present work, we analyze fMRI functional
networks of $65$ healthy subjects (HC) and $44$ patients of schizophrenia (SZ),
$28$ of them in a chronic state (CR) of illness, and $16$ at early stage (ES).
We find clear differences in edges' weights distribution, networks density,
community structure consistency and robustness against edge removal. In
comparison to healthy subjects, we found that networks from SZ patients
exhibits wider weight distribution, larger overall connectivity, and are more
consistent in the community structure across subjects. We also showed that the
networks of SZ patients tend to be more robust to edge removal than healthy
subjects, while having lower network density. In the case of early stages
patients, we found that their networks exhibit topological features
consistently in between the ones obtained from the other two groups, resulting
in a tendency towards the chronic group state.
|
[
{
"created": "Fri, 11 May 2018 11:25:30 GMT",
"version": "v1"
}
] |
2018-05-14
|
[
[
"Cinelli",
"M.",
""
],
[
"Echegoyen",
"I.",
""
],
[
"Oliveira",
"M.",
""
],
[
"Orellana",
"S.",
""
],
[
"Gili",
"T.",
""
]
] |
Modularity plays an important role in brain networks' architecture and influences its dynamics and the ability to integrate and segregate different modules of cerebral regions. Alterations in community structure are associated with several clinical disorders, specially schizophrenia, although its time evolution is not clear yet. In the present work, we analyze fMRI functional networks of $65$ healthy subjects (HC) and $44$ patients of schizophrenia (SZ), $28$ of them in a chronic state (CR) of illness, and $16$ at early stage (ES). We find clear differences in edges' weights distribution, networks density, community structure consistency and robustness against edge removal. In comparison to healthy subjects, we found that networks from SZ patients exhibits wider weight distribution, larger overall connectivity, and are more consistent in the community structure across subjects. We also showed that the networks of SZ patients tend to be more robust to edge removal than healthy subjects, while having lower network density. In the case of early stages patients, we found that their networks exhibit topological features consistently in between the ones obtained from the other two groups, resulting in a tendency towards the chronic group state.
|
1809.04877
|
Fereshteh Lagzi
|
Fereshteh Lagzi, Fatihcan M. Atay, Stefan Rotter
|
Bifurcation analysis of the dynamics of interacting populations of
spiking networks
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze the collective dynamics of hierarchically structured networks of
densely connected spiking neurons. These networks of sub-networks may represent
interactions between cell assemblies or different nuclei in the brain. The
dynamical activity pattern that results from these interactions depends on the
strength of synaptic coupling between them. Importantly, the overall dynamics
of a brain region in the absence of external input, so called ongoing brain
activity, has been attributed to the dynamics of such interactions. In our
study, two different network scenarios are considered: a system with one
inhibitory and two excitatory sub-networks, and a network representation with
three inhibitory sub-networks. To study the effect of synaptic strength on the
global dynamics of the network, two parameters for relative couplings between
these sub-networks are considered. For each case, a co-dimension two
bifurcation analysis is performed and the results have been compared to
large-scale network simulations. Our analysis shows that Generalized
Lotka-Volterra (GLV) equations, well-known in predator-prey studies, yield a
meaningful population-level description for the collective behavior of spiking
neuronal interaction, which have a hierarchical structure. In particular, we
observed a striking equivalence between the bifurcation diagrams of spiking
neuronal networks and their corresponding GLV equations. This study gives new
insight on the behavior of neuronal assemblies, and can potentially suggest new
mechanisms for altering the dynamical patterns of spiking networks based on
changing the synaptic strength between some groups of neurons.
|
[
{
"created": "Thu, 13 Sep 2018 10:29:29 GMT",
"version": "v1"
}
] |
2018-09-14
|
[
[
"Lagzi",
"Fereshteh",
""
],
[
"Atay",
"Fatihcan M.",
""
],
[
"Rotter",
"Stefan",
""
]
] |
We analyze the collective dynamics of hierarchically structured networks of densely connected spiking neurons. These networks of sub-networks may represent interactions between cell assemblies or different nuclei in the brain. The dynamical activity pattern that results from these interactions depends on the strength of synaptic coupling between them. Importantly, the overall dynamics of a brain region in the absence of external input, so called ongoing brain activity, has been attributed to the dynamics of such interactions. In our study, two different network scenarios are considered: a system with one inhibitory and two excitatory sub-networks, and a network representation with three inhibitory sub-networks. To study the effect of synaptic strength on the global dynamics of the network, two parameters for relative couplings between these sub-networks are considered. For each case, a co-dimension two bifurcation analysis is performed and the results have been compared to large-scale network simulations. Our analysis shows that Generalized Lotka-Volterra (GLV) equations, well-known in predator-prey studies, yield a meaningful population-level description for the collective behavior of spiking neuronal interaction, which have a hierarchical structure. In particular, we observed a striking equivalence between the bifurcation diagrams of spiking neuronal networks and their corresponding GLV equations. This study gives new insight on the behavior of neuronal assemblies, and can potentially suggest new mechanisms for altering the dynamical patterns of spiking networks based on changing the synaptic strength between some groups of neurons.
|
2004.14482
|
Simon Wood
|
Simon N. Wood, Ernst C. Wit, Matteo Fasiolo and Peter J. Green
|
COVID-19 and the difficulty of inferring epidemiological parameters from
clinical data
|
Version accepted by the Lancet Infectious Diseases. See previous
version for less terse presentation
| null |
10.1016/S1473-3099(20)30437-0
| null |
q-bio.QM q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowing the infection fatality ratio (IFR) is of crucial importance for
evidence-based epidemic management: for immediate planning; for balancing the
life years saved against the life years lost due to the consequences of
management; and for evaluating the ethical issues associated with the tacit
willingness to pay substantially more for life years lost to the epidemic, than
for those to other diseases. Against this background Verity et al. (2020,
Lancet Infections Diseases) have rapidly assembled case data and used
statistical modelling to infer the IFR for COVID-19. We have attempted an
in-depth statistical review of their approach, to identify to what extent the
data are sufficiently informative about the IFR to play a greater role than the
modelling assumptions, and have tried to identify those assumptions that appear
to play a key role. Given the difficulties with other data sources, we provide
a crude alternative analysis based on the Diamond Princess Cruise ship data and
case data from China, and argue that, given the data problems, modelling of
clinical data to obtain the IFR can only be a stop-gap measure. What is needed
is near direct measurement of epidemic size by PCR and/or antibody testing of
random samples of the at risk population.
|
[
{
"created": "Tue, 28 Apr 2020 14:46:27 GMT",
"version": "v1"
},
{
"created": "Tue, 5 May 2020 12:43:30 GMT",
"version": "v2"
}
] |
2020-08-11
|
[
[
"Wood",
"Simon N.",
""
],
[
"Wit",
"Ernst C.",
""
],
[
"Fasiolo",
"Matteo",
""
],
[
"Green",
"Peter J.",
""
]
] |
Knowing the infection fatality ratio (IFR) is of crucial importance for evidence-based epidemic management: for immediate planning; for balancing the life years saved against the life years lost due to the consequences of management; and for evaluating the ethical issues associated with the tacit willingness to pay substantially more for life years lost to the epidemic, than for those to other diseases. Against this background Verity et al. (2020, Lancet Infections Diseases) have rapidly assembled case data and used statistical modelling to infer the IFR for COVID-19. We have attempted an in-depth statistical review of their approach, to identify to what extent the data are sufficiently informative about the IFR to play a greater role than the modelling assumptions, and have tried to identify those assumptions that appear to play a key role. Given the difficulties with other data sources, we provide a crude alternative analysis based on the Diamond Princess Cruise ship data and case data from China, and argue that, given the data problems, modelling of clinical data to obtain the IFR can only be a stop-gap measure. What is needed is near direct measurement of epidemic size by PCR and/or antibody testing of random samples of the at risk population.
|
2012.07691
|
Thiago B. Burghi
|
Thiago B. Burghi, Maarten Schoukens, Rodolphe Sepulchre
|
System identification of biophysical neuronal models
|
Slightly extended pre-print of the paper to be presented at the 59th
Conference on Decision and Control, held remotely between December 14-18,
2020
|
Proceedings of the 2020 59th IEEE Conference on Decision and
Control (CDC), Jeju, South Korea
|
10.1109/CDC42340.2020.9304363
| null |
q-bio.NC cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
After sixty years of quantitative biophysical modeling of neurons, the
identification of neuronal dynamics from input-output data remains a
challenging problem, primarily due to the inherently nonlinear nature of
excitable behaviors. By reformulating the problem in terms of the
identification of an operator with fading memory, we explore a simple approach
based on a parametrization given by a series interconnection of Generalized
Orthonormal Basis Functions (GOBFs) and static Artificial Neural Networks. We
show that GOBFs are particularly well-suited to tackle the identification
problem, and provide a heuristic for selecting GOBF poles which addresses the
ultra-sensitivity of neuronal behaviors. The method is illustrated on the
identification of a bursting model from the crab stomatogastric ganglion.
|
[
{
"created": "Mon, 14 Dec 2020 16:41:27 GMT",
"version": "v1"
}
] |
2024-02-29
|
[
[
"Burghi",
"Thiago B.",
""
],
[
"Schoukens",
"Maarten",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] |
After sixty years of quantitative biophysical modeling of neurons, the identification of neuronal dynamics from input-output data remains a challenging problem, primarily due to the inherently nonlinear nature of excitable behaviors. By reformulating the problem in terms of the identification of an operator with fading memory, we explore a simple approach based on a parametrization given by a series interconnection of Generalized Orthonormal Basis Functions (GOBFs) and static Artificial Neural Networks. We show that GOBFs are particularly well-suited to tackle the identification problem, and provide a heuristic for selecting GOBF poles which addresses the ultra-sensitivity of neuronal behaviors. The method is illustrated on the identification of a bursting model from the crab stomatogastric ganglion.
|
2109.04431
|
Alexander Stewart
|
Mark Jayson Cortez, Alan Eric Akil, Kre\v{s}imir Josi\'c and Alexander
J. Stewart
|
Incorporating Computational Challenges into a Multidisciplinary Course
on Stochastic Processes
| null | null | null | null |
q-bio.OT math.HO
|
http://creativecommons.org/licenses/by/4.0/
|
Quantitative methods and mathematical modeling are playing an increasingly
important role across disciplines. As a result, interdisciplinary mathematics
courses are increasing in popularity. However, teaching such courses at an
advanced level can be challenging. Students often arrive with different
mathematical backgrounds, different interests, and divergent reasons for
wanting to learn the material. Here we describe a course on stochastic
processes in biology, delivered between September and December 2020 to a mixed
audience of mathematicians and biologists. In addition to traditional lectures
and homeworks, we incorporated a series of weekly computational challenges into
the course. These challenges served to familiarize students with the main
modeling concepts, and provide them with an introduction on how to implement
them in a research-like setting. In order to account for the different academic
backgrounds of the students, they worked on the challenges in small groups, and
presented their results and code in a dedicated discussion class each week. We
discuss our experience designing and implementing an element of problem-based
learning in an applied mathematics course through computational challenges. We
also discuss feedback from students, and describe the content of the challenges
presented in the course. We provide all materials, along with example code for
a number of challenges.
|
[
{
"created": "Thu, 9 Sep 2021 17:23:21 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Cortez",
"Mark Jayson",
""
],
[
"Akil",
"Alan Eric",
""
],
[
"Josić",
"Krešimir",
""
],
[
"Stewart",
"Alexander J.",
""
]
] |
Quantitative methods and mathematical modeling are playing an increasingly important role across disciplines. As a result, interdisciplinary mathematics courses are increasing in popularity. However, teaching such courses at an advanced level can be challenging. Students often arrive with different mathematical backgrounds, different interests, and divergent reasons for wanting to learn the material. Here we describe a course on stochastic processes in biology, delivered between September and December 2020 to a mixed audience of mathematicians and biologists. In addition to traditional lectures and homeworks, we incorporated a series of weekly computational challenges into the course. These challenges served to familiarize students with the main modeling concepts, and provide them with an introduction on how to implement them in a research-like setting. In order to account for the different academic backgrounds of the students, they worked on the challenges in small groups, and presented their results and code in a dedicated discussion class each week. We discuss our experience designing and implementing an element of problem-based learning in an applied mathematics course through computational challenges. We also discuss feedback from students, and describe the content of the challenges presented in the course. We provide all materials, along with example code for a number of challenges.
|
2007.14236
|
Bruno Golosio
|
Bruno Golosio, Gianmarco Tiddia, Chiara De Luca, Elena Pastorelli,
Francesco Simula, Pier Stanislao Paolucci
|
Fast simulations of highly-connected spiking cortical models using GPUs
| null |
Front. Comput. Neurosci. 15:627620 2021
|
10.3389/fncom.2021.627620
| null |
q-bio.NC cs.DC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past decade there has been a growing interest in the development of
parallel hardware systems for simulating large-scale networks of spiking
neurons. Compared to other highly-parallel systems, GPU-accelerated solutions
have the advantage of a relatively low cost and a great versatility, thanks
also to the possibility of using the CUDA-C/C++ programming languages.
NeuronGPU is a GPU library for large-scale simulations of spiking neural
network models, written in the C++ and CUDA-C++ programming languages, based on
a novel spike-delivery algorithm. This library includes simple LIF
(leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx
(adaptive-exponential-integrate-and-fire) neuron models with current or
conductance based synapses, user definable models and different devices. The
numerical solution of the differential equations of the dynamics of the AdEx
models is performed through a parallel implementation, written in CUDA-C++, of
the fifth-order Runge-Kutta method with adaptive step-size control. In this
work we evaluate the performance of this library on the simulation of a
cortical microcircuit model, based on LIF neurons and current-based synapses,
and on a balanced network of excitatory and inhibitory neurons, using AdEx
neurons and conductance-based synapses. On these models, we will show that the
proposed library achieves state-of-the-art performance in terms of simulation
time per second of biological activity. In particular, using a single NVIDIA
GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model,
which includes about 77,000 neurons and $3 \cdot 10^8$ connections, can be
simulated at a speed very close to real time, while the simulation time of a
balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron
was about 70 s per second of biological activity.
|
[
{
"created": "Tue, 28 Jul 2020 13:58:50 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Jul 2020 15:43:02 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Nov 2020 17:13:33 GMT",
"version": "v3"
}
] |
2021-02-22
|
[
[
"Golosio",
"Bruno",
""
],
[
"Tiddia",
"Gianmarco",
""
],
[
"De Luca",
"Chiara",
""
],
[
"Pastorelli",
"Elena",
""
],
[
"Simula",
"Francesco",
""
],
[
"Paolucci",
"Pier Stanislao",
""
]
] |
Over the past decade there has been a growing interest in the development of parallel hardware systems for simulating large-scale networks of spiking neurons. Compared to other highly-parallel systems, GPU-accelerated solutions have the advantage of a relatively low cost and a great versatility, thanks also to the possibility of using the CUDA-C/C++ programming languages. NeuronGPU is a GPU library for large-scale simulations of spiking neural network models, written in the C++ and CUDA-C++ programming languages, based on a novel spike-delivery algorithm. This library includes simple LIF (leaky-integrate-and-fire) neuron models as well as several multisynapse AdEx (adaptive-exponential-integrate-and-fire) neuron models with current or conductance based synapses, user definable models and different devices. The numerical solution of the differential equations of the dynamics of the AdEx models is performed through a parallel implementation, written in CUDA-C++, of the fifth-order Runge-Kutta method with adaptive step-size control. In this work we evaluate the performance of this library on the simulation of a cortical microcircuit model, based on LIF neurons and current-based synapses, and on a balanced network of excitatory and inhibitory neurons, using AdEx neurons and conductance-based synapses. On these models, we will show that the proposed library achieves state-of-the-art performance in terms of simulation time per second of biological activity. In particular, using a single NVIDIA GeForce RTX 2080 Ti GPU board, the full-scale cortical-microcircuit model, which includes about 77,000 neurons and $3 \cdot 10^8$ connections, can be simulated at a speed very close to real time, while the simulation time of a balanced network of 1,000,000 AdEx neurons with 1,000 connections per neuron was about 70 s per second of biological activity.
|
1203.0873
|
Jing Kang Dr.
|
Jing Kang, Jianhua Wu, Anteo Smerieri and Jianfeng Feng
|
Weber's law implies neural discharge more regular than a Poisson process
|
13 pages, 8 figures; European Journal of Neuroscience 2010
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weber's law is one of the basic laws in psychophysics, but the link between
this psychophysical behavior and the neuronal response has not yet been
established. In this paper, we carried out an analysis on the spike train
statistics when Weber's law holds, and found that the efferent spike train of a
single neuron is less variable than a Poisson process. For population neurons,
Weber's law is satisfied only when the population size is small (< 10 neurons).
However, if the population neurons share a weak correlation in their discharges
and individual neuronal spike train is more regular than a Poisson process,
Weber's law is true without any restriction on the population size. Biased
competition attractor network also demonstrates that the coefficient of
variation of interspike interval in the winning pool should be less than one
for the validity of Weber's law. Our work links Weber's law with neural firing
property quantitatively, shedding light on the relation between psychophysical
behavior and neuronal responses.
|
[
{
"created": "Mon, 5 Mar 2012 11:51:23 GMT",
"version": "v1"
}
] |
2012-03-06
|
[
[
"Kang",
"Jing",
""
],
[
"Wu",
"Jianhua",
""
],
[
"Smerieri",
"Anteo",
""
],
[
"Feng",
"Jianfeng",
""
]
] |
Weber's law is one of the basic laws in psychophysics, but the link between this psychophysical behavior and the neuronal response has not yet been established. In this paper, we carried out an analysis on the spike train statistics when Weber's law holds, and found that the efferent spike train of a single neuron is less variable than a Poisson process. For population neurons, Weber's law is satisfied only when the population size is small (< 10 neurons). However, if the population neurons share a weak correlation in their discharges and individual neuronal spike train is more regular than a Poisson process, Weber's law is true without any restriction on the population size. Biased competition attractor network also demonstrates that the coefficient of variation of interspike interval in the winning pool should be less than one for the validity of Weber's law. Our work links Weber's law with neural firing property quantitatively, shedding light on the relation between psychophysical behavior and neuronal responses.
|
2003.14333
|
Bing He
|
Bing He, Lana Garmire
|
Prediction of repurposed drugs for treating lung injury in COVID-19
| null | null | null | null |
q-bio.TO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Coronavirus disease (COVID-19) is an infectious disease discovered in 2019
and currently in outbreak across the world. Lung injury with severe respiratory
failure is the leading cause of death in COVID-19, brought by severe acute
respiratory syndrome coronavirus 2 (SARS-CoV-2). However, there still lacks
efficient treatment for COVID-19 induced lung injury and acute respiratory
failure. Inhibition of Angiotensin-converting enzyme 2 (ACE2) caused by spike
protein of SARS-CoV-2 is the most plausible mechanism of lung injury in
COVID-19. We propose two candidate drugs, COL-3 (a chemically modified
tetracycline) and CGP-60474 (a cyclin-dependent kinase inhibitor), for treating
lung injuries in COVID-19, based on their abilities to reverse the gene
expression patterns in HCC515 cells treated with ACE2 inhibitor and in human
COVID-19 patient lung tissues. Further bioinformatics analysis shows that
twelve significantly enriched pathways (P-value <0.05) overlap between HCC515
cells treated with ACE2 inhibitor and human COVID-19 patient lung tissues,
including signaling pathways known to be associated with lung injury such as
TNF signaling, MAPK signaling and Chemokine signaling pathways. All these
twelve pathways are targeted in COL-3 treated HCC515 cells, in which genes such
as RHOA, RAC2, FAS, CDC42 have reduced expression. CGP-60474 shares eleven of
twelve pathways with COL-3 with common target genes such as RHOA. It also
uniquely targets genes related to lung injury, such as CALR and MMP14. In
summary, this study shows that ACE2 inhibition is likely part of the mechanisms
leading to lung injury in COVID-19, and that compounds such as COL-3 and
CGP-60474 have the potential as repurposed drugs for its treatment.
|
[
{
"created": "Mon, 30 Mar 2020 03:36:44 GMT",
"version": "v1"
},
{
"created": "Sun, 3 May 2020 05:34:24 GMT",
"version": "v2"
}
] |
2020-05-05
|
[
[
"He",
"Bing",
""
],
[
"Garmire",
"Lana",
""
]
] |
Coronavirus disease (COVID-19) is an infectious disease discovered in 2019 and currently in outbreak across the world. Lung injury with severe respiratory failure is the leading cause of death in COVID-19, brought by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). However, there still lacks efficient treatment for COVID-19 induced lung injury and acute respiratory failure. Inhibition of Angiotensin-converting enzyme 2 (ACE2) caused by spike protein of SARS-CoV-2 is the most plausible mechanism of lung injury in COVID-19. We propose two candidate drugs, COL-3 (a chemically modified tetracycline) and CGP-60474 (a cyclin-dependent kinase inhibitor), for treating lung injuries in COVID-19, based on their abilities to reverse the gene expression patterns in HCC515 cells treated with ACE2 inhibitor and in human COVID-19 patient lung tissues. Further bioinformatics analysis shows that twelve significantly enriched pathways (P-value <0.05) overlap between HCC515 cells treated with ACE2 inhibitor and human COVID-19 patient lung tissues, including signaling pathways known to be associated with lung injury such as TNF signaling, MAPK signaling and Chemokine signaling pathways. All these twelve pathways are targeted in COL-3 treated HCC515 cells, in which genes such as RHOA, RAC2, FAS, CDC42 have reduced expression. CGP-60474 shares eleven of twelve pathways with COL-3 with common target genes such as RHOA. It also uniquely targets genes related to lung injury, such as CALR and MMP14. In summary, this study shows that ACE2 inhibition is likely part of the mechanisms leading to lung injury in COVID-19, and that compounds such as COL-3 and CGP-60474 have the potential as repurposed drugs for its treatment.
|
1311.1651
|
Rafael Guariento Dettogni
|
Rafael D Guariento
|
Population Density as an Equalizing (or Misbalancing) Mechanism for
Species Coexistence
|
26 pages, 4 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the general acknowledgment of the role of niche and fitness
differences in community dynamics, species abundance has been coined as a
relevant feature not just regarding niche perspectives, but also according to
neutral perspectives. Here we explore a minimum probabilistic stochastic model
to evaluate the role of populations relative initial and total abundances on
species chances to outcompete each other and their persistence in time (i.e.,
unstable coexistence). The present results show that taking into account the
stochasticity in demographic properties and conservation of individuals in
closed communities (zero-sum assumption), population initial abundance can
strongly influence species chances to outcompete each other, and also influence
the period of coexistence of these species in a particular time interval.
Systems carrying capacity can have an important role in species coexistence by
exacerbating fitness inequalities and affecting the size of the period of
coexistence. This study also shows that populations initial abundances can act
as an equalizing mechanism, reducing fitness inequalities, which can favor
species coexistence and even make less fitted species to be more likely to
outcompete better fitted species, and thus to dominate ecological communities
in the absence of niche mechanisms.
|
[
{
"created": "Thu, 7 Nov 2013 12:11:03 GMT",
"version": "v1"
}
] |
2013-11-08
|
[
[
"Guariento",
"Rafael D",
""
]
] |
Despite the general acknowledgment of the role of niche and fitness differences in community dynamics, species abundance has been coined as a relevant feature not just regarding niche perspectives, but also according to neutral perspectives. Here we explore a minimum probabilistic stochastic model to evaluate the role of populations relative initial and total abundances on species chances to outcompete each other and their persistence in time (i.e., unstable coexistence). The present results show that taking into account the stochasticity in demographic properties and conservation of individuals in closed communities (zero-sum assumption), population initial abundance can strongly influence species chances to outcompete each other, and also influence the period of coexistence of these species in a particular time interval. Systems carrying capacity can have an important role in species coexistence by exacerbating fitness inequalities and affecting the size of the period of coexistence. This study also shows that populations initial abundances can act as an equalizing mechanism, reducing fitness inequalities, which can favor species coexistence and even make less fitted species to be more likely to outcompete better fitted species, and thus to dominate ecological communities in the absence of niche mechanisms.
|
1201.3216
|
Giuseppe Jurman
|
A. Barla and S. Riccadonna and S. Masecchia and M. Squillario and M.
Filosi and G. Jurman and C. Furlanello
|
Evaluating sources of variability in pathway profiling
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A bioinformatics platform is introduced aimed at identifying models of
disease-specific pathways, as well as a set of network measures that can
quantify changes in terms of global structure or single link disruptions.The
approach integrates a network comparison framework with machine learning
molecular profiling. <CA>The platform includes different tools combined in one
Open Source pipeline, supporting reproducibility of the analysis. We describe
here the computational pipeline and explore the main sources of variability
that can affect the results, namely the classifier, the feature
ranking/selection algorithm, the enrichment procedure, the inference method and
the networks comparison function.
The proposed pipeline is tested on a microarray dataset of late stage
Parkinsons' Disease patients together with healty controls. Choosing different
machine learning approaches we get low pathway profiling overlapping in terms
of common enriched elements. Nevertheless, they identify different but equally
meaningful biological aspects of the same process, suggesting the integration
of information across different methods as the best overall strategy.
All the elements of the proposed pipeline are available as Open Source
Software: availability details are provided in the main text.
|
[
{
"created": "Mon, 16 Jan 2012 11:19:45 GMT",
"version": "v1"
}
] |
2012-01-17
|
[
[
"Barla",
"A.",
""
],
[
"Riccadonna",
"S.",
""
],
[
"Masecchia",
"S.",
""
],
[
"Squillario",
"M.",
""
],
[
"Filosi",
"M.",
""
],
[
"Jurman",
"G.",
""
],
[
"Furlanello",
"C.",
""
]
] |
A bioinformatics platform is introduced aimed at identifying models of disease-specific pathways, as well as a set of network measures that can quantify changes in terms of global structure or single link disruptions.The approach integrates a network comparison framework with machine learning molecular profiling. <CA>The platform includes different tools combined in one Open Source pipeline, supporting reproducibility of the analysis. We describe here the computational pipeline and explore the main sources of variability that can affect the results, namely the classifier, the feature ranking/selection algorithm, the enrichment procedure, the inference method and the networks comparison function. The proposed pipeline is tested on a microarray dataset of late stage Parkinsons' Disease patients together with healty controls. Choosing different machine learning approaches we get low pathway profiling overlapping in terms of common enriched elements. Nevertheless, they identify different but equally meaningful biological aspects of the same process, suggesting the integration of information across different methods as the best overall strategy. All the elements of the proposed pipeline are available as Open Source Software: availability details are provided in the main text.
|
1112.5704
|
Tetsuhiro Hatakeyama
|
Tetsuhiro S. Hatakeyama, Kunihiko Kaneko
|
Generic temperature compensation of biological clocks by autonomous
regulation of catalyst concentration
|
21 pages, 12 figures
| null |
10.1073/pnas.1120711109
| null |
q-bio.MN physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Circadian clocks ubiquitous in life forms ranging bacteria to multi-cellular
organisms, often exhibit intrinsic temperature compensation; the period of
circadian oscillators is maintained constant over a range of physiological
temperatures, despite the expected Arrhenius form for the reaction coefficient.
Observations have shown that the amplitude of the oscillation depends on the
temperature but the period does not---this suggests that although not every
reaction step is temperature independent, the total system comprising several
reactions still exhibits compensation. We present a general mechanism for such
temperature compensation. Consider a system with multiple activation energy
barriers for reactions, with a common enzyme shared across several reaction
steps with a higher activation energy. These reaction steps rate-limit the
cycle if the temperature is not high. If the total abundance of the enzyme is
limited, the amount of free enzyme available to catalyze a specific reaction
decreases as more substrates bind to common enzyme. We show that this change in
free enzyme abundance compensate for the Arrhenius-type temperature dependence
of the reaction coefficient. Taking the example of circadian clocks with
cyanobacterial proteins KaiABC consisting of several phosphorylation sites, we
show that this temperature compensation mechanisms is indeed valid.
Specifically, if the activation energy for phosphorylation is larger than that
for dephosphorylation, competition for KaiA shared among the phosphorylation
reactions leads to temperature compensation. Moreover, taking a simpler model,
we demonstrate the generality of the proposed compensation mechanism,
suggesting relevance not only to circadian clocks but to other (bio)chemical
oscillators as well.
|
[
{
"created": "Sat, 24 Dec 2011 07:33:50 GMT",
"version": "v1"
}
] |
2014-07-21
|
[
[
"Hatakeyama",
"Tetsuhiro S.",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] |
Circadian clocks ubiquitous in life forms ranging bacteria to multi-cellular organisms, often exhibit intrinsic temperature compensation; the period of circadian oscillators is maintained constant over a range of physiological temperatures, despite the expected Arrhenius form for the reaction coefficient. Observations have shown that the amplitude of the oscillation depends on the temperature but the period does not---this suggests that although not every reaction step is temperature independent, the total system comprising several reactions still exhibits compensation. We present a general mechanism for such temperature compensation. Consider a system with multiple activation energy barriers for reactions, with a common enzyme shared across several reaction steps with a higher activation energy. These reaction steps rate-limit the cycle if the temperature is not high. If the total abundance of the enzyme is limited, the amount of free enzyme available to catalyze a specific reaction decreases as more substrates bind to common enzyme. We show that this change in free enzyme abundance compensate for the Arrhenius-type temperature dependence of the reaction coefficient. Taking the example of circadian clocks with cyanobacterial proteins KaiABC consisting of several phosphorylation sites, we show that this temperature compensation mechanisms is indeed valid. Specifically, if the activation energy for phosphorylation is larger than that for dephosphorylation, competition for KaiA shared among the phosphorylation reactions leads to temperature compensation. Moreover, taking a simpler model, we demonstrate the generality of the proposed compensation mechanism, suggesting relevance not only to circadian clocks but to other (bio)chemical oscillators as well.
|
1812.10841
|
Daisuke Kihara
|
Atilla Sit and Daisuke Kihara
|
Three-Dimensional Krawtchouk Descriptors for Protein Local Surface Shape
Comparison
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct comparison of three-dimensional (3D) objects is computationally
expensive due to the need for translation, rotation, and scaling of the objects
to evaluate their similarity. In applications of 3D object comparison, often
identifying specific local regions of objects is of particular interest. We
have recently developed a set of 2D moment invariants based on discrete
orthogonal Krawtchouk polynomials for comparison of local image patches. In
this work, we extend them to 3D and construct 3D Krawtchouk descriptors (3DKD)
that are invariant under translation, rotation, and scaling. The new
descriptors have the ability to extract local features of a 3D surface from any
region-of-interest. This property enables comparison of two arbitrary local
surface regions from different 3D objects. We present the new formulation of
3DKD and apply it to the local shape comparison of protein surfaces in order to
predict ligand molecules that bind to query proteins.
|
[
{
"created": "Thu, 27 Dec 2018 22:41:14 GMT",
"version": "v1"
}
] |
2018-12-31
|
[
[
"Sit",
"Atilla",
""
],
[
"Kihara",
"Daisuke",
""
]
] |
Direct comparison of three-dimensional (3D) objects is computationally expensive due to the need for translation, rotation, and scaling of the objects to evaluate their similarity. In applications of 3D object comparison, often identifying specific local regions of objects is of particular interest. We have recently developed a set of 2D moment invariants based on discrete orthogonal Krawtchouk polynomials for comparison of local image patches. In this work, we extend them to 3D and construct 3D Krawtchouk descriptors (3DKD) that are invariant under translation, rotation, and scaling. The new descriptors have the ability to extract local features of a 3D surface from any region-of-interest. This property enables comparison of two arbitrary local surface regions from different 3D objects. We present the new formulation of 3DKD and apply it to the local shape comparison of protein surfaces in order to predict ligand molecules that bind to query proteins.
|
1710.05639
|
Xiang-Yi Li
|
Xiaoquan Yu, Xiang-Yi Li
|
Applications of WKB and Fokker-Planck methods in analyzing population
extinction driven by weak demographic fluctuations
| null |
Yu, X. & Li, XY. Bull Math Biol (2018).
https://doi.org/10.1007/s11538-018-0483-6
|
10.1007/s11538-018-0483-6
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In large but finite populations, weak demographic stochasticity due to random
birth and death events can lead to population extinction. The process is
analogous to the escaping problem of trapped particles under random forces.
Methods widely used in studying such physical systems, for instance,
Wentzel-Kramers-Brillouin (WKB) and Fokker-Planck methods, can be applied to
solve similar biological problems. In this article, we comparatively analyse
applications of WKB and Fokker-Planck methods to some typical stochastic
population dynamical models, including the logistic growth, endemic SIR,
predator-prey, and competitive Lotka-Volterra models. The mean extinction time
strongly depends on the nature of the corresponding deterministic fixed
point(s). For different types of fixed points, the extinction can be driven
either by rare events or typical Gaussian fluctuations. In the former case, the
large deviation function that governs the distribution of rare events can be
well-approximated by the WKB method in the weak noise limit. In the later case,
the simpler Fokker-Planck approximation approach is also appropriate.
|
[
{
"created": "Mon, 16 Oct 2017 12:01:26 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2017 10:06:25 GMT",
"version": "v2"
},
{
"created": "Sat, 21 Jul 2018 07:36:21 GMT",
"version": "v3"
}
] |
2018-11-28
|
[
[
"Yu",
"Xiaoquan",
""
],
[
"Li",
"Xiang-Yi",
""
]
] |
In large but finite populations, weak demographic stochasticity due to random birth and death events can lead to population extinction. The process is analogous to the escaping problem of trapped particles under random forces. Methods widely used in studying such physical systems, for instance, Wentzel-Kramers-Brillouin (WKB) and Fokker-Planck methods, can be applied to solve similar biological problems. In this article, we comparatively analyse applications of WKB and Fokker-Planck methods to some typical stochastic population dynamical models, including the logistic growth, endemic SIR, predator-prey, and competitive Lotka-Volterra models. The mean extinction time strongly depends on the nature of the corresponding deterministic fixed point(s). For different types of fixed points, the extinction can be driven either by rare events or typical Gaussian fluctuations. In the former case, the large deviation function that governs the distribution of rare events can be well-approximated by the WKB method in the weak noise limit. In the later case, the simpler Fokker-Planck approximation approach is also appropriate.
|
2112.13153
|
Leonid Hanin
|
Leonid Hanin (1), Liyang Xie (2), Rainer Sachs (2) ((1) Idaho State
University, Pocatello, Idaho, USA, (2) University of California, Berkeley,
Berkeley, California, USA)
|
Mathematical Properties of Incremental Effect Additivity and Other
Synergy Theories
|
53 pages including 14 figures and supplementary materials
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synergy theories for multi-component agent mixtures use 1-agent dose-effect
relations, assumed known from analyzing previous 1-agent experiments, to
calculate baseline Neither-Synergy-Nor-Antagonism mixture dose-effect
relations. The most commonly used synergy theory, Simple Effect Additivity, is
not self-consistent mathematically. Many nonlinear alternatives have been
suggested, almost all of which require an assumption that effects increase
monotonically as dose increases. We here emphasize the recently introduced
Incremental Effect Additivity synergy theory and briefly discuss Loewe
Additivity. By utilizing the fact that, when dose increments approach zero,
dose-effect relations approach linearity, Incremental Effect Additivity theory
to some extent circumvents the non-linearity of dose-effect relations that
plague Simple Effect Additivity calculations. We study mathematical properties
of Incremental Effect Additivity that are relevant to practical implementation
of this synergy theory and hold whatever particular area of biology, medicine,
toxicology or pharmacology is involved. However, as yet Incremental Effect
Additivity synergy theory has only been applied to mixture experiments
simulating the toxic galactic cosmic ray mixture encountered during voyages in
interplanetary space. Our main results are theorems, propositions, examples and
counterexamples revealing various properties of Incremental Effect Additivity
synergy theory including whether or not Neither-Synergy-Nor-Antagonism
dose-effect relations lie between 1-agent dose-effect relations. These results
are amply illustrated with figures.
|
[
{
"created": "Fri, 24 Dec 2021 22:38:56 GMT",
"version": "v1"
}
] |
2021-12-28
|
[
[
"Hanin",
"Leonid",
""
],
[
"Xie",
"Liyang",
""
],
[
"Sachs",
"Rainer",
""
]
] |
Synergy theories for multi-component agent mixtures use 1-agent dose-effect relations, assumed known from analyzing previous 1-agent experiments, to calculate baseline Neither-Synergy-Nor-Antagonism mixture dose-effect relations. The most commonly used synergy theory, Simple Effect Additivity, is not self-consistent mathematically. Many nonlinear alternatives have been suggested, almost all of which require an assumption that effects increase monotonically as dose increases. We here emphasize the recently introduced Incremental Effect Additivity synergy theory and briefly discuss Loewe Additivity. By utilizing the fact that, when dose increments approach zero, dose-effect relations approach linearity, Incremental Effect Additivity theory to some extent circumvents the non-linearity of dose-effect relations that plague Simple Effect Additivity calculations. We study mathematical properties of Incremental Effect Additivity that are relevant to practical implementation of this synergy theory and hold whatever particular area of biology, medicine, toxicology or pharmacology is involved. However, as yet Incremental Effect Additivity synergy theory has only been applied to mixture experiments simulating the toxic galactic cosmic ray mixture encountered during voyages in interplanetary space. Our main results are theorems, propositions, examples and counterexamples revealing various properties of Incremental Effect Additivity synergy theory including whether or not Neither-Synergy-Nor-Antagonism dose-effect relations lie between 1-agent dose-effect relations. These results are amply illustrated with figures.
|
2007.01979
|
Ronal Arela-Bobadilla
|
Ronal Arela-Bobadilla
|
Excess deaths hidden 100 days after the quarantine in Peru by COVID-19
|
in Spanish
| null | null | null |
q-bio.PE stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: To make an estimate of the excess deaths caused by COVID-19 in the
non-violent mortality of Peru, controlling for the effect of quarantine.
Methods: Analysis of longitudinal data from the departments of Peru using
official public information from the National Death Information System and the
Ministry of Health of Peru. The analysis is performed between January 1, 2018
and June 23, 2020 (100 days of quarantine). The daily death rate per million
inhabitants has been used. The days in which the departments were quarantined
with a limit number of accumulated cases of COVID-19 were used to estimate the
quarantine impact. Three limits were established for cases: less than 1, 10 and
100 cases. Result: In Peru, the daily death rate per million inhabitants
decreased by -1.89 (95% CI: -2.70; -1.07) on quarantine days and without
COVID-19 cases. When comparing this result with the total number of non-violent
deaths, the excess deaths during the first 100 days of quarantine is 36,230.
This estimate is 1.12 times the estimate with data from 2019 and 4.2 times the
deaths officers by COVID-19. Conclusion: Quarantine reduced nonviolent deaths;
however, they are overshadowed by the increase as a direct or indirect cause of
the pandemic. Therefore, the difference between the number of current deaths
and that of past years underestimates the real excess of deaths.
|
[
{
"created": "Sat, 4 Jul 2020 01:16:56 GMT",
"version": "v1"
}
] |
2020-07-07
|
[
[
"Arela-Bobadilla",
"Ronal",
""
]
] |
Objective: To make an estimate of the excess deaths caused by COVID-19 in the non-violent mortality of Peru, controlling for the effect of quarantine. Methods: Analysis of longitudinal data from the departments of Peru using official public information from the National Death Information System and the Ministry of Health of Peru. The analysis is performed between January 1, 2018 and June 23, 2020 (100 days of quarantine). The daily death rate per million inhabitants has been used. The days in which the departments were quarantined with a limit number of accumulated cases of COVID-19 were used to estimate the quarantine impact. Three limits were established for cases: less than 1, 10 and 100 cases. Result: In Peru, the daily death rate per million inhabitants decreased by -1.89 (95% CI: -2.70; -1.07) on quarantine days and without COVID-19 cases. When comparing this result with the total number of non-violent deaths, the excess deaths during the first 100 days of quarantine is 36,230. This estimate is 1.12 times the estimate with data from 2019 and 4.2 times the deaths officers by COVID-19. Conclusion: Quarantine reduced nonviolent deaths; however, they are overshadowed by the increase as a direct or indirect cause of the pandemic. Therefore, the difference between the number of current deaths and that of past years underestimates the real excess of deaths.
|
2211.12412
|
Jay Stotsky
|
Jay A. Stotsky and Hans G. Othmer
|
The Role of Cytonemes and Diffusive Transport in the Establishment of
Morphogen Gradients
|
36 pages, 16 figures
| null | null | null |
q-bio.CB math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial distributions of morphogens provide positional information in
developing systems, but how the distributions are established and maintained
remains an open problem. Transport by diffusion has been the
traditional mechanism, but recent experimental work has shown that cells can
also communicate by filopodia-like structures called cytonemes that make
direct
cell-to-cell contacts. Here we investigate the roles each may play
individually in a complex tissue and how they can jointly establish a
reliable
spatial distribution of a morphogen.
|
[
{
"created": "Tue, 22 Nov 2022 17:16:25 GMT",
"version": "v1"
}
] |
2022-11-23
|
[
[
"Stotsky",
"Jay A.",
""
],
[
"Othmer",
"Hans G.",
""
]
] |
Spatial distributions of morphogens provide positional information in developing systems, but how the distributions are established and maintained remains an open problem. Transport by diffusion has been the traditional mechanism, but recent experimental work has shown that cells can also communicate by filopodia-like structures called cytonemes that make direct cell-to-cell contacts. Here we investigate the roles each may play individually in a complex tissue and how they can jointly establish a reliable spatial distribution of a morphogen.
|
q-bio/0512007
|
Ping Ao
|
X. Zhu, L. Yin, L. Hood, D. Galas, and P. Ao
|
Efficiency, Robustness and Stochasticity of Gene Regulatory Networks in
Systems Biology: lambda Switch as a Working Example
|
40 pages
| null | null | null |
q-bio.SC cond-mat.other nlin.AO q-bio.MN q-bio.OT
| null |
Phage lambda is one of the most studied biological models in modern molecular
biology. Over the past 50 years quantitative experimental knowledge on this
biological model has been accumulated at all levels: physics, chemistry,
genomics, proteomics, functions, and more. All its components have been known
to a great detail. The theoretical task has been to integrate its components to
make the organism working quantitatively in a harmonic manner. This would test
our biological understanding and would lay a solid fundamental for further
explorations and applications, an obvious goal of systems biology. One of the
outstanding challenges in doing so has been the so-called stability puzzle of
lambda switch: the biologically observed robustness and its difficult
mathematical reconstruction based on known experimental values. In this chapter
we review the recent theoretical and experimental efforts on tackling this
problem. An emphasis is put on the minimum quantitative modeling where a
successful numerical agreement between experiments and modeling has been
achieved. A novel method tentatively named stochastic dynamical structure
analysis emerged from such study is also discussed within a broad modeling
perspective.
|
[
{
"created": "Fri, 2 Dec 2005 17:14:59 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Feb 2006 03:23:29 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Zhu",
"X.",
""
],
[
"Yin",
"L.",
""
],
[
"Hood",
"L.",
""
],
[
"Galas",
"D.",
""
],
[
"Ao",
"P.",
""
]
] |
Phage lambda is one of the most studied biological models in modern molecular biology. Over the past 50 years quantitative experimental knowledge on this biological model has been accumulated at all levels: physics, chemistry, genomics, proteomics, functions, and more. All its components have been known to a great detail. The theoretical task has been to integrate its components to make the organism working quantitatively in a harmonic manner. This would test our biological understanding and would lay a solid fundamental for further explorations and applications, an obvious goal of systems biology. One of the outstanding challenges in doing so has been the so-called stability puzzle of lambda switch: the biologically observed robustness and its difficult mathematical reconstruction based on known experimental values. In this chapter we review the recent theoretical and experimental efforts on tackling this problem. An emphasis is put on the minimum quantitative modeling where a successful numerical agreement between experiments and modeling has been achieved. A novel method tentatively named stochastic dynamical structure analysis emerged from such study is also discussed within a broad modeling perspective.
|
1505.02072
|
Andrew Lover
|
Andrew A. Lover
|
Epidemiology of Latency and Relapse in Plasmodium vivax Malaria
|
PhD thesis (2015); 128 pages
| null | null | null |
q-bio.PE stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malaria is a major contributor to health burdens throughout the regions where
it is endemic. Historically, it was believed that there was limited morbidity
and essentially no mortality associated with Plasmodium vivax; however,
evidence from diverse settings now suggests that infections with P. vivax can
be both severe and fatal. This awareness has highlighted a critical gap: the
vast majority of research has been directed towards P. falciparum, leading to a
decades-long neglect of epidemiological and clinical studies of P. vivax. There
exists a large body of historical data on human experimental infections with P.
vivax; these studies in controlled settings provided a wealth of wide-ranging
statements based on expert opinion, which form the basis for much of what is
currently known about P. vivax. In this thesis, portions of this evidence-base
have been re-examined using modern epidemiological analyses with two aims: to
critically examine this accumulated knowledge base, and to inform current
research agendas towards global malaria elimination for all species of
Plasmodium. Chapter 2 examines geographic variation in the epidemiology of P.
vivax, especially the timing of incubation periods and of relapses, by origin
of the parasites. Chapter 3 re-assesses the impact of sporozoite dosage upon
incubation and pre-patent periods; Chapter 4 provides well-defined mathematical
distributions for incubation and relapses periods in experimental infections,
and explores their epidemiological impacts using simple transmission models.
Chapter 5 examines the epidemiology of mixed-strain P. vivax infections and
compares these results with studies in murine models and general ecological
theory; and Chapter 6 clarifies the origin of the Madagascar strain of P.
vivax.
|
[
{
"created": "Fri, 8 May 2015 15:57:46 GMT",
"version": "v1"
}
] |
2015-05-11
|
[
[
"Lover",
"Andrew A.",
""
]
] |
Malaria is a major contributor to health burdens throughout the regions where it is endemic. Historically, it was believed that there was limited morbidity and essentially no mortality associated with Plasmodium vivax; however, evidence from diverse settings now suggests that infections with P. vivax can be both severe and fatal. This awareness has highlighted a critical gap: the vast majority of research has been directed towards P. falciparum, leading to a decades-long neglect of epidemiological and clinical studies of P. vivax. There exists a large body of historical data on human experimental infections with P. vivax; these studies in controlled settings provided a wealth of wide-ranging statements based on expert opinion, which form the basis for much of what is currently known about P. vivax. In this thesis, portions of this evidence-base have been re-examined using modern epidemiological analyses with two aims: to critically examine this accumulated knowledge base, and to inform current research agendas towards global malaria elimination for all species of Plasmodium. Chapter 2 examines geographic variation in the epidemiology of P. vivax, especially the timing of incubation periods and of relapses, by origin of the parasites. Chapter 3 re-assesses the impact of sporozoite dosage upon incubation and pre-patent periods; Chapter 4 provides well-defined mathematical distributions for incubation and relapses periods in experimental infections, and explores their epidemiological impacts using simple transmission models. Chapter 5 examines the epidemiology of mixed-strain P. vivax infections and compares these results with studies in murine models and general ecological theory; and Chapter 6 clarifies the origin of the Madagascar strain of P. vivax.
|
2009.10018
|
Madhav Marathe
|
Aniruddha Adiga, Jiangzhuo Chen, Madhav Marathe, Henning Mortveit,
Srinivasan Venkatramanan, Anil Vullikanti
|
Data-driven modeling for different stages of pandemic response
| null | null | null | null |
q-bio.PE physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Some of the key questions of interest during the COVID-19 pandemic (and all
outbreaks) include: where did the disease start, how is it spreading, who is at
risk, and how to control the spread. There are a large number of complex
factors driving the spread of pandemics, and, as a result, multiple modeling
techniques play an increasingly important role in shaping public policy and
decision making. As different countries and regions go through phases of the
pandemic, the questions and data availability also changes. Especially of
interest is aligning model development and data collection to support response
efforts at each stage of the pandemic. The COVID-19 pandemic has been
unprecedented in terms of real-time collection and dissemination of a number of
diverse datasets, ranging from disease outcomes, to mobility, behaviors, and
socio-economic factors. The data sets have been critical from the perspective
of disease modeling and analytics to support policymakers in real-time. In this
overview article, we survey the data landscape around COVID-19, with a focus on
how such datasets have aided modeling and response through different stages so
far in the pandemic. We also discuss some of the current challenges and the
needs that will arise as we plan our way out of the pandemic.
|
[
{
"created": "Mon, 21 Sep 2020 16:48:59 GMT",
"version": "v1"
}
] |
2020-09-22
|
[
[
"Adiga",
"Aniruddha",
""
],
[
"Chen",
"Jiangzhuo",
""
],
[
"Marathe",
"Madhav",
""
],
[
"Mortveit",
"Henning",
""
],
[
"Venkatramanan",
"Srinivasan",
""
],
[
"Vullikanti",
"Anil",
""
]
] |
Some of the key questions of interest during the COVID-19 pandemic (and all outbreaks) include: where did the disease start, how is it spreading, who is at risk, and how to control the spread. There are a large number of complex factors driving the spread of pandemics, and, as a result, multiple modeling techniques play an increasingly important role in shaping public policy and decision making. As different countries and regions go through phases of the pandemic, the questions and data availability also changes. Especially of interest is aligning model development and data collection to support response efforts at each stage of the pandemic. The COVID-19 pandemic has been unprecedented in terms of real-time collection and dissemination of a number of diverse datasets, ranging from disease outcomes, to mobility, behaviors, and socio-economic factors. The data sets have been critical from the perspective of disease modeling and analytics to support policymakers in real-time. In this overview article, we survey the data landscape around COVID-19, with a focus on how such datasets have aided modeling and response through different stages so far in the pandemic. We also discuss some of the current challenges and the needs that will arise as we plan our way out of the pandemic.
|
1608.06222
|
Bienvenue Kouwaye
|
Gilles Cottrell (UPD5 Pharmacie), Bienvenue Kouwaye (SAMM, UAC),
Charlotte Pierrat (ENEC, MERIT), Agn\`es Le Port (UPD5 Pharmacie, MERIT,
UPD5), Boura\"ima Aziz (MERIT), No\"el Fonton, Achille Massougbodji (UAC,
MERIT), Vincent Corbel (MIVEGEC, MERIT), Mahouton Norbert Hounkonnou, Andr\'e
Garcia (UPD5 Pharmacie, MERIT, UPD5)
|
Modeling the Influence of Local Environmental Factors on Malaria
Transmission in Benin and Its Implications for Cohort Study
|
PLoS ONE, Public Library of Science, 2012
| null |
10.1371/journal.pone.0028812.g005
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malaria remains endemic in tropical areas, especially in Africa. For the
evaluation of new tools and to further ourunderstanding of host-parasite
interactions, knowing the environmental risk of transmission-even at a very
local scale-isessential. The aim of this study was to assess how malaria
transmission is influenced and can be predicted by local climaticand
environmental factors. As the entomological part of a cohort study of 650
newborn babies in nine villages in the ToriBossito district of Southern Benin
between June 2007 and February 2010, human landing catches were performed to
assessthe density of malaria vectors and transmission intensity. Climatic
factors as well as household characteristics were recordedthroughout the study.
Statistical correlations between Anopheles density and environmental and
climatic factors weretested using a three-level Poisson mixed regression model.
The results showed both temporal variations in vector density(related to season
and rainfall), and spatial variations at the level of both village and house.
These spatial variations could belargely explained by factors associated with
the house's immediate surroundings, namely soil type, vegetation index andthe
proximity of a watercourse. Based on these results, a predictive regression
model was developed using a leave-one-outmethod, to predict the spatiotemporal
variability of malaria transmission in the nine villages. This study points up
theimportance of local environmental factors in malaria transmission and
describes a model to predict the transmission risk ofindividual children, based
on environmental and behavioral characteristics.
|
[
{
"created": "Fri, 19 Aug 2016 16:25:30 GMT",
"version": "v1"
}
] |
2016-08-23
|
[
[
"Cottrell",
"Gilles",
"",
"UPD5 Pharmacie"
],
[
"Kouwaye",
"Bienvenue",
"",
"SAMM, UAC"
],
[
"Pierrat",
"Charlotte",
"",
"ENEC, MERIT"
],
[
"Port",
"Agnès Le",
"",
"UPD5 Pharmacie, MERIT,\n UPD5"
],
[
"Aziz",
"Bouraïma",
"",
"MERIT"
],
[
"Fonton",
"Noël",
"",
"UAC,\n MERIT"
],
[
"Massougbodji",
"Achille",
"",
"UAC,\n MERIT"
],
[
"Corbel",
"Vincent",
"",
"MIVEGEC, MERIT"
],
[
"Hounkonnou",
"Mahouton Norbert",
"",
"UPD5 Pharmacie, MERIT, UPD5"
],
[
"Garcia",
"André",
"",
"UPD5 Pharmacie, MERIT, UPD5"
]
] |
Malaria remains endemic in tropical areas, especially in Africa. For the evaluation of new tools and to further ourunderstanding of host-parasite interactions, knowing the environmental risk of transmission-even at a very local scale-isessential. The aim of this study was to assess how malaria transmission is influenced and can be predicted by local climaticand environmental factors. As the entomological part of a cohort study of 650 newborn babies in nine villages in the ToriBossito district of Southern Benin between June 2007 and February 2010, human landing catches were performed to assessthe density of malaria vectors and transmission intensity. Climatic factors as well as household characteristics were recordedthroughout the study. Statistical correlations between Anopheles density and environmental and climatic factors weretested using a three-level Poisson mixed regression model. The results showed both temporal variations in vector density(related to season and rainfall), and spatial variations at the level of both village and house. These spatial variations could belargely explained by factors associated with the house's immediate surroundings, namely soil type, vegetation index andthe proximity of a watercourse. Based on these results, a predictive regression model was developed using a leave-one-outmethod, to predict the spatiotemporal variability of malaria transmission in the nine villages. This study points up theimportance of local environmental factors in malaria transmission and describes a model to predict the transmission risk ofindividual children, based on environmental and behavioral characteristics.
|
1507.03053
|
Francis Manno
|
Francis AM Manno III
|
Memetic evolution of art to distinct aesthetics
|
short theory article, 3 pages, 3 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans have been using symbolic representation (i.e. art) as a creative
cultural form indisputably for at least 80,000 years. A description of the
processes central to the evolution of art from sculpted earthen forms early in
human existence to paintings in museums of the modern world is absent in
scholarly literature. The present manuscript offers a memetic theory of art
evolution demonstrating typological transitions of art in the period post the
3-dimensional to 2-dimensional transition occurring in the early Aurignacian
period. The process of art evolution in the 2-dimensional form central to the
typological transitions post this period was propelled by material and/or
structure chosen to display the artistic message. Symbolic representational
tinkering whether collectively or individually creates artistic typologies of
display deemed aesthetic and non-aesthetic by the current cultural milieu.
Contemporary forms of graffiti underscore the aesthetic/nonaesthetic dichotomy.
|
[
{
"created": "Sat, 11 Jul 2015 01:41:26 GMT",
"version": "v1"
}
] |
2015-07-14
|
[
[
"Manno",
"Francis AM",
"III"
]
] |
Humans have been using symbolic representation (i.e. art) as a creative cultural form indisputably for at least 80,000 years. A description of the processes central to the evolution of art from sculpted earthen forms early in human existence to paintings in museums of the modern world is absent in scholarly literature. The present manuscript offers a memetic theory of art evolution demonstrating typological transitions of art in the period post the 3-dimensional to 2-dimensional transition occurring in the early Aurignacian period. The process of art evolution in the 2-dimensional form central to the typological transitions post this period was propelled by material and/or structure chosen to display the artistic message. Symbolic representational tinkering whether collectively or individually creates artistic typologies of display deemed aesthetic and non-aesthetic by the current cultural milieu. Contemporary forms of graffiti underscore the aesthetic/nonaesthetic dichotomy.
|
2201.09916
|
Rainer Engelken
|
Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L.
F. Abbott
|
Input correlations impede suppression of chaos and learning in balanced
rate networks
| null | null |
10.1371/journal.pcbi.1010590
| null |
q-bio.NC cond-mat.dis-nn cs.LG nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural circuits exhibit complex activity patterns, both spontaneously and
evoked by external stimuli. Information encoding and learning in neural
circuits depend on how well time-varying stimuli can control spontaneous
network activity. We show that in firing-rate networks in the balanced state,
external control of recurrent dynamics, i.e., the suppression of
internally-generated chaotic variability, strongly depends on correlations in
the input. A unique feature of balanced networks is that, because common
external input is dynamically canceled by recurrent feedback, it is far easier
to suppress chaos with independent inputs into each neuron than through common
input. To study this phenomenon we develop a non-stationary dynamic mean-field
theory that determines how the activity statistics and largest Lyapunov
exponent depend on frequency and amplitude of the input, recurrent coupling
strength, and network size, for both common and independent input. We also show
that uncorrelated inputs facilitate learning in balanced networks.
|
[
{
"created": "Mon, 24 Jan 2022 19:20:49 GMT",
"version": "v1"
}
] |
2023-01-11
|
[
[
"Engelken",
"Rainer",
""
],
[
"Ingrosso",
"Alessandro",
""
],
[
"Khajeh",
"Ramin",
""
],
[
"Goedeke",
"Sven",
""
],
[
"Abbott",
"L. F.",
""
]
] |
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A unique feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far easier to suppress chaos with independent inputs into each neuron than through common input. To study this phenomenon we develop a non-stationary dynamic mean-field theory that determines how the activity statistics and largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We also show that uncorrelated inputs facilitate learning in balanced networks.
|
1711.09558
|
Jasper Zuallaert
|
Jasper Zuallaert, Mijung Kim, Yvan Saeys, Wesley De Neve
|
Interpretable Convolutional Neural Networks for Effective Translation
Initiation Site Prediction
|
Presented at International Workshop on Deep Learning in
Bioinformatics, Biomedicine, and Healthcare Informatics (DLB2H 2017) --- in
conjunction with the IEEE International Conference on Bioinformatics and
Biomedicine (BIBM 2017)
| null | null | null |
q-bio.GN cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Thanks to rapidly evolving sequencing techniques, the amount of genomic data
at our disposal is growing increasingly large. Determining the gene structure
is a fundamental requirement to effectively interpret gene function and
regulation. An important part in that determination process is the
identification of translation initiation sites. In this paper, we propose a
novel approach for automatic prediction of translation initiation sites,
leveraging convolutional neural networks that allow for automatic feature
extraction. Our experimental results demonstrate that we are able to improve
the state-of-the-art approaches with a decrease of 75.2% in false positive rate
and with a decrease of 24.5% in error rate on chosen datasets. Furthermore, an
in-depth analysis of the decision-making process used by our predictive model
shows that our neural network implicitly learns biologically relevant features
from scratch, without any prior knowledge about the problem at hand, such as
the Kozak consensus sequence, the influence of stop and start codons in the
sequence and the presence of donor splice site patterns. In summary, our
findings yield a better understanding of the internal reasoning of a
convolutional neural network when applying such a neural network to genomic
data.
|
[
{
"created": "Mon, 27 Nov 2017 06:37:37 GMT",
"version": "v1"
}
] |
2017-11-28
|
[
[
"Zuallaert",
"Jasper",
""
],
[
"Kim",
"Mijung",
""
],
[
"Saeys",
"Yvan",
""
],
[
"De Neve",
"Wesley",
""
]
] |
Thanks to rapidly evolving sequencing techniques, the amount of genomic data at our disposal is growing increasingly large. Determining the gene structure is a fundamental requirement to effectively interpret gene function and regulation. An important part in that determination process is the identification of translation initiation sites. In this paper, we propose a novel approach for automatic prediction of translation initiation sites, leveraging convolutional neural networks that allow for automatic feature extraction. Our experimental results demonstrate that we are able to improve the state-of-the-art approaches with a decrease of 75.2% in false positive rate and with a decrease of 24.5% in error rate on chosen datasets. Furthermore, an in-depth analysis of the decision-making process used by our predictive model shows that our neural network implicitly learns biologically relevant features from scratch, without any prior knowledge about the problem at hand, such as the Kozak consensus sequence, the influence of stop and start codons in the sequence and the presence of donor splice site patterns. In summary, our findings yield a better understanding of the internal reasoning of a convolutional neural network when applying such a neural network to genomic data.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.