id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0405016 | Bernardo Barbiellini | B. Barbiellini, Alexandra Portnova, Anna Chetoukhina, Chia-Hsin Lu and
Matteo Pellegrini | Position Dependent and Independent Evolutionary Models Based on
Empirical Amino Acid Substitution Matrices | Paper presented at the Biological Language Conference November 20-21,
2003 University of Pittsburgh | null | null | null | q-bio.PE cond-mat.stat-mech q-bio.GN | null | Evolutionary models measure the probability of amino acid substitutions
occurring over different evolutionary distances. We examine various
evolutionary models based on empirically derived amino acid substitution
matrices. The models are constructed using the PAM and BLOSUM amino acid
substitution matrices. We rescale these matrices by raising them to powers to
model substitution patterns that account for different evolutionary distances.
We also examine models that account for the dissimilarity of substitution rates
along a protein sequence. We compare the models by computing the likelihood of
each model across different alignments. We also present a specific example to
illustrate the subtle differences in the estimation of evolutionary distance
computed using the different models.
| [
{
"created": "Wed, 19 May 2004 23:07:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Barbiellini",
"B.",
""
],
[
"Portnova",
"Alexandra",
""
],
[
"Chetoukhina",
"Anna",
""
],
[
"Lu",
"Chia-Hsin",
""
],
[
"Pellegrini",
"Matteo",
""
]
] | Evolutionary models measure the probability of amino acid substitutions occurring over different evolutionary distances. We examine various evolutionary models based on empirically derived amino acid substitution matrices. The models are constructed using the PAM and BLOSUM amino acid substitution matrices. We rescale these matrices by raising them to powers to model substitution patterns that account for different evolutionary distances. We also examine models that account for the dissimilarity of substitution rates along a protein sequence. We compare the models by computing the likelihood of each model across different alignments. We also present a specific example to illustrate the subtle differences in the estimation of evolutionary distance computed using the different models. |
1701.07744 | Bogdan Dragnea | Cheng Zeng, Mercedes Hernando-P\'erez, Xiang Ma, Paul van der Schoot,
Roya Zandi and Bogdan Dragnea | Contact Mechanics of a Small Icosahedral Virus | null | Phys. Rev. Lett. 119, 038102 (2017) | 10.1103/PhysRevLett.119.038102 | null | q-bio.QM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virus binding to a surface results at least locally, at the contact area, in
stress and potential structural perturbation of the virus cage. Here we address
the question of the role of substrate-induced deformation in the overall virus
mechanical response to the adsorption event. This question may be especially
important for the broad category of viruses that have their shells stabilized
by weak, non-covalent interactions. We utilize atomic force microscopy to
measure the height change distributions of the brome mosaic virus upon
adsorption from liquid on atomically flat substrates and present a continuum
model which captures the behavior well. Height data fitting according to the model
provides, without recourse to indentation, estimates of virus elastic
properties and of the interfacial energy.
| [
{
"created": "Thu, 26 Jan 2017 15:42:38 GMT",
"version": "v1"
}
] | 2017-07-26 | [
[
"Zeng",
"Cheng",
""
],
[
"Hernando-Pérez",
"Mercedes",
""
],
[
"Ma",
"Xiang",
""
],
[
"van der Schoot",
"Paul",
""
],
[
"Zandi",
"Roya",
""
],
[
"Dragnea",
"Bogdan",
""
]
] | Virus binding to a surface results at least locally, at the contact area, in stress and potential structural perturbation of the virus cage. Here we address the question of the role of substrate-induced deformation in the overall virus mechanical response to the adsorption event. This question may be especially important for the broad category of viruses that have their shells stabilized by weak, non-covalent interactions. We utilize atomic force microscopy to measure the height change distributions of the brome mosaic virus upon adsorption from liquid on atomically flat substrates and present a continuum model which captures the behavior well. Height data fitting according to the model provides, without recourse to indentation, estimates of virus elastic properties and of the interfacial energy. |
1911.00301 | Gabriele Micali | Gabriele Micali and Robert G. Endres | Maximal information transmission is compatible with ultrasensitive
biological pathways | 28 pages, 5 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells are often considered input-output devices that maximize the
transmission of information by converting extracellular stimuli (input) via
signaling pathways (communication channel) to cell behavior (output). However,
in biological systems outputs might feed back into inputs due to cell motility,
and the biological channel can change by mutations during evolution. Here, we
show that the conventional channel capacity obtained by optimizing the input
distribution for a fixed channel may not reflect the global optimum. In a new
approach we analytically identify both input distributions and input-output
curves that optimally transmit information, given constraints from noise and
the dynamic range of the channel. We find a universal optimal input
distribution only depending on the input noise, and we generalize our formalism
to multiple outputs (or inputs). Applying our formalism to Escherichia coli
chemotaxis, we find that its pathway is compatible with optimal information
transmission despite the ultrasensitive rotary motors.
| [
{
"created": "Fri, 1 Nov 2019 11:10:22 GMT",
"version": "v1"
}
] | 2019-11-04 | [
[
"Micali",
"Gabriele",
""
],
[
"Endres",
"Robert G.",
""
]
] | Cells are often considered input-output devices that maximize the transmission of information by converting extracellular stimuli (input) via signaling pathways (communication channel) to cell behavior (output). However, in biological systems outputs might feed back into inputs due to cell motility, and the biological channel can change by mutations during evolution. Here, we show that the conventional channel capacity obtained by optimizing the input distribution for a fixed channel may not reflect the global optimum. In a new approach we analytically identify both input distributions and input-output curves that optimally transmit information, given constraints from noise and the dynamic range of the channel. We find a universal optimal input distribution only depending on the input noise, and we generalize our formalism to multiple outputs (or inputs). Applying our formalism to Escherichia coli chemotaxis, we find that its pathway is compatible with optimal information transmission despite the ultrasensitive rotary motors. |
1906.11196 | Fusong Ju | Fusong Ju, Jianwei Zhu, Guozheng Wei, Qi Zhang, Shiwei Sun, Dongbo Bu | Seq-SetNet: Exploring Sequence Sets for Inferring Structures | null | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence set is a widely-used type of data source in a large variety of
fields. A typical example is protein structure prediction, which takes a
multiple sequence alignment (MSA) as input and aims to infer structural
information from it. Almost all of the existing approaches exploit MSAs in an
indirect fashion, i.e., they transform MSAs into position-specific scoring
matrices (PSSM) that represent the distribution of amino acid types at each
column. A PSSM can capture the column-wise characteristics of an MSA; however, the characteristics embedded in each individual component sequence are almost entirely neglected.
The drawback of PSSM is rooted in the fact that an MSA is essentially an
unordered sequence set rather than a matrix. Specifically, the interchange of
any two sequences will not affect the whole MSA. In contrast, the pixels in an
image essentially form a matrix since any two rows of pixels cannot be
interchanged. Therefore, the traditional deep neural networks designed for
image processing cannot be directly applied to sequence sets. Here, we propose
a novel deep neural network framework (called Seq-SetNet) for sequence set
processing. By employing a {\it symmetric function} module to integrate
features calculated from preceding layers, Seq-SetNet is immune to the order
of sequences in the input MSA. This advantage enables us to directly and fully
exploit MSAs by considering each component protein individually. We evaluated
Seq-SetNet by using it to extract structural information from MSA for protein
secondary structure prediction. Experimental results on popular benchmark sets
suggest that Seq-SetNet outperforms the state-of-the-art approaches by 3.6% in
precision. These results clearly demonstrate the advantages of Seq-SetNet in
sequence set processing, and it can be readily applied in a wide range of fields,
such as natural language processing.
| [
{
"created": "Thu, 6 Jun 2019 12:41:00 GMT",
"version": "v1"
}
] | 2019-06-27 | [
[
"Ju",
"Fusong",
""
],
[
"Zhu",
"Jianwei",
""
],
[
"Wei",
"Guozheng",
""
],
[
"Zhang",
"Qi",
""
],
[
"Sun",
"Shiwei",
""
],
[
"Bu",
"Dongbo",
""
]
] | Sequence set is a widely-used type of data source in a large variety of fields. A typical example is protein structure prediction, which takes a multiple sequence alignment (MSA) as input and aims to infer structural information from it. Almost all of the existing approaches exploit MSAs in an indirect fashion, i.e., they transform MSAs into position-specific scoring matrices (PSSM) that represent the distribution of amino acid types at each column. A PSSM can capture the column-wise characteristics of an MSA; however, the characteristics embedded in each individual component sequence are almost entirely neglected. The drawback of PSSM is rooted in the fact that an MSA is essentially an unordered sequence set rather than a matrix. Specifically, the interchange of any two sequences will not affect the whole MSA. In contrast, the pixels in an image essentially form a matrix since any two rows of pixels cannot be interchanged. Therefore, the traditional deep neural networks designed for image processing cannot be directly applied to sequence sets. Here, we propose a novel deep neural network framework (called Seq-SetNet) for sequence set processing. By employing a {\it symmetric function} module to integrate features calculated from preceding layers, Seq-SetNet is immune to the order of sequences in the input MSA. This advantage enables us to directly and fully exploit MSAs by considering each component protein individually. We evaluated Seq-SetNet by using it to extract structural information from MSA for protein secondary structure prediction. Experimental results on popular benchmark sets suggest that Seq-SetNet outperforms the state-of-the-art approaches by 3.6% in precision. These results clearly demonstrate the advantages of Seq-SetNet in sequence set processing, and it can be readily applied in a wide range of fields, such as natural language processing. |
2303.10678 | Joachim Poutaraud | Joachim Poutaraud | Estimating the Repertoire Size in Birds using Unsupervised Clustering
techniques | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Birds produce multiple types of vocalizations that, together, constitute a
vocal repertoire. For some species, the repertoire size is of importance
because it informs us about their brain capacity, territory size or social
behaviour. Estimating the repertoire size is challenging because it requires
large amounts of data which can be difficult to obtain and analyse. From
recordings of bird vocalizations, songs are extracted and segmented as sequences of
syllables before being clustered. Segmenting songs in such a way can be done
either by simple enumeration, where one counts unique vocalization types until
there are no new types detected, or by specific algorithms permitting
reproducible studies. In this paper, we present a specific automatic method to
compute a syllable distance measure that allows an unsupervised classification
of bird song syllables. The results obtained from the segmenting of the bird
songs are evaluated using the Silhouette metric score.
| [
{
"created": "Sun, 19 Mar 2023 14:44:48 GMT",
"version": "v1"
}
] | 2023-03-21 | [
[
"Poutaraud",
"Joachim",
""
]
] | Birds produce multiple types of vocalizations that, together, constitute a vocal repertoire. For some species, the repertoire size is of importance because it informs us about their brain capacity, territory size or social behaviour. Estimating the repertoire size is challenging because it requires large amounts of data which can be difficult to obtain and analyse. From recordings of bird vocalizations, songs are extracted and segmented as sequences of syllables before being clustered. Segmenting songs in such a way can be done either by simple enumeration, where one counts unique vocalization types until there are no new types detected, or by specific algorithms permitting reproducible studies. In this paper, we present a specific automatic method to compute a syllable distance measure that allows an unsupervised classification of bird song syllables. The results obtained from the segmenting of the bird songs are evaluated using the Silhouette metric score. |
q-bio/0310033 | Simon Kogan | Simon Kogan | Pattern overlapping decomposition by Cumulative Local Cross-Correlation | 14 pages, 8 figures, 1 table | null | null | null | q-bio.QM q-bio.BM | null | Background
Nucleotide sequences contain multiple codes responsible for an organism's
functioning and structure. They can be investigated by various signal
processing methods. These techniques are well suited for indication of
frequently encountered sequence motifs (i.e., repeats). However, if there are
two or more codes containing the same motif, the local nucleotide distribution
(i.e., profile), resulting from sequence alignment by the motif position, will
represent overlapping of the code patterns.
Results
A novel algorithm for the decomposition of pattern overlapping is proposed. It
is capable of working with dispersed repeats as well. The algorithm is based on
a cross-correlation procedure applied locally in a cumulative fashion. Its
sensitivity was tested on human genomic sequences.
Conclusions
Cumulative Local Cross-Correlation was successfully used to decompose
overlapping of nucleotide patterns in human genomic sequences. Being a very
general technique (as general as cross-correlation), it can be easily adopted
in other signal processing applications and naturally extended to
multidimensional cases.
Software implementation of the algorithm is available on request from the
authors.
| [
{
"created": "Sat, 25 Oct 2003 11:27:55 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Dec 2004 16:14:35 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Kogan",
"Simon",
""
]
] | Background: Nucleotide sequences contain multiple codes responsible for an organism's functioning and structure. They can be investigated by various signal processing methods. These techniques are well suited for indication of frequently encountered sequence motifs (i.e., repeats). However, if there are two or more codes containing the same motif, the local nucleotide distribution (i.e., profile), resulting from sequence alignment by the motif position, will represent overlapping of the code patterns. Results: A novel algorithm for the decomposition of pattern overlapping is proposed. It is capable of working with dispersed repeats as well. The algorithm is based on a cross-correlation procedure applied locally in a cumulative fashion. Its sensitivity was tested on human genomic sequences. Conclusions: Cumulative Local Cross-Correlation was successfully used to decompose overlapping of nucleotide patterns in human genomic sequences. Being a very general technique (as general as cross-correlation), it can be easily adopted in other signal processing applications and naturally extended to multidimensional cases. Software implementation of the algorithm is available on request from the authors. |
q-bio/0608021 | Atul Narang | Atul Narang and Sergei S. Pilyugin | Bacterial gene regulation in diauxic and nondiauxic growth | Accepted, J Theoret Biol (47 pages) | null | null | null | q-bio.MN q-bio.CB | null | When bacteria are grown on a mixture of two growth-limiting substrates, they
exhibit a rich spectrum of substrate consumption patterns including diauxic
growth, simultaneous consumption, and bistable growth. In previous work, we
showed that a minimal model accounting only for enzyme induction and dilution
captures all the substrate consumption patterns. Here, we construct the
bifurcation diagram of the minimal model. The bifurcation diagram explains
several general properties of mixed-substrate growth. (1) In almost all cases
of diauxic growth, the "preferred" substrate is the one that, by itself,
supports a higher specific growth rate. In the literature, this property is
often attributed to optimality of regulatory mechanisms. Here, we show that the
minimal model, which contains only induction, displays the property under
fairly general conditions. This suggests that the higher growth rate of the
preferred substrate is an intrinsic property of the induction and dilution
kinetics. (2) The model explains the phenotypes of various mutants containing
lesions in the regions encoding for the operator, repressor, and peripheral
enzymes. A particularly striking phenotype is the "reversal of the diauxie" in
which the wild-type and mutant strains consume the very same two substrates in
opposite order. This phenotype is difficult to explain in terms of molecular
mechanisms, but it turns out to be a natural consequence of the model. We show
furthermore that the model is robust. The key property of the model, namely,
the competitive dynamics of the enzymes, is preserved even if the model is
modified to account for various regulatory mechanisms. Finally, the model has
important implications for size regulation in development, since it suggests
that protein dilution is one mechanism for coupling patterning and growth.
| [
{
"created": "Thu, 10 Aug 2006 00:21:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Narang",
"Atul",
""
],
[
"Pilyugin",
"Sergei S.",
""
]
] | When bacteria are grown on a mixture of two growth-limiting substrates, they exhibit a rich spectrum of substrate consumption patterns including diauxic growth, simultaneous consumption, and bistable growth. In previous work, we showed that a minimal model accounting only for enzyme induction and dilution captures all the substrate consumption patterns. Here, we construct the bifurcation diagram of the minimal model. The bifurcation diagram explains several general properties of mixed-substrate growth. (1) In almost all cases of diauxic growth, the "preferred" substrate is the one that, by itself, supports a higher specific growth rate. In the literature, this property is often attributed to optimality of regulatory mechanisms. Here, we show that the minimal model, which contains only induction, displays the property under fairly general conditions. This suggests that the higher growth rate of the preferred substrate is an intrinsic property of the induction and dilution kinetics. (2) The model explains the phenotypes of various mutants containing lesions in the regions encoding for the operator, repressor, and peripheral enzymes. A particularly striking phenotype is the "reversal of the diauxie" in which the wild-type and mutant strains consume the very same two substrates in opposite order. This phenotype is difficult to explain in terms of molecular mechanisms, but it turns out to be a natural consequence of the model. We show furthermore that the model is robust. The key property of the model, namely, the competitive dynamics of the enzymes, is preserved even if the model is modified to account for various regulatory mechanisms. Finally, the model has important implications for size regulation in development, since it suggests that protein dilution is one mechanism for coupling patterning and growth. |
1108.4876 | Michael Gilson | Crystal Nguyen, Michael K. Gilson, Tom Young | Structure and Thermodynamics of Molecular Hydration via Grid
Inhomogeneous Solvation Theory | 16 pages, 5 figures | null | null | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Changes in hydration are central to the phenomenon of biomolecular
recognition, but it has been difficult to properly frame and answer questions
about their precise thermodynamic role. We address this problem by introducing
Grid Inhomogeneous Solvation Theory (GIST), which discretizes the equations of
Inhomogeneous Solvation Theory on a 3D grid in a volume of interest. Here, the
solvent volume is divided into small grid boxes and localized thermodynamic
entropies, energies and free energies are defined for each grid box.
Thermodynamic solvation quantities are defined in such a manner that summing
the quantities over all the grid boxes yields the desired total quantity for
the system. This approach smoothly accounts for the thermodynamics of not only
highly occupied water sites but also partly occupied and water depleted regions
of the solvent, without the need for ad hoc terms drawn from other theories.
The GIST method has the further advantage of allowing a rigorous end-states
analysis that, for example in the problem of molecular recognition, can account
for not only the thermodynamics of displacing water from the surface but also
for the thermodynamics of solvent reorganization around the bound complex. As a
preliminary application, we present GIST calculations at the 1-body level for
the host cucurbit[7]uril, a low molecular weight receptor molecule which
represents a tractable model for biomolecular recognition. One of the most
striking results is the observation of a toroidal region of water density, at
the center of the host's nonpolar cavity, which is significantly disfavored
entropically, and hence may contribute to the ability of this small receptor to
bind guest molecules with unusually high affinities.
| [
{
"created": "Wed, 24 Aug 2011 16:15:04 GMT",
"version": "v1"
}
] | 2011-08-25 | [
[
"Nguyen",
"Crystal",
""
],
[
"Gilson",
"Michael K.",
""
],
[
"Young",
"Tom",
""
]
] | Changes in hydration are central to the phenomenon of biomolecular recognition, but it has been difficult to properly frame and answer questions about their precise thermodynamic role. We address this problem by introducing Grid Inhomogeneous Solvation Theory (GIST), which discretizes the equations of Inhomogeneous Solvation Theory on a 3D grid in a volume of interest. Here, the solvent volume is divided into small grid boxes and localized thermodynamic entropies, energies and free energies are defined for each grid box. Thermodynamic solvation quantities are defined in such a manner that summing the quantities over all the grid boxes yields the desired total quantity for the system. This approach smoothly accounts for the thermodynamics of not only highly occupied water sites but also partly occupied and water depleted regions of the solvent, without the need for ad hoc terms drawn from other theories. The GIST method has the further advantage of allowing a rigorous end-states analysis that, for example in the problem of molecular recognition, can account for not only the thermodynamics of displacing water from the surface but also for the thermodynamics of solvent reorganization around the bound complex. As a preliminary application, we present GIST calculations at the 1-body level for the host cucurbit[7]uril, a low molecular weight receptor molecule which represents a tractable model for biomolecular recognition. One of the most striking results is the observation of a toroidal region of water density, at the center of the host's nonpolar cavity, which is significantly disfavored entropically, and hence may contribute to the ability of this small receptor to bind guest molecules with unusually high affinities. |
1310.3985 | Marco Zoli | Marco Zoli | Twist versus Nonlinear Stacking in Short DNA Molecules | Journal of Theoretical Biology (2014) | J. Theor. Biol. vol. 354, 95 (2014) | 10.1016/j.jtbi.2014.03.031 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The denaturation of the double helix is a template for fundamental biological
functions such as replication and transcription involving the formation of
local fluctuational openings. The denaturation transition is studied for
heterogeneous short sequences of DNA, i.e. $\sim 100$ base pairs, in the
framework of a mesoscopic Hamiltonian model which accounts for the helicoidal
geometry of the molecule. The theoretical background for the application of the
path integral formalism to the predictive analysis of the molecule's thermodynamic
properties is discussed. The base pair displacements with respect to the ground
state are treated as paths whose temperature-dependent amplitudes are governed
by the thermal wavelength. The ensemble of base pair paths is selected, at any
temperature, consistently with both the model potential and the second law of
thermodynamics. The partition function incorporates the effects of the base
pair thermal fluctuations which become stronger close to the denaturation. The
transition appears as a gradual phenomenon starting from the molecule segments
rich in adenine-thymine base pairs. Computing the equilibrium thermodynamics,
we focus on the interplay between twisting of the complementary strands around
the molecule axis and nonlinear stacking potential: it is shown that the latter
affects the melting profiles only if the rotational degrees of freedom are
included in the Hamiltonian. The use of ladder Hamiltonian models for the DNA
complementary strands in the pre-melting regime is questioned.
| [
{
"created": "Tue, 15 Oct 2013 10:02:15 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Dec 2013 17:06:44 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Apr 2014 15:20:42 GMT",
"version": "v3"
}
] | 2014-06-03 | [
[
"Zoli",
"Marco",
""
]
] | The denaturation of the double helix is a template for fundamental biological functions such as replication and transcription involving the formation of local fluctuational openings. The denaturation transition is studied for heterogeneous short sequences of DNA, i.e. $\sim 100$ base pairs, in the framework of a mesoscopic Hamiltonian model which accounts for the helicoidal geometry of the molecule. The theoretical background for the application of the path integral formalism to the predictive analysis of the molecule's thermodynamic properties is discussed. The base pair displacements with respect to the ground state are treated as paths whose temperature-dependent amplitudes are governed by the thermal wavelength. The ensemble of base pair paths is selected, at any temperature, consistently with both the model potential and the second law of thermodynamics. The partition function incorporates the effects of the base pair thermal fluctuations which become stronger close to the denaturation. The transition appears as a gradual phenomenon starting from the molecule segments rich in adenine-thymine base pairs. Computing the equilibrium thermodynamics, we focus on the interplay between twisting of the complementary strands around the molecule axis and nonlinear stacking potential: it is shown that the latter affects the melting profiles only if the rotational degrees of freedom are included in the Hamiltonian. The use of ladder Hamiltonian models for the DNA complementary strands in the pre-melting regime is questioned. |
1401.4134 | Diogo Pratas | Diogo Pratas and Armando J. Pinho | A conditional compression distance that unveils insights of the genomic
evolution | Full version of DCC 2014 paper "A conditional compression distance
that unveils insights of the genomic evolution" | null | null | null | q-bio.GN cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a compression-based distance for genomic sequences. Instead of
using the usual conjoint information content, as in the classical Normalized
Compression Distance (NCD), it uses the conditional information content. To
compute this Normalized Conditional Compression Distance (NCCD), we need a
normal conditional compressor, that we built using a mixture of static and
dynamic finite-context models. Using this approach, we measured chromosomal
distances between Hominidae primates and also between Muroidea (rat and mouse),
revealing several insights into evolution that so far have not been reported in
the literature.
| [
{
"created": "Thu, 16 Jan 2014 19:17:36 GMT",
"version": "v1"
}
] | 2014-01-21 | [
[
"Pratas",
"Diogo",
""
],
[
"Pinho",
"Armando J.",
""
]
] | We describe a compression-based distance for genomic sequences. Instead of using the usual conjoint information content, as in the classical Normalized Compression Distance (NCD), it uses the conditional information content. To compute this Normalized Conditional Compression Distance (NCCD), we need a normal conditional compressor, that we built using a mixture of static and dynamic finite-context models. Using this approach, we measured chromosomal distances between Hominidae primates and also between Muroidea (rat and mouse), revealing several insights into evolution that so far have not been reported in the literature. |
2012.02833 | Lautaro Vassallo | Lautaro Vassallo, Ignacio A. Perez, Lucila G. Alvarez-Zuzek, Juli\'an
Amaya, Marcos F. Torres, Lucas D. Valdez, Cristian E. La Rocca, Lidia A.
Braunstein | An epidemic model for COVID-19 transmission in Argentina: Exploration of
the alternating quarantine and massive testing strategies | null | null | 10.1016/j.mbs.2021.108664 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The COVID-19 pandemic has challenged authorities at different levels of
government administration around the globe. When faced with diseases of this
severity, it is useful for the authorities to have prediction tools to estimate
in advance the impact on the health system and the human, material, and
economic resources that will be necessary. In this paper, we construct an
extended Susceptible-Exposed-Infected-Recovered model that incorporates the
social structure of Mar del Plata, the 4th most inhabited city in
Argentina and head of the Municipality of General Pueyrred\'on. Moreover, we
consider detailed partitions of infected individuals according to the illness
severity, as well as data of local health resources, to bring these predictions
closer to the local reality. Tuning the corresponding epidemic parameters for
COVID-19, we study an alternating quarantine strategy, in which a part of the
population can circulate without restrictions at any time, while the rest is
equally divided into two groups and goes on successive periods of normal
activity and lockdown, each one with a duration of $\tau$ days. Besides, we
implement a random testing strategy over the population. We found that $\tau =
7$ is a good choice for the quarantine strategy since it matches the
weekly cycle and reduces the infected population. Focusing on the health
system, projecting from the situation as of September 30, we foresee
difficulty in avoiding ICU saturation, given the extremely low levels of
mobility that would be required. In the worst case, our model estimates that
four thousand deaths would occur, of which 30\% could be avoided with proper
medical attention. Nonetheless, we found that aggressive testing would allow an
increase in the percentage of people that can circulate without restrictions,
with the equipment required to deal with the additional critical patients
remaining relatively low.
| [
{
"created": "Fri, 4 Dec 2020 20:27:39 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Mar 2021 00:24:45 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Jul 2021 18:04:23 GMT",
"version": "v3"
}
] | 2021-07-20 | [
[
"Vassallo",
"Lautaro",
""
],
[
"Perez",
"Ignacio A.",
""
],
[
"Alvarez-Zuzek",
"Lucila G.",
""
],
[
"Amaya",
"Julián",
""
],
[
"Torres",
"Marcos F.",
""
],
[
"Valdez",
"Lucas D.",
""
],
[
"La Rocca",
"Cristian E.",
""
],
[
"Braunstein",
"Lidia A.",
""
]
] | The COVID-19 pandemic has challenged authorities at different levels of government administration around the globe. When faced with diseases of this severity, it is useful for the authorities to have prediction tools to estimate in advance the impact on the health system and the human, material, and economic resources that will be necessary. In this paper, we construct an extended Susceptible-Exposed-Infected-Recovered model that incorporates the social structure of Mar del Plata, the fourth most populous city in Argentina and head of the Municipality of General Pueyrred\'on. Moreover, we consider detailed partitions of infected individuals according to the illness severity, as well as data of local health resources, to bring these predictions closer to the local reality. Tuning the corresponding epidemic parameters for COVID-19, we study an alternating quarantine strategy, in which a part of the population can circulate without restrictions at any time, while the rest is equally divided into two groups and goes on successive periods of normal activity and lockdown, each one with a duration of $\tau$ days. Besides, we implement a random testing strategy over the population. We found that $\tau = 7$ is a good choice for the quarantine strategy since it matches the weekly cycle and reduces the infected population. Focusing on the health system, projecting from the situation as of September 30, we foresee difficulty in avoiding ICU saturation, given the extremely low levels of mobility that would be required. In the worst case, our model estimates that four thousand deaths would occur, of which 30\% could be avoided with proper medical attention. Nonetheless, we found that aggressive testing would allow an increase in the percentage of people that can circulate without restrictions, with the equipment required to deal with the additional critical patients remaining relatively low. |
1507.08602 | Matthias Jorg Fuhr | Matthias J\"org Fuhr, Michael Meyer, Eric Fehr, Gilles Ponzio, Sabine
Werner, Hans J\"urgen Herrmann | A modeling approach to study the effect of cell polarization on
keratinocyte migration | 27 pages, 3 figures | PLoS ONE. 2015;10(2):e0117676 | 10.1371/journal.pone.0117676 | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The skin forms an efficient barrier against the environment, and rapid
cutaneous wound healing after injury is therefore essential. Healing of the
uppermost layer of the skin, the epidermis, involves collective migration of
keratinocytes, which requires coordinated polarization of the cells. To study
this process, we developed a model that allows analysis of live-cell images of
migrating keratinocytes in culture based on a small number of parameters,
including the radius of the cells, their mass and their polarization. This
computational approach allowed the analysis of cell migration at the front of
the wound and a reliable identification and quantification of the impaired
polarization and migration of keratinocytes from mice lacking fibroblast growth
factors 1 and 2, an established model of impaired healing. Therefore, our
modeling approach is suitable for large-scale analysis of migration phenotypes
of cells with specific genetic defects or upon treatment with different
pharmacological agents.
| [
{
"created": "Wed, 29 Jul 2015 07:01:48 GMT",
"version": "v1"
}
] | 2015-07-31 | [
[
"Fuhr",
"Matthias Jörg",
""
],
[
"Meyer",
"Michael",
""
],
[
"Fehr",
"Eric",
""
],
[
"Ponzio",
"Gilles",
""
],
[
"Werner",
"Sabine",
""
],
[
"Herrmann",
"Hans Jürgen",
""
]
] | The skin forms an efficient barrier against the environment, and rapid cutaneous wound healing after injury is therefore essential. Healing of the uppermost layer of the skin, the epidermis, involves collective migration of keratinocytes, which requires coordinated polarization of the cells. To study this process, we developed a model that allows analysis of live-cell images of migrating keratinocytes in culture based on a small number of parameters, including the radius of the cells, their mass and their polarization. This computational approach allowed the analysis of cell migration at the front of the wound and a reliable identification and quantification of the impaired polarization and migration of keratinocytes from mice lacking fibroblast growth factors 1 and 2, an established model of impaired healing. Therefore, our modeling approach is suitable for large-scale analysis of migration phenotypes of cells with specific genetic defects or upon treatment with different pharmacological agents. |
2103.04844 | James Gregory | James Gregory, Tom Shearer, Andrew L. Hazel | A microstructural model of tendon failure | 26 pages, 10 figures | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Collagen fibrils are the most important structural component of tendons.
Their crimped structure and parallel arrangement within the tendon lead to a
distinctive non-linear stress-strain curve when a tendon is stretched.
Microstructural models can be used to relate microscale collagen fibril
mechanics to macroscale tendon mechanics, allowing us to identify the
mechanisms behind each feature present in the stress-strain curve. Most models
in the literature focus on the elastic behaviour of the tendon, and there are
few which model beyond the elastic limit without introducing phenomenological
parameters. We develop a model, built upon a collagen recruitment approach,
that only contains microstructural parameters. We split the stress in the
fibrils into elastic and plastic parts, and assume that the fibril yield
stretch and rupture stretch are each described by a distribution function,
rather than being single-valued. By changing the shapes of the distributions
and their regions of overlap, we can produce macroscale tendon stress-strain
curves that generate the full range of features observed experimentally,
including those that could not be explained using existing models. These
features include second linear regions occurring after the tendon has yielded,
and step-like failure behaviour present after the stress has peaked. When we
compare with an existing model, we find that our model reduces the average root
mean squared error from 4.15MPa to 1.61MPa, and the resulting parameter values
are closer to those found experimentally. Since our model contains only
parameters that have a direct physical interpretation, it can be used to
predict how processes such as ageing, disease, and injury affect the mechanical
behaviour of tendons, provided we can quantify the effects of these processes
on the microstructure.
| [
{
"created": "Mon, 8 Mar 2021 15:50:52 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Gregory",
"James",
""
],
[
"Shearer",
"Tom",
""
],
[
"Hazel",
"Andrew L.",
""
]
] | Collagen fibrils are the most important structural component of tendons. Their crimped structure and parallel arrangement within the tendon lead to a distinctive non-linear stress-strain curve when a tendon is stretched. Microstructural models can be used to relate microscale collagen fibril mechanics to macroscale tendon mechanics, allowing us to identify the mechanisms behind each feature present in the stress-strain curve. Most models in the literature focus on the elastic behaviour of the tendon, and there are few which model beyond the elastic limit without introducing phenomenological parameters. We develop a model, built upon a collagen recruitment approach, that only contains microstructural parameters. We split the stress in the fibrils into elastic and plastic parts, and assume that the fibril yield stretch and rupture stretch are each described by a distribution function, rather than being single-valued. By changing the shapes of the distributions and their regions of overlap, we can produce macroscale tendon stress-strain curves that generate the full range of features observed experimentally, including those that could not be explained using existing models. These features include second linear regions occurring after the tendon has yielded, and step-like failure behaviour present after the stress has peaked. When we compare with an existing model, we find that our model reduces the average root mean squared error from 4.15MPa to 1.61MPa, and the resulting parameter values are closer to those found experimentally. Since our model contains only parameters that have a direct physical interpretation, it can be used to predict how processes such as ageing, disease, and injury affect the mechanical behaviour of tendons, provided we can quantify the effects of these processes on the microstructure. |
1508.01537 | Rui Ponte Costa | Rui Ponte Costa | Computational model of axon guidance | Master research thesis | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Axon guidance (AG) towards their target during embryogenesis or after injury
is an important issue in the development of neuronal networks. During their
growth, axons often face complex decisions that are difficult to understand
when observing just a small part of the problem. In this work we propose a
computational model of AG based on activity-independent mechanisms that takes
into account the most important aspects of AG.
The model includes the main elements (neurons, with soma, axon and growth
cone; glial cells acting as guideposts) and mechanisms (attraction/repulsion
guidance cues, growth cone adaptation, tissue-gradient intersections, axonal
transport, changes in the growth cone complexity and a range of responses for
each receptor). The growth cone guidance is defined as a function that maps the
receptor activation by ligands into a repulsive or attractive force. This force
is then converted into a turning angle using spherical coordinates. A
regulatory network between the receptors and the intracellular proteins is
considered, leading to more complex and realistic behaviors. The ligand
diffusion through the extracellular environment is modeled with linear or
exponential functions. Concerning experimentation, we developed the first
computational model and a new theoretical model of the midline crossing of
Drosophila axons that focuses on all the decision points. The computational model
created describes, to a great extent, the behaviors that have been
reported in the literature, for three different pathfinding scenarios: (i)
normal, (ii) comm mutant and (iii) robo mutant. Moreover, this model suggests
new hypotheses, the most relevant being the existence of an inhibitory link
between the DCC receptor and the Comm protein that is Netrin-mediated or
mediated by a third unknown signal.
| [
{
"created": "Thu, 6 Aug 2015 20:40:34 GMT",
"version": "v1"
}
] | 2015-08-10 | [
[
"Costa",
"Rui Ponte",
""
]
] | Axon guidance (AG) towards their target during embryogenesis or after injury is an important issue in the development of neuronal networks. During their growth, axons often face complex decisions that are difficult to understand when observing just a small part of the problem. In this work we propose a computational model of AG based on activity-independent mechanisms that takes into account the most important aspects of AG. The model includes the main elements (neurons, with soma, axon and growth cone; glial cells acting as guideposts) and mechanisms (attraction/repulsion guidance cues, growth cone adaptation, tissue-gradient intersections, axonal transport, changes in the growth cone complexity and a range of responses for each receptor). The growth cone guidance is defined as a function that maps the receptor activation by ligands into a repulsive or attractive force. This force is then converted into a turning angle using spherical coordinates. A regulatory network between the receptors and the intracellular proteins is considered, leading to more complex and realistic behaviors. The ligand diffusion through the extracellular environment is modeled with linear or exponential functions. Concerning experimentation, we developed the first computational model and a new theoretical model of the midline crossing of Drosophila axons that focuses on all the decision points. The computational model created describes, to a great extent, the behaviors that have been reported in the literature, for three different pathfinding scenarios: (i) normal, (ii) comm mutant and (iii) robo mutant. Moreover, this model suggests new hypotheses, the most relevant being the existence of an inhibitory link between the DCC receptor and the Comm protein that is Netrin-mediated or mediated by a third unknown signal. |
0806.3738 | Ernest Barreto | John R. Cressman Jr., Ghanim Ullah, Jokubas Ziburkus, Steven J.
Schiff, and Ernest Barreto | The Influence of Sodium and Potassium Dynamics on Excitability,
Seizures, and the Stability of Persistent States: I. Single Neuron Dynamics | Post-review revision. 8 figures. This is the first of a pair of
related papers; see also arXiv:0806.3741 | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In these companion papers, we study how the interrelated dynamics of sodium
and potassium affect the excitability of neurons, the occurrence of seizures,
and the stability of persistent states of activity. In this first paper, we
construct a mathematical model consisting of a single conductance-based neuron
together with intra- and extracellular ion concentration dynamics. We formulate
a reduction of this model that permits a detailed bifurcation analysis, and
show that the reduced model is a reasonable approximation of the full model. We
find that competition between intrinsic neuronal currents, sodium-potassium
pumps, glia, and diffusion can produce very slow and large-amplitude
oscillations in ion concentrations similar to what is seen physiologically in
seizures. Using the reduced model, we identify the dynamical mechanisms that
give rise to these phenomena. These models reveal several experimentally
testable predictions. Our work emphasizes the critical role of ion
concentration homeostasis in the proper functioning of neurons, and points to
important fundamental processes that may underlie pathological states such as
epilepsy.
| [
{
"created": "Mon, 23 Jun 2008 19:25:29 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Jun 2008 14:38:42 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Oct 2008 00:31:53 GMT",
"version": "v3"
}
] | 2009-09-29 | [
[
"Cressman",
"John R.",
"Jr."
],
[
"Ullah",
"Ghanim",
""
],
[
"Ziburkus",
"Jokubas",
""
],
[
"Schiff",
"Steven J.",
""
],
[
"Barreto",
"Ernest",
""
]
] | In these companion papers, we study how the interrelated dynamics of sodium and potassium affect the excitability of neurons, the occurrence of seizures, and the stability of persistent states of activity. In this first paper, we construct a mathematical model consisting of a single conductance-based neuron together with intra- and extracellular ion concentration dynamics. We formulate a reduction of this model that permits a detailed bifurcation analysis, and show that the reduced model is a reasonable approximation of the full model. We find that competition between intrinsic neuronal currents, sodium-potassium pumps, glia, and diffusion can produce very slow and large-amplitude oscillations in ion concentrations similar to what is seen physiologically in seizures. Using the reduced model, we identify the dynamical mechanisms that give rise to these phenomena. These models reveal several experimentally testable predictions. Our work emphasizes the critical role of ion concentration homeostasis in the proper functioning of neurons, and points to important fundamental processes that may underlie pathological states such as epilepsy. |
1406.1976 | Wenlian Lu | Y. Yao, W. L. Lu, B. Xu, C. B. Li, C. P. Lin, D. Waxman, J. F. Feng | The Increase of the Functional Entropy of the Human Brain with Age | 8 pages, 5 figures | Scientific Reports, 3:2853, 2013 | 10.1038/srep02853 | null | q-bio.QM physics.med-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use entropy to characterize intrinsic ageing properties of the human
brain. Analysis of fMRI data from a large dataset of individuals, using resting
state BOLD signals, demonstrated that a functional entropy associated with
brain activity increases with age. During an average lifespan, the entropy,
which was calculated from a population of individuals, increased by
approximately 0.1 bits, due to correlations in BOLD activity becoming more
widely distributed. We attribute this to the number of excitatory neurons and
the excitatory conductance decreasing with age. Incorporating these properties
into a computational model leads to quantitatively similar results to the fMRI
data. Our dataset involved males and females and we found significant
differences between them. The entropy of males at birth was lower than that of
females. However, the entropies of the two sexes increase at different rates,
and intersect at approximately 50 years; after this age, males have a larger
entropy.
| [
{
"created": "Sun, 8 Jun 2014 12:03:11 GMT",
"version": "v1"
}
] | 2014-06-10 | [
[
"Yao",
"Y.",
""
],
[
"Lu",
"W. L.",
""
],
[
"Xu",
"B.",
""
],
[
"Li",
"C. B.",
""
],
[
"Lin",
"C. P.",
""
],
[
"Waxman",
"D.",
""
],
[
"Feng",
"J. F.",
""
]
] | We use entropy to characterize intrinsic ageing properties of the human brain. Analysis of fMRI data from a large dataset of individuals, using resting state BOLD signals, demonstrated that a functional entropy associated with brain activity increases with age. During an average lifespan, the entropy, which was calculated from a population of individuals, increased by approximately 0.1 bits, due to correlations in BOLD activity becoming more widely distributed. We attribute this to the number of excitatory neurons and the excitatory conductance decreasing with age. Incorporating these properties into a computational model leads to quantitatively similar results to the fMRI data. Our dataset involved males and females and we found significant differences between them. The entropy of males at birth was lower than that of females. However, the entropies of the two sexes increase at different rates, and intersect at approximately 50 years; after this age, males have a larger entropy. |
1701.07787 | Jere Koskela | Jere Koskela | Multi-locus data distinguishes between population growth and multiple
merger coalescents | 24 pages, 13 figures | Statistical Applications in Genetics and Molecular Biology
17(3):20170011, 2018 | 10.1515/sagmb-2017-0011 | null | q-bio.PE q-bio.QM stat.CO stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a low dimensional function of the site frequency spectrum that
is tailor-made for distinguishing coalescent models with multiple mergers from
Kingman coalescent models with population growth, and use this function to
construct a hypothesis test between these model classes. The null and
alternative sampling distributions of the statistic are intractable, but its
low dimensionality renders them amenable to Monte Carlo estimation. We
construct kernel density estimates of the sampling distributions based on
simulated data, and show that the resulting hypothesis test dramatically
improves on the statistical power of a current state-of-the-art method. A key
reason for this improvement is the use of multi-locus data, in particular
averaging observed site frequency spectra across unlinked loci to reduce
sampling variance. We also demonstrate the robustness of our method to nuisance
and tuning parameters. Finally we show that the same kernel density estimates
can be used to conduct parameter estimation, and argue that our method is
readily generalisable for applications in model selection, parameter inference
and experimental design.
| [
{
"created": "Thu, 26 Jan 2017 17:40:31 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Feb 2017 17:18:01 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Feb 2017 12:18:42 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Sep 2017 14:18:29 GMT",
"version": "v4"
},
{
"created": "Fri, 23 Mar 2018 08:48:37 GMT",
"version": "v5"
},
{
"created": "Thu, 19 Apr 2018 15:17:51 GMT",
"version": "v6"
}
] | 2019-08-13 | [
[
"Koskela",
"Jere",
""
]
] | We introduce a low dimensional function of the site frequency spectrum that is tailor-made for distinguishing coalescent models with multiple mergers from Kingman coalescent models with population growth, and use this function to construct a hypothesis test between these model classes. The null and alternative sampling distributions of the statistic are intractable, but its low dimensionality renders them amenable to Monte Carlo estimation. We construct kernel density estimates of the sampling distributions based on simulated data, and show that the resulting hypothesis test dramatically improves on the statistical power of a current state-of-the-art method. A key reason for this improvement is the use of multi-locus data, in particular averaging observed site frequency spectra across unlinked loci to reduce sampling variance. We also demonstrate the robustness of our method to nuisance and tuning parameters. Finally we show that the same kernel density estimates can be used to conduct parameter estimation, and argue that our method is readily generalisable for applications in model selection, parameter inference and experimental design. |
1811.07560 | Tim Landgraf | Johannes Polster, Julian Petrasch, Randolf Menzel, Tim Landgraf | Reconstructing the visual perception of honey bees in complex 3-D worlds | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last decades, honeybees have been a fascinating model to study
insect navigation. While there is some controversy about the complexity of
underlying neural correlates, research on honeybee navigation makes
progress through both the analysis of flight behavior and the synthesis of
agent models. Since visual cues are believed to play a crucial role for the
behavioral output of a navigating bee we have developed a realistic
3-dimensional virtual world, in which simulated agents can be tested, or in
which the visual input of experimentally traced animals can be reconstructed.
In this paper we present implementation details on how we reconstructed a large
3-dimensional world from aerial imagery of one of our field sites, how the
distribution of ommatidia and their view geometry was modeled, and how the
system samples from the scene to obtain realistic bee views. This system is
made available as an open-source project to the community on
\url{http://github.com/bioroboticslab/bee_view}.
| [
{
"created": "Mon, 19 Nov 2018 09:00:10 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jun 2019 15:18:28 GMT",
"version": "v2"
}
] | 2019-06-24 | [
[
"Polster",
"Johannes",
""
],
[
"Petrasch",
"Julian",
""
],
[
"Menzel",
"Randolf",
""
],
[
"Landgraf",
"Tim",
""
]
] | Over the last decades, honeybees have been a fascinating model to study insect navigation. While there is some controversy about the complexity of underlying neural correlates, the research of honeybee navigation makes progress through both the analysis of flight behavior and the synthesis of agent models. Since visual cues are believed to play a crucial role for the behavioral output of a navigating bee we have developed a realistic 3-dimensional virtual world, in which simulated agents can be tested, or in which the visual input of experimentally traced animals can be reconstructed. In this paper we present implementation details on how we reconstructed a large 3-dimensional world from aerial imagery of one of our field sites, how the distribution of ommatidia and their view geometry was modeled, and how the system samples from the scene to obtain realistic bee views. This system is made available as an open-source project to the community on \url{http://github.com/bioroboticslab/bee_view}. |
1803.00063 | Laura Wadkin MMath | L E Wadkin, S Orozco-Fuentes, I Neganova, G Swan, A Laude, M Lako, A
Shukurov and N G Parker | Correlated random walks of human embryonic stem cell in-vitro | 19 pages, 12 Figures | null | null | null | q-bio.CB physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We perform a detailed analysis of the migratory motion of human embryonic
stem cells in two dimensions, both when isolated and in close proximity to
another cell, recorded with time-lapse microscopic imaging. We show that
isolated cells tend to perform an unusual locally anisotropic walk, moving
backwards and forwards along a preferred local direction correlated over a
timescale of around 50 minutes and aligned with the axis of the cell
elongation. Increasing elongation of the cell shape is associated with
increased instantaneous migration speed. We also show that two cells in close
proximity tend to move in the same direction, with an average separation of 70
um or less and a correlation length of around 25 um, a typical cell diameter.
These results can be used as a basis for the mathematical modelling of the
formation of clonal hESC colonies.
| [
{
"created": "Tue, 27 Feb 2018 16:13:17 GMT",
"version": "v1"
}
] | 2018-03-02 | [
[
"Wadkin",
"L E",
""
],
[
"Orozco-Fuentes",
"S",
""
],
[
"Neganova",
"I",
""
],
[
"Swan",
"G",
""
],
[
"Laude",
"A",
""
],
[
"Lako",
"M",
""
],
[
"Shukurov",
"A",
""
],
[
"Parker",
"N G",
""
]
] | We perform a detailed analysis of the migratory motion of human embryonic stem cells in two dimensions, both when isolated and in close proximity to another cell, recorded with time-lapse microscopic imaging. We show that isolated cells tend to perform an unusual locally anisotropic walk, moving backwards and forwards along a preferred local direction correlated over a timescale of around 50 minutes and aligned with the axis of the cell elongation. Increasing elongation of the cell shape is associated with increased instantaneous migration speed. We also show that two cells in close proximity tend to move in the same direction, with an average separation of 70 um or less and a correlation length of around 25 um, a typical cell diameter. These results can be used as a basis for the mathematical modelling of the formation of clonal hESC colonies. |
2105.14372 | Anthony Gitter | Benjamin D. Lee, Anthony Gitter, Casey S. Greene, Sebastian Raschka,
Finlay Maguire, Alexander J. Titus, Michael D. Kessler, Alexandra J. Lee,
Marc G. Chevrette, Paul Allen Stewart, Thiago Britto-Borges, Evan M. Cofer,
Kun-Hsing Yu, Juan Jose Carmona, Elana J. Fertig, Alexandr A. Kalinin, Beth
Signal, Benjamin J. Lengerich, Timothy J. Triche Jr, Simina M. Boca | Ten Quick Tips for Deep Learning in Biology | 23 pages, 2 figures | null | 10.1371/journal.pcbi.1009803 | null | q-bio.OT cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning is a modern approach to problem-solving and task automation.
In particular, machine learning is concerned with the development and
applications of algorithms that can recognize patterns in data and use them for
predictive modeling. Artificial neural networks are a particular class of
machine learning algorithms and models that evolved into what is now described
as deep learning. Given the computational advances made in the last decade,
deep learning can now be applied to massive data sets and in innumerable
contexts. Therefore, deep learning has become its own subfield of machine
learning. In the context of biological research, it has been increasingly used
to derive novel insights from high-dimensional biological data. To make the
biological applications of deep learning more accessible to scientists who have
some experience with machine learning, we solicited input from a community of
researchers with varied biological and deep learning interests. These
individuals collaboratively contributed to this manuscript's writing using the
GitHub version control platform and the Manubot manuscript generation toolset.
The goal was to articulate a practical, accessible, and concise set of
guidelines and suggestions to follow when using deep learning. In the course of
our discussions, several themes became clear: the importance of understanding
and applying machine learning fundamentals as a baseline for utilizing deep
learning, the necessity for extensive model comparisons with careful
evaluation, and the need for critical thought in interpreting results generated
by deep learning, among others.
| [
{
"created": "Sat, 29 May 2021 21:02:44 GMT",
"version": "v1"
}
] | 2022-05-04 | [
[
"Lee",
"Benjamin D.",
""
],
[
"Gitter",
"Anthony",
""
],
[
"Greene",
"Casey S.",
""
],
[
"Raschka",
"Sebastian",
""
],
[
"Maguire",
"Finlay",
""
],
[
"Titus",
"Alexander J.",
""
],
[
"Kessler",
"Michael D.",
""
],
[
"Lee",
"Alexandra J.",
""
],
[
"Chevrette",
"Marc G.",
""
],
[
"Stewart",
"Paul Allen",
""
],
[
"Britto-Borges",
"Thiago",
""
],
[
"Cofer",
"Evan M.",
""
],
[
"Yu",
"Kun-Hsing",
""
],
[
"Carmona",
"Juan Jose",
""
],
[
"Fertig",
"Elana J.",
""
],
[
"Kalinin",
"Alexandr A.",
""
],
[
"Signal",
"Beth",
""
],
[
"Lengerich",
"Benjamin J.",
""
],
[
"Triche",
"Timothy J.",
"Jr"
],
[
"Boca",
"Simina M.",
""
]
] | Machine learning is a modern approach to problem-solving and task automation. In particular, machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling. Artificial neural networks are a particular class of machine learning algorithms and models that evolved into what is now described as deep learning. Given the computational advances made in the last decade, deep learning can now be applied to massive data sets and in innumerable contexts. Therefore, deep learning has become its own subfield of machine learning. In the context of biological research, it has been increasingly used to derive novel insights from high-dimensional biological data. To make the biological applications of deep learning more accessible to scientists who have some experience with machine learning, we solicited input from a community of researchers with varied biological and deep learning interests. These individuals collaboratively contributed to this manuscript's writing using the GitHub version control platform and the Manubot manuscript generation toolset. The goal was to articulate a practical, accessible, and concise set of guidelines and suggestions to follow when using deep learning. In the course of our discussions, several themes became clear: the importance of understanding and applying machine learning fundamentals as a baseline for utilizing deep learning, the necessity for extensive model comparisons with careful evaluation, and the need for critical thought in interpreting results generated by deep learning, among others. |
q-bio/0601048 | Radek Erban | Radek Erban and Hans G. Othmer | Taxis Equations for Amoeboid Cells | 35 pages, submitted to the Journal of Mathematical Biology | null | null | null | q-bio.CB physics.bio-ph | null | The classical macroscopic chemotaxis equations have previously been derived
from an individual-based description of the tactic response of cells that use a
"run-and-tumble" strategy in response to environmental cues. Here we derive
macroscopic equations for the more complex type of behavioral response
characteristic of crawling cells, which detect a signal, extract directional
information from a scalar concentration field, and change their motile behavior
accordingly. We present several models of increasing complexity for which the
derivation of population-level equations is possible, and we show how
experimentally-measured statistics can be obtained from the transport equation
formalism. We also show that amoeboid cells that do not adapt to constant
signals can still aggregate in steady gradients, but not in response to
periodic waves. This is in contrast to the case of cells that use a
"run-and-tumble" strategy, where adaptation is essential.
| [
{
"created": "Sun, 29 Jan 2006 01:47:59 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Erban",
"Radek",
""
],
[
"Othmer",
"Hans G.",
""
]
] | The classical macroscopic chemotaxis equations have previously been derived from an individual-based description of the tactic response of cells that use a "run-and-tumble" strategy in response to environmental cues. Here we derive macroscopic equations for the more complex type of behavioral response characteristic of crawling cells, which detect a signal, extract directional information from a scalar concentration field, and change their motile behavior accordingly. We present several models of increasing complexity for which the derivation of population-level equations is possible, and we show how experimentally-measured statistics can be obtained from the transport equation formalism. We also show that amoeboid cells that do not adapt to constant signals can still aggregate in steady gradients, but not in response to periodic waves. This is in contrast to the case of cells that use a "run-and-tumble" strategy, where adaptation is essential. |
0801.0606 | Leah B. Shaw | Leah B. Shaw, Ira B. Schwartz | Fluctuating epidemics on adaptive networks | Submitted to Phys Rev E | null | 10.1103/PhysRevE.77.066101 | null | q-bio.PE physics.soc-ph | null | A model for epidemics on an adaptive network is considered. Nodes follow an
SIRS (susceptible-infective-recovered-susceptible) pattern. Connections are
rewired to break links from non-infected nodes to infected nodes and are
reformed to connect to other non-infected nodes, as the nodes that are not
infected try to avoid the infection. Monte Carlo simulation and numerical
solution of a mean field model are employed. The introduction of rewiring
affects both the network structure and the epidemic dynamics. Degree
distributions are altered, and the average distance from a node to the nearest
infective increases. The rewiring leads to regions of bistability where either
an endemic or a disease-free steady state can exist. Fluctuations around the
endemic state and the lifetime of the endemic state are considered. The
fluctuations are found to exhibit power law behavior.
| [
{
"created": "Thu, 3 Jan 2008 21:42:00 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Shaw",
"Leah B.",
""
],
[
"Schwartz",
"Ira B.",
""
]
] | A model for epidemics on an adaptive network is considered. Nodes follow an SIRS (susceptible-infective-recovered-susceptible) pattern. Connections are rewired to break links from non-infected nodes to infected nodes and are reformed to connect to other non-infected nodes, as the nodes that are not infected try to avoid the infection. Monte Carlo simulation and numerical solution of a mean field model are employed. The introduction of rewiring affects both the network structure and the epidemic dynamics. Degree distributions are altered, and the average distance from a node to the nearest infective increases. The rewiring leads to regions of bistability where either an endemic or a disease-free steady state can exist. Fluctuations around the endemic state and the lifetime of the endemic state are considered. The fluctuations are found to exhibit power law behavior. |
1406.1452 | Sebastien Benzekry | S\'ebastien Benzekry, Alberto Gandolfi, Philip Hahnfeldt | Global Dormancy of Metastases Due to Systemic Inhibition of Angiogenesis | 5 figures, 2 tables | PLoS ONE 9(1): e84249 | 10.1371/journal.pone.0084249 | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autopsy studies of adults dying of non-cancer causes have shown that
virtually all of us possess occult, cancerous lesions. This suggests that, for
most individuals, cancer will become dormant and not progress, while only in
some will it become symptomatic disease. Meanwhile, it was recently shown in
animal models that a tumor can produce both stimulators and inhibitors of its
own blood supply. To explain the autopsy findings in light of the preclinical
research data, we propose a mathematical model of cancer development at the
organism scale describing a growing population of metastases, which, together
with the primary tumor, can exert a progressively greater level of systemic
angiogenesis-inhibitory influence that eventually overcomes local angiogenesis
stimulation to suppress the growth of all lesions. As a departure from modeling
efforts to date, we look not just at signaling from and effects on the primary
tumor, but integrate over this increasingly negative global signaling from all
sources to track the development of total tumor burden. This in silico study of
the dynamics of the tumor/metastasis system identifies ranges of parameter
values where mutual angio-inhibitory interactions within a population of tumor
lesions could yield global dormancy, i.e., an organism-level homeostatic steady
state in total tumor burden. Given that mortality arises most often from
metastatic disease rather than growth of the primary per se, this finding may
have important therapeutic implications.
| [
{
"created": "Thu, 5 Jun 2014 17:28:27 GMT",
"version": "v1"
}
] | 2014-06-06 | [
[
"Benzekry",
"Sébastien",
""
],
[
"Gandolfi",
"Alberto",
""
],
[
"Hahnfeldt",
"Philip",
""
]
] | Autopsy studies of adults dying of non-cancer causes have shown that virtually all of us possess occult, cancerous lesions. This suggests that, for most individuals, cancer will become dormant and not progress, while only in some will it become symptomatic disease. Meanwhile, it was recently shown in animal models that a tumor can produce both stimulators and inhibitors of its own blood supply. To explain the autopsy findings in light of the preclinical research data, we propose a mathematical model of cancer development at the organism scale describing a growing population of metastases, which, together with the primary tumor, can exert a progressively greater level of systemic angiogenesis-inhibitory influence that eventually overcomes local angiogenesis stimulation to suppress the growth of all lesions. As a departure from modeling efforts to date, we look not just at signaling from and effects on the primary tumor, but integrate over this increasingly negative global signaling from all sources to track the development of total tumor burden. This in silico study of the dynamics of the tumor/metastasis system identifies ranges of parameter values where mutual angio-inhibitory interactions within a population of tumor lesions could yield global dormancy, i.e., an organism-level homeostatic steady state in total tumor burden. Given that mortality arises most often from metastatic disease rather than growth of the primary per se, this finding may have important therapeutic implications. |
2208.10286 | Enrico Carlon | Aderik Voorspoels, Jocelyne Vreede, Enrico Carlon | Rigid Base Biasing in Molecular Dynamics enables enhanced sampling of
DNA conformations | 12 pages, 6 figures | null | null | null | q-bio.BM cond-mat.soft cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | All-atom simulations have become increasingly popular to study conformational
and dynamical properties of nucleic acids as they are accurate and provide high
spatial and time resolutions. This high resolution however comes at a heavy
computational cost and within the time scales of simulations nucleic acids
weakly fluctuate around their ideal structure exploring a limited set of
conformations. We introduce the RBB-NA algorithm which is capable of
controlling rigid base parameters in all-atom simulations of Nucleic Acids.
With suitable biasing potentials this algorithm can "force" a DNA or RNA
molecule to assume specific values of the six rotational (tilt, roll, twist,
buckle, propeller, opening) and/or the six translational parameters (shift,
slide, rise, shear, stretch, stagger). The algorithm enables the use of
advanced sampling techniques to probe the structure and dynamics of locally
strongly deformed Nucleic Acids. We illustrate its performance showing some
examples in which DNA is strongly twisted, bent or locally buckled. In these
examples RBB-NA reproduces well the unconstrained simulations data and other
known features of DNA mechanics, but it also allows one to explore the
anharmonic behavior characterizing the mechanics of nucleic acids in the high
deformation regime.
| [
{
"created": "Mon, 22 Aug 2022 13:03:09 GMT",
"version": "v1"
}
] | 2022-08-23 | [
[
"Voorspoels",
"Aderik",
""
],
[
"Vreede",
"Jocelyne",
""
],
[
"Carlon",
"Enrico",
""
]
] | All-atom simulations have become increasingly popular to study conformational and dynamical properties of nucleic acids as they are accurate and provide high spatial and time resolutions. This high resolution however comes at a heavy computational cost and within the time scales of simulations nucleic acids weakly fluctuate around their ideal structure exploring a limited set of conformations. We introduce the RBB-NA algorithm which is capable of controlling rigid base parameters in all-atom simulations of Nucleic Acids. With suitable biasing potentials this algorithm can "force" a DNA or RNA molecule to assume specific values of the six rotational (tilt, roll, twist, buckle, propeller, opening) and/or the six translational parameters (shift, slide, rise, shear, stretch, stagger). The algorithm enables the use of advanced sampling techniques to probe the structure and dynamics of locally strongly deformed Nucleic Acids. We illustrate its performance showing some examples in which DNA is strongly twisted, bent or locally buckled. In these examples RBB-NA reproduces well the unconstrained simulations data and other known features of DNA mechanics, but it also allows one to explore the anharmonic behavior characterizing the mechanics of nucleic acids in the high deformation regime. |
1410.1116 | Andrew Sornborger | Andrew T. Sornborger and Louis Tao | A Unified Framework for Information Coding: Oscillations, Memory, and
Zombie Modes | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synchronous neural activity can improve neural processing and is believed to
mediate neuronal interaction by providing temporal windows during which
information is more easily transferred. We demonstrate a pulse gating mechanism
in a feedforward network that can exactly propagate graded information through
a multilayer circuit. Based on this mechanism, we present a unified framework
wherein neural information coding and processing can be considered as a product
of linear maps under the active control of a pulse generator. Distinct control
and processing components combine to form the basis for the binding,
propagation, and processing of dynamically routed information within neural
pathways. Using our framework, we construct example neural circuits to 1)
maintain a short-term memory, 2) compute time-windowed Fourier transforms, and
3) perform spatial rotations. We postulate that such circuits, with stereotyped
control and processing of information, are the neural correlates of Crick and
Koch's zombie modes.
| [
{
"created": "Sun, 5 Oct 2014 05:49:38 GMT",
"version": "v1"
}
] | 2014-10-07 | [
[
"Sornborger",
"Andrew T.",
""
],
[
"Tao",
"Louis",
""
]
] | Synchronous neural activity can improve neural processing and is believed to mediate neuronal interaction by providing temporal windows during which information is more easily transferred. We demonstrate a pulse gating mechanism in a feedforward network that can exactly propagate graded information through a multilayer circuit. Based on this mechanism, we present a unified framework wherein neural information coding and processing can be considered as a product of linear maps under the active control of a pulse generator. Distinct control and processing components combine to form the basis for the binding, propagation, and processing of dynamically routed information within neural pathways. Using our framework, we construct example neural circuits to 1) maintain a short-term memory, 2) compute time-windowed Fourier transforms, and 3) perform spatial rotations. We postulate that such circuits, with stereotyped control and processing of information, are the neural correlates of Crick and Koch's zombie modes. |
1505.00327 | Mariusz Pietruszka PhD | Mariusz A. Pietruszka | pH/$T$ duality - wall properties and time evolution of plant cells | 50 pages, 10 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examined the pH/$T$ (or $\mu$/$T$) duality of acidic pH and temperature
($T$) for the growth of grass shoots in order to determine the equation of
state (EoS) for living plants. By considering non-meristematic growth as a
dynamic series of 'state transitions' (STs) in the extending primary wall, we
identified the critical (read: optimum) exponents for this phenomenon, which
exhibit a singular behaviour at a critical temperature, critical pH and
critical chemical potential ($\mu$) in the form of four power laws:
$F(\tau)_\pi\propto|\tau|^{\beta-1}$, $F(\pi)_\tau\propto|\pi|^{1-\alpha}$,
$G(\tau)_\mu\propto|\tau|^{-2-\alpha+2\beta}$ and
$G(\mu)_\tau\propto|\mu|^{2-\alpha}$. The power-law exponents $\alpha$ and
$\beta$ are numbers, which are independent of pH (or $\mu$) and $T$ that are
known as critical exponents, while $\pi$ and $\tau$ represent a reduced pH and
reduced temperature, respectively. Various 'scaling' predictions were obtained
- a convexity relation $\alpha + \beta \ge 2$ for practical pH-based analysis
and a $\beta \equiv 2$ identity in microscopic representation. In the presented
scenario, the magnitude that is decisive is the chemical potential of H$^+$
ions (protons), enforcing subsequent STs and growth. The EoS span areas of the
biological, physical, chemical and Earth sciences cross the borders with the
language (adapted formalism) of phase transitions.
| [
{
"created": "Sat, 2 May 2015 08:56:45 GMT",
"version": "v1"
},
{
"created": "Thu, 7 May 2015 11:32:17 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Jul 2015 11:02:07 GMT",
"version": "v3"
},
{
"created": "Thu, 27 Jul 2017 07:24:26 GMT",
"version": "v4"
}
] | 2017-07-28 | [
[
"Pietruszka",
"Mariusz A.",
""
]
] | We examined the pH/$T$ (or $\mu$/$T$) duality of acidic pH and temperature ($T$) for the growth of grass shoots in order to determine the equation of state (EoS) for living plants. By considering non-meristematic growth as a dynamic series of 'state transitions' (STs) in the extending primary wall, we identified the critical (read: optimum) exponents for this phenomenon, which exhibit a singular behaviour at a critical temperature, critical pH and critical chemical potential ($\mu$) in the form of four power laws: $F(\tau)_\pi\propto|\tau|^{\beta-1}$, $F(\pi)_\tau\propto|\pi|^{1-\alpha}$, $G(\tau)_\mu\propto|\tau|^{-2-\alpha+2\beta}$ and $G(\mu)_\tau\propto|\mu|^{2-\alpha}$. The power-law exponents $\alpha$ and $\beta$ are numbers, which are independent of pH (or $\mu$) and $T$ that are known as critical exponents, while $\pi$ and $\tau$ represent a reduced pH and reduced temperature, respectively. Various 'scaling' predictions were obtained - a convexity relation $\alpha + \beta \ge 2$ for practical pH-based analysis and a $\beta \equiv 2$ identity in microscopic representation. In the presented scenario, the magnitude that is decisive is the chemical potential of H$^+$ ions (protons), enforcing subsequent STs and growth. The EoS span areas of the biological, physical, chemical and Earth sciences cross the borders with the language (adapted formalism) of phase transitions. |
1408.2761 | Christoph Adami | Aditi Gupta and Christoph Adami | Strong Selection Significantly Increases Epistatic Interactions in the
Long-Term Evolution of a Protein | 25 pages, 9 figures, plus Supplementary Material including
Supplementary Text S1-S7, Supplementary Tables S1-S2, and Supplementary
Figures S1-2. Version that appears in PLoS Genetics | PLoS Genetics 12 (2016) e1005960 | 10.1371/journal.pgen.1005960 | null | q-bio.PE q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epistatic interactions between residues determine a protein's adaptability
and shape its evolutionary trajectory. When a protein experiences a changed
environment, it is under strong selection to find a peak in the new fitness
landscape. It has been shown that strong selection increases epistatic
interactions as well as the ruggedness of the fitness landscape, but little is
known about how the epistatic interactions change under selection in the
long-term evolution of a protein. Here we analyze the evolution of epistasis in
the protease of the human immunodeficiency virus type 1 (HIV-1) using protease
sequences collected for almost a decade from both treated and untreated
patients, to understand how epistasis changes and how those changes impact the
long-term evolvability of a protein. We use an information-theoretic proxy for
epistasis that quantifies the co-variation between sites, and show that
positive information is a necessary (but not sufficient) condition that detects
epistasis in most cases. We analyze the "fossils" of the evolutionary
trajectories of the protein contained in the sequence data, and show that
epistasis continues to enrich under strong selection, but not for proteins
whose environment is unchanged. The increase in epistasis compensates for the
information loss due to sequence variability brought about by treatment, and
facilitates adaptation in the increasingly rugged fitness landscape of
treatment. While epistasis is thought to enhance evolvability via
valley-crossing early-on in adaptation, it can hinder adaptation later when the
landscape has turned rugged. However, we find no evidence that the HIV-1
protease has reached its potential for evolution after 9 years of adapting to a
drug environment that itself is constantly changing.
| [
{
"created": "Tue, 12 Aug 2014 16:04:59 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jan 2015 04:39:13 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Jun 2015 21:11:54 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Mar 2016 16:06:33 GMT",
"version": "v4"
}
] | 2016-04-01 | [
[
"Gupta",
"Aditi",
""
],
[
"Adami",
"Christoph",
""
]
] | Epistatic interactions between residues determine a protein's adaptability and shape its evolutionary trajectory. When a protein experiences a changed environment, it is under strong selection to find a peak in the new fitness landscape. It has been shown that strong selection increases epistatic interactions as well as the ruggedness of the fitness landscape, but little is known about how the epistatic interactions change under selection in the long-term evolution of a protein. Here we analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using protease sequences collected for almost a decade from both treated and untreated patients, to understand how epistasis changes and how those changes impact the long-term evolvability of a protein. We use an information-theoretic proxy for epistasis that quantifies the co-variation between sites, and show that positive information is a necessary (but not sufficient) condition that detects epistasis in most cases. We analyze the "fossils" of the evolutionary trajectories of the protein contained in the sequence data, and show that epistasis continues to enrich under strong selection, but not for proteins whose environment is unchanged. The increase in epistasis compensates for the information loss due to sequence variability brought about by treatment, and facilitates adaptation in the increasingly rugged fitness landscape of treatment. While epistasis is thought to enhance evolvability via valley-crossing early-on in adaptation, it can hinder adaptation later when the landscape has turned rugged. However, we find no evidence that the HIV-1 protease has reached its potential for evolution after 9 years of adapting to a drug environment that itself is constantly changing. |
1603.02264 | Romulus Breban | Romulus Breban | Prevention versus treatment: A game-theoretic approach | 9 pages, 1 figure, correction of previous version | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Empirical studies show that preference for prevention versus treatment
remains a subject of debate. We build a paradigm model combining a utility game
for the individual-level dilemma of prevention versus treatment, and a
compartmental model for the epidemic dynamic. We assume that individuals arrive
to maximize the utility of voluntary prevention, as the epidemic reaches an
endemic level alleviated by prevention and treatment. We thus obtain an
expression for the asymptotic prevention coverage. Notably, we obtain that, if
the relative cost of prevention versus treatment is sufficiently low, epidemics
may be averted through the use of prevention alone.
| [
{
"created": "Mon, 7 Mar 2016 10:31:02 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Jul 2016 14:04:47 GMT",
"version": "v2"
}
] | 2016-07-06 | [
[
"Breban",
"Romulus",
""
]
] | Empirical studies show that preference for prevention versus treatment remains a subject of debate. We build a paradigm model combining a utility game for the individual-level dilemma of prevention versus treatment, and a compartmental model for the epidemic dynamic. We assume that individuals arrive to maximize the utility of voluntary prevention, as the epidemic reaches an endemic level alleviated by prevention and treatment. We thus obtain an expression for the asymptotic prevention coverage. Notably, we obtain that, if the relative cost of prevention versus treatment is sufficiently low, epidemics may be averted through the use of prevention alone. |
1904.08648 | Jian Song | Jian Song, Benjamin Winkeljann, Oliver Lieleg | The lubricity of mucin solutions is robust toward changes in
physiological conditions | null | ACS Appl. Bio Mater. 2019, 2, 8, 3448-3457 | 10.1021/acsabm.9b00389 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solutions of manually purified gastric mucins have been shown to be promising
lubricants for biomedical purposes, where they can efficiently reduce friction
and wear. However, so far, such mucin solutions have been mostly tested in
specific settings, and variations in the composition of the lubricating fluid
have not been systematically explored. We here fill this gap and determine the
viscosity, adsorption behavior, and lubricity of porcine gastric mucin
solutions on hydrophobic surfaces at different pH levels, mucin and salt
concentrations and in the presence of other proteins. We demonstrate that mucin
solutions provide excellent lubricity even at very low concentrations of 0.01 %
(w/v), over a broad range of pH levels and even at elevated ionic strength.
Furthermore, we provide mechanistic insights into mucin lubricity, which help
explain how certain variations in physiologically relevant parameters can limit
the lubricating potential of mucin solutions. Our results motivate that
solutions of manually purified mucin solutions can be powerful biomedical
lubricants, e.g. serving as eye drops, mouth sprays or as a personal lubricant
for intercourse.
| [
{
"created": "Thu, 18 Apr 2019 09:19:38 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2019 09:34:06 GMT",
"version": "v2"
}
] | 2019-08-21 | [
[
"Song",
"Jian",
""
],
[
"Winkeljann",
"Benjamin",
""
],
[
"Lieleg",
"Oliver",
""
]
] | Solutions of manually purified gastric mucins have been shown to be promising lubricants for biomedical purposes, where they can efficiently reduce friction and wear. However, so far, such mucin solutions have been mostly tested in specific settings, and variations in the composition of the lubricating fluid have not been systematically explored. We here fill this gap and determine the viscosity, adsorption behavior, and lubricity of porcine gastric mucin solutions on hydrophobic surfaces at different pH levels, mucin and salt concentrations and in the presence of other proteins. We demonstrate that mucin solutions provide excellent lubricity even at very low concentrations of 0.01 % (w/v), over a broad range of pH levels and even at elevated ionic strength. Furthermore, we provide mechanistic insights into mucin lubricity, which help explain how certain variations in physiologically relevant parameters can limit the lubricating potential of mucin solutions. Our results motivate that solutions of manually purified mucin solutions can be powerful biomedical lubricants, e.g. serving as eye drops, mouth sprays or as a personal lubricant for intercourse. |
0909.0737 | Irmtraud Meyer | Tin Yin Lam and Irmtraud M. Meyer | Efficient algorithms for training the parameters of hidden Markov models
using stochastic expectation maximization EM training and Viterbi training | 32 pages including 9 figures and 2 tables | BMC Algorithms for Molecular Biology (2010) 5:38 | null | null | q-bio.QM cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Hidden Markov models are widely employed by numerous
bioinformatics programs used today. Applications range widely from comparative
gene prediction to time-series analyses of micro-array data. The parameters of
the underlying models need to be adjusted for specific data sets, for example
the genome of a particular species, in order to maximize the prediction
accuracy. Computationally efficient algorithms for parameter training are thus
key to maximizing the usability of a wide range of bioinformatics applications.
Results: We introduce two computationally efficient training algorithms, one
for Viterbi training and one for stochastic expectation maximization (EM)
training, which render the memory requirements independent of the sequence
length. Unlike the existing algorithms for Viterbi and stochastic EM training
which require a two-step procedure, our two new algorithms require only one
step and scan the input sequence in only one direction. We also implement these
two new algorithms and the already published linear-memory algorithm for EM
training into the hidden Markov model compiler HMM-Converter and examine their
respective practical merits for three small example models.
Conclusions: Bioinformatics applications employing hidden Markov models can
use the two algorithms in order to make Viterbi training and stochastic EM
training more computationally efficient. Using these algorithms, parameter
training can thus be attempted for more complex models and longer training
sequences. The two new algorithms have the added advantage of being easier to
implement than the corresponding default algorithms for Viterbi training and
stochastic EM training.
| [
{
"created": "Thu, 3 Sep 2009 19:29:56 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Oct 2012 21:57:24 GMT",
"version": "v2"
}
] | 2012-10-18 | [
[
"Lam",
"Tin Yin",
""
],
[
"Meyer",
"Irmtraud M.",
""
]
] | Background: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications. Results: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-Converter and examine their respective practical merits for three small example models. Conclusions: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training. |
2311.16132 | Snehanshu Saha | Sourabh Patil, Archana Mathur, Raviprasad Aduri, Snehanshu Saha | A novel RNA pseudouridine site prediction model using Utility Kernel and
data-driven parameters | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | RNA protein Interactions (RPIs) play an important role in biological systems.
Recently, we have enumerated the RPIs at the residue level and have elucidated
the minimum structural unit (MSU) in these interactions to be a stretch of five
residues (Nucleotides/amino acids). Pseudouridine is the most frequent
modification in RNA. The conversion of uridine to pseudouridine involves
interactions between pseudouridine synthase and RNA. The existing models to
predict the pseudouridine sites in a given RNA sequence mainly depend on
user-defined features such as mono and dinucleotide composition/propensities of
RNA sequences. Predicting pseudouridine sites is a non-linear classification
problem with limited data points. Deep Learning models are efficient
discriminators when the data set size is reasonably large and fail when there
is a paucity of data ($<1000$ samples). To mitigate this problem, we propose a
Support Vector Machine (SVM) Kernel based on utility theory from Economics, and
using data-driven parameters (i.e. MSU) as features. For this purpose, we have
used position-specific tri/quad/pentanucleotide composition/propensity
(PSPC/PSPP) besides nucleotide and dinucleotide composition as features. SVMs
are known to work well in small data regimes and kernels in SVM are designed to
classify non-linear data. The proposed model outperforms the existing
state-of-the-art models significantly (10%-15% on average).
| [
{
"created": "Thu, 2 Nov 2023 08:32:10 GMT",
"version": "v1"
}
] | 2023-11-29 | [
[
"Patil",
"Sourabh",
""
],
[
"Mathur",
"Archana",
""
],
[
"Aduri",
"Raviprasad",
""
],
[
"Saha",
"Snehanshu",
""
]
] ] | RNA protein Interactions (RPIs) play an important role in biological systems. Recently, we have enumerated the RPIs at the residue level and have elucidated the minimum structural unit (MSU) in these interactions to be a stretch of five residues (Nucleotides/amino acids). Pseudouridine is the most frequent modification in RNA. The conversion of uridine to pseudouridine involves interactions between pseudouridine synthase and RNA. The existing models to predict the pseudouridine sites in a given RNA sequence mainly depend on user-defined features such as mono and dinucleotide composition/propensities of RNA sequences. Predicting pseudouridine sites is a non-linear classification problem with limited data points. Deep Learning models are efficient discriminators when the data set size is reasonably large and fail when there is a paucity of data ($<1000$ samples). To mitigate this problem, we propose a Support Vector Machine (SVM) Kernel based on utility theory from Economics, and using data-driven parameters (i.e. MSU) as features. For this purpose, we have used position-specific tri/quad/pentanucleotide composition/propensity (PSPC/PSPP) besides nucleotide and dinucleotide composition as features. SVMs are known to work well in small data regimes and kernels in SVM are designed to classify non-linear data. The proposed model outperforms the existing state-of-the-art models significantly (10%-15% on average).
1001.5292 | Hernan A. Makse | Viviane Galvao, Jose G. V. Miranda, Roberto F. S. Andrade, Jose S.
Andrade Jr., Lazaros K. Gallos, Hernan A. Makse | Modularity map of the network of human cell differentiation | 32 pages, 7 figures | Proc. Nat. Acad. Sci. 107, 5750 (2010) | 10.1073/pnas.0914748107 | null | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell differentiation in multicellular organisms is a complex process whose
mechanism can be understood by a reductionist approach, in which the individual
processes that control the generation of different cell types are identified.
Alternatively, a large scale approach in search of different organizational
features of the growth stages promises to reveal its modular global structure
with the goal of discovering previously unknown relations between cell types.
Here we sort and analyze a large set of scattered data to construct the network
of human cell differentiation (NHCD) based on cell types (nodes) and
differentiation steps (links) from the fertilized egg to a crying baby. We
discover a dynamical law of critical branching, which reveals a fractal
regularity in the modular organization of the network, and allows us to observe
the network at different scales. The emerging picture clearly identifies
clusters of cell types following a hierarchical organization, ranging from
sub-modules to super-modules of specialized tissues and organs on varying
scales. This discovery will allow one to treat the development of a particular
cell function in the context of the complex network of human development as a
whole. Our results point to an integrated large-scale view of the network of
cell types systematically revealing ties between previously unrelated domains
in organ functions.
| [
{
"created": "Thu, 28 Jan 2010 23:20:08 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2016 01:04:16 GMT",
"version": "v2"
}
] | 2016-03-29 | [
[
"Galvao",
"Viviane",
""
],
[
"Miranda",
"Jose G. V.",
""
],
[
"Andrade",
"Roberto F. S.",
""
],
[
"Andrade",
"Jose S.",
"Jr."
],
[
"Gallos",
"Lazaros K.",
""
],
[
"Makse",
"Hernan A.",
""
]
] | Cell differentiation in multicellular organisms is a complex process whose mechanism can be understood by a reductionist approach, in which the individual processes that control the generation of different cell types are identified. Alternatively, a large scale approach in search of different organizational features of the growth stages promises to reveal its modular global structure with the goal of discovering previously unknown relations between cell types. Here we sort and analyze a large set of scattered data to construct the network of human cell differentiation (NHCD) based on cell types (nodes) and differentiation steps (links) from the fertilized egg to a crying baby. We discover a dynamical law of critical branching, which reveals a fractal regularity in the modular organization of the network, and allows us to observe the network at different scales. The emerging picture clearly identifies clusters of cell types following a hierarchical organization, ranging from sub-modules to super-modules of specialized tissues and organs on varying scales. This discovery will allow one to treat the development of a particular cell function in the context of the complex network of human development as a whole. Our results point to an integrated large-scale view of the network of cell types systematically revealing ties between previously unrelated domains in organ functions. |
1502.02783 | Iosif Lazaridis | Wolfgang Haak, Iosif Lazaridis, Nick Patterson, Nadin Rohland, Swapan
Mallick, Bastien Llamas, Guido Brandt, Susanne Nordenfelt, Eadaoin Harney,
Kristin Stewardson, Qiaomei Fu, Alissa Mittnik, Eszter B\'anffy, Christos
Economou, Michael Francken, Susanne Friederich, Rafael Garrido Pena, Fredrik
Hallgren, Valery Khartanovich, Aleksandr Khokhlov, Michael Kunst, Pavel
Kuznetsov, Harald Meller, Oleg Mochalov, Vayacheslav Moiseyev, Nicole
Nicklisch, Sandra L. Pichler, Roberto Risch, Manuel A. Rojo Guerra, Christina
Roth, Anna Sz\'ecs\'enyi-Nagy, Joachim Wahl, Matthias Meyer, Johannes Krause,
Dorcas Brown, David Anthony, Alan Cooper, Kurt Werner Alt, and David Reich | Massive migration from the steppe is a source for Indo-European
languages in Europe | null | null | 10.1038/nature14317 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We generated genome-wide data from 69 Europeans who lived between 8,000-3,000
years ago by enriching ancient DNA libraries for a target set of almost four
hundred thousand polymorphisms. Enrichment of these positions decreases the
sequencing required for genome-wide ancient DNA analysis by a median of around
250-fold, allowing us to study an order of magnitude more individuals than
previous studies and to obtain new insights about the past. We show that the
populations of western and far eastern Europe followed opposite trajectories
between 8,000-5,000 years ago. At the beginning of the Neolithic period in
Europe, ~8,000-7,000 years ago, closely related groups of early farmers
appeared in Germany, Hungary, and Spain, different from indigenous
hunter-gatherers, whereas Russia was inhabited by a distinctive population of
hunter-gatherers with high affinity to a ~24,000 year old Siberian [6]. By
~6,000-5,000 years ago, a resurgence of hunter-gatherer ancestry had occurred
throughout much of Europe, but in Russia, the Yamnaya steppe herders of this
time were descended not only from the preceding eastern European
hunter-gatherers, but from a population of Near Eastern ancestry. Western and
Eastern Europe came into contact ~4,500 years ago, as the Late Neolithic Corded
Ware people from Germany traced ~3/4 of their ancestry to the Yamnaya,
documenting a massive migration into the heartland of Europe from its eastern
periphery. This steppe ancestry persisted in all sampled central Europeans
until at least ~3,000 years ago, and is ubiquitous in present-day Europeans.
These results provide support for the theory of a steppe origin of at least
some of the Indo-European languages of Europe.
| [
{
"created": "Tue, 10 Feb 2015 05:25:06 GMT",
"version": "v1"
}
] | 2015-08-19 | [
[
"Haak",
"Wolfgang",
""
],
[
"Lazaridis",
"Iosif",
""
],
[
"Patterson",
"Nick",
""
],
[
"Rohland",
"Nadin",
""
],
[
"Mallick",
"Swapan",
""
],
[
"Llamas",
"Bastien",
""
],
[
"Brandt",
"Guido",
""
],
[
"Nordenfelt",
"Susanne",
""
],
[
"Harney",
"Eadaoin",
""
],
[
"Stewardson",
"Kristin",
""
],
[
"Fu",
"Qiaomei",
""
],
[
"Mittnik",
"Alissa",
""
],
[
"Bánffy",
"Eszter",
""
],
[
"Economou",
"Christos",
""
],
[
"Francken",
"Michael",
""
],
[
"Friederich",
"Susanne",
""
],
[
"Pena",
"Rafael Garrido",
""
],
[
"Hallgren",
"Fredrik",
""
],
[
"Khartanovich",
"Valery",
""
],
[
"Khokhlov",
"Aleksandr",
""
],
[
"Kunst",
"Michael",
""
],
[
"Kuznetsov",
"Pavel",
""
],
[
"Meller",
"Harald",
""
],
[
"Mochalov",
"Oleg",
""
],
[
"Moiseyev",
"Vayacheslav",
""
],
[
"Nicklisch",
"Nicole",
""
],
[
"Pichler",
"Sandra L.",
""
],
[
"Risch",
"Roberto",
""
],
[
"Guerra",
"Manuel A. Rojo",
""
],
[
"Roth",
"Christina",
""
],
[
"Szécsényi-Nagy",
"Anna",
""
],
[
"Wahl",
"Joachim",
""
],
[
"Meyer",
"Matthias",
""
],
[
"Krause",
"Johannes",
""
],
[
"Brown",
"Dorcas",
""
],
[
"Anthony",
"David",
""
],
[
"Cooper",
"Alan",
""
],
[
"Alt",
"Kurt Werner",
""
],
[
"Reich",
"David",
""
]
] | We generated genome-wide data from 69 Europeans who lived between 8,000-3,000 years ago by enriching ancient DNA libraries for a target set of almost four hundred thousand polymorphisms. Enrichment of these positions decreases the sequencing required for genome-wide ancient DNA analysis by a median of around 250-fold, allowing us to study an order of magnitude more individuals than previous studies and to obtain new insights about the past. We show that the populations of western and far eastern Europe followed opposite trajectories between 8,000-5,000 years ago. At the beginning of the Neolithic period in Europe, ~8,000-7,000 years ago, closely related groups of early farmers appeared in Germany, Hungary, and Spain, different from indigenous hunter-gatherers, whereas Russia was inhabited by a distinctive population of hunter-gatherers with high affinity to a ~24,000 year old Siberian [6]. By ~6,000-5,000 years ago, a resurgence of hunter-gatherer ancestry had occurred throughout much of Europe, but in Russia, the Yamnaya steppe herders of this time were descended not only from the preceding eastern European hunter-gatherers, but from a population of Near Eastern ancestry. Western and Eastern Europe came into contact ~4,500 years ago, as the Late Neolithic Corded Ware people from Germany traced ~3/4 of their ancestry to the Yamnaya, documenting a massive migration into the heartland of Europe from its eastern periphery. This steppe ancestry persisted in all sampled central Europeans until at least ~3,000 years ago, and is ubiquitous in present-day Europeans. These results provide support for the theory of a steppe origin of at least some of the Indo-European languages of Europe.
2101.09153 | Ulrike Boehm | Glyn Nelson, Ulrike Boehm, Steve Bagley, Peter Bajcsy, Johanna
Bischof, Claire M Brown, Aurelien Dauphin, Ian M Dobbie, John E Eriksson,
Orestis Faklaris, Julia Fernandez-Rodriguez, Alexia Ferrand, Laurent Gelman,
Ali Gheisari, Hella Hartmann, Christian Kukat, Alex Laude, Miso Mitkovski,
Sebastian Munck, Alison J North, Tobias M Rasse, Ute Resch-Genger, Lucas C
Schuetz, Arne Seitz, Caterina Strambio-De-Castillia, Jason R Swedlow, Ioannis
Alexopoulos, Karin Aumayr, Sergiy Avilov, Gert-Jan Bakker, Rodrigo R Bammann,
Andrea Bassi, Hannes Beckert, Sebastian Beer, Yury Belyaev, Jakob Bierwagen,
Konstantin A Birngruber, Manel Bosch, Juergen Breitlow, Lisa A Cameron, Joe
Chalfoun, James J Chambers, Chieh-Li Chen, Eduardo Conde-Sousa, Alexander D
Corbett, Fabrice P Cordelieres, Elaine Del Nery, Ralf Dietzel, Frank Eismann,
Elnaz Fazeli, Andreas Felscher, Hans Fried, Nathalie Gaudreault, Wah Ing Goh,
Thomas Guilbert, Roland Hadleigh, Peter Hemmerich, Gerhard A Holst, Michelle
S Itano, Claudia B Jaffe, Helena K Jambor, Stuart C Jarvis, Antje Keppler,
David Kirchenbuechler, Marcel Kirchner, Norio Kobayashi, Gabriel Krens,
Susanne Kunis, Judith Lacoste, Marco Marcello, Gabriel G Martins, Daniel J
Metcalf, Claire A Mitchell, Joshua Moore, Tobias Mueller, Michael S Nelson,
Stephen Ogg, Shuichi Onami, Alexandra L Palmer, Perrine Paul-Gilloteaux,
Jaime A Pimentel, Laure Plantard, Santosh Podder, Elton Rexhepaj, Arnaud
Royon, Markku A Saari, Damien Schapman, Vincent Schoonderwoert, Britta
Schroth-Diez, Stanley Schwartz, Michael Shaw, Martin Spitaler, Martin T
Stoeckl, Damir Sudar, Jeremie Teillon, Stefan Terjung, Roland Thuenauer,
Christian D Wilms, Graham D Wright, Roland Nitschke | QUAREP-LiMi: A community-driven initiative to establish guidelines for
quality assessment and reproducibility for instruments and images in light
microscopy | 17 pages, 3 figures, shortened abstract, Co-Lead Authors: Glyn Nelson
and Ulrike Boehm, Corresponding author: Roland Nitschke | J. Microsc. 2021;1-18 | 10.1111/jmi.13041 | null | q-bio.OT physics.ins-det | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In April 2020, the QUality Assessment and REProducibility for Instruments and
Images in Light Microscopy (QUAREP-LiMi) initiative was formed. This initiative
comprises imaging scientists from academia and industry who share a common
interest in achieving a better understanding of the performance and limitations
of microscopes and improved quality control (QC) in light microscopy. The
ultimate goal of the QUAREP-LiMi initiative is to establish a set of common QC
standards, guidelines, metadata models, and tools, including detailed
protocols, with the ultimate aim of improving reproducible advances in
scientific research. This White Paper 1) summarizes the major obstacles
identified in the field that motivated the launch of the QUAREP-LiMi
initiative; 2) identifies the urgent need to address these obstacles in a
grassroots manner, through a community of stakeholders including, researchers,
imaging scientists, bioimage analysts, bioimage informatics developers,
corporate partners, funding agencies, standards organizations, scientific
publishers, and observers of such; 3) outlines the current actions of the
QUAREP-LiMi initiative, and 4) proposes future steps that can be taken to
improve the dissemination and acceptance of the proposed guidelines to manage
QC. To summarize, the principal goal of the QUAREP-LiMi initiative is to
improve the overall quality and reproducibility of light microscope image data
by introducing broadly accepted standard practices and accurately captured
image data metrics.
| [
{
"created": "Thu, 21 Jan 2021 14:27:30 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 21:04:34 GMT",
"version": "v2"
}
] | 2021-08-17 | [
[
"Nelson",
"Glyn",
""
],
[
"Boehm",
"Ulrike",
""
],
[
"Bagley",
"Steve",
""
],
[
"Bajcsy",
"Peter",
""
],
[
"Bischof",
"Johanna",
""
],
[
"Brown",
"Claire M",
""
],
[
"Dauphin",
"Aurelien",
""
],
[
"Dobbie",
"Ian M",
""
],
[
"Eriksson",
"John E",
""
],
[
"Faklaris",
"Orestis",
""
],
[
"Fernandez-Rodriguez",
"Julia",
""
],
[
"Ferrand",
"Alexia",
""
],
[
"Gelman",
"Laurent",
""
],
[
"Gheisari",
"Ali",
""
],
[
"Hartmann",
"Hella",
""
],
[
"Kukat",
"Christian",
""
],
[
"Laude",
"Alex",
""
],
[
"Mitkovski",
"Miso",
""
],
[
"Munck",
"Sebastian",
""
],
[
"North",
"Alison J",
""
],
[
"Rasse",
"Tobias M",
""
],
[
"Resch-Genger",
"Ute",
""
],
[
"Schuetz",
"Lucas C",
""
],
[
"Seitz",
"Arne",
""
],
[
"Strambio-De-Castillia",
"Caterina",
""
],
[
"Swedlow",
"Jason R",
""
],
[
"Alexopoulos",
"Ioannis",
""
],
[
"Aumayr",
"Karin",
""
],
[
"Avilov",
"Sergiy",
""
],
[
"Bakker",
"Gert-Jan",
""
],
[
"Bammann",
"Rodrigo R",
""
],
[
"Bassi",
"Andrea",
""
],
[
"Beckert",
"Hannes",
""
],
[
"Beer",
"Sebastian",
""
],
[
"Belyaev",
"Yury",
""
],
[
"Bierwagen",
"Jakob",
""
],
[
"Birngruber",
"Konstantin A",
""
],
[
"Bosch",
"Manel",
""
],
[
"Breitlow",
"Juergen",
""
],
[
"Cameron",
"Lisa A",
""
],
[
"Chalfoun",
"Joe",
""
],
[
"Chambers",
"James J",
""
],
[
"Chen",
"Chieh-Li",
""
],
[
"Conde-Sousa",
"Eduardo",
""
],
[
"Corbett",
"Alexander D",
""
],
[
"Cordelieres",
"Fabrice P",
""
],
[
"Del Nery",
"Elaine",
""
],
[
"Dietzel",
"Ralf",
""
],
[
"Eismann",
"Frank",
""
],
[
"Fazeli",
"Elnaz",
""
],
[
"Felscher",
"Andreas",
""
],
[
"Fried",
"Hans",
""
],
[
"Gaudreault",
"Nathalie",
""
],
[
"Goh",
"Wah Ing",
""
],
[
"Guilbert",
"Thomas",
""
],
[
"Hadleigh",
"Roland",
""
],
[
"Hemmerich",
"Peter",
""
],
[
"Holst",
"Gerhard A",
""
],
[
"Itano",
"Michelle S",
""
],
[
"Jaffe",
"Claudia B",
""
],
[
"Jambor",
"Helena K",
""
],
[
"Jarvis",
"Stuart C",
""
],
[
"Keppler",
"Antje",
""
],
[
"Kirchenbuechler",
"David",
""
],
[
"Kirchner",
"Marcel",
""
],
[
"Kobayashi",
"Norio",
""
],
[
"Krens",
"Gabriel",
""
],
[
"Kunis",
"Susanne",
""
],
[
"Lacoste",
"Judith",
""
],
[
"Marcello",
"Marco",
""
],
[
"Martins",
"Gabriel G",
""
],
[
"Metcalf",
"Daniel J",
""
],
[
"Mitchell",
"Claire A",
""
],
[
"Moore",
"Joshua",
""
],
[
"Mueller",
"Tobias",
""
],
[
"Nelson",
"Michael S",
""
],
[
"Ogg",
"Stephen",
""
],
[
"Onami",
"Shuichi",
""
],
[
"Palmer",
"Alexandra L",
""
],
[
"Paul-Gilloteaux",
"Perrine",
""
],
[
"Pimentel",
"Jaime A",
""
],
[
"Plantard",
"Laure",
""
],
[
"Podder",
"Santosh",
""
],
[
"Rexhepaj",
"Elton",
""
],
[
"Royon",
"Arnaud",
""
],
[
"Saari",
"Markku A",
""
],
[
"Schapman",
"Damien",
""
],
[
"Schoonderwoert",
"Vincent",
""
],
[
"Schroth-Diez",
"Britta",
""
],
[
"Schwartz",
"Stanley",
""
],
[
"Shaw",
"Michael",
""
],
[
"Spitaler",
"Martin",
""
],
[
"Stoeckl",
"Martin T",
""
],
[
"Sudar",
"Damir",
""
],
[
"Teillon",
"Jeremie",
""
],
[
"Terjung",
"Stefan",
""
],
[
"Thuenauer",
"Roland",
""
],
[
"Wilms",
"Christian D",
""
],
[
"Wright",
"Graham D",
""
],
[
"Nitschke",
"Roland",
""
]
] | In April 2020, the QUality Assessment and REProducibility for Instruments and Images in Light Microscopy (QUAREP-LiMi) initiative was formed. This initiative comprises imaging scientists from academia and industry who share a common interest in achieving a better understanding of the performance and limitations of microscopes and improved quality control (QC) in light microscopy. The ultimate goal of the QUAREP-LiMi initiative is to establish a set of common QC standards, guidelines, metadata models, and tools, including detailed protocols, with the ultimate aim of improving reproducible advances in scientific research. This White Paper 1) summarizes the major obstacles identified in the field that motivated the launch of the QUAREP-LiMi initiative; 2) identifies the urgent need to address these obstacles in a grassroots manner, through a community of stakeholders including, researchers, imaging scientists, bioimage analysts, bioimage informatics developers, corporate partners, funding agencies, standards organizations, scientific publishers, and observers of such; 3) outlines the current actions of the QUAREP-LiMi initiative, and 4) proposes future steps that can be taken to improve the dissemination and acceptance of the proposed guidelines to manage QC. To summarize, the principal goal of the QUAREP-LiMi initiative is to improve the overall quality and reproducibility of light microscope image data by introducing broadly accepted standard practices and accurately captured image data metrics. |
2110.10419 | Gudrun Gygli | Gudrun Gygli | On the reproducibility of enzyme reactions and kinetic modelling | All scripts and raw data can be found on FAIRDOMHub
(https://fairdomhub.org/investigations/483) | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Enzyme reactions are highly dependent on reaction conditions. To ensure
reproducibility of enzyme reaction parameters, experiments need to be carefully
designed and kinetic modelling meticulously executed. Furthermore, to enable
the judgement of the quality of enzyme reaction parameters, the experimental
conditions, the modelling process as well as the raw data need to be reported
comprehensively. By taking these steps, enzyme reaction parameters can be open
and FAIR (findable, accessible, interoperable, re-usable) as well as
repeatable, replicable and reproducible. This review discusses these issues and
provides a practical guide to designing initial rate experiments for the
determination of enzyme reaction parameters and gives an open, FAIR and
re-editable example of the kinetic modelling of an enzyme reaction. Both the
guide and example are scripted with Python in Jupyter Notebooks and are
publicly available (https://fairdomhub.org/investigations/483). Finally, the
prerequisites of automated data analysis and machine learning algorithms are
briefly discussed to provide further motivation for the comprehensive, open and
FAIR reporting of enzyme reaction parameters.
| [
{
"created": "Wed, 20 Oct 2021 07:44:13 GMT",
"version": "v1"
}
] | 2021-10-22 | [
[
"Gygli",
"Gudrun",
""
]
] | Enzyme reactions are highly dependent on reaction conditions. To ensure reproducibility of enzyme reaction parameters, experiments need to be carefully designed and kinetic modelling meticulously executed. Furthermore, to enable the judgement of the quality of enzyme reaction parameters, the experimental conditions, the modelling process as well as the raw data need to be reported comprehensively. By taking these steps, enzyme reaction parameters can be open and FAIR (findable, accessible, interoperable, re-usable) as well as repeatable, replicable and reproducible. This review discusses these issues and provides a practical guide to designing initial rate experiments for the determination of enzyme reaction parameters and gives an open, FAIR and re-editable example of the kinetic modelling of an enzyme reaction. Both the guide and example are scripted with Python in Jupyter Notebooks and are publicly available (https://fairdomhub.org/investigations/483). Finally, the prerequisites of automated data analysis and machine learning algorithms are briefly discussed to provide further motivation for the comprehensive, open and FAIR reporting of enzyme reaction parameters. |
1403.3994 | Sang-Yoon Kim | Sang-Yoon Kim and Woochang Lim | Thermodynamic Order Parameters and Statistical-Mechanical Measures for
Characterization of the Burst and Spike Synchronizations of Bursting Neurons | arXiv admin note: text overlap with arXiv:1403.1255 | null | null | null | q-bio.NC nlin.CD physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in characterization of population synchronization of
bursting neurons which exhibit both the slow bursting and the fast spiking
timescales, in contrast to spiking neurons. Population synchronization may be
well visualized in the raster plot of neural spikes which can be obtained in
experiments. The instantaneous population firing rate (IPFR) $R(t)$, which may
be directly obtained from the raster plot of spikes, is often used as a
realistic collective quantity describing population behaviors in both the
computational and the experimental neuroscience. For the case of spiking
neurons, realistic thermodynamic order parameter and statistical-mechanical
spiking measure, based on $R(t)$, were introduced in our recent work to make
practical characterization of spike synchronization. Here, we separate the slow
bursting and the fast spiking timescales via frequency filtering, and extend
the thermodynamic order parameter and the statistical-mechanical measure to the
case of bursting neurons. Consequently, it is shown in explicit examples that
both the order parameters and the statistical-mechanical measures may be
effectively used to characterize the burst and spike synchronizations of
bursting neurons.
| [
{
"created": "Mon, 17 Mar 2014 04:13:14 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jul 2014 03:32:50 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Oct 2014 05:32:43 GMT",
"version": "v3"
}
] | 2014-10-07 | [
[
"Kim",
"Sang-Yoon",
""
],
[
"Lim",
"Woochang",
""
]
] | We are interested in characterization of population synchronization of bursting neurons which exhibit both the slow bursting and the fast spiking timescales, in contrast to spiking neurons. Population synchronization may be well visualized in the raster plot of neural spikes which can be obtained in experiments. The instantaneous population firing rate (IPFR) $R(t)$, which may be directly obtained from the raster plot of spikes, is often used as a realistic collective quantity describing population behaviors in both the computational and the experimental neuroscience. For the case of spiking neurons, realistic thermodynamic order parameter and statistical-mechanical spiking measure, based on $R(t)$, were introduced in our recent work to make practical characterization of spike synchronization. Here, we separate the slow bursting and the fast spiking timescales via frequency filtering, and extend the thermodynamic order parameter and the statistical-mechanical measure to the case of bursting neurons. Consequently, it is shown in explicit examples that both the order parameters and the statistical-mechanical measures may be effectively used to characterize the burst and spike synchronizations of bursting neurons. |
1505.06577 | Thomas Ouldridge | Pieter Rein ten Wolde, Nils B. Becker, Thomas E. Ouldridge and A.
Mugler | Fundamental Limits to Cellular Sensing | null | null | null | null | q-bio.MN physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years experiments have demonstrated that living cells can measure
low chemical concentrations with high precision, and much progress has been
made in understanding what sets the fundamental limit to the precision of
chemical sensing. Chemical concentration measurements start with the binding of
ligand molecules to receptor proteins, which is an inherently noisy process,
especially at low concentrations. The signaling networks that transmit the
information on the ligand concentration from the receptors into the cell have
to filter this noise extrinsic to the cell as much as possible. These networks,
however, are also stochastic in nature, which means that they will also add
noise to the transmitted signal. In this review, we will first discuss how the
diffusive transport and binding of ligand to the receptor sets the receptor
correlation time, and then how downstream signaling pathways integrate the
noise in the receptor state; we will discuss how the number of receptors, the
receptor correlation time, and the effective integration time together set a
fundamental limit on the precision of sensing. We then discuss how cells can
remove the receptor noise while simultaneously suppressing the intrinsic noise
in the signaling network. We describe why this mechanism of time integration
requires three classes of resources---receptors and their integration time,
readout molecules, energy---and how each resource class sets a fundamental
sensing limit. We also briefly discuss the scheme of maximum-likelihood
estimation, the role of receptor cooperativity, and how cellular copy protocols
differ from canonical copy protocols typically considered in the computational
literature, explaining why cellular sensing systems can never reach the
Landauer limit on the optimal trade-off between accuracy and energetic cost.
| [
{
"created": "Mon, 25 May 2015 09:27:52 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Wolde",
"Pieter Rein ten",
""
],
[
"Becker",
"Nils B.",
""
],
[
"Ouldridge",
"Thomas E.",
""
],
[
"Mugler",
"A.",
""
]
] | In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this noise extrinsic to the cell as much as possible. These networks, however, are also stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, and then how downstream signaling pathways integrate the noise in the receptor state; we will discuss how the number of receptors, the receptor correlation time, and the effective integration time together set a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes of resources---receptors and their integration time, readout molecules, energy---and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade-off between accuracy and energetic cost. |
1902.01395 | Muhammad Abubakar Yamin | A. Yamin, M. Dayan, L. Squarcina, P. Brambilla, V. Murino, V.
Diwadkar, and D. Sona | Comparison of brain connectomes using geodesic distance on manifold:a
twin study | Paper is accepted for presentation in ISBI 2019. Camera ready has
been submitted on 15 Jan 2019 | null | null | null | q-bio.NC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | fMRI is a unique non-invasive approach for understanding the functional
organization of the human brain, and task-based fMRI promotes identification of
functionally relevant brain regions associated with a given task. Here, we use
fMRI (using the Poffenberger Paradigm) data collected in mono- and dizygotic
twin pairs to propose a novel approach for assessing similarity in functional
networks. In particular, we compared network similarity between pairs of twins
in task-relevant and task-orthogonal networks. The proposed method measures the
similarity between functional networks using a geodesic distance between graph
Laplacians. With this method we show that networks are more similar in monozygotic
twins compared to dizygotic twins. Furthermore, the similarity in monozygotic
twins is higher for task-relevant, than task-orthogonal networks.
| [
{
"created": "Mon, 4 Feb 2019 13:48:38 GMT",
"version": "v1"
}
] | 2019-02-06 | [
[
"Yamin",
"A.",
""
],
[
"Dayan",
"M.",
""
],
[
"Squarcina",
"L.",
""
],
[
"Brambilla",
"P.",
""
],
[
"Murino",
"V.",
""
],
[
"Diwadkar",
"V.",
""
],
[
"Sona",
"D.",
""
]
] | fMRI is a unique non-invasive approach for understanding the functional organization of the human brain, and task-based fMRI promotes identification of functionally relevant brain regions associated with a given task. Here, we use fMRI (using the Poffenberger Paradigm) data collected in mono- and dizygotic twin pairs to propose a novel approach for assessing similarity in functional networks. In particular, we compared network similarity between pairs of twins in task-relevant and task-orthogonal networks. The proposed method measures the similarity between functional networks using a geodesic distance between graph Laplacians. With this method we show that networks are more similar in monozygotic twins compared to dizygotic twins. Furthermore, the similarity in monozygotic twins is higher for task-relevant, than task-orthogonal networks.
2308.07416 | Charles Harris | Jos Torge, Charles Harris, Simon V. Mathis, Pietro Lio | DiffHopp: A Graph Diffusion Model for Novel Drug Design via Scaffold
Hopping | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Scaffold hopping is a drug discovery strategy to generate new chemical
entities by modifying the core structure, the \emph{scaffold}, of a known
active compound. This approach preserves the essential molecular features of
the original scaffold while introducing novel chemical elements or structural
features to enhance potency, selectivity, or bioavailability. However, there is
currently a lack of generative models specifically tailored for this task,
especially in the pocket-conditioned context. In this work, we present
DiffHopp, a conditional E(3)-equivariant graph diffusion model tailored for
scaffold hopping given a known protein-ligand complex.
| [
{
"created": "Mon, 14 Aug 2023 19:08:34 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Torge",
"Jos",
""
],
[
"Harris",
"Charles",
""
],
[
"Mathis",
"Simon V.",
""
],
[
"Lio",
"Pietro",
""
]
] | Scaffold hopping is a drug discovery strategy to generate new chemical entities by modifying the core structure, the \emph{scaffold}, of a known active compound. This approach preserves the essential molecular features of the original scaffold while introducing novel chemical elements or structural features to enhance potency, selectivity, or bioavailability. However, there is currently a lack of generative models specifically tailored for this task, especially in the pocket-conditioned context. In this work, we present DiffHopp, a conditional E(3)-equivariant graph diffusion model tailored for scaffold hopping given a known protein-ligand complex. |
2201.08207 | Josef Tkadlec | Loke Durocher, Panagiotis Karras, Andreas Pavlogiannis, Josef Tkadlec | Invasion Dynamics in the Biased Voter Process | 8 pages, 3 figures. To be published in IJCAI-22 | null | null | null | q-bio.PE cs.CC cs.DS cs.GT cs.SI | http://creativecommons.org/licenses/by/4.0/ | The voter process is a classic stochastic process that models the invasion of
a mutant trait $A$ (e.g., a new opinion, belief, legend, genetic mutation,
magnetic spin) in a population of agents (e.g., people, genes, particles) who
share a resident trait $B$, spread over the nodes of a graph. An agent may
adopt the trait of one of its neighbors at any time, while the invasion bias
$r\in(0,\infty)$ quantifies the stochastic preference towards ($r>1$) or
against ($r<1$) adopting $A$ over $B$. Success is measured in terms of the
fixation probability, i.e., the probability that eventually all agents have
adopted the mutant trait $A$. In this paper we study the problem of fixation
probability maximization under this model: given a budget $k$, find a set of
$k$ agents to initiate the invasion that maximizes the fixation probability. We
show that the problem is NP-hard for both $r>1$ and $r<1$, while the latter
case is also inapproximable within any multiplicative factor. On the positive
side, we show that when $r>1$, the optimization function is submodular and thus
can be greedily approximated within a factor $1-1/e$. An experimental
evaluation of some proposed heuristics corroborates our results.
| [
{
"created": "Thu, 20 Jan 2022 14:43:45 GMT",
"version": "v1"
},
{
"created": "Mon, 2 May 2022 18:56:42 GMT",
"version": "v2"
}
] | 2022-05-04 | [
[
"Durocher",
"Loke",
""
],
[
"Karras",
"Panagiotis",
""
],
[
"Pavlogiannis",
"Andreas",
""
],
[
"Tkadlec",
"Josef",
""
]
] | The voter process is a classic stochastic process that models the invasion of a mutant trait $A$ (e.g., a new opinion, belief, legend, genetic mutation, magnetic spin) in a population of agents (e.g., people, genes, particles) who share a resident trait $B$, spread over the nodes of a graph. An agent may adopt the trait of one of its neighbors at any time, while the invasion bias $r\in(0,\infty)$ quantifies the stochastic preference towards ($r>1$) or against ($r<1$) adopting $A$ over $B$. Success is measured in terms of the fixation probability, i.e., the probability that eventually all agents have adopted the mutant trait $A$. In this paper we study the problem of fixation probability maximization under this model: given a budget $k$, find a set of $k$ agents to initiate the invasion that maximizes the fixation probability. We show that the problem is NP-hard for both $r>1$ and $r<1$, while the latter case is also inapproximable within any multiplicative factor. On the positive side, we show that when $r>1$, the optimization function is submodular and thus can be greedily approximated within a factor $1-1/e$. An experimental evaluation of some proposed heuristics corroborates our results. |
2407.11728 | Praful Gagrani | Praful Gagrani | Evolution of complexity and the origins of biochemical life | 27 pages, 9 figures | null | null | null | q-bio.PE nlin.AO | http://creativecommons.org/licenses/by/4.0/ | While modern physics and biology satisfactorily explain the passage from the
Big Bang to the formation of Earth and the first cells to present-day life,
respectively, the origins of biochemical life still remain an open question.
Since life, as we know it, requires extremely long genetic polymers, any answer
to the question must explain how an evolving system of polymers of
ever-increasing length could come about on a planet that otherwise consisted
only of small molecular building blocks. In this work, we show that, under
realistic constraints, an abstract polymer model can exhibit dynamics such that
attractors in the polymer population space with a higher average polymer length
are also more probable. We generalize from the model and formalize the notions
of complexity and evolution for chemical reaction networks with multiple
attractors. The complexity of a species is defined as the minimum number of
reactions needed to produce it from a set of building blocks, which in turn is
used to define a measure of complexity for an attractor. A transition between
attractors is considered to be a progressive evolution if the attractor with
the higher probability also has a higher complexity. In an environment where
only monomers are readily available, the attractor with a higher average
polymer length is more complex. Thus, our abstract polymer model can exhibit
progressive evolution for a range of thermodynamically plausible rate
constants. We also formalize criteria for open-ended and
historically-contingent evolution and explain the role of autocatalysis in
obtaining them. Our work provides a basis for searching for prebiotically
plausible scenarios in which long polymers can emerge and yield populations
with even longer polymers.
| [
{
"created": "Tue, 16 Jul 2024 13:49:39 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Gagrani",
"Praful",
""
]
] | While modern physics and biology satisfactorily explain the passage from the Big Bang to the formation of Earth and the first cells to present-day life, respectively, the origins of biochemical life still remain an open question. Since life, as we know it, requires extremely long genetic polymers, any answer to the question must explain how an evolving system of polymers of ever-increasing length could come about on a planet that otherwise consisted only of small molecular building blocks. In this work, we show that, under realistic constraints, an abstract polymer model can exhibit dynamics such that attractors in the polymer population space with a higher average polymer length are also more probable. We generalize from the model and formalize the notions of complexity and evolution for chemical reaction networks with multiple attractors. The complexity of a species is defined as the minimum number of reactions needed to produce it from a set of building blocks, which in turn is used to define a measure of complexity for an attractor. A transition between attractors is considered to be a progressive evolution if the attractor with the higher probability also has a higher complexity. In an environment where only monomers are readily available, the attractor with a higher average polymer length is more complex. Thus, our abstract polymer model can exhibit progressive evolution for a range of thermodynamically plausible rate constants. We also formalize criteria for open-ended and historically-contingent evolution and explain the role of autocatalysis in obtaining them. Our work provides a basis for searching for prebiotically plausible scenarios in which long polymers can emerge and yield populations with even longer polymers. |
2308.05027 | Karolis Martinkus | Karolis Martinkus, Jan Ludwiczak, Kyunghyun Cho, Wei-Ching Liang,
Julien Lafrance-Vanasse, Isidro Hotzel, Arvind Rajpal, Yan Wu, Richard
Bonneau, Vladimir Gligorijevic, Andreas Loukas | AbDiffuser: Full-Atom Generation of in vitro Functioning Antibodies | NeurIPS 2023 | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce AbDiffuser, an equivariant and physics-informed diffusion model
for the joint generation of antibody 3D structures and sequences. AbDiffuser is
built on top of a new representation of protein structure, relies on a novel
architecture for aligned proteins, and utilizes strong diffusion priors to
improve the denoising process. Our approach improves protein diffusion by
taking advantage of domain knowledge and physics-based constraints; handles
sequence-length changes; and reduces memory complexity by an order of
magnitude, enabling backbone and side chain generation. We validate AbDiffuser
in silico and in vitro. Numerical experiments showcase the ability of
AbDiffuser to generate antibodies that closely track the sequence and
structural properties of a reference set. Laboratory experiments confirm that
all 16 HER2 antibodies discovered were expressed at high levels and that 57.1%
of the selected designs were tight binders.
| [
{
"created": "Fri, 28 Jul 2023 11:57:44 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2024 17:15:51 GMT",
"version": "v2"
}
] | 2024-03-07 | [
[
"Martinkus",
"Karolis",
""
],
[
"Ludwiczak",
"Jan",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Liang",
"Wei-Ching",
""
],
[
"Lafrance-Vanasse",
"Julien",
""
],
[
"Hotzel",
"Isidro",
""
],
[
"Rajpal",
"Arvind",
""
],
[
"Wu",
"Yan",
""
],
[
"Bonneau",
"Richard",
""
],
[
"Gligorijevic",
"Vladimir",
""
],
[
"Loukas",
"Andreas",
""
]
] | We introduce AbDiffuser, an equivariant and physics-informed diffusion model for the joint generation of antibody 3D structures and sequences. AbDiffuser is built on top of a new representation of protein structure, relies on a novel architecture for aligned proteins, and utilizes strong diffusion priors to improve the denoising process. Our approach improves protein diffusion by taking advantage of domain knowledge and physics-based constraints; handles sequence-length changes; and reduces memory complexity by an order of magnitude, enabling backbone and side chain generation. We validate AbDiffuser in silico and in vitro. Numerical experiments showcase the ability of AbDiffuser to generate antibodies that closely track the sequence and structural properties of a reference set. Laboratory experiments confirm that all 16 HER2 antibodies discovered were expressed at high levels and that 57.1% of the selected designs were tight binders. |
q-bio/0510043 | Alexander Grosberg | Tao Hu, A.Yu.Grosberg, B.I.Shklovskii | How do proteins search for their specific sites on coiled or globular
DNA | 16 pages, 5 figures | Biophysical Journal, v. 90, p. 2731-2744, 2006 | 10.1529/biophysj.105.078162 | null | q-bio.BM cond-mat.soft | null | It is known since the early days of molecular biology that proteins locate
their specific targets on DNA up to two orders of magnitude faster than the
Smoluchowski 3D diffusion rate. It was the idea due to Delbruck that they are
non-specifically adsorbed on DNA, and sliding along DNA provides for the faster
1D search. Surprisingly, the role of DNA conformation was never considered in
this context. In this article, we explicitly address the relative role of 3D
diffusion and 1D sliding along coiled or globular DNA and the possibility of
correlated re-adsorbtion of desorbed proteins. We have identified a wealth of
new different scaling regimes. We also found the maximal possible acceleration
of the reaction due to sliding, we found that the maximum on the
rate-versus-ionic strength curve is asymmetric, and that sliding can lead not
only to acceleration, but in some regimes to dramatic deceleration of the
reaction.
| [
{
"created": "Mon, 24 Oct 2005 19:56:08 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Hu",
"Tao",
""
],
[
"Grosberg",
"A. Yu.",
""
],
[
"Shklovskii",
"B. I.",
""
]
] | It is known since the early days of molecular biology that proteins locate their specific targets on DNA up to two orders of magnitude faster than the Smoluchowski 3D diffusion rate. It was the idea due to Delbruck that they are non-specifically adsorbed on DNA, and sliding along DNA provides for the faster 1D search. Surprisingly, the role of DNA conformation was never considered in this context. In this article, we explicitly address the relative role of 3D diffusion and 1D sliding along coiled or globular DNA and the possibility of correlated re-adsorbtion of desorbed proteins. We have identified a wealth of new different scaling regimes. We also found the maximal possible acceleration of the reaction due to sliding, we found that the maximum on the rate-versus-ionic strength curve is asymmetric, and that sliding can lead not only to acceleration, but in some regimes to dramatic deceleration of the reaction. |
0902.1484 | Hiroshi Momiji | Hiroshi Momiji and Nicholas A.M. Monk | Oscillatory Notch pathway activity in a delay model of neuronal
differentiation | 17 figures, to appear in Phys Rev E | null | 10.1103/PhysRevE.80.021930 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lateral inhibition resulting from a double-negative feedback loop underlies
the assignment of different fates to cells in many developmental processes.
Previous studies have shown that the presence of time delays in models of
lateral inhibition can result in significant oscillatory transients before
patterned steady states are reached. We study the impact of local feedback
loops in a model of lateral inhibition based on the Notch signalling pathway,
elucidating the roles of intracellular and intercellular delays in controlling
the overall system behaviour. The model exhibits both in-phase and out-of-phase
oscillatory modes, and oscillation death. Interactions between oscillatory
modes can generate complex behaviours such as intermittent oscillations. Our
results provide a framework for exploring the recent observation of transient
Notch pathway oscillations during fate assignment in vertebrate neurogenesis.
| [
{
"created": "Mon, 9 Feb 2009 17:49:55 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jul 2009 10:06:19 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Momiji",
"Hiroshi",
""
],
[
"Monk",
"Nicholas A. M.",
""
]
] | Lateral inhibition resulting from a double-negative feedback loop underlies the assignment of different fates to cells in many developmental processes. Previous studies have shown that the presence of time delays in models of lateral inhibition can result in significant oscillatory transients before patterned steady states are reached. We study the impact of local feedback loops in a model of lateral inhibition based on the Notch signalling pathway, elucidating the roles of intracellular and intercellular delays in controlling the overall system behaviour. The model exhibits both in-phase and out-of-phase oscillatory modes, and oscillation death. Interactions between oscillatory modes can generate complex behaviours such as intermittent oscillations. Our results provide a framework for exploring the recent observation of transient Notch pathway oscillations during fate assignment in vertebrate neurogenesis. |
1103.5934 | Christian Meisel | Christian Kuehn and Christian Meisel | On spatial and temporal multilevel dynamics and scaling effects in
epileptic seizures | 24 pages, 9 figures | null | null | null | q-bio.NC math.DS nlin.CD nlin.PS physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epileptic seizures are one of the most well-known dysfunctions of the nervous
system. During a seizure, a highly synchronized behavior of neural activity is
observed that can cause symptoms ranging from mild sensory malfunctions to the
complete loss of body control. In this paper, we aim to contribute towards a
better understanding of the dynamical systems phenomena that cause seizures.
Based on data analysis and modelling, seizure dynamics can be identified to
possess multiple spatial scales and on each spatial scale also multiple time
scales. At each scale, we reach several novel insights. On the smallest spatial
scale we consider single model neurons and investigate early-warning signs of
spiking. This introduces the theory of critical transitions to excitable
systems. For clusters of neurons (or neuronal regions) we use patient data and
find oscillatory behavior and new scaling laws near the seizure onset. These
scalings substantiate the conjecture, obtained from mean-field models, that a Hopf bifurcation could be involved near seizure onset. On the largest
spatial scale we introduce a measure based on phase-locking intervals and
wavelets into seizure modelling. It is used to resolve synchronization between
different regions in the brain and identifies time-shifted scaling laws at
different wavelet scales. We also compare our wavelet-based multiscale approach
with maximum linear cross-correlation and mean-phase coherence measures.
| [
{
"created": "Wed, 30 Mar 2011 14:12:25 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2011 09:26:48 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Jul 2011 15:21:45 GMT",
"version": "v3"
}
] | 2011-08-01 | [
[
"Kuehn",
"Christian",
""
],
[
"Meisel",
"Christian",
""
]
] | Epileptic seizures are one of the most well-known dysfunctions of the nervous system. During a seizure, a highly synchronized behavior of neural activity is observed that can cause symptoms ranging from mild sensory malfunctions to the complete loss of body control. In this paper, we aim to contribute towards a better understanding of the dynamical systems phenomena that cause seizures. Based on data analysis and modelling, seizure dynamics can be identified to possess multiple spatial scales and on each spatial scale also multiple time scales. At each scale, we reach several novel insights. On the smallest spatial scale we consider single model neurons and investigate early-warning signs of spiking. This introduces the theory of critical transitions to excitable systems. For clusters of neurons (or neuronal regions) we use patient data and find oscillatory behavior and new scaling laws near the seizure onset. These scalings substantiate the conjecture, obtained from mean-field models, that a Hopf bifurcation could be involved near seizure onset. On the largest spatial scale we introduce a measure based on phase-locking intervals and wavelets into seizure modelling. It is used to resolve synchronization between different regions in the brain and identifies time-shifted scaling laws at different wavelet scales. We also compare our wavelet-based multiscale approach with maximum linear cross-correlation and mean-phase coherence measures.
q-bio/0604013 | Akira Kinjo | Akira R. Kinjo and Ken Nishikawa | CRNPRED: Highly Accurate Prediction of One-dimensional Protein
Structures by Large-scale Critical Random Networks | 10 pages, 1 figure, 2 tables | BMC Bioinformatics, 7:401 (2006) | 10.1186/1471-2105-7-401 | null | q-bio.BM | null | Background: One-dimensional protein structures such as secondary structures
or contact numbers are useful for three-dimensional structure prediction and
helpful for intuitive understanding of the sequence-structure relationship.
Accurate prediction methods will serve as a basis for these and other purposes.
Results: We implemented a program CRNPRED which predicts secondary structures,
contact numbers and residue-wise contact orders. This program is based on a
novel machine learning scheme called critical random networks. Unlike most
conventional one-dimensional structure prediction methods which are based on
local windows of an amino acid sequence, CRNPRED takes into account the whole
sequence. CRNPRED achieves, on average per chain, Q3 = 81% for secondary
structure prediction, and correlation coefficients of 0.75 and 0.61 for contact
number and residue-wise contact order predictions, respectively. Conclusion:
CRNPRED will be a useful tool for computational as well as experimental
biologists who need accurate one-dimensional protein structure predictions.
| [
{
"created": "Wed, 12 Apr 2006 06:46:03 GMT",
"version": "v1"
}
] | 2007-07-09 | [
[
"Kinjo",
"Akira R.",
""
],
[
"Nishikawa",
"Ken",
""
]
] | Background: One-dimensional protein structures such as secondary structures or contact numbers are useful for three-dimensional structure prediction and helpful for intuitive understanding of the sequence-structure relationship. Accurate prediction methods will serve as a basis for these and other purposes. Results: We implemented a program CRNPRED which predicts secondary structures, contact numbers and residue-wise contact orders. This program is based on a novel machine learning scheme called critical random networks. Unlike most conventional one-dimensional structure prediction methods which are based on local windows of an amino acid sequence, CRNPRED takes into account the whole sequence. CRNPRED achieves, on average per chain, Q3 = 81% for secondary structure prediction, and correlation coefficients of 0.75 and 0.61 for contact number and residue-wise contact order predictions, respectively. Conclusion: CRNPRED will be a useful tool for computational as well as experimental biologists who need accurate one-dimensional protein structure predictions. |
q-bio/0505044 | Fredrik Liljeros | Martin Camitz and Fredrik Liljeros | The effect of travel restrictions on the spread of a highly contagious
disease in Sweden | null | null | null | null | q-bio.QM physics.soc-ph q-bio.OT | null | Travel restrictions may reduce the spread of a contagious disease that
threatens public health. In this study we investigate what effect different
levels of travel restrictions may have on the speed and geographical spread of
an outbreak of a disease similar to SARS. We use a stochastic simulation model
of the Swedish population, calibrated with survey data of travel patterns
between municipalities in Sweden collected over three years. We find that a ban
on journeys longer than 50 km drastically reduces the speed and the
geographical spread of outbreaks, even when compliance is less than 100%.
The result is found to be robust for different rates of inter-municipality
transmission intensities. Travel restrictions may therefore be an effective way
to mitigate the effect of a future outbreak.
| [
{
"created": "Tue, 24 May 2005 12:29:49 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Oct 2005 08:34:20 GMT",
"version": "v2"
}
] | 2016-09-08 | [
[
"Camitz",
"Martin",
""
],
[
"Liljeros",
"Fredrik",
""
]
] | Travel restrictions may reduce the spread of a contagious disease that threatens public health. In this study we investigate what effect different levels of travel restrictions may have on the speed and geographical spread of an outbreak of a disease similar to SARS. We use a stochastic simulation model of the Swedish population, calibrated with survey data of travel patterns between municipalities in Sweden collected over three years. We find that a ban on journeys longer than 50 km drastically reduces the speed and the geographical spread of outbreaks, even when compliance is less than 100%. The result is found to be robust for different rates of inter-municipality transmission intensities. Travel restrictions may therefore be an effective way to mitigate the effect of a future outbreak.
2405.13305 | Jan Peter George | Jan-Peter George, Mari Rusanen, Egbert Beuker, Leena Yrj\"an\"a,
Volkmar Timmermann, Nenad Potocic, Sakari V\"alim\"aki, Heino Konrad | Lessons to learn for better safeguarding of genetic resources during
tree pandemics: the case of ash dieback in Europe | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Ash dieback (ADB) has been threatening populations of European ash (Fraxinus
excelsior & F. angustifolia) for more than three decades. Although much
knowledge has been gathered in the recent past, practical conservation measures
have been mostly implemented at local scale. Since range contraction in both
ash species will be exacerbated in the near future by westward expansion of the
emerald ash borer and climate change, systematic conservation frameworks need
to be developed to avoid long-term population-genetic consequences and
depletion of genomic diversity. In this article, we address the advantages and
obstacles of conservation approaches aiming to conserve genetic diversity
in-situ or ex-situ during tree pandemics. We review 47 studies published on ash dieback to unravel three important dimensions of ongoing
conservation approaches or perceived conservation problems: i) conservation
philosophy (i.e. natural selection, resistance breeding or genetic
conservation), ii) the spatial scale (ecosystem, country, continent), and iii)
the integration of genetic safety margins in conservation planning. Although
nearly equal proportions of the reviewed studies mention breeding or active
conservation as possible long-term solutions, only 17% consider that additional
threats exist which may further reduce genetic diversity in both ash species.
We also identify and discuss several knowledge gaps and limitations which may
have limited the initiation of conservation projects at national and
international level so far. Finally, we demonstrate that there is not much time
left for filling these gaps, because European-wide forest health monitoring
data indicates a significant decline of ash populations in the last 5 years.
| [
{
"created": "Wed, 22 May 2024 02:47:23 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"George",
"Jan-Peter",
""
],
[
"Rusanen",
"Mari",
""
],
[
"Beuker",
"Egbert",
""
],
[
"Yrjänä",
"Leena",
""
],
[
"Timmermann",
"Volkmar",
""
],
[
"Potocic",
"Nenad",
""
],
[
"Välimäki",
"Sakari",
""
],
[
"Konrad",
"Heino",
""
]
] | Ash dieback (ADB) has been threatening populations of European ash (Fraxinus excelsior & F. angustifolia) for more than three decades. Although much knowledge has been gathered in the recent past, practical conservation measures have been mostly implemented at local scale. Since range contraction in both ash species will be exacerbated in the near future by westward expansion of the emerald ash borer and climate change, systematic conservation frameworks need to be developed to avoid long-term population-genetic consequences and depletion of genomic diversity. In this article, we address the advantages and obstacles of conservation approaches aiming to conserve genetic diversity in-situ or ex-situ during tree pandemics. We review 47 studies published on ash dieback to unravel three important dimensions of ongoing conservation approaches or perceived conservation problems: i) conservation philosophy (i.e. natural selection, resistance breeding or genetic conservation), ii) the spatial scale (ecosystem, country, continent), and iii) the integration of genetic safety margins in conservation planning. Although nearly equal proportions of the reviewed studies mention breeding or active conservation as possible long-term solutions, only 17% consider that additional threats exist which may further reduce genetic diversity in both ash species. We also identify and discuss several knowledge gaps and limitations which may have limited the initiation of conservation projects at national and international level so far. Finally, we demonstrate that there is not much time left for filling these gaps, because European-wide forest health monitoring data indicates a significant decline of ash populations in the last 5 years.
q-bio/0507013 | Ana Nunes | A. Nunes, M. M. Telo da Gama and M. G. M. Gomes | Does host contact structure reduce pathogen diversity? | 28 pages, 7 figures | null | null | null | q-bio.PE | null | We investigate the dynamics of a simple epidemiological model for the
invasion by a pathogen strain of a population where another strain circulates.
We assume that reinfection by the same strain is possible but occurs at a
reduced rate due to acquired immunity. The rate of reinfection by a distinct
strain is also reduced due to cross-immunity. Individual based simulations of
this model on a `small-world' network show that the host contact network
structure significantly affects the outcome of such an invasion, and as a
consequence will affect the patterns of pathogen evolution. In particular, host
populations interacting through a 'small-world' network of contacts support
lower prevalence of infection than well-mixed populations, and the region in
parameter space for which an invading strain can become endemic and coexist
with the circulating strain is smaller, reducing the potential to accommodate
pathogen diversity. We discuss the underlying mechanisms for the reported
effects, and we propose an effective mean-field model to account for the
contact structure of the host population in 'small-world' networks.
| [
{
"created": "Fri, 8 Jul 2005 18:37:13 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Nov 2005 17:25:51 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Nunes",
"A.",
""
],
[
"da Gama",
"M. M. Telo",
""
],
[
"Gomes",
"M. G. M.",
""
]
] | We investigate the dynamics of a simple epidemiological model for the invasion by a pathogen strain of a population where another strain circulates. We assume that reinfection by the same strain is possible but occurs at a reduced rate due to acquired immunity. The rate of reinfection by a distinct strain is also reduced due to cross-immunity. Individual based simulations of this model on a `small-world' network show that the host contact network structure significantly affects the outcome of such an invasion, and as a consequence will affect the patterns of pathogen evolution. In particular, host populations interacting through a 'small-world' network of contacts support lower prevalence of infection than well-mixed populations, and the region in parameter space for which an invading strain can become endemic and coexist with the circulating strain is smaller, reducing the potential to accommodate pathogen diversity. We discuss the underlying mechanisms for the reported effects, and we propose an effective mean-field model to account for the contact structure of the host population in 'small-world' networks. |
1301.2979 | Yannis Drossinos | Marguerite Robinson, Yannis Drossinos, Nikolaos I. Stilianakis | Indirect transmission and the effect of seasonal pathogen inactivation
on infectious disease periodicity | 36 pages, 10 figures, to appear in Epidemics | Epidemics 5, 111-121 (2013) | 10.1016/j.epidem.2013.01.001 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The annual occurrence of many infectious diseases remains a constant burden
to public health systems. The seasonal patterns in respiratory disease
incidence observed in temperate regions have been attributed to the impact of
environmental conditions on pathogen survival. A model describing the
transmission of an infectious disease by means of a pathogenic state capable of
surviving in an environmental reservoir outside of its host organism is
presented in this paper. The ratio of pathogen lifespan to the duration of the
infectious disease state is found to be a critical parameter in determining
disease dynamics. The introduction of a seasonally forced pathogen inactivation
rate identifies a time delay between peak pathogen survival and peak disease
incidence. The delay is dependent on specific disease parameters and, for
influenza, decreases with increasing reproduction number. The observed seasonal
oscillations are found to have a period identical to that of the seasonally
forced inactivation rate and is independent of the duration of
infection-acquired immunity.
| [
{
"created": "Mon, 14 Jan 2013 14:06:24 GMT",
"version": "v1"
}
] | 2013-11-13 | [
[
"Robinson",
"Marguerite",
""
],
[
"Drossinos",
"Yannis",
""
],
[
"Stilianakis",
"Nikolaos I.",
""
]
] | The annual occurrence of many infectious diseases remains a constant burden to public health systems. The seasonal patterns in respiratory disease incidence observed in temperate regions have been attributed to the impact of environmental conditions on pathogen survival. A model describing the transmission of an infectious disease by means of a pathogenic state capable of surviving in an environmental reservoir outside of its host organism is presented in this paper. The ratio of pathogen lifespan to the duration of the infectious disease state is found to be a critical parameter in determining disease dynamics. The introduction of a seasonally forced pathogen inactivation rate identifies a time delay between peak pathogen survival and peak disease incidence. The delay is dependent on specific disease parameters and, for influenza, decreases with increasing reproduction number. The observed seasonal oscillations are found to have a period identical to that of the seasonally forced inactivation rate and is independent of the duration of infection-acquired immunity.
2005.11257 | Purushottam Kar | Amit Chandak and Debojyoti Dey and Bhaskar Mukhoty and Purushottam Kar | Epidemiologically and Socio-economically Optimal Policies via Bayesian
Optimization | Keywords: COVID-19, Optimal Policy, Lock-down, Epidemiology, Bayesian
Optimization Code available at https://github.com/purushottamkar/esop | null | null | null | q-bio.PE cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mass public quarantining, colloquially known as a lock-down, is a
non-pharmaceutical intervention to check the spread of disease. This paper presents
ESOP (Epidemiologically and Socio-economically Optimal Policies), a novel
application of active machine learning techniques using Bayesian optimization,
that interacts with an epidemiological model to arrive at lock-down schedules
that optimally balance public health benefits and socio-economic downsides of
reduced economic activity during lock-down periods. The utility of ESOP is
demonstrated using case studies with VIPER
(Virus-Individual-Policy-EnviRonment), a stochastic agent-based simulator that
this paper also proposes. However, ESOP is flexible enough to interact with
arbitrary epidemiological simulators in a black-box manner, and produce
schedules that involve multiple phases of lock-downs.
| [
{
"created": "Fri, 22 May 2020 16:11:33 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 03:44:27 GMT",
"version": "v2"
}
] | 2020-06-16 | [
[
"Chandak",
"Amit",
""
],
[
"Dey",
"Debojyoti",
""
],
[
"Mukhoty",
"Bhaskar",
""
],
[
"Kar",
"Purushottam",
""
]
] | Mass public quarantining, colloquially known as a lock-down, is a non-pharmaceutical intervention to check the spread of disease. This paper presents ESOP (Epidemiologically and Socio-economically Optimal Policies), a novel application of active machine learning techniques using Bayesian optimization, that interacts with an epidemiological model to arrive at lock-down schedules that optimally balance public health benefits and socio-economic downsides of reduced economic activity during lock-down periods. The utility of ESOP is demonstrated using case studies with VIPER (Virus-Individual-Policy-EnviRonment), a stochastic agent-based simulator that this paper also proposes. However, ESOP is flexible enough to interact with arbitrary epidemiological simulators in a black-box manner, and produce schedules that involve multiple phases of lock-downs. |
1401.5823 | Samuel Gross | Samuel M. Gross, Robert Tibshirani | Collaborative Regression | 13 pages, 4 figures | null | null | null | q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the scenario where one observes an outcome variable and sets of
features from multiple assays, all measured on the same set of samples. One
approach that has been proposed for dealing with this type of data is ``sparse
multiple canonical correlation analysis'' (sparse mCCA). All of the current
sparse mCCA techniques are biconvex and thus have no guarantees about reaching
a global optimum. We propose a method for performing sparse supervised
canonical correlation analysis (sparse sCCA), a specific case of sparse mCCA
when one of the datasets is a vector. Our proposal for sparse sCCA is convex
and thus does not face the same difficulties as the other methods. We derive
efficient algorithms for this problem, and illustrate their use on simulated
and real data.
| [
{
"created": "Wed, 22 Jan 2014 23:00:11 GMT",
"version": "v1"
}
] | 2014-01-24 | [
[
"Gross",
"Samuel M.",
""
],
[
"Tibshirani",
"Robert",
""
]
] | We consider the scenario where one observes an outcome variable and sets of features from multiple assays, all measured on the same set of samples. One approach that has been proposed for dealing with this type of data is ``sparse multiple canonical correlation analysis'' (sparse mCCA). All of the current sparse mCCA techniques are biconvex and thus have no guarantees about reaching a global optimum. We propose a method for performing sparse supervised canonical correlation analysis (sparse sCCA), a specific case of sparse mCCA when one of the datasets is a vector. Our proposal for sparse sCCA is convex and thus does not face the same difficulties as the other methods. We derive efficient algorithms for this problem, and illustrate their use on simulated and real data. |
1112.5966 | Ralph Brinks | Ralph Brinks | Calculation of the mean duration and age of onset of a chronic disease
and application to dementia in Germany | 11 pages, 4 figures | null | null | null | q-bio.QM q-bio.PE stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a new method of calculating the mean duration and mean
age of onset of a chronic disease from incidence and mortality rates. It is
based on an ordinary differential equation resulting from a simple compartment
model. Applicability of the method is demonstrated in data about dementia in
Germany.
| [
{
"created": "Tue, 27 Dec 2011 14:23:55 GMT",
"version": "v1"
}
] | 2011-12-30 | [
[
"Brinks",
"Ralph",
""
]
] | This paper describes a new method of calculating the mean duration and mean age of onset of a chronic disease from incidence and mortality rates. It is based on an ordinary differential equation resulting from a simple compartment model. Applicability of the method is demonstrated in data about dementia in Germany. |
2311.13977 | Joaquin Torres | Gustavo Menesse, Joaquin J. Torres | Information dynamics of $in\; silico$ EEG Brain Waves: Insights into
oscillations and functions | 47 pages, 15 figures | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relation between EEG rhythms, brain functions, and behavioral correlates
is well-established. Some mechanisms underlying rhythm generation are
understood, enabling the replication of brain rhythms $in\; silico$. This
allows one to explore relations between neural oscillations and specific neuronal
circuits, helping to decipher the functional properties of brain waves.
The Integrated Information Decomposition ($\Phi$-ID) framework relates dynamical
regimes with informational properties, providing deeper insights into neuronal
dynamic functions. Here, we investigate wave emergence in an
excitatory/inhibitory (E/I) balanced network of IF neurons with short-term
synaptic plasticity producing a diverse range of EEG-like rhythms, from low
$\delta$ waves to high-frequency oscillations. Through $\Phi$-ID, we analyze
the network's information dynamics, elucidating the system's suitability for
robust information transfer, storage, and parallel operation. Our study also
identifies regimes that may resemble pathological states due to poor
informational properties and high randomness. We found that $in\; silico$
$\beta$ and $\delta$ waves are associated with maximum information transfer in
inhibitory and excitatory neuron populations, and the coexistence of excitatory
$\theta$, $\alpha$, and $\beta$ waves associated with information storage. Also,
high-frequency oscillations can exhibit either high or poor informational
properties, shedding light on discussions regarding physiological versus
pathological high-frequency oscillations. Our study demonstrates that dynamical
regimes with similar oscillations may exhibit different information dynamics.
Finally, our findings suggest that the use of information dynamics in both
model and experimental data analysis could help discriminate between
oscillations associated with cognitive functions and those linked to neuronal
disorders.
| [
{
"created": "Thu, 23 Nov 2023 12:43:11 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 10:40:53 GMT",
"version": "v2"
}
] | 2024-08-02 | [
[
"Menesse",
"Gustavo",
""
],
[
"Torres",
"Joaquin J.",
""
]
] | The relation between EEG rhythms, brain functions, and behavioral correlates is well-established. Some mechanisms underlying rhythm generation are understood, enabling the replication of brain rhythms $in\; silico$. This allows one to explore relations between neural oscillations and specific neuronal circuits, helping to decipher the functional properties of brain waves. The Integrated Information Decomposition ($\Phi$-ID) framework relates dynamical regimes with informational properties, providing deeper insights into neuronal dynamic functions. Here, we investigate wave emergence in an excitatory/inhibitory (E/I) balanced network of IF neurons with short-term synaptic plasticity producing a diverse range of EEG-like rhythms, from low $\delta$ waves to high-frequency oscillations. Through $\Phi$-ID, we analyze the network's information dynamics, elucidating the system's suitability for robust information transfer, storage, and parallel operation. Our study also identifies regimes that may resemble pathological states due to poor informational properties and high randomness. We found that $in\; silico$ $\beta$ and $\delta$ waves are associated with maximum information transfer in inhibitory and excitatory neuron populations, and the coexistence of excitatory $\theta$, $\alpha$, and $\beta$ waves associated with information storage. Also, high-frequency oscillations can exhibit either high or poor informational properties, shedding light on discussions regarding physiological versus pathological high-frequency oscillations. Our study demonstrates that dynamical regimes with similar oscillations may exhibit different information dynamics. Finally, our findings suggest that the use of information dynamics in both model and experimental data analysis could help discriminate between oscillations associated with cognitive functions and those linked to neuronal disorders. |
1711.10991 | Alejandro Tabas | Alejandro Tabas, Martin Andermann, Valeria Sebold, Helmut Riedel,
Emili Balaguer-Ballester and Andr\'e Rupp | Early processing of consonance and dissonance in human auditory cortex | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pitch is the perceptual correlate of sound's periodicity and a fundamental
property of the auditory sensation. The interaction of two or more pitches
gives rise to a sensation that can be characterized by its degree of consonance
or dissonance. In the current study, we investigated the neuromagnetic
representations of consonant and dissonant musical dyads using a new model of
cortical activity, in an effort to assess the possible involvement of
pitch-specific neural mechanisms in consonance processing at early cortical
stages.
In the first step of the study, we developed a novel model of cortical pitch
processing designed to explain the morphology of the pitch onset response
(POR), a pitch-specific subcomponent of the auditory evoked N100 component in
the human auditory cortex. The model explains the neural mechanisms underlying
the generation of the POR and quantitatively accounts for the relation between
its peak latency and the perceived pitch.
Next, we applied magnetoencephalography (MEG) to record the POR as elicited
by six consonant and dissonant dyads. The peak latency of the POR was strongly
modulated by the degree of consonance within the stimuli; specifically, the
most dissonant dyad exhibited a POR with a latency that was about 30ms longer
than that of the most consonant dyad, an effect that greatly exceeds the
expected latency difference induced by a single pitch sound.
Our model was able to predict the POR latency pattern observed in the
neuromagnetic data, and to generalize this prediction to additional dyads.
These results indicate that the neural mechanisms responsible for pitch
processing exhibit an intrinsic differential response to concurrent consonant
and dissonant pitch combinations, suggesting that the perception of consonance
and dissonance might be an emergent property of the pitch processing system in
human auditory cortex.
| [
{
"created": "Wed, 29 Nov 2017 18:10:29 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2017 13:26:37 GMT",
"version": "v2"
}
] | 2017-12-01 | [
[
"Tabas",
"Alejandro",
""
],
[
"Andermann",
"Martin",
""
],
[
"Sebold",
"Valeria",
""
],
[
"Riedel",
"Helmut",
""
],
[
"Balaguer-Ballester",
"Emili",
""
],
[
"Rupp",
"André",
""
]
] | Pitch is the perceptual correlate of sound's periodicity and a fundamental property of the auditory sensation. The interaction of two or more pitches gives rise to a sensation that can be characterized by its degree of consonance or dissonance. In the current study, we investigated the neuromagnetic representations of consonant and dissonant musical dyads using a new model of cortical activity, in an effort to assess the possible involvement of pitch-specific neural mechanisms in consonance processing at early cortical stages. In the first step of the study, we developed a novel model of cortical pitch processing designed to explain the morphology of the pitch onset response (POR), a pitch-specific subcomponent of the auditory evoked N100 component in the human auditory cortex. The model explains the neural mechanisms underlying the generation of the POR and quantitatively accounts for the relation between its peak latency and the perceived pitch. Next, we applied magnetoencephalography (MEG) to record the POR as elicited by six consonant and dissonant dyads. The peak latency of the POR was strongly modulated by the degree of consonance within the stimuli; specifically, the most dissonant dyad exhibited a POR with a latency that was about 30ms longer than that of the most consonant dyad, an effect that greatly exceeds the expected latency difference induced by a single pitch sound. Our model was able to predict the POR latency pattern observed in the neuromagnetic data, and to generalize this prediction to additional dyads. These results indicate that the neural mechanisms responsible for pitch processing exhibit an intrinsic differential response to concurrent consonant and dissonant pitch combinations, suggesting that the perception of consonance and dissonance might be an emergent property of the pitch processing system in human auditory cortex. |
2212.03660 | Bo Liu | Bo Liu, Rongmei Yang, Hao Wang, Linyuan L\"u | Complete cavity map of the C. elegans connectome | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network structure or topology is the basis for understanding complex systems.
Recently, higher-order structures have been considered as a new research
direction that can provide new perspectives and phenomena. However, most
existing studies have focused on simple 1-cycles and triangles, and few have
explored the higher-order and non-trivial cycles. The current study focused on
the cavity, which is a non-trivial cycle. We proposed a method to compute
cavities with different orders based on pruning in the boundary matrix and
calculated all cavities of the Caenorhabditis elegans (\emph{C. elegans})
neural network. This study reports for the first time a complete cavity map of
the C. elegans neural network, developing a new method for
mining higher-order structures that can be applied by researchers in
neuroscience, network science and other interdisciplinary fields to explore
higher-order structural markers of complex systems.
| [
{
"created": "Wed, 7 Dec 2022 14:23:46 GMT",
"version": "v1"
}
] | 2022-12-08 | [
[
"Liu",
"Bo",
""
],
[
"Yang",
"Rongmei",
""
],
[
"Wang",
"Hao",
""
],
[
"Lü",
"Linyuan",
""
]
] | Network structure or topology is the basis for understanding complex systems. Recently, higher-order structures have been considered as a new research direction that can provide new perspectives and phenomena. However, most existing studies have focused on simple 1-cycles and triangles, and few have explored the higher-order and non-trivial cycles. The current study focused on the cavity, which is a non-trivial cycle. We proposed a method to compute cavities with different orders based on pruning in the boundary matrix and calculated all cavities of the Caenorhabditis elegans (\emph{C. elegans}) neural network. This study reports for the first time a complete cavity map of the C. elegans neural network, developing a new method for mining higher-order structures that can be applied by researchers in neuroscience, network science and other interdisciplinary fields to explore higher-order structural markers of complex systems. |
1210.1844 | Francisco-Jose Perez-Reche | Francisco J. Perez-Reche, Franco M. Neri, Sergei N. Taraskin and
Christopher A. Gilligan | Prediction of invasion from the early stage of an epidemic | Main text: 18 pages, 7 figures. Supporting information: 21 pages, 8
figures | Journal of the Royal Society Interface, 9, 2085 (2012) | 10.1098/rsif.2012.0130 | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictability of undesired events is a question of great interest in many
scientific disciplines including seismology, economics, and epidemiology. Here,
we focus on the predictability of invasion of a broad class of epidemics caused
by diseases that lead to permanent immunity of infected hosts after recovery or
death. We approach the problem from the perspective of the science of
complexity by proposing and testing several strategies for the estimation of
important characteristics of epidemics, such as the probability of invasion.
Our results suggest that parsimonious approximate methodologies may lead to the
most reliable and robust predictions. The proposed methodologies are first
applied to analysis of experimentally observed epidemics: invasion of the
fungal plant pathogen \emph{Rhizoctonia solani} in replicated host microcosms.
We then consider numerical experiments of the SIR
(susceptible-infected-removed) model to investigate the performance of the
proposed methods in further detail. The suggested framework can be used as a
valuable tool for quick assessment of epidemic threat at the stage when
epidemics only start developing. Moreover, our work amplifies the significance
of the small-scale and finite-time microcosm realizations of epidemics,
revealing their predictive power.
| [
{
"created": "Fri, 5 Oct 2012 19:52:42 GMT",
"version": "v1"
}
] | 2012-10-08 | [
[
"Perez-Reche",
"Francisco J.",
""
],
[
"Neri",
"Franco M.",
""
],
[
"Taraskin",
"Sergei N.",
""
],
[
"Gilligan",
"Christopher A.",
""
]
] | Predictability of undesired events is a question of great interest in many scientific disciplines including seismology, economics, and epidemiology. Here, we focus on the predictability of invasion of a broad class of epidemics caused by diseases that lead to permanent immunity of infected hosts after recovery or death. We approach the problem from the perspective of the science of complexity by proposing and testing several strategies for the estimation of important characteristics of epidemics, such as the probability of invasion. Our results suggest that parsimonious approximate methodologies may lead to the most reliable and robust predictions. The proposed methodologies are first applied to analysis of experimentally observed epidemics: invasion of the fungal plant pathogen \emph{Rhizoctonia solani} in replicated host microcosms. We then consider numerical experiments of the SIR (susceptible-infected-removed) model to investigate the performance of the proposed methods in further detail. The suggested framework can be used as a valuable tool for quick assessment of epidemic threat at the stage when epidemics only start developing. Moreover, our work amplifies the significance of the small-scale and finite-time microcosm realizations of epidemics, revealing their predictive power. |
1702.03318 | Hamidreza Ardeshiri | H. Ardeshiri, F. G. Schmitt, S. Souissi, F. Toschi and E. Calzavarini | Copepods encounter rates from a model of escape jump behaviour in
turbulence | 11 pages, 10 figures | Journal of Plankton Research, 39(6), 878-890, (2017) | 10.1093/plankt/fbx051 | null | q-bio.PE physics.bio-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key ecological parameter for planktonic copepod studies is their
interspecies encounter rate which is driven by their behaviour and is strongly
influenced by turbulence of the surrounding environment. A distinctive feature
of copepod motility is their ability to perform quick displacements, often
dubbed jumps, by means of powerful swimming strokes. Such a reaction has been
associated with an escape behaviour from flow disturbances due to predators or
other external dangers. In the present study, the encounter rate of copepods in
a developed turbulent flow with intensity comparable to the one found in
copepods' habitat is numerically investigated. This is done by means of a
Lagrangian copepod (LC) model that mimics the jump escape reaction behaviour
from localised high-shear rate fluctuations in the turbulent flows. Our
analysis shows that the encounter rate for copepods of typical perception
radius of ~ {\eta}, where {\eta} is the dissipative scale of turbulence, can be
increased by a factor up to ~ 100 compared to the one experienced by passively
transported fluid tracers. Furthermore, we address the effect of introducing in
the LC model a minimal waiting time between consecutive jumps. It is shown that
any encounter-rate enhancement is lost if such time goes beyond the dissipative
time-scale of turbulence, {\tau}_{\eta}. Because typically in the ocean {\eta}
~ 0.001m and {\tau}_{\eta} ~ 1s, this provides stringent constraints on the
turbulent-driven enhancement of encounter-rate due to a purely mechanical
induced escape reaction.
| [
{
"created": "Fri, 10 Feb 2017 20:26:46 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2017 19:36:55 GMT",
"version": "v2"
}
] | 2017-12-07 | [
[
"Ardeshiri",
"H.",
""
],
[
"Schmitt",
"F. G.",
""
],
[
"Souissi",
"S.",
""
],
[
"Toschi",
"F.",
""
],
[
"Calzavarini",
"E.",
""
]
] | A key ecological parameter for planktonic copepod studies is their interspecies encounter rate which is driven by their behaviour and is strongly influenced by turbulence of the surrounding environment. A distinctive feature of copepod motility is their ability to perform quick displacements, often dubbed jumps, by means of powerful swimming strokes. Such a reaction has been associated with an escape behaviour from flow disturbances due to predators or other external dangers. In the present study, the encounter rate of copepods in a developed turbulent flow with intensity comparable to the one found in copepods' habitat is numerically investigated. This is done by means of a Lagrangian copepod (LC) model that mimics the jump escape reaction behaviour from localised high-shear rate fluctuations in the turbulent flows. Our analysis shows that the encounter rate for copepods of typical perception radius of ~ {\eta}, where {\eta} is the dissipative scale of turbulence, can be increased by a factor up to ~ 100 compared to the one experienced by passively transported fluid tracers. Furthermore, we address the effect of introducing in the LC model a minimal waiting time between consecutive jumps. It is shown that any encounter-rate enhancement is lost if such time goes beyond the dissipative time-scale of turbulence, {\tau}_{\eta}. Because typically in the ocean {\eta} ~ 0.001m and {\tau}_{\eta} ~ 1s, this provides stringent constraints on the turbulent-driven enhancement of encounter-rate due to a purely mechanical induced escape reaction. |
q-bio/0501034 | Max Shpak | Max Shpak | Evolution of Variance in Offspring Number: the Effects of Population
Size and Migration | null | null | null | null | q-bio.PE | null | It was shown by Gillespie (1974) that if two genotypes produce the same
average number of offspring but have a different variance in offspring number
within each generation, the genotype with a lower variance will have a higher
effective fitness. Specifically, the effective fitness is W(effective)=w-var/N,
where w is the mean fitness, var is the variance in offspring number, and N is
the total population size. The model also predicts that if a strategy has a
higher arithmetic mean fitness and a higher variance than the competitor, the
outcome of selection will depend on the population size (with larger population
sizes favoring the high variance, high mean genotype). This suggests that for
metapopulations with large numbers of (relatively) small demes, a strategy with
lower variance and lower mean may be favored if the migration rate is low while
higher migration rates (consistent with a larger effective population size)
favor the opposite strategy. Individual-based simulation confirms that this is
indeed the case for an island model of migration, though the effect of
migration differs greatly depending on whether migration precedes or follows
selection. It is noted in the appendix that while Gillespie 1974 does seem to
be heuristically accurate, it is not clear that the definition of effective
fitness follows from his derivation.
| [
{
"created": "Wed, 26 Jan 2005 22:39:05 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Shpak",
"Max",
""
]
] | It was shown by Gillespie (1974) that if two genotypes produce the same average number of offspring but have a different variance in offspring number within each generation, the genotype with a lower variance will have a higher effective fitness. Specifically, the effective fitness is W(effective)=w-var/N, where w is the mean fitness, var is the variance in offspring number, and N is the total population size. The model also predicts that if a strategy has a higher arithmetic mean fitness and a higher variance than the competitor, the outcome of selection will depend on the population size (with larger population sizes favoring the high variance, high mean genotype). This suggests that for metapopulations with large numbers of (relatively) small demes, a strategy with lower variance and lower mean may be favored if the migration rate is low while higher migration rates (consistent with a larger effective population size) favor the opposite strategy. Individual-based simulation confirms that this is indeed the case for an island model of migration, though the effect of migration differs greatly depending on whether migration precedes or follows selection. It is noted in the appendix that while Gillespie 1974 does seem to be heuristically accurate, it is not clear that the definition of effective fitness follows from his derivation. |
1505.04228 | Yun S. Song | Jonathan Terhorst and Yun S. Song | Fundamental limits on the accuracy of demographic inference based on the
sample frequency spectrum | 17 pages, 1 figure | Proc. Natl. Acad. Sci. U.S.A., Vol. 112, No. 25 (2015) 7677-7682 | 10.1073/pnas.1503717112 | null | q-bio.PE math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sample frequency spectrum (SFS) of DNA sequences from a collection of
individuals is a summary statistic which is commonly used for parametric
inference in population genetics. Despite the popularity of SFS-based inference
methods, currently little is known about the information-theoretic limit on the
estimation accuracy as a function of sample size. Here, we show that using the
SFS to estimate the size history of a population has a minimax error of at
least $O(1/\log s)$, where $s$ is the number of independent segregating sites
used in the analysis. This rate is exponentially worse than known convergence
rates for many classical estimation problems in statistics. Another surprising
aspect of our theoretical bound is that it does not depend on the dimension of
the SFS, which is related to the number of sampled individuals. This means
that, for a fixed number $s$ of segregating sites considered, using more
individuals does not help to reduce the minimax error bound. Our result
pertains to populations that have experienced a bottleneck, and we argue that
it can be expected to apply to many populations in nature.
| [
{
"created": "Sat, 16 May 2015 01:25:19 GMT",
"version": "v1"
}
] | 2015-06-24 | [
[
"Terhorst",
"Jonathan",
""
],
[
"Song",
"Yun S.",
""
]
] | The sample frequency spectrum (SFS) of DNA sequences from a collection of individuals is a summary statistic which is commonly used for parametric inference in population genetics. Despite the popularity of SFS-based inference methods, currently little is known about the information-theoretic limit on the estimation accuracy as a function of sample size. Here, we show that using the SFS to estimate the size history of a population has a minimax error of at least $O(1/\log s)$, where $s$ is the number of independent segregating sites used in the analysis. This rate is exponentially worse than known convergence rates for many classical estimation problems in statistics. Another surprising aspect of our theoretical bound is that it does not depend on the dimension of the SFS, which is related to the number of sampled individuals. This means that, for a fixed number $s$ of segregating sites considered, using more individuals does not help to reduce the minimax error bound. Our result pertains to populations that have experienced a bottleneck, and we argue that it can be expected to apply to many populations in nature. |
1602.03008 | Vince Grolmusz | Balazs Szalkai and Balint Varga and Vince Grolmusz | Mapping Correlations of Psychological and Connectomical Properties of
the Dataset of the Human Connectome Project with the Maximum Spanning Tree
Method | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyzed correlations between more than 700 psychological, anatomical, and connectome properties, originating from the Human Connectome Project's
(HCP) 500-subject dataset. Apart from numerous natural correlations, which
describe parameters computable or approximable from one another, we have
discovered numerous significant correlations in the dataset, never described
before. We have also found correlations described very recently independently
from the HCP-dataset: e.g., between gambling behavior and the number of the
connections leaving the insula.
| [
{
"created": "Tue, 9 Feb 2016 14:49:17 GMT",
"version": "v1"
}
] | 2016-02-10 | [
[
"Szalkai",
"Balazs",
""
],
[
"Varga",
"Balint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | We analyzed correlations between more than 700 psychological, anatomical, and connectome properties, originating from the Human Connectome Project's (HCP) 500-subject dataset. Apart from numerous natural correlations, which describe parameters computable or approximable from one another, we have discovered numerous significant correlations in the dataset, never described before. We have also found correlations described very recently independently from the HCP-dataset: e.g., between gambling behavior and the number of the connections leaving the insula. |
1404.4184 | Tomasz Rutkowski | Katsuhiko Hamada, Hiromu Mori, Hiroyuki Shinoda, and Tomasz M.
Rutkowski | Airborne Ultrasonic Tactile Display Brain-computer Interface Paradigm | 5 pages, 3 figures, submitted to 6th International Brain-Computer
Interface Conference 2014, Graz, Austria | null | 10.3217/978-3-85125-378-8-18 | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We study the extent to which contact-less and airborne ultrasonic tactile
display (AUTD) stimuli delivered to the palms of a user can serve as a platform
for a brain computer interface (BCI) paradigm. Six palm positions are used to
evoke combined somatosensory brain responses, in order to define a novel
contact-less tactile BCI. A comparison is made with classical attached
vibrotactile transducers. Experimental results from subjects performing online
experiments validate the novel BCI paradigm.
| [
{
"created": "Wed, 16 Apr 2014 10:10:25 GMT",
"version": "v1"
}
] | 2015-01-06 | [
[
"Hamada",
"Katsuhiko",
""
],
[
"Mori",
"Hiromu",
""
],
[
"Shinoda",
"Hiroyuki",
""
],
[
"Rutkowski",
"Tomasz M.",
""
]
] | We study the extent to which contact-less and airborne ultrasonic tactile display (AUTD) stimuli delivered to the palms of a user can serve as a platform for a brain computer interface (BCI) paradigm. Six palm positions are used to evoke combined somatosensory brain responses, in order to define a novel contact-less tactile BCI. A comparison is made with classical attached vibrotactile transducers. Experimental results from subjects performing online experiments validate the novel BCI paradigm. |
1304.7441 | Xin Liu | Xin Liu | A remote response of ATP hydrolysis | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ATP-hydrolysis is the basic energy source of many physiological processes,
but there is a lack of knowledge regarding its biological role other than
energy transfer and thermogenesis. Not all the energy released by
ATP-hydrolysis could be used in powering target biological processes and
functions. Part of the energy dissipates into water. By examining the impact of
this dissipated energy, we found that energy released by ATP hydrolysis can
induce notable regulation of a biomolecule's properties 100 nanometers away.
Namely, ATP hydrolysis is recycled in remote biochemical property modulation.
| [
{
"created": "Sun, 28 Apr 2013 08:02:52 GMT",
"version": "v1"
}
] | 2013-04-30 | [
[
"Liu",
"Xin",
""
]
] | ATP-hydrolysis is the basic energy source of many physiological processes, but there is a lack of knowledge regarding its biological role other than energy transfer and thermogenesis. Not all the energy released by ATP-hydrolysis could be used in powering target biological processes and functions. Part of the energy dissipates into water. By examining the impact of this dissipated energy, we found that energy released by ATP hydrolysis can induce notable regulation of a biomolecule's properties 100 nanometers away. Namely, ATP hydrolysis is recycled in remote biochemical property modulation. |
1612.03828 | Sergii Domanskyi | Sergii Domanskyi, Joshua E. Schilling, Vyacheslav Gorshkov, Sergiy
Libert, Vladimir Privman | Rate-Equation Modelling and Ensemble Approach to Extraction of
Parameters for Viral Infection-Induced Cell Apoptosis and Necrosis | null | J. Chem. Phys. 145 (9), Article 094103, 8 pages (2016) | 10.1063/1.4961676 | VP-275 | q-bio.QM cond-mat.stat-mech physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a theoretical approach that uses physicochemical kinetics modelling
to describe cell population dynamics upon progression of viral infection in
cell culture, which results in cell apoptosis (programmed cell death) and
necrosis (direct cell death). Several model parameters necessary for computer
simulation were determined by reviewing and analyzing available published
experimental data. By comparing experimental data to computer modelling
results, we identify the parameters that are the most sensitive to the measured
system properties and allow for the best data fitting. Our model allows
extraction of parameters from experimental data and also has predictive power.
Using the model we describe interesting time-dependent quantities that were not
directly measured in the experiment, and identify correlations among the fitted
parameter values. Numerical simulation of viral infection progression is done
by a rate-equation approach resulting in a system of "stiff" equations, which
are solved by using a novel variant of the stochastic ensemble modelling
approach. The latter was originally developed for coupled chemical reactions.
| [
{
"created": "Mon, 12 Dec 2016 18:14:02 GMT",
"version": "v1"
}
] | 2016-12-30 | [
[
"Domanskyi",
"Sergii",
""
],
[
"Schilling",
"Joshua E.",
""
],
[
"Gorshkov",
"Vyacheslav",
""
],
[
"Libert",
"Sergiy",
""
],
[
"Privman",
"Vladimir",
""
]
] | We develop a theoretical approach that uses physicochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment, and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions. |
1911.02363 | Chi-Ning Chou | Chi-Ning Chou, Mien Brabeeba Wang | ODE-Inspired Analysis for the Biological Version of Oja's Rule in
Solving Streaming PCA | Accepted for presentation at the Conference on Learning Theory (COLT)
2020 | null | null | null | q-bio.NC cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oja's rule [Oja, Journal of mathematical biology 1982] is a well-known
biologically-plausible algorithm using a Hebbian-type synaptic update rule to
solve streaming principal component analysis (PCA). Computational
neuroscientists have known that this biological version of Oja's rule converges
to the top eigenvector of the covariance matrix of the input in the limit.
However, prior to this work, it was open to prove any convergence rate
guarantee.
In this work, we give the first convergence rate analysis for the biological
version of Oja's rule in solving streaming PCA. Moreover, our convergence rate
matches the information theoretical lower bound up to logarithmic factors and
outperforms the state-of-the-art upper bound for streaming PCA. Furthermore, we
develop a novel framework inspired by ordinary differential equations (ODE) to
analyze general stochastic dynamics. The framework abandons the traditional
step-by-step analysis and instead analyzes a stochastic dynamic in one-shot by
giving a closed-form solution to the entire dynamic. The one-shot framework
allows us to apply stopping time and martingale techniques to have a flexible
and precise control on the dynamic. We believe that this general framework is
powerful and should lead to effective yet simple analysis for a large class of
problems with stochastic dynamics.
| [
{
"created": "Mon, 4 Nov 2019 16:01:32 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jun 2020 21:32:51 GMT",
"version": "v2"
}
] | 2020-06-19 | [
[
"Chou",
"Chi-Ning",
""
],
[
"Wang",
"Mien Brabeeba",
""
]
] | Oja's rule [Oja, Journal of mathematical biology 1982] is a well-known biologically-plausible algorithm using a Hebbian-type synaptic update rule to solve streaming principal component analysis (PCA). Computational neuroscientists have known that this biological version of Oja's rule converges to the top eigenvector of the covariance matrix of the input in the limit. However, prior to this work, it was open to prove any convergence rate guarantee. In this work, we give the first convergence rate analysis for the biological version of Oja's rule in solving streaming PCA. Moreover, our convergence rate matches the information theoretical lower bound up to logarithmic factors and outperforms the state-of-the-art upper bound for streaming PCA. Furthermore, we develop a novel framework inspired by ordinary differential equations (ODE) to analyze general stochastic dynamics. The framework abandons the traditional step-by-step analysis and instead analyzes a stochastic dynamic in one-shot by giving a closed-form solution to the entire dynamic. The one-shot framework allows us to apply stopping time and martingale techniques to have a flexible and precise control on the dynamic. We believe that this general framework is powerful and should lead to effective yet simple analysis for a large class of problems with stochastic dynamics. |
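The synaptic update analyzed in the record above is concrete enough to sketch. Below is a minimal streaming-PCA simulation of Oja's rule; the step size, data distribution, and iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stream with one dominant principal direction: variance 9 along e1, 1 elsewhere.
d, n_steps, eta = 5, 20000, 0.01
w = rng.normal(size=d)
w /= np.linalg.norm(w)

for _ in range(n_steps):
    x = rng.normal(size=d)
    x[0] *= 3.0                      # inflate variance along the first axis
    y = w @ x                        # "neuronal" output
    w += eta * y * (x - y * w)       # Oja's Hebbian update with implicit normalization

w /= np.linalg.norm(w)               # compare directions, not magnitudes
alignment = abs(w[0])                # |cosine| with the true top eigenvector e1
```

With a large eigengap (9 vs. 1) the weight vector aligns closely with the top eigenvector, consistent with the classical limit result the paper quantifies.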
2303.03580 | Shuji Ishihara | Nen Saito and Shuji Ishihara | Active Deformable Cells Undergo Cell Shape Transition Associated with
Percolation of Topological Defects | 9 pages, 4 figures | null | null | null | q-bio.TO cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell deformability is an essential determinant for tissue-scale mechanical
nature, such as fluidity and rigidity, and is thus crucial for understanding
tissue homeostasis and stable developmental processes. However, numerical
simulations of the collective dynamics of cells with arbitrary cell
deformations, akin to mesenchymal, ameboid, and epithelial cells in a
non-confluent situation, incur high computational costs and are still
challenging. Here we propose a new method that allows us to study significantly
larger numbers of cells than existing methods. Using the method, we
investigated the densely packed active cell population interacting via excluded
volume interactions, and discovered the emergence of two fluid phases in
deformable cell populations, a soft-fluid phase with drastically deformed cell
shapes and a fluid phase with circular cell shapes. The transition between
these two phases is characterized by the percolation of topological defects,
which is experimentally testable.
| [
{
"created": "Tue, 7 Mar 2023 01:25:45 GMT",
"version": "v1"
}
] | 2023-03-08 | [
[
"Saito",
"Nen",
""
],
[
"Ishihara",
"Shuji",
""
]
] | Cell deformability is an essential determinant for tissue-scale mechanical nature, such as fluidity and rigidity, and is thus crucial for understanding tissue homeostasis and stable developmental processes. However, numerical simulations of the collective dynamics of cells with arbitrary cell deformations, akin to mesenchymal, ameboid, and epithelial cells in a non-confluent situation, incur high computational costs and are still challenging. Here we propose a new method that allows us to study significantly larger numbers of cells than existing methods. Using the method, we investigated the densely packed active cell population interacting via excluded volume interactions, and discovered the emergence of two fluid phases in deformable cell populations, a soft-fluid phase with drastically deformed cell shapes and a fluid phase with circular cell shapes. The transition between these two phases is characterized by the percolation of topological defects, which is experimentally testable. |
1303.5889 | Gregory Ryslik | Gregory Ryslik and Yuwei Cheng and Kei-Hoi Cheung and Yorgo Modis and
Hongyu Zhao | A Graph Theoretic Approach to Utilizing Protein Structure to Identify
Non-Random Somatic Mutations | 25 pages, 8 figures, 3 Tables | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: It is well known that the development of cancer is caused by the
accumulation of somatic mutations within the genome. For oncogenes
specifically, current research suggests that there is a small set of "driver"
mutations that are primarily responsible for tumorigenesis. Further, due to
some recent pharmacological successes in treating these driver mutations and
their resulting tumors, a variety of methods have been developed to identify
potential driver mutations using methods such as machine learning and
mutational clustering. We propose a novel methodology that increases our power
to identify mutational clusters by taking into account protein tertiary
structure via a graph theoretical approach.
Results: We have designed and implemented GraphPAC (Graph Protein Amino Acid
Clustering) to identify mutational clustering while considering protein spatial
structure. Using GraphPAC, we are able to detect novel clusters in proteins
that are known to exhibit mutation clustering as well as identify clusters in
proteins without evidence of prior clustering based on current methods.
Specifically, by utilizing the spatial information available in the Protein
Data Bank (PDB) along with the mutational data in the Catalogue of Somatic
Mutations in Cancer (COSMIC), GraphPAC identifies new mutational clusters in
well known oncogenes such as EGFR and KRAS. Further, by utilizing graph theory
to account for the tertiary structure, GraphPAC identifies clusters in DPP4,
NRP1 and other proteins not identified by existing methods. The R package is
available at: http://bioconductor.org/packages/release/bioc/html/GraphPAC.html
Conclusion: GraphPAC provides an alternative to iPAC and an extension to
current methodology when identifying potential activating driver mutations by
utilizing a graph theoretic approach when considering protein tertiary
structure.
| [
{
"created": "Sat, 23 Mar 2013 22:27:32 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2013 21:32:45 GMT",
"version": "v2"
}
] | 2013-07-16 | [
[
"Ryslik",
"Gregory",
""
],
[
"Cheng",
"Yuwei",
""
],
[
"Cheung",
"Kei-Hoi",
""
],
[
"Modis",
"Yorgo",
""
],
[
"Zhao",
"Hongyu",
""
]
] | Background: It is well known that the development of cancer is caused by the accumulation of somatic mutations within the genome. For oncogenes specifically, current research suggests that there is a small set of "driver" mutations that are primarily responsible for tumorigenesis. Further, due to some recent pharmacological successes in treating these driver mutations and their resulting tumors, a variety of methods have been developed to identify potential driver mutations using methods such as machine learning and mutational clustering. We propose a novel methodology that increases our power to identify mutational clusters by taking into account protein tertiary structure via a graph theoretical approach. Results: We have designed and implemented GraphPAC (Graph Protein Amino Acid Clustering) to identify mutational clustering while considering protein spatial structure. Using GraphPAC, we are able to detect novel clusters in proteins that are known to exhibit mutation clustering as well as identify clusters in proteins without evidence of prior clustering based on current methods. Specifically, by utilizing the spatial information available in the Protein Data Bank (PDB) along with the mutational data in the Catalogue of Somatic Mutations in Cancer (COSMIC), GraphPAC identifies new mutational clusters in well known oncogenes such as EGFR and KRAS. Further, by utilizing graph theory to account for the tertiary structure, GraphPAC identifies clusters in DPP4, NRP1 and other proteins not identified by existing methods. The R package is available at: http://bioconductor.org/packages/release/bioc/html/GraphPAC.html Conclusion: GraphPAC provides an alternative to iPAC and an extension to current methodology when identifying potential activating driver mutations by utilizing a graph theoretic approach when considering protein tertiary structure. |
2312.15789 | Manuel Eduardo Hern\'andez-Garc\'ia | Manuel Eduardo Hern\'andez-Garc\'ia, Jorge Vel\'azquez-Castro | Exploring the Relationship between Fractional Hill Coefficient,
Intermediate Processes and Mesoscopic Fluctuations | 12 pages, 9 figures | null | null | null | q-bio.MN q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Hill function is relevant for describing enzyme binding and other
processes in gene regulatory networks. Despite its theoretical foundation, it
is often used empirically as a useful fitting function. Theoretical predictions
suggest that the Hill coefficient should be an integer; however, it is often
assigned a fractional value. This study aims to show that the use of fractional
Hill coefficients actually indicates the presence of intermediate processes or,
in some cases, the effect of noise on these systems. The deterministic
approximation of enzymatic processes leads to the derivation of the Hill
function, which can be expanded around the noise magnitude to derive mesoscopic
corrections. This study establishes relationships between the intermediate
processes and the fractional Hill coefficient, both with and without fluctuations.
This outcome contributes to a deeper understanding of the underlying processes
associated with the fractional Hill coefficient, while also enabling the
prediction of an effective value of the Hill coefficient from the underlying
mechanism, allowing us to have a simplified description of complex systems.
| [
{
"created": "Mon, 25 Dec 2023 18:53:00 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Hernández-García",
"Manuel Eduardo",
""
],
[
"Velázquez-Castro",
"Jorge",
""
]
] | The Hill function is relevant for describing enzyme binding and other processes in gene regulatory networks. Despite its theoretical foundation, it is often used empirically as a useful fitting function. Theoretical predictions suggest that the Hill coefficient should be an integer; however, it is often assigned a fractional value. This study aims to show that the use of fractional Hill coefficients actually indicates the presence of intermediate processes or, in some cases, the effect of noise on these systems. The deterministic approximation of enzymatic processes leads to the derivation of the Hill function, which can be expanded around the noise magnitude to derive mesoscopic corrections. This study establishes relationships between the intermediate processes and the fractional Hill coefficient, both with and without fluctuations. This outcome contributes to a deeper understanding of the underlying processes associated with the fractional Hill coefficient, while also enabling the prediction of an effective value of the Hill coefficient from the underlying mechanism, allowing us to have a simplified description of complex systems. |
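The record above claims that a fractional Hill coefficient signals an intermediate process. As a toy illustration (not the authors' derivation), one can generate a binding curve from a two-step sequential Adair-type scheme with hypothetical stepwise constants K1, K2 and check that its effective Hill coefficient is strictly between 1 and 2:

```python
import numpy as np

# Hypothetical stepwise association constants for a two-site sequential scheme.
K1, K2 = 1.0, 4.0
x = np.logspace(-2, 2, 400)

# Adair fractional saturation: an explicit "intermediate process" binding curve.
theta = (K1 * x + 2 * K1 * K2 * x**2) / (2 * (1 + K1 * x + K1 * K2 * x**2))

# Effective Hill coefficient = slope of the Hill plot, log(theta/(1-theta)) vs log(x).
n_eff = np.gradient(np.log(theta / (1 - theta)), np.log(x))
n_max = float(n_eff.max())
```

For this cooperative choice (4*K2/K1 = 16), the maximal Hill slope is fractional, around 1.6: fitting this curve with a single Hill function would therefore report a non-integer coefficient even though the microscopic steps are integer-order.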
2005.00495 | Robert Thorne S | Robert S Thorne | Inferring the effective fraction of the population infected with
Covid-19 from the behaviour of Lombardy, Madrid and London relative to the
remainder of Italy, Spain and England | 19 pages, 6 figures. The updated article considers the possibility of
variable susceptibility to infection. The analysis remains unaltered but the
interpretation of the results is extended. The overall conclusions are
largely unaltered | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | I use a very simple deterministic model for the spread of Covid-19 in a large
population. Using this to compare the relative decay of the number of deaths
per day between different regions in Italy, Spain and England, each applying in
principle the same social distancing procedures across the whole country, I
obtain an estimate of the total fraction of the population which had already
become infected by April 10th. In the most heavily affected regions, Lombardy,
Madrid and London, this fraction is higher than expected, i.e. $\approx 0.3$.
This result can then be converted to a determination of the infection fatality
rate $ifr$, which appears to be $ifr \approx 0.0025-0.005$, and even smaller in
London, somewhat lower than usually assumed. Alternatively, the result can also
be interpreted as an effectively larger fraction of the population than simple
counting would suggest if there is a variation in susceptibility to infection
with a variance of up to a value of about 2. The implications are very similar
for either interpretation or for a combination of effects.
| [
{
"created": "Fri, 1 May 2020 17:11:10 GMT",
"version": "v1"
},
{
"created": "Mon, 25 May 2020 17:56:48 GMT",
"version": "v2"
}
] | 2020-05-26 | [
[
"Thorne",
"Robert S",
""
]
] | I use a very simple deterministic model for the spread of Covid-19 in a large population. Using this to compare the relative decay of the number of deaths per day between different regions in Italy, Spain and England, each applying in principle the same social distancing procedures across the whole country, I obtain an estimate of the total fraction of the population which had already become infected by April 10th. In the most heavily affected regions, Lombardy, Madrid and London, this fraction is higher than expected, i.e. $\approx 0.3$. This result can then be converted to a determination of the infection fatality rate $ifr$, which appears to be $ifr \approx 0.0025-0.005$, and even smaller in London, somewhat lower than usually assumed. Alternatively, the result can also be interpreted as an effectively larger fraction of the population than simple counting would suggest if there is a variation in susceptibility to infection with a variance of up to a value of about 2. The implications are very similar for either interpretation or for a combination of effects. |
1710.00553 | Michael Peters | M. D. Peters and D. Iber | Simulating Organogenesis in COMSOL: Tissue Mechanics | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During growth, tissue expands and deforms. Given its elastic properties,
stresses emerge in an expanding and deforming tissue. Cell rearrangements can
dissipate these stresses and numerous experiments confirm the viscoelastic
properties of tissues [1]-[4]. On long time scales, as characteristic for many
developmental processes, tissue is therefore typically represented as a liquid,
viscous material and is then described by the Stokes equation [5]-[7]. On short
time scales, however, tissues have mainly elastic properties. In discrete
cell-based tissue models, the elastic tissue properties are realized by springs
between cell vertices [8], [9]. In this article, we adopt a macroscale
perspective of tissue and consider it as homogeneous material. Therefore, we
may use the "Structural Mechanics" module in COMSOL Multiphysics in order to
model the viscoelastic behavior of tissue. Concretely, we consider two
examples: first, we aim at numerically reproducing published [10] analytical
results for the sea urchin blastula. Afterwards, we numerically solve a
continuum mechanics model for the compression and relaxation experiments
presented in [4].
| [
{
"created": "Mon, 2 Oct 2017 09:33:43 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2017 06:52:08 GMT",
"version": "v2"
}
] | 2017-10-20 | [
[
"Peters",
"M. D.",
""
],
[
"Iber",
"D.",
""
]
] | During growth, tissue expands and deforms. Given its elastic properties, stresses emerge in an expanding and deforming tissue. Cell rearrangements can dissipate these stresses and numerous experiments confirm the viscoelastic properties of tissues [1]-[4]. On long time scales, as characteristic for many developmental processes, tissue is therefore typically represented as a liquid, viscous material and is then described by the Stokes equation [5]-[7]. On short time scales, however, tissues have mainly elastic properties. In discrete cell-based tissue models, the elastic tissue properties are realized by springs between cell vertices [8], [9]. In this article, we adopt a macroscale perspective of tissue and consider it as homogeneous material. Therefore, we may use the "Structural Mechanics" module in COMSOL Multiphysics in order to model the viscoelastic behavior of tissue. Concretely, we consider two examples: first, we aim at numerically reproducing published [10] analytical results for the sea urchin blastula. Afterwards, we numerically solve a continuum mechanics model for the compression and relaxation experiments presented in [4]. |
2211.07398 | Yuxiang Wei | Yuxiang Wei, Tengfei Xue, Yogesh Rathi, Nikos Makris, Fan Zhang,
Lauren J. O'Donnell | Age Prediction Performance Varies Across Deep, Superficial, and
Cerebellar White Matter Connections | 5 pages, 1 figure | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain's white matter (WM) undergoes developmental and degenerative
processes during the human lifespan. To investigate the relationship between WM
anatomical regions and age, we study diffusion magnetic resonance imaging
tractography that is finely parcellated into fiber clusters in the deep,
superficial, and cerebellar WM. We propose a deep-learning-based age prediction
model that leverages large convolutional kernels and inverted bottlenecks. We
improve performance using novel discrete multi-faceted mix data augmentation
and a novel prior-knowledge-based loss function that encourages age predictions
in the expected range. We study a dataset of 965 healthy young adults (22-37
years) derived from the Human Connectome Project (HCP). Experimental results
demonstrate that the proposed model achieves a mean absolute error of 2.59
years and outperforms compared methods. We find that the deep WM is the most
informative for age prediction in this cohort, while the superficial WM is the
least informative. Overall, the most predictive WM tracts are the
thalamo-frontal tract from the deep WM and the intracerebellar input and
Purkinje tract from the cerebellar WM.
| [
{
"created": "Fri, 11 Nov 2022 15:23:09 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jul 2023 13:08:31 GMT",
"version": "v2"
}
] | 2023-07-06 | [
[
"Wei",
"Yuxiang",
""
],
[
"Xue",
"Tengfei",
""
],
[
"Rathi",
"Yogesh",
""
],
[
"Makris",
"Nikos",
""
],
[
"Zhang",
"Fan",
""
],
[
"O'Donnell",
"Lauren J.",
""
]
] | The brain's white matter (WM) undergoes developmental and degenerative processes during the human lifespan. To investigate the relationship between WM anatomical regions and age, we study diffusion magnetic resonance imaging tractography that is finely parcellated into fiber clusters in the deep, superficial, and cerebellar WM. We propose a deep-learning-based age prediction model that leverages large convolutional kernels and inverted bottlenecks. We improve performance using novel discrete multi-faceted mix data augmentation and a novel prior-knowledge-based loss function that encourages age predictions in the expected range. We study a dataset of 965 healthy young adults (22-37 years) derived from the Human Connectome Project (HCP). Experimental results demonstrate that the proposed model achieves a mean absolute error of 2.59 years and outperforms compared methods. We find that the deep WM is the most informative for age prediction in this cohort, while the superficial WM is the least informative. Overall, the most predictive WM tracts are the thalamo-frontal tract from the deep WM and the intracerebellar input and Purkinje tract from the cerebellar WM. |
1506.08483 | Andrew J. Dolgert | Andrew J. Dolgert | Discrete Stochastic Models in Continuous Time for Ecology | 18 pages, 10 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article shows how to specify and construct a discrete, stochastic,
continuous-time model specifically for ecological systems. The model is
broader than typical chemical kinetics models in two ways. First, using
time-dependent hazard rates simplifies the process of making models more
faithful. Second, the state of the system includes individual traits and use of
environmental resources. The models defined here focus on taking survival
analysis of observations in the field and using the measured hazard rates to
generate simulations which match exactly what was measured.
| [
{
"created": "Mon, 29 Jun 2015 01:26:44 GMT",
"version": "v1"
}
] | 2015-06-30 | [
[
"Dolgert",
"Andrew J.",
""
]
] | This article shows how to specify and construct a discrete, stochastic, continuous-time model specifically for ecological systems. The model is broader than typical chemical kinetics models in two ways. First, using time-dependent hazard rates simplifies the process of making models more faithful. Second, the state of the system includes individual traits and use of environmental resources. The models defined here focus on taking survival analysis of observations in the field and using the measured hazard rates to generate simulations which match exactly what was measured. |
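A standard way to simulate the time-dependent hazard rates described in the record above is Lewis-Shedler thinning. The sketch below is a generic illustration under an assumed hazard h(t) = 0.2 t, not code from the article:

```python
import numpy as np

def next_event_time(hazard, h_max, t0, rng):
    """Sample the next event time of a nonhomogeneous process by thinning:
    propose from a dominating constant rate h_max, accept w.p. hazard(t)/h_max."""
    t = t0
    while True:
        t += rng.exponential(1.0 / h_max)
        if rng.uniform() < hazard(t) / h_max:
            return t

rng = np.random.default_rng(1)
hazard = lambda t: 0.2 * t            # linearly increasing (Weibull-type) hazard
# h_max = 2.0 bounds hazard(t) over the region where events effectively occur;
# survival beyond t = 10 has probability exp(-10), so the bias is negligible here.
samples = [next_event_time(hazard, 2.0, 0.0, rng) for _ in range(2000)]
mean_time = float(np.mean(samples))   # Rayleigh mean: sqrt(5)*sqrt(pi/2) ~ 2.80
```

Because the event time for hazard 0.2 t is Rayleigh-distributed with scale sqrt(5), the sample mean provides a direct check that the thinning sampler matches the specified hazard.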
1107.3459 | Henry Tuckwell | Henry C Tuckwell, Patrick D Shipman | Predicting the probability of persistence of HIV infection with the
standard model | 19 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the standard three-component differential equation model for the
growth of an HIV virion population in an infected host in the absence of drug
therapy. The dynamical properties of the model are determined by the set of
values of six parameters which vary across host populations. There may be one
or two critical points whose natures play a key role in determining the outcome
of infection and in particular whether the HIV population will persist or
become extinct. There are two cases which may arise. In the first case, there
is only one critical point P_1 at biological values and this is an
asymptotically stable node. The system ends up with zero virions and so the
host becomes HIV-free. In the second case, there are two critical points P_1
and P_2 at biological values. Here P_1 is an unstable saddle point and P_2 is
an asymptotically stable spiral point with a non-zero virion level. In this
case the HIV population persists unless parameters change. We let the parameter
values take random values from distributions based on empirical data, but
suitably truncated, and determine the probabilities of occurrence of the
various combinations of critical points. From these simulations the probability
that an HIV infection will persist, across a population, is estimated. It is
found that with conservatively estimated distributions of parameters, within
the framework of the standard 3-component model, the chances that a within host
HIV population will become extinct is between 0.6% and 6.9%. With less
conservative parameter estimates, the probability is estimated to be as high as
24%. The many factors related to the transmission and possible spontaneous
elimination of the virus are discussed.
| [
{
"created": "Mon, 18 Jul 2011 14:56:43 GMT",
"version": "v1"
}
] | 2011-07-19 | [
[
"Tuckwell",
"Henry C",
""
],
[
"Shipman",
"Patrick D",
""
]
] | We consider the standard three-component differential equation model for the growth of an HIV virion population in an infected host in the absence of drug therapy. The dynamical properties of the model are determined by the set of values of six parameters which vary across host populations. There may be one or two critical points whose natures play a key role in determining the outcome of infection and in particular whether the HIV population will persist or become extinct. There are two cases which may arise. In the first case, there is only one critical point P_1 at biological values and this is an asymptotically stable node. The system ends up with zero virions and so the host becomes HIV-free. In the second case, there are two critical points P_1 and P_2 at biological values. Here P_1 is an unstable saddle point and P_2 is an asymptotically stable spiral point with a non-zero virion level. In this case the HIV population persists unless parameters change. We let the parameter values take random values from distributions based on empirical data, but suitably truncated, and determine the probabilities of occurrence of the various combinations of critical points. From these simulations the probability that an HIV infection will persist, across a population, is estimated. It is found that with conservatively estimated distributions of parameters, within the framework of the standard 3-component model, the chance that a within-host HIV population will become extinct is between 0.6% and 6.9%. With less conservative parameter estimates, the probability is estimated to be as high as 24%. The many factors related to the transmission and possible spontaneous elimination of the virus are discussed. |
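The persistence/extinction dichotomy described in the record above can be reproduced with a forward-Euler sketch of the standard three-component within-host model. The equations' form is the common target-cell/infected-cell/virion system, and all parameter values below are assumptions for illustration (the paper instead samples its six parameters from empirical distributions):

```python
import numpy as np

# Standard three-component within-host model (assumed form):
#   dT/dt = lam - d*T - beta*T*V   uninfected target cells
#   dI/dt = beta*T*V - delta*I     productively infected cells
#   dV/dt = p*I - c*V              free virions
def simulate(lam, d, beta, delta, p, c, T0, I0, V0, dt=0.01, t_end=200.0):
    """Forward-Euler integration; returns the mean virion load over the last quarter."""
    T, I, V = T0, I0, V0
    n = int(t_end / dt)
    tail = []
    for k in range(n):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT; I += dt * dI; V += dt * dV
        if k >= 3 * n // 4:
            tail.append(V)
    return float(np.mean(tail))

def basic_reproduction_number(lam, d, beta, delta, p, c):
    return beta * lam * p / (d * delta * c)

# Hypothetical parameter sets on either side of the R0 = 1 threshold.
extinct = simulate(10, 0.1, 1e-4, 0.5, 100, 5, T0=100, I0=0, V0=1)
persist = simulate(10, 0.1, 1e-3, 0.5, 100, 5, T0=100, I0=0, V0=1)
```

Below threshold the virion load decays toward the infection-free node, while above threshold it settles into damped oscillations about the interior spiral point, mirroring the two cases the abstract distinguishes.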
2405.14925 | Julian Cremer | Julian Cremer, Tuan Le, Frank No\'e, Djork-Arn\'e Clevert, Kristof T.
Sch\"utt | PILOT: Equivariant diffusion for pocket conditioned de novo ligand
generation with multi-objective guidance via importance sampling | null | null | null | null | q-bio.BM cs.AI cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation of ligands that are both tailored to a given protein pocket
and exhibit a range of desired chemical properties is a major challenge in
structure-based drug design. Here, we propose an in-silico approach for the
$\textit{de novo}$ generation of 3D ligand structures using the equivariant
diffusion model PILOT, combining pocket conditioning with large-scale
pre-training and property guidance. Its multi-objective trajectory-based
importance sampling strategy is designed to direct the model towards molecules
that not only exhibit desired characteristics such as increased binding
affinity for a given protein pocket but also maintain high synthetic
accessibility. This ensures the practicality of sampled molecules, thus
maximizing their potential for the drug discovery pipeline. PILOT significantly
outperforms existing methods across various metrics on the common benchmark
dataset CrossDocked2020. Moreover, we employ PILOT to generate novel ligands
for unseen protein pockets from the Kinodata-3D dataset, which encompasses a
substantial portion of the human kinome. The generated structures exhibit
predicted $IC_{50}$ values indicative of potent biological activity, which
highlights the potential of PILOT as a powerful tool for structure-based drug
design.
| [
{
"created": "Thu, 23 May 2024 17:58:28 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Cremer",
"Julian",
""
],
[
"Le",
"Tuan",
""
],
[
"Noé",
"Frank",
""
],
[
"Clevert",
"Djork-Arné",
""
],
[
"Schütt",
"Kristof T.",
""
]
] | The generation of ligands that are both tailored to a given protein pocket and exhibit a range of desired chemical properties is a major challenge in structure-based drug design. Here, we propose an in-silico approach for the $\textit{de novo}$ generation of 3D ligand structures using the equivariant diffusion model PILOT, combining pocket conditioning with a large-scale pre-training and property guidance. Its multi-objective trajectory-based importance sampling strategy is designed to direct the model towards molecules that not only exhibit desired characteristics such as increased binding affinity for a given protein pocket but also maintain high synthetic accessibility. This ensures the practicality of sampled molecules, thus maximizing their potential for the drug discovery pipeline. PILOT significantly outperforms existing methods across various metrics on the common benchmark dataset CrossDocked2020. Moreover, we employ PILOT to generate novel ligands for unseen protein pockets from the Kinodata-3D dataset, which encompasses a substantial portion of the human kinome. The generated structures exhibit predicted $IC_{50}$ values indicative of potent biological activity, which highlights the potential of PILOT as a powerful tool for structure-based drug design. |
2405.11096 | Hiba Kobeissi | Hiba Kobeissi, Xining Gao, Samuel J. DePalma, Jourdan K. Ewoldt,
Miranda C. Wang, Shoshana L. Das, Javiera Jilberto, David Nordsletten,
Brendon M. Baker, Christopher S. Chen, Emma Lejeune | MicroBundlePillarTrack: A Python package for automated segmentation,
tracking, and analysis of pillar deflection in cardiac microbundles | 8 main pages, 1 main figure, Supplementary Information included.
microPublication Biology (2024) | null | 10.17912/micropub.biology.001231 | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Movies of human induced pluripotent stem cell (hiPSC)-derived engineered
cardiac tissue (microbundles) contain abundant information about structural and
functional maturity. However, extracting these data in a reproducible and
high-throughput manner remains a major challenge. Furthermore, it is not
straightforward to make direct quantitative comparisons across the multiple in
vitro experimental platforms employed to fabricate these tissues. Here, we
present "MicroBundlePillarTrack," an open-source optical flow-based package
developed in Python to track the deflection of pillars in cardiac microbundles
grown on experimental platforms with two different pillar designs ("Type 1" and
"Type 2" design). Our software is able to automatically segment the pillars,
track their displacements, and output time-dependent metrics for contractility
analysis, including beating amplitude and rate, contractile force, and tissue
stress. Because this software is fully automated, it will allow for both faster
and more reproducible analyses of larger datasets and it will enable more
reliable cross-platform comparisons as compared to existing approaches that
require manual steps and are tailored to a specific experimental platform. To
complement this open-source software, we share a dataset of 1,540 brightfield
example movies on which we have tested our software. Through sharing this data
and software, our goal is to directly enable quantitative comparisons across
labs, and facilitate future collective progress via the biomedical engineering
open-source data and software ecosystem.
| [
{
"created": "Fri, 17 May 2024 21:20:18 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Aug 2024 17:49:57 GMT",
"version": "v2"
}
] | 2024-08-16 | [
[
"Kobeissi",
"Hiba",
""
],
[
"Gao",
"Xining",
""
],
[
"DePalma",
"Samuel J.",
""
],
[
"Ewoldt",
"Jourdan K.",
""
],
[
"Wang",
"Miranda C.",
""
],
[
"Das",
"Shoshana L.",
""
],
[
"Jilberto",
"Javiera",
""
],
[
"Nordsletten",
"David",
""
],
[
"Baker",
"Brendon M.",
""
],
[
"Chen",
"Christopher S.",
""
],
[
"Lejeune",
"Emma",
""
]
] | Movies of human induced pluripotent stem cell (hiPSC)-derived engineered cardiac tissue (microbundles) contain abundant information about structural and functional maturity. However, extracting these data in a reproducible and high-throughput manner remains a major challenge. Furthermore, it is not straightforward to make direct quantitative comparisons across the multiple in vitro experimental platforms employed to fabricate these tissues. Here, we present "MicroBundlePillarTrack," an open-source optical flow-based package developed in Python to track the deflection of pillars in cardiac microbundles grown on experimental platforms with two different pillar designs ("Type 1" and "Type 2" design). Our software is able to automatically segment the pillars, track their displacements, and output time-dependent metrics for contractility analysis, including beating amplitude and rate, contractile force, and tissue stress. Because this software is fully automated, it will allow for both faster and more reproducible analyses of larger datasets and it will enable more reliable cross-platform comparisons as compared to existing approaches that require manual steps and are tailored to a specific experimental platform. To complement this open-source software, we share a dataset of 1,540 brightfield example movies on which we have tested our software. Through sharing this data and software, our goal is to directly enable quantitative comparisons across labs, and facilitate future collective progress via the biomedical engineering open-source data and software ecosystem. |
1303.0564 | Krzysztof Argasinski | Krzysztof Argasinski and Mark Broom | The nest site lottery: how selectively neutral density dependent growth
suppression induces frequency dependent selection | 32 pages, 1 figure | Theoretical Population Biology 90 (2013) 82-90 | 10.1016/j.tpb.2013.09.011 | null | q-bio.PE math.CA nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern developments in population dynamics emphasize the role of the turnover
of individuals. In the new approaches, stable population size is a dynamic
equilibrium between different mortality and fecundity factors instead of an
arbitrary fixed carrying capacity. The latest replicator dynamics models assume
that regulation of the population size acts through feedback driven by density
dependent juvenile mortality. Here, we consider a simplified model to extract
the properties of this approach. We show that at the stable population size,
the structure of the frequency dependent evolutionary game emerges. Turnover of
individuals induces a lottery mechanism where for each nest site released by a
dead adult individual a single newborn is drawn from the pool of newborn
candidates. This frequency dependent selection leads toward the strategy
maximizing the number of newborns per adult death. However, multiple strategies
can maximize this value. Among them, the strategy with the greatest mortality
(which implies greater instantaneous growth rate) is selected. This result is
important for the discussion about universal fitness measures and which
parameters are maximized by natural selection. This is related to the fitness
measures R0 and r, because the number of newborns per single dead individual
equals lifetime production of newborn R0 in models without ageing. Our model
suggests the existence of another fitness measure which is the combination of
R0 and r. According to the nest site lottery mechanism, at stable population
size, selection favours strategies with the greatest r from those with the
greatest R0.
| [
{
"created": "Sun, 3 Mar 2013 20:45:15 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Dec 2013 22:49:43 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Mar 2014 01:22:57 GMT",
"version": "v3"
}
] | 2014-03-05 | [
[
"Argasinski",
"Krzysztof",
""
],
[
"Broom",
"Mark",
""
]
] | Modern developments in population dynamics emphasize the role of the turnover of individuals. In the new approaches stable population size is a dynamic equilibrium between different mortality and fecundity factors instead of an arbitrary fixed carrying capacity. The latest replicator dynamics models assume that regulation of the population size acts through feedback driven by density dependent juvenile mortality. Here, we consider a simplified model to extract the properties of this approach. We show that at the stable population size, the structure of the frequency dependent evolutionary game emerges. Turnover of individuals induces a lottery mechanism where for each nest site released by a dead adult individual a single newborn is drawn from the pool of newborn candidates. This frequency dependent selection leads toward the strategy maximizing the number of newborns per adult death. However, multiple strategies can maximize this value. Among them, the strategy with the greatest mortality (which implies greater instantaneous growth rate) is selected. This result is important for the discussion about universal fitness measures and which parameters are maximized by natural selection. This is related to the fitness measures R0 and r, because the number of newborns per single dead individual equals lifetime production of newborn R0 in models without ageing. Our model suggests the existence of another fitness measure which is the combination of R0 and r. According to the nest site lottery mechanism, at stable population size, selection favours strategies with the greatest r from those with the greatest R0. |
1908.09256 | Thomas G\"otz | N.C. Ganegoda and T. G\"otz and K.P. Wijaya | An Age-Dependent Model for Dengue Transmission: Analysis and Comparison
to Field Data from Semarang, Indonesia | null | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical statistics reveal a significant dependence of hospitalized dengue
patients on the patient's age. To incorporate an age-dependence into a
mathematical model, we extend the classical ODE system of disease dynamics to a
PDE system. The equilibrium distribution is then determined by the fixed points
of resulting integro-differential equations. In this paper we use an extension
of the concept of the basic reproductive number to characterize parameter
regimes, where either only the disease-free or an endemic equilibrium exists.
Using rather general and minimal assumptions on the population distribution and
on the age-dependent transmission rate, we prove the existence of those
equilibria. Furthermore, we are able to prove the convergence of an iteration
scheme to compute the endemic equilibrium. To validate our model, we use
existing data from the city of Semarang, Indonesia for comparison and to
identify the model parameters.
| [
{
"created": "Sun, 25 Aug 2019 06:11:40 GMT",
"version": "v1"
}
] | 2019-08-27 | [
[
"Ganegoda",
"N. C.",
""
],
[
"Götz",
"T.",
""
],
[
"Wijaya",
"K. P.",
""
]
] | Medical statistics reveal a significant dependence of hospitalized dengue patients on the patient's age. To incorporate an age-dependence into a mathematical model, we extend the classical ODE system of disease dynamics to a PDE system. The equilibrium distribution is then determined by the fixed points of resulting integro-differential equations. In this paper we use an extension of the concept of the basic reproductive number to characterize parameter regimes, where either only the disease-free or an endemic equilibrium exists. Using rather general and minimal assumptions on the population distribution and on the age-dependent transmission rate, we prove the existence of those equilibria. Furthermore, we are able to prove the convergence of an iteration scheme to compute the endemic equilibrium. To validate our model, we use existing data from the city of Semarang, Indonesia for comparison and to identify the model parameters. |
1601.07881 | Richard Betzel | Richard F. Betzel, Theodore D. Satterthwaite, Joshua I. Gold, Danielle
S. Bassett | A positive mood, a flexible brain | 15 pages, 15 figures + 3 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flexible reconfiguration of human brain networks supports cognitive
flexibility and learning. However, modulating flexibility to enhance learning
requires an understanding of the relationship between flexibility and brain
state. In an unprecedented longitudinal data set, we investigate the
relationship between flexibility and mood, demonstrating that flexibility is
positively correlated with emotional state. Our results inform the modulation
of brain state to enhance response to training in health and injury.
| [
{
"created": "Thu, 28 Jan 2016 20:00:09 GMT",
"version": "v1"
}
] | 2016-01-29 | [
[
"Betzel",
"Richard F.",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Gold",
"Joshua I.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Flexible reconfiguration of human brain networks supports cognitive flexibility and learning. However, modulating flexibility to enhance learning requires an understanding of the relationship between flexibility and brain state. In an unprecedented longitudinal data set, we investigate the relationship between flexibility and mood, demonstrating that flexibility is positively correlated with emotional state. Our results inform the modulation of brain state to enhance response to training in health and injury. |
2102.08704 | Giulia Giordano | Giulia Giordano, Marta Colaneri, Alessandro Di Filippo, Franco
Blanchini, Paolo Bolzern, Giuseppe De Nicolao, Paolo Sacchi, Raffaele Bruno,
Patrizio Colaneri | Vaccination and SARS-CoV-2 variants: how much containment is still
needed? A quantitative assessment | null | Nature Medicine, 27, pages 993-998 (2021) | 10.1038/s41591-021-01334-5 | null | q-bio.PE cs.SY eess.SY math.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite the progress in medical care, combined population-wide interventions
(such as physical distancing, testing and contact tracing) are still crucial to
manage the SARS-CoV-2 pandemic, aggravated by the emergence of new highly
transmissible variants. We combine the compartmental SIDARTHE model, predicting
the course of COVID-19 infections, with a new data-based model that projects
new cases onto casualties and healthcare system costs. Based on the Italian
case study, we outline several scenarios: mass vaccination campaigns with
different paces, different transmission rates due to new variants, and
different enforced countermeasures, including the alternation of opening and
closure phases. Our results demonstrate that non-pharmaceutical interventions
(NPIs) have a higher impact on the epidemic evolution than vaccination, which
advocates for the need to keep containment measures in place throughout the
vaccination campaign. We also show that, if intermittent open-close strategies
are adopted, deaths and healthcare system costs can be drastically reduced,
without any aggravation of socioeconomic losses, as long as one has the
foresight to start with a closing phase rather than an opening one.
| [
{
"created": "Wed, 17 Feb 2021 11:24:11 GMT",
"version": "v1"
}
] | 2021-08-23 | [
[
"Giordano",
"Giulia",
""
],
[
"Colaneri",
"Marta",
""
],
[
"Di Filippo",
"Alessandro",
""
],
[
"Blanchini",
"Franco",
""
],
[
"Bolzern",
"Paolo",
""
],
[
"De Nicolao",
"Giuseppe",
""
],
[
"Sacchi",
"Paolo",
""
],
[
"Bruno",
"Raffaele",
""
],
[
"Colaneri",
"Patrizio",
""
]
] | Despite the progress in medical care, combined population-wide interventions (such as physical distancing, testing and contact tracing) are still crucial to manage the SARS-CoV-2 pandemic, aggravated by the emergence of new highly transmissible variants. We combine the compartmental SIDARTHE model, predicting the course of COVID-19 infections, with a new data-based model that projects new cases onto casualties and healthcare system costs. Based on the Italian case study, we outline several scenarios: mass vaccination campaigns with different paces, different transmission rates due to new variants, and different enforced countermeasures, including the alternation of opening and closure phases. Our results demonstrate that non-pharmaceutical interventions (NPIs) have a higher impact on the epidemic evolution than vaccination, which advocates for the need to keep containment measures in place throughout the vaccination campaign. We also show that, if intermittent open-close strategies are adopted, deaths and healthcare system costs can be drastically reduced, without any aggravation of socioeconomic losses, as long as one has the foresight to start with a closing phase rather than an opening one. |
2206.11232 | Amira Meddah | Evelyn Buckwar, Martina Conte, Amira Meddah | A stochastic hierarchical model for low grade glioma evolution | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | A stochastic hierarchical model for the evolution of low grade gliomas is
proposed. Starting with the description of cell motion using piecewise
diffusion Markov processes (PDifMPs) at the cellular level, we derive an
equation for the density of the transition probability of this Markov process
using the generalised Fokker-Planck equation. Then a macroscopic model is
derived via parabolic limit and Hilbert expansions in the moment equations.
After setting up the model, we perform several numerical tests to study the
role of the local characteristics and the extended generator of the PDifMP in
the process of tumour progression. The main aim focuses on understanding how
the variations of the jump rate function of this process at the microscopic
scale and the diffusion coefficient at the macroscopic scale are related to the
diffusive behaviour of the glioma cells and to the onset of malignancy, i.e.,
the transition from low-grade to high-grade gliomas.
| [
{
"created": "Mon, 20 Jun 2022 15:07:03 GMT",
"version": "v1"
}
] | 2022-06-23 | [
[
"Buckwar",
"Evelyn",
""
],
[
"Conte",
"Martina",
""
],
[
"Meddah",
"Amira",
""
]
] | A stochastic hierarchical model for the evolution of low grade gliomas is proposed. Starting with the description of cell motion using piecewise diffusion Markov processes (PDifMPs) at the cellular level, we derive an equation for the density of the transition probability of this Markov process using the generalised Fokker-Planck equation. Then a macroscopic model is derived via parabolic limit and Hilbert expansions in the moment equations. After setting up the model, we perform several numerical tests to study the role of the local characteristics and the extended generator of the PDifMP in the process of tumour progression. The main aim focuses on understanding how the variations of the jump rate function of this process at the microscopic scale and the diffusion coefficient at the macroscopic scale are related to the diffusive behaviour of the glioma cells and to the onset of malignancy, i.e., the transition from low-grade to high-grade gliomas. |
2210.07209 | Jenna Fromer | Jenna C. Fromer and Connor W. Coley | Computer-Aided Multi-Objective Optimization in Small Molecule Discovery | null | null | 10.1016/j.patter.2023.100678 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular discovery is a multi-objective optimization problem that requires
identifying a molecule or set of molecules that balance multiple, often
competing, properties. Multi-objective molecular design is commonly addressed
by combining properties of interest into a single objective function using
scalarization, which imposes assumptions about relative importance and uncovers
little about the trade-offs between objectives. In contrast to scalarization,
Pareto optimization does not require knowledge of relative importance and
reveals the trade-offs between objectives. However, it introduces additional
considerations in algorithm design. In this review, we describe pool-based and
de novo generative approaches to multi-objective molecular discovery with a
focus on Pareto optimization algorithms. We show how pool-based molecular
discovery is a relatively direct extension of multi-objective Bayesian
optimization and how the plethora of different generative models extend from
single-objective to multi-objective optimization in similar ways using
non-dominated sorting in the reward function (reinforcement learning) or to
select molecules for retraining (distribution learning) or propagation (genetic
algorithms). Finally, we discuss some remaining challenges and opportunities in
the field, emphasizing the opportunity to adopt Bayesian optimization
techniques into multi-objective de novo design.
| [
{
"created": "Thu, 13 Oct 2022 17:33:07 GMT",
"version": "v1"
}
] | 2023-10-17 | [
[
"Fromer",
"Jenna C.",
""
],
[
"Coley",
"Connor W.",
""
]
] | Molecular discovery is a multi-objective optimization problem that requires identifying a molecule or set of molecules that balance multiple, often competing, properties. Multi-objective molecular design is commonly addressed by combining properties of interest into a single objective function using scalarization, which imposes assumptions about relative importance and uncovers little about the trade-offs between objectives. In contrast to scalarization, Pareto optimization does not require knowledge of relative importance and reveals the trade-offs between objectives. However, it introduces additional considerations in algorithm design. In this review, we describe pool-based and de novo generative approaches to multi-objective molecular discovery with a focus on Pareto optimization algorithms. We show how pool-based molecular discovery is a relatively direct extension of multi-objective Bayesian optimization and how the plethora of different generative models extend from single-objective to multi-objective optimization in similar ways using non-dominated sorting in the reward function (reinforcement learning) or to select molecules for retraining (distribution learning) or propagation (genetic algorithms). Finally, we discuss some remaining challenges and opportunities in the field, emphasizing the opportunity to adopt Bayesian optimization techniques into multi-objective de novo design. |
2006.03698 | Jonathan Vacher | Jonathan Vacher, Aida Davila, Adam Kohn, Ruben Coen-Cagli | Texture Interpolation for Probing Visual Perception | Paper + ref: 12 pages and 7 figures; Supplementary: 16 pages and 16
figures. Accepted to NeurIPS 2020 | null | null | null | q-bio.NC cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Texture synthesis models are important tools for understanding visual
processing. In particular, statistical approaches based on neurally relevant
features have been instrumental in understanding aspects of visual perception
and of neural coding. New deep learning-based approaches further improve the
quality of synthetic textures. Yet, it is still unclear why deep texture
synthesis performs so well, and applications of this new framework to probe
visual perception are scarce. Here, we show that distributions of deep
convolutional neural network (CNN) activations of a texture are well described
by elliptical distributions and therefore, following optimal transport theory,
constraining their mean and covariance is sufficient to generate new texture
samples. Then, we propose the natural geodesics (i.e., the shortest path between
two points) arising with the optimal transport metric to interpolate between
arbitrary textures. Compared to other CNN-based approaches, our interpolation
method appears to match more closely the geometry of texture perception, and
our mathematical framework is better suited to study its statistical nature. We
apply our method by measuring the perceptual scale associated to the
interpolation parameter in human observers, and the neural sensitivity of
different areas of visual cortex in macaque monkeys.
| [
{
"created": "Fri, 5 Jun 2020 21:28:36 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Oct 2020 18:05:27 GMT",
"version": "v2"
}
] | 2020-10-26 | [
[
"Vacher",
"Jonathan",
""
],
[
"Davila",
"Aida",
""
],
[
"Kohn",
"Adam",
""
],
[
"Coen-Cagli",
"Ruben",
""
]
] | Texture synthesis models are important tools for understanding visual processing. In particular, statistical approaches based on neurally relevant features have been instrumental in understanding aspects of visual perception and of neural coding. New deep learning-based approaches further improve the quality of synthetic textures. Yet, it is still unclear why deep texture synthesis performs so well, and applications of this new framework to probe visual perception are scarce. Here, we show that distributions of deep convolutional neural network (CNN) activations of a texture are well described by elliptical distributions and therefore, following optimal transport theory, constraining their mean and covariance is sufficient to generate new texture samples. Then, we propose the natural geodesics (ie the shortest path between two points) arising with the optimal transport metric to interpolate between arbitrary textures. Compared to other CNN-based approaches, our interpolation method appears to match more closely the geometry of texture perception, and our mathematical framework is better suited to study its statistical nature. We apply our method by measuring the perceptual scale associated to the interpolation parameter in human observers, and the neural sensitivity of different areas of visual cortex in macaque monkeys. |
q-bio/0504014 | Jean-Michel Claverie | Elodie Ghedin, Jean-Michel Claverie (IGS) | Mimivirus Relatives in the Sargasso Sea | see also http://www.giantvirus.org | null | null | null | q-bio.PE | null | The discovery and genome analysis of Acanthamoeba polyphaga Mimivirus, the
largest known DNA virus, challenged much of the accepted dogma regarding
viruses. Its particle size (>400 nm), genome length (1.2 million bp) and huge
gene repertoire (911 protein coding genes) all contribute to blur the
established boundaries between viruses and the smallest parasitic cellular
organisms. Phylogenetic analyses also suggested that the Mimivirus lineage
could have emerged prior to the individualization of cellular organisms from
the three established domains, triggering a debate that can only be resolved by
generating and analyzing more data. The next step is then to seek some evidence
that Mimivirus is not the only representative of its kind and determine where
to look for new Mimiviridae. An exhaustive similarity search of all Mimivirus
predicted proteins against all publicly available sequences identified many of
their closest homologues among the Sargasso Sea environmental sequences.
Subsequent phylogenetic analyses suggested that unknown large viruses
evolutionarily closer to Mimivirus than to any presently characterized species
exist in abundance in the Sargasso Sea. Their isolation and genome sequencing
could prove invaluable in understanding the origin and diversity of large DNA
viruses, and shed some light on the role they eventually played in the
emergence of eukaryotes.
| [
{
"created": "Mon, 11 Apr 2005 11:51:59 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ghedin",
"Elodie",
"",
"IGS"
],
[
"Claverie",
"Jean-Michel",
"",
"IGS"
]
] | The discovery and genome analysis of Acanthamoeba polyphaga Mimivirus, the largest known DNA virus, challenged much of the accepted dogma regarding viruses. Its particle size (>400 nm), genome length (1.2 million bp) and huge gene repertoire (911 protein coding genes) all contribute to blur the established boundaries between viruses and the smallest parasitic cellular organisms. Phylogenetic analyses also suggested that the Mimivirus lineage could have emerged prior to the individualization of cellular organisms from the three established domains, triggering a debate that can only be resolved by generating and analyzing more data. The next step is then to seek some evidence that Mimivirus is not the only representative of its kind and determine where to look for new Mimiviridae. An exhaustive similarity search of all Mimivirus predicted proteins against all publicly available sequences identified many of their closest homologues among the Sargasso Sea environmental sequences. Subsequent phylogenetic analyses suggested that unknown large viruses evolutionarily closer to Mimivirus than to any presently characterized species exist in abundance in the Sargasso Sea. Their isolation and genome sequencing could prove invaluable in understanding the origin and diversity of large DNA viruses, and shed some light on the role they eventually played in the emergence of eukaryotes. |
0806.2108 | Scott Cheng-Hsin Yang | Scott Cheng-Hsin Yang and John Bechhoefer | How Xenopus laevis embryos replicate reliably: investigating the
random-completion problem | 16 pages, 9 figures, submitted to Physical Review E | Phys. Rev. E 78, 041917 (2008) | 10.1103/PhysRevE.78.041917 | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA synthesis in \textit{Xenopus} frog embryos initiates stochastically in
time at many sites (origins) along the chromosome. Stochastic initiation
implies fluctuations in the time to complete and may lead to cell death if
replication takes longer than the cell cycle time ($\approx 25$ min).
Surprisingly, although the typical replication time is about 20 min, \textit{in
vivo} experiments show that replication fails to complete only about 1 in 300
times. How is replication timing accurately controlled despite the
stochasticity? Biologists have proposed two solutions to this
"random-completion problem." The first solution uses randomly located origins
but increases their rate of initiation as S phase proceeds, while the second
uses regularly spaced origins. In this paper, we investigate the
random-completion problem using a type of model first developed to describe the
kinetics of first-order phase transitions. Using methods from the field of
extreme-value statistics, we derive the distribution of replication-completion
times for a finite genome. We then argue that the biologists' first solution to
the problem is not only consistent with experiment but also nearly optimizes
the use of replicative proteins. We also show that spatial regularity in origin
placement does not alter significantly the distribution of replication times
and, thus, is not needed for the control of replication timing.
| [
{
"created": "Thu, 12 Jun 2008 16:21:20 GMT",
"version": "v1"
}
] | 2016-11-03 | [
[
"Yang",
"Scott Cheng-Hsin",
""
],
[
"Bechhoefer",
"John",
""
]
] | DNA synthesis in \textit{Xenopus} frog embryos initiates stochastically in time at many sites (origins) along the chromosome. Stochastic initiation implies fluctuations in the time to complete and may lead to cell death if replication takes longer than the cell cycle time ($\approx 25$ min). Surprisingly, although the typical replication time is about 20 min, \textit{in vivo} experiments show that replication fails to complete only about 1 in 300 times. How is replication timing accurately controlled despite the stochasticity? Biologists have proposed two solutions to this "random-completion problem." The first solution uses randomly located origins but increases their rate of initiation as S phase proceeds, while the second uses regularly spaced origins. In this paper, we investigate the random-completion problem using a type of model first developed to describe the kinetics of first-order phase transitions. Using methods from the field of extreme-value statistics, we derive the distribution of replication-completion times for a finite genome. We then argue that the biologists' first solution to the problem is not only consistent with experiment but also nearly optimizes the use of replicative proteins. We also show that spatial regularity in origin placement does not alter significantly the distribution of replication times and, thus, is not needed for the control of replication timing. |
1410.1115 | Andrew Sornborger | Andrew T. Sornborger and Louis Tao | Exact, Dynamically Routable Current Propagation in Pulse-Gated Synfire
Chains | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural oscillations can enhance feature recognition, modulate interactions
between neurons, and improve learning and memory. Simulational studies have
shown that coherent oscillations give rise to windows in time during which
information transfer can be enhanced in neuronal networks. Unanswered questions
are: 1) What is the transfer mechanism? And 2) how well can a transfer be
executed? Here, we present a pulse-based mechanism by which graded current
amplitudes may be exactly propagated from one neuronal population to another.
The mechanism relies on the downstream gating of mean synaptic current
amplitude from one population of neurons to another via a pulse. Because
transfer is pulse-based, information may be dynamically routed through a neural
circuit. We demonstrate the amplitude transfer mechanism in a realistic network
of spiking neurons and show that it is robust to noise in the form of pulse
timing inaccuracies, random synaptic strengths and finite size effects. In
finding an exact, analytical solution to a fundamental problem of information
coding in the brain, graded information transfer, we have isolated a basic
mechanism that may be used as a building block for fast, complex information
processing in neural circuits.
| [
{
"created": "Sun, 5 Oct 2014 05:41:10 GMT",
"version": "v1"
}
] | 2014-10-07 | [
[
"Sornborger",
"Andrew T.",
""
],
[
"Tao",
"Louis",
""
]
] | Neural oscillations can enhance feature recognition, modulate interactions between neurons, and improve learning and memory. Simulational studies have shown that coherent oscillations give rise to windows in time during which information transfer can be enhanced in neuronal networks. Unanswered questions are: 1) What is the transfer mechanism? And 2) how well can a transfer be executed? Here, we present a pulse-based mechanism by which graded current amplitudes may be exactly propagated from one neuronal population to another. The mechanism relies on the downstream gating of mean synaptic current amplitude from one population of neurons to another via a pulse. Because transfer is pulse-based, information may be dynamically routed through a neural circuit. We demonstrate the amplitude transfer mechanism in a realistic network of spiking neurons and show that it is robust to noise in the form of pulse timing inaccuracies, random synaptic strengths and finite size effects. In finding an exact, analytical solution to a fundamental problem of information coding in the brain, graded information transfer, we have isolated a basic mechanism that may be used as a building block for fast, complex information processing in neural circuits. |
1905.03723 | Timothy William Russell | Timothy W. Russell, Matthew J. Russell, Francisco \'Ubeda and Vincent
A. A. Jansen | Stable cycling in quasi-linkage equilibrium: fluctuating dynamics under
gene conversion and selection | 35 pages, 6 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic systems with multiple loci can have complex dynamics. For example,
mean fitness need not always increase and stable cycling is possible. Here, we
study the dynamics of a genetic system inspired by the molecular biology of
recognition-dependent double strand breaks and repair as it happens in
recombination hotspots. The model shows slow-fast dynamics in which the system
converges to the quasi-linkage equilibrium (QLE) manifold. On this manifold,
sustained cycling is possible as the dynamics approach a heteroclinic cycle, in
which allele frequencies alternate between near extinction and near fixation.
We find a closed-form approximation for the QLE manifold and use it to simplify
the model. For the simplified model, we can analytically calculate the
stability of the heteroclinic cycle. In the discrete-time model the cycle is
always stable; in a continuous-time approximation, the cycle is always
unstable. This demonstrates that complex dynamics are possible under
quasi-linkage equilibrium.
| [
{
"created": "Thu, 9 May 2019 15:58:38 GMT",
"version": "v1"
}
] | 2019-05-10 | [
[
"Russell",
"Timothy W.",
""
],
[
"Russell",
"Matthew J.",
""
],
[
"Úbeda",
"Francisco",
""
],
[
"Jansen",
"Vincent A. A.",
""
]
] | Genetic systems with multiple loci can have complex dynamics. For example, mean fitness need not always increase and stable cycling is possible. Here, we study the dynamics of a genetic system inspired by the molecular biology of recognition-dependent double strand breaks and repair as it happens in recombination hotspots. The model shows slow-fast dynamics in which the system converges to the quasi-linkage equilibrium (QLE) manifold. On this manifold, sustained cycling is possible as the dynamics approach a heteroclinic cycle, in which allele frequencies alternate between near extinction and near fixation. We find a closed-form approximation for the QLE manifold and use it to simplify the model. For the simplified model, we can analytically calculate the stability of the heteroclinic cycle. In the discrete-time model the cycle is always stable; in a continuous-time approximation, the cycle is always unstable. This demonstrates that complex dynamics are possible under quasi-linkage equilibrium. |
2108.04681 | Alida Cosenza | Dario Presti, Alida Cosenza, Fanny Claire Capri, Giuseppe Gallo, Rosa
Alduina and Giorgio Mannina | Influence of volatile solids and pH for the production of volatile fatty
acids: batch fermentation tests using sewage sludge | 27 pages, 4 figures | Bioresource Technology Volume 342, December 2021, 125853 | 10.1016/j.biortech.2021.125853 | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The aim of this work was to study the effect of volatile suspended solid
(VSS) and pH on volatile fatty acids (VFA) production from waste activated
sludge (WAS) fermentation by means of batch tests. The final goal was to gain
insights to enhance VFA stream quality, with the novelty of using WAS with high
sludge retention time. Results revealed that the optimum conditions to maximize
VFAs and minimize nutrients and non-VFA sCOD are a VSS concentration of 5.9 g/L
and initial pH adjustment to pH 10. The WAS bacterial community structures were
analysed according to Next Generation Sequencing (NGS) of 16S rDNA amplicons.
The results revealed changes of bacterial phyla abundance in comparison with
the batch test starting condition.
| [
{
"created": "Mon, 9 Aug 2021 15:21:02 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Aug 2021 12:43:44 GMT",
"version": "v2"
}
] | 2021-09-23 | [
[
"Presti",
"Dario",
""
],
[
"Cosenza",
"Alida",
""
],
[
"Capri",
"Fanny Claire",
""
],
[
"Gallo",
"Giuseppe",
""
],
[
"Alduina",
"Rosa",
""
],
[
"Mannina",
"Giorgio",
""
]
] | The aim of this work was to study the effect of volatile suspended solid (VSS) and pH on volatile fatty acids (VFA) production from waste activated sludge (WAS) fermentation by means of batch tests. The final goal was to gain insights to enhance VFA stream quality, with the novelty of using WAS with high sludge retention time. Results revealed that the optimum conditions to maximize VFAs and minimize nutrients and non-VFA sCOD are a VSS concentration of 5.9 g/L and initial pH adjustment to pH 10. The WAS bacterial community structures were analysed according to Next Generation Sequencing (NGS) of 16S rDNA amplicons. The results revealed changes of bacterial phyla abundance in comparison with the batch test starting condition. |
1808.10023 | Hue Sun Chan | Suman Das, Alan Amin, Yi-Hsuan Lin and Hue Sun Chan | Coarse-Grained Residue-Based Models of Disordered Protein Condensates:
Utility and Limitations of Simple Charge Pattern Parameters | 44 pages, 14 figures, 2 tables, accepted for publication in Physical
Chemistry Chemical Physics (PCCP) | Physical Chemistry Chemical Physics (PCCP) Vol.20, pp.28558-28574
(2018) | 10.1039/C8CP05095C | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biomolecular condensates undergirded by phase separations of proteins and
nucleic acids serve crucial biological functions. To gain physical insights
into their genetic basis, we study how liquid-liquid phase separation (LLPS) of
intrinsically disordered proteins (IDPs) depends on their sequence charge
patterns using a continuum Langevin chain model wherein each amino acid residue
is represented by a single bead. Charge patterns are characterized by the
`blockiness' measure $\kappa$ and the `sequence charge decoration' (SCD)
parameter. Consistent with random phase approximation (RPA) theory and lattice
simulations, LLPS propensity as characterized by critical temperature $T^*_{\rm
cr}$ increases with increasingly negative SCD for a set of sequences showing a
positive correlation between $\kappa$ and $-$SCD. Relative to RPA, the
simulated sequence-dependent variation in $T^*_{\rm cr}$ is often---though not
always---smaller, whereas the simulated critical volume fractions are higher.
However, for a set of sequences exhibiting an anti-correlation between $\kappa$
and $-$SCD, the simulated $T^*_{\rm cr}$'s are quite insensitive to either
parameter. Additionally, we find that blocky sequences that allow for strong
electrostatic repulsion can lead to coexistence curves with upward concavity as
stipulated by RPA, but the LLPS propensity of a strictly alternating charge
sequence was likely overestimated by RPA and lattice models because interchain
stabilization of this sequence requires spatial alignments that are difficult
to achieve in real space. These results help delineate the utility and
limitations of the charge pattern parameters and of RPA, pointing to further
efforts necessary for rationalizing the newly observed subtleties.
| [
{
"created": "Wed, 29 Aug 2018 19:59:07 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Oct 2018 02:11:49 GMT",
"version": "v2"
}
] | 2018-12-03 | [
[
"Das",
"Suman",
""
],
[
"Amin",
"Alan",
""
],
[
"Lin",
"Yi-Hsuan",
""
],
[
"Chan",
"Hue Sun",
""
]
] | Biomolecular condensates undergirded by phase separations of proteins and nucleic acids serve crucial biological functions. To gain physical insights into their genetic basis, we study how liquid-liquid phase separation (LLPS) of intrinsically disordered proteins (IDPs) depends on their sequence charge patterns using a continuum Langevin chain model wherein each amino acid residue is represented by a single bead. Charge patterns are characterized by the `blockiness' measure $\kappa$ and the `sequence charge decoration' (SCD) parameter. Consistent with random phase approximation (RPA) theory and lattice simulations, LLPS propensity as characterized by critical temperature $T^*_{\rm cr}$ increases with increasingly negative SCD for a set of sequences showing a positive correlation between $\kappa$ and $-$SCD. Relative to RPA, the simulated sequence-dependent variation in $T^*_{\rm cr}$ is often---though not always---smaller, whereas the simulated critical volume fractions are higher. However, for a set of sequences exhibiting an anti-correlation between $\kappa$ and $-$SCD, the simulated $T^*_{\rm cr}$'s are quite insensitive to either parameter. Additionally, we find that blocky sequences that allow for strong electrostatic repulsion can lead to coexistence curves with upward concavity as stipulated by RPA, but the LLPS propensity of a strictly alternating charge sequence was likely overestimated by RPA and lattice models because interchain stabilization of this sequence requires spatial alignments that are difficult to achieve in real space. These results help delineate the utility and limitations of the charge pattern parameters and of RPA, pointing to further efforts necessary for rationalizing the newly observed subtleties.
1705.05921 | Ana Pavel | Ana B. Pavel and Kirill S. Korolev | Genetic load makes cancer cells more sensitive to common drugs: evidence
from Cancer Cell Line Encyclopedia | null | Scientific Reports (7:1938). Springer Nature. May 16 2017 | 10.1038/s41598-017-02178-1 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic alterations initiate tumors and enable the evolution of drug
resistance. The pro-cancer view of mutations is however incomplete, and several
studies show that mutational load can reduce tumor fitness. Given its negative
effect, genetic load should make tumors more sensitive to anticancer drugs.
Here, we test this hypothesis across all major types of cancer from the Cancer
Cell Line Encyclopedia, which provides genetic and expression data of 496 cell
lines together with their response to 24 common anticancer drugs. We found that
the efficacy of 9 out of 24 drugs showed significant association with genetic
load in a pan-cancer analysis. The associations for some tissue-drug
combinations were remarkably strong, with genetic load explaining up to 83% of
the variance in the drug response. Overall, the role of genetic load depended
on both the drug and the tissue type with 10 tissues being particularly
vulnerable to genetic load. We also identified changes in gene expression
associated with increased genetic load, which included cell-cycle checkpoints,
DNA damage and apoptosis. Our results show that genetic load is an important
component of tumor fitness and can predict drug sensitivity. Beyond being a
biomarker, genetic load might be a new, unexplored vulnerability of cancer.
| [
{
"created": "Tue, 16 May 2017 21:02:48 GMT",
"version": "v1"
}
] | 2017-05-18 | [
[
"Pavel",
"Ana B.",
""
],
[
"Korolev",
"Kirill S.",
""
]
] | Genetic alterations initiate tumors and enable the evolution of drug resistance. The pro-cancer view of mutations is however incomplete, and several studies show that mutational load can reduce tumor fitness. Given its negative effect, genetic load should make tumors more sensitive to anticancer drugs. Here, we test this hypothesis across all major types of cancer from the Cancer Cell Line Encyclopedia, which provides genetic and expression data of 496 cell lines together with their response to 24 common anticancer drugs. We found that the efficacy of 9 out of 24 drugs showed significant association with genetic load in a pan-cancer analysis. The associations for some tissue-drug combinations were remarkably strong, with genetic load explaining up to 83% of the variance in the drug response. Overall, the role of genetic load depended on both the drug and the tissue type with 10 tissues being particularly vulnerable to genetic load. We also identified changes in gene expression associated with increased genetic load, which included cell-cycle checkpoints, DNA damage and apoptosis. Our results show that genetic load is an important component of tumor fitness and can predict drug sensitivity. Beyond being a biomarker, genetic load might be a new, unexplored vulnerability of cancer. |
q-bio/0310003 | Brian Fristensky | Sandhya Tewari, Stuart M. Brown, and Brian Fristensky | Plant defense multigene families: I. Divergence of Fusarium
solani-induced expression in Pisum and Lathyrus | 13 pages, 6 figures arXiv reference added to first page; minor
formatting changes | null | null | null | q-bio.PE | null | The defense response in plants challenged with pathogens is characterized by
the activation of a diverse set of genes. Many of the same genes are induced in
the defense responses of a wide range of plant species. How plant defense gene
families evolve may therefore provide an important clue to our understanding of
how disease resistance evolves. Because studies usually focus on a single host
species, little data are available regarding changes in defense gene expression
patterns as species diverge. The expression of defense-induced genes PR10,
chitinase and chalcone synthase was assayed in four pea species (Pisum sativum,
P. humile, P. elatius and P. fulvum) and two Lathyrus species (L. sativus and
L. tingitanus) which exhibited a range of infection phenotypes with Fusarium
solani. In P. sativum, resistance was accompanied by a strong induction of
defense genes at 8 hr. post-inoculation. Weaker induction was seen in
susceptible interactions in wild species. Divergence in the timing of PR10
expression was most striking between P. sativum and its closest relative, P.
humile. Two members of this multigene family, designated PR10.1 and PR10.2, are
strongly-expressed in response to Fusarium, while the PR10.3 gene is more
weakly expressed, among Pisum species. The rapidity with which PR10 expression
evolves raises the question, is divergence of defense gene expression a part of
the phenotypic diversity underlying plant/pathogen coevolution?
| [
{
"created": "Sun, 5 Oct 2003 23:44:24 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Oct 2003 17:17:56 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Tewari",
"Sandhya",
""
],
[
"Brown",
"Stuart M.",
""
],
[
"Fristensky",
"Brian",
""
]
] | The defense response in plants challenged with pathogens is characterized by the activation of a diverse set of genes. Many of the same genes are induced in the defense responses of a wide range of plant species. How plant defense gene families evolve may therefore provide an important clue to our understanding of how disease resistance evolves. Because studies usually focus on a single host species, little data are available regarding changes in defense gene expression patterns as species diverge. The expression of defense-induced genes PR10, chitinase and chalcone synthase was assayed in four pea species (Pisum sativum, P. humile, P. elatius and P. fulvum) and two Lathyrus species (L. sativus and L. tingitanus) which exhibited a range of infection phenotypes with Fusarium solani. In P. sativum, resistance was accompanied by a strong induction of defense genes at 8 hr. post-inoculation. Weaker induction was seen in susceptible interactions in wild species. Divergence in the timing of PR10 expression was most striking between P. sativum and its closest relative, P. humile. Two members of this multigene family, designated PR10.1 and PR10.2, are strongly-expressed in response to Fusarium, while the PR10.3 gene is more weakly expressed, among Pisum species. The rapidity with which PR10 expression evolves raises the question, is divergence of defense gene expression a part of the phenotypic diversity underlying plant/pathogen coevolution?
1810.03371 | Sumedha | Arabind Swain, A.V. Anil Kumar and Sumedha | A Stochastic model for dynamics of FtsZ filaments and the formation of
Z-ring | Replaced with published version | Eur. Phys. J. E (2020) 43: 43 | 10.1140/epje/i2020-11967-6 | null | q-bio.SC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the mechanisms responsible for the formation and growth of FtsZ
polymers and their subsequent formation of the $Z$-ring is important for
gaining insight into the cell division in prokaryotic cells. In this work, we
present a minimal stochastic model that qualitatively reproduces {\it in vitro}
observations of polymerization, formation of a dynamic contractile ring that is
stable for a long time, and depolymerization shown by FtsZ polymer filaments. In
this stochastic model, we explore different mechanisms for ring breaking and
hydrolysis. In addition to hydrolysis, which is known to regulate the dynamics
of other tubulin polymers like microtubules, we find that the presence of the
ring allows for an additional mechanism for regulating the dynamics of FtsZ
polymers. Ring breaking dynamics in the presence of hydrolysis naturally induce
rescue and catastrophe events in this model irrespective of the mechanism of
hydrolysis.
| [
{
"created": "Mon, 8 Oct 2018 11:07:21 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jun 2020 13:24:21 GMT",
"version": "v2"
}
] | 2020-07-21 | [
[
"Swain",
"Arabind",
""
],
[
"Kumar",
"A. V. Anil",
""
],
[
"Sumedha",
"",
""
]
] | Understanding the mechanisms responsible for the formation and growth of FtsZ polymers and their subsequent formation of the $Z$-ring is important for gaining insight into the cell division in prokaryotic cells. In this work, we present a minimal stochastic model that qualitatively reproduces {\it in vitro} observations of polymerization, formation of a dynamic contractile ring that is stable for a long time, and depolymerization shown by FtsZ polymer filaments. In this stochastic model, we explore different mechanisms for ring breaking and hydrolysis. In addition to hydrolysis, which is known to regulate the dynamics of other tubulin polymers like microtubules, we find that the presence of the ring allows for an additional mechanism for regulating the dynamics of FtsZ polymers. Ring breaking dynamics in the presence of hydrolysis naturally induce rescue and catastrophe events in this model irrespective of the mechanism of hydrolysis.
2010.05950 | Breno de Oliveira Ferraz | D. Bazeia, B.F. de Oliveira, J.V.O. Silva, A. Szolnoki | Breaking unidirectional invasions jeopardizes biodiversity in spatial
May-Leonard systems | 9 pages, 6 figures | Chaos, Solitons & Fractals Volume 141, December 2020, 110356 | 10.1016/j.chaos.2020.110356 | null | q-bio.PE cond-mat.stat-mech nlin.PS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-transitive dominance and the resulting cyclic loop of three or more
competing species provide a fundamental mechanism to explain biodiversity in
biological and ecological systems. Both Lotka-Volterra and May-Leonard type
model approaches agree that heterogeneity of invasion rates within this loop
does not hazard the coexistence of competing species. While the resulting
abundances of species become heterogeneous, the species that has the smallest
invasion power benefits the most from unequal invasions. Nevertheless, the
effective invasion rate in a predator and prey interaction can also be modified
by breaking the direction of dominance and allowing reversed invasion with a
smaller probability. While this alteration has no particular consequence on the
behavior within the framework of Lotka-Volterra models, the reactions of
May-Leonard systems are highly different. In the latter case, not only does the
mentioned "survival of the weakest" effect vanish, but the coexistence of the
loop also cannot be maintained if the reversed invasion exceeds a threshold
value. Interestingly, the extinction to a uniform state is characterized by a
non-monotonic probability function. While the presence of reversed invasion
does not fully diminish the evolutionary advantage of the original predator
species, the weakened effective invasion rate helps the related prey species to
collect a larger initial area for the final battle between them. The
competition of these processes determines in which uniform state the system is
likely to terminate.
| [
{
"created": "Mon, 12 Oct 2020 18:17:53 GMT",
"version": "v1"
}
] | 2020-10-20 | [
[
"Bazeia",
"D.",
""
],
[
"de Oliveira",
"B. F.",
""
],
[
"Silva",
"J. V. O.",
""
],
[
"Szolnoki",
"A.",
""
]
] | Non-transitive dominance and the resulting cyclic loop of three or more competing species provide a fundamental mechanism to explain biodiversity in biological and ecological systems. Both Lotka-Volterra and May-Leonard type model approaches agree that heterogeneity of invasion rates within this loop does not hazard the coexistence of competing species. While the resulting abundances of species become heterogeneous, the species that has the smallest invasion power benefits the most from unequal invasions. Nevertheless, the effective invasion rate in a predator and prey interaction can also be modified by breaking the direction of dominance and allowing reversed invasion with a smaller probability. While this alteration has no particular consequence on the behavior within the framework of Lotka-Volterra models, the reactions of May-Leonard systems are highly different. In the latter case, not only does the mentioned "survival of the weakest" effect vanish, but the coexistence of the loop also cannot be maintained if the reversed invasion exceeds a threshold value. Interestingly, the extinction to a uniform state is characterized by a non-monotonic probability function. While the presence of reversed invasion does not fully diminish the evolutionary advantage of the original predator species, the weakened effective invasion rate helps the related prey species to collect a larger initial area for the final battle between them. The competition of these processes determines in which uniform state the system is likely to terminate.
1005.3862 | Iaroslav Ispolatov | Michael Doebeli, Iaroslav Ispolatov | Continuously stable strategies as evolutionary branching points | 22 pages, 4 figures | Journal of Theoretical Biology, v. 266, 21 October 2010, pp.
529-535 | 10.1016/j.jtbi.2010.06.036 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary branching points are a paradigmatic feature of adaptive
dynamics, because they are potential starting points for adaptive
diversification. The antithesis to evolutionary branching points are
Continuously stable strategies (CSS's), which are convergent stable and
evolutionarily stable equilibrium points of the adaptive dynamics and hence are
thought to represent endpoints of adaptive processes. However, this assessment
is based on situations in which the invasion fitness function determining the
adaptive dynamics have non-zero second derivatives at a CSS. Here we show that
the scope of evolutionary branching can increase if the invasion fitness
function vanishes to higher than first order at a CSS. Using a class of
classical models for frequency-dependent competition, we show that if the
invasion fitness vanishes to higher orders, a CSS may be the starting point for
evolutionary branching, with the only additional requirement that mutant types
need to reach a certain threshold frequency, which can happen e.g. due to
demographic stochasticity. Thus, when invasion fitness functions vanish to
higher than first order at equilibrium points of the adaptive dynamics,
evolutionary diversification can occur even after convergence to an
evolutionarily stable strategy.
| [
{
"created": "Fri, 21 May 2010 00:06:03 GMT",
"version": "v1"
}
] | 2017-02-07 | [
[
"Doebeli",
"Michael",
""
],
[
"Ispolatov",
"Iaroslav",
""
]
] | Evolutionary branching points are a paradigmatic feature of adaptive dynamics, because they are potential starting points for adaptive diversification. The antithesis to evolutionary branching points are continuously stable strategies (CSS's), which are convergent stable and evolutionarily stable equilibrium points of the adaptive dynamics and hence are thought to represent endpoints of adaptive processes. However, this assessment is based on situations in which the invasion fitness function determining the adaptive dynamics has non-zero second derivatives at a CSS. Here we show that the scope of evolutionary branching can increase if the invasion fitness function vanishes to higher than first order at a CSS. Using a class of classical models for frequency-dependent competition, we show that if the invasion fitness vanishes to higher orders, a CSS may be the starting point for evolutionary branching, with the only additional requirement that mutant types need to reach a certain threshold frequency, which can happen e.g. due to demographic stochasticity. Thus, when invasion fitness functions vanish to higher than first order at equilibrium points of the adaptive dynamics, evolutionary diversification can occur even after convergence to an evolutionarily stable strategy.
2406.13889 | Jordi Garcia-Ojalvo | Alda Sabalic, Victoria Moiseeva, Andres Cisneros, Oleg Deryagin,
Eusebio Perdiguero, Pura Mu\~noz-Canoves, Jordi Garcia-Ojalvo | Network-community analysis of cellular senescence | 20 pages, 11 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Most cellular phenotypes are genetically complex. Identifying the set of
genes that are most closely associated with a specific cellular state is still
an open question in many cases. Here we study the transcriptional profile of
cellular senescence using a combination of network-based approaches, which
include eigenvector centrality feature selection and community detection. We
apply our method to cell-type-resolved RNA sequencing data obtained from
injured muscle tissue in mice. The analysis identifies some genetic markers
consistent with previous findings, and other previously unidentified ones,
which are validated with previously published single-cell RNA sequencing data
in a different type of tissue. The key identified genes, both those previously
known and the newly identified ones, are transcriptional targets of factors
known to be associated with established hallmarks of senescence, and can thus
be interpreted as molecular correlates of such hallmarks. The method proposed
here could be applied to any complex cellular phenotype even when only bulk RNA
sequencing is available, provided the data is resolved by cell type.
| [
{
"created": "Wed, 19 Jun 2024 23:41:10 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Sabalic",
"Alda",
""
],
[
"Moiseeva",
"Victoria",
""
],
[
"Cisneros",
"Andres",
""
],
[
"Deryagin",
"Oleg",
""
],
[
"Perdiguero",
"Eusebio",
""
],
[
"Muñoz-Canoves",
"Pura",
""
],
[
"Garcia-Ojalvo",
"Jordi",
""
]
] | Most cellular phenotypes are genetically complex. Identifying the set of genes that are most closely associated with a specific cellular state is still an open question in many cases. Here we study the transcriptional profile of cellular senescence using a combination of network-based approaches, which include eigenvector centrality feature selection and community detection. We apply our method to cell-type-resolved RNA sequencing data obtained from injured muscle tissue in mice. The analysis identifies some genetic markers consistent with previous findings, and other previously unidentified ones, which are validated with previously published single-cell RNA sequencing data in a different type of tissue. The key identified genes, both those previously known and the newly identified ones, are transcriptional targets of factors known to be associated with established hallmarks of senescence, and can thus be interpreted as molecular correlates of such hallmarks. The method proposed here could be applied to any complex cellular phenotype even when only bulk RNA sequencing is available, provided the data is resolved by cell type. |
2405.11009 | Kamila Barylska | Kamila Barylska, Anna Gogoli\'nska | Petri nets in modelling glucose regulating processes in the liver | submitted to International Workshop on Petri Nets and Software
Engineering (PNSE 2024) | null | null | null | q-bio.OT cs.CL | http://creativecommons.org/licenses/by/4.0/ | Diabetes is a chronic condition, considered one of the civilization diseases,
that is characterized by sustained high blood sugar levels. There is no doubt
that more and more people are going to suffer from diabetes, hence it is crucial
to better understand its biological foundations. The essential processes
related to the control of glucose levels in the blood are: glycolysis (process
of breaking down of glucose) and glucose synthesis, both taking place in the
liver. The glycolysis occurs during feeding and it is stimulated by insulin. On
the other hand, the glucose synthesis arises during fasting and it is
stimulated by glucagon. In the paper we present a Petri net model of glycolysis
and glucose synthesis in the liver. The model is created based on medical
literature. Standard Petri net techniques are used to analyse the properties
of the model: traps, reachability graphs, tokens dynamics, deadlocks analysis.
The results are described in the paper. Our analysis shows that the model
captures the interactions between different enzymes and substances, which is
consistent with the biological processes occurring during fasting and feeding.
The model constitutes the first element of our long-time goal to create the
whole body model of the glucose regulation in a healthy human and a person with
diabetes.
| [
{
"created": "Fri, 17 May 2024 13:15:01 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Barylska",
"Kamila",
""
],
[
"Gogolińska",
"Anna",
""
]
] | Diabetes is a chronic condition, considered one of the civilization diseases, that is characterized by sustained high blood sugar levels. There is no doubt that more and more people are going to suffer from diabetes, hence it is crucial to understand better its biological foundations. The essential processes related to the control of glucose levels in the blood are: glycolysis (process of breaking down of glucose) and glucose synthesis, both taking place in the liver. The glycolysis occurs during feeding and it is stimulated by insulin. On the other hand, the glucose synthesis arises during fasting and it is stimulated by glucagon. In the paper we present a Petri net model of glycolysis and glucose synthesis in the liver. The model is created based on medical literature. Standard Petri net techniques are used to analyse the properties of the model: traps, reachability graphs, token dynamics, deadlock analysis. The results are described in the paper. Our analysis shows that the model captures the interactions between different enzymes and substances, which is consistent with the biological processes occurring during fasting and feeding. The model constitutes the first element of our long-time goal to create the whole body model of the glucose regulation in a healthy human and a person with diabetes. |
2404.00822 | Francisco-Jose Perez-Reche | Francisco J. Perez-Reche | Impact of heterogeneity on infection probability: Insights from
single-hit dose-response models | 36 pages, 7 figures | null | null | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by/4.0/ | The process of infection of a host is complex, influenced by factors such as
microbial variation within and between hosts as well as differences in dose
across hosts. This study uses dose-response and microbial growth models to
delve into the impact of these factors on infection probability. It is
rigorously demonstrated that within-host heterogeneity in microbial infectivity
enhances the probability of infection. The effect of infectivity and dose
variation between hosts is studied in terms of the expected value of the
probability of infection. General analytical findings, derived under the
assumption of small infectivity, reveal that both types of heterogeneity reduce
the expected infection probability. Interestingly, this trend appears
consistent across specific dose-response models, suggesting a limited role for
the small infectivity condition. Additionally, the vital dynamics behind
heterogeneous infectivity are investigated with a microbial growth model which
enhances the biological significance of single-hit dose-response models.
Testing these mathematical predictions inspires new and challenging laboratory
experiments that could deepen our understanding of infections.
| [
{
"created": "Sun, 31 Mar 2024 23:18:53 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Perez-Reche",
"Francisco J.",
""
]
] | The process of infection of a host is complex, influenced by factors such as microbial variation within and between hosts as well as differences in dose across hosts. This study uses dose-response and microbial growth models to delve into the impact of these factors on infection probability. It is rigorously demonstrated that within-host heterogeneity in microbial infectivity enhances the probability of infection. The effect of infectivity and dose variation between hosts is studied in terms of the expected value of the probability of infection. General analytical findings, derived under the assumption of small infectivity, reveal that both types of heterogeneity reduce the expected infection probability. Interestingly, this trend appears consistent across specific dose-response models, suggesting a limited role for the small infectivity condition. Additionally, the vital dynamics behind heterogeneous infectivity are investigated with a microbial growth model which enhances the biological significance of single-hit dose-response models. Testing these mathematical predictions inspires new and challenging laboratory experiments that could deepen our understanding of infections. |
2404.14224 | Laura Blicher Ms | Laura Hjort Blicher, Peter Emil Carstensen, Jacob Bendsen, Henrik
Linden, Bj{\o}rn Hald, Kim Kristensen, John Bagterp J{\o}rgensen | Modeling principles for a physiology-based whole-body model of human
metabolism | 6 pages, 3 figures, 3 tables, submitted to be presented at a
conference | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physiological whole-body models are valuable tools for the development of
novel drugs where understanding the system aspects is important. This paper
presents a generalized model that encapsulates the structure and flow of
whole-body human physiology. The model contains vascular, interstitial, and
cellular subcompartments for each organ. Scaling of volumes and blood flows is
described to allow for investigation across populations or specific patient
groups. The model equations and the corresponding parameters are presented
along with a catalog of functions that can be used to define the organ
transport model and the biochemical reaction model. A simple example
illustrates the procedure.
| [
{
"created": "Mon, 22 Apr 2024 14:35:05 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Blicher",
"Laura Hjort",
""
],
[
"Carstensen",
"Peter Emil",
""
],
[
"Bendsen",
"Jacob",
""
],
[
"Linden",
"Henrik",
""
],
[
"Hald",
"Bjørn",
""
],
[
"Kristensen",
"Kim",
""
],
[
"Jørgensen",
"John Bagterp",
""
]
] | Physiological whole-body models are valuable tools for the development of novel drugs where understanding the system aspects is important. This paper presents a generalized model that encapsulates the structure and flow of whole-body human physiology. The model contains vascular, interstitial, and cellular subcompartments for each organ. Scaling of volumes and blood flows is described to allow for investigation across populations or specific patient groups. The model equations and the corresponding parameters are presented along with a catalog of functions that can be used to define the organ transport model and the biochemical reaction model. A simple example illustrates the procedure. |
1009.0652 | Gabriel Cardona | Gabriel Cardona, Merce Llabres, Francesc Rossello | A metric for galled networks | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Galled networks, directed acyclic graphs that model evolutionary histories
with reticulation cycles containing only tree nodes, have become very popular
due to both their biological significance and the existence of polynomial time
algorithms for their reconstruction. In this paper we prove that Nakhleh's $m$
measure is a metric for this class of phylogenetic networks and hence it can be
safely used to evaluate galled network reconstruction methods.
| [
{
"created": "Fri, 3 Sep 2010 13:07:50 GMT",
"version": "v1"
}
] | 2010-09-06 | [
[
"Cardona",
"Gabriel",
""
],
[
"Llabres",
"Merce",
""
],
[
"Rossello",
"Francesc",
""
]
] | Galled networks, directed acyclic graphs that model evolutionary histories with reticulation cycles containing only tree nodes, have become very popular due to both their biological significance and the existence of polynomial time algorithms for their reconstruction. In this paper we prove that Nakhleh's $m$ measure is a metric for this class of phylogenetic networks and hence it can be safely used to evaluate galled network reconstruction methods. |
1302.3861 | Matthew Cobley | Matthew James Cobley | The Flexibility and Musculature of the Ostrich Neck: Implications for
the Feeding Ecology and Reconstruction of the Sauropoda
(Dinosauria:Saurischia) | MSc Thesis, University of Bristol, UK, Department of Earth Science.
71 Pages | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Sauropoda were the largest terrestrial animals ever to have lived on this
planet. As their nutritional requirements were so huge, their diet holds sway
over the ecology of many Mesozoic herbivores. The diet of the sauropods is
limited by their feeding envelope, which in turn is governed by the posture and
flexibility of their elongate necks. Yet the exact nature of the flexibility
and posture of the neck has been a contentious issue. Previous studies have
utilised computer models of dry bone, mechanical principles or the flexibility
of the necks of extant animals. However, the effect of the musculature of the
neck has yet to be investigated. Through measurements of the flexibility of the
ostrich neck after cumulative tissue removal, analyses of the muscle attachment
sites of the ostrich and sauropods, and testing of the Osteological Neutral
Pose model, this study attempts to rectify this situation. The ostrich neck was
shown to have three sections of flexibility; a slightly flexible anterior
section, a very flexible middle section and a stiff posterior section. The
Osteological Neutral Pose did not show these sections, and was shown to
potentially overestimate and underestimate flexibility. It was also found that
the inter-vertebral space could account for varying estimates of flexibility,
and that sauropods would have proportionally more muscle mass at the base of
the neck in relation to the ostrich. Ultimately, it was shown that the tissues
of the neck place the limits of flexibility, and that zygapophyseal overlap
does not indicate the flexibility of the neck. Should the Osteological Neutral
Pose affect sauropod flexibility estimates in the same manner as that of the
ostrich (a general overestimate), then the sauropods would have a more limited
feeding envelope than previously thought, allowing for greater niche
partitioning between groups.
| [
{
"created": "Fri, 15 Feb 2013 20:08:13 GMT",
"version": "v1"
}
] | 2013-02-18 | [
[
"Cobley",
"Matthew James",
""
]
] | The Sauropoda were the largest terrestrial animals ever to have lived on this planet. As their nutritional requirements were so huge, their diet holds sway over the ecology of many Mesozoic herbivores. The diet of the sauropods is limited by their feeding envelope, which in turn is governed by the posture and flexibility of their elongate necks. Yet the exact nature of the flexibility and posture of the neck has been a contentious issue. Previous studies have utilised computer models of dry bone, mechanical principles or the flexibility of the necks of extant animals. However, the effect of the musculature of the neck has yet to be investigated. Through measurements of the flexibility of the ostrich neck after cumulative tissue removal, analyses of the muscle attachment sites of the ostrich and sauropods, and testing of the Osteological Neutral Pose model, this study attempts to rectify this situation. The ostrich neck was shown to have three sections of flexibility; a slightly flexible anterior section, a very flexible middle section and a stiff posterior section. The Osteological Neutral Pose did not show these sections, and was shown to potentially overestimate and underestimate flexibility. It was also found that the inter-vertebral space could account for varying estimates of flexibility, and that sauropods would have proportionally more muscle mass at the base of the neck in relation to the ostrich. Ultimately, it was shown that the tissues of the neck place the limits of flexibility, and that zygapophyseal overlap does not indicate the flexibility of the neck. Should the Osteological Neutral Pose affect sauropod flexibility estimates in the same manner as that of the ostrich (a general overestimate), then the sauropods would have a more limited feeding envelope than previously thought, allowing for greater niche partitioning between groups. |
2206.09624 | Francois Fages | Mathieu Hemery (Lifeware), Fran\c{c}ois Fages (Lifeware) | Algebraic Biochemistry: a Framework for Analog Online Computation in
Cells | null | Proc. Int. Conf. Computational Methods for Systems Biology
CMSB'22, Sep 2022, Bucarest, Romania | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Turing completeness of continuous chemical reaction networks (CRNs)
states that any computable real function can be computed by a continuous CRN on
a finite set of molecular species, possibly restricted to elementary reactions,
i.e. with at most two reactants and mass action law kinetics. In this paper, we
introduce a notion of online analog computation for the CRNs that stabilize the
concentration of their output species to the result of some function of the
concentration values of their input species, whatever changes are operated on
the inputs during the computation. We prove that the set of real functions
stabilized by a CRN with mass action law kinetics is precisely the set of real
algebraic functions.
| [
{
"created": "Mon, 20 Jun 2022 08:18:21 GMT",
"version": "v1"
}
] | 2022-06-22 | [
[
"Hemery",
"Mathieu",
"",
"Lifeware"
],
[
"Fages",
"François",
"",
"Lifeware"
]
] | The Turing completeness of continuous chemical reaction networks (CRNs) states that any computable real function can be computed by a continuous CRN on a finite set of molecular species, possibly restricted to elementary reactions, i.e. with at most two reactants and mass action law kinetics. In this paper, we introduce a notion of online analog computation for the CRNs that stabilize the concentration of their output species to the result of some function of the concentration values of their input species, whatever changes are operated on the inputs during the computation. We prove that the set of real functions stabilized by a CRN with mass action law kinetics is precisely the set of real algebraic functions. |