Dataset columns: id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract
2207.06518
|
Peter Taylor
|
Thomas W. Owen, Gabrielle M. Schroeder, Vytene Janiukstyte, Gerard R.
Hall, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus
Rugg-Gunn, Yujiang Wang, Peter N. Taylor
|
MEG abnormalities and mechanisms of surgical failure in neocortical
epilepsy
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Neocortical epilepsy surgery fails to achieve post-operative seizure freedom in 30-40% of cases. It is not fully understood why surgery in some patients is unsuccessful. By comparing interictal MEG bandpower from patients to normative maps, which describe healthy spatial and population variability, we identify patient-specific abnormalities relating to surgical failure. We propose three mechanisms contributing to poor surgical outcome: 1) failure to resect abnormalities, 2) failure to remove all epileptogenic abnormalities, and 3) insufficient impact on the overall cortical abnormality. We develop markers of these mechanisms and validate them against patient outcomes. Resting-state MEG data were acquired for 70 healthy controls and 32 patients with refractory neocortical epilepsy. Relative bandpower maps were computed using source-localised recordings from healthy controls. Patient- and region-specific bandpower abnormalities were estimated as the maximum absolute z-score, using healthy data as a baseline. Resected regions were identified from post-operative MRI. We hypothesised that our mechanism markers would discriminate patients' post-surgery seizure outcomes. Mechanisms of surgical failure discriminate surgical outcome groups (abnormalities not targeted: AUC=0.80; partial resection of the epileptogenic zone: AUC=0.68; insufficient cortical abnormality impact: AUC=0.64). Leveraging all markers together, we found that 95% of those who were not seizure-free had markers of surgical failure in at least one of the three proposed mechanisms. In contrast, of those patients without markers for any mechanism, 80% were seizure-free. Abnormality mapping across the brain is important for a wide range of neurological conditions. Here we demonstrated that interictal MEG bandpower mapping has merit for localising pathology and improving our mechanistic understanding of epilepsy.
|
[
{
"created": "Wed, 13 Jul 2022 20:38:06 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Feb 2023 15:53:13 GMT",
"version": "v2"
}
] |
2023-02-14
|
[
[
"Owen",
"Thomas W.",
""
],
[
"Schroeder",
"Gabrielle M.",
""
],
[
"Janiukstyte",
"Vytene",
""
],
[
"Hall",
"Gerard R.",
""
],
[
"McEvoy",
"Andrew",
""
],
[
"Miserocchi",
"Anna",
""
],
[
"de Tisi",
"Jane",
""
],
[
"Duncan",
"John S.",
""
],
[
"Rugg-Gunn",
"Fergus",
""
],
[
"Wang",
"Yujiang",
""
],
[
"Taylor",
"Peter N.",
""
]
] |
|
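The abnormality estimate described in the record above (the maximum absolute z-score against a normative healthy baseline) can be sketched numerically. The array shapes, region and band counts, and random data below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 70 healthy controls, 50 cortical regions, 5 frequency
# bands. Random values stand in for relative bandpower from source-localised MEG.
controls = rng.normal(size=(70, 50, 5))
patient = rng.normal(size=(50, 5))

# Normative map: healthy mean and standard deviation per region and band.
mu = controls.mean(axis=0)
sigma = controls.std(axis=0, ddof=1)

# z-score the patient against the normative map, then take the maximum
# absolute z across bands: one abnormality value per region.
z = (patient - mu) / sigma
abnormality = np.abs(z).max(axis=1)

print(abnormality.shape)   # one value per region: (50,)
```

Regions with high abnormality that fall outside the resection mask would then flag the "abnormalities not targeted" mechanism.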
q-bio/0309009
|
Giuseppe Vitiello
|
Eliano Pessa and Giuseppe Vitiello
|
Quantum noise, entanglement and chaos in the quantum field theory of
mind/brain states
|
15 pages
|
Mind and Matter 1, 59 (2003)
| null | null |
q-bio.OT q-bio.NC quant-ph
| null |
We review the dissipative quantum model of the brain and present recent developments related to the role of entanglement, quantum noise and chaos in the model.
|
[
{
"created": "Sun, 21 Sep 2003 21:54:17 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Pessa",
"Eliano",
""
],
[
"Vitiello",
"Giuseppe",
""
]
] |
|
2302.13340
|
Anando Sen
|
Anando Sen, Victoria Hedley, John Owen, Ronald Cornet, Dipak Kalra,
Corinna Engel, Avril Palmeri, Joanne Lee, Jean-Christophe Roze, Joseph F
Standing, Adilia Warris, Claudia Pansieri, Rebecca Leary, Mark Turner, Volker
Straub
|
Standardizing Paediatric Clinical Data: The Development of the
conect4children (c4c) Cross Cutting Paediatric Data Dictionary
| null |
Journal of the Society of Clinical Data Management, Volume 2,
Issue 3, 2023
|
10.47912/jscdm.218
| null |
q-bio.OT cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Standardization of the data items collected in paediatric clinical trials is an important but challenging issue. The Clinical Data Interchange Standards Consortium (CDISC) data standards are well understood by the pharmaceutical industry but lack the implementation of some paediatric-specific concepts. When a paediatric concept is absent from the CDISC standards, companies and research institutions take multiple approaches to the collection of paediatric data, leading to different implementations of the standards and potentially limited utility for reuse. To overcome these challenges, the conect4children consortium has developed a cross-cutting paediatric data dictionary (CCPDD). The dictionary was built in three phases: scoping (including a survey sent to ten industrial and 34 academic partners to gauge interest), creation of a longlist, and consensus building for the final set of terms. The dictionary was finalized during a workshop with attendees from academia, hospitals, industry and CDISC. The attendees held detailed discussions on each data item and participated in the final vote on the inclusion of each item in the CCPDD. Nine industrial and 34 academic partners responded to the survey, which showed overall interest in the development of the CCPDD. Following the final vote on 27 data items, three were rejected, six were deferred to the next version, and a final opinion was sought from CDISC. The first version of the CCPDD, with 25 data items, was released in August 2019. Continued use of the dictionary has the potential to ensure the collection of standardized data that is interoperable and can later be pooled and reused for other applications. The dictionary is already being used for case report form creation in three clinical trials. The CCPDD will also serve as one of the inputs to the Paediatric User Guide being developed by CDISC.
|
[
{
"created": "Sun, 26 Feb 2023 16:10:12 GMT",
"version": "v1"
}
] |
2023-02-28
|
[
[
"Sen",
"Anando",
""
],
[
"Hedley",
"Victoria",
""
],
[
"Owen",
"John",
""
],
[
"Cornet",
"Ronald",
""
],
[
"Kalra",
"Dipak",
""
],
[
"Engel",
"Corinna",
""
],
[
"Palmeri",
"Avril",
""
],
[
"Lee",
"Joanne",
""
],
[
"Roze",
"Jean-Christophe",
""
],
[
"Standing",
"Joseph F",
""
],
[
"Warris",
"Adilia",
""
],
[
"Pansieri",
"Claudia",
""
],
[
"Leary",
"Rebecca",
""
],
[
"Turner",
"Mark",
""
],
[
"Straub",
"Volker",
""
]
] |
|
1109.1495
|
Carlo Amoruso Mr
|
Carlo Amoruso, Thibault Lagache, David Holcman
|
Modeling the early steps of cytoplasmic trafficking in viral infection
and gene delivery
|
31 pages, 11 figures. To appear in SIAM
| null | null | null |
q-bio.SC q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gene delivery of nucleic acid to the cell nucleus is a fundamental step in
gene therapy. In this review of modeling drug and gene delivery, we focus on
the particular stage of plasmid DNA or virus cytoplasmic trafficking. A
challenging problem is to quantify the success of this limiting stage. We
present some models and simulations of plasmid trafficking and of the limiting
phase of DNA-polycation escape from an endosome and discuss virus cytoplasmic
trafficking. The models can be used to assess the success of viral escape from
endosomes, to quantify the early step of viral-cell infection, and to propose
new simulation tools for designing new hybrid viruses as synthetic vectors.
|
[
{
"created": "Wed, 7 Sep 2011 15:49:00 GMT",
"version": "v1"
}
] |
2011-09-08
|
[
[
"Amoruso",
"Carlo",
""
],
[
"Lagache",
"Thibault",
""
],
[
"Holcman",
"David",
""
]
] |
|
2005.07171
|
Dimitra Kosta
|
Muhammad Ardiyansyah, Dimitra Kosta, and Kaie Kubjas
|
The Model-Specific Markov Embedding Problem for Symmetric Group-Based
Models
|
25 pages, 1 figure, 6 tables. An initial version of the main result
of this preprint appeared in the preprint arXiv:1705.09228. Following the
advice of referees, we divided the earlier preprint into two. The present
preprint supersedes the section on the embedding problem in arXiv:1705.09228.
A first version of the present preprint had title "Model Embeddability for
Symmetric Group-Based Models"
| null | null | null |
q-bio.PE math.CO math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study model embeddability, a variation of the classical embedding problem in probability theory in which, beyond requiring that the Markov matrix be the matrix exponential of a rate matrix, we additionally ask that the rate matrix follow the model structure. We provide a characterisation of model-embeddable Markov matrices corresponding to symmetric group-based phylogenetic models. In particular, we provide necessary and sufficient conditions in terms of the eigenvalues of symmetric group-based matrices. To showcase our main result on model embeddability, we provide an application to hachimoji models, which are eight-state models for synthetic DNA. Moreover, our main result on model embeddability enables us to compute the volume of the set of model-embeddable Markov matrices relative to the volume of other relevant sets of Markov matrices within the model.
|
[
{
"created": "Thu, 14 May 2020 17:41:24 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Apr 2021 17:46:40 GMT",
"version": "v2"
}
] |
2021-04-02
|
[
[
"Ardiyansyah",
"Muhammad",
""
],
[
"Kosta",
"Dimitra",
""
],
[
"Kubjas",
"Kaie",
""
]
] |
|
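The embedding question in the record above (is a given Markov matrix the exponential of a rate matrix with the right structure?) can be probed numerically. The sketch below is a simplification that uses only the principal matrix logarithm rather than the paper's sharper eigenvalue characterisation: it builds a hypothetical symmetric group-based generator with two rate classes, exponentiates it, and checks that the recovered logarithm is a valid rate matrix. All rate values are invented:

```python
import numpy as np

def sym_expm(Q):
    # Matrix exponential of a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh(Q)
    return V @ np.diag(np.exp(w)) @ V.T

def sym_logm(M):
    # Principal matrix logarithm of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def is_rate_matrix(Q, tol=1e-9):
    # A rate matrix has non-negative off-diagonal entries and zero row sums.
    off_diag = Q - np.diag(np.diag(Q))
    return bool(np.all(off_diag >= -tol) and np.allclose(Q.sum(axis=1), 0.0, atol=1e-8))

# A symmetric group-based style 4x4 generator with two rate classes (hypothetical).
a, b = 0.3, 0.1
Q = np.array([
    [-(a + 2 * b), a, b, b],
    [a, -(a + 2 * b), b, b],
    [b, b, -(a + 2 * b), a],
    [b, b, a, -(a + 2 * b)],
])

M = sym_expm(Q)        # an embeddable Markov matrix by construction
Q_hat = sym_logm(M)    # recover a candidate rate matrix

print(is_rate_matrix(Q_hat))
```

Model embeddability additionally demands that `Q_hat` respect the model's symmetry pattern, which the paper characterises through eigenvalue conditions.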
1902.04321
|
Adam Barrett DPhil
|
Adam B. Barrett and Pedro A. M. Mediano
|
The Phi measure of integrated information is not well-defined for
general physical systems
|
8 pages
|
Journal of Consciousness Studies, 26 (1-2). pp. 11-20. ISSN
1355-8250
| null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
According to the Integrated Information Theory of Consciousness,
consciousness is a fundamental observer-independent property of physical
systems, and the measure Phi of integrated information is identical to the
quantity or level of consciousness. For this to be plausible, there should be
no alternative formulae for Phi consistent with the axioms of IIT, and there
should not be cases of Phi being ill-defined. This article presents three ways
in which Phi, in its current formulation, fails to meet these standards, and
discusses how this problem might be addressed.
|
[
{
"created": "Tue, 12 Feb 2019 10:40:06 GMT",
"version": "v1"
}
] |
2019-02-13
|
[
[
"Barrett",
"Adam B.",
""
],
[
"Mediano",
"Pedro A. M.",
""
]
] |
|
1908.09314
|
Mamoru Sugamoto
|
Naoaki Fujimoto, Muneharu Onoue, Akio Sugamoto, Mamoru Sugamoto, and
Tsukasa Yumibayashi
|
Surround Inhibition Mechanism by Deep Learning
|
5 pages, 4 figures
| null | null |
OUJ-FTC-3, OCHA-PP-356
|
q-bio.NC physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the sensation of tones, vision and other stimuli, the "surround inhibition mechanism" (or "lateral inhibition mechanism") is crucial. The mechanism enhances the signal of the strongest tone, color or other stimulus by reducing and inhibiting the surrounding signals, since the latter are less important. This surround inhibition mechanism is well studied in the physiology of sensory systems. A neural network with two hidden layers in addition to the input and output layers is constructed, with 60 neurons (units) in each of the four layers. The label (correct answer) is prepared from an input signal by applying seven iterations of the "Hartline mechanism", that is, by sending inhibitory signals from the neighboring neurons and then amplifying all the signals. The implications obtained by deep learning with this neural network are compared with the standard physiological understanding of the surround inhibition mechanism.
|
[
{
"created": "Sun, 25 Aug 2019 12:41:56 GMT",
"version": "v1"
}
] |
2019-08-27
|
[
[
"Fujimoto",
"Naoaki",
""
],
[
"Onoue",
"Muneharu",
""
],
[
"Sugamoto",
"Akio",
""
],
[
"Sugamoto",
"Mamoru",
""
],
[
"Yumibayashi",
"Tsukasa",
""
]
] |
|
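The label-generation procedure in the record above (repeated rounds of inhibition from neighbouring units followed by amplification) can be illustrated with a minimal one-dimensional sketch. The inhibition weight, gain and Gaussian test input below are invented for illustration and are not the paper's parameters:

```python
import numpy as np

def hartline_step(x, w_inhibit=0.2, gain=1.5):
    # One round: each unit is inhibited by its two nearest neighbours
    # (clipped at zero), then every signal is amplified.
    neighbours = np.roll(x, 1) + np.roll(x, -1)
    return gain * np.clip(x - w_inhibit * neighbours, 0.0, None)

# A broad bump of activity over 60 units, matching the 60-unit layers
# in the record above; the bump itself is an invented test input.
x = np.exp(-0.5 * ((np.arange(60) - 30) / 6.0) ** 2)
y = x.copy()
for _ in range(7):          # seven applications, as in the abstract
    y = hartline_step(y)

print(int(np.argmax(y)))    # the peak survives at unit 30
```

Because the convex tails of the bump receive proportionally more inhibition than the peak, repeated application sharpens the profile, which is the qualitative effect the network is trained to reproduce.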
2006.06767
|
Steffen Werner
|
Steffen Werner, W Mathijs Rozemuller, Annabel Ebbing, Anna Alemany,
Joleen Traets, Jeroen S. van Zon, Alexander van Oudenaarden, Hendrik C.
Korswagen, Greg J. Stephens, and Thomas S. Shimizu
|
Functional modules from variable genes: Leveraging percolation to
analyze noisy, high-dimensional data
|
13 pages, 8 figures
| null | null | null |
q-bio.GN physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While measurement advances now allow extensive surveys of gene activity
(large numbers of genes across many samples), interpretation of these data is
often confounded by noise -- expression counts can differ strongly across
samples due to variation of both biological and experimental origin.
Complementary to perturbation approaches, we extract functionally related
groups of genes by analyzing the standing variation within a sampled
population. To distinguish biologically meaningful patterns from
uninterpretable noise, we focus on correlated variation and develop a novel
density-based clustering approach that takes advantage of a percolation
transition generically arising in random, uncorrelated data. We apply our
approach to two contrasting RNA sequencing data sets that sample individual
variation -- across single cells of fission yeast and whole animals of C.
elegans worms -- and demonstrate robust applicability and versatility in
revealing correlated gene clusters of diverse biological origin, including cell
cycle phase, development/reproduction, tissue-specific functions, and feeding
history. Our technique exploits generic features of noisy high-dimensional data
and is applicable, beyond gene expression, to feature-rich data that sample
population-level variability in the presence of noise.
|
[
{
"created": "Thu, 11 Jun 2020 19:45:08 GMT",
"version": "v1"
}
] |
2020-06-15
|
[
[
"Werner",
"Steffen",
""
],
[
"Rozemuller",
"W Mathijs",
""
],
[
"Ebbing",
"Annabel",
""
],
[
"Alemany",
"Anna",
""
],
[
"Traets",
"Joleen",
""
],
[
"van Zon",
"Jeroen S.",
""
],
[
"van Oudenaarden",
"Alexander",
""
],
[
"Korswagen",
"Hendrik C.",
""
],
[
"Stephens",
"Greg J.",
""
],
[
"Shimizu",
"Thomas S.",
""
]
] |
|
1109.4262
|
Ilona Nagy
|
I. Nagy, J. Tóth
|
Microscopic Reversibility or Detailed Balance in Ion Channel Models
| null | null | null | null |
q-bio.MN math.CA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mass-action-type deterministic kinetic models of ion channels are usually constructed in such a way as to obey the principle of detailed balance (or microscopic reversibility) for two reasons: first, the authors aspire to have models harmonizing with thermodynamics; second, the conditions ensuring detailed balance reduce the number of reaction rate coefficients to be measured. We investigate a series of ion channel models which are asserted to obey detailed balance; however, these models violate mass conservation, and in their case only the necessary conditions (the so-called circuit conditions) are taken into account. We show that ion channel models have a very specific structure which makes the conclusions true in spite of the imprecise arguments. First, we transform the models into mass-conserving ones; second, we show that the full set of conditions ensuring detailed balance (formulated by Feinberg) leads to the same relations for the reaction rate constants in these special cases, both for the original models and the transformed ones.
|
[
{
"created": "Tue, 20 Sep 2011 10:54:53 GMT",
"version": "v1"
}
] |
2011-09-21
|
[
[
"Nagy",
"I.",
""
],
[
"Tóth",
"J.",
""
]
] |
|
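The "circuit conditions" mentioned in the record above are Kolmogorov's cycle criterion: around any cycle of states, the product of rate constants traversed in one direction must equal the product traversed in the other. A minimal check on a hypothetical three-state channel follows; all rate values are invented, and this is only the necessary condition, not Feinberg's full set:

```python
import numpy as np

def circuit_condition(rates, cycle):
    # Kolmogorov's cycle criterion: the product of rate constants traversed
    # forwards around the cycle must equal the product traversed backwards.
    n = len(cycle)
    fwd = np.prod([rates[(cycle[i], cycle[(i + 1) % n])] for i in range(n)])
    bwd = np.prod([rates[(cycle[(i + 1) % n], cycle[i])] for i in range(n)])
    return bool(np.isclose(fwd, bwd))

# Hypothetical closed/open/inactivated channel with a C-O-I-C cycle.
rates = {
    ("C", "O"): 2.0, ("O", "C"): 1.0,
    ("O", "I"): 3.0, ("I", "O"): 1.5,
    ("I", "C"): 0.5, ("C", "I"): 2.0,
}
print(circuit_condition(rates, ["C", "O", "I"]))   # 2*3*0.5 == 1*1.5*2, satisfied

broken = dict(rates)
broken[("C", "I")] = 1.0                           # perturb a single rate
print(circuit_condition(broken, ["C", "O", "I"]))  # cycle products now differ
```

Imposing the equality on every independent cycle is what eliminates free rate coefficients in practice, which is the second motivation the abstract mentions.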
1109.5402
|
I. Emrah Nikerel
|
Oscar M.J.A. Stassen, Ruud J.J. Jorna, Bastiaan A. van den Berg, Rad
Haghi, Farzad Ehtemam, Steven M. Flipse, Marco J.L. de Groot, Janine A.
Kiers, I. Emrah Nikerel and Domenico Bellomo
|
Toward tunable RNA thermo-switches for temperature dependent gene
expression
|
12 pages, 10 figures
| null | null | null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RNA thermometers are mRNA strands with a temperature-dependent secondary structure: depending on the spatial conformation, the mRNA strand can be translated (on-state) or can be inaccessible for ribosome binding (off-state). They have been found in a number of microorganisms (mainly pathogens), where they are used to adaptively regulate gene expression in response to changes in the environmental temperature. Besides naturally occurring RNA thermometers, synthetic RNA thermometers have recently been designed by modifying their natural counterparts (Hofacker et al., 2003). The newly designed RNA thermometers are simpler and exhibit a sharper switch between off- and on-states. However, the proposed trial-and-error design procedure has not been algorithmically formalized, and the switching temperature is rigidly determined by the natural RNA thermometer used as a template for the design. We developed a general algorithmic procedure (consensus distribution) for the design of RNA thermo-switches with a tunable switching temperature that can be decided in advance by the designer. A software tool with a user-friendly GUI was written to automate the design of RNA thermo-switches with a desired threshold temperature. Starting from a natural template, a new RNA thermometer was designed by our method for a new desired threshold temperature of 32°C. The designed RNA thermo-switch was experimentally validated by using it to control the expression of luciferase. A 9.2-fold increase in luminescence was observed between 30°C and 37°C, whereas between 20°C and 30°C the increase in luminescence was less than 3-fold. This work represents a first step towards the design of flexible and tunable RNA thermometers that can be used for precise control of gene expression without the need for external chemicals, and possibly for temperature measurements at nano-scale resolution.
|
[
{
"created": "Sun, 25 Sep 2011 21:35:06 GMT",
"version": "v1"
}
] |
2011-09-27
|
[
[
"Stassen",
"Oscar M. J. A.",
""
],
[
"Jorna",
"Ruud J. J.",
""
],
[
"Berg",
"Bastiaan A. van den",
""
],
[
"Haghi",
"Rad",
""
],
[
"Ehtemam",
"Farzad",
""
],
[
"Flipse",
"Steven M.",
""
],
[
"de Groot",
"Marco J. L.",
""
],
[
"Kiers",
"Janine A.",
""
],
[
"Nikerel",
"I. Emrah",
""
],
[
"Bellomo",
"Domenico",
""
]
] |
|
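The switching behaviour in the record above can be caricatured with a two-state thermodynamic model: the hairpin is either folded (off) or melted (on), and the open fraction follows a Boltzmann factor of the folding free energy. The ΔH and ΔS values below are invented to place the midpoint near the paper's 32°C threshold; real design works on the full secondary-structure ensemble, not two states:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol*K)
dH = -200_000.0    # folding enthalpy, J/mol (hypothetical)
T_mid = 273.15 + 32.0
dS = dH / T_mid    # entropy chosen so that dG_fold = 0 at the 32 C midpoint

def fraction_on(T_celsius):
    # Two-state melting: fraction of unfolded (translatable) mRNA.
    T = 273.15 + T_celsius
    dG_fold = dH - T * dS             # G(folded) - G(unfolded)
    K_fold = np.exp(-dG_fold / (R * T))
    return 1.0 / (1.0 + K_fold)

for t in (20, 30, 32, 37):
    print(t, round(float(fraction_on(t)), 3))
```

With these invented parameters the on-fraction rises steeply through the midpoint, mirroring the sharp 30°C-to-37°C luminescence increase reported in the abstract; tuning the threshold amounts to shifting ΔH/ΔS, i.e. the hairpin's stability.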
2208.00417
|
Wasiur R. KhudaBukhsh
|
Colin Klaus and Matthew Wascher and Wasiur R. KhudaBukhsh and Grzegorz
A. Rempala
|
Likelihood-Free Dynamical Survival Analysis Applied to the COVID-19
Epidemic in Ohio
|
27 pages, 7 figures. Manuscript under submission for publication
| null | null | null |
q-bio.PE math.AP math.DS physics.soc-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamical Survival Analysis (DSA) is a framework for modeling epidemics based on mean-field dynamics applied to individual (agent-level) histories of infection and recovery. Recently, DSA has been shown to be an effective tool for analyzing complex non-Markovian epidemic processes that are otherwise difficult to handle using standard methods. One of the advantages of DSA is its representation of typical epidemic data in a simple, although not explicit, form that involves solutions of certain differential equations. In this work we describe how a complex non-Markovian DSA model may be applied to a specific data set with the help of appropriate numerical and statistical schemes. The ideas are illustrated with a data example from the COVID-19 epidemic in Ohio.
|
[
{
"created": "Sun, 31 Jul 2022 11:17:18 GMT",
"version": "v1"
}
] |
2022-08-02
|
[
[
"Klaus",
"Colin",
""
],
[
"Wascher",
"Matthew",
""
],
[
"KhudaBukhsh",
"Wasiur R.",
""
],
[
"Rempala",
"Grzegorz A.",
""
]
] |
|
q-bio/0507038
|
Jaewook Joo
|
Jaewook Joo, Michelle Gunny, Marisa Cases, Peter Hudson, Reka Albert
and Eric Harvill
|
Bacteriophage-mediated competition in Bordetella bacteria
|
10pages, 8 figures
|
Proceedings of the Royal Society B: Biological Sciences (2006)
|
10.1098/rspb.2006.3512
| null |
q-bio.PE
| null |
Apparent competition between species is believed to be one of the principal
driving forces that structure ecological communities, although the precise
mechanisms have yet to be characterized. Here we develop a model system that
isolates phage-mediated interactions by neutralizing resource competition using
two genetically identical Bordetella bronchiseptica strains that differ only in
that one is the carrier of a phage and the other is susceptible to the phage.
We observe and quantify the competitive advantage of the bacterial strain
bearing the prophage in both invading and in resisting invasion by bacteria
susceptible to the phage, and use our measurements to develop a mathematical
model of phage-mediated competition. The model predicts, and experimental
evidence confirms, that the competitive advantage conferred by the phage
depends only on the relative phage pathology and is independent of other phage
and host parameters. This work combines experimental and mathematical
approaches to the study of phage-driven competition, and provides an
experimentally tested framework for evaluation of the effects of
pathogens/parasites on interspecific competition.
|
[
{
"created": "Mon, 25 Jul 2005 21:51:45 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Joo",
"Jaewook",
""
],
[
"Gunny",
"Michelle",
""
],
[
"Cases",
"Marisa",
""
],
[
"Hudson",
"Peter",
""
],
[
"Albert",
"Reka",
""
],
[
"Harvill",
"Eric",
""
]
] |
Apparent competition between species is believed to be one of the principal driving forces that structure ecological communities, although the precise mechanisms have yet to be characterized. Here we develop a model system that isolates phage-mediated interactions by neutralizing resource competition using two genetically identical Bordetella bronchiseptica strains that differ only in that one is the carrier of a phage and the other is susceptible to the phage. We observe and quantify the competitive advantage of the bacterial strain bearing the prophage in both invading and in resisting invasion by bacteria susceptible to the phage, and use our measurements to develop a mathematical model of phage-mediated competition. The model predicts, and experimental evidence confirms, that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of other phage and host parameters. This work combines experimental and mathematical approaches to the study of phage-driven competition, and provides an experimentally tested framework for evaluation of the effects of pathogens/parasites on interspecific competition.
|
2102.08524
|
Nicolas Roth
|
Teresa Chouzouris, Nicolas Roth, Caglar Cakan, Klaus Obermayer
|
Applications of optimal nonlinear control to a whole-brain network of
FitzHugh-Nagumo oscillators
|
Main paper: 20 pages, 14 figures; supplemental material: 10 pages, 9
figures. For associated video files, see
https://github.com/rederoth/nonlinearControlFHN-videos
|
Phys. Rev. E 104, 024213 (2021)
|
10.1103/PhysRevE.104.024213
| null |
q-bio.NC cs.SY eess.SY nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We apply the framework of optimal nonlinear control to steer the dynamics of
a whole-brain network of FitzHugh-Nagumo oscillators. Its nodes correspond to
the cortical areas of an atlas-based segmentation of the human cerebral cortex,
and the inter-node coupling strengths are derived from Diffusion Tensor Imaging
data of the connectome of the human brain. Nodes are coupled using an additive
scheme without delays and are driven by background inputs with fixed mean and
additive Gaussian noise. Optimal control inputs to nodes are determined by
minimizing a cost functional that penalizes the deviations from a desired
network dynamic, the control energy, and spatially non-sparse control inputs.
Using the strength of the background input and the overall coupling strength as
order parameters, the network's state-space decomposes into regions of low and
high activity fixed points separated by a high amplitude limit cycle, all of
which qualitatively correspond to the states of an isolated network node. Along
the borders, however, additional limit cycles, asynchronous states and
multistability can be observed. Optimal control is applied to several
state-switching and network synchronization tasks, and the results are compared
to controllability measures from linear control theory for the same connectome.
We find that intuitions from the latter about the roles of nodes in steering
the network dynamics, which are solely based on connectome features, do not
generally carry over to nonlinear systems, as had been previously implied.
Instead, the role of nodes under optimal nonlinear control critically depends
on the specified task and the system's location in state space. Our results
shed new light on the controllability of brain network states and may serve as
an inspiration for the design of new paradigms for non-invasive brain
stimulation.
|
[
{
"created": "Wed, 17 Feb 2021 01:38:56 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 17:29:26 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Sep 2021 13:15:58 GMT",
"version": "v3"
}
] |
2021-09-08
|
[
[
"Chouzouris",
"Teresa",
""
],
[
"Roth",
"Nicolas",
""
],
[
"Cakan",
"Caglar",
""
],
[
"Obermayer",
"Klaus",
""
]
] |
We apply the framework of optimal nonlinear control to steer the dynamics of a whole-brain network of FitzHugh-Nagumo oscillators. Its nodes correspond to the cortical areas of an atlas-based segmentation of the human cerebral cortex, and the inter-node coupling strengths are derived from Diffusion Tensor Imaging data of the connectome of the human brain. Nodes are coupled using an additive scheme without delays and are driven by background inputs with fixed mean and additive Gaussian noise. Optimal control inputs to nodes are determined by minimizing a cost functional that penalizes the deviations from a desired network dynamic, the control energy, and spatially non-sparse control inputs. Using the strength of the background input and the overall coupling strength as order parameters, the network's state-space decomposes into regions of low and high activity fixed points separated by a high amplitude limit cycle, all of which qualitatively correspond to the states of an isolated network node. Along the borders, however, additional limit cycles, asynchronous states and multistability can be observed. Optimal control is applied to several state-switching and network synchronization tasks, and the results are compared to controllability measures from linear control theory for the same connectome. We find that intuitions from the latter about the roles of nodes in steering the network dynamics, which are solely based on connectome features, do not generally carry over to nonlinear systems, as had been previously implied. Instead, the role of nodes under optimal nonlinear control critically depends on the specified task and the system's location in state space. Our results shed new light on the controllability of brain network states and may serve as an inspiration for the design of new paradigms for non-invasive brain stimulation.
|
1609.04515
|
Thomas R. Weikl
|
Michael Raatz and Thomas R. Weikl
|
Membrane tubulation by elongated and patchy nanoparticles
|
11 pages, 6 figures
|
Adv. Mater. Interfaces 2016, 1600325
|
10.1002/admi.201600325
| null |
q-bio.SC cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in nanotechnology have led to increasing interest in how
nanoparticles interact with biomembranes. Nanoparticles are wrapped
spontaneously by biomembranes if the adhesive interactions between the
particles and membranes compensate for the cost of membrane bending. In
recent years, the cooperative wrapping of spherical nanoparticles in membrane
tubules has been observed in experiments and simulations. For spherical
nanoparticles, the stability of the particle-filled membrane tubules strongly
depends on the range of the adhesive particle-membrane interactions. In this
article, we show via modeling and energy minimization that elongated and patchy
particles are wrapped cooperatively in membrane tubules that are highly stable
for all ranges of the particle-membrane interactions, compared to individual
wrapping of the particles. The cooperative wrapping of linear chains of
elongated or patchy particles in membrane tubules may thus provide an efficient
route to induce membrane tubulation, or to store such particles in membranes.
|
[
{
"created": "Thu, 15 Sep 2016 05:38:50 GMT",
"version": "v1"
}
] |
2016-09-16
|
[
[
"Raatz",
"Michael",
""
],
[
"Weikl",
"Thomas R.",
""
]
] |
Advances in nanotechnology have led to increasing interest in how nanoparticles interact with biomembranes. Nanoparticles are wrapped spontaneously by biomembranes if the adhesive interactions between the particles and membranes compensate for the cost of membrane bending. In recent years, the cooperative wrapping of spherical nanoparticles in membrane tubules has been observed in experiments and simulations. For spherical nanoparticles, the stability of the particle-filled membrane tubules strongly depends on the range of the adhesive particle-membrane interactions. In this article, we show via modeling and energy minimization that elongated and patchy particles are wrapped cooperatively in membrane tubules that are highly stable for all ranges of the particle-membrane interactions, compared to individual wrapping of the particles. The cooperative wrapping of linear chains of elongated or patchy particles in membrane tubules may thus provide an efficient route to induce membrane tubulation, or to store such particles in membranes.
|
0808.0750
|
Raouf Ghomrasni
|
Raouf Ghomrasni and Lisa Bonney
|
SDE in Random Population Growth
|
17 pages
| null | null | null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we extend the recent work of C.A. Braumann \cite{B2007} to the
case of stochastic differential equations with random coefficients. Furthermore,
the relationship of the It\^o-Stratonovich stochastic calculus to studies of
random population growth is also explained.
|
[
{
"created": "Wed, 6 Aug 2008 00:34:27 GMT",
"version": "v1"
}
] |
2008-08-07
|
[
[
"Ghomrasni",
"Raouf",
""
],
[
"Bonney",
"Lisa",
""
]
] |
In this paper we extend the recent work of C.A. Braumann \cite{B2007} to the case of stochastic differential equations with random coefficients. Furthermore, the relationship of the It\^o-Stratonovich stochastic calculus to studies of random population growth is also explained.
|
2002.11195
|
Ilya A. Surov Mr.
|
Ilya A. Surov
|
Quantum Cognitive Triad. Semantic geometry of context representation
|
44 pages, 6 figures
| null |
10.1007/s10699-020-09712-x
| null |
q-bio.NC cs.AI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper describes an algorithm for semantic representation of behavioral
contexts relative to a dichotomic decision alternative. The contexts are
represented as quantum qubit states in two-dimensional Hilbert space visualized
as points on the Bloch sphere. The azimuthal coordinate of this sphere
functions as a one-dimensional semantic space in which the contexts are
accommodated according to their subjective relevance to the considered
uncertainty. The contexts are processed in triples defined by knowledge of a
subject about a binary situational factor. The obtained triads of context
representations function as stable cognitive structure at the same time
allowing a subject to model probabilistically-variative behavior. The developed
algorithm illustrates an approach for quantitative subjectively-semantic
modeling of behavior based on conceptual and mathematical apparatus of quantum
theory.
|
[
{
"created": "Sat, 22 Feb 2020 17:29:10 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Dec 2020 14:43:28 GMT",
"version": "v2"
}
] |
2020-12-23
|
[
[
"Surov",
"Ilya A.",
""
]
] |
The paper describes an algorithm for semantic representation of behavioral contexts relative to a dichotomic decision alternative. The contexts are represented as quantum qubit states in two-dimensional Hilbert space visualized as points on the Bloch sphere. The azimuthal coordinate of this sphere functions as a one-dimensional semantic space in which the contexts are accommodated according to their subjective relevance to the considered uncertainty. The contexts are processed in triples defined by knowledge of a subject about a binary situational factor. The obtained triads of context representations function as stable cognitive structures while at the same time allowing a subject to model probabilistically variable behavior. The developed algorithm illustrates an approach for quantitative subjectively-semantic modeling of behavior based on conceptual and mathematical apparatus of quantum theory.
|
0811.3034
|
Elfi Kraka
|
S. Ranganathan, D. Izotov, E. Kraka, and D. Cremer
|
Description and Recognition of Regular and Distorted Secondary
Structures in Proteins Using the Automated Protein Structure Analysis Method
|
49 pages, 6 figures, 2 schemes
| null | null | null |
q-bio.QM q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Automated Protein Structure Analysis (APSA) method, which describes the
protein backbone as a smooth line in 3-dimensional space and characterizes it
by curvature kappa and torsion tau as a function of arc length s, was applied
on 77 proteins to determine all secondary structural units via specific
kappa(s) and tau(s) patterns. A total of 533 alpha-helices and 644 beta-strands
were recognized by APSA, whereas DSSP gives 536 and 651 units, respectively.
Kinks and distortions were quantified and the boundaries (entry and exit) of
secondary structures were classified. Similarity between proteins can be easily
quantified using APSA, as was demonstrated for the roll architecture of
proteins ubiquitin and spinach ferredoxin. A twenty-by-twenty comparison of
all-alpha domains showed that the curvature-torsion patterns generated by APSA
provide an accurate and meaningful similarity measurement for secondary,
super-secondary, and tertiary protein structure. APSA is shown to accurately
reflect the conformation of the backbone, effectively reducing 3-dimensional
structure information to 2-dimensional representations that are easy to
interpret and understand.
|
[
{
"created": "Wed, 19 Nov 2008 00:22:37 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Nov 2008 21:21:27 GMT",
"version": "v2"
}
] |
2008-11-24
|
[
[
"Ranganathan",
"S.",
""
],
[
"Izotov",
"D.",
""
],
[
"Kraka",
"E.",
""
],
[
"Cremer",
"D.",
""
]
] |
The Automated Protein Structure Analysis (APSA) method, which describes the protein backbone as a smooth line in 3-dimensional space and characterizes it by curvature kappa and torsion tau as a function of arc length s, was applied on 77 proteins to determine all secondary structural units via specific kappa(s) and tau(s) patterns. A total of 533 alpha-helices and 644 beta-strands were recognized by APSA, whereas DSSP gives 536 and 651 units, respectively. Kinks and distortions were quantified and the boundaries (entry and exit) of secondary structures were classified. Similarity between proteins can be easily quantified using APSA, as was demonstrated for the roll architecture of proteins ubiquitin and spinach ferredoxin. A twenty-by-twenty comparison of all-alpha domains showed that the curvature-torsion patterns generated by APSA provide an accurate and meaningful similarity measurement for secondary, super-secondary, and tertiary protein structure. APSA is shown to accurately reflect the conformation of the backbone, effectively reducing 3-dimensional structure information to 2-dimensional representations that are easy to interpret and understand.
|
1901.00991
|
Lin Yang
|
Xiaoliang Ma, Chengyu Hou, Liping Shi, Long Li, Jiacheng Li, Lin Ye,
Lin Yang, Xiaodong He
|
Physical Folding Codes for Proteins
|
19 pages, 7 figures, 8 tables
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploring and understanding the protein-folding problem has been a
long-standing challenge in molecular biology. Here, using molecular dynamics
simulation, we reveal how parallel distributed adjacent planar peptide groups
of unfolded proteins fold reproducibly following explicit physical folding
codes in aqueous environments due to electrostatic attractions. Superfast
folding of proteins is found to be powered by the formation of hydrogen
bonds. Temperature-induced torsional waves propagating along
unfolded proteins break the parallel distributed state of specific amino acids,
inferred as the beginning of folding. Electric charge and rotational resistance
differences among neighboring side-chains are used to decipher the physical
folding codes by means of which precise secondary structures develop. We
present a powerful method of decoding amino acid sequences to predict native
structures of proteins. The method is verified by comparing the results
available from experiments in the literature.
|
[
{
"created": "Fri, 4 Jan 2019 06:28:11 GMT",
"version": "v1"
}
] |
2019-01-11
|
[
[
"Ma",
"Xiaoliang",
""
],
[
"Hou",
"Chengyu",
""
],
[
"Shi",
"Liping",
""
],
[
"Li",
"Long",
""
],
[
"Li",
"Jiacheng",
""
],
[
"Ye",
"Lin",
""
],
[
"Yang",
"Lin",
""
],
[
"He",
"Xiaodong",
""
]
] |
Exploring and understanding the protein-folding problem has been a long-standing challenge in molecular biology. Here, using molecular dynamics simulation, we reveal how parallel distributed adjacent planar peptide groups of unfolded proteins fold reproducibly following explicit physical folding codes in aqueous environments due to electrostatic attractions. Superfast folding of proteins is found to be powered by the formation of hydrogen bonds. Temperature-induced torsional waves propagating along unfolded proteins break the parallel distributed state of specific amino acids, inferred as the beginning of folding. Electric charge and rotational resistance differences among neighboring side-chains are used to decipher the physical folding codes by means of which precise secondary structures develop. We present a powerful method of decoding amino acid sequences to predict native structures of proteins. The method is verified by comparing the results available from experiments in the literature.
|
1807.02741
|
Carina Curto
|
Carina Curto, Elizabeth Gross, Jack Jeffries, Katherine Morrison, Zvi
Rosen, Anne Shiu, and Nora Youngs
|
Algebraic signatures of convex and non-convex codes
|
22 pages, 6 figures, 7 tables
| null | null | null |
q-bio.NC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A convex code is a binary code generated by the pattern of intersections of a
collection of open convex sets in some Euclidean space. Convex codes are
relevant to neuroscience as they arise from the activity of neurons that have
convex receptive fields. In this paper, we use algebraic methods to determine
if a code is convex. Specifically, we use the neural ideal of a code, which is
a generalization of the Stanley-Reisner ideal. Using the neural ideal together
with its standard generating set, the canonical form, we provide algebraic
signatures of certain families of codes that are non-convex. We connect these
signatures to the precise conditions on the arrangement of sets that prevent
the codes from being convex. Finally, we also provide algebraic signatures for
some families of codes that are convex, including the class of
intersection-complete codes. These results allow us to detect convexity and
non-convexity in a variety of situations, and point to some interesting open
questions.
|
[
{
"created": "Sun, 8 Jul 2018 02:43:04 GMT",
"version": "v1"
}
] |
2018-07-10
|
[
[
"Curto",
"Carina",
""
],
[
"Gross",
"Elizabeth",
""
],
[
"Jeffries",
"Jack",
""
],
[
"Morrison",
"Katherine",
""
],
[
"Rosen",
"Zvi",
""
],
[
"Shiu",
"Anne",
""
],
[
"Youngs",
"Nora",
""
]
] |
A convex code is a binary code generated by the pattern of intersections of a collection of open convex sets in some Euclidean space. Convex codes are relevant to neuroscience as they arise from the activity of neurons that have convex receptive fields. In this paper, we use algebraic methods to determine if a code is convex. Specifically, we use the neural ideal of a code, which is a generalization of the Stanley-Reisner ideal. Using the neural ideal together with its standard generating set, the canonical form, we provide algebraic signatures of certain families of codes that are non-convex. We connect these signatures to the precise conditions on the arrangement of sets that prevent the codes from being convex. Finally, we also provide algebraic signatures for some families of codes that are convex, including the class of intersection-complete codes. These results allow us to detect convexity and non-convexity in a variety of situations, and point to some interesting open questions.
|
q-bio/0312043
|
Peng-Ye Wang
|
Ping Xie, Shuo-Xing Dou, Peng-Ye Wang
|
Mechanism of unidirectional movement of kinesin motors
|
22 pages, 6 figures
|
Chinese Physics, Vol.14, No.4 (2005) 734-743
|
10.1088/1009-1963/14/4/017
| null |
q-bio.BM
| null |
Kinesin motors have been studied extensively both experimentally and
theoretically. However, the microscopic mechanism of the processive movement of
kinesin is still an open question. In this paper, we propose a hand-over-hand
model for the processivity of kinesin, which is based on chemical, mechanical,
and electrical couplings. In the model the processive movement does not need to
rely on the two heads' coordination in their ATP hydrolysis and mechanical
cycles. Rather, the ATP hydrolyses at the two heads are independent. The much
higher ATPase rate at the trailing head than the leading head makes the motor
walk processively in a natural way, with one ATP being hydrolyzed per step. The
model is consistent with the structural study of kinesin and the measured
pathway of the kinesin ATPase. Using the model the estimated driving force of ~
5.8 pN is in agreement with the experimental results (5~7.5 pN). The
prediction of the moving time in one step (~10 microseconds) is also consistent
with the measured values of 0~50 microseconds. The previous observation of
substeps within the 8-nm step is explained. The shapes of velocity-load (both
positive and negative) curves show resemblance to previous experimental
results.
|
[
{
"created": "Tue, 30 Dec 2003 02:53:25 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Xie",
"Ping",
""
],
[
"Dou",
"Shuo-Xing",
""
],
[
"Wang",
"Peng-Ye",
""
]
] |
Kinesin motors have been studied extensively both experimentally and theoretically. However, the microscopic mechanism of the processive movement of kinesin is still an open question. In this paper, we propose a hand-over-hand model for the processivity of kinesin, which is based on chemical, mechanical, and electrical couplings. In the model the processive movement does not need to rely on the two heads' coordination in their ATP hydrolysis and mechanical cycles. Rather, the ATP hydrolyses at the two heads are independent. The much higher ATPase rate at the trailing head than the leading head makes the motor walk processively in a natural way, with one ATP being hydrolyzed per step. The model is consistent with the structural study of kinesin and the measured pathway of the kinesin ATPase. Using the model the estimated driving force of ~ 5.8 pN is in agreement with the experimental results (5~7.5 pN). The prediction of the moving time in one step (~10 microseconds) is also consistent with the measured values of 0~50 microseconds. The previous observation of substeps within the 8-nm step is explained. The shapes of velocity-load (both positive and negative) curves show resemblance to previous experimental results.
|
2405.05998
|
Niki Kilbertus
|
Zhufeng Li and Sandeep S Cranganore and Nicholas Youngblut and Niki
Kilbertus
|
Whole Genome Transformer for Gene Interaction Effects in Microbiome
Habitat Specificity
| null | null | null | null |
q-bio.GN cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leveraging the vast genetic diversity within microbiomes offers unparalleled
insights into complex phenotypes, yet the task of accurately predicting and
understanding such traits from genomic data remains challenging. We propose a
framework taking advantage of existing large models for gene vectorization to
predict habitat specificity from entire microbial genome sequences. Based on
our model, we develop attribution techniques to elucidate gene interaction
effects that drive microbial adaptation to diverse environments. We train and
validate our approach on a large dataset of high-quality microbiome genomes
from different habitats. We not only demonstrate solid predictive performance,
but also how sequence-level information of entire genomes allows us to identify
gene associations underlying complex phenotypes. Our attribution recovers known
important interaction networks and proposes new candidates for experimental
follow-up.
|
[
{
"created": "Thu, 9 May 2024 09:34:51 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 10:59:16 GMT",
"version": "v2"
}
] |
2024-05-29
|
[
[
"Li",
"Zhufeng",
""
],
[
"Cranganore",
"Sandeep S",
""
],
[
"Youngblut",
"Nicholas",
""
],
[
"Kilbertus",
"Niki",
""
]
] |
Leveraging the vast genetic diversity within microbiomes offers unparalleled insights into complex phenotypes, yet the task of accurately predicting and understanding such traits from genomic data remains challenging. We propose a framework taking advantage of existing large models for gene vectorization to predict habitat specificity from entire microbial genome sequences. Based on our model, we develop attribution techniques to elucidate gene interaction effects that drive microbial adaptation to diverse environments. We train and validate our approach on a large dataset of high-quality microbiome genomes from different habitats. We not only demonstrate solid predictive performance, but also how sequence-level information of entire genomes allows us to identify gene associations underlying complex phenotypes. Our attribution recovers known important interaction networks and proposes new candidates for experimental follow-up.
|
1502.07110
|
Gabor Szederkenyi
|
Antonio A. Alonso and Gabor Szederkenyi
|
Uniqueness of feasible equilibria for mass action law (MAL) kinetic
systems
| null |
Journal of Process Control, 48: 41-71, 2016
|
10.1016/j.jprocont.2016.10.002
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the relations among system parameters, uniqueness, and
stability of equilibria for kinetic systems given in the form of polynomial
ODEs. Such models are commonly used to describe the dynamics of nonnegative
systems, with a wide range of application fields such as chemistry, systems
biology, process modeling or even transportation systems. Using a flux-based
description of kinetic models, a canonical representation of the set of all
possible feasible equilibria is developed. The characterization is made in
terms of strictly stable compartmental matrices to define the so-called family
of solutions. Feasibility is imposed by a set of constraints, which are linear
on a log-transformed space of complexes, and relate to the kernel of a matrix,
the columns of which span the stoichiometric subspace. One particularly
interesting representation of these constraints can be expressed in terms of a
class of monotonically decreasing functions. This allows connections to be
established with classical results in CRNT that relate to the existence and
uniqueness of equilibria along positive stoichiometric compatibility classes.
In particular, monotonicity can be employed to identify regions in the set of
possible reaction rate coefficients leading to complex balancing, and to
conclude uniqueness of equilibria for a class of positive deficiency networks.
The latter result might support constructing an alternative proof of the
well-known deficiency one theorem. The developed notions and results are
illustrated through examples.
|
[
{
"created": "Wed, 25 Feb 2015 10:16:46 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Nov 2016 23:30:54 GMT",
"version": "v2"
}
] |
2016-11-14
|
[
[
"Alonso",
"Antonio A.",
""
],
[
"Szederkenyi",
"Gabor",
""
]
] |
This paper studies the relations among system parameters, uniqueness, and stability of equilibria for kinetic systems given in the form of polynomial ODEs. Such models are commonly used to describe the dynamics of nonnegative systems, with a wide range of application fields such as chemistry, systems biology, process modeling or even transportation systems. Using a flux-based description of kinetic models, a canonical representation of the set of all possible feasible equilibria is developed. The characterization is made in terms of strictly stable compartmental matrices to define the so-called family of solutions. Feasibility is imposed by a set of constraints, which are linear on a log-transformed space of complexes, and relate to the kernel of a matrix, the columns of which span the stoichiometric subspace. One particularly interesting representation of these constraints can be expressed in terms of a class of monotonically decreasing functions. This allows connections to be established with classical results in CRNT that relate to the existence and uniqueness of equilibria along positive stoichiometric compatibility classes. In particular, monotonicity can be employed to identify regions in the set of possible reaction rate coefficients leading to complex balancing, and to conclude uniqueness of equilibria for a class of positive deficiency networks. The latter result might support constructing an alternative proof of the well-known deficiency one theorem. The developed notions and results are illustrated through examples.
|
2210.01169
|
Ying Tang
|
Ying Tang, Jiayu Weng, Pan Zhang
|
Neural-network solutions to stochastic reaction networks
| null | null | null | null |
q-bio.MN cs.LG nlin.AO
|
http://creativecommons.org/licenses/by/4.0/
|
The stochastic reaction network in which chemical species evolve through a
set of reactions is widely used to model stochastic processes in physics,
chemistry and biology. To characterize the evolving joint probability
distribution in the state space of species counts requires solving a system of
ordinary differential equations, the chemical master equation, where the size
of the counting state space increases exponentially with the number of species types,
making it challenging to investigate the stochastic reaction network. Here, we
propose a machine-learning approach using the variational autoregressive
network to solve the chemical master equation. Training the autoregressive
network employs the policy gradient algorithm in the reinforcement learning
framework, which does not require any data simulated beforehand by another
method. Unlike simulating single trajectories, the approach tracks the
time evolution of the joint probability distribution, and supports direct
sampling of configurations and computing their normalized joint probabilities.
We apply the approach to representative examples in physics and biology, and
demonstrate that it accurately generates the probability distribution over
time. The variational autoregressive network exhibits a plasticity in
representing the multimodal distribution, cooperates with the conservation law,
enables time-dependent reaction rates, and is efficient for high-dimensional
reaction networks while allowing a flexible upper count limit. The results
suggest a general approach to investigate stochastic reaction networks based on
modern machine learning.
|
[
{
"created": "Thu, 29 Sep 2022 07:27:59 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Feb 2023 05:21:30 GMT",
"version": "v2"
}
] |
2023-02-08
|
[
[
"Tang",
"Ying",
""
],
[
"Weng",
"Jiayu",
""
],
[
"Zhang",
"Pan",
""
]
] |
The stochastic reaction network in which chemical species evolve through a set of reactions is widely used to model stochastic processes in physics, chemistry and biology. To characterize the evolving joint probability distribution in the state space of species counts requires solving a system of ordinary differential equations, the chemical master equation, where the size of the counting state space increases exponentially with the number of species types, making it challenging to investigate the stochastic reaction network. Here, we propose a machine-learning approach using the variational autoregressive network to solve the chemical master equation. Training the autoregressive network employs the policy gradient algorithm in the reinforcement learning framework, which does not require any data simulated in advance by another method. Unlike methods that simulate single trajectories, the approach tracks the time evolution of the joint probability distribution, and supports direct sampling of configurations and computing their normalized joint probabilities. We apply the approach to representative examples in physics and biology, and demonstrate that it accurately generates the probability distribution over time. The variational autoregressive network exhibits a plasticity in representing the multimodal distribution, cooperates with the conservation law, enables time-dependent reaction rates, and is efficient for high-dimensional reaction networks while allowing a flexible upper count limit. The results suggest a general approach to investigate stochastic reaction networks based on modern machine learning.
|
2310.18377
|
Ran Wang
|
Ran Wang and Zhe Sage Chen
|
Large-scale Foundation Models and Generative AI for BigData Neuroscience
| null | null | null | null |
q-bio.NC cs.AI cs.HC cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in machine learning have made revolutionary breakthroughs in
computer games, image and natural language understanding, and scientific
discovery. Foundation models and large-scale language models (LLMs) have
recently achieved human-like intelligence thanks to BigData. With the help of
self-supervised learning (SSL) and transfer learning, these models may
potentially reshape the landscapes of neuroscience research and make a
significant impact on the future. Here we present a mini-review on recent
advances in foundation models and generative AI models as well as their
applications in neuroscience, including natural language and speech, semantic
memory, brain-machine interfaces (BMIs), and data augmentation. We argue that
this paradigm-shift framework will open new avenues for many neuroscience
research directions and discuss the accompanying challenges and opportunities.
|
[
{
"created": "Fri, 27 Oct 2023 00:44:40 GMT",
"version": "v1"
}
] |
2023-10-31
|
[
[
"Wang",
"Ran",
""
],
[
"Chen",
"Zhe Sage",
""
]
] |
Recent advances in machine learning have made revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large-scale language models (LLMs) have recently achieved human-like intelligence thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may potentially reshape the landscapes of neuroscience research and make a significant impact on the future. Here we present a mini-review on recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shift framework will open new avenues for many neuroscience research directions and discuss the accompanying challenges and opportunities.
|
1010.2538
|
Adam Lipowski
|
Jacek Wendykier, Adam Lipowski, and Antonio Luis Ferreira
|
Coexistence and critical behaviour in a lattice model of competing
species
|
11 pages, 14 figures
|
Phys. Rev. E 83, 031904 (2011)
|
10.1103/PhysRevE.83.031904
| null |
q-bio.PE cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper we study a lattice model of two species competing for
the same resources. Monte Carlo simulations for d=1, 2, and 3 show that when
resources are easily available both species coexist. However, when the supply
of resources is on an intermediate level, the species with slower metabolism
becomes extinct. On the other hand, when resources are scarce it is the species
with faster metabolism that becomes extinct. The range of coexistence of the
two species increases with dimension. We suggest that our model might describe
some aspects of the competition between normal and tumor cells. With such an
interpretation, examples of tumor remission, recurrence and of different
morphologies are presented. In the d=1 and d=2 models, we analyse the nature of
phase transitions: they are either discontinuous or belong to the
directed-percolation universality class, and in some cases they have an active
subcritical phase. In the d=2 case, one of the transitions seems to be
characterized by critical exponents different from directed-percolation ones,
but this transition could also be weakly discontinuous. In the d=3 version,
Monte Carlo simulations are in a good agreement with the solution of the
mean-field approximation. This approximation predicts that oscillatory
behaviour occurs in the present model, but only for d>2. For d>=2, a steady
state depends on the initial configuration in some cases.
|
[
{
"created": "Tue, 12 Oct 2010 23:33:42 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2011 21:14:35 GMT",
"version": "v2"
}
] |
2011-03-09
|
[
[
"Wendykier",
"Jacek",
""
],
[
"Lipowski",
"Adam",
""
],
[
"Ferreira",
"Antonio Luis",
""
]
] |
In the present paper we study a lattice model of two species competing for the same resources. Monte Carlo simulations for d=1, 2, and 3 show that when resources are easily available both species coexist. However, when the supply of resources is on an intermediate level, the species with slower metabolism becomes extinct. On the other hand, when resources are scarce it is the species with faster metabolism that becomes extinct. The range of coexistence of the two species increases with dimension. We suggest that our model might describe some aspects of the competition between normal and tumor cells. With such an interpretation, examples of tumor remission, recurrence and of different morphologies are presented. In the d=1 and d=2 models, we analyse the nature of phase transitions: they are either discontinuous or belong to the directed-percolation universality class, and in some cases they have an active subcritical phase. In the d=2 case, one of the transitions seems to be characterized by critical exponents different from directed-percolation ones, but this transition could also be weakly discontinuous. In the d=3 version, Monte Carlo simulations are in a good agreement with the solution of the mean-field approximation. This approximation predicts that oscillatory behaviour occurs in the present model, but only for d>2. For d>=2, a steady state depends on the initial configuration in some cases.
|
1301.0170
|
Yutaka Hori
|
Yutaka Hori and Shinji Hara
|
Noise-Induced Spatial Pattern Formation in Stochastic Reaction-Diffusion
Systems
| null |
Proceedings of IEEE Conference on Decision and Control, pp.
1053-1058, 2012
|
10.1109/CDC.2012.6426152
| null |
q-bio.QM cs.SY q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is concerned with stochastic reaction-diffusion kinetics governed
by the reaction-diffusion master equation. Specifically, the primary goal of
this paper is to provide a mechanistic basis of Turing pattern formation that
is induced by intrinsic noise. To this end, we first derive an approximate
reaction-diffusion system by using linear noise approximation. We show that the
approximated system has a certain structure that is associated with a coupled
dynamic multi-agent system. This observation then helps us derive an efficient
computation tool to examine the spatial power spectrum of the intrinsic noise.
We numerically demonstrate that the result is quite effective for analyzing
noise-induced Turing patterns. Finally, we illustrate the theoretical mechanism
behind the noise-induced pattern formation with an H2 norm interpretation of the
multi-agent system.
|
[
{
"created": "Wed, 2 Jan 2013 05:38:34 GMT",
"version": "v1"
}
] |
2015-04-23
|
[
[
"Hori",
"Yutaka",
""
],
[
"Hara",
"Shinji",
""
]
] |
This paper is concerned with stochastic reaction-diffusion kinetics governed by the reaction-diffusion master equation. Specifically, the primary goal of this paper is to provide a mechanistic basis of Turing pattern formation that is induced by intrinsic noise. To this end, we first derive an approximate reaction-diffusion system by using linear noise approximation. We show that the approximated system has a certain structure that is associated with a coupled dynamic multi-agent system. This observation then helps us derive an efficient computation tool to examine the spatial power spectrum of the intrinsic noise. We numerically demonstrate that the result is quite effective for analyzing noise-induced Turing patterns. Finally, we illustrate the theoretical mechanism behind the noise-induced pattern formation with an H2 norm interpretation of the multi-agent system.
|
2011.11335
|
Sacha van Albada
|
Sacha Jennifer van Albada, Jari Pronold, Alexander van Meegen, Markus
Diesmann
|
Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey
Cortex
| null | null |
10.1007/978-3-030-82427-3_4
| null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We are entering an age of `big' computational neuroscience, in which neural
network models are increasing in size and in numbers of underlying data sets.
Consolidating the zoo of models into large-scale models simultaneously
consistent with a wide range of data is only possible through the effort of
large teams, which can be spread across multiple research institutions. To
ensure that computational neuroscientists can build on each other's work, it is
important to make models publicly available as well-documented code. This
chapter describes such an open-source model, which relates the connectivity
structure of all vision-related cortical areas of the macaque monkey with their
resting-state dynamics. We give a brief overview of how to use the executable
model specification, which employs NEST as the simulation engine, and show its
runtime scaling. The solutions found serve as an example for organizing the
workflow of future models from the raw experimental data to the visualization
of the results, expose the challenges, and give guidance for the construction
of ICT infrastructure for neuroscience.
|
[
{
"created": "Mon, 23 Nov 2020 11:25:14 GMT",
"version": "v1"
}
] |
2022-10-17
|
[
[
"van Albada",
"Sacha Jennifer",
""
],
[
"Pronold",
"Jari",
""
],
[
"van Meegen",
"Alexander",
""
],
[
"Diesmann",
"Markus",
""
]
] |
We are entering an age of `big' computational neuroscience, in which neural network models are increasing in size and in numbers of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other's work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey with their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as the simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of ICT infrastructure for neuroscience.
|
1403.4196
|
Alison Hill
|
Alison L. Hill, Daniel I. S. Rosenbloom, Feng Fu, Martin A. Nowak,
Robert F. Siliciano
|
Predicting the outcomes of treatment to eradicate the latent reservoir
for HIV-1
|
8 pages main text (4 figures). In PNAS Early Edition
http://www.pnas.org/content/early/2014/08/05/1406663111. Ancillary files: SI,
24 pages SI (7 figures). File .htm opens a browser-based application to
calculate rebound times (see SI). Or, the .cdf file can be run with
Mathematica. The most up-to-date version of the code is available at
http://www.danielrosenbloom.com/reboundtimes/
| null |
10.1073/pnas.1406663111
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive research efforts are now underway to develop a cure for HIV
infection, allowing patients to discontinue lifelong combination antiretroviral
therapy (ART). New latency-reversing agents (LRAs) may be able to purge the
persistent reservoir of latent virus in resting memory CD4+ T cells, but the
degree of reservoir reduction needed for cure remains unknown. Here we use a
stochastic model of infection dynamics to estimate the efficacy of LRA needed
to prevent viral rebound after ART interruption. We incorporate clinical data
to estimate population-level parameter distributions and outcomes. Our findings
suggest that approximately 2,000-fold reductions are required to permit a
majority of patients to interrupt ART for one year without rebound and that
rebound may occur suddenly after multiple years. Greater than 10,000-fold
reductions may be required to prevent rebound altogether. Our results predict
large variation in rebound times following LRA therapy, which will complicate
clinical management. This model provides benchmarks for moving LRAs from the
lab to the clinic and can aid in the design and interpretation of clinical
trials. These results also apply to other interventions to reduce the latent
reservoir and can explain the observed return of viremia after months of
apparent cure in recent bone marrow transplant recipients and an
immediately-treated neonate.
|
[
{
"created": "Mon, 17 Mar 2014 18:25:16 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Aug 2014 20:50:42 GMT",
"version": "v2"
}
] |
2014-08-07
|
[
[
"Hill",
"Alison L.",
""
],
[
"Rosenbloom",
"Daniel I. S.",
""
],
[
"Fu",
"Feng",
""
],
[
"Nowak",
"Martin A.",
""
],
[
"Siliciano",
"Robert F.",
""
]
] |
Massive research efforts are now underway to develop a cure for HIV infection, allowing patients to discontinue lifelong combination antiretroviral therapy (ART). New latency-reversing agents (LRAs) may be able to purge the persistent reservoir of latent virus in resting memory CD4+ T cells, but the degree of reservoir reduction needed for cure remains unknown. Here we use a stochastic model of infection dynamics to estimate the efficacy of LRA needed to prevent viral rebound after ART interruption. We incorporate clinical data to estimate population-level parameter distributions and outcomes. Our findings suggest that approximately 2,000-fold reductions are required to permit a majority of patients to interrupt ART for one year without rebound and that rebound may occur suddenly after multiple years. Greater than 10,000-fold reductions may be required to prevent rebound altogether. Our results predict large variation in rebound times following LRA therapy, which will complicate clinical management. This model provides benchmarks for moving LRAs from the lab to the clinic and can aid in the design and interpretation of clinical trials. These results also apply to other interventions to reduce the latent reservoir and can explain the observed return of viremia after months of apparent cure in recent bone marrow transplant recipients and an immediately-treated neonate.
|
2301.01619
|
Ankit Rao
|
Ankit Rao, Sooraj Sanjay, Majid Ahmadi, Anirudh Venugopalrao,
Navakanta Bhat, Bart Kooi, Srinivasan Raghavan, Pavan Nukala
|
Self-assembled neuromorphic networks at self-organized criticality in
Ag-hBN platform
| null | null | null | null |
q-bio.NC cond-mat.dis-nn cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Networks and systems which exhibit brain-like behavior can analyze
information from intrinsically noisy and unstructured data with very low power
consumption. Such characteristics arise due to the critical nature and complex
interconnectivity of the brain and its neuronal network. We demonstrate a
system comprising multilayer hexagonal boron nitride (hBN) films contacted
with silver (Ag) that can uniquely host two different self-assembled networks,
which are self-organized at criticality (SOC). This system shows bipolar
resistive switching between high resistance (HRS) and low resistance states
(LRS). In the HRS, Ag clusters (nodes) intercalate in the van der Waals gaps of
hBN forming a network of tunnel junctions, whereas the LRS contains a network
of Ag filaments. The temporal avalanche dynamics in both these states exhibit
power-law scaling, long-range temporal correlation, and SOC. These networks can
be tuned from one to another with voltage as a control parameter. For the first
time, different neuron-like networks are realized in a single CMOS-compatible
2D materials platform.
|
[
{
"created": "Wed, 7 Dec 2022 09:35:47 GMT",
"version": "v1"
}
] |
2023-01-05
|
[
[
"Rao",
"Ankit",
""
],
[
"Sanjay",
"Sooraj",
""
],
[
"Ahmadi",
"Majid",
""
],
[
"Venugopalrao",
"Anirudh",
""
],
[
"Bhat",
"Navakanta",
""
],
[
"Kooi",
"Bart",
""
],
[
"Raghavan",
"Srinivasan",
""
],
[
"Nukala",
"Pavan",
""
]
] |
Networks and systems which exhibit brain-like behavior can analyze information from intrinsically noisy and unstructured data with very low power consumption. Such characteristics arise due to the critical nature and complex interconnectivity of the brain and its neuronal network. We demonstrate a system comprising multilayer hexagonal boron nitride (hBN) films contacted with silver (Ag) that can uniquely host two different self-assembled networks, which are self-organized at criticality (SOC). This system shows bipolar resistive switching between high resistance (HRS) and low resistance states (LRS). In the HRS, Ag clusters (nodes) intercalate in the van der Waals gaps of hBN forming a network of tunnel junctions, whereas the LRS contains a network of Ag filaments. The temporal avalanche dynamics in both these states exhibit power-law scaling, long-range temporal correlation, and SOC. These networks can be tuned from one to another with voltage as a control parameter. For the first time, different neuron-like networks are realized in a single CMOS-compatible 2D materials platform.
|
2407.04232
|
Jian-Sheng Kang
|
Shu-Ang Li, Xiao-Yan Meng, Su Zhang, Ying-Jie Zhang, Run-Zhou Yang,
Dian-Dian Wang, Yang Yang, Pei-Pei Liu, Jian-Sheng Kang
|
A Unified Intracellular pH Landscape with SITE-pHorin: a
Quantum-Entanglement-Enhanced pH Probe
|
64 pages, 7 figures, the supplemental material contains 13
supplemental figures and 4 supplemental tables
| null | null | null |
q-bio.QM physics.bio-ph q-bio.BM q-bio.SC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
An accurate map of intracellular organelle pH is crucial for comprehending
cellular metabolism and organellar functions. However, a unified intracellular
pH spectrum using a single probe is still lacking. Here, we developed a novel
quantum entanglement-enhanced pH-sensitive probe called SITE-pHorin, which
featured a wide pH-sensitive range and ratiometric quantitative measurement
capabilities. Subsequently, we measured the pH of various organelles and their
sub-compartments, including mitochondrial sub-spaces, Golgi stacks, endoplasmic
reticulum, lysosomes, peroxisomes, and endosomes in COS-7 cells. Addressing the
long-standing debate on mitochondrial compartment pH, we measured the pH of
mitochondrial cristae as 6.60 \pm 0.40, the pH of mitochondrial intermembrane
space as 6.95 \pm 0.30, and two populations of mitochondrial matrix pH at
approximately 7.20 \pm 0.27 and 7.50 \pm 0.16, respectively. Notably, the
lysosome pH exhibited a single, narrow Gaussian distribution centered at 4.79
\pm 0.17. Furthermore, quantum chemistry computations revealed that both the
deprotonation of residue Y182 and the discrete curvature of the deformed
benzene ring in the chromophore are necessary for the quantum entanglement
mechanism of SITE-pHorin. Intriguingly, our findings reveal an accurate pH
gradient (0.6-0.9 pH unit) between mitochondrial cristae and matrix, suggesting
that prior estimates of \Delta pH (0.4-0.6) and the mitochondrial proton motive
force (pmf) have been underestimated.
|
[
{
"created": "Fri, 5 Jul 2024 03:19:49 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Li",
"Shu-Ang",
""
],
[
"Meng",
"Xiao-Yan",
""
],
[
"Zhang",
"Su",
""
],
[
"Zhang",
"Ying-Jie",
""
],
[
"Yang",
"Run-Zhou",
""
],
[
"Wang",
"Dian-Dian",
""
],
[
"Yang",
"Yang",
""
],
[
"Liu",
"Pei-Pei",
""
],
[
"Kang",
"Jian-Sheng",
""
]
] |
An accurate map of intracellular organelle pH is crucial for comprehending cellular metabolism and organellar functions. However, a unified intracellular pH spectrum using a single probe is still lacking. Here, we developed a novel quantum entanglement-enhanced pH-sensitive probe called SITE-pHorin, which featured a wide pH-sensitive range and ratiometric quantitative measurement capabilities. Subsequently, we measured the pH of various organelles and their sub-compartments, including mitochondrial sub-spaces, Golgi stacks, endoplasmic reticulum, lysosomes, peroxisomes, and endosomes in COS-7 cells. Addressing the long-standing debate on mitochondrial compartment pH, we measured the pH of mitochondrial cristae as 6.60 \pm 0.40, the pH of mitochondrial intermembrane space as 6.95 \pm 0.30, and two populations of mitochondrial matrix pH at approximately 7.20 \pm 0.27 and 7.50 \pm 0.16, respectively. Notably, the lysosome pH exhibited a single, narrow Gaussian distribution centered at 4.79 \pm 0.17. Furthermore, quantum chemistry computations revealed that both the deprotonation of residue Y182 and the discrete curvature of the deformed benzene ring in the chromophore are necessary for the quantum entanglement mechanism of SITE-pHorin. Intriguingly, our findings reveal an accurate pH gradient (0.6-0.9 pH unit) between mitochondrial cristae and matrix, suggesting that prior estimates of \Delta pH (0.4-0.6) and the mitochondrial proton motive force (pmf) have been underestimated.
|
q-bio/0311009
|
Lior Pachter
|
Lior Pachter, Bernd Sturmfels
|
Tropical Geometry of Statistical Models
|
14 pages, 3 figures. Major revision. Applications now in companion
paper, "Parametric Inference for Biological Sequence Analysis"
| null |
10.1073/pnas.0406010101
| null |
q-bio.QM math.AG q-bio.GN
| null |
This paper presents a unified mathematical framework for inference in
graphical models, building on the observation that graphical models are
algebraic varieties.
From this geometric viewpoint, observations generated from a model are
coordinates of a point in the variety, and the sum-product algorithm is an
efficient tool for evaluating specific coordinates. The question addressed here
is how the solutions to various inference problems depend on the model
parameters. The proposed answer is expressed in terms of tropical algebraic
geometry. A key role is played by the Newton polytope of a statistical model.
Our results are applied to the hidden Markov model and to the general Markov
model on a binary tree.
|
[
{
"created": "Sat, 8 Nov 2003 23:32:48 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jan 2004 03:38:20 GMT",
"version": "v2"
}
] |
2009-11-10
|
[
[
"Pachter",
"Lior",
""
],
[
"Sturmfels",
"Bernd",
""
]
] |
This paper presents a unified mathematical framework for inference in graphical models, building on the observation that graphical models are algebraic varieties. From this geometric viewpoint, observations generated from a model are coordinates of a point in the variety, and the sum-product algorithm is an efficient tool for evaluating specific coordinates. The question addressed here is how the solutions to various inference problems depend on the model parameters. The proposed answer is expressed in terms of tropical algebraic geometry. A key role is played by the Newton polytope of a statistical model. Our results are applied to the hidden Markov model and to the general Markov model on a binary tree.
|
q-bio/0403028
|
Popkov Vladislav
|
M. Barbi, C. Place, V. Popkov and M. Salerno
|
Base sequence dependent sliding of proteins on DNA
|
12 pages, 3 figures
|
Phys. Rev. E 70, 041901 (2004)
|
10.1103/PhysRevE.70.041901
| null |
q-bio.BM cond-mat.soft physics.bio-ph
| null |
The possibility that the sliding motion of proteins on DNA is influenced by
the base sequence, through a base-pair reading interaction, is considered.
Referring to the case of the T7 RNA-polymerase, we show that the protein should
follow a noise-influenced, sequence-dependent motion which deviates from the
standard random walk usually assumed. The general validity and the implications
of the results are discussed.
|
[
{
"created": "Fri, 19 Mar 2004 19:38:18 GMT",
"version": "v1"
}
] |
2011-07-13
|
[
[
"Barbi",
"M.",
""
],
[
"Place",
"C.",
""
],
[
"Popkov",
"V.",
""
],
[
"Salerno",
"M.",
""
]
] |
The possibility that the sliding motion of proteins on DNA is influenced by the base sequence, through a base-pair reading interaction, is considered. Referring to the case of the T7 RNA-polymerase, we show that the protein should follow a noise-influenced, sequence-dependent motion which deviates from the standard random walk usually assumed. The general validity and the implications of the results are discussed.
|
1407.6117
|
Areejit Samal
|
Joseph Xu Zhou, Areejit Samal, Aymeric Fouquier d'H\`erou\"el, Nathan
D. Price and Sui Huang
|
Relative Stability of Network States in Boolean Network Models of Gene
Regulation in Development
|
24 pages, 6 figures, 1 table
|
Biosystems 142-143:15-24 (2016)
|
10.1016/j.biosystems.2016.03.002
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Progress in cell type reprogramming has revived the interest in Waddington's
concept of the epigenetic landscape. Recently, researchers developed
quasi-potential theory to represent Waddington's landscape. The
quasi-potential U(x), derived from interactions in the gene regulatory network
(GRN) of a cell, quantifies the relative stability of network states, which
determine the effort required for state transitions in a multi-stable dynamical
system. However, quasi-potential landscapes, originally developed for
continuous systems, are not suitable for discrete-valued networks which are
important tools to study complex systems. In this paper, we provide a framework
to quantify the landscape for discrete Boolean networks (BNs). We apply our
framework to study pancreas cell differentiation where an ensemble of BN models
is considered based on the structure of a minimal GRN for pancreas development.
We impose biologically motivated structural constraints (corresponding to
specific type of Boolean functions) and dynamical constraints (corresponding to
stable attractor states) to limit the space of BN models for pancreas
development. In addition, we enforce a novel functional constraint
corresponding to the relative ordering of attractor states in BN models to
restrict the space of BN models to the biologically relevant class. We find that
BNs with canalyzing/sign-compatible Boolean functions best capture the dynamics
of pancreas cell differentiation. This framework can also determine the genes'
influence on cell state transitions, and thus can facilitate the rational
design of cell reprogramming protocols.
|
[
{
"created": "Wed, 23 Jul 2014 07:06:27 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Oct 2015 13:56:14 GMT",
"version": "v2"
}
] |
2016-05-09
|
[
[
"Zhou",
"Joseph Xu",
""
],
[
"Samal",
"Areejit",
""
],
[
"d'Hèrouël",
"Aymeric Fouquier",
""
],
[
"Price",
"Nathan D.",
""
],
[
"Huang",
"Sui",
""
]
] |
Progress in cell type reprogramming has revived the interest in Waddington's concept of the epigenetic landscape. Recently, researchers developed quasi-potential theory to represent Waddington's landscape. The quasi-potential U(x), derived from interactions in the gene regulatory network (GRN) of a cell, quantifies the relative stability of network states, which determine the effort required for state transitions in a multi-stable dynamical system. However, quasi-potential landscapes, originally developed for continuous systems, are not suitable for discrete-valued networks which are important tools to study complex systems. In this paper, we provide a framework to quantify the landscape for discrete Boolean networks (BNs). We apply our framework to study pancreas cell differentiation where an ensemble of BN models is considered based on the structure of a minimal GRN for pancreas development. We impose biologically motivated structural constraints (corresponding to specific type of Boolean functions) and dynamical constraints (corresponding to stable attractor states) to limit the space of BN models for pancreas development. In addition, we enforce a novel functional constraint corresponding to the relative ordering of attractor states in BN models to restrict the space of BN models to the biologically relevant class. We find that BNs with canalyzing/sign-compatible Boolean functions best capture the dynamics of pancreas cell differentiation. This framework can also determine the genes' influence on cell state transitions, and thus can facilitate the rational design of cell reprogramming protocols.
|
1210.6979
|
Ferdinando Giacco
|
Ferdinando Giacco and Silvia Scarpetta
|
Attractor networks and memory replay of phase coded spike patterns
|
arXiv admin note: text overlap with arXiv:1210.6789
|
Frontiers in Artificial Intelligence and Applications, Volume 234,
2011, pag 265-274
| null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyse the storage and retrieval capacity in a recurrent neural network
of spiking integrate-and-fire neurons. In the model we distinguish between a
learning mode, during which the synaptic connections change according to a
Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which
connection strengths are no longer plastic. Our findings show the ability of the
network to store and recall periodic phase-coded patterns when a small number of
neurons has been stimulated. The self-sustained dynamics selectively gives an
oscillating spiking activity that matches one of the stored patterns, depending
on the initialization of the network.
|
[
{
"created": "Thu, 25 Oct 2012 11:09:56 GMT",
"version": "v1"
}
] |
2012-10-29
|
[
[
"Giacco",
"Ferdinando",
""
],
[
"Scarpetta",
"Silvia",
""
]
] |
We analyse the storage and retrieval capacity in a recurrent neural network of spiking integrate-and-fire neurons. In the model we distinguish between a learning mode, during which the synaptic connections change according to a Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which connection strengths are no longer plastic. Our findings show the ability of the network to store and recall periodic phase-coded patterns when a small number of neurons has been stimulated. The self-sustained dynamics selectively gives an oscillating spiking activity that matches one of the stored patterns, depending on the initialization of the network.
|
1106.0848
|
Francisco-Jose Perez-Reche
|
Francisco J. Perez-Reche, Jonathan J. Ludlam, Sergei N. Taraskin, and
Christopher A. Gilligan
|
Synergy in spreading processes: from exploitative to explorative
foraging strategies
|
16 pages, 15 figures. 4 pages for main text and 6 appendices
published as supplemental material at
http://link.aps.org/supplemental/10.1103/PhysRevLett.106.218701
|
Phys. Rev. Lett. 106, 218701 (2011)
|
10.1103/PhysRevLett.106.218701
| null |
q-bio.PE cond-mat.stat-mech nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An epidemiological model which incorporates synergistic effects that allow
the infectivity and/or susceptibility of hosts to be dependent on the number of
infected neighbours is proposed. Constructive synergy induces an exploitative
behaviour which results in a rapid invasion that infects a large number of
hosts. Interfering synergy leads to a slower and sparser explorative foraging
strategy that traverses larger distances by infecting fewer hosts. The model
can be mapped to a dynamical bond-percolation model with spatial correlations that
affect the mechanism of spread but do not influence the critical behaviour of
epidemics.
|
[
{
"created": "Sat, 4 Jun 2011 18:38:35 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Perez-Reche",
"Francisco J.",
""
],
[
"Ludlam",
"Jonathan J.",
""
],
[
"Taraskin",
"Sergei N.",
""
],
[
"Gilligan",
"Christopher A.",
""
]
] |
An epidemiological model which incorporates synergistic effects that allow the infectivity and/or susceptibility of hosts to be dependent on the number of infected neighbours is proposed. Constructive synergy induces an exploitative behaviour which results in a rapid invasion that infects a large number of hosts. Interfering synergy leads to a slower and sparser explorative foraging strategy that traverses larger distances by infecting fewer hosts. The model can be mapped to a dynamical bond-percolation model with spatial correlations that affect the mechanism of spread but do not influence the critical behaviour of epidemics.
|
1503.02570
|
Helio M. de Oliveira
|
H.M. de Oliveira and N.S. Santos-Magalhaes
|
The Genetic Code revisited: Inner-to-outer map, 2D-Gray map, and
World-map Genetic Representations
|
6 pages, 5 figures
|
Lecture Notes in Computer Science, LNCS 3124, Heidelberg: Springer
Verlag, vol.1, pp.526-531, 2004
|
10.1007/b99377
| null |
q-bio.OT cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How to represent the genetic code? Despite being extensively known, the DNA
mapping into proteins remains one of the relevant discoveries of genetics.
However, modern genomic signal processing usually requires converting
symbolic DNA strings into complex-valued signals in order to take full
advantage of a broad variety of digital processing techniques. The genetic
code is revisited in this paper, addressing alternative representations for
it which can be valuable for genomic signal processing. Three original
representations are discussed. The inner-to-outer map builds on the unbalanced
role of the nucleotides of a 'codon' and seems to be suitable for handling
information-theoretic matters. The two-dimensional Gray map representation is
offered as a mathematically structured map that can help interpret
spectrograms or scalograms. Finally, the world-map representation for the
genetic code is investigated, which can be particularly valuable for
educational purposes - besides furnishing plenty of room for the application
of distance-based algorithms.
|
[
{
"created": "Thu, 5 Mar 2015 18:31:43 GMT",
"version": "v1"
}
] |
2015-03-10
|
[
[
"de Oliveira",
"H. M.",
""
],
[
"Santos-Magalhaes",
"N. S.",
""
]
] |
How to represent the genetic code? Despite being extensively known, the DNA mapping into proteins remains one of the relevant discoveries of genetics. However, modern genomic signal processing usually requires converting symbolic DNA strings into complex-valued signals in order to take full advantage of a broad variety of digital processing techniques. The genetic code is revisited in this paper, addressing alternative representations for it which can be valuable for genomic signal processing. Three original representations are discussed. The inner-to-outer map builds on the unbalanced role of the nucleotides of a 'codon' and seems to be suitable for handling information-theoretic matters. The two-dimensional Gray map representation is offered as a mathematically structured map that can help interpret spectrograms or scalograms. Finally, the world-map representation for the genetic code is investigated, which can be particularly valuable for educational purposes - besides furnishing plenty of room for the application of distance-based algorithms.
|
1503.06308
|
Henrique Oliveira Prof Dr
|
Jo\~ao F. Alves and Henrique M. Oliveira
|
Similarity of general population matrices and pseudo-Leslie matrices
|
14 pages
| null | null | null |
q-bio.PE math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A similarity transformation is obtained between general population matrix
models of the Usher or Lefkovitch types and a simpler model, the pseudo-Leslie
model. The pseudo-Leslie model is a matrix that can be decomposed into a row
matrix, which is not necessarily non-negative, and a subdiagonal positive
matrix. This technique has computational advantages, since the solutions of the
iterative problem using Leslie matrices are readily obtained. In the case of
two age-structured population models, one Lefkovitch and another Leslie, the
Kolmogorov-Sinai entropies are different, despite the same growth ratio of both
models. We prove that Markov matrices associated with similar population
matrices are similar.
|
[
{
"created": "Sat, 21 Mar 2015 15:14:45 GMT",
"version": "v1"
}
] |
2015-03-29
|
[
[
"Alves",
"João F.",
""
],
[
"Oliveira",
"Henrique M.",
""
]
] |
A similarity transformation is obtained between general population matrix models of the Usher or Lefkovitch types and a simpler model, the pseudo-Leslie model. The pseudo-Leslie model is a matrix that can be decomposed into a row matrix, which is not necessarily non-negative, and a subdiagonal positive matrix. This technique has computational advantages, since the solutions of the iterative problem using Leslie matrices are readily obtained. In the case of two age-structured population models, one Lefkovitch and another Leslie, the Kolmogorov-Sinai entropies are different, despite the same growth ratio of both models. We prove that Markov matrices associated with similar population matrices are similar.
|
1307.6177
|
Jose Fontanari
|
Jose F. Fontanari and Maurizio Serva
|
Nonlinear group survival in Kimura's model for the evolution of altruism
| null |
Mathematical Biosciences 249 (2014), 18-26
|
10.1016/j.mbs.2014.01.003
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing the conditions that guarantee the spreading or the sustenance of
altruistic traits in a population is the main goal of intergroup selection
models. Of particular interest is the balance of the parameters associated
with group size, migration and group survival against the selective advantage
of the non-altruistic individuals. Here we use Kimura's diffusion model of
intergroup selection to determine those conditions in the case where the group
survival probability is a nonlinear non-decreasing function of the proportion
of altruists in a group. In the case where this function is linear, there are
two possible steady states, which correspond to the non-altruistic and the
altruistic phases. At the discontinuous transition line separating these
phases there is a non-ergodic coexistence phase. For a continuous concave
survival function, we find an ergodic coexistence phase that occupies a finite
region of the parameter space in between the altruistic and the non-altruistic
phases, and is separated from these phases by continuous transition lines. For
a convex survival function, the coexistence phase disappears altogether, but a
bistable phase appears for which the choice of the initial condition
determines whether the evolutionary dynamics leads to the altruistic or the
non-altruistic steady state.
|
[
{
"created": "Tue, 23 Jul 2013 18:01:11 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jan 2014 11:08:39 GMT",
"version": "v2"
}
] |
2014-02-17
|
[
[
"Fontanari",
"Jose F.",
""
],
[
"Serva",
"Maurizio",
""
]
] |
Establishing the conditions that guarantee the spreading or the sustenance of altruistic traits in a population is the main goal of intergroup selection models. Of particular interest is the balance of the parameters associated with group size, migration and group survival against the selective advantage of the non-altruistic individuals. Here we use Kimura's diffusion model of intergroup selection to determine those conditions in the case where the group survival probability is a nonlinear non-decreasing function of the proportion of altruists in a group. In the case where this function is linear, there are two possible steady states, which correspond to the non-altruistic and the altruistic phases. At the discontinuous transition line separating these phases there is a non-ergodic coexistence phase. For a continuous concave survival function, we find an ergodic coexistence phase that occupies a finite region of the parameter space in between the altruistic and the non-altruistic phases, and is separated from these phases by continuous transition lines. For a convex survival function, the coexistence phase disappears altogether, but a bistable phase appears for which the choice of the initial condition determines whether the evolutionary dynamics leads to the altruistic or the non-altruistic steady state.
|
2302.00582
|
Alexander Jaffe
|
Alexander L. Jaffe, Cindy J. Castelle, and Jillian F. Banfield
|
Habitat transition in the evolution of bacteria and archaea
|
Accepted for publication in the Annual Review of Microbiology, Volume
77
| null | null | null |
q-bio.PE q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Related groups of microbes are widely distributed across Earth's habitats,
implying numerous dispersal and adaptation events over evolutionary time.
However, to date, relatively little is known about the characteristics and
mechanisms of these habitat transitions, particularly for populations that
reside in animal microbiomes. Here, we review the existing literature
concerning habitat transitions among a variety of bacterial and archaeal
lineages, considering the frequency of migration events, potential
environmental barriers, and mechanisms of adaptation to new physicochemical
conditions, including the modification of protein inventories and other genomic
characteristics. Cells dependent on microbial hosts, particularly bacteria from
the Candidate Phyla Radiation (CPR), have undergone repeated habitat
transitions from environmental sources into animal microbiomes. We compare
their trajectories to those of both free-living cells - including the
Melainabacteria, Elusimicrobia, and methanogenic archaea - as well as cellular
endosymbionts and bacteriophages, which have made similar transitions. We
conclude by highlighting major related topics that may be worthy of future
study.
|
[
{
"created": "Wed, 1 Feb 2023 16:56:06 GMT",
"version": "v1"
}
] |
2023-02-02
|
[
[
"Jaffe",
"Alexander L.",
""
],
[
"Castelle",
"Cindy J.",
""
],
[
"Banfield",
"Jillian F.",
""
]
] |
Related groups of microbes are widely distributed across Earth's habitats, implying numerous dispersal and adaptation events over evolutionary time. However, to date, relatively little is known about the characteristics and mechanisms of these habitat transitions, particularly for populations that reside in animal microbiomes. Here, we review the existing literature concerning habitat transitions among a variety of bacterial and archaeal lineages, considering the frequency of migration events, potential environmental barriers, and mechanisms of adaptation to new physicochemical conditions, including the modification of protein inventories and other genomic characteristics. Cells dependent on microbial hosts, particularly bacteria from the Candidate Phyla Radiation (CPR), have undergone repeated habitat transitions from environmental sources into animal microbiomes. We compare their trajectories to those of both free-living cells - including the Melainabacteria, Elusimicrobia, and methanogenic archaea - as well as cellular endosymbionts and bacteriophages, which have made similar transitions. We conclude by highlighting major related topics that may be worthy of future study.
|
2111.00119
|
Esteban Vargas Bernal
|
Esteban Vargas Bernal, Omar Saucedo, Joseph Hua Tien
|
Relating Eulerian and Lagrangian spatial models for vector-host diseases
dynamics through a fundamental matrix
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
We explore the relationship between Eulerian and Lagrangian approaches for
modeling movement in vector-borne diseases for discrete space. In the Eulerian
approach we account for the movement of hosts explicitly through movement rates
captured by a graph Laplacian matrix $L$. In the Lagrangian approach we only
account for the proportion of time that individuals spend in foreign patches
through a mixing matrix $P$. We establish a relationship between an Eulerian
model and a Lagrangian model for the hosts in terms of the matrices $L$ and
$P$. We say that the two modeling frameworks are consistent if for a given
matrix $P$, the matrix $L$ can be chosen so that the residence times of the
matrix $P$ and the matrix $L$ match. We find a sufficient condition for
consistency, and examine disease quantities such as the final outbreak size and
basic reproduction number in both the consistent and inconsistent cases. In the
special case of a two-patch model, we observe how similar values for the basic
reproduction number and final outbreak size can occur even in the inconsistent
case. However, there are scenarios where the final sizes in both approaches can
significantly differ by means of the relationship we propose.
|
[
{
"created": "Fri, 29 Oct 2021 23:31:22 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jan 2022 19:54:01 GMT",
"version": "v2"
}
] |
2022-01-25
|
[
[
"Bernal",
"Esteban Vargas",
""
],
[
"Saucedo",
"Omar",
""
],
[
"Tien",
"Joseph Hua",
""
]
] |
We explore the relationship between Eulerian and Lagrangian approaches for modeling movement in vector-borne diseases for discrete space. In the Eulerian approach we account for the movement of hosts explicitly through movement rates captured by a graph Laplacian matrix $L$. In the Lagrangian approach we only account for the proportion of time that individuals spend in foreign patches through a mixing matrix $P$. We establish a relationship between an Eulerian model and a Lagrangian model for the hosts in terms of the matrices $L$ and $P$. We say that the two modeling frameworks are consistent if for a given matrix $P$, the matrix $L$ can be chosen so that the residence times of the matrix $P$ and the matrix $L$ match. We find a sufficient condition for consistency, and examine disease quantities such as the final outbreak size and basic reproduction number in both the consistent and inconsistent cases. In the special case of a two-patch model, we observe how similar values for the basic reproduction number and final outbreak size can occur even in the inconsistent case. However, there are scenarios where the final sizes in both approaches can significantly differ by means of the relationship we propose.
|
2108.01620
|
Amit Kumar Das
|
Amit Kumar Das
|
Stochastic gene transcription with non-competitive transcription
regulatory architecture
|
The article is substantially modified with new paragraphs,
equations,images and appendices, 29 pages, 63 figures, 2 tables
| null | null | null |
q-bio.MN cond-mat.soft cond-mat.stat-mech physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transcription factors, such as activators and repressors, can interact
with the promoter of a gene in either a competitive or non-competitive way. In
this paper, we construct a stochastic model with a non-competitive
transcriptional regulatory architecture and develop an analytical theory that
re-establishes the experimental results with an improved data fitting. The
analytical expressions in the theory allow us to study the behaviour of the
system with respect to any of its parameters, and hence enable us to find the
factors that govern the regulation of gene expression for that architecture.
We notice that, along with transcriptional reinitiation and repressors, there
are other parameters that can control the noisiness of this network. We also
observe that the Fano factor (at the mRNA level) varies from the sub-Poissonian
regime to the super-Poissonian regime. In addition to the aforementioned
properties, we observe some anomalous characteristics of the Fano factor (at
the mRNA level) and of the variance of protein at lower activator
concentrations in the presence of repressor molecules. This model is useful
for understanding the architecture of interactions which may buffer the
stochasticity inherent to gene transcription.
|
[
{
"created": "Tue, 3 Aug 2021 16:48:04 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Sep 2021 07:29:16 GMT",
"version": "v2"
},
{
"created": "Sat, 12 Feb 2022 13:34:03 GMT",
"version": "v3"
},
{
"created": "Thu, 26 May 2022 15:14:38 GMT",
"version": "v4"
}
] |
2022-05-27
|
[
[
"Das",
"Amit Kumar",
""
]
] |
Transcription factors, such as activators and repressors, can interact with the promoter of a gene in either a competitive or non-competitive way. In this paper, we construct a stochastic model with a non-competitive transcriptional regulatory architecture and develop an analytical theory that re-establishes the experimental results with an improved data fitting. The analytical expressions in the theory allow us to study the behaviour of the system with respect to any of its parameters, and hence enable us to find the factors that govern the regulation of gene expression for that architecture. We notice that, along with transcriptional reinitiation and repressors, there are other parameters that can control the noisiness of this network. We also observe that the Fano factor (at the mRNA level) varies from the sub-Poissonian regime to the super-Poissonian regime. In addition to the aforementioned properties, we observe some anomalous characteristics of the Fano factor (at the mRNA level) and of the variance of protein at lower activator concentrations in the presence of repressor molecules. This model is useful for understanding the architecture of interactions which may buffer the stochasticity inherent to gene transcription.
|
2009.12693
|
Abicumaran Uthamacumaran
|
Abicumaran Uthamacumaran
|
A Review of Complex Systems Approaches to Cancer Networks
|
43 pages
|
Complex Systems, Vol. 29, Issue 4 (2020)
|
10.25088/ComplexSystems.29.4.779
| null |
q-bio.OT nlin.CD
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Cancers remain the leading cause of disease-related pediatric death in North
America. The emerging field of complex systems has redefined cancer networks as
a computational system with intractable algorithmic complexity. Herein, a tumor
and its heterogeneous phenotypes are discussed as dynamical systems having
multiple, strange attractors. Machine learning, network science and algorithmic
information dynamics are discussed as current tools for cancer network
reconstruction. Deep Learning architectures and computational fluid models are
proposed for better forecasting gene expression patterns in cancer ecosystems.
Cancer cell decision-making is investigated within the framework of complex
systems and complexity theory.
|
[
{
"created": "Sat, 26 Sep 2020 21:40:48 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Nov 2020 01:05:58 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Mar 2021 03:02:05 GMT",
"version": "v3"
},
{
"created": "Sun, 29 Aug 2021 08:21:41 GMT",
"version": "v4"
}
] |
2021-08-31
|
[
[
"Uthamacumaran",
"Abicumaran",
""
]
] |
Cancers remain the leading cause of disease-related pediatric death in North America. The emerging field of complex systems has redefined cancer networks as a computational system with intractable algorithmic complexity. Herein, a tumor and its heterogeneous phenotypes are discussed as dynamical systems having multiple, strange attractors. Machine learning, network science and algorithmic information dynamics are discussed as current tools for cancer network reconstruction. Deep Learning architectures and computational fluid models are proposed for better forecasting gene expression patterns in cancer ecosystems. Cancer cell decision-making is investigated within the framework of complex systems and complexity theory.
|
q-bio/0512027
|
Maria Barbi
|
Julien Mozziconacci, Christophe Lavelle, Maria Barbi, Annick Lesne and
Jean-Marc Victor
|
A Physical Model for the Condensation and Decondensation of Eukaryotic
Chromosomes
| null |
FEBS Letters 580 (2006) 368-372
|
10.1016/j.febslet.2005.12.053
| null |
q-bio.SC
| null |
During the eukaryotic cell cycle, chromatin undergoes several conformational
changes, which are believed to play key roles in gene expression regulation
during interphase, and in genome replication and division during mitosis. In
this paper, we propose a scenario for chromatin structural reorganization
during mitosis, which bridges all the different scales involved in chromatin
architecture, from nucleosomes to chromatin loops. We build a model for
chromatin, based on available data, taking into account both physical and
topological constraints that DNA has to deal with. Our results suggest that the
mitotic chromosome condensation/decondensation process is induced by a
structural change at the level of the nucleosome itself.
|
[
{
"created": "Mon, 12 Dec 2005 17:05:58 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Sep 2007 16:28:22 GMT",
"version": "v2"
}
] |
2007-09-03
|
[
[
"Mozziconacci",
"Julien",
""
],
[
"Lavelle",
"Christophe",
""
],
[
"Barbi",
"Maria",
""
],
[
"Lesne",
"Annick",
""
],
[
"Victor",
"Jean-Marc",
""
]
] |
During the eukaryotic cell cycle, chromatin undergoes several conformational changes, which are believed to play key roles in gene expression regulation during interphase, and in genome replication and division during mitosis. In this paper, we propose a scenario for chromatin structural reorganization during mitosis, which bridges all the different scales involved in chromatin architecture, from nucleosomes to chromatin loops. We build a model for chromatin, based on available data, taking into account both physical and topological constraints that DNA has to deal with. Our results suggest that the mitotic chromosome condensation/decondensation process is induced by a structural change at the level of the nucleosome itself.
|
1803.10753
|
Milena \v{C}uki\'c Dr
|
Milena B. Cukic, Mirjana M. Platisa, Aleksandar Kalauzi, Joji Oommen,
Milos R. Ljubisavljevic
|
The comparison of Higuchi fractal dimension and Sample Entropy analysis
of sEMG: effects of muscle contraction intensity and TMS
|
21 pages, 3 Figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of the study was to examine how the complexity of the surface
electromyogram (sEMG) signal, estimated by the Higuchi fractal dimension (HFD)
and Sample Entropy (SampEn), changes depending on muscle contraction intensity
and external perturbation of corticospinal activity during muscle contraction
induced by single-pulse Transcranial Magnetic Stimulation (spTMS). HFD and
SampEn were computed from the sEMG signal recorded at three different levels
of voluntary contraction before and after spTMS. After spTMS, both HFD and
SampEn decreased at medium compared to mild contraction. At strong compared to
medium contraction, SampEn increased, while HFD did not change significantly.
spTMS significantly decreased both parameters at all contraction levels. When
the same parameters were computed from mathematically generated sine-wave
calibration curves, the results showed that SampEn has better accuracy at
lower (0-40 Hz) and HFD at higher (60-120 Hz) frequencies. Changes in sEMG
complexity associated with increased muscle contraction intensity cannot be
accurately depicted by a single complexity measure. Examination of sEMG should
entail both SampEn and HFD, as they provide complementary information about
different frequency components of sEMG. Further studies are needed to explain
the implications of changes in the nonlinear parameters and their relation to
the underlying sEMG physiological processes.
|
[
{
"created": "Wed, 28 Mar 2018 17:44:44 GMT",
"version": "v1"
}
] |
2018-03-29
|
[
[
"Cukic",
"Milena B.",
""
],
[
"Platisa",
"Mirjana M.",
""
],
[
"Kalauzi",
"Aleksandar",
""
],
[
"Oommen",
"Joji",
""
],
[
"Ljubisavljevic",
"Milos R.",
""
]
] |
The aim of the study was to examine how the complexity of the surface electromyogram (sEMG) signal, estimated by the Higuchi fractal dimension (HFD) and Sample Entropy (SampEn), changes depending on muscle contraction intensity and external perturbation of corticospinal activity during muscle contraction induced by single-pulse Transcranial Magnetic Stimulation (spTMS). HFD and SampEn were computed from the sEMG signal recorded at three different levels of voluntary contraction before and after spTMS. After spTMS, both HFD and SampEn decreased at medium compared to mild contraction. At strong compared to medium contraction, SampEn increased, while HFD did not change significantly. spTMS significantly decreased both parameters at all contraction levels. When the same parameters were computed from mathematically generated sine-wave calibration curves, the results showed that SampEn has better accuracy at lower (0-40 Hz) and HFD at higher (60-120 Hz) frequencies. Changes in sEMG complexity associated with increased muscle contraction intensity cannot be accurately depicted by a single complexity measure. Examination of sEMG should entail both SampEn and HFD, as they provide complementary information about different frequency components of sEMG. Further studies are needed to explain the implications of changes in the nonlinear parameters and their relation to the underlying sEMG physiological processes.
|
2407.00050
|
Zhangyang Gao
|
Zhangyang Gao, Cheng Tan, Stan Z. Li
|
FoldToken2: Learning compact, invariant and generative protein structure
language
| null | null | null | null |
q-bio.BM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The equivariant nature of 3D coordinates has posed long-term challenges in
protein structure representation learning, alignment, and generation. Can we
create a compact and invariant language that equivalently represents protein
structures? Towards this goal, we propose FoldToken2 to transfer equivariant
structures into discrete tokens while maintaining the recoverability of the
original structures. From FoldToken1 to FoldToken2, we improve three key
components: (1) an invariant structure encoder, (2) a vector-quantized
compressor, and (3) an equivariant structure decoder. We evaluate FoldToken2
on the protein structure reconstruction task and show that it outperforms the
previous FoldToken1 by 20\% in TMScore and 81\% in RMSD. FoldToken2 is
probably the first method that works well on both single-chain and multi-chain
protein structure quantization. We believe that FoldToken2 will inspire
further improvements in protein structure representation learning, structure
alignment, and structure generation tasks.
|
[
{
"created": "Tue, 11 Jun 2024 09:24:51 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Gao",
"Zhangyang",
""
],
[
"Tan",
"Cheng",
""
],
[
"Li",
"Stan Z.",
""
]
] |
The equivariant nature of 3D coordinates has posed long-term challenges in protein structure representation learning, alignment, and generation. Can we create a compact and invariant language that equivalently represents protein structures? Towards this goal, we propose FoldToken2 to transfer equivariant structures into discrete tokens while maintaining the recoverability of the original structures. From FoldToken1 to FoldToken2, we improve three key components: (1) an invariant structure encoder, (2) a vector-quantized compressor, and (3) an equivariant structure decoder. We evaluate FoldToken2 on the protein structure reconstruction task and show that it outperforms the previous FoldToken1 by 20\% in TMScore and 81\% in RMSD. FoldToken2 is probably the first method that works well on both single-chain and multi-chain protein structure quantization. We believe that FoldToken2 will inspire further improvements in protein structure representation learning, structure alignment, and structure generation tasks.
|
1606.01315
|
Vladimir Privman
|
Vladimir Privman
|
Theoretical Modeling Expressions for Networked Enzymatic Signal
Processing Steps as Logic Gates Optimized by Filtering
|
arXiv admin note: substantial text overlap with arXiv:1312.4235
|
Int. J. Parallel Emergent Distrib. Syst. 32 (1), 30-43 (2017)
|
10.1080/17445760.2016.1140167
|
VP-269
|
q-bio.MN cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe modeling approaches to a "network" of connected enzyme-catalyzed
reactions, with added (bio)chemical processes that introduce biochemical
filtering steps into the functioning of such a biocatalytic cascade.
Theoretical expressions are derived that allow simple, few-parameter modeling
of processes concatenated in such cascades, both with and without filtering.
The modeling approach captures and explains features identified in earlier
studies of enzymatic processes considered as potential "network components" for
multi-step information/signal processing systems.
|
[
{
"created": "Sat, 4 Jun 2016 01:07:27 GMT",
"version": "v1"
}
] |
2016-12-13
|
[
[
"Privman",
"Vladimir",
""
]
] |
We describe modeling approaches to a "network" of connected enzyme-catalyzed reactions, with added (bio)chemical processes that introduce biochemical filtering steps into the functioning of such a biocatalytic cascade. Theoretical expressions are derived that allow simple, few-parameter modeling of processes concatenated in such cascades, both with and without filtering. The modeling approach captures and explains features identified in earlier studies of enzymatic processes considered as potential "network components" for multi-step information/signal processing systems.
|
2008.12766
|
Assad Oberai
|
Harisankar Ramaswamy, Assad A Oberai and Yannis C Yortsos
|
A comprehensive spatial-temporal infection model
| null | null | null | null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by analogies between the spreading of human-to-human infections and
of chemical processes, we develop a comprehensive model that accounts both for
infection and for transport. In this analogy, the three different populations
of infection models correspond to three chemical species. Areal densities
emerge as the key variables, thus capturing the effect of spatial density. We
derive expressions for the kinetics of the infection rates and for the
important parameter R0, which include areal density and its spatial
distribution. Coupled with mobility, the model allows the study of various
effects. We first present results for a batch reactor, the chemical process
equivalent of the SIR model. Because density makes R0 a decreasing function of
the process extent, the infection curves are different and smaller than for the
standard SIR model. We show that the effect of the initial conditions is
limited to the onset of the epidemic. We derive effective infection curves for
a number of cases, including a back-and-forth commute between regions of low
and high R0 environments. We then consider spatially distributed systems. We
show that diffusion leads to traveling waves, which in 1-D geometries propagate
at a constant speed and with a constant shape, both of which are sole functions
of R0. The infection curves are slightly different than for the batch problem,
as diffusion mitigates the infection intensity, thus leading to an effective
lower R0. The dimensional wave speed is found to be proportional to the product
of the square root of the diffusivity and of an increasing function of R0,
confirming the importance of restricting mobility in arresting the propagation
of infection. We examine the interaction of infection waves under various
conditions and scenarios, and extend the wave propagation analysis to 2-D
heterogeneous systems.
|
[
{
"created": "Fri, 28 Aug 2020 17:45:39 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Dec 2020 21:03:25 GMT",
"version": "v2"
}
] |
2020-12-08
|
[
[
"Ramaswamy",
"Harisankar",
""
],
[
"Oberai",
"Assad A",
""
],
[
"Yortsos",
"Yannis C",
""
]
] |
Motivated by analogies between the spreading of human-to-human infections and of chemical processes, we develop a comprehensive model that accounts both for infection and for transport. In this analogy, the three different populations of infection models correspond to three chemical species. Areal densities emerge as the key variables, thus capturing the effect of spatial density. We derive expressions for the kinetics of the infection rates and for the important parameter R0, that include areal density and its spatial distribution. Coupled with mobility the model allows the study of various effects. We first present results for a batch reactor, the chemical process equivalent of the SIR model. Because density makes R0 a decreasing function of the process extent, the infection curves are different and smaller than for the standard SIR model. We show that the effect of the initial conditions is limited to the onset of the epidemic. We derive effective infection curves for a number of cases, including a back-and-forth commute between regions of low and high R0 environments. We then consider spatially distributed systems. We show that diffusion leads to traveling waves, which in 1-D geometries propagate at a constant speed and with a constant shape, both of which are sole functions of R0. The infection curves are slightly different than for the batch problem, as diffusion mitigates the infection intensity, thus leading to an effective lower R0. The dimensional wave speed is found to be proportional to the product of the square root of the diffusivity and of an increasing function of R0, confirming the importance of restricting mobility in arresting the propagation of infection. We examine the interaction of infection waves under various conditions and scenarios, and extend the wave propagation analysis to 2-D heterogeneous systems.
|
1810.03666
|
Giovanni Paolini
|
Emanuele Delucchi, Linard Hoessly, Giovanni Paolini
|
Impossibility results on stability of phylogenetic consensus methods
| null |
Systematic Biology 69 (3), pp. 557-565, 2020
|
10.1093/sysbio/syz071
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We answer two questions raised by Bryant, Francis and Steel in their work on
consensus methods in phylogenetics. Consensus methods apply to every practical
instance where it is desired to aggregate a set of given phylogenetic trees
(say, gene evolution trees) into a resulting, "consensus" tree (say, a species
tree). Various stability criteria have been explored in this context, seeking
to model desirable consistency properties of consensus methods as the
experimental data are updated (e.g., more taxa, or more trees, are mapped).
However, such stability conditions can be incompatible with some basic
regularity properties that are widely accepted to be essential in any
meaningful consensus method. Here, we prove that such an incompatibility does
arise in the case of extension stability on binary trees and in the case of
associative stability. Our methods combine general theoretical considerations
with the use of computer programs tailored to the given stability requirements.
|
[
{
"created": "Mon, 8 Oct 2018 19:12:22 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Nov 2023 14:24:19 GMT",
"version": "v2"
}
] |
2023-11-14
|
[
[
"Delucchi",
"Emanuele",
""
],
[
"Hoessly",
"Linard",
""
],
[
"Paolini",
"Giovanni",
""
]
] |
We answer two questions raised by Bryant, Francis and Steel in their work on consensus methods in phylogenetics. Consensus methods apply to every practical instance where it is desired to aggregate a set of given phylogenetic trees (say, gene evolution trees) into a resulting, "consensus" tree (say, a species tree). Various stability criteria have been explored in this context, seeking to model desirable consistency properties of consensus methods as the experimental data are updated (e.g., more taxa, or more trees, are mapped). However, such stability conditions can be incompatible with some basic regularity properties that are widely accepted to be essential in any meaningful consensus method. Here, we prove that such an incompatibility does arise in the case of extension stability on binary trees and in the case of associative stability. Our methods combine general theoretical considerations with the use of computer programs tailored to the given stability requirements.
|
2011.01877
|
Camille Dunning
|
Camille Dunning
|
Exploring the Synchrony Between Body Temperature and HR, RR, and Aortic
Blood Pressure in Viral/Bacterial Disease Onsets with Signal Dynamics
| null | null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Signal-based early detection of illnesses has been a key topic in research
and hospital settings; it reduces technological costs and paves the way for
quick and effective patient-care operations. Elementary machine learning and
signal processing algorithms have proven to be sufficient in classifying the
onset of viral and bacterial conditions before clinical symptoms are shown.
Inspired by these recent developments, this project employs signal dynamics
analysis to infer changes in vital signs (temperature, respiration, and heart
rate). The results demonstrate that the trends of one vital function can be
predicted from that of another. In particular, it is shown that heart rate and
respiration typically change shortly after body temperature, and aortic blood
pressure follows. This is not an etiologically specific approach, but if
advanced further, it can enable patients and wearable system users to tame
these changes and prevent immediate symptoms.
|
[
{
"created": "Fri, 30 Oct 2020 22:32:04 GMT",
"version": "v1"
}
] |
2020-11-04
|
[
[
"Dunning",
"Camille",
""
]
] |
Signal-based early detection of illnesses has been a key topic in research and hospital settings; it reduces technological costs and paves the way for quick and effective patient-care operations. Elementary machine learning and signal processing algorithms have proven to be sufficient in classifying the onset of viral and bacterial conditions before clinical symptoms are shown. Inspired by these recent developments, this project employs signal dynamics analysis to infer changes in vital signs (temperature, respiration, and heart rate). The results demonstrate that the trends of one vital function can be predicted from that of another. In particular, it is shown that heart rate and respiration typically change shortly after body temperature, and aortic blood pressure follows. This is not an etiologically specific approach, but if advanced further, it can enable patients and wearable system users to tame these changes and prevent immediate symptoms.
|
1802.08849
|
Petter Holme
|
Petter Holme, Liubov Tupikina
|
Epidemic extinction in networks: Insights from the 12,110 smallest
graphs
|
v2 has some minor bug fixes
| null |
10.1088/1367-2630/aaf016
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the expected time to extinction in the
susceptible-infectious-susceptible (SIS) model of disease spreading. Rather
than using stochastic simulations, or asymptotic calculations in network
models, we solve the extinction time exactly for all connected graphs with
three to eight vertices. This approach enables us to discover descriptive
relations that would be impossible with stochastic simulations. It also helps
us discover graphs and configurations of S and I with anomalous behaviors
with respect to disease spreading. We find that for large transmission rates
the extinction time is independent of the configurations, just dependent on the
graph. In this limit, the number of vertices and edges determine the extinction
time very accurately (deviations primarily coming from the fluctuations in
degrees). We find that the rankings of configurations with respect to
extinction times at low and high transmission rates are correlated at low
prevalences and negatively correlated for high prevalences. The most important
structural factor determining this ranking is the degrees of the infectious
vertices.
|
[
{
"created": "Sat, 24 Feb 2018 14:11:08 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Mar 2018 01:00:21 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Sep 2018 10:40:13 GMT",
"version": "v3"
}
] |
2018-12-26
|
[
[
"Holme",
"Petter",
""
],
[
"Tupikina",
"Liubov",
""
]
] |
We investigate the expected time to extinction in the susceptible-infectious-susceptible (SIS) model of disease spreading. Rather than using stochastic simulations, or asymptotic calculations in network models, we solve the extinction time exactly for all connected graphs with three to eight vertices. This approach enables us to discover descriptive relations that would be impossible with stochastic simulations. It also helps us discover graphs and configurations of S and I with anomalous behaviors with respect to disease spreading. We find that for large transmission rates the extinction time is independent of the configurations, just dependent on the graph. In this limit, the number of vertices and edges determine the extinction time very accurately (deviations primarily coming from the fluctuations in degrees). We find that the rankings of configurations with respect to extinction times at low and high transmission rates are correlated at low prevalences and negatively correlated for high prevalences. The most important structural factor determining this ranking is the degrees of the infectious vertices.
|
1603.03767
|
Samuel Johnson
|
Virginia Dom\'inguez-Garc\'ia and Samuel Johnson and Miguel A. Mu\~noz
|
Intervality and coherence in complex networks
| null | null |
10.1063/1.4953163
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Food webs -- networks of predators and prey -- have long been known to
exhibit "intervality": species can generally be ordered along a single axis in
such a way that the prey of any given predator tend to lie on unbroken compact
intervals. Although the meaning of this axis -- identified with a "niche"
dimension -- has remained a mystery, it is assumed to lie at the basis of the
highly non-trivial structure of food webs. With this in mind, most trophic
network modelling has for decades been based on assigning species a niche value
by hand. However, we argue here that intervality should not be considered the
cause but rather a consequence of food-web structure. First, analysing a set of
$46$ empirical food webs, we find that they also exhibit {\it predator}
intervality: the predators of any given species are as likely to be contiguous
as the prey are, but in a different ordering. Furthermore, this property is not
exclusive of trophic networks: several networks of genes, neurons, metabolites,
cellular machines, airports, and words are found to be approximately as
interval as food webs. We go on to show that a simple model of food-web
assembly which does not make use of a niche axis can nevertheless generate
significant intervality. Therefore, the niche dimension (in the sense used for
food-web modelling) could in fact be the consequence of other, more fundamental
structural traits, such as trophic coherence. We conclude that a new approach
to food-web modelling is required for a deeper understanding of ecosystem
assembly, structure and function, and propose that certain topological features
thought to be specific of food webs are in fact common to many complex
networks.
|
[
{
"created": "Fri, 11 Mar 2016 10:53:09 GMT",
"version": "v1"
}
] |
2016-06-29
|
[
[
"Domínguez-García",
"Virginia",
""
],
[
"Johnson",
"Samuel",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] |
Food webs -- networks of predators and prey -- have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis -- identified with a "niche" dimension -- has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of $46$ empirical food webs, we find that they also exhibit {\it predator} intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits, such as trophic coherence. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
|
1702.07319
|
John Pearson
|
Shariq Iqbal and John Pearson
|
A Goal-Based Movement Model for Continuous Multi-Agent Tasks
|
New title; substantial simplifications of model
| null | null | null |
q-bio.NC cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite increasing attention paid to the need for fast, scalable methods to
analyze next-generation neuroscience data, comparatively little attention has
been paid to the development of similar methods for behavioral analysis. Just
as the volume and complexity of brain data have grown, behavioral paradigms in
systems neuroscience have likewise become more naturalistic and less
constrained, necessitating an increase in the flexibility and scalability of
the models used to study them. In particular, key assumptions made in the
analysis of typical decision paradigms --- optimality; analytic tractability;
discrete, low-dimensional action spaces --- may be untenable in richer tasks.
Here, using the case of a two-player, real-time, continuous strategic game as
an example, we show how the use of modern machine learning methods allows us to
relax each of these assumptions. Following an inverse reinforcement learning
approach, we are able to succinctly characterize the joint distribution over
players' actions via a generative model that allows us to simulate realistic
game play. We compare simulated play from a number of generative time series
models and show that ours successfully resists mode collapse while generating
trajectories with the rich variability of real behavior. Together, these
methods offer a rich class of models for the analysis of continuous action
tasks at the single-trial level.
|
[
{
"created": "Thu, 23 Feb 2017 18:09:14 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Oct 2017 20:14:11 GMT",
"version": "v2"
}
] |
2017-11-02
|
[
[
"Iqbal",
"Shariq",
""
],
[
"Pearson",
"John",
""
]
] |
Despite increasing attention paid to the need for fast, scalable methods to analyze next-generation neuroscience data, comparatively little attention has been paid to the development of similar methods for behavioral analysis. Just as the volume and complexity of brain data have grown, behavioral paradigms in systems neuroscience have likewise become more naturalistic and less constrained, necessitating an increase in the flexibility and scalability of the models used to study them. In particular, key assumptions made in the analysis of typical decision paradigms --- optimality; analytic tractability; discrete, low-dimensional action spaces --- may be untenable in richer tasks. Here, using the case of a two-player, real-time, continuous strategic game as an example, we show how the use of modern machine learning methods allows us to relax each of these assumptions. Following an inverse reinforcement learning approach, we are able to succinctly characterize the joint distribution over players' actions via a generative model that allows us to simulate realistic game play. We compare simulated play from a number of generative time series models and show that ours successfully resists mode collapse while generating trajectories with the rich variability of real behavior. Together, these methods offer a rich class of models for the analysis of continuous action tasks at the single-trial level.
|
1501.01863
|
Simon R. Schultz
|
Robin A. A. Ince, Simon R. Schultz and Stefano Panzeri
|
Estimating Information-Theoretic Quantities
|
16 pages, 3 figures
|
Encyclopaedia of Computational Neuroscience 2014, pp 1-13
|
10.1007/978-1-4614-7320-6_140-1
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information theory is a practical and theoretical framework developed for the
study of communication over noisy channels. Its probabilistic basis and
capacity to relate statistical structure to function make it ideally suited for
studying information flow in the nervous system. It has a number of useful
properties: it is a general measure sensitive to any relationship, not only
linear effects; it has meaningful units which in many cases allow direct
comparison between different experiments; and it can be used to study how much
information can be gained by observing neural responses in single trials,
rather than in averages over multiple trials. A variety of information
theoretic quantities are in common use in neuroscience (see entry "Summary of
Information-Theoretic Quantities"). Estimating these quantities in an accurate
and unbiased way from real neurophysiological data frequently presents
challenges, which are explained in this entry.
|
[
{
"created": "Thu, 8 Jan 2015 14:19:23 GMT",
"version": "v1"
}
] |
2015-01-09
|
[
[
"Ince",
"Robin A. A.",
""
],
[
"Schultz",
"Simon R.",
""
],
[
"Panzeri",
"Stefano",
""
]
] |
Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information theoretic quantities are in common use in neuroscience (see entry "Summary of Information-Theoretic Quantities"). Estimating these quantities in an accurate and unbiased way from real neurophysiological data frequently presents challenges, which are explained in this entry.
|
2303.08818
|
Daeseok Lee
|
Daeseok Lee, Jeunghyun Byun and Bonggun Shin
|
Boosting Convolutional Neural Networks' Protein Binding Site Prediction
Capacity Using SE(3)-invariant transformers, Transfer Learning and
Homology-based Augmentation
|
Updates in version 2: author order change (making it clear that
Bonggun Shin is the corresponding author)
| null | null | null |
q-bio.QM cs.LG cs.NE q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Figuring out small molecule binding sites in target proteins, in the
resolution of either pocket or residue, is critical in many virtual and real
drug-discovery scenarios. Since it is not always easy to find such binding
sites based on domain knowledge or traditional methods, different deep learning
methods that predict binding sites out of protein structures have been
developed in recent years. Here we present a new such deep learning algorithm,
that significantly outperformed all state-of-the-art baselines in terms of
both resolutions$\unicode{x2013}$pocket and residue. This good performance was
also demonstrated in a case study involving the protein human serum albumin and
its binding sites. Our algorithm included new ideas both in the model
architecture and in the training method. For the model architecture, it
incorporated SE(3)-invariant geometric self-attention layers that operate on
top of residue-level CNN outputs. This residue-level processing of the model
allowed a transfer learning between the two resolutions, which turned out to
significantly improve the binding pocket prediction. Moreover, we developed a
novel augmentation method based on protein homology, which prevented our model
from over-fitting. Overall, we believe that our contribution to the literature
is twofold. First, we provided a new computational method for binding site
prediction that is relevant to real-world applications, as shown by the good
performance on different benchmarks and the case study. Second, the novel ideas in
our method$\unicode{x2013}$the model architecture, transfer learning and the
homology augmentation$\unicode{x2013}$would serve as useful components in
future works.
|
[
{
"created": "Mon, 20 Feb 2023 05:02:40 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Apr 2023 05:05:00 GMT",
"version": "v2"
}
] |
2023-04-19
|
[
[
"Lee",
"Daeseok",
""
],
[
"Byun",
"Jeunghyun",
""
],
[
"Shin",
"Bonggun",
""
]
] |
Figuring out small molecule binding sites in target proteins, in the resolution of either pocket or residue, is critical in many virtual and real drug-discovery scenarios. Since it is not always easy to find such binding sites based on domain knowledge or traditional methods, different deep learning methods that predict binding sites out of protein structures have been developed in recent years. Here we present a new such deep learning algorithm, that significantly outperformed all state-of-the-art baselines in terms of both resolutions$\unicode{x2013}$pocket and residue. This good performance was also demonstrated in a case study involving the protein human serum albumin and its binding sites. Our algorithm included new ideas both in the model architecture and in the training method. For the model architecture, it incorporated SE(3)-invariant geometric self-attention layers that operate on top of residue-level CNN outputs. This residue-level processing of the model allowed a transfer learning between the two resolutions, which turned out to significantly improve the binding pocket prediction. Moreover, we developed a novel augmentation method based on protein homology, which prevented our model from over-fitting. Overall, we believe that our contribution to the literature is twofold. First, we provided a new computational method for binding site prediction that is relevant to real-world applications, as shown by the good performance on different benchmarks and the case study. Second, the novel ideas in our method$\unicode{x2013}$the model architecture, transfer learning and the homology augmentation$\unicode{x2013}$would serve as useful components in future works.
|
2306.11232
|
Haiping Huang
|
Haiping Huang
|
Eight challenges in developing theory of intelligence
|
24 pages, 131 references, revised version to journal
|
Front. Comput. Neurosci. 18:1388166 (2024)
|
10.3389/fncom.2024.1388166
| null |
q-bio.NC cond-mat.stat-mech cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A good theory of mathematical beauty is more practical than any current
observation, as new predictions of physical reality can be verified
self-consistently. This belief applies to the current status of understanding
deep neural networks including large language models and even the biological
intelligence. Toy models provide a metaphor of physical reality, allowing one
to mathematically formulate that reality (i.e., the so-called theory), which can
be updated as more conjectures are justified or refuted. One does not need to
pack all details into a model, but rather, more abstract models are
constructed, as complex systems like brains or deep networks have many sloppy
dimensions but much less stiff dimensions that strongly impact macroscopic
observables. This kind of bottom-up mechanistic modeling is still promising in
the modern era of understanding the natural or artificial intelligence. Here,
we shed light on eight challenges in developing theory of intelligence
following this theoretical paradigm. These challenges are representation
learning, generalization, adversarial robustness, continual learning, causal
learning, internal model of the brain, next-token prediction, and finally the
mechanics of subjective experience.
|
[
{
"created": "Tue, 20 Jun 2023 01:45:42 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jun 2024 08:26:30 GMT",
"version": "v2"
}
] |
2024-07-26
|
[
[
"Huang",
"Haiping",
""
]
] |
A good theory of mathematical beauty is more practical than any current observation, as new predictions of physical reality can be verified self-consistently. This belief applies to the current status of understanding deep neural networks including large language models and even the biological intelligence. Toy models provide a metaphor of physical reality, allowing one to mathematically formulate that reality (i.e., the so-called theory), which can be updated as more conjectures are justified or refuted. One does not need to pack all details into a model, but rather, more abstract models are constructed, as complex systems like brains or deep networks have many sloppy dimensions but much less stiff dimensions that strongly impact macroscopic observables. This kind of bottom-up mechanistic modeling is still promising in the modern era of understanding the natural or artificial intelligence. Here, we shed light on eight challenges in developing theory of intelligence following this theoretical paradigm. These challenges are representation learning, generalization, adversarial robustness, continual learning, causal learning, internal model of the brain, next-token prediction, and finally the mechanics of subjective experience.
|
1610.02301
|
Sa\'ul Ares
|
Javier Munoz-Garcia and Saul Ares
|
Formation and maintenance of nitrogen fixing cell patterns in
filamentous cyanobacteria
| null |
Proc. Natl. Acad. Sci. USA 113, 6218-6223 (2016)
|
10.1073/pnas.1524383113
| null |
q-bio.CB nlin.PS q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyanobacteria forming one-dimensional filaments are paradigmatic model
organisms of the transition between unicellular and multicellular living forms.
Under nitrogen limiting conditions, in filaments of the genus Anabaena, some
cells differentiate into heterocysts, which lose the possibility to divide but
are able to fix environmental nitrogen for the colony. These heterocysts form a
quasi-regular pattern in the filament, representing a prototype of patterning
and morphogenesis in prokaryotes. Recent years have seen advances in the
identification of the molecular mechanism regulating this pattern. We use this
data to build a theory on heterocyst pattern formation, for which both genetic
regulation and the effects of cell division and filament growth are key
components. The theory is based on the interplay of three generic mechanisms:
local autoactivation, early long range inhibition, and late long range
inhibition. These mechanisms can be identified with the dynamics of hetR, patS
and hetN expression. Our theory reproduces quantitatively the experimental
dynamics of pattern formation and maintenance for wild type and mutants. We
find that hetN alone is not enough to play the role as the late inhibitory
mechanism: a second mechanism, hypothetically the products of nitrogen fixation
supplied by heterocysts, must also play a role in late long range inhibition.
The preponderance of even intervals between heterocysts arises naturally as a
result of the interplay between the timescales of genetic regulation and cell
division. We also find that a purely stochastic initiation of the pattern,
without a two-stage process, is enough to reproduce experimental observations.
|
[
{
"created": "Fri, 7 Oct 2016 14:25:49 GMT",
"version": "v1"
}
] |
2016-10-10
|
[
[
"Munoz-Garcia",
"Javier",
""
],
[
"Ares",
"Saul",
""
]
] |
Cyanobacteria forming one-dimensional filaments are paradigmatic model organisms of the transition between unicellular and multicellular living forms. Under nitrogen limiting conditions, in filaments of the genus Anabaena, some cells differentiate into heterocysts, which lose the possibility to divide but are able to fix environmental nitrogen for the colony. These heterocysts form a quasi-regular pattern in the filament, representing a prototype of patterning and morphogenesis in prokaryotes. Recent years have seen advances in the identification of the molecular mechanism regulating this pattern. We use this data to build a theory on heterocyst pattern formation, for which both genetic regulation and the effects of cell division and filament growth are key components. The theory is based on the interplay of three generic mechanisms: local autoactivation, early long range inhibition, and late long range inhibition. These mechanisms can be identified with the dynamics of hetR, patS and hetN expression. Our theory reproduces quantitatively the experimental dynamics of pattern formation and maintenance for wild type and mutants. We find that hetN alone is not enough to play the role as the late inhibitory mechanism: a second mechanism, hypothetically the products of nitrogen fixation supplied by heterocysts, must also play a role in late long range inhibition. The preponderance of even intervals between heterocysts arises naturally as a result of the interplay between the timescales of genetic regulation and cell division. We also find that a purely stochastic initiation of the pattern, without a two-stage process, is enough to reproduce experimental observations.
|
q-bio/0608003
|
Leor Weinberger
|
Leor S. Weinberger, John C. Burnett, Jared E. Toettcher, Adam P.
Arkin, and David V. Schaffer
|
Supplemental Data: Stochastic Gene Expression in a Lentiviral Positive
Feedback Loop: HIV-1 Tat Fluctuations Drive Phenotypic Diversity
|
Supplemental data for q-bio.MN/0608002
| null | null | null |
q-bio.MN q-bio.CB
| null |
Supplemental data for "Stochastic Gene Expression in a Lentiviral Positive
Feedback Loop: HIV-1 Tat Fluctuations Drive Phenotypic Diversity"
[q-bio.MN/0608002, Cell. 2005 Jul 29;122(2):169-82].
|
[
{
"created": "Wed, 2 Aug 2006 19:21:27 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Weinberger",
"Leor S.",
""
],
[
"Burnett",
"John C.",
""
],
[
"Toettcher",
"Jared E.",
""
],
[
"Arkin",
"Adam P.",
""
],
[
"Schaffer",
"David V.",
""
]
] |
Supplemental data for "Stochastic Gene Expression in a Lentiviral Positive Feedback Loop: HIV-1 Tat Fluctuations Drive Phenotypic Diversity" [q-bio.MN/0608002, Cell. 2005 Jul 29;122(2):169-82].
|
1601.02948
|
Yoram Burak
|
Noga Weiss Mosheiff, Haggai Agmon, Avraham Moriel, and Yoram Burak
|
An Efficient Coding Theory for a Dynamic Trajectory Predicts non-Uniform
Allocation of Grid Cells to Modules in the Entorhinal Cortex
|
23 pages, 5 figures. Supplemental Information available from the
authors on request. A previous version of this work appeared in abstract form
(Program No. 727.02. 2015 Neuroscience Meeting Planner. Chicago, IL: Society
for Neuroscience, 2015. Online.)
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grid cells in the entorhinal cortex encode the position of an animal in its
environment using spatially periodic tuning curves of varying periodicity.
Recent experiments established that these cells are functionally organized in
discrete modules with uniform grid spacing. Here we develop a theory for
efficient coding of position, which takes into account the temporal statistics
of the animal's motion. The theory predicts a sharp decrease of module
population sizes with grid spacing, in agreement with the trends seen in the
experimental data. We identify a simple scheme for readout of the grid cell
code by neural circuitry that can match in accuracy the optimal Bayesian
decoder of the spikes. This readout scheme requires persistence over varying
timescales, ranging from ~1ms to ~1s, depending on the grid cell module. Our
results suggest that the brain employs an efficient representation of position
which takes advantage of the spatiotemporal statistics of the encoded variable,
in similarity to the principles that govern early sensory coding.
|
[
{
"created": "Tue, 12 Jan 2016 16:38:14 GMT",
"version": "v1"
}
] |
2016-01-13
|
[
[
"Mosheiff",
"Noga Weiss",
""
],
[
"Agmon",
"Haggai",
""
],
[
"Moriel",
"Avraham",
""
],
[
"Burak",
"Yoram",
""
]
] |
Grid cells in the entorhinal cortex encode the position of an animal in its environment using spatially periodic tuning curves of varying periodicity. Recent experiments established that these cells are functionally organized in discrete modules with uniform grid spacing. Here we develop a theory for efficient coding of position, which takes into account the temporal statistics of the animal's motion. The theory predicts a sharp decrease of module population sizes with grid spacing, in agreement with the trends seen in the experimental data. We identify a simple scheme for readout of the grid cell code by neural circuitry that can match in accuracy the optimal Bayesian decoder of the spikes. This readout scheme requires persistence over varying timescales, ranging from ~1ms to ~1s, depending on the grid cell module. Our results suggest that the brain employs an efficient representation of position which takes advantage of the spatiotemporal statistics of the encoded variable, in similarity to the principles that govern early sensory coding.
|
2004.10177
|
He Zhang
|
He Zhang, Liang Zhang, Ang Lin, Congcong Xu, Ziyu Li, Kaibo Liu,
Boxiang Liu, Xiaopin Ma, Fanfan Zhao, Weiguo Yao, Hangwen Li, David H.
Mathews, Yujian Zhang, and Liang Huang
|
Algorithm for Optimized mRNA Design Improves Stability and
Immunogenicity
|
17 pages for main text; 24 pages of supporting information; 4+11=15
figures; 2 tables (both in supporting information)
|
Nature (621), 396-403, 2023
|
10.1038/s41586-023-06127-z
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Messenger RNA (mRNA) vaccines are being used for COVID-19, but still suffer
from the critical issue of mRNA instability and degradation, which is a major
obstacle in the storage, distribution, and efficacy of the vaccine. Previous
work showed that optimizing secondary structure stability lengthens mRNA
half-life, which, together with optimal codons, increases protein expression.
Therefore, a principled mRNA design algorithm must optimize both structural
stability and codon usage to improve mRNA efficiency. However, due to
synonymous codons, the mRNA design space is prohibitively large, e.g., there
are $\sim\!10^{632}$ mRNAs for the SARS-CoV-2 Spike protein, which poses
insurmountable challenges to previous methods. Here we provide a surprisingly
simple solution to this hard problem by reducing it to a classical problem in
computational linguistics, where finding the optimal mRNA is akin to finding
the most likely sentence among similar sounding alternatives. Our algorithm,
named LinearDesign, takes only 11 minutes for the Spike protein, and can
jointly optimize stability and codon usage. Experimentally, without chemical
modification, our designs substantially improve mRNA half-life and protein
expression in vitro, and dramatically increase antibody response by up to
23$\times$ in vivo, compared to the codon-optimized benchmark. Our work enables
the exploration of highly stable and efficient designs that are previously
unreachable and is a timely tool not only for vaccines but also for mRNA
medicine encoding all therapeutic proteins (e.g., monoclonal antibodies and
anti-cancer drugs).
|
[
{
"created": "Tue, 21 Apr 2020 17:35:45 GMT",
"version": "v1"
},
{
"created": "Tue, 5 May 2020 17:42:04 GMT",
"version": "v2"
},
{
"created": "Wed, 6 May 2020 17:31:10 GMT",
"version": "v3"
},
{
"created": "Wed, 13 May 2020 15:58:24 GMT",
"version": "v4"
},
{
"created": "Mon, 11 Oct 2021 19:05:03 GMT",
"version": "v5"
},
{
"created": "Fri, 21 Jan 2022 17:39:37 GMT",
"version": "v6"
},
{
"created": "Thu, 17 Mar 2022 19:09:44 GMT",
"version": "v7"
}
] |
2024-02-08
|
[
[
"Zhang",
"He",
""
],
[
"Zhang",
"Liang",
""
],
[
"Lin",
"Ang",
""
],
[
"Xu",
"Congcong",
""
],
[
"Li",
"Ziyu",
""
],
[
"Liu",
"Kaibo",
""
],
[
"Liu",
"Boxiang",
""
],
[
"Ma",
"Xiaopin",
""
],
[
"Zhao",
"Fanfan",
""
],
[
"Yao",
"Weiguo",
""
],
[
"Li",
"Hangwen",
""
],
[
"Mathews",
"David H.",
""
],
[
"Zhang",
"Yujian",
""
],
[
"Huang",
"Liang",
""
]
] |
Messenger RNA (mRNA) vaccines are being used for COVID-19, but still suffer from the critical issue of mRNA instability and degradation, which is a major obstacle in the storage, distribution, and efficacy of the vaccine. Previous work showed that optimizing secondary structure stability lengthens mRNA half-life, which, together with optimal codons, increases protein expression. Therefore, a principled mRNA design algorithm must optimize both structural stability and codon usage to improve mRNA efficiency. However, due to synonymous codons, the mRNA design space is prohibitively large, e.g., there are $\sim\!10^{632}$ mRNAs for the SARS-CoV-2 Spike protein, which poses insurmountable challenges to previous methods. Here we provide a surprisingly simple solution to this hard problem by reducing it to a classical problem in computational linguistics, where finding the optimal mRNA is akin to finding the most likely sentence among similar sounding alternatives. Our algorithm, named LinearDesign, takes only 11 minutes for the Spike protein, and can jointly optimize stability and codon usage. Experimentally, without chemical modification, our designs substantially improve mRNA half-life and protein expression in vitro, and dramatically increase antibody response by up to 23$\times$ in vivo, compared to the codon-optimized benchmark. Our work enables the exploration of highly stable and efficient designs that are previously unreachable and is a timely tool not only for vaccines but also for mRNA medicine encoding all therapeutic proteins (e.g., monoclonal antibodies and anti-cancer drugs).
|
0901.1572
|
Sophie Querouil
|
Sophie Qu\'erouil (IMAR-DOP, CAVIAR), M. A. Silva (IMAR-DOP), I.
Cascao (IMAR-DOP), S. Magalhaes (IMAR-DOP), M. I. Seabra (IMAR-DOP), M. A.
Machete (IMAR-DOP), R. S. Santos (IMAR-DOP)
|
Why do dolphins form mixed-species associations in the Azores?
| null |
Ethology 114, 12 (2008) 1183-1194
|
10.1111/j.1439-0310.2008.01570.x
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mixed-species associations are temporary associations between individuals of
different species that are often observed in birds, primates and cetaceans.
They have been interpreted as a strategy to reduce predation risk, enhance
foraging success and/or provide a social advantage. In the archipelago of the
Azores, four species of dolphins are commonly involved in mixed-species
associations: the common dolphin, Delphinus delphis, the bottlenose dolphin,
Tursiops truncatus, the striped dolphin, Stenella coeruleoalba, and the spotted
dolphin, Stenella frontalis. In order to understand the reasons why dolphins
associate, we analysed field data collected since 1999 by research scientists
and trained observers placed onboard fishing vessels. In total, 113
mixed-species groups were observed out of 5720 sightings. The temporal
distribution, habitat (water depth, distance to the coast), behaviour (i.e.
feeding, travelling, socializing), size and composition of mixed-species groups
were compared with those of single-species groups. Results did not support the
predation avoidance hypothesis and gave little support to the social advantage
hypothesis. The foraging advantage hypothesis was the most convincing. However,
the benefits of mixed-species associations appeared to depend on the species.
Associations were likely to be opportunistic in the larger bottlenose dolphin,
while there seemed to be some evolutionary constraints favouring associations
in the rarer striped dolphin. Comparison with previous studies suggests that
the formation of mixed-species groups depends on several environmental factors,
and therefore may constitute an adaptive response.
|
[
{
"created": "Mon, 12 Jan 2009 13:33:03 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jul 2010 14:24:55 GMT",
"version": "v2"
}
] |
2010-07-26
|
[
[
"Quérouil",
"Sophie",
"",
"IMAR-DOP, CAVIAR"
],
[
"Silva",
"M. A.",
"",
"IMAR-DOP"
],
[
"Cascao",
"I.",
"",
"IMAR-DOP"
],
[
"Magalhaes",
"S.",
"",
"IMAR-DOP"
],
[
"Seabra",
"M. I.",
"",
"IMAR-DOP"
],
[
"Machete",
"M. A.",
"",
"IMAR-DOP"
],
[
"Santos",
"R. S.",
"",
"IMAR-DOP"
]
] |
Mixed-species associations are temporary associations between individuals of different species that are often observed in birds, primates and cetaceans. They have been interpreted as a strategy to reduce predation risk, enhance foraging success and/or provide a social advantage. In the archipelago of the Azores, four species of dolphins are commonly involved in mixed-species associations: the common dolphin, Delphinus delphis, the bottlenose dolphin, Tursiops truncatus, the striped dolphin, Stenella coeruleoalba, and the spotted dolphin, Stenella frontalis. In order to understand the reasons why dolphins associate, we analysed field data collected since 1999 by research scientists and trained observers placed onboard fishing vessels. In total, 113 mixed-species groups were observed out of 5720 sightings. The temporal distribution, habitat (water depth, distance to the coast), behaviour (i.e. feeding, travelling, socializing), size and composition of mixed-species groups were compared with those of single-species groups. Results did not support the predation avoidance hypothesis and gave little support to the social advantage hypothesis. The foraging advantage hypothesis was the most convincing. However, the benefits of mixed-species associations appeared to depend on the species. Associations were likely to be opportunistic in the larger bottlenose dolphin, while there seemed to be some evolutionary constraints favouring associations in the rarer striped dolphin. Comparison with previous studies suggests that the formation of mixed-species groups depends on several environmental factors, and therefore may constitute an adaptive response.
|
2003.08576
|
Masaki Sasai
|
Bhaswati Bhattacharyya, Jin Wang and Masaki Sasai
|
Stochastic epigenetic dynamics of gene switching
| null |
Phys. Rev. E 102, 042408 (2020)
|
10.1103/PhysRevE.102.042408
| null |
q-bio.MN
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Epigenetic modifications of histones crucially affect the eukaryotic gene
activity, while the epigenetic histone state is largely determined by the
binding of specific factors such as the transcription factors (TFs) to DNA.
Here, how the TFs and the histone state are dynamically correlated is not
obvious when TF synthesis is regulated by the histone state. This type of
feedback regulatory relation is ubiquitous in gene networks to determine
cell fate in differentiation and other cell transformations. To gain insights
into such dynamical feedback regulations, we theoretically analyze a model of
epigenetic gene switching by extending the Doi-Peliti operator formalism of
reaction kinetics to the problem of coupled molecular processes. The spin-1 and
spin-1/2 coherent state representations are introduced to describe stochastic
reactions of histones and binding/unbinding of TF in a unified way, which
provides a concise view of the effects of timescale difference among these
molecular processes; even in the case that binding/unbinding of TF to/from DNA
are adiabatically fast, the slow nonadiabatic histone dynamics give rise to a
distinct circular flow of the probability flux around basins in the landscape
of the gene state distribution, which leads to hysteresis in gene switching. In
contrast to the general belief that the change in the amount of TF precedes the
histone state change, the flux drives histones to be modified prior to the
change in the amount of TF in the self-regulating circuits. The flux-landscape
analyses shed light on the nonlinear nonadiabatic mechanism of epigenetic cell
fate decision making.
|
[
{
"created": "Thu, 19 Mar 2020 05:14:24 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Oct 2020 00:04:18 GMT",
"version": "v2"
}
] |
2020-10-28
|
[
[
"Bhattacharyya",
"Bhaswati",
""
],
[
"Wang",
"Jin",
""
],
[
"Sasai",
"Masaki",
""
]
] |
Epigenetic modifications of histones crucially affect the eukaryotic gene activity, while the epigenetic histone state is largely determined by the binding of specific factors such as the transcription factors (TFs) to DNA. Here, how the TFs and the histone state are dynamically correlated is not obvious when TF synthesis is regulated by the histone state. This type of feedback regulatory relation is ubiquitous in gene networks to determine cell fate in differentiation and other cell transformations. To gain insights into such dynamical feedback regulations, we theoretically analyze a model of epigenetic gene switching by extending the Doi-Peliti operator formalism of reaction kinetics to the problem of coupled molecular processes. The spin-1 and spin-1/2 coherent state representations are introduced to describe stochastic reactions of histones and binding/unbinding of TF in a unified way, which provides a concise view of the effects of timescale difference among these molecular processes; even in the case that binding/unbinding of TF to/from DNA are adiabatically fast, the slow nonadiabatic histone dynamics give rise to a distinct circular flow of the probability flux around basins in the landscape of the gene state distribution, which leads to hysteresis in gene switching. In contrast to the general belief that the change in the amount of TF precedes the histone state change, the flux drives histones to be modified prior to the change in the amount of TF in the self-regulating circuits. The flux-landscape analyses shed light on the nonlinear nonadiabatic mechanism of epigenetic cell fate decision making.
|
1206.0344
|
Andrew Vlasic
|
Andrew Vlasic
|
Long-Run Analysis of the Stochastic Replicator Dynamics in the Presence
of Random Jumps
| null | null | null | null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A further generalization of the stochastic replicator dynamic derived by
Fudenberg and Harris \cite{FH92} is considered. In particular, a Poissonian
integral is introduced to the fitness to simulate the effects of anomalous
events. For the two strategy population, an estimation of the long run behavior
of the dynamic is derived. For the population with many strategies, conditions
for stability to pure strict Nash equilibria, extinction of dominated pure
strategies, and recurrence in a neighborhood of an internal evolutionarily stable
strategy are derived. This extends the results given by Imhof \cite{I05}.
|
[
{
"created": "Sat, 2 Jun 2012 04:30:21 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Jan 2014 20:46:57 GMT",
"version": "v2"
}
] |
2014-01-30
|
[
[
"Vlasic",
"Andrew",
""
]
] |
A further generalization of the stochastic replicator dynamic derived by Fudenberg and Harris \cite{FH92} is considered. In particular, a Poissonian integral is introduced to the fitness to simulate the effects of anomalous events. For the two strategy population, an estimation of the long run behavior of the dynamic is derived. For the population with many strategies, conditions for stability to pure strict Nash equilibria, extinction of dominated pure strategies, and recurrence in a neighborhood of an internal evolutionarily stable strategy are derived. This extends the results given by Imhof \cite{I05}.
|
1110.0433
|
Matthias Keil
|
Matthias S. Keil
|
Computation of Object Approach by a Biophysical Model of a Wide-Field
Visual Neuron: Dynamics, Peaks, and Fits
|
A revised version of this paper with the title "Emergence of
Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for
Computing Object Approaches: Dynamics, Peaks, & Fits" has been accepted in
Advances in Neural Information Processing Systems, NIPS 2011, Granada, Spain
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many species show avoidance reactions in response to looming object
approaches. In locusts, the corresponding escape behavior correlates with the
activity of the lobula giant movement detector (LGMD) neuron. During an object
approach, its firing rate was reported to gradually increase until a peak is
reached, and then it declines quickly. The ETA-function predicts that the LGMD
activity is a product between an exponential function of angular size and
angular velocity, and that peak activity is reached before time-to-contact
(ttc). The ETA-function has become the prevailing LGMD model because it
reproduces many experimental observations, and even experimental evidence for
the multiplicative operation was reported. Several inconsistencies remain
unresolved, though. Here we address these issues with a new model (PSI-model),
which explicitly connects angular size and angular velocity to biophysical
quantities. The PSI-model avoids biophysical problems associated with
implementing exp(), implements the multiplicative operation of ETA via divisive
inhibition, and explains why activity peaks could occur after ttc. It
consistently predicts response features of the LGMD, and provides excellent
fits to published experimental data, with goodness of fit measures comparable
to corresponding fits with the ETA-function.
|
[
{
"created": "Mon, 3 Oct 2011 18:00:00 GMT",
"version": "v1"
}
] |
2011-10-04
|
[
[
"Keil",
"Matthias S.",
""
]
] |
Many species show avoidance reactions in response to looming object approaches. In locusts, the corresponding escape behavior correlates with the activity of the lobula giant movement detector (LGMD) neuron. During an object approach, its firing rate was reported to gradually increase until a peak is reached, and then it declines quickly. The ETA-function predicts that the LGMD activity is a product between an exponential function of angular size and angular velocity, and that peak activity is reached before time-to-contact (ttc). The ETA-function has become the prevailing LGMD model because it reproduces many experimental observations, and even experimental evidence for the multiplicative operation was reported. Several inconsistencies remain unresolved, though. Here we address these issues with a new model (PSI-model), which explicitly connects angular size and angular velocity to biophysical quantities. The PSI-model avoids biophysical problems associated with implementing exp(), implements the multiplicative operation of ETA via divisive inhibition, and explains why activity peaks could occur after ttc. It consistently predicts response features of the LGMD, and provides excellent fits to published experimental data, with goodness of fit measures comparable to corresponding fits with the ETA-function.
|
1803.01123
|
Chen Jia
|
Chen Jia, Hong Qian, Min Chen, Michael Q. Zhang
|
Relaxation rates of gene expression kinetics reveal the feedback signs
of autoregulatory gene networks
|
17 pages
| null |
10.1063/1.5009749
| null |
q-bio.MN cond-mat.stat-mech q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transient response to a stimulus and subsequent recovery to a steady
state are the fundamental characteristics of a living organism. Here we study
the relaxation kinetics of autoregulatory gene networks based on the chemical
master equation model of single-cell stochastic gene expression with nonlinear
feedback regulation. We report a novel relation between the rate of relaxation,
characterized by the spectral gap of the Markov model, and the feedback sign of
the underlying gene circuit. When a network has no feedback, the relaxation
rate is exactly the decay rate of the protein. We further show that positive
feedback always slows down the relaxation kinetics while negative feedback
always speeds it up. Numerical simulations demonstrate that this relation
provides a possible method to infer the feedback topology of autoregulatory
gene networks by using time-series data of gene expression.
|
[
{
"created": "Sat, 3 Mar 2018 08:18:42 GMT",
"version": "v1"
}
] |
2018-03-06
|
[
[
"Jia",
"Chen",
""
],
[
"Qian",
"Hong",
""
],
[
"Chen",
"Min",
""
],
[
"Zhang",
"Michael Q.",
""
]
] |
The transient response to a stimulus and subsequent recovery to a steady state are the fundamental characteristics of a living organism. Here we study the relaxation kinetics of autoregulatory gene networks based on the chemical master equation model of single-cell stochastic gene expression with nonlinear feedback regulation. We report a novel relation between the rate of relaxation, characterized by the spectral gap of the Markov model, and the feedback sign of the underlying gene circuit. When a network has no feedback, the relaxation rate is exactly the decay rate of the protein. We further show that positive feedback always slows down the relaxation kinetics while negative feedback always speeds it up. Numerical simulations demonstrate that this relation provides a possible method to infer the feedback topology of autoregulatory gene networks by using time-series data of gene expression.
|
1407.2480
|
Alain Destexhe
|
Claude Bedard and Alain Destexhe
|
Generalized cable formalism to calculate the magnetic field of single
neurons and neuronal populations
|
Physical Review E (in press); 24 pages, 16 figures
|
Physical Review E 90: 042723 (2014)
|
10.1103/PhysRevE.90.042723
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neurons generate magnetic fields which can be recorded with macroscopic
techniques such as magneto-encephalography. The theory that accounts for the
genesis of neuronal magnetic fields involves dendritic cable structures in
homogeneous resistive extracellular media. Here, we generalize this model by
considering dendritic cables in extracellular media with arbitrarily complex
electric properties. This method is based on a multi-scale mean-field theory
where the neuron is considered in interaction with a "mean" extracellular
medium (characterized by a specific impedance). We first show that, as
expected, the generalized cable equation and the standard cable generate
magnetic fields that mostly depend on the axial current in the cable, with a
moderate contribution of extracellular currents. Less expected, we also show
that the nature of the extracellular and intracellular media influence the
axial current, and thus also influence neuronal magnetic fields. We illustrate
these properties by numerical simulations and suggest experiments to test these
findings.
|
[
{
"created": "Wed, 9 Jul 2014 13:53:23 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Oct 2014 21:37:23 GMT",
"version": "v2"
}
] |
2014-11-04
|
[
[
"Bedard",
"Claude",
""
],
[
"Destexhe",
"Alain",
""
]
] |
Neurons generate magnetic fields which can be recorded with macroscopic techniques such as magneto-encephalography. The theory that accounts for the genesis of neuronal magnetic fields involves dendritic cable structures in homogeneous resistive extracellular media. Here, we generalize this model by considering dendritic cables in extracellular media with arbitrarily complex electric properties. This method is based on a multi-scale mean-field theory where the neuron is considered in interaction with a "mean" extracellular medium (characterized by a specific impedance). We first show that, as expected, the generalized cable equation and the standard cable generate magnetic fields that mostly depend on the axial current in the cable, with a moderate contribution of extracellular currents. Less expected, we also show that the nature of the extracellular and intracellular media influence the axial current, and thus also influence neuronal magnetic fields. We illustrate these properties by numerical simulations and suggest experiments to test these findings.
|
1206.6071
|
Pascal Buenzli
|
Pascal R. Buenzli, C. David L. Thomas, John G. Clement, Peter Pivonka
|
Endocortical bone loss in osteoporosis: The role of bone surface
availability
|
13 pages, 3 figures. V2: minor stylistic improvements in
text/figures; more accurately referenced subsection "Internal mechanical
stress distribution"; some improved remarks in the Discussion section
|
Int J Numer Meth Biomed Engng (2013) 29:1307-1322
|
10.1002/cnm.2567
| null |
q-bio.TO physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Age-related bone loss and postmenopausal osteoporosis are disorders of bone
remodelling, in which less bone is reformed than resorbed. Yet, this
dysregulation of bone remodelling does not occur equally in all bone regions.
Loss of bone is more pronounced near and at the endocortex, leading to cortical
wall thinning and medullary cavity expansion, a process sometimes referred to
as "trabecularisation" or "cancellisation". Cortical wall thinning is of
primary concern in osteoporosis due to the strong deterioration of bone
mechanical properties that it is associated with. In this paper, we examine the
possibility that the non-uniformity of microscopic bone surface availability
could explain the non-uniformity of bone loss in osteoporosis. We use a
computational model of bone remodelling in which microscopic bone surface
availability influences bone turnover rate and simulate the evolution of the
bone volume fraction profile across the midshaft of a long bone. We find that
bone loss is accelerated near the endocortical wall where the specific surface
is highest. Over time, this leads to a substantial reduction of cortical wall
thickness from the endosteum. The associated expansion of the medullary cavity
can be made to match experimentally observed cross-sectional data from the
Melbourne Femur Collection. Finally, we calculate the redistribution of the
mechanical stresses in this evolving bone structure and show that mechanical
load becomes critically transferred to the periosteal cortical bone.
|
[
{
"created": "Tue, 26 Jun 2012 18:26:45 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Aug 2012 17:32:39 GMT",
"version": "v2"
}
] |
2014-05-21
|
[
[
"Buenzli",
"Pascal R.",
""
],
[
"Thomas",
"C. David L.",
""
],
[
"Clement",
"John G.",
""
],
[
"Pivonka",
"Peter",
""
]
] |
Age-related bone loss and postmenopausal osteoporosis are disorders of bone remodelling, in which less bone is reformed than resorbed. Yet, this dysregulation of bone remodelling does not occur equally in all bone regions. Loss of bone is more pronounced near and at the endocortex, leading to cortical wall thinning and medullary cavity expansion, a process sometimes referred to as "trabecularisation" or "cancellisation". Cortical wall thinning is of primary concern in osteoporosis due to the strong deterioration of bone mechanical properties that it is associated with. In this paper, we examine the possibility that the non-uniformity of microscopic bone surface availability could explain the non-uniformity of bone loss in osteoporosis. We use a computational model of bone remodelling in which microscopic bone surface availability influences bone turnover rate and simulate the evolution of the bone volume fraction profile across the midshaft of a long bone. We find that bone loss is accelerated near the endocortical wall where the specific surface is highest. Over time, this leads to a substantial reduction of cortical wall thickness from the endosteum. The associated expansion of the medullary cavity can be made to match experimentally observed cross-sectional data from the Melbourne Femur Collection. Finally, we calculate the redistribution of the mechanical stresses in this evolving bone structure and show that mechanical load becomes critically transferred to the periosteal cortical bone.
|
2312.11584
|
Zhi Jin
|
Zhi Jin, Sheng Xu, Xiang Zhang, Tianze Ling, Nanqing Dong, Wanli
Ouyang, Zhiqiang Gao, Cheng Chang, Siqi Sun
|
ContraNovo: A Contrastive Learning Approach to Enhance De Novo Peptide
Sequencing
|
This paper has been accepted by AAAI 2024
| null | null | null |
q-bio.QM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
De novo peptide sequencing from mass spectrometry (MS) data is a critical
task in proteomics research. Traditional de novo algorithms have encountered a
bottleneck in accuracy due to the inherent complexity of proteomics data. While
deep learning-based methods have shown progress, they reduce the problem to a
translation task, potentially overlooking critical nuances between spectra and
peptides. In our research, we present ContraNovo, a pioneering algorithm that
leverages contrastive learning to extract the relationship between spectra and
peptides and incorporates the mass information into peptide decoding, aiming to
address these intricacies more efficiently. Through rigorous evaluations on two
benchmark datasets, ContraNovo consistently outshines contemporary
state-of-the-art solutions, underscoring its promising potential in enhancing
de novo peptide sequencing. The source code is available at
https://github.com/BEAM-Labs/ContraNovo.
|
[
{
"created": "Mon, 18 Dec 2023 12:49:46 GMT",
"version": "v1"
}
] |
2023-12-20
|
[
[
"Jin",
"Zhi",
""
],
[
"Xu",
"Sheng",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Ling",
"Tianze",
""
],
[
"Dong",
"Nanqing",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Gao",
"Zhiqiang",
""
],
[
"Chang",
"Cheng",
""
],
[
"Sun",
"Siqi",
""
]
] |
De novo peptide sequencing from mass spectrometry (MS) data is a critical task in proteomics research. Traditional de novo algorithms have encountered a bottleneck in accuracy due to the inherent complexity of proteomics data. While deep learning-based methods have shown progress, they reduce the problem to a translation task, potentially overlooking critical nuances between spectra and peptides. In our research, we present ContraNovo, a pioneering algorithm that leverages contrastive learning to extract the relationship between spectra and peptides and incorporates the mass information into peptide decoding, aiming to address these intricacies more efficiently. Through rigorous evaluations on two benchmark datasets, ContraNovo consistently outshines contemporary state-of-the-art solutions, underscoring its promising potential in enhancing de novo peptide sequencing. The source code is available at https://github.com/BEAM-Labs/ContraNovo.
|
2301.10774
|
Cheng Tan
|
Cheng Tan, Yijie Zhang, Zhangyang Gao, Bozhen Hu, Siyuan Li, Zicheng
Liu, Stan Z. Li
|
RDesign: Hierarchical Data-efficient Representation Learning for
Tertiary Structure-based RNA Design
|
30 pages, 28 figures, 16 tables
| null | null | null |
q-bio.BM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While artificial intelligence has made remarkable strides in revealing the
relationship between biological macromolecules' primary sequence and tertiary
structure, designing RNA sequences based on specified tertiary structures
remains challenging. Though existing approaches in protein design have
thoroughly explored structure-to-sequence dependencies in proteins, RNA design
still confronts difficulties due to structural complexity and data scarcity.
Moreover, direct transplantation of protein design methodologies into RNA
design fails to achieve satisfactory outcomes despite sharing similar
structural components. In this study, we aim to systematically construct a
data-driven RNA design pipeline. We crafted a large, well-curated benchmark
dataset and designed a comprehensive structural modeling approach to represent
the complex RNA tertiary structure. More importantly, we proposed a
hierarchical data-efficient representation learning framework that learns
structural representations through contrastive learning at both cluster-level
and sample-level to fully leverage the limited data. By constraining data
representations within a limited hyperspherical space, the intrinsic
relationships between data points could be explicitly imposed. Moreover, we
incorporated extracted secondary structures with base pairs as prior knowledge
to facilitate the RNA design process. Extensive experiments demonstrate the
effectiveness of our proposed method, providing a reliable baseline for future
RNA design tasks. The source code and benchmark dataset are available at
https://github.com/A4Bio/RDesign.
|
[
{
"created": "Wed, 25 Jan 2023 17:19:49 GMT",
"version": "v1"
},
{
"created": "Wed, 17 May 2023 13:59:04 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Mar 2024 02:07:37 GMT",
"version": "v3"
}
] |
2024-03-08
|
[
[
"Tan",
"Cheng",
""
],
[
"Zhang",
"Yijie",
""
],
[
"Gao",
"Zhangyang",
""
],
[
"Hu",
"Bozhen",
""
],
[
"Li",
"Siyuan",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Li",
"Stan Z.",
""
]
] |
While artificial intelligence has made remarkable strides in revealing the relationship between biological macromolecules' primary sequence and tertiary structure, designing RNA sequences based on specified tertiary structures remains challenging. Though existing approaches in protein design have thoroughly explored structure-to-sequence dependencies in proteins, RNA design still confronts difficulties due to structural complexity and data scarcity. Moreover, direct transplantation of protein design methodologies into RNA design fails to achieve satisfactory outcomes despite sharing similar structural components. In this study, we aim to systematically construct a data-driven RNA design pipeline. We crafted a large, well-curated benchmark dataset and designed a comprehensive structural modeling approach to represent the complex RNA tertiary structure. More importantly, we proposed a hierarchical data-efficient representation learning framework that learns structural representations through contrastive learning at both cluster-level and sample-level to fully leverage the limited data. By constraining data representations within a limited hyperspherical space, the intrinsic relationships between data points could be explicitly imposed. Moreover, we incorporated extracted secondary structures with base pairs as prior knowledge to facilitate the RNA design process. Extensive experiments demonstrate the effectiveness of our proposed method, providing a reliable baseline for future RNA design tasks. The source code and benchmark dataset are available at https://github.com/A4Bio/RDesign.
|
1412.1525
|
Jeremy Sumner
|
Michael D. Woodhams, Jes\'us Fern\'andez-S\'anchez, and Jeremy G.
Sumner
|
A new hierarchy of phylogenetic models consistent with heterogeneous
substitution rates
|
20 pages. Supplementary files available via email
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When the process underlying DNA substitutions varies across evolutionary
history, the standard Markov models underlying standard phylogenetic methods
are mathematically inconsistent. The most prominent example is the general time
reversible model (GTR) together with some, but not all, of its submodels. To
rectify this deficiency, Lie Markov models have been developed as the class of
models that are consistent in the face of a changing process of DNA
substitutions. Some well-known models in popular use are within this class, but
are either overly simplistic (e.g. the Kimura two-parameter model) or overly
complex (the general Markov model). On a diverse set of biological data sets,
we test a hierarchy of Lie Markov models spanning the full range of parameter
richness. Compared against the benchmark of the ever-popular GTR model, we find
that as a whole the Lie Markov models perform remarkably well, with the best
performing models having eight parameters and the ability to recognise the
distinction between purines and pyrimidines.
|
[
{
"created": "Thu, 4 Dec 2014 00:43:03 GMT",
"version": "v1"
}
] |
2014-12-05
|
[
[
"Woodhams",
"Michael D.",
""
],
[
"Fernández-Sánchez",
"Jesús",
""
],
[
"Sumner",
"Jeremy G.",
""
]
] |
When the process underlying DNA substitutions varies across evolutionary history, the standard Markov models underlying standard phylogenetic methods are mathematically inconsistent. The most prominent example is the general time reversible model (GTR) together with some, but not all, of its submodels. To rectify this deficiency, Lie Markov models have been developed as the class of models that are consistent in the face of a changing process of DNA substitutions. Some well-known models in popular use are within this class, but are either overly simplistic (e.g. the Kimura two-parameter model) or overly complex (the general Markov model). On a diverse set of biological data sets, we test a hierarchy of Lie Markov models spanning the full range of parameter richness. Compared against the benchmark of the ever-popular GTR model, we find that as a whole the Lie Markov models perform remarkably well, with the best performing models having eight parameters and the ability to recognise the distinction between purines and pyrimidines.
|
1010.4726
|
Peter Thomas PhD
|
Edward K. Agarwala, Hillel J. Chiel, Peter J. Thomas
|
Information Maximization Fails to Maximize Expected Utility in a Simple
Foraging Model
|
52 pages, 14 figures
| null | null | null |
q-bio.OT cs.IT math.IT physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information theory has explained the organization of many biological
phenomena, from the physiology of sensory receptive fields to the variability
of certain DNA sequence ensembles. Some scholars have proposed that information
should provide the central explanatory principle in biology, in the sense that
any behavioral strategy that is optimal for an organism's survival must
necessarily involve efficient information processing. We challenge this view by
providing a counterexample. We present an analytically tractable model for a
particular instance of a perception-action loop: a creature searching for a
food source confined to a one-dimensional ring world. The model incorporates
the statistical structure of the creature's world, the effects of the
creature's actions on that structure, and the creature's strategic decision
process. The model takes the form of a Markov process on an infinite
dimensional state space. To analyze it we construct an exact coarse graining
that reduces the model to a Markov process on a finite number of "information
states". This technique allows us to make quantitative comparisons between the
performance of an information-theoretically optimal strategy with other
candidate strategies on a food gathering task. We find that: 1. Information
optimal search does not necessarily optimize utility (expected food gain). 2.
The rank ordering of search strategies by information performance does not
predict their ordering by expected food obtained. 3. The relative advantage of
different strategies depends on the statistical structure of the environment,
in particular the variability of motion of the source. We conclude that there
is no simple relationship between information and utility. Behavioral
optimality does not imply information efficiency, nor is there a simple
tradeoff between gaining information about a food source versus obtaining the
food itself.
|
[
{
"created": "Fri, 22 Oct 2010 14:33:46 GMT",
"version": "v1"
}
] |
2010-10-25
|
[
[
"Agarwala",
"Edward K.",
""
],
[
"Chiel",
"Hillel J.",
""
],
[
"Thomas",
"Peter J.",
""
]
] |
Information theory has explained the organization of many biological phenomena, from the physiology of sensory receptive fields to the variability of certain DNA sequence ensembles. Some scholars have proposed that information should provide the central explanatory principle in biology, in the sense that any behavioral strategy that is optimal for an organism's survival must necessarily involve efficient information processing. We challenge this view by providing a counterexample. We present an analytically tractable model for a particular instance of a perception-action loop: a creature searching for a food source confined to a one-dimensional ring world. The model incorporates the statistical structure of the creature's world, the effects of the creature's actions on that structure, and the creature's strategic decision process. The model takes the form of a Markov process on an infinite dimensional state space. To analyze it we construct an exact coarse graining that reduces the model to a Markov process on a finite number of "information states". This technique allows us to make quantitative comparisons between the performance of an information-theoretically optimal strategy with other candidate strategies on a food gathering task. We find that: 1. Information optimal search does not necessarily optimize utility (expected food gain). 2. The rank ordering of search strategies by information performance does not predict their ordering by expected food obtained. 3. The relative advantage of different strategies depends on the statistical structure of the environment, in particular the variability of motion of the source. We conclude that there is no simple relationship between information and utility. Behavioral optimality does not imply information efficiency, nor is there a simple tradeoff between gaining information about a food source versus obtaining the food itself.
|
q-bio/0701004
|
Feng Yang
|
Feng Yang, Feng Qi, and Daniel A. Beard
|
Directionality is an inherent property of biochemical networks
|
A short version of the previous one, more concise and precise. It
includes 5 pages, 5 figures and 1 table
| null | null | null |
q-bio.MN
| null |
Thermodynamic constraints on reaction directions are inherent in the
structure of a given biochemical network. However, concrete procedures for
determining feasible reaction directions for large-scale metabolic networks are
not well established. This work introduces a systematic approach to compute
reaction directions, which are constrained by mass balance and thermodynamics,
for genome-scale networks. In addition, it is shown that the nonconvex solution
space constrained by physicochemical constraints can be approximated by a set
of linearized subspaces in which mass and thermodynamic balance are guaranteed.
The developed methodology can be used to {\it ab initio} predict reaction
directions of genome-scale networks based solely on the network stoichiometry.
|
[
{
"created": "Tue, 2 Jan 2007 22:26:38 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Feb 2007 18:06:05 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Yang",
"Feng",
""
],
[
"Qi",
"Feng",
""
],
[
"Beard",
"Daniel A.",
""
]
] |
Thermodynamic constraints on reaction directions are inherent in the structure of a given biochemical network. However, concrete procedures for determining feasible reaction directions for large-scale metabolic networks are not well established. This work introduces a systematic approach to compute reaction directions, which are constrained by mass balance and thermodynamics, for genome-scale networks. In addition, it is shown that the nonconvex solution space constrained by physicochemical constraints can be approximated by a set of linearized subspaces in which mass and thermodynamic balance are guaranteed. The developed methodology can be used to {\it ab initio} predict reaction directions of genome-scale networks based solely on the network stoichiometry.
|
1710.10153
|
Kelly Iarosz
|
E.L. Lameu, E.E.N. Macau, F.S. Borges, K.C. Iarosz, I.L. Caldas, R.R.
Borges, P.R. Protachevicz, R.L. Viana, A.M. Batista
|
Alterations in brain connectivity due to plasticity and synaptic delay
| null | null |
10.1140/epjst/e2018-00090-6
| null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain plasticity refers to the brain's ability to change neuronal connections
as a result of environmental stimuli, new experiences, or damage. In this work,
we study the effects of synaptic delay on both the coupling strengths and
synchronisation in a neuronal network with synaptic plasticity. We build a
network of Hodgkin-Huxley neurons in which plasticity is given by Hebbian
rules. We verify that, without time delay, the excitatory synapses from
high-frequency to low-frequency neurons become stronger and the inhibitory
synapses strengthen in the opposite direction; when the delay is increased, the
network presents a non-trivial topology. Regarding synchronisation, this
phenomenon is observed only for small values of the synaptic delay.
|
[
{
"created": "Fri, 27 Oct 2017 14:21:21 GMT",
"version": "v1"
}
] |
2018-11-14
|
[
[
"Lameu",
"E. L.",
""
],
[
"Macau",
"E. E. N.",
""
],
[
"Borges",
"F. S.",
""
],
[
"Iarosz",
"K. C.",
""
],
[
"Caldas",
"I. L.",
""
],
[
"Borges",
"R. R.",
""
],
[
"Protachevicz",
"P. R.",
""
],
[
"Viana",
"R. L.",
""
],
[
"Batista",
"A. M.",
""
]
] |
Brain plasticity refers to the brain's ability to change neuronal connections as a result of environmental stimuli, new experiences, or damage. In this work, we study the effects of synaptic delay on both the coupling strengths and synchronisation in a neuronal network with synaptic plasticity. We build a network of Hodgkin-Huxley neurons in which plasticity is given by Hebbian rules. We verify that, without time delay, the excitatory synapses from high-frequency to low-frequency neurons become stronger and the inhibitory synapses strengthen in the opposite direction; when the delay is increased, the network presents a non-trivial topology. Regarding synchronisation, this phenomenon is observed only for small values of the synaptic delay.
|
1305.6717
|
Hiroshi Toyoizumi
|
Hiroshi Toyoizumi, Jeremy Field
|
Dynamics of Social Queues
| null | null | null | null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Queues formed by social wasps to inherit the dominant position in the nest
are analyzed using a transient quasi-birth-and-death process. We show that
the extended nest lifetime due to division of labor between queen and
helpers has a large impact on nest productivity.
|
[
{
"created": "Wed, 29 May 2013 08:10:04 GMT",
"version": "v1"
}
] |
2013-05-30
|
[
[
"Toyoizumi",
"Hiroshi",
""
],
[
"Field",
"Jeremy",
""
]
] |
Queues formed by social wasps to inherit the dominant position in the nest are analyzed using a transient quasi-birth-and-death process. We show that the extended nest lifetime due to division of labor between queen and helpers has a large impact on nest productivity.
|
1204.1564
|
Jose Fontanari
|
Paulo F. C. Tilles and Jose F. Fontanari
|
Minimal model of associative learning for cross-situational lexicon
acquisition
| null |
J. Math. Psych. 56, 396-403 (2012)
|
10.1016/j.jmp.2012.11.002
| null |
q-bio.NC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An explanation for the acquisition of word-object mappings is associative
learning in a cross-situational scenario. Here we present analytical results on
the performance of a simple associative learning algorithm for acquiring a
one-to-one mapping between $N$ objects and $N$ words based solely on the
co-occurrence between objects and words. In particular, a learning trial in our
learning scenario consists of the presentation of $C + 1 < N$ objects together
with a target word, which refers to one of the objects in the context. We find
that the learning times are distributed exponentially and the learning rates
are given by $\ln{[\frac{N(N-1)}{C + (N-1)^{2}}]}$ in the case where the $N$
target words are sampled randomly and by $\frac{1}{N} \ln [\frac{N-1}{C}]$ in
the case where they follow a deterministic presentation sequence. This learning
performance is much superior to those exhibited by humans and by more realistic
learning algorithms in cross-situational experiments. We show that the
introduction of discrimination limitations using Weber's law and forgetting
reduces the performance of the associative algorithm to the human level.
|
[
{
"created": "Fri, 6 Apr 2012 20:57:07 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Jun 2012 12:04:12 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Aug 2012 12:15:10 GMT",
"version": "v3"
},
{
"created": "Mon, 17 Dec 2012 16:58:04 GMT",
"version": "v4"
}
] |
2012-12-18
|
[
[
"Tilles",
"Paulo F. C.",
""
],
[
"Fontanari",
"Jose F.",
""
]
] |
An explanation for the acquisition of word-object mappings is associative learning in a cross-situational scenario. Here we present analytical results on the performance of a simple associative learning algorithm for acquiring a one-to-one mapping between $N$ objects and $N$ words based solely on the co-occurrence between objects and words. In particular, a learning trial in our learning scenario consists of the presentation of $C + 1 < N$ objects together with a target word, which refers to one of the objects in the context. We find that the learning times are distributed exponentially and the learning rates are given by $\ln{[\frac{N(N-1)}{C + (N-1)^{2}}]}$ in the case where the $N$ target words are sampled randomly and by $\frac{1}{N} \ln [\frac{N-1}{C}]$ in the case where they follow a deterministic presentation sequence. This learning performance is much superior to those exhibited by humans and by more realistic learning algorithms in cross-situational experiments. We show that the introduction of discrimination limitations using Weber's law and forgetting reduces the performance of the associative algorithm to the human level.
|
1512.05143
|
Igor Florinsky
|
I. V. Florinsky, E. V. Selezneva, A. I. Kulikova
|
GIS-based support for the complex botanical studies at the Molnieboi
Spur, Altai
|
10 pages, 7 figures (corrected text, references added)
| null | null | null |
q-bio.QM physics.geo-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Molnieboi Spur is located at the northwestern margin of the Katun Range,
the high-mountain part of the Altai Mountains. Unique geological and
geophysical characteristics of the Molnieboi Spur made it an attractive target
for complex botanical studies including botanical, soil, geological,
geochemical, geophysical, radiation, and soil gas surveys and analyses. In this
paper, we present the first version of the geographic information system (GIS)
application for the Molnieboi Spur developed using the software QGIS. A digital
elevation model for the study area was derived from a detailed topographic map.
The database was filled with tabular data on about 100 parameters including:
eight botanical characteristics of the Lonicera caerulea local population, two
cytogenetic indices of Lonicera caerulea seeds, five types of biochemical
parameters of Lonicera caerulea leaves and fruits, three types of geochemical
characteristics of the local soils, three types of radiation parameters of the
local soils and Lonicera caerulea plants, and one soil gas parameter. The
results of the magnetometric survey were inserted as a raster image. A visual
analysis of the maps produced allows one to better understand the spatial
relationships between various natural components of the Molnieboi Spur.
|
[
{
"created": "Wed, 16 Dec 2015 12:20:42 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Dec 2015 11:16:14 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Dec 2015 20:21:09 GMT",
"version": "v3"
},
{
"created": "Mon, 21 Dec 2015 21:13:11 GMT",
"version": "v4"
}
] |
2015-12-23
|
[
[
"Florinsky",
"I. V.",
""
],
[
"Selezneva",
"E. V.",
""
],
[
"Kulikova",
"A. I.",
""
]
] |
The Molnieboi Spur is located at the northwestern margin of the Katun Range, the high-mountain part of the Altai Mountains. Unique geological and geophysical characteristics of the Molnieboi Spur made it an attractive target for complex botanical studies including botanical, soil, geological, geochemical, geophysical, radiation, and soil gas surveys and analyses. In this paper, we present the first version of the geographic information system (GIS) application for the Molnieboi Spur developed using the software QGIS. A digital elevation model for the study area was derived from a detailed topographic map. The database was filled with tabular data on about 100 parameters including: eight botanical characteristics of the Lonicera caerulea local population, two cytogenetic indices of Lonicera caerulea seeds, five types of biochemical parameters of Lonicera caerulea leaves and fruits, three types of geochemical characteristics of the local soils, three types of radiation parameters of the local soils and Lonicera caerulea plants, and one soil gas parameter. The results of the magnetometric survey were inserted as a raster image. A visual analysis of the maps produced allows one to better understand the spatial relationships between various natural components of the Molnieboi Spur.
|
1704.05906
|
Eugene Postnikov
|
Eugene B. Postnikov, Maria O. Tsoy, Maxim A. Kurochkin, Dmitry E.
Postnov
|
A fast method for the detection of vascular structure in images, based
on the continuous wavelet transform with the Morlet wavelet having a low
central frequency
|
6 pages, 3 figures
|
Proc. SPIE (2017) 10337
|
10.1117/12.2268427
| null |
q-bio.QM math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manual measurement of blood vessel diameter is a conventional component of
routine visual assessment of microcirculation, say, during optical
capillaroscopy. However, many modern optical methods for blood flow
measurements demand a reliable procedure for fully automated detection of
vessels and estimation of their diameter, which is a challenging task.
Specifically, if one measures the velocity of red blood cells by means of
laser speckle imaging, then visual measurements become impossible, while
velocity-based estimation has its own limitations. One promising approach is
based on fast switching of the illumination type, but it drastically reduces
the observation time and, hence, the achievable quality of images. In the
present work we address this problem by proposing an alternative method for
the processing of noisy images of vascular structure, which extracts a mask
denoting the locations of vessels, based on the application of the continuous
wavelet transform (CWT) with the Morlet wavelet having small central
frequencies. Such a method combines reasonable accuracy with the possibility
of fast direct application to images. Discussing the latter, we describe in
detail a new MATLAB implementation of the CWT with the Morlet wavelet in which
loops are completely replaced with element-by-element operations, which
drastically reduces the computation time.
|
[
{
"created": "Wed, 19 Apr 2017 19:29:48 GMT",
"version": "v1"
}
] |
2017-04-21
|
[
[
"Postnikov",
"Eugene B.",
""
],
[
"Tsoy",
"Maria O.",
""
],
[
"Kurochkin",
"Maxim A.",
""
],
[
"Postnov",
"Dmitry E.",
""
]
] |
Manual measurement of blood vessel diameter is a conventional component of routine visual assessment of microcirculation, say, during optical capillaroscopy. However, many modern optical methods for blood flow measurements demand a reliable procedure for fully automated detection of vessels and estimation of their diameter, which is a challenging task. Specifically, if one measures the velocity of red blood cells by means of laser speckle imaging, then visual measurements become impossible, while velocity-based estimation has its own limitations. One promising approach is based on fast switching of the illumination type, but it drastically reduces the observation time and, hence, the achievable quality of images. In the present work we address this problem by proposing an alternative method for the processing of noisy images of vascular structure, which extracts a mask denoting the locations of vessels, based on the application of the continuous wavelet transform (CWT) with the Morlet wavelet having small central frequencies. Such a method combines reasonable accuracy with the possibility of fast direct application to images. Discussing the latter, we describe in detail a new MATLAB implementation of the CWT with the Morlet wavelet in which loops are completely replaced with element-by-element operations, which drastically reduces the computation time.
|
1403.2051
|
Vyacheslav Yukalov
|
V.I. Yukalov, E.P. Yukalova, and D. Sornette
|
Population dynamics with nonlinear delayed carrying capacity
|
Latex file, 24 pages
|
Int. J. Bifur. Chaos 24 (2014) 1450021
|
10.1142/S0218127414500217
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a class of evolution equations describing population dynamics in
the presence of a carrying capacity depending on the population with delay. In
an earlier work, we presented an exhaustive classification of the logistic
equation where the carrying capacity is linearly dependent on the population
with a time delay, which we refer to as the "linear delayed carrying capacity"
model. Here, we generalize it to the case of a nonlinear delayed carrying
capacity. The nonlinear functional form of the carrying capacity characterizes
the delayed feedback of the evolving population on the capacity of their
surrounding by either creating additional means for survival or destroying the
available resources. The previously studied linear approximation for the
capacity assumed weak feedback, while the nonlinear form is applicable to
arbitrarily strong feedback. The nonlinearity essentially changes the behavior
of solutions to the evolution equation, as compared to the linear case. All
admissible dynamical regimes are analyzed, which can be of the following types:
punctuated unbounded growth, punctuated increase or punctuated degradation to a
stationary state, convergence to a stationary state with sharp reversals of
plateaus, oscillatory attenuation, everlasting fluctuations, everlasting
up-down plateau reversals, and divergence in finite time. The theorem is proved
that, for the case characterizing the evolution under gain and competition,
solutions are always bounded, if the feedback is destructive. We find that even
a small noise level profoundly affects the position of the finite-time
singularities. Finally, we demonstrate the feasibility of predicting the
critical time of solutions having finite-time singularities from the knowledge
of a simple quadratic approximation of the early time dynamics.
|
[
{
"created": "Sun, 9 Mar 2014 12:23:55 GMT",
"version": "v1"
}
] |
2015-06-19
|
[
[
"Yukalov",
"V. I.",
""
],
[
"Yukalova",
"E. P.",
""
],
[
"Sornette",
"D.",
""
]
] |
We consider a class of evolution equations describing population dynamics in the presence of a carrying capacity depending on the population with delay. In an earlier work, we presented an exhaustive classification of the logistic equation where the carrying capacity is linearly dependent on the population with a time delay, which we refer to as the "linear delayed carrying capacity" model. Here, we generalize it to the case of a nonlinear delayed carrying capacity. The nonlinear functional form of the carrying capacity characterizes the delayed feedback of the evolving population on the capacity of their surrounding by either creating additional means for survival or destroying the available resources. The previously studied linear approximation for the capacity assumed weak feedback, while the nonlinear form is applicable to arbitrarily strong feedback. The nonlinearity essentially changes the behavior of solutions to the evolution equation, as compared to the linear case. All admissible dynamical regimes are analyzed, which can be of the following types: punctuated unbounded growth, punctuated increase or punctuated degradation to a stationary state, convergence to a stationary state with sharp reversals of plateaus, oscillatory attenuation, everlasting fluctuations, everlasting up-down plateau reversals, and divergence in finite time. The theorem is proved that, for the case characterizing the evolution under gain and competition, solutions are always bounded, if the feedback is destructive. We find that even a small noise level profoundly affects the position of the finite-time singularities. Finally, we demonstrate the feasibility of predicting the critical time of solutions having finite-time singularities from the knowledge of a simple quadratic approximation of the early time dynamics.
|
2306.10040
|
Giulia Chiari
|
Giulia Chiari, Giada Fiandaca, Marcello Edoardo Delitala
|
Hypoxia-related radiotherapy resistance in tumours: treatment efficacy
investigation in an eco-evolutionary perspective
| null |
Front. Appl. Math. Stat., 13 July 2023 Sec. Mathematical Biology
|
10.3389/fams.2023.1193191
| null |
q-bio.PE physics.med-ph q-bio.TO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the study of therapeutic strategies for the treatment of cancer,
eco-evolutionary dynamics are of particular interest, since characteristics of
the tumour population, interaction with the environment and effects of the
treatment, influence the geometric and epigenetic characterization of the
tumour with direct consequences on the efficacy of the therapy and possible
relapses. In particular, when considering radiotherapy, oxygen concentration
plays a central role both in determining the effectiveness of the treatment and
the selective pressure due to hypoxia. We propose a mathematical model, settled
in the framework of epigenetically-structured population dynamics and
formulated in terms of systems of coupled non-linear integro-differential
equations, that aims to capture these phenomena and to provide a predictive
for the tumour mass evolution and therapeutic effects. The outcomes of the
simulations show how the model is able to explain the impact of environmental
selection and therapies on the evolution of the mass, motivating observed
dynamics such as relapses and therapeutic failures. Furthermore, it offers a
first hint for the development of therapies which can be adapted to overcome
problems of resistance and relapses.
|
[
{
"created": "Sat, 10 Jun 2023 08:40:28 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jun 2023 09:33:11 GMT",
"version": "v2"
}
] |
2023-07-19
|
[
[
"Chiari",
"Giulia",
""
],
[
"Fiandaca",
"Giada",
""
],
[
"Delitala",
"Marcello Edoardo",
""
]
] |
In the study of therapeutic strategies for the treatment of cancer, eco-evolutionary dynamics are of particular interest, since characteristics of the tumour population, interaction with the environment and effects of the treatment, influence the geometric and epigenetic characterization of the tumour with direct consequences on the efficacy of the therapy and possible relapses. In particular, when considering radiotherapy, oxygen concentration plays a central role both in determining the effectiveness of the treatment and the selective pressure due to hypoxia. We propose a mathematical model, settled in the framework of epigenetically-structured population dynamics and formulated in terms of systems of coupled non-linear integro-differential equations, that aims to capture these phenomena and to provide a predictive tool for the tumour mass evolution and therapeutic effects. The outcomes of the simulations show how the model is able to explain the impact of environmental selection and therapies on the evolution of the mass, motivating observed dynamics such as relapses and therapeutic failures. Furthermore, it offers a first hint for the development of therapies which can be adapted to overcome problems of resistance and relapses.
|
q-bio/0403044
|
Markus Kollmann
|
H.H. von Gr\"unberg and M. Kollmann
|
Variations in Substitution Rate in Human and Mouse Genomes
|
4 pages, 3 Figs
| null |
10.1103/PhysRevLett.93.208102
| null |
q-bio.GN q-bio.PE
| null |
We present a method to quantify spatial fluctuations of the substitution rate
on different length scales throughout genomes of eukaryotes. The fluctuations
on large length scales are found to be predominantly a consequence of a
coarse-graining effect of fluctuations on shorter length scales. This is
verified for both the mouse and the human genome. We also found that both
species show similar standard deviation of fluctuations even though their mean
substitution rate differs by a factor of two. Our method furthermore allows us
to determine time-resolved substitution rate maps from which we can compute
auto-correlation functions in order to quantify how fast the spatial
fluctuations in substitution rate change in time.
|
[
{
"created": "Wed, 31 Mar 2004 13:55:28 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"von Grünberg",
"H. H.",
""
],
[
"Kollmann",
"M.",
""
]
] |
We present a method to quantify spatial fluctuations of the substitution rate on different length scales throughout genomes of eukaryotes. The fluctuations on large length scales are found to be predominantly a consequence of a coarse-graining effect of fluctuations on shorter length scales. This is verified for both the mouse and the human genome. We also found that both species show similar standard deviation of fluctuations even though their mean substitution rate differs by a factor of two. Our method furthermore allows us to determine time-resolved substitution rate maps from which we can compute auto-correlation functions in order to quantify how fast the spatial fluctuations in substitution rate change in time.
|
2010.08359
|
Julien Yann Dutheil
|
Julien Y. Dutheil
|
Towards more realistic models of genomes in populations: the
Markov-modulated sequentially Markov coalescent
|
24 pages, 5 figures
|
Probabilistic Structures in Evolution (E. Baake and A.
Wakolbinger, eds.), EMS Press, Berlin, 2021, pp. 383-408
| null | null |
q-bio.PE q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
The development of coalescent theory paved the way to statistical inference
from population genetic data. In the genomic era, however, coalescent models
are limited due to the complexity of the underlying ancestral recombination
graph. The sequentially Markov coalescent (SMC) is a heuristic that enables the
modelling of complete genomes under the coalescent framework. While it empowers
the inference of detailed demographic history of a population from as few as
one diploid genome, current implementations of the SMC make unrealistic
assumptions about the homogeneity of the coalescent process along the genome,
ignoring the intrinsic spatial variability of parameters such as the
recombination rate. Here, I review the historical developments of SMC models
and discuss the evidence for parameter heterogeneity. I then survey approaches
to handle this heterogeneity, focusing on a recently developed extension of the
SMC.
|
[
{
"created": "Fri, 16 Oct 2020 12:56:21 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Dutheil",
"Julien Y.",
""
]
] |
The development of coalescent theory paved the way to statistical inference from population genetic data. In the genomic era, however, coalescent models are limited due to the complexity of the underlying ancestral recombination graph. The sequentially Markov coalescent (SMC) is a heuristic that enables the modelling of complete genomes under the coalescent framework. While it empowers the inference of detailed demographic history of a population from as few as one diploid genome, current implementations of the SMC make unrealistic assumptions about the homogeneity of the coalescent process along the genome, ignoring the intrinsic spatial variability of parameters such as the recombination rate. Here, I review the historical developments of SMC models and discuss the evidence for parameter heterogeneity. I then survey approaches to handle this heterogeneity, focusing on a recently developed extension of the SMC.
|
2007.08782
|
Yukihiro Murakami
|
Elizabeth Gross, Leo van Iersel, Remie Janssen, Mark Jones, Colby Long
and Yukihiro Murakami
|
Distinguishing level-1 phylogenetic networks on the basis of data
generated by Markov processes
|
24 pages, 10 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phylogenetic networks can represent evolutionary events that cannot be
described by phylogenetic trees. These networks are able to incorporate
reticulate evolutionary events such as hybridization, introgression, and
lateral gene transfer. Recently, network-based Markov models of DNA sequence
evolution have been introduced along with model-based methods for
reconstructing phylogenetic networks. For these methods to be consistent, the
network parameter needs to be identifiable from data generated under the model.
Here, we show that the semi-directed network parameter of a triangle-free,
level-1 network model with any fixed number of reticulation vertices is
generically identifiable under the Jukes-Cantor, Kimura 2-parameter, or Kimura
3-parameter constraints.
|
[
{
"created": "Fri, 17 Jul 2020 07:15:33 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Nov 2020 17:12:19 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Jul 2021 09:33:45 GMT",
"version": "v3"
}
] |
2021-07-08
|
[
[
"Gross",
"Elizabeth",
""
],
[
"van Iersel",
"Leo",
""
],
[
"Janssen",
"Remie",
""
],
[
"Jones",
"Mark",
""
],
[
"Long",
"Colby",
""
],
[
"Murakami",
"Yukihiro",
""
]
] |
Phylogenetic networks can represent evolutionary events that cannot be described by phylogenetic trees. These networks are able to incorporate reticulate evolutionary events such as hybridization, introgression, and lateral gene transfer. Recently, network-based Markov models of DNA sequence evolution have been introduced along with model-based methods for reconstructing phylogenetic networks. For these methods to be consistent, the network parameter needs to be identifiable from data generated under the model. Here, we show that the semi-directed network parameter of a triangle-free, level-1 network model with any fixed number of reticulation vertices is generically identifiable under the Jukes-Cantor, Kimura 2-parameter, or Kimura 3-parameter constraints.
|
1709.07193
|
Thu Thuy
|
Charles Burdet (DEBRC, IAME), Sakina Sayah-Jeanne, Thu Thuy Nguyen
(IAME), Christine Miossec, Nathalie Saint-Lu, Mark Pulse, William Weiss,
Antoine Andremont (IAME), France Mentr\'e (IAME, DEBRC), Jean De Gunzburg
|
Protection of hamsters from mortality by reducing fecal moxifloxacin
concentration with DAV131A in a model of moxifloxacin-induced Clostridium
difficile colitis
| null |
Antimicrobial Agents and Chemotherapy, American Society for
Microbiology, 2017
|
10.1128/AAC.00543-17
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Lowering the gut exposure to antibiotics during treatments can
prevent microbiota disruption. We evaluated the effect of an activated
charcoal-based adsorbent, DAV131A, on fecal free moxifloxacin concentration and
mortality in a hamster model of moxifloxacin-induced C. difficile infection.
Methods: 215 hamsters receiving moxifloxacin subcutaneously (D1-D5) were orally
infected at D3 with C. difficile spores. They received various doses
(0-1800 mg/kg/day) and schedules (BID, TID) of DAV131A (D1-D8). Moxifloxacin
concentration and C. difficile counts were determined at D3, and mortality at
D12. We compared mortality, moxifloxacin concentration and C. difficile counts
according to DAV131A regimens, and modelled the link between DAV131A regimen,
moxifloxacin concentration and mortality. Results: All hamsters that received
no DAV131A died, but none of those that received 1800 mg/kg/day. A significant
dose-dependent relationship between DAV131A dose and (i) mortality rates, (ii)
moxifloxacin concentration and (iii) C. difficile counts was evidenced.
Mathematical modeling suggested that (i) lowering moxifloxacin concentration at
D3, which was 58 $\mu$g/g (95% CI = 50-66) without DAV131A, to 17 $\mu$g/g
(14-21) would reduce mortality by 90%, and (ii) this would be achieved with a
daily DAV131A dose of 703 mg/kg (596-809). Conclusions: In this model of C.
difficile infection, DAV131A reduced mortality in a dose-dependent manner by
decreasing fecal free moxifloxacin concentration.
|
[
{
"created": "Thu, 21 Sep 2017 08:04:55 GMT",
"version": "v1"
}
] |
2017-09-22
|
[
[
"Burdet",
"Charles",
"",
"DEBRC, IAME"
],
[
"Sayah-Jeanne",
"Sakina",
"",
"IAME"
],
[
"Nguyen",
"Thu Thuy",
"",
"IAME"
],
[
"Miossec",
"Christine",
"",
"IAME"
],
[
"Saint-Lu",
"Nathalie",
"",
"IAME"
],
[
"Pulse",
"Mark",
"",
"IAME"
],
[
"Weiss",
"William",
"",
"IAME"
],
[
"Andremont",
"Antoine",
"",
"IAME"
],
[
"Mentré",
"France",
"",
"IAME, DEBRC"
],
[
"De Gunzburg",
"Jean",
""
]
] |
Background: Lowering the gut exposure to antibiotics during treatments can prevent microbiota disruption. We evaluated the effect of an activated charcoal-based adsorbent, DAV131A, on fecal free moxifloxacin concentration and mortality in a hamster model of moxifloxacin-induced C. difficile infection. Methods: 215 hamsters receiving moxifloxacin subcutaneously (D1-D5) were orally infected at D3 with C. difficile spores. They received various doses (0-1800 mg/kg/day) and schedules (BID, TID) of DAV131A (D1-D8). Moxifloxacin concentration and C. difficile counts were determined at D3, and mortality at D12. We compared mortality, moxifloxacin concentration and C. difficile counts according to DAV131A regimens, and modelled the link between DAV131A regimen, moxifloxacin concentration and mortality. Results: All hamsters that received no DAV131A died, but none of those that received 1800 mg/kg/day. A significant dose-dependent relationship between DAV131A dose and (i) mortality rates, (ii) moxifloxacin concentration and (iii) C. difficile counts was evidenced. Mathematical modeling suggested that (i) lowering moxifloxacin concentration at D3, which was 58 $\mu$g/g (95% CI = 50-66) without DAV131A, to 17 $\mu$g/g (14-21) would reduce mortality by 90%, and (ii) this would be achieved with a daily DAV131A dose of 703 mg/kg (596-809). Conclusions: In this model of C. difficile infection, DAV131A reduced mortality in a dose-dependent manner by decreasing fecal free moxifloxacin concentration.
|
1707.04129
|
Gerard Rinkus
|
Gerard J. Rinkus
|
A cortical sparse distributed coding model linking mini- and
macrocolumn-scale functionality
|
13 pages, 5 figures
|
Frontiers in Neuroanatomy (2010) 4:17
|
10.3389/fnana.2010.00017
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
No generic function for the minicolumn, i.e., one that would apply equally
well to all cortical areas and species, has yet been proposed. I propose that
the minicolumn does have a generic functionality, which only becomes clear when
seen in the context of the function of the higher-level, subsuming unit, the
macrocolumn. I propose that: a) a macrocolumn's function is to store sparse
distributed representations of its inputs and to be a recognizer of those
inputs; and b) the generic function of the minicolumn is to enforce
macrocolumnar code sparseness. The minicolumn, defined here as a physically
localized pool of ~20 L2/3 pyramidals, does this by acting as a winner-take-all
(WTA) competitive module, implying that macrocolumnar codes consist of ~70
active L2/3 cells, assuming ~70 minicolumns per macrocolumn. I describe an
algorithm for activating these codes during both learning and retrievals, which
causes more similar inputs to map to more highly intersecting codes, a property
which yields ultra-fast (immediate, first-shot) storage and retrieval. The
algorithm achieves this by adding an amount of randomness (noise) into the code
selection process, which is inversely proportional to an input's familiarity. I
propose a possible mapping of the algorithm onto cortical circuitry, and adduce
evidence for a neuromodulatory implementation of this familiarity-contingent
noise mechanism. The model is distinguished from other recent columnar cortical
circuit models in proposing a generic minicolumnar function in which a group of
cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of
the sparse distributed macrocolumnar code.
|
[
{
"created": "Thu, 13 Jul 2017 13:56:51 GMT",
"version": "v1"
}
] |
2017-07-14
|
[
[
"Rinkus",
"Gerard J.",
""
]
] |
No generic function for the minicolumn, i.e., one that would apply equally well to all cortical areas and species, has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that: a) a macrocolumn's function is to store sparse distributed representations of its inputs and to be a recognizer of those inputs; and b) the generic function of the minicolumn is to enforce macrocolumnar code sparseness. The minicolumn, defined here as a physically localized pool of ~20 L2/3 pyramidals, does this by acting as a winner-take-all (WTA) competitive module, implying that macrocolumnar codes consist of ~70 active L2/3 cells, assuming ~70 minicolumns per macrocolumn. I describe an algorithm for activating these codes during both learning and retrievals, which causes more similar inputs to map to more highly intersecting codes, a property which yields ultra-fast (immediate, first-shot) storage and retrieval. The algorithm achieves this by adding an amount of randomness (noise) into the code selection process, which is inversely proportional to an input's familiarity. I propose a possible mapping of the algorithm onto cortical circuitry, and adduce evidence for a neuromodulatory implementation of this familiarity-contingent noise mechanism. The model is distinguished from other recent columnar cortical circuit models in proposing a generic minicolumnar function in which a group of cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of the sparse distributed macrocolumnar code.
|
2102.03276
|
Daniel Charlebois
|
Kevin S. Farquhar, Samira Rasouli Koohi, and Daniel A. Charlebois
|
Does Non-Genetic Heterogeneity Facilitate the Development of Genetic
Drug Resistance?
|
11 pages, 2 figures
|
BioEssays, 43: e2100043 (2021)
|
10.1002/bies.202100043
| null |
q-bio.PE physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Non-genetic forms of antimicrobial drug resistance can result from
cell-to-cell variability that is not encoded in the genetic material. Data from
recent studies also suggest that non-genetic mechanisms can facilitate the
development of genetic drug resistance. In this Perspective article, we
speculate on how the interplay between non-genetic and genetic mechanisms may
affect microbial adaptation and evolution during drug treatment. We argue that
cellular heterogeneity arising from fluctuations in gene expression, epigenetic
modifications, as well as genetic changes contributes to drug resistance at
different timescales, and that the interplay between these mechanisms may
influence the evolutionary dynamics of pathogen resistance. Accordingly,
developing a better understanding of non-genetic mechanisms in drug resistance
and how they interact with genetic mechanisms will enhance our ability to
combat antimicrobial resistance.
|
[
{
"created": "Fri, 5 Feb 2021 16:31:58 GMT",
"version": "v1"
}
] |
2022-06-23
|
[
[
"Farquhar",
"Kevin S.",
""
],
[
"Koohi",
"Samira Rasouli",
""
],
[
"Charlebois",
"Daniel A.",
""
]
] |
Non-genetic forms of antimicrobial drug resistance can result from cell-to-cell variability that is not encoded in the genetic material. Data from recent studies also suggest that non-genetic mechanisms can facilitate the development of genetic drug resistance. In this Perspective article, we speculate on how the interplay between non-genetic and genetic mechanisms may affect microbial adaptation and evolution during drug treatment. We argue that cellular heterogeneity arising from fluctuations in gene expression, epigenetic modifications, as well as genetic changes contributes to drug resistance at different timescales, and that the interplay between these mechanisms may influence the evolutionary dynamics of pathogen resistance. Accordingly, developing a better understanding of non-genetic mechanisms in drug resistance and how they interact with genetic mechanisms will enhance our ability to combat antimicrobial resistance.
|
1210.4948
|
Michael Courtney
|
Joshua Courtney, Jessica Abbott, Kerri Schmidt, and Michael Courtney
|
Plump Cutthroat Trout and Thin Rainbow Trout in a Lentic Ecosystem
|
7 pages, 2 tables, 3 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Much has been written about introduced rainbow trout
(Oncorhynchus mykiss) interbreeding and outcompeting cutthroat trout
(Oncorhynchus clarkii). However, the specific mechanisms by which rainbow trout
and their hybrids outcompete cutthroat trout have not been thoroughly explored,
and the published data is limited to lotic ecosystems. Materials and Methods:
Samples of rainbow trout and cutthroat trout were obtained from a lentic
ecosystem by angling. The total length and weight of each fish was measured and
the relative weight of each fish was computed (Anderson R.O., Neumann R.M.
1996. Length, Weight, and Associated Structural Indices, Pp. 447-481. In:
Murphy B.E. and Willis D.W. (eds.) Fisheries Techniques, second edition.
American Fisheries Society.), along with the mean and uncertainty in the mean
for each species. Data from an independent source (K.D. Carlander, 1969.
Handbook of Freshwater Fishery Biology, Volume One, Iowa University Press,
Ames.) was also used to generate mean weight-length curves, as well as 25th and
75th percentile curves for each species to allow further comparison. Results:
The mean relative weight of the rainbow trout was 72.5 (+/- 2.1); whereas, the
mean relative weight of the cutthroat trout was 101.0 (+/- 4.9). The rainbow
trout were thin; 80% weighed below the 25th percentile. The cutthroat trout
were plump; 86% weighed above the 75th percentile, and 29% were above the
heaviest recorded specimens at a given length in the Carlander (1969) data set.
Conclusion: This data casts doubt on the hypothesis that rainbow trout are
strong food competitors with cutthroat trout in lentic ecosystems. On the
contrary, in the lake under study, the cutthroat trout seem to be outcompeting
rainbow trout for the available food.
|
[
{
"created": "Wed, 17 Oct 2012 20:15:28 GMT",
"version": "v1"
}
] |
2012-10-19
|
[
[
"Courtney",
"Joshua",
""
],
[
"Abbott",
"Jessica",
""
],
[
"Schmidt",
"Kerri",
""
],
[
"Courtney",
"Michael",
""
]
] |
Background: Much has been written about introduced rainbow trout (Oncorhynchus mykiss) interbreeding and outcompeting cutthroat trout (Oncorhynchus clarkii). However, the specific mechanisms by which rainbow trout and their hybrids outcompete cutthroat trout have not been thoroughly explored, and the published data is limited to lotic ecosystems. Materials and Methods: Samples of rainbow trout and cutthroat trout were obtained from a lentic ecosystem by angling. The total length and weight of each fish was measured and the relative weight of each fish was computed (Anderson R.O., Neumann R.M. 1996. Length, Weight, and Associated Structural Indices, Pp. 447-481. In: Murphy B.E. and Willis D.W. (eds.) Fisheries Techniques, second edition. American Fisheries Society.), along with the mean and uncertainty in the mean for each species. Data from an independent source (K.D. Carlander, 1969. Handbook of Freshwater Fishery Biology, Volume One, Iowa University Press, Ames.) was also used to generate mean weight-length curves, as well as 25th and 75th percentile curves for each species to allow further comparison. Results: The mean relative weight of the rainbow trout was 72.5 (+/- 2.1); whereas, the mean relative weight of the cutthroat trout was 101.0 (+/- 4.9). The rainbow trout were thin; 80% weighed below the 25th percentile. The cutthroat trout were plump; 86% weighed above the 75th percentile, and 29% were above the heaviest recorded specimens at a given length in the Carlander (1969) data set. Conclusion: This data casts doubt on the hypothesis that rainbow trout are strong food competitors with cutthroat trout in lentic ecosystems. On the contrary, in the lake under study, the cutthroat trout seem to be outcompeting rainbow trout for the available food.
|
1211.4251
|
Claus O. Wilke
|
Matthew Z. Tien, Austin G. Meyer, Dariya K. Sydykova, Stephanie J.
Spielman, and Claus O. Wilke
|
Maximum allowed solvent accessibilites of residues in proteins
|
16 pages, 4 figures
|
PLoS ONE 8(11): e80635
|
10.1371/journal.pone.0080635
| null |
q-bio.BM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The relative solvent accessibility (RSA) of a residue in a protein measures
the extent of burial or exposure of that residue in the 3D structure. RSA is
frequently used to describe a protein's biophysical or evolutionary properties.
To calculate RSA, a residue's solvent accessibility (ASA) needs to be
normalized by a suitable reference value for the given amino acid; several
normalization scales have previously been proposed. However, these scales do
not provide tight upper bounds on ASA values frequently observed in empirical
crystal structures. Instead, they underestimate the largest allowed ASA values,
by up to 20%. As a result, many empirical crystal structures contain residues
that seem to have RSA values in excess of one. Here, we derive a new
normalization scale that does provide a tight upper bound on observed ASA
values. We pursue two complementary strategies, one based on extensive analysis
of empirical structures and one based on systematic enumeration of
biophysically allowed tripeptides. Both approaches yield congruent results that
consistently exceed published values. We conclude that previously published ASA
normalization values were too small, primarily because the conformations that
maximize ASA had not been correctly identified. As an application of our
results, we show that empirically derived hydrophobicity scales are sensitive
to accurate RSA calculation, and we derive new hydrophobicity scales that show
increased correlation with experimentally measured scales.
|
[
{
"created": "Sun, 18 Nov 2012 19:51:45 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Jan 2013 18:18:55 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Sep 2013 22:56:13 GMT",
"version": "v3"
}
] |
2013-11-28
|
[
[
"Tien",
"Matthew Z.",
""
],
[
"Meyer",
"Austin G.",
""
],
[
"Sydykova",
"Dariya K.",
""
],
[
"Spielman",
"Stephanie J.",
""
],
[
"Wilke",
"Claus O.",
""
]
] |
The relative solvent accessibility (RSA) of a residue in a protein measures the extent of burial or exposure of that residue in the 3D structure. RSA is frequently used to describe a protein's biophysical or evolutionary properties. To calculate RSA, a residue's solvent accessibility (ASA) needs to be normalized by a suitable reference value for the given amino acid; several normalization scales have previously been proposed. However, these scales do not provide tight upper bounds on ASA values frequently observed in empirical crystal structures. Instead, they underestimate the largest allowed ASA values, by up to 20%. As a result, many empirical crystal structures contain residues that seem to have RSA values in excess of one. Here, we derive a new normalization scale that does provide a tight upper bound on observed ASA values. We pursue two complementary strategies, one based on extensive analysis of empirical structures and one based on systematic enumeration of biophysically allowed tripeptides. Both approaches yield congruent results that consistently exceed published values. We conclude that previously published ASA normalization values were too small, primarily because the conformations that maximize ASA had not been correctly identified. As an application of our results, we show that empirically derived hydrophobicity scales are sensitive to accurate RSA calculation, and we derive new hydrophobicity scales that show increased correlation with experimentally measured scales.
|
2104.06857
|
Ioannis Kontoyiannis
|
Jussi Taipale, Ioannis Kontoyiannis, and Sten Linnarsson
|
Population-scale testing can suppress the spread of infectious disease
|
This paper is based, in part, on an earlier manuscript that appears
as medRxiv 2020.04.27.20078329. This is a significantly extended version,
including a new and more extensive mathematical analysis. The present
manuscript was written in September 2020. The version included here adds
some additional bibliographical references
| null | null | null |
q-bio.PE math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
Major advances in public health have resulted from disease prevention.
However, prevention of a new infectious disease by vaccination or
pharmaceuticals is made difficult by the slow process of vaccine and drug
development. We propose an additional intervention that allows rapid control of
emerging infectious diseases, and can also be used to eradicate diseases that
rely almost exclusively on human-to-human transmission. The intervention is
based on (1) testing every individual for the disease, (2) repeatedly, and (3)
isolation of infected individuals. We show here that at a sufficient rate of
testing, the reproduction number is reduced below 1.0 and the epidemic will
rapidly collapse. The approach does not rely on strong or unrealistic
assumptions about test accuracy, isolation compliance, population structure or
epidemiological parameters, and its success can be monitored in real time by
following the test positivity rate. In addition to the compliance rate and
false negatives, the required rate of testing depends on the design of the
testing regime, with concurrent testing outperforming random sampling. Provided
that results are obtained rapidly, the test frequency required to suppress an
epidemic is monotonic and near-linear with respect to R0, the infectious
period, and the fraction of susceptible individuals. The testing regime is
effective against both early phase and established epidemics, and additive to
other interventions (e.g. contact tracing and social distancing). It is also
robust to failure: any rate of testing reduces the number of infections,
improving both public health and economic conditions. These conclusions are
based on rigorous analysis and simulations of appropriate epidemiological
models. A mass-produced, disposable test that could be used at home would be
ideal, due to the optimal performance of concurrent tests that return immediate
results.
|
[
{
"created": "Wed, 14 Apr 2021 13:48:38 GMT",
"version": "v1"
}
] |
2021-04-15
|
[
[
"Taipale",
"Jussi",
""
],
[
"Kontoyiannis",
"Ioannis",
""
],
[
"Linnarsson",
"Sten",
""
]
] |
Major advances in public health have resulted from disease prevention. However, prevention of a new infectious disease by vaccination or pharmaceuticals is made difficult by the slow process of vaccine and drug development. We propose an additional intervention that allows rapid control of emerging infectious diseases, and can also be used to eradicate diseases that rely almost exclusively on human-to-human transmission. The intervention is based on (1) testing every individual for the disease, (2) repeatedly, and (3) isolation of infected individuals. We show here that at a sufficient rate of testing, the reproduction number is reduced below 1.0 and the epidemic will rapidly collapse. The approach does not rely on strong or unrealistic assumptions about test accuracy, isolation compliance, population structure or epidemiological parameters, and its success can be monitored in real time by following the test positivity rate. In addition to the compliance rate and false negatives, the required rate of testing depends on the design of the testing regime, with concurrent testing outperforming random sampling. Provided that results are obtained rapidly, the test frequency required to suppress an epidemic is monotonic and near-linear with respect to R0, the infectious period, and the fraction of susceptible individuals. The testing regime is effective against both early phase and established epidemics, and additive to other interventions (e.g. contact tracing and social distancing). It is also robust to failure: any rate of testing reduces the number of infections, improving both public health and economic conditions. These conclusions are based on rigorous analysis and simulations of appropriate epidemiological models. A mass-produced, disposable test that could be used at home would be ideal, due to the optimal performance of concurrent tests that return immediate results.
|
1304.0399
|
Fernando Fabian Montani
|
Fernando Montani, Emilia B. Deleglise, Osvaldo A. Rosso
|
Efficiency characterization of a large neuronal network: a causal
information approach
|
26 pages, 3 Figures; Physica A, in press
| null |
10.1016/j.physa.2013.12.053
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When inhibitory neurons constitute about 40% of neurons they could have an
important antinociceptive role, as they would easily regulate the level of
activity of other neurons. We consider a simple network of cortical spiking
neurons with axonal conduction delays and spike timing dependent plasticity,
representative of a cortical column or hypercolumn with a large proportion of
inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics
and is interconnected randomly to other neurons. The network dynamics is
investigated by estimating the Bandt and Pompe probability distribution function
associated to the interspike intervals and taking different degrees of
inter-connectivity across neurons. More specifically we take into account the
fine temporal ``structures'' of the complex neuronal signals not just by using
the probability distributions associated to the inter spike intervals, but
instead considering much more subtle measures accounting for their causal
information: the Shannon permutation entropy, Fisher permutation information
and permutation statistical complexity. This allows us to investigate how the
information of the system might saturate to a finite value as the degree of
inter-connectivity across neurons grows, inferring the emergent dynamical
properties of the system.
|
[
{
"created": "Mon, 1 Apr 2013 17:36:49 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Jan 2014 14:53:37 GMT",
"version": "v2"
}
] |
2014-01-28
|
[
[
"Montani",
"Fernando",
""
],
[
"Deleglise",
"Emilia B.",
""
],
[
"Rosso",
"Osvaldo A.",
""
]
] |
When inhibitory neurons constitute about 40% of neurons they could have an important antinociceptive role, as they would easily regulate the level of activity of other neurons. We consider a simple network of cortical spiking neurons with axonal conduction delays and spike timing dependent plasticity, representative of a cortical column or hypercolumn with a large proportion of inhibitory neurons. Each neuron fires following Hodgkin-Huxley-like dynamics and is interconnected randomly to other neurons. The network dynamics is investigated by estimating the Bandt and Pompe probability distribution function associated to the interspike intervals and taking different degrees of inter-connectivity across neurons. More specifically we take into account the fine temporal ``structures'' of the complex neuronal signals not just by using the probability distributions associated to the inter spike intervals, but instead considering much more subtle measures accounting for their causal information: the Shannon permutation entropy, Fisher permutation information and permutation statistical complexity. This allows us to investigate how the information of the system might saturate to a finite value as the degree of inter-connectivity across neurons grows, inferring the emergent dynamical properties of the system.
|
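The three ordinal measures named in this abstract all build on the Bandt-Pompe distribution of ordinal patterns. As a hedged illustration (the function name and test series are mine, not from the paper), a minimal normalized permutation entropy might look like:

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(series, d=3):
    """Normalized Shannon permutation entropy (Bandt & Pompe).

    Each length-d window is mapped to its ordinal pattern (the index
    order that sorts the window); the Shannon entropy of the pattern
    frequencies is then normalized by log(d!) so the result lies in
    [0, 1].
    """
    patterns = Counter(
        tuple(sorted(range(d), key=lambda i: series[j + i]))
        for j in range(len(series) - d + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * log(c / total) for c in patterns.values())
    return h / log(factorial(d))

# A monotone series visits a single ordinal pattern, so entropy is 0.
print(permutation_entropy([1, 2, 3, 4, 5, 6]) == 0.0)  # True
```

The Fisher permutation information and statistical complexity of the abstract are further functionals of the same pattern distribution.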
2209.08694
|
Ketevi A. Assamagan
|
Dephney Mathebula, Abigail Amankwah, Kossi Amouzouvi, K\'et\'evi A.
Assamagan, Somi\'ealo Azote, Jesutofunmi Ayo Fajemisin, Jean Baptiste Fankam
Fankam, Aluwani Guga, Moses Kamwela, Toivo S. Mabote, Mulape M Kanduza,
Francisco Fenias Macucule, Azwinndini Muronga, Ann Njeri, Michael Oluwole,
Cl\'audio Mois\'es Paulo
|
Modelling the impact of vaccination on the COVID-19 pandemic in African
countries
|
40 pages, 10 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The rapid development of vaccines to combat the spread of COVID-19 disease
caused by the SARS-CoV-2 virus is a great scientific achievement. Before the
development of the COVID-19 vaccines, most studies capitalized on the available
data that did not include pharmaceutical measures. Such studies focused on the
impact of non-pharmaceutical measures (e.g. social distancing, sanitation,
wearing of face masks, and lockdowns) on the spread of COVID-19. In this
study, we used the SIDARTHE-V model, an extension of the SIDARTHE model
wherein we include vaccination roll-outs. We studied the impact of vaccination
on the severity (deadly nature) of the virus in African countries. Model
parameters were extracted by fitting simultaneously the COVID-19 cumulative
data of deaths, recoveries, active cases, and full vaccinations reported by the
governments of Ghana, Kenya, Mozambique, Nigeria, South Africa, Togo, and
Zambia. With countries having some degree of variation in their vaccination
programs, we considered the impact of vaccination campaigns on the death rates
in these countries. The study showed that the cumulative death rates declined
drastically with the increased extent of vaccination in each country; while
infection rates were sometimes increasing with the arrival of new waves, the
death rates did not increase as we saw before vaccination.
|
[
{
"created": "Mon, 19 Sep 2022 01:00:57 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Sep 2022 19:59:51 GMT",
"version": "v2"
},
{
"created": "Sun, 30 Oct 2022 09:12:01 GMT",
"version": "v3"
},
{
"created": "Thu, 10 Nov 2022 12:14:20 GMT",
"version": "v4"
}
] |
2022-11-11
|
[
[
"Mathebula",
"Dephney",
""
],
[
"Amankwah",
"Abigail",
""
],
[
"Amouzouvi",
"Kossi",
""
],
[
"Assamagan",
"Kétévi A.",
""
],
[
"Azote",
"Somiéalo",
""
],
[
"Fajemisin",
"Jesutofunmi Ayo",
""
],
[
"Fankam",
"Jean Baptiste Fankam",
""
],
[
"Guga",
"Aluwani",
""
],
[
"Kamwela",
"Moses",
""
],
[
"Mabote",
"Toivo S.",
""
],
[
"Kanduza",
"Mulape M",
""
],
[
"Macucule",
"Francisco Fenias",
""
],
[
"Muronga",
"Azwinndini",
""
],
[
"Njeri",
"Ann",
""
],
[
"Oluwole",
"Michael",
""
],
[
"Paulo",
"Cláudio Moisés",
""
]
] |
The rapid development of vaccines to combat the spread of COVID-19 disease caused by the SARS-CoV-2 virus is a great scientific achievement. Before the development of the COVID-19 vaccines, most studies capitalized on the available data that did not include pharmaceutical measures. Such studies focused on the impact of non-pharmaceutical measures (e.g. social distancing, sanitation, wearing of face masks, and lockdowns) on the spread of COVID-19. In this study, we used the SIDARTHE-V model, an extension of the SIDARTHE model wherein we include vaccination roll-outs. We studied the impact of vaccination on the severity (deadly nature) of the virus in African countries. Model parameters were extracted by fitting simultaneously the COVID-19 cumulative data of deaths, recoveries, active cases, and full vaccinations reported by the governments of Ghana, Kenya, Mozambique, Nigeria, South Africa, Togo, and Zambia. With countries having some degree of variation in their vaccination programs, we considered the impact of vaccination campaigns on the death rates in these countries. The study showed that the cumulative death rates declined drastically with the increased extent of vaccination in each country; while infection rates were sometimes increasing with the arrival of new waves, the death rates did not increase as we saw before vaccination.
|
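The full SIDARTHE-V model has many compartments; as a much-reduced sketch of the mechanism the study quantifies (vaccination draining susceptibles and lowering the epidemic burden), here is a minimal SIR-with-vaccination integration. Parameter values and names are illustrative, not fitted to the paper's data:

```python
def sir_v(beta=0.3, gamma=0.1, nu=0.01, days=300, dt=0.1):
    """Euler-integrate a minimal SIR model with vaccination rate nu.

    S' = -beta*S*I - nu*S,  I' = beta*S*I - gamma*I,
    R' = gamma*I + nu*S   (fractions of a closed population).
    Returns the peak infected fraction.
    """
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(int(days / dt)):
        ds = (-beta * s * i - nu * s) * dt
        di = (beta * s * i - gamma * i) * dt
        dr = (gamma * i + nu * s) * dt
        s, i, r = s + ds, i + di, r + dr
        peak = max(peak, i)
    return peak

# Vaccination lowers the epidemic peak relative to no vaccination.
print(sir_v(nu=0.01) < sir_v(nu=0.0))  # True
```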
2110.06882
|
Paul Bressloff
|
Paul C Bressloff
|
Queuing model of axonal transport
|
34 pages, 9 figures
| null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The motor-driven intracellular transport of vesicles to synaptic targets in
the axons and dendrites of neurons plays a crucial role in normal cell
function. Moreover, stimulus-dependent regulation of active transport is an
important component of long-term synaptic plasticity, whereas the disruption of
vesicular transport can lead to the onset of various neurodegenerative
diseases. In this paper we investigate how the discrete and stochastic nature
of vesicular transport in axons contributes to fluctuations in the accumulation
of resources within synaptic targets. We begin by solving the first passage
time problem of a single motor-cargo complex (particle) searching for synaptic
targets distributed along a one-dimensional axonal cable. We then use queuing
theory to analyze the accumulation of synaptic resources under the combined
effects of multiple search-and-capture events and degradation. In particular, we
determine the steady-state mean and variance of the distribution of synaptic
resources along the axon in response to the periodic insertion of particles.
The mean distribution recovers the spatially decaying distribution of resources
familiar from deterministic population models. However, the discrete nature of
vesicular transport can lead to Fano factors that are greater than unity
(non-Poissonian) across the array of synapses, resulting in significant
fluctuation bursts. We also find that each synaptic Fano factor is independent
of the rate of particle insertion but increases monotonically with the amount
of protein cargo in each vesicle. This implies that fluctuations can be reduced
by increasing the injection rate while decreasing the cargo load of each
vesicle.
|
[
{
"created": "Wed, 13 Oct 2021 17:16:24 GMT",
"version": "v1"
}
] |
2021-10-14
|
[
[
"Bressloff",
"Paul C",
""
]
] |
The motor-driven intracellular transport of vesicles to synaptic targets in the axons and dendrites of neurons plays a crucial role in normal cell function. Moreover, stimulus-dependent regulation of active transport is an important component of long-term synaptic plasticity, whereas the disruption of vesicular transport can lead to the onset of various neurodegenerative diseases. In this paper we investigate how the discrete and stochastic nature of vesicular transport in axons contributes to fluctuations in the accumulation of resources within synaptic targets. We begin by solving the first passage time problem of a single motor-cargo complex (particle) searching for synaptic targets distributed along a one-dimensional axonal cable. We then use queuing theory to analyze the accumulation of synaptic resources under the combined effects of multiple search-and-capture events and degradation. In particular, we determine the steady-state mean and variance of the distribution of synaptic resources along the axon in response to the periodic insertion of particles. The mean distribution recovers the spatially decaying distribution of resources familiar from deterministic population models. However, the discrete nature of vesicular transport can lead to Fano factors that are greater than unity (non-Poissonian) across the array of synapses, resulting in significant fluctuation bursts. We also find that each synaptic Fano factor is independent of the rate of particle insertion but increases monotonically with the amount of protein cargo in each vesicle. This implies that fluctuations can be reduced by increasing the injection rate while decreasing the cargo load of each vesicle.
|
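The paper's key statistic, the Fano factor of synaptic resource counts, is easy to illustrate: Poisson-like arrivals give a Fano factor near one, while delivering cargo in bursts inflates it by roughly the burst size, mirroring the abstract's claim that larger cargo loads increase fluctuations. A toy simulation (not the paper's queuing calculation; the burst size of 5 is my illustrative choice):

```python
import random

def fano(counts):
    """Fano factor: variance over mean of a list of event counts."""
    m = sum(counts) / len(counts)
    var = sum((c - m) ** 2 for c in counts) / len(counts)
    return var / m

random.seed(0)
# Poisson-like counts (many rare independent arrivals): Fano ~ 1.
poisson = [sum(random.random() < 0.02 for _ in range(500))
           for _ in range(2000)]
# Bursty counts (each arrival delivers a cargo of 5): Fano ~ 5.
bursty = [5 * sum(random.random() < 0.02 for _ in range(100))
          for _ in range(2000)]
print(fano(poisson) < 1.5 < fano(bursty))  # True
```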
2006.05357
|
Tomas Veloz
|
Tomas Veloz, Pedro Maldonado, Samuel Ropert, Cesar Ravello, Soraya
Mora, Alejandra Barrios, Tomas Villaseca, Cesar Valdenegro, Tomas Perez-Acle
|
On the interplay between mobility and hospitalization capacity during
the COVID-19 pandemic: The SEIRHUD model
| null | null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measures to reduce the impact of the COVID-19 pandemic require a mix of
logistic, political and social capacity. Depending on the country, different
approaches to increase hospitalization capacity or to properly apply lock-downs
are observed. In order to better understand the impact of these measures we
have developed a compartmental model which, on the one hand, allows us to
calibrate the reduction of movement of people within and among different areas,
and, on the other hand, incorporates hospitalization dynamics that differentiate
the available kinds of treatment that infected people can receive. By bounding
the hospitalization capacity, we are able to study in detail the interplay
between mobility and hospitalization capacity.
|
[
{
"created": "Tue, 9 Jun 2020 15:42:51 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jun 2020 02:34:19 GMT",
"version": "v2"
}
] |
2020-06-12
|
[
[
"Veloz",
"Tomas",
""
],
[
"Maldonado",
"Pedro",
""
],
[
"Ropert",
"Samuel",
""
],
[
"Ravello",
"Cesar",
""
],
[
"Mora",
"Soraya",
""
],
[
"Barrios",
"Alejandra",
""
],
[
"Villaseca",
"Tomas",
""
],
[
"Valdenegro",
"Cesar",
""
],
[
"Perez-Acle",
"Tomas",
""
]
] |
Measures to reduce the impact of the COVID-19 pandemic require a mix of logistic, political and social capacity. Depending on the country, different approaches to increase hospitalization capacity or to properly apply lock-downs are observed. In order to better understand the impact of these measures we have developed a compartmental model which, on the one hand, allows us to calibrate the reduction of movement of people within and among different areas, and, on the other hand, incorporates hospitalization dynamics that differentiate the available kinds of treatment that infected people can receive. By bounding the hospitalization capacity, we are able to study in detail the interplay between mobility and hospitalization capacity.
|
1807.03781
|
Stefano Corni
|
Giorgia Brancolini, Maria Celeste Maschio, Cristina Cantarutti,
Alessandra Corazza, Federico Fogolari, Vittorio Bellotti, Stefano Corni,
Gennaro Esposito
|
Citrate stabilized gold nanoparticles interfere with amyloid fibril
formation: D76N and $\Delta$N6 $\beta$2-microglobulin variants
|
Published by RSC, under a Creative Commons Attribution 3.0 Unported
Licence
|
Nanoscale, 2018, 10, 4793
|
10.1039/c7nr06808e
| null |
q-bio.BM cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein aggregation, including the formation of dimers and multimers in
solution, underlies an array of human diseases such as systemic amyloidosis, a
fatal disease caused by the misfolding of native globular proteins that damages
the structure and function of affected organs. Different kinds of
interactors can interfere with the formation of protein dimers and multimers in
solution. A very special class of interactors are nanoparticles thanks to the
extremely efficient extension of their interaction surface. In particular
citrate-coated gold nanoparticles (cit-AuNPs) were recently investigated with
amyloidogenic protein $\beta$2-microglobulin ($\beta$2m). Here we present the
computational studies on two challenging models known for their enhanced
amyloidogenic propensity, namely $\Delta$N6 and D76N $\beta$2m naturally
occurring variants, and disclose the role of cit-AuNPs on their
fibrillogenesis. The proposed interaction mechanism lies in the interference of
the cit-AuNPs with the protein dimers at the early stages of aggregation, which
induces dimer disassembly. As a consequence, natural fibril formation can be
inhibited. Relying on the comparison between atomistic simulations at multiple
levels (enhanced sampling molecular dynamics and Brownian dynamics) and protein
structural characterisation by NMR, we demonstrate that the cit-AuNPs
interactors are able to inhibit protein dimer assembly. As a consequence, the
natural fibril formation is also inhibited, as found in experiment.
|
[
{
"created": "Tue, 10 Jul 2018 12:53:58 GMT",
"version": "v1"
}
] |
2018-07-12
|
[
[
"Brancolini",
"Giorgia",
""
],
[
"Maschio",
"Maria Celeste",
""
],
[
"Cantarutti",
"Cristina",
""
],
[
"Corazza",
"Alessandra",
""
],
[
"Fogolari",
"Federico",
""
],
[
"Bellotti",
"Vittorio",
""
],
[
"Corni",
"Stefano",
""
],
[
"Esposito",
"Gennaro",
""
]
] |
Protein aggregation, including the formation of dimers and multimers in solution, underlies an array of human diseases such as systemic amyloidosis, a fatal disease caused by the misfolding of native globular proteins that damages the structure and function of affected organs. Different kinds of interactors can interfere with the formation of protein dimers and multimers in solution. A very special class of interactors are nanoparticles thanks to the extremely efficient extension of their interaction surface. In particular citrate-coated gold nanoparticles (cit-AuNPs) were recently investigated with amyloidogenic protein $\beta$2-microglobulin ($\beta$2m). Here we present the computational studies on two challenging models known for their enhanced amyloidogenic propensity, namely $\Delta$N6 and D76N $\beta$2m naturally occurring variants, and disclose the role of cit-AuNPs on their fibrillogenesis. The proposed interaction mechanism lies in the interference of the cit-AuNPs with the protein dimers at the early stages of aggregation, which induces dimer disassembly. As a consequence, natural fibril formation can be inhibited. Relying on the comparison between atomistic simulations at multiple levels (enhanced sampling molecular dynamics and Brownian dynamics) and protein structural characterisation by NMR, we demonstrate that the cit-AuNPs interactors are able to inhibit protein dimer assembly. As a consequence, the natural fibril formation is also inhibited, as found in experiment.
|
2109.09740
|
Gabriele Corso
|
Gabriele Corso, Rex Ying, Michal P\'andy, Petar Veli\v{c}kovi\'c, Jure
Leskovec, Pietro Li\`o
|
Neural Distance Embeddings for Biological Sequences
|
Advances in Neural Information Processing Systems (NeurIPS 2021)
| null | null | null |
q-bio.QM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of data-dependent heuristics and representations for
biological sequences that reflect their evolutionary distance is critical for
large-scale biological research. However, popular machine learning approaches,
based on continuous Euclidean spaces, have struggled with the discrete
combinatorial formulation of the edit distance that models evolution and the
hierarchical relationship that characterises real-world datasets. We present
Neural Distance Embeddings (NeuroSEED), a general framework to embed sequences
in geometric vector spaces, and illustrate the effectiveness of the hyperbolic
space that captures the hierarchical structure and provides an average 22%
reduction in embedding RMSE against the best competing geometry. The capacity
of the framework and the significance of these improvements are then
demonstrated by devising supervised and unsupervised NeuroSEED approaches to
multiple core tasks in bioinformatics. Benchmarked with common baselines, the
proposed approaches display significant accuracy and/or runtime improvements on
real-world datasets. As an example for hierarchical clustering, the proposed
pretrained and from-scratch methods match the quality of competing baselines
with 30x and 15x runtime reduction, respectively.
|
[
{
"created": "Mon, 20 Sep 2021 17:30:58 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Oct 2021 19:49:08 GMT",
"version": "v2"
}
] |
2021-10-13
|
[
[
"Corso",
"Gabriele",
""
],
[
"Ying",
"Rex",
""
],
[
"Pándy",
"Michal",
""
],
[
"Veličković",
"Petar",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Liò",
"Pietro",
""
]
] |
The development of data-dependent heuristics and representations for biological sequences that reflect their evolutionary distance is critical for large-scale biological research. However, popular machine learning approaches, based on continuous Euclidean spaces, have struggled with the discrete combinatorial formulation of the edit distance that models evolution and the hierarchical relationship that characterises real-world datasets. We present Neural Distance Embeddings (NeuroSEED), a general framework to embed sequences in geometric vector spaces, and illustrate the effectiveness of the hyperbolic space that captures the hierarchical structure and provides an average 22% reduction in embedding RMSE against the best competing geometry. The capacity of the framework and the significance of these improvements are then demonstrated by devising supervised and unsupervised NeuroSEED approaches to multiple core tasks in bioinformatics. Benchmarked with common baselines, the proposed approaches display significant accuracy and/or runtime improvements on real-world datasets. As an example for hierarchical clustering, the proposed pretrained and from-scratch methods match the quality of competing baselines with 30x and 15x runtime reduction, respectively.
|
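The discrete target that NeuroSEED-style embeddings approximate with a continuous geometric distance is the edit (Levenshtein) distance. For reference, the standard two-row dynamic-programming computation:

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of insertions,
    deletions, and substitutions turning string a into string b.
    This is the discrete quantity that sequence embeddings such as
    NeuroSEED approximate with a vector-space distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```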
1505.00569
|
Igor Sazonov Dr
|
Igor Sazonov and Mark Kelbert
|
Randomized migration processes between two epidemic centers
|
27 pages, 10 figures (17 plots)
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epidemic dynamics in a stochastic network of interacting epidemic centers is
considered. The epidemic and migration processes are modelled by Markov
chains. Explicit formulas for the probability distribution of the migration
process are derived. The dependence of outbreak parameters on initial
parameters, population, and coupling parameters is examined analytically and numerically. The
mean field approximation for a general migration process is derived. An
approximate method allowing computation of statistical moments for networks
with highly populated centres is proposed and tested numerically.
|
[
{
"created": "Mon, 4 May 2015 09:42:06 GMT",
"version": "v1"
}
] |
2015-05-05
|
[
[
"Sazonov",
"Igor",
""
],
[
"Kelbert",
"Mark",
""
]
] |
Epidemic dynamics in a stochastic network of interacting epidemic centers is considered. The epidemic and migration processes are modelled by Markov chains. Explicit formulas for the probability distribution of the migration process are derived. The dependence of outbreak parameters on initial parameters, population, and coupling parameters is examined analytically and numerically. The mean field approximation for a general migration process is derived. An approximate method allowing computation of statistical moments for networks with highly populated centres is proposed and tested numerically.
|
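The migration component of such models is simple to simulate directly. A hedged toy (independent symmetric switching between two centers, which is only one special case of the randomized migration processes analyzed in the paper): with each of n individuals switching centers with probability p per step, the stationary occupancy of either center is Binomial(n, 1/2), so its mean is n/2.

```python
import random

def migrate(n_people=10, p=0.1, steps=20000, seed=1):
    """Count occupancy of center A when each of n_people independently
    switches centers with probability p per discrete time step.
    Returns a histogram of the occupancy after a burn-in period."""
    rng = random.Random(seed)
    in_a = [True] * n_people          # everyone starts in center A
    counts = [0] * (n_people + 1)
    for t in range(steps):
        for i in range(n_people):
            if rng.random() < p:
                in_a[i] = not in_a[i]
        if t > steps // 2:            # discard burn-in
            counts[sum(in_a)] += 1
    return counts

counts = migrate()
mean = sum(k * c for k, c in enumerate(counts)) / sum(counts)
# Symmetric switching: mean occupancy of center A is ~ n/2 = 5.
print(4.5 < mean < 5.5)  # True
```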
2004.12472
|
Bing Liu
|
Bing Liu
|
A Model Checking-based Analysis Framework for Systems Biology Models
|
To appear in the Proceedings of the 57th Design Automation Conference
(DAC)
| null | null | null |
q-bio.QM cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Biological systems are often modeled as a system of ordinary differential
equations (ODEs) with time-invariant parameters. However, cell signaling events
or pharmacological interventions may alter the cellular state and induce
multi-mode dynamics of the system. Such systems are naturally modeled as hybrid
automata, which possess multiple operational modes with specific nonlinear
dynamics in each mode. In this paper we introduce a model checking-enabled
framework that can model and analyze both single- and multi-mode biological
systems. We tackle the central problem in systems biology--identify parameter
values such that a model satisfies desired behaviors--using bounded model
checking. We resort to the delta-decision procedures to solve satisfiability
modulo theories (SMT) problems and sidestep undecidability of reachability
problems. Our framework enables several analysis tasks including model
calibration and falsification, therapeutic strategy identification, and
Lyapunov stability analysis. We demonstrate the applicability of these methods
using case studies of prostate cancer progression, cardiac cell action
potential and radiation diseases.
|
[
{
"created": "Sun, 26 Apr 2020 20:31:39 GMT",
"version": "v1"
}
] |
2020-04-28
|
[
[
"Liu",
"Bing",
""
]
] |
Biological systems are often modeled as a system of ordinary differential equations (ODEs) with time-invariant parameters. However, cell signaling events or pharmacological interventions may alter the cellular state and induce multi-mode dynamics of the system. Such systems are naturally modeled as hybrid automata, which possess multiple operational modes with specific nonlinear dynamics in each mode. In this paper we introduce a model checking-enabled framework that can model and analyze both single- and multi-mode biological systems. We tackle the central problem in systems biology--identify parameter values such that a model satisfies desired behaviors--using bounded model checking. We resort to the delta-decision procedures to solve satisfiability modulo theories (SMT) problems and sidestep undecidability of reachability problems. Our framework enables several analysis tasks including model calibration and falsification, therapeutic strategy identification, and Lyapunov stability analysis. We demonstrate the applicability of these methods using case studies of prostate cancer progression, cardiac cell action potential and radiation diseases.
|
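Bounded model checking, the core technique here, asks whether a "bad" state is reachable within k steps. The paper encodes this as delta-decision SMT queries over nonlinear hybrid dynamics; as a discrete stand-in for the idea only (the two-mode toy system below is mine, not the paper's), an explicit-state bounded check might look like:

```python
def bmc_reachable(init, step, bad, k):
    """Bounded model checking, discretized: breadth-first explore the
    transition system up to k steps and report whether any state
    satisfying the 'bad' predicate is reachable."""
    frontier, seen = {init}, {init}
    for _ in range(k):
        if any(bad(s) for s in frontier):
            return True
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return any(bad(s) for s in frontier)

# Two-mode toy hybrid system: state = (mode, level); mode 0 drifts
# up, mode 1 drifts down, and the mode can switch at any step.
def step(s):
    m, x = s
    nxt = x + (1 if m == 0 else -1)
    return {(m, nxt), (1 - m, x)}  # continue in mode, or switch

print(bmc_reachable((0, 0), step, lambda s: s[1] >= 3, k=5))  # True
print(bmc_reachable((0, 0), step, lambda s: s[1] >= 3, k=2))  # False
```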
2210.13423
|
Sara Mohammad Taheri Mrs
|
Sara Mohammad-Taheri and Vartika Tewari and Rohan Kapre and Ehsan
Rahiminasab and Karen Sachs and Charles Tapley Hoyt and Jeremy Zucker and
Olga Vitek
|
Experimental design for causal query estimation in partially observed
biomolecular networks
| null | null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Estimating a causal query from observational data is an essential task in the
analysis of biomolecular networks. Estimation takes as input a network
topology, a query estimation method, and observational measurements on the
network variables. However, estimations involving many variables can be
experimentally expensive, and computationally intractable. Moreover, using the
full set of variables can be detrimental, leading to bias, or increasing the
variance in the estimation. Therefore, designing an experiment based on a
well-chosen subset of network components can increase estimation accuracy, and
reduce experimental and computational costs. We propose a simulation-based
algorithm for selecting sub-networks that support unbiased estimators of the
causal query under a constraint of cost, ranked with respect to the variance of
the estimators. The simulations are constructed based on historical
experimental data, or based on known properties of the biological system. Three
case studies demonstrated the effectiveness of well-chosen network subsets for
estimating causal queries from observational data. All the case studies are
reproducible and available at https://github.com/srtaheri/Simplified_LVM.
|
[
{
"created": "Mon, 24 Oct 2022 17:39:07 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Nov 2022 22:00:23 GMT",
"version": "v2"
}
] |
2022-11-30
|
[
[
"Mohammad-Taheri",
"Sara",
""
],
[
"Tewari",
"Vartika",
""
],
[
"Kapre",
"Rohan",
""
],
[
"Rahiminasab",
"Ehsan",
""
],
[
"Sachs",
"Karen",
""
],
[
"Hoyt",
"Charles Tapley",
""
],
[
"Zucker",
"Jeremy",
""
],
[
"Vitek",
"Olga",
""
]
] |
Estimating a causal query from observational data is an essential task in the analysis of biomolecular networks. Estimation takes as input a network topology, a query estimation method, and observational measurements on the network variables. However, estimations involving many variables can be experimentally expensive, and computationally intractable. Moreover, using the full set of variables can be detrimental, leading to bias, or increasing the variance in the estimation. Therefore, designing an experiment based on a well-chosen subset of network components can increase estimation accuracy, and reduce experimental and computational costs. We propose a simulation-based algorithm for selecting sub-networks that support unbiased estimators of the causal query under a constraint of cost, ranked with respect to the variance of the estimators. The simulations are constructed based on historical experimental data, or based on known properties of the biological system. Three case studies demonstrated the effectiveness of well-chosen network subsets for estimating causal queries from observational data. All the case studies are reproducible and available at https://github.com/srtaheri/Simplified_LVM.
|
1311.3214
|
Wes Maciejewski
|
Wes Maciejewski and Gregory J. Puleo
|
Environmental Evolutionary Graph Theory
| null |
Journal of Theoretical Biology, (2014), vol.360, pp.117-128
|
10.1016/j.jtbi.2014.06.040
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the influence of an environment on the evolution of its
resident population is a major challenge in evolutionary biology. Great
progress has been made in homogeneous population structures while heterogeneous
structures have received relatively less attention. Here we present a
structured population model where different individuals are best suited to
different regions of their environment. The underlying structure is a graph:
individuals occupy vertices, which are connected by edges. If an individual is
suited for their vertex, they receive an increase in fecundity. This framework
allows attention to be restricted to the spatial arrangement of suitable
habitat. We prove some basic properties of this model and find some
counter-intuitive results. Notably, 1) the arrangement of suitable sites is as
important as their proportion, and, 2) decreasing the proportion of suitable
sites may result in a decrease in the fixation time of an allele.
|
[
{
"created": "Wed, 13 Nov 2013 17:08:31 GMT",
"version": "v1"
}
] |
2014-07-30
|
[
[
"Maciejewski",
"Wes",
""
],
[
"Puleo",
"Gregory J.",
""
]
] |
Understanding the influence of an environment on the evolution of its resident population is a major challenge in evolutionary biology. Great progress has been made in homogeneous population structures while heterogeneous structures have received relatively less attention. Here we present a structured population model where different individuals are best suited to different regions of their environment. The underlying structure is a graph: individuals occupy vertices, which are connected by edges. If an individual is suited for their vertex, they receive an increase in fecundity. This framework allows attention to be restricted to the spatial arrangement of suitable habitat. We prove some basic properties of this model and find some counter-intuitive results. Notably, 1) the arrangement of suitable sites is as important as their proportion, and, 2) decreasing the proportion of suitable sites may result in a decrease in the fixation time of an allele.
|
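The model can be mimicked with a toy death-birth Moran process in which allele 1 earns a fecundity bonus only on suitable vertices. The update rule, graph, and parameter values below are my illustrative choices, not the paper's exact definitions:

```python
import random

def fixation_trial(adj, suitable, bonus=0.5, seed=None):
    """One run of a death-birth Moran process on a graph.

    Vertices hold allele 0 or 1; an allele-1 individual gets a
    fecundity bonus when it occupies a 'suitable' vertex. A uniformly
    chosen vertex dies and is replaced by a neighbor sampled in
    proportion to fecundity. Returns True if allele 1 fixes.
    """
    rng = random.Random(seed)
    n = len(adj)
    state = [0] * n
    state[rng.randrange(n)] = 1          # single allele-1 mutant
    while 0 < sum(state) < n:
        v = rng.randrange(n)             # uniform death
        nbrs = adj[v]
        w = [1 + bonus * (state[u] == 1 and u in suitable)
             for u in nbrs]
        state[v] = state[rng.choices(nbrs, weights=w)[0]]
    return sum(state) == n

# Cycle graph of 6 vertices, half of them suitable for allele 1.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
wins = sum(fixation_trial(adj, suitable={0, 1, 2}, seed=s)
           for s in range(2000))
print(0 < wins < 2000)  # True
```

Varying which vertices are in `suitable` while holding their number fixed probes the paper's point that the arrangement of suitable sites matters, not just their proportion.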
0811.3040
|
Andre Brown
|
Andre E. X. Brown, Alina Hategan, Daniel Safer, Yale E. Goldman,
Dennis E. Discher
|
Cross-correlated TIRF/AFM shows Self-assembled Synthetic Myosin
Filaments are Asymmetric - Implications for Motile Filaments
|
25 pages, 8 figures. Accepted in Biophysical Journal
| null | null | null |
q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Myosin-II's rod-like tail drives filament assembly with a head arrangement
that should generate equal and opposite contractile forces on actin--if one
assumes that the filament is a symmetric bipole. Self-assembled myosin
filaments are shown here to be asymmetric in physiological buffer based on
cross-correlated images from both atomic force microscopy (AFM) and total
internal reflection fluorescence (TIRF). Quantitative cross-correlation of
these orthogonal methods produces structural information unavailable to either
method alone in showing that fluorescence intensity along the filament length
is proportional to height. This implies that myosin heads form a shell around
the filament axis, consistent with F-actin binding. A motor density of ~50 -
100 heads/micron is further estimated but with an average of 32% more motors on
one half of any given filament compared to the other, regardless of length. A
purely entropic pyramidal lattice model is developed that qualitatively
captures this lack of length dependence and the distribution of filament
asymmetries. Such strongly asymmetric bipoles are likely to produce an
imbalanced contractile force in cells and in actin-myosin gels, and thereby
contribute to motility as well as cytoskeletal tension.
|
[
{
"created": "Wed, 19 Nov 2008 01:09:39 GMT",
"version": "v1"
}
] |
2008-11-20
|
[
[
"Brown",
"Andre E. X.",
""
],
[
"Hategan",
"Alina",
""
],
[
"Safer",
"Daniel",
""
],
[
"Goldman",
"Yale E.",
""
],
[
"Discher",
"Dennis E.",
""
]
] |
Myosin-II's rod-like tail drives filament assembly with a head arrangement that should generate equal and opposite contractile forces on actin--if one assumes that the filament is a symmetric bipole. Self-assembled myosin filaments are shown here to be asymmetric in physiological buffer based on cross-correlated images from both atomic force microscopy (AFM) and total internal reflection fluorescence (TIRF). Quantitative cross-correlation of these orthogonal methods produces structural information unavailable to either method alone in showing that fluorescence intensity along the filament length is proportional to height. This implies that myosin heads form a shell around the filament axis, consistent with F-actin binding. A motor density of ~50 - 100 heads/micron is further estimated but with an average of 32% more motors on one half of any given filament compared to the other, regardless of length. A purely entropic pyramidal lattice model is developed that qualitatively captures this lack of length dependence and the distribution of filament asymmetries. Such strongly asymmetric bipoles are likely to produce an imbalanced contractile force in cells and in actin-myosin gels, and thereby contribute to motility as well as cytoskeletal tension.
|
2202.06635
|
Josip Mesari\'c
|
Josip Mesari\'c
|
Novel prediction methods for virtual drug screening
|
Review article
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Drug development is an expensive and time-consuming process where thousands
of chemical compounds are being tested in order to find those possessing
drug-like properties while being safe and effective. One of the key parts of the
early drug discovery process has become virtual drug screening -- a method used
to narrow down the search for potential drugs by running computer simulations of
drug-target interactions. As these methods are known to demand huge amounts of
computational power to get accurate results, prediction models based on machine
learning techniques became a popular solution requiring less computational
power as well as offering the ability to generate novel chemical structures for
further research. Deep learning is here to stay in drug discovery but has a long way
to go. Only in the past few years with increases in computing power have
researchers really started to embrace the potential of neural networks in
various stages of the drug discovery process. While prediction methods promise
great prospects for the future development of drug discovery, they open new
questions and challenges that still have to be solved.
|
[
{
"created": "Mon, 14 Feb 2022 11:41:39 GMT",
"version": "v1"
}
] |
2022-02-15
|
[
[
"Mesarić",
"Josip",
""
]
] |
Drug development is an expensive and time-consuming process where thousands of chemical compounds are being tested in order to find those possessing drug-like properties while being safe and effective. One of the key parts of the early drug discovery process has become virtual drug screening -- a method used to narrow down the search for potential drugs by running computer simulations of drug-target interactions. As these methods are known to demand huge amounts of computational power to get accurate results, prediction models based on machine learning techniques became a popular solution requiring less computational power as well as offering the ability to generate novel chemical structures for further research. Deep learning is here to stay in drug discovery but has a long way to go. Only in the past few years with increases in computing power have researchers really started to embrace the potential of neural networks in various stages of the drug discovery process. While prediction methods promise great prospects for the future development of drug discovery, they open new questions and challenges that still have to be solved.
|
2004.14297
|
Sebastian Dominguez
|
D. S. Ryan, S. Dom\'inguez, S. A. Ross, N. Nigam, J. M. Wakeling
|
The energy of muscle contraction. II. Transverse compression and work
|
20 pages, 7 figures. Manuscript submitted to Frontiers in Physiology
| null |
10.3389/fphys.2020.538522
| null |
q-bio.TO q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study we reproduced this compression-induced reduction in muscle
force through the use of a three-dimensional finite element model of
contracting muscle. The model used the principle of minimum total energy and
allowed for the redistribution of energy through different strain
energy-densities; this allowed us to determine the importance of the strain
energy-densities to the transverse forces developed by the muscle. Furthermore,
we were able to study how external work done on the muscle by transverse
compression affects the internal work and strain-energy distribution of the
muscle.
|
[
{
"created": "Tue, 28 Apr 2020 06:18:27 GMT",
"version": "v1"
}
] |
2021-01-13
|
[
[
"Ryan",
"D. S.",
""
],
[
"Domínguez",
"S.",
""
],
[
"Ross",
"S. A.",
""
],
[
"Nigam",
"N.",
""
],
[
"Wakeling",
"J. M.",
""
]
] |
In this study we reproduced this compression-induced reduction in muscle force through the use of a three-dimensional finite element model of contracting muscle. The model used the principle of minimum total energy and allowed for the redistribution of energy through different strain energy-densities; this allowed us to determine the importance of the strain energy-densities to the transverse forces developed by the muscle. Furthermore, we were able to study how external work done on the muscle by transverse compression affects the internal work and strain-energy distribution of the muscle.
|