id (string, 9-13) | submitter (string, 4-48) | authors (string, 4-9.62k) | title (string, 4-343) | comments (string, 2-480, nullable) | journal-ref (string, 9-309, nullable) | doi (string, 12-138, nullable) | report-no (277 classes) | categories (string, 8-87) | license (9 classes) | orig_abstract (string, 27-3.76k) | versions (list, 1-15) | update_date (string, 10) | authors_parsed (list, 1-147) | abstract (string, 24-3.75k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.06690 | Yan Guo | Xianggen Liu, Yan Guo, Haoran Li, Jin Liu, Shudong Huang, Bowen Ke,
and Jiancheng Lv | DrugLLM: Open Large Language Model for Few-shot Molecule Generation | 17 pages, 3 figures | null | null | null | q-bio.BM cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have made great strides in areas such as
language processing and computer vision. Despite the emergence of diverse
techniques to improve few-shot learning capacity, current LLMs fall short in
handling the languages of biology and chemistry. For example, they struggle
to capture the relationship between molecule structure and pharmacochemical
properties. Consequently, the few-shot learning capacity for small-molecule
drug modification remains impeded. In this work, we introduce DrugLLM, an LLM
tailored for drug design. During the training process, we
employed Group-based Molecular Representation (GMR) to represent molecules,
arranging them in sequences that reflect modifications aimed at enhancing
specific molecular properties. DrugLLM learns how to modify molecules in drug
discovery by predicting the next molecule based on past modifications.
Extensive computational experiments demonstrate that DrugLLM can generate new
molecules with expected properties based on limited examples, presenting a
powerful few-shot molecule generation capacity.
| [
{
"created": "Tue, 7 May 2024 09:18:13 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Liu",
"Xianggen",
""
],
[
"Guo",
"Yan",
""
],
[
"Li",
"Haoran",
""
],
[
"Liu",
"Jin",
""
],
[
"Huang",
"Shudong",
""
],
[
"Ke",
"Bowen",
""
],
[
"Lv",
"Jiancheng",
""
]
] | Large Language Models (LLMs) have made great strides in areas such as language processing and computer vision. Despite the emergence of diverse techniques to improve few-shot learning capacity, current LLMs fall short in handling the languages of biology and chemistry. For example, they struggle to capture the relationship between molecule structure and pharmacochemical properties. Consequently, the few-shot learning capacity for small-molecule drug modification remains impeded. In this work, we introduce DrugLLM, an LLM tailored for drug design. During the training process, we employed Group-based Molecular Representation (GMR) to represent molecules, arranging them in sequences that reflect modifications aimed at enhancing specific molecular properties. DrugLLM learns how to modify molecules in drug discovery by predicting the next molecule based on past modifications. Extensive computational experiments demonstrate that DrugLLM can generate new molecules with expected properties based on limited examples, presenting a powerful few-shot molecule generation capacity. |
1111.6563 | Arash Sangari Mr. | Arash Sangari, Hasti Mirkia and Amir H. Assadi | Perception of Motion and Architectural Form: Computational Relationships
between Optical Flow and Perspective | 10 pages, 13 figures, submitted and accepted in DoCEIS'2012
Conference: http://www.uninova.pt/doceis/doceis12/home/home.php | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perceptual geometry refers to interdisciplinary research whose objective
is the study of geometry from the perspective of visual perception and which, in
turn, applies such geometric findings to the ecological study of vision.
Perceptual geometry attempts to answer fundamental questions in the perception
of form and the representation of space through a synthesis of cognitive and
biological theories of visual perception with geometric theories of the
physical world. The perception of form, space, and motion is among the
fundamental problems in vision science. In cognitive and computational models
of human perception, the theories for modeling motion are treated separately
from models for the perception of form.
| [
{
"created": "Mon, 28 Nov 2011 19:51:18 GMT",
"version": "v1"
}
] | 2011-11-29 | [
[
"Sangari",
"Arash",
""
],
[
"Mirkia",
"Hasti",
""
],
[
"Assadi",
"Amir H.",
""
]
] | Perceptual geometry refers to interdisciplinary research whose objective is the study of geometry from the perspective of visual perception and which, in turn, applies such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in the perception of form and the representation of space through a synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. The perception of form, space, and motion is among the fundamental problems in vision science. In cognitive and computational models of human perception, the theories for modeling motion are treated separately from models for the perception of form. |
1102.4623 | Michael Courtney | Elizabeth Keenan, Sarah Warner, Ashley Crowe, and Michael Courtney | Length, Weight, and Yield in Channel Catfish, Lake Diane, MI | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Channel catfish (Ictalurus punctatus) are important to both
commercial aquaculture and recreational fisheries. Little published data is
available on length-weight relationships of channel catfish in Michigan. Though
there is no record of public or private stocking, channel catfish appeared in
Lake Diane between 1984 and 1995, and the population has developed into an excellent
fishery. Results: NLLS regression yields parameter estimates of b = 3.2293 and
a = 0.00522. The improved model yields the same estimate for the exponent, b,
and a length estimate (parameter L1) of 45.23 cm. Estimates of uncertainty and
covariance are smaller for the improved model, but the correlation coefficient
is r = 0.995 in both cases. LLS regression produced different parameter values,
a = 0.01356 and b = 2.9726, and a smaller correlation coefficient, r = 0.980.
On average, catfish in the sample weighed 106.0% of the standard weight (Brown
et al.), and the linear regression (no slope) of fillet yield vs. total weight
suggests a typical fillet yield of 28.1% with r = 0.989. Conclusion: Most of
the fish in the sample were above the standard weight, heavier than the 75th
percentile for their length. Channel catfish are doing well in Lake Diane and
the population is well matched to the food supply. Management should attempt to
maintain current population levels. In this case, the improved length-weight
model, W(L) = (L/L1)^b, provided lower uncertainties in parameter estimates and
smaller covariance than the traditional model.
| [
{
"created": "Tue, 22 Feb 2011 21:09:34 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Aug 2012 03:22:23 GMT",
"version": "v2"
}
] | 2012-08-28 | [
[
"Keenan",
"Elizabeth",
""
],
[
"Warner",
"Sarah",
""
],
[
"Crowe",
"Ashley",
""
],
[
"Courtney",
"Michael",
""
]
] | Background: Channel catfish (Ictalurus punctatus) are important to both commercial aquaculture and recreational fisheries. Little published data is available on length-weight relationships of channel catfish in Michigan. Though there is no record of public or private stocking, channel catfish appeared in Lake Diane between 1984 and 1995, and the population has developed into an excellent fishery. Results: NLLS regression yields parameter estimates of b = 3.2293 and a = 0.00522. The improved model yields the same estimate for the exponent, b, and a length estimate (parameter L1) of 45.23 cm. Estimates of uncertainty and covariance are smaller for the improved model, but the correlation coefficient is r = 0.995 in both cases. LLS regression produced different parameter values, a = 0.01356 and b = 2.9726, and a smaller correlation coefficient, r = 0.980. On average, catfish in the sample weighed 106.0% of the standard weight (Brown et al.), and the linear regression (no slope) of fillet yield vs. total weight suggests a typical fillet yield of 28.1% with r = 0.989. Conclusion: Most of the fish in the sample were above the standard weight, heavier than the 75th percentile for their length. Channel catfish are doing well in Lake Diane and the population is well matched to the food supply. Management should attempt to maintain current population levels. In this case, the improved length-weight model, W(L) = (L/L1)^b, provided lower uncertainties in parameter estimates and smaller covariance than the traditional model. |
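The two length-weight parameterizations reported in the catfish row above, W(L) = a*L^b and the improved W(L) = (L/L1)^b, are algebraically the same curve with a = L1^(-b). A minimal numpy sketch (not the authors' code; it uses the log-log LLS route the abstract mentions, on synthetic noise-free data) illustrates the relationship:

```python
import numpy as np

# NLLS estimates reported in the abstract (units as in the paper)
a_true, b_true = 0.00522, 3.2293

def fit_loglog(lengths, weights):
    """Log-log linear least squares: log W = log a + b log L."""
    b, log_a = np.polyfit(np.log(lengths), np.log(weights), 1)
    return np.exp(log_a), b

# Synthetic, noise-free data generated from the traditional model W = a L^b
L = np.linspace(20.0, 60.0, 50)
W = a_true * L ** b_true

a_hat, b_hat = fit_loglog(L, W)

# Improved parameterization: W = (L/L1)^b with a = L1^(-b),
# i.e. L1 is the length at which W equals one unit of weight.
L1 = a_hat ** (-1.0 / b_hat)
W_alt = (L / L1) ** b_hat  # identical curve to a_hat * L**b_hat
```

On noise-free data the LLS fit recovers the generating parameters exactly; the abstract's differing LLS values (a = 0.01356, b = 2.9726) reflect real, noisy measurements. Note also that the L1 recovered here is in whatever units make W = 1, so it need not match the authors' reported 45.23 cm, which depends on their weight units and NLLS fit.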
2207.00574 | Mark Jones Dr | Leo van Iersel, Mark Jones, Mathias Weller | Embedding phylogenetic trees in networks of low treewidth | null | Discrete Mathematics & Theoretical Computer Science, vol. 25:2,
Discrete Algorithms (October 2, 2023) dmtcs:10116 | 10.46298/dmtcs.10116 | null | q-bio.PE cs.DM | http://creativecommons.org/licenses/by/4.0/ | Given a rooted, binary phylogenetic network and a rooted, binary phylogenetic
tree, can the tree be embedded into the network? This problem, called
\textsc{Tree Containment}, arises when validating networks constructed by
phylogenetic inference methods. We present the first algorithm for (rooted)
\textsc{Tree Containment} using the treewidth $t$ of the input network $N$ as
parameter, showing that the problem can be solved in $2^{O(t^2)}\cdot|N|$ time
and space.
| [
{
"created": "Fri, 1 Jul 2022 17:51:57 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jul 2022 09:22:30 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Sep 2022 14:51:29 GMT",
"version": "v3"
},
{
"created": "Mon, 15 May 2023 08:41:31 GMT",
"version": "v4"
},
{
"created": "Tue, 19 Sep 2023 13:54:24 GMT",
"version": "v5"
}
] | 2024-02-14 | [
[
"van Iersel",
"Leo",
""
],
[
"Jones",
"Mark",
""
],
[
"Weller",
"Mathias",
""
]
] | Given a rooted, binary phylogenetic network and a rooted, binary phylogenetic tree, can the tree be embedded into the network? This problem, called \textsc{Tree Containment}, arises when validating networks constructed by phylogenetic inference methods. We present the first algorithm for (rooted) \textsc{Tree Containment} using the treewidth $t$ of the input network $N$ as parameter, showing that the problem can be solved in $2^{O(t^2)}\cdot|N|$ time and space. |
2212.09729 | Filip Novicky | Filip Novicky, Thomas Parr, Karl Friston, M. Berk Mirza, Noor Sajid | Bistable perception, precision and neuromodulation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bistable perception follows from observing a static, ambiguous (visual)
stimulus with two possible interpretations. Here, we present an active
(Bayesian) inference account of bistable perception and posit that perceptual
transitions between different interpretations (i.e., inferences) of the same
stimulus ensue from specific eye movements that shift the focus to a different
visual feature. Formally, these inferences are a consequence of precision
control that determines how confident beliefs are and changes the frequency with
which one can perceive - and alternate between - two distinct percepts. We
hypothesised that there are multiple, but distinct, ways in which precision
modulation can interact to give rise to a similar frequency of bistable
perception. We validated this using numerical simulations of the Necker's cube
paradigm and demonstrate the multiple routes that underwrite the frequency of
perceptual alternation. Our results provide an (enactive) computational account
of the intricate precision balance underwriting bistable perception.
Importantly, these precision parameters can be considered the computational
homologues of particular neurotransmitters - i.e., acetylcholine,
noradrenaline, dopamine - that have been previously implicated in controlling
bistable perception, providing a computational link between the neurochemistry
and perception.
| [
{
"created": "Mon, 19 Dec 2022 18:52:09 GMT",
"version": "v1"
}
] | 2022-12-20 | [
[
"Novicky",
"Filip",
""
],
[
"Parr",
"Thomas",
""
],
[
"Friston",
"Karl",
""
],
[
"Mirza",
"M. Berk",
""
],
[
"Sajid",
"Noor",
""
]
] | Bistable perception follows from observing a static, ambiguous (visual) stimulus with two possible interpretations. Here, we present an active (Bayesian) inference account of bistable perception and posit that perceptual transitions between different interpretations (i.e., inferences) of the same stimulus ensue from specific eye movements that shift the focus to a different visual feature. Formally, these inferences are a consequence of precision control that determines how confident beliefs are and changes the frequency with which one can perceive - and alternate between - two distinct percepts. We hypothesised that there are multiple, but distinct, ways in which precision modulation can interact to give rise to a similar frequency of bistable perception. We validated this using numerical simulations of the Necker's cube paradigm and demonstrate the multiple routes that underwrite the frequency of perceptual alternation. Our results provide an (enactive) computational account of the intricate precision balance underwriting bistable perception. Importantly, these precision parameters can be considered the computational homologues of particular neurotransmitters - i.e., acetylcholine, noradrenaline, dopamine - that have been previously implicated in controlling bistable perception, providing a computational link between the neurochemistry and perception. |
1601.06234 | Derdei Bichara | Derdei Bichara, Susan A. Holechek, Jorge Velazquez-Castro, Anarina L.
Murillo and Carlos Castillo-Chavez | On the Dynamics of Dengue Virus type 2 with Residence Times and Vertical
Transmission | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A two-patch mathematical model of Dengue virus type 2 (DENV-2) that accounts
for vectors' vertical transmission and between-patch human dispersal is
introduced. Dispersal is modeled via a Lagrangian approach. A host-patch
residence-times basic reproduction number is derived and conditions under which
the disease dies out or persists are established. Analytical and numerical
results highlight the role of hosts' dispersal in mitigating or exacerbating
disease dynamics. The framework is used to explore dengue dynamics using, as a
starting point, the 2002 outbreak in the state of Colima, Mexico.
| [
{
"created": "Sat, 23 Jan 2016 05:31:56 GMT",
"version": "v1"
}
] | 2016-01-26 | [
[
"Bichara",
"Derdei",
""
],
[
"Holechek",
"Susan A.",
""
],
[
"Velazquez-Castro",
"Jorge",
""
],
[
"Murillo",
"Anarina L.",
""
],
[
"Castillo-Chavez",
"Carlos",
""
]
] | A two-patch mathematical model of Dengue virus type 2 (DENV-2) that accounts for vectors' vertical transmission and between-patch human dispersal is introduced. Dispersal is modeled via a Lagrangian approach. A host-patch residence-times basic reproduction number is derived and conditions under which the disease dies out or persists are established. Analytical and numerical results highlight the role of hosts' dispersal in mitigating or exacerbating disease dynamics. The framework is used to explore dengue dynamics using, as a starting point, the 2002 outbreak in the state of Colima, Mexico. |
1208.5035 | Chandra Wickramasinghe | N. Chandra Wickramasinghe | DNA sequencing and predictions of the cosmic theory of life | 10 pages, 2 figures, accepted for publication Astrophysics and Space
Science 2012 | null | 10.1007/s10509-012-1227-y | null | q-bio.OT astro-ph.GA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of cometary panspermia, developed by the late Sir Fred Hoyle and
the present author, argues that life originated cosmically as a unique event in
one of a great multitude of comets or planetary bodies in the Universe. Life on
Earth did not originate here but was introduced by impacting comets, and its
further evolution was driven by the subsequent acquisition of cosmically
derived genes. Explicit predictions of this theory published in 1979-1981,
stating how the acquisition of new genes drives evolution, are compared with
recent developments in relation to horizontal gene transfer, and the role of
retroviruses in evolution. Precisely-stated predictions of the theory of
cometary panspermia are shown to have been verified.
| [
{
"created": "Fri, 24 Aug 2012 19:03:43 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Wickramasinghe",
"N. Chandra",
""
]
] | The theory of cometary panspermia, developed by the late Sir Fred Hoyle and the present author, argues that life originated cosmically as a unique event in one of a great multitude of comets or planetary bodies in the Universe. Life on Earth did not originate here but was introduced by impacting comets, and its further evolution was driven by the subsequent acquisition of cosmically derived genes. Explicit predictions of this theory published in 1979-1981, stating how the acquisition of new genes drives evolution, are compared with recent developments in relation to horizontal gene transfer, and the role of retroviruses in evolution. Precisely-stated predictions of the theory of cometary panspermia are shown to have been verified. |
1606.02860 | Ines Thiele | Maike K. Aurich, Ronan M.T. Fleming, and Ines Thiele | MetaboTools: A comprehensive toolbox for analysis of genome-scale
metabolic models | null | null | null | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolomic data sets provide a direct read-out of cellular phenotypes and
are increasingly generated to study biological questions. Our previous work
revealed the potential of analyzing extracellular metabolomic data in the
context of the metabolic model using constraint-based modeling. Through this
work, which consists of a protocol, a toolbox, and tutorials of two use cases,
we make our methods available to the broader scientific community. The protocol
describes, in a step-wise manner, the workflow of data integration and
computational analysis. The MetaboTools comprise the Matlab code required to
complete the workflow described in the protocol. Tutorials explain the
computational steps for integration of two different data sets and demonstrate
a comprehensive set of methods for the computational analysis of metabolic
models and stratification thereof into different phenotypes. The presented
workflow supports integrative analysis of multiple omics data sets.
Importantly, all analysis tools can be applied to metabolic models without
performing the entire workflow. Taken together, this protocol constitutes a
comprehensive guide to the intra-model analysis of extracellular metabolomic
data and a resource offering a broad set of computational analysis tools for a
wide biomedical and non-biomedical research community.
| [
{
"created": "Thu, 9 Jun 2016 08:20:15 GMT",
"version": "v1"
}
] | 2016-06-10 | [
[
"Aurich",
"Maike K.",
""
],
[
"Fleming",
"Ronan M. T.",
""
],
[
"Thiele",
"Ines",
""
]
] | Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Our previous work revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. Through this work, which consists of a protocol, a toolbox, and tutorials of two use cases, we make our methods available to the broader scientific community. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, this protocol constitutes a comprehensive guide to the intra-model analysis of extracellular metabolomic data and a resource offering a broad set of computational analysis tools for a wide biomedical and non-biomedical research community. |
2401.00014 | Marco Frasca | J. Gliozzo, G. Marin\`o, A. Bonometti, M. Frasca and D. Malchiodi | Resource-Limited Automated Ki67 Index Estimation in Breast Cancer | null | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction of tumor progression and chemotherapy response has been
recently tackled by exploiting Tumor Infiltrating Lymphocytes (TILs) and the
nuclear protein Ki67 as prognostic factors. Recently, deep neural networks
(DNNs) have been shown to achieve top results in estimating Ki67 expression and
simultaneous determination of intratumoral TILs score in breast cancer cells.
However, over the last ten years the extraordinary progress of deep models has
been accompanied by an equally rapid growth in their resource demands. The exorbitant
computational costs required to query (and in some cases also to store) a deep
model represent a strong limitation in resource-limited contexts, like that of
IoT-based applications to support healthcare personnel. To this end, we propose
a resource consumption-aware DNN for the effective estimate of the percentage
of Ki67-positive cells in breast cancer screenings. Our approach reduced memory
and disk usage by up to 75% and 89%, respectively, and energy consumption by up
to 1.5x, while preserving or improving the overall accuracy of a
benchmark state-of-the-art solution. Encouraged by such positive results, we
developed and structured the adopted framework for general-purpose use, along
with a public software repository to support it.
| [
{
"created": "Fri, 22 Dec 2023 16:33:03 GMT",
"version": "v1"
}
] | 2024-01-02 | [
[
"Gliozzo",
"J.",
""
],
[
"Marinò",
"G.",
""
],
[
"Bonometti",
"A.",
""
],
[
"Frasca",
"M.",
""
],
[
"Malchiodi",
"D.",
""
]
] | The prediction of tumor progression and chemotherapy response has been recently tackled by exploiting Tumor Infiltrating Lymphocytes (TILs) and the nuclear protein Ki67 as prognostic factors. Recently, deep neural networks (DNNs) have been shown to achieve top results in estimating Ki67 expression and simultaneous determination of intratumoral TILs score in breast cancer cells. However, over the last ten years the extraordinary progress of deep models has been accompanied by an equally rapid growth in their resource demands. The exorbitant computational costs required to query (and in some cases also to store) a deep model represent a strong limitation in resource-limited contexts, like that of IoT-based applications to support healthcare personnel. To this end, we propose a resource consumption-aware DNN for the effective estimate of the percentage of Ki67-positive cells in breast cancer screenings. Our approach reduced memory and disk usage by up to 75% and 89%, respectively, and energy consumption by up to 1.5x, while preserving or improving the overall accuracy of a benchmark state-of-the-art solution. Encouraged by such positive results, we developed and structured the adopted framework for general-purpose use, along with a public software repository to support it. |
1306.2243 | Yuri Tani Utsunomiya | Yuri Tani Utsunomiya, Rodrigo Vitorio Alonso, Adriana Santana do
Carmo, Francine Campagnari, Jos\'e Antonio Vinsintin, Jos\'e Fernando Garcia | mendelFix: a Perl script for checking Mendelian errors in high density
SNP data of trio designs | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we present mendelFix, a Perl script for checking Mendelian errors in
genome-wide SNP data of trio designs. The program takes 12-recoded PLINK PED
and MAP files as input to calculate a series of summary statistics for
Mendelian errors, sets missing offspring genotypes that present Mendelian
inconsistencies, and implements a simplistic procedure to infer missing
genotypes using parent information. The program can be easily incorporated in
any pipeline for family-based SNP data analysis, and is distributed as free
software under the GNU General Public License.
| [
{
"created": "Mon, 10 Jun 2013 16:26:10 GMT",
"version": "v1"
}
] | 2013-06-11 | [
[
"Utsunomiya",
"Yuri Tani",
""
],
[
"Alonso",
"Rodrigo Vitorio",
""
],
[
"Carmo",
"Adriana Santana do",
""
],
[
"Campagnari",
"Francine",
""
],
[
"Vinsintin",
"José Antonio",
""
],
[
"Garcia",
"José Fernando",
""
]
] | Here we present mendelFix, a Perl script for checking Mendelian errors in genome-wide SNP data of trio designs. The program takes 12-recoded PLINK PED and MAP files as input to calculate a series of summary statistics for Mendelian errors, sets missing offspring genotypes that present Mendelian inconsistencies, and implements a simplistic procedure to infer missing genotypes using parent information. The program can be easily incorporated in any pipeline for family-based SNP data analysis, and is distributed as free software under the GNU General Public License. |
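The core consistency rule the mendelFix row describes can be sketched in a few lines. This is an illustrative Python re-implementation of the Mendelian-error check and the "set inconsistent offspring genotypes to missing" step, not the Perl tool itself; coding genotypes as alt-allele counts (0/1/2, with -1 for missing) is an assumption of this sketch rather than the tool's 12-recoded PLINK format:

```python
def transmissible(g):
    # Alleles a parent with genotype g (alt-allele count) can transmit
    return {0: {0}, 1: {0, 1}, 2: {1}}[g]

def check_trio(father, mother, child, missing=-1):
    """Return the offspring genotype, or `missing` when the trio is
    Mendelian-inconsistent (mirroring the 'set to missing' step).
    Missing parental genotypes are treated as uninformative."""
    if child == missing:
        return missing
    if father == missing or mother == missing:
        return child
    possible = {f + m for f in transmissible(father)
                for m in transmissible(mother)}
    return child if child in possible else missing
```

For example, check_trio(0, 0, 1) returns -1, since two homozygous-reference parents cannot produce a heterozygous child, while check_trio(2, 0, 1) keeps the child genotype 1.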
2006.05572 | MohammadReza Ebrahimi | MohammadReza Ebrahimi, Navona Calarco, Kieran Campbell, Colin Hawco,
Aristotle Voineskos, Ashish Khisti | Time-Resolved fMRI Shared Response Model using Gaussian Process Factor
Analysis | null | null | null | null | q-bio.NC cs.LG eess.IV stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-subject fMRI studies are challenging due to the high variability of
both brain anatomy and functional brain topographies across participants. An
effective way of aggregating multi-subject fMRI data is to extract a shared
representation that filters out unwanted variability among subjects. Some
recent work has implemented probabilistic models to extract a shared
representation in task fMRI. In the present work, we improve upon these models
by incorporating temporal information in the common latent structures. We
introduce a new model, Shared Gaussian Process Factor Analysis (S-GPFA), that
discovers shared latent trajectories and subject-specific functional
topographies, while modelling temporal correlation in fMRI data. We demonstrate
the efficacy of our model in revealing ground truth latent structures using
simulated data, and replicate experimental performance of time-segment matching
and inter-subject similarity on the publicly available Raider and Sherlock
datasets. We further test the utility of our model by analyzing its learned
model parameters in the large multi-site SPINS dataset, on a social cognition
task from participants with and without schizophrenia.
| [
{
"created": "Wed, 10 Jun 2020 00:15:01 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Sep 2020 01:13:56 GMT",
"version": "v2"
}
] | 2020-09-08 | [
[
"Ebrahimi",
"MohammadReza",
""
],
[
"Calarco",
"Navona",
""
],
[
"Campbell",
"Kieran",
""
],
[
"Hawco",
"Colin",
""
],
[
"Voineskos",
"Aristotle",
""
],
[
"Khisti",
"Ashish",
""
]
] | Multi-subject fMRI studies are challenging due to the high variability of both brain anatomy and functional brain topographies across participants. An effective way of aggregating multi-subject fMRI data is to extract a shared representation that filters out unwanted variability among subjects. Some recent work has implemented probabilistic models to extract a shared representation in task fMRI. In the present work, we improve upon these models by incorporating temporal information in the common latent structures. We introduce a new model, Shared Gaussian Process Factor Analysis (S-GPFA), that discovers shared latent trajectories and subject-specific functional topographies, while modelling temporal correlation in fMRI data. We demonstrate the efficacy of our model in revealing ground truth latent structures using simulated data, and replicate experimental performance of time-segment matching and inter-subject similarity on the publicly available Raider and Sherlock datasets. We further test the utility of our model by analyzing its learned model parameters in the large multi-site SPINS dataset, on a social cognition task from participants with and without schizophrenia. |
2207.00813 | Hejie Cui | Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, Carl Yang | Interpretable Graph Neural Networks for Connectome-Based Brain Disorder
Analysis | Previous version presented at ICML-IMLH 2021 (no proceedings,
archived at 2107.05097); this version is accepted to MICCAI 2022 | null | null | null | q-bio.NC cs.AI cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human brains lie at the core of complex neurobiological systems, where the
neurons, circuits, and subsystems interact in enigmatic ways. Understanding the
structural and functional mechanisms of the brain has long been an intriguing
pursuit for neuroscience research and clinical disorder therapy. Mapping the
connections of the human brain as a network is one of the most pervasive
paradigms in neuroscience. Graph Neural Networks (GNNs) have recently emerged
as a potential method for modeling complex network data. However, deep models
have low interpretability, which prevents their usage in
decision-critical contexts like healthcare. To bridge this gap, we propose an
interpretable framework to analyze disorder-specific Regions of Interest (ROIs)
and prominent connections. The proposed framework consists of two modules: a
brain-network-oriented backbone model for disease prediction and a globally
shared explanation generator that highlights disorder-specific biomarkers
including salient ROIs and important connections. We conduct experiments on
three real-world datasets of brain disorders. The results verify that our
framework can obtain outstanding performance and also identify meaningful
biomarkers. All code for this work is available at
https://github.com/HennyJie/IBGNN.git.
| [
{
"created": "Thu, 30 Jun 2022 08:02:05 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Jul 2022 07:34:02 GMT",
"version": "v2"
}
] | 2022-07-26 | [
[
"Cui",
"Hejie",
""
],
[
"Dai",
"Wei",
""
],
[
"Zhu",
"Yanqiao",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"He",
"Lifang",
""
],
[
"Yang",
"Carl",
""
]
] | Human brains lie at the core of complex neurobiological systems, where the neurons, circuits, and subsystems interact in enigmatic ways. Understanding the structural and functional mechanisms of the brain has long been an intriguing pursuit for neuroscience research and clinical disorder therapy. Mapping the connections of the human brain as a network is one of the most pervasive paradigms in neuroscience. Graph Neural Networks (GNNs) have recently emerged as a potential method for modeling complex network data. However, deep models have low interpretability, which prevents their usage in decision-critical contexts like healthcare. To bridge this gap, we propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections. The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator that highlights disorder-specific biomarkers including salient ROIs and important connections. We conduct experiments on three real-world datasets of brain disorders. The results verify that our framework can obtain outstanding performance and also identify meaningful biomarkers. All code for this work is available at https://github.com/HennyJie/IBGNN.git. |
2011.10998 | Maja Tr\k{e}bacz | Maja Tr\k{e}bacz, Zohreh Shams, Mateja Jamnik, Paul Scherer, Nikola
Simidjievski, Helena Andres Terre, Pietro Li\`o | Using ontology embeddings for structural inductive bias in gene
expression data analysis | 4 pages + 2 page references, 15th Machine Learning in Computational
Biology (MLCB) meeting, 2020 | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Stratifying cancer patients based on their gene expression levels allows
improving diagnosis, survival analysis and treatment planning. However, such
data is extremely high-dimensional as it contains expression values for over
20000 genes per patient, and the number of samples in the datasets is low. To
deal with such settings, we propose to incorporate prior biological knowledge
about genes from ontologies into the machine learning system for the task of
patient classification given their gene expression data. We use ontology
embeddings that capture the semantic similarities between the genes to direct a
Graph Convolutional Network, and therefore sparsify the network connections. We
show this approach provides an advantage for predicting clinical targets from
high-dimensional low-sample data.
| [
{
"created": "Sun, 22 Nov 2020 12:13:29 GMT",
"version": "v1"
}
] | 2020-11-24 | [
[
"Trębacz",
"Maja",
""
],
[
"Shams",
"Zohreh",
""
],
[
"Jamnik",
"Mateja",
""
],
[
"Scherer",
"Paul",
""
],
[
"Simidjievski",
"Nikola",
""
],
[
"Terre",
"Helena Andres",
""
],
[
"Liò",
"Pietro",
""
]
] | Stratifying cancer patients based on their gene expression levels allows improving diagnosis, survival analysis and treatment planning. However, such data is extremely high-dimensional as it contains expression values for over 20000 genes per patient, and the number of samples in the datasets is low. To deal with such settings, we propose to incorporate prior biological knowledge about genes from ontologies into the machine learning system for the task of patient classification given their gene expression data. We use ontology embeddings that capture the semantic similarities between the genes to direct a Graph Convolutional Network, and therefore sparsify the network connections. We show this approach provides an advantage for predicting clinical targets from high-dimensional low-sample data.
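The ontology-embedding sparsification idea in the record above can be illustrated with a minimal sketch. The function name, the cosine-similarity thresholding rule, and the toy embeddings are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def sparsify_adjacency(embeddings, threshold=0.5):
    """Build a sparse gene-gene adjacency by thresholding cosine
    similarity between ontology embeddings (illustrative rule, not
    the paper's exact construction)."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sim = unit @ unit.T                      # cosine similarity matrix
    adj = (sim >= threshold).astype(float)   # keep only similar gene pairs
    np.fill_diagonal(adj, 1.0)               # self-loops, as in common GCN practice
    return adj

# toy example: 3 "genes" with 2-d embeddings; the first two are similar
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
adj = sparsify_adjacency(emb, threshold=0.8)
```

The resulting binary adjacency could then replace a dense graph in a Graph Convolutional Network layer, restricting message passing to semantically related genes.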
1406.0428 | Matthew Hall Mr | Matthew Hall and Andrew Rambaut | Epidemic reconstruction in a phylogenetics framework: transmission trees
as partitions | 40 pages, 3 figures | null | 10.1371/journal.pcbi.1004613 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of transmission trees for epidemics from genetic data has
been the subject of some recent interest. It has been demonstrated that the
transmission tree structure can be investigated by augmenting internal nodes of
a phylogenetic tree constructed using pathogen sequences from the epidemic with
information about the host that held the corresponding lineage. In this paper,
we note that this augmentation is equivalent to a correspondence between
transmission trees and partitions of the phylogenetic tree into connected
subtrees each containing one tip, and provide a framework for Markov Chain
Monte Carlo inference of phylogenies that are partitioned in this way, giving a
new method to co-estimate both trees. The procedure is integrated in the
existing phylogenetic inference package BEAST.
| [
{
"created": "Mon, 2 Jun 2014 16:22:40 GMT",
"version": "v1"
}
] | 2016-01-07 | [
[
"Hall",
"Matthew",
""
],
[
"Rambaut",
"Andrew",
""
]
] | The reconstruction of transmission trees for epidemics from genetic data has been the subject of some recent interest. It has been demonstrated that the transmission tree structure can be investigated by augmenting internal nodes of a phylogenetic tree constructed using pathogen sequences from the epidemic with information about the host that held the corresponding lineage. In this paper, we note that this augmentation is equivalent to a correspondence between transmission trees and partitions of the phylogenetic tree into connected subtrees each containing one tip, and provide a framework for Markov Chain Monte Carlo inference of phylogenies that are partitioned in this way, giving a new method to co-estimate both trees. The procedure is integrated in the existing phylogenetic inference package BEAST. |
1709.00053 | Sarbaz H. A. Khoshnaw | Sarbaz H. A. Khoshnaw | Dynamic Analysis of a Predator and Prey Model with Some Computational
Simulations | 14 pages, 7 figures | Journal of Applied Bioinformatics and Computational
Biology,6(2)2017 | 10.4172/2329-9533.1000137 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mathematical modelling and numerical simulations of interaction populations
are crucial topics in systems biology. The interactions of ecological models
may occur among individuals of the same species or individuals of different
species. Describing the dynamics of such models occasionally requires some
techniques of model analysis. Choosing appropriate techniques of model analysis
is often a difficult task. We define a prey (mouse) and predator (cat) model.
The system is modelled by a pair of non-linear ordinary differential equations
using mass action law, under constant rates. A proper scaling is suggested to
minimize the number of parameters. More interestingly, we propose a homotopy
technique with n expanding parameters for finding some analytical approximate
solutions. Furthermore, using the local sensitivity method is another important
step forward in this study because it helps to identify critical model
parameters. Numerical simulations are provided using Matlab for different
parameters and initial conditions.
| [
{
"created": "Thu, 31 Aug 2017 19:40:30 GMT",
"version": "v1"
}
] | 2017-09-04 | [
[
"Khoshnaw",
"Sarbaz H. A.",
""
]
] | Mathematical modelling and numerical simulations of interaction populations are crucial topics in systems biology. The interactions of ecological models may occur among individuals of the same species or individuals of different species. Describing the dynamics of such models occasionally requires some techniques of model analysis. Choosing appropriate techniques of model analysis is often a difficult task. We define a prey (mouse) and predator (cat) model. The system is modelled by a pair of non-linear ordinary differential equations using mass action law, under constant rates. A proper scaling is suggested to minimize the number of parameters. More interestingly, we propose a homotopy technique with n expanding parameters for finding some analytical approximate solutions. Furthermore, using the local sensitivity method is another important step forward in this study because it helps to identify critical model parameters. Numerical simulations are provided using Matlab for different parameters and initial conditions.
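The pair of mass-action prey-predator ODEs described in the record above can be sketched with a generic Lotka-Volterra integration. The rates, initial conditions, and forward-Euler scheme are illustrative; the paper's scaled equations and Matlab code are not reproduced here:

```python
import numpy as np

def simulate_prey_predator(x0, y0, a, b, c, d, dt=0.001, steps=10000):
    """Forward-Euler integration of mass-action prey-predator ODEs:
        dx/dt = a*x - b*x*y   (prey: birth minus predation)
        dy/dt = c*x*y - d*y   (predator: growth from predation minus death)
    All rates are illustrative placeholders, not the paper's values."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (c * x * y - d * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return np.array(traj)

# one oscillation around the coexistence equilibrium (x* = d/c, y* = a/b)
traj = simulate_prey_predator(x0=10.0, y0=5.0, a=1.0, b=0.1, c=0.05, d=0.5)
```

Starting exactly at the equilibrium (x*, y*) = (10, 10) with these rates, both derivatives vanish and the trajectory stays fixed, which is a quick sanity check on the right-hand sides.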
2209.09941 | Truong Son Hy | Nhat Khang Ngo and Truong Son Hy and Risi Kondor | Predicting Drug-Drug Interactions using Deep Generative Models on Graphs | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latent representations of drugs and their targets produced by contemporary
graph autoencoder-based models have proved useful in predicting many types of
node-pair interactions on large networks, including drug-drug, drug-target, and
target-target interactions. However, most existing approaches model the node's
latent spaces in which node distributions are rigid and disjoint; these
limitations hinder the methods from generating new links among pairs of nodes.
In this paper, we present the effectiveness of variational graph autoencoders
(VGAE) in modeling latent node representations on multimodal networks. Our
approach can produce flexible latent spaces for each node type of the
multimodal graph; the embeddings are used later for predicting links among node
pairs under different edge types. To further enhance the models' performance,
we suggest a new method that concatenates Morgan fingerprints, which capture
the molecular structures of each drug, with their latent embeddings before
passing them to the decoding stage for link prediction. Our proposed model
shows competitive results on two multimodal networks: (1) a multi-graph
consisting of drug and protein nodes, and (2) a multi-graph consisting of drug
and cell line nodes. Our source code is publicly available at
https://github.com/HySonLab/drug-interactions.
| [
{
"created": "Wed, 14 Sep 2022 14:27:32 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Oct 2022 04:53:57 GMT",
"version": "v2"
},
{
"created": "Sun, 30 Oct 2022 16:31:26 GMT",
"version": "v3"
}
] | 2022-11-01 | [
[
"Ngo",
"Nhat Khang",
""
],
[
"Hy",
"Truong Son",
""
],
[
"Kondor",
"Risi",
""
]
] | Latent representations of drugs and their targets produced by contemporary graph autoencoder-based models have proved useful in predicting many types of node-pair interactions on large networks, including drug-drug, drug-target, and target-target interactions. However, most existing approaches model the node's latent spaces in which node distributions are rigid and disjoint; these limitations hinder the methods from generating new links among pairs of nodes. In this paper, we present the effectiveness of variational graph autoencoders (VGAE) in modeling latent node representations on multimodal networks. Our approach can produce flexible latent spaces for each node type of the multimodal graph; the embeddings are used later for predicting links among node pairs under different edge types. To further enhance the models' performance, we suggest a new method that concatenates Morgan fingerprints, which capture the molecular structures of each drug, with their latent embeddings before passing them to the decoding stage for link prediction. Our proposed model shows competitive results on two multimodal networks: (1) a multi-graph consisting of drug and protein nodes, and (2) a multi-graph consisting of drug and cell line nodes. Our source code is publicly available at https://github.com/HySonLab/drug-interactions.
1809.08273 | Ayoub Hajlaoui | Ayoub Hajlaoui, Mohamed Chetouani, Slim Essid | EEG-based Inter-Subject Correlation Schemes in a Stimuli-Shared
Framework: Interplay with Valence and Arousal | 9 pages, 12 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Affective computing is confronted with high inter-subject variability in both
emotional and physiological responses to a given stimulus. In a stimuli-shared
framework, that is to say for different subjects who watch the same stimuli,
Inter-Subject Correlation (ISC) measured from Electroencephalographic (EEG)
recordings characterizes the correlations between the respective signals at the
different EEG channels. In order to investigate the interplay between ISC and
emotion, we propose to study the effect of valence and arousal on the ISC
score. To this end, we exploited various computational schemes corresponding to
different subsets of the dataset: all the data, stimulus-wise, subject
pairwise, and both stimulus-wise and subject pairwise. We thus applied these
schemes to the HCI MAHNOB and DEAP databases. Our results suggest that the ISC
score decreases with valence and increases with arousal, as already shown by
previous results on functional MRI.
| [
{
"created": "Mon, 17 Sep 2018 10:01:17 GMT",
"version": "v1"
}
] | 2018-09-25 | [
[
"Hajlaoui",
"Ayoub",
""
],
[
"Chetouani",
"Mohamed",
""
],
[
"Essid",
"Slim",
""
]
] | Affective computing is confronted with high inter-subject variability in both emotional and physiological responses to a given stimulus. In a stimuli-shared framework, that is to say for different subjects who watch the same stimuli, Inter-Subject Correlation (ISC) measured from Electroencephalographic (EEG) recordings characterizes the correlations between the respective signals at the different EEG channels. In order to investigate the interplay between ISC and emotion, we propose to study the effect of valence and arousal on the ISC score. To this end, we exploited various computational schemes corresponding to different subsets of the dataset: all the data, stimulus-wise, subject pairwise, and both stimulus-wise and subject pairwise. We thus applied these schemes to the HCI MAHNOB and DEAP databases. Our results suggest that the ISC score decreases with valence and increases with arousal, as already shown by previous results on functional MRI.
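The per-channel ISC score from the record above can be sketched as an average pairwise Pearson correlation across subjects. This is a common ISC definition; the study's exact pipeline and computational schemes may differ:

```python
import numpy as np

def isc_per_channel(data):
    """Inter-Subject Correlation per EEG channel.

    data: array of shape (subjects, channels, samples), recorded while
    all subjects watch the same stimulus. For each channel, Pearson-
    correlate every subject pair's time series and average over pairs."""
    n_subj, n_chan, _ = data.shape
    scores = np.zeros(n_chan)
    for ch in range(n_chan):
        corrs = []
        for i in range(n_subj):
            for j in range(i + 1, n_subj):
                r = np.corrcoef(data[i, ch], data[j, ch])[0, 1]
                corrs.append(r)
        scores[ch] = np.mean(corrs)
    return scores
```

Subject-pairwise or stimulus-wise schemes, as in the abstract, amount to restricting which pairs or recordings enter the average.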
2404.10954 | David Benrimoh | Albert Powers, Philip Angelos, Alexandria Bond, Emily Farina, Carolyn
Fredericks, Jay Gandhi, Maximillian Greenwald, Gabriela Hernandez-Busot,
Gabriel Hosein, Megan Kelley, Catalina Mourgues, William Palmer, Julia
Rodriguez-Sanchez, Rashina Seabury, Silmilly Toribio, Raina Vin, Jeremy
Weleff, David Benrimoh | A computational account of the development and evolution of psychotic
symptoms | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The mechanisms of psychotic symptoms like hallucinations and delusions are
often investigated in fully-formed illness, well after symptoms emerge. These
investigations have yielded key insights, but are not well-positioned to reveal
the dynamic forces underlying symptom formation itself. Understanding symptom
development over time would allow us to identify steps in the
pathophysiological process leading to psychosis, shifting the focus of
psychiatric intervention from symptom alleviation to prevention. We propose a
model for understanding the emergence of psychotic symptoms within the context
of an adaptive, developing neural system. We will make the case for a
pathophysiological process that begins with cortical hyperexcitability and
bottom-up noise transmission, which engenders inappropriate belief formation
via aberrant prediction error signaling. We will argue that this bottom-up
noise drives learning about the (im)precision of new incoming sensory
information because of diminished signal-to-noise ratio, causing an adaptive
relative over-reliance on prior beliefs. This over-reliance on priors
predisposes to hallucinations and covaries with hallucination severity. An
over-reliance on priors may also lead to increased conviction in the beliefs
generated by bottom-up noise and drive movement toward conversion to psychosis.
We will identify predictions of our model at each stage, examine evidence to
support or refute those predictions, and propose experiments that could falsify
or help select between alternative elements of the overall model. Nesting
computational abnormalities within longitudinal development allows us to
account for hidden dynamics among the mechanisms driving symptom formation and
to view established symptomatology as a point of equilibrium among competing
biological forces.
| [
{
"created": "Tue, 16 Apr 2024 23:30:57 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Powers",
"Albert",
""
],
[
"Angelos",
"Philip",
""
],
[
"Bond",
"Alexandria",
""
],
[
"Farina",
"Emily",
""
],
[
"Fredericks",
"Carolyn",
""
],
[
"Gandhi",
"Jay",
""
],
[
"Greenwald",
"Maximillian",
""
],
[
"Hernandez-Busot",
"Gabriela",
""
],
[
"Hosein",
"Gabriel",
""
],
[
"Kelley",
"Megan",
""
],
[
"Mourgues",
"Catalina",
""
],
[
"Palmer",
"William",
""
],
[
"Rodriguez-Sanchez",
"Julia",
""
],
[
"Seabury",
"Rashina",
""
],
[
"Toribio",
"Silmilly",
""
],
[
"Vin",
"Raina",
""
],
[
"Weleff",
"Jeremy",
""
],
[
"Benrimoh",
"David",
""
]
] | The mechanisms of psychotic symptoms like hallucinations and delusions are often investigated in fully-formed illness, well after symptoms emerge. These investigations have yielded key insights, but are not well-positioned to reveal the dynamic forces underlying symptom formation itself. Understanding symptom development over time would allow us to identify steps in the pathophysiological process leading to psychosis, shifting the focus of psychiatric intervention from symptom alleviation to prevention. We propose a model for understanding the emergence of psychotic symptoms within the context of an adaptive, developing neural system. We will make the case for a pathophysiological process that begins with cortical hyperexcitability and bottom-up noise transmission, which engenders inappropriate belief formation via aberrant prediction error signaling. We will argue that this bottom-up noise drives learning about the (im)precision of new incoming sensory information because of diminished signal-to-noise ratio, causing an adaptive relative over-reliance on prior beliefs. This over-reliance on priors predisposes to hallucinations and covaries with hallucination severity. An over-reliance on priors may also lead to increased conviction in the beliefs generated by bottom-up noise and drive movement toward conversion to psychosis. We will identify predictions of our model at each stage, examine evidence to support or refute those predictions, and propose experiments that could falsify or help select between alternative elements of the overall model. Nesting computational abnormalities within longitudinal development allows us to account for hidden dynamics among the mechanisms driving symptom formation and to view established symptomatology as a point of equilibrium among competing biological forces. |
2002.00448 | Ekkehard Ullner | Afifurrahman and Ekkehard Ullner and Antonio Politi | Stability of synchronous states in sparse neuronal networks | 9 pages, 7 figures | null | null | null | q-bio.NC math.DS nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stability of synchronous states is analysed in the context of two
populations of inhibitory and excitatory neurons, characterized by different
pulse-widths. The problem is reduced to that of determining the eigenvalues of
a suitable class of sparse random matrices, randomness being a consequence of
the network structure. A detailed analysis, which also includes the study of
finite-amplitude perturbations, is performed in the limit of narrow pulses,
finding that the stability depends crucially on the relative pulse-width. This
has implications for the overall property of the asynchronous (balanced)
regime.
| [
{
"created": "Sun, 2 Feb 2020 18:29:36 GMT",
"version": "v1"
}
] | 2020-02-04 | [
[
"Afifurrahman",
"",
""
],
[
"Ullner",
"Ekkehard",
""
],
[
"Politi",
"Antonio",
""
]
] | The stability of synchronous states is analysed in the context of two populations of inhibitory and excitatory neurons, characterized by different pulse-widths. The problem is reduced to that of determining the eigenvalues of a suitable class of sparse random matrices, randomness being a consequence of the network structure. A detailed analysis, which also includes the study of finite-amplitude perturbations, is performed in the limit of narrow pulses, finding that the stability depends crucially on the relative pulse-width. This has implications for the overall property of the asynchronous (balanced) regime.
2404.10120 | Shahzeb Raja Noureen | Shahzeb Raja Noureen, Richard L. Mort, Christian A. Yates | Modelling adhesion in stochastic and mean-field models of cell migration | null | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Adhesion between cells plays an important role in many biological processes
such as tissue morphogenesis and homeostasis, wound healing and cancer cell
metastasis. From a mathematical perspective, adhesion between multiple cell
types has been previously analysed using discrete and continuum models
including the Cellular Potts models and partial differential equations (PDEs).
While these models can represent certain biological situations well, Cellular
Potts models can be computationally expensive and continuum models only capture
the macroscopic behaviour of a population of cells, ignoring stochasticity and
the discrete nature of cell dynamics. Cellular automaton models allow us to
address these problems and can be used for a wide variety of biological
systems. In this paper, we consider a cellular automaton approach and develop
an on-lattice agent-based model (ABM) for cell migration and adhesion in a
population composed of two cell types. By deriving and comparing the
corresponding PDEs to the ABM, we demonstrate that cell aggregation and cell
sorting are not possible in the PDE model. Therefore, we propose a set of
stochastic mean equations (SMEs) which better capture the behaviour of the ABM
in one and two dimensions.
| [
{
"created": "Mon, 15 Apr 2024 20:18:44 GMT",
"version": "v1"
}
] | 2024-04-17 | [
[
"Noureen",
"Shahzeb Raja",
""
],
[
"Mort",
"Richard L.",
""
],
[
"Yates",
"Christian A.",
""
]
] | Adhesion between cells plays an important role in many biological processes such as tissue morphogenesis and homeostasis, wound healing and cancer cell metastasis. From a mathematical perspective, adhesion between multiple cell types has been previously analysed using discrete and continuum models including the Cellular Potts models and partial differential equations (PDEs). While these models can represent certain biological situations well, Cellular Potts models can be computationally expensive and continuum models only capture the macroscopic behaviour of a population of cells, ignoring stochasticity and the discrete nature of cell dynamics. Cellular automaton models allow us to address these problems and can be used for a wide variety of biological systems. In this paper, we consider a cellular automaton approach and develop an on-lattice agent-based model (ABM) for cell migration and adhesion in a population composed of two cell types. By deriving and comparing the corresponding PDEs to the ABM, we demonstrate that cell aggregation and cell sorting are not possible in the PDE model. Therefore, we propose a set of stochastic mean equations (SMEs) which better capture the behaviour of the ABM in one and two dimensions. |
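The on-lattice agent-based model in the record above can be illustrated with a minimal adhesion-modulated move rule. The 1-d periodic lattice and the (1 - alpha)**k acceptance probability are illustrative assumptions, not the paper's exact ABM:

```python
import random

def attempt_move(lattice, pos, alpha):
    """One adhesion-modulated move attempt on a 1-d periodic lattice.

    lattice: list where 0 = empty and 1/2 mark the two cell types.
    A cell at `pos` picks a random neighbouring site; the move succeeds
    only if that site is empty, and then with probability (1 - alpha)**k,
    where k is the number of occupied neighbours (adhesive bonds the
    cell must break). Returns the cell's new position."""
    n = len(lattice)
    left, right = (pos - 1) % n, (pos + 1) % n
    k = (lattice[left] != 0) + (lattice[right] != 0)   # bonds to break
    target = random.choice([left, right])
    if lattice[target] != 0:
        return pos                                      # target occupied: abort
    if random.random() < (1 - alpha) ** k:
        lattice[target], lattice[pos] = lattice[pos], 0
        return target
    return pos
```

With alpha = 0 movement is unbiased (as in a plain exclusion process), while alpha close to 1 makes cells with neighbours nearly immobile, which is the qualitative mechanism behind aggregation in adhesive ABMs.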
2404.06515 | Bryan Hernandez | Bryan S. Hernandez, Patrick Vincent N. Lubenia, Eduardo R. Mendoza | Embedding-based comparison of reaction networks of Wnt signaling | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces a new method for comparing two reaction networks of the
same or closely related systems through their embedded networks in terms of the
shared set of species. Hence, we call this method the Common Species Embedded
Networks (CSEN) analysis. Using this approach, we conduct a comparison of
existing reaction networks associated with Wnt signaling models (Lee, Schmitz,
MacLean, and Feinberg) that we have identified. The analysis yields three
important results for these Wnt models. First, the CSEN analysis of the Lee
(mono-stationary) and Feinberg (multi-stationary) models shows a strong similarity,
justifying the study of the Feinberg model, which was a modified Lee model
constructed to study an important network property called "concordance". It
also challenge the absoluteness of discrimination of the models into
mono-stationarity versus multi-stationarity, which is a main result of Maclean
et al. (PNAS USA 2015). Second, the CSEN analysis provides evidence supporting
a strong similarity between the Schmitz and MacLean models, as indicated by the
"proximate equivalence" that we have identified. Third, the analysis
underscores the absence of a comparable relationship between the Feinberg and
MacLean models, highlighting distinctive differences between the two. Thus, our
approach could be a useful tool to compare mathematical models of the same or
closely related systems.
| [
{
"created": "Mon, 1 Apr 2024 06:26:18 GMT",
"version": "v1"
}
] | 2024-04-11 | [
[
"Hernandez",
"Bryan S.",
""
],
[
"Lubenia",
"Patrick Vincent N.",
""
],
[
"Mendoza",
"Eduardo R.",
""
]
] | This work introduces a new method for comparing two reaction networks of the same or closely related systems through their embedded networks in terms of the shared set of species. Hence, we call this method the Common Species Embedded Networks (CSEN) analysis. Using this approach, we conduct a comparison of existing reaction networks associated with Wnt signaling models (Lee, Schmitz, MacLean, and Feinberg) that we have identified. The analysis yields three important results for these Wnt models. First, the CSEN analysis of the Lee (mono-stationary) and Feinberg (multi-stationary) models shows a strong similarity, justifying the study of the Feinberg model, which was a modified Lee model constructed to study an important network property called "concordance". It also challenges the absoluteness of the discrimination of the models into mono-stationarity versus multi-stationarity, which is a main result of MacLean et al. (PNAS USA 2015). Second, the CSEN analysis provides evidence supporting a strong similarity between the Schmitz and MacLean models, as indicated by the "proximate equivalence" that we have identified. Third, the analysis underscores the absence of a comparable relationship between the Feinberg and MacLean models, highlighting distinctive differences between the two. Thus, our approach could be a useful tool to compare mathematical models of the same or closely related systems.
1210.7967 | Antonino Staiano | Francesco Camastra and Angelo Ciaramella and Antonino Staiano | A note on some mathematical models on the effects of Bt-maize exposure | 10 pages, 1 figure. Early draft of a paper accepted for publication
on Environmental and Ecological Statistics | null | 10.1007/s10651-013-0264-1 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some mathematical models for the estimation of the effects of Cry1Ab and
Cry1F Bt-maize exposure on biodiversity are discussed. Novel results about
these models are obtained and described in the note. The exact formula for the
proportion of the population that suffers mortality exposed to Cry1Ab pollen,
underlining its dependence on the margin from the Bt crop edge, is derived. In
addition, regarding Cry1F pollen effects, a procedure is proposed that, using a
probabilistic and statistical approach, computes the width of the non-Bt
stripes used as mitigation measures. Finally, a lower bound on the species
sensitivity of Lepidoptera has been derived using probabilistic
considerations.
| [
{
"created": "Tue, 30 Oct 2012 11:25:35 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2013 09:24:56 GMT",
"version": "v2"
}
] | 2013-08-28 | [
[
"Camastra",
"Francesco",
""
],
[
"Ciaramella",
"Angelo",
""
],
[
"Staiano",
"Antonino",
""
]
] | Some mathematical models for the estimation of the effects of Cry1Ab and Cry1F Bt-maize exposure on biodiversity are discussed. Novel results about these models are obtained and described in the note. The exact formula for the proportion of the population that suffers mortality exposed to Cry1Ab pollen, underlining its dependence on the margin from the Bt crop edge, is derived. In addition, regarding Cry1F pollen effects, a procedure is proposed that, using a probabilistic and statistical approach, computes the width of the non-Bt stripes used as mitigation measures. Finally, a lower bound on the species sensitivity of Lepidoptera has been derived using probabilistic considerations.
2407.01548 | Minglu Zhao | Minglu Zhao, Dehong Xu, Tao Gao | From Cognition to Computation: A Comparative Review of Human Attention
and Transformer Architectures | null | null | null | null | q-bio.OT cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention is a cornerstone of human cognition that facilitates the efficient
extraction of information in everyday life. Recent developments in artificial
intelligence like the Transformer architecture also incorporate the idea of
attention in model designs. However, despite the shared fundamental principle
of selectively attending to information, human attention and the Transformer
model display notable differences, particularly in their capacity constraints,
attention pathways, and intentional mechanisms. Our review aims to provide a
comparative analysis of these mechanisms from a cognitive-functional
perspective, thereby shedding light on several open research questions. The
exploration encourages interdisciplinary efforts to derive insights from human
attention mechanisms in the pursuit of developing more generalized artificial
intelligence.
| [
{
"created": "Thu, 25 Apr 2024 05:13:38 GMT",
"version": "v1"
}
] | 2024-07-03 | [
[
"Zhao",
"Minglu",
""
],
[
"Xu",
"Dehong",
""
],
[
"Gao",
"Tao",
""
]
] | Attention is a cornerstone of human cognition that facilitates the efficient extraction of information in everyday life. Recent developments in artificial intelligence like the Transformer architecture also incorporate the idea of attention in model designs. However, despite the shared fundamental principle of selectively attending to information, human attention and the Transformer model display notable differences, particularly in their capacity constraints, attention pathways, and intentional mechanisms. Our review aims to provide a comparative analysis of these mechanisms from a cognitive-functional perspective, thereby shedding light on several open research questions. The exploration encourages interdisciplinary efforts to derive insights from human attention mechanisms in the pursuit of developing more generalized artificial intelligence. |
2207.14295 | Fabian Weigend | Fabian C. Weigend and Edward Gray and Oliver Obst and Jason Siegler | Benefits and limitations of a new hydraulic performance model | 22 pages, 10 figures, 3 tables | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Purpose: Performance models are important tools for coaches and athletes to
optimise competition outcomes or training schedules. A recently published
hydraulic performance model has been reported to outperform established
work-balance models in predicting recovery during intermittent exercise. The
new hydraulic model was optimised to predict exercise recovery dynamics. In
this work, we hypothesised that the benefits of the model come at the cost of
inaccurate predictions of metabolic responses to exercise such as
$\dot{V}_{\mathrm{O}_2}$.
Methods: Hydraulic model predictions were compared to breath-by-breath
$\dot{V}_{\mathrm{O}_2}$ data from 25 constant high-intensity exercise tests of
5 participants (age $32\pm7.8$ years, weight $73.6 \pm 5.81$ kg,
$\dot{V}_{\mathrm{O}_2\mathrm{max}} \; 3.59 \pm 0.62$ L/min). Each test was
performed to volitional exhaustion on a cycle ergometer with a duration between
2 and 12 min. The comparison focuses on the onset of $\dot{V}_{\mathrm{O}_2}$
kinetics.
Results: On average, the hydraulic model predicted peak
$\dot{V}_{\mathrm{O}_2}$ during exercise $216\pm113$~s earlier than observed in
the data. The new hydraulic model also did not predict the so-called
$\dot{V}_{\mathrm{O}_2}$ slow component and made the unrealistic assumption
that there is no $\dot{V}_{\mathrm{O}_2}$ at the onset of exercise.
Conclusion: While the new hydraulic model may be a powerful tool for
predicting energy recovery, it should not be used to predict metabolic
responses during high-intensity exercise. The present study contributes towards
a more holistic picture of the benefits and limitations of the new hydraulic
model. Data and code are published as open source.
| [
{
"created": "Thu, 28 Jul 2022 04:27:10 GMT",
"version": "v1"
}
] | 2022-08-01 | [
[
"Weigend",
"Fabian C.",
""
],
[
"Gray",
"Edward",
""
],
[
"Obst",
"Oliver",
""
],
[
"Siegler",
"Jason",
""
]
] | Purpose: Performance models are important tools for coaches and athletes to optimise competition outcomes or training schedules. A recently published hydraulic performance model has been reported to outperform established work-balance models in predicting recovery during intermittent exercise. The new hydraulic model was optimised to predict exercise recovery dynamics. In this work, we hypothesised that the benefits of the model come at the cost of inaccurate predictions of metabolic responses to exercise such as $\dot{V}_{\mathrm{O}_2}$. Methods: Hydraulic model predictions were compared to breath-by-breath $\dot{V}_{\mathrm{O}_2}$ data from 25 constant high-intensity exercise tests of 5 participants (age $32\pm7.8$ years, weight $73.6 \pm 5.81$ kg, $\dot{V}_{\mathrm{O}_2\mathrm{max}} \; 3.59 \pm 0.62$ L/min). Each test was performed to volitional exhaustion on a cycle ergometer with a duration between 2 and 12 min. The comparison focuses on the onset of $\dot{V}_{\mathrm{O}_2}$ kinetics. Results: On average, the hydraulic model predicted peak $\dot{V}_{\mathrm{O}_2}$ during exercise $216\pm113$~s earlier than observed in the data. The new hydraulic model also did not predict the so-called $\dot{V}_{\mathrm{O}_2}$ slow component and made the unrealistic assumption that there is no $\dot{V}_{\mathrm{O}_2}$ at the onset of exercise. Conclusion: While the new hydraulic model may be a powerful tool for predicting energy recovery, it should not be used to predict metabolic responses during high-intensity exercise. The present study contributes towards a more holistic picture of the benefits and limitations of the new hydraulic model. Data and code are published as open source. |
1708.02063 | Kristina Crona | Kristina Crona and Mengming Luo | Higher order epistasis and fitness peaks | 11 pages, 4 figures | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that higher order epistasis has a substantial impact on evolutionary
dynamics by analyzing peaks in the fitness landscapes. There are 193,270,310
fitness graphs, or cube orientations, for 4-locus systems, distributed on
511,863 isomorphism classes. We identify all fitness graphs with 6 or more peaks.
81 percent of them imply 4-way epistasis, whereas 9 percent of all 4-locus
fitness graphs imply 4-way epistasis. Fitness graphs are useful in that they
reflect the entire collection of fitness landscapes rather than focusing on a
particular model. Our results depend on a characterization of fitness graphs
that imply $n$-way epistasis. The characterization is expressed in terms of a
partition property that can be derived from Hall's marriage theorem for
bipartite graphs. A similar partition condition holds for any partial order.
The result answers an open problem posed at a conference on interactions
between algebra and the sciences at the Max Planck institute.
| [
{
"created": "Mon, 7 Aug 2017 10:49:27 GMT",
"version": "v1"
}
] | 2017-08-08 | [
[
"Crona",
"Kristina",
""
],
[
"Luo",
"Mengming",
""
]
] | We show that higher order epistasis has a substantial impact on evolutionary dynamics by analyzing peaks in the fitness landscapes. There are 193,270,310 fitness graphs, or cube orientations, for 4-locus systems, distributed on 511,863 isomorphism classes. We identify all fitness graphs with 6 or more peaks. 81 percent of them imply 4-way epistasis, whereas 9 percent of all 4-locus fitness graphs imply 4-way epistasis. Fitness graphs are useful in that they reflect the entire collection of fitness landscapes rather than focusing on a particular model. Our results depend on a characterization of fitness graphs that imply $n$-way epistasis. The characterization is expressed in terms of a partition property that can be derived from Hall's marriage theorem for bipartite graphs. A similar partition condition holds for any partial order. The result answers an open problem posed at a conference on interactions between algebra and the sciences at the Max Planck institute. |
2407.08714 | Yacov Hel-Or | Ofer Lipman, Shany Grossman, Doron Friedman, Yacov Hel-Or, Rafael
Malach | Invariant inter-subject relational structures in the human visual cortex | null | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | It is a fundamental behavior that different individuals see the world in a
largely similar manner. This is an essential basis for humans' ability to
cooperate and communicate. However, what are the neuronal properties that
underlie these inter-subject commonalities of our visual world? Finding out
what aspects of neuronal coding remain invariant across individuals' brains
will shed light not only on this fundamental question but will also point to
the neuronal coding scheme as the basis of visual perception. Here, we address
this question by obtaining intracranial recordings from three cohorts of
patients taking part in a different visual recognition task (overall 19
patients and 244 high-order visual contacts included in the analyses) and
examining the neuronal coding scheme most consistent across individuals' visual
cortex. Our results highlight relational coding - expressed by the set of
similarity distances between profiles of pattern activations - as the most
consistent representation across individuals. Alternative coding schemes, such
as population vector coding or linear coding, failed to achieve similar
inter-subject consistency. Our results thus support relational coding as the
central neuronal code underlying individuals' shared perceptual content in the
human brain.
| [
{
"created": "Thu, 11 Jul 2024 17:50:19 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Lipman",
"Ofer",
""
],
[
"Grossman",
"Shany",
""
],
[
"Friedman",
"Doron",
""
],
[
"Hel-Or",
"Yacov",
""
],
[
"Malach",
"Rafael",
""
]
] | It is a fundamental behavior that different individuals see the world in a largely similar manner. This is an essential basis for humans' ability to cooperate and communicate. However, what are the neuronal properties that underlie these inter-subject commonalities of our visual world? Finding out what aspects of neuronal coding remain invariant across individuals' brains will shed light not only on this fundamental question but will also point to the neuronal coding scheme as the basis of visual perception. Here, we address this question by obtaining intracranial recordings from three cohorts of patients taking part in a different visual recognition task (overall 19 patients and 244 high-order visual contacts included in the analyses) and examining the neuronal coding scheme most consistent across individuals' visual cortex. Our results highlight relational coding - expressed by the set of similarity distances between profiles of pattern activations - as the most consistent representation across individuals. Alternative coding schemes, such as population vector coding or linear coding, failed to achieve similar inter-subject consistency. Our results thus support relational coding as the central neuronal code underlying individuals' shared perceptual content in the human brain. |
2109.07300 | Cameron Smith | Cameron A. Smith and Christian A. Yates and Ben Ashby | Critical weaknesses in shielding strategies for COVID-19 | null | null | null | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic, caused by the coronavirus SARS-CoV-2, has led to a
wide range of non-pharmaceutical interventions being implemented around the
world to curb transmission. However, the economic and social costs of some of
these measures, especially lockdowns, have been high. An alternative and widely
discussed public health strategy for the COVID-19 pandemic would have been to
'shield' those most vulnerable to COVID-19 (minimising their contacts with
others), while allowing infection to spread among lower risk individuals with
the aim of reaching herd immunity. Here we retrospectively explore the
effectiveness of this strategy using a stochastic SEIR framework, showing that
even under the unrealistic assumption of perfect shielding, hospitals would
have been rapidly overwhelmed with many avoidable deaths among lower risk
individuals. Crucially, even a small (20%) reduction in the effectiveness of
shielding would have likely led to a large increase (>150%) in the number of
deaths compared to perfect shielding. Our findings demonstrate that shielding
the vulnerable while allowing infections to spread among the wider population
would not have been a viable public health strategy for COVID-19 and is
unlikely to be effective for future pandemics.
| [
{
"created": "Wed, 15 Sep 2021 13:55:00 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Apr 2022 19:11:56 GMT",
"version": "v2"
}
] | 2022-04-29 | [
[
"Smith",
"Cameron A.",
""
],
[
"Yates",
"Christian A.",
""
],
[
"Ashby",
"Ben",
""
]
] | The COVID-19 pandemic, caused by the coronavirus SARS-CoV-2, has led to a wide range of non-pharmaceutical interventions being implemented around the world to curb transmission. However, the economic and social costs of some of these measures, especially lockdowns, have been high. An alternative and widely discussed public health strategy for the COVID-19 pandemic would have been to 'shield' those most vulnerable to COVID-19 (minimising their contacts with others), while allowing infection to spread among lower risk individuals with the aim of reaching herd immunity. Here we retrospectively explore the effectiveness of this strategy using a stochastic SEIR framework, showing that even under the unrealistic assumption of perfect shielding, hospitals would have been rapidly overwhelmed with many avoidable deaths among lower risk individuals. Crucially, even a small (20%) reduction in the effectiveness of shielding would have likely led to a large increase (>150%) in the number of deaths compared to perfect shielding. Our findings demonstrate that shielding the vulnerable while allowing infections to spread among the wider population would not have been a viable public health strategy for COVID-19 and is unlikely to be effective for future pandemics. |
1803.09721 | Fabio Vandin | Rebecca Sarto Basso, Dorit S. Hochbaum, Fabio Vandin | Efficient algorithms to discover alterations with complementary
functional association in cancer | Accepted at RECOMB 2018 | null | 10.1371/journal.pcbi.1006802 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent large cancer studies have measured somatic alterations in an
unprecedented number of tumours. These large datasets allow the identification
of cancer-related sets of genetic alterations by identifying relevant
combinatorial patterns. Among such patterns, mutual exclusivity has been
employed by several recent methods that have shown its effectiveness in
characterizing gene sets associated with cancer. Mutual exclusivity arises
because of the complementarity, at the functional level, of alterations in
genes which are part of a group (e.g., a pathway) performing a given function.
The availability of quantitative target profiles, from genetic perturbations or
from clinical phenotypes, provides additional information that can be leveraged
to improve the identification of cancer related gene sets by discovering groups
with complementary functional associations with such targets.
In this work we study the problem of finding groups of mutually exclusive
alterations associated with a quantitative (functional) target. We propose a
combinatorial formulation for the problem, and prove that the associated
computation problem is computationally hard. We design two algorithms to solve
the problem and implement them in our tool UNCOVER. We provide analytic
evidence of the effectiveness of UNCOVER in finding high-quality solutions and
show experimentally that UNCOVER finds sets of alterations significantly
associated with functional targets in a variety of scenarios. In addition, our
algorithms are much faster than the state-of-the-art, allowing the analysis of
large datasets of thousands of target profiles from cancer cell lines. We show
that on one such dataset from project Achilles our methods identify several
significant gene sets with complementary functional associations with targets.
| [
{
"created": "Mon, 26 Mar 2018 17:23:39 GMT",
"version": "v1"
}
] | 2019-06-19 | [
[
"Basso",
"Rebecca Sarto",
""
],
[
"Hochbaum",
"Dorit S.",
""
],
[
"Vandin",
"Fabio",
""
]
] | Recent large cancer studies have measured somatic alterations in an unprecedented number of tumours. These large datasets allow the identification of cancer-related sets of genetic alterations by identifying relevant combinatorial patterns. Among such patterns, mutual exclusivity has been employed by several recent methods that have shown its effectiveness in characterizing gene sets associated with cancer. Mutual exclusivity arises because of the complementarity, at the functional level, of alterations in genes which are part of a group (e.g., a pathway) performing a given function. The availability of quantitative target profiles, from genetic perturbations or from clinical phenotypes, provides additional information that can be leveraged to improve the identification of cancer related gene sets by discovering groups with complementary functional associations with such targets. In this work we study the problem of finding groups of mutually exclusive alterations associated with a quantitative (functional) target. We propose a combinatorial formulation for the problem, and prove that the associated computation problem is computationally hard. We design two algorithms to solve the problem and implement them in our tool UNCOVER. We provide analytic evidence of the effectiveness of UNCOVER in finding high-quality solutions and show experimentally that UNCOVER finds sets of alterations significantly associated with functional targets in a variety of scenarios. In addition, our algorithms are much faster than the state-of-the-art, allowing the analysis of large datasets of thousands of target profiles from cancer cell lines. We show that on one such dataset from project Achilles our methods identify several significant gene sets with complementary functional associations with targets. |
q-bio/0405009 | Wolfhard Janke | Reinhard Schiemann, Michael Bachmann, Wolfhard Janke | Exact Sequence Analysis for Three-Dimensional HP Lattice Proteins | 12 pages, RevTeX, 21 Postscript figures, Author Information under
http://www.physik.uni-leipzig.de/CQT | J. Chem. Phys. 122, 114705(1-10) (2005). | 10.1063/1.1814941 | Leipzig-LU-ITP 2004/010 | q-bio.BM cond-mat.stat-mech | null | We have exactly enumerated all sequences and conformations of HP proteins
with chains of up to 19 monomers on the simple cubic lattice. For two variants
of the hydrophobic-polar (HP) model, where only two types of monomers are
distinguished, we determined and statistically analyzed designing sequences,
i.e., sequences that have a non-degenerate ground state. Furthermore we were
interested in characteristic thermodynamic properties of HP proteins with
designing sequences. In order to be able to perform these exact studies, we
applied an efficient enumeration method based on contact sets.
| [
{
"created": "Thu, 13 May 2004 23:00:48 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Schiemann",
"Reinhard",
""
],
[
"Bachmann",
"Michael",
""
],
[
"Janke",
"Wolfhard",
""
]
] | We have exactly enumerated all sequences and conformations of HP proteins with chains of up to 19 monomers on the simple cubic lattice. For two variants of the hydrophobic-polar (HP) model, where only two types of monomers are distinguished, we determined and statistically analyzed designing sequences, i.e., sequences that have a non-degenerate ground state. Furthermore we were interested in characteristic thermodynamic properties of HP proteins with designing sequences. In order to be able to perform these exact studies, we applied an efficient enumeration method based on contact sets. |
1712.00425 | K. Anton Feenstra | Sanne Abeln, Jaap Heringa, K. Anton Feenstra | Strategies for protein structure model generation | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | This chapter deals with approaches for protein three-dimensional structure
prediction, starting out from a single input sequence with unknown structure,
the 'query' or 'target' sequence. Both template based and template free
modelling techniques are treated, and how resulting structural models may be
selected and refined. We give a concrete flowchart for how to decide which
modelling strategy is best suited in particular circumstances, and which steps
need to be taken in each strategy. Notably, the ability to locate a suitable
structural template by homology or fold recognition is crucial; without this
models will be of low quality at best. With a template available, the quality
of the query-template alignment crucially determines the model quality. We also
discuss how other, coarser, experimental data may be incorporated in the
modelling process to alleviate the problem of missing template structures.
Finally, we discuss measures to predict the quality of models generated.
| [
{
"created": "Fri, 1 Dec 2017 17:48:49 GMT",
"version": "v1"
}
] | 2017-12-04 | [
[
"Abeln",
"Sanne",
""
],
[
"Heringa",
"Jaap",
""
],
[
"Feenstra",
"K. Anton",
""
]
] | This chapter deals with approaches for protein three-dimensional structure prediction, starting out from a single input sequence with unknown structure, the 'query' or 'target' sequence. Both template based and template free modelling techniques are treated, and how resulting structural models may be selected and refined. We give a concrete flowchart for how to decide which modelling strategy is best suited in particular circumstances, and which steps need to be taken in each strategy. Notably, the ability to locate a suitable structural template by homology or fold recognition is crucial; without this models will be of low quality at best. With a template available, the quality of the query-template alignment crucially determines the model quality. We also discuss how other, coarser, experimental data may be incorporated in the modelling process to alleviate the problem of missing template structures. Finally, we discuss measures to predict the quality of models generated. |
2008.12105 | Tommaso Lorenzi | Gissell Estrada-Rodriguez and Tommaso Lorenzi | Macroscopic limit of a kinetic model describing the switch in T cell
migration modes via binary interactions | 24 pages, 2 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experimental results on the immune response to cancer indicate that
activation of cytotoxic T lymphocytes (CTLs) through interactions with
dendritic cells (DCs) can trigger a change in CTL migration patterns. In
particular, while CTLs in the pre-activation state move in a non-local search
pattern, the search pattern of activated CTLs is more localised. In this paper,
we develop a kinetic model for such a switch in CTL migration modes. The model
is formulated as a coupled system of balance equations for the one-particle
distribution functions of CTLs in the pre-activation state, activated CTLs and
DCs. CTL activation is modelled via binary interactions between CTLs in the
pre-activation state and DCs. Moreover, cell motion is represented as a
velocity-jump process, with the running time of CTLs in the pre-activation
state following a long-tailed distribution, which is consistent with a L\'evy
walk, and the running time of activated CTLs following a Poisson distribution,
which corresponds to Brownian motion. We formally show that the macroscopic
limit of the model comprises a coupled system of balance equations for the cell
densities whereby activated CTL movement is described via a classical diffusion
term, whilst a fractional diffusion term describes the movement of CTLs in the
pre-activation state. The modelling approach presented here and its possible
generalisations are expected to find applications in the study of the immune
response to cancer and in other biological contexts in which switch from
non-local to localised migration patterns occurs.
| [
{
"created": "Mon, 27 Jul 2020 05:46:43 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Mar 2021 09:05:47 GMT",
"version": "v2"
},
{
"created": "Wed, 15 Sep 2021 12:50:23 GMT",
"version": "v3"
}
] | 2021-09-16 | [
[
"Estrada-Rodriguez",
"Gissell",
""
],
[
"Lorenzi",
"Tommaso",
""
]
] | Experimental results on the immune response to cancer indicate that activation of cytotoxic T lymphocytes (CTLs) through interactions with dendritic cells (DCs) can trigger a change in CTL migration patterns. In particular, while CTLs in the pre-activation state move in a non-local search pattern, the search pattern of activated CTLs is more localised. In this paper, we develop a kinetic model for such a switch in CTL migration modes. The model is formulated as a coupled system of balance equations for the one-particle distribution functions of CTLs in the pre-activation state, activated CTLs and DCs. CTL activation is modelled via binary interactions between CTLs in the pre-activation state and DCs. Moreover, cell motion is represented as a velocity-jump process, with the running time of CTLs in the pre-activation state following a long-tailed distribution, which is consistent with a L\'evy walk, and the running time of activated CTLs following a Poisson distribution, which corresponds to Brownian motion. We formally show that the macroscopic limit of the model comprises a coupled system of balance equations for the cell densities whereby activated CTL movement is described via a classical diffusion term, whilst a fractional diffusion term describes the movement of CTLs in the pre-activation state. The modelling approach presented here and its possible generalisations are expected to find applications in the study of the immune response to cancer and in other biological contexts in which switch from non-local to localised migration patterns occurs. |
1210.3949 | Juliana Capitanio | Juliana S. Capitanio and Richard W. Wozniak | Hematopoietic cancers and Nup98 fusions: determining common mechanisms
of malignancy | 13 pages, 2 tables, 1 supplementary table, 6 figures, 2 supplementary
figures | null | 10.6084/m9.figshare.1248957 | null | q-bio.MN q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chromosomal aberrations are very frequent in leukemias and several recurring
mutations capable of malignant transformation have been described. These
mutations usually occur in hematopoietic stem cells (HSC), transforming them
into leukemia stem cells. NUP98 gene translocations are an example of such
chromosomal aberrations; these translocations produce a fusion protein
containing the N-terminal portion of Nup98 and the C-terminal of a fusion
partner. Over 75% of Nup98 fusions can interact with chromatin, and lead to
changes in gene expression. Therefore, I hypothesize that Nup98 fusions act as
rogue transcriptional regulators in the cell. Collecting previously published
gene expression data (microarray) from HSCs expressing Nup98 fusions, we can
generate data to corroborate this hypothesis. Several different fusions affect
the expression of similar genes; these are involved in a few biological
processes in the cell: embryonic development, immune system formation and
chromatin organization. Deregulated genes also present similar transcription
factor binding sites in their regulatory regions. These putative regulatory
transcription factors are highly interconnected through protein-protein
interactions and transcriptional regulation among themselves, and they have
important roles in cell cycle regulation, embryonic development, hematopoiesis,
apoptosis and chromatin modification.
| [
{
"created": "Mon, 15 Oct 2012 09:18:31 GMT",
"version": "v1"
}
] | 2014-11-25 | [
[
"Capitanio",
"Juliana S.",
""
],
[
"Wozniak",
"Richard W.",
""
]
] | Chromosomal aberrations are very frequent in leukemias and several recurring mutations capable of malignant transformation have been described. These mutations usually occur in hematopoietic stem cells (HSC), transforming them into leukemia stem cells. NUP98 gene translocations are an example of such chromosomal aberrations; these translocations produce a fusion protein containing the N-terminal portion of Nup98 and the C-terminal of a fusion partner. Over 75% of Nup98 fusions can interact with chromatin, and lead to changes in gene expression. Therefore, I hypothesize that Nup98 fusions act as rogue transcriptional regulators in the cell. Collecting previously published gene expression data (microarray) from HSCs expressing Nup98 fusions, we can generate data to corroborate this hypothesis. Several different fusions affect the expression of similar genes; these are involved in a few biological processes in the cell: embryonic development, immune system formation and chromatin organization. Deregulated genes also present similar transcription factor binding sites in their regulatory regions. These putative regulatory transcription factors are highly interconnected through protein-protein interactions and transcriptional regulation among themselves, and they have important roles in cell cycle regulation, embryonic development, hematopoiesis, apoptosis and chromatin modification. |
1503.01915 | Andreas Reppas | Andreas I. Reppas, Georgios Lolas, Andreas Deutsch and Haralampos
Hatzikirou | The extrinsic noise effect on lateral inhibition differentiation waves | null | null | null | null | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multipotent differentiation, where cells adopt one of several cell fates, is
a determinate and orchestrated procedure that often incorporates stochastic
mechanisms in order to diversify cell types. How these stochastic phenomena
interact to govern cell fate is poorly understood. Nonetheless, cell fate
decision making procedure is mainly regulated through the activation of
differentiation waves and associated signaling pathways. In the current work,
we focus on the Notch/Delta signaling pathway which is not only known to
trigger such waves but also is used to achieve the principle of lateral
inhibition, i.e. a competition for exclusive fates through cross-signaling
between neighboring cells. Such a process ensures unambiguous stochastic
decisions influenced by intrinsic noise sources, e.g.~as ones found in the
regulation of signaling pathways, and extrinsic stochastic fluctuations,
attributed to micro-environmental factors. However, the effect of intrinsic and
extrinsic noise on cell fate determination is an open problem. Our goal is to
elucidate how the induction of extrinsic noise affects cell fate specification
in a lateral inhibition mechanism. Using a stochastic Cellular Automaton with
continuous state space, we show that extrinsic noise results in the emergence
of steady-state furrow patterns of cells in a "frustrated/transient" phenotypic
state.
| [
{
"created": "Fri, 6 Mar 2015 11:08:35 GMT",
"version": "v1"
}
] | 2015-03-09 | [
[
"Reppas",
"Andreas I.",
""
],
[
"Lolas",
"Georgios",
""
],
[
"Deutsch",
"Andreas",
""
],
[
"Hatzikirou",
"Haralampos",
""
]
] | Multipotent differentiation, where cells adopt one of several cell fates, is a determinate and orchestrated procedure that often incorporates stochastic mechanisms in order to diversify cell types. How these stochastic phenomena interact to govern cell fate is poorly understood. Nonetheless, cell fate decision making procedure is mainly regulated through the activation of differentiation waves and associated signaling pathways. In the current work, we focus on the Notch/Delta signaling pathway which is not only known to trigger such waves but also is used to achieve the principle of lateral inhibition, i.e. a competition for exclusive fates through cross-signaling between neighboring cells. Such a process ensures unambiguous stochastic decisions influenced by intrinsic noise sources, e.g.~as ones found in the regulation of signaling pathways, and extrinsic stochastic fluctuations, attributed to micro-environmental factors. However, the effect of intrinsic and extrinsic noise on cell fate determination is an open problem. Our goal is to elucidate how the induction of extrinsic noise affects cell fate specification in a lateral inhibition mechanism. Using a stochastic Cellular Automaton with continuous state space, we show that extrinsic noise results in the emergence of steady-state furrow patterns of cells in a "frustrated/transient" phenotypic state. |
2303.01809 | Wim Hordijk | Wim Hordijk | A Concise and Formal Definition of RAF Sets and the RAF Algorithm | 7 pages, 1 figure | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autocatalytic sets are self-catalyzing and self-sustaining chemical reaction
networks that are believed to have played an important role in the origin of
life. They have been studied extensively both theoretically as well as
experimentally. This short note provides (1) a complete and formal definition
of autocatalytic sets (or RAF sets), and (2) an efficient algorithm to detect
such sets in arbitrary reaction networks. Although both have been presented in
various forms in earlier publications, this note serves as a concise and
convenient reference.
| [
{
"created": "Fri, 3 Mar 2023 09:31:29 GMT",
"version": "v1"
}
] | 2023-03-06 | [
[
"Hordijk",
"Wim",
""
]
] | Autocatalytic sets are self-catalyzing and self-sustaining chemical reaction networks that are believed to have played an important role in the origin of life. They have been studied extensively both theoretically as well as experimentally. This short note provides (1) a complete and formal definition of autocatalytic sets (or RAF sets), and (2) an efficient algorithm to detect such sets in arbitrary reaction networks. Although both have been presented in various forms in earlier publications, this note serves as a concise and convenient reference. |
2105.12727 | J. C. Phillips | J. C. Phillips | How Life Works: Darwinian Evolution of Proteins | 93 pages, 37 figures. This is a review article that highlights the
main ideas from arXiv:1610.04116 | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | We review the development of thermodynamic protein hydropathic scaling
theory, starting from backgrounds in mathematics and statistical mechanics, and
leading to biomedical applications. Darwinian evolution has organized each
protein family in different ways, but dynamical hydropathic scaling theory is
both simple and effective in providing readily transferable dynamical insights
for many proteins represented in the uncounted amino acid sequences, as well as
the 90 thousand static structures contained in the online Protein Data Base.
Critical point theory is general, and recently it has proved to be the most
effective way of describing protein networks that have evolved towards nearly
perfect functionality in given environments, self-organized criticality.
Darwinian evolutionary patterns are governed by common dynamical hydropathic
scaling principles, which can be quantified using scales that have been
developed bioinformatically by studying thousands of static PDB structures. The
most effective dynamical scales involve hydropathic globular sculpting
interactions averaged over length scales centered on domain dimensions. A
central feature of dynamical hydropathic scaling theory is the characteristic
domain length associated with a given protein functionality. Evolution has
functioned in such a way that the minimal critical length scale established so
far is about nine amino acids, but in some cases it is much larger. Some
ingenuity is needed to find this primary length scale, as shown by the examples
discussed here. Often a survey of the Darwinian evolution of a protein sequence
suggests a means of determining the critical length scale. The evolution of
Coronavirus is an interesting application; it identifies critical mutations.
| [
{
"created": "Wed, 21 Apr 2021 16:07:33 GMT",
"version": "v1"
}
] | 2021-05-27 | [
[
"Phillips",
"J. C.",
""
]
] | We review the development of thermodynamic protein hydropathic scaling theory, starting from backgrounds in mathematics and statistical mechanics, and leading to biomedical applications. Darwinian evolution has organized each protein family in different ways, but dynamical hydropathic scaling theory is both simple and effective in providing readily transferable dynamical insights for many proteins represented in the uncounted amino acid sequences, as well as the 90 thousand static structures contained in the online Protein Data Base. Critical point theory is general, and recently it has proved to be the most effective way of describing protein networks that have evolved towards nearly perfect functionality in given environments, self-organized criticality. Darwinian evolutionary patterns are governed by common dynamical hydropathic scaling principles, which can be quantified using scales that have been developed bioinformatically by studying thousands of static PDB structures. The most effective dynamical scales involve hydropathic globular sculpting interactions averaged over length scales centered on domain dimensions. A central feature of dynamical hydropathic scaling theory is the characteristic domain length associated with a given protein functionality. Evolution has functioned in such a way that the minimal critical length scale established so far is about nine amino acids, but in some cases it is much larger. Some ingenuity is needed to find this primary length scale, as shown by the examples discussed here. Often a survey of the Darwinian evolution of a protein sequence suggests a means of determining the critical length scale. The evolution of Coronavirus is an interesting application; it identifies critical mutations. |
2401.02691 | AMM Nurul Alam | AMM Nurul Alam, Chan-Jin Kim, So-Hee Kim, Swati Kumari, Eun-Yeong Lee,
Young-Hwa Hwang, Seon-Tea Joo | Scaffolding fundamentals and recent advances in sustainable scaffolding
techniques for cultured meat development | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | In cultured meat (CM) products, paramount significance lies in fundamental
attributes such as the texture and sensory qualities of the processed end
product. To cater to the tactile and gustatory preferences associated with real
meat, the product needs to be designed to incorporate its texture and sensory
attributes. Presently, CM products are mainly ground products such as sausage,
nugget, frankfurter, burger patty, surimi, and steak, with little
sophistication, and they need to mimic real meat to compete with the
traditional meat market. The existence
of fibrous microstructure in connective and muscle tissues has attracted
considerable interest in the realm of tissue engineering. Scaffolding plays an
important role in CM production by aiding cell adhesion, growth,
differentiation, and alignment. A wide array of scaffolding technologies has
been developed for implementation in the realm of biomedical research. In
recent years, researchers have also focused on edible scaffolding to ease the
CM production process. However, it is imperative to implement cutting-edge
technologies such as 3D scaffolds, 3D printing, and electrospun nanofibers in
order to advance the creation of sustainable and edible scaffolding methods in
CM production, with the ultimate goal of replicating the sensory and
nutritional attributes of a real meat cut. This review discusses recent
advances in scaffolding techniques
and biomaterials related to structured CM production and the advances required
to create muscle fiber structures that mimic real meat.
Keywords: Cultured meat, Scaffolding, Biomaterials, Edible scaffolding,
Electrospinning, 3D bioprinting, real meat.
| [
{
"created": "Fri, 5 Jan 2024 07:46:07 GMT",
"version": "v1"
}
] | 2024-01-08 | [
[
"Alam",
"AMM Nurul",
""
],
[
"Kim",
"Chan-Jin",
""
],
[
"Kim",
"So-Hee",
""
],
[
"Kumari",
"Swati",
""
],
[
"Lee",
"Eun-Yeong",
""
],
[
"Hwang",
"Young-Hwa",
""
],
[
"Joo",
"Seon-Tea",
""
]
] | In cultured meat (CM) products, paramount significance lies in fundamental attributes such as the texture and sensory qualities of the processed end product. To cater to the tactile and gustatory preferences associated with real meat, the product needs to be designed to incorporate its texture and sensory attributes. Presently, CM products are mainly ground products such as sausage, nugget, frankfurter, burger patty, surimi, and steak, with little sophistication, and they need to mimic real meat to compete with the traditional meat market. The existence of fibrous microstructure in connective and muscle tissues has attracted considerable interest in the realm of tissue engineering. Scaffolding plays an important role in CM production by aiding cell adhesion, growth, differentiation, and alignment. A wide array of scaffolding technologies has been developed for implementation in the realm of biomedical research. In recent years, researchers have also focused on edible scaffolding to ease the CM production process. However, it is imperative to implement cutting-edge technologies such as 3D scaffolds, 3D printing, and electrospun nanofibers in order to advance the creation of sustainable and edible scaffolding methods in CM production, with the ultimate goal of replicating the sensory and nutritional attributes of a real meat cut. This review discusses recent advances in scaffolding techniques and biomaterials related to structured CM production and the advances required to create muscle fiber structures that mimic real meat. Keywords: Cultured meat, Scaffolding, Biomaterials, Edible scaffolding, Electrospinning, 3D bioprinting, real meat. |
2004.13489 | Babacar Mbaye Ndiaye | Babacar Mbaye Ndiaye, Lena Tendeng, Diaraf Seck | Comparative prediction of confirmed cases with COVID-19 pandemic by
machine learning, deterministic and stochastic SIR models | arXiv admin note: text overlap with arXiv:2004.01574 | null | null | null | q-bio.PE math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose machine learning techniques and SIR models
(deterministic and stochastic) with numerical approximations to predict the
number of COVID-19 infected cases, both over the next few days and over the
following three weeks. As in [1], and based on the public data from [2], we
estimate parameters and make predictions to help identify concrete actions to
control the situation. Under an optimistic estimation, the pandemic in some
countries will end soon, while for most countries in the world, the success of
anti-pandemic measures will come no later than the beginning of May.
| [
{
"created": "Fri, 24 Apr 2020 22:54:10 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Ndiaye",
"Babacar Mbaye",
""
],
[
"Tendeng",
"Lena",
""
],
[
"Seck",
"Diaraf",
""
]
] | In this paper, we propose machine learning techniques and SIR models (deterministic and stochastic) with numerical approximations to predict the number of COVID-19 infected cases, both over the next few days and over the following three weeks. As in [1], and based on the public data from [2], we estimate parameters and make predictions to help identify concrete actions to control the situation. Under an optimistic estimation, the pandemic in some countries will end soon, while for most countries in the world, the success of anti-pandemic measures will come no later than the beginning of May. |
1701.08085 | Giuseppe Longo | Giuseppe Longo | The Biological Consequences of the Computational World: Mathematical
Reflections on Cancer Biology | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The role of continua has been clear since antiquity in the mathematical
approaches to physics, while discrete manifolds were brought to the limelight
mostly by Quantum and Information Theories, in the XX century. We first recall
how theorizing and measuring radically change in physics when using discrete
vs. continuous mathematical manifolds. It will follow that the reference to
discrete structures and digital information is far from neutral in knowledge
construction. In biology, in particular, the introduction of information as a
new observable on discrete data types has been promoting a dramatic
reorganization of the tools for knowledge. We briefly analyze the origin and
the nature, then some consequences of the bias thus induced in life sciences,
with particular emphasis on research on cancer. We finally summarize new
theoretical frames that propose different directions as for the organizing
principles for biological thinking and experimenting, including in cancer
research. Cancer is now viewed as an organismal, tissue-based issue, according
to the perspective proposed in (Sonnenschein, Soto, 1999; Baker, 2015).
| [
{
"created": "Fri, 27 Jan 2017 15:39:53 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Apr 2017 17:24:12 GMT",
"version": "v2"
}
] | 2017-04-04 | [
[
"Longo",
"Giuseppe",
""
]
] | The role of continua has been clear since antiquity in the mathematical approaches to physics, while discrete manifolds were brought to the limelight mostly by Quantum and Information Theories, in the XX century. We first recall how theorizing and measuring radically change in physics when using discrete vs. continuous mathematical manifolds. It will follow that the reference to discrete structures and digital information is far from neutral in knowledge construction. In biology, in particular, the introduction of information as a new observable on discrete data types has been promoting a dramatic reorganization of the tools for knowledge. We briefly analyze the origin and the nature, then some consequences of the bias thus induced in life sciences, with particular emphasis on research on cancer. We finally summarize new theoretical frames that propose different directions as for the organizing principles for biological thinking and experimenting, including in cancer research. Cancer is now viewed as an organismal, tissue-based issue, according to the perspective proposed in (Sonnenschein, Soto, 1999; Baker, 2015). |
1910.13443 | Adam Safron | Adam Safron | Multilevel evolutionary developmental optimization (MEDO): A theoretical
framework for understanding preferences and selection dynamics | null | null | null | null | q-bio.NC econ.GN q-fin.EC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is motivation and how does it work? Where do goals come from and how do
they vary within and between species and individuals? Why do we prefer some
things over others? MEDO is a theoretical framework for understanding these
questions in abstract terms, as well as for generating and evaluating specific
hypotheses that seek to explain goal-oriented behavior. MEDO views preferences
as selective pressures influencing the likelihood of particular outcomes. With
respect to biological organisms, these patterns must compete and cooperate in
shaping system evolution. To the extent that shaping processes are themselves
altered by experience, this enables feedback relationships where histories of
reward and punishment can impact future motivation. In this way, various biases
can undergo either amplification or attenuation, resulting in preferences and
behavioral orientations of varying degrees of inter-temporal and
inter-situational stability. MEDO specifically models all shaping dynamics in
terms of natural selection operating on multiple levels--genetic, neural, and
cultural--and even considers aspects of development to themselves be
evolutionary processes. Thus, MEDO reflects a kind of generalized Darwinism, in
that it assumes that natural selection provides a common principle for
understanding the emergence of complexity within all dynamical systems in which
replication, variation, and selection occur. However, MEDO combines this
evolutionary perspective with economic decision theory, which describes both
the preferences underlying individual choices, as well as the preferences
underlying choices made by engineers in designing optimized systems. In this
way, MEDO uses economic decision theory to describe goal-oriented behaviors as
well as the interacting evolutionary optimization processes from which they
emerge. (Please note: this manuscript was written and finalized in 2012.)
| [
{
"created": "Sun, 27 Oct 2019 22:58:17 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Nov 2019 02:18:59 GMT",
"version": "v2"
}
] | 2019-11-12 | [
[
"Safron",
"Adam",
""
]
] | What is motivation and how does it work? Where do goals come from and how do they vary within and between species and individuals? Why do we prefer some things over others? MEDO is a theoretical framework for understanding these questions in abstract terms, as well as for generating and evaluating specific hypotheses that seek to explain goal-oriented behavior. MEDO views preferences as selective pressures influencing the likelihood of particular outcomes. With respect to biological organisms, these patterns must compete and cooperate in shaping system evolution. To the extent that shaping processes are themselves altered by experience, this enables feedback relationships where histories of reward and punishment can impact future motivation. In this way, various biases can undergo either amplification or attenuation, resulting in preferences and behavioral orientations of varying degrees of inter-temporal and inter-situational stability. MEDO specifically models all shaping dynamics in terms of natural selection operating on multiple levels--genetic, neural, and cultural--and even considers aspects of development to themselves be evolutionary processes. Thus, MEDO reflects a kind of generalized Darwinism, in that it assumes that natural selection provides a common principle for understanding the emergence of complexity within all dynamical systems in which replication, variation, and selection occur. However, MEDO combines this evolutionary perspective with economic decision theory, which describes both the preferences underlying individual choices, as well as the preferences underlying choices made by engineers in designing optimized systems. In this way, MEDO uses economic decision theory to describe goal-oriented behaviors as well as the interacting evolutionary optimization processes from which they emerge. (Please note: this manuscript was written and finalized in 2012.) |
2012.06890 | Jaroslav Ilnytskyi Dr. | J.M.Ilnytskyi | SEIRS epidemiology model for the COVID-19 pandemy in the extreme case of
no acquired immunity | 32 pages, 14 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We consider the SEIRS compartment epidemiology model, suitable for predicting
the evolution of the COVID-19 pandemic in the extreme limiting case of no
acquired immunity. The disease-free and endemic fixed points are found and
their stability is analysed. The expression for the basic reproduction ratio is
obtained and discussed, emphasizing its dependence on the model parameters. The
threshold contact ratio that determines whether a stable disease-free fixed
point can exist is found. A numerical solution for the evolution of the
pandemic is also obtained, together with approximate analytic solutions for the
early stage of the disease spread as well as for its decay after rapid measures
are undertaken. We analysed several possible scenarios for introducing and
relaxing quarantine measures. The cyclic "quarantine on" and "quarantine off"
strategy at fixed identification and isolation ratios fails to suppress the
second and subsequent waves, whereas this goal can be achieved if a flexible
increase of the identification and isolation ratios is also involved.
| [
{
"created": "Sat, 12 Dec 2020 19:12:34 GMT",
"version": "v1"
}
] | 2020-12-15 | [
[
"Ilnytskyi",
"J. M.",
""
]
] | We consider the SEIRS compartment epidemiology model, suitable for predicting the evolution of the COVID-19 pandemic in the extreme limiting case of no acquired immunity. The disease-free and endemic fixed points are found and their stability is analysed. The expression for the basic reproduction ratio is obtained and discussed, emphasizing its dependence on the model parameters. The threshold contact ratio that determines whether a stable disease-free fixed point can exist is found. A numerical solution for the evolution of the pandemic is also obtained, together with approximate analytic solutions for the early stage of the disease spread as well as for its decay after rapid measures are undertaken. We analysed several possible scenarios for introducing and relaxing quarantine measures. The cyclic "quarantine on" and "quarantine off" strategy at fixed identification and isolation ratios fails to suppress the second and subsequent waves, whereas this goal can be achieved if a flexible increase of the identification and isolation ratios is also involved. |
0706.0406 | Kavita Jain | Kavita Jain | Evolutionary dynamics of the most populated genotype on rugged fitness
landscapes | Minor changes. To appear in Phys Rev E | Phys. Rev. E 76, 031922 (2007) | 10.1103/PhysRevE.76.031922 | null | q-bio.PE cond-mat.stat-mech | null | We consider an asexual population evolving on rugged fitness landscapes which
are defined on the multi-dimensional genotypic space and have many local
optima. We track the most populated genotype as it changes when the population
jumps from a fitness peak to a better one during the process of adaptation.
This is done using the dynamics of the shell model which is a simplified
version of the quasispecies model for infinite populations and standard
Wright-Fisher dynamics for large finite populations. We show that the
population fraction of a genotype obtained within the quasispecies model and
the shell model match for fit genotypes and at short times, but the dynamics of
the two models are identical for questions related to the most populated
genotype. We calculate exactly several properties of the jumps in infinite
populations some of which were obtained numerically in previous works. We also
present our preliminary simulation results for finite populations. In
particular, we measure the jump distribution in time and find that it decays as
$t^{-2}$ as in the quasispecies problem.
| [
{
"created": "Mon, 4 Jun 2007 10:09:26 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Aug 2007 16:13:16 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Jain",
"Kavita",
""
]
] | We consider an asexual population evolving on rugged fitness landscapes which are defined on the multi-dimensional genotypic space and have many local optima. We track the most populated genotype as it changes when the population jumps from a fitness peak to a better one during the process of adaptation. This is done using the dynamics of the shell model which is a simplified version of the quasispecies model for infinite populations and standard Wright-Fisher dynamics for large finite populations. We show that the population fraction of a genotype obtained within the quasispecies model and the shell model match for fit genotypes and at short times, but the dynamics of the two models are identical for questions related to the most populated genotype. We calculate exactly several properties of the jumps in infinite populations some of which were obtained numerically in previous works. We also present our preliminary simulation results for finite populations. In particular, we measure the jump distribution in time and find that it decays as $t^{-2}$ as in the quasispecies problem. |
1610.04046 | Joseph Y. Halpern | Joseph Y. Halpern and Lior Seeman | Is state-dependent valuation more adaptive than simpler rules? | This replaces the previous version, entitled `A Comment on "The
Ecological Rationality of State-Dependent Valuation"'. Besides the change in
title, only relatively minor changes were made to the text. To appear,
Behavioural Processes | null | null | null | q-bio.PE cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | McNamara, Trimmer, and Houston (2012) claim to provide an explanation of
certain systematic deviations from rational behavior using a mechanism that
could arise through natural selection. We provide an arguably much simpler
mechanism in terms of computational limitations, that performs better in the
environment described by McNamara, Trimmer, and Houston (2012). To argue
convincingly that animals' use of state-dependent valuation is adaptive and is
likely to be selected for by natural selection, one must argue that, in some
sense, it is a better approach than the simple strategies that we propose.
| [
{
"created": "Thu, 13 Oct 2016 12:21:11 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Dec 2017 17:49:09 GMT",
"version": "v2"
}
] | 2017-12-27 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Seeman",
"Lior",
""
]
] | McNamara, Trimmer, and Houston (2012) claim to provide an explanation of certain systematic deviations from rational behavior using a mechanism that could arise through natural selection. We provide an arguably much simpler mechanism in terms of computational limitations, that performs better in the environment described by McNamara, Trimmer, and Houston (2012). To argue convincingly that animals' use of state-dependent valuation is adaptive and is likely to be selected for by natural selection, one must argue that, in some sense, it is a better approach than the simple strategies that we propose. |
0911.1797 | Andrei Zinovyev Dr. | Andrei Zinovyev, Nadya Morozova, Nora Nonne, Emmanuel Barillot, Annick
Harel-Bellan, Alexander N. Gorban | Dynamical modeling of microRNA action on the protein translation process | submited to BMC Syst Biol | BMC Systems Biology 2010, 4 (1):13
http://www.biomedcentral.com/1752-0509/4/13 | 10.1186/1752-0509-4-13 | null | q-bio.MN q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein translation is a multistep process which can be represented as a
cascade of biochemical reactions (initiation, ribosome assembly, elongation,
etc.), the rate of which can be regulated by small non-coding microRNAs through
multiple mechanisms. It remains unclear which mechanisms of microRNA action are
dominant; moreover, many experimental reports deliver controversial messages on
which mechanism is actually observed in the experiment. Parker and Nissan
(Parker and Nissan, RNA, 2008) demonstrated that it is impossible to
distinguish alternative biological hypotheses using steady-state data on the
rate of protein synthesis. For their analysis they used two simple kinetic
models of protein translation. On the contrary, we show that dynamical data
allow one to discriminate some of the mechanisms of microRNA action.
We demonstrate this using the same models as in (Parker and Nissan, RNA, 2008)
for the sake of comparison but the methods developed (asymptotology of
biochemical networks) can be used for other models. As one of the results of
our analysis, we formulate a hypothesis that the effect of microRNA action is
measurable and observable only if it affects the dominant system
(generalization of the limiting step notion for complex networks) of the
protein translation machinery. The dominant system can vary in different
experimental conditions that can partially explain the existing controversy of
some of the experimental data.
| [
{
"created": "Mon, 9 Nov 2009 22:14:12 GMT",
"version": "v1"
}
] | 2010-03-09 | [
[
"Zinovyev",
"Andrei",
""
],
[
"Morozova",
"Nadya",
""
],
[
"Nonne",
"Nora",
""
],
[
"Barillot",
"Emmanuel",
""
],
[
"Harel-Bellan",
"Annick",
""
],
[
"Gorban",
"Alexander N.",
""
]
] | Protein translation is a multistep process which can be represented as a cascade of biochemical reactions (initiation, ribosome assembly, elongation, etc.), the rate of which can be regulated by small non-coding microRNAs through multiple mechanisms. It remains unclear which mechanisms of microRNA action are dominant; moreover, many experimental reports deliver controversial messages on which mechanism is actually observed in the experiment. Parker and Nissan (Parker and Nissan, RNA, 2008) demonstrated that it is impossible to distinguish alternative biological hypotheses using steady-state data on the rate of protein synthesis. For their analysis they used two simple kinetic models of protein translation. On the contrary, we show that dynamical data allow one to discriminate some of the mechanisms of microRNA action. We demonstrate this using the same models as in (Parker and Nissan, RNA, 2008) for the sake of comparison, but the methods developed (asymptotology of biochemical networks) can be used for other models. As one of the results of our analysis, we formulate a hypothesis that the effect of microRNA action is measurable and observable only if it affects the dominant system (a generalization of the limiting step notion for complex networks) of the protein translation machinery. The dominant system can vary in different experimental conditions, which can partially explain the existing controversy of some of the experimental data. |
1709.10353 | Shikha Jain | Sachin Kumar, Shikha Jain | Assessing the Effects of Treatment in HIV-TB Co-infection Model | 26 pages | Eur. Phys. J. Plus (2018) 133: 294 | 10.1140/epjp/i2018-12117-8 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a population model for HIV-TB co-infection dynamics by considering
treatments for HIV infection, active tuberculosis and co-infection. The HIV
only and TB only models are analyzed separately, as well as the full model. The
basic reproduction numbers for TB ($\mathcal{R}_0^T$) and HIV
($\mathcal{R}_0^H$) and overall reproduction number for the system
$\mathcal{R}_0= \max\{\mathcal{R}_0^T, \mathcal{R}_0^H\}$ are computed. The
equilibria and their stability are studied. The main model undergoes
supercritical transcritical bifurcation at $\mathcal{R}_0^T=1$ and
$\mathcal{R}_0^H=1$, with the parameters $\beta^*=\beta e$ and
$\lambda^*=\lambda \sigma$ acting as bifurcation parameters, respectively.
Numerical simulation suggests the existence of an interior equilibrium when both the
reproduction numbers are greater than unity. We explore the effect of early and
late HIV treatment on disease-induced deaths during the TB treatment course.
Mathematical analysis of our model shows that successful disease eradication
requires treating each single disease, that is, treating HIV only and TB only
infected individuals in addition to co-infection treatment; in the absence of
this, disease eradication is extremely difficult even for
$\mathcal{R}_0<1$. When both diseases are epidemic, the treatment for TB
only infected individuals is very effective in reducing the total infected
population and disease-induced deaths in comparison to the treatment for HIV
infected individuals, while these are minimized when both single-disease
treatments are given together with co-infection treatment.
| [
{
"created": "Thu, 28 Sep 2017 11:52:38 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Aug 2018 10:31:57 GMT",
"version": "v2"
}
] | 2018-08-10 | [
[
"Kumar",
"Sachin",
""
],
[
"Jain",
"Shikha",
""
]
] | We propose a population model for HIV-TB co-infection dynamics by considering treatments for HIV infection, active tuberculosis and co-infection. The HIV only and TB only models are analyzed separately, as well as the full model. The basic reproduction numbers for TB ($\mathcal{R}_0^T$) and HIV ($\mathcal{R}_0^H$) and the overall reproduction number for the system $\mathcal{R}_0= \max\{\mathcal{R}_0^T, \mathcal{R}_0^H\}$ are computed. The equilibria and their stability are studied. The main model undergoes supercritical transcritical bifurcation at $\mathcal{R}_0^T=1$ and $\mathcal{R}_0^H=1$, with the parameters $\beta^*=\beta e$ and $\lambda^*=\lambda \sigma$ acting as bifurcation parameters, respectively. Numerical simulation suggests the existence of an interior equilibrium when both reproduction numbers are greater than unity. We explore the effect of early and late HIV treatment on disease-induced deaths during the TB treatment course. Mathematical analysis of our model shows that successful disease eradication requires treating each single disease, that is, treating HIV only and TB only infected individuals in addition to co-infection treatment; in the absence of this, disease eradication is extremely difficult even for $\mathcal{R}_0<1$. When both diseases are epidemic, the treatment for TB only infected individuals is very effective in reducing the total infected population and disease-induced deaths in comparison to the treatment for HIV infected individuals, while these are minimized when both single-disease treatments are given together with co-infection treatment. |
1811.12091 | Patrick Krauss | Patrick Krauss, Karin Prebeck, Achim Schilling, Claus Metzner | Stochastic resonance in three-neuron motifs | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic resonance is a non-linear phenomenon, in which the sensitivity of
signal detectors can be enhanced by adding random noise to the detector input.
Here, we demonstrate that noise can also improve the information flux in
recurrent neural networks. In particular, we show for the case of three-neuron
motifs that the mutual information between successive network states can be
maximized by adding a suitable amount of noise to the neuron inputs. This
striking result suggests that noise in the brain may not be a problem that
needs to be suppressed, but indeed a resource that is dynamically regulated in
order to optimize information processing.
| [
{
"created": "Thu, 29 Nov 2018 12:12:47 GMT",
"version": "v1"
}
] | 2018-11-30 | [
[
"Krauss",
"Patrick",
""
],
[
"Prebeck",
"Karin",
""
],
[
"Schilling",
"Achim",
""
],
[
"Metzner",
"Claus",
""
]
] | Stochastic resonance is a non-linear phenomenon, in which the sensitivity of signal detectors can be enhanced by adding random noise to the detector input. Here, we demonstrate that noise can also improve the information flux in recurrent neural networks. In particular, we show for the case of three-neuron motifs that the mutual information between successive network states can be maximized by adding a suitable amount of noise to the neuron inputs. This striking result suggests that noise in the brain may not be a problem that needs to be suppressed, but indeed a resource that is dynamically regulated in order to optimize information processing. |
2012.06633 | Julie Rowlett | Susanne Menden-Deuer, Medet Nursultanov, Sinead Collins, Tatiana
Rynearson, and Julie Rowlett | Biodiversity of marine microbes is safeguarded by phenotypic variability
in ecological traits | null | null | 10.1371/journal.pone.0254799 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Why, contrary to theoretical predictions, do marine microbe communities
harbor tremendous phenotypic heterogeneity? How can so many marine microbe
species competing in the same niche coexist? We discovered a unifying
explanation for both phenomena by investigating a non-cooperative game that
interpolates between individual-level competitions and species-level outcomes.
We identified all equilibrium strategies of the game. These strategies are
characterized by maximal phenotypic heterogeneity. They are also neutral
towards each other in the sense that an unlimited number of species can
co-exist while competing according to the equilibrium strategies. Whereas prior
theory predicts that natural selection would minimize trait variation around an
optimum value, here we obtained a rigorous mathematical proof that species with
maximally variable traits are those that endure. This discrepancy may reflect a
disparity between predictions from models developed for larger organisms in
contrast to our microbe-centric model. Rigorous mathematics proves that
phenotypic heterogeneity is itself a mechanistic underpinning of microbial
diversity. This discovery has fundamental ramifications for microbial ecology
and may represent an adaptive reservoir sheltering biodiversity in changing
environmental conditions.
| [
{
"created": "Fri, 11 Dec 2020 20:59:21 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Jan 2021 14:53:29 GMT",
"version": "v2"
}
] | 2021-08-06 | [
[
"Menden-Deuer",
"Susanne",
""
],
[
"Nursultanov",
"Medet",
""
],
[
"Collins",
"Sinead",
""
],
[
"Rynearson",
"Tatiana",
""
],
[
"Rowlett",
"Julie",
""
]
] | Why, contrary to theoretical predictions, do marine microbe communities harbor tremendous phenotypic heterogeneity? How can so many marine microbe species competing in the same niche coexist? We discovered a unifying explanation for both phenomena by investigating a non-cooperative game that interpolates between individual-level competitions and species-level outcomes. We identified all equilibrium strategies of the game. These strategies are characterized by maximal phenotypic heterogeneity. They are also neutral towards each other in the sense that an unlimited number of species can co-exist while competing according to the equilibrium strategies. Whereas prior theory predicts that natural selection would minimize trait variation around an optimum value, here we obtained a rigorous mathematical proof that species with maximally variable traits are those that endure. This discrepancy may reflect a disparity between predictions from models developed for larger organisms in contrast to our microbe-centric model. Rigorous mathematics proves that phenotypic heterogeneity is itself a mechanistic underpinning of microbial diversity. This discovery has fundamental ramifications for microbial ecology and may represent an adaptive reservoir sheltering biodiversity in changing environmental conditions. |
1602.03200 | Chantal Nguyen | Chantal Nguyen, Jean M. Carlson | Optimizing Real-Time Vaccine Allocation in a Stochastic SIR Model | null | null | 10.1371/journal.pone.0152950 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time vaccination following an outbreak can effectively mitigate the
damage caused by an infectious disease. However, in many cases, available
resources are insufficient to vaccinate the entire at-risk population,
logistics result in delayed vaccine deployment, and the interaction between
members of different cities facilitates a wide spatial spread of infection.
Limited vaccine, time delays, and interaction (or coupling) of cities lead to
tradeoffs that impact the overall magnitude of the epidemic. These tradeoffs
mandate investigation of optimal strategies that minimize the severity of the
epidemic by prioritizing allocation of vaccine to specific subpopulations. We
use an SIR model to describe the disease dynamics of an epidemic which breaks
out in one city and spreads to another. We solve a master equation to determine
the resulting probability distribution of the final epidemic size. We then
identify tradeoffs between vaccine, time delay, and coupling, and we determine
the optimal vaccination protocols resulting from these tradeoffs.
| [
{
"created": "Tue, 9 Feb 2016 21:41:07 GMT",
"version": "v1"
}
] | 2016-03-29 | [
[
"Nguyen",
"Chantal",
""
],
[
"Carlson",
"Jean M.",
""
]
] | Real-time vaccination following an outbreak can effectively mitigate the damage caused by an infectious disease. However, in many cases, available resources are insufficient to vaccinate the entire at-risk population, logistics result in delayed vaccine deployment, and the interaction between members of different cities facilitates a wide spatial spread of infection. Limited vaccine, time delays, and interaction (or coupling) of cities lead to tradeoffs that impact the overall magnitude of the epidemic. These tradeoffs mandate investigation of optimal strategies that minimize the severity of the epidemic by prioritizing allocation of vaccine to specific subpopulations. We use an SIR model to describe the disease dynamics of an epidemic which breaks out in one city and spreads to another. We solve a master equation to determine the resulting probability distribution of the final epidemic size. We then identify tradeoffs between vaccine, time delay, and coupling, and we determine the optimal vaccination protocols resulting from these tradeoffs. |
2305.10473 | Collin Beaudoin | Collin Beaudoin, Koustubh Phalak, Swaroop Ghosh | Predicting Side Effect of Drug Molecules using Recurrent Neural Networks | 6 pages, 4 figures, 2 tables | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Identification and verification of molecular properties such as side effects
is one of the most important and time-consuming steps in the process of
molecule synthesis. For example, failure to identify side effects before
submission to regulatory groups can cost millions of dollars and months of
additional research to the companies. Failure to identify side effects during
the regulatory review can also cost lives. The complexity and expense of this
task have made it a candidate for a machine learning-based solution. Prior
approaches rely on complex model designs and excessive parameter counts for
side effect predictions. We believe reliance on complex models only shifts the
difficulty away from chemists rather than alleviating the issue. Implementing
large models is also expensive without prior access to high-performance
computers. We propose a heuristic approach that allows for the utilization of
simple neural networks, specifically the recurrent neural network, with a 98+%
reduction in the number of required parameters compared to available large
language models while still obtaining near identical results as top-performing
models.
| [
{
"created": "Wed, 17 May 2023 16:56:19 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2024 18:07:20 GMT",
"version": "v2"
}
] | 2024-04-12 | [
[
"Beaudoin",
"Collin",
""
],
[
"Phalak",
"Koustubh",
""
],
[
"Ghosh",
"Swaroop",
""
]
] | Identification and verification of molecular properties such as side effects is one of the most important and time-consuming steps in the process of molecule synthesis. For example, failure to identify side effects before submission to regulatory groups can cost millions of dollars and months of additional research to the companies. Failure to identify side effects during the regulatory review can also cost lives. The complexity and expense of this task have made it a candidate for a machine learning-based solution. Prior approaches rely on complex model designs and excessive parameter counts for side effect predictions. We believe reliance on complex models only shifts the difficulty away from chemists rather than alleviating the issue. Implementing large models is also expensive without prior access to high-performance computers. We propose a heuristic approach that allows for the utilization of simple neural networks, specifically the recurrent neural network, with a 98+% reduction in the number of required parameters compared to available large language models while still obtaining near identical results as top-performing models. |
1404.0389 | Krishnakumar Garikipati | Mirko Maraldi, Krishna Garikipati | The mechano-chemistry of cytoskeletal force generation | 22 pages, 6 figures, 1 table, accepted in Biomechanics and Modeling
in Mechanobiology | null | null | null | q-bio.SC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this communication, we propose a model to study the non-equilibrium
process by which actin stress fibers develop force in contractile cells. The
emphasis here is on the non-equilibrium thermodynamics, which is necessary to
address the mechanics as well as the chemistry of dynamic cell contractility.
In this setting we are able to develop a framework that relates (a) the
dynamics of force generation within the cell and (b) the cell response to
external stimuli to the chemical processes occurring within the cell, as well
as to the mechanics of linkage between the stress fibers, focal adhesions and
extra-cellular matrix.
| [
{
"created": "Tue, 1 Apr 2014 20:08:14 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Apr 2014 23:59:56 GMT",
"version": "v2"
}
] | 2014-04-25 | [
[
"Maraldi",
"Mirko",
""
],
[
"Garikipati",
"Krishna",
""
]
] | In this communication, we propose a model to study the non-equilibrium process by which actin stress fibers develop force in contractile cells. The emphasis here is on the non-equilibrium thermodynamics, which is necessary to address the mechanics as well as the chemistry of dynamic cell contractility. In this setting we are able to develop a framework that relates (a) the dynamics of force generation within the cell and (b) the cell response to external stimuli to the chemical processes occurring within the cell, as well as to the mechanics of linkage between the stress fibers, focal adhesions and extra-cellular matrix. |
1802.07201 | Haiming Tang | Haiming Tang, Angela Wilkins | Inference of gene loss rates after whole genome duplications at early
vertebrates through ancient genome reconstructions | null | null | null | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The famous 2R hypothesis was first proposed by Susumu Ohno in 1970. It states
that the two whole genome duplications had shaped the genome of early
vertebrates. The most convincing evidence for the 2R hypothesis comes from the 4:1
ratio chromosomal regions that have preserved both gene content and order in
vertebrates compared with closely related species. However, due to the shortage of
such strict evidence, the 2R hypothesis is still under debate.
Here, we present a combined perspective of phylogenetic and genomic homology
to revisit the hypothesis of 2R whole genome duplications. Ancestral vertebrate
genomes as well as ancient duplication events were created from 17 extant
vertebrate species. Extant descendants from the duplication events at early
vertebrates were extracted and reorganized to partial genomes. We then examined
the gene order based synteny, and projected back to phylogenetic gene trees for
examination of synteny evidence of the reconstructed early vertebrate genes. We
identified 7877 ancestral genes that were created from 3026 duplication events
at early vertebrates, and more than 50% of the duplication events show synteny
evidence. Thus, our reconstructions provide very strong evidence for the 2R
hypothesis.
We also reconstructed the genome of early vertebrates, and built a model of
the gene gains and losses in early vertebrates. We estimated that there were
about 12,000 genes in early vertebrates before 2R, and the probability of a
random gene getting lost after the first round of whole genome duplication is
around 0.45, and the probability of a random gene getting lost after the second
round of whole genome duplication is around 0.55.
This research provides convincing evidence for the 2R hypothesis, and may
provide further insights into vertebrate evolution.
Data availability: https://github.com/haimingt/Ohnologs-and-2R-WGD
| [
{
"created": "Tue, 20 Feb 2018 17:02:50 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2018 16:34:14 GMT",
"version": "v2"
}
] | 2018-05-23 | [
[
"Tang",
"Haiming",
""
],
[
"Wilkins",
"Angela",
""
]
] | The famous 2R hypothesis was first proposed by Susumu Ohno in 1970. It states that the two whole genome duplications had shaped the genome of early vertebrates. The most convincing evidence for the 2R hypothesis comes from the 4:1 ratio chromosomal regions that have preserved both gene content and order in vertebrates compared with closely related species. However, due to the shortage of such strict evidence, the 2R hypothesis is still under debate. Here, we present a combined perspective of phylogenetic and genomic homology to revisit the hypothesis of 2R whole genome duplications. Ancestral vertebrate genomes as well as ancient duplication events were created from 17 extant vertebrate species. Extant descendants from the duplication events at early vertebrates were extracted and reorganized to partial genomes. We then examined the gene order based synteny, and projected back to phylogenetic gene trees for examination of synteny evidence of the reconstructed early vertebrate genes. We identified 7877 ancestral genes that were created from 3026 duplication events at early vertebrates, and more than 50% of the duplication events show synteny evidence. Thus, our reconstructions provide very strong evidence for the 2R hypothesis. We also reconstructed the genome of early vertebrates, and built a model of the gene gains and losses in early vertebrates. We estimated that there were about 12,000 genes in early vertebrates before 2R, and the probability of a random gene getting lost after the first round of whole genome duplication is around 0.45, and the probability of a random gene getting lost after the second round of whole genome duplication is around 0.55. This research provides convincing evidence for the 2R hypothesis, and may provide further insights into vertebrate evolution. Data availability: https://github.com/haimingt/Ohnologs-and-2R-WGD
1206.3697 | Paul Smolen | Paul Smolen, Douglas A. Baxter, John H. Byrne | Molecular Constraints on Synaptic Tagging and Maintenance of Long-Term
Potentiation: A Predictive Model | v3. Minor text edits to reflect published version | PLoS Comput Biol 8(8): e1002620, 2012 | 10.1371/journal.pcbi.1002620 | null | q-bio.NC q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein synthesis-dependent, late long-term potentiation (LTP) and depression
(LTD) at glutamatergic hippocampal synapses are well characterized examples of
long-term synaptic plasticity. Persistent increased activity of the enzyme
protein kinase M (PKM) is thought essential for maintaining LTP. Additional
spatial and temporal features that govern LTP and LTD induction are embodied in
the synaptic tagging and capture (STC) and cross capture hypotheses. Only
synapses that have been "tagged" by a stimulus sufficient for LTP and learning
can "capture" PKM. A model was developed to simulate the dynamics of key
molecules required for LTP and LTD. The model concisely represents
relationships between tagging, capture, LTD, and LTP maintenance. The model
successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and
cross capture, and makes testable predictions concerning the dynamics of PKM.
The maintenance of LTP, and consequently of at least some forms of long-term
memory, is predicted to require continual positive feedback in which PKM
enhances its own synthesis only at potentiated synapses. This feedback
underlies bistability in the activity of PKM. Second, cross capture requires
the induction of LTD to induce dendritic PKM synthesis, although this may
require tagging of a nearby synapse for LTP. The model also simulates the
effects of PKM inhibition, and makes additional predictions for the dynamics of
CaM kinases. Experiments testing the above predictions would significantly
advance the understanding of memory maintenance.
| [
{
"created": "Sat, 16 Jun 2012 19:11:27 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jun 2012 23:36:22 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Aug 2012 20:13:55 GMT",
"version": "v3"
}
] | 2015-03-13 | [
[
"Smolen",
"Paul",
""
],
[
"Baxter",
"Douglas A.",
""
],
[
"Byrne",
"John H.",
""
]
] | Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well characterized examples of long-term synaptic plasticity. Persistent increased activity of the enzyme protein kinase M (PKM) is thought essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross capture hypotheses. Only synapses that have been "tagged" by a stimulus sufficient for LTP and learning can "capture" PKM. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents relationships between tagging, capture, LTD, and LTP maintenance. The model successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and cross capture, and makes testable predictions concerning the dynamics of PKM. The maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKM enhances its own synthesis only at potentiated synapses. This feedback underlies bistability in the activity of PKM. Second, cross capture requires the induction of LTD to induce dendritic PKM synthesis, although this may require tagging of a nearby synapse for LTP. The model also simulates the effects of PKM inhibition, and makes additional predictions for the dynamics of CaM kinases. Experiments testing the above predictions would significantly advance the understanding of memory maintenance.
2401.09558 | Sebastian Lobentanzer | Sebastian Lobentanzer, Pablo Rodriguez-Mier, Stefan Bauer, Julio
Saez-Rodriguez | Molecular causality in the advent of foundation models | 22 pages, 0 figures, 87 references; submitted to MSB | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Correlation is not causation. As simple as this widely agreed-upon statement
may seem, scientifically defining causality and using it to drive our modern
biomedical research is immensely challenging. In this perspective, we attempt
to synergise the partly disparate fields of systems biology, causal reasoning,
and machine learning, to inform future approaches in the field of systems
biology and molecular networks.
| [
{
"created": "Wed, 17 Jan 2024 19:25:04 GMT",
"version": "v1"
}
] | 2024-01-19 | [
[
"Lobentanzer",
"Sebastian",
""
],
[
"Rodriguez-Mier",
"Pablo",
""
],
[
"Bauer",
"Stefan",
""
],
[
"Saez-Rodriguez",
"Julio",
""
]
] | Correlation is not causation. As simple as this widely agreed-upon statement may seem, scientifically defining causality and using it to drive our modern biomedical research is immensely challenging. In this perspective, we attempt to synergise the partly disparate fields of systems biology, causal reasoning, and machine learning, to inform future approaches in the field of systems biology and molecular networks. |
1911.10878 | Lukas Eigentler | Lukas Eigentler and Jonathan A. Sherratt | Effects of precipitation intermittency on vegetation patterns in
semi-arid landscapes | null | null | 10.1016/j.physd.2020.132396 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patterns of vegetation are a characteristic feature of many semi-arid
regions. The limiting resource in these ecosystems is water, which is added to
the system through short and intense rainfall events that cause a pulse of
biological processes such as plant growth and seed dispersal. We propose an
impulsive model based on the Klausmeier reaction-advection-diffusion system,
analytically investigate the effects of rainfall intermittency on the onset of
patterns, and augment our results by numerical simulations of model extensions.
Our investigation focuses on the parameter region in which a transition between
uniform and patterned vegetation occurs. Results show that decay-type processes
associated with a low frequency of precipitation pulses inhibit the onset of
patterns and that under intermittent rainfall regimes, a spatially uniform
solution is sustained at lower total precipitation volumes than under
continuous rainfall, if plant species are unable to efficiently use low soil
moisture levels. Unlike in the classical setting of a reaction-diffusion model,
patterns are not caused by a diffusion-driven instability but by a combination
of sufficiently long periods of droughts between precipitation pulses and water
diffusion. Our results further indicate that the introduction of pulse-type
seed dispersal weakens the effects of changes to width and shape of the plant
dispersal kernel on the onset of patterns.
| [
{
"created": "Mon, 25 Nov 2019 12:51:49 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Dec 2019 11:31:59 GMT",
"version": "v2"
}
] | 2020-02-17 | [
[
"Eigentler",
"Lukas",
""
],
[
"Sherratt",
"Jonathan A.",
""
]
] | Patterns of vegetation are a characteristic feature of many semi-arid regions. The limiting resource in these ecosystems is water, which is added to the system through short and intense rainfall events that cause a pulse of biological processes such as plant growth and seed dispersal. We propose an impulsive model based on the Klausmeier reaction-advection-diffusion system, analytically investigate the effects of rainfall intermittency on the onset of patterns, and augment our results by numerical simulations of model extensions. Our investigation focuses on the parameter region in which a transition between uniform and patterned vegetation occurs. Results show that decay-type processes associated with a low frequency of precipitation pulses inhibit the onset of patterns and that under intermittent rainfall regimes, a spatially uniform solution is sustained at lower total precipitation volumes than under continuous rainfall, if plant species are unable to efficiently use low soil moisture levels. Unlike in the classical setting of a reaction-diffusion model, patterns are not caused by a diffusion-driven instability but by a combination of sufficiently long periods of droughts between precipitation pulses and water diffusion. Our results further indicate that the introduction of pulse-type seed dispersal weakens the effects of changes to width and shape of the plant dispersal kernel on the onset of patterns. |
1808.07574 | Jean Claude Kamgang J C K | Jean Claude Kamgang and Christopher Thron | Analysis of Malaria Control Measures Effectiveness Using Multi-Stage
Vector Model | 34 pages , 3 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze an epidemiological model to evaluate the effectiveness of multiple
means of control in malaria-endemic areas. The mathematical model consists of a
system of several ordinary differential equations, and is based on a
multicompartment representation of the system. The model takes into account the
multiple resting-questing stages undergone by adult female mosquitoes during the
period in which they function as disease vectors. We compute the basic
reproduction number $\mathcal R_0$, and show that if $\mathcal R_0<1$, the
disease free equilibrium (DFE) is globally asymptotically stable (GAS) on the
non-negative orthant. If $\mathcal R_0>1$, the system admits a unique endemic
equilibrium (EE) that is GAS. We perform a sensitivity analysis of the
dependence of $\mathcal R_0$ and the EE on parameters related to control
measures, such as killing effectiveness and bite prevention. Finally, we
discuss the implications for a comprehensive, cost-effective strategy for
malaria control.
| [
{
"created": "Wed, 22 Aug 2018 21:55:02 GMT",
"version": "v1"
}
] | 2023-06-28 | [
[
"Kamgang",
"Jean Claude",
""
],
[
"Thron",
"Christopher",
""
]
] | We analyze an epidemiological model to evaluate the effectiveness of multiple means of control in malaria-endemic areas. The mathematical model consists of a system of several ordinary differential equations, and is based on a multicompartment representation of the system. The model takes into account the multiple resting-questing stages undergone by adult female mosquitoes during the period in which they function as disease vectors. We compute the basic reproduction number $\mathcal R_0$, and show that if $\mathcal R_0<1$, the disease free equilibrium (DFE) is globally asymptotically stable (GAS) on the non-negative orthant. If $\mathcal R_0>1$, the system admits a unique endemic equilibrium (EE) that is GAS. We perform a sensitivity analysis of the dependence of $\mathcal R_0$ and the EE on parameters related to control measures, such as killing effectiveness and bite prevention. Finally, we discuss the implications for a comprehensive, cost-effective strategy for malaria control.
1912.08106 | Buddhapriya Chakrabarti | Alexander I. P. Taylor, Lianne D. Gahan, Rosemary A. Staniforth, and
Buddhapriya Chakrabarti | A Two-Step Biopolymer Nucleation Model Shows a Nonequilibrium Critical
Point | 22 pages, 6 figures (for the full abstract see the paper pdf) | null | 10.1063/5.0009394 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biopolymer self-assembly pathways are central to biological activity, but are
complicated by the ability of the monomeric subunits of biopolymers to adopt
different conformational states. As a result, biopolymer nucleation often
involves a two-step mechanism where the monomers first condense to form a
metastable intermediate, and this then converts to a stable polymer by
conformational rearrangement of its constituent monomers. While existing
mathematical models neglect the dynamics by which intermediates convert to
stable polymers, experiments and simulations show that these dynamics
frequently occur on comparable timescales to condensation of intermediates and
growth of mature polymers, and thus cannot be ignored. Moreover, nucleation
intermediates are responsible for cell toxicity in pathologies such as
Alzheimer's, Parkinson's, and prion diseases. Due to the relationship between
conformation and biological function, the slow conversion dynamics of these
species will strongly affect their toxicity. In this study, we present a
modified Oosawa model which explicitly accounts for simultaneous assembly and
conversion. To describe the conversion dynamics, we propose an experimentally
motivated initiation-propagation (IP) mechanism in which the stable phase
arises locally within the intermediate, and then spreads through additional
conversion events induced by nearest-neighbor interactions, analogous to
one-dimensional Glauber dynamics. Our mathematical analysis shows that the
competing timescales of assembly and conversion result in a nonequilibrium
critical point, separating a regime where intermediates are kinetically
unstable from one where conformationally mixed intermediates can accumulate.
Our work provides the first general model of two-step biopolymer nucleation,
which can be used to quantitatively predict the concentration and composition
of biologically crucial intermediates.
| [
{
"created": "Tue, 17 Dec 2019 16:01:12 GMT",
"version": "v1"
}
] | 2020-08-26 | [
[
"Taylor",
"Alexander I. P.",
""
],
[
"Gahan",
"Lianne D.",
""
],
[
"Staniforth",
"Rosemary A.",
""
],
[
"Chakrabarti",
"Buddhapriya",
""
]
] | Biopolymer self-assembly pathways are central to biological activity, but are complicated by the ability of the monomeric subunits of biopolymers to adopt different conformational states. As a result, biopolymer nucleation often involves a two-step mechanism where the monomers first condense to form a metastable intermediate, and this then converts to a stable polymer by conformational rearrangement of its constituent monomers. While existing mathematical models neglect the dynamics by which intermediates convert to stable polymers, experiments and simulations show that these dynamics frequently occur on comparable timescales to condensation of intermediates and growth of mature polymers, and thus cannot be ignored. Moreover, nucleation intermediates are responsible for cell toxicity in pathologies such as Alzheimer's, Parkinson's, and prion diseases. Due to the relationship between conformation and biological function, the slow conversion dynamics of these species will strongly affect their toxicity. In this study, we present a modified Oosawa model which explicitly accounts for simultaneous assembly and conversion. To describe the conversion dynamics, we propose an experimentally motivated initiation-propagation (IP) mechanism in which the stable phase arises locally within the intermediate, and then spreads through additional conversion events induced by nearest-neighbor interactions, analogous to one-dimensional Glauber dynamics. Our mathematical analysis shows that the competing timescales of assembly and conversion result in a nonequilibrium critical point, separating a regime where intermediates are kinetically unstable from one where conformationally mixed intermediates can accumulate. Our work provides the first general model of two-step biopolymer nucleation, which can be used to quantitatively predict the concentration and composition of biologically crucial intermediates. |
2210.09554 | Bartek Rajwa | Abida Sanjana Shemonti, Emanuele Plebani, Natalia P. Biscola, Deborah
M. Jaffey, Leif A. Havton, Janet R. Keast, Alex Pothen, M. Murat Dundar,
Terry L. Powley, Bartek Rajwa | A novel statistical methodology for quantifying the spatial arrangements
of axons in peripheral nerves | 10 figures | null | null | null | q-bio.NC cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A thorough understanding of the neuroanatomy of peripheral nerves is required
for a better insight into their function and the development of neuromodulation
tools and strategies. In biophysical modeling, it is commonly assumed that the
complex spatial arrangement of myelinated and unmyelinated axons in peripheral
nerves is random, however, in reality the axonal organization is inhomogeneous
and anisotropic. Present quantitative neuroanatomy methods analyze peripheral
nerves in terms of the number of axons and the morphometric characteristics of
the axons, such as area and diameter. In this study, we employed spatial
statistics and point process models to describe the spatial arrangement of
axons and Sinkhorn distances to compute the similarities between these
arrangements (in terms of first- and second-order statistics) in various vagus
and pelvic nerve cross-sections. We utilized high-resolution TEM images that
have been segmented using a custom-built high-throughput deep learning system
based on a highly modified U-Net architecture. Our findings show a novel and
innovative approach to quantifying similarities between spatial point patterns
using metrics derived from the solution to the optimal transport problem. We
also present a generalizable pipeline for quantitative analysis of peripheral
nerve architecture. Our data demonstrate differences between male- and
female-originating samples and similarities between the pelvic and abdominal
vagus nerves.
| [
{
"created": "Tue, 18 Oct 2022 03:04:11 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Shemonti",
"Abida Sanjana",
""
],
[
"Plebani",
"Emanuele",
""
],
[
"Biscola",
"Natalia P.",
""
],
[
"Jaffey",
"Deborah M.",
""
],
[
"Havton",
"Leif A.",
""
],
[
"Keast",
"Janet R.",
""
],
[
"Pothen",
"Alex",
""
],
[
"Dundar",
"M. Murat",
""
],
[
"Powley",
"Terry L.",
""
],
[
"Rajwa",
"Bartek",
""
]
] | A thorough understanding of the neuroanatomy of peripheral nerves is required for a better insight into their function and the development of neuromodulation tools and strategies. In biophysical modeling, it is commonly assumed that the complex spatial arrangement of myelinated and unmyelinated axons in peripheral nerves is random; however, in reality the axonal organization is inhomogeneous and anisotropic. Present quantitative neuroanatomy methods analyze peripheral nerves in terms of the number of axons and the morphometric characteristics of the axons, such as area and diameter. In this study, we employed spatial statistics and point process models to describe the spatial arrangement of axons and Sinkhorn distances to compute the similarities between these arrangements (in terms of first- and second-order statistics) in various vagus and pelvic nerve cross-sections. We utilized high-resolution TEM images that have been segmented using a custom-built high-throughput deep learning system based on a highly modified U-Net architecture. Our findings show a novel and innovative approach to quantifying similarities between spatial point patterns using metrics derived from the solution to the optimal transport problem. We also present a generalizable pipeline for quantitative analysis of peripheral nerve architecture. Our data demonstrate differences between male- and female-originating samples and similarities between the pelvic and abdominal vagus nerves.
2308.13439 | Jason Hindes | Jason Hindes, Luis Mier-y-Teran-Romero, Ira B. Schwartz, and Michael
Assaf | Outbreak-size distributions under fluctuating rates | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | We study the effect of noisy infection (contact) and recovery rates on the
distribution of outbreak sizes in the stochastic SIR model. The rates are
modeled as Ornstein-Uhlenbeck processes with finite correlation time and
variance, which we illustrate using outbreak data from the RSV 2019-2020 season
in the US. In the limit of large populations, we find analytical solutions for
the outbreak-size distribution in the long-correlated (adiabatic) and
short-correlated (white) noise regimes, and demonstrate that the distribution
can be highly skewed with significant probabilities for large fluctuations away
from mean-field theory. Furthermore, we assess the relative contribution of
demographic and reaction-rate noise on the outbreak-size variance, and show
that demographic noise becomes irrelevant in the presence of slowly varying
reaction-rate noise but persists for large system sizes if the noise is fast.
Finally, we show that the crossover to the white-noise regime typically occurs
for correlation times that are on the same order as the characteristic recovery
time in the model.
| [
{
"created": "Fri, 25 Aug 2023 15:35:20 GMT",
"version": "v1"
}
] | 2023-08-28 | [
[
"Hindes",
"Jason",
""
],
[
"Mier-y-Teran-Romero",
"Luis",
""
],
[
"Schwartz",
"Ira B.",
""
],
[
"Assaf",
"Michael",
""
]
] | We study the effect of noisy infection (contact) and recovery rates on the distribution of outbreak sizes in the stochastic SIR model. The rates are modeled as Ornstein-Uhlenbeck processes with finite correlation time and variance, which we illustrate using outbreak data from the RSV 2019-2020 season in the US. In the limit of large populations, we find analytical solutions for the outbreak-size distribution in the long-correlated (adiabatic) and short-correlated (white) noise regimes, and demonstrate that the distribution can be highly skewed with significant probabilities for large fluctuations away from mean-field theory. Furthermore, we assess the relative contribution of demographic and reaction-rate noise on the outbreak-size variance, and show that demographic noise becomes irrelevant in the presence of slowly varying reaction-rate noise but persists for large system sizes if the noise is fast. Finally, we show that the crossover to the white-noise regime typically occurs for correlation times that are on the same order as the characteristic recovery time in the model. |
1307.6829 | Frank Albert | Frank W. Albert, Sebastian Treusch, Arthur H. Shockley, Joshua S.
Bloom and Leonid Kruglyak | Genetics of single-cell protein abundance variation in large yeast
populations | null | null | 10.1038/nature12904 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many DNA sequence variants influence phenotypes by altering gene expression.
Our understanding of these variants is limited by sample sizes of current
studies and by measurements of mRNA rather than protein abundance. We developed
a powerful method for identifying genetic loci that influence protein
expression in very large populations of the yeast Saccharomyces cerevisiae. The
method measures single-cell protein abundance through the use of
green-fluorescent-protein tags. We applied this method to 160 genes and
detected many more loci per gene than previous studies. We also observed closer
correspondence between loci that influence protein abundance and loci that
influence mRNA abundance of a given gene. Most loci cluster at hotspot
locations that influence multiple proteins - in some cases, more than half of
those examined. The variants that underlie these hotspots have profound effects
on the gene regulatory network and provide insights into genetic variation in
cell physiology between yeast strains.
| [
{
"created": "Thu, 25 Jul 2013 18:13:17 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Albert",
"Frank W.",
""
],
[
"Treusch",
"Sebastian",
""
],
[
"Shockley",
"Arthur H.",
""
],
[
"Bloom",
"Joshua S.",
""
],
[
"Kruglyak",
"Leonid",
""
]
] | Many DNA sequence variants influence phenotypes by altering gene expression. Our understanding of these variants is limited by sample sizes of current studies and by measurements of mRNA rather than protein abundance. We developed a powerful method for identifying genetic loci that influence protein expression in very large populations of the yeast Saccharomyces cerevisiae. The method measures single-cell protein abundance through the use of green-fluorescent-protein tags. We applied this method to 160 genes and detected many more loci per gene than previous studies. We also observed closer correspondence between loci that influence protein abundance and loci that influence mRNA abundance of a given gene. Most loci cluster at hotspot locations that influence multiple proteins - in some cases, more than half of those examined. The variants that underlie these hotspots have profound effects on the gene regulatory network and provide insights into genetic variation in cell physiology between yeast strains. |
1805.10681 | Lutz Fromhage | Lutz Fromhage and Michael D Jennions | The Strategic Reference Gene: an organismal theory of inclusive fitness | 43 pages, 7 figures | Proceedings of the Royal Society B, 2019 | 10.1098/rspb.2019.0459 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to define and use the concept of inclusive fitness is a contentious topic
in evolutionary theory. Inclusive fitness can be used to calculate selection on
a focal gene, but it is also applied to whole organisms. Individuals are then
predicted to appear designed as if to maximise their inclusive fitness,
provided that certain conditions are met (formally when interactions between
individuals are 'additive'). Here we argue that applying the concept of
inclusive fitness to organisms is justified under far broader conditions than
previously shown, but only if it is appropriately defined. Specifically, we
propose that organisms should maximise the sum of their offspring (including
any accrued due to the behaviour/phenotype of relatives), plus any effects on
their relatives' offspring production, weighted by relatedness. In contrast,
most theoreticians have argued that a focal individual's inclusive fitness
should exclude any offspring accrued due to the behaviour of relatives. Our
approach is based on the notion that long-term evolution follows the genome's
'majority interest' of building coherent bodies that are efficient 'vehicles'
for gene propagation. A gene favoured by selection that reduces the propagation
of unlinked genes at other loci (e.g. meiotic segregation distorters that lower
sperm production) is eventually neutralised by counter-selection throughout the
rest of the genome. Most phenotypes will therefore appear as if designed to
maximise the propagation of any given gene in a focal individual and its
relatives.
| [
{
"created": "Sun, 27 May 2018 20:08:27 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Jul 2018 15:03:55 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Nov 2018 11:29:03 GMT",
"version": "v3"
},
{
"created": "Mon, 11 Mar 2019 11:02:17 GMT",
"version": "v4"
},
{
"created": "Mon, 13 May 2019 08:26:56 GMT",
"version": "v5"
}
] | 2019-06-11 | [
[
"Fromhage",
"Lutz",
""
],
[
"Jennions",
"Michael D",
""
]
] | How to define and use the concept of inclusive fitness is a contentious topic in evolutionary theory. Inclusive fitness can be used to calculate selection on a focal gene, but it is also applied to whole organisms. Individuals are then predicted to appear designed as if to maximise their inclusive fitness, provided that certain conditions are met (formally when interactions between individuals are 'additive'). Here we argue that applying the concept of inclusive fitness to organisms is justified under far broader conditions than previously shown, but only if it is appropriately defined. Specifically, we propose that organisms should maximise the sum of their offspring (including any accrued due to the behaviour/phenotype of relatives), plus any effects on their relatives' offspring production, weighted by relatedness. In contrast, most theoreticians have argued that a focal individual's inclusive fitness should exclude any offspring accrued due to the behaviour of relatives. Our approach is based on the notion that long-term evolution follows the genome's 'majority interest' of building coherent bodies that are efficient 'vehicles' for gene propagation. A gene favoured by selection that reduces the propagation of unlinked genes at other loci (e.g. meiotic segregation distorters that lower sperm production) is eventually neutralised by counter-selection throughout the rest of the genome. Most phenotypes will therefore appear as if designed to maximise the propagation of any given gene in a focal individual and its relatives. |
1805.03108 | Joseph Ramsey | Joseph Ramsey and Bryan Andrews | FASK with Interventional Knowledge Recovers Edges from the Sachs Model | 13 pages, 21 figures, 2 tables, Technical Report | null | null | null | q-bio.MN cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report a procedure that, in one step from continuous data with minimal
preparation, recovers the graph found by Sachs et al. \cite{sachs2005causal},
with only a few edges different. The algorithm, Fast Adjacency Skewness (FASK),
relies on a mixture of linear reasoning and reasoning from the skewness of
variables; the Sachs data is a good candidate for this procedure since the
skewness of the variables is quite pronounced. We review the ground truth model
from Sachs et al. as well as some of the fluctuations seen in the protein
abundances in the system, give the Sachs model and the FASK model, and perform
a detailed comparison. Some variation in hyper-parameters is explored, though
the main result uses values at or near the defaults learned from work modeling
fMRI data.
| [
{
"created": "Sun, 6 May 2018 15:49:04 GMT",
"version": "v1"
}
] | 2018-05-09 | [
[
"Ramsey",
"Joseph",
""
],
[
"Andrews",
"Bryan",
""
]
] | We report a procedure that, in one step from continuous data with minimal preparation, recovers the graph found by Sachs et al. \cite{sachs2005causal}, with only a few edges different. The algorithm, Fast Adjacency Skewness (FASK), relies on a mixture of linear reasoning and reasoning from the skewness of variables; the Sachs data is a good candidate for this procedure since the skewness of the variables is quite pronounced. We review the ground truth model from Sachs et al. as well as some of the fluctuations seen in the protein abundances in the system, give the Sachs model and the FASK model, and perform a detailed comparison. Some variation in hyper-parameters is explored, though the main result uses values at or near the defaults learned from work modeling fMRI data. |
1605.07383 | Diego Fasoli | Diego Fasoli, Anna Cattani, Stefano Panzeri | From Local Chaos to Critical Slowing Down: A Theory of the Functional
Connectivity of Small Neural Circuits | 24 pages, 5 figures; compiled version of the Supplementary Materials
added (12 pages) | null | null | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional connectivity is a fundamental property of neural networks that
quantifies the segregation and integration of information between cortical
areas. Due to mathematical complexity, a theory that could explain how the
parameters of mesoscopic networks composed of a few tens of neurons affect the
functional connectivity is still to be formulated. Yet, many interesting
problems in neuroscience involve the study of networks composed of a small
number of neurons. Based on a recent study of the dynamics of small neural
circuits, we combine the analysis of local bifurcations of multi-population
neural networks of arbitrary size with the analytical calculation of the
functional connectivity. We study the functional connectivity in different
regimes, showing that external stimuli cause the network to switch from
asynchronous states characterized by weak correlation and low variability
(local chaos), to synchronous states characterized by strong correlations and
wide temporal fluctuations (critical slowing down). Local chaos typically
occurs in large networks, but here we show that it can also be generated by
strong stimuli in small neural circuits. On the other hand, critical slowing
down is expected to occur when the stimulus moves the network close to a local
bifurcation. In particular, strongly positive correlations occur at the
saddle-node and Andronov-Hopf bifurcations of the network, while strongly
negative correlations occur when the network undergoes a spontaneous
symmetry-breaking at the branching-point bifurcations. These results prove that
the functional connectivity of firing-rate network models is strongly affected
by the external stimuli even if the anatomical connections are fixed, and
suggest an effective mechanism through which biological networks can
dynamically modulate the encoding and integration of sensory information.
| [
{
"created": "Tue, 24 May 2016 11:22:08 GMT",
"version": "v1"
},
{
"created": "Thu, 26 May 2016 15:42:02 GMT",
"version": "v2"
}
] | 2016-05-27 | [
[
"Fasoli",
"Diego",
""
],
[
"Cattani",
"Anna",
""
],
[
"Panzeri",
"Stefano",
""
]
] | Functional connectivity is a fundamental property of neural networks that quantifies the segregation and integration of information between cortical areas. Due to mathematical complexity, a theory that could explain how the parameters of mesoscopic networks composed of a few tens of neurons affect the functional connectivity is still to be formulated. Yet, many interesting problems in neuroscience involve the study of networks composed of a small number of neurons. Based on a recent study of the dynamics of small neural circuits, we combine the analysis of local bifurcations of multi-population neural networks of arbitrary size with the analytical calculation of the functional connectivity. We study the functional connectivity in different regimes, showing that external stimuli cause the network to switch from asynchronous states characterized by weak correlation and low variability (local chaos), to synchronous states characterized by strong correlations and wide temporal fluctuations (critical slowing down). Local chaos typically occurs in large networks, but here we show that it can also be generated by strong stimuli in small neural circuits. On the other side, critical slowing down is expected to occur when the stimulus moves the network close to a local bifurcation. In particular, strongly positive correlations occur at the saddle-node and Andronov-Hopf bifurcations of the network, while strongly negative correlations occur when the network undergoes a spontaneous symmetry-breaking at the branching-point bifurcations. These results prove that the functional connectivity of firing-rate network models is strongly affected by the external stimuli even if the anatomical connections are fixed, and suggest an effective mechanism through which biological networks can dynamically modulate the encoding and integration of sensory information. |
1603.08952 | Giovanni Bussi | Sandro Bottaro, Alejandro Gil-Ley, Giovanni Bussi | RNA Folding Pathways in Stop Motion | Accepted for publication on Nucleic Acids Research | Nucleic Acids Research 44, 5883 (2016) | 10.1093/nar/gkw239 | null | q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a method for predicting RNA folding pathways, with an
application to the most important RNA tetraloops. The method is based on the
idea that ensembles of three-dimensional fragments extracted from
high-resolution crystal structures are heterogeneous enough to describe
metastable as well as intermediate states. These ensembles are first validated
by performing a quantitative comparison against available solution NMR data of
a set of RNA tetranucleotides. Notably, the agreement is better than that
obtained by comparing NMR with extensive all-atom molecular dynamics
simulations. We then propose a procedure based on diffusion maps and Markov
models that makes it possible to obtain reaction pathways and their relative
probabilities from fragment ensembles. This approach is applied to study the
helix-to-loop folding pathway of all the tetraloops from the GNRA and UNCG
families. The results give detailed insights into the folding mechanism that
are compatible with available experimental data and clarify the role of
intermediate states observed in previous simulation studies. The method is
computationally inexpensive and can be used to study arbitrary conformational
transitions.
| [
{
"created": "Tue, 29 Mar 2016 20:34:01 GMT",
"version": "v1"
}
] | 2016-11-21 | [
[
"Bottaro",
"Sandro",
""
],
[
"Gil-Ley",
"Alejandro",
""
],
[
"Bussi",
"Giovanni",
""
]
] | We introduce a method for predicting RNA folding pathways, with an application to the most important RNA tetraloops. The method is based on the idea that ensembles of three-dimensional fragments extracted from high-resolution crystal structures are heterogeneous enough to describe metastable as well as intermediate states. These ensembles are first validated by performing a quantitative comparison against available solution NMR data of a set of RNA tetranucleotides. Notably, the agreement is better with respect to the one obtained by comparing NMR with extensive all-atom molecular dynamics simulations. We then propose a procedure based on diffusion maps and Markov models that makes it possible to obtain reaction pathways and their relative probabilities from fragment ensembles. This approach is applied to study the helix-to-loop folding pathway of all the tetraloops from the GNRA and UNCG families. The results give detailed insights into the folding mechanism that are compatible with available experimental data and clarify the role of intermediate states observed in previous simulation studies. The method is computationally inexpensive and can be used to study arbitrary conformational transitions. |
1506.09178 | Giovanni Bussi | Giovanni Pinamonti, Sandro Bottaro, Cristian Micheletti, and Giovanni
Bussi | Elastic network models for RNA: a comparative assessment with molecular
dynamics and SHAPE experiments | This article has been accepted for publication in Nucleic Acids
Research Published by Oxford University Press | Nucleic Acids Res. 43, 7260 (2015) | 10.1093/nar/gkv708 | null | q-bio.BM cond-mat.stat-mech physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elastic network models (ENMs) are valuable and efficient tools for
characterizing the collective internal dynamics of proteins based on the
knowledge of their native structures. The increasing evidence that the
biological functionality of RNAs is often linked to their innate internal
motions poses the question of whether ENM approaches can be successfully
extended to this class of biomolecules. This issue is tackled here by
considering various families of elastic networks of increasing complexity
applied to a representative set of RNAs. The fluctuations predicted by the
alternative ENMs are stringently validated by comparison against extensive
molecular dynamics simulations and SHAPE experiments. We find that simulations
and experimental data are systematically best reproduced by either an all-atom
or a three-beads-per-nucleotide representation (sugar-base-phosphate), with the
latter arguably providing the best balance of accuracy and computational
complexity.
| [
{
"created": "Tue, 30 Jun 2015 17:45:58 GMT",
"version": "v1"
}
] | 2015-09-01 | [
[
"Pinamonti",
"Giovanni",
""
],
[
"Bottaro",
"Sandro",
""
],
[
"Micheletti",
"Cristian",
""
],
[
"Bussi",
"Giovanni",
""
]
] | Elastic network models (ENMs) are valuable and efficient tools for characterizing the collective internal dynamics of proteins based on the knowledge of their native structures. The increasing evidence that the biological functionality of RNAs is often linked to their innate internal motions, poses the question of whether ENM approaches can be successfully extended to this class of biomolecules. This issue is tackled here by considering various families of elastic networks of increasing complexity applied to a representative set of RNAs. The fluctuations predicted by the alternative ENMs are stringently validated by comparison against extensive molecular dynamics simulations and SHAPE experiments. We find that simulations and experimental data are systematically best reproduced by either an all-atom or a three-beads-per-nucleotide representation (sugar-base-phosphate), with the latter arguably providing the best balance of accuracy and computational complexity. |
2004.03147 | Sourish Das | Sourish Das | Prediction of COVID-19 Disease Progression in India : Under the Effect
of National Lockdown | null | null | null | null | q-bio.PE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this policy paper, we implement the epidemiological SIR model to estimate the
basic reproduction number $\mathcal{R}_0$ at the national and state level. We also
develop a statistical machine learning model to predict cases ahead of
time. Our analysis indicates that the situation in Punjab
($\mathcal{R}_0\approx 16$) is alarming and requires immediate, aggressive
attention. The $\mathcal{R}_0$ for Madhya Pradesh (3.37), Maharashtra
(3.25) and Tamil Nadu (3.09) is more than 3. The $\mathcal{R}_0$ of Andhra
Pradesh (2.96), Delhi (2.82) and West Bengal (2.77) is higher than India's
$\mathcal{R}_0=2.75$, as of 04 March, 2020. India's $\mathcal{R}_0=2.75$ (as of
04 March, 2020) is comparable to that of Hubei/China at the early disease
progression stage. Our analysis indicates that India's early disease
progression is similar to that of China. Therefore, with the lockdown in place,
India should expect as many cases as China, if not more. If the lockdown works,
we should expect fewer than 66,224 cases by May 01, 2020. All data and \texttt{R}
code for this paper are available from
\url{https://github.com/sourish-cmi/Covid19}
| [
{
"created": "Tue, 7 Apr 2020 06:35:41 GMT",
"version": "v1"
}
] | 2020-04-08 | [
[
"Das",
"Sourish",
""
]
] | In this policy paper, we implement the epidemiological SIR model to estimate the basic reproduction number $\mathcal{R}_0$ at the national and state level. We also develop a statistical machine learning model to predict cases ahead of time. Our analysis indicates that the situation in Punjab ($\mathcal{R}_0\approx 16$) is alarming and requires immediate, aggressive attention. The $\mathcal{R}_0$ for Madhya Pradesh (3.37), Maharashtra (3.25) and Tamil Nadu (3.09) is more than 3. The $\mathcal{R}_0$ of Andhra Pradesh (2.96), Delhi (2.82) and West Bengal (2.77) is higher than India's $\mathcal{R}_0=2.75$, as of 04 March, 2020. India's $\mathcal{R}_0=2.75$ (as of 04 March, 2020) is comparable to that of Hubei/China at the early disease progression stage. Our analysis indicates that India's early disease progression is similar to that of China. Therefore, with the lockdown in place, India should expect as many cases as China, if not more. If the lockdown works, we should expect fewer than 66,224 cases by May 01, 2020. All data and \texttt{R} code for this paper are available from \url{https://github.com/sourish-cmi/Covid19} |
2212.10414 | Francisco Acosta | Francisco Acosta, Sophia Sanborn, Khanh Dao Duc, Manu Madhav, Nina
Miolane | Quantifying Extrinsic Curvature in Neural Manifolds | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The neural manifold hypothesis postulates that the activity of a neural
population forms a low-dimensional manifold whose structure reflects that of
the encoded task variables. In this work, we combine topological deep
generative models and extrinsic Riemannian geometry to introduce a novel
approach for studying the structure of neural manifolds. This approach (i)
computes an explicit parameterization of the manifolds and (ii) estimates their
local extrinsic curvature--hence quantifying their shape within the neural
state space. Importantly, we prove that our methodology is invariant with
respect to transformations that do not bear meaningful neuroscience
information, such as permutation of the order in which neurons are recorded. We
show empirically that we correctly estimate the geometry of synthetic manifolds
generated from smooth deformations of circles, spheres, and tori, using
realistic noise levels. We additionally validate our methodology on simulated
and real neural data, and show that we recover geometric structure known to
exist in hippocampal place cells. We expect this approach to open new avenues
of inquiry into geometric neural correlates of perception and behavior.
| [
{
"created": "Tue, 20 Dec 2022 16:46:44 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 03:12:07 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Apr 2023 23:31:35 GMT",
"version": "v3"
}
] | 2023-04-26 | [
[
"Acosta",
"Francisco",
""
],
[
"Sanborn",
"Sophia",
""
],
[
"Duc",
"Khanh Dao",
""
],
[
"Madhav",
"Manu",
""
],
[
"Miolane",
"Nina",
""
]
] | The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this work, we combine topological deep generative models and extrinsic Riemannian geometry to introduce a novel approach for studying the structure of neural manifolds. This approach (i) computes an explicit parameterization of the manifolds and (ii) estimates their local extrinsic curvature--hence quantifying their shape within the neural state space. Importantly, we prove that our methodology is invariant with respect to transformations that do not bear meaningful neuroscience information, such as permutation of the order in which neurons are recorded. We show empirically that we correctly estimate the geometry of synthetic manifolds generated from smooth deformations of circles, spheres, and tori, using realistic noise levels. We additionally validate our methodology on simulated and real neural data, and show that we recover geometric structure known to exist in hippocampal place cells. We expect this approach to open new avenues of inquiry into geometric neural correlates of perception and behavior. |
1401.3786 | Kevin Thornton | Kevin R. Thornton | A C++ template library for efficient forward-time population genetic
simulation of large populations | null | null | 10.1534/genetics.114.165019 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | fwdpp is a C++ library of routines intended to facilitate the development of
forward-time simulations under arbitrary mutation and fitness models. The
library design provides a combination of speed, low memory overhead, and
modeling flexibility not currently available from other forward simulation
tools. The library is particularly useful when the simulation of large
populations is required, as programs implemented using the library are much
more efficient than other available forward simulation programs.
| [
{
"created": "Wed, 15 Jan 2014 23:22:41 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jan 2014 15:43:14 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Jun 2014 21:06:17 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Jun 2014 16:40:35 GMT",
"version": "v4"
}
] | 2014-06-24 | [
[
"Thornton",
"Kevin R.",
""
]
] | fwdpp is a C++ library of routines intended to facilitate the development of forward-time simulations under arbitrary mutation and fitness models. The library design provides a combination of speed, low memory overhead, and modeling flexibility not currently available from other forward simulation tools. The library is particularly useful when the simulation of large populations is required, as programs implemented using the library are much more efficient than other available forward simulation programs. |
2104.00094 | Benjamin Eltzner | Henrik Wiechers, Benjamin Eltzner, Stephan F. Huckemann, Kanti V.
Mardia | Clustering Schemes on the Torus with Application to RNA Clashes | 8 pages, 4 figures, conference submission to GSI 2021 | null | null | null | q-bio.BM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular structures of RNA molecules reconstructed from X-ray
crystallography frequently contain errors. Motivated by this problem we examine
clustering on a torus since RNA shapes can be described by dihedral angles. A
previously developed clustering method for torus data involves two tuning
parameters and we assess clustering results for different parameter values in
relation to the problem of so-called RNA clashes. This clustering problem is
part of the dynamically evolving field of statistics on manifolds. Statistical
problems on the torus highlight general challenges for statistics on manifolds.
Therefore, the torus PCA and clustering methods we propose make an important
contribution to directional statistics and statistics on manifolds in general.
| [
{
"created": "Sun, 28 Feb 2021 15:25:25 GMT",
"version": "v1"
}
] | 2021-04-02 | [
[
"Wiechers",
"Henrik",
""
],
[
"Eltzner",
"Benjamin",
""
],
[
"Huckemann",
"Stephan F.",
""
],
[
"Mardia",
"Kanti V.",
""
]
] | Molecular structures of RNA molecules reconstructed from X-ray crystallography frequently contain errors. Motivated by this problem we examine clustering on a torus since RNA shapes can be described by dihedral angles. A previously developed clustering method for torus data involves two tuning parameters and we assess clustering results for different parameter values in relation to the problem of so-called RNA clashes. This clustering problem is part of the dynamically evolving field of statistics on manifolds. Statistical problems on the torus highlight general challenges for statistics on manifolds. Therefore, the torus PCA and clustering methods we propose make an important contribution to directional statistics and statistics on manifolds in general. |
2103.14754 | Pedro Moreira Sr. | Pedro Moreira, Ana Marta Sequeira, Sara Pereira, R\'uben Rodrigues,
Miguel Rocha, Diana Lousa | ViralFP: A webserver of viral fusion proteins | null | null | null | null | q-bio.QM q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Viral fusion proteins are attached to the membrane of enveloped viruses (a
group that includes Coronaviruses, Dengue, HIV and Influenza) and catalyze
fusion between the viral and host membranes, enabling the virus to insert its
genetic material into the host cell. Given the importance of these
biomolecules, this work presents a centralized database containing the most
relevant information on viral fusion proteins, available through a free-to-use
web server accessible through the URL https://viralfp.bio.di.uminho.pt/. This
web application contains several bioinformatic tools, such as Clustal sequence
alignment and Weblogo, as well as a machine learning-based tool capable
of predicting the location of fusion peptides (the component of fusion proteins
that inserts into the host's cell membrane) within the fusion protein sequence.
Given the crucial role of these proteins in viral infection, their importance
as natural targets of our immune system and their potential as therapeutic
targets, this web application aims to foster our ability to fight pathogenic
viruses.
| [
{
"created": "Fri, 26 Mar 2021 22:32:17 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jun 2021 11:14:26 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Jun 2021 16:41:08 GMT",
"version": "v3"
}
] | 2021-06-25 | [
[
"Moreira",
"Pedro",
""
],
[
"Sequeira",
"Ana Marta",
""
],
[
"Pereira",
"Sara",
""
],
[
"Rodrigues",
"Rúben",
""
],
[
"Rocha",
"Miguel",
""
],
[
"Lousa",
"Diana",
""
]
] | Viral fusion proteins are attached to the membrane of enveloped viruses (a group that includes Coronaviruses, Dengue, HIV and Influenza) and catalyze fusion between the viral and host membranes, enabling the virus to insert its genetic material into the host cell. Given the importance of these biomolecules, this work presents a centralized database containing the most relevant information on viral fusion proteins, available through a free-to-use web server accessible through the URL https://viralfp.bio.di.uminho.pt/. This web application contains several bioinformatic tools, such as Clustal sequence alignment and Weblogo, as well as a machine learning-based tool capable of predicting the location of fusion peptides (the component of fusion proteins that inserts into the host's cell membrane) within the fusion protein sequence. Given the crucial role of these proteins in viral infection, their importance as natural targets of our immune system and their potential as therapeutic targets, this web application aims to foster our ability to fight pathogenic viruses. |
1506.06159 | Valery Kirzhner | Valery Kirzhner and Zeev Volkovich | Evaluation of the Number of Different Genomes on Medium and
Identification of Known Genomes Using Composition Spectra Approach | null | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article presents the theoretical foundations of the algorithm for
calculating the number of different genomes in the medium under study and of
two algorithms for determining the presence of a particular (known) genome in
this medium. The approach is based on the analysis of the compositional spectra
of subsequently sequenced samples of the medium. The theoretical estimations
required for the implementation of the algorithms are obtained.
| [
{
"created": "Fri, 19 Jun 2015 21:12:34 GMT",
"version": "v1"
}
] | 2015-06-23 | [
[
"Kirzhner",
"Valery",
""
],
[
"Volkovich",
"Zeev",
""
]
] | The article presents the theoretical foundations of the algorithm for calculating the number of different genomes in the medium under study and of two algorithms for determining the presence of a particular (known) genome in this medium. The approach is based on the analysis of the compositional spectra of subsequently sequenced samples of the medium. The theoretical estimations required for the implementation of the algorithms are obtained. |
1907.04436 | James Brunner | James D. Brunner and Nicholas Chia | Metabolite mediated modeling of microbial community dynamics captures
emergent behavior more effectively than species-species modeling | 23 pages, 8 Figures | null | 10.1098/rsif.2019.0423 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Personalized models of the gut microbiome are valuable for disease prevention
and treatment. For this, one requires a mathematical model that predicts
microbial community composition and the emergent behavior of microbial
communities. We seek a modeling strategy that can capture emergent behavior
when built from sets of universal individual interactions. Our investigation
reveals that species-metabolite interaction modeling is better able to capture
emergent behavior in community composition dynamics than direct species-species
modeling.
Using publicly available data, we examine the ability of species-species
models and species-metabolite models to predict trio growth experiments from
the outcomes of pair growth experiments. We compare quadratic species-species
interaction models and quadratic species-metabolite interaction models, and
conclude that only species-metabolite models have the necessary complexity to
explain a wide variety of interdependent growth outcomes. We also show that
general species-species interaction models cannot match patterns observed in
community growth dynamics, whereas species-metabolite models can. We conclude
that species-metabolite modeling will be important in the development of
accurate, clinically useful models of microbial communities.
| [
{
"created": "Tue, 9 Jul 2019 22:08:33 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Aug 2019 16:08:28 GMT",
"version": "v2"
}
] | 2019-10-29 | [
[
"Brunner",
"James D.",
""
],
[
"Chia",
"Nicholas",
""
]
] | Personalized models of the gut microbiome are valuable for disease prevention and treatment. For this, one requires a mathematical model that predicts microbial community composition and the emergent behavior of microbial communities. We seek a modeling strategy that can capture emergent behavior when built from sets of universal individual interactions. Our investigation reveals that species-metabolite interaction modeling is better able to capture emergent behavior in community composition dynamics than direct species-species modeling. Using publicly available data, we examine the ability of species-species models and species-metabolite models to predict trio growth experiments from the outcomes of pair growth experiments. We compare quadratic species-species interaction models and quadratic species-metabolite interaction models, and conclude that only species-metabolite models have the necessary complexity to explain a wide variety of interdependent growth outcomes. We also show that general species-species interaction models cannot match patterns observed in community growth dynamics, whereas species-metabolite models can. We conclude that species-metabolite modeling will be important in the development of accurate, clinically useful models of microbial communities.
0802.3915 | Pierre Peyret | S\'ebastien Rimour (LMGE), David Hill (LIMOS), C\'ecile Militon
(LMGE), Pierre Peyret (LMGE) | GoArrays: highly dynamic and efficient microarray probe design | null | Bioinformatics 21, 7 (2005) 1094-103 | 10.1093/bioinformatics/bti112 | null | q-bio.QM | null | MOTIVATION: The use of oligonucleotide microarray technology requires a very
detailed attention to the design of specific probes spotted on the solid phase.
These problems are far from being commonplace since they refer to complex
physicochemical constraints. Whereas there are more and more publicly available
programs for microarray oligonucleotide design, most of them use the same
algorithm or criteria to design oligos, with only little variation. RESULTS: We
show that classical approaches used in oligo design software may be inefficient
under certain experimental conditions, especially when dealing with complex
target mixtures. Indeed, our biological model is a human obligate parasite, the
microsporidia Encephalitozoon cuniculi. Targets that are extracted from
biological samples are composed of a mixture of pathogen transcripts and host
cell transcripts. We propose a new approach to design oligonucleotides which
combines good specificity with a potentially high sensitivity. This approach is
original in the biological point of view as well as in the algorithmic point of
view. We also present an experimental validation of this new strategy by
comparing results obtained with standard oligos and with our composite oligos.
A specific E.cuniculi microarray will overcome the difficulty to discriminate
the parasite mRNAs from the host cell mRNAs demonstrating the power of the
microarray approach to elucidate the lifestyle of an intracellular pathogen
using mix mRNAs.
| [
{
"created": "Tue, 26 Feb 2008 21:38:14 GMT",
"version": "v1"
}
] | 2008-02-28 | [
[
"Rimour",
"Sébastien",
"",
"LMGE"
],
[
"Hill",
"David",
"",
"LIMOS"
],
[
"Militon",
"Cécile",
"",
"LMGE"
],
[
"Peyret",
"Pierre",
"",
"LMGE"
]
] | MOTIVATION: The use of oligonucleotide microarray technology requires a very detailed attention to the design of specific probes spotted on the solid phase. These problems are far from being commonplace since they refer to complex physicochemical constraints. Whereas there are more and more publicly available programs for microarray oligonucleotide design, most of them use the same algorithm or criteria to design oligos, with only little variation. RESULTS: We show that classical approaches used in oligo design software may be inefficient under certain experimental conditions, especially when dealing with complex target mixtures. Indeed, our biological model is a human obligate parasite, the microsporidia Encephalitozoon cuniculi. Targets that are extracted from biological samples are composed of a mixture of pathogen transcripts and host cell transcripts. We propose a new approach to design oligonucleotides which combines good specificity with a potentially high sensitivity. This approach is original in the biological point of view as well as in the algorithmic point of view. We also present an experimental validation of this new strategy by comparing results obtained with standard oligos and with our composite oligos. A specific E.cuniculi microarray will overcome the difficulty to discriminate the parasite mRNAs from the host cell mRNAs demonstrating the power of the microarray approach to elucidate the lifestyle of an intracellular pathogen using mix mRNAs. |
2304.06661 | Kieran Sharkey | Wajid Ali, Christopher E. Overton, Robert R. Wilkinson, Kieran J.
Sharkey | Deterministic epidemic models overestimate the basic reproduction number
of observed outbreaks | To be published in Infectious Disease Modelling | null | 10.1016/j.idm.2024.02.007 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The basic reproduction number, $R_0$, is a well-known quantifier of epidemic
spread. However, a class of existing methods for estimating $R_0$ from
incidence data early in the epidemic can lead to an over-estimation of this
quantity. In particular, when fitting deterministic models to estimate the rate
of spread, we do not account for the stochastic nature of epidemics and that,
given the same system, some outbreaks may lead to epidemics and some may not.
Typically, an observed epidemic that we wish to control is a major outbreak.
This amounts to implicit selection for major outbreaks which leads to the
over-estimation problem. We formally characterised the split between major and
minor outbreaks by using Otsu's method which provides us with a working
definition. We show that by conditioning a `deterministic' model on major
outbreaks, we can more reliably estimate the basic reproduction number from an
observed epidemic trajectory.
| [
{
"created": "Thu, 13 Apr 2023 16:49:36 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2024 15:11:29 GMT",
"version": "v2"
}
] | 2024-03-27 | [
[
"Ali",
"Wajid",
""
],
[
"Overton",
"Christopher E.",
""
],
[
"Wilkinson",
"Robert R.",
""
],
[
"Sharkey",
"Kieran J.",
""
]
] | The basic reproduction number, $R_0$, is a well-known quantifier of epidemic spread. However, a class of existing methods for estimating $R_0$ from incidence data early in the epidemic can lead to an over-estimation of this quantity. In particular, when fitting deterministic models to estimate the rate of spread, we do not account for the stochastic nature of epidemics and that, given the same system, some outbreaks may lead to epidemics and some may not. Typically, an observed epidemic that we wish to control is a major outbreak. This amounts to implicit selection for major outbreaks which leads to the over-estimation problem. We formally characterised the split between major and minor outbreaks by using Otsu's method which provides us with a working definition. We show that by conditioning a `deterministic' model on major outbreaks, we can more reliably estimate the basic reproduction number from an observed epidemic trajectory. |
0902.1821 | Narayanan Viswanath Chulliparambil | Viswanath.C.Narayanan | A new distance between DNA sequences | 18 pages | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new distance metric for DNA sequences, which can be defined on
any evolutionary Markov model with infinitesimal generator matrix Q. That is
the new metric can be defined under existing models such as Jukes-Cantor model,
Kimura-2-parameter model, F84 model, GTR model etc. Since our metric does not
depend on the form of the generator matrix Q, it can be defined for very
general models including those with varying nucleotide substitution rates among
lineages. This makes our metric widely applicable. The simulation experiments
carried out show that the new metric, when defined under classical models such
as the JC, F84 and Kimura-2-parameter models, performs better than these
existing metrics in recovering phylogenetic trees from sequence data. Our
simulation experiments also show that the new metric, under a model that allows
varying nucleotide substitution rates among lineages, performs equally well or
better than its other forms studied.
| [
{
"created": "Wed, 11 Feb 2009 08:42:55 GMT",
"version": "v1"
}
] | 2009-02-12 | [
[
"Narayanan",
"Viswanath. C.",
""
]
] | We propose a new distance metric for DNA sequences, which can be defined on any evolutionary Markov model with infinitesimal generator matrix Q. That is the new metric can be defined under existing models such as Jukes-Cantor model, Kimura-2-parameter model, F84 model, GTR model etc. Since our metric does not depend on the form of the generator matrix Q, it can be defined for very general models including those with varying nucleotide substitution rates among lineages. This makes our metric widely applicable. The simulation experiments carried out show that the new metric, when defined under classical models such as the JC, F84 and Kimura-2-parameter models, performs better than these existing metrics in recovering phylogenetic trees from sequence data. Our simulation experiments also show that the new metric, under a model that allows varying nucleotide substitution rates among lineages, performs equally well or better than its other forms studied.
1409.5302 | Shakti N. Menon | Jinshan Xu, Shakti N. Menon, Rajeev Singh, Nicolas B. Garnier,
Sitabhra Sinha and Alain Pumir | The role of cellular coupling in the spontaneous generation of
electrical activity in uterine tissue | null | PLoS ONE 10 (2015) e0118443 | 10.1371/journal.pone.0118443 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spontaneous emergence of contraction-inducing electrical activity in the
uterus at the beginning of labor remains poorly understood, partly due to the
seemingly contradictory observation that isolated uterine cells are not
spontaneously active. It is known, however, that the expression of gap
junctions increases dramatically in the approach to parturition, which results
in a significant increase in inter-cellular electrical coupling. In this paper,
we build upon previous studies of the activity of electrically excitable smooth
muscle cells (myocytes) and investigate the mechanism through which the
coupling of these cells to electrically passive cells results in the generation
of spontaneous activity in the uterus. Using a recently developed, realistic
model of uterine muscle cell dynamics, we investigate a system consisting of a
myocyte coupled to passive cells. We then extend our analysis to a simple
two-dimensional lattice model of the tissue, with each myocyte being coupled to
its neighbors, as well as to a random number of passive cells. We observe that
different dynamical regimes can be observed over a range of gap junction
conductances: at low coupling strength, the activity is confined to cell
clusters, while the activity for high coupling may spread across the entire
tissue. Additionally, we find that the system supports the spontaneous
generation of spiral wave activity. Our results are both qualitatively and
quantitatively consistent with observations from in vitro experiments. In
particular, we demonstrate that an increase in inter-cellular electrical
coupling, for realistic parameter values, strongly facilitates the appearance
of spontaneous action potentials that may eventually lead to parturition.
| [
{
"created": "Thu, 18 Sep 2014 13:24:24 GMT",
"version": "v1"
}
] | 2016-03-24 | [
[
"Xu",
"Jinshan",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Singh",
"Rajeev",
""
],
[
"Garnier",
"Nicolas B.",
""
],
[
"Sinha",
"Sitabhra",
""
],
[
"Pumir",
"Alain",
""
]
] | The spontaneous emergence of contraction-inducing electrical activity in the uterus at the beginning of labor remains poorly understood, partly due to the seemingly contradictory observation that isolated uterine cells are not spontaneously active. It is known, however, that the expression of gap junctions increases dramatically in the approach to parturition, which results in a significant increase in inter-cellular electrical coupling. In this paper, we build upon previous studies of the activity of electrically excitable smooth muscle cells (myocytes) and investigate the mechanism through which the coupling of these cells to electrically passive cells results in the generation of spontaneous activity in the uterus. Using a recently developed, realistic model of uterine muscle cell dynamics, we investigate a system consisting of a myocyte coupled to passive cells. We then extend our analysis to a simple two-dimensional lattice model of the tissue, with each myocyte being coupled to its neighbors, as well as to a random number of passive cells. We observe that different dynamical regimes can be observed over a range of gap junction conductances: at low coupling strength, the activity is confined to cell clusters, while the activity for high coupling may spread across the entire tissue. Additionally, we find that the system supports the spontaneous generation of spiral wave activity. Our results are both qualitatively and quantitatively consistent with observations from in vitro experiments. In particular, we demonstrate that an increase in inter-cellular electrical coupling, for realistic parameter values, strongly facilitates the appearance of spontaneous action potentials that may eventually lead to parturition. |
2010.00758 | Arti Dua | Ashutosh Kumar, R. Adhikari and Arti Dua | Transients generate memory and break hyperbolicity in stochastic
enzymatic networks | 17 pages, 8 figures | null | 10.1063/5.0031368 | null | q-bio.MN physics.bio-ph physics.chem-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hyperbolic dependence of catalytic rate on substrate concentration is a
classical result in enzyme kinetics, quantified by the celebrated
Michaelis-Menten equation. The ubiquity of this relation in diverse chemical
and biological contexts has recently been rationalized by a graph-theoretic
analysis of deterministic reaction networks. Experiments, however, have
revealed that "molecular noise" - intrinsic stochasticity at the molecular
scale - leads to significant deviations from classical results and to
unexpected effects like "molecular memory", i.e., the breakdown of statistical
independence between turnover events. Here we show, through a new method of
analysis, that memory and non-hyperbolicity have a common source in an initial,
and observably long, transient peculiar to stochastic reaction networks of
multiple enzymes. Networks of single enzymes do not admit such transients. The
transient yields, asymptotically, to a steady-state in which memory vanishes
and hyperbolicity is recovered. We propose new statistical measures, defined in
terms of turnover times, to distinguish between the transient and steady states
and apply these to experimental data from a landmark experiment that first
observed molecular memory in a single enzyme with multiple binding sites. Our
study shows that catalysis at the molecular level with more than one enzyme
always contains a non-classical regime and provides insight on how the
classical limit is attained.
| [
{
"created": "Fri, 2 Oct 2020 02:59:07 GMT",
"version": "v1"
}
] | 2021-02-24 | [
[
"Kumar",
"Ashutosh",
""
],
[
"Adhikari",
"R.",
""
],
[
"Dua",
"Arti",
""
]
] | The hyperbolic dependence of catalytic rate on substrate concentration is a classical result in enzyme kinetics, quantified by the celebrated Michaelis-Menten equation. The ubiquity of this relation in diverse chemical and biological contexts has recently been rationalized by a graph-theoretic analysis of deterministic reaction networks. Experiments, however, have revealed that "molecular noise" - intrinsic stochasticity at the molecular scale - leads to significant deviations from classical results and to unexpected effects like "molecular memory", i.e., the breakdown of statistical independence between turnover events. Here we show, through a new method of analysis, that memory and non-hyperbolicity have a common source in an initial, and observably long, transient peculiar to stochastic reaction networks of multiple enzymes. Networks of single enzymes do not admit such transients. The transient yields, asymptotically, to a steady-state in which memory vanishes and hyperbolicity is recovered. We propose new statistical measures, defined in terms of turnover times, to distinguish between the transient and steady states and apply these to experimental data from a landmark experiment that first observed molecular memory in a single enzyme with multiple binding sites. Our study shows that catalysis at the molecular level with more than one enzyme always contains a non-classical regime and provides insight on how the classical limit is attained. |
1401.6430 | Brian Williams Dr | Brian G. Williams | Responding to the AIDS epidemic in Angola | 10 pages. arXiv admin note: substantial text overlap with
arXiv:1311.1815 | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The epidemic of HIV in Angola started later and stabilized at lower levels
than elsewhere in southern Africa. With a relatively small population and a
high GDP, Angola is in a good position to intervene decisively to control HIV.
The effectiveness, availability and affordability of anti-retroviral therapy
(ART) make it possible to contemplate ending the epidemic of HIV/AIDS in
Angola. We consider what would have happened without ART, the No ART
counterfactual, the impact on the epidemic if the current roll-out of ART is
maintained, the Current Programme, the impact if coverage is rapidly increased
to reach 90% of people with CD4+ cell counts below 350/micro-litre by 2015 and
HIV-positive pregnant women are all offered ART for life (Option B+), the
Accelerated Programme, and what might be possible under the 2013 guidelines
from the World Health Organization, starting in 2015 and reaching full coverage
of ART by 2018, the Expanded Programme. If Angola is to reach the 2015 targets
in the Presidents Acceleration Plan testing services will need to be expanded.
A regular, uninterrupted supply of drugs will have to be assured. Existing
health staff will need to be strengthened. Community health workers will need
to be mobilized and trained to encourage people to be tested and accept
treatment, to monitor progress and to support people on treatment; this in turn
will help to reduce stigma and discrimination, loss to follow up of people
diagnosed with HIV, and improve adherence for those on treatment. Effective
monitoring and evaluation systems will have to be in place and data collection
will have to be extended and improved to support the development of reliable
estimates of the current and future state of the epidemic, the success of the
programme, levels of viral load suppression for those on ART and the incidence
of infection.
| [
{
"created": "Wed, 22 Jan 2014 16:10:21 GMT",
"version": "v1"
}
] | 2014-01-27 | [
[
"Williams",
"Brian G.",
""
]
] | The epidemic of HIV in Angola started later and stabilized at lower levels than elsewhere in southern Africa. With a relatively small population and a high GDP, Angola is in a good position to intervene decisively to control HIV. The effectiveness, availability and affordability of anti-retroviral therapy (ART) make it possible to contemplate ending the epidemic of HIV/AIDS in Angola. We consider what would have happened without ART, the No ART counterfactual, the impact on the epidemic if the current roll-out of ART is maintained, the Current Programme, the impact if coverage is rapidly increased to reach 90% of people with CD4+ cell counts below 350/micro-litre by 2015 and HIV-positive pregnant women are all offered ART for life (Option B+), the Accelerated Programme, and what might be possible under the 2013 guidelines from the World Health Organization, starting in 2015 and reaching full coverage of ART by 2018, the Expanded Programme. If Angola is to reach the 2015 targets in the Presidents Acceleration Plan testing services will need to be expanded. A regular, uninterrupted supply of drugs will have to be assured. Existing health staff will need to be strengthened. Community health workers will need to be mobilized and trained to encourage people to be tested and accept treatment, to monitor progress and to support people on treatment; this in turn will help to reduce stigma and discrimination, loss to follow up of people diagnosed with HIV, and improve adherence for those on treatment. Effective monitoring and evaluation systems will have to be in place and data collection will have to be extended and improved to support the development of reliable estimates of the current and future state of the epidemic, the success of the programme, levels of viral load suppression for those on ART and the incidence of infection.
1701.06086 | Ricard Sole | Javier Macia, Blai Vidiella and Ricard Sole | Synthetic associative learning in engineered multicellular consortia | 5 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Associative learning is one of the key mechanisms displayed by living
organisms in order to adapt to their changing environments. It was early
recognized to be a general trait of complex multicellular organisms but also
found in "simpler" ones. It has also been explored within synthetic biology
using molecular circuits that are directly inspired in neural network models of
conditioning. These designs involve complex wiring diagrams to be implemented
within one single cell and the presence of diverse molecular wires become a
challenge that might be very difficult to overcome. Here we present three
alternative circuit designs based on two-cell microbial consortia able to
properly display associative learning responses to two classes of stimuli and
displaying long and short-term memory (i. e. the association can be lost with
time). These designs might be a helpful approach for engineering the human gut
microbiome or even synthetic organoids, defining a new class of decision-making
biological circuits capable of memory and adaptation to changing conditions.
The potential implications and extensions are outlined.
| [
{
"created": "Sat, 21 Jan 2017 20:57:08 GMT",
"version": "v1"
}
] | 2017-01-24 | [
[
"Macia",
"Javier",
""
],
[
"Vidiella",
"Blai",
""
],
[
"Sole",
"Ricard",
""
]
] | Associative learning is one of the key mechanisms displayed by living organisms in order to adapt to their changing environments. It was early recognized to be a general trait of complex multicellular organisms but also found in "simpler" ones. It has also been explored within synthetic biology using molecular circuits that are directly inspired in neural network models of conditioning. These designs involve complex wiring diagrams to be implemented within one single cell and the presence of diverse molecular wires become a challenge that might be very difficult to overcome. Here we present three alternative circuit designs based on two-cell microbial consortia able to properly display associative learning responses to two classes of stimuli and displaying long and short-term memory (i. e. the association can be lost with time). These designs might be a helpful approach for engineering the human gut microbiome or even synthetic organoids, defining a new class of decision-making biological circuits capable of memory and adaptation to changing conditions. The potential implications and extensions are outlined. |
1808.08565 | Birgitta Dresp-Langley | Birgitta Dresp-Langley | Affine Geometry, Visual Sensation, and Preference for Symmetry of Things
in a Thing | null | 2016, Symmetry, 8, 127 | 10.3390/sym8110127 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolution and geometry generate complexity in similar ways. Evolution drives
natural selection while geometry may capture the logic of this selection and
express it visually, in terms of specific generic properties representing some
kind of advantage. Geometry is ideally suited for expressing the logic of
evolutionary selection for symmetry, which is found in the shape curves of vein
systems and other natural objects such as leaves, cell membranes, or tunnel
systems built by ants. The topology and geometry of symmetry is controlled by
numerical parameters, which act in analogy with a biological organism's DNA.
The introductory part of this paper reviews findings from experiments
illustrating the critical role of two-dimensional design parameters and shape
symmetry for visual or tactile shape sensation, and for perception-based
decision making in populations of experts and non-experts. Thereafter, results
from a pilot study on the effects of fractal symmetry, referred to herein as
the symmetry of things in a thing, on aesthetic judgments and visual preference
are presented. In a first experiment (psychophysical scaling procedure),
non-expert observers had to rate (scale from 0 to 10) the perceived
attractiveness of a random series of 2D fractal trees with varying degrees of
fractal symmetry. In a second experiment (two-alternative forced choice
procedure), they had to express their preference for one of two shapes from the
series. The shape pairs were presented successively in random order. Results
show that the smallest possible fractal deviation from "symmetry of things in a
thing" significantly reduces the perceived attractiveness of such shapes. The
potential of future studies where different levels of complexity of fractal
patterns are weighed against different degrees of symmetry is pointed out in
the conclusion.
| [
{
"created": "Sun, 26 Aug 2018 14:45:24 GMT",
"version": "v1"
}
] | 2018-09-05 | [
[
"Dresp-Langley",
"Birgitta",
""
]
] | Evolution and geometry generate complexity in similar ways. Evolution drives natural selection while geometry may capture the logic of this selection and express it visually, in terms of specific generic properties representing some kind of advantage. Geometry is ideally suited for expressing the logic of evolutionary selection for symmetry, which is found in the shape curves of vein systems and other natural objects such as leaves, cell membranes, or tunnel systems built by ants. The topology and geometry of symmetry is controlled by numerical parameters, which act in analogy with a biological organism's DNA. The introductory part of this paper reviews findings from experiments illustrating the critical role of two-dimensional design parameters and shape symmetry for visual or tactile shape sensation, and for perception-based decision making in populations of experts and non-experts. Thereafter, results from a pilot study on the effects of fractal symmetry, referred to herein as the symmetry of things in a thing, on aesthetic judgments and visual preference are presented. In a first experiment (psychophysical scaling procedure), non-expert observers had to rate (scale from 0 to 10) the perceived attractiveness of a random series of 2D fractal trees with varying degrees of fractal symmetry. In a second experiment (two-alternative forced choice procedure), they had to express their preference for one of two shapes from the series. The shape pairs were presented successively in random order. Results show that the smallest possible fractal deviation from "symmetry of things in a thing" significantly reduces the perceived attractiveness of such shapes. The potential of future studies where different levels of complexity of fractal patterns are weighed against different degrees of symmetry is pointed out in the conclusion. |
1501.07342 | Mikhail Tikhonov | Mikhail Tikhonov, Shawn C. Little and Thomas Gregor | Only accessible information is useful: insights from gradient-mediated
patterning | Updated and refocused; 9 pages, 4 figures + supplement | Open Science 2(11): 150486, 2015 | 10.1098/rsos.150486 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information theory is gaining popularity as a tool to characterize
performance of biological systems. However, information is commonly quantified
without reference to whether or how a system could extract and use it; as a
result, information-theoretic quantities are easily misinterpreted. Here we
take the example of pattern-forming developmental systems which are commonly
structured as cascades of sequential gene expression steps. Such a multi-tiered
structure appears to constitute sub-optimal use of the positional information
provided by the input morphogen because noise is added at each tier. However,
the conventional theory fails to distinguish between the total information in a
morphogen and information that can be usefully extracted and interpreted by
downstream elements. We demonstrate that quantifying the information that is
_accessible_ to the system naturally explains the prevalence of multi-tiered
network architectures as a consequence of the noise inherent to the control of
gene expression. We support our argument with empirical observations from
patterning along the major body axis of the fruit fly embryo. Our results
exhibit the limitations of the standard information-theoretic characterization
of biological signaling and illustrate how they can be resolved.
| [
{
"created": "Thu, 29 Jan 2015 04:55:55 GMT",
"version": "v1"
},
{
"created": "Thu, 7 May 2015 17:56:09 GMT",
"version": "v2"
}
] | 2016-11-28 | [
[
"Tikhonov",
"Mikhail",
""
],
[
"Little",
"Shawn C.",
""
],
[
"Gregor",
"Thomas",
""
]
] | Information theory is gaining popularity as a tool to characterize performance of biological systems. However, information is commonly quantified without reference to whether or how a system could extract and use it; as a result, information-theoretic quantities are easily misinterpreted. Here we take the example of pattern-forming developmental systems which are commonly structured as cascades of sequential gene expression steps. Such a multi-tiered structure appears to constitute sub-optimal use of the positional information provided by the input morphogen because noise is added at each tier. However, the conventional theory fails to distinguish between the total information in a morphogen and information that can be usefully extracted and interpreted by downstream elements. We demonstrate that quantifying the information that is _accessible_ to the system naturally explains the prevalence of multi-tiered network architectures as a consequence of the noise inherent to the control of gene expression. We support our argument with empirical observations from patterning along the major body axis of the fruit fly embryo. Our results exhibit the limitations of the standard information-theoretic characterization of biological signaling and illustrate how they can be resolved. |
2112.14334 | Luisa Ramirez | Luisa Ramirez, William Bialek | Compression as a path to simplification: Models of collective neural
activity | null | null | null | null | q-bio.NC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patterns of activity in networks of neurons are a prototypical complex
system. Here we analyze data on the retina to show that information shared
between a single neuron and the rest of the network is compressible, through a
combination of the information bottleneck and an iteration scheme inspired by
the renormalization group. The result is that the number of parameters needed
to describe the distribution of joint activity scales with the square of the
number of neurons, even though the interactions are not well approximated as
pairwise. Our results also show that the shared information is essentially
equal to the information that individual neurons carry about natural visual
inputs, which has implications for the structure of the neural code.
| [
{
"created": "Tue, 28 Dec 2021 23:51:46 GMT",
"version": "v1"
}
] | 2021-12-30 | [
[
"Ramirez",
"Luisa",
""
],
[
"Bialek",
"William",
""
]
] | Patterns of activity in networks of neurons are a prototypical complex system. Here we analyze data on the retina to show that information shared between a single neuron and the rest of the network is compressible, through a combination of the information bottleneck and an iteration scheme inspired by the renormalization group. The result is that the number of parameters needed to describe the distribution of joint activity scales with the square of the number of neurons, even though the interactions are not well approximated as pairwise. Our results also show that the shared information is essentially equal to the information that individual neurons carry about natural visual inputs, which has implications for the structure of the neural code. |
2012.13248 | Ahmed Allam | Kyriakos Schwarz, Ahmed Allam, Nicolas Andres Perez Gonzalez, Michael
Krauthammer | AttentionDDI: Siamese Attention-based Deep Learning method for drug-drug
interaction predictions | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Drug-drug interactions (DDIs) refer to processes triggered by the
administration of two or more drugs leading to side effects beyond those
observed when drugs are administered by themselves. Due to the massive number
of possible drug pairs, it is nearly impossible to experimentally test all
combinations and discover previously unobserved side effects. Therefore,
machine learning based methods are being used to address this issue.
Methods: We propose a Siamese self-attention multi-modal neural network for
DDI prediction that integrates multiple drug similarity measures that have been
derived from a comparison of drug characteristics including drug targets,
pathways and gene expression profiles.
Results: Our proposed DDI prediction model provides multiple advantages: 1)
It is trained end-to-end, overcoming limitations of models composed of multiple
separate steps, 2) it offers model explainability via an Attention mechanism
for identifying salient input features and 3) it achieves similar or better
prediction performance (AUPR scores ranging from 0.77 to 0.92) compared to
state-of-the-art DDI models when tested on various benchmark datasets. Novel
DDI predictions are further validated using independent data resources.
Conclusions: We find that a Siamese multi-modal neural network is able to
accurately predict DDIs and that an Attention mechanism, typically used in the
Natural Language Processing domain, can be beneficially applied to aid in DDI
model explainability.
| [
{
"created": "Thu, 24 Dec 2020 13:33:07 GMT",
"version": "v1"
}
] | 2020-12-25 | [
[
"Schwarz",
"Kyriakos",
""
],
[
"Allam",
"Ahmed",
""
],
[
"Gonzalez",
"Nicolas Andres Perez",
""
],
[
"Krauthammer",
"Michael",
""
]
] | Background: Drug-drug interactions (DDIs) refer to processes triggered by the administration of two or more drugs leading to side effects beyond those observed when drugs are administered by themselves. Due to the massive number of possible drug pairs, it is nearly impossible to experimentally test all combinations and discover previously unobserved side effects. Therefore, machine learning based methods are being used to address this issue. Methods: We propose a Siamese self-attention multi-modal neural network for DDI prediction that integrates multiple drug similarity measures that have been derived from a comparison of drug characteristics including drug targets, pathways and gene expression profiles. Results: Our proposed DDI prediction model provides multiple advantages: 1) It is trained end-to-end, overcoming limitations of models composed of multiple separate steps, 2) it offers model explainability via an Attention mechanism for identifying salient input features and 3) it achieves similar or better prediction performance (AUPR scores ranging from 0.77 to 0.92) compared to state-of-the-art DDI models when tested on various benchmark datasets. Novel DDI predictions are further validated using independent data resources. Conclusions: We find that a Siamese multi-modal neural network is able to accurately predict DDIs and that an Attention mechanism, typically used in the Natural Language Processing domain, can be beneficially applied to aid in DDI model explainability. |
1504.07422 | Ehtibar Dzhafarov | Ehtibar Dzhafarov, Ru Zhang, and Janne Kujala | Is there contextuality in behavioral and social systems? | To be published in Phil. Trans. R. Soc. A, text with supplementary
files is not the journal's format | Phil. Trans. R. Soc. A 374: 20150099, 2015 | 10.1098/rsta.2015.0099 | null | q-bio.NC math.PR quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most behavioral and social experiments aimed at revealing contextuality are
confined to cyclic systems with binary outcomes. In quantum physics, this broad
class of systems includes as special cases
Klyachko-Can-Binicioglu-Shumovsky-type, Einstein-Podolsky-Rosen-Bell-type, and
Suppes-Zanotti-Leggett-Garg-type systems. The theory of contextuality known as
Contextuality-by-Default allows one to define and measure contextuality in all
such system, even if there are context-dependent errors in measurements, or if
something in the contexts directly interacts with the measurements. This makes
the theory especially suitable for behavioral and social systems, where direct
interactions of "everything with everything" are ubiquitous. For cyclic systems
with binary outcomes the theory provides necessary and sufficient conditions
for noncontextuality, and these conditions are known to be breached in certain
quantum systems. We review several behavioral and social data sets (from polls
of public opinion to visual illusions to conjoint choices to word combinations
to psychophysical matching), and none of these data provides any evidence for
contextuality. Our working hypothesis is that this may be a broadly applicable
rule: behavioral and social systems are noncontextual, i.e., all "contextual
effects" in them result from the ubiquitous dependence of response
distributions on the elements of contexts other than the ones to which the
response is presumably or normatively directed.
| [
{
"created": "Tue, 28 Apr 2015 10:59:51 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Apr 2015 23:34:43 GMT",
"version": "v2"
},
{
"created": "Mon, 25 May 2015 19:55:31 GMT",
"version": "v3"
},
{
"created": "Sat, 13 Jun 2015 23:15:08 GMT",
"version": "v4"
},
{
"created": "Sun, 23 Aug 2015 09:01:41 GMT",
"version": "v5"
}
] | 2016-02-12 | [
[
"Dzhafarov",
"Ehtibar",
""
],
[
"Zhang",
"Ru",
""
],
[
"Kujala",
"Janne",
""
]
] | Most behavioral and social experiments aimed at revealing contextuality are confined to cyclic systems with binary outcomes. In quantum physics, this broad class of systems includes as special cases Klyachko-Can-Binicioglu-Shumovsky-type, Einstein-Podolsky-Rosen-Bell-type, and Suppes-Zanotti-Leggett-Garg-type systems. The theory of contextuality known as Contextuality-by-Default allows one to define and measure contextuality in all such system, even if there are context-dependent errors in measurements, or if something in the contexts directly interacts with the measurements. This makes the theory especially suitable for behavioral and social systems, where direct interactions of "everything with everything" are ubiquitous. For cyclic systems with binary outcomes the theory provides necessary and sufficient conditions for noncontextuality, and these conditions are known to be breached in certain quantum systems. We review several behavioral and social data sets (from polls of public opinion to visual illusions to conjoint choices to word combinations to psychophysical matching), and none of these data provides any evidence for contextuality. Our working hypothesis is that this may be a broadly applicable rule: behavioral and social systems are noncontextual, i.e., all "contextual effects" in them result from the ubiquitous dependence of response distributions on the elements of contexts other than the ones to which the response is presumably or normatively directed. |
2401.09514 | Stuart Kauffman | Stuart Kauffman and Andrea Roli | Is the Emergence of Life an Expected Phase Transition in the Evolving
Universe? | null | null | null | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | We propose a novel definition of life in terms of which its emergence in the
universe is expected, and its ever-creative open-ended evolution is entailed by
no law. Living organisms are Kantian Wholes that achieve Catalytic Closure,
Constraint Closure, and Spatial Closure. We here unite for the first time two
established mathematical theories, namely Collectively Autocatalytic Sets and
the Theory of the Adjacent Possible. The former establishes that a first-order
phase transition to molecular reproduction is expected in the chemical
evolution of the universe where the diversity and complexity of molecules
increases; the latter posits that, under loose hypotheses, if the system starts
with a small number of beginning molecules, each of which can combine with
copies of itself or other molecules to make new molecules, over time the number
of kinds of molecules increases slowly but then explodes upward hyperbolically.
Together these theories imply that life is expected as a phase transition in
the evolving universe. The familiar distinction between software and hardware
loses its meaning in living cells. We propose new ways to study the phylogeny
of metabolisms, new astronomical ways to search for life on exoplanets, new
experiments to seek the emergence of the most rudimentary life, and the hint of
a coherent testable pathway to prokaryotes with template replication and
coding.
| [
{
"created": "Wed, 17 Jan 2024 15:22:32 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2024 08:39:16 GMT",
"version": "v2"
}
] | 2024-04-11 | [
[
"Kauffman",
"Stuart",
""
],
[
"Roli",
"Andrea",
""
]
] | We propose a novel definition of life in terms of which its emergence in the universe is expected, and its ever-creative open-ended evolution is entailed by no law. Living organisms are Kantian Wholes that achieve Catalytic Closure, Constraint Closure, and Spatial Closure. We here unite for the first time two established mathematical theories, namely Collectively Autocatalytic Sets and the Theory of the Adjacent Possible. The former establishes that a first-order phase transition to molecular reproduction is expected in the chemical evolution of the universe where the diversity and complexity of molecules increases; the latter posits that, under loose hypotheses, if the system starts with a small number of beginning molecules, each of which can combine with copies of itself or other molecules to make new molecules, over time the number of kinds of molecules increases slowly but then explodes upward hyperbolically. Together these theories imply that life is expected as a phase transition in the evolving universe. The familiar distinction between software and hardware loses its meaning in living cells. We propose new ways to study the phylogeny of metabolisms, new astronomical ways to search for life on exoplanets, new experiments to seek the emergence of the most rudimentary life, and the hint of a coherent testable pathway to prokaryotes with template replication and coding. |
2307.01210 | Mahboobeh Parsapoor | Mahboobeh Parsapoor (Mah Parsa) and Hamed Ghodrati, Vincenzo Dentamaro
and Christopher R. Madan and Ioulietta Lazarou and Spiros Nikolopoulos and
Ioannis Kompatsiaris | AI and Non AI Assessments for Dementia | 49 pages | null | null | null | q-bio.OT cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Current progress in the artificial intelligence domain has led to the
development of various types of AI-powered dementia assessments, which can be
employed to identify patients at the early stage of dementia. It can
revolutionize the dementia care settings. It is essential that the medical
community be aware of various AI assessments and choose them considering their
degrees of validity, efficiency, practicality, reliability, and accuracy
concerning the early identification of patients with dementia (PwD). On the
other hand, AI developers should be informed about various non-AI assessments
as well as recently developed AI assessments. Thus, this paper, which can be
readable by both clinicians and AI engineers, fills the gap in the literature
in explaining the existing solutions for the recognition of dementia to
clinicians, as well as the techniques used and the most widespread dementia
datasets to AI engineers. It follows a review of papers on AI and non-AI
assessments for dementia to provide valuable information about various dementia
assessments for both the AI and medical communities. The discussion and
conclusion highlight the most prominent research directions and the maturity of
existing solutions.
| [
{
"created": "Fri, 30 Jun 2023 03:28:47 GMT",
"version": "v1"
}
] | 2023-07-21 | [
[
"Parsapoor",
"Mahboobeh",
"",
"Mah Parsa"
],
[
"Ghodrati",
"Hamed",
""
],
[
"Dentamaro",
"Vincenzo",
""
],
[
"Madan",
"Christopher R.",
""
],
[
"Lazarou",
"Ioulietta",
""
],
[
"Nikolopoulos",
"Spiros",
""
],
[
"Kompatsiaris",
"Ioannis",
""
]
] | Current progress in the artificial intelligence domain has led to the development of various types of AI-powered dementia assessments, which can be employed to identify patients at the early stage of dementia. It can revolutionize the dementia care settings. It is essential that the medical community be aware of various AI assessments and choose them considering their degrees of validity, efficiency, practicality, reliability, and accuracy concerning the early identification of patients with dementia (PwD). On the other hand, AI developers should be informed about various non-AI assessments as well as recently developed AI assessments. Thus, this paper, which can be readable by both clinicians and AI engineers, fills the gap in the literature in explaining the existing solutions for the recognition of dementia to clinicians, as well as the techniques used and the most widespread dementia datasets to AI engineers. It follows a review of papers on AI and non-AI assessments for dementia to provide valuable information about various dementia assessments for both the AI and medical communities. The discussion and conclusion highlight the most prominent research directions and the maturity of existing solutions. |
1708.07768 | Monique Tirion | Monique M. Tirion and Daniel ben-Avraham | PDB-NMA of a Protein Homodimer Reproduces Distinct Experimental Motility
Asymmetry | null | Physical Biology 15 (2018) 026004 | 10.1088/1478-3975/aaa277 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have extended our analytically derived PDB-NMA formulation, ATMAN [1], to
include protein dimers using mixed internal and Cartesian coordinates. A test
case on a 1.3\AA\ resolution model of a small homodimer, ActVA-ORF6, consisting
of two 112-residue subunits identically folded in a compact 50\AA\ sphere,
reproduces the distinct experimental Debye-Waller motility asymmetry for the
two chains, demonstrating that structure sensitively selects vibrational
signatures. The vibrational analysis of this PDB entry, together with
biochemical and crystallographic data, demonstrates the cooperative nature of
the dimeric interaction of the two subunits and suggests a mechanical model for
subunit interconversion during the catalytic cycle.
| [
{
"created": "Fri, 25 Aug 2017 15:07:02 GMT",
"version": "v1"
}
] | 2018-06-29 | [
[
"Tirion",
"Monique M.",
""
],
[
"ben-Avraham",
"Daniel",
""
]
] | We have extended our analytically derived PDB-NMA formulation, ATMAN [1], to include protein dimers using mixed internal and Cartesian coordinates. A test case on a 1.3\AA\ resolution model of a small homodimer, ActVA-ORF6, consisting of two 112-residue subunits identically folded in a compact 50\AA\ sphere, reproduces the distinct experimental Debye-Waller motility asymmetry for the two chains, demonstrating that structure sensitively selects vibrational signatures. The vibrational analysis of this PDB entry, together with biochemical and crystallographic data, demonstrates the cooperative nature of the dimeric interaction of the two subunits and suggests a mechanical model for subunit interconversion during the catalytic cycle. |
2403.12984 | Azmine Toushik Wasi | Azmine Toushik Wasi and \v{S}erbetar Karlo and Raima Islam and Taki
Hasan Rafi and Dong-Kyu Chae | When SMILES have Language: Drug Classification using Text Classification
Methods on Drug SMILES Strings | 7 pages, 2 figures, 5 tables, Accepted (invited to present) to the
The Second Tiny Papers Track at ICLR 2024
(https://openreview.net/forum?id=VUYCyH8fCw) | The Second Tiny Papers Track at {ICLR} 2024, Tiny Papers @ {ICLR}
2024, Vienna Austria, May 11, 2024 | null | null | q-bio.BM cs.CL cs.IR cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Complex chemical structures, like drugs, are usually defined by SMILES
strings as a sequence of molecules and bonds. These SMILES strings are used in
different complex machine learning-based drug-related research and
representation works. Escaping from complex representation, in this work, we
pose a single question: What if we treat drug SMILES as conventional sentences
and engage in text classification for drug classification? Our experiments
affirm the possibility with very competitive scores. The study explores the
notion of viewing each atom and bond as sentence components, employing basic
NLP methods to categorize drug types, proving that complex problems can also be
solved with simpler perspectives. The data and code are available here:
https://github.com/azminewasi/Drug-Classification-NLP.
| [
{
"created": "Sun, 3 Mar 2024 11:09:32 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Mar 2024 21:51:03 GMT",
"version": "v2"
}
] | 2024-03-29 | [
[
"Wasi",
"Azmine Toushik",
""
],
[
"Karlo",
"Šerbetar",
""
],
[
"Islam",
"Raima",
""
],
[
"Rafi",
"Taki Hasan",
""
],
[
"Chae",
"Dong-Kyu",
""
]
] | Complex chemical structures, like drugs, are usually defined by SMILES strings as a sequence of molecules and bonds. These SMILES strings are used in different complex machine learning-based drug-related research and representation works. Escaping from complex representation, in this work, we pose a single question: What if we treat drug SMILES as conventional sentences and engage in text classification for drug classification? Our experiments affirm the possibility with very competitive scores. The study explores the notion of viewing each atom and bond as sentence components, employing basic NLP methods to categorize drug types, proving that complex problems can also be solved with simpler perspectives. The data and code are available here: https://github.com/azminewasi/Drug-Classification-NLP. |
0810.4179 | Yuriy Pershin | Yuriy V. Pershin, Steven La Fontaine and Massimiliano Di Ventra | Memristive model of amoeba's learning | null | Phys. Rev. E 80, 021926 (2009) | 10.1103/PhysRevE.80.021926 | null | q-bio.CB cond-mat.other | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, it was shown that the amoeba-like cell {\it Physarum polycephalum}
when exposed to a pattern of periodic environmental changes learns and adapts
its behavior in anticipation of the next stimulus to come. Here we show that
such behavior can be mapped into the response of a simple electronic circuit
consisting of an $LC$ contour and a memory-resistor (a memristor) to a train of
voltage pulses that mimic environment changes. We also identify a possible
biological origin of the memristive behavior in the cell. These biological
memory features are likely to occur in other unicellular as well as
multicellular organisms, albeit in different forms. Therefore, the above
memristive circuit model, which has learning properties, is useful to better
understand the origins of primitive intelligence.
| [
{
"created": "Wed, 22 Oct 2008 23:31:11 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Oct 2008 19:13:36 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Jul 2009 02:27:38 GMT",
"version": "v3"
}
] | 2009-11-21 | [
[
"Pershin",
"Yuriy V.",
""
],
[
"La Fontaine",
"Steven",
""
],
[
"Di Ventra",
"Massimiliano",
""
]
] | Recently, it was shown that the amoeba-like cell {\it Physarum polycephalum} when exposed to a pattern of periodic environmental changes learns and adapts its behavior in anticipation of the next stimulus to come. Here we show that such behavior can be mapped into the response of a simple electronic circuit consisting of an $LC$ contour and a memory-resistor (a memristor) to a train of voltage pulses that mimic environment changes. We also identify a possible biological origin of the memristive behavior in the cell. These biological memory features are likely to occur in other unicellular as well as multicellular organisms, albeit in different forms. Therefore, the above memristive circuit model, which has learning properties, is useful to better understand the origins of primitive intelligence. |
2005.00608 | Jorge De Heuvel | Jorge de Heuvel (1), Jens Wilting (1), Moritz Becker (1 and 2), Viola
Priesemann (1), Johannes Zierenberg (1) ((1) Max Planck Institute for
Dynamics and Self-Organization, G\"ottingen, Germany, (2) Department of
Computational Neuroscience, Third Institute of Physics - Biophysics,
Georg-August-University, G\"ottingen, Germany) | Characterizing spreading dynamics of subsampled systems with
non-stationary external input | null | Phys. Rev. E 102, 040301 (2020) | 10.1103/PhysRevE.102.040301 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many systems with propagation dynamics, such as spike propagation in neural
networks and spreading of infectious diseases, can be approximated by
autoregressive models. The estimation of model parameters can be complicated by
the experimental limitation that one observes only a fraction of the system
(subsampling) and potentially time-dependent parameters, leading to incorrect
estimates. We show analytically how to overcome the subsampling bias when
estimating the propagation rate for systems with certain non-stationary
external input. This approach is readily applicable to trial-based experimental
setups and seasonal fluctuations, as demonstrated on spike recordings from
monkey prefrontal cortex and spreading of norovirus and measles.
| [
{
"created": "Fri, 24 Apr 2020 12:29:55 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"de Heuvel",
"Jorge",
"",
"1 and 2"
],
[
"Wilting",
"Jens",
"",
"1 and 2"
],
[
"Becker",
"Moritz",
"",
"1 and 2"
],
[
"Priesemann",
"Viola",
""
],
[
"Zierenberg",
"Johannes",
""
]
] | Many systems with propagation dynamics, such as spike propagation in neural networks and spreading of infectious diseases, can be approximated by autoregressive models. The estimation of model parameters can be complicated by the experimental limitation that one observes only a fraction of the system (subsampling) and potentially time-dependent parameters, leading to incorrect estimates. We show analytically how to overcome the subsampling bias when estimating the propagation rate for systems with certain non-stationary external input. This approach is readily applicable to trial-based experimental setups and seasonal fluctuations, as demonstrated on spike recordings from monkey prefrontal cortex and spreading of norovirus and measles. |
2304.01049 | Mandana Mirbakhsh | Mandana Mirbakhsh, Zahra Zahed, Sepideh Mashayekhi, Monire Jafari | Investigation of In Vitro Apocarotenoid Expression in Perianth of
Saffron (Crocus sativus L.) Under Different Soil EC | null | null | 10.25047/agriprima.v7i1.508 | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Crocus sativus is a triploid sterile plant with red stigmas belonging to the
family of Iridaceae, and sub-family Crocoideae. Crocin, picrocrocin, and
safranal are three major carotenoid derivatives that are responsible for the
color, taste, and specific aroma of Crocus. Saffron flowers are harvested
manually and used as spice, dye, or medicinal applications. The natural
propagation rate of most geophytes including saffron is relatively low. An in
vitro multiplication technique like micropropagation has been used for the
propagation of saffron. To understand the efficiency of this alternative and
study the molecular basis of apocarotenoid biosynthesis/accumulation, the
RT-PCR method was performed on perianth explants that were cultured on MS
medium to observe the level of expression of zeaxanthin cleavage dioxygenase
(CsZCD) gene during stigma development, and also the impact of soil EC on its
expression. The present study was conducted at Plant molecular and physiology
Lab, Alzahra University, Tehran, Iran during 2011-2013. Stigma-like structures
(SLSs) on calli were collected from immature perianth explants from floral buds
of corms that were collected from Ghaen city, and compared to (Torbat-e
Haidariye, Mardabad, and Shahroud cities) for investigating the impact of
different soil EC on CsZCD expression. The results indicated that the CsZCD
gene was highly expressed in fully developed red SLSs in perianth of cultured
samples of Shahroud with the highest salinity. In this research, a close
relationship between soil EC and second metabolites regulation is studied.
Overall, these results will pave the way for understanding the molecular basis
of apocarotenoid biosynthesis and other aspects of stigma development in C.
sativus.
| [
{
"created": "Mon, 3 Apr 2023 14:51:08 GMT",
"version": "v1"
}
] | 2023-04-04 | [
[
"Mirbakhsh",
"Mandana",
""
],
[
"Zahed",
"Zahra",
""
],
[
"Mashayekhi",
"Sepideh",
""
],
[
"Jafari",
"Monire",
""
]
] | Crocus sativus is a triploid sterile plant with red stigmas belonging to the family of Iridaceae, and sub-family Crocoideae. Crocin, picrocrocin, and safranal are three major carotenoid derivatives that are responsible for the color, taste, and specific aroma of Crocus. Saffron flowers are harvested manually and used as spice, dye, or medicinal applications. The natural propagation rate of most geophytes including saffron is relatively low. An in vitro multiplication technique like micropropagation has been used for the propagation of saffron. To understand the efficiency of this alternative and study the molecular basis of apocarotenoid biosynthesis/accumulation, the RT-PCR method was performed on perianth explants that were cultured on MS medium to observe the level of expression of zeaxanthin cleavage dioxygenase (CsZCD) gene during stigma development, and also the impact of soil EC on its expression. The present study was conducted at Plant molecular and physiology Lab, Alzahra University, Tehran, Iran during 2011-2013. Stigma-like structures (SLSs) on calli were collected from immature perianth explants from floral buds of corms that were collected from Ghaen city, and compared to (Torbat-e Haidariye, Mardabad, and Shahroud cities) for investigating the impact of different soil EC on CsZCD expression. The results indicated that the CsZCD gene was highly expressed in fully developed red SLSs in perianth of cultured samples of Shahroud with the highest salinity. In this research, a close relationship between soil EC and second metabolites regulation is studied. Overall, these results will pave the way for understanding the molecular basis of apocarotenoid biosynthesis and other aspects of stigma development in C. sativus. |
q-bio/0506023 | Krishnakumar Garikipati | K. Garikipati, J. E. Olberding, E. M. Arruda, K. Grosh, H. Narayanan,
S. Calve | Biological remodelling: Stationary energy, configurational change,
internal variables and dissipation | 24 pages, 4 figures. Replaced version has corrections to typos in
equations, and the corresponding correct plot of the solution--all in Section
3 | null | 10.1016/j.jmps.2005.11.011 | null | q-bio.TO | null | Remodelling is defined as an evolution of microstructure or variations in the
configuration of the underlying manifold. The manner in which a biological
tissue and its subsystems remodel their structure is treated in a continuum
mechanical setting. While some examples of remodelling are conveniently
modelled as evolution of the reference configuration (Case I), others are more
suited to an internal variable description (Case II). In this paper we explore
the applicability of stationary energy states to remodelled systems. A
variational treatment is introduced by assuming that stationary energy states
are attained by changes in microstructure via one of the two mechanisms--Cases
I and II. An example is presented to illustrate each case. The example
illustrating Case II is further studied in the context of the thermodynamic
dissipation inequality.
| [
{
"created": "Thu, 16 Jun 2005 16:59:06 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Jul 2005 18:31:01 GMT",
"version": "v2"
}
] | 2009-11-11 | [
[
"Garikipati",
"K.",
""
],
[
"Olberding",
"J. E.",
""
],
[
"Arruda",
"E. M.",
""
],
[
"Grosh",
"K.",
""
],
[
"Narayanan",
"H.",
""
],
[
"Calve",
"S.",
""
]
] | Remodelling is defined as an evolution of microstructure or variations in the configuration of the underlying manifold. The manner in which a biological tissue and its subsystems remodel their structure is treated in a continuum mechanical setting. While some examples of remodelling are conveniently modelled as evolution of the reference configuration (Case I), others are more suited to an internal variable description (Case II). In this paper we explore the applicability of stationary energy states to remodelled systems. A variational treatment is introduced by assuming that stationary energy states are attained by changes in microstructure via one of the two mechanisms--Cases I and II. An example is presented to illustrate each case. The example illustrating Case II is further studied in the context of the thermodynamic dissipation inequality. |
1709.02325 | Michael Vaiana | Michael Vaiana and Sarah Muldoon | Multilayer Brain Networks | null | null | 10.1007/s00332-017-9436-8 | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of neuroscience is facing an unprecedented expanse in the volume
and diversity of available data. Traditionally, network models have provided
key insights into the structure and function of the brain. With the advent of
big data in neuroscience, both more sophisticated models capable of
characterizing the increasing complexity of the data and novel methods of
quantitative analysis are needed. Recently multilayer networks, a mathematical
extension of traditional networks, have gained increasing popularity in
neuroscience due to their ability to capture the full information of
multi-modal, multi-scale, spatiotemporal data sets. Here, we review multilayer
networks and their applications in neuroscience, showing how incorporating the
multilayer framework into network neuroscience analysis has uncovered
previously hidden features of brain networks. We specifically highlight the use
of multilayer networks to model disease, structure-function relationships,
network evolution, and link multi-scale data. Finally, we close with a
discussion of promising new directions of multilayer network neuroscience
research and propose a modified definition of multilayer networks designed to
unite and clarify the use of the multilayer formalism in describing real-world
systems.
| [
{
"created": "Thu, 7 Sep 2017 16:03:48 GMT",
"version": "v1"
}
] | 2018-02-14 | [
[
"Vaiana",
"Michael",
""
],
[
"Muldoon",
"Sarah",
""
]
] ] | The field of neuroscience is facing an unprecedented expanse in the volume and diversity of available data. Traditionally, network models have provided key insights into the structure and function of the brain. With the advent of big data in neuroscience, both more sophisticated models capable of characterizing the increasing complexity of the data and novel methods of quantitative analysis are needed. Recently multilayer networks, a mathematical extension of traditional networks, have gained increasing popularity in neuroscience due to their ability to capture the full information of multi-modal, multi-scale, spatiotemporal data sets. Here, we review multilayer networks and their applications in neuroscience, showing how incorporating the multilayer framework into network neuroscience analysis has uncovered previously hidden features of brain networks. We specifically highlight the use of multilayer networks to model disease, structure-function relationships, network evolution, and link multi-scale data. Finally, we close with a discussion of promising new directions of multilayer network neuroscience research and propose a modified definition of multilayer networks designed to unite and clarify the use of the multilayer formalism in describing real-world systems. |
2309.10008 | Hanlin Zhang | Hanlin Zhang, Wenzheng Cheng | DeepHEN: quantitative prediction essential lncRNA genes and rethinking
essentialities of lncRNA genes | null | null | null | null | q-bio.MN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene essentiality refers to the degree to which a gene is necessary for the
survival and reproductive efficacy of a living organism. Although the
essentiality of non-coding genes has been documented, there are still aspects
of non-coding genes' essentiality that are unknown to us. For example, we do
not know the contribution of sequence features and network spatial features to
essentiality. As a consequence, in this work, we propose DeepHEN to
answer the above question. By building a new lncRNA-protein-protein network
and utilizing both representation learning and a graph neural network, we
successfully build our DeepHEN models, which can predict the essentiality of
lncRNA genes. Compared to other methods for predicting the essentiality of
lncRNA genes, our DeepHEN model not only tells whether sequence features or
network spatial features have a greater influence on essentiality but also
addresses the overfitting issue of those methods caused by the low number of
essential lncRNA genes, as evidenced by the results of enrichment analysis.
| [
{
"created": "Mon, 18 Sep 2023 02:46:33 GMT",
"version": "v1"
}
] | 2023-09-20 | [
[
"Zhang",
"Hanlin",
""
],
[
"Cheng",
"Wenzheng",
""
]
] ] | Gene essentiality refers to the degree to which a gene is necessary for the survival and reproductive efficacy of a living organism. Although the essentiality of non-coding genes has been documented, there are still aspects of non-coding genes' essentiality that are unknown to us. For example, we do not know the contribution of sequence features and network spatial features to essentiality. As a consequence, in this work, we propose DeepHEN to answer the above question. By building a new lncRNA-protein-protein network and utilizing both representation learning and a graph neural network, we successfully build our DeepHEN models, which can predict the essentiality of lncRNA genes. Compared to other methods for predicting the essentiality of lncRNA genes, our DeepHEN model not only tells whether sequence features or network spatial features have a greater influence on essentiality but also addresses the overfitting issue of those methods caused by the low number of essential lncRNA genes, as evidenced by the results of enrichment analysis. |
q-bio/0701014 | Ramin Golestanian | Farshid Mohammad-Rafiee and Ramin Golestanian | Elastic Correlations in Nucleosomal DNA Structure | null | Phys. Rev. Lett. 94, 238102 (2005) | 10.1103/PhysRevLett.94.238102 | null | q-bio.BM cond-mat.soft | null | The structure of DNA in the nucleosome core particle is studied using an
elastic model that incorporates anisotropy in the bending energetics and
twist-bend coupling. Using the experimentally determined structure of
nucleosomal DNA [T.J. Richmond and C.A. Davey, Nature {\bf 423}, 145 (2003)],
it is shown that elastic correlations exist between twist, roll, tilt, and
stretching of DNA, as well as the distance between phosphate groups. The
twist-bend coupling term is shown to be able to capture these correlations to a
large extent, and a fit to the experimental data yields a new estimate of G=25
nm for the value of the twist-bend coupling constant.
| [
{
"created": "Tue, 9 Jan 2007 12:06:17 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Mohammad-Rafiee",
"Farshid",
""
],
[
"Golestanian",
"Ramin",
""
]
] | The structure of DNA in the nucleosome core particle is studied using an elastic model that incorporates anisotropy in the bending energetics and twist-bend coupling. Using the experimentally determined structure of nucleosomal DNA [T.J. Richmond and C.A. Davey, Nature {\bf 423}, 145 (2003)], it is shown that elastic correlations exist between twist, roll, tilt, and stretching of DNA, as well as the distance between phosphate groups. The twist-bend coupling term is shown to be able to capture these correlations to a large extent, and a fit to the experimental data yields a new estimate of G=25 nm for the value of the twist-bend coupling constant. |
2210.09361 | Khanh Dao Duc | A. Tajmir Riahi, G. Woollard, F. Poitevin, A. Condon, K. Dao Duc | AlignOT: An optimal transport based algorithm for fast 3D alignment with
applications to cryogenic electron microscopy density maps | null | null | null | null | q-bio.BM math.OC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Aligning electron density maps from Cryogenic electron microscopy (cryo-EM)
is a first key step for studying multiple conformations of a biomolecule. As
this step remains costly and challenging, with standard alignment tools being
potentially stuck in local minima, we propose here a new procedure, called
AlignOT, which relies on the use of computational optimal transport (OT) to
align EM maps in 3D space. By embedding a fast estimation of OT maps within a
stochastic gradient descent algorithm, our method searches for a rotation that
minimizes the Wasserstein distance between two maps, represented as point
clouds. We quantify the impact of various parameters on the precision and
accuracy of the alignment, and show that AlignOT can outperform the standard
local alignment methods, with an increased range of rotation angles leading to
proper alignment. We further benchmark AlignOT on various pairs of experimental
maps, which account for different types of conformational heterogeneities and
geometric properties. As our experiments show good performance, we anticipate
that our method can be broadly applied to align 3D EM maps.
| [
{
"created": "Mon, 17 Oct 2022 18:56:52 GMT",
"version": "v1"
}
] | 2022-10-19 | [
[
"Riahi",
"A. Tajmir",
""
],
[
"Woollard",
"G.",
""
],
[
"Poitevin",
"F.",
""
],
[
"Condon",
"A.",
""
],
[
"Duc",
"K. Dao",
""
]
] | Aligning electron density maps from Cryogenic electron microscopy (cryo-EM) is a first key step for studying multiple conformations of a biomolecule. As this step remains costly and challenging, with standard alignment tools being potentially stuck in local minima, we propose here a new procedure, called AlignOT, which relies on the use of computational optimal transport (OT) to align EM maps in 3D space. By embedding a fast estimation of OT maps within a stochastic gradient descent algorithm, our method searches for a rotation that minimizes the Wasserstein distance between two maps, represented as point clouds. We quantify the impact of various parameters on the precision and accuracy of the alignment, and show that AlignOT can outperform the standard local alignment methods, with an increased range of rotation angles leading to proper alignment. We further benchmark AlignOT on various pairs of experimental maps, which account for different types of conformational heterogeneities and geometric properties. As our experiments show good performance, we anticipate that our method can be broadly applied to align 3D EM maps. |
2407.08974 | Xue Gong | Joshua Zhi En Tan, JunJie Wee, Xue Gong, Kelin Xia | Topology-enhanced machine learning model (Top-ML) for anticancer peptide
prediction | null | null | null | null | q-bio.QM cs.LG math.GN q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Recently, therapeutic peptides have demonstrated great promise for cancer
treatment. To explore powerful anticancer peptides, artificial intelligence
(AI)-based approaches have been developed to systematically screen potential
candidates. However, the lack of efficient featurization of peptides has become
a bottleneck for these machine-learning models. In this paper, we propose a
topology-enhanced machine learning model (Top-ML) for anticancer peptide
prediction. Our Top-ML employs peptide topological features derived from its
sequence "connection" information characterized by vector and spectral
descriptors. Our Top-ML model has been validated on two widely used AntiCP 2.0
benchmark datasets and has achieved state-of-the-art performance. Our results
highlight the potential of leveraging novel topology-based featurization to
accelerate the identification of anticancer peptides.
| [
{
"created": "Fri, 12 Jul 2024 04:04:54 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Tan",
"Joshua Zhi En",
""
],
[
"Wee",
"JunJie",
""
],
[
"Gong",
"Xue",
""
],
[
"Xia",
"Kelin",
""
]
] | Recently, therapeutic peptides have demonstrated great promise for cancer treatment. To explore powerful anticancer peptides, artificial intelligence (AI)-based approaches have been developed to systematically screen potential candidates. However, the lack of efficient featurization of peptides has become a bottleneck for these machine-learning models. In this paper, we propose a topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction. Our Top-ML employs peptide topological features derived from its sequence "connection" information characterized by vector and spectral descriptors. Our Top-ML model has been validated on two widely used AntiCP 2.0 benchmark datasets and has achieved state-of-the-art performance. Our results highlight the potential of leveraging novel topology-based featurization to accelerate the identification of anticancer peptides. |
2205.02169 | Masahito Ohue | Kairi Furui, Masahito Ohue | Compound virtual screening by learning-to-rank with gradient boosting
decision tree and enrichment-based cumulative gain | {\copyright} 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works | In Proceedings of The 19th IEEE International Conference on
Computational Intelligence in Bioinformatics and Computational Biology (CIBCB
2022) | 10.1109/CIBCB55180.2022.9863032 | null | q-bio.BM cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning-to-rank, a machine learning technique widely used in information
retrieval, has recently been applied to the problem of ligand-based virtual
screening, to accelerate the early stages of new drug development. Ranking
prediction models learn based on ordinal relationships, making them suitable
for integrating assay data from various environments. Existing studies of rank
prediction in compound screening have generally used a learning-to-rank method
called RankSVM. However, they have not been compared with or validated against
the gradient boosting decision tree (GBDT)-based learning-to-rank methods that
have gained popularity recently. Furthermore, although the ranking metric
called Normalized Discounted Cumulative Gain (NDCG) is widely used in
information retrieval, it only determines whether the predictions are better
than those of other models. In other words, NDCG is incapable of recognizing
when a prediction model produces worse than random results. Nevertheless, NDCG
is still used in the performance evaluation of compound screening using
learning-to-rank. This study used the GBDT model with ranking loss functions,
called lambdarank and lambdaloss, for ligand-based virtual screening; results
were compared with existing RankSVM methods and GBDT models using regression.
We also proposed a new ranking metric, Normalized Enrichment Discounted
Cumulative Gain (NEDCG), which aims to properly evaluate the goodness of
ranking predictions. Results showed that the GBDT model with learning-to-rank
outperformed existing regression methods using GBDT and RankSVM on diverse
datasets. Moreover, NEDCG showed that predictions by regression were comparable
to random predictions in multi-assay, multi-family datasets, demonstrating its
usefulness for a more direct assessment of compound screening performance.
| [
{
"created": "Wed, 4 May 2022 16:36:24 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Aug 2022 16:25:41 GMT",
"version": "v2"
}
] | 2022-08-30 | [
[
"Furui",
"Kairi",
""
],
[
"Ohue",
"Masahito",
""
]
] | Learning-to-rank, a machine learning technique widely used in information retrieval, has recently been applied to the problem of ligand-based virtual screening, to accelerate the early stages of new drug development. Ranking prediction models learn based on ordinal relationships, making them suitable for integrating assay data from various environments. Existing studies of rank prediction in compound screening have generally used a learning-to-rank method called RankSVM. However, they have not been compared with or validated against the gradient boosting decision tree (GBDT)-based learning-to-rank methods that have gained popularity recently. Furthermore, although the ranking metric called Normalized Discounted Cumulative Gain (NDCG) is widely used in information retrieval, it only determines whether the predictions are better than those of other models. In other words, NDCG is incapable of recognizing when a prediction model produces worse than random results. Nevertheless, NDCG is still used in the performance evaluation of compound screening using learning-to-rank. This study used the GBDT model with ranking loss functions, called lambdarank and lambdaloss, for ligand-based virtual screening; results were compared with existing RankSVM methods and GBDT models using regression. We also proposed a new ranking metric, Normalized Enrichment Discounted Cumulative Gain (NEDCG), which aims to properly evaluate the goodness of ranking predictions. Results showed that the GBDT model with learning-to-rank outperformed existing regression methods using GBDT and RankSVM on diverse datasets. Moreover, NEDCG showed that predictions by regression were comparable to random predictions in multi-assay, multi-family datasets, demonstrating its usefulness for a more direct assessment of compound screening performance. |
2407.07915 | Beth Stokes | Beth M. Stokes, Tim Rogers, Richard James | Speed and shape of population fronts with density-dependent diffusion | null | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | We investigate travelling wave solutions in reaction-diffusion models of
animal range expansion in the case that population diffusion is
density-dependent. We find that the speed of the selected wave depends
critically on the strength of diffusion at low density. For sufficiently large
low-density diffusion, the wave propagates at a speed predicted by a simple
linear analysis. For small or zero low-density diffusion, the linear analysis
is not sufficient, but a variational approach yields exact or approximate
expressions for the speed and shape of population fronts.
| [
{
"created": "Fri, 5 Jul 2024 16:07:58 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Stokes",
"Beth M.",
""
],
[
"Rogers",
"Tim",
""
],
[
"James",
"Richard",
""
]
] | We investigate travelling wave solutions in reaction-diffusion models of animal range expansion in the case that population diffusion is density-dependent. We find that the speed of the selected wave depends critically on the strength of diffusion at low density. For sufficiently large low-density diffusion, the wave propagates at a speed predicted by a simple linear analysis. For small or zero low-density diffusion, the linear analysis is not sufficient, but a variational approach yields exact or approximate expressions for the speed and shape of population fronts. |
q-bio/0412006 | Ciro Minichini | C. Minichini and A. Sciarrino | Mutation model for oligonucleotides fitting a Yule distribution | 13 pages, 4 figures | null | null | DSF-41/2004 | q-bio.BM cond-mat.other q-bio.OT | null | A spin chain, describing a nucleotide sequence, is identified by the labels
of a vector state of an irreducible representation of U_q->0(sl2). A master
equation for the distribution function is written, where the intensity of the
one-spin flip is assumed to depend on the variation of the labels. The
numerically computed equilibrium distribution is nicely fitted by a Yule
distribution, which is the observed distribution of ranked short-oligonucleotide
frequencies in DNA.
| [
{
"created": "Fri, 3 Dec 2004 12:12:49 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Minichini",
"C.",
""
],
[
"Sciarrino",
"A.",
""
]
] ] | A spin chain, describing a nucleotide sequence, is identified by the labels of a vector state of an irreducible representation of U_q->0(sl2). A master equation for the distribution function is written, where the intensity of the one-spin flip is assumed to depend on the variation of the labels. The numerically computed equilibrium distribution is nicely fitted by a Yule distribution, which is the observed distribution of ranked short-oligonucleotide frequencies in DNA. |
0708.3825 | Ana Nunes | M. Sim\~oes, M. M. Telo da Gama and A. Nunes | Stochastic Fluctuations in Epidemics on Networks | 29 pages, 8 figures | null | null | null | q-bio.PE | null | The effects of demographic stochasticity in the long term behaviour of
endemic infectious diseases have long been considered a necessary
addition to an underlying deterministic theory. The latter would explain the
regular behaviour of recurrent epidemics, and the former the superimposed noise
of observed incidence patterns. Recently, a stochastic theory based on a
mechanism of resonance with internal noise has shifted the role of
stochasticity closer to the center stage, by showing that the major dynamic
patterns found in the incidence data can be explained as resonant fluctuations,
whose behaviour is largely independent of the amplitude of seasonal forcing,
and by contrast very sensitive to the basic epidemiological parameters. Here we
elaborate on that approach, by adding an ingredient which is missing in
standard epidemic models, the 'mixing network' through which infection may
propagate. We find that spatial correlations have a major effect in the
enhancement of the amplitude and the coherence of the resonant stochastic
fluctuations, providing the ordered patterns of recurrent epidemics, whose
period may differ significantly from that of the small oscillations around the
deterministic equilibrium. We also show that the inclusion of a more realistic,
time correlated, recovery profile instead of exponentially distributed
infectious periods may, even in the random-mixing limit, contribute to the same
effect.
| [
{
"created": "Tue, 28 Aug 2007 18:05:54 GMT",
"version": "v1"
}
] | 2007-08-29 | [
[
"Simões",
"M.",
""
],
[
"da Gama",
"M. M. Telo",
""
],
[
"Nunes",
"A.",
""
]
] ] | The effects of demographic stochasticity in the long term behaviour of endemic infectious diseases have long been considered a necessary addition to an underlying deterministic theory. The latter would explain the regular behaviour of recurrent epidemics, and the former the superimposed noise of observed incidence patterns. Recently, a stochastic theory based on a mechanism of resonance with internal noise has shifted the role of stochasticity closer to the center stage, by showing that the major dynamic patterns found in the incidence data can be explained as resonant fluctuations, whose behaviour is largely independent of the amplitude of seasonal forcing, and by contrast very sensitive to the basic epidemiological parameters. Here we elaborate on that approach, by adding an ingredient which is missing in standard epidemic models, the 'mixing network' through which infection may propagate. We find that spatial correlations have a major effect in the enhancement of the amplitude and the coherence of the resonant stochastic fluctuations, providing the ordered patterns of recurrent epidemics, whose period may differ significantly from that of the small oscillations around the deterministic equilibrium. We also show that the inclusion of a more realistic, time correlated, recovery profile instead of exponentially distributed infectious periods may, even in the random-mixing limit, contribute to the same effect. |
0810.2118 | Margaret Cheung | Dirar Homouz, Loren Stagg, Pernilla Wittung-Stafshede, and Margaret S.
Cheung | Macromolecular crowding modulates folding mechanism of alpha/beta
protein apoflavodoxin | to appear in Biophysical Journal (2009). to appear in Biophysical
Journal (2009) | null | 10.1016/j.bpj.2008.10.014 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein dynamics in cells may be different from that in dilute solutions in
vitro since the environment in cells is highly concentrated with other
macromolecules. This volume exclusion due to macromolecular crowding is
predicted to affect both equilibrium and kinetic processes involving protein
conformational changes. To quantify macromolecular crowding effects on protein
folding mechanisms, here we have investigated the folding energy landscape of
an alpha/beta protein, apoflavodoxin, in the presence of inert macromolecular
crowding agents using in silico and in vitro approaches. By coarse-grained
molecular simulations and topology-based potential interactions, we probed the
effects of increased volume fraction of crowding agents (phi_c) as well as of
crowding agent geometry (sphere or spherocylinder) at high phi_c. Parallel
kinetic folding experiments with purified Desulfovibrio desulfuricans
apoflavodoxin in vitro were performed in the presence of Ficoll (sphere) and
Dextran (spherocylinder) synthetic crowding agents. In conclusion, we have
identified in silico crowding conditions that best enhance protein stability
and discovered that upon manipulation of the crowding conditions, folding
routes experiencing topological frustrations can be either enhanced or
relieved. The test-tube experiments confirmed that apoflavodoxin's
time-resolved folding path is modulated by crowding agent geometry. We propose
that macromolecular crowding effects may be a tool for manipulation of protein
folding and function in living cells.
| [
{
"created": "Sun, 12 Oct 2008 17:04:54 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Homouz",
"Dirar",
""
],
[
"Stagg",
"Loren",
""
],
[
"Wittung-Stafshede",
"Pernilla",
""
],
[
"Cheung",
"Margaret S.",
""
]
] ] | Protein dynamics in cells may be different from that in dilute solutions in vitro since the environment in cells is highly concentrated with other macromolecules. This volume exclusion due to macromolecular crowding is predicted to affect both equilibrium and kinetic processes involving protein conformational changes. To quantify macromolecular crowding effects on protein folding mechanisms, here we have investigated the folding energy landscape of an alpha/beta protein, apoflavodoxin, in the presence of inert macromolecular crowding agents using in silico and in vitro approaches. By coarse-grained molecular simulations and topology-based potential interactions, we probed the effects of increased volume fraction of crowding agents (phi_c) as well as of crowding agent geometry (sphere or spherocylinder) at high phi_c. Parallel kinetic folding experiments with purified Desulfovibrio desulfuricans apoflavodoxin in vitro were performed in the presence of Ficoll (sphere) and Dextran (spherocylinder) synthetic crowding agents. In conclusion, we have identified in silico crowding conditions that best enhance protein stability and discovered that upon manipulation of the crowding conditions, folding routes experiencing topological frustrations can be either enhanced or relieved. The test-tube experiments confirmed that apoflavodoxin's time-resolved folding path is modulated by crowding agent geometry. We propose that macromolecular crowding effects may be a tool for manipulation of protein folding and function in living cells. |