id (string, 9–13 chars) | submitter (string, 4–48 chars) | authors (string, 4–9.62k chars) | title (string, 4–343 chars) | comments (string, 2–480 chars, nullable) | journal-ref (string, 9–309 chars, nullable) | doi (string, 12–138 chars, nullable) | report-no (string, 277 classes) | categories (string, 8–87 chars) | license (string, 9 classes) | orig_abstract (string, 27–3.76k chars) | versions (list, 1–15 items) | update_date (string, 10 chars) | authors_parsed (list, 1–147 items) | abstract (string, 24–3.75k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1406.7123 | Ivan Gregor | I. Gregor, J. Dr\"oge, M. Schirmer, C. Quince, A. C. McHardy | PhyloPythiaS+: A self-training method for the rapid reconstruction of
low-ranking taxonomic bins from metagenomes | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Metagenomics is an approach for characterizing environmental microbial
communities in situ, it allows their functional and taxonomic characterization
and to recover sequences from uncultured taxa. For communities of up to medium
diversity, e.g. excluding environments such as soil, this is often achieved by
a combination of sequence assembly and binning, where sequences are grouped
into 'bins' representing taxa of the underlying microbial community from which
they originate. Assignment to low-ranking taxonomic bins is an important
challenge for binning methods as is scalability to Gb-sized datasets generated
with deep sequencing techniques. One of the best available methods for the
recovery of species bins from an individual metagenome sample is the
expert-trained PhyloPythiaS package, where a human expert decides on the taxa
to incorporate in a composition-based taxonomic metagenome classifier and
identifies the 'training' sequences using marker genes directly from the
sample. Due to the manual effort involved, this approach does not scale to
multiple metagenome samples and requires substantial expertise, which
researchers who are new to the area may not have. With these challenges in
mind, we have developed PhyloPythiaS+, a successor to our previously described
method PhyloPythia(S). The newly developed + component performs the work
previously done by the human expert. PhyloPythiaS+ also includes a new k-mer
counting algorithm, which accelerated k-mer counting 100-fold and reduced the
overall execution time of the software by a factor of three. Our software
allows to analyze Gb-sized metagenomes with inexpensive hardware, and to
recover species or genera-level bins with low error rates in a fully automated
fashion.
| [
{
"created": "Fri, 27 Jun 2014 09:32:48 GMT",
"version": "v1"
}
] | 2014-06-30 | [
[
"Gregor",
"I.",
""
],
[
"Dröge",
"J.",
""
],
[
"Schirmer",
"M.",
""
],
[
"Quince",
"C.",
""
],
[
"McHardy",
"A. C.",
""
]
] | Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. For communities of up to medium diversity, e.g. excluding environments such as soil, this is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community from which they originate. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for the recovery of species bins from an individual metagenome sample is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in a composition-based taxonomic metagenome classifier and identifies the 'training' sequences using marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area may not have. With these challenges in mind, we have developed PhyloPythiaS+, a successor to our previously described method PhyloPythia(S). The newly developed + component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated k-mer counting 100-fold and reduced the overall execution time of the software by a factor of three. Our software allows the analysis of Gb-sized metagenomes with inexpensive hardware and the recovery of species- or genus-level bins with low error rates in a fully automated fashion. |
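The PhyloPythiaS+ row above hinges on fast k-mer counting for composition-based classification. The paper's 100-fold-faster algorithm is not reproduced here; purely as a point of reference, a minimal Python k-mer counter (all names are illustrative, not from the package) looks like:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count overlapping k-mers in a DNA sequence (illustrative sketch only)."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# "ACGTACGT" yields 7 overlapping 2-mers: AC, CG, GT, TA, AC, CG, GT
profile = kmer_counts("ACGTACGT", 2)
```

Composition-based classifiers typically normalize such counts into frequency vectors before handing them to a model such as a structured-output SVM; the speedup claimed in the row concerns the counting step itself.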
1101.4984 | James F. Glazebrook PhD | James F. Glazebrook, Rodrick Wallace | Rate distortion coevolutionary dynamics and the flow nature of cognitive
epigenetic systems | 22 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We outline a model for a cognitive epigenetic system based on elements of the
Shannon theory of information and the statistical physics of the generalized
Onsager relations. Particular attention is paid to the concept of the rate
distortion function and from another direction as motivated by the
thermodynamics of computing, the fundamental homology with the free energy
density of a physical system. A unifying aspect of the dynamic framework
involves the concept of a groupoid and of a groupoid atlas. From a stochastic
differential equation we postulate a multidimensional Ito process for an
epigenetic system from which a stochastic flow may permeate through components
of this atlas.
| [
{
"created": "Wed, 26 Jan 2011 01:49:26 GMT",
"version": "v1"
}
] | 2011-01-27 | [
[
"Glazebrook",
"James F.",
""
],
[
"Wallace",
"Rodrick",
""
]
] | We outline a model for a cognitive epigenetic system based on elements of the Shannon theory of information and the statistical physics of the generalized Onsager relations. Particular attention is paid to the concept of the rate distortion function and, from another direction motivated by the thermodynamics of computing, to its fundamental homology with the free energy density of a physical system. A unifying aspect of the dynamic framework involves the concepts of a groupoid and of a groupoid atlas. From a stochastic differential equation we postulate a multidimensional Ito process for an epigenetic system from which a stochastic flow may permeate through components of this atlas. |
0705.2710 | Danielle Rojas-Rousse | Danielle Rojas-Rousse (IRBII), Karine Poitrineau, Cesar Basso | The potential of mass rearing of Monoksa dorsiplana (Pteromalidae) a
native gregarious ectoparasitoid of Pseudopachymeria spinipes (Bruchidae)in
South America | null | Biological Control 41 (30/04/2007) 348-353 | null | null | q-bio.PE | null | In Chile and Uruguay,the gregarious Pteromalidae (Monoksa dorsiplana) has
been discovered emerging from seeds of the persistent pods of Acacia caven
attacked by the univoltin bruchid Pseudopachymeria spinipes. We investigated
the potential for mass rearing of this gregarious ectoparasitoid on an
alternative bruchid host, Callosobruchus maculatus, to use it against the
bruchidae of native and cultured species of Leguminosea seeds in South America.
The mass rearing of M.dorsiplana was carried out in a population cage where the
density of egg-laying females per infested seed was increased from 1:1 on the
first day to 5:1 on the last (fifth) day. Under these experimental conditions
egg-clutch size per host increased, and at the same time the mortality of eggs
laid also increased. The density of egg-laying females influenced the sex ratio
which tended towards a balance of sons and daughters,in contrast to the sex
ratio of a single egg-laying female per host (1 son to 7 daughters). The mean
weight of adults emerging from a parasitized host was negatively correlated
with the egg-clutch size, i.e., as egg-clutch size increased, adult weight
decreased. All these results show that mass rearing of the gregarious
ectoparasitoid M.dorsiplana was possible under laboratory conditions on an
alternative bruchid host C.maculatus. As M.dorsiplana is a natural enemy of
larval and pupal stages of bruchidae, the next step was to investigate whether
the biological control of bruchid C.maculatus was possible in an experimental
structure of stored beans.
| [
{
"created": "Fri, 18 May 2007 14:29:40 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Rojas-Rousse",
"Danielle",
"",
"IRBII"
],
[
"Poitrineau",
"Karine",
""
],
[
"Basso",
"Cesar",
""
]
] | In Chile and Uruguay, the gregarious pteromalid Monoksa dorsiplana has been discovered emerging from seeds of the persistent pods of Acacia caven attacked by the univoltine bruchid Pseudopachymeria spinipes. We investigated the potential for mass rearing of this gregarious ectoparasitoid on an alternative bruchid host, Callosobruchus maculatus, in order to use it against the bruchids of native and cultivated species of Leguminosae seeds in South America. The mass rearing of M. dorsiplana was carried out in a population cage where the density of egg-laying females per infested seed was increased from 1:1 on the first day to 5:1 on the last (fifth) day. Under these experimental conditions egg-clutch size per host increased, and at the same time the mortality of the eggs laid also increased. The density of egg-laying females influenced the sex ratio, which tended towards a balance of sons and daughters, in contrast to the sex ratio of a single egg-laying female per host (1 son to 7 daughters). The mean weight of adults emerging from a parasitized host was negatively correlated with egg-clutch size, i.e., as egg-clutch size increased, adult weight decreased. All these results show that mass rearing of the gregarious ectoparasitoid M. dorsiplana is possible under laboratory conditions on the alternative bruchid host C. maculatus. As M. dorsiplana is a natural enemy of the larval and pupal stages of bruchids, the next step was to investigate whether biological control of the bruchid C. maculatus is possible in an experimental structure of stored beans. |
q-bio/0507035 | Leonard M. Sander | Thomas Callaghan, Evgeniy Khain, Leonard M. Sander, and Robert M. Ziff | A stochastic model for wound healing | 16 pages, 7 figures | null | 10.1007/s10955-006-9022-1 | null | q-bio.CB | null | We present a discrete stochastic model which represents many of the salient
features of the biological process of wound healing. The model describes fronts
of cells invading a wound. We have numerical results in one and two dimensions.
In one dimension we can give analytic results for the front speed as a power
series expansion in a parameter, p, that gives the relative size of
proliferation and diffusion processes for the invading cells. In two dimensions
the model becomes the Eden model for p near 1. In both one and two dimensions
for small p, front propagation for this model should approach that of the
Fisher-Kolmogorov equation. However, as in other cases, this discrete model
approaches Fisher-Kolmogorov behavior slowly.
| [
{
"created": "Fri, 22 Jul 2005 16:01:54 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Callaghan",
"Thomas",
""
],
[
"Khain",
"Evgeniy",
""
],
[
"Sander",
"Leonard M.",
""
],
[
"Ziff",
"Robert M.",
""
]
] | We present a discrete stochastic model which represents many of the salient features of the biological process of wound healing. The model describes fronts of cells invading a wound. We have numerical results in one and two dimensions. In one dimension we can give analytic results for the front speed as a power series expansion in a parameter, p, that gives the relative size of proliferation and diffusion processes for the invading cells. In two dimensions the model becomes the Eden model for p near 1. In both one and two dimensions for small p, front propagation for this model should approach that of the Fisher-Kolmogorov equation. However, as in other cases, this discrete model approaches Fisher-Kolmogorov behavior slowly. |
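The Callaghan et al. row above describes a discrete stochastic front of proliferating, diffusing cells whose speed depends on a parameter p. A deliberately stripped-down 1-D sketch (not the authors' model; the lattice size, update rule, and all names are assumptions) in which the front advances one site with probability p per step:

```python
import random

def front_position(p, steps, size=500, seed=0):
    """Toy 1-D invasion front: the rightmost occupied site fills its
    empty right neighbor with probability p at each time step."""
    rng = random.Random(seed)
    front = 0  # index of the rightmost occupied lattice site
    for _ in range(steps):
        if front + 1 < size and rng.random() < p:
            front += 1
    return front

# Mean front speed is ~p sites per step; p = 1 advances deterministically.
```

In the actual model the front speed is a nontrivial power series in p because diffusion and proliferation compete behind the front; the toy above only illustrates the stochastic-advance mechanism.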
1901.01560 | Luca Peliti | Luca Peliti | Evolution and Probability | 16 pages, 12 figures, a popular science talk | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Life forms exhibit such a degree of exquisite organization that it seems
impossible that they could have developed out of a process of trial and error,
as intimated by the theory of Darwinian evolution. In this general public paper
I discuss how differential reproduction rates work in producing an exceedingly
high degree of improbability, and the conceptual tools of the theory of
evolution help us to predict, to some degree, the course of evolution -- as it
is routinely done, e.g., in the process leading to the yearly influenza
vaccines.
| [
{
"created": "Sun, 6 Jan 2019 16:01:10 GMT",
"version": "v1"
}
] | 2019-01-08 | [
[
"Peliti",
"Luca",
""
]
] | Life forms exhibit such a degree of exquisite organization that it seems impossible that they could have developed out of a process of trial and error, as intimated by the theory of Darwinian evolution. In this general-public paper I discuss how differential reproduction rates work to produce an exceedingly high degree of improbability, and how the conceptual tools of the theory of evolution help us to predict, to some degree, the course of evolution -- as is routinely done, e.g., in the process leading to the yearly influenza vaccines. |
1809.08378 | Ryan Renslow | Sean M. Colby, Dennis G. Thomas, Jamie R. Nunez, Douglas J. Baxter,
Kurt R. Glaesemann, Joseph M. Brown, Meg A Pirrung, Niranjan Govind, Justin
G. Teeguarden, Thomas O. Metz, Ryan S. Renslow | ISiCLE: A molecular collision cross section calculation pipeline for
establishing large in silico reference libraries for compound identification | null | null | null | null | q-bio.BM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comprehensive and confident identifications of metabolites and other
chemicals in complex samples will revolutionize our understanding of the role
these chemically diverse molecules play in biological systems. Despite recent
advances, metabolomics studies still result in the detection of a
disproportionate number of features than cannot be confidently assigned to a
chemical structure. This inadequacy is driven by the single most significant
limitation in metabolomics: the reliance on reference libraries constructed by
analysis of authentic reference chemicals. To this end, we have developed the
in silico chemical library engine (ISiCLE), a high-performance
computing-friendly cheminformatics workflow for generating libraries of
chemical properties. In the instantiation described here, we predict probable
three-dimensional molecular conformers using chemical identifiers as input,
from which collision cross sections (CCS) are derived. The approach employs
state-of-the-art first-principles simulation, distinguished by use of molecular
dynamics, quantum chemistry, and ion mobility calculations to generate
structures and libraries, all without training data. Importantly, optimization
of ISiCLE included a refactoring of the popular MOBCAL code for
trajectory-based mobility calculations, improving its computational efficiency
by over two orders of magnitude. Calculated CCS values were validated against
1,983 experimentally-measured CCS values and compared to previously reported
CCS calculation approaches. An online database is introduced for sharing both
calculated and experimental CCS values (metabolomics.pnnl.gov), initially
including a CCS library with over 1 million entries. Finally, three successful
applications of molecule characterization using calculated CCS are described.
This work represents a promising method to address the limitations of small
molecule identification.
| [
{
"created": "Sat, 22 Sep 2018 03:46:56 GMT",
"version": "v1"
}
] | 2018-09-25 | [
[
"Colby",
"Sean M.",
""
],
[
"Thomas",
"Dennis G.",
""
],
[
"Nunez",
"Jamie R.",
""
],
[
"Baxter",
"Douglas J.",
""
],
[
"Glaesemann",
"Kurt R.",
""
],
[
"Brown",
"Joseph M.",
""
],
[
"Pirrung",
"Meg A",
""
],
[
"Govind",
"Niranjan",
""
],
[
"Teeguarden",
"Justin G.",
""
],
[
"Metz",
"Thomas O.",
""
],
[
"Renslow",
"Ryan S.",
""
]
] | Comprehensive and confident identifications of metabolites and other chemicals in complex samples will revolutionize our understanding of the role these chemically diverse molecules play in biological systems. Despite recent advances, metabolomics studies still result in the detection of a disproportionate number of features that cannot be confidently assigned to a chemical structure. This inadequacy is driven by the single most significant limitation in metabolomics: the reliance on reference libraries constructed by analysis of authentic reference chemicals. To this end, we have developed the in silico chemical library engine (ISiCLE), a high-performance-computing-friendly cheminformatics workflow for generating libraries of chemical properties. In the instantiation described here, we predict probable three-dimensional molecular conformers using chemical identifiers as input, from which collision cross sections (CCS) are derived. The approach employs state-of-the-art first-principles simulation, distinguished by the use of molecular dynamics, quantum chemistry, and ion mobility calculations to generate structures and libraries, all without training data. Importantly, optimization of ISiCLE included a refactoring of the popular MOBCAL code for trajectory-based mobility calculations, improving its computational efficiency by over two orders of magnitude. Calculated CCS values were validated against 1,983 experimentally measured CCS values and compared to previously reported CCS calculation approaches. An online database is introduced for sharing both calculated and experimental CCS values (metabolomics.pnnl.gov), initially including a CCS library with over 1 million entries. Finally, three successful applications of molecule characterization using calculated CCS are described. This work represents a promising method to address the limitations of small molecule identification. |
2009.10514 | Maxime De Bois | Maxime De Bois, Moun\^im A. El Yacoubi, Mehdi Ammi | Integration of Clinical Criteria into the Training of Deep Models:
Application to Glucose Prediction for Diabetic People | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard objective functions used during the training of neural-network-based
predictive models do not consider clinical criteria, leading to models that are
not necessarily clinically acceptable. In this study, we look at this problem
from the perspective of the forecasting of future glucose values for diabetic
people. In this study, we propose the coherent mean squared glycemic error
(gcMSE) loss function. It penalizes the model during its training not only of
the prediction errors, but also on the predicted variation errors which is
important in glucose prediction. Moreover, it makes possible to adjust the
weighting of the different areas in the error space to better focus on
dangerous regions. In order to use the loss function in practice, we propose an
algorithm that progressively improves the clinical acceptability of the model,
so that we can achieve the best tradeoff possible between accuracy and given
clinical criteria. We evaluate the approaches using two diabetes datasets, one
having type-1 patients and the other type-2 patients. The results show that
using the gcMSE loss function, instead of a standard MSE loss function,
improves the clinical acceptability of the models. In particular, the
improvements are significant in the hypoglycemia region. We also show that this
increased clinical acceptability comes at the cost of a decrease in the average
accuracy of the model. Finally, we show that this tradeoff between accuracy and
clinical acceptability can be successfully addressed with the proposed
algorithm. For given clinical criteria, the algorithm can find the optimal
solution that maximizes the accuracy while at the same meeting the criteria.
| [
{
"created": "Mon, 21 Sep 2020 15:05:28 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Sep 2020 08:05:47 GMT",
"version": "v2"
}
] | 2020-09-24 | [
[
"De Bois",
"Maxime",
""
],
[
"Yacoubi",
"Mounîm A. El",
""
],
[
"Ammi",
"Mehdi",
""
]
] | Standard objective functions used during the training of neural-network-based predictive models do not consider clinical criteria, leading to models that are not necessarily clinically acceptable. In this study, we look at this problem from the perspective of forecasting future glucose values for diabetic people. We propose the coherent mean squared glycemic error (gcMSE) loss function. It penalizes the model during training not only on the prediction errors, but also on the predicted-variation errors, which are important in glucose prediction. Moreover, it makes it possible to adjust the weighting of the different areas in the error space to better focus on dangerous regions. In order to use the loss function in practice, we propose an algorithm that progressively improves the clinical acceptability of the model, so that we can achieve the best possible tradeoff between accuracy and given clinical criteria. We evaluate the approaches using two diabetes datasets, one with type-1 patients and the other with type-2 patients. The results show that using the gcMSE loss function, instead of a standard MSE loss function, improves the clinical acceptability of the models. In particular, the improvements are significant in the hypoglycemia region. We also show that this increased clinical acceptability comes at the cost of a decrease in the average accuracy of the model. Finally, we show that this tradeoff between accuracy and clinical acceptability can be successfully addressed with the proposed algorithm: for given clinical criteria, the algorithm finds the optimal solution that maximizes accuracy while meeting the criteria. |
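The exact gcMSE definition lives in the De Bois et al. paper; the sketch below only illustrates the idea that row describes, i.e. combining value errors, predicted-variation errors, and region weighting in one loss. The 70 mg/dL threshold, the weights, and every name here are assumptions, not the paper's formulation:

```python
def glycemic_loss(y_true, y_pred, alpha=0.5, hypo=70.0, w_hypo=4.0):
    """Illustrative loss in the spirit of gcMSE: squared value errors,
    up-weighted in an assumed hypoglycemic region, plus squared errors
    on the predicted variations (first differences)."""
    n = len(y_true)
    value_err = sum(
        (w_hypo if t < hypo else 1.0) * (t - p) ** 2
        for t, p in zip(y_true, y_pred)
    ) / n
    dv_true = [y_true[i + 1] - y_true[i] for i in range(n - 1)]
    dv_pred = [y_pred[i + 1] - y_pred[i] for i in range(n - 1)]
    var_err = sum((a - b) ** 2 for a, b in zip(dv_true, dv_pred)) / max(n - 1, 1)
    return value_err + alpha * var_err
```

Penalizing variation errors discourages predictions that land near the right value while moving in the wrong direction, which is the clinically dangerous failure mode in glucose forecasting.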
1509.07440 | Denis Michel | Denis Michel | A role for ATP-dependent chromatin remodeling in the hierarchical
cooperativity between noninteracting transcription factors | 12 pages, 4 figures. This manuscript is an extended version of the
article: Hierarchical cooperativity mediated by chromatin remodeling; the
model of the MMTV transcription regulation. 2011. J. Theor. Biol. 287, 74-81 | J. Theor. Biol. 287, 74-81 (2011) | 10.1016/j.jtbi.2011.07.020 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chromatin remodeling machineries are abundant and diverse in eukaryotic
cells. They have been involved in a variety of situations such as histone
exchange and DNA repair, but their importance in gene expression remains
unclear. Although the influence of nucleosome position on the regulation of
gene expression is generally envisioned under the quasi-equilibrium
perspective, it is proposed that given the ATP-dependence of chromatin
remodeling enzymes, certain mechanisms necessitate non-equilibrium treatments.
Examination of the celebrated chromatin remodeling system of the mouse mammary
tumor virus, in which the binding of transcription factors opens the way to
other ones, reveals that breaking equilibrium offers a subtle mode of
transcription factor cooperativity, avoids molecular trapping phenomena and
allows to reconcile previously conflicting experimental data. This mechanism
provides a control lever of promoter responsiveness to transcription factor
combinations, challenging the classical view of the unilateral influence of
pioneer on secondary transcription factors.
| [
{
"created": "Thu, 24 Sep 2015 17:14:07 GMT",
"version": "v1"
}
] | 2015-09-25 | [
[
"Michel",
"Denis",
""
]
] | Chromatin remodeling machineries are abundant and diverse in eukaryotic cells. They have been implicated in a variety of processes such as histone exchange and DNA repair, but their importance in gene expression remains unclear. Although the influence of nucleosome position on the regulation of gene expression is generally envisioned under the quasi-equilibrium perspective, it is proposed that, given the ATP dependence of chromatin remodeling enzymes, certain mechanisms necessitate non-equilibrium treatments. Examination of the celebrated chromatin remodeling system of the mouse mammary tumor virus, in which the binding of transcription factors opens the way to other ones, reveals that breaking equilibrium offers a subtle mode of transcription factor cooperativity, avoids molecular trapping phenomena and makes it possible to reconcile previously conflicting experimental data. This mechanism provides a control lever of promoter responsiveness to transcription factor combinations, challenging the classical view of the unilateral influence of pioneer transcription factors on secondary ones. |
0708.1865 | Pan-Jun Kim | Pan-Jun Kim, Dong-Yup Lee, Tae Yong Kim, Kwang Ho Lee, Hawoong Jeong,
Sang Yup Lee, Sunwon Park | Metabolite essentiality elucidates robustness of Escherichia coli
metabolism | Supplements available at
http://stat.kaist.ac.kr/publication/2007/PJKim_pnas_supplement.pdf | Proc. Natl. Acad. Sci. USA. 104 13638 (2007) | 10.1073/pnas.0703262104 | null | q-bio.MN physics.bio-ph q-bio.QM | null | Complex biological systems are very robust to genetic and environmental
changes at all levels of organization. Many biological functions of Escherichia
coli metabolism can be sustained against single-gene or even multiple-gene
mutations by using redundant or alternative pathways. Thus, only a limited
number of genes have been identified to be lethal to the cell. In this regard,
the reaction-centric gene deletion study has a limitation in understanding the
metabolic robustness. Here, we report the use of flux-sum, which is the
summation of all incoming or outgoing fluxes around a particular metabolite
under pseudo-steady state conditions, as a good conserved property for
elucidating such robustness of E. coli from the metabolite point of view. The
functional behavior, as well as the structural and evolutionary properties of
metabolites essential to the cell survival, was investigated by means of a
constraints-based flux analysis under perturbed conditions. The essential
metabolites are capable of maintaining a steady flux-sum even against severe
perturbation by actively redistributing the relevant fluxes. Disrupting the
flux-sum maintenance was found to suppress cell growth. This approach of
analyzing metabolite essentiality provides insight into cellular robustness and
concomitant fragility, which can be used for several applications, including
the development of new drugs for treating pathogens.
| [
{
"created": "Tue, 14 Aug 2007 11:38:00 GMT",
"version": "v1"
}
] | 2007-08-30 | [
[
"Kim",
"Pan-Jun",
""
],
[
"Lee",
"Dong-Yup",
""
],
[
"Kim",
"Tae Yong",
""
],
[
"Lee",
"Kwang Ho",
""
],
[
"Jeong",
"Hawoong",
""
],
[
"Lee",
"Sang Yup",
""
],
[
"Park",
"Sunwon",
""
]
] | Complex biological systems are very robust to genetic and environmental changes at all levels of organization. Many biological functions of Escherichia coli metabolism can be sustained against single-gene or even multiple-gene mutations by using redundant or alternative pathways. Thus, only a limited number of genes have been identified to be lethal to the cell. In this regard, the reaction-centric gene deletion study has a limitation in understanding the metabolic robustness. Here, we report the use of flux-sum, which is the summation of all incoming or outgoing fluxes around a particular metabolite under pseudo-steady state conditions, as a good conserved property for elucidating such robustness of E. coli from the metabolite point of view. The functional behavior, as well as the structural and evolutionary properties of metabolites essential to the cell survival, was investigated by means of a constraints-based flux analysis under perturbed conditions. The essential metabolites are capable of maintaining a steady flux-sum even against severe perturbation by actively redistributing the relevant fluxes. Disrupting the flux-sum maintenance was found to suppress cell growth. This approach of analyzing metabolite essentiality provides insight into cellular robustness and concomitant fragility, which can be used for several applications, including the development of new drugs for treating pathogens. |
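The flux-sum described in the Kim et al. row has a compact numerical form: at pseudo-steady state the incoming and outgoing fluxes around a metabolite balance, so half the sum of absolute fluxes through its reactions recovers either side. A minimal sketch of that quantity (the example stoichiometric row and flux vector are made up for illustration):

```python
def flux_sum(s_row, v):
    """Flux-sum of one metabolite: half the sum of |S_ij * v_j| over all
    reactions j, where s_row holds the metabolite's stoichiometric
    coefficients and v is the steady-state flux vector."""
    return 0.5 * sum(abs(s * f) for s, f in zip(s_row, v))

# A metabolite produced by reaction 1 (coeff +1, flux 2.0) and consumed
# by reaction 2 (coeff -1, flux 2.0) has flux-sum (turnover) 2.0.
```

In the study, the fluxes v would come from constraints-based flux analysis (e.g. flux balance analysis) under each perturbed condition, and a metabolite whose flux-sum cannot be maintained marks a fragility of the network.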
1908.07048 | Jinzhi Lei | Jinzhi Lei | Evolutionary dynamics of cancer: from epigenetic regulation to cell
population dynamics -- mathematical model framework, applications, and open
problems | 19 pages, 3 figures | null | 10.1007/s11425-019-1629-7 | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Predictive modeling of the evolutionary dynamics of cancer is a challenge
issue in computational cancer biology. In this paper, we propose a general
mathematical model framework for the evolutionary dynamics of cancer with
plasticity and heterogeneity in cancer cells. Cancer is a group of diseases
involving abnormal cell growth, during which abnormal regulations in stem cell
regeneration are essential for the dynamics of cancer development. In general,
the dynamics of stem cell regeneration can be simplified as a $\mathrm{G_0}$
phase cell cycle model, which lead to a delay differentiation equation. When
cell heterogeneity and plasticity are considered, we establish a
differential-integral equation based on the random transition of epigenetic
states of stem cells during cell division. The proposed model highlights cell
heterogeneity and plasticity, and connects the heterogeneity with cell-to-cell
variance in cellular behaviors, e.g. proliferation, apoptosis, and
differentiation/senescence, and can be extended to include gene
mutation-induced tumor development. Hybrid computations models are developed
based on the mathematical model framework, and are applied to the process of
inflammation-induced tumorigenesis and tumor relapse after CAR-T therapy.
Finally, we give rise to several mathematical problems related to the proposed
differential-integral equation. Answers to these problems are crucial for the
understanding of the evolutionary dynamics of cancer.
| [
{
"created": "Mon, 19 Aug 2019 19:54:29 GMT",
"version": "v1"
}
] | 2020-01-10 | [
[
"Lei",
"Jinzhi",
""
]
] | Predictive modeling of the evolutionary dynamics of cancer is a challenging issue in computational cancer biology. In this paper, we propose a general mathematical model framework for the evolutionary dynamics of cancer with plasticity and heterogeneity in cancer cells. Cancer is a group of diseases involving abnormal cell growth, during which abnormal regulation of stem cell regeneration is essential for the dynamics of cancer development. In general, the dynamics of stem cell regeneration can be simplified as a $\mathrm{G_0}$ phase cell cycle model, which leads to a delay differential equation. When cell heterogeneity and plasticity are considered, we establish a differential-integral equation based on the random transition of epigenetic states of stem cells during cell division. The proposed model highlights cell heterogeneity and plasticity, connects the heterogeneity with cell-to-cell variance in cellular behaviors, e.g. proliferation, apoptosis, and differentiation/senescence, and can be extended to include gene mutation-induced tumor development. Hybrid computational models are developed based on the mathematical model framework and are applied to the process of inflammation-induced tumorigenesis and tumor relapse after CAR-T therapy. Finally, we pose several mathematical problems related to the proposed differential-integral equation; answers to these problems are crucial for understanding the evolutionary dynamics of cancer. |
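The G0-phase model in the Lei row reduces to a delay differential equation. The paper's specific equation is not reproduced here; a generic forward-Euler scheme for dx/dt = f(x(t), x(t - tau)) (the example right-hand side and all parameters are assumptions) shows the mechanics of handling the delay term:

```python
def integrate_dde(f, x0, tau, dt, steps):
    """Forward-Euler integration of dx/dt = f(x(t), x(t - tau)),
    with constant history x(t) = x0 for t <= 0."""
    lag = int(round(tau / dt))   # delay expressed in time steps
    xs = [x0] * (lag + 1)        # history buffer covering [-tau, 0]
    for _ in range(steps):
        x_now, x_delayed = xs[-1], xs[-1 - lag]
        xs.append(x_now + dt * f(x_now, x_delayed))
    return xs

# Toy delayed decay dx/dt = -x(t - tau), a standard DDE test problem.
traj = integrate_dde(lambda x, xd: -xd, 1.0, tau=1.0, dt=0.1, steps=200)
```

The delay enters simply as an index offset into the stored trajectory; in the cell-cycle setting it represents the time cells spend in the proliferative phase before re-entering G0.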
2108.13661 | Eric Bonnet | Eric Bonnet | Using convolutional neural networks for the classification of breast
cancer images | 13 pages, 4 figures, 4 tables; corrected typos; added an additional
breast carcinoma image dataset; added a total of twelve CNN models tested.
Additional testing for transfer learning and complexity of the models | null | null | null | q-bio.QM eess.IV | http://creativecommons.org/licenses/by/4.0/ | An important part of breast cancer staging is the assessment of the sentinel
axillary node for early signs of tumor spreading. However, this assessment by
pathologists is not always easy and retrospective surveys often requalify the
status of a high proportion of sentinel nodes. Convolutional Neural Networks
(CNNs) are a class of deep learning algorithms that have shown excellent
performances in the most challenging visual classification tasks, with numerous
applications in medical imaging. In this study I compare twelve different CNNs
and different hardware acceleration devices for the detection of breast cancer
from microscopic images of breast cancer tissue. Convolutional models are
trained and tested on two public datasets. The first one is composed of more
than 300,000 images of sentinel lymph node tissue from breast cancer patients,
while the second one has more than 220,000 images from invasive ductal
carcinoma tissue, one of the most common forms of breast cancer. Four different
hardware acceleration cards were used, with an off-the-shelf deep learning
framework. The impact of transfer learning and hyperparameter fine-tuning is
tested. Hardware acceleration devices can reduce training time by a
factor of five to twelve, depending on the model used. On the other hand,
increasing convolutional depth will augment the training time by a factor of
four to six times, depending on the acceleration device used. Increasing the
depth and the complexity of the model generally improves performance, but the
relationship is not linear and also depends on the architecture of the model.
The performance of transfer learning is always worse compared to a complete
retraining of the model. Fine-tuning the hyperparameters of the model improves
the results, with the best model showing a performance comparable to
state-of-the-art models.
| [
{
"created": "Tue, 31 Aug 2021 07:53:41 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Oct 2022 14:15:20 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Apr 2024 11:53:17 GMT",
"version": "v3"
}
] | 2024-04-30 | [
[
"Bonnet",
"Eric",
""
]
] | An important part of breast cancer staging is the assessment of the sentinel axillary node for early signs of tumor spreading. However, this assessment by pathologists is not always easy and retrospective surveys often requalify the status of a high proportion of sentinel nodes. Convolutional Neural Networks (CNNs) are a class of deep learning algorithms that have shown excellent performances in the most challenging visual classification tasks, with numerous applications in medical imaging. In this study I compare twelve different CNNs and different hardware acceleration devices for the detection of breast cancer from microscopic images of breast cancer tissue. Convolutional models are trained and tested on two public datasets. The first one is composed of more than 300,000 images of sentinel lymph node tissue from breast cancer patients, while the second one has more than 220,000 images from invasive ductal carcinoma tissue, one of the most common forms of breast cancer. Four different hardware acceleration cards were used, with an off-the-shelf deep learning framework. The impact of transfer learning and hyperparameter fine-tuning is tested. Hardware acceleration devices can reduce training time by a factor of five to twelve, depending on the model used. On the other hand, increasing convolutional depth will augment the training time by a factor of four to six times, depending on the acceleration device used. Increasing the depth and the complexity of the model generally improves performance, but the relationship is not linear and also depends on the architecture of the model. The performance of transfer learning is always worse compared to a complete retraining of the model. Fine-tuning the hyperparameters of the model improves the results, with the best model showing a performance comparable to state-of-the-art models. |
2206.07632 | Seul Lee | Seul Lee, Jaehyeong Jo, Sung Ju Hwang | Exploring Chemical Space with Score-based Out-of-distribution Generation | ICML 2023 | null | null | null | q-bio.BM cs.LG physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A well-known limitation of existing molecular generative models is that the
generated molecules highly resemble those in the training set. To generate
truly novel molecules that may have even better properties for de novo drug
discovery, more powerful exploration in the chemical space is necessary. To
this end, we propose Molecular Out-Of-distribution Diffusion (MOOD), a
score-based diffusion scheme that incorporates out-of-distribution (OOD)
control in the generative stochastic differential equation (SDE) with simple
control of a hyperparameter, thus requiring no additional costs. Since some
novel molecules may not meet the basic requirements of real-world drugs, MOOD
performs conditional generation by utilizing the gradients from a property
predictor that guides the reverse-time diffusion process to high-scoring
regions according to target properties such as protein-ligand interactions,
drug-likeness, and synthesizability. This allows MOOD to search for novel and
meaningful molecules rather than generating unseen yet trivial ones. We
experimentally validate that MOOD is able to explore the chemical space beyond
the training distribution, generating molecules that outscore ones found with
existing methods, and even the top 0.01% of the original training pool. Our
code is available at https://github.com/SeulLee05/MOOD.
| [
{
"created": "Mon, 6 Jun 2022 06:17:11 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2023 10:31:37 GMT",
"version": "v2"
},
{
"created": "Sat, 3 Jun 2023 08:43:39 GMT",
"version": "v3"
}
] | 2023-06-06 | [
[
"Lee",
"Seul",
""
],
[
"Jo",
"Jaehyeong",
""
],
[
"Hwang",
"Sung Ju",
""
]
] | A well-known limitation of existing molecular generative models is that the generated molecules highly resemble those in the training set. To generate truly novel molecules that may have even better properties for de novo drug discovery, more powerful exploration in the chemical space is necessary. To this end, we propose Molecular Out-Of-distribution Diffusion (MOOD), a score-based diffusion scheme that incorporates out-of-distribution (OOD) control in the generative stochastic differential equation (SDE) with simple control of a hyperparameter, thus requiring no additional costs. Since some novel molecules may not meet the basic requirements of real-world drugs, MOOD performs conditional generation by utilizing the gradients from a property predictor that guides the reverse-time diffusion process to high-scoring regions according to target properties such as protein-ligand interactions, drug-likeness, and synthesizability. This allows MOOD to search for novel and meaningful molecules rather than generating unseen yet trivial ones. We experimentally validate that MOOD is able to explore the chemical space beyond the training distribution, generating molecules that outscore ones found with existing methods, and even the top 0.01% of the original training pool. Our code is available at https://github.com/SeulLee05/MOOD. |
1307.0178 | Dan Siegal-Gaskins | Dan Siegal-Gaskins, Vincent Noireaux, and Richard M. Murray | Biomolecular resource utilization in elementary cell-free gene circuits | Accepted to the 2013 American Control Conference | null | null | null | q-bio.MN q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a detailed dynamical model of the behavior of
transcription-translation circuits in vitro that makes explicit the roles
played by essential molecular resources. A set of simple two-gene test circuits
operating in a cell-free biochemical 'breadboard' validates this model and
highlights the consequences of limited resource availability. In particular, we
are able to confirm the existence of biomolecular 'crosstalk' and isolate its
individual sources. The implications of crosstalk for biomolecular circuit
design and function are discussed.
| [
{
"created": "Sun, 30 Jun 2013 05:14:12 GMT",
"version": "v1"
}
] | 2013-07-02 | [
[
"Siegal-Gaskins",
"Dan",
""
],
[
"Noireaux",
"Vincent",
""
],
[
"Murray",
"Richard M.",
""
]
] | We present a detailed dynamical model of the behavior of transcription-translation circuits in vitro that makes explicit the roles played by essential molecular resources. A set of simple two-gene test circuits operating in a cell-free biochemical 'breadboard' validates this model and highlights the consequences of limited resource availability. In particular, we are able to confirm the existence of biomolecular 'crosstalk' and isolate its individual sources. The implications of crosstalk for biomolecular circuit design and function are discussed. |
1408.6006 | Maxim Lavrentovich | Maxim O. Lavrentovich and David R. Nelson | Survival Probabilities at Spherical Frontiers | 35 pages, 11 figures, revised version | Theor. Popul. Biol. 102 (2015) 26-39 | 10.1016/j.tpb.2015.03.002 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by tumor growth and spatial population genetics, we study the
interplay between evolutionary and spatial dynamics at the surfaces of
three-dimensional, spherical range expansions. We consider range expansion
radii that grow with an arbitrary power-law in time:
$R(t)=R_0(1+t/t^*)^{\Theta}$, where $\Theta$ is a growth exponent, $R_0$ is the
initial radius, and $t^*$ is a characteristic time for the growth, to be
affected by the inflating geometry. We vary the parameters $t^*$ and $\Theta$
to capture a variety of possible growth regimes. Guided by recent results for
two-dimensional inflating range expansions, we identify key dimensionless
parameters that describe the survival probability of a mutant cell with a small
selective advantage arising at the population frontier. Using analytical
techniques, we calculate this probability for arbitrary $\Theta$. We compare
our results to simulations of linearly inflating expansions ($\Theta=1$
spherical Fisher-Kolmogorov-Petrovsky-Piscunov waves) and treadmilling
populations ($\Theta=0$, with cells in the interior removed by apoptosis or a
similar process). We find that mutations at linearly inflating fronts have
survival probabilities enhanced by factors of 100 or more relative to mutations
at treadmilling population frontiers. We also discuss the special properties of
"marginally inflating" $(\Theta=1/2)$ expansions.
| [
{
"created": "Tue, 26 Aug 2014 04:57:36 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jun 2015 14:59:59 GMT",
"version": "v2"
}
] | 2015-06-02 | [
[
"Lavrentovich",
"Maxim O.",
""
],
[
"Nelson",
"David R.",
""
]
] | Motivated by tumor growth and spatial population genetics, we study the interplay between evolutionary and spatial dynamics at the surfaces of three-dimensional, spherical range expansions. We consider range expansion radii that grow with an arbitrary power-law in time: $R(t)=R_0(1+t/t^*)^{\Theta}$, where $\Theta$ is a growth exponent, $R_0$ is the initial radius, and $t^*$ is a characteristic time for the growth, to be affected by the inflating geometry. We vary the parameters $t^*$ and $\Theta$ to capture a variety of possible growth regimes. Guided by recent results for two-dimensional inflating range expansions, we identify key dimensionless parameters that describe the survival probability of a mutant cell with a small selective advantage arising at the population frontier. Using analytical techniques, we calculate this probability for arbitrary $\Theta$. We compare our results to simulations of linearly inflating expansions ($\Theta=1$ spherical Fisher-Kolmogorov-Petrovsky-Piscunov waves) and treadmilling populations ($\Theta=0$, with cells in the interior removed by apoptosis or a similar process). We find that mutations at linearly inflating fronts have survival probabilities enhanced by factors of 100 or more relative to mutations at treadmilling population frontiers. We also discuss the special properties of "marginally inflating" $(\Theta=1/2)$ expansions. |
1001.3449 | Adrian Melott | A.L. Melott (Kansas) and R.K. Bambach (Museum of Natural History,
Smithsonian Institution) | An ubiquitous ~62 Myr periodic fluctuation superimposed on general
trends in fossil biodiversity | Summary of comments presented at the North American Paleontological
Convention, June 25, 2009 | null | null | null | q-bio.PE astro-ph.EP physics.bio-ph physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A 62 Myr periodicity is superimposed on other longer-term trends in fossil
biodiversity. This cycle can be discerned in marine data based on the Sepkoski
compendium, the Paleobiology Database, and the Fossil Record 2. The signal also
exists in changes in sea level/sediment, but is much weaker than in
biodiversity itself. A significant excess of 19 previously identified
Phanerozoic mass extinctions occur on the declining phase of the 62 Myr cycle.
Given the appearance of the signal in sampling-standardized biodiversity data,
it is likely not to be a sampling artifact, but either a consequence of sea-level
changes or an additional effect of some common cause for them both. In either
case, it is intriguing why both changes would have a regular pattern.
| [
{
"created": "Wed, 20 Jan 2010 03:04:53 GMT",
"version": "v1"
}
] | 2010-01-21 | [
[
"Melott",
"A. L.",
"",
"Kansas"
],
[
"Bambach",
"R. K.",
"",
"Museum of Natural History,\n Smithsonian Institution"
]
] | A 62 Myr periodicity is superimposed on other longer-term trends in fossil biodiversity. This cycle can be discerned in marine data based on the Sepkoski compendium, the Paleobiology Database, and the Fossil Record 2. The signal also exists in changes in sea level/sediment, but is much weaker than in biodiversity itself. A significant excess of 19 previously identified Phanerozoic mass extinctions occur on the declining phase of the 62 Myr cycle. Given the appearance of the signal in sampling-standardized biodiversity data, it is likely not to be a sampling artifact, but either a consequence of sea-level changes or an additional effect of some common cause for them both. In either case, it is intriguing why both changes would have a regular pattern. |
2210.08194 | Margaret Cheung | August George, Doo Nam Kim, Trevor Moser, Ian T. Gildea, James E.
Evans, Margaret S. Cheung | Graph identification of proteins in tomograms (GRIP-Tomo) | submitted for peer review | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In this study, we present a method of pattern mining based on network theory
that enables the identification of protein structures or complexes from
synthetic volume densities, without the knowledge of predefined templates or
human biases for refinement. We hypothesized that the topological connectivity
of protein structures is invariant, and they are distinctive for the purpose of
protein identification from distorted data presented in volume densities.
Three-dimensional densities of a protein or a complex from simulated
tomographic volumes were transformed into mathematical graphs as observables.
We systematically introduced data distortion or defects such as missing
fullness of data, the tumbling effect, and the missing wedge effect into the
simulated volumes, and varied the distance cutoffs in pixels to capture the
varying connectivity between the density cluster centroids in the presence of
defects. A similarity score between the graphs from the simulated volumes and
the graphs transformed from the physical protein structures in point data was
calculated by comparing their network theory order parameters including node
degrees, betweenness centrality, and graph densities. By capturing the
essential topological features defining the heterogeneous morphologies of a
network, we were able to accurately identify proteins and homo-multimeric
complexes from ten topologically distinctive samples without realistic noise
added. Our approach empowers future developments of tomogram processing by
providing pattern mining with interpretability, to enable the classification of
single-domain protein native topologies as well as distinct single-domain
proteins from multimeric complexes within noisy volumes.
| [
{
"created": "Sat, 15 Oct 2022 04:51:38 GMT",
"version": "v1"
}
] | 2022-10-18 | [
[
"George",
"August",
""
],
[
"Kim",
"Doo Nam",
""
],
[
"Moser",
"Trevor",
""
],
[
"Gildea",
"Ian T.",
""
],
[
"Evans",
"James E.",
""
],
[
"Cheung",
"Margaret S.",
""
]
] | In this study, we present a method of pattern mining based on network theory that enables the identification of protein structures or complexes from synthetic volume densities, without the knowledge of predefined templates or human biases for refinement. We hypothesized that the topological connectivity of protein structures is invariant, and they are distinctive for the purpose of protein identification from distorted data presented in volume densities. Three-dimensional densities of a protein or a complex from simulated tomographic volumes were transformed into mathematical graphs as observables. We systematically introduced data distortion or defects such as missing fullness of data, the tumbling effect, and the missing wedge effect into the simulated volumes, and varied the distance cutoffs in pixels to capture the varying connectivity between the density cluster centroids in the presence of defects. A similarity score between the graphs from the simulated volumes and the graphs transformed from the physical protein structures in point data was calculated by comparing their network theory order parameters including node degrees, betweenness centrality, and graph densities. By capturing the essential topological features defining the heterogeneous morphologies of a network, we were able to accurately identify proteins and homo-multimeric complexes from ten topologically distinctive samples without realistic noise added. Our approach empowers future developments of tomogram processing by providing pattern mining with interpretability, to enable the classification of single-domain protein native topologies as well as distinct single-domain proteins from multimeric complexes within noisy volumes. |
2007.02557 | Thomas Caulfield | Mathew Coban, Juliet Morrison PhD, William D. Freeman MD, Evette
Radisky PhD, Karine G. Le Roch PhD, Thomas R. Caulfield PhD | Attacking COVID-19 Progression using Multi-Drug Therapy for Synergetic
Target Engagement | Main text: 29 pages with references, 1 main table, 6 main figures;
Supplemental section: 30 pages, 3 supplemental tables, 4 supplemental figures | Biomolecules 2021, 11(6), 787 | 10.3390/biom11060787 | null | q-bio.BM q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 is a devastating respiratory and inflammatory illness caused by a
new coronavirus that is rapidly spreading throughout the human population. Over
the past 6 months, severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2), the virus responsible for COVID-19, has already infected over
11.6 million (25% located in United States) and killed more than 540K people
around the world. As we face one of the most challenging times in our recent
history, there is an urgent need to identify drug candidates that can attack
SARS-CoV-2 on multiple fronts. We have therefore initiated a computational
dynamics drug pipeline using molecular modeling, structure simulation, docking
and machine learning models to predict the inhibitory activity of several
million compounds against two essential SARS-CoV-2 viral proteins and their
host protein interactors; S/Ace2, Tmprss2, Cathepsins L and K, and Mpro to
prevent binding, membrane fusion and replication of the virus, respectively.
Altogether we generated an ensemble of structural conformations that increases
high-quality docking outcomes to screen >6 million compounds including all
FDA-approved drugs, drugs under clinical trial (>3000) and an additional >30
million selected chemotypes from fragment libraries. Our results yielded an
initial set of 350 high value compounds from both new and FDA-approved
compounds that can now be tested experimentally in appropriate biological model
systems. We anticipate that our results will initiate screening campaigns and
accelerate the discovery of COVID-19 treatments.
| [
{
"created": "Mon, 6 Jul 2020 07:08:45 GMT",
"version": "v1"
}
] | 2021-06-25 | [
[
"Coban",
"Mathew",
""
],
[
"PhD",
"Juliet Morrison",
""
],
[
"MD",
"William D. Freeman",
""
],
[
"PhD",
"Evette Radisky",
""
],
[
"PhD",
"Karine G. Le Roch",
""
],
[
"PhD",
"Thomas R. Caulfield",
""
]
] | COVID-19 is a devastating respiratory and inflammatory illness caused by a new coronavirus that is rapidly spreading throughout the human population. Over the past 6 months, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for COVID-19, has already infected over 11.6 million (25% located in United States) and killed more than 540K people around the world. As we face one of the most challenging times in our recent history, there is an urgent need to identify drug candidates that can attack SARS-CoV-2 on multiple fronts. We have therefore initiated a computational dynamics drug pipeline using molecular modeling, structure simulation, docking and machine learning models to predict the inhibitory activity of several million compounds against two essential SARS-CoV-2 viral proteins and their host protein interactors; S/Ace2, Tmprss2, Cathepsins L and K, and Mpro to prevent binding, membrane fusion and replication of the virus, respectively. Altogether we generated an ensemble of structural conformations that increases high-quality docking outcomes to screen >6 million compounds including all FDA-approved drugs, drugs under clinical trial (>3000) and an additional >30 million selected chemotypes from fragment libraries. Our results yielded an initial set of 350 high value compounds from both new and FDA-approved compounds that can now be tested experimentally in appropriate biological model systems. We anticipate that our results will initiate screening campaigns and accelerate the discovery of COVID-19 treatments. |
1008.1410 | Avner Wallach | Avner Wallach, Danny Eytan, Asaf Gal, Christoph Zrenner, Ron Meir and
Shimon Marom | Neuronal Response Clamp | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the first recordings made of evoked action potentials it has become
apparent that the responses of individual neurons to ongoing physiologically
relevant input are highly variable. This variability is manifested in
non-stationary behavior of practically every observable neuronal response
feature. Here we introduce the Neuronal Response Clamp, a closed-loop technique
enabling full control over two important single neuron activity variables:
response probability and stimulus-spike latency. The technique is applicable
over extended durations (up to several hours), and is effective even on the
background of ongoing neuronal network activity. The Response Clamp technique
is a powerful tool, extending the voltage-clamp and dynamic-clamp approaches to
the neuron's functional level, namely - its spiking behavior.
| [
{
"created": "Sun, 8 Aug 2010 15:13:12 GMT",
"version": "v1"
}
] | 2010-08-10 | [
[
"Wallach",
"Avner",
""
],
[
"Eytan",
"Danny",
""
],
[
"Gal",
"Asaf",
""
],
[
"Zrenner",
"Christoph",
""
],
[
"Meir",
"Ron",
""
],
[
"Marom",
"Shimon",
""
]
] | Since the first recordings made of evoked action potentials it has become apparent that the responses of individual neurons to ongoing physiologically relevant input are highly variable. This variability is manifested in non-stationary behavior of practically every observable neuronal response feature. Here we introduce the Neuronal Response Clamp, a closed-loop technique enabling full control over two important single neuron activity variables: response probability and stimulus-spike latency. The technique is applicable over extended durations (up to several hours), and is effective even on the background of ongoing neuronal network activity. The Response Clamp technique is a powerful tool, extending the voltage-clamp and dynamic-clamp approaches to the neuron's functional level, namely - its spiking behavior. |
0708.3502 | Dietrich Stauffer | D. Stauffer and S. Moss de Oliveira | Child mortality in Penna ageing model | Two pages including one figure | null | null | null | q-bio.PE | null | Assuming the deleterious mutations in the Penna ageing model to affect mainly
the young ages, we get an enhanced mortality at very young age, followed by a
minimum of the mortality, and then the usual exponential increase of mortality
with age.
| [
{
"created": "Sun, 26 Aug 2007 18:27:28 GMT",
"version": "v1"
}
] | 2007-08-28 | [
[
"Stauffer",
"D.",
""
],
[
"de Oliveira",
"S. Moss",
""
]
] | Assuming the deleterious mutations in the Penna ageing model to affect mainly the young ages, we get an enhanced mortality at very young age, followed by a minimum of the mortality, and then the usual exponential increase of mortality with age. |
2111.09780 | Kevin McKee | Kevin L. McKee, Ian C. Crandell, Rishidev Chaudhuri, Randall C.
O'Reilly | Locally Learned Synaptic Dropout for Complete Bayesian Inference | 30 pages, 8 Figures | null | null | null | q-bio.NC stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Bayesian brain hypothesis postulates that the brain accurately operates
on statistical distributions according to Bayes' theorem. The random failure of
presynaptic vesicles to release neurotransmitters may allow the brain to sample
from posterior distributions of network parameters, interpreted as epistemic
uncertainty. It has not been shown previously how random failures might allow
networks to sample from observed distributions, also known as aleatoric or
residual uncertainty. Sampling from both distributions enables probabilistic
inference, efficient search, and creative or generative problem solving. We
demonstrate that under a population-code based interpretation of neural
activity, both types of distribution can be represented and sampled with
synaptic failure alone. We first define a biologically constrained neural
network and sampling scheme based on synaptic failure and lateral inhibition.
Within this framework, we derive drop-out based epistemic uncertainty, then
prove an analytic mapping from synaptic efficacy to release probability that
allows networks to sample from arbitrary, learned distributions represented by
a receiving layer. Second, our result leads to a local learning rule by which
synapses adapt their release probabilities. Our result demonstrates complete
Bayesian inference, related to the variational learning method of dropout, in a
biologically constrained network using only locally-learned synaptic failure
rates.
| [
{
"created": "Thu, 18 Nov 2021 16:23:00 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Nov 2021 16:04:20 GMT",
"version": "v2"
},
{
"created": "Mon, 29 Nov 2021 18:47:26 GMT",
"version": "v3"
}
] | 2021-11-30 | [
[
"McKee",
"Kevin L.",
""
],
[
"Crandell",
"Ian C.",
""
],
[
"Chaudhuri",
"Rishidev",
""
],
[
"O'Reilly",
"Randall C.",
""
]
] | The Bayesian brain hypothesis postulates that the brain accurately operates on statistical distributions according to Bayes' theorem. The random failure of presynaptic vesicles to release neurotransmitters may allow the brain to sample from posterior distributions of network parameters, interpreted as epistemic uncertainty. It has not been shown previously how random failures might allow networks to sample from observed distributions, also known as aleatoric or residual uncertainty. Sampling from both distributions enables probabilistic inference, efficient search, and creative or generative problem solving. We demonstrate that under a population-code based interpretation of neural activity, both types of distribution can be represented and sampled with synaptic failure alone. We first define a biologically constrained neural network and sampling scheme based on synaptic failure and lateral inhibition. Within this framework, we derive drop-out based epistemic uncertainty, then prove an analytic mapping from synaptic efficacy to release probability that allows networks to sample from arbitrary, learned distributions represented by a receiving layer. Second, our result leads to a local learning rule by which synapses adapt their release probabilities. Our result demonstrates complete Bayesian inference, related to the variational learning method of dropout, in a biologically constrained network using only locally-learned synaptic failure rates. |
1210.2908 | Oriol G\"uell | Oriol G\"uell, Francesc Sagu\'es, Georg Basler, Zoran Nikoloski, M.
\'Angeles Serrano | Assessing the significance of knockout cascades in metabolic networks | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex networks have been shown to be robust against random structural
perturbations, but vulnerable against targeted attacks. Robustness analysis
usually simulates the removal of individual or sets of nodes, followed by the
assessment of the inflicted damage. For complex metabolic networks, it has been
suggested that evolutionary pressure may favor robustness against reaction
removal. However, the removal of a reaction and its impact on the network may
as well be interpreted as selective regulation of pathway activities,
suggesting a tradeoff between the efficiency of regulation and vulnerability.
Here, we employ a cascading failure algorithm to simulate the removal of single
and pairs of reactions from the metabolic networks of two organisms, and
estimate the significance of the results using two different null models:
degree preserving and mass-balanced randomization. Our analysis suggests that
evolutionary pressure promotes larger cascades of non-viable reactions, and
thus favors the ability of efficient metabolic regulation at the expense of
robustness.
| [
{
"created": "Wed, 10 Oct 2012 13:28:01 GMT",
"version": "v1"
}
] | 2012-11-13 | [
[
"Güell",
"Oriol",
""
],
[
"Sagués",
"Francesc",
""
],
[
"Basler",
"Georg",
""
],
[
"Nikoloski",
"Zoran",
""
],
[
"Serrano",
"M. Ángeles",
""
]
] | Complex networks have been shown to be robust against random structural perturbations, but vulnerable against targeted attacks. Robustness analysis usually simulates the removal of individual or sets of nodes, followed by the assessment of the inflicted damage. For complex metabolic networks, it has been suggested that evolutionary pressure may favor robustness against reaction removal. However, the removal of a reaction and its impact on the network may as well be interpreted as selective regulation of pathway activities, suggesting a tradeoff between the efficiency of regulation and vulnerability. Here, we employ a cascading failure algorithm to simulate the removal of single and pairs of reactions from the metabolic networks of two organisms, and estimate the significance of the results using two different null models: degree preserving and mass-balanced randomization. Our analysis suggests that evolutionary pressure promotes larger cascades of non-viable reactions, and thus favors the ability of efficient metabolic regulation at the expense of robustness. |
q-bio/0601028 | Ila Fiete | Ila R. Fiete and H. Sebastian Seung | Gradient learning in spiking neural networks by dynamic perturbation of
conductances | 5 pages; 1 figure; submitted to PRL | Phys. Rev. Lett. 97, 048104 (2006) | 10.1103/PhysRevLett.97.048104 | null | q-bio.NC | null | We present a method of estimating the gradient of an objective function with
respect to the synaptic weights of a spiking neural network. The method works
by measuring the fluctuations in the objective function in response to dynamic
perturbation of the membrane conductances of the neurons. It is compatible with
recurrent networks of conductance-based model neurons with dynamic synapses.
The method can be interpreted as a biologically plausible synaptic learning
rule, if the dynamic perturbations are generated by a special class of
``empiric'' synapses driven by random spike trains from an external source.
| [
{
"created": "Thu, 19 Jan 2006 23:19:55 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Fiete",
"Ila R.",
""
],
[
"Seung",
"H. Sebastian",
""
]
] | We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of ``empiric'' synapses driven by random spike trains from an external source. |
2106.14732 | Alan D. Rendall | Burcu G\"urb\"uz and Alan D. Rendall | Analysis of a model of the Calvin cycle with diffusion of ATP | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of a mathematical model of the Calvin cycle, which is part of
photosynthesis, is analysed. Since diffusion of ATP is included in the model a
system of reaction-diffusion equations is obtained. It is proved that for a
suitable choice of parameters there exist spatially inhomogeneous positive
steady states, in fact infinitely many of them. It is also shown that all
positive steady states, homogeneous and inhomogeneous, are nonlinearly
unstable. The only smooth steady state which could be stable is a trivial one,
where all concentrations except that of ATP are zero. It is found that in the
spatially homogeneous case there are steady states with the property that the
linearization about that state has eigenvalues which are not real, indicating
the presence of oscillations. Numerical simulations exhibit solutions for which
the concentrations are not monotone functions of time.
| [
{
"created": "Mon, 28 Jun 2021 14:03:09 GMT",
"version": "v1"
}
] | 2021-06-29 | [
[
"Gürbüz",
"Burcu",
""
],
[
"Rendall",
"Alan D.",
""
]
] | The dynamics of a mathematical model of the Calvin cycle, which is part of photosynthesis, is analysed. Since diffusion of ATP is included in the model a system of reaction-diffusion equations is obtained. It is proved that for a suitable choice of parameters there exist spatially inhomogeneous positive steady states, in fact infinitely many of them. It is also shown that all positive steady states, homogeneous and inhomogeneous, are nonlinearly unstable. The only smooth steady state which could be stable is a trivial one, where all concentrations except that of ATP are zero. It is found that in the spatially homogeneous case there are steady states with the property that the linearization about that state has eigenvalues which are not real, indicating the presence of oscillations. Numerical simulations exhibit solutions for which the concentrations are not monotone functions of time. |
2107.03387 | Tim Cvetko | Tim Cvetko, Tinkara Robek | Sleep syndromes onset detection based on automatic sleep staging
algorithm | 12 pages, 3 figures | null | null | null | q-bio.NC cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel method and a practical approach to
predicting early onsets of sleep syndromes, including restless leg syndrome and
insomnia, based on an algorithm comprising two modules. A Fast
Fourier Transform is applied to 30-second epochs of EEG recordings to
provide localized time-frequency information, and a deep convolutional LSTM
neural network is trained for sleep stage classification. Automating sleep
stage detection from EEG data offers great potential for tackling sleep
irregularities on a daily basis. Thus, a novel approach for sleep stage
classification is proposed which combines the best of signal processing and
statistics. In this study, we used the PhysioNet Sleep European Data Format
(EDF) Database. The code evaluation showed impressive results, reaching an
accuracy of 86.43, precision of 77.76, recall of 93.32, F1-score of 89.12, with
the final mean false error loss of 0.09.
| [
{
"created": "Wed, 7 Jul 2021 15:38:47 GMT",
"version": "v1"
}
] | 2021-07-09 | [
[
"Cvetko",
"Tim",
""
],
[
"Robek",
"Tinkara",
""
]
] | In this paper, we propose a novel method and a practical approach to predicting early onsets of sleep syndromes, including restless leg syndrome and insomnia, based on an algorithm comprising two modules. A Fast Fourier Transform is applied to 30-second epochs of EEG recordings to provide localized time-frequency information, and a deep convolutional LSTM neural network is trained for sleep stage classification. Automating sleep stage detection from EEG data offers great potential for tackling sleep irregularities on a daily basis. Thus, a novel approach for sleep stage classification is proposed which combines the best of signal processing and statistics. In this study, we used the PhysioNet Sleep European Data Format (EDF) Database. The code evaluation showed impressive results, reaching an accuracy of 86.43, precision of 77.76, recall of 93.32, F1-score of 89.12, with the final mean false error loss of 0.09. |
1108.6062 | Konstantin Klemm | Gunnar Boldhaus, Florian Greil, Konstantin Klemm | Prediction of lethal and synthetically lethal knock-outs in regulatory
networks | published version, 10 pages, 6 figures, 2 tables; supplement at
http://www.bioinf.uni-leipzig.de/publications/supplements/11-018 | Theory in Biosciences 132, 17-25 (2013) | 10.1007/s12064-012-0164-1 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The complex interactions involved in regulation of a cell's function are
captured by its interaction graph. More often than not, detailed knowledge
about enhancing or suppressive regulatory influences and cooperative effects is
lacking and merely the presence or absence of directed interactions is known.
Here we investigate to what extent such reduced information allows us to forecast
the effect of a knock-out or a combination of knock-outs. Specifically, we ask
to what degree the lethality of eliminating nodes may be predicted by their network
centrality, such as degree and betweenness, without knowing the function of the
system. The function is taken as the ability to reproduce a fixed point under a
discrete Boolean dynamics. We investigate two types of stochastically generated
networks: fully random networks and structures grown with a mechanism of node
duplication and subsequent divergence of interactions. On all networks we find
that the out-degree is a good predictor of the lethality of a single node
knock-out. For knock-outs of node pairs, the fraction of successors shared
between the two knocked-out nodes (out-overlap) is a good predictor of
synthetic lethality. Out-degree and out-overlap are locally defined and
computationally simple centrality measures that provide a predictive power
close to the optimal predictor.
| [
{
"created": "Tue, 30 Aug 2011 20:00:58 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Feb 2013 14:45:41 GMT",
"version": "v2"
}
] | 2013-02-15 | [
[
"Boldhaus",
"Gunnar",
""
],
[
"Greil",
"Florian",
""
],
[
"Klemm",
"Konstantin",
""
]
] | The complex interactions involved in regulation of a cell's function are captured by its interaction graph. More often than not, detailed knowledge about enhancing or suppressive regulatory influences and cooperative effects is lacking and merely the presence or absence of directed interactions is known. Here we investigate to what extent such reduced information allows us to forecast the effect of a knock-out or a combination of knock-outs. Specifically, we ask to what degree the lethality of eliminating nodes may be predicted by their network centrality, such as degree and betweenness, without knowing the function of the system. The function is taken as the ability to reproduce a fixed point under a discrete Boolean dynamics. We investigate two types of stochastically generated networks: fully random networks and structures grown with a mechanism of node duplication and subsequent divergence of interactions. On all networks we find that the out-degree is a good predictor of the lethality of a single node knock-out. For knock-outs of node pairs, the fraction of successors shared between the two knocked-out nodes (out-overlap) is a good predictor of synthetic lethality. Out-degree and out-overlap are locally defined and computationally simple centrality measures that provide a predictive power close to the optimal predictor. |
1904.07113 | Anindita Bhadra | Debottam Bhattacharjee, Rohan Sarkar, Shubhra Sau and Anindita Bhadra | A tale of two species: How humans shape dog behaviour in urban habitats | 3 figures, 14 pages of supplementary material | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species inhabiting urban environments experience enormous anthropogenic
stress. Behavioural plasticity and flexibility of temperament are crucial to
successful urban-adaptation. Urban free-ranging dogs experience variable human
impact, from positive to negative and represent an ideal system to evaluate the
effects of human-induced stress on behavioural plasticity. We tested 600 adult
dogs from 60 sites across India, categorised as high - HF, low - LF, and
intermediate - IF human flux zones, to understand their sociability towards an
unfamiliar human. Dogs in the HF and IF zones were bolder than their shy
counterparts in LF zones. The IF zone dogs were the most sociable. This is the
first-ever study aimed at understanding how an animal's experiences of
interactions with humans in its immediate environment might shape its responses
to humans. This is highly relevant in the context of human-animal
conflict induced by rapid urbanization and habitat loss across the world.
| [
{
"created": "Fri, 12 Apr 2019 10:59:47 GMT",
"version": "v1"
}
] | 2019-04-16 | [
[
"Bhattacharjee",
"Debottam",
""
],
[
"Sarkar",
"Rohan",
""
],
[
"Sau",
"Shubhra",
""
],
[
"Bhadra",
"Anindita",
""
]
] | Species inhabiting urban environments experience enormous anthropogenic stress. Behavioural plasticity and flexibility of temperament are crucial to successful urban-adaptation. Urban free-ranging dogs experience variable human impact, from positive to negative and represent an ideal system to evaluate the effects of human-induced stress on behavioural plasticity. We tested 600 adult dogs from 60 sites across India, categorised as high - HF, low - LF, and intermediate - IF human flux zones, to understand their sociability towards an unfamiliar human. Dogs in the HF and IF zones were bolder than their shy counterparts in LF zones. The IF zone dogs were the most sociable. This is the first-ever study aimed at understanding how an animal's experiences of interactions with humans in its immediate environment might shape its responses to humans. This is highly relevant in the context of human-animal conflict induced by rapid urbanization and habitat loss across the world. |
2209.04406 | Camille Noufi | Camille Noufi, Adam C. Lammert, Daryush D. Mehta, James R. Williamson,
Gregory Ciccarelli, Douglas Sturim, Jordan R. Green, Thomas F. Quatieri and
Thomas F. Campbell | Longitudinal Acoustic Speech Tracking Following Pediatric Traumatic
Brain Injury | null | null | null | null | q-bio.NC cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recommendations for common outcome measures following pediatric traumatic
brain injury (TBI) support the integration of instrumental measurements
alongside perceptual assessment in recovery and treatment plans. A
comprehensive set of sensitive, robust and non-invasive measurements is
therefore essential in assessing variations in speech characteristics over time
following pediatric TBI. In this article, we study the changes in the acoustic
speech patterns of a pediatric cohort of ten subjects diagnosed with severe
TBI. We extract a diverse set of both well-known and novel acoustic features
from child speech recorded throughout the year after the child produced
intelligible words. These features are analyzed individually and by speech
subsystem, within-subject and across the cohort. As a group, older children
exhibit highly significant (p<0.01) increases in pitch variation and phoneme
diversity, shortened pause length, and steadying articulation rate variability.
Younger children exhibit similar steadied rate variability alongside an
increase in formant-based articulation complexity. Correlation analysis of the
feature set with age and comparisons to normative developmental data confirm
that age at injury plays a significant role in framing the recovery trajectory.
Nearly all speech features significantly change (p<0.05) for the cohort as a
whole, confirming that acoustic measures supplementing perceptual assessment
are needed to identify efficacious treatment targets for speech therapy
following TBI.
| [
{
"created": "Fri, 9 Sep 2022 17:18:41 GMT",
"version": "v1"
}
] | 2022-09-12 | [
[
"Noufi",
"Camille",
""
],
[
"Lammert",
"Adam C.",
""
],
[
"Mehta",
"Daryush D.",
""
],
[
"Williamson",
"James R.",
""
],
[
"Ciccarelli",
"Gregory",
""
],
[
"Sturim",
"Douglas",
""
],
[
"Green",
"Jordan R.",
""
],
[
"Quatieri",
"Thomas F.",
""
],
[
"Campbell",
"Thomas F.",
""
]
] | Recommendations for common outcome measures following pediatric traumatic brain injury (TBI) support the integration of instrumental measurements alongside perceptual assessment in recovery and treatment plans. A comprehensive set of sensitive, robust and non-invasive measurements is therefore essential in assessing variations in speech characteristics over time following pediatric TBI. In this article, we study the changes in the acoustic speech patterns of a pediatric cohort of ten subjects diagnosed with severe TBI. We extract a diverse set of both well-known and novel acoustic features from child speech recorded throughout the year after the child produced intelligible words. These features are analyzed individually and by speech subsystem, within-subject and across the cohort. As a group, older children exhibit highly significant (p<0.01) increases in pitch variation and phoneme diversity, shortened pause length, and steadying articulation rate variability. Younger children exhibit similar steadied rate variability alongside an increase in formant-based articulation complexity. Correlation analysis of the feature set with age and comparisons to normative developmental data confirm that age at injury plays a significant role in framing the recovery trajectory. Nearly all speech features significantly change (p<0.05) for the cohort as a whole, confirming that acoustic measures supplementing perceptual assessment are needed to identify efficacious treatment targets for speech therapy following TBI. |
1808.04075 | Dirk Ostwald | Dirk Ostwald, Sebastian Schneider, Rasmus Bruckner, Lilla Horvath | Random field theory-based p-values: A review of the SPM implementation | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | P-values and null-hypothesis significance testing are popular data-analytical
tools in functional neuroimaging. Sparked by the analysis of resting-state fMRI
data, there has been a resurgence of interest in the validity of some of the
p-values evaluated with the widely used software SPM in recent years. The
default parametric p-values reported in SPM are based on the application of
results from random field theory to statistical parametric maps, a framework
commonly referred to as RFT. While RFT was established two decades ago and has
since been applied in a plethora of fMRI studies, there does not exist a
unified documentation of the mathematical and computational underpinnings of
RFT as implemented in current versions of SPM. Here, we provide such a
documentation with the aim of contributing to contemporary efforts towards
higher levels of computational transparency in functional neuroimaging.
| [
{
"created": "Mon, 13 Aug 2018 06:18:50 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Jan 2019 12:57:02 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Aug 2021 12:17:36 GMT",
"version": "v3"
}
] | 2021-08-10 | [
[
"Ostwald",
"Dirk",
""
],
[
"Schneider",
"Sebastian",
""
],
[
"Bruckner",
"Rasmus",
""
],
[
"Horvath",
"Lilla",
""
]
] | P-values and null-hypothesis significance testing are popular data-analytical tools in functional neuroimaging. Sparked by the analysis of resting-state fMRI data, there has been a resurgence of interest in the validity of some of the p-values evaluated with the widely used software SPM in recent years. The default parametric p-values reported in SPM are based on the application of results from random field theory to statistical parametric maps, a framework commonly referred to as RFT. While RFT was established two decades ago and has since been applied in a plethora of fMRI studies, there does not exist a unified documentation of the mathematical and computational underpinnings of RFT as implemented in current versions of SPM. Here, we provide such a documentation with the aim of contributing to contemporary efforts towards higher levels of computational transparency in functional neuroimaging. |
2205.03135 | Simon Olsson | Christopher Kolloff and Simon Olsson | Machine Learning in Molecular Dynamics Simulations of Biomolecular
Systems | 36 pages, 4 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning (ML) has emerged as a pervasive tool in science,
engineering, and beyond. Its success has also led to several synergies with
molecular dynamics (MD) simulations, which we use to identify and characterize
the major metastable states of molecular systems. Typically, we aim to
determine the relative stabilities of these states and how rapidly they
interchange. This information allows mechanistic descriptions of molecular
mechanisms, enables a quantitative comparison with experiments, and facilitates
their rational design. ML impacts all aspects of MD simulations -- from
analyzing the data and accelerating sampling to defining more efficient or more
accurate simulation models.
| [
{
"created": "Fri, 6 May 2022 10:56:51 GMT",
"version": "v1"
}
] | 2022-05-09 | [
[
"Kolloff",
"Christopher",
""
],
[
"Olsson",
"Simon",
""
]
] | Machine learning (ML) has emerged as a pervasive tool in science, engineering, and beyond. Its success has also led to several synergies with molecular dynamics (MD) simulations, which we use to identify and characterize the major metastable states of molecular systems. Typically, we aim to determine the relative stabilities of these states and how rapidly they interchange. This information allows mechanistic descriptions of molecular mechanisms, enables a quantitative comparison with experiments, and facilitates their rational design. ML impacts all aspects of MD simulations -- from analyzing the data and accelerating sampling to defining more efficient or more accurate simulation models. |
1812.11384 | William Bialek | Victoria Antonetti, William Bialek, Thomas Gregor, Gentian Muhaxheri,
Mariela Petkova, and Martin Scheeler | Precise spatial scaling in the early fly embryo | null | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The early fly embryo offers a relatively pure version of the problem of
spatial scaling in biological pattern formation. Within three hours, a
"blueprint" for the final segmented body plan of the animal is visible in
striped patterns of gene expression. We measure the positions of these stripes
in an ensemble of 100+ embryos from a laboratory strain of Drosophila
melanogaster, under controlled conditions. These embryos vary in length by only
4% (rms), yet stripes are positioned with 1% accuracy; precision and scaling of
the pattern are intertwined. We can see directly the variation of absolute
stripe positions with length, and the precision is so high as to exclude
alternatives, such as combinations of unscaled signals from the two ends of the
embryo.
| [
{
"created": "Sat, 29 Dec 2018 15:38:52 GMT",
"version": "v1"
}
] | 2019-01-01 | [
[
"Antonetti",
"Victoria",
""
],
[
"Bialek",
"William",
""
],
[
"Gregor",
"Thomas",
""
],
[
"Muhaxheri",
"Gentian",
""
],
[
"Petkova",
"Mariela",
""
],
[
"Scheeler",
"Martin",
""
]
] | The early fly embryo offers a relatively pure version of the problem of spatial scaling in biological pattern formation. Within three hours, a "blueprint" for the final segmented body plan of the animal is visible in striped patterns of gene expression. We measure the positions of these stripes in an ensemble of 100+ embryos from a laboratory strain of Drosophila melanogaster, under controlled conditions. These embryos vary in length by only 4% (rms), yet stripes are positioned with 1% accuracy; precision and scaling of the pattern are intertwined. We can see directly the variation of absolute stripe positions with length, and the precision is so high as to exclude alternatives, such as combinations of unscaled signals from the two ends of the embryo. |
1707.03532 | Brian Camley | Brian A. Camley and Wouter-Jan Rappel | Cell-to-cell variation sets a tissue-rheology-dependent bound on
collective gradient sensing | null | PNAS (2017) | 10.1073/pnas.1712309114 | null | q-bio.CB cond-mat.soft physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When a single cell senses a chemical gradient and chemotaxes, stochastic
receptor-ligand binding can be a fundamental limit to the cell's accuracy. For
clusters of cells responding to gradients, however, there is a critical
difference: even genetically identical cells have differing responses to
chemical signals. With theory and simulation, we show collective chemotaxis is
limited by cell-to-cell variation in signaling. We find that when different
cells cooperate the resulting bias can be much larger than the effects of
ligand-receptor binding. Specifically, when a strongly-responding cell is at
one end of a cell cluster, cluster motion is biased toward that cell. These
errors are mitigated if clusters average measurements over times long enough
for cells to rearrange. In consequence, fluid clusters are better able to sense
gradients: we derive a link between cluster accuracy, cell-to-cell variation,
and the cluster rheology. Because of this connection, increasing the noisiness
of individual cell motion can actually increase the collective accuracy of a
cluster by improving fluidity.
| [
{
"created": "Wed, 12 Jul 2017 04:01:44 GMT",
"version": "v1"
}
] | 2017-11-09 | [
[
"Camley",
"Brian A.",
""
],
[
"Rappel",
"Wouter-Jan",
""
]
] | When a single cell senses a chemical gradient and chemotaxes, stochastic receptor-ligand binding can be a fundamental limit to the cell's accuracy. For clusters of cells responding to gradients, however, there is a critical difference: even genetically identical cells have differing responses to chemical signals. With theory and simulation, we show collective chemotaxis is limited by cell-to-cell variation in signaling. We find that when different cells cooperate the resulting bias can be much larger than the effects of ligand-receptor binding. Specifically, when a strongly-responding cell is at one end of a cell cluster, cluster motion is biased toward that cell. These errors are mitigated if clusters average measurements over times long enough for cells to rearrange. In consequence, fluid clusters are better able to sense gradients: we derive a link between cluster accuracy, cell-to-cell variation, and the cluster rheology. Because of this connection, increasing the noisiness of individual cell motion can actually increase the collective accuracy of a cluster by improving fluidity. |
0805.3583 | Vladimir Ivancevic | Vladimir G. Ivancevic | New Mechanics of Traumatic Brain Injury | 18 pages, 1 figure, Latex | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction and prevention of traumatic brain injury is a very important
aspect of preventive medical science. This paper proposes a new coupled
loading-rate hypothesis for the traumatic brain injury (TBI), which states that
the main cause of the TBI is an external Euclidean jolt, or SE(3)-jolt, an
impulsive loading that strikes the head in several coupled degrees-of-freedom
simultaneously. To show this, based on the previously defined covariant force
law, we formulate the coupled Newton-Euler dynamics of brain's micro-motions
within the cerebrospinal fluid and derive from it the coupled SE(3)-jolt
dynamics. The SE(3)-jolt is a cause of the TBI in two forms of brain's rapid
discontinuous deformations: translational dislocations and rotational
disclinations. Brain's dislocations and disclinations, caused by the
SE(3)-jolt, are described using the Cosserat multipolar viscoelastic continuum
brain model.
Keywords: Traumatic brain injuries, coupled loading-rate hypothesis,
Euclidean jolt, coupled Newton-Euler dynamics, brain's dislocations and
disclinations
| [
{
"created": "Fri, 23 May 2008 06:14:02 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Sep 2008 04:18:08 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Nov 2008 02:16:26 GMT",
"version": "v3"
}
] | 2008-11-18 | [
[
"Ivancevic",
"Vladimir G.",
""
]
] | The prediction and prevention of traumatic brain injury is a very important aspect of preventive medical science. This paper proposes a new coupled loading-rate hypothesis for the traumatic brain injury (TBI), which states that the main cause of the TBI is an external Euclidean jolt, or SE(3)-jolt, an impulsive loading that strikes the head in several coupled degrees-of-freedom simultaneously. To show this, based on the previously defined covariant force law, we formulate the coupled Newton-Euler dynamics of brain's micro-motions within the cerebrospinal fluid and derive from it the coupled SE(3)-jolt dynamics. The SE(3)-jolt is a cause of the TBI in two forms of brain's rapid discontinuous deformations: translational dislocations and rotational disclinations. Brain's dislocations and disclinations, caused by the SE(3)-jolt, are described using the Cosserat multipolar viscoelastic continuum brain model. Keywords: Traumatic brain injuries, coupled loading-rate hypothesis, Euclidean jolt, coupled Newton-Euler dynamics, brain's dislocations and disclinations |
2111.04107 | Pavol Drot\'ar | Pavol Drot\'ar, Arian Rokkum Jamasb, Ben Day, C\u{a}t\u{a}lina Cangea,
Pietro Li\`o | Structure-aware generation of drug-like molecules | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structure-based drug design involves finding ligand molecules that exhibit
structural and chemical complementarity to protein pockets. Deep generative
methods have shown promise in proposing novel molecules from scratch (de-novo
design), avoiding exhaustive virtual screening of chemical space. Most
generative de-novo models fail to incorporate detailed ligand-protein
interactions and 3D pocket structures. We propose a novel supervised model that
generates molecular graphs jointly with 3D pose in a discretised molecular
space. Molecules are built atom-by-atom inside pockets, guided by structural
information from crystallographic data. We evaluate our model using a docking
benchmark and find that guided generation improves predicted binding affinities
by 8% and drug-likeness scores by 10% over the baseline. Furthermore, our model
proposes molecules with binding scores exceeding some known ligands, which
could be useful in future wet-lab studies.
| [
{
"created": "Sun, 7 Nov 2021 15:19:54 GMT",
"version": "v1"
}
] | 2021-11-09 | [
[
"Drotár",
"Pavol",
""
],
[
"Jamasb",
"Arian Rokkum",
""
],
[
"Day",
"Ben",
""
],
[
"Cangea",
"Cătălina",
""
],
[
"Liò",
"Pietro",
""
]
] | Structure-based drug design involves finding ligand molecules that exhibit structural and chemical complementarity to protein pockets. Deep generative methods have shown promise in proposing novel molecules from scratch (de-novo design), avoiding exhaustive virtual screening of chemical space. Most generative de-novo models fail to incorporate detailed ligand-protein interactions and 3D pocket structures. We propose a novel supervised model that generates molecular graphs jointly with 3D pose in a discretised molecular space. Molecules are built atom-by-atom inside pockets, guided by structural information from crystallographic data. We evaluate our model using a docking benchmark and find that guided generation improves predicted binding affinities by 8% and drug-likeness scores by 10% over the baseline. Furthermore, our model proposes molecules with binding scores exceeding some known ligands, which could be useful in future wet-lab studies. |
2002.08813 | Alain Destexhe | Alain Destexhe and Jonathan D. Touboul | Is there sufficient evidence for criticality in cortical systems? | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies have found evidence that the brain operates at a critical point,
a process known as self-organized criticality. A recent paper found
remarkable scalings suggestive of criticality in systems as different as neural
cultures, anesthetized or awake brains. We point out here that the diversity of
these states would question any claimed role of criticality in information
processing. Furthermore, we show that two non-critical systems pass all the
tests for criticality, a control that was not provided in the original article.
We conclude that such false positives demonstrate that the presence of
criticality in the brain is still not proven and that we need better methods
than scaling analyses.
| [
{
"created": "Thu, 20 Feb 2020 15:47:41 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jul 2020 19:26:44 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Dec 2020 16:05:58 GMT",
"version": "v3"
}
] | 2020-12-29 | [
[
"Destexhe",
"Alain",
""
],
[
"Touboul",
"Jonathan D.",
""
]
] | Many studies have found evidence that the brain operates at a critical point, a process known as self-organized criticality. A recent paper found remarkable scalings suggestive of criticality in systems as different as neural cultures, anesthetized or awake brains. We point out here that the diversity of these states would question any claimed role of criticality in information processing. Furthermore, we show that two non-critical systems pass all the tests for criticality, a control that was not provided in the original article. We conclude that such false positives demonstrate that the presence of criticality in the brain is still not proven and that we need better methods than scaling analyses. |
2405.19221 | Megan Peters | Seyedmehdi Orouji, Martin C. Liu, Tal Korem, Megan A. K. Peters | Domain adaptation in small-scale and heterogeneous biological datasets | main manuscript + supplement | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning techniques are steadily becoming more important in modern
biology, and are used to build predictive models, discover patterns, and
investigate biological problems. However, models trained on one dataset are
often not generalizable to other datasets from different cohorts or
laboratories, due to differences in the statistical properties of these
datasets. These could stem from technical differences, such as the measurement
technique used, or from relevant biological differences between the populations
studied. Domain adaptation, a type of transfer learning, can alleviate this
problem by aligning the statistical distributions of features and samples among
different datasets so that similar models can be applied across them. However,
a majority of state-of-the-art domain adaptation methods are designed to work
with large-scale data, mostly text and images, while biological datasets often
suffer from small sample sizes, and possess complexities such as heterogeneity
of the feature space. This Review aims to synthetically discuss domain
adaptation methods in the context of small-scale and highly heterogeneous
biological data. We describe the benefits and challenges of domain adaptation
in biological research and critically discuss some of its objectives,
strengths, and weaknesses through key representative methodologies. We argue
for the incorporation of domain adaptation techniques to the computational
biologist's toolkit, with further development of customized approaches.
| [
{
"created": "Wed, 29 May 2024 16:01:15 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Orouji",
"Seyedmehdi",
""
],
[
"Liu",
"Martin C.",
""
],
[
"Korem",
"Tal",
""
],
[
"Peters",
"Megan A. K.",
""
]
] | Machine learning techniques are steadily becoming more important in modern biology, and are used to build predictive models, discover patterns, and investigate biological problems. However, models trained on one dataset are often not generalizable to other datasets from different cohorts or laboratories, due to differences in the statistical properties of these datasets. These could stem from technical differences, such as the measurement technique used, or from relevant biological differences between the populations studied. Domain adaptation, a type of transfer learning, can alleviate this problem by aligning the statistical distributions of features and samples among different datasets so that similar models can be applied across them. However, a majority of state-of-the-art domain adaptation methods are designed to work with large-scale data, mostly text and images, while biological datasets often suffer from small sample sizes, and possess complexities such as heterogeneity of the feature space. This Review aims to synthetically discuss domain adaptation methods in the context of small-scale and highly heterogeneous biological data. We describe the benefits and challenges of domain adaptation in biological research and critically discuss some of its objectives, strengths, and weaknesses through key representative methodologies. We argue for the incorporation of domain adaptation techniques to the computational biologist's toolkit, with further development of customized approaches. |
2405.06836 | Salma Ahmed | Salma J. Ahmed, Mustafa A. Elattar | Improving Targeted Molecule Generation through Language Model
Fine-Tuning Via Reinforcement Learning | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Developing new drugs is laborious and costly, demanding extensive time
investment. In this study, we introduce an innovative de-novo drug design
strategy, which harnesses the capabilities of language models to devise
targeted drugs for specific proteins. Employing a Reinforcement Learning (RL)
framework utilizing Proximal Policy Optimization (PPO), we refine the model to
acquire a policy for generating drugs tailored to protein targets. Our method
integrates a composite reward function, combining considerations of drug-target
interaction and molecular validity. Following RL fine-tuning, our approach
demonstrates promising outcomes, yielding notable improvements in molecular
validity, interaction efficacy, and critical chemical properties, achieving
65.37 for Quantitative Estimation of Drug-likeness (QED), 321.55 for Molecular
Weight (MW), and 4.47 for Octanol-Water Partition Coefficient (logP),
respectively. Furthermore, out of the generated drugs, only 0.041\% do not
exhibit novelty.
| [
{
"created": "Fri, 10 May 2024 22:19:12 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Ahmed",
"Salma J.",
""
],
[
"Elattar",
"Mustafa A.",
""
]
] | Developing new drugs is laborious and costly, demanding extensive time investment. In this study, we introduce an innovative de-novo drug design strategy, which harnesses the capabilities of language models to devise targeted drugs for specific proteins. Employing a Reinforcement Learning (RL) framework utilizing Proximal Policy Optimization (PPO), we refine the model to acquire a policy for generating drugs tailored to protein targets. Our method integrates a composite reward function, combining considerations of drug-target interaction and molecular validity. Following RL fine-tuning, our approach demonstrates promising outcomes, yielding notable improvements in molecular validity, interaction efficacy, and critical chemical properties, achieving 65.37 for Quantitative Estimation of Drug-likeness (QED), 321.55 for Molecular Weight (MW), and 4.47 for Octanol-Water Partition Coefficient (logP), respectively. Furthermore, out of the generated drugs, only 0.041\% do not exhibit novelty. |
2303.00864 | Tom Chou | Mingtao Xia, Xiangting Li, Tom Chou | Overcompensation of transient and permanent death rate increases in
age-structured models with cannibalistic interactions | 19 pages including mathematical appendices, 4 figures | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There has been renewed interest in understanding the mathematical structure
of ecological population models that lead to overcompensation, the process by
which a population recovers to a higher level after suffering a permanent
increase in predation or harvesting. Here, we apply a recently formulated
kinetic population theory to formally construct an age-structured
single-species population model that includes a cannibalistic interaction in
which older individuals prey on younger ones. Depending on the age-dependent
structure of this interaction, our model can exhibit transient or steady-state
overcompensation of an increased death rate as well as oscillations of the
total population, both phenomena that have been observed in ecological systems.
Analytic and numerical analysis of our model reveals sufficient conditions for
overcompensation and oscillations. We also show how our structured population
partial integrodifferential equation (PIDE) model can be reduced to coupled ODE
models representing piecewise constant parameter domains, providing additional
mathematical insight into the emergence of overcompensation.
| [
{
"created": "Wed, 1 Mar 2023 23:31:17 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 03:27:21 GMT",
"version": "v2"
}
] | 2024-03-18 | [
[
"Xia",
"Mingtao",
""
],
[
"Li",
"Xiangting",
""
],
[
"Chou",
"Tom",
""
]
] | There has been renewed interest in understanding the mathematical structure of ecological population models that lead to overcompensation, the process by which a population recovers to a higher level after suffering a permanent increase in predation or harvesting. Here, we apply a recently formulated kinetic population theory to formally construct an age-structured single-species population model that includes a cannibalistic interaction in which older individuals prey on younger ones. Depending on the age-dependent structure of this interaction, our model can exhibit transient or steady-state overcompensation of an increased death rate as well as oscillations of the total population, both phenomena that have been observed in ecological systems. Analytic and numerical analysis of our model reveals sufficient conditions for overcompensation and oscillations. We also show how our structured population partial integrodifferential equation (PIDE) model can be reduced to coupled ODE models representing piecewise constant parameter domains, providing additional mathematical insight into the emergence of overcompensation. |
q-bio/0701025 | Agnes Szejka | Agnes Szejka, Barbara Drossel | Evolution of Canalizing Boolean Networks | 8 pages, 10 figures; revised and extended version | Eur. Phys. J. B 56, 373-380 (2007) | 10.1140/epjb/e2007-00135-2 | null | q-bio.PE q-bio.MN | null | Boolean networks with canalizing functions are used to model gene regulatory
networks. In order to learn how such networks may behave under evolutionary
forces, we simulate the evolution of a single Boolean network by means of an
adaptive walk, which allows us to explore the fitness landscape. Mutations
change the connections and the functions of the nodes. Our fitness criterion is
the robustness of the dynamical attractors against small perturbations. We find
that with this fitness criterion the global maximum is always reached and that
there is a huge neutral space of 100% fitness. Furthermore, in spite of having
such a high degree of robustness, the evolved networks still share many
features with "chaotic" networks.
| [
{
"created": "Wed, 17 Jan 2007 18:27:26 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jun 2007 08:54:02 GMT",
"version": "v2"
}
] | 2011-11-09 | [
[
"Szejka",
"Agnes",
""
],
[
"Drossel",
"Barbara",
""
]
] | Boolean networks with canalizing functions are used to model gene regulatory networks. In order to learn how such networks may behave under evolutionary forces, we simulate the evolution of a single Boolean network by means of an adaptive walk, which allows us to explore the fitness landscape. Mutations change the connections and the functions of the nodes. Our fitness criterion is the robustness of the dynamical attractors against small perturbations. We find that with this fitness criterion the global maximum is always reached and that there is a huge neutral space of 100% fitness. Furthermore, in spite of having such a high degree of robustness, the evolved networks still share many features with "chaotic" networks. |
1903.01652 | Margaret Frank | Mao Li, Margaret H. Frank, and Zo\"e Migicovsky | ColourQuant: a high-throughput technique to extract and quantify colour
phenotypes from plant images | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Colour patterning contributes to important plant traits that influence
ecological interactions, horticultural breeding, and agricultural performance.
High-throughput phenotyping of colour is valuable for understanding plant
biology and selecting for traits related to colour during plant breeding. Here
we present ColourQuant, an automated high-throughput pipeline that allows users
to extract colour phenotypes from images. This pipeline includes methods for
colour phenotyping using mean pixel values, Gaussian density estimator of Lab
colour, and the analysis of shape-independent colour patterning by circular
deformation.
| [
{
"created": "Tue, 5 Mar 2019 03:55:24 GMT",
"version": "v1"
}
] | 2019-03-06 | [
[
"Li",
"Mao",
""
],
[
"Frank",
"Margaret H.",
""
],
[
"Migicovsky",
"Zoë",
""
]
] | Colour patterning contributes to important plant traits that influence ecological interactions, horticultural breeding, and agricultural performance. High-throughput phenotyping of colour is valuable for understanding plant biology and selecting for traits related to colour during plant breeding. Here we present ColourQuant, an automated high-throughput pipeline that allows users to extract colour phenotypes from images. This pipeline includes methods for colour phenotyping using mean pixel values, Gaussian density estimator of Lab colour, and the analysis of shape-independent colour patterning by circular deformation. |
q-bio/0411053 | Ralf Metzler | Tobias Ambjornsson and Ralf Metzler | Coupled dynamics of DNA-breathing and single-stranded DNA binding
proteins | REVTeX4, 4 pages, 5 figures, revised version | null | null | null | q-bio.BM cond-mat.stat-mech | null | We study the size fluctuations of a local denaturation zone in a DNA molecule
in the presence of proteins that selectively bind to single-stranded DNA, based
on a (2+1)-dimensional master equation. By tuning the physical parameters we
can drive the system from undisturbed bubble fluctuations to full, binding
protein-induced denaturation. We determine the effective free energy landscape
of the DNA-bubble and explore its relaxation modes.
| [
{
"created": "Tue, 30 Nov 2004 13:06:58 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jun 2005 10:57:06 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Ambjornsson",
"Tobias",
""
],
[
"Metzler",
"Ralf",
""
]
] | We study the size fluctuations of a local denaturation zone in a DNA molecule in the presence of proteins that selectively bind to single-stranded DNA, based on a (2+1)-dimensional master equation. By tuning the physical parameters we can drive the system from undisturbed bubble fluctuations to full, binding protein-induced denaturation. We determine the effective free energy landscape of the DNA-bubble and explore its relaxation modes. |
1902.09360 | Les Hatton | Les Hatton, Gregory Warr | CoHSI V: Identical multiple scale-independent systems within genomes and
computer software | 22 pages, 13 figures, 35 references | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mechanism-free and symbol-agnostic conservation principle, the Conservation
of Hartley-Shannon Information (CoHSI) is predicted to constrain the structure
of discrete systems regardless of their origin or function. Despite their
distinct provenance, genomes and computer software share a simple structural
property; they are linear symbol-based discrete systems, and thus they present
an opportunity to test in a comparative context the predictions of CoHSI. Here,
without any consideration of, or relevance to, their role in specifying
function, we identify that 10 representative genomes (from microbes to human)
and a large collection of software contain identically structured nested
subsystems. In the case of base sequences in genomes, CoHSI predicts that if we
split the genome into n-tuples (a 2-tuple is a pair of consecutive bases; a
3-tuple is a trio and so on), without regard for whether or not a region is
coding, then each collection of n-tuples will constitute a homogeneous discrete
system and will obey a power-law in frequency of occurrence of the n-tuples. We
consider 1-, 2-, 3-, 4-, 5-, 6-, 7- and 8-tuples of ten species and demonstrate
that the predicted power-law behavior is emphatically present, and furthermore
as predicted, is insensitive to the start window for the tuple extraction i.e.
the reading frame is irrelevant.
We go on to provide a proof of Chargaff's second parity rule and on the basis
of this proof, predict higher order tuple parity rules which we then identify
in the genome data.
CoHSI predicts precisely the same behavior in computer software. This
prediction was tested and confirmed using 2-, 3- and 4-tuples of the
hexadecimal representation of machine code in multiple computer programs,
underlining the fundamental role played by CoHSI in defining the landscape in
which discrete symbol-based systems must operate.
| [
{
"created": "Mon, 25 Feb 2019 15:24:22 GMT",
"version": "v1"
}
] | 2019-02-26 | [
[
"Hatton",
"Les",
""
],
[
"Warr",
"Gregory",
""
]
] | A mechanism-free and symbol-agnostic conservation principle, the Conservation of Hartley-Shannon Information (CoHSI) is predicted to constrain the structure of discrete systems regardless of their origin or function. Despite their distinct provenance, genomes and computer software share a simple structural property; they are linear symbol-based discrete systems, and thus they present an opportunity to test in a comparative context the predictions of CoHSI. Here, without any consideration of, or relevance to, their role in specifying function, we identify that 10 representative genomes (from microbes to human) and a large collection of software contain identically structured nested subsystems. In the case of base sequences in genomes, CoHSI predicts that if we split the genome into n-tuples (a 2-tuple is a pair of consecutive bases; a 3-tuple is a trio and so on), without regard for whether or not a region is coding, then each collection of n-tuples will constitute a homogeneous discrete system and will obey a power-law in frequency of occurrence of the n-tuples. We consider 1-, 2-, 3-, 4-, 5-, 6-, 7- and 8-tuples of ten species and demonstrate that the predicted power-law behavior is emphatically present, and furthermore as predicted, is insensitive to the start window for the tuple extraction i.e. the reading frame is irrelevant. We go on to provide a proof of Chargaff's second parity rule and on the basis of this proof, predict higher order tuple parity rules which we then identify in the genome data. CoHSI predicts precisely the same behavior in computer software. This prediction was tested and confirmed using 2-, 3- and 4-tuples of the hexadecimal representation of machine code in multiple computer programs, underlining the fundamental role played by CoHSI in defining the landscape in which discrete symbol-based systems must operate. |
2301.07386 | Lingbin Bian | Lingbin Bian, Nizhuan Wang, Leonardo Novelli, Jonathan Keith, and
Adeel Razi | Hierarchical Bayesian inference for community detection and connectivity
of functional brain networks | null | null | null | null | q-bio.NC stat.AP | http://creativecommons.org/licenses/by/4.0/ | Many functional magnetic resonance imaging (fMRI) studies rely on estimates
of hierarchically organised brain networks whose segregation and integration
reflect the dynamic transitions of latent cognitive states. However, most
existing methods for estimating the community structure of networks from both
individual and group-level analysis neglect the variability between subjects
and lack validation. In this paper, we develop a new multilayer community
detection method based on Bayesian latent block modelling. The method can
robustly detect the group-level community structure of weighted functional
networks that give rise to hidden brain states with an unknown number of
communities and retain the variability of individual networks. For validation,
we propose a new community structure-based multivariate Gaussian generative
model to simulate synthetic signal. Our result shows that the inferred
community memberships using hierarchical Bayesian analysis are consistent with
the predefined node labels in the generative model. The method is also tested
using real working memory task-fMRI data of 100 unrelated healthy subjects from
the Human Connectome Project. The results show distinctive community structure
patterns between 2-back, 0-back, and fixation conditions, which may reflect
cognitive and behavioural states under working memory task conditions.
| [
{
"created": "Wed, 18 Jan 2023 09:30:46 GMT",
"version": "v1"
},
{
"created": "Sun, 26 May 2024 13:34:59 GMT",
"version": "v2"
}
] | 2024-05-28 | [
[
"Bian",
"Lingbin",
""
],
[
"Wang",
"Nizhuan",
""
],
[
"Novelli",
"Leonardo",
""
],
[
"Keith",
"Jonathan",
""
],
[
"Razi",
"Adeel",
""
]
] | Many functional magnetic resonance imaging (fMRI) studies rely on estimates of hierarchically organised brain networks whose segregation and integration reflect the dynamic transitions of latent cognitive states. However, most existing methods for estimating the community structure of networks from both individual and group-level analysis neglect the variability between subjects and lack validation. In this paper, we develop a new multilayer community detection method based on Bayesian latent block modelling. The method can robustly detect the group-level community structure of weighted functional networks that give rise to hidden brain states with an unknown number of communities and retain the variability of individual networks. For validation, we propose a new community structure-based multivariate Gaussian generative model to simulate synthetic signal. Our result shows that the inferred community memberships using hierarchical Bayesian analysis are consistent with the predefined node labels in the generative model. The method is also tested using real working memory task-fMRI data of 100 unrelated healthy subjects from the Human Connectome Project. The results show distinctive community structure patterns between 2-back, 0-back, and fixation conditions, which may reflect cognitive and behavioural states under working memory task conditions. |
2106.05388 | Jiabin Tang | Jiabin Tang, Shivani Patel, Steve Gentleman, Paul Matthews | Neurological Consequences of COVID-19 Infection | 19 pages, 4 figures | null | null | null | q-bio.NC q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | COVID-19 infections have well described systemic manifestations, especially
respiratory problems. There are currently no specific treatments or vaccines
against the current strain. With higher case numbers, a range of neurological
symptoms are becoming apparent. The mechanisms responsible for these are not
well defined, other than those related to hypoxia and microthrombi. We
speculate that sustained systemic immune activation seen with SARS-CoV-2 may
also cause secondary autoimmune activation in the CNS. Patients with chronic
neurological diseases may be at higher risk because of chronic secondary
respiratory disease and potentially poor nutritional status. Here, we review
the impact of COVID-19 on people with chronic neurological diseases and
potential mechanisms. We believe special attention to protecting people with
neurodegenerative disease is warranted. We are concerned about a possible
delayed pandemic in the form of an increased burden of neurodegenerative
disease after acceleration of pathology by systemic COVID-19 infections.
| [
{
"created": "Wed, 9 Jun 2021 21:02:12 GMT",
"version": "v1"
}
] | 2021-06-11 | [
[
"Tang",
"Jiabin",
""
],
[
"Patel",
"Shivani",
""
],
[
"Gentleman",
"Steve",
""
],
[
"Matthews",
"Paul",
""
]
] | COVID-19 infections have well described systemic manifestations, especially respiratory problems. There are currently no specific treatments or vaccines against the current strain. With higher case numbers, a range of neurological symptoms are becoming apparent. The mechanisms responsible for these are not well defined, other than those related to hypoxia and microthrombi. We speculate that sustained systemic immune activation seen with SARS-CoV-2 may also cause secondary autoimmune activation in the CNS. Patients with chronic neurological diseases may be at higher risk because of chronic secondary respiratory disease and potentially poor nutritional status. Here, we review the impact of COVID-19 on people with chronic neurological diseases and potential mechanisms. We believe special attention to protecting people with neurodegenerative disease is warranted. We are concerned about a possible delayed pandemic in the form of an increased burden of neurodegenerative disease after acceleration of pathology by systemic COVID-19 infections. |
1907.00247 | Jude Kong | Jude D. Kong, Hao Wang, Tariq Siddique, Julia Foght, Kathleen Semple,
Zvonko Burkus, and Mark A. Lewis | Second-generation stoichiometric mathematical model to predict methane
emissions from oil sands tailings | null | null | 10.1016/j.scitotenv.2019.133645 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microbial metabolism of fugitive hydrocarbons produces greenhouse gas (GHG)
emissions from oil sands tailings ponds (OSTP) and end pit lakes (EPL) that
retain semisolid wastes from surface mining of oil sands ores. Predicting GHG
production, particularly methane (CH4), would help oil sands operators mitigate
tailings emissions and would assist regulators evaluating the trajectory of
reclamation scenarios. Using empirical datasets from laboratory incubation of
OSTP sediments with pertinent hydrocarbons, we developed a stoichiometric model
for CH4 generation by indigenous microbes. This model improved on previous
first-approximation models by considering long-term biodegradation kinetics for
18 relevant hydrocarbons from three different oil sands operations, lag times,
nutrient limitations, and microbial growth and death rates. Laboratory
measurements were used to estimate model parameter values and to validate the
new model. Goodness of fit analysis showed that the stoichiometric model
predicted CH4 production well; normalized mean square error analysis revealed
that it surpassed previous models. Comparison of model predictions with field
measurements of CH4 emissions further validated the new model. Importantly, the
model also identified parameters that are currently lacking but are needed to
enable future robust modeling of CH4 production from OSTP and EPL in situ.
| [
{
"created": "Sat, 29 Jun 2019 18:08:21 GMT",
"version": "v1"
}
] | 2019-08-29 | [
[
"Kong",
"Jude D.",
""
],
[
"Wang",
"Hao",
""
],
[
"Siddique",
"Tariq",
""
],
[
"Foght",
"Julia",
""
],
[
"Semple",
"Kathleen",
""
],
[
"Burkus",
"Zvonko",
""
],
[
"Lewis",
"Mark A.",
""
]
] | Microbial metabolism of fugitive hydrocarbons produces greenhouse gas (GHG) emissions from oil sands tailings ponds (OSTP) and end pit lakes (EPL) that retain semisolid wastes from surface mining of oil sands ores. Predicting GHG production, particularly methane (CH4), would help oil sands operators mitigate tailings emissions and would assist regulators evaluating the trajectory of reclamation scenarios. Using empirical datasets from laboratory incubation of OSTP sediments with pertinent hydrocarbons, we developed a stoichiometric model for CH4 generation by indigenous microbes. This model improved on previous first-approximation models by considering long-term biodegradation kinetics for 18 relevant hydrocarbons from three different oil sands operations, lag times, nutrient limitations, and microbial growth and death rates. Laboratory measurements were used to estimate model parameter values and to validate the new model. Goodness of fit analysis showed that the stoichiometric model predicted CH4 production well; normalized mean square error analysis revealed that it surpassed previous models. Comparison of model predictions with field measurements of CH4 emissions further validated the new model. Importantly, the model also identified parameters that are currently lacking but are needed to enable future robust modeling of CH4 production from OSTP and EPL in situ. |
0707.2076 | Orion Penner | Orion Penner, Vishal Sood, Gabe Musso, Kim Baskerville, Peter
Grassberger, Maya Paczuski | Node similarity within subgraphs of protein interaction networks | 10 pages, 5 figures. Edited for typos, clarity, figures improved for
readability | null | 10.1016/j.physa.2008.02.043 | null | q-bio.MN cond-mat.stat-mech | null | We propose a biologically motivated quantity, twinness, to evaluate local
similarity between nodes in a network. The twinness of a pair of nodes is the
number of connected, labeled subgraphs of size n in which the two nodes possess
identical neighbours. The graph animal algorithm is used to estimate twinness
for each pair of nodes (for subgraph sizes n=4 to n=12) in four different
protein interaction networks (PINs). These include an Escherichia coli PIN and
three Saccharomyces cerevisiae PINs -- each obtained using state-of-the-art
high throughput methods. In almost all cases, the average twinness of node
pairs is vastly higher than expected from a null model obtained by switching
links. For all n, we observe a difference in the ratio of type A twins (which
are unlinked pairs) to type B twins (which are linked pairs) distinguishing the
prokaryote E. coli from the eukaryote S. cerevisiae. Interaction similarity is
expected due to gene duplication, and whole genome duplication paralogues in S.
cerevisiae have been reported to co-cluster into the same complexes. Indeed, we
find that these paralogous proteins are over-represented as twins compared to
pairs chosen at random. These results indicate that twinness can detect
ancestral relationships from currently available PIN data.
| [
{
"created": "Fri, 13 Jul 2007 19:46:04 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Aug 2007 22:16:52 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Penner",
"Orion",
""
],
[
"Sood",
"Vishal",
""
],
[
"Musso",
"Gabe",
""
],
[
"Baskerville",
"Kim",
""
],
[
"Grassberger",
"Peter",
""
],
[
"Paczuski",
"Maya",
""
]
] | We propose a biologically motivated quantity, twinness, to evaluate local similarity between nodes in a network. The twinness of a pair of nodes is the number of connected, labeled subgraphs of size n in which the two nodes possess identical neighbours. The graph animal algorithm is used to estimate twinness for each pair of nodes (for subgraph sizes n=4 to n=12) in four different protein interaction networks (PINs). These include an Escherichia coli PIN and three Saccharomyces cerevisiae PINs -- each obtained using state-of-the-art high throughput methods. In almost all cases, the average twinness of node pairs is vastly higher than expected from a null model obtained by switching links. For all n, we observe a difference in the ratio of type A twins (which are unlinked pairs) to type B twins (which are linked pairs) distinguishing the prokaryote E. coli from the eukaryote S. cerevisiae. Interaction similarity is expected due to gene duplication, and whole genome duplication paralogues in S. cerevisiae have been reported to co-cluster into the same complexes. Indeed, we find that these paralogous proteins are over-represented as twins compared to pairs chosen at random. These results indicate that twinness can detect ancestral relationships from currently available PIN data. |
0708.0559 | Eduardo Candelario-Jalil | E. Candelario-Jalil, D. Alvarez, N. Merino, O. S. Leon | Delayed treatment with nimesulide reduces measures of oxidative stress
following global ischemic brain injury in gerbils | null | Neuroscience Research 47(2): 245-253 (2003) | null | null | q-bio.TO | null | Metabolism of arachidonic acid by cyclooxygenase is one of the primary
sources of reactive oxygen species in the ischemic brain. Neuronal
overexpression of cyclooxygenase-2 has recently been shown to contribute to
neurodegeneration following ischemic injury. In the present study, we examined
the possibility that the neuroprotective effects of the cyclooxygenase-2
inhibitor nimesulide would depend upon reduction of oxidative stress following
cerebral ischemia. Gerbils were subjected to 5 min of transient global cerebral
ischemia followed by 48 h of reperfusion and markers of oxidative stress were
measured in hippocampus of gerbils receiving vehicle or nimesulide treatment at
three different clinically relevant doses (3, 6 or 12 mg/kg). Compared with
vehicle, nimesulide significantly (P<0.05) reduced hippocampal glutathione
depletion and lipid peroxidation, as assessed by the levels of malondialdehyde
(MDA), 4-hydroxy-alkenals (4-HDA) and lipid hydroperoxides, even when
the treatment was delayed until 6 h after ischemia. Biochemical evidence of
nimesulide neuroprotection was supported by histofluorescence findings using
the novel marker of neuronal degeneration Fluoro-Jade B. Few Fluoro-Jade B
positive cells were seen in CA1 region of hippocampus in ischemic animals
treated with nimesulide compared with vehicle. These results suggest that
nimesulide may protect neurons by attenuating oxidative stress and reperfusion
injury following the ischemic insult with a wide therapeutic window of
protection.
| [
{
"created": "Fri, 3 Aug 2007 18:23:38 GMT",
"version": "v1"
}
] | 2007-08-06 | [
[
"Candelario-Jalil",
"E.",
""
],
[
"Alvarez",
"D.",
""
],
[
"Merino",
"N.",
""
],
[
"Leon",
"O. S.",
""
]
] | Metabolism of arachidonic acid by cyclooxygenase is one of the primary sources of reactive oxygen species in the ischemic brain. Neuronal overexpression of cyclooxygenase-2 has recently been shown to contribute to neurodegeneration following ischemic injury. In the present study, we examined the possibility that the neuroprotective effects of the cyclooxygenase-2 inhibitor nimesulide would depend upon reduction of oxidative stress following cerebral ischemia. Gerbils were subjected to 5 min of transient global cerebral ischemia followed by 48 h of reperfusion and markers of oxidative stress were measured in hippocampus of gerbils receiving vehicle or nimesulide treatment at three different clinically relevant doses (3, 6 or 12 mg/kg). Compared with vehicle, nimesulide significantly (P<0.05) reduced hippocampal glutathione depletion and lipid peroxidation, as assessed by the levels of malondialdehyde (MDA), 4-hydroxy-alkenals (4-HDA) and lipid hydroperoxides, even when the treatment was delayed until 6 h after ischemia. Biochemical evidence of nimesulide neuroprotection was supported by histofluorescence findings using the novel marker of neuronal degeneration Fluoro-Jade B. Few Fluoro-Jade B positive cells were seen in CA1 region of hippocampus in ischemic animals treated with nimesulide compared with vehicle. These results suggest that nimesulide may protect neurons by attenuating oxidative stress and reperfusion injury following the ischemic insult with a wide therapeutic window of protection. |
2210.03198 | James Brunner | James D. Brunner and Nicholas Chia | Metabolic Model-based Ecological Modeling for Probiotic Design | 18 pages, 6 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The microbial community composition in the human gut has a profound effect on
human health. This observation has led to extensive use of microbiome
therapies, including over-the-counter ``probiotic" treatments intended to alter
the composition of the microbiome. Despite so much promise and commercial
interest, the factors that contribute to the success or failure of
microbiome-targeted treatments remain unclear. We investigate the biotic
interactions that lead to successful engraftment of a novel bacterial strain
introduced to the microbiome as in probiotic treatments. We use pairwise
genome-scale metabolic modeling with a generalized resource allocation
constraint to build a network of interactions between 818 species with
well-developed models available in the AGORA database. We create induced sub-graphs
using the taxa present in samples from three experimental engraftment studies
and assess the likelihood of invader engraftment based on network structure. To
do so, we use a set of dynamical models designed to connect network
topology to growth dynamics. We show that a generalized Lotka-Volterra model
has a strong ability to predict if a particular invader or probiotic will
successfully engraft into an individual's microbiome. Furthermore, we show that
the mechanistic nature of the model is useful for revealing which
microbe-microbe interactions potentially drive engraftment.
| [
{
"created": "Thu, 6 Oct 2022 20:40:02 GMT",
"version": "v1"
}
] | 2022-10-10 | [
[
"Brunner",
"James D.",
""
],
[
"Chia",
"Nicholas",
""
]
] | The microbial community composition in the human gut has a profound effect on human health. This observation has led to extensive use of microbiome therapies, including over-the-counter ``probiotic" treatments intended to alter the composition of the microbiome. Despite so much promise and commercial interest, the factors that contribute to the success or failure of microbiome-targeted treatments remain unclear. We investigate the biotic interactions that lead to successful engraftment of a novel bacterial strain introduced to the microbiome as in probiotic treatments. We use pairwise genome-scale metabolic modeling with a generalized resource allocation constraint to build a network of interactions between 818 species with well-developed models available in the AGORA database. We create induced sub-graphs using the taxa present in samples from three experimental engraftment studies and assess the likelihood of invader engraftment based on network structure. To do so, we use a set of dynamical models designed to connect network topology to growth dynamics. We show that a generalized Lotka-Volterra model has a strong ability to predict if a particular invader or probiotic will successfully engraft into an individual's microbiome. Furthermore, we show that the mechanistic nature of the model is useful for revealing which microbe-microbe interactions potentially drive engraftment.
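The generalized Lotka-Volterra dynamics at the core of this record can be sketched in a few lines. The growth rate and interaction matrix below are illustrative placeholders, not values from the paper or the AGORA models:

```python
import numpy as np

def glv_step(x, r, A, dt):
    # One Euler step of the generalized Lotka-Volterra ODE:
    # dx_i/dt = x_i * (r_i + sum_j A_ij x_j)
    return x + dt * x * (r + A @ x)

def simulate_glv(x0, r, A, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = glv_step(x, r, A, dt)
    return x

# Sanity check: a single species with growth rate 1 and self-limitation -1
# follows logistic dynamics and settles at carrying capacity 1.
x_final = simulate_glv([0.1], np.array([1.0]), np.array([[-1.0]]))
```

In an engraftment setting, `x0` would be the resident community abundances plus a small invader abundance, and one would check whether the invader grows or decays under the fitted interaction matrix.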
2302.09670 | Madhur Mangalam | Madhur Mangalam, Ralf Metzler, Damian G. Kelty-Stephen | Ergodic characterization of non-ergodic anomalous diffusion processes | 24 pages; 10 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Canonical characterization techniques that rely upon mean squared
displacement ($\mathrm{MSD}$) break down for non-ergodic processes, making it
challenging to characterize anomalous diffusion from an individual time-series
measurement. Non-ergodicity reigns when the time-averaged mean square
displacement $\mathrm{TA}$-$\mathrm{MSD}$ differs from the ensemble-averaged
mean squared displacement $\mathrm{EA}$-$\mathrm{MSD}$ even in the limit of
long measurement series. In these cases, the typical theoretical results for
ensemble averages cannot be used to understand and interpret data acquired from
time averages. The difficulty then lies in obtaining statistical descriptors of
the measured diffusion process that are not non-ergodic. We show that linear
descriptors such as the standard deviation ($SD$), coefficient of variation
($CV$), and root mean square ($RMS$) break ergodicity in proportion to
non-ergodicity in the diffusion process. In contrast, time series of
descriptors addressing sequential structure and its potential nonlinearity,
such as multifractality, change in a time-independent way and fulfill the ergodic
assumption, largely independent of the time series' non-ergodicity. We show
that these findings follow from the multiplicative cascades underlying these
diffusion processes. Adding fractal and multifractal descriptors to typical
linear descriptors would improve the characterization of anomalous diffusion
processes. Two particular points bear emphasis here. First, as an appropriate
formalism for encoding the nonlinearity that might generate non-ergodicity,
multifractal modeling offers descriptors that can behave ergodically enough to
meet the needs of linear modeling. Second, this capacity to describe
non-ergodic processes in ergodic terms offers the possibility that multifractal
modeling could unify several disparate non-ergodic diffusion processes into a
common framework.
| [
{
"created": "Sun, 19 Feb 2023 20:44:52 GMT",
"version": "v1"
}
] | 2023-02-21 | [
[
"Mangalam",
"Madhur",
""
],
[
"Metzler",
"Ralf",
""
],
[
"Kelty-Stephen",
"Damian G.",
""
]
] | Canonical characterization techniques that rely upon mean squared displacement ($\mathrm{MSD}$) break down for non-ergodic processes, making it challenging to characterize anomalous diffusion from an individual time-series measurement. Non-ergodicity reigns when the time-averaged mean square displacement $\mathrm{TA}$-$\mathrm{MSD}$ differs from the ensemble-averaged mean squared displacement $\mathrm{EA}$-$\mathrm{MSD}$ even in the limit of long measurement series. In these cases, the typical theoretical results for ensemble averages cannot be used to understand and interpret data acquired from time averages. The difficulty then lies in obtaining statistical descriptors of the measured diffusion process that are not non-ergodic. We show that linear descriptors such as the standard deviation ($SD$), coefficient of variation ($CV$), and root mean square ($RMS$) break ergodicity in proportion to non-ergodicity in the diffusion process. In contrast, time series of descriptors addressing sequential structure and its potential nonlinearity, such as multifractality, change in a time-independent way and fulfill the ergodic assumption, largely independent of the time series' non-ergodicity. We show that these findings follow from the multiplicative cascades underlying these diffusion processes. Adding fractal and multifractal descriptors to typical linear descriptors would improve the characterization of anomalous diffusion processes. Two particular points bear emphasis here. First, as an appropriate formalism for encoding the nonlinearity that might generate non-ergodicity, multifractal modeling offers descriptors that can behave ergodically enough to meet the needs of linear modeling. Second, this capacity to describe non-ergodic processes in ergodic terms offers the possibility that multifractal modeling could unify several disparate non-ergodic diffusion processes into a common framework.
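The TA-MSD/EA-MSD distinction underlying this record is easy to illustrate numerically: ordinary Brownian motion is ergodic, so the two estimators agree, which is exactly the baseline that the paper's non-ergodic processes violate. This is a generic sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ta_msd(traj, lag):
    # Time-averaged MSD of a single trajectory at a given lag
    # (average over a sliding window along the trajectory).
    disp = traj[lag:] - traj[:-lag]
    return float(np.mean(disp**2))

def ea_msd(trajs, t):
    # Ensemble-averaged MSD at time t across many trajectories.
    return float(np.mean((trajs[:, t] - trajs[:, 0])**2))

# Brownian motion with unit-variance steps: both averages approach the lag.
trajs = np.cumsum(rng.normal(size=(200, 10000)), axis=1)
lag = 10
ta = np.mean([ta_msd(tr, lag) for tr in trajs])
ea = ea_msd(trajs, lag)
```

For a non-ergodic process such as a continuous-time random walk with heavy-tailed waiting times, `ta` and `ea` would remain different no matter how long the trajectories are.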
1802.05424 | Sutapa Mukherji | Sutapa Mukherji | Threshold response and bistability in gene regulation by small noncoding
RNA | 19 pages | Eur. Phys. J. E (2018) 41: 2 | null | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study through mathematical modelling the combined effect of
transcriptional and translational regulation by proteins and small noncoding
RNAs (sRNA) in a genetic feedback motif that has an important role in the
survival of E.coli under stress associated with oxygen and energy availability.
We show that subtle changes in this motif can bring about drastically different
effects on the gene expression. In particular, we show that a threshold
response in the gene expression changes to a bistable response as the
regulation on sRNA synthesis or degradation is altered. These results are
obtained under deterministic conditions. Next, we study how the gene expression
is altered by additive and multiplicative noise which might arise due to
probabilistic occurrences of different biochemical events. Using the
Fokker-Planck formulation, we obtain steady state probability distributions for
sRNA concentration for the network motifs displaying bistability. The
probability distributions are found to be bimodal with two peaks at low and
high concentrations of sRNAs. We further study the variations in the
probability distributions under different values of noise strength and
correlations. The results presented here might be of interest for designing
synthetic networks for artificial control.
| [
{
"created": "Thu, 15 Feb 2018 07:36:43 GMT",
"version": "v1"
}
] | 2018-02-16 | [
[
"Mukherji",
"Sutapa",
""
]
] | In this paper, we study through mathematical modelling the combined effect of transcriptional and translational regulation by proteins and small noncoding RNAs (sRNA) in a genetic feedback motif that has an important role in the survival of E.coli under stress associated with oxygen and energy availability. We show that subtle changes in this motif can bring about drastically different effects on the gene expression. In particular, we show that a threshold response in the gene expression changes to a bistable response as the regulation on sRNA synthesis or degradation is altered. These results are obtained under deterministic conditions. Next, we study how the gene expression is altered by additive and multiplicative noise which might arise due to probabilistic occurrences of different biochemical events. Using the Fokker-Planck formulation, we obtain steady state probability distributions for sRNA concentration for the network motifs displaying bistability. The probability distributions are found to be bimodal with two peaks at low and high concentrations of sRNAs. We further study the variations in the probability distributions under different values of noise strength and correlations. The results presented here might be of interest for designing synthetic networks for artificial control.
1404.6267 | Octavio Miramontes | Octavio Miramontes and Og DeSouza | Social Evolution: New Horizons | 16 pages 5 figures, chapter in forthcoming open access book
"Frontiers in Ecology, Evolution and Complexity" CopIt-arXives 2014, Mexico | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperation is a widespread natural phenomenon yet current evolutionary
thinking is dominated by the paradigm of selfish competition. Recent advances
on many fronts of Biology and Non-linear Physics are helping to bring
cooperation to its proper place. In this contribution, the most important
controversies and open research avenues in the field of social evolution are
reviewed. It is argued that a novel theory of social evolution must integrate
the concepts of the science of Complex Systems with those of the Darwinian
tradition. Current gene-centric approaches should be reviewed and
complemented with evidence from multilevel phenomena (group selection), the
constraints given by the non-linear nature of biological dynamical systems and
the emergent nature of dissipative phenomena.
| [
{
"created": "Thu, 24 Apr 2014 21:00:14 GMT",
"version": "v1"
},
{
"created": "Mon, 26 May 2014 19:55:23 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Jun 2014 16:28:56 GMT",
"version": "v3"
}
] | 2014-06-13 | [
[
"Miramontes",
"Octavio",
""
],
[
"DeSouza",
"Og",
""
]
] | Cooperation is a widespread natural phenomenon yet current evolutionary thinking is dominated by the paradigm of selfish competition. Recent advances on many fronts of Biology and Non-linear Physics are helping to bring cooperation to its proper place. In this contribution, the most important controversies and open research avenues in the field of social evolution are reviewed. It is argued that a novel theory of social evolution must integrate the concepts of the science of Complex Systems with those of the Darwinian tradition. Current gene-centric approaches should be reviewed and complemented with evidence from multilevel phenomena (group selection), the constraints given by the non-linear nature of biological dynamical systems and the emergent nature of dissipative phenomena.
2002.07873 | Emily T Winn | Emily T. Winn, Marilyn Vazquez, Prachi Loliencar, Kaisa Taipale, Xu
Wang and Giseon Heo | A survey of statistical learning techniques as applied to inexpensive
pediatric Obstructive Sleep Apnea data | null | null | null | null | q-bio.QM cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pediatric obstructive sleep apnea affects an estimated 1-5% of
elementary-school aged children and can lead to other detrimental health
problems. Swift diagnosis and treatment are critical to a child's growth and
development, but the variability of symptoms and the complexity of the
available data make this a challenge. We take a first step in streamlining the
process by focusing on inexpensive data from questionnaires and craniofacial
measurements. We apply correlation networks, the Mapper algorithm from
topological data analysis, and singular value decomposition in a process of
exploratory data analysis. We then apply a variety of supervised and
unsupervised learning techniques from statistics, machine learning, and
topology, ranging from support vector machines to Bayesian classifiers and
manifold learning. Finally, we analyze the results of each of these methods and
discuss the implications for a multi-data-sourced algorithm moving forward.
| [
{
"created": "Mon, 17 Feb 2020 18:15:32 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Feb 2020 14:35:46 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Aug 2021 18:41:12 GMT",
"version": "v3"
}
] | 2021-08-10 | [
[
"Winn",
"Emily T.",
""
],
[
"Vazquez",
"Marilyn",
""
],
[
"Loliencar",
"Prachi",
""
],
[
"Taipale",
"Kaisa",
""
],
[
"Wang",
"Xu",
""
],
[
"Heo",
"Giseon",
""
]
] | Pediatric obstructive sleep apnea affects an estimated 1-5% of elementary-school aged children and can lead to other detrimental health problems. Swift diagnosis and treatment are critical to a child's growth and development, but the variability of symptoms and the complexity of the available data make this a challenge. We take a first step in streamlining the process by focusing on inexpensive data from questionnaires and craniofacial measurements. We apply correlation networks, the Mapper algorithm from topological data analysis, and singular value decomposition in a process of exploratory data analysis. We then apply a variety of supervised and unsupervised learning techniques from statistics, machine learning, and topology, ranging from support vector machines to Bayesian classifiers and manifold learning. Finally, we analyze the results of each of these methods and discuss the implications for a multi-data-sourced algorithm moving forward. |
0908.3946 | Thierry Huillet | Thierry Huillet (LPTM) | Information and (co-)variances in discrete evolutionary genetics
involving solely selection | to appear in Journal of Statistical Mechanics: Theory and
Applications | null | 10.1088/1742-5468/2009/09/P09013 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this Note is twofold: First, we introduce the general
formalism of evolutionary genetics dynamics involving fitnesses, under both the
deterministic and stochastic setups, and chiefly in discrete-time. In the
process, we particularize it to a one-parameter model where only a selection
parameter is unknown. Then, in a parallel manner, we discuss the estimation
problems of the selection parameter based on a single-generation frequency
distribution shift under both deterministic and stochastic evolutionary
dynamics. In the stochastic setting, we consider both the celebrated Wright-Fisher and
Moran models.
| [
{
"created": "Thu, 27 Aug 2009 08:05:35 GMT",
"version": "v1"
}
] | 2015-05-14 | [
[
"Huillet",
"Thierry",
"",
"LPTM"
]
] | The purpose of this Note is twofold: First, we introduce the general formalism of evolutionary genetics dynamics involving fitnesses, under both the deterministic and stochastic setups, and chiefly in discrete-time. In the process, we particularize it to a one-parameter model where only a selection parameter is unknown. Then, in a parallel manner, we discuss the estimation problems of the selection parameter based on a single-generation frequency distribution shift under both deterministic and stochastic evolutionary dynamics. In the stochastic setting, we consider both the celebrated Wright-Fisher and Moran models.
2201.10322 | Christopher Thornton PhD | Christopher B Thornton, Niina Kolehmainen, Kianoush Nazarpour | Using unsupervised machine learning to quantify physical activity from
accelerometry in a diverse and rapidly changing population | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Accelerometers are widely used to measure physical activity behaviour,
including in children. The traditional method for processing acceleration data
uses cut points to define physical activity intensity, relying on calibration
studies that relate the magnitude of acceleration to energy expenditure.
However, these relationships do not generalise across diverse populations and
hence they must be parametrised for each subpopulation (e.g., age groups) which
is costly and makes studies across diverse populations and over time difficult.
A data-driven approach that allows physical activity intensity states to emerge
from the data, without relying on parameters derived from external populations,
offers a new perspective on this problem and potentially improved results.
We applied an unsupervised machine learning approach, namely a hidden
semi-Markov model, to segment and cluster the accelerometer data recorded from 279
children (9 to 38 months old) with a diverse range of physical and
social-cognitive abilities (measured using the Paediatric Evaluation of
Disability Inventory). We benchmarked this analysis with the cut points
approach calculated using the best available thresholds for the population.
Time spent active as measured by this unsupervised approach correlated more
strongly with measures of the child's mobility, social-cognitive capacity,
independence, daily activity, and age than that measured using the cut points
approach. Unsupervised machine learning offers the potential to provide a more
sensitive, appropriate, and cost-effective approach to quantifying physical
activity behaviour in diverse populations, compared to the current cut points
approach. This, in turn, supports research that is more inclusive of diverse or
rapidly changing populations.
| [
{
"created": "Tue, 25 Jan 2022 13:50:30 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Feb 2022 10:07:23 GMT",
"version": "v2"
}
] | 2022-02-22 | [
[
"Thornton",
"Christopher B",
""
],
[
"Kolehmainen",
"Niina",
""
],
[
"Nazarpour",
"Kianoush",
""
]
] | Accelerometers are widely used to measure physical activity behaviour, including in children. The traditional method for processing acceleration data uses cut points to define physical activity intensity, relying on calibration studies that relate the magnitude of acceleration to energy expenditure. However, these relationships do not generalise across diverse populations and hence they must be parametrised for each subpopulation (e.g., age groups) which is costly and makes studies across diverse populations and over time difficult. A data-driven approach that allows physical activity intensity states to emerge from the data, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We applied an unsupervised machine learning approach, namely a hidden semi-Markov model, to segment and cluster the accelerometer data recorded from 279 children (9 to 38 months old) with a diverse range of physical and social-cognitive abilities (measured using the Paediatric Evaluation of Disability Inventory). We benchmarked this analysis with the cut points approach calculated using the best available thresholds for the population. Time spent active as measured by this unsupervised approach correlated more strongly with measures of the child's mobility, social-cognitive capacity, independence, daily activity, and age than that measured using the cut points approach. Unsupervised machine learning offers the potential to provide a more sensitive, appropriate, and cost-effective approach to quantifying physical activity behaviour in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive of diverse or rapidly changing populations.
2406.01501 | Andrew Straw | Stephan Lochner, Daniel Honerkamp, Abhinav Valada, Andrew D. Straw | Reinforcement Learning as a Robotics-Inspired Framework for Insect
Navigation: From Spatial Representations to Neural Implementation | 26 pages, 5 figures; submitted to Frontiers in Computational
Neuroscience | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bees are among the master navigators of the insect world. Despite impressive
advances in robot navigation research, the performance of these insects is
still unrivaled by any artificial system in terms of training efficiency and
generalization capabilities, particularly considering the limited computational
capacity. On the other hand, computational principles underlying these
extraordinary feats are still only partially understood. The theoretical
framework of reinforcement learning (RL) provides an ideal focal point to bring
the two fields together for mutual benefit. In particular, we analyze and
compare representations of space in robot and insect navigation models through
the lens of RL, as the efficiency of insect navigation is likely rooted in an
efficient and robust internal representation, linking retinotopic (egocentric)
visual input with the geometry of the environment. While RL has long been at
the core of robot navigation research, current computational theories of insect
navigation are not commonly formulated within this framework, but largely as an
associative learning process implemented in the insect brain, especially in the
mushroom body (MB). Here we propose specific hypothetical components of the MB
circuit that would enable the implementation of a certain class of relatively
simple RL algorithms, capable of integrating distinct components of a
navigation task, reminiscent of hierarchical RL models used in robot
navigation. We discuss how current models of insect and robot navigation are
exploring representations beyond classical, complete map-like representations,
with spatial information being embedded in the respective latent
representations to varying degrees.
| [
{
"created": "Mon, 3 Jun 2024 16:28:09 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2024 11:04:39 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jul 2024 16:28:10 GMT",
"version": "v3"
}
] | 2024-07-25 | [
[
"Lochner",
"Stephan",
""
],
[
"Honerkamp",
"Daniel",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Straw",
"Andrew D.",
""
]
] | Bees are among the master navigators of the insect world. Despite impressive advances in robot navigation research, the performance of these insects is still unrivaled by any artificial system in terms of training efficiency and generalization capabilities, particularly considering the limited computational capacity. On the other hand, computational principles underlying these extraordinary feats are still only partially understood. The theoretical framework of reinforcement learning (RL) provides an ideal focal point to bring the two fields together for mutual benefit. In particular, we analyze and compare representations of space in robot and insect navigation models through the lens of RL, as the efficiency of insect navigation is likely rooted in an efficient and robust internal representation, linking retinotopic (egocentric) visual input with the geometry of the environment. While RL has long been at the core of robot navigation research, current computational theories of insect navigation are not commonly formulated within this framework, but largely as an associative learning process implemented in the insect brain, especially in the mushroom body (MB). Here we propose specific hypothetical components of the MB circuit that would enable the implementation of a certain class of relatively simple RL algorithms, capable of integrating distinct components of a navigation task, reminiscent of hierarchical RL models used in robot navigation. We discuss how current models of insect and robot navigation are exploring representations beyond classical, complete map-like representations, with spatial information being embedded in the respective latent representations to varying degrees. |
2103.00258 | Dagmar Iber | Christine Lang, Lisa Conrad, Dagmar Iber | Organ-specific Branching Morphogenesis | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common developmental process, called branching morphogenesis, generates the
epithelial trees in a variety of organs, including the lungs, kidneys, and
glands. How branching morphogenesis can create epithelial architectures of very
different shapes and functions remains elusive. In this review, we compare
branching morphogenesis and its regulation in lungs and kidneys and discuss the
role of signaling pathways, the mesenchyme, the extracellular matrix, and the
cytoskeleton as potential organ-specific determinants of branch position,
orientation, and shape. Identifying the determinants of branch and organ shape
and their adaptation in different organs may reveal how a highly conserved
developmental process can be adapted to different structural and functional
frameworks and should provide important insights into epithelial morphogenesis
and developmental disorders.
| [
{
"created": "Sat, 27 Feb 2021 16:02:53 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Lang",
"Christine",
""
],
[
"Conrad",
"Lisa",
""
],
[
"Iber",
"Dagmar",
""
]
] | A common developmental process, called branching morphogenesis, generates the epithelial trees in a variety of organs, including the lungs, kidneys, and glands. How branching morphogenesis can create epithelial architectures of very different shapes and functions remains elusive. In this review, we compare branching morphogenesis and its regulation in lungs and kidneys and discuss the role of signaling pathways, the mesenchyme, the extracellular matrix, and the cytoskeleton as potential organ-specific determinants of branch position, orientation, and shape. Identifying the determinants of branch and organ shape and their adaptation in different organs may reveal how a highly conserved developmental process can be adapted to different structural and functional frameworks and should provide important insights into epithelial morphogenesis and developmental disorders. |
1504.03802 | Filippo Biscarini | Filippo Biscarini, Stefano Biffani and Alessandra Stella | M\'as all\'a del GWAS: alternativas para localizar QTLs | 5 pages, 3 figures, article in Spanish with abstract both in Spanish
and English | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beyond GWAS: alternatives to localize QTLs in farm animals. Two methods that
could be used for QTL mapping as alternatives to standard GWAS are presented.
The first relies on the differential frequency of runs of homozygosity (ROH) in
groups of animals (e.g. cases and controls), while the second stems from
resampling techniques used for the prediction of carriers of a mutation, and is
based on the frequency of inclusion of polymorphisms (SNP) in the predictive
model. ROH were applied to the detection of reproductive diseases in
Holstein-Friesian cattle, while resampling was applied to the detection of
carriers of the BH2 haplotype in Brown Swiss cattle. These alternative
approaches may complement GWAS analyses in localizing more accurately QTLs for
traits of interest in livestock.
| [
{
"created": "Wed, 15 Apr 2015 07:24:34 GMT",
"version": "v1"
}
] | 2015-04-16 | [
[
"Biscarini",
"Filippo",
""
],
[
"Biffani",
"Stefano",
""
],
[
"Stella",
"Alessandra",
""
]
] | Beyond GWAS: alternatives to localize QTLs in farm animals. Two methods that could be used for QTL mapping as alternatives to standard GWAS are presented. The first relies on the differential frequency of runs of homozygosity (ROH) in groups of animals (e.g. cases and controls), while the second stems from resampling techniques used for the prediction of carriers of a mutation, and is based on the frequency of inclusion of polymorphisms (SNP) in the predictive model. ROH were applied to the detection of reproductive diseases in Holstein-Friesian cattle, while resampling was applied to the detection of carriers of the BH2 haplotype in Brown Swiss cattle. These alternative approaches may complement GWAS analyses in localizing more accurately QTLs for traits of interest in livestock. |
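A run of homozygosity is a stretch of consecutive homozygous genotype calls. A minimal detector, assuming additive 0/1/2 genotype coding and a hypothetical minimum run length, could look like this; it is an illustration only, not the method used in the study:

```python
def runs_of_homozygosity(genotypes, min_len=3):
    # Find runs of consecutive homozygous SNP calls. Under 0/1/2 coding,
    # 0 and 2 are homozygous; a heterozygous call (1) breaks the run.
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g in (0, 2):
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    # Close a run that extends to the end of the chromosome.
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))
    return runs

print(runs_of_homozygosity([0, 0, 0, 1, 2, 2, 2, 2, 1]))  # [(0, 3), (4, 8)]
```

The ROH-based mapping described above would then compare the frequency of such runs at each genomic window between the two groups of animals.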
2306.03603 | Christos Sourmpis | Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec | Trial matching: capturing variability with data-constrained spiking
neural networks | 12 pages of main text, 4 figures in main, 5 pages of appendix, 5
figures in appendix | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Simultaneous behavioral and electrophysiological recordings call for new
methods to reveal the interactions between neural activity and behavior. A
milestone would be an interpretable model of the co-variability of spiking
activity and behavior across trials. Here, we model a mouse cortical
sensory-motor pathway in a tactile detection task reported by licking with a
large recurrent spiking neural network (RSNN), fitted to the recordings via
gradient-based optimization. We focus specifically on the difficulty of matching
the trial-to-trial variability in the data. Our solution relies on optimal
transport to define a distance between the distributions of generated and
recorded trials. The technique is applied to artificial data and neural
recordings covering six cortical areas. We find that the resulting RSNN can
generate realistic cortical activity and predict jaw movements across the main
modes of trial-to-trial variability. Our analysis also identifies an unexpected
mode of variability in the data corresponding to task-irrelevant movements of
the mouse.
| [
{
"created": "Tue, 6 Jun 2023 11:46:31 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Dec 2023 16:41:24 GMT",
"version": "v2"
}
] | 2023-12-04 | [
[
"Sourmpis",
"Christos",
""
],
[
"Petersen",
"Carl",
""
],
[
"Gerstner",
"Wulfram",
""
],
[
"Bellec",
"Guillaume",
""
]
] | Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a mouse cortical sensory-motor pathway in a tactile detection task reported by licking with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty to match the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse. |
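The optimal-transport distance between sets of trials mentioned here reduces, in one dimension and for equal-size samples, to a sort-and-average formula. The sketch below illustrates only that special case and is not the paper's multivariate implementation:

```python
import numpy as np

def wasserstein1_1d(a, b):
    # Wasserstein-1 distance between two equal-size empirical distributions
    # in 1-D: the optimal coupling matches sorted samples pointwise.
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert a.shape == b.shape
    return float(np.mean(np.abs(a - b)))

# Identical samples have distance 0; shifting one sample by c gives c.
x = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein1_1d(x, x))        # 0.0
print(wasserstein1_1d(x, x + 0.5))  # 0.5
```

Used as a training loss, such a distance compares the whole distribution of generated trials against recorded trials, rather than forcing a one-to-one trial correspondence.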
1004.1378 | Sebastian Risau-Gusman | Sebastian Risau-Gusman | Influence of network dynamics on the spread of sexually transmitted
diseases | 21 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network epidemiology often assumes that the relationships defining the social
network of a population are static. The dynamics of relationships is only taken
indirectly into account, by assuming that the relevant information to study
epidemic spread is encoded in the network obtained by considering numbers of
partners accumulated over periods of time roughly proportional to the
infectious period of the disease at hand. On the other hand, models explicitly
including social dynamics are often too schematic to provide a reasonable
representation of a real population, or so detailed that no general conclusions
can be drawn from them. Here we present a model of social dynamics that is
general enough that its parameters can be obtained by fitting data from surveys
about sexual behaviour, but that can still be studied analytically, using mean
field techniques. This allows us to obtain some general results about epidemic
spreading. We show that using accumulated network data to estimate the static
epidemic threshold leads to a significant underestimation of it. We also show
that, for a dynamic network, the relative epidemic threshold is an increasing
function of the infectious period of the disease, implying that the static
value is a lower bound to the real threshold.
| [
{
"created": "Thu, 8 Apr 2010 17:29:52 GMT",
"version": "v1"
}
] | 2010-04-09 | [
[
"Risau-Gusman",
"Sebastian",
""
]
] | Network epidemiology often assumes that the relationships defining the social network of a population are static. The dynamics of relationships is only taken indirectly into account, by assuming that the relevant information to study epidemic spread is encoded in the network obtained by considering numbers of partners accumulated over periods of time roughly proportional to the infectious period of the disease at hand. On the other hand, models explicitly including social dynamics are often too schematic to provide a reasonable representation of a real population, or so detailed that no general conclusions can be drawn from them. Here we present a model of social dynamics that is general enough that its parameters can be obtained by fitting data from surveys about sexual behaviour, but that can still be studied analytically, using mean field techniques. This allows us to obtain some general results about epidemic spreading. We show that using accumulated network data to estimate the static epidemic threshold leads to a significant underestimation of it. We also show that, for a dynamic network, the relative epidemic threshold is an increasing function of the infectious period of the disease, implying that the static value is a lower bound to the real threshold. |
1806.11066 | Cedric Perret | Cedric Perret, Simon T. Powers, Jeremy Pitt and Emma Hart | Can justice be fair when it is blind? How social network structures can
promote or prevent the evolution of despotism | To appear in Proceedings of the Artificial Life Conference 2018
(ALIFE 2018), MIT Press | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchy is an efficient way for a group to organize, but often goes along
with inequality that benefits leaders. To control despotic behaviour, followers
can assess leaders' decisions by aggregating their own and their neighbours'
experience, and in response challenge despotic leaders. But in hierarchical
social networks, this interactional justice can be limited by (i) the high
influence of a small clique who are treated better, and (ii) the low
connectedness of followers. Here we study how the connectedness of a social
network affects the co-evolution of despotism in leaders and tolerance to
despotism in followers. We simulate the evolution of a population of agents,
where the influence of an agent is its number of social links. Whether a leader
remains in power is controlled by the overall satisfaction of group members, as
determined by their joint assessment of the leader's behaviour. We demonstrate
that centralization of a social network around a highly influential clique
greatly increases the level of despotism. This is because the clique is more
satisfied, and their higher influence spreads their positive opinion of the
leader throughout the network. Finally, our results suggest that increasing the
connectedness of followers limits despotism while maintaining hierarchy.
| [
{
"created": "Thu, 28 Jun 2018 16:32:02 GMT",
"version": "v1"
}
] | 2018-06-29 | [
[
"Perret",
"Cedric",
""
],
[
"Powers",
"Simon T.",
""
],
[
"Pitt",
"Jeremy",
""
],
[
"Hart",
"Emma",
""
]
] | Hierarchy is an efficient way for a group to organize, but often goes along with inequality that benefits leaders. To control despotic behaviour, followers can assess leaders' decisions by aggregating their own and their neighbours' experience, and in response challenge despotic leaders. But in hierarchical social networks, this interactional justice can be limited by (i) the high influence of a small clique who are treated better, and (ii) the low connectedness of followers. Here we study how the connectedness of a social network affects the co-evolution of despotism in leaders and tolerance to despotism in followers. We simulate the evolution of a population of agents, where the influence of an agent is its number of social links. Whether a leader remains in power is controlled by the overall satisfaction of group members, as determined by their joint assessment of the leader's behaviour. We demonstrate that centralization of a social network around a highly influential clique greatly increases the level of despotism. This is because the clique is more satisfied, and their higher influence spreads their positive opinion of the leader throughout the network. Finally, our results suggest that increasing the connectedness of followers limits despotism while maintaining hierarchy. |
1007.4527 | Tsvi Tlusty | Yonatan Savir and Tsvi Tlusty | Optimal Design of a Molecular Recognizer: Molecular Recognition as a
Bayesian Signal Detection Problem | Bayesian detection, conformational changes, molecular recognition,
specificity. http://www.weizmann.ac.il/complex/tlusty/papers/IEEE2008.pdf | IEEE Journal of Selected Topics in Signal Processing, vol. 2,
issue 3, pp. 390-399, 2008 | 10.1109/JSTSP.2008.923859 | null | q-bio.MN cs.IT math.IT physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous biological functions-such as enzymatic catalysis, the immune
response system, and the DNA-protein regulatory network-rely on the ability of
molecules to specifically recognize target molecules within a large pool of
similar competitors in a noisy biochemical environment. Using the basic
framework of signal detection theory, we treat the molecular recognition
process as a signal detection problem and examine its overall performance.
Thus, we evaluate the optimal properties of a molecular recognizer in the
presence of competition and noise. Our analysis reveals that the optimal design
undergoes a "phase transition" as the structural properties of the molecules
and interaction energies between them vary. In one phase, the recognizer should
be complementary in structure to its target (like a lock and a key), while in
the other, conformational changes upon binding, which often accompany molecular
recognition, enhance recognition quality. Using this framework, the abundance
of conformational changes may be explained as a result of increasing the
fitness of the recognizer. Furthermore, this analysis may be used in future
design of artificial signal processing devices based on biomolecules.
| [
{
"created": "Mon, 26 Jul 2010 18:19:13 GMT",
"version": "v1"
}
] | 2010-07-27 | [
[
"Savir",
"Yonatan",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | Numerous biological functions-such as enzymatic catalysis, the immune response system, and the DNA-protein regulatory network-rely on the ability of molecules to specifically recognize target molecules within a large pool of similar competitors in a noisy biochemical environment. Using the basic framework of signal detection theory, we treat the molecular recognition process as a signal detection problem and examine its overall performance. Thus, we evaluate the optimal properties of a molecular recognizer in the presence of competition and noise. Our analysis reveals that the optimal design undergoes a "phase transition" as the structural properties of the molecules and interaction energies between them vary. In one phase, the recognizer should be complementary in structure to its target (like a lock and a key), while in the other, conformational changes upon binding, which often accompany molecular recognition, enhance recognition quality. Using this framework, the abundance of conformational changes may be explained as a result of increasing the fitness of the recognizer. Furthermore, this analysis may be used in future design of artificial signal processing devices based on biomolecules. |
1510.00767 | Justin Yeakel | Justin D. Yeakel, Uttam Bhat, Emma A. Elliott Smith, Seth D. Newsome | Exploring the isotopic niche: isotopic variance, physiological
incorporation, and the temporal dynamics of foraging | 27 pages, 9 figures | null | 10.3389/fevo.2016.00001 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consumer foraging behaviors are dynamic, changing in response to prey
availability, seasonality, competition, and even the consumer's physiological
state. The isotopic composition of a consumer is a product of these factors as
well as the isotopic 'landscape' of its prey, i.e. the isotopic mixing space.
Here we build a mechanistic framework that links the ecological and
physiological processes of an individual consumer to the isotopic distribution
that describes its diet, and ultimately to the isotopic composition of its own
tissues, defined as its 'isotopic niche'. By coupling these processes, we
systematically investigate under what conditions the isotopic niche of a
consumer changes as a function of both the geometric properties of its mixing
space and foraging strategies that may be static or dynamic over time. Results
of our derivations reveal general insight into the conditions impacting
isotopic niche width as a function of consumer specialization on prey, as well
as the consumer's ability to transition between diets over time. We show
analytically that moderate specialization on isotopically unique prey can serve
to maximize a consumer's isotopic niche width, while temporally dynamic diets
will tend to result in peak isotopic variance during dietary transitions. We
demonstrate the relevance of our theoretical findings by examining a marine
system composed of nine invertebrate species commonly consumed by sea otters.
In general, our analytical framework highlights the complex interplay of mixing
space geometry and consumer dietary behavior in driving expansion and
contraction of the isotopic niche. Because this approach is established on
ecological mechanism, it is well-suited for enhancing the ecological
interpretation, and uncovering the root causes, of observed isotopic data.
| [
{
"created": "Sat, 3 Oct 2015 02:32:59 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jan 2016 21:11:08 GMT",
"version": "v2"
}
] | 2016-02-12 | [
[
"Yeakel",
"Justin D.",
""
],
[
"Bhat",
"Uttam",
""
],
[
"Smith",
"Emma A. Elliott",
""
],
[
"Newsome",
"Seth D.",
""
]
] | Consumer foraging behaviors are dynamic, changing in response to prey availability, seasonality, competition, and even the consumer's physiological state. The isotopic composition of a consumer is a product of these factors as well as the isotopic 'landscape' of its prey, i.e. the isotopic mixing space. Here we build a mechanistic framework that links the ecological and physiological processes of an individual consumer to the isotopic distribution that describes its diet, and ultimately to the isotopic composition of its own tissues, defined as its 'isotopic niche'. By coupling these processes, we systematically investigate under what conditions the isotopic niche of a consumer changes as a function of both the geometric properties of its mixing space and foraging strategies that may be static or dynamic over time. Results of our derivations reveal general insight into the conditions impacting isotopic niche width as a function of consumer specialization on prey, as well as the consumer's ability to transition between diets over time. We show analytically that moderate specialization on isotopically unique prey can serve to maximize a consumer's isotopic niche width, while temporally dynamic diets will tend to result in peak isotopic variance during dietary transitions. We demonstrate the relevance of our theoretical findings by examining a marine system composed of nine invertebrate species commonly consumed by sea otters. In general, our analytical framework highlights the complex interplay of mixing space geometry and consumer dietary behavior in driving expansion and contraction of the isotopic niche. Because this approach is established on ecological mechanism, it is well-suited for enhancing the ecological interpretation, and uncovering the root causes, of observed isotopic data. |
q-bio/0606043 | Jiajia Dong | J.J. Dong, B.Schmittmann, and R.K.P.Zia | Towards a model for protein production rates | 8 pages, 8 figures; This submission is a duplicate of
arXiv:q-bio/0602024, and has been removed | Journal of Statistical Physics, Volume 128, Numbers 1-2 / July,
2007 | 10.1007/s10955-006-9134-7 | null | q-bio.QM | null | This submission is a duplicate of arXiv:q-bio/0602024 and has been removed.
| [
{
"created": "Fri, 30 Jun 2006 15:53:36 GMT",
"version": "v1"
}
] | 2007-08-02 | [
[
"Dong",
"J. J.",
""
],
[
"Schmittmann",
"B.",
""
],
[
"Zia",
"R. K. P.",
""
]
] | This submission is a duplicate of arXiv:q-bio/0602024 and has been removed. |
2012.14192 | Lucia Russo | Konstantinos Kaloudis, George A. Kevrekidis, Helena C. Maltezou, Cleo
Anastassopoulou, Athanasios Tsakris, Lucia Russo | Estimation of the effective reproduction number for SARS-CoV-2 infection
during the first epidemic wave in the metropolitan area of Athens, Greece | null | null | null | null | q-bio.PE physics.soc-ph stat.AP | http://creativecommons.org/licenses/by/4.0/ | Herein, we provide estimations for the effective reproduction number $R_e$
for the greater metropolitan area of Athens, Greece during the first wave of
the pandemic (February 26-May 15, 2020). For our calculations, we implemented,
in a comparative approach, the two most widely used methods for the estimation
of $R_e$, that by Wallinga and Teunis and by Cori et al. Data were retrieved
from the national database of SARS-CoV-2 infections in Greece. Our analysis
revealed that the expected value of $R_e$ dropped below 1 around March 15, shortly
after the suspension of the operation of educational institutions of all levels
nationwide on March 10, and the closing of all retail activities (cafes, bars,
museums, shopping centres, sports facilities and restaurants) on March 13. On
May 4, the date on which the gradual relaxation of the strict lockdown
commenced, the expected value of $R_e$ was slightly below 1, however with
relatively high levels of uncertainty due to the limited number of notified
cases during this period. Finally, we discuss the limitations and pitfalls of
the methods utilized for the estimation of the $R_e$, highlighting that the
results of such analyses should be considered only as indicative by policy
makers.
| [
{
"created": "Mon, 28 Dec 2020 11:08:51 GMT",
"version": "v1"
}
] | 2020-12-29 | [
[
"Kaloudis",
"Konstantinos",
""
],
[
"Kevrekidis",
"George A.",
""
],
[
"Maltezou",
"Helena C.",
""
],
[
"Anastassopoulou",
"Cleo",
""
],
[
"Tsakris",
"Athanasios",
""
],
[
"Russo",
"Lucia",
""
]
] | Herein, we provide estimations for the effective reproduction number $R_e$ for the greater metropolitan area of Athens, Greece during the first wave of the pandemic (February 26-May 15, 2020). For our calculations, we implemented, in a comparative approach, the two most widely used methods for the estimation of $R_e$, that by Wallinga and Teunis and by Cori et al. Data were retrieved from the national database of SARS-CoV-2 infections in Greece. Our analysis revealed that the expected value of $R_e$ dropped below 1 around March 15, shortly after the suspension of the operation of educational institutions of all levels nationwide on March 10, and the closing of all retail activities (cafes, bars, museums, shopping centres, sports facilities and restaurants) on March 13. On May 4, the date on which the gradual relaxation of the strict lockdown commenced, the expected value of $R_e$ was slightly below 1, however with relatively high levels of uncertainty due to the limited number of notified cases during this period. Finally, we discuss the limitations and pitfalls of the methods utilized for the estimation of the $R_e$, highlighting that the results of such analyses should be considered only as indicative by policy makers. |
2305.01941 | Noelia Ferruz | Sergio Romero-Romero, Sebastian Lindner, Noelia Ferruz | Exploring the Protein Sequence Space with Global Generative Models | 16 pages, 4 figures, 2 tables | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in specialized large-scale architectures for training
image and language have profoundly impacted the field of computer vision and
natural language processing (NLP). Language models, such as the recent ChatGPT
and GPT4 have demonstrated exceptional capabilities in processing, translating,
and generating human languages. These breakthroughs have also been reflected in
protein research, leading to the rapid development of numerous new methods in a
short time, with unprecedented performance. Language models, in particular,
have seen widespread use in protein research, as they have been utilized to
embed proteins, generate novel ones, and predict tertiary structures. In this
book chapter, we provide an overview of the use of protein generative models,
reviewing 1) language models for the design of novel artificial proteins, 2)
works that use non-Transformer architectures, and 3) applications in directed
evolution approaches.
| [
{
"created": "Wed, 3 May 2023 07:45:29 GMT",
"version": "v1"
}
] | 2023-05-04 | [
[
"Romero-Romero",
"Sergio",
""
],
[
"Lindner",
"Sebastian",
""
],
[
"Ferruz",
"Noelia",
""
]
] | Recent advancements in specialized large-scale architectures for training image and language have profoundly impacted the field of computer vision and natural language processing (NLP). Language models, such as the recent ChatGPT and GPT4 have demonstrated exceptional capabilities in processing, translating, and generating human languages. These breakthroughs have also been reflected in protein research, leading to the rapid development of numerous new methods in a short time, with unprecedented performance. Language models, in particular, have seen widespread use in protein research, as they have been utilized to embed proteins, generate novel ones, and predict tertiary structures. In this book chapter, we provide an overview of the use of protein generative models, reviewing 1) language models for the design of novel artificial proteins, 2) works that use non-Transformer architectures, and 3) applications in directed evolution approaches. |
1209.0337 | Antonio Freitas | Caroline Montaudouin, Marie Anson, Yi Hao, Susanne V. Duncker, Tahia
Fernandez, Emmanuelle Gaudin, Michael Ehrenstein, William G. Kerr, Jean-Herve
Colle, Pierre Bruhns, Marc Daeron, Antonio A. Freitas | Quorum sensing contributes to activated B cell homeostasis and to
prevent autoimmunity | 34 pages, 5 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maintenance of plasma IgM levels is critical for immune system function and
homeostasis in humans and mice. However, the mechanisms that control
homeostasis of the activated IgM-secreting B cells are unknown. After adoptive
transfer into immune-deficient hosts, B-lymphocytes expand poorly but fully
reconstitute the pool of natural IgM-secreting cells and circulating IgM
levels. By using sequential cell transfers and B cell populations from several
mutant mice, we were able to identify novel mechanisms regulating the size of
the IgM-secreting B cell pool. Contrary to previous mechanisms described
regulating homeostasis, which involve competition for the same niche by cells
having overlapping survival requirements, homeostasis of the innate
IgM-secreting B cell pool is also achieved when B cell populations are able to
monitor the number of activated B cells by detecting their secreted products.
Notably, B cell populations are able to assess the density of activated B cells
by sensing their secreted IgG. This process involves the Fc{\gamma}RIIB, a
low-affinity IgG receptor that is expressed on B cells and acts as a negative
regulator of B cell activation, and its intracellular effector the inositol
phosphatase SHIP. As a result of the engagement of this inhibitory pathway the
number of activated IgM-secreting B cells is kept under control. We hypothesize
that malfunction of this quorum-sensing mechanism may lead to uncontrolled B
cell activation and autoimmunity.
| [
{
"created": "Mon, 3 Sep 2012 13:14:56 GMT",
"version": "v1"
}
] | 2015-03-13 | [
[
"Montaudouin",
"Caroline",
""
],
[
"Anson",
"Marie",
""
],
[
"Hao",
"Yi",
""
],
[
"Duncker",
"Susanne V.",
""
],
[
"Fernandez",
"Tahia",
""
],
[
"Gaudin",
"Emmanuelle",
""
],
[
"Ehrenstein",
"Michael",
""
],
[
"Kerr",
"William G.",
""
],
[
"Colle",
"Jean-Herve",
""
],
[
"Bruhns",
"Pierre",
""
],
[
"Daeron",
"Marc",
""
],
[
"Freitas",
"Antonio A.",
""
]
] | Maintenance of plasma IgM levels is critical for immune system function and homeostasis in humans and mice. However, the mechanisms that control homeostasis of the activated IgM-secreting B cells are unknown. After adoptive transfer into immune-deficient hosts, B-lymphocytes expand poorly but fully reconstitute the pool of natural IgM-secreting cells and circulating IgM levels. By using sequential cell transfers and B cell populations from several mutant mice, we were able to identify novel mechanisms regulating the size of the IgM-secreting B cell pool. Contrary to previous mechanisms described regulating homeostasis, which involve competition for the same niche by cells having overlapping survival requirements, homeostasis of the innate IgM-secreting B cell pool is also achieved when B cell populations are able to monitor the number of activated B cells by detecting their secreted products. Notably, B cell populations are able to assess the density of activated B cells by sensing their secreted IgG. This process involves the Fc{\gamma}RIIB, a low-affinity IgG receptor that is expressed on B cells and acts as a negative regulator of B cell activation, and its intracellular effector the inositol phosphatase SHIP. As a result of the engagement of this inhibitory pathway the number of activated IgM-secreting B cells is kept under control. We hypothesize that malfunction of this quorum-sensing mechanism may lead to uncontrolled B cell activation and autoimmunity. |
1305.1985 | Jarrett Byrnes | Jarrett E. K. Byrnes, Lars Gamfeldt, Forest Isbell, Jonathan S.
Lefcheck, John N. Griffin, Andrew Hector, Bradley J. Cardinale, David U.
Hooper, Laura E. Dee, J. Emmett Duffy | Investigating the relationship between biodiversity and ecosystem
multifunctionality: Challenges and solutions | This article has been submitted to Methods in Ecology & Evolution for
review | null | 10.1111/2041-210X.12143 | null | q-bio.QM q-bio.PE stat.AP | http://creativecommons.org/licenses/by/3.0/ | Extensive research shows that more species-rich assemblages are generally
more productive and efficient in resource use than comparable assemblages with
fewer species. But the question of how diversity simultaneously affects the
wide variety of ecological functions that ecosystems perform remains relatively
understudied, and it presents several analytical and empirical challenges that
remain unresolved. In particular, researchers have developed several disparate
metrics to quantify multifunctionality, each characterizing different aspects
of the concept, and each with pros and cons. We compare four approaches to
characterizing multifunctionality and its dependence on biodiversity,
quantifying 1) magnitudes of multiple individual functions separately, 2) the
extent to which different species promote different functions, 3) the average
level of a suite of functions, and 4) the number of functions that
simultaneously exceed a critical threshold. We illustrate each approach using
data from the pan-European BIODEPTH experiment and the R multifunc package
developed for this purpose, evaluate the strengths and weaknesses of each
approach, and implement several methodological improvements. We conclude that an
extension of the fourth approach that systematically explores all possible
threshold values provides the most comprehensive description of
multifunctionality to date. We outline this method and recommend its use in
future research.
| [
{
"created": "Thu, 9 May 2013 01:08:37 GMT",
"version": "v1"
}
] | 2014-01-28 | [
[
"Byrnes",
"Jarrett E. K.",
""
],
[
"Gamfeldt",
"Lars",
""
],
[
"Isbell",
"Forest",
""
],
[
"Lefcheck",
"Jonathan S.",
""
],
[
"Griffin",
"John N.",
""
],
[
"Hector",
"Andrew",
""
],
[
"Cardinale",
"Bradley J.",
""
],
[
"Hooper",
"David U.",
""
],
[
"Dee",
"Laura E.",
""
],
[
"Duffy",
"J. Emmett",
""
]
] | Extensive research shows that more species-rich assemblages are generally more productive and efficient in resource use than comparable assemblages with fewer species. But the question of how diversity simultaneously affects the wide variety of ecological functions that ecosystems perform remains relatively understudied, and it presents several analytical and empirical challenges that remain unresolved. In particular, researchers have developed several disparate metrics to quantify multifunctionality, each characterizing different aspects of the concept, and each with pros and cons. We compare four approaches to characterizing multifunctionality and its dependence on biodiversity, quantifying 1) magnitudes of multiple individual functions separately, 2) the extent to which different species promote different functions, 3) the average level of a suite of functions, and 4) the number of functions that simultaneously exceed a critical threshold. We illustrate each approach using data from the pan-European BIODEPTH experiment and the R multifunc package developed for this purpose, evaluate the strengths and weaknesses of each approach, and implement several methodological improvements. We conclude that an extension of the fourth approach that systematically explores all possible threshold values provides the most comprehensive description of multifunctionality to date. We outline this method and recommend its use in future research. |
2401.10922 | Rohan Nurani Rajagopal | Nurani Rajagopal Rohan, Sayan Gupta, V. Srinivasa Chakravarthy | A Chaotic Associative Memory | 10 pages, 8 Figures, Submitted to "Chaos: An Interdisciplinary
Journal of Nonlinear Science" | null | null | null | q-bio.NC nlin.CD | http://creativecommons.org/licenses/by/4.0/ | We propose a novel Chaotic Associative Memory model using a network of
chaotic Rossler systems and investigate the storage capacity and retrieval
capabilities of this model as a function of increasing periodicity and chaos.
In early models of associative memory networks, memories were modeled as fixed
points, which may be mathematically convenient but has poor neurobiological
plausibility. Since brain dynamics is inherently oscillatory, attempts have
been made to construct associative memories using nonlinear oscillatory
networks. However, oscillatory associative memories are plagued by the problem
of poor storage capacity, though efforts have been made to improve capacity by
adding higher order oscillatory modes. The chaotic associative memory proposed
here exploits the continuous spectrum of chaotic elements and has higher
storage capacity than previously described oscillatory associative memories.
| [
{
"created": "Mon, 15 Jan 2024 13:32:32 GMT",
"version": "v1"
}
] | 2024-01-23 | [
[
"Rohan",
"Nurani Rajagopal",
""
],
[
"Gupta",
"Sayan",
""
],
[
"Chakravarthy",
"V. Srinivasa",
""
]
] | We propose a novel Chaotic Associative Memory model using a network of chaotic Rossler systems and investigate the storage capacity and retrieval capabilities of this model as a function of increasing periodicity and chaos. In early models of associative memory networks, memories were modeled as fixed points, which may be mathematically convenient but has poor neurobiological plausibility. Since brain dynamics is inherently oscillatory, attempts have been made to construct associative memories using nonlinear oscillatory networks. However, oscillatory associative memories are plagued by the problem of poor storage capacity, though efforts have been made to improve capacity by adding higher order oscillatory modes. The chaotic associative memory proposed here exploits the continuous spectrum of chaotic elements and has higher storage capacity than previously described oscillatory associative memories. |
1411.2896 | Carsten Conradi | Carsten Conradi and Maya Mincheva | Graph-theoretic analysis of multistationarity using degree theory | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biochemical mechanisms with mass action kinetics are often modeled by systems
of polynomial differential equations (DE). Determining directly if the DE
system has multiple equilibria (multistationarity) is difficult for realistic
systems, since they are large, nonlinear and contain many unknown parameters.
Mass action biochemical mechanisms can be represented by a directed bipartite
graph with species and reaction nodes. Graph-theoretic methods can then be used
to assess the potential of a given biochemical mechanism for multistationarity
by identifying structures in the bipartite graph referred to as critical
fragments. In this article we present a graph-theoretic method for conservative
biochemical mechanisms characterized by bounded species concentrations, which
makes the use of degree theory arguments possible. We illustrate the results
with an example of a MAPK network.
| [
{
"created": "Tue, 11 Nov 2014 17:38:25 GMT",
"version": "v1"
}
] | 2014-11-12 | [
[
"Conradi",
"Carsten",
""
],
[
"Mincheva",
"Maya",
""
]
] | Biochemical mechanisms with mass action kinetics are often modeled by systems of polynomial differential equations (DE). Determining directly if the DE system has multiple equilibria (multistationarity) is difficult for realistic systems, since they are large, nonlinear and contain many unknown parameters. Mass action biochemical mechanisms can be represented by a directed bipartite graph with species and reaction nodes. Graph-theoretic methods can then be used to assess the potential of a given biochemical mechanism for multistationarity by identifying structures in the bipartite graph referred to as critical fragments. In this article we present a graph-theoretic method for conservative biochemical mechanisms characterized by bounded species concentrations, which makes the use of degree theory arguments possible. We illustrate the results with an example of a MAPK network. |
1501.00302 | Iddo Friedberg | David C Ream, Asma R Bankapur, Iddo Friedberg | An Event-Driven Approach for Studying Gene Block Evolution in Bacteria | Accepted in Bioinformatics (OUP) | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Motivation: Gene blocks are genes co-located on the chromosome. In many
cases, gene blocks are conserved between bacterial species, sometimes as
operons, when genes are co-transcribed. The conservation is rarely absolute:
gene loss, gain, duplication, block splitting, and block fusion are frequently
observed. An open question in bacterial molecular evolution is that of the
formation and breakup of gene blocks, for which several models have been
proposed. These models, however, are not generally applicable to all types of
gene blocks, and consequently cannot be used to broadly compare and study gene
block evolution. To address this problem we introduce an event-based method for
tracking gene block evolution in bacteria. Results: We show here that the
evolution of gene blocks in proteobacteria can be described by a small set of
events. Those include the insertion of genes into, or the splitting of genes
out of a gene block, gene loss, and gene duplication. We show how the
event-based method of gene block evolution allows us to determine the
evolutionary rate, and to trace the ancestral states of their formation. We
conclude that the event-based method can be used to help us understand the
formation of these important bacterial genomic structures. Availability: The
software is available under GPLv3 license on
http://github.com/reamdc1/gene_block_evolution.git Supplementary online
material: http://iddo-friedberg.net/operon-evolution Contact: Iddo Friedberg
i.friedberg@miamioh.edu
| [
{
"created": "Thu, 1 Jan 2015 19:47:22 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Feb 2015 23:25:39 GMT",
"version": "v2"
}
] | 2015-02-26 | [
[
"Ream",
"David C",
""
],
[
"Bankapur",
"Asma R",
""
],
[
"Friedberg",
"Iddo",
""
]
] | Motivation: Gene blocks are genes co-located on the chromosome. In many cases, gene blocks are conserved between bacterial species, sometimes as operons, when genes are co-transcribed. The conservation is rarely absolute: gene loss, gain, duplication, block splitting, and block fusion are frequently observed. An open question in bacterial molecular evolution is that of the formation and breakup of gene blocks, for which several models have been proposed. These models, however, are not generally applicable to all types of gene blocks, and consequently cannot be used to broadly compare and study gene block evolution. To address this problem we introduce an event-based method for tracking gene block evolution in bacteria. Results: We show here that the evolution of gene blocks in proteobacteria can be described by a small set of events. Those include the insertion of genes into, or the splitting of genes out of a gene block, gene loss, and gene duplication. We show how the event-based method of gene block evolution allows us to determine the evolutionary rate, and to trace the ancestral states of their formation. We conclude that the event-based method can be used to help us understand the formation of these important bacterial genomic structures. Availability: The software is available under GPLv3 license on http://github.com/reamdc1/gene_block_evolution.git Supplementary online material: http://iddo-friedberg.net/operon-evolution Contact: Iddo Friedberg i.friedberg@miamioh.edu |
1505.03558 | Anna Melbinger | Anna Melbinger, Jonas Cremer and Erwin Frey | The Emergence of Cooperation from a Single Mutant during Microbial
Life-Cycles | main text: 14 pages, 5 figures; supplement: 4 pages, figures | Journal of the Royal Society Interface (2015), Vol. 12, Issue: 108 | 10.1098/rsif.2015.0171 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative behavior is widespread in nature, even though cooperating
individuals always run the risk of being exploited by free-riders. Population
structure effectively promotes cooperation, given that a threshold in the level
of cooperation has already been reached. However, the question of how cooperation can
emerge from a single mutant, which cannot rely on a benefit provided by other
cooperators, is still puzzling. Here, we investigate this question for a
well-defined but generic situation based on typical life-cycles of microbial
populations where individuals regularly form new colonies followed by growth
phases. We analyze two evolutionary mechanisms favoring cooperative behavior
and study their strength depending on the inoculation size and the length of a
life-cycle. In particular, we find that population bottlenecks followed by
exponential growth phases strongly increase the survival and fixation
probabilities of a single cooperator in a free-riding population.
| [
{
"created": "Wed, 13 May 2015 21:45:10 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Jun 2015 23:33:39 GMT",
"version": "v2"
}
] | 2015-06-16 | [
[
"Melbinger",
"Anna",
""
],
[
"Cremer",
"Jonas",
""
],
[
"Frey",
"Erwin",
""
]
] | Cooperative behavior is widespread in nature, even though cooperating individuals always run the risk of being exploited by free-riders. Population structure effectively promotes cooperation, given that a threshold in the level of cooperation has already been reached. However, the question of how cooperation can emerge from a single mutant, which cannot rely on a benefit provided by other cooperators, is still puzzling. Here, we investigate this question for a well-defined but generic situation based on typical life-cycles of microbial populations where individuals regularly form new colonies followed by growth phases. We analyze two evolutionary mechanisms favoring cooperative behavior and study their strength depending on the inoculation size and the length of a life-cycle. In particular, we find that population bottlenecks followed by exponential growth phases strongly increase the survival and fixation probabilities of a single cooperator in a free-riding population. |
2006.08471 | Piero Poletti | Piero Poletti, Marcello Tirani, Danilo Cereda, Filippo Trentini,
Giorgio Guzzetta, Giuliana Sabatino, Valentina Marziano, Ambra Castrofino,
Francesca Grosso, Gabriele Del Castillo, Raffaella Piccarreta, ATS Lombardy
COVID-19 Task Force, Aida Andreassi, Alessia Melegaro, Maria Gramegna, Marco
Ajelli, Stefano Merler | Probability of symptoms and critical disease after SARS-CoV-2 infection | sample increased: results updated with new records coming from the
ongoing serological surveys | JAMA Netw Open. 2021;4(3):e211085 | 10.1001/jamanetworkopen.2021.1085 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We quantified the probability of developing symptoms (respiratory or fever
\geq 37.5 {\deg}C) and critical disease (requiring intensive care or resulting
in death) of SARS-CoV-2 positive subjects. 5,484 contacts of SARS-CoV-2 index
cases detected in Lombardy, Italy were analyzed, and positive subjects were
ascertained via nasal swabs and serological assays. 73.9% of all infected
individuals aged less than 60 years did not develop symptoms (95% confidence
interval: 71.8-75.9%). The risk of symptoms increased with age. 6.6% of
infected subjects older than 60 years had critical disease, with males at
significantly higher risk.
| [
{
"created": "Mon, 15 Jun 2020 15:21:06 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jun 2020 12:22:55 GMT",
"version": "v2"
}
] | 2022-02-21 | [
[
"Poletti",
"Piero",
""
],
[
"Tirani",
"Marcello",
""
],
[
"Cereda",
"Danilo",
""
],
[
"Trentini",
"Filippo",
""
],
[
"Guzzetta",
"Giorgio",
""
],
[
"Sabatino",
"Giuliana",
""
],
[
"Marziano",
"Valentina",
""
],
[
"Castrofino",
"Ambra",
""
],
[
"Grosso",
"Francesca",
""
],
[
"Del Castillo",
"Gabriele",
""
],
[
"Piccarreta",
"Raffaella",
""
],
[
"Force",
"ATS Lombardy COVID-19 Task",
""
],
[
"Andreassi",
"Aida",
""
],
[
"Melegaro",
"Alessia",
""
],
[
"Gramegna",
"Maria",
""
],
[
"Ajelli",
"Marco",
""
],
[
"Merler",
"Stefano",
""
]
] | We quantified the probability of developing symptoms (respiratory or fever \geq 37.5 {\deg}C) and critical disease (requiring intensive care or resulting in death) of SARS-CoV-2 positive subjects. 5,484 contacts of SARS-CoV-2 index cases detected in Lombardy, Italy were analyzed, and positive subjects were ascertained via nasal swabs and serological assays. 73.9% of all infected individuals aged less than 60 years did not develop symptoms (95% confidence interval: 71.8-75.9%). The risk of symptoms increased with age. 6.6% of infected subjects older than 60 years had critical disease, with males at significantly higher risk. |
2112.07579 | Tuul Sepp | Ciara Baines, Richard Meitern, Randel Kreitsberg, Tuul Sepp | Comparative study of the evolution of human cancer gene duplications
across fish | 31 pages, 5 figures. Submitted to Molecular Biology and Evolution | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comparative studies of cancer-related genes allow us to gain novel
information about the evolution and function of these genes, but also to
understand cancer as a driving force in biological systems and species life
histories. So far, comparative studies of cancer genes have focused on mammals.
Here, we provide the first comparative study of cancer-related gene copy number
variation in fish. As fish are evolutionarily older and genetically more
diverse than mammals, their tumour suppression mechanisms should not only
include most of the mammalian mechanisms, but also reveal novel (but
potentially phylogenetically older) previously undetected mechanisms. We have
matched the sequenced genomes of 65 fish species from the Ensembl database
with the cancer gene information from the COSMIC database. By calculating the
number of gene copies across species using the Ensembl CAFE data (providing
species trees for gene copy number counts), we were able to develop a novel,
less resource demanding method for ortholog identification. Our analysis
demonstrates a masked relationship between cancer-related gene copy number
variation (CNV) and maximum lifespan in fish species, suggesting that higher
tumour suppressor gene CNV lengthens and oncogene CNV shortens lifespan, when
both traits are added to the model. Based on the correlation between tumour
suppressor and oncogene CNV, we were able to show which species have more
tumour suppressors in relation to oncogenes. It could therefore be suggested
that these species have stronger genetic defences against oncogenic processes.
Fish studies could yet be a largely unexplored treasure trove for understanding
the evolution and ecology of cancer, by providing novel insights into the study
of cancer and tumour suppression, in addition to the study of fish evolution,
life-history trade-offs, and ecology.
| [
{
"created": "Tue, 14 Dec 2021 17:35:01 GMT",
"version": "v1"
}
] | 2021-12-15 | [
[
"Baines",
"Ciara",
""
],
[
"Meitern",
"Richard",
""
],
[
"Kreitsberg",
"Randel",
""
],
[
"Sepp",
"Tuul",
""
]
] | Comparative studies of cancer-related genes allow us to gain novel information about the evolution and function of these genes, but also to understand cancer as a driving force in biological systems and species life histories. So far, comparative studies of cancer genes have focused on mammals. Here, we provide the first comparative study of cancer-related gene copy number variation in fish. As fish are evolutionarily older and genetically more diverse than mammals, their tumour suppression mechanisms should not only include most of the mammalian mechanisms, but also reveal novel (but potentially phylogenetically older) previously undetected mechanisms. We have matched the sequenced genomes of 65 fish species from the Ensembl database with the cancer gene information from the COSMIC database. By calculating the number of gene copies across species using the Ensembl CAFE data (providing species trees for gene copy number counts), we were able to develop a novel, less resource demanding method for ortholog identification. Our analysis demonstrates a masked relationship between cancer-related gene copy number variation (CNV) and maximum lifespan in fish species, suggesting that higher tumour suppressor gene CNV lengthens and oncogene CNV shortens lifespan, when both traits are added to the model. Based on the correlation between tumour suppressor and oncogene CNV, we were able to show which species have more tumour suppressors in relation to oncogenes. It could therefore be suggested that these species have stronger genetic defences against oncogenic processes. Fish studies could yet be a largely unexplored treasure trove for understanding the evolution and ecology of cancer, by providing novel insights into the study of cancer and tumour suppression, in addition to the study of fish evolution, life-history trade-offs, and ecology. |
0707.3047 | Alain Barrat | Aurelien Gautreau (LPT), Alain Barrat (LPT), Marc Barthelemy (DPTA) | Arrival Time Statistics in Global Disease Spread | null | J. Stat. Mech. (2007) L09001 | 10.1088/1742-5468/2007/09/L09001 | null | q-bio.PE | null | Metapopulation models describing cities with different populations coupled by
the travel of individuals are of great importance in the understanding of
disease spread on a large scale. An important example is the Rvachev-Longini
model [{\it Math. Biosci.} {\bf 75}, 3-22 (1985)] which is widely used in
computational epidemiology. Few analytical results are however available and in
particular little is known about paths followed by epidemics and disease
arrival times. We study the arrival time of a disease in a city as a function
of the starting seed of the epidemics. We propose an analytical Ansatz, test it
in the case of a spreading on the world wide air transportation network, and
show that it predicts accurately the arrival order of a disease in world-wide
cities.
| [
{
"created": "Fri, 20 Jul 2007 11:45:42 GMT",
"version": "v1"
}
] | 2007-09-19 | [
[
"Gautreau",
"Aurelien",
"",
"LPT"
],
[
"Barrat",
"Alain",
"",
"LPT"
],
[
"Barthelemy",
"Marc",
"",
"DPTA"
]
] | Metapopulation models describing cities with different populations coupled by the travel of individuals are of great importance in the understanding of disease spread on a large scale. An important example is the Rvachev-Longini model [{\it Math. Biosci.} {\bf 75}, 3-22 (1985)] which is widely used in computational epidemiology. Few analytical results are however available and in particular little is known about paths followed by epidemics and disease arrival times. We study the arrival time of a disease in a city as a function of the starting seed of the epidemics. We propose an analytical Ansatz, test it in the case of a spreading on the world wide air transportation network, and show that it predicts accurately the arrival order of a disease in world-wide cities. |
1604.07649 | Guillaume Attuel | G. Attuel and N. Derval and T. Desplantez and M. Haissaguerre and M.
Hocini and P. Ja\"is and R. Dubois | Critical fluctuations of the electrical activity of the heart:
Shortcomings of models of excitability and interpretation | submitted to EPL: received 12 August 2013, rejected 27 september 2013 | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We report unexpected evidence of critical fluctuations of the electric
potential of the heart during atrial fibrillation in humans. Scale invariance
and long range correlations are found, which we show cannot be accounted for
solely with the property of excitability, since disorder emerges by the
formation of chaotic patterns in excitable media. To shed light on the data, we
discuss the hypothesis that, in fact, fibrillation appears through a phase
transition, which we compare on phenomenological grounds to a quenched-in
disorder magnetic transition. We infer that, during propagation of pulses,
random pinning might occur due to random modulation of the gap junction
channels.
| [
{
"created": "Fri, 22 Apr 2016 16:12:06 GMT",
"version": "v1"
}
] | 2016-04-27 | [
[
"Attuel",
"G.",
""
],
[
"Derval",
"N.",
""
],
[
"Desplantez",
"T.",
""
],
[
"Haissaguerre",
"M.",
""
],
[
"Hocini",
"M.",
""
],
[
"Jaïs",
"P.",
""
],
[
"Dubois",
"R.",
""
]
] | We report unexpected evidence of critical fluctuations of the electric potential of the heart during atrial fibrillation in humans. Scale invariance and long range correlations are found, which we show cannot be accounted for solely with the property of excitability, since disorder emerges by the formation of chaotic patterns in excitable media. To shed light on the data, we discuss the hypothesis that, in fact, fibrillation appears through a phase transition, which we compare on phenomenological grounds to a quenched-in disorder magnetic transition. We infer that, during propagation of pulses, random pinning might occur due to random modulation of the gap junction channels. |
2007.04765 | Omar El Housni | Omar El Housni, Mika Sumida, Paat Rusmevichientong, Huseyin Topaloglu,
Serhan Ziya | Future Evolution of COVID-19 Pandemic in North Carolina: Can We Flatten
the Curve? | arXiv admin note: substantial text overlap with arXiv:2005.14700 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On June 24th, Governor Cooper announced that North Carolina will not be
moving into Phase 3 of its reopening process at least until July 17th. Given
the recent increases in daily positive cases and hospitalizations, this
decision was not surprising. However, given the political and economic
pressures which are forcing the state to reopen, it is not clear what actions
will help North Carolina to avoid the worst. We use a compartmentalized model
to study the effects of social distancing measures and testing capacity
combined with contact tracing on the evolution of the pandemic in North
Carolina until the end of the year. We find that going back to restrictions
that were in place during Phase 1 will slow down the spread but if the state
wants to continue to reopen or at least remain in Phase 2 or Phase 3 it needs
to significantly expand its testing and contact tracing capacity. Even under
our best-case scenario of high contact tracing effectiveness, the number of
contact tracers the state currently employs is inadequate.
| [
{
"created": "Fri, 3 Jul 2020 15:53:34 GMT",
"version": "v1"
}
] | 2020-07-10 | [
[
"Housni",
"Omar El",
""
],
[
"Sumida",
"Mika",
""
],
[
"Rusmevichientong",
"Paat",
""
],
[
"Topaloglu",
"Huseyin",
""
],
[
"Ziya",
"Serhan",
""
]
] | On June 24th, Governor Cooper announced that North Carolina will not be moving into Phase 3 of its reopening process at least until July 17th. Given the recent increases in daily positive cases and hospitalizations, this decision was not surprising. However, given the political and economic pressures which are forcing the state to reopen, it is not clear what actions will help North Carolina to avoid the worst. We use a compartmentalized model to study the effects of social distancing measures and testing capacity combined with contact tracing on the evolution of the pandemic in North Carolina until the end of the year. We find that going back to restrictions that were in place during Phase 1 will slow down the spread but if the state wants to continue to reopen or at least remain in Phase 2 or Phase 3 it needs to significantly expand its testing and contact tracing capacity. Even under our best-case scenario of high contact tracing effectiveness, the number of contact tracers the state currently employs is inadequate. |
1311.0399 | Sam Greenbury | Sam F. Greenbury, Iain G. Johnston, Ard A. Louis, Sebastian E. Ahnert | A tractable genotype-phenotype map for the self-assembly of protein
quaternary structure | 12 pages, 6 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mapping between biological genotypes and phenotypes is central to the
study of biological evolution. Here we introduce a rich, intuitive, and
biologically realistic genotype-phenotype (GP) map that serves as a model of
self-assembling biological structures, such as protein complexes, and remains
computationally and analytically tractable. Our GP map arises naturally from
the self-assembly of polyomino structures on a 2D lattice and exhibits a number
of properties: $\textit{redundancy}$ (genotypes vastly outnumber phenotypes),
$\textit{phenotype bias}$ (genotypic redundancy varies greatly between
phenotypes), $\textit{genotype component disconnectivity}$ (phenotypes consist
of disconnected mutational networks) and $\textit{shape space covering}$ (most
phenotypes can be reached in a small number of mutations). We also show that
the mutational robustness of phenotypes scales very roughly logarithmically
with phenotype redundancy and is positively correlated with phenotypic
evolvability. Although our GP map describes the assembly of disconnected
objects, it shares many properties with other popular GP maps for connected
units, such as models for RNA secondary structure or the HP lattice model for
protein tertiary structure. The remarkable fact that these important properties
similarly emerge from such different models suggests the possibility that
universal features underlie a much wider class of biologically realistic GP
maps.
| [
{
"created": "Sat, 2 Nov 2013 17:50:50 GMT",
"version": "v1"
}
] | 2013-11-05 | [
[
"Greenbury",
"Sam F.",
""
],
[
"Johnston",
"Iain G.",
""
],
[
"Louis",
"Ard A.",
""
],
[
"Ahnert",
"Sebastian E.",
""
]
] | The mapping between biological genotypes and phenotypes is central to the study of biological evolution. Here we introduce a rich, intuitive, and biologically realistic genotype-phenotype (GP) map, that serves as a model of self-assembling biological structures, such as protein complexes, and remains computationally and analytically tractable. Our GP map arises naturally from the self-assembly of polyomino structures on a 2D lattice and exhibits a number of properties: $\textit{redundancy}$ (genotypes vastly outnumber phenotypes), $\textit{phenotype bias}$ (genotypic redundancy varies greatly between phenotypes), $\textit{genotype component disconnectivity}$ (phenotypes consist of disconnected mutational networks) and $\textit{shape space covering}$ (most phenotypes can be reached in a small number of mutations). We also show that the mutational robustness of phenotypes scales very roughly logarithmically with phenotype redundancy and is positively correlated with phenotypic evolvability. Although our GP map describes the assembly of disconnected objects, it shares many properties with other popular GP maps for connected units, such as models for RNA secondary structure or the HP lattice model for protein tertiary structure. The remarkable fact that these important properties similarly emerge from such different models suggests the possibility that universal features underlie a much wider class of biologically realistic GP maps. |
2005.08380 | Andy Lau | Andy M. Lau, Jurgen Claesen, Kjetil Hansen, Argyris Politis | Deuteros 2.0: Peptide-level significance testing of data from hydrogen
deuterium exchange mass spectrometry | Application note with 3 pages, 1 figure | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Summary: Hydrogen deuterium exchange mass spectrometry (HDX-MS) is becoming
increasingly routine for monitoring changes in the structural dynamics of
proteins. Differential HDX-MS allows comparison of individual protein states,
such as in the absence or presence of a ligand. This can be used to attribute
changes in conformation to binding events, allowing the mapping of entire
conformational networks. As such, the number of necessary cross-state
comparisons quickly increases as additional states are introduced to the
system of study. There are currently very few software packages available that
offer quick and informative comparison of HDX-MS datasets and even fewer which
offer statistical analysis and advanced visualization. Following the feedback
from our original software Deuteros, we present Deuteros 2.0 which has been
redesigned from the ground up to fulfil a greater role in the HDX-MS analysis
pipeline. Deuteros 2.0 features a repertoire of facilities for back exchange
correction, data summarization, peptide-level statistical analysis and advanced
data plotting features. Availability: Deuteros 2.0 can be downloaded from
https://github.com/andymlau/Deuteros_2.0 under the Apache 2.0 license.
Installation of Deuteros 2.0 requires the MATLAB Runtime Library available free
of charge from MathWorks
(https://www.mathworks.com/products/compiler/matlab-runtime.html) and is
available for both Windows and Mac operating systems.
| [
{
"created": "Sun, 17 May 2020 22:01:10 GMT",
"version": "v1"
}
] | 2020-05-19 | [
[
"Lau",
"Andy M.",
""
],
[
"Claesen",
"Jurgen",
""
],
[
"Hansen",
"Kjetil",
""
],
[
"Politis",
"Argyris",
""
]
] | Summary: Hydrogen deuterium exchange mass spectrometry (HDX-MS) is becoming increasingly routine for monitoring changes in the structural dynamics of proteins. Differential HDX-MS allows comparison of individual protein states, such as in the absence or presence of a ligand. This can be used to attribute changes in conformation to binding events, allowing the mapping of entire conformational networks. As such, the number of necessary cross-state comparisons quickly increases as additional states are introduced to the system of study. There are currently very few software packages available that offer quick and informative comparison of HDX-MS datasets and even fewer which offer statistical analysis and advanced visualization. Following the feedback from our original software Deuteros, we present Deuteros 2.0 which has been redesigned from the ground up to fulfil a greater role in the HDX-MS analysis pipeline. Deuteros 2.0 features a repertoire of facilities for back exchange correction, data summarization, peptide-level statistical analysis and advanced data plotting features. Availability: Deuteros 2.0 can be downloaded from https://github.com/andymlau/Deuteros_2.0 under the Apache 2.0 license. Installation of Deuteros 2.0 requires the MATLAB Runtime Library available free of charge from MathWorks (https://www.mathworks.com/products/compiler/matlab-runtime.html) and is available for both Windows and Mac operating systems. |
1705.03322 | Ivo Siekmann | Ivo Siekmann | An applied mathematician's perspective on Rosennean Complexity | 33 pages, 1 figure | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theoretical biologist Robert Rosen developed a highly original approach
for investigating the question "What is life?", the most fundamental problem of
biology. Considering that Rosen made extensive use of mathematics it might seem
surprising that his ideas have only rarely been implemented in mathematical
models. On the one hand, Rosen propagates relational models that neglect
underlying structural details of the components and focus on relationships
between the elements of a biological system, according to the motto "throw away
the physics, keep the organisation". Rosen's strong rejection of mechanistic
models that he implicitly associates with a strong form of reductionism might
have deterred mathematical modellers from adopting his ideas for their own
work. On the other hand Rosen's presentation of his modelling framework, (M,R)
systems, is highly abstract which makes it hard to appreciate how this approach
could be applied to concrete biological problems. In this article, both the
mathematics as well as those aspects of Rosen's work are analysed that relate
to his philosophical ideas. It is shown that Rosen's relational models are a
particular type of mechanistic model with specific underlying assumptions
rather than a different kind of model that excludes mechanistic models. The
strengths and weaknesses of relational models are investigated by comparison
with current network biology literature. Finally, it is argued that Rosen's
definition of life, "organisms are closed to efficient causation", should be
considered as a hypothesis to be tested and ideas how this postulate could be
implemented in mathematical models are presented.
| [
{
"created": "Thu, 4 May 2017 16:50:37 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Aug 2017 10:22:12 GMT",
"version": "v2"
}
] | 2017-08-15 | [
[
"Siekmann",
"Ivo",
""
]
] | The theoretical biologist Robert Rosen developed a highly original approach for investigating the question "What is life?", the most fundamental problem of biology. Considering that Rosen made extensive use of mathematics it might seem surprising that his ideas have only rarely been implemented in mathematical models. On the one hand, Rosen propagates relational models that neglect underlying structural details of the components and focus on relationships between the elements of a biological system, according to the motto "throw away the physics, keep the organisation". Rosen's strong rejection of mechanistic models that he implicitly associates with a strong form of reductionism might have deterred mathematical modellers from adopting his ideas for their own work. On the other hand Rosen's presentation of his modelling framework, (M,R) systems, is highly abstract which makes it hard to appreciate how this approach could be applied to concrete biological problems. In this article, both the mathematics as well as those aspects of Rosen's work are analysed that relate to his philosophical ideas. It is shown that Rosen's relational models are a particular type of mechanistic model with specific underlying assumptions rather than a different kind of model that excludes mechanistic models. The strengths and weaknesses of relational models are investigated by comparison with current network biology literature. Finally, it is argued that Rosen's definition of life, "organisms are closed to efficient causation", should be considered as a hypothesis to be tested and ideas how this postulate could be implemented in mathematical models are presented. |
1404.3655 | Henrike Heyne | Henrike O. Heyne, Susann Lautenschl\"ager, Ronald Nelson, Fran\c{c}ois
Besnier, Maxime Rotival, Alexander Cagan, Rimma Kozhemyakina, Irina Z.
Plyusnina, Lyudmila Trut, \"Orjan Carlborg, Enrico Petretto, Leonid Kruglyak,
Svante P\"a\"abo, Torsten Sch\"oneberg, Frank W. Albert | Genetic Influences on Brain Gene Expression in Rats Selected for
Tameness and Aggression | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inter-individual differences in many behaviors are partly due to genetic
differences, but the identification of the genes and variants that influence
behavior remains challenging. Here, we studied an F2 intercross of two outbred
lines of rats selected for tame and aggressive behavior towards humans for more
than 64 generations. By using a mapping approach that is able to identify
genetic loci segregating within the lines, we identified four times more loci
influencing tameness and aggression than by an approach that assumes fixation
of causative alleles, suggesting that many causative loci were not driven to
fixation by the selection. We used RNA sequencing in 150 F2 animals to identify
hundreds of loci that influence brain gene expression. Several of these loci
colocalize with tameness loci and may reflect the same genetic variants.
Through analyses of correlations between allele effects on behavior and gene
expression, differential expression between the tame and aggressive rat
selection lines, and correlations between gene expression and tameness in F2
animals, we identify the genes Gltscr2, Lgi4, Zfp40 and Slc17a7 as candidate
contributors to the strikingly different behavior of the tame and aggressive
animals.
| [
{
"created": "Mon, 14 Apr 2014 17:09:25 GMT",
"version": "v1"
}
] | 2014-04-15 | [
[
"Heyne",
"Henrike O.",
""
],
[
"Lautenschläger",
"Susann",
""
],
[
"Nelson",
"Ronald",
""
],
[
"Besnier",
"François",
""
],
[
"Rotival",
"Maxime",
""
],
[
"Cagan",
"Alexander",
""
],
[
"Kozhemyakina",
"Rimma",
""
],
[
"Plyusnina",
"Irina Z.",
""
],
[
"Trut",
"Lyudmila",
""
],
[
"Carlborg",
"Örjan",
""
],
[
"Petretto",
"Enrico",
""
],
[
"Kruglyak",
"Leonid",
""
],
[
"Pääbo",
"Svante",
""
],
[
"Schöneberg",
"Torsten",
""
],
[
"Albert",
"Frank W.",
""
]
] | Inter-individual differences in many behaviors are partly due to genetic differences, but the identification of the genes and variants that influence behavior remains challenging. Here, we studied an F2 intercross of two outbred lines of rats selected for tame and aggressive behavior towards humans for more than 64 generations. By using a mapping approach that is able to identify genetic loci segregating within the lines, we identified four times more loci influencing tameness and aggression than by an approach that assumes fixation of causative alleles, suggesting that many causative loci were not driven to fixation by the selection. We used RNA sequencing in 150 F2 animals to identify hundreds of loci that influence brain gene expression. Several of these loci colocalize with tameness loci and may reflect the same genetic variants. Through analyses of correlations between allele effects on behavior and gene expression, differential expression between the tame and aggressive rat selection lines, and correlations between gene expression and tameness in F2 animals, we identify the genes Gltscr2, Lgi4, Zfp40 and Slc17a7 as candidate contributors to the strikingly different behavior of the tame and aggressive animals. |
2105.03866 | Guillaume Charrier | Lia Lamacque, Florian Sabin, Thierry Am\'eglio, St\'ephane Herbette,
Guillaume Charrier | Detection of acoustic events in Lavender for measuring the xylem
vulnerability to embolism and cellular damages | 6 figures, 1 table, + 2 supplementary figures and 1 supplementary
table | null | 10.1093/jxb/erac061 | null | q-bio.TO q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Acoustic emission analysis is a promising technique to investigate the
physiological events leading to drought-induced injuries and mortality.
However, the nature and the source of the acoustic emissions are not fully
understood and make the use of this technique difficult as a direct measure of
the loss of xylem hydraulic conductance. In this study, acoustic emissions were
recorded during severe dehydration in lavender plants and compared to the
dynamics of embolism development and cell lysis. The timing and characteristics
of acoustic signals from two independent recording systems were compared by
principal component analysis. In parallel, changes in water potential, branch
diameter, loss of hydraulic conductance and electrolyte leakage were measured
to quantify drought-induced damages. Two distinct phases of acoustic emissions
were observed during dehydration: the first one associated with a rapid loss of
diameter and a significant increase in loss of xylem conductance (90%) and the
second one with a significant increase in electrolyte leakage and slower
diameter changes. This second phase corresponds to a complete loss of recovery
capacity. The acoustic signals of both phases were discriminated by the third
and fourth principal components. The loss of hydraulic conductance during the
first acoustic phase suggests the hydraulic origin of these signals (i.e.
cavitation events). For the second phase, the signals showed much higher
variability between plants and acoustic systems suggesting that the sources of
these signals may be plural, although likely including cellular damage. A
simple algorithm was developed to discriminate hydraulic-related acoustic
signals from other sources, allowing the reconstruction of dynamic hydraulic
vulnerability curves. However, hydraulic failure precedes cellular damage, and
the lack of whole-plant recovery is associated with the latter.
| [
{
"created": "Sun, 9 May 2021 08:00:24 GMT",
"version": "v1"
}
] | 2022-02-17 | [
[
"Lamacque",
"Lia",
""
],
[
"Sabin",
"Florian",
""
],
[
"Améglio",
"Thierry",
""
],
[
"Herbette",
"Stéphane",
""
],
[
"Charrier",
"Guillaume",
""
]
] | Acoustic emission analysis is a promising technique to investigate the physiological events leading to drought-induced injuries and mortality. However, the nature and the source of the acoustic emissions are not fully understood and make the use of this technique difficult as a direct measure of the loss of xylem hydraulic conductance. In this study, acoustic emissions were recorded during severe dehydration in lavender plants and compared to the dynamics of embolism development and cell lysis. The timing and characteristics of acoustic signals from two independent recording systems were compared by principal component analysis. In parallel, changes in water potential, branch diameter, loss of hydraulic conductance and electrolyte leakage were measured to quantify drought-induced damages. Two distinct phases of acoustic emissions were observed during dehydration: the first one associated with a rapid loss of diameter and a significant increase in loss of xylem conductance (90%) and the second one with a significant increase in electrolyte leakage and slower diameter changes. This second phase corresponds to a complete loss of recovery capacity. The acoustic signals of both phases were discriminated by the third and fourth principal components. The loss of hydraulic conductance during the first acoustic phase suggests the hydraulic origin of these signals (i.e. cavitation events). For the second phase, the signals showed much higher variability between plants and acoustic systems suggesting that the sources of these signals may be plural, although likely including cellular damage. A simple algorithm was developed to discriminate hydraulic-related acoustic signals from other sources, allowing the reconstruction of dynamic hydraulic vulnerability curves. However, hydraulic failure precedes cellular damage, and the lack of whole-plant recovery is associated with the latter.
1211.7196 | Daniel Gamermann Dr. | D. Gamermann, A. Montagud, R. A. Jaime Infante, J. Triana, P. F. de
C\'ordoba, J. F. Urchuegu\'ia | PyNetMet: Python tools for efficient work with networks and metabolic
models | 1 Figure, 2 Tables | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The study of genome-scale metabolic models and their underlying
networks is one of the most important fields in systems biology. The complexity
of these models and their description makes the use of computational tools an
essential element in their research. Therefore there is a strong need for
efficient and versatile computational tools for research in this area.
Results: In this manuscript we present PyNetMet, a Python library of tools to
work with networks and metabolic models. These are free, open-source tools for
use in a Python platform, which adds considerable versatility compared with
their desktop counterparts. Moreover, these tools
allow one to work with different standards of metabolic models (OptGene and
SBML) and the fact that they are programmed in Python opens the possibility of
efficient integration with any other already existing Python tool.
Conclusions: PyNetMet is, therefore, a collection of computational tools that
will facilitate the research work with metabolic models and networks.
| [
{
"created": "Fri, 30 Nov 2012 09:25:06 GMT",
"version": "v1"
}
] | 2012-12-03 | [
[
"Gamermann",
"D.",
""
],
[
"Montagud",
"A.",
""
],
[
"Infante",
"R. A. Jaime",
""
],
[
"Triana",
"J.",
""
],
[
"de Córdoba",
"P. F.",
""
],
[
"Urchueguía",
"J. F.",
""
]
] | Background: The study of genome-scale metabolic models and their underlying networks is one of the most important fields in systems biology. The complexity of these models and their description makes the use of computational tools an essential element in their research. Therefore there is a strong need for efficient and versatile computational tools for research in this area. Results: In this manuscript we present PyNetMet, a Python library of tools to work with networks and metabolic models. These are free, open-source tools for use in a Python platform, which adds considerable versatility compared with their desktop counterparts. Moreover, these tools allow one to work with different standards of metabolic models (OptGene and SBML) and the fact that they are programmed in Python opens the possibility of efficient integration with any other already existing Python tool. Conclusions: PyNetMet is, therefore, a collection of computational tools that will facilitate the research work with metabolic models and networks.
q-bio/0511027 | Changbong Hyeon | G. Caliskan, C. Hyeon, U. Perez-Salas, R. M. Briber, S. A. Woodson,
and D. Thirumalai | Persistence Length Changes Dramatically as RNA Folds | 4 page. Phys. Rev. Lett. in press | Phys. Rev. Lett 95, 268303 (2005) | 10.1103/PhysRevLett.95.268303 | null | q-bio.BM cond-mat.soft q-bio.QM | null | We determine the persistence length, $l_p$, for a bacterial group I ribozyme
as a function of concentration of monovalent and divalent cations by fitting
the distance distribution functions $P(r)$ obtained from small angle X-ray
scattering intensity data to the asymptotic form of the calculated $P_{WLC}(r)$
for a worm-like chain (WLC). The $l_p$ values change dramatically over a narrow
range of \Mg concentration from $\sim$21 \AA in the unfolded state (\textbf{U})
to $\sim$10 \AA in the compact ($\mathrm{I_C}$) and native states. Variations
in $l_p$ with increasing \Na concentration are more gradual. In accord with the
predictions of polyelectrolyte theory we find $l_p \propto 1/ \kappa^2$ where
$\kappa$ is the inverse Debye-screening length.
| [
{
"created": "Wed, 16 Nov 2005 00:01:08 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Caliskan",
"G.",
""
],
[
"Hyeon",
"C.",
""
],
[
"Perez-Salas",
"U.",
""
],
[
"Briber",
"R. M.",
""
],
[
"Woodson",
"S. A.",
""
],
[
"Thirumalai",
"D.",
""
]
] | We determine the persistence length, $l_p$, for a bacterial group I ribozyme as a function of concentration of monovalent and divalent cations by fitting the distance distribution functions $P(r)$ obtained from small angle X-ray scattering intensity data to the asymptotic form of the calculated $P_{WLC}(r)$ for a worm-like chain (WLC). The $l_p$ values change dramatically over a narrow range of \Mg concentration from $\sim$21 \AA in the unfolded state (\textbf{U}) to $\sim$10 \AA in the compact ($\mathrm{I_C}$) and native states. Variations in $l_p$ with increasing \Na concentration are more gradual. In accord with the predictions of polyelectrolyte theory we find $l_p \propto 1/ \kappa^2$ where $\kappa$ is the inverse Debye-screening length. |
2307.11365 | Sarah Vollert | Sarah A. Vollert, Christopher Drovandi, Matthew P. Adams | Unlocking ensemble ecosystem modelling for large and complex networks | null | PLoS Comput Biol 20(3): e1011976 | 10.1371/journal.pcbi.1011976 | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-sa/4.0/ | The potential effects of conservation actions on threatened species can be
predicted using ensemble ecosystem models by forecasting populations with and
without intervention. These model ensembles commonly assume stable coexistence
of species in the absence of available data. However, existing
ensemble-generation methods become computationally inefficient as the size of
the ecosystem network increases, preventing larger networks from being studied.
We present a novel sequential Monte Carlo sampling approach for ensemble
generation that is orders of magnitude faster than existing approaches. We
demonstrate that the methods produce equivalent parameter inferences, model
predictions, and tightly constrained parameter combinations using a novel
sensitivity analysis method. For one case study, we demonstrate a speed-up from
108 days to 6 hours, while maintaining equivalent ensembles. Additionally, we
demonstrate how to identify the parameter combinations that strongly drive
feasibility and stability, drawing ecological insight from the ensembles. Now,
for the first time, larger and more realistic networks can be practically
simulated and analysed.
| [
{
"created": "Fri, 21 Jul 2023 05:36:24 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jan 2024 01:21:48 GMT",
"version": "v2"
}
] | 2024-03-22 | [
[
"Vollert",
"Sarah A.",
""
],
[
"Drovandi",
"Christopher",
""
],
[
"Adams",
"Matthew P.",
""
]
] | The potential effects of conservation actions on threatened species can be predicted using ensemble ecosystem models by forecasting populations with and without intervention. These model ensembles commonly assume stable coexistence of species in the absence of available data. However, existing ensemble-generation methods become computationally inefficient as the size of the ecosystem network increases, preventing larger networks from being studied. We present a novel sequential Monte Carlo sampling approach for ensemble generation that is orders of magnitude faster than existing approaches. We demonstrate that the methods produce equivalent parameter inferences, model predictions, and tightly constrained parameter combinations using a novel sensitivity analysis method. For one case study, we demonstrate a speed-up from 108 days to 6 hours, while maintaining equivalent ensembles. Additionally, we demonstrate how to identify the parameter combinations that strongly drive feasibility and stability, drawing ecological insight from the ensembles. Now, for the first time, larger and more realistic networks can be practically simulated and analysed. |
1304.7991 | Simon Mitternacht | S. {\AE}. J\'onsson, S. Mitternacht, A. Irb\"ack | Mechanical resistance in unstructured proteins | v3: Added correct journal reference plus minor corrections | Biophysical Journal, Volume 104, Issue 12, 2725-2732, 18 June 2013 | 10.1016/j.bpj.2013.05.003 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-molecule pulling experiments on unstructured proteins linked to
neurodegenerative diseases have measured rupture forces comparable to those for
stable folded proteins. To investigate the structural mechanisms of this
unexpected force resistance, we perform pulling simulations of the amyloid
{\beta}-peptide (A{\beta}) and {\alpha}-synuclein ({\alpha}S), starting from
simulated conformational ensembles for the free monomers. For both proteins,
the simulations yield a set of rupture events that agree well with the
experimental data. By analyzing the conformations right before rupture in each
event, we find that the mechanically resistant structures share a common
architecture, with similarities to the folds adopted by A{\beta} and {\alpha}S
in amyloid fibrils. The disease-linked Arctic mutation of A{\beta} is found to
increase the occurrence of highly force-resistant structures. Our study
suggests that the high rupture forces observed in A{\beta} and {\alpha}S
pulling experiments are caused by structures that might have a key role in
amyloid formation.
| [
{
"created": "Tue, 30 Apr 2013 13:30:54 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2013 09:14:35 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2013 19:13:38 GMT",
"version": "v3"
}
] | 2013-06-19 | [
[
"Jónsson",
"S. Æ.",
""
],
[
"Mitternacht",
"S.",
""
],
[
"Irbäck",
"A.",
""
]
] | Single-molecule pulling experiments on unstructured proteins linked to neurodegenerative diseases have measured rupture forces comparable to those for stable folded proteins. To investigate the structural mechanisms of this unexpected force resistance, we perform pulling simulations of the amyloid {\beta}-peptide (A{\beta}) and {\alpha}-synuclein ({\alpha}S), starting from simulated conformational ensembles for the free monomers. For both proteins, the simulations yield a set of rupture events that agree well with the experimental data. By analyzing the conformations right before rupture in each event, we find that the mechanically resistant structures share a common architecture, with similarities to the folds adopted by A{\beta} and {\alpha}S in amyloid fibrils. The disease-linked Arctic mutation of A{\beta} is found to increase the occurrence of highly force-resistant structures. Our study suggests that the high rupture forces observed in A{\beta} and {\alpha}S pulling experiments are caused by structures that might have a key role in amyloid formation. |
1811.07592 | Giulio Bondanelli | Giulio Bondanelli and Srdjan Ostojic | Coding with transient trajectories in recurrent neural networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following a stimulus, the neural response typically strongly varies in time
and across neurons before settling to a steady-state. While classical
population coding theory disregards the temporal dimension, recent works have
argued that trajectories of transient activity can be particularly informative
about stimulus identity and may form the basis of computations through
dynamics. Yet the dynamical mechanisms needed to generate a population code
based on transient trajectories have not been fully elucidated. Here we examine
transient coding in a broad class of high-dimensional linear networks of
recurrently connected units. We start by reviewing a well-known result that
leads to a distinction between two classes of networks: networks in which all
inputs lead to weak, decaying transients, and networks in which specific inputs
elicit strongly amplified transient responses and are mapped onto orthogonal
output states during the dynamics. These two classes are simply distinguished
based on the spectrum of the symmetric part of the connectivity matrix. For the
second class of networks, which is a sub-class of non-normal networks, we
provide a procedure to identify transiently amplified inputs and the
corresponding readouts. We first apply these results to standard
randomly-connected and two-population networks. We then build minimal, low-rank
networks that robustly implement trajectories mapping a specific input onto a
specific output state. Finally, we demonstrate that the capacity of the
obtained networks increases proportionally with their size.
| [
{
"created": "Mon, 19 Nov 2018 10:25:31 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jul 2019 13:52:11 GMT",
"version": "v2"
}
] | 2019-07-05 | [
[
"Bondanelli",
"Giulio",
""
],
[
"Ostojic",
"Srdjan",
""
]
] | Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit strongly amplified transient responses and are mapped onto orthogonal output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
1309.4039 | Gerrit Ansmann | Klaus Lehnertz and Gerrit Ansmann and Stephan Bialonski and Henning
Dickten and Christian Geier and Stephan Porz | Evolving networks in the human epileptic brain | In press (Physica D) | Physica D 267, 7-15 (2014) | 10.1016/j.physd.2013.06.009 | null | q-bio.NC physics.data-an physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network theory provides novel concepts that promise an improved
characterization of interacting dynamical systems. Within this framework,
evolving networks can be considered as being composed of nodes, representing
systems, and of time-varying edges, representing interactions between these
systems. This approach is highly attractive to further our understanding of the
physiological and pathophysiological dynamics in human brain networks. Indeed,
there is growing evidence that the epileptic process can be regarded as a
large-scale network phenomenon. We here review methodologies for inferring
networks from empirical time series and for a characterization of these
evolving networks. We summarize recent findings derived from studies that
investigate human epileptic brain networks evolving on timescales ranging from
a few seconds to weeks. We point to possible pitfalls and open issues, and
discuss future perspectives.
| [
{
"created": "Mon, 16 Sep 2013 17:05:56 GMT",
"version": "v1"
}
] | 2014-08-26 | [
[
"Lehnertz",
"Klaus",
""
],
[
"Ansmann",
"Gerrit",
""
],
[
"Bialonski",
"Stephan",
""
],
[
"Dickten",
"Henning",
""
],
[
"Geier",
"Christian",
""
],
[
"Porz",
"Stephan",
""
]
] | Network theory provides novel concepts that promise an improved characterization of interacting dynamical systems. Within this framework, evolving networks can be considered as being composed of nodes, representing systems, and of time-varying edges, representing interactions between these systems. This approach is highly attractive to further our understanding of the physiological and pathophysiological dynamics in human brain networks. Indeed, there is growing evidence that the epileptic process can be regarded as a large-scale network phenomenon. We here review methodologies for inferring networks from empirical time series and for a characterization of these evolving networks. We summarize recent findings derived from studies that investigate human epileptic brain networks evolving on timescales ranging from a few seconds to weeks. We point to possible pitfalls and open issues, and discuss future perspectives.
1409.0605 | Tiberiu Harko | Tiberiu Harko, M. K. Mak | Travelling wave solutions of the reaction-diffusion mathematical model
of glioblastoma growth: An Abel equation based approach | 29 pages, 4 figures, accepted for publication in Mathematical
Biosciences and Engineering | Mathematical Biosciences and Engineering, vol. 12, pp. 41-69, 2015 | null | null | q-bio.TO physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider quasi-stationary (travelling wave type) solutions to a nonlinear
reaction-diffusion equation with arbitrary, autonomous coefficients, describing
the evolution of glioblastomas, aggressive primary brain tumors that are
characterized by extensive infiltration into the brain and are highly resistant
to treatment. The second order nonlinear equation describing the glioblastoma
growth through travelling waves can be reduced to a first order Abel type
equation. By using the integrability conditions for the Abel equation several
classes of exact travelling wave solutions of the general reaction-diffusion
equation that describes glioblastoma growth are obtained, corresponding to
different forms of the product of the diffusion and reaction functions. The
solutions are obtained by using the Chiellini lemma and the Lemke
transformation, respectively, and the corresponding equations represent
generalizations of the classical Fisher--Kolmogorov equation. The biological
implications of two classes of solutions are also investigated by using both
numerical and semi-analytical methods for realistic values of the biological
parameters.
| [
{
"created": "Tue, 2 Sep 2014 04:06:45 GMT",
"version": "v1"
}
] | 2014-12-17 | [
[
"Harko",
"Tiberiu",
""
],
[
"Mak",
"M. K.",
""
]
] | We consider quasi-stationary (travelling wave type) solutions to a nonlinear reaction-diffusion equation with arbitrary, autonomous coefficients, describing the evolution of glioblastomas, aggressive primary brain tumors that are characterized by extensive infiltration into the brain and are highly resistant to treatment. The second order nonlinear equation describing the glioblastoma growth through travelling waves can be reduced to a first order Abel type equation. By using the integrability conditions for the Abel equation several classes of exact travelling wave solutions of the general reaction-diffusion equation that describes glioblastoma growth are obtained, corresponding to different forms of the product of the diffusion and reaction functions. The solutions are obtained by using the Chiellini lemma and the Lemke transformation, respectively, and the corresponding equations represent generalizations of the classical Fisher--Kolmogorov equation. The biological implications of two classes of solutions are also investigated by using both numerical and semi-analytical methods for realistic values of the biological parameters. |
q-bio/0508019 | Jayprokas Chakrabarti | Smarajit Das, Zhumur Ghosh, Jayprokas Chakrabarti, Bibekanand Mallick
and Satyabrata Sahoo | Positioning Crenarchaeal tRNA-Introns | 15 pages, 2 figures | null | null | null | q-bio.GN | null | We precisely position a noncanonical intron in the odd second copy of
the tRNAAsp(GTC) gene in the newly sequenced crenarchaeon S. acidocaldarius.
The uniform assortment of some features from the normal aspartate tDNA and some
from those corresponding to non-standard amino acids leads us to conjecture
that it is a novel tRNA gene, probably coding for a modified aspartate residue.
Further, we reposition the intron in the tRNAHis(GUG) gene in P. aerophilum.
The BHB motifs at the exon-intron boundaries are re-analyzed and found to
support our conjectures.
| [
{
"created": "Tue, 16 Aug 2005 10:28:12 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Das",
"Smarajit",
""
],
[
"Ghosh",
"Zhumur",
""
],
[
"Chakrabarti",
"Jayprokas",
""
],
[
"Mallick",
"Bibekanand",
""
],
[
"Sahoo",
"Satyabrata",
""
]
] | We precisely position a noncanonical intron in the odd second copy of the tRNAAsp(GTC) gene in the newly sequenced crenarchaeon S. acidocaldarius. The uniform assortment of some features from the normal aspartate tDNA and some from those corresponding to non-standard amino acids leads us to conjecture that it is a novel tRNA gene, probably coding for a modified aspartate residue. Further, we reposition the intron in the tRNAHis(GUG) gene in P. aerophilum. The BHB motifs at the exon-intron boundaries are re-analyzed and found to support our conjectures.
1307.5878 | Joshua M. Deutsch | J. M. Deutsch and Ian P. Lewis | Motor function in interpolar microtubules during metaphase | 13 pages, 7 figures | null | null | null | q-bio.CB q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze experimental observations of microtubules undergoing small
fluctuations about a "balance point" when mixed in solution of two different
kinesin motor proteins, KLP61F and Ncd. It has been proposed that the
microtubule movement is due to stochastic variations in the densities of the
two species of motor proteins. We test this hypothesis here by showing how it
maps onto a one-dimensional random walk in a random environment. Our estimate
of the amplitude of the fluctuations agrees with experimental observations. We
point out that there is an initial transient in the position of the microtubule
where it will typically move of order its own length. We compare the physics of
this gliding assay to a recent theory of the role of antagonistic motors on
restricting interpolar microtubule sliding of a cell's mitotic spindle during
prometaphase. It is concluded that randomly positioned antagonistic motors can
restrict relative movement of microtubules, however they do so imperfectly. A
variation in motor concentrations is also analyzed and shown to lead to greater
control of spindle length.
| [
{
"created": "Mon, 22 Jul 2013 20:37:23 GMT",
"version": "v1"
}
] | 2013-07-24 | [
[
"Deutsch",
"J. M.",
""
],
[
"Lewis",
"Ian P.",
""
]
] | We analyze experimental observations of microtubules undergoing small fluctuations about a "balance point" when mixed in solution of two different kinesin motor proteins, KLP61F and Ncd. It has been proposed that the microtubule movement is due to stochastic variations in the densities of the two species of motor proteins. We test this hypothesis here by showing how it maps onto a one-dimensional random walk in a random environment. Our estimate of the amplitude of the fluctuations agrees with experimental observations. We point out that there is an initial transient in the position of the microtubule where it will typically move of order its own length. We compare the physics of this gliding assay to a recent theory of the role of antagonistic motors on restricting interpolar microtubule sliding of a cell's mitotic spindle during prometaphase. It is concluded that randomly positioned antagonistic motors can restrict relative movement of microtubules, however they do so imperfectly. A variation in motor concentrations is also analyzed and shown to lead to greater control of spindle length. |
1703.09994 | Irina Kareva | Irina Kareva | Angiogenesis regulators as a possible key to accelerated growth of
secondary tumors following primary tumor resection | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Resection of primary tumors is often followed by accelerated growth of
metastases. Here we propose that this effect may be due to the fact that
resection of the primary tumor results in a decrease in the total systemic
amount of angiogenesis stimulators, such as VEGF and bFGF. This in turn causes
a decrease in the systemic level of angiogenesis inhibitors, such as PF-4 and
TSP-1, which at least temporarily relieves inhibition of secondary tumors,
allowing them to grow. This construct is predicated on the notion that the
systemic level of angiogenesis inhibitors is regulated by the systemic level of
angiogenesis stimulators, as the host is trying to maintain the homeostatic
balance of stimulators to inhibitors in the body. We evaluate this hypothesis
using a conceptual mathematical model and show that indeed, this mechanism can
explain accelerated growth of secondary tumors following resection of a primary
tumor. We also show that there exists a tradeoff between time of surgery and
time to onset of metastatic growth. We conclude with a discussion of possible
therapeutic approaches that may counteract this effect and reduce metastatic
recurrences after surgery.
| [
{
"created": "Wed, 29 Mar 2017 12:10:30 GMT",
"version": "v1"
}
] | 2017-03-30 | [
[
"Kareva",
"Irina",
""
]
] | Resection of primary tumors is often followed by accelerated growth of metastases. Here we propose that this effect may be due to the fact that resection of the primary tumor results in a decrease in the total systemic amount of angiogenesis stimulators, such as VEGF and bFGF. This in turn causes a decrease in the systemic level of angiogenesis inhibitors, such as PF-4 and TSP-1, which at least temporarily relieves inhibition of secondary tumors, allowing them to grow. This construct is predicated on the notion that the systemic level of angiogenesis inhibitors is regulated by the systemic level of angiogenesis stimulators, as the host is trying to maintain the homeostatic balance of stimulators to inhibitors in the body. We evaluate this hypothesis using a conceptual mathematical model and show that indeed, this mechanism can explain accelerated growth of secondary tumors following resection of a primary tumor. We also show that there exists a tradeoff between time of surgery and time to onset of metastatic growth. We conclude with a discussion of possible therapeutic approaches that may counteract this effect and reduce metastatic recurrences after surgery.
1409.2207 | Liane Gabora | Paul Sowden, Andrew Pringle, and Liane Gabora | The Shifting Sands of Creative Thinking: Connections to Dual Process
Theory | 17 pages | Thinking & Reasoning, 21(1), 40-60 (2015) | 10.1080/13546783.2014.885464 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dual process models of cognition suggest there are two kinds of thought:
rapid, automatic Type 1 processes, and effortful, controlled Type 2 processes.
Models of creative thinking also distinguish between two sets of processes:
those involved in the generation of ideas, and those involved with their
refinement, evaluation and/or selection. Here we review dual process models in
both these literatures and delineate the similarities and differences. Both
generative and evaluative creative processing modes involve elements that have
been attributed to each of the dual processes of cognition. We explore the
notion that creative thinking may rest upon the nature of a shifting process
between generative and evaluative modes of thought. We suggest that through a
synthesis application of the evidence bases on dual process models of cognition
and from neuroimaging, together with developing chronometric approaches to
explore the shifting process, could assist the development of interventions to
facilitate creativity.
| [
{
"created": "Mon, 8 Sep 2014 05:04:30 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2019 21:51:13 GMT",
"version": "v2"
}
] | 2019-07-17 | [
[
"Sowden",
"Paul",
""
],
[
"Pringle",
"Andrew",
""
],
[
"Gabora",
"Liane",
""
]
] | Dual process models of cognition suggest there are two kinds of thought: rapid, automatic Type 1 processes, and effortful, controlled Type 2 processes. Models of creative thinking also distinguish between two sets of processes: those involved in the generation of ideas, and those involved with their refinement, evaluation and/or selection. Here we review dual process models in both these literatures and delineate the similarities and differences. Both generative and evaluative creative processing modes involve elements that have been attributed to each of the dual processes of cognition. We explore the notion that creative thinking may rest upon the nature of a shifting process between generative and evaluative modes of thought. We suggest that through a synthesis application of the evidence bases on dual process models of cognition and from neuroimaging, together with developing chronometric approaches to explore the shifting process, could assist the development of interventions to facilitate creativity. |
1307.7810 | Aaron Darling | Denisa Duma, Mary Wootters, Anna C. Gilbert, Hung Q. Ngo, Atri Rudra,
Matthew Alpert, Timothy J. Close, Gianfranco Ciardo, and Stefano Lonardi | Accurate Decoding of Pooled Sequenced Data Using Compressed Sensing | Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013) | null | null | null | q-bio.QM cs.CE cs.IT math.IT q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to overcome the limitations imposed by DNA barcoding when
multiplexing a large number of samples in the current generation of
high-throughput sequencing instruments, we have recently proposed a new
protocol that leverages advances in combinatorial pooling design (group
testing) doi:10.1371/journal.pcbi.1003010. We have also demonstrated how this
new protocol would enable de novo selective sequencing and assembly of large,
highly-repetitive genomes. Here we address the problem of decoding pooled
sequenced data obtained from such a protocol. Our algorithm employs a
synergistic combination of ideas from compressed sensing and the decoding of
error-correcting codes. Experimental results on synthetic data for the rice
genome and real data for the barley genome show that our novel decoding
algorithm enables significantly higher quality assemblies than the previous
approach.
| [
{
"created": "Tue, 30 Jul 2013 04:34:34 GMT",
"version": "v1"
}
] | 2013-08-02 | [
[
"Duma",
"Denisa",
""
],
[
"Wootters",
"Mary",
""
],
[
"Gilbert",
"Anna C.",
""
],
[
"Ngo",
"Hung Q.",
""
],
[
"Rudra",
"Atri",
""
],
[
"Alpert",
"Matthew",
""
],
[
"Close",
"Timothy J.",
""
],
[
"Ciardo",
"Gianfranco",
""
],
[
"Lonardi",
"Stefano",
""
]
] | In order to overcome the limitations imposed by DNA barcoding when multiplexing a large number of samples in the current generation of high-throughput sequencing instruments, we have recently proposed a new protocol that leverages advances in combinatorial pooling design (group testing) doi:10.1371/journal.pcbi.1003010. We have also demonstrated how this new protocol would enable de novo selective sequencing and assembly of large, highly-repetitive genomes. Here we address the problem of decoding pooled sequenced data obtained from such a protocol. Our algorithm employs a synergistic combination of ideas from compressed sensing and the decoding of error-correcting codes. Experimental results on synthetic data for the rice genome and real data for the barley genome show that our novel decoding algorithm enables significantly higher quality assemblies than the previous approach. |
1605.02166 | Tatsuya Sasaki | Tatsuya Sasaki, Isamu Okada, Yutaka Nakai | The evolution of conditional moral assessment in indirect reciprocity | 27 pages, 2 figures and 2 tables | Scientific Reports 7, 41870 (2017) | 10.1038/srep41870 | null | q-bio.PE cs.SI nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indirect reciprocity is a major mechanism in the maintenance of cooperation
among unrelated individuals. Indirect reciprocity leads to conditional
cooperation according to social norms that discriminate the good (those who
deserve to be rewarded with help) and the bad (those who should be punished by
refusal of help). Despite intensive research, however, there is no definitive
consensus on what social norms best promote cooperation through indirect
reciprocity, and it remains unclear even how those who refuse to help the bad
should be assessed. Here, we propose a new simple norm called "Staying" that
prescribes abstaining from assessment. Under the Staying norm, the image of the
person who makes the decision to give help stays the same as in the last
assessment if the person on the receiving end has a bad image. In this case,
the choice about whether or not to give help to the potential receiver does not
affect the image of the potential giver. We analyze the Staying norm in terms
of evolutionary game theory and demonstrate that Staying is most effective in
establishing cooperation compared to the prevailing social norms, which rely on
constant monitoring and unconditional assessment. The application of Staying
suggests that the strict application of moral judgment is limited.
| [
{
"created": "Sat, 7 May 2016 10:28:53 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Feb 2017 13:39:06 GMT",
"version": "v2"
}
] | 2017-02-07 | [
[
"Sasaki",
"Tatsuya",
""
],
[
"Okada",
"Isamu",
""
],
[
"Nakai",
"Yutaka",
""
]
] | Indirect reciprocity is a major mechanism in the maintenance of cooperation among unrelated individuals. Indirect reciprocity leads to conditional cooperation according to social norms that discriminate the good (those who deserve to be rewarded with help) and the bad (those who should be punished by refusal of help). Despite intensive research, however, there is no definitive consensus on what social norms best promote cooperation through indirect reciprocity, and it remains unclear even how those who refuse to help the bad should be assessed. Here, we propose a new simple norm called "Staying" that prescribes abstaining from assessment. Under the Staying norm, the image of the person who makes the decision to give help stays the same as in the last assessment if the person on the receiving end has a bad image. In this case, the choice about whether or not to give help to the potential receiver does not affect the image of the potential giver. We analyze the Staying norm in terms of evolutionary game theory and demonstrate that Staying is most effective in establishing cooperation compared to the prevailing social norms, which rely on constant monitoring and unconditional assessment. The application of Staying suggests that the strict application of moral judgment is limited. |
1902.09482 | Arni S.R. Srinivasa Rao | Arni S.R. Srinivasa Rao and Roy M. Anderson | Helminth Dynamics: Mean Number of Worms, Reproductive Rates | 13 pages | Handbook of Statist., 36, Elsevier/North-Holland, Amsterdam, 2017 | 10.1016/bs.host.2017.05.003 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We derive formulas to compute mean number of worms in a newly Helminth
infected population before secondary infections are started (population is
closed). We have proved the two types of growth functions arise in this process
as measurable functions.
| [
{
"created": "Mon, 25 Feb 2019 17:59:13 GMT",
"version": "v1"
}
] | 2021-06-24 | [
[
"Rao",
"Arni S. R. Srinivasa",
""
],
[
"Anderson",
"Roy M.",
""
]
] | We derive formulas to compute mean number of worms in a newly Helminth infected population before secondary infections are started (population is closed). We have proved the two types of growth functions arise in this process as measurable functions. |
1411.2205 | Liaofu Luo | Liaofu Luo | A Model on Genome Evolution | 13 pages | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A model of genome evolution is proposed. Based on three assumptions the
evolutionary theory of a genome is formulated. The general law on the direction
of genome evolution is given. Both the deterministic classical equation and the
stochastic quantum equation are proposed. It is proved that the classical
equation can be put in a form of the least action principle and the latter can
be used for obtaining the quantum generalization of the evolutionary law. The
wave equation and uncertainty relation for the quantum evolution are deduced
logically. It is shown that the classical trajectory is a limiting case of the
general quantum evolution depicted in the coarse-grained time. The observed
smooth/sudden evolution is interpreted by the alternating occurrence of the
classical and quantum phases. The speciation event is explained by the quantum
transition in quantum phase. Fundamental constants of time dimension, the
quantization constant and the evolutionary inertia, are introduced for
characterizing the genome evolution. The size of minimum genome is deduced from
the quantum uncertainty lower bound. The present work shows the quantum law may
be more general than thought, since it plays key roles not only in atomic
physics, but also in genome evolution.
| [
{
"created": "Sun, 9 Nov 2014 07:20:08 GMT",
"version": "v1"
}
] | 2014-11-11 | [
[
"Luo",
"Liaofu",
""
]
] | A model of genome evolution is proposed. Based on three assumptions the evolutionary theory of a genome is formulated. The general law on the direction of genome evolution is given. Both the deterministic classical equation and the stochastic quantum equation are proposed. It is proved that the classical equation can be put in a form of the least action principle and the latter can be used for obtaining the quantum generalization of the evolutionary law. The wave equation and uncertainty relation for the quantum evolution are deduced logically. It is shown that the classical trajectory is a limiting case of the general quantum evolution depicted in the coarse-grained time. The observed smooth/sudden evolution is interpreted by the alternating occurrence of the classical and quantum phases. The speciation event is explained by the quantum transition in quantum phase. Fundamental constants of time dimension, the quantization constant and the evolutionary inertia, are introduced for characterizing the genome evolution. The size of minimum genome is deduced from the quantum uncertainty lower bound. The present work shows the quantum law may be more general than thought, since it plays key roles not only in atomic physics, but also in genome evolution. |
2302.11681 | Hai-Jun Zhou | Zhen-Ye Huang, Ruyi Zhou, Miao Huang, Hai-Jun Zhou | Energy--Information Trade-off Induces Continuous and Discontinuous Phase
Transitions in Lateral Predictive Coding | 6 pages main text, supplementary text combined. This is an
extensively revised version, containing new analytical results and numerical
results | Science China Physics, Mechanics Astronomy 67, 260511 (2024) | 10.1007/s11433-024-2341-2 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lateral predictive coding is a recurrent neural network which creates
energy-efficient internal representations by exploiting statistical regularity
in sensory inputs. Here we investigate the trade-off between information
robustness and energy in a linear model of lateral predictive coding
analytically and by numerical minimization of a free energy. We observe several
phase transitions in the synaptic weight matrix, especially a continuous
transition which breaks reciprocity and permutation symmetry and builds cyclic
dominance and a discontinuous transition with the associated sudden emergence
of tight balance between excitatory and inhibitory interactions. The optimal
network follows an ideal-gas law in an extended temperature range and saturates
the efficiency upper-bound of energy utilization. These results bring
theoretical insights on the emergence and evolution of complex internal models
in predictive processing systems.
| [
{
"created": "Wed, 22 Feb 2023 22:34:23 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jan 2024 02:19:51 GMT",
"version": "v2"
}
] | 2024-06-17 | [
[
"Huang",
"Zhen-Ye",
""
],
[
"Zhou",
"Ruyi",
""
],
[
"Huang",
"Miao",
""
],
[
"Zhou",
"Hai-Jun",
""
]
] | Lateral predictive coding is a recurrent neural network which creates energy-efficient internal representations by exploiting statistical regularity in sensory inputs. Here we investigate the trade-off between information robustness and energy in a linear model of lateral predictive coding analytically and by numerical minimization of a free energy. We observe several phase transitions in the synaptic weight matrix, especially a continuous transition which breaks reciprocity and permutation symmetry and builds cyclic dominance and a discontinuous transition with the associated sudden emergence of tight balance between excitatory and inhibitory interactions. The optimal network follows an ideal-gas law in an extended temperature range and saturates the efficiency upper-bound of energy utilization. These results bring theoretical insights on the emergence and evolution of complex internal models in predictive processing systems. |
0907.0073 | Anirban Banerji | Anirban Banerji, Indira Ghosh | Criteria to observe mesoscopic emergence of protein biophysical
properties | A mathematical model, 25 pages | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins are regularly described with some general indices (mass fractal
dimension, surface fractal dimension, entropy, enthalpy, free energies,
hydrophobicity, denaturation temperature etc..), which are inherently
statistical in nature. These general indices emerge from innumerable (innately
context-dependent and time-dependent) interactions between various atoms of a
protein. Many a studies have been performed on the nature of these inter-atomic
interactions and the change of profile of atomic fluctuations that they cause.
However, we still do not know, under a given context, for a given duration of
time, how does a macroscopic biophysical property emerge from the cumulative
inter-atomic interactions. An exact answer to that question will involve
bridging the gap between nano-scale distinguishable atomic description and
macroscopic indistinguishable (statistical) measures, along the mesoscopic
scale of observation. In this work we propose a computationally implementable
mathematical model that derives expressions for observability of emergence of a
macroscopic biophysical property from a set of interacting (fluctuating) atoms.
Since most of the aforementioned interactions are non-linear in nature;
observability criteria are derived for both linear and the non-linear
descriptions of protein interior. The study assumes paramount importance in
21st-century biology, from both the theoretical and practical utilitarian point
of view.
| [
{
"created": "Wed, 1 Jul 2009 06:49:45 GMT",
"version": "v1"
}
] | 2009-07-02 | [
[
"Banerji",
"Anirban",
""
],
[
"Ghosh",
"Indira",
""
]
] | Proteins are regularly described with some general indices (mass fractal dimension, surface fractal dimension, entropy, enthalpy, free energies, hydrophobicity, denaturation temperature etc..), which are inherently statistical in nature. These general indices emerge from innumerable (innately context-dependent and time-dependent) interactions between various atoms of a protein. Many a studies have been performed on the nature of these inter-atomic interactions and the change of profile of atomic fluctuations that they cause. However, we still do not know, under a given context, for a given duration of time, how does a macroscopic biophysical property emerge from the cumulative inter-atomic interactions. An exact answer to that question will involve bridging the gap between nano-scale distinguishable atomic description and macroscopic indistinguishable (statistical) measures, along the mesoscopic scale of observation. In this work we propose a computationally implementable mathematical model that derives expressions for observability of emergence of a macroscopic biophysical property from a set of interacting (fluctuating) atoms. Since most of the aforementioned interactions are non-linear in nature; observability criteria are derived for both linear and the non-linear descriptions of protein interior. The study assumes paramount importance in 21st-century biology, from both the theoretical and practical utilitarian point of view. |
2012.04227 | Masayo Inoue | Masayo Inoue, Kunihiko Kaneko | Entangled gene regulatory networks with cooperative expression endow
robust adaptive responses to unforeseen environmental changes | 10 pages, 7 figures | Phys. Rev. Research 3, 033183 (2021) | 10.1103/PhysRevResearch.3.033183 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living organisms must respond to environmental changes. Generally, accurate
and rapid responses are provided by simple, unidirectional networks that
connect inputs with outputs. Besides accuracy and speed, biological responses
should also be robust to environmental or intracellular noise and mutations.
Furthermore, cells must also respond to unforeseen environmental changes that
have not previously been experienced, to avoid extinction prior to the
evolutionary rewiring of their networks, which takes numerous generations. We
have investigated gene regulatory networks that mutually activate or inhibit,
and have demonstrated that complex entangled networks can make appropriate
input-output relationships that satisfy the robust and adaptive responses
required for unforeseen challenges. Such entangled networks function for sloppy
and unreliable responses with low Hill coefficient reactions for the expression
of each gene. To compensate for such sloppiness, several detours in the
regulatory network exist. By taking advantage of the averaging over such
detours, the network shows a higher robustness to environmental and
intracellular noise as well as to mutations in the network, when compared to
simple unidirectional circuits. Furthermore, the appropriate response to
unforeseen challenges, allowing for functional outputs, is achieved as many
genes exhibit similar dynamic expression responses, irrespective of inputs, as
confirmed by applying dynamic time warping and dynamic mode decomposition. As
complex entangled networks are common in gene regulatory networks and global
gene expression responses are observed in microbial experiments, the present
results provide a novel design principle for cellular networks.
| [
{
"created": "Tue, 8 Dec 2020 05:34:42 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Inoue",
"Masayo",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | Living organisms must respond to environmental changes. Generally, accurate and rapid responses are provided by simple, unidirectional networks that connect inputs with outputs. Besides accuracy and speed, biological responses should also be robust to environmental or intracellular noise and mutations. Furthermore, cells must also respond to unforeseen environmental changes that have not previously been experienced, to avoid extinction prior to the evolutionary rewiring of their networks, which takes numerous generations. We have investigated gene regulatory networks that mutually activate or inhibit, and have demonstrated that complex entangled networks can make appropriate input-output relationships that satisfy the robust and adaptive responses required for unforeseen challenges. Such entangled networks function for sloppy and unreliable responses with low Hill coefficient reactions for the expression of each gene. To compensate for such sloppiness, several detours in the regulatory network exist. By taking advantage of the averaging over such detours, the network shows a higher robustness to environmental and intracellular noise as well as to mutations in the network, when compared to simple unidirectional circuits. Furthermore, the appropriate response to unforeseen challenges, allowing for functional outputs, is achieved as many genes exhibit similar dynamic expression responses, irrespective of inputs, as confirmed by applying dynamic time warping and dynamic mode decomposition. As complex entangled networks are common in gene regulatory networks and global gene expression responses are observed in microbial experiments, the present results provide a novel design principle for cellular networks. |
2401.06166 | Yan Ding | Yan Ding, Hao Cheng, Ziliang Ye, Ruyi Feng, Wei Tian, Peng Xie, Juan
Zhang, Zhongze Gu | AdaMR: Adaptable Molecular Representation for Unified Pre-training
Strategy | null | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | We propose Adjustable Molecular Representation (AdaMR), a new large-scale
uniform pre-training strategy for small-molecule drugs, as a novel unified
pre-training strategy. AdaMR utilizes a granularity-adjustable molecular
encoding strategy, which is accomplished through a pre-training job termed
molecular canonicalization, setting it apart from recent large-scale molecular
models. This adaptability in granularity enriches the model's learning
capability at multiple levels and improves its performance in multi-task
scenarios. Specifically, the substructure-level molecular representation
preserves information about specific atom groups or arrangements, influencing
chemical properties and functionalities. This proves advantageous for tasks
such as property prediction. Simultaneously, the atomic-level representation,
combined with generative molecular canonicalization pre-training tasks,
enhances validity, novelty, and uniqueness in generative tasks. All of these
features work together to give AdaMR outstanding performance on a range of
downstream tasks. We fine-tuned our proposed pre-trained model on six molecular
property prediction tasks (MoleculeNet datasets) and two generative tasks
(ZINC250K datasets), achieving state-of-the-art (SOTA) results on five out of
eight tasks.
| [
{
"created": "Thu, 28 Dec 2023 10:53:17 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Apr 2024 13:28:02 GMT",
"version": "v2"
}
] | 2024-04-30 | [
[
"Ding",
"Yan",
""
],
[
"Cheng",
"Hao",
""
],
[
"Ye",
"Ziliang",
""
],
[
"Feng",
"Ruyi",
""
],
[
"Tian",
"Wei",
""
],
[
"Xie",
"Peng",
""
],
[
"Zhang",
"Juan",
""
],
[
"Gu",
"Zhongze",
""
]
] | We propose Adjustable Molecular Representation (AdaMR), a new large-scale uniform pre-training strategy for small-molecule drugs, as a novel unified pre-training strategy. AdaMR utilizes a granularity-adjustable molecular encoding strategy, which is accomplished through a pre-training job termed molecular canonicalization, setting it apart from recent large-scale molecular models. This adaptability in granularity enriches the model's learning capability at multiple levels and improves its performance in multi-task scenarios. Specifically, the substructure-level molecular representation preserves information about specific atom groups or arrangements, influencing chemical properties and functionalities. This proves advantageous for tasks such as property prediction. Simultaneously, the atomic-level representation, combined with generative molecular canonicalization pre-training tasks, enhances validity, novelty, and uniqueness in generative tasks. All of these features work together to give AdaMR outstanding performance on a range of downstream tasks. We fine-tuned our proposed pre-trained model on six molecular property prediction tasks (MoleculeNet datasets) and two generative tasks (ZINC250K datasets), achieving state-of-the-art (SOTA) results on five out of eight tasks. |
1911.08583 | Jahan Schad | Jahan N. Schad | Mirror Neuron; A Beautiful Unnecessary Concept | null | null | null | Journal of Neurology & Stroke 2021; 11(6); 169-170 | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mirror neuron theory that has enjoyed continued validations was developed
with no particular attention to the phenomenon of the vision. Understandably
the perception of vision has always been thought to happen, naturally, as that
for any of the other four senses. However, the reality that underlies this
presumption is by no means obvious; vision perception is based on remote
sensing of the ecology, fundamentally different form that of the other senses,
which have tactile stimulation origin (contact with matter). While its reality,
as explicated here, explains why the above presumption is true, it also bears
heavily on the mirror neuron theory: the revelation of the nature of vision
makes mirror neurons unnecessary. The extensive cognitive neurosciences
investigation of primates and humans, over the past three decades, have
experimentally validated the theory of mirror neurons which had been put
forward early in the period (1980s and 1990s) based on the results of cognitive
research experiments on the macaque monkeys. Based on further experimental
works, phenomena such as learning, empathy, and some aspects of survival, are
ascribed to the operations of this class of additional neurons. Here I reason
that all the results of the efforts of the proponents of the theory can, not
only find explanation in the context of the new theory of vision but also
provide support for it. This new take of the phenomenon of vision is developed
based on the nature of the experimental methods that have succeeded in
developing some measure of vision for the blinds, and the inferences from the
very likely nature of the computational strategy of the brain. I present
evidence that the mental phenomena, which rendered the claim of the mirror
neurons, are in essence the results of subjects beings variably touched by
their ecology, through the coherent tactile operation of all senses.
| [
{
"created": "Thu, 14 Nov 2019 18:48:44 GMT",
"version": "v1"
}
] | 2022-03-01 | [
[
"Schad",
"Jahan N.",
""
]
] | The mirror neuron theory that has enjoyed continued validations was developed with no particular attention to the phenomenon of the vision. Understandably the perception of vision has always been thought to happen, naturally, as that for any of the other four senses. However, the reality that underlies this presumption is by no means obvious; vision perception is based on remote sensing of the ecology, fundamentally different form that of the other senses, which have tactile stimulation origin (contact with matter). While its reality, as explicated here, explains why the above presumption is true, it also bears heavily on the mirror neuron theory: the revelation of the nature of vision makes mirror neurons unnecessary. The extensive cognitive neurosciences investigation of primates and humans, over the past three decades, have experimentally validated the theory of mirror neurons which had been put forward early in the period (1980s and 1990s) based on the results of cognitive research experiments on the macaque monkeys. Based on further experimental works, phenomena such as learning, empathy, and some aspects of survival, are ascribed to the operations of this class of additional neurons. Here I reason that all the results of the efforts of the proponents of the theory can, not only find explanation in the context of the new theory of vision but also provide support for it. This new take of the phenomenon of vision is developed based on the nature of the experimental methods that have succeeded in developing some measure of vision for the blinds, and the inferences from the very likely nature of the computational strategy of the brain. I present evidence that the mental phenomena, which rendered the claim of the mirror neurons, are in essence the results of subjects beings variably touched by their ecology, through the coherent tactile operation of all senses. |