id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0706.2126 | Thierry Cachat | Eugene Asarin (LIAFA), Thierry Cachat (LIAFA), Alexander Seliverstov
(IITP), Tayssir Touili (LIAFA), Vassily Lyubetsky (IITP) | Attenuation Regulation as a Term Rewriting System | to appear | Proceedings of the Second International Conference on Algebraic
Biology (06/07/2007) 12 | null | null | q-bio.QM | null | The classical attenuation regulation of gene expression in bacteria is
considered. We propose to represent the secondary RNA structure in the leader
region of a gene or an operon by a term, and we give a probabilistic term
rewriting system modeling the whole process of such a regulation.
| [
{
"created": "Thu, 14 Jun 2007 13:59:59 GMT",
"version": "v1"
}
] | 2007-06-15 | [
[
"Asarin",
"Eugene",
"",
"LIAFA"
],
[
"Cachat",
"Thierry",
"",
"LIAFA"
],
[
"Seliverstov",
"Alexander",
"",
"IITP"
],
[
"Touili",
"Tayssir",
"",
"LIAFA"
],
[
"Lyubetsky",
"Vassily",
"",
"IITP"
]
] | The classical attenuation regulation of gene expression in bacteria is considered. We propose to represent the secondary RNA structure in the leader region of a gene or an operon by a term, and we give a probabilistic term rewriting system modeling the whole process of such a regulation. |
1403.2574 | Sven Bilke | Yevgeniy Gindin, Manuel S. Valenzuela, Mirit I. Aladjem, Paul S.
Meltzer and Sven Bilke | A chromatin structure based model accurately predicts DNA replication
timing in human cells | null | null | null | null | q-bio.SC q-bio.GN | http://creativecommons.org/licenses/publicdomain/ | The metazoan genome is replicated in precise cell lineage specific temporal
order. However, the mechanism controlling this orchestrated process is poorly
understood as no molecular mechanisms have been identified that actively
regulate the firing sequence of genome replication. Here we develop a
mechanistic model of genome replication capable of predicting, with accuracy
rivaling experimental repeats, the observed empirical replication timing program in
humans. In our model, replication is initiated in an uncoordinated
(time-stochastic) manner at well-defined sites. The model contains, in addition
to the choice of the genomic landmark that localizes initiation, only a single
adjustable parameter of direct biological relevance: the number of replication
forks. We find that DNase hypersensitive sites are optimal and independent
determinants of DNA replication initiation. We demonstrate that the DNA
replication timing program in human cells is a robust emergent phenomenon that,
by its very nature, does not require a regulatory mechanism determining a
proper replication initiation firing sequence.
| [
{
"created": "Tue, 11 Mar 2014 13:41:52 GMT",
"version": "v1"
}
] | 2014-03-12 | [
[
"Gindin",
"Yevgeniy",
""
],
[
"Valenzuela",
"Manuel S.",
""
],
[
"Aladjem",
"Mirit I.",
""
],
[
"Meltzer",
"Paul S.",
""
],
[
"Bilke",
"Sven",
""
]
] | The metazoan genome is replicated in precise cell lineage specific temporal order. However, the mechanism controlling this orchestrated process is poorly understood as no molecular mechanisms have been identified that actively regulate the firing sequence of genome replication. Here we develop a mechanistic model of genome replication capable of predicting, with accuracy rivaling experimental repeats, the observed empirical replication timing program in humans. In our model, replication is initiated in an uncoordinated (time-stochastic) manner at well-defined sites. The model contains, in addition to the choice of the genomic landmark that localizes initiation, only a single adjustable parameter of direct biological relevance: the number of replication forks. We find that DNase hypersensitive sites are optimal and independent determinants of DNA replication initiation. We demonstrate that the DNA replication timing program in human cells is a robust emergent phenomenon that, by its very nature, does not require a regulatory mechanism determining a proper replication initiation firing sequence. |
1105.5764 | Andrey Shabalin | Andrey A. Shabalin | Matrix eQTL: Ultra fast eQTL analysis via large matrix operations | 9 pages, 1 figure | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Expression quantitative trait loci (eQTL) mapping aims to determine genomic
regions that regulate gene transcription. Expression QTL is used to study the
regulatory structure of normal tissues and to search for genetic factors in
complex diseases such as cancer, diabetes, and cystic fibrosis. A modern eQTL
dataset contains millions of SNPs and thousands of transcripts measured for
hundreds of samples. This makes the analysis computationally complex as it
involves independent testing for association for every transcript-SNP pair. The
heavy computational burden makes eQTL analysis less popular and often forces
analysts to restrict their attention to just a subset of transcripts and SNPs.
As larger genotype and gene expression datasets become available, the demand
for fast tools for eQTL analysis increases. We present a new method for fast
eQTL analysis via linear models, called Matrix eQTL. Matrix eQTL can model and
test for association using both linear regression and ANOVA models. The models
can include covariates to account for such factors as population structure,
gender, and clinical variables. It also supports testing of heteroscedastic
models and models with correlated errors. In our experiment on large datasets
Matrix eQTL was thousands of times faster than the existing popular software
for QTL/eQTL analysis. Matrix eQTL is implemented as both Matlab and R packages
and thus can easily be run on Windows, Mac OS, and Linux systems. The software
is freely available at the following address:
http://www.bios.unc.edu/research/genomic_software/Matrix_eQTL
| [
{
"created": "Sun, 29 May 2011 07:16:12 GMT",
"version": "v1"
}
] | 2011-05-31 | [
[
"Shabalin",
"Andrey A.",
""
]
] | Expression quantitative trait loci (eQTL) mapping aims to determine genomic regions that regulate gene transcription. Expression QTL is used to study the regulatory structure of normal tissues and to search for genetic factors in complex diseases such as cancer, diabetes, and cystic fibrosis. A modern eQTL dataset contains millions of SNPs and thousands of transcripts measured for hundreds of samples. This makes the analysis computationally complex as it involves independent testing for association for every transcript-SNP pair. The heavy computational burden makes eQTL analysis less popular and often forces analysts to restrict their attention to just a subset of transcripts and SNPs. As larger genotype and gene expression datasets become available, the demand for fast tools for eQTL analysis increases. We present a new method for fast eQTL analysis via linear models, called Matrix eQTL. Matrix eQTL can model and test for association using both linear regression and ANOVA models. The models can include covariates to account for such factors as population structure, gender, and clinical variables. It also supports testing of heteroscedastic models and models with correlated errors. In our experiment on large datasets Matrix eQTL was thousands of times faster than the existing popular software for QTL/eQTL analysis. Matrix eQTL is implemented as both Matlab and R packages and thus can easily be run on Windows, Mac OS, and Linux systems. The software is freely available at the following address: http://www.bios.unc.edu/research/genomic_software/Matrix_eQTL |
1202.0405 | Federico Felizzi | Federico Felizzi, Jerome Galtier, Georgios Fengos, Dagmar Iber | Spatial organization of proteomes: A low-rank approximation | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of signal transduction via a descriptive analysis
of the spatial organization of the complement of proteins exerting a certain
function within a cellular compartment. We propose a scheme to assign a
numerical value to individual proteins in a protein interaction network by
means of a simple optimization algorithm. We test our procedure against
datasets focusing on the proteomes in the neurite and soma compartments.
| [
{
"created": "Thu, 2 Feb 2012 10:51:23 GMT",
"version": "v1"
}
] | 2012-02-03 | [
[
"Felizzi",
"Federico",
""
],
[
"Galtier",
"Jerome",
""
],
[
"Fengos",
"Georgios",
""
],
[
"Iber",
"Dagmar",
""
]
] | We investigate the problem of signal transduction via a descriptive analysis of the spatial organization of the complement of proteins exerting a certain function within a cellular compartment. We propose a scheme to assign a numerical value to individual proteins in a protein interaction network by means of a simple optimization algorithm. We test our procedure against datasets focusing on the proteomes in the neurite and soma compartments. |
1606.04354 | Alberto Pezzotta | Alberto Pezzotta, Matteo Adorisio, Antonio Celani | From Conformational Spread to Allosteric and Cooperative models of E.
coli flagellar motor | 22 pages, 10 figures; new version with appendix | null | 10.1088/1742-5468/aa569e | null | q-bio.QM physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Escherichia coli swims using flagella activated by rotary motors. The
direction of rotation of the motors is indirectly regulated by the binding of a
single messenger protein. The conformational spread model has been shown to
accurately describe the equilibrium properties as well as the dynamics of the
flagellar motor. In this paper we study this model from an analytic point of
view. By exploiting the separation of timescales observed in experiments, we
show how to reduce the conformational spread model to a coarse-grained,
cooperative binding model. We show that this simplified model reproduces very
well the dynamics of the motor switch.
| [
{
"created": "Tue, 14 Jun 2016 13:40:17 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jul 2016 10:15:04 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Sep 2016 12:43:06 GMT",
"version": "v3"
}
] | 2017-03-08 | [
[
"Pezzotta",
"Alberto",
""
],
[
"Adorisio",
"Matteo",
""
],
[
"Celani",
"Antonio",
""
]
] | Escherichia coli swims using flagella activated by rotary motors. The direction of rotation of the motors is indirectly regulated by the binding of a single messenger protein. The conformational spread model has been shown to accurately describe the equilibrium properties as well as the dynamics of the flagellar motor. In this paper we study this model from an analytic point of view. By exploiting the separation of timescales observed in experiments, we show how to reduce the conformational spread model to a coarse-grained, cooperative binding model. We show that this simplified model reproduces very well the dynamics of the motor switch. |
2002.08891 | Maxwell Bertolero Dr | Maxwell A. Bertolero, Dustin Moraczewski, Adam Thomas, and Danielle S.
Bassett | Deep Neural Networks Carve the Brain at its Joints | null | null | null | null | q-bio.NC physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How an individual's unique brain connectivity determines that individual's
cognition, behavior, and risk for pathology is a fundamental question in basic
and clinical neuroscience. In seeking answers, many have turned to machine
learning, with some noting the particular promise of deep neural networks in
modelling complex non-linear functions. However, it is not clear that complex
functions actually exist between brain connectivity and behavior, and thus if
deep neural networks necessarily outperform simpler linear models, or if their
results would be interpretable. Here we show that, across 52 subject measures
of cognition and behavior, deep neural networks fit to each brain region's
connectivity outperform linear regression, particularly for the brain's
connector hubs--regions with diverse brain connectivity--whereas the two
approaches perform similarly when fit to brain systems. Critically, averaging
deep neural network predictions across brain regions results in the most
accurate predictions, demonstrating the ability of deep neural networks to
easily model the various functions that exist between regional brain
connectivity and behavior, carving the brain at its joints. Finally, we shine
light into the black box of deep neural networks using multislice network
models. We determined that the relationship between connector hubs and behavior
is best captured by modular deep neural networks. Our results demonstrate that
both simple and complex relationships exist between brain connectivity and
behavior, and that deep neural networks can fit both. Moreover, deep neural
networks are particularly powerful when they are first fit to the various
functions of a system independently and then combined. Finally, deep neural
networks are interpretable when their architectures are structurally
characterized using multislice network models.
| [
{
"created": "Thu, 20 Feb 2020 17:37:53 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Sep 2020 15:09:51 GMT",
"version": "v2"
}
] | 2023-12-08 | [
[
"Bertolero",
"Maxwell A.",
""
],
[
"Moraczewski",
"Dustin",
""
],
[
"Thomas",
"Adam",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | How an individual's unique brain connectivity determines that individual's cognition, behavior, and risk for pathology is a fundamental question in basic and clinical neuroscience. In seeking answers, many have turned to machine learning, with some noting the particular promise of deep neural networks in modelling complex non-linear functions. However, it is not clear that complex functions actually exist between brain connectivity and behavior, and thus if deep neural networks necessarily outperform simpler linear models, or if their results would be interpretable. Here we show that, across 52 subject measures of cognition and behavior, deep neural networks fit to each brain region's connectivity outperform linear regression, particularly for the brain's connector hubs--regions with diverse brain connectivity--whereas the two approaches perform similarly when fit to brain systems. Critically, averaging deep neural network predictions across brain regions results in the most accurate predictions, demonstrating the ability of deep neural networks to easily model the various functions that exist between regional brain connectivity and behavior, carving the brain at its joints. Finally, we shine light into the black box of deep neural networks using multislice network models. We determined that the relationship between connector hubs and behavior is best captured by modular deep neural networks. Our results demonstrate that both simple and complex relationships exist between brain connectivity and behavior, and that deep neural networks can fit both. Moreover, deep neural networks are particularly powerful when they are first fit to the various functions of a system independently and then combined. Finally, deep neural networks are interpretable when their architectures are structurally characterized using multislice network models. |
2012.06112 | Swapna Sasi Mrs | Pranav Mahajan, Advait Rane, Swapna Sasi, Basabdatta Sen Bhattacharya | Quantifying Synchronization in a Biologically Inspired Neural Network | 13 pages | null | null | null | q-bio.QM q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a collated set of algorithms to obtain objective measures of
synchronisation in brain time-series data. The algorithms are implemented in
MATLAB; we refer to our collated set of 'tools' as SyncBox. Our motivation for
SyncBox is to understand the underlying dynamics in an existing population
neural network, commonly referred to as neural mass models, that mimic Local
Field Potentials of the visual thalamic tissue. Specifically, we aim to measure
the phase synchronisation objectively in the model response to periodic
stimuli; this is to mimic the condition of steady-state visually evoked
potentials (SSVEP), which are scalp electroencephalograph (EEG) responses
corresponding to periodic stimuli. We showcase the
use of SyncBox on our existing neural mass model of the visual thalamus.
Following our successful testing of SyncBox, it is currently being used for
further research on understanding the underlying dynamics in enhanced neural
networks of the visual pathway.
| [
{
"created": "Fri, 11 Dec 2020 04:08:15 GMT",
"version": "v1"
}
] | 2020-12-14 | [
[
"Mahajan",
"Pranav",
""
],
[
"Rane",
"Advait",
""
],
[
"Sasi",
"Swapna",
""
],
[
"Bhattacharya",
"Basabdatta Sen",
""
]
] | We present a collated set of algorithms to obtain objective measures of synchronisation in brain time-series data. The algorithms are implemented in MATLAB; we refer to our collated set of 'tools' as SyncBox. Our motivation for SyncBox is to understand the underlying dynamics in an existing population neural network, commonly referred to as neural mass models, that mimic Local Field Potentials of the visual thalamic tissue. Specifically, we aim to measure the phase synchronisation objectively in the model response to periodic stimuli; this is to mimic the condition of steady-state visually evoked potentials (SSVEP), which are scalp electroencephalograph (EEG) responses corresponding to periodic stimuli. We showcase the use of SyncBox on our existing neural mass model of the visual thalamus. Following our successful testing of SyncBox, it is currently being used for further research on understanding the underlying dynamics in enhanced neural networks of the visual pathway. |
2312.14732 | Mariona Torrens Fontanals | Mariona Torrens-Fontanals, Panagiotis Tourlas, Stefan Doerr, Gianni De
Fabritiis | PlayMolecule Viewer: a toolkit for the visualization of molecules and
other data | 10 pages, 4 figures, submitted to the Journal of Chemical Information
and Modeling | null | 10.1021/acs.jcim.3c01776 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PlayMolecule Viewer is a web-based data visualization toolkit designed to
streamline the exploration of data resulting from structural bioinformatics or
computer-aided drug design efforts. By harnessing state-of-the-art web
technologies such as WebAssembly, PlayMolecule Viewer integrates powerful
Python libraries directly within the browser environment, which enhances its
capabilities of managing multiple types of molecular data. With its intuitive
interface, it allows users to easily upload, visualize, select, and manipulate
molecular structures and associated data. The toolkit supports a wide range of
common structural file formats and offers a variety of molecular
representations to cater to different visualization needs. PlayMolecule Viewer
is freely accessible at open.playmolecule.org, ensuring accessibility and
availability to the scientific community and beyond.
| [
{
"created": "Fri, 22 Dec 2023 14:40:18 GMT",
"version": "v1"
}
] | 2024-01-31 | [
[
"Torrens-Fontanals",
"Mariona",
""
],
[
"Tourlas",
"Panagiotis",
""
],
[
"Doerr",
"Stefan",
""
],
[
"De Fabritiis",
"Gianni",
""
]
] | PlayMolecule Viewer is a web-based data visualization toolkit designed to streamline the exploration of data resulting from structural bioinformatics or computer-aided drug design efforts. By harnessing state-of-the-art web technologies such as WebAssembly, PlayMolecule Viewer integrates powerful Python libraries directly within the browser environment, which enhances its capabilities of managing multiple types of molecular data. With its intuitive interface, it allows users to easily upload, visualize, select, and manipulate molecular structures and associated data. The toolkit supports a wide range of common structural file formats and offers a variety of molecular representations to cater to different visualization needs. PlayMolecule Viewer is freely accessible at open.playmolecule.org, ensuring accessibility and availability to the scientific community and beyond. |
1706.02451 | Wutu Lin | Tiger W. Lin, Anup Das, Giri P. Krishnan, Maxim Bazhenov, Terrence J.
Sejnowski | Differential Covariance: A New Class of Methods to Estimate Sparse
Connectivity from Neural Recordings | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With our ability to record more neurons simultaneously, making sense of these
data is a challenge. Functional connectivity is one popular way to study the
relationship between multiple neural signals. Correlation-based methods are a
set of currently well-used techniques for functional connectivity estimation.
However, due to explaining away and unobserved common inputs (Stevenson et al.,
2008), they produce spurious connections. The general linear model (GLM), which
models spike trains as Poisson processes (Okatan et al., 2005; Truccolo et
al., 2005; Pillow et al., 2008), avoids these confounds. We develop here a new
class of methods by using differential signals based on simulated intracellular
voltage recordings. It is equivalent to a regularized AR(2) model. We also
expand the method to simulated local field potential (LFP) recordings and
calcium imaging. In all of our simulated data, the differential
covariance-based methods achieved better or similar performance to the GLM
method and required fewer data samples. This new class of methods provides
alternative ways to analyze neural signals.
| [
{
"created": "Thu, 8 Jun 2017 04:54:27 GMT",
"version": "v1"
}
] | 2017-06-09 | [
[
"Lin",
"Tiger W.",
""
],
[
"Das",
"Anup",
""
],
[
"Krishnan",
"Giri P.",
""
],
[
"Bazhenov",
"Maxim",
""
],
[
"Sejnowski",
"Terrence J.",
""
]
] | With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship between multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson et al., 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan et al., 2005; Truccolo et al., 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential (LFP) recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved better or similar performance to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. |
1802.10207 | Girik Malik | Girik Malik, Anirban Banerji, Maksim Kouza, Irina A. Buhimschi,
Andrzej Kloczkowski | Deciphering general characteristics of residues constituting allosteric
communication paths | 29 pages, 4 figures, 5 tables | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considering all the PDB annotated allosteric proteins (from ASD - AlloSteric
Database) belonging to four different classes (kinases, nuclear receptors,
peptidases and transcription factors), this work has attempted to decipher
certain consistent patterns present in the residues constituting the allosteric
communication sub-system (ACSS). The thermal fluctuations of hydrophobic
residues in ACSSs were found to be significantly higher than those present in
the non-ACSS part of the same proteins, while polar residues showed the
opposite trend.
The basic residues and hydroxyl residues were found to be slightly more
predominant than the acidic residues and amide residues in ACSSs, while
hydrophobic residues were found extremely frequently in kinase ACSSs. Despite having
different sequences and different lengths of ACSS, they were found to be
structurally quite similar to each other - suggesting a preferred structural
template for communication. ACSS structures recorded low RMSD and high Akaike
Information Criterion (AIC) scores among themselves. While the ACSS networks for
all the groups of allosteric proteins showed low degree centrality and
closeness centrality, the betweenness centrality magnitudes revealed nonuniform
behavior. Though cliques and communities could be identified within the ACSS,
maximal-common-subgraph considering all the ACSS could not be generated,
primarily due to the diversity in the dataset. Barring one particular case, the
entire ACSS for any class of allosteric proteins did not demonstrate "small
world" behavior, though the sub-graphs of the ACSSs, in certain cases, were
found to form small-world networks.
| [
{
"created": "Tue, 27 Feb 2018 23:21:21 GMT",
"version": "v1"
}
] | 2018-03-01 | [
[
"Malik",
"Girik",
""
],
[
"Banerji",
"Anirban",
""
],
[
"Kouza",
"Maksim",
""
],
[
"Buhimschi",
"Irina A.",
""
],
[
"Kloczkowski",
"Andrzej",
""
]
] | Considering all the PDB annotated allosteric proteins (from ASD - AlloSteric Database) belonging to four different classes (kinases, nuclear receptors, peptidases and transcription factors), this work has attempted to decipher certain consistent patterns present in the residues constituting the allosteric communication sub-system (ACSS). The thermal fluctuations of hydrophobic residues in ACSSs were found to be significantly higher than those present in the non-ACSS part of the same proteins, while polar residues showed the opposite trend. The basic residues and hydroxyl residues were found to be slightly more predominant than the acidic residues and amide residues in ACSSs, while hydrophobic residues were found extremely frequently in kinase ACSSs. Despite having different sequences and different lengths of ACSS, they were found to be structurally quite similar to each other - suggesting a preferred structural template for communication. ACSS structures recorded low RMSD and high Akaike Information Criterion (AIC) scores among themselves. While the ACSS networks for all the groups of allosteric proteins showed low degree centrality and closeness centrality, the betweenness centrality magnitudes revealed nonuniform behavior. Though cliques and communities could be identified within the ACSS, maximal-common-subgraph considering all the ACSS could not be generated, primarily due to the diversity in the dataset. Barring one particular case, the entire ACSS for any class of allosteric proteins did not demonstrate "small world" behavior, though the sub-graphs of the ACSSs, in certain cases, were found to form small-world networks. |
2009.09422 | Alfredo Braunstein | Antoine Baker, Indaco Biazzo, Alfredo Braunstein, Giovanni Catania,
Luca Dall'Asta, Alessandro Ingrosso, Florent Krzakala, Fabio Mazza, Marc
M\'ezard, Anna Paola Muntoni, Maria Refinetti, Stefano Sarao Mannelli, Lenka
Zdeborov\'a | Epidemic mitigation by statistical inference from contact tracing data | 21 pages, 7 figures | PNAS 2021 Vol. 118 No. 32 e2106548118 | 10.1073/pnas.2106548118 | null | q-bio.PE cond-mat.stat-mech cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contact-tracing is an essential tool in order to mitigate the impact of
pandemic such as the COVID-19. In order to achieve efficient and scalable
contact-tracing in real time, digital devices can play an important role. While
a lot of attention has been paid to analyzing the privacy and ethical risks of
the associated mobile applications, so far much less research has been devoted
to optimizing their performance and assessing their impact on the mitigation of
the epidemic. We develop Bayesian inference methods to estimate the risk that
an individual is infected. This inference is based on the list of his recent
contacts and their own risk levels, as well as personal information such as
results of tests or presence of syndromes. We propose to use probabilistic risk
estimation in order to optimize testing and quarantining strategies for the
control of an epidemic. Our results show that in some range of epidemic
spreading (typically when the manual tracing of all contacts of infected people
becomes practically impossible, but before the fraction of infected people
reaches the scale where a lock-down becomes unavoidable), this inference of
individuals at risk could be an efficient way to mitigate the epidemic. Our
approaches translate into fully distributed algorithms that only require
communication between individuals who have recently been in contact. Such
communication may be encrypted and anonymized and thus compatible with privacy
preserving standards. We conclude that probabilistic risk estimation is capable
of enhancing the performance of digital contact tracing and should be considered in
the currently developed mobile applications.
| [
{
"created": "Sun, 20 Sep 2020 12:24:45 GMT",
"version": "v1"
}
] | 2021-07-29 | [
[
"Baker",
"Antoine",
""
],
[
"Biazzo",
"Indaco",
""
],
[
"Braunstein",
"Alfredo",
""
],
[
"Catania",
"Giovanni",
""
],
[
"Dall'Asta",
"Luca",
""
],
[
"Ingrosso",
"Alessandro",
""
],
[
"Krzakala",
"Florent",
""
],
[
"Mazza",
"Fabio",
""
],
[
"Mézard",
"Marc",
""
],
[
"Muntoni",
"Anna Paola",
""
],
[
"Refinetti",
"Maria",
""
],
[
"Mannelli",
"Stefano Sarao",
""
],
[
"Zdeborová",
"Lenka",
""
]
] | Contact-tracing is an essential tool to mitigate the impact of pandemics such as COVID-19. In order to achieve efficient and scalable contact-tracing in real time, digital devices can play an important role. While a lot of attention has been paid to analyzing the privacy and ethical risks of the associated mobile applications, so far much less research has been devoted to optimizing their performance and assessing their impact on the mitigation of the epidemic. We develop Bayesian inference methods to estimate the risk that an individual is infected. This inference is based on the list of his recent contacts and their own risk levels, as well as personal information such as results of tests or presence of syndromes. We propose to use probabilistic risk estimation in order to optimize testing and quarantining strategies for the control of an epidemic. Our results show that in some range of epidemic spreading (typically when the manual tracing of all contacts of infected people becomes practically impossible, but before the fraction of infected people reaches the scale where a lock-down becomes unavoidable), this inference of individuals at risk could be an efficient way to mitigate the epidemic. Our approaches translate into fully distributed algorithms that only require communication between individuals who have recently been in contact. Such communication may be encrypted and anonymized and thus compatible with privacy preserving standards. We conclude that probabilistic risk estimation is capable of enhancing the performance of digital contact tracing and should be considered in the currently developed mobile applications. |
1302.2047 | Sim-Hui Tee | Sim-Hui Tee | In Silico Analysis of Tandem Repeats in GIF of Gastric Parietal Cells | Submitted to Journal of Biomedical Engineering Research | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tandem repeats are ubiquitous in the genome of organisms and their mutated
forms play a vital role in pathogenesis. In this study, tandem repeats in
Gastric Intrinsic Factor (GIF) of gastric parietal cells have been investigated
using an in silico approach. Six types of the nucleotide tandem repeat motifs
have been investigated, including mono-, di-, tri-, tetra-, penta- and
hexanucleotide. The distribution of the repeat motifs in the gene was analyzed.
The results of this study provide an insight into the biomolecular mechanisms
and pathogenesis implicated by the GIF of gastric parietal cells. Based on the
findings of the tandem repeats in GIF of gastric parietal cells, therapeutic
strategies and disease markers may be developed accordingly by biomedical
scientists.
| [
{
"created": "Fri, 8 Feb 2013 14:31:34 GMT",
"version": "v1"
}
] | 2013-02-11 | [
[
"Tee",
"Sim-Hui",
""
]
] | Tandem repeats are ubiquitous in the genome of organisms and their mutated forms play a vital role in pathogenesis. In this study, tandem repeats in Gastric Intrinsic Factor (GIF) of gastric parietal cells have been investigated using an in silico approach. Six types of the nucleotide tandem repeat motifs have been investigated, including mono-, di-, tri-, tetra-, penta- and hexanucleotide. The distribution of the repeat motifs in the gene was analyzed. The results of this study provide an insight into the biomolecular mechanisms and pathogenesis implicated by the GIF of gastric parietal cells. Based on the findings of the tandem repeats in GIF of gastric parietal cells, therapeutic strategies and disease markers may be developed accordingly by the biomedical scientists. |
1504.04543 | Michael GB Blum | Nicolas Duforet-Frebourg, Keurcien Luu, Guillaume Laval, Eric Bazin,
Michael G.B. Blum | Detecting genomic signatures of natural selection with principal
component analysis: application to the 1000 Genomes data | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To characterize natural selection, various analytical methods for detecting
candidate genomic regions have been developed. We propose to perform
genome-wide scans of natural selection using principal component analysis. We
show that the common Fst index of genetic differentiation between populations
can be viewed as a proportion of variance explained by the principal
components. Considering the correlations between genetic variants and each
principal component provides a conceptual framework to detect genetic variants
involved in local adaptation without any prior definition of populations. To
validate the PCA-based approach, we consider the 1000 Genomes data (phase 1)
after removal of recently admixed individuals resulting in 850 individuals
coming from Africa, Asia, and Europe. The number of genetic variants is of the
order of 36 million, obtained with a low-coverage sequencing depth (3X). The
correlations between genetic variation and each principal component provide
well-known targets for positive selection (EDAR, SLC24A5, SLC45A2, DARC), and
also new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and non-coding
RNAs. In addition to identifying genes involved in biological adaptation, we
identify two biological pathways involved in polygenic adaptation that are
related to the innate immune system (beta defensins) and to lipid metabolism
(fatty acid omega oxidation). An additional analysis of European data shows
that a genome scan based on PCA retrieves classical examples of local
adaptation even when there are no well-defined populations. PCA-based
statistics, implemented in the PCAdapt R package and the PCAdapt open-source
software, retrieve well-known signals of human adaptation, which is encouraging
for future whole-genome sequencing projects, especially when defining
populations is difficult.
| [
{
"created": "Wed, 8 Apr 2015 13:49:11 GMT",
"version": "v1"
},
{
"created": "Thu, 21 May 2015 16:39:24 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Sep 2015 08:36:43 GMT",
"version": "v3"
},
{
"created": "Tue, 29 Sep 2015 08:20:39 GMT",
"version": "v4"
},
{
"created": "Wed, 18 Nov 2015 20:50:18 GMT",
"version": "v5"
}
] | 2015-11-19 | [
[
"Duforet-Frebourg",
"Nicolas",
""
],
[
"Luu",
"Keurcien",
""
],
[
"Laval",
"Guillaume",
""
],
[
"Bazin",
"Eric",
""
],
[
"Blum",
"Michael G. B.",
""
]
] | To characterize natural selection, various analytical methods for detecting candidate genomic regions have been developed. We propose to perform genome-wide scans of natural selection using principal component analysis. We show that the common Fst index of genetic differentiation between populations can be viewed as a proportion of variance explained by the principal components. Considering the correlations between genetic variants and each principal component provides a conceptual framework to detect genetic variants involved in local adaptation without any prior definition of populations. To validate the PCA-based approach, we consider the 1000 Genomes data (phase 1) after removal of recently admixed individuals resulting in 850 individuals coming from Africa, Asia, and Europe. The number of genetic variants is of the order of 36 million, obtained with a low-coverage sequencing depth (3X). The correlations between genetic variation and each principal component provide well-known targets for positive selection (EDAR, SLC24A5, SLC45A2, DARC), and also new candidate genes (APPBPP2, TP1A1, RTTN, KCNMA, MYO5C) and non-coding RNAs. In addition to identifying genes involved in biological adaptation, we identify two biological pathways involved in polygenic adaptation that are related to the innate immune system (beta defensins) and to lipid metabolism (fatty acid omega oxidation). An additional analysis of European data shows that a genome scan based on PCA retrieves classical examples of local adaptation even when there are no well-defined populations. PCA-based statistics, implemented in the PCAdapt R package and the PCAdapt open-source software, retrieve well-known signals of human adaptation, which is encouraging for future whole-genome sequencing projects, especially when defining populations is difficult. |
2103.13156 | Cecilia Berardo | Cecilia Berardo and Stefan Geritz | Coevolution of the reckless prey and the patient predator | 22 pages, 3 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The war of attrition in game theory is a model of a stand-off situation
between two opponents where the winner is determined by its persistence. We
model a stand-off between a predator and a prey when the prey is hiding and the
predator is waiting for the prey to come out from its refuge, or when the two
are locked in a situation of mutual threat of injury or even death. The
stand-off is resolved when the predator gives up or when the prey tries to
escape. Instead of using the asymmetric war of attrition, we embed the
stand-off as an integral part of the predator-prey model of Rosenzweig and
MacArthur derived from first principles. We apply this model to study the
coevolution of the giving-up rates of the prey and the predator, using the
adaptive dynamics approach. We find that the long term evolutionary process
leads to three qualitatively different scenarios: the predator gives up
immediately, while the prey never gives up; the predator never gives up, while
the prey adopts any giving-up rate greater than or equal to a given positive
threshold value; the predator goes extinct. We observe that some results are
the same as for the asymmetric war of attrition, but others are quite
different.
| [
{
"created": "Wed, 24 Mar 2021 12:57:43 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Aug 2021 18:20:27 GMT",
"version": "v2"
}
] | 2021-08-19 | [
[
"Berardo",
"Cecilia",
""
],
[
"Geritz",
"Stefan",
""
]
] | The war of attrition in game theory is a model of a stand-off situation between two opponents where the winner is determined by its persistence. We model a stand-off between a predator and a prey when the prey is hiding and the predator is waiting for the prey to come out from its refuge, or when the two are locked in a situation of mutual threat of injury or even death. The stand-off is resolved when the predator gives up or when the prey tries to escape. Instead of using the asymmetric war of attrition, we embed the stand-off as an integral part of the predator-prey model of Rosenzweig and MacArthur derived from first principles. We apply this model to study the coevolution of the giving-up rates of the prey and the predator, using the adaptive dynamics approach. We find that the long term evolutionary process leads to three qualitatively different scenarios: the predator gives up immediately, while the prey never gives up; the predator never gives up, while the prey adopts any giving-up rate greater than or equal to a given positive threshold value; the predator goes extinct. We observe that some results are the same as for the asymmetric war of attrition, but others are quite different. |
1102.5694 | Iain Johnston | Iain G. Johnston, Sebastian A. Ahnert, Jonathan P. K. Doye and Ard A.
Louis | Evolutionary Dynamics in a Simple Model of Self-Assembly | null | Phys. Rev. E 83, 066105 (2011) | 10.1103/PhysRevE.83.066105 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the evolutionary dynamics of an idealised model for the robust
self-assembly of two-dimensional structures called polyominoes. The model
includes rules that encode interactions between sets of square tiles that drive
the self-assembly process. The relationship between the model's rule set and
its resulting self-assembled structure can be viewed as a genotype-phenotype
map and incorporated into a genetic algorithm. The rule sets evolve under
selection for specified target structures. The corresponding, complex fitness
landscape generates rich evolutionary dynamics as a function of parameters such
as the population size, search space size, mutation rate, and method of
recombination. Furthermore, these systems are simple enough that in some cases
the associated model genome space can be completely characterised, shedding
light on how the evolutionary dynamics depends on the detailed structure of the
fitness landscape. Finally, we apply the model to study the emergence of the
preference for dihedral over cyclic symmetry observed for homomeric protein
tetramers.
| [
{
"created": "Mon, 28 Feb 2011 15:53:08 GMT",
"version": "v1"
}
] | 2013-09-03 | [
[
"Johnston",
"Iain G.",
""
],
[
"Ahnert",
"Sebastian A.",
""
],
[
"Doye",
"Jonathan P. K.",
""
],
[
"Louis",
"Ard A.",
""
]
] | We investigate the evolutionary dynamics of an idealised model for the robust self-assembly of two-dimensional structures called polyominoes. The model includes rules that encode interactions between sets of square tiles that drive the self-assembly process. The relationship between the model's rule set and its resulting self-assembled structure can be viewed as a genotype-phenotype map and incorporated into a genetic algorithm. The rule sets evolve under selection for specified target structures. The corresponding, complex fitness landscape generates rich evolutionary dynamics as a function of parameters such as the population size, search space size, mutation rate, and method of recombination. Furthermore, these systems are simple enough that in some cases the associated model genome space can be completely characterised, shedding light on how the evolutionary dynamics depends on the detailed structure of the fitness landscape. Finally, we apply the model to study the emergence of the preference for dihedral over cyclic symmetry observed for homomeric protein tetramers. |
2105.14284 | Samuel Fischer | Samuel M. Fischer, Pouria Ramazi, Sean Simmons, Mark S. Poesch, Mark
A. Lewis | Boosting propagule transport models with individual-specific data from
mobile apps | Keywords: Angler; Gravity Model; Invasives (Applied Ecology);
Modelling (Disease Ecology); Smartphone Apps; Survey Method; Vector; Whirling
Disease | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Management of invasive species and pathogens requires information about the
traffic of potential vectors. Such information is often taken from vector
traffic models fitted to survey data. Here, user-specific data collected via
mobile apps offer new opportunities to obtain more accurate estimates and to
analyze how vectors' individual preferences affect propagule flows. However,
data voluntarily reported via apps may lack some trip records, adding a
significant layer of uncertainty. We show how the benefits of app-based data
can be exploited despite this drawback.
Based on data collected via an angler app, we built a stochastic model for
angler traffic in the Canadian province of Alberta. There, anglers facilitate the
spread of whirling disease, a parasite-induced fish disease. The model is
temporally and spatially explicit and accounts for individual preferences and
repeating behaviour of anglers, helping to address the problem of missing trip
records.
We obtained estimates of angler traffic between all subbasins in Alberta. The
model's accuracy exceeds that of direct empirical estimates even when fewer
data were used to fit the model. The results indicate that anglers' local
preferences and their tendency to revisit previous destinations reduce the
number of long inter-waterbody trips potentially dispersing whirling disease.
According to our model, anglers revisit their previous destination in 64% of
their trips, making these trips irrelevant for the spread of whirling disease.
Furthermore, 54% of fishing trips end in individual-specific spatially
contained areas with a mean radius of 54.7 km. Finally, although the fraction of
trips that anglers report was unknown, we were able to estimate the total
yearly number of fishing trips in Alberta, matching an independent empirical
estimate.
| [
{
"created": "Sat, 29 May 2021 12:37:57 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Nov 2021 20:01:37 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Dec 2022 17:25:55 GMT",
"version": "v3"
}
] | 2022-12-14 | [
[
"Fischer",
"Samuel M.",
""
],
[
"Ramazi",
"Pouria",
""
],
[
"Simmons",
"Sean",
""
],
[
"Poesch",
"Mark S.",
""
],
[
"Lewis",
"Mark A.",
""
]
] | Management of invasive species and pathogens requires information about the traffic of potential vectors. Such information is often taken from vector traffic models fitted to survey data. Here, user-specific data collected via mobile apps offer new opportunities to obtain more accurate estimates and to analyze how vectors' individual preferences affect propagule flows. However, data voluntarily reported via apps may lack some trip records, adding a significant layer of uncertainty. We show how the benefits of app-based data can be exploited despite this drawback. Based on data collected via an angler app, we built a stochastic model for angler traffic in the Canadian province Alberta. There, anglers facilitate the spread of whirling disease, a parasite-induced fish disease. The model is temporally and spatially explicit and accounts for individual preferences and repeating behaviour of anglers, helping to address the problem of missing trip records. We obtained estimates of angler traffic between all subbasins in Alberta. The model's accuracy exceeds that of direct empirical estimates even when fewer data were used to fit the model. The results indicate that anglers' local preferences and their tendency to revisit previous destinations reduce the number of long inter-waterbody trips potentially dispersing whirling disease. According to our model, anglers revisit their previous destination in 64% of their trips, making these trips irrelevant for the spread of whirling disease. Furthermore, 54% of fishing trips end in individual-specific spatially contained areas with mean radius of 54.7km. Finally, although the fraction of trips that anglers report was unknown, we were able to estimate the total yearly number of fishing trips in Alberta, matching an independent empirical estimate. |
2007.09477 | Marcio Watanabe | Marcio Watanabe | Efficacy of Hydroxychloroquine as Prophylaxis for Covid-19 | 10 pages, 1 figure | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Limitations in the design of the experiment of Boulware et al[1] are
considered in Cohen[2]. They are not subject to correction but they are
reported for readers' consideration. However, they analyzed the incidence
using Fisher's hypothesis test for means while publishing detailed
time-dependent data which were not analyzed, disregarding important
information. Here we analyze these time-dependent data using a simple
regression analysis.
We conclude that their randomized, double-blind, placebo-controlled trial presents
statistical evidence, at 99% confidence level, that the treatment of Covid-19
patients with hydroxychloroquine is effective in reducing the appearance of
symptoms if used before or right after exposure to the virus. For 0 to 2 days
after exposure to the virus, the estimated relative reduction in symptomatic
outcomes is 72% after 0 days, 48.9% after 1 day and 29.3% after 2 days. For 3
days after exposure, the estimated relative reduction is 15.7% but results are
not statistically conclusive and for 4 or more days after exposure there is no
statistical evidence that hydroxychloroquine is effective in reducing the
appearance of symptoms.
Our results show that the time elapsed between infection and the beginning of
treatment is crucial for the efficacy of hydroxychloroquine as a treatment for
Covid-19.
| [
{
"created": "Sat, 18 Jul 2020 17:03:21 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jul 2020 18:39:56 GMT",
"version": "v2"
}
] | 2020-07-23 | [
[
"Watanabe",
"Marcio",
""
]
] | Limitations in the design of the experiment of Boulware et al[1] are considered in Cohen[2]. They are not subject to correction but they are reported for readers' consideration. However, they analyzed the incidence using Fisher's hypothesis test for means while publishing detailed time-dependent data which were not analyzed, disregarding important information. Here we analyze these time-dependent data using a simple regression analysis. We conclude that their randomized, double-blind, placebo-controlled trial presents statistical evidence, at 99% confidence level, that the treatment of Covid-19 patients with hydroxychloroquine is effective in reducing the appearance of symptoms if used before or right after exposure to the virus. For 0 to 2 days after exposure to the virus, the estimated relative reduction in symptomatic outcomes is 72% after 0 days, 48.9% after 1 day and 29.3% after 2 days. For 3 days after exposure, the estimated relative reduction is 15.7% but results are not statistically conclusive and for 4 or more days after exposure there is no statistical evidence that hydroxychloroquine is effective in reducing the appearance of symptoms. Our results show that the time elapsed between infection and the beginning of treatment is crucial for the efficacy of hydroxychloroquine as a treatment for Covid-19. |
q-bio/0510027 | C. Soul\'e | C. Soul\'e (IHES) | Mathematical approaches to differentiation and gene regulation | To appear in Notes Comptes-Rendus Acad. Sc. Paris, Biologie | null | null | null | q-bio.MN | null | We consider some mathematical issues raised by the modelling of gene
networks. The expression of genes is governed by a complex set of regulations,
which is often described symbolically by interaction graphs. Once such a graph
has been established, there remains the difficult task of deciding which
dynamical properties of the gene network can be inferred from it, in the
absence of precise quantitative data about their regulation. In this paper we
discuss a rule proposed by R.Thomas according to which the possibility for the
network to have several stationary states implies the existence of a positive
circuit in the corresponding interaction graph. We prove that, when properly
formulated in rigorous terms, this rule becomes a theorem valid for several
different types of formal models of gene networks. This result is already known
for models of differential or boolean type. We show here that a stronger
version of it holds in the differential setup when the decay of protein
concentrations is taken into account. This allows us to verify also the
validity of Thomas' rule in the context of piecewise-linear models and the
corresponding discrete models. We discuss open problems as well.
| [
{
"created": "Fri, 14 Oct 2005 07:02:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Soulé",
"C.",
"",
"IHES"
]
] | We consider some mathematical issues raised by the modelling of gene networks. The expression of genes is governed by a complex set of regulations, which is often described symbolically by interaction graphs. Once such a graph has been established, there remains the difficult task to decide which dynamical properties of the gene network can be inferred from it, in the absence of precise quantitative data about their regulation. In this paper we discuss a rule proposed by R.Thomas according to which the possibility for the network to have several stationary states implies the existence of a positive circuit in the corresponding interaction graph. We prove that, when properly formulated in rigorous terms, this rule becomes a theorem valid for several different types of formal models of gene networks. This result is already known for models of differential or boolean type. We show here that a stronger version of it holds in the differential setup when the decay of protein concentrations is taken into account. This allows us to verify also the validity of Thomas' rule in the context of piecewise-linear models and the corresponding discrete models. We discuss open problems as well. |
1609.05151 | Sidney Redner | M. Chupeau, O. Benichou, S. Redner | Random Search with Memory in Patchy Media: Exploration-Exploitation
Tradeoff | 5 pages, 3 figures, plus 3 pages of supplemental material. V2: Final
manuscript to appear in PRE | Phys. Rev. E 95, 012157 (2017) | 10.1103/PhysRevE.95.012157 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to best exploit patchy resources? This long-standing question belongs to
the extensively studied class of explore/exploit problems that arise in a wide
range of situations, from animal foraging, to robotic exploration, and to human
decision processes. Despite its broad relevance, the issue of optimal
exploitation has previously only been tackled through two paradigmatic limiting
models---patch-use and random search---that do not account for the interplay
between searcher motion within patches and resource depletion. Here, we bridge
this gap by introducing a minimal patch exploitation model that incorporates
this coupling: the searcher depletes the resources along its random-walk
trajectory within a patch and travels to a new patch after it takes
$\mathcal{S}$ consecutive steps without finding resources. We compute the
distribution of the amount of resources $F_t$ consumed by time $t$ for this
non-Markovian random walker and show that exploring multiple patches is
beneficial. In one dimension, we analytically derive the optimal strategy to
maximize $F_t$. We show that this strategy is robust with respect to the
distribution of resources within patches and the criterion for leaving a given
patch. We also show that $F_t$ can be optimized in the ecologically-relevant
case of two-dimensional patchy environments.
| [
{
"created": "Fri, 16 Sep 2016 17:45:25 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Jan 2017 23:42:36 GMT",
"version": "v2"
}
] | 2017-01-31 | [
[
"Chupeau",
"M.",
""
],
[
"Benichou",
"O.",
""
],
[
"Redner",
"S.",
""
]
] | How to best exploit patchy resources? This long-standing question belongs to the extensively studied class of explore/exploit problems that arise in a wide range of situations, from animal foraging, to robotic exploration, and to human decision processes. Despite its broad relevance, the issue of optimal exploitation has previously only been tackled through two paradigmatic limiting models---patch-use and random search---that do not account for the interplay between searcher motion within patches and resource depletion. Here, we bridge this gap by introducing a minimal patch exploitation model that incorporates this coupling: the searcher depletes the resources along its random-walk trajectory within a patch and travels to a new patch after it takes $\mathcal{S}$ consecutive steps without finding resources. We compute the distribution of the amount of resources $F_t$ consumed by time $t$ for this non-Markovian random walker and show that exploring multiple patches is beneficial. In one dimension, we analytically derive the optimal strategy to maximize $F_t$. We show that this strategy is robust with respect to the distribution of resources within patches and the criterion for leaving a given patch. We also show that $F_t$ can be optimized in the ecologically-relevant case of two-dimensional patchy environments. |
0704.1885 | Jesse Bloom | Jesse D. Bloom, Zhongyi Lu, David Chen, Alpan Raval, Ophelia S.
Venturelli, and Frances H. Arnold | Evolution favors protein mutational robustness in sufficiently large
populations | null | BMC Biology 5:29 (2007) | 10.1186/1741-7007-5-29 | null | q-bio.PE q-bio.BM | null | BACKGROUND: An important question is whether evolution favors properties such
as mutational robustness or evolvability that do not directly benefit any
individual, but can influence the course of future evolution. Functionally
similar proteins can differ substantially in their robustness to mutations and
capacity to evolve new functions, but it has remained unclear whether any of
these differences might be due to evolutionary selection for these properties.
RESULTS: Here we use laboratory experiments to demonstrate that evolution
favors protein mutational robustness if the evolving population is sufficiently
large. We neutrally evolve cytochrome P450 proteins under identical selection
pressures and mutation rates in populations of different sizes, and show that
proteins from the larger and thus more polymorphic population tend towards
higher mutational robustness. Proteins from the larger population also evolve
greater stability, a biophysical property that is known to enhance both
mutational robustness and evolvability. The excess mutational robustness and
stability is well described by existing mathematical theories, and can be
quantitatively related to the way that the proteins occupy their neutral
network.
CONCLUSIONS: Our work is the first experimental demonstration of the general
tendency of evolution to favor mutational robustness and protein stability in
highly polymorphic populations. We suggest that this phenomenon may contribute
to the mutational robustness and evolvability of viruses and bacteria that
exist in large populations.
| [
{
"created": "Sat, 14 Apr 2007 19:42:13 GMT",
"version": "v1"
}
] | 2009-04-16 | [
[
"Bloom",
"Jesse D.",
""
],
[
"Lu",
"Zhongyi",
""
],
[
"Chen",
"David",
""
],
[
"Raval",
"Alpan",
""
],
[
"Venturelli",
"Ophelia S.",
""
],
[
"Arnold",
"Frances H.",
""
]
] | BACKGROUND: An important question is whether evolution favors properties such as mutational robustness or evolvability that do not directly benefit any individual, but can influence the course of future evolution. Functionally similar proteins can differ substantially in their robustness to mutations and capacity to evolve new functions, but it has remained unclear whether any of these differences might be due to evolutionary selection for these properties. RESULTS: Here we use laboratory experiments to demonstrate that evolution favors protein mutational robustness if the evolving population is sufficiently large. We neutrally evolve cytochrome P450 proteins under identical selection pressures and mutation rates in populations of different sizes, and show that proteins from the larger and thus more polymorphic population tend towards higher mutational robustness. Proteins from the larger population also evolve greater stability, a biophysical property that is known to enhance both mutational robustness and evolvability. The excess mutational robustness and stability is well described by existing mathematical theories, and can be quantitatively related to the way that the proteins occupy their neutral network. CONCLUSIONS: Our work is the first experimental demonstration of the general tendency of evolution to favor mutational robustness and protein stability in highly polymorphic populations. We suggest that this phenomenon may contribute to the mutational robustness and evolvability of viruses and bacteria that exist in large populations. |
1512.08660 | John Vandermeer | John Vandermeer and Doug Jackson | Spatial pattern and power function deviation in a cellular automata
model of an ant population | null | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A spatially explicit model previously developed to apply to a population of
arboreal nesting ants in southern Mexico is explored in detail, including
notions of criticality and robust criticality and expected power law
adjustment.
| [
{
"created": "Tue, 29 Dec 2015 11:44:44 GMT",
"version": "v1"
}
] | 2015-12-31 | [
[
"Vandermeer",
"John",
""
],
[
"Jackson",
"Doug",
""
]
] | A spatially explicit model previously developed to apply to a population of arboreal nesting ants in southern Mexico is explored in detail, including notions of criticality and robust criticality and expected power law adjustment. |
0712.3897 | Yuri A. Dabaghian | Yu. Dabaghian, A. G. Cohn, L. Frank | Topological Maps from Signals | posted by permission of ACM for personal use. The definitive version
was published in (ACMGIS .07, November 7-9, 2007, Seattle, WA) ISBN
978-1-59593-914-2/07/11. 11 pages, 4 figures | proceedings of the 15th ACM International Symposium ACM GIS 2007,
pp. 392-395 | null | null | q-bio.QM q-bio.NC | null | We discuss the task of reconstructing the topological map of an environment
based on the sequences of locations visited by a mobile agent -- this occurs in
systems neuroscience, where one runs into the task of reconstructing the global
topological map of the environment based on activation patterns of the place
coding cells in hippocampus area of the brain. A similar task appears in the
context of establishing wifi connectivity maps.
| [
{
"created": "Sun, 23 Dec 2007 06:00:20 GMT",
"version": "v1"
}
] | 2007-12-27 | [
[
"Dabaghian",
"Yu.",
""
],
[
"Cohn",
"A. G.",
""
],
[
"Frank",
"L.",
""
]
] | We discuss the task of reconstructing the topological map of an environment based on the sequences of locations visited by a mobile agent -- this occurs in systems neuroscience, where one runs into the task of reconstructing the global topological map of the environment based on activation patterns of the place coding cells in hippocampus area of the brain. A similar task appears in the context of establishing wifi connectivity maps. |
1907.09738 | Linqing Feng | Linqing Feng, Jun Ho Song, Jiwon Kim, Soomin Jeong, Jin Sung Park,
Jinhyun Kim | Robust Nucleus Detection with Partially Labeled Exemplars | null | IEEE Access, vol. 7, pp. 162169-162178, 2019 | 10.1109/ACCESS.2019.2952098 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Quantitative analysis of cell nuclei in microscopic images is an essential
yet challenging source of biological and pathological information. The major
challenge is accurate detection and segmentation of densely packed nuclei in
images acquired under a variety of conditions. Mask R-CNN-based methods have
achieved state-of-the-art nucleus segmentation. However, the current pipeline
requires fully annotated training images, which are time consuming to create
and sometimes noisy. Importantly, nuclei often appear similar within the same
image. This similarity could be utilized to segment nuclei with only partially
labeled training examples. We propose a simple yet effective region-proposal
module for the current Mask R-CNN pipeline to perform few-exemplar learning. To
capture the similarities between unlabeled regions and labeled nuclei, we apply
decomposed self-attention to learned features. On the self-attention map, we
observe strong activation at the centers and edges of all nuclei, including
unlabeled nuclei. On this basis, our region-proposal module propagates partial
annotations to the whole image and proposes effective bounding boxes for the
bounding box-regression and binary mask-generation modules. Our method
effectively learns from unlabeled regions thereby improving detection
performance. We test our method with various nuclear images. When trained with
only 1/4 of the nuclei annotated, our approach retains a detection accuracy
comparable to that from training with fully annotated data. Moreover, our
method can serve as a bootstrapping step to create full annotations of
datasets, iteratively generating and correcting annotations until a
predetermined coverage and accuracy are reached. The source code is available
at https://github.com/feng-lab/nuclei.
| [
{
"created": "Tue, 23 Jul 2019 07:53:22 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2019 09:34:26 GMT",
"version": "v2"
},
{
"created": "Wed, 13 Nov 2019 11:05:31 GMT",
"version": "v3"
}
] | 2019-11-14 | [
[
"Feng",
"Linqing",
""
],
[
"Song",
"Jun Ho",
""
],
[
"Kim",
"Jiwon",
""
],
[
"Jeong",
"Soomin",
""
],
[
"Park",
"Jin Sung",
""
],
[
"Kim",
"Jinhyun",
""
]
] | Quantitative analysis of cell nuclei in microscopic images is an essential yet challenging source of biological and pathological information. The major challenge is accurate detection and segmentation of densely packed nuclei in images acquired under a variety of conditions. Mask R-CNN-based methods have achieved state-of-the-art nucleus segmentation. However, the current pipeline requires fully annotated training images, which are time consuming to create and sometimes noisy. Importantly, nuclei often appear similar within the same image. This similarity could be utilized to segment nuclei with only partially labeled training examples. We propose a simple yet effective region-proposal module for the current Mask R-CNN pipeline to perform few-exemplar learning. To capture the similarities between unlabeled regions and labeled nuclei, we apply decomposed self-attention to learned features. On the self-attention map, we observe strong activation at the centers and edges of all nuclei, including unlabeled nuclei. On this basis, our region-proposal module propagates partial annotations to the whole image and proposes effective bounding boxes for the bounding box-regression and binary mask-generation modules. Our method effectively learns from unlabeled regions thereby improving detection performance. We test our method with various nuclear images. When trained with only 1/4 of the nuclei annotated, our approach retains a detection accuracy comparable to that from training with fully annotated data. Moreover, our method can serve as a bootstrapping step to create full annotations of datasets, iteratively generating and correcting annotations until a predetermined coverage and accuracy are reached. The source code is available at https://github.com/feng-lab/nuclei. |
1410.6364 | Andrea De Martino | Daniele De Martino, Fabrizio Capuani, Andrea De Martino | Inferring metabolic phenotypes from the exometabolome through a
thermodynamic variational principle | 10 pages, to appear in New J Phys (Special Issue) | New J. Phys. 16 (2014) 115018 | 10.1088/1367-2630/16/11/115018 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks of biochemical reactions, like cellular metabolic networks, are kept
in non-equilibrium steady states by the exchange fluxes connecting them to the
environment. In most cases, feasible flux configurations can be derived from
minimal mass-balance assumptions upon prescribing in- and out-take fluxes. Here
we consider the problem of inferring intracellular flux patterns from
extracellular metabolite levels. Resorting to a thermodynamic out of
equilibrium variational principle to describe the network at steady state, we
show that the switch from fermentative to oxidative phenotypes in cells can be
characterized in terms of the glucose, lactate, oxygen and carbon dioxide
concentrations. Results obtained for an exactly solvable toy model are fully
recovered for a large scale reconstruction of human catabolism. Finally we
argue that, in spite of the many approximations involved in the theory,
available data for several human cell types are well described by the predicted
phenotypic map of the problem.
| [
{
"created": "Thu, 23 Oct 2014 13:44:14 GMT",
"version": "v1"
}
] | 2014-12-01 | [
[
"De Martino",
"Daniele",
""
],
[
"Capuani",
"Fabrizio",
""
],
[
"De Martino",
"Andrea",
""
]
] | Networks of biochemical reactions, like cellular metabolic networks, are kept in non-equilibrium steady states by the exchange fluxes connecting them to the environment. In most cases, feasible flux configurations can be derived from minimal mass-balance assumptions upon prescribing in- and out-take fluxes. Here we consider the problem of inferring intracellular flux patterns from extracellular metabolite levels. Resorting to a thermodynamic out of equilibrium variational principle to describe the network at steady state, we show that the switch from fermentative to oxidative phenotypes in cells can be characterized in terms of the glucose, lactate, oxygen and carbon dioxide concentrations. Results obtained for an exactly solvable toy model are fully recovered for a large scale reconstruction of human catabolism. Finally we argue that, in spite of the many approximations involved in the theory, available data for several human cell types are well described by the predicted phenotypic map of the problem. |
1804.08404 | Hartmut Grote | Hartmut Grote | Commentary: Intentional Observer Effects on Quantum Randomness: A
Bayesian Analysis Reveals Evidence Against Micro-Psychokinesis | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper titled `Intentional Observer Effects on Quantum Randomness: A
Bayesian Analysis Reveals Evidence Against Micro-Psychokinesis', published in
Frontiers in Psychology in March 2018, reports on a mind-matter experiment with
the main result of strong evidence against Micro-Psychokinesis. Despite this
conclusion, the authors interpret the observed pattern in their data as
possible evidence for Micro-Psychokinesis, albeit of a different kind.
Suggesting a connection to some existing models, the authors put forward the
hypothesis that a higher frequency of slow data variations can be observed in
their experiment data than in a set of control data. This commentary analyses
this claim and concludes that the variation in the data motivating this
hypothesis would show up just by chance with a probability of p=0.328 under a
null hypothesis. Therefore, there is no evidence for the hypothesis of faster
data variations, and thus for this kind of suggested Micro-Psychokinesis in
this experiment.
| [
{
"created": "Thu, 5 Apr 2018 16:49:53 GMT",
"version": "v1"
}
] | 2018-04-24 | [
[
"Grote",
"Hartmut",
""
]
] | The paper titled `Intentional Observer Effects on Quantum Randomness: A Bayesian Analysis Reveals Evidence Against Micro-Psychokinesis', published in Frontiers in Psychology in March 2018, reports on a mind-matter experiment with the main result of strong evidence against Micro-Psychokinesis. Despite this conclusion, the authors interpret the observed pattern in their data as possible evidence for Micro-Psychokinesis, albeit of a different kind. Suggesting a connection to some existing models, the authors put forward the hypothesis that a higher frequency of slow data variations can be observed in their experiment data than in a set of control data. This commentary analyses this claim and concludes that the variation in the data motivating this hypothesis would show up just by chance with a probability of p=0.328 under a null hypothesis. Therefore, there is no evidence for the hypothesis of faster data variations, and thus for this kind of suggested Micro-Psychokinesis in this experiment.
2408.06377 | Wenwen Min | Donghai Fang, Fangfang Zhu, Dongting Xie and Wenwen Min | Masked Graph Autoencoders with Contrastive Augmentation for Spatially
Resolved Transcriptomics Data | null | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of Spatially Resolved Transcriptomics (SRT)
technology, it is now possible to comprehensively measure gene transcription
while preserving the spatial context of tissues. Spatial domain identification
and gene denoising are key objectives in SRT data analysis. We propose a
Contrastively Augmented Masked Graph Autoencoder (STMGAC) to learn
low-dimensional latent representations for domain identification. In the latent
space, persistent signals for representations are obtained through
self-distillation to guide self-supervised matching. At the same time, positive
and negative anchor pairs are constructed using triplet learning to augment the
discriminative ability. We evaluated the performance of STMGAC on five
datasets, achieving results superior to those of existing baseline methods. All
code and public datasets used in this paper are available at
https://github.com/wenwenmin/STMGAC and https://zenodo.org/records/13253801.
| [
{
"created": "Fri, 9 Aug 2024 02:49:23 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Fang",
"Donghai",
""
],
[
"Zhu",
"Fangfang",
""
],
[
"Xie",
"Dongting",
""
],
[
"Min",
"Wenwen",
""
]
] | With the rapid advancement of Spatially Resolved Transcriptomics (SRT) technology, it is now possible to comprehensively measure gene transcription while preserving the spatial context of tissues. Spatial domain identification and gene denoising are key objectives in SRT data analysis. We propose a Contrastively Augmented Masked Graph Autoencoder (STMGAC) to learn low-dimensional latent representations for domain identification. In the latent space, persistent signals for representations are obtained through self-distillation to guide self-supervised matching. At the same time, positive and negative anchor pairs are constructed using triplet learning to augment the discriminative ability. We evaluated the performance of STMGAC on five datasets, achieving results superior to those of existing baseline methods. All code and public datasets used in this paper are available at https://github.com/wenwenmin/STMGAC and https://zenodo.org/records/13253801.
2201.04371 | Simon Labarthe | Guillaume Ravel (PLEIADE, BioGeCo), Michel Bergmann (MEMPHIS), Alain
Trubuil (MaIAGE), Julien Deschamps (MICALIS), Romain Briandet (MICALIS),
Simon Labarthe (PLEIADE, BioGeCo, MaIAGE) | Inferring characteristics of bacterial swimming in biofilm matrix from
time-lapse confocal laser scanning microscopy | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biofilms are spatially organized microorganism colonies embedded in a
self-produced matrix, conferring to the microbial community resistance to
environmental stresses. Motile bacteria have been observed swimming in the
matrix of pathogenic exogeneous host biofilms. This observation opened new
promising routes for deleterious biofilms biocontrol: these bacterial swimmers
enhance biofilm vascularization for chemical treatment or could deliver
biocontrol agent by microbial hitchhiking or local synthesis.
Hence, characterizing swimmer trajectories in the biofilm matrix is of
particular interest to understand and optimize its biocontrol. In this study, a
new methodology is developed to analyze time-lapse confocal laser scanning
images to describe and compare the swimming trajectories of bacterial swimmer
populations and their adaptations to the biofilm structure. The method is based
on the inference of a kinetic model of the swimmer population including
mechanistic interactions with the host biofilm. After validation on synthetic
data, the methodology is implemented on images of three different motile
Bacillus species swimming in a Staphylococcus aureus biofilm. The fitted model
allows the swimmer populations to be stratified by their swimming behavior and
provides
insights into the mechanisms deployed by the micro-swimmers to adapt their
swimming traits to the biofilm matrix.
| [
{
"created": "Wed, 12 Jan 2022 09:09:17 GMT",
"version": "v1"
},
{
"created": "Mon, 23 May 2022 08:19:17 GMT",
"version": "v2"
}
] | 2022-05-24 | [
[
"Ravel",
"Guillaume",
"",
"PLEIADE, BioGeCo"
],
[
"Bergmann",
"Michel",
"",
"MEMPHIS"
],
[
"Trubuil",
"Alain",
"",
"MaIAGE"
],
[
"Deschamps",
"Julien",
"",
"MICALIS"
],
[
"Briandet",
"Romain",
"",
"MICALIS"
],
[
"Labarthe",
"Simon",
"",
"PLEIADE, BioGeCo, MaIAGE"
]
] | Biofilms are spatially organized microorganism colonies embedded in a self-produced matrix, conferring to the microbial community resistance to environmental stresses. Motile bacteria have been observed swimming in the matrix of pathogenic exogenous host biofilms. This observation opened new promising routes for deleterious biofilm biocontrol: these bacterial swimmers enhance biofilm vascularization for chemical treatment or could deliver biocontrol agent by microbial hitchhiking or local synthesis. Hence, characterizing swimmer trajectories in the biofilm matrix is of particular interest to understand and optimize its biocontrol. In this study, a new methodology is developed to analyze time-lapse confocal laser scanning images to describe and compare the swimming trajectories of bacterial swimmer populations and their adaptations to the biofilm structure. The method is based on the inference of a kinetic model of the swimmer population including mechanistic interactions with the host biofilm. After validation on synthetic data, the methodology is implemented on images of three different motile Bacillus species swimming in a Staphylococcus aureus biofilm. The fitted model allows the swimmer populations to be stratified by their swimming behavior and provides insights into the mechanisms deployed by the micro-swimmers to adapt their swimming traits to the biofilm matrix.
q-bio/0504032 | Chikara Furusawa | Chikara Furusawa and Kunihiko Kaneko | Evolutionary origin of power-laws in Biochemical Reaction Network;
embedding abundance distribution into topology | 11 pages, 3 figures | null | 10.1103/PhysRevE.73.011912 | null | q-bio.PE q-bio.MN | null | The evolutionary origin of universal statistics in biochemical reaction
networks is studied, to explain the power-law distribution of reaction links and
the power-law distributions of chemical abundances. Using cell models with
catalytic reaction networks, we find evidence that the power-law distribution in
abundances of chemicals emerges by the selection of cells with higher growth
speeds. Through further evolution, this inhomogeneity in chemical
abundances is shown to be embedded in the distribution of links, leading to the
power-law distribution. These findings provide novel insights into the nature
of network evolution in living cells.
| [
{
"created": "Thu, 28 Apr 2005 07:13:55 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Furusawa",
"Chikara",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | The evolutionary origin of universal statistics in biochemical reaction networks is studied, to explain the power-law distribution of reaction links and the power-law distributions of chemical abundances. Using cell models with catalytic reaction networks, we find evidence that the power-law distribution in abundances of chemicals emerges by the selection of cells with higher growth speeds. Through further evolution, this inhomogeneity in chemical abundances is shown to be embedded in the distribution of links, leading to the power-law distribution. These findings provide novel insights into the nature of network evolution in living cells.
2405.04920 | Nicola Dietler | Nicola Dietler, Alia Abbara, Subham Choudhury, Anne-Florence Bitbol | Impact of phylogeny on the inference of functional sectors from protein
sequence data | null | null | null | null | q-bio.PE physics.bio-ph q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical analysis of multiple sequence alignments of homologous proteins
has revealed groups of coevolving amino acids called sectors. These groups of
amino-acid sites feature collective correlations in their amino-acid usage, and
they are associated to functional properties. Modeling showed that natural
selection on an additive functional trait of a protein is generically expected
to give rise to a functional sector. These modeling results motivated a
principled method, called ICOD, which is designed to identify functional
sectors, as well as mutational effects, from sequence data. However, a
challenge for all methods aiming to identify sectors from multiple sequence
alignments is that correlations in amino-acid usage can also arise from the
mere fact that homologous sequences share common ancestry, i.e. from phylogeny.
Here, we generate controlled synthetic data from a minimal model comprising
both phylogeny and functional sectors. We use this data to dissect the impact
of phylogeny on sector identification and on mutational effect inference by
different methods. We find that ICOD is most robust to phylogeny, but that
conservation is also quite robust. Next, we consider natural multiple sequence
alignments of protein families for which deep mutational scan experimental data
is available. We show that in this natural data, conservation and ICOD best
identify sites with strong functional roles, in agreement with our results on
synthetic data. Importantly, these two methods have different premises, since
they respectively focus on conservation and on correlations. Thus, their joint
use can reveal complementary information.
| [
{
"created": "Wed, 8 May 2024 09:42:08 GMT",
"version": "v1"
}
] | 2024-05-09 | [
[
"Dietler",
"Nicola",
""
],
[
"Abbara",
"Alia",
""
],
[
"Choudhury",
"Subham",
""
],
[
"Bitbol",
"Anne-Florence",
""
]
] | Statistical analysis of multiple sequence alignments of homologous proteins has revealed groups of coevolving amino acids called sectors. These groups of amino-acid sites feature collective correlations in their amino-acid usage, and they are associated to functional properties. Modeling showed that natural selection on an additive functional trait of a protein is generically expected to give rise to a functional sector. These modeling results motivated a principled method, called ICOD, which is designed to identify functional sectors, as well as mutational effects, from sequence data. However, a challenge for all methods aiming to identify sectors from multiple sequence alignments is that correlations in amino-acid usage can also arise from the mere fact that homologous sequences share common ancestry, i.e. from phylogeny. Here, we generate controlled synthetic data from a minimal model comprising both phylogeny and functional sectors. We use this data to dissect the impact of phylogeny on sector identification and on mutational effect inference by different methods. We find that ICOD is most robust to phylogeny, but that conservation is also quite robust. Next, we consider natural multiple sequence alignments of protein families for which deep mutational scan experimental data is available. We show that in this natural data, conservation and ICOD best identify sites with strong functional roles, in agreement with our results on synthetic data. Importantly, these two methods have different premises, since they respectively focus on conservation and on correlations. Thus, their joint use can reveal complementary information. |
2309.17161 | Mark Van Der Wilk | Jacob Green, Cecilia Cabrera Diaz, Maximilian A. H. Jakobs, Andrea
Dimitracopoulos, Mark van der Wilk, Ryan D. Greenhalgh | Current Methods for Drug Property Prediction in the Real World | null | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting drug properties is key in drug discovery to enable de-risking of
assets before expensive clinical trials, and to find highly active compounds
faster. Interest from the Machine Learning community has led to the release of
a variety of benchmark datasets and proposed methods. However, it remains
unclear for practitioners which method or approach is most suitable, as
different papers benchmark on different datasets and methods, leading to
varying conclusions that are not easily compared. Our large-scale empirical
study links together numerous earlier works on different datasets and methods;
thus offering a comprehensive overview of the existing property classes,
datasets, and their interactions with different methods. We emphasise the
importance of uncertainty quantification and the time and therefore cost of
applying these methods in the drug development decision-making cycle. We
discover that the best method depends on the dataset, and that engineered
features with classical ML methods often outperform deep learning.
Specifically, QSAR datasets are typically best analysed with classical methods
such as Gaussian Processes while ADMET datasets are sometimes better described
by Trees or Deep Learning methods such as Graph Neural Networks or language
models. Our work highlights that practitioners do not yet have a
straightforward, black-box procedure to rely on, and sets the precedent for
creating practitioner-relevant benchmarks. Deep learning approaches must be
proven on these benchmarks to become the practical method of choice in drug
property prediction.
| [
{
"created": "Tue, 25 Jul 2023 17:50:05 GMT",
"version": "v1"
}
] | 2023-10-02 | [
[
"Green",
"Jacob",
""
],
[
"Diaz",
"Cecilia Cabrera",
""
],
[
"Jakobs",
"Maximilian A. H.",
""
],
[
"Dimitracopoulos",
"Andrea",
""
],
[
"van der Wilk",
"Mark",
""
],
[
"Greenhalgh",
"Ryan D.",
""
]
] | Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials, and to find highly active compounds faster. Interest from the Machine Learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods; thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and therefore cost of applying these methods in the drug development decision-making cycle. We discover that the best method depends on the dataset, and that engineered features with classical ML methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes while ADMET datasets are sometimes better described by Trees or Deep Learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on, and sets the precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction. |
1502.04307 | Peter O. Fedichev | D. Podolskiy, I. Molodtcov, A. Zenin, V. Kogan, L. I. Menshikov, Vadim
N. Gladyshev, Robert J. Shmookler Reis, and P. O. Fedichev | Critical dynamics of gene networks is a mechanism behind ageing and
Gompertz law | 22 pages, 9 figures | null | null | null | q-bio.MN physics.bio-ph q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although accumulation of molecular damage is suggested to be an important
molecular mechanism of aging, a quantitative link between the dynamics of
damage accumulation and mortality of species has so far remained elusive. To
address this question, we examine stability properties of a generic gene
regulatory network (GRN) and demonstrate that many characteristics of aging and
the associated population mortality rate emerge as inherent properties of the
critical dynamics of gene regulation and metabolic levels. Based on the
analysis of age-dependent changes in gene-expression and metabolic profiles in
Drosophila melanogaster, we explicitly show that the underlying GRNs are nearly
critical and inherently unstable. This instability manifests itself as aging in
the form of distortion of gene expression and metabolic profiles with age, and
causes the characteristic increase in mortality rate with age as described by a
form of the Gompertz law. In addition, we explain late-life mortality
deceleration observed at very late ages for large populations. We show that
aging contains a stochastic component, related to accumulation of regulatory
errors in transcription/translation/metabolic pathways due to imperfection of
signaling cascades in the network and of responses to environmental factors. We
also establish that there is a strong deterministic component, suggesting
genetic control. Since mortality in humans, where it is characterized best, is
strongly associated with the incidence of age-related diseases, our findings
support the idea that aging is the driving force behind the development of
chronic human diseases.
| [
{
"created": "Sun, 15 Feb 2015 12:41:44 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Nov 2016 06:25:52 GMT",
"version": "v2"
}
] | 2016-11-24 | [
[
"Podolskiy",
"D.",
""
],
[
"Molodtcov",
"I.",
""
],
[
"Zenin",
"A.",
""
],
[
"Kogan",
"V.",
""
],
[
"Menshikov",
"L. I.",
""
],
[
"Gladyshev",
"Vadim N.",
""
],
[
"Reis",
"Robert J. Shmookler",
""
],
[
"Fedichev",
"P. O.",
""
]
] | Although accumulation of molecular damage is suggested to be an important molecular mechanism of aging, a quantitative link between the dynamics of damage accumulation and mortality of species has so far remained elusive. To address this question, we examine stability properties of a generic gene regulatory network (GRN) and demonstrate that many characteristics of aging and the associated population mortality rate emerge as inherent properties of the critical dynamics of gene regulation and metabolic levels. Based on the analysis of age-dependent changes in gene-expression and metabolic profiles in Drosophila melanogaster, we explicitly show that the underlying GRNs are nearly critical and inherently unstable. This instability manifests itself as aging in the form of distortion of gene expression and metabolic profiles with age, and causes the characteristic increase in mortality rate with age as described by a form of the Gompertz law. In addition, we explain late-life mortality deceleration observed at very late ages for large populations. We show that aging contains a stochastic component, related to accumulation of regulatory errors in transcription/translation/metabolic pathways due to imperfection of signaling cascades in the network and of responses to environmental factors. We also establish that there is a strong deterministic component, suggesting genetic control. Since mortality in humans, where it is characterized best, is strongly associated with the incidence of age-related diseases, our findings support the idea that aging is the driving force behind the development of chronic human diseases. |
0801.4278 | Shu-Dong Zhang | Shu-Dong Zhang and Timothy W. Gant | A simple and robust method for connecting small-molecule drugs using
gene-expression signatures | 8 pages, 2 figures, and 2 tables; supplementary data supplied as a
ZIP file | BMC Bioinformatics 2008, 9:258 | 10.1186/1471-2105-9-258 | null | q-bio.QM q-bio.GN | null | Interaction of a drug or chemical with a biological system can result in a
gene-expression profile or signature characteristic of the event. Using a
suitably robust algorithm these signatures can potentially be used to connect
molecules with similar pharmacological or toxicological properties. The
Connectivity Map was a novel concept and innovative tool first introduced by
Lamb et al to connect small molecules, genes, and diseases using genomic
signatures [Lamb et al (2006), Science 313, 1929-1935]. However, the
Connectivity Map had some limitations, particularly there was no effective
safeguard against false connections if the observed connections were considered
on an individual-by-individual basis. Further when several connections to the
same small-molecule compound were viewed as a set, the implicit null hypothesis
tested was not the most relevant one for the discovery of real connections.
Here we propose a simple and robust method for constructing the reference
gene-expression profiles and a new connection scoring scheme, which importantly
allows the evaluation of statistical significance of all the connections
observed. We tested the new method with the two example gene-signatures (HDAC
inhibitors and Estrogens) used by Lamb et al and also a new gene signature of
immunosuppressive drugs. Our testing with this new method shows that it
achieves a higher level of specificity and sensitivity than the original
method. For example, our method successfully identified raloxifene and
tamoxifen as having significant anti-estrogen effects, while Lamb et al's
Connectivity Map failed to identify these. With these properties our new method
has potential use in drug development for the recognition of pharmacological
and toxicological properties in new drug candidates.
| [
{
"created": "Mon, 28 Jan 2008 14:00:01 GMT",
"version": "v1"
}
] | 2008-06-02 | [
[
"Zhang",
"Shu-Dong",
""
],
[
"Gant",
"Timothy W.",
""
]
] | Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties. The Connectivity Map was a novel concept and innovative tool first introduced by Lamb et al to connect small molecules, genes, and diseases using genomic signatures [Lamb et al (2006), Science 313, 1929-1935]. However, the Connectivity Map had some limitations, particularly there was no effective safeguard against false connections if the observed connections were considered on an individual-by-individual basis. Further when several connections to the same small-molecule compound were viewed as a set, the implicit null hypothesis tested was not the most relevant one for the discovery of real connections. Here we propose a simple and robust method for constructing the reference gene-expression profiles and a new connection scoring scheme, which importantly allows the evaluation of statistical significance of all the connections observed. We tested the new method with the two example gene-signatures (HDAC inhibitors and Estrogens) used by Lamb et al and also a new gene signature of immunosuppressive drugs. Our testing with this new method shows that it achieves a higher level of specificity and sensitivity than the original method. For example, our method successfully identified raloxifene and tamoxifen as having significant anti-estrogen effects, while Lamb et al's Connectivity Map failed to identify these. With these properties our new method has potential use in drug development for the recognition of pharmacological and toxicological properties in new drug candidates.
1904.01847 | Tal Einav | Tal Einav, Rob Phillips | How the Avidity of Polymerase Binding to the -35/-10 Promoter Sites
Affects Gene Expression | null | null | 10.1073/pnas.1905615116 | null | q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | Although the key promoter elements necessary to drive transcription in
Escherichia coli have long been understood, we still cannot predict the
behavior of arbitrary novel promoters, hampering our ability to characterize
the myriad of sequenced regulatory architectures as well as to design novel
synthetic circuits. This work builds upon a beautiful recent experiment by
Urtecho et al. who measured the gene expression of over 10,000 promoters
spanning all possible combinations of a small set of regulatory elements. Using
this data, we demonstrate that a central claim in energy matrix models of gene
expression - that each promoter element contributes independently and
additively to gene expression - contradicts experimental measurements. We
propose that a key missing ingredient from such models is the avidity between
the -35 and -10 RNA polymerase binding sites and develop what we call a refined
energy matrix model that incorporates this effect. We show that this refined
energy matrix model can characterize the full suite of gene expression
data and explore several applications of this framework, namely, how
multivalent binding at the -35 and -10 sites can buffer RNAP kinetics against
mutations and how promoters that bind overly tightly to RNA polymerase can
inhibit gene expression. The success of our approach suggests that avidity
represents a key physical principle governing the interaction of RNA polymerase
to its promoter.
| [
{
"created": "Wed, 3 Apr 2019 08:42:39 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Einav",
"Tal",
""
],
[
"Phillips",
"Rob",
""
]
] | Although the key promoter elements necessary to drive transcription in Escherichia coli have long been understood, we still cannot predict the behavior of arbitrary novel promoters, hampering our ability to characterize the myriad of sequenced regulatory architectures as well as to design novel synthetic circuits. This work builds upon a beautiful recent experiment by Urtecho et al. who measured the gene expression of over 10,000 promoters spanning all possible combinations of a small set of regulatory elements. Using this data, we demonstrate that a central claim in energy matrix models of gene expression - that each promoter element contributes independently and additively to gene expression - contradicts experimental measurements. We propose that a key missing ingredient from such models is the avidity between the -35 and -10 RNA polymerase binding sites and develop what we call a refined energy matrix model that incorporates this effect. We show that this refined energy matrix model can characterize the full suite of gene expression data and explore several applications of this framework, namely, how multivalent binding at the -35 and -10 sites can buffer RNAP kinetics against mutations and how promoters that bind overly tightly to RNA polymerase can inhibit gene expression. The success of our approach suggests that avidity represents a key physical principle governing the interaction of RNA polymerase to its promoter.
1607.03955 | William Barendse PhD | Sean McWilliam, Peter M. Grewe, Rowan J. Bunch and William Barendse | A draft genome assembly of southern bluefin tuna Thunnus maccoyii | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tuna are large pelagic fish whose populations are close to panmixia. In
addition, they are threatened species, so it is important for the maintenance
and monitoring of genetic diversity that genetic information at a genome level
be obtained. Here we report the draft assembly of the southern bluefin tuna
genome and the collection of genome-wide sequence data for five other tuna
species. We sampled five tuna species of the genus Thunnus, the northern and
southern bluefin, yellowfin, albacore, and bigeye, as well as the skipjack
(Katsuwonus pelamis), a tuna-like species. Genome assembly was facilitated at
k-mer=25 while k-mer=51 generated assembly artefacts. The estimated size of the
southern bluefin tuna genome was 795 Mb. We assembled two southern bluefin tuna
individuals independently using both paired end and mate pair sequence. This
resulted in scaffolds with N50>174,000 bp and maximum scaffold lengths>1.4 Mb.
Our estimate of the size of the assembled genome was the scaffolded sequences
in common to both assemblies, which amounted to 721 Mb of the 795 Mb of the
southern bluefin tuna genome sequence. Using BLAST, there were matches between
13,039 of 14,341 (91%) refseq mRNA of the zebrafish Danio rerio to the tuna
assembly indicating that most of a generic fish transcriptome was covered by
the assembly.
| [
{
"created": "Wed, 13 Jul 2016 23:20:06 GMT",
"version": "v1"
}
] | 2016-07-15 | [
[
"McWilliam",
"Sean",
""
],
[
"Grewe",
"Peter M.",
""
],
[
"Bunch",
"Rowan J.",
""
],
[
"Barendse",
"William",
""
]
] | Tuna are large pelagic fish whose populations are close to panmixia. In addition, they are threatened species, so it is important for the maintenance and monitoring of genetic diversity that genetic information at a genome level be obtained. Here we report the draft assembly of the southern bluefin tuna genome and the collection of genome-wide sequence data for five other tuna species. We sampled five tuna species of the genus Thunnus, the northern and southern bluefin, yellowfin, albacore, and bigeye, as well as the skipjack (Katsuwonus pelamis), a tuna-like species. Genome assembly was facilitated at k-mer=25 while k-mer=51 generated assembly artefacts. The estimated size of the southern bluefin tuna genome was 795 Mb. We assembled two southern bluefin tuna individuals independently using both paired end and mate pair sequence. This resulted in scaffolds with N50>174,000 bp and maximum scaffold lengths>1.4 Mb. Our estimate of the size of the assembled genome was the scaffolded sequences in common to both assemblies, which amounted to 721 Mb of the 795 Mb of the southern bluefin tuna genome sequence. Using BLAST, there were matches between 13,039 of 14,341 (91%) refseq mRNA of the zebrafish Danio rerio to the tuna assembly indicating that most of a generic fish transcriptome was covered by the assembly.
1301.0953 | Alex Volinsky | Alex A. Volinsky, Nikolai V. Gubarev, Galina M. Orlovskaya, Elena V.
Marchenko | Human anaerobic intestinal "rope" parasites | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human intestinal helminths are described in this paper. They can be over a
meter long, with an irregular cylindrical shape, resembling a rope. These
anaerobic intestinal "rope" parasites differ significantly from other
well-known intestinal parasites. Rope parasites can leave the human body with
enemas, and are often mistaken for intestinal lining, feces, or decayed remains
of other parasites. Rope parasites can attach to intestinal walls with suction
bubbles, which later develop into suction heads. Walls of the rope parasites
consist of scale-like cells forming multiple branched channels along the
parasite's length. Rope parasites can move by jet propulsion, passing gas
bubbles through these channels. Currently known antihelminthic methods include
special enemas. Most humans are likely hosting these helminths.
| [
{
"created": "Sat, 5 Jan 2013 23:18:37 GMT",
"version": "v1"
}
] | 2013-01-08 | [
[
"Volinsky",
"Alex A.",
""
],
[
"Gubarev",
"Nikolai V.",
""
],
[
"Orlovskaya",
"Galina M.",
""
],
[
"Marchenko",
"Elena V.",
""
]
] | Human intestinal helminths are described in this paper. They can be over a meter long, with an irregular cylindrical shape, resembling a rope. These anaerobic intestinal "rope" parasites differ significantly from other well-known intestinal parasites. Rope parasites can leave the human body with enemas, and are often mistaken for intestinal lining, feces, or decayed remains of other parasites. Rope parasites can attach to intestinal walls with suction bubbles, which later develop into suction heads. Walls of the rope parasites consist of scale-like cells forming multiple branched channels along the parasite's length. Rope parasites can move by jet propulsion, passing gas bubbles through these channels. Currently known antihelminthic methods include special enemas. Most humans are likely hosting these helminths.
1307.3434 | Jacob Kuriyan | Jacob Kuriyan, Nathaniel Cobb | Forecasts of Cancer and Chronic Patients: Big Data Metrics of Population
Health | 26 pages, 6 figures and 6 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic diseases and cancer account for over 75 percent of healthcare costs
in the US. Increased prevention services and improved primary care are thought
to decrease costs. Current models for detecting changes in the health of
populations are cumbersome and expensive, and are not sensitive in the short
term. In this paper we model population health as a dynamical system to predict
the time evolution of the new diagnosis of chronic diseases and cancer. This
provides a reliable forecasting tool and a means of measuring short-term
changes in the health status of the population resulting from preventive care
programs. Twelve month forecasts of cancer and chronic populations were
accurate with errors lying between 3 percent and 6 percent. We confirmed what
other studies have demonstrated that diabetes patients are at increased cancer
risk but, interestingly, we also discovered that all of the studied chronic
conditions increased cancer risk just as diabetes did, and by a similar amount.
The model (i) yields a new metric for measuring performance of preventive and
clinical care programs that can provide timely feedback for quality improvement
programs; (ii) helps understand "savings" in the context of preventive care
programs and explains how they can be calculated in the short term, even though
they materialize only in the long term and (iii) provides an analytic tool and
metrics to infer correlations and derive insights on the effect of changes in
socio-economic factors affecting population health on improving health and
lowering costs of populations.
| [
{
"created": "Fri, 12 Jul 2013 12:25:24 GMT",
"version": "v1"
}
] | 2013-07-15 | [
[
"Kuriyan",
"Jacob",
""
],
[
"Cobb",
"Nathaniel",
""
]
] | Chronic diseases and cancer account for over 75 percent of healthcare costs in the US. Increased prevention services and improved primary care are thought to decrease costs. Current models for detecting changes in the health of populations are cumbersome and expensive, and are not sensitive in the short term. In this paper we model population health as a dynamical system to predict the time evolution of the new diagnosis of chronic diseases and cancer. This provides a reliable forecasting tool and a means of measuring short-term changes in the health status of the population resulting from preventive care programs. Twelve month forecasts of cancer and chronic populations were accurate with errors lying between 3 percent and 6 percent. We confirmed what other studies have demonstrated that diabetes patients are at increased cancer risk but, interestingly, we also discovered that all of the studied chronic conditions increased cancer risk just as diabetes did, and by a similar amount. The model (i) yields a new metric for measuring performance of preventive and clinical care programs that can provide timely feedback for quality improvement programs; (ii) helps understand "savings" in the context of preventive care programs and explains how they can be calculated in the short term, even though they materialize only in the long term and (iii) provides an analytic tool and metrics to infer correlations and derive insights on the effect of changes in socio-economic factors affecting population health on improving health and lowering costs of populations.
q-bio/0311021 | M. J. Gagen | Larry J. Croft, Martin J. Lercher, Michael J. Gagen, John S. Mattick | Is prokaryotic complexity limited by accelerated growth in regulatory
overhead? | 10 pages, 1 figure | null | null | null | q-bio.MN q-bio.GN | null | Increased biological complexity is generally associated with the addition of
new genetic information, which must be integrated into the existing regulatory
network that operates within the cell. General arguments on network control, as
well as several recent genomic observations, indicate that regulatory gene
number grows disproportionally fast with increasing genome size. We present two
models for the growth of regulatory networks. Both predict that the number of
transcriptional regulators will scale quadratically with total gene number.
This appears to be in good quantitative agreement with genomic data from 89
fully sequenced prokaryotes. Moreover, the empirical curve predicts that any
new non-regulatory gene will be accompanied by more than one additional
regulator beyond a genome size of about 20,000 genes, within a factor of two of
the observed ceiling. Our analysis places transcriptional regulatory networks
in the class of accelerating networks. We suggest that prokaryotic complexity
may have been limited throughout evolution by regulatory overhead, and
conversely that complex eukaryotes must have bypassed this constraint by novel
strategies.
| [
{
"created": "Sat, 15 Nov 2003 09:15:54 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Croft",
"Larry J.",
""
],
[
"Lercher",
"Martin J.",
""
],
[
"Gagen",
"Michael J.",
""
],
[
"Mattick",
"John S.",
""
]
] | Increased biological complexity is generally associated with the addition of new genetic information, which must be integrated into the existing regulatory network that operates within the cell. General arguments on network control, as well as several recent genomic observations, indicate that regulatory gene number grows disproportionally fast with increasing genome size. We present two models for the growth of regulatory networks. Both predict that the number of transcriptional regulators will scale quadratically with total gene number. This appears to be in good quantitative agreement with genomic data from 89 fully sequenced prokaryotes. Moreover, the empirical curve predicts that any new non-regulatory gene will be accompanied by more than one additional regulator beyond a genome size of about 20,000 genes, within a factor of two of the observed ceiling. Our analysis places transcriptional regulatory networks in the class of accelerating networks. We suggest that prokaryotic complexity may have been limited throughout evolution by regulatory overhead, and conversely that complex eukaryotes must have bypassed this constraint by novel strategies. |
2004.06107 | Vasiliy Leonenko | Vasiliy N. Leonenko | Analyzing the spatial distribution of acute coronary syndrome cases
using synthesized data on arterial hypertension prevalence | Accepted for publication in the ICCS 2020 proceedings (Springer
Lecture Notes in Computer Science series) | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current study, the authors demonstrate the method aimed at analyzing
the distribution of acute coronary syndrome (ACS) cases in Saint Petersburg
using the synthetic population approach and a statistical model for arterial
hypertension prevalence. The cumulative number of emergency services calls in a
separate geographical area (a grid cell of a map) associated with ACS is
matched with the assessed number of dwellers and individuals with arterial
hypertension, which makes it possible to find locations with excessive ACS
incidence. The proposed method is implemented in the Python programming
language, and the visualization results are shown using the open-source QGIS
software. Three categories
of locations are proposed based on the analysis results. The demonstrated
method might be applied for using the statistical assessments of hidden health
conditions in the population to categorize spatial distributions of their
visible consequences.
| [
{
"created": "Fri, 10 Apr 2020 19:47:02 GMT",
"version": "v1"
}
] | 2020-04-15 | [
[
"Leonenko",
"Vasiliy N.",
""
]
] | In the current study, the authors demonstrate the method aimed at analyzing the distribution of acute coronary syndrome (ACS) cases in Saint Petersburg using the synthetic population approach and a statistical model for arterial hypertension prevalence. The cumulative number of emergency services calls in a separate geographical area (a grid cell of a map) associated with ACS is matched with the assessed number of dwellers and individuals with arterial hypertension, which makes it possible to find locations with excessive ACS incidence. The proposed method is implemented in the Python programming language, and the visualization results are shown using the open-source QGIS software. Three categories of locations are proposed based on the analysis results. The demonstrated method might be applied for using the statistical assessments of hidden health conditions in the population to categorize spatial distributions of their visible consequences.
2402.15186 | Yousong Peng | Yifan Wu, Yousong Peng | Ten computational challenges in human virome studies | 16 pages, 1 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In recent years, substantial advancements have been achieved in understanding
the diversity of the human virome and its intricate roles in human health and
diseases. Despite this progress, the field of human virome research remains
nascent, primarily hindered by the absence of effective methods, particularly
in the domain of computational tools. This perspective systematically outlines
ten computational challenges spanning various phases of virome studies, ranging
from virus identification, sequencing quality evaluation, genome assembly,
annotation of viral taxonomy, genome and proteins, inference of biological
properties, applications of the virome in disease diagnosis, interactions with
other microbes, to associations with human diseases. The resolution of these
challenges necessitates ongoing collaboration among computational scientists,
virologists, and multidisciplinary experts. In essence, this perspective serves
as a comprehensive guide for directing computational efforts in human virome
studies, aiming to significantly propel the field forward.
| [
{
"created": "Fri, 23 Feb 2024 08:33:00 GMT",
"version": "v1"
}
] | 2024-02-26 | [
[
"Wu",
"Yifan",
""
],
[
"Peng",
"Yousong",
""
]
] | In recent years, substantial advancements have been achieved in understanding the diversity of the human virome and its intricate roles in human health and diseases. Despite this progress, the field of human virome research remains nascent, primarily hindered by the absence of effective methods, particularly in the domain of computational tools. This perspective systematically outlines ten computational challenges spanning various phases of virome studies, ranging from virus identification, sequencing quality evaluation, genome assembly, annotation of viral taxonomy, genome and proteins, inference of biological properties, applications of the virome in disease diagnosis, interactions with other microbes, to associations with human diseases. The resolution of these challenges necessitates ongoing collaboration among computational scientists, virologists, and multidisciplinary experts. In essence, this perspective serves as a comprehensive guide for directing computational efforts in human virome studies, aiming to significantly propel the field forward.
1407.0972 | Walter S. Lasecki | Mircea Davidescu, Iain Couzin | Transient Leadership and Collective Cell Movement in Early Diverged
Multicellular Animals | null | null | null | ci-2014/51 | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collective motion of cells is critical to some of the most vital tasks
including wound healing, development, and immune response [Friedl and Gilmour
2009; Tokarski et al. 2012; Lee et al. 2012; Beltman et al. 2009], and is
common to many pathological processes including cancer cell invasion and
teratogenesis [Khalil and Friedl 2010]. The extensive understanding of movement
by single cells [R{\o}rth 2011; Insall and Machesky 2011; Houk et al. 2012] is
insufficient to predict the behavior of cellular groups [Theveneau et al. 2013;
Trepat, X. and Fredberg 2011], and identifying underlying rules of coordination
in collective cell migration is still elusive. Few of the supposed benefits of
collective motion have ever been tested at the cellular scale. As an example,
though collective sensing allows for larger groups to exhibit greater accuracy
in navigation [Simons 2004; Berdahl et al. 2013] and group taxis is possible
through the leadership of only a few individuals [Couzin et al. 2005], such
effects have never been investigated in collective cell migration. We will
investigate collective motion and decision-making in a primitive multicellular
animal, Trichoplax adhaerens, to understand how intercellular coordination
affects animal behavior and how migration accuracy scales with cellular group
size.
| [
{
"created": "Mon, 30 Jun 2014 02:26:31 GMT",
"version": "v1"
}
] | 2014-07-04 | [
[
"Davidescu",
"Mircea",
""
],
[
"Couzin",
"Iain",
""
]
] | Collective motion of cells is critical to some of the most vital tasks including wound healing, development, and immune response [Friedl and Gilmour 2009; Tokarski et al. 2012; Lee et al. 2012; Beltman et al. 2009], and is common to many pathological processes including cancer cell invasion and teratogenesis [Khalil and Friedl 2010]. The extensive understanding of movement by single cells [R{\o}rth 2011; Insall and Machesky 2011; Houk et al. 2012] is insufficient to predict the behavior of cellular groups [Theveneau et al. 2013; Trepat, X. and Fredberg 2011], and identifying underlying rules of coordination in collective cell migration is still elusive. Few of the supposed benefits of collective motion have ever been tested at the cellular scale. As an example, though collective sensing allows for larger groups to exhibit greater accuracy in navigation [Simons 2004; Berdahl et al. 2013] and group taxis is possible through the leadership of only a few individuals [Couzin et al. 2005], such effects have never been investigated in collective cell migration. We will investigate collective motion and decision-making in a primitive multicellular animal, Trichoplax adhaerens, to understand how intercellular coordination affects animal behavior and how migration accuracy scales with cellular group size.
2310.01392 | Sangeeta Saha | Sangeeta Saha, Swadesh Pal and Roderick Melnik | The analysis of the impact of fear in the presence of additional food
and prey refuge with nonlocal predator-prey models | 26 pages, 28 figures | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In a predator-prey interaction, many factors are present that affect the
growth of the species, either positively or negatively. Fear of predation is
such a factor that causes psychological stress in a prey species, so their
overall growth starts to decline. In this work, a predator-prey model is
proposed where the prey species faces a reduction in their growth out of fear,
and the predator is also provided with an alternative food source that helps
the prey to hide in a safer place. The dynamics produce a wide range of
interesting results, including the significance of the presence of a certain
amount of fear or even prey refuge for population coexistence. The analysis is
extended later to the nonlocal model to analyze how the non-equilibrium
phenomena influence the dynamical behaviour. It is observed through numerical
simulations that the scope of pattern formation reduces with the increase of
fear level in the spatio-temporal model, whereas the incorporation of nonlocal
interaction further increases the chance of species colonization.
| [
{
"created": "Mon, 2 Oct 2023 17:50:22 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 22:01:44 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Saha",
"Sangeeta",
""
],
[
"Pal",
"Swadesh",
""
],
[
"Melnik",
"Roderick",
""
]
] | In a predator-prey interaction, many factors are present that affect the growth of the species, either positively or negatively. Fear of predation is such a factor that causes psychological stress in a prey species, so their overall growth starts to decline. In this work, a predator-prey model is proposed where the prey species faces a reduction in their growth out of fear, and the predator is also provided with an alternative food source that helps the prey to hide in a safer place. The dynamics produce a wide range of interesting results, including the significance of the presence of a certain amount of fear or even prey refuge for population coexistence. The analysis is extended later to the nonlocal model to analyze how the non-equilibrium phenomena influence the dynamical behaviour. It is observed through numerical simulations that the scope of pattern formation reduces with the increase of fear level in the spatio-temporal model, whereas the incorporation of nonlocal interaction further increases the chance of species colonization. |
1403.7105 | Dur-e-Zehta Baig | Dur-e-Zehra Baig | Physiological Control of Human Heart Rate and Oxygen Consumption during
Rhythmic Exercises | null | null | null | null | q-bio.QM cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physical exercise has significant benefits for humans in improving the health
and quality of their lives, by improving the functional performance of their
cardiovascular and respiratory systems. However, it is very important to
control the workload, e.g. the frequency of body movements, within the
capability of the individual to maximise the efficiency of the exercise. The
workload is generally represented in terms of heart rate (HR) and oxygen
consumption VO2. We focus particularly on the control of HR and VO2 using the
workload of an individual body movement, also known as the exercise rate (ER),
in this research. The first part of this report deals with the modelling and
control of HR during an unknown type of rhythmic exercise. A novel feature of
the developed system is to control HR via manipulating ER as a control input.
The relation between ER and HR is modelled using a simple autoregressive model
with unknown parameters. The parameters of the model are estimated using a
Kalman filter and an indirect adaptive H1 controller is designed. The
performance of the system is tested and validated on six subjects during rowing
and cycling exercise. The results demonstrate that the designed control system
can regulate HR to a predefined profile. The second part of this report deals
with the problem of estimating VO2 during rhythmic exercise, as the direct
measurement of VO2 is not realisable in these environments. Therefore,
non-invasive sensors are used to measure HR, RespR, and ER to estimate VO2. The
developed approach for cycling and rowing exercise predicts the percentage
change in maximum VO2 from the resting to the exercising phases, using a
Hammerstein model. Results show that the average quality of fit in both
exercises is improved as the intensity of exercise is increased.
| [
{
"created": "Sat, 15 Mar 2014 05:53:19 GMT",
"version": "v1"
}
] | 2014-03-28 | [
[
"Baig",
"Dur-e-Zehra",
""
]
] | Physical exercise has significant benefits for humans in improving the health and quality of their lives, by improving the functional performance of their cardiovascular and respiratory systems. However, it is very important to control the workload, e.g. the frequency of body movements, within the capability of the individual to maximise the efficiency of the exercise. The workload is generally represented in terms of heart rate (HR) and oxygen consumption VO2. We focus particularly on the control of HR and VO2 using the workload of an individual body movement, also known as the exercise rate (ER), in this research. The first part of this report deals with the modelling and control of HR during an unknown type of rhythmic exercise. A novel feature of the developed system is to control HR via manipulating ER as a control input. The relation between ER and HR is modelled using a simple autoregressive model with unknown parameters. The parameters of the model are estimated using a Kalman filter and an indirect adaptive H1 controller is designed. The performance of the system is tested and validated on six subjects during rowing and cycling exercise. The results demonstrate that the designed control system can regulate HR to a predefined profile. The second part of this report deals with the problem of estimating VO2 during rhythmic exercise, as the direct measurement of VO2 is not realisable in these environments. Therefore, non-invasive sensors are used to measure HR, RespR, and ER to estimate VO2. The developed approach for cycling and rowing exercise predicts the percentage change in maximum VO2 from the resting to the exercising phases, using a Hammerstein model. Results show that the average quality of fit in both exercises is improved as the intensity of exercise is increased.
1305.5363 | David Golan | David Golan and Saharon Rosset | Narrowing the gap on heritability of common disease by direct estimation
in case-control GWAS | main text: 30 pages, 4 figures. Supplementary text: 35 pages, 16
figures | null | null | null | q-bio.PE q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the major developments in recent years in the search for missing
heritability of human phenotypes is the adoption of linear mixed-effects models
(LMMs) to estimate heritability due to genetic variants which are not
significantly associated with the phenotype. A variant of the LMM approach has
been adapted to case-control studies and applied to many major diseases by Lee
et al. (2011), successfully accounting for a considerable portion of the
missing heritability. For example, for Crohn's disease their estimated
heritability was 22% compared to 50-60% from family studies. In this letter we
propose to estimate heritability of disease directly by regression of phenotype
similarities on genotype correlations, corrected to account for ascertainment.
We refer to this method as genetic correlation regression (GCR). Using GCR we
estimate the heritability of Crohn's disease at 34% using the same data. We
demonstrate through extensive simulation that our method yields unbiased
heritability estimates, which are consistently higher than LMM estimates.
Moreover, we develop a heuristic correction to LMM estimates, which can be
applied to published LMM results. Applying our heuristic correction increases
the estimated heritability of multiple sclerosis from 30% to 52.6%.
| [
{
"created": "Thu, 23 May 2013 09:48:55 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2013 10:46:32 GMT",
"version": "v2"
}
] | 2013-05-28 | [
[
"Golan",
"David",
""
],
[
"Rosset",
"Saharon",
""
]
] | One of the major developments in recent years in the search for missing heritability of human phenotypes is the adoption of linear mixed-effects models (LMMs) to estimate heritability due to genetic variants which are not significantly associated with the phenotype. A variant of the LMM approach has been adapted to case-control studies and applied to many major diseases by Lee et al. (2011), successfully accounting for a considerable portion of the missing heritability. For example, for Crohn's disease their estimated heritability was 22% compared to 50-60% from family studies. In this letter we propose to estimate heritability of disease directly by regression of phenotype similarities on genotype correlations, corrected to account for ascertainment. We refer to this method as genetic correlation regression (GCR). Using GCR we estimate the heritability of Crohn's disease at 34% using the same data. We demonstrate through extensive simulation that our method yields unbiased heritability estimates, which are consistently higher than LMM estimates. Moreover, we develop a heuristic correction to LMM estimates, which can be applied to published LMM results. Applying our heuristic correction increases the estimated heritability of multiple sclerosis from 30% to 52.6%. |
1501.01144 | Tomasz Rutkowski | Katsuhiko Hamada, Hiromu Mori, Hiroyuki Shinoda, and Tomasz M.
Rutkowski | Airborne Ultrasonic Tactile Display BCI | arXiv admin note: substantial text overlap with arXiv:1404.4184 | Brain-Computer Interface Research - A State-of-the-Art Summary 4.
SpringerBriefs in Electrical and Computer Engineering. 2015. p. 57-65 | 10.1007/978-3-319-25190-5_6 | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This chapter presents results of our project, which studied whether
contactless and airborne ultrasonic tactile display (AUTD) stimuli delivered to
a user's palms could serve as a platform for a brain computer interface (BCI)
paradigm. We used six palm positions to evoke combined somatosensory brain
responses to implement a novel contactless tactile BCI. This achievement was
awarded the top prize in the Annual BCI Research Award 2014 competition. This
chapter also presents a comparison with a classical attached vibrotactile
transducer-based BCI paradigm. Experiment results from subjects performing
online experiments validate the novel BCI paradigm.
| [
{
"created": "Tue, 6 Jan 2015 10:58:27 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Jan 2016 06:32:29 GMT",
"version": "v2"
}
] | 2016-01-05 | [
[
"Hamada",
"Katsuhiko",
""
],
[
"Mori",
"Hiromu",
""
],
[
"Shinoda",
"Hiroyuki",
""
],
[
"Rutkowski",
"Tomasz M.",
""
]
] | This chapter presents results of our project, which studied whether contactless and airborne ultrasonic tactile display (AUTD) stimuli delivered to a user's palms could serve as a platform for a brain computer interface (BCI) paradigm. We used six palm positions to evoke combined somatosensory brain responses to implement a novel contactless tactile BCI. This achievement was awarded the top prize in the Annual BCI Research Award 2014 competition. This chapter also presents a comparison with a classical attached vibrotactile transducer-based BCI paradigm. Experiment results from subjects performing online experiments validate the novel BCI paradigm. |
2012.10687 | Nicola Vassena | Nicola Vassena | Sensitivity of metabolic networks: perturbing metabolite concentrations | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensitivity studies the network response to perturbations. We consider local
perturbations of the concentrations of metabolites at an equilibrium. We
investigate the responses in the network, both of the metabolite concentrations
and of the reaction fluxes. Our approach is purely qualitative, rather than
quantitative. In fact, our analysis is based, solely, on the stoichiometry of
the reaction network and we do not require any quantitative information on the
reaction rates. Instead, the description is done only in algebraic terms, and
the only data required is the network structure. Biological applications
include metabolic control and network reconstruction.
| [
{
"created": "Sat, 19 Dec 2020 13:47:06 GMT",
"version": "v1"
}
] | 2020-12-22 | [
[
"Vassena",
"Nicola",
""
]
] | Sensitivity studies the network response to perturbations. We consider local perturbations of the concentrations of metabolites at an equilibrium. We investigate the responses in the network, both of the metabolite concentrations and of the reaction fluxes. Our approach is purely qualitative, rather than quantitative. In fact, our analysis is based, solely, on the stoichiometry of the reaction network and we do not require any quantitative information on the reaction rates. Instead, the description is done only in algebraic terms, and the only data required is the network structure. Biological applications include metabolic control and network reconstruction. |
1010.2842 | Harold P. de Vladar | Harold P. de Vladar and Nick Barton | The statistical mechanics of a polygenic character under stabilizing
selection, mutation and drift | 35 pages, 8 figures | null | 10.1098/rsif.2010.0438 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By exploiting an analogy between population genetics and statistical
mechanics, we study the evolution of a polygenic trait under stabilizing
selection, mutation, and genetic drift. This requires us to track only four
macroscopic variables, instead of the distribution of all the allele
frequencies that influence the trait. These macroscopic variables are the
expectations of: the trait mean and its square, the genetic variance, and of a
measure of heterozygosity, and are derived from a generating function that is
in turn derived by maximizing an entropy measure. These four macroscopics are
enough to accurately describe the dynamics of the trait mean and of its genetic
variance (and in principle of any other quantity). Unlike previous approaches
that were based on an infinite series of moments or cumulants, which had to be
truncated arbitrarily, our calculations provide a well-defined approximation
procedure. We apply the framework to abrupt and gradual changes in the optimum,
as well as to changes in the strength of stabilizing selection. Our
approximations are surprisingly accurate, even for systems with as few as 5
loci. We find that when the effects of drift are included, the expected genetic
variance is hardly altered by directional selection, even though it fluctuates
in any particular instance. We also find hysteresis, showing that even after
averaging over the microscopic variables, the macroscopic trajectories retain a
memory of the underlying genetic states.
| [
{
"created": "Thu, 14 Oct 2010 07:48:44 GMT",
"version": "v1"
}
] | 2010-11-18 | [
[
"de Vladar",
"Harold P.",
""
],
[
"Barton",
"Nick",
""
]
] | By exploiting an analogy between population genetics and statistical mechanics, we study the evolution of a polygenic trait under stabilizing selection, mutation, and genetic drift. This requires us to track only four macroscopic variables, instead of the distribution of all the allele frequencies that influence the trait. These macroscopic variables are the expectations of: the trait mean and its square, the genetic variance, and of a measure of heterozygosity, and are derived from a generating function that is in turn derived by maximizing an entropy measure. These four macroscopics are enough to accurately describe the dynamics of the trait mean and of its genetic variance (and in principle of any other quantity). Unlike previous approaches that were based on an infinite series of moments or cumulants, which had to be truncated arbitrarily, our calculations provide a well-defined approximation procedure. We apply the framework to abrupt and gradual changes in the optimum, as well as to changes in the strength of stabilizing selection. Our approximations are surprisingly accurate, even for systems with as few as 5 loci. We find that when the effects of drift are included, the expected genetic variance is hardly altered by directional selection, even though it fluctuates in any particular instance. We also find hysteresis, showing that even after averaging over the microscopic variables, the macroscopic trajectories retain a memory of the underlying genetic states. |
0902.0622 | Michael B\"orsch | A. Kovalev, N. Zarrabi, F. Werz, M. Boersch, Z. Ristic, H. Lill, D.
Bald, C. Tietz, J. Wrachtrup | Hidden Semi-Markov Models for Single-Molecule Conformational Dynamics | 23 pages, 11 figures | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The conformational kinetics of enzymes can be reliably revealed when they are
governed by Markovian dynamics. Hidden Markov Models (HMMs) are appropriate
especially in the case of conformational states that are hardly
distinguishable. However, the evolution of the conformational states of
proteins mostly shows non-Markovian behavior, recognizable by
non-monoexponential state dwell time histograms. The application of a Hidden
Markov Model technique to a cyclic system demonstrating semi-Markovian dynamics
is presented in this paper and the required extension of the model design is
discussed. As standard ranking criteria of models cannot deal with these
systems properly, a new approach is proposed considering the shape of the dwell
time histograms. We observed the rotational kinetics of a single F1-ATPase
alpha3beta3gamma sub-complex over six orders of magnitude of different ATP to
ADP and Pi concentration ratios, and established a general model describing the
kinetics for the entire range of concentrations. The HMM extension described
here is applicable in general to the accurate analysis of protein dynamics.
| [
{
"created": "Tue, 3 Feb 2009 21:21:51 GMT",
"version": "v1"
}
] | 2009-02-05 | [
[
"Kovalev",
"A.",
""
],
[
"Zarrabi",
"N.",
""
],
[
"Werz",
"F.",
""
],
[
"Boersch",
"M.",
""
],
[
"Ristic",
"Z.",
""
],
[
"Lill",
"H.",
""
],
[
"Bald",
"D.",
""
],
[
"Tietz",
"C.",
""
],
[
"Wrachtrup",
"J.",
""
]
] | The conformational kinetics of enzymes can be reliably revealed when they are governed by Markovian dynamics. Hidden Markov Models (HMMs) are appropriate especially in the case of conformational states that are hardly distinguishable. However, the evolution of the conformational states of proteins mostly shows non-Markovian behavior, recognizable by non-monoexponential state dwell time histograms. The application of a Hidden Markov Model technique to a cyclic system demonstrating semi-Markovian dynamics is presented in this paper and the required extension of the model design is discussed. As standard ranking criteria of models cannot deal with these systems properly, a new approach is proposed considering the shape of the dwell time histograms. We observed the rotational kinetics of a single F1-ATPase alpha3beta3gamma sub-complex over six orders of magnitude of different ATP to ADP and Pi concentration ratios, and established a general model describing the kinetics for the entire range of concentrations. The HMM extension described here is applicable in general to the accurate analysis of protein dynamics. |
1712.02150 | Abhishek Dey | Abhishek Dey, Kushal Chakrabarti, Krishan Kumar Gola and Shaunak Sen | A Kalman Filter Approach for Biomolecular Systems with Noise Covariance
Updating | 23 pages, 9 figures | null | null | null | q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important part of system modeling is determining parameter values,
particularly for biomolecular systems, where direct measurements of individual
parameters are typically hard. While Extended Kalman Filters have been used for
this purpose, the choice of the process noise covariance is generally unclear.
In this chapter, we address this issue for biomolecular systems using a
combination of Monte Carlo simulations and experimental data, exploiting the
dependence of the process noise covariance on the states and parameters, as
given in the Langevin framework. We adapt a Hybrid Extended Kalman Filtering
technique by updating the process noise covariance at each time step based on
estimates. We compare the performance of this framework with different fixed
values of process noise covariance in biomolecular system models, including an
oscillator model, as well as in experimentally measured data for a negative
transcriptional feedback circuit. We find that the Extended Kalman Filter with
such process noise covariance update is closer to the optimality condition,
in the sense that the innovation sequence becomes white, and achieves a
balance between the mean square estimation error and the parameter
convergence time. The
results of this chapter may help in the use of Extended Kalman Filters for
systems where process noise covariance depends on states and/or parameters.
| [
{
"created": "Wed, 6 Dec 2017 12:09:42 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Apr 2018 10:12:15 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Nov 2018 15:34:11 GMT",
"version": "v3"
}
] | 2018-11-13 | [
[
"Dey",
"Abhishek",
""
],
[
"Chakrabarti",
"Kushal",
""
],
[
"Gola",
"Krishan Kumar",
""
],
[
"Sen",
"Shaunak",
""
]
] | An important part of system modeling is determining parameter values, particularly for biomolecular systems, where direct measurements of individual parameters are typically hard. While Extended Kalman Filters have been used for this purpose, the choice of the process noise covariance is generally unclear. In this chapter, we address this issue for biomolecular systems using a combination of Monte Carlo simulations and experimental data, exploiting the dependence of the process noise covariance on the states and parameters, as given in the Langevin framework. We adapt a Hybrid Extended Kalman Filtering technique by updating the process noise covariance at each time step based on estimates. We compare the performance of this framework with different fixed values of process noise covariance in biomolecular system models, including an oscillator model, as well as in experimentally measured data for a negative transcriptional feedback circuit. We find that the Extended Kalman Filter with such process noise covariance update is closer to the optimality condition, in the sense that the innovation sequence becomes white, and achieves a balance between the mean square estimation error and the parameter convergence time. The results of this chapter may help in the use of Extended Kalman Filters for systems where process noise covariance depends on states and/or parameters.
1408.0907 | Alfredo Braunstein | Fabrizio Altarelli, Alfredo Braunstein, Luca Dall'Asta, Alessandro
Ingrosso, Riccardo Zecchina | The zero-patient problem with noisy observations | null | J. Stat. Mech. (2014) P10016 | 10.1088/1742-5468/2014/10/P10016 | null | q-bio.PE cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Belief Propagation approach has been recently proposed for the zero-patient
problem in SIR epidemics. The zero-patient problem consists in finding the
initial source of an epidemic outbreak given observations at a later time. In
this work, we study a harder but related inference problem, in which
observations are noisy and there is confusion between observed states. In
addition to studying the zero-patient problem, we also tackle the problem of
completing and correcting the observations, possibly finding undiscovered
infected individuals and false test results. Moreover, we devise a set of
equations, based on the variational expression of the Bethe free energy, to
find the zero patient along with maximum-likelihood epidemic parameters. We
show, by means of simulated epidemics, how this method is able to infer details
on the past history of an epidemic outbreak based solely on the topology of the
contact network and a single snapshot of partial and noisy observations.
| [
{
"created": "Tue, 5 Aug 2014 09:47:29 GMT",
"version": "v1"
}
] | 2015-07-10 | [
[
"Altarelli",
"Fabrizio",
""
],
[
"Braunstein",
"Alfredo",
""
],
[
"Dall'Asta",
"Luca",
""
],
[
"Ingrosso",
"Alessandro",
""
],
[
"Zecchina",
"Riccardo",
""
]
] | A Belief Propagation approach has been recently proposed for the zero-patient problem in SIR epidemics. The zero-patient problem consists in finding the initial source of an epidemic outbreak given observations at a later time. In this work, we study a harder but related inference problem, in which observations are noisy and there is confusion between observed states. In addition to studying the zero-patient problem, we also tackle the problem of completing and correcting the observations, possibly finding undiscovered infected individuals and false test results. Moreover, we devise a set of equations, based on the variational expression of the Bethe free energy, to find the zero patient along with maximum-likelihood epidemic parameters. We show, by means of simulated epidemics, how this method is able to infer details on the past history of an epidemic outbreak based solely on the topology of the contact network and a single snapshot of partial and noisy observations.
1312.3260 | James Lindesay | James Lindesay, Tshela E. Mason, William Hercules, and Georgia M.
Dunston | The Foundations of Genodynamics: The Development of Metrics for
Genomic-Environmental Interactions | 22 pages, 14 figures | null | null | null | q-bio.GN physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single nucleotide polymorphisms (SNPs) represent an important type of dynamic
sites within the human genome. These common variants often locally correlate
into more complex multi-SNP haploblocks that are maintained throughout
generations in a stable population. The information encoded in the structure of
common SNPs and SNP haploblock variation can be characterized through a
normalized information content (NIC) metric. Such an intrinsic measure allows
disparate regions of individual genomes and the genomes of various populations
to be quantitatively compared in a meaningful way.
Using our defined measures of genomic information, the interplay of
maintained statistical variations due to the environmental baths within which
stable populations exist can be interrogated. We develop the analogous
"thermodynamics" characterizing the state variables for genomic populations
that are stable under stochastic environmental stresses. Since living systems
have not been found to develop in the absence of environmental influences, we
focus on describing the analogous genomic free energy measures in this
development.
The intensive parameter describing how an environment drives genomic
diversity is found to depend inversely upon the NIC of the genome of a stable
population within that environment. Once this environmental potential has been
determined from the whole genome of a population, additive state variables can
be directly related to the probabilities of the occurrence of given viable SNP
based units (alleles) within that genome. This formulation allows the
determination of both population averaged state variables as well as the
genomic energies of individual alleles and their combinations. The
determination of individual allelic potentials then should allow the
parameterization of specific environmental influences upon shared alleles
across populations in varying environments.
| [
{
"created": "Wed, 11 Dec 2013 17:51:56 GMT",
"version": "v1"
}
] | 2013-12-12 | [
[
"Lindesay",
"James",
""
],
[
"Mason",
"Tshela E.",
""
],
[
"Hercules",
"William",
""
],
[
"Dunston",
"Georgia M.",
""
]
] | Single nucleotide polymorphisms (SNPs) represent an important type of dynamic sites within the human genome. These common variants often locally correlate into more complex multi-SNP haploblocks that are maintained throughout generations in a stable population. The information encoded in the structure of common SNPs and SNP haploblock variation can be characterized through a normalized information content (NIC) metric. Such an intrinsic measure allows disparate regions of individual genomes and the genomes of various populations to be quantitatively compared in a meaningful way. Using our defined measures of genomic information, the interplay of maintained statistical variations due to the environmental baths within which stable populations exist can be interrogated. We develop the analogous "thermodynamics" characterizing the state variables for genomic populations that are stable under stochastic environmental stresses. Since living systems have not been found to develop in the absence of environmental influences, we focus on describing the analogous genomic free energy measures in this development. The intensive parameter describing how an environment drives genomic diversity is found to depend inversely upon the NIC of the genome of a stable population within that environment. Once this environmental potential has been determined from the whole genome of a population, additive state variables can be directly related to the probabilities of the occurrence of given viable SNP based units (alleles) within that genome. This formulation allows the determination of both population averaged state variables as well as the genomic energies of individual alleles and their combinations. The determination of individual allelic potentials then should allow the parameterization of specific environmental influences upon shared alleles across populations in varying environments. |
2001.02436 | Joel Miller | Joel C. Miller and Tony Ting | EoN (Epidemics on Networks): a fast, flexible Python package for
simulation, analytic approximation, and analysis of epidemics on networks | This paper is available in a shorter form without examples at
https://joss.theoj.org/papers/10.21105/joss.01731.pdf | Journal of Open Source Software, 4(44), 1731 (2019) | 10.21105/joss.01731 | null | q-bio.QM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a description of the Epidemics on Networks (EoN) Python package
designed for studying disease spread in static networks. The package consists
of over $100$ methods available for users to perform stochastic simulation of a
range of different processes including SIS and SIR disease, and generic simple
or complex contagions.
| [
{
"created": "Wed, 8 Jan 2020 10:15:47 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Jan 2020 15:11:17 GMT",
"version": "v2"
}
] | 2020-01-22 | [
[
"Miller",
"Joel C.",
""
],
[
"Ting",
"Tony",
""
]
] | We provide a description of the Epidemics on Networks (EoN) Python package designed for studying disease spread in static networks. The package consists of over $100$ methods available for users to perform stochastic simulation of a range of different processes including SIS and SIR disease, and generic simple or complex contagions.
2403.01702 | Wenjia Shi | Wenjia Shi, Yao Ma, Peilin Hu, Mi Pang, Xiaona Huang, Yiting Dang,
Yuxin Xie, Danni Wu | Hill Function-based Model of Transcriptional Response: Impact of
Nonspecific Binding and RNAP Interactions | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Hill function is one of the most widely used gene transcription
regulation models. Because it is a fitted form, it may lack an underlying
physical picture, yet its fitting parameters can provide information about
biochemical reactions, such as the number of transcription factors (TFs) and
the binding energy between regulatory elements. However, it remains unclear
when, and how much, biochemical information the Hill function can provide
beyond the fit itself. Here, starting from the interactions between TFs and
RNA polymerase during transcription regulation, and from the
association-dissociation reactions of both at specific and nonspecific sites
on DNA, we deduced the regulatory effect of TFs as a fold change. We found
that, for a weak promoter, the fold change reduces to the regulatory factor
(Freg), which is closely correlated with the Hill function. By directly
comparing and fitting with the Hill function, we analyzed and discussed the
fitting parameters and the corresponding biochemical reaction parameters in
Freg, considering a single TF as well as multiple TFs with cooperativity and
basic logic effects. We concluded that the strength of the promoter and the
interactions between TFs determine whether the Hill function can reflect the
corresponding biochemical information. Our findings highlight the role of the
Hill function in modeling and fitting transcriptional regulation, which also
benefits the preparation of synthetic regulatory elements.
| [
{
"created": "Mon, 4 Mar 2024 03:32:56 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Shi",
"Wenjia",
""
],
[
"Ma",
"Yao",
""
],
[
"Hu",
"Peilin",
""
],
[
"Pang",
"Mi",
""
],
[
"Huang",
"Xiaona",
""
],
[
"Dang",
"Yiting",
""
],
[
"Xie",
"Yuxin",
""
],
[
"Wu",
"Danni",
""
]
] | The Hill function is one of the most widely used gene transcription regulation models. Because it is a fitted form, it may lack an underlying physical picture, yet its fitting parameters can provide information about biochemical reactions, such as the number of transcription factors (TFs) and the binding energy between regulatory elements. However, it remains unclear when, and how much, biochemical information the Hill function can provide beyond the fit itself. Here, starting from the interactions between TFs and RNA polymerase during transcription regulation, and from the association-dissociation reactions of both at specific and nonspecific sites on DNA, we deduced the regulatory effect of TFs as a fold change. We found that, for a weak promoter, the fold change reduces to the regulatory factor (Freg), which is closely correlated with the Hill function. By directly comparing and fitting with the Hill function, we analyzed and discussed the fitting parameters and the corresponding biochemical reaction parameters in Freg, considering a single TF as well as multiple TFs with cooperativity and basic logic effects. We concluded that the strength of the promoter and the interactions between TFs determine whether the Hill function can reflect the corresponding biochemical information. Our findings highlight the role of the Hill function in modeling and fitting transcriptional regulation, which also benefits the preparation of synthetic regulatory elements.
2402.11950 | Tadahaya Mizuno | Yasuhiro Yoshikai, Tadahaya Mizuno, Shumpei Nemoto, Hiroyuki Kusuhara | A novel molecule generative model of VAE combined with Transformer for
unseen structure generation | 23 pages, 9 figures | null | null | null | q-bio.BM cs.LG physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Recently, molecule generation using deep learning has been actively
investigated in drug discovery. In this field, Transformer and VAE are widely
used as powerful models, but they are rarely combined because of the
structural and performance mismatches between them. This study proposes a
model that combines the two through structural and parameter optimization for
handling diverse molecules. The proposed model shows performance comparable
to existing models in generating molecules, and far superior performance in
generating molecules with unseen structures. Another advantage of this VAE
model is that it generates molecules from a latent representation, so
molecular properties can be easily predicted or conditioned on it; indeed, we
show that the latent representation of the model successfully predicts
molecular properties. An ablation study suggested the advantage of the VAE
over other generative models, such as language models, in generating novel
molecules. It also indicated that the latent representation can be shortened
to ~32-dimensional variables without loss of reconstruction, suggesting the
possibility of a much smaller molecular descriptor or model than existing
ones. This study is expected to provide a virtual chemical library containing
a wide variety of compounds for virtual screening and to enable efficient
screening.
| [
{
"created": "Mon, 19 Feb 2024 08:46:04 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 08:51:55 GMT",
"version": "v2"
}
] | 2024-04-08 | [
[
"Yoshikai",
"Yasuhiro",
""
],
[
"Mizuno",
"Tadahaya",
""
],
[
"Nemoto",
"Shumpei",
""
],
[
"Kusuhara",
"Hiroyuki",
""
]
] | Recently, molecule generation using deep learning has been actively investigated in drug discovery. In this field, Transformer and VAE are widely used as powerful models, but they are rarely combined because of the structural and performance mismatches between them. This study proposes a model that combines the two through structural and parameter optimization for handling diverse molecules. The proposed model shows performance comparable to existing models in generating molecules, and far superior performance in generating molecules with unseen structures. Another advantage of this VAE model is that it generates molecules from a latent representation, so molecular properties can be easily predicted or conditioned on it; indeed, we show that the latent representation of the model successfully predicts molecular properties. An ablation study suggested the advantage of the VAE over other generative models, such as language models, in generating novel molecules. It also indicated that the latent representation can be shortened to ~32-dimensional variables without loss of reconstruction, suggesting the possibility of a much smaller molecular descriptor or model than existing ones. This study is expected to provide a virtual chemical library containing a wide variety of compounds for virtual screening and to enable efficient screening.
1405.5559 | Stanley Lazic | Stanley E. Lazic, Johannes Fuss, Peter Gass | Quantifying the behavioural relevance of hippocampal neurogenesis | To be published in PLoS ONE | null | 10.1371/journal.pone.0113855 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few studies that examine the neurogenesis--behaviour relationship formally
establish covariation between neurogenesis and behaviour or rule out competing
explanations. The behavioural relevance of neurogenesis might therefore be
overestimated if other mechanisms account for some, or even all, of the
experimental effects. A systematic review of the literature was conducted and
the data reanalysed using causal mediation analysis, which can estimate the
behavioural contribution of new hippocampal neurons separately from other
mechanisms that might be operating. Results from eleven eligible individual
studies were then combined in a meta-analysis to increase precision
(representing data from 215 animals) and showed that neurogenesis made a
negligible contribution to behaviour (standardised effect = 0.15; 95% CI = -0.04
to 0.34; p = 0.128); other mechanisms accounted for the majority of
experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p =
1.7 $\times 10^{-11}$).
| [
{
"created": "Wed, 21 May 2014 21:21:02 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Nov 2014 18:35:29 GMT",
"version": "v2"
}
] | 2015-06-19 | [
[
"Lazic",
"Stanley E.",
""
],
[
"Fuss",
"Johannes",
""
],
[
"Gass",
"Peter",
""
]
] | Few studies that examine the neurogenesis--behaviour relationship formally establish covariation between neurogenesis and behaviour or rule out competing explanations. The behavioural relevance of neurogenesis might therefore be overestimated if other mechanisms account for some, or even all, of the experimental effects. A systematic review of the literature was conducted and the data reanalysed using causal mediation analysis, which can estimate the behavioural contribution of new hippocampal neurons separately from other mechanisms that might be operating. Results from eleven eligible individual studies were then combined in a meta-analysis to increase precision (representing data from 215 animals) and showed that neurogenesis made a negligible contribution to behaviour (standardised effect = 0.15; 95% CI = -0.04 to 0.34; p = 0.128); other mechanisms accounted for the majority of experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p = 1.7 $\times 10^{-11}$).
1009.2095 | Benjamin Machta | Benjamin B. Machta, Stefanos Papanikolaou, James P. Sethna, Sarah L.
Veatch | A minimal model of plasma membrane heterogeneity requires coupling
cortical actin to criticality | 18 pages, 5 figures and 12 page Supplement (2 additional figures) | Biophysical Journal, Volume 100, Issue 7, 1668-1677, 2011 | 10.1016/j.bpj.2011.02.029 | null | q-bio.SC cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a minimal model of plasma membrane heterogeneity that combines
criticality with connectivity to cortical cytoskeleton. Our model is motivated
by recent observations of micron-sized critical fluctuations in plasma membrane
vesicles that are detached from their cortical cytoskeleton. We incorporate
criticality using a conserved order parameter Ising model coupled to a simple
actin cytoskeleton interacting through point-like pinning sites. Using this
minimal model, we recapitulate several experimental observations of plasma
membrane raft heterogeneity. Small (r~20nm) and dynamic fluctuations at
physiological temperatures arise from criticality. Including connectivity to
cortical cytoskeleton disrupts large fluctuations, prevents macroscopic phase
separation at low temperatures (T<=22{\deg}C), and provides a template for long
lived fluctuations at physiological temperature (T=37{\deg}C).
Cytoskeleton-stabilized fluctuations produce significant barriers to the
diffusion of some membrane components in a manner that is weakly dependent on
the number of pinning sites and strongly dependent on criticality. More
generally, we demonstrate that critical fluctuations provide a physical
mechanism to organize and spatially segregate membrane components by providing
channels for interaction over large distances.
| [
{
"created": "Fri, 10 Sep 2010 20:11:34 GMT",
"version": "v1"
}
] | 2012-03-13 | [
[
"Machta",
"Benjamin B.",
""
],
[
"Papanikolaou",
"Stefanos",
""
],
[
"Sethna",
"James P.",
""
],
[
"Veatch",
"Sarah L.",
""
]
] | We present a minimal model of plasma membrane heterogeneity that combines criticality with connectivity to cortical cytoskeleton. Our model is motivated by recent observations of micron-sized critical fluctuations in plasma membrane vesicles that are detached from their cortical cytoskeleton. We incorporate criticality using a conserved order parameter Ising model coupled to a simple actin cytoskeleton interacting through point-like pinning sites. Using this minimal model, we recapitulate several experimental observations of plasma membrane raft heterogeneity. Small (r~20nm) and dynamic fluctuations at physiological temperatures arise from criticality. Including connectivity to cortical cytoskeleton disrupts large fluctuations, prevents macroscopic phase separation at low temperatures (T<=22{\deg}C), and provides a template for long lived fluctuations at physiological temperature (T=37{\deg}C). Cytoskeleton-stabilized fluctuations produce significant barriers to the diffusion of some membrane components in a manner that is weakly dependent on the number of pinning sites and strongly dependent on criticality. More generally, we demonstrate that critical fluctuations provide a physical mechanism to organize and spatially segregate membrane components by providing channels for interaction over large distances. |
1510.01935 | Erik Yusko | Erik C. Yusko, Brandon R. Bruhn, Olivia Eggenberger, Jared
Houghtaling, Ryan C. Rollings, Nathan C. Walsh, Santoshi Nandivada, Mariya
Pindrus, Adam R. Hall, David Sept, Jiali Li, Devendra S. Kalonia, Michael
Mayer | Real-time shape approximation and 5-D fingerprinting of single proteins | null | null | 10.1038/nnano.2016.267 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work exploits the zeptoliter sensing volume of electrolyte-filled
nanopores to determine, simultaneously and in real time, the approximate shape,
volume, charge, rotational diffusion coefficient, and dipole moment of
individual proteins. We have developed the theory for a quantitative
understanding and analysis of modulations in ionic current that arise from
rotational dynamics of single proteins as they move through the electric field
inside a nanopore. The resulting multi-parametric information raises the
possibility to characterize, identify, and quantify individual proteins and
protein complexes in a mixture. This approach interrogates single proteins in
solution and determines parameters such as the approximate shape and dipole
moment, which are excellent protein descriptors and cannot be obtained
otherwise from single protein molecules in solution. Taken together, this
five-dimensional characterization of biomolecules at the single particle level
has the potential for instantaneous protein identification, quantification, and
possibly sorting with implications for structural biology, proteomics,
biomarker detection, and routine protein analysis.
| [
{
"created": "Sat, 29 Aug 2015 20:04:10 GMT",
"version": "v1"
}
] | 2017-04-26 | [
[
"Yusko",
"Erik C.",
""
],
[
"Bruhn",
"Brandon R.",
""
],
[
"Eggenberger",
"Olivia",
""
],
[
"Houghtaling",
"Jared",
""
],
[
"Rollings",
"Ryan C.",
""
],
[
"Walsh",
"Nathan C.",
""
],
[
"Nandivada",
"Santoshi",
""
],
[
"Pindrus",
"Mariya",
""
],
[
"Hall",
"Adam R.",
""
],
[
"Sept",
"David",
""
],
[
"Li",
"Jiali",
""
],
[
"Kalonia",
"Devendra S.",
""
],
[
"Mayer",
"Michael",
""
]
] | This work exploits the zeptoliter sensing volume of electrolyte-filled nanopores to determine, simultaneously and in real time, the approximate shape, volume, charge, rotational diffusion coefficient, and dipole moment of individual proteins. We have developed the theory for a quantitative understanding and analysis of modulations in ionic current that arise from rotational dynamics of single proteins as they move through the electric field inside a nanopore. The resulting multi-parametric information raises the possibility to characterize, identify, and quantify individual proteins and protein complexes in a mixture. This approach interrogates single proteins in solution and determines parameters such as the approximate shape and dipole moment, which are excellent protein descriptors and cannot be obtained otherwise from single protein molecules in solution. Taken together, this five-dimensional characterization of biomolecules at the single particle level has the potential for instantaneous protein identification, quantification, and possibly sorting with implications for structural biology, proteomics, biomarker detection, and routine protein analysis. |
1602.05092 | Andrea Barreiro | Andrea K. Barreiro and J. Nathan Kutz and Eli Shlizerman | Symmetries constrain dynamics in a family of balanced neural networks | In review; submitted 1/21/2016 Revision submitted 9/24/16 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine a family of random firing-rate neural networks in which we enforce
the neurobiological constraint of Dale's Law --- each neuron makes either
excitatory or inhibitory connections onto its post-synaptic targets. We find
that this constrained system may be described as a perturbation from a system
with non-trivial symmetries. We analyze the symmetric system using the tools of
equivariant bifurcation theory, and demonstrate that the symmetry-implied
structures remain evident in the perturbed system. In comparison, spectral
characteristics of the network coupling matrix are relatively uninformative
about the behavior of the constrained system.
| [
{
"created": "Tue, 16 Feb 2016 16:50:15 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Sep 2016 22:06:12 GMT",
"version": "v2"
}
] | 2016-09-27 | [
[
"Barreiro",
"Andrea K.",
""
],
[
"Kutz",
"J. Nathan",
""
],
[
"Shlizerman",
"Eli",
""
]
] | We examine a family of random firing-rate neural networks in which we enforce the neurobiological constraint of Dale's Law --- each neuron makes either excitatory or inhibitory connections onto its post-synaptic targets. We find that this constrained system may be described as a perturbation from a system with non-trivial symmetries. We analyze the symmetric system using the tools of equivariant bifurcation theory, and demonstrate that the symmetry-implied structures remain evident in the perturbed system. In comparison, spectral characteristics of the network coupling matrix are relatively uninformative about the behavior of the constrained system. |
q-bio/0611053 | N. Jacq | L.-M. Birkholtz, O. Bastien (DRDC), G. Wells, D. Grando (DRDC), F.
Joubert, V. Kasam (LPC-Clermont), M. Zimmermann, P. Ortet (DEVM), N. Jacq
(LPC-Clermont), S. Roy (DRDC/Bim), M. Hoffmann-Apitius, V. Breton
(LPC-Clermont), A. I. Louw, E. Mar\'echal (DRDC) | Integration and mining of malaria molecular, functional and
pharmacological data: how far are we from a chemogenomic knowledge space? | 43 pages, 4 figures, to appear in Malaria Journal | Malaria Journal 5 (2006) 1-24 | 10.1186/1475-2875-5-110 | null | q-bio.QM cs.DC q-bio.GN | null | The organization and mining of malaria genomic and post-genomic data is
highly motivated by the necessity to predict and characterize new biological
targets and new drugs. Biological targets are sought in a biological space
designed from the genomic data from Plasmodium falciparum, but also using the
millions of genomic data from other species. Drug candidates are sought in a
chemical space containing the millions of small molecules stored in public and
private chemolibraries. Data management should therefore be as reliable and
versatile as possible. In this context, we examined five aspects of the
organization and mining of malaria genomic and post-genomic data: 1) the
comparison of protein sequences including compositionally atypical malaria
sequences, 2) the high throughput reconstruction of molecular phylogenies, 3)
the representation of biological processes particularly metabolic pathways, 4)
the versatile methods to integrate genomic data, biological representations and
functional profiling obtained from X-omic experiments after drug treatments and
5) the determination and prediction of protein structures and their molecular
docking with drug candidate structures. Progress toward a grid-enabled
chemogenomic knowledge space is discussed.
| [
{
"created": "Fri, 17 Nov 2006 10:24:23 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Birkholtz",
"L. -M.",
"",
"DRDC"
],
[
"Bastien",
"O.",
"",
"DRDC"
],
[
"Wells",
"G.",
"",
"DRDC"
],
[
"Grando",
"D.",
"",
"DRDC"
],
[
"Joubert",
"F.",
"",
"LPC-Clermont"
],
[
"Kasam",
"V.",
"",
"LPC-Clermont"
],
[
"Zimmermann",
"M.",
"",
"DEVM"
],
[
"Ortet",
"P.",
"",
"DEVM"
],
[
"Jacq",
"N.",
"",
"LPC-Clermont"
],
[
"Roy",
"S.",
"",
"DRDC/Bim"
],
[
"Hoffmann-Apitius",
"M.",
"",
"LPC-Clermont"
],
[
"Breton",
"V.",
"",
"LPC-Clermont"
],
[
"Louw",
"A. I.",
"",
"DRDC"
],
[
"Maréchal",
"E.",
"",
"DRDC"
]
] | The organization and mining of malaria genomic and post-genomic data is highly motivated by the necessity to predict and characterize new biological targets and new drugs. Biological targets are sought in a biological space designed from the genomic data from Plasmodium falciparum, but using also the millions of genomic data from other species. Drug candidates are sought in a chemical space containing the millions of small molecules stored in public and private chemolibraries. Data management should therefore be as reliable and versatile as possible. In this context, we examined five aspects of the organization and mining of malaria genomic and post-genomic data: 1) the comparison of protein sequences including compositionally atypical malaria sequences, 2) the high throughput reconstruction of molecular phylogenies, 3) the representation of biological processes particularly metabolic pathways, 4) the versatile methods to integrate genomic data, biological representations and functional profiling obtained from X-omic experiments after drug treatments and 5) the determination and prediction of protein structures and their molecular docking with drug candidate structures. Progresses toward a grid-enabled chemogenomic knowledge space are discussed. |
q-bio/0601050 | Hiro-Sato Niwa | Hiro-Sato Niwa | Exploitation dynamics of fish stocks | 27 pages, 19 figures | Ecological Informatics 1 (2006) 87-99 | 10.1016/j.ecoinf.2005.10.002 | null | q-bio.PE cond-mat.stat-mech | null | I address the question of the fluctuations in fishery landings. Using the
fishery statistics time-series collected by the Food and Agriculture
Organization of the United Nations since the early 1950s, I here analyze
fishing activities and find two scaling features of capture fisheries
production: (i) the standard deviation of growth rate of the domestically
landed catches decays as a power-law function of country landings with an
exponent of value 0.15; (ii) the average number of fishers in a country scales
to the 0.7 power of country landings. I show how these socio-ecological
patterns may be related, yielding a scaling relation between these exponents.
The predicted scaling relation implies that the width of the annual per capita
growth-rate distribution scales to the 0.2 power of country landings, i.e.
annual fluctuations in per capita landed catches increase with increased per
capita catches in highly producing countries. Besides the scaling behavior, I
report that fluctuations in the annual domestic landings have increased in the
last 30 years, while the mean of the annual growth rate declined significantly
after 1972.
| [
{
"created": "Tue, 31 Jan 2006 02:17:47 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Niwa",
"Hiro-Sato",
""
]
] | I address the question of the fluctuations in fishery landings. Using the fishery statistics time-series collected by the Food and Agriculture Organization of the United Nations since the early 1950s, I here analyze fishing activities and find two scaling features of capture fisheries production: (i) the standard deviation of growth rate of the domestically landed catches decays as a power-law function of country landings with an exponent of value 0.15; (ii) the average number of fishers in a country scales to the 0.7 power of country landings. I show how these socio-ecological patterns may be related, yielding a scaling relation between these exponents. The predicted scaling relation implies that the width of the annual per capita growth-rate distribution scales to the 0.2 power of country landings, i.e. annual fluctuations in per capita landed catches increase with increased per capita catches in highly producing countries. Beside the scaling behavior, I report that fluctuations in the annual domestic landings have increased in the last 30 years, while the mean of the annual growth rate declined significantly after 1972. |
1302.3832 | Silvio Ferreira | Leticia R Paiva, Hallan S Silva, Silvio C Ferreira and Marcelo L
Martins | Multiscale model for the effects of adaptive immunity suppression on the
viral therapy of cancer | 14 pages, 4 figures | null | 10.1088/1478-3975/10/2/025005 | null | q-bio.TO physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oncolytic virotherapy - the use of viruses that specifically kill tumor cells
- is an innovative and highly promising route for treating cancer. However, its
therapeutic outcomes are mainly impaired by the host immune response to the
viral infection. In the present work, we propose a multiscale mathematical
model to study how the immune response interferes with the viral oncolytic
activity. The model assumes that cytotoxic T cells can induce apoptosis in
infected cancer cells and that free viruses can be inactivated by neutralizing
antibodies or cleared at a constant rate by the innate immune response. Our
simulations suggest that reprogramming the immune microenvironment in tumors
could substantially enhance the oncolytic virotherapy in immune-competent
hosts. Viable routes to such reprogramming are either in situ virus-mediated
impairing of CD$8^+$ T cells motility or blockade of B and T lymphocytes
recruitment. Our theoretical results can shed light on the design of viral
vectors or new protocols with neat potential impacts on the clinical practice.
| [
{
"created": "Fri, 15 Feb 2013 18:20:51 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Paiva",
"Leticia R",
""
],
[
"Silva",
"Hallan S",
""
],
[
"Ferreira",
"Silvio C",
""
],
[
"Martins",
"Marcelo L",
""
]
] | Oncolytic virotherapy - the use of viruses that specifically kill tumor cells - is an innovative and highly promising route for treating cancer. However, its therapeutic outcomes are mainly impaired by the host immune response to the viral infection. In the present work, we propose a multiscale mathematical model to study how the immune response interferes with the viral oncolytic activity. The model assumes that cytotoxic T cells can induce apoptosis in infected cancer cells and that free viruses can be inactivated by neutralizing antibodies or cleared at a constant rate by the innate immune response. Our simulations suggest that reprogramming the immune microenvironment in tumors could substantially enhance the oncolytic virotherapy in immune-competent hosts. Viable routes to such reprogramming are either in situ virus-mediated impairing of CD$8^+$ T cells motility or blockade of B and T lymphocytes recruitment. Our theoretical results can shed light on the design of viral vectors or new protocols with neat potential impacts on the clinical practice. |
1701.07138 | Wiktor Mlynarski | Wiktor M{\l}ynarski, Josh H. McDermott | Learning Mid-Level Auditory Codes from Natural Sound Statistics | 38 pages, 12 figures | null | null | null | q-bio.NC cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interaction with the world requires an organism to transform sensory signals
into representations in which behaviorally meaningful properties of the
environment are made explicit. These representations are derived through
cascades of neuronal processing stages in which neurons at each stage recode
the output of preceding stages. Explanations of sensory coding may thus involve
understanding how low-level patterns are combined into more complex structures.
Although models exist in the visual domain to explain how mid-level features
such as junctions and curves might be derived from oriented filters in early
visual cortex, little is known about analogous grouping principles for
mid-level auditory representations. We propose a hierarchical generative model
of natural sounds that learns combinations of spectrotemporal features from
natural stimulus statistics. In the first layer the model forms a sparse
convolutional code of spectrograms using a dictionary of learned
spectrotemporal kernels. To generalize from specific kernel activation
patterns, the second layer encodes patterns of time-varying magnitude of
multiple first layer coefficients. Because second-layer features are sensitive
to combinations of spectrotemporal features, the representation they support
encodes more complex acoustic patterns than the first layer. When trained on
corpora of speech and environmental sounds, some second-layer units learned to
group spectrotemporal features that occur together in natural sounds. Others
instantiate opponency between dissimilar sets of spectrotemporal features. Such
groupings might be instantiated by neurons in the auditory cortex, providing a
hypothesis for mid-level neuronal computation.
| [
{
"created": "Wed, 25 Jan 2017 02:00:50 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jan 2017 16:34:43 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Jan 2017 19:21:10 GMT",
"version": "v3"
},
{
"created": "Mon, 15 May 2017 21:39:01 GMT",
"version": "v4"
},
{
"created": "Sun, 15 Oct 2017 02:18:20 GMT",
"version": "v5"
}
] | 2017-10-17 | [
[
"Młynarski",
"Wiktor",
""
],
[
"McDermott",
"Josh H.",
""
]
] | Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. Although models exist in the visual domain to explain how mid-level features such as junctions and curves might be derived from oriented filters in early visual cortex, little is known about analogous grouping principles for mid-level auditory representations. We propose a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first layer coefficients. Because second-layer features are sensitive to combinations of spectrotemporal features, the representation they support encodes more complex acoustic patterns than the first layer. When trained on corpora of speech and environmental sounds, some second-layer units learned to group spectrotemporal features that occur together in natural sounds. Others instantiate opponency between dissimilar sets of spectrotemporal features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for mid-level neuronal computation. |
1704.08945 | Saar Rahav | Ilana Bogod, Saar Rahav | Kinetic discrimination of a polymerase in the presence of obstacles | 16 pages, 4 figures | Phys. Rev. E, v95, 042408 (2017) | 10.1103/PhysRevE.95.042408 | null | q-bio.SC cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the causes of high fidelity of copying in biological systems is
kinetic discrimination. In this mechanism larger dissipation and copying
velocity result in improved copying accuracy. We consider a model of a
polymerase which simultaneously copies a single-stranded RNA and opens a
single- to double-stranded junction serving as an obstacle. The presence of the
obstacle slows down the motor, resulting in a change of its fidelity, which can
be used to gain information about the motor and junction dynamics. We find that
the motor's fidelity does not depend on details of the motor-junction
interaction, such as whether the interaction is passive or active. Analysis of
the copying fidelity can still be used as a tool for investigating the junction
kinetics.
| [
{
"created": "Wed, 26 Apr 2017 12:32:04 GMT",
"version": "v1"
}
] | 2017-05-01 | [
[
"Bogod",
"Ilana",
""
],
[
"Rahav",
"Saar",
""
]
] | One of the causes of high fidelity of copying in biological systems is kinetic discrimination. In this mechanism larger dissipation and copying velocity result in improved copying accuracy. We consider a model of a polymerase which simultaneously copies a single stranded RNA and opens a single- to double-stranded junction serving as an obstacle. The presence of the obstacle slows down the motor, resulting in a change of its fidelity, which can be used to gain information about the motor and junction dynamics. We find that the motor's fidelity does not depend on details of the motor-junction interaction, such as whether the interaction is passive or active. Analysis of the copying fidelity can still be used as a tool for investigating the junction kinetics. |
1911.02803 | Olivier Rivoire | Olivier Rivoire | Geometry and flexibility of optimal catalysts in a minimal elastic
network model | null | null | null | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have a general knowledge of the principles by which catalysts accelerate
the rate of chemical reactions but no precise understanding of the geometrical
and physical constraints to which their design is subject. To analyze these
constraints, we introduce a minimal model of catalysis based on elastic
networks where the implications of the geometry and flexibility of a catalyst
can be studied systematically. The model demonstrates the relevance and
limitations of the principle of transition-state stabilization: optimal
catalysts are found to have a geometry complementary to the transition state
but a degree of flexibility that non-trivially depends on the parameters of the
reaction as well as on external parameters such as the concentrations of
reactants and products. The results illustrate how simple physical models can
provide valuable insights on the design of catalysts.
| [
{
"created": "Thu, 7 Nov 2019 08:44:16 GMT",
"version": "v1"
}
] | 2019-11-11 | [
[
"Rivoire",
"Olivier",
""
]
] | We have a general knowledge of the principles by which catalysts accelerate the rate of chemical reactions but no precise understanding of the geometrical and physical constraints to which their design is subject. To analyze these constraints, we introduce a minimal model of catalysis based on elastic networks where the implications of the geometry and flexibility of a catalyst can be studied systematically. The model demonstrates the relevance and limitations of the principle of transition-state stabilization: optimal catalysts are found to have a geometry complementary to the transition state but a degree of flexibility that non-trivially depends on the parameters of the reaction as well as on external parameters such as the concentrations of reactants and products. The results illustrate how simple physical models can provide valuable insights on the design of catalysts. |
2404.17603 | Meshach Ndlovu | Meshach Ndlovu | Assessing biological control method on the progression of anaplasmosis
disease in dominant cattle species in the Matabeleland north province
Zimbabwe | null | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | This paper presents a compartmental model for the transmission dynamics of
Anaplasmosis in resource-limited farmers' cattle subjected to a biological
control method. The study seeks to evaluate the stability and control of
cattle-herd dynamics under finite perturbations. Anaplasmosis poses a major
threat to cattle population growth in resource-limited communities. To gain
insight into the disease, the following model-analysis strategies were used:
computing simulations, analysing the model under varying initial predator
populations, and testing the effects of different predation rates on the
disease dynamics. It is essential that the progression of Anaplasmosis be
stable after the introduction of tick predators into the cattle-tick system,
because that establishes the usability of predation as a control measure.
After analysing the effect of different predation rates on the spread of the
disease in resource-limited communities, the study found that tick predators
such as birds and bacteria have been neglected as contributors to the natural
control mechanism of Anaplasmosis. Furthermore, the study brought to light
that predators have been neglected as major contributors to the natural
control mechanism of Anaplasmosis in Zimbabwe. Additional numerical
simulations showed that the predation method can be used to eradicate
Anaplasmosis, thus improving rural livelihoods. Investigating natural tick
enemies and predation behaviour can lead to more efficient and effective
control of Anaplasmosis. Finally, we recommend that resource-limited farmers
capitalise on the use of biological disease-control measures.
| [
{
"created": "Wed, 24 Apr 2024 18:40:32 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Ndlovu",
"Meshach",
""
]
] | This paper presents a compartmental model for the transmission dynamics of Anaplasmosis in resource limited farmers cattle subjected to a biological control method. The study seeks to evaluate the stability and control of cattle herds dynamics relative to finite agitation. Anaplasmosis disease pose a major threat in eradicating cattle population growth in resources limited communities. In gaining the insight of the disease, the following model analysis strategies were used in order to compute simulations, analysis of the model upon varying initial predator population and testing the effects of different predation rate on the disease dynamics. It is essential that the progression of Anaplasmosis be stable after the introduction of tick predators into cattle-tick system because that provides the usability of predation as a control measure. After analysing the effect of different prediction rates on the spread of the disease in resource limited communities the study asserted that tick predators like birds and bacteria have been neglected as contributors to natural mechanism of Anaplasmosis. Furthermore, the study brought to light that predictors have been neglected as major contributors to natural control mechanism of Anaplasmosis in Zimbabwe. Additional numerical simulations showed that predation method can be used in the eradication of Anaplasmosis disease thus improving rural livelihood. Investigation of natural tick enemies and predation behaviour can lead to better control of the Anaplasmosis disease efficiently and effectively. Finally, we recommend the necessity for resource limited farmers to capitalise on the use of biological disease control measures. |
1305.7068 | Yann Ponty | Vladimir Reinharz, Yann Ponty (LIX, INRIA Saclay - Ile de France),
J\'er\^ome Waldisp\"uhl | Using structural and evolutionary information to detect and correct
pyrosequencing errors in non-coding RNAs | Extended version of RECOMB'13 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of the sequence-structure relationship in RNA molecules is
essential to evolutionary studies but also to concrete applications such as
error-correction methodologies in sequencing technologies. The prohibitive
sizes of the mutational and conformational landscapes combined with the volume
of data to process require efficient algorithms to compute sequence-structure
properties. More specifically, here we aim to calculate which mutations most
increase the likelihood of a sequence under a given structure and RNA
family. In this paper, we introduce RNApyro, an efficient linear-time and space
inside-outside algorithm that computes exact mutational probabilities under
secondary structure and evolutionary constraints given as a multiple sequence
alignment with a consensus structure. We develop a scoring scheme combining
classical stacking base-pair energies with novel isostericity scales, and apply
our techniques to correct point-wise errors in 5S and 16S rRNA sequences. Our
results suggest that RNApyro is a promising algorithm to complement existing
tools in the NGS error-correction pipeline.
| [
{
"created": "Thu, 30 May 2013 11:47:24 GMT",
"version": "v1"
}
] | 2013-05-31 | [
[
"Reinharz",
"Vladimir",
"",
"LIX, INRIA Saclay - Ile de France"
],
[
"Ponty",
"Yann",
"",
"LIX, INRIA Saclay - Ile de France"
],
[
"Waldispühl",
"Jérôme",
""
]
] | Analysis of the sequence-structure relationship in RNA molecules are essential to evolutionary studies but also to concrete applications such as error-correction methodologies in sequencing technologies. The prohibitive sizes of the mutational and conformational landscapes combined with the volume of data to proceed require efficient algorithms to compute sequence-structure properties. More specifically, here we aim to calculate which mutations increase the most the likelihood of a sequence to a given structure and RNA family. In this paper, we introduce RNApyro, an efficient linear-time and space inside-outside algorithm that computes exact mutational probabilities under secondary structure and evolutionary constraints given as a multiple sequence alignment with a consensus structure. We develop a scoring scheme combining classical stacking base pair energies to novel isostericity scales, and apply our techniques to correct point-wise errors in 5s and 16s rRNA sequences. Our results suggest that RNApyro is a promising algorithm to complement existing tools in the NGS error-correction pipeline. |
0902.0959 | Michael E. Wall | Michael E. Wall, David A. Markowitz, Judah L. Rosner, Robert G. Martin | Model of Transcriptional Activation by MarA in Escherichia coli | 30 pages, 2 figures | null | 10.1016/j.bpj.2009.12.397 | LA-UR-09-00238 | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have developed a mathematical model of transcriptional activation by MarA
in Escherichia coli, and used the model to analyze measurements of
MarA-dependent activity of the marRAB, sodA, and micF promoters in mar-rob-
cells. The model rationalizes an unexpected poor correlation between the
mid-point of in vivo promoter activity profiles and in vitro equilibrium
constants for MarA binding to promoter sequences. Analysis of the promoter
activity data using the model yielded the following predictions regarding
activation mechanisms: (1) MarA activation of the marRAB, sodA, and micF
promoters involves a net acceleration of the kinetics of transitions after RNA
polymerase binding, up to and including promoter escape and message elongation;
(2) RNA polymerase binds to these promoters with nearly unit occupancy in the
absence of MarA, making recruitment of polymerase an insignificant factor in
activation of these promoters; and (3) instead of recruitment, activation of
the micF promoter might involve a repulsion of polymerase combined with a large
acceleration of the kinetics of polymerase activity. These predictions are
consistent with published chromatin immunoprecipitation assays of interactions
between polymerase and the E. coli chromosome. A lack of recruitment in
transcriptional activation represents an exception to the textbook description
of activation of bacterial sigma-70 promoters. However, use of accelerated
polymerase kinetics instead of recruitment might confer a competitive advantage
to E. coli by decreasing latency in gene regulation.
| [
{
"created": "Thu, 5 Feb 2009 18:59:55 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Wall",
"Michael E.",
""
],
[
"Markowitz",
"David A.",
""
],
[
"Rosner",
"Judah L.",
""
],
[
"Martin",
"Robert G.",
""
]
] | We have developed a mathematical model of transcriptional activation by MarA in Escherichia coli, and used the model to analyze measurements of MarA-dependent activity of the marRAB, sodA, and micF promoters in mar-rob- cells. The model rationalizes an unexpected poor correlation between the mid-point of in vivo promoter activity profiles and in vitro equilibrium constants for MarA binding to promoter sequences. Analysis of the promoter activity data using the model yielded the following predictions regarding activation mechanisms: (1) MarA activation of the marRAB, sodA, and micF promoters involves a net acceleration of the kinetics of transitions after RNA polymerase binding, up to and including promoter escape and message elongation; (2) RNA polymerase binds to these promoters with nearly unit occupancy in the absence of MarA, making recruitment of polymerase an insignificant factor in activation of these promoters; and (3) instead of recruitment, activation of the micF promoter might involve a repulsion of polymerase combined with a large acceleration of the kinetics of polymerase activity. These predictions are consistent with published chromatin immunoprecipitation assays of interactions between polymerase and the E. coli chromosome. A lack of recruitment in transcriptional activation represents an exception to the textbook description of activation of bacterial sigma-70 promoters. However, use of accelerated polymerase kinetics instead of recruitment might confer a competitive advantage to E. coli by decreasing latency in gene regulation. |
1106.6319 | Joel Miller | Joel C. Miller and Erik M. Volz | Edge-based compartmental modeling for epidemic spread Part II: Model
Selection and Hierarchies | null | Journal of Mathematical Biology October 2013, Volume 67, Issue 4,
pp 869-899 | 10.1007/s00285-012-0572-3 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the edge-based compartmental models for epidemic spread developed
in Part I. We show conditions under which simpler models may be substituted for
more detailed models, and in so doing we define a hierarchy of epidemic models.
In particular we provide conditions under which it is appropriate to use the
standard mass action SIR model, and we show what happens when these conditions
fail. Using our hierarchy, we provide a procedure leading to the choice of the
appropriate model for a given population. Our result about the convergence of
models to the Mass Action model gives clear, rigorous conditions under which
the Mass Action model is accurate.
| [
{
"created": "Thu, 30 Jun 2011 18:10:10 GMT",
"version": "v1"
}
] | 2015-09-03 | [
[
"Miller",
"Joel C.",
""
],
[
"Volz",
"Erik M.",
""
]
] | We consider the edge-based compartmental models for epidemic spread developed in Part I. We show conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. |
2308.09086 | Lucian Chan | Lucian Chan and Marcel Verdonk and Carl Poelking | Embracing assay heterogeneity with neural processes for markedly
improved bioactivity predictions | null | null | null | null | q-bio.QM cs.LG q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Predicting the bioactivity of a ligand is one of the hardest and most
important challenges in computer-aided drug discovery. Despite years of data
collection and curation efforts by research organizations worldwide,
bioactivity data remains sparse and heterogeneous, thus hampering efforts to
build predictive models that are accurate, transferable and robust. The
intrinsic variability of the experimental data is further compounded by data
aggregation practices that neglect heterogeneity to overcome sparsity. Here we
discuss the limitations of these practices and present a hierarchical
meta-learning framework that exploits the information synergy across disparate
assays by successfully accounting for assay heterogeneity. We show that the
model achieves a drastic improvement in affinity prediction across diverse
protein targets and assay types compared to conventional baselines. It can
quickly adapt to new target contexts using very few observations, thus enabling
large-scale virtual screening in early-phase drug discovery.
| [
{
"created": "Thu, 17 Aug 2023 16:26:58 GMT",
"version": "v1"
}
] | 2023-08-21 | [
[
"Chan",
"Lucian",
""
],
[
"Verdonk",
"Marcel",
""
],
[
"Poelking",
"Carl",
""
]
] | Predicting the bioactivity of a ligand is one of the hardest and most important challenges in computer-aided drug discovery. Despite years of data collection and curation efforts by research organizations worldwide, bioactivity data remains sparse and heterogeneous, thus hampering efforts to build predictive models that are accurate, transferable and robust. The intrinsic variability of the experimental data is further compounded by data aggregation practices that neglect heterogeneity to overcome sparsity. Here we discuss the limitations of these practices and present a hierarchical meta-learning framework that exploits the information synergy across disparate assays by successfully accounting for assay heterogeneity. We show that the model achieves a drastic improvement in affinity prediction across diverse protein targets and assay types compared to conventional baselines. It can quickly adapt to new target contexts using very few observations, thus enabling large-scale virtual screening in early-phase drug discovery. |
2007.02197 | Mar\'ia da Fonseca | Mar\'ia da Fonseca and In\'es Samengo | Statistical properties of color matching functions | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In trichromats, color vision entails the projection of an
infinite-dimensional space (the one containing all possible electromagnetic
power spectra) onto the 3-dimensional space that modulates the activity of the
three types of cones. This drastic reduction in dimensionality gives rise to
metamerism, that is, the perceptual chromatic equivalence between two different
light spectra. The classes of equivalence of metamerism are revealed by
color-matching experiments, in which observers adjust the intensity of three
monochromatic light beams of three pre-set wavelengths (the primaries) to
produce a mixture that is perceptually equal to a given monochromatic target
stimulus. Here we use the linear relation between the color matching functions
and the absorption probabilities of each type of cone to find particularly
useful triplets of primaries. As a second goal, we also derive an analytical
description of the trial-to-trial variability and the correlations of color
matching functions stemming from Poissonian noise in photon capture. We analyze
how the statistical properties of the responses to color-matching experiments
vary with the retinal composition and the wavelengths of peak absorption
probability, and compare them with experimental data on subject-to-subject
variability obtained previously.
| [
{
"created": "Sat, 4 Jul 2020 22:10:33 GMT",
"version": "v1"
},
{
"created": "Sat, 1 May 2021 18:34:08 GMT",
"version": "v2"
}
] | 2021-05-04 | [
[
"da Fonseca",
"María",
""
],
[
"Samengo",
"Inés",
""
]
] | In trichromats, color vision entails the projection of an infinite-dimensional space (the one containing all possible electromagnetic power spectra) onto the 3-dimensional space that modulates the activity of the three types of cones. This drastic reduction in dimensionality gives rise to metamerism, that is, the perceptual chromatic equivalence between two different light spectra. The classes of equivalence of metamerism are revealed by color-matching experiments, in which observers adjust the intensity of three monochromatic light beams of three pre-set wavelengths (the primaries) to produce a mixture that is perceptually equal to a given monochromatic target stimulus. Here we use the linear relation between the color matching functions and the absorption probabilities of each type of cone to find particularly useful triplets of primaries. As a second goal, we also derive an analytical description of the trial-to-trial variability and the correlations of color matching functions stemming from Poissonian noise in photon capture. We analyze how the statistical properties of the responses to color-matching experiments vary with the retinal composition and the wavelengths of peak absorption probability, and compare them with experimental data on subject-to-subject variability obtained previously. |
1401.1168 | Carsten Maedler | Pritiraj Mohanty, Yu Chen, Xihua Wang, Mi K. Hong, Carol L. Rosenberg,
David T. Weaver, Shyamsunder Erramilli | Field Effect Transistor Nanosensor for Breast Cancer Diagnostics | null | Invited Review in Biosensors and Molecular Technologies for Cancer
Diagnostics, edited by K. H. Herold and A. Rasooly (CRC Press, 2012) | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Silicon nanochannel field effect transistor (FET) biosensors are one of the
most promising technologies in the development of highly sensitive and
label-free analyte detection for cancer diagnostics. With their exceptional
electrical properties and small dimensions, silicon nanochannels are ideally
suited for extraordinarily high sensitivity. In fact, the high
surface-to-volume ratios of these systems make single molecule detection
possible. Further, FET biosensors offer the benefits of high speed, low cost,
and high yield manufacturing, without sacrificing the sensitivity typical for
traditional optical methods in diagnostics. Top down manufacturing methods
leverage advantages in Complementary Metal Oxide Semiconductor (CMOS)
technologies, making richly multiplexed sensor arrays a reality. Here, we
discuss the fabrication and use of silicon nanochannel FET devices as
biosensors for breast cancer diagnosis and monitoring.
| [
{
"created": "Mon, 6 Jan 2014 19:05:45 GMT",
"version": "v1"
}
] | 2014-01-07 | [
[
"Mohanty",
"Pritiraj",
""
],
[
"Chen",
"Yu",
""
],
[
"Wang",
"Xihua",
""
],
[
"Hong",
"Mi K.",
""
],
[
"Rosenberg",
"Carol L.",
""
],
[
"Weaver",
"David T.",
""
],
[
"Erramilli",
"Shyamsunder",
""
]
] | Silicon nanochannel field effect transistor (FET) biosensors are one of the most promising technologies in the development of highly sensitive and label-free analyte detection for cancer diagnostics. With their exceptional electrical properties and small dimensions, silicon nanochannels are ideally suited for extraordinarily high sensitivity. In fact, the high surface-to-volume ratios of these systems make single molecule detection possible. Further, FET biosensors offer the benefits of high speed, low cost, and high yield manufacturing, without sacrificing the sensitivity typical for traditional optical methods in diagnostics. Top down manufacturing methods leverage advantages in Complementary Metal Oxide Semiconductor (CMOS) technologies, making richly multiplexed sensor arrays a reality. Here, we discuss the fabrication and use of silicon nanochannel FET devices as biosensors for breast cancer diagnosis and monitoring. |
1805.09084 | PierGianLuca Porta Mana | PierGianLuca Porta Mana, Vahid Rostami, Emiliano Torre, Yasser Roudi | Maximum-entropy and representative samples of neuronal activity: a
dilemma | 12 pages, 2 figures. V2: added references and updated contact details | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The present work shows that the maximum-entropy method can be applied to a
sample of neuronal recordings along two different routes: (1) apply to the
sample; or (2) apply to a larger, unsampled neuronal population from which the
sample is drawn, and then marginalize to the sample. These two routes give
inequivalent results. The second route can be further generalized to the case
where the size of the larger population is unknown. Which route should be
chosen? Some arguments are presented in favour of the second. This work also
presents and discusses probability formulae that relate states of knowledge
about a population and its samples, and that may be useful for sampling
problems in neuroscience.
| [
{
"created": "Wed, 23 May 2018 12:17:49 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Oct 2020 16:49:02 GMT",
"version": "v2"
}
] | 2020-10-20 | [
[
"Mana",
"PierGianLuca Porta",
""
],
[
"Rostami",
"Vahid",
""
],
[
"Torre",
"Emiliano",
""
],
[
"Roudi",
"Yasser",
""
]
] | The present work shows that the maximum-entropy method can be applied to a sample of neuronal recordings along two different routes: (1) apply to the sample; or (2) apply to a larger, unsampled neuronal population from which the sample is drawn, and then marginalize to the sample. These two routes give inequivalent results. The second route can be further generalized to the case where the size of the larger population is unknown. Which route should be chosen? Some arguments are presented in favour of the second. This work also presents and discusses probability formulae that relate states of knowledge about a population and its samples, and that may be useful for sampling problems in neuroscience. |
1508.06854 | Konstantinos Kouvaris | Kostas Kouvaris, Jeff Clune, Louis Kounios, Markus Brede, Richard A.
Watson | How Evolution Learns to Generalise: Principles of under-fitting,
over-fitting and induction in the evolution of developmental organisation | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most intriguing questions in evolution is how organisms exhibit
suitable phenotypic variation to rapidly adapt in novel selective environments
which is crucial for evolvability. Recent work showed that when selective
environments vary in a systematic manner, it is possible that development can
constrain the phenotypic space in regions that are evolutionarily more
advantageous. Yet, the underlying mechanism that enables the spontaneous
emergence of such adaptive developmental constraints is poorly understood. How
can natural selection, given its myopic and conservative nature, favour
developmental organisations that facilitate adaptive evolution in future
previously unseen environments? Such capacity suggests a form of
\textit{foresight} facilitated by the ability of evolution to accumulate and
exploit information not only about the particular phenotypes selected in the
past, but regularities in the environment that are also relevant to future
environments. Here we argue that the ability of evolution to discover such
regularities is analogous to the ability of learning systems to generalise from
past experience. Conversely, the canalisation of evolved developmental
processes to past selective environments and failure of natural selection to
enhance evolvability in future selective environments is directly analogous to
the problem of over-fitting and failure to generalise in machine learning. We
show that this analogy arises from an underlying mechanistic equivalence by
showing that conditions corresponding to those that alleviate over-fitting in
machine learning enhance the evolution of generalised developmental
organisations under natural selection. This equivalence provides access to a
well-developed theoretical framework that enables us to characterise the
conditions where natural selection will find general rather than particular
solutions to environmental conditions.
| [
{
"created": "Thu, 27 Aug 2015 13:37:00 GMT",
"version": "v1"
}
] | 2015-08-28 | [
[
"Kouvaris",
"Kostas",
""
],
[
"Clune",
"Jeff",
""
],
[
"Kounios",
"Louis",
""
],
[
"Brede",
"Markus",
""
],
[
"Watson",
"Richard A.",
""
]
] | One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments which is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible that development can constrain the phenotypic space in regions that are evolutionarily more advantageous. Yet, the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future previously unseen environments? Such capacity suggests a form of \textit{foresight} facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but regularities in the environment that are also relevant to future environments. Here we argue that the ability of evolution to discover such regularities is analogous to the ability of learning systems to generalise from past experience. Conversely, the canalisation of evolved developmental processes to past selective environments and failure of natural selection to enhance evolvability in future selective environments is directly analogous to the problem of over-fitting and failure to generalise in machine learning. We show that this analogy arises from an underlying mechanistic equivalence by showing that conditions corresponding to those that alleviate over-fitting in machine learning enhance the evolution of generalised developmental organisations under natural selection. This equivalence provides access to a well-developed theoretical framework that enables us to characterise the conditions where natural selection will find general rather than particular solutions to environmental conditions. |
1411.6473 | Hsiu-Hau Lin | Tsung-Cheng Lu, Yi-Ko Chen, Hsiu-Hau Lin and Chun-Chung-Chen | Fluctuation-induced dissipation in evolutionary dynamics | 6 pages, 3 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biodiversity and extinction are central issues in evolution. Dynamical
balance among different species in ecosystems is often described by
deterministic replicator equations with moderate success. However, fluctuations
are inevitable, either caused by external environment or intrinsic random
competitions in finite populations, and the evolutionary dynamics is stochastic
in nature. Here we show that, after appropriate coarse-graining, random
fluctuations generate dissipation towards extinction because the evolution
trajectories in the phase space of all competing species possess positive
curvature. As a demonstrating example, we compare the fluctuation-induced
dissipative dynamics in Lotka-Volterra model with numerical simulations and
find impressive agreement. Our finding is closely related to the
fluctuation-dissipation theorem in statistical mechanics but the marked
difference is the non-equilibrium essence of the generic evolutionary dynamics.
As the evolving ecosystems are far from equilibrium, the relation between
fluctuations and dissipations is often complicated and dependent on microscopic
details. It is thus remarkable that the generic positivity of the trajectory
curvature warrants dissipation arisen from the seemingly harmless fluctuations.
The unexpected dissipative dynamics is beyond the reach of conventional
replicator equations and plays a crucial role in investigating the biodiversity
in ecosystems.
| [
{
"created": "Mon, 24 Nov 2014 14:55:27 GMT",
"version": "v1"
}
] | 2014-11-25 | [
[
"Lu",
"Tsung-Cheng",
""
],
[
"Chen",
"Yi-Ko",
""
],
[
"Lin",
"Hsiu-Hau",
""
],
[
"Chun-Chung-Chen",
"",
""
]
] | Biodiversity and extinction are central issues in evolution. Dynamical balance among different species in ecosystems is often described by deterministic replicator equations with moderate success. However, fluctuations are inevitable, either caused by external environment or intrinsic random competitions in finite populations, and the evolutionary dynamics is stochastic in nature. Here we show that, after appropriate coarse-graining, random fluctuations generate dissipation towards extinction because the evolution trajectories in the phase space of all competing species possess positive curvature. As a demonstrating example, we compare the fluctuation-induced dissipative dynamics in Lotka-Volterra model with numerical simulations and find impressive agreement. Our finding is closely related to the fluctuation-dissipation theorem in statistical mechanics but the marked difference is the non-equilibrium essence of the generic evolutionary dynamics. As the evolving ecosystems are far from equilibrium, the relation between fluctuations and dissipations is often complicated and dependent on microscopic details. It is thus remarkable that the generic positivity of the trajectory curvature warrants dissipation arisen from the seemingly harmless fluctuations. The unexpected dissipative dynamics is beyond the reach of conventional replicator equations and plays a crucial role in investigating the biodiversity in ecosystems. |
2109.03115 | Simon Dahan | Simon Dahan, Logan Z. J. Williams, Daniel Rueckert, Emma C. Robinson | Improving Phenotype Prediction using Long-Range Spatio-Temporal Dynamics
of Functional Connectivity | MLCN 2021 | null | null | null | q-bio.NC cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | The study of functional brain connectivity (FC) is important for
understanding the underlying mechanisms of many psychiatric disorders. Many
recent analyses adopt graph convolutional networks, to study non-linear
interactions between functionally-correlated states. However, although patterns
of brain activation are known to be hierarchically organised in both space and
time, many methods have failed to extract powerful spatio-temporal features. To
overcome those challenges, and improve understanding of long-range functional
dynamics, we translate an approach, from the domain of skeleton-based action
recognition, designed to model interactions across space and time. We evaluate
this approach using the Human Connectome Project (HCP) dataset on sex
classification and fluid intelligence prediction. To account for subject
topographic variability of functional organisation, we modelled functional
connectomes using multi-resolution dual-regressed (subject-specific) ICA nodes.
Results show a prediction accuracy of 94.4% for sex classification (an increase
of 6.2% compared to other methods), and an improvement of correlation with
fluid intelligence of 0.325 vs 0.144, relative to a baseline model that encodes
space and time separately. Results suggest that explicit encoding of
spatio-temporal dynamics of brain functional activity may improve the precision
with which behavioural and cognitive phenotypes may be predicted in the future.
| [
{
"created": "Tue, 7 Sep 2021 14:23:34 GMT",
"version": "v1"
}
] | 2021-09-08 | [
[
"Dahan",
"Simon",
""
],
[
"Williams",
"Logan Z. J.",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Robinson",
"Emma C.",
""
]
] | The study of functional brain connectivity (FC) is important for understanding the underlying mechanisms of many psychiatric disorders. Many recent analyses adopt graph convolutional networks, to study non-linear interactions between functionally-correlated states. However, although patterns of brain activation are known to be hierarchically organised in both space and time, many methods have failed to extract powerful spatio-temporal features. To overcome those challenges, and improve understanding of long-range functional dynamics, we translate an approach, from the domain of skeleton-based action recognition, designed to model interactions across space and time. We evaluate this approach using the Human Connectome Project (HCP) dataset on sex classification and fluid intelligence prediction. To account for subject topographic variability of functional organisation, we modelled functional connectomes using multi-resolution dual-regressed (subject-specific) ICA nodes. Results show a prediction accuracy of 94.4% for sex classification (an increase of 6.2% compared to other methods), and an improvement of correlation with fluid intelligence of 0.325 vs 0.144, relative to a baseline model that encodes space and time separately. Results suggest that explicit encoding of spatio-temporal dynamics of brain functional activity may improve the precision with which behavioural and cognitive phenotypes may be predicted in the future. |
2307.04397 | Mathieu Hemery | Mathieu Hemery, Fran\c{c}ois Fages | On Estimating Derivatives of Input Signals in Biochemistry | null | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The online estimation of the derivative of an input signal is widespread in
control theory and engineering. In the realm of chemical reaction networks
(CRN), this raises however a number of specific issues on the different ways to
achieve it. A CRN pattern for implementing a derivative block has already been
proposed for the PID control of biochemical processes, and proved correct using
Tikhonov's limit theorem. In this paper, we give a detailed mathematical
analysis of that CRN, thus clarifying the computed quantity and quantifying the
error done as a function of the reaction kinetic parameters. In a synthetic
biology perspective, we show how this can be used to design error correcting
terms to compute online functions involving derivatives with CRNs. In the
systems biology perspective, we give the list of models in BioModels containing
(in the sense of subgraph epimorphisms) the core derivative CRN, most of which
being models of oscillators and control systems in the cell, and discuss in
detail two such examples: one model of the circadian clock and one model of a
bistable switch.
| [
{
"created": "Mon, 10 Jul 2023 07:59:24 GMT",
"version": "v1"
}
] | 2023-07-11 | [
[
"Hemery",
"Mathieu",
""
],
[
"Fages",
"François",
""
]
] | The online estimation of the derivative of an input signal is widespread in control theory and engineering. In the realm of chemical reaction networks (CRN), this raises however a number of specific issues on the different ways to achieve it. A CRN pattern for implementing a derivative block has already been proposed for the PID control of biochemical processes, and proved correct using Tikhonov's limit theorem. In this paper, we give a detailed mathematical analysis of that CRN, thus clarifying the computed quantity and quantifying the error done as a function of the reaction kinetic parameters. In a synthetic biology perspective, we show how this can be used to design error correcting terms to compute online functions involving derivatives with CRNs. In the systems biology perspective, we give the list of models in BioModels containing (in the sense of subgraph epimorphisms) the core derivative CRN, most of which being models of oscillators and control systems in the cell, and discuss in detail two such examples: one model of the circadian clock and one model of a bistable switch. |
2001.11679 | Peter Csermely | P\'eter Csermely, Nina Kunsic, P\'eter Mendik, M\'ark Kerest\'ely,
Teod\'ora Farag\'o, D\'aniel V. Veres, and P\'eter Tompa | Learning of signaling networks: molecular mechanisms | cover story of the 2020 April issue of Trends in Biochemical Sciences | Trends in Biochemical Sciences (2020) 45, 284-294 | 10.1016/j.tibs.2019.12.005 | null | q-bio.MN cs.NE nlin.AO physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular processes of neuronal learning have been well-described. However,
learning mechanisms of non-neuronal cells have not been fully understood at the
molecular level. Here, we discuss molecular mechanisms of cellular learning,
including conformational memory of intrinsically disordered proteins and
prions, signaling cascades, protein translocation, RNAs (microRNA and lncRNA),
and chromatin memory. We hypothesize that these processes constitute the
learning of signaling networks and correspond to a generalized Hebbian learning
process of single, non-neuronal cells, and discuss how cellular learning may
open novel directions in drug design and inspire new artificial intelligence
methods.
| [
{
"created": "Fri, 31 Jan 2020 07:15:43 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Mar 2020 09:18:48 GMT",
"version": "v2"
}
] | 2020-03-18 | [
[
"Csermely",
"Péter",
""
],
[
"Kunsic",
"Nina",
""
],
[
"Mendik",
"Péter",
""
],
[
"Kerestély",
"Márk",
""
],
[
"Faragó",
"Teodóra",
""
],
[
"Veres",
"Dániel V.",
""
],
[
"Tompa",
"Péter",
""
]
] | Molecular processes of neuronal learning have been well-described. However, learning mechanisms of non-neuronal cells have not been fully understood at the molecular level. Here, we discuss molecular mechanisms of cellular learning, including conformational memory of intrinsically disordered proteins and prions, signaling cascades, protein translocation, RNAs (microRNA and lncRNA), and chromatin memory. We hypothesize that these processes constitute the learning of signaling networks and correspond to a generalized Hebbian learning process of single, non-neuronal cells, and discuss how cellular learning may open novel directions in drug design and inspire new artificial intelligence methods. |
1410.8230 | Leandro Alonso | Leandro M. Alonso | Parameter estimation, nonlinearity and Occam's razor | null | Chaos 25, 033104 (2015) | 10.1063/1.4914452 | null | q-bio.QM nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonlinear systems are capable of displaying complex behavior even if this is
the result of a small number of interacting time scales. A widely studied case
is when complex dynamics emerges out of a nonlinear system being forced by a
simple harmonic function. In order to identify if a recorded time series is the
result of a nonlinear system responding to a simpler forcing, we develop a
discrete nonlinear transformation for time series based on synchronization
techniques. This allows a parameter estimation procedure which simultaneously
searches for a good fit of the recorded data, and small complexity of a
fluctuating driving parameter. We illustrate this procedure using data from
respiratory patterns during birdsong production.
| [
{
"created": "Thu, 30 Oct 2014 02:26:35 GMT",
"version": "v1"
}
] | 2015-06-01 | [
[
"Alonso",
"Leandro M.",
""
]
] | Nonlinear systems are capable of displaying complex behavior even if this is the result of a small number of interacting time scales. A widely studied case is when complex dynamics emerges out of a nonlinear system being forced by a simple harmonic function. In order to identify if a recorded time series is the result of a nonlinear system responding to a simpler forcing, we develop a discrete nonlinear transformation for time series based on synchronization techniques. This allows a parameter estimation procedure which simultaneously searches for a good fit of the recorded data, and small complexity of a fluctuating driving parameter. We illustrate this procedure using data from respiratory patterns during birdsong production. |
1703.05471 | Patricio Maturana | Patricio Maturana, Brendon J. Brewer, Steffen Klaere, Remco Bouckaert | Model selection and parameter inference in phylogenetics using Nested
Sampling | 23 pages, 12 figures, 3 tables | null | 10.1093/sysbio/syy050 | null | q-bio.QM stat.CO | http://creativecommons.org/licenses/by/4.0/ | Bayesian inference methods rely on numerical algorithms for both model
selection and parameter inference. In general, these algorithms require a high
computational effort to yield reliable estimates. One of the major challenges
in phylogenetics is the estimation of the marginal likelihood. This quantity is
commonly used for comparing different evolutionary models, but its calculation,
even for simple models, incurs high computational cost. Another interesting
challenge relates to the estimation of the posterior distribution. Often, long
Markov chains are required to get sufficient samples to carry out parameter
inference, especially for tree distributions. In general, these problems are
addressed separately by using different procedures. Nested sampling (NS) is a
Bayesian computation algorithm which provides the means to estimate marginal
likelihoods together with their uncertainties, and to sample from the posterior
distribution at no extra cost. The methods currently used in phylogenetics for
marginal likelihood estimation lack in practicality due to their dependence on
many tuning parameters and the inability of most implementations to provide a
direct way to calculate the uncertainties associated with the estimates. To
address these issues, we introduce NS to phylogenetics. Its performance is
assessed under different scenarios and compared to established methods. We
conclude that NS is a competitive and attractive algorithm for phylogenetic
inference. An implementation is available as a package for BEAST 2 under the
LGPL licence, accessible at https://github.com/BEAST2-Dev/nested-sampling.
| [
{
"created": "Thu, 16 Mar 2017 05:00:37 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Dec 2017 05:30:12 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Apr 2018 05:22:48 GMT",
"version": "v3"
}
] | 2018-08-09 | [
[
"Maturana",
"Patricio",
""
],
[
"Brewer",
"Brendon J.",
""
],
[
"Klaere",
"Steffen",
""
],
[
"Bouckaert",
"Remco",
""
]
] | Bayesian inference methods rely on numerical algorithms for both model selection and parameter inference. In general, these algorithms require a high computational effort to yield reliable estimates. One of the major challenges in phylogenetics is the estimation of the marginal likelihood. This quantity is commonly used for comparing different evolutionary models, but its calculation, even for simple models, incurs high computational cost. Another interesting challenge relates to the estimation of the posterior distribution. Often, long Markov chains are required to get sufficient samples to carry out parameter inference, especially for tree distributions. In general, these problems are addressed separately by using different procedures. Nested sampling (NS) is a Bayesian computation algorithm which provides the means to estimate marginal likelihoods together with their uncertainties, and to sample from the posterior distribution at no extra cost. The methods currently used in phylogenetics for marginal likelihood estimation lack in practicality due to their dependence on many tuning parameters and the inability of most implementations to provide a direct way to calculate the uncertainties associated with the estimates. To address these issues, we introduce NS to phylogenetics. Its performance is assessed under different scenarios and compared to established methods. We conclude that NS is a competitive and attractive algorithm for phylogenetic inference. An implementation is available as a package for BEAST 2 under the LGPL licence, accessible at https://github.com/BEAST2-Dev/nested-sampling. |
1902.01143 | Stefan Hammer | Stefan Hammer, Christian G\"unzel, Mario M\"orl, Sven Findei{\ss} | Evolving methods for rational de novo design of functional RNA molecules | Published at METHODS, Issue title: Chemical Biology of RNA, Guest
Editor: Michael Ryckelynck | null | 10.1016/j.ymeth.2019.04.022 | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Artificial RNA molecules with novel functionality have many applications in
synthetic biology, pharmacy and white biotechnology. The de novo design of such
devices using computational methods and prediction tools is a
resource-efficient alternative to experimental screening and selection
pipelines. In this review, we describe methods common to many such
computational approaches, thoroughly dissect these methods and highlight open
questions for the individual steps. Initially, it is essential to investigate
the biological target system, the regulatory mechanism that will be exploited,
as well as the desired components in order to define design objectives.
Subsequent computational design is needed to combine the selected components
and to obtain novel functionality. This process can usually be split into
constrained sequence sampling, the formulation of an optimization problem and
an in silico analysis to narrow down the number of candidates with respect to
secondary goals. Finally, experimental analysis is important to check whether
the defined design objectives are indeed met in the target environment and
detailed characterization experiments should be performed to improve the
mechanistic models and detect missing design requirements.
| [
{
"created": "Mon, 4 Feb 2019 12:14:46 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2019 10:42:39 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Apr 2019 18:58:35 GMT",
"version": "v3"
},
{
"created": "Fri, 10 May 2019 07:53:48 GMT",
"version": "v4"
}
] | 2019-05-13 | [
[
"Hammer",
"Stefan",
""
],
[
"Günzel",
"Christian",
""
],
[
"Mörl",
"Mario",
""
],
[
"Findeiß",
"Sven",
""
]
] | Artificial RNA molecules with novel functionality have many applications in synthetic biology, pharmacy and white biotechnology. The de novo design of such devices using computational methods and prediction tools is a resource-efficient alternative to experimental screening and selection pipelines. In this review, we describe methods common to many such computational approaches, thoroughly dissect these methods and highlight open questions for the individual steps. Initially, it is essential to investigate the biological target system, the regulatory mechanism that will be exploited, as well as the desired components in order to define design objectives. Subsequent computational design is needed to combine the selected components and to obtain novel functionality. This process can usually be split into constrained sequence sampling, the formulation of an optimization problem and an in silico analysis to narrow down the number of candidates with respect to secondary goals. Finally, experimental analysis is important to check whether the defined design objectives are indeed met in the target environment and detailed characterization experiments should be performed to improve the mechanistic models and detect missing design requirements. |
2007.08429 | Camile Bahi | Camile Bahi | 5-HT2A mediated plasticity as a target in major depression: a narrative
review connecting the dots from neurobiology to cognition and psychology | 27 pages, 9 figures | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the world's first primary morbidity factor, depression has a considerable
impact on both an individual as well as a societal level. Despite their
discovery several decades ago, classical antidepressants have been shown to
provide limited benefits against this condition. However, substances such as
ketamine and psychedelics have recently shown promising results and even
received the grade of Breakthrough therapy for this indication. The accurate
mechanisms of action underlying the efficacy of these substances are still to
be defined, but some similarities appear to be shared on different levels
across these substances. These include their structural, functional and
behavioral plasticity promoting abilities, as well as their capacity to promote
Brain-Derived Neurotrophic Factor overexpression, which seems to constitute a
key element underlying their immediate and long-lasting action. From this
observation, the present review aims to examine and connect the pharmacological
pathways involved in these therapies to the neurobiological, cognitive and
psychological responses that could be shared by both 5-HT2AR agonists and NMDA
antagonists. It is suggested that BDNF overexpression resulting from mTOR
activation mediates both structural and functional plasticity, resulting in
connectivity changes among high-level cognitive networks such as the Default
Mode Network, finally leading to an increased and long-lasting psychological
flexibility. Connecting these pieces of evidence could provide insights about
their precise mechanisms of action and help researchers to develop biomarkers
for antidepressant response. If the hypotheses suggested in this review are
verified by further trials, they could also constitute a starting point to
developing safer and more efficient antidepressants, as well as provide
information about the interactions that exist between different
neurotransmitter systems.
| [
{
"created": "Thu, 16 Jul 2020 16:09:32 GMT",
"version": "v1"
}
] | 2020-07-17 | [
[
"Bahi",
"Camile",
""
]
] | As the world's first primary morbidity factor, depression has a considerable impact on both an individual as well as a societal level. despite their discovery several decades ago, classical antidepressants have been shown to provide limited benefits against this condition. However, substances such as ketamine and psychedelics have recently shown promising results and even received the grade of Breakthrough therapy for this indication. The accurate mechanisms of action underlying the efficacy of these substances are still to be defined, but some similarities appear to be shared on different levels across these substances. These include their structural, functional and behavioral plasticity promoting abilities, as well as their capacity to promote Brain-Derived Neurotrophic Factor overexpression, which seems to constitute a key element underlying their immediate and long-lasting action. From this observation, the present review aims to examine and connect the pharmacological pathways involved in these therapies to the neurobiological, cognitive and psychological responses that could be shared by both 5-HT2AR agonists and NMDA antagonists. It is suggested that BDNF overexpression resulting from mTOR activation mediates both structural and functional plasticity, resulting in connectivity changes among high-level cognitive networks such as the Default Mode Network, finally leading to an increased and long-lasting psychological flexibility. Connecting these pieces of evidence could provide insights about their precise mechanisms of action and help researchers to develop biomarkers for antidepressant response. If the hypotheses suggested in this review are verified by further trials, they could also constitute a starting point to developing safer and more efficient antidepressants, as well as provide information about the interactions that exist between different neurotransmitters systems. |
1404.7281 | Tom Michoel | Sofie Demeyer, Tom Michoel | Graph-based data integration predicts long-range regulatory interactions
across the human genome | 19 pages, 7 figures, 2 tables + 12 pages supplementary material (4
figures, 7 tables) | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcriptional regulation of gene expression is one of the main processes
that affect cell diversification from a single set of genes. Regulatory
proteins often interact with DNA regions located distally from the
transcription start sites (TSS) of the genes. We developed a computational
method that combines open chromatin and gene expression information for a large
number of cell types to identify these distal regulatory elements. Our method
builds correlation graphs for publicly available DNase-seq and exon array
datasets with matching samples and uses graph-based methods to filter findings
supported by multiple datasets and remove indirect interactions. The resulting
set of interactions was validated with both anecdotal information of known
long-range interactions and unbiased experimental data deduced from Hi-C and
CAGE experiments. Our results provide a novel set of high-confidence candidate
open chromatin regions involved in gene regulation, often located several Mb
away from the TSS of their target gene.
| [
{
"created": "Tue, 29 Apr 2014 09:19:40 GMT",
"version": "v1"
}
] | 2014-04-30 | [
[
"Demeyer",
"Sofie",
""
],
[
"Michoel",
"Tom",
""
]
] | Transcriptional regulation of gene expression is one of the main processes that affect cell diversification from a single set of genes. Regulatory proteins often interact with DNA regions located distally from the transcription start sites (TSS) of the genes. We developed a computational method that combines open chromatin and gene expression information for a large number of cell types to identify these distal regulatory elements. Our method builds correlation graphs for publicly available DNase-seq and exon array datasets with matching samples and uses graph-based methods to filter findings supported by multiple datasets and remove indirect interactions. The resulting set of interactions was validated with both anecdotal information of known long-range interactions and unbiased experimental data deduced from Hi-C and CAGE experiments. Our results provide a novel set of high-confidence candidate open chromatin regions involved in gene regulation, often located several Mb away from the TSS of their target gene. |
q-bio/0507011 | B. Roy Frieden | B. Roy Frieden and Robert A. Gatenby | Power laws of complex systems from Extreme physical information | null | null | 10.1103/PhysRevE.72.036101 | null | q-bio.CB | null | Many complex systems obey allometric, or power, laws y=Yx^{a}. Here y is the
measured value of some system attribute a, Y is a constant, and x is a
stochastic variable. Remarkably, for many living systems the exponent a is
limited to values +or- n/4, n=0,1,2... Here x is the mass of a randomly
selected creature in the population. These quarter-power laws hold for many
attributes, such as pulse rate (n=-1). Allometry has, in the past, been
theoretically justified on a case-by-case basis. An ultimate goal is to find a
common cause for allometry of all types and for both living and nonliving
systems. The principle I - J = extrem. of Extreme physical information (EPI) is
found to provide such a cause. It describes the flow of Fisher information J =>
I from an attribute value a on the cell level to its exterior observation y.
Data y are formed via a system channel function y = f(x,a), with f(x,a) to be
found. Extremizing the difference I - J through variation of f(x,a) results in
a general allometric law f(x,a)= y = Yx^{a}. Darwinian evolution is presumed to
cause a second extremization of I - J, now with respect to the choice of a. The
solution is a=+or-n/4, n=0,1,2..., defining the particular powers of biological
allometry. Under special circumstances, the model predicts that such biological
systems are controlled by but two distinct intracellular information sources.
These sources are conjectured to be cellular DNA and cellular transmembrane ion
gradients.
| [
{
"created": "Thu, 7 Jul 2005 20:57:29 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Frieden",
"B. Roy",
""
],
[
"Gatenby",
"Robert A.",
""
]
] | Many complex systems obey allometric, or power, laws y=Yx^{a}. Here y is the measured value of some system attribute a, Y is a constant, and x is a stochastic variable. Remarkably, for many living systems the exponent a is limited to values +or- n/4, n=0,1,2... Here x is the mass of a randomly selected creature in the population. These quarter-power laws hold for many attributes, such as pulse rate (n=-1). Allometry has, in the past, been theoretically justified on a case-by-case basis. An ultimate goal is to find a common cause for allometry of all types and for both living and nonliving systems. The principle I - J = extrem. of Extreme physical information (EPI) is found to provide such a cause. It describes the flow of Fisher information J => I from an attribute value a on the cell level to its exterior observation y. Data y are formed via a system channel function y = f(x,a), with f(x,a) to be found. Extremizing the difference I - J through variation of f(x,a) results in a general allometric law f(x,a)= y = Yx^{a}. Darwinian evolution is presumed to cause a second extremization of I - J, now with respect to the choice of a. The solution is a=+or-n/4, n=0,1,2..., defining the particular powers of biological allometry. Under special circumstances, the model predicts that such biological systems are controlled by but two distinct intracellular information sources. These sources are conjectured to be cellular DNA and cellular transmembrane ion gradients |
0706.3735 | Jeremy Gunawardena | Matthew Thomson and Jeremy Gunawardena | Multi-bit information storage by multisite phosphorylation | 29 pages, 6 figures | null | null | null | q-bio.MN | null | Cells store information in DNA and in stable programs of gene expression,
which thereby implement forms of long-term cellular memory. Cells must also
possess short-term forms of information storage, implemented
post-translationally, to transduce and interpret external signals. CaMKII, for
instance, is thought to implement a one-bit (bistable) short-term memory
required for learning at post-synaptic densities. Here we show by mathematical
analysis that multisite protein phosphorylation, which is ubiquitous in all
eukaryotic signalling pathways, exhibits multistability for which the maximal
number of steady states increases with the number of sites. If there are n
sites, the maximal information storage capacity is at least log_2 (n+2)/2 bits
when n is even and log_2 (n+1)/2 bits when n is odd. Furthermore, when
substrate is in excess, enzyme saturation together with an alternating low/high
pattern in the site-specific relative catalytic efficiencies, enriches for
multistability. That is, within physiologically plausible ranges for
parameters, multistability becomes more likely than monostability. We discuss
the experimental challenges in pursuing these predictions and in determining
the biological role of short-term information storage.
| [
{
"created": "Tue, 26 Jun 2007 04:02:19 GMT",
"version": "v1"
}
] | 2007-06-27 | [
[
"Thomson",
"Matthew",
""
],
[
"Gunawardena",
"Jeremy",
""
]
] | Cells store information in DNA and in stable programs of gene expression, which thereby implement forms of long-term cellular memory. Cells must also possess short-term forms of information storage, implemented post-translationally, to transduce and interpret external signals. CaMKII, for instance, is thought to implement a one-bit (bistable) short-term memory required for learning at post-synaptic densities. Here we show by mathematical analysis that multisite protein phosphorylation, which is ubiquitous in all eukaryotic signalling pathways, exhibits multistability for which the maximal number of steady states increases with the number of sites. If there are n sites, the maximal information storage capacity is at least log_2 (n+2)/2 bits when n is even and log_2 (n+1)/2 bits when n is odd. Furthermore, when substrate is in excess, enzyme saturation together with an alternating low/high pattern in the site-specific relative catalytic efficiencies, enriches for multistability. That is, within physiologically plausible ranges for parameters, multistability becomes more likely than monostability. We discuss the experimental challenges in pursuing these predictions and in determining the biological role of short-term information storage. |
q-bio/0410021 | Markus Porto | Markus Porto, H. Eduardo Roman, Michele Vendruscolo, and Ugo Bastolla | Prediction of site-specific amino acid distributions and limits of
divergent evolutionary changes in protein sequences | 9 pages, 4 figures, references added | Mol. Biol. Evol. 22, 630-638 (2005) (see also Editor's Erratum,
Mol. Biol. Evol. 22, 1156 (2005)) | null | null | q-bio.BM q-bio.PE | null | We derive an analytic expression for site-specific stationary distributions
of amino acids from the Structurally Constrained Neutral (SCN) model of protein
evolution with conservation of folding stability. The stationary distributions
that we obtain have a Boltzmann-like shape, and their effective temperature
parameter, measuring the limit of divergent evolutionary changes at a given
site, can be predicted from a site-specific topological property, the principal
eigenvector of the contact matrix of the native conformation of the protein.
These analytic results, obtained without free parameters, are compared with
simulations of the SCN model and with the site-specific amino acid
distributions obtained from the Protein Data Bank. These results also provide
new insights into how the topology of a protein fold influences its
designability, i.e. the number of sequences compatible with that fold. The
dependence of the effective temperature on the principal eigenvector decreases
for longer proteins, a possible consequence of the fact that selection for
thermodynamic stability becomes weaker in this case.
| [
{
"created": "Tue, 19 Oct 2004 14:29:29 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Nov 2004 18:05:13 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Porto",
"Markus",
""
],
[
"Roman",
"H. Eduardo",
""
],
[
"Vendruscolo",
"Michele",
""
],
[
"Bastolla",
"Ugo",
""
]
] | We derive an analytic expression for site-specific stationary distributions of amino acids from the Structurally Constrained Neutral (SCN) model of protein evolution with conservation of folding stability. The stationary distributions that we obtain have a Boltzmann-like shape, and their effective temperature parameter, measuring the limit of divergent evolutionary changes at a given site, can be predicted from a site-specific topological property, the principal eigenvector of the contact matrix of the native conformation of the protein. These analytic results, obtained without free parameters, are compared with simulations of the SCN model and with the site-specific amino acid distributions obtained from the Protein Data Bank. These results also provide new insights into how the topology of a protein fold influences its designability, i.e. the number of sequences compatible with that fold. The dependence of the effective temperature on the principal eigenvector decreases for longer proteins, a possible consequence of the fact that selection for thermodynamic stability becomes weaker in this case. |
1501.05003 | Sarthok Sircar | S. Sircar and A. J. Roberts | Ion mediated crosslink driven mucous swelling kinetics | 18 pages, 9 figures | null | null | null | q-bio.TO cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an experimentally guided, multi-phasic, multi-species ionic gel
model to compare and make qualitative predictions on the rheology of mucus of
healthy individuals (Wild Type) versus those infected with Cystic Fibrosis. The
mixture theory consists of the mucus (polymer phase) and water (solvent phase)
as well as several different ions: H+, Na+ and Ca++. The model is linearized to
study the hydration of spherically symmetric mucus gels and calibrated against
the experimental data of mucus diffusivities. Near equilibrium, the linearized
form of the equation describing the radial size of the gel, reduces to the
well-known expression used in the kinetic theory of swelling hydrogels.
Numerical studies reveal that the Donnan potential is the dominating mechanism
driving the mucus swelling/deswelling transition. However, the altered swelling
kinetics of the Cystic Fibrosis infected mucus is not merely governed by the
hydroelectric composition of the swelling media, but also due to the altered
movement of electrolytes as well as due to the defective properties of the
mucin polymer network.
| [
{
"created": "Tue, 20 Jan 2015 22:17:37 GMT",
"version": "v1"
}
] | 2015-01-22 | [
[
"Sircar",
"S.",
""
],
[
"Roberts",
"A. J.",
""
]
] | We present an experimentally guided, multi-phasic, multi-species ionic gel model to compare and make qualitative predictions on the rheology of mucus of healthy individuals (Wild Type) versus those infected with Cystic Fibrosis. The mixture theory consists of the mucus (polymer phase) and water (solvent phase) as well as several different ions: H+, Na+ and Ca++. The model is linearized to study the hydration of spherically symmetric mucus gels and calibrated against the experimental data of mucus diffusivities. Near equilibrium, the linearized form of the equation describing the radial size of the gel, reduces to the well-known expression used in the kinetic theory of swelling hydrogels. Numerical studies reveal that the Donnan potential is the dominating mechanism driving the mucus swelling/deswelling transition. However, the altered swelling kinetics of the Cystic Fibrosis infected mucus is not merely governed by the hydroelectric composition of the swelling media, but also due to the altered movement of electrolytes as well as due to the defective properties of the mucin polymer network. |
q-bio/0702023 | Brigitte Gaillard | H. Meunier (DEPE-Iphc), J.-B. Leca (DEPE-Iphc), J.-L. Deneubourg, O.
Petit (DEPE-Iphc) | Group movement decisions in capuchin monkeys : the utility of an
experimental study and a mathematical model to explore the relationship
between individual and collective behaviours | null | Behaviour 143 (2006) 1511-1527 | null | null | q-bio.PE | null | In primate groups, collective movements are typically described as processes
dependent on leadership mechanisms. However, in some species, decision-making
includes negotiations and distributed leadership. These facts suggest that
simple underlying processes may explain certain decision mechanisms during
collective movements. To study such processes, we have designed experiments on
white-faced capuchin monkeys (Cebus capucinus) during which we provoked
collective movements involving a binary choice. These experiments enabled us to
analyse the spatial decisions of individuals in the group. We found that the
underlying process includes anonymous mimetism, which means that each
individual may influence all members of the group. To support this result, we
created a mathematical model issued from our experimental data. A totally
anonymous model does not fit perfectly with our experimental distribution. A
more individualised model, which takes into account the specific behaviour of
social peripheral individuals, revealed the validity of the mimetism
hypothesis. Even though white-faced capuchins have complex cognitive abilities,
a coexistence of anonymous and social mechanisms appears to influence their
choice of direction during collective movements. The present approach may offer
vital insights into the relationships between individual behaviours and their
emergent collective acts.
| [
{
"created": "Fri, 9 Feb 2007 15:15:34 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Meunier",
"H.",
"",
"DEPE-Iphc"
],
[
"Leca",
"J. -B.",
"",
"DEPE-Iphc"
],
[
"Deneubourg",
"J. -L.",
"",
"DEPE-Iphc"
],
[
"Petit",
"O.",
"",
"DEPE-Iphc"
]
] | In primate groups, collective movements are typically described as processes dependent on leadership mechanisms. However, in some species, decision-making includes negotiations and distributed leadership. These facts suggest that simple underlying processes may explain certain decision mechanisms during collective movements. To study such processes, we have designed experiments on white-faced capuchin monkeys (Cebus capucinus) during which we provoked collective movements involving a binary choice. These experiments enabled us to analyse the spatial decisions of individuals in the group.We found that the underlying process includes anonymous mimetism, which means that each individual may influence all members of the group. To support this result, we created a mathematical model issued from our experimental data. A totally anonymous model does not fit perfectly with our experimental distribution. A more individualised model, which takes into account the specific behaviour of social peripheral individuals, revealed the validity of the mimetism hypothesis. Even though white-faced capuchins have complex cognitive abilities, a coexistence of anonymous and social mechanisms appears to influence their choice of direction during collective movements. The present approach may offer vital insights into the relationships between individual behaviours and their emergent collective acts. |
1601.04974 | Leo van Iersel | Laura Jetten and Leo van Iersel | Nonbinary tree-based phylogenetic networks | null | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rooted phylogenetic networks are used to describe evolutionary histories that
contain non-treelike evolutionary events such as hybridization and horizontal
gene transfer. In some cases, such histories can be described by a phylogenetic
base-tree with additional linking arcs, which can for example represent gene
transfer events. Such phylogenetic networks are called tree-based. Here, we
consider two possible generalizations of this concept to nonbinary networks,
which we call tree-based and strictly-tree-based nonbinary phylogenetic
networks. We give simple graph-theoretic characterizations of tree-based and
strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each
of these two classes that it can be decided in polynomial time whether a given
network is contained in the class. Our approach also provides a new view on
tree-based binary phylogenetic networks. Finally, we discuss two examples of
nonbinary phylogenetic networks in biology and show how our results can be
applied to them.
| [
{
"created": "Tue, 19 Jan 2016 16:13:31 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Jun 2016 08:11:59 GMT",
"version": "v2"
},
{
"created": "Fri, 30 Sep 2016 05:34:13 GMT",
"version": "v3"
}
] | 2016-10-03 | [
[
"Jetten",
"Laura",
""
],
[
"van Iersel",
"Leo",
""
]
] | Rooted phylogenetic networks are used to describe evolutionary histories that contain non-treelike evolutionary events such as hybridization and horizontal gene transfer. In some cases, such histories can be described by a phylogenetic base-tree with additional linking arcs, which can for example represent gene transfer events. Such phylogenetic networks are called tree-based. Here, we consider two possible generalizations of this concept to nonbinary networks, which we call tree-based and strictly-tree-based nonbinary phylogenetic networks. We give simple graph-theoretic characterizations of tree-based and strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each of these two classes that it can be decided in polynomial time whether a given network is contained in the class. Our approach also provides a new view on tree-based binary phylogenetic networks. Finally, we discuss two examples of nonbinary phylogenetic networks in biology and show how our results can be applied to them. |
1009.4141 | Yi Huiguang | Huiguang Yi | CO-phylum: An Assembly-Free Phylogenomic Approach for Close Related
Organisms | 21 pages, 6 figures | Nucl. Acids Res. first published online January 18, 2013 | 10.1093/nar/gkt003 | null | q-bio.QM q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Phylogenomic approaches developed thus far are either too time-consuming or
lack a solid evolutionary basis. Moreover, no phylogenomic approach is capable
of constructing a tree directly from unassembled raw sequencing data. A new
phylogenomic method, CO-phylum, is developed to alleviate these flaws.
CO-phylum can generate a high-resolution and highly accurate tree using
complete genomes or unassembled sequencing data of closely related organisms;
in addition, the CO-phylum distance is almost linear with the p-distance.
| [
{
"created": "Tue, 21 Sep 2010 16:48:58 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Nov 2010 00:04:47 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Apr 2011 14:26:03 GMT",
"version": "v3"
}
] | 2013-01-22 | [
[
"Yi",
"Huiguang",
""
]
] | Phylogenomic approaches developed thus far are either too time-consuming or lack a solid evolutionary basis. Moreover, no phylogenomic approach is capable of constructing a tree directly from unassembled raw sequencing data. A new phylogenomic method, CO-phylum, is developed to alleviate these flaws. CO-phylum can generate a high-resolution and highly accurate tree using complete genomes or unassembled sequencing data of closely related organisms; in addition, the CO-phylum distance is almost linear with the p-distance.
1310.2210 | Siavash Ghavami | Siavash Ghavami, Olaf Wolkenhauer, Farshad Lahouti, Mukhtar Ullah,
Michael Linnebacher | Accounting for Randomness in Measurement and Sampling in Study of Cancer
Cell Population Dynamics | 41 pages, 9 figures, submitted to IET System Biology Journal | null | 10.1049/iet-syb.2013.0031 | null | q-bio.CB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When studying the development of malignant tumours, it is important to know and
predict the proportions of different cell types in tissue samples. Knowing the
expected temporal evolution of the proportion of normal tissue cells, compared
to stem-like and non-stem like cancer cells, gives an indication about the
progression of the disease and indicates the expected response to interventions
with drugs. Such processes have been modeled using Markov processes. An
essential step for the simulation of such models is then the determination of
state transition probabilities. We here consider the experimentally more
realistic scenario in which the measurement of cell population sizes is noisy,
leading to a particular hidden Markov model. In this context, randomness in
measurement is related to noisy measurements, which are used for the estimation
of the transition probability matrix. Randomness in sampling, on the other
hand, is here related to the error in estimating the state probability from
small cell populations. Using aggregated data of fluorescence-activated cell
sorting (FACS) measurement, we develop a minimum mean square error estimator
(MMSE) and maximum likelihood (ML) estimator and formulate two problems to find
the minimum number of required samples and measurements to guarantee the
accuracy of predicted population sizes using a transition probability matrix
estimated from noisy data. We analyze the properties of two estimators for
different noise distributions and prove an optimal solution for Gaussian
distributions with the MMSE. Our numerical results show that, for noisy
measurements, the convergence of the transition probabilities and steady
states differs widely from their real values if one uses the standard
deterministic approach, in which measurements are assumed to be noise free.
| [
{
"created": "Tue, 8 Oct 2013 18:13:09 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jan 2014 10:09:25 GMT",
"version": "v2"
}
] | 2015-11-24 | [
[
"Ghavami",
"Siavash",
""
],
[
"Wolkenhauer",
"Olaf",
""
],
[
"Lahouti",
"Farshad",
""
],
[
"Ullah",
"Mukhtar",
""
],
[
"Linnebacher",
"Michael",
""
]
] | When studying the development of malignant tumours, it is important to know and predict the proportions of different cell types in tissue samples. Knowing the expected temporal evolution of the proportion of normal tissue cells, compared to stem-like and non-stem like cancer cells, gives an indication about the progression of the disease and indicates the expected response to interventions with drugs. Such processes have been modeled using Markov processes. An essential step for the simulation of such models is then the determination of state transition probabilities. We here consider the experimentally more realistic scenario in which the measurement of cell population sizes is noisy, leading to a particular hidden Markov model. In this context, randomness in measurement is related to noisy measurements, which are used for the estimation of the transition probability matrix. Randomness in sampling, on the other hand, is here related to the error in estimating the state probability from small cell populations. Using aggregated data of fluorescence-activated cell sorting (FACS) measurement, we develop a minimum mean square error estimator (MMSE) and maximum likelihood (ML) estimator and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes using a transition probability matrix estimated from noisy data. We analyze the properties of two estimators for different noise distributions and prove an optimal solution for Gaussian distributions with the MMSE. Our numerical results show that, for noisy measurements, the convergence of the transition probabilities and steady states differs widely from their real values if one uses the standard deterministic approach, in which measurements are assumed to be noise free.
1608.01936 | Bernard Feldman | Bernard J. Feldman | Explanation for Cancer in Rats, Mice and Humans due to Cell Phone
Radiofrequency Radiation | Five pages, no figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very recently, the National Toxicology Program reported a correlation between
exposure to whole body 900 MHz radiofrequency radiation and cancer in the
brains and hearts of Sprague Dawley male rats. This paper proposes the
following explanation for these results. The neurons around the rat's brain and
heart form closed electrical circuits and, following Faraday's Law, 900 MHz
radiofrequency radiation induces 900 MHz electrical currents in these neural
circuits. In turn, these 900 MHz currents in the neural circuits generate
sufficient localized heat in the neural cell axons to shift the equilibrium
concentration of carcinogenic radicals to higher levels and thus, to higher
incidences of cancer. This model is then applied to mice and humans.
| [
{
"created": "Fri, 22 Jul 2016 16:02:00 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jul 2017 19:15:51 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Jul 2017 13:38:51 GMT",
"version": "v3"
}
] | 2017-07-20 | [
[
"Feldman",
"Bernard J.",
""
]
] | Very recently, the National Toxicology Program reported a correlation between exposure to whole body 900 MHz radiofrequency radiation and cancer in the brains and hearts of Sprague Dawley male rats. This paper proposes the following explanation for these results. The neurons around the rat's brain and heart form closed electrical circuits and, following Faraday's Law, 900 MHz radiofrequency radiation induces 900 MHz electrical currents in these neural circuits. In turn, these 900 MHz currents in the neural circuits generate sufficient localized heat in the neural cell axons to shift the equilibrium concentration of carcinogenic radicals to higher levels and thus, to higher incidences of cancer. This model is then applied to mice and humans. |
2103.12186 | Feng Fu | Kamran Kaveh and Feng Fu | Immune checkpoint therapy modeling of PD-1/PD-L1 blockades reveals
subtle difference in their response dynamics and potential synergy in
combination | null | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Immune checkpoint therapy is one of the most promising immunotherapeutic
methods that are likely able to give rise to durable treatment response for
various cancer types. Despite much progress in the past decade, there are still
critical open questions with particular regard to quantifying and predicting
the efficacy of treatment and potential optimal regimens for combining
different immune-checkpoint blockades. To shed light on this issue, here we
develop clinically-relevant, dynamical systems models of cancer immunotherapy
with a focus on the immune checkpoint PD-1/PD-L1 blockades. Our model allows
the acquisition of adaptive immune resistance in the absence of treatment,
whereas immune checkpoint blockades can reverse such resistance and boost
anti-tumor activities of effector cells. Our numerical analysis predicts that
anti-PD-1 agents are commonly less effective than anti-PD-L1 agents for a wide
range of model parameters. We also observe that combination treatment of
anti-PD-1 and anti-PD-L1 blockades leads to a desirable synergistic effect. Our
modeling framework lays the groundwork for future data-driven analysis on
combination therapeutics of immune-checkpoint treatment regimes and thorough
investigation of optimized treatment on a patient-by-patient basis.
| [
{
"created": "Mon, 22 Mar 2021 21:28:26 GMT",
"version": "v1"
}
] | 2021-03-24 | [
[
"Kaveh",
"Kamran",
""
],
[
"Fu",
"Feng",
""
]
] | Immune checkpoint therapy is one of the most promising immunotherapeutic methods that are likely able to give rise to durable treatment response for various cancer types. Despite much progress in the past decade, there are still critical open questions with particular regard to quantifying and predicting the efficacy of treatment and potential optimal regimens for combining different immune-checkpoint blockades. To shed light on this issue, here we develop clinically-relevant, dynamical systems models of cancer immunotherapy with a focus on the immune checkpoint PD-1/PD-L1 blockades. Our model allows the acquisition of adaptive immune resistance in the absence of treatment, whereas immune checkpoint blockades can reverse such resistance and boost anti-tumor activities of effector cells. Our numerical analysis predicts that anti-PD-1 agents are commonly less effective than anti-PD-L1 agents for a wide range of model parameters. We also observe that combination treatment of anti-PD-1 and anti-PD-L1 blockades leads to a desirable synergistic effect. Our modeling framework lays the groundwork for future data-driven analysis on combination therapeutics of immune-checkpoint treatment regimes and thorough investigation of optimized treatment on a patient-by-patient basis.
1906.09908 | Zhen Zhou | Zhen Zhou, Xiaobo Chen, Yu Zhang, Lishan Qiao, Renping Yu, Gang Pan,
Han Zhang, Dinggang Shen | Brain Network Construction and Classification Toolbox (BrainNetClass) | null | null | 10.1002/hbm.24979 | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Brain functional networks have become an increasingly used approach in
understanding brain functions and diseases. Many network construction methods
have been developed, whereas the majority of the studies still used static
pairwise Pearson's correlation-based functional connectivity. The goal of this
work is to introduce a toolbox, named "Brain Network Construction and
Classification" (BrainNetClass) to the field to promote more advanced brain
network construction methods. It comprises various brain network construction
methods, including some state-of-the-art methods that were recently developed
to capture more complex interactions among brain regions along with connectome
feature extraction, reduction, and parameter optimization towards network-based
individualized classification. BrainNetClass is a MATLAB-based, open-source,
cross-platform toolbox with graphical user-friendly interfaces for cognitive
and clinical neuroscientists to perform rigorous computer-aided diagnosis with
interpretable result presentations even though they do not possess neuroimage
computing and machine learning knowledge. We demonstrate the implementations of
this toolbox on real resting-state functional MRI datasets. BrainNetClass
(v1.0) can be downloaded from https://github.com/zzstefan/BrainNetClass.
| [
{
"created": "Mon, 17 Jun 2019 18:19:18 GMT",
"version": "v1"
}
] | 2020-03-13 | [
[
"Zhou",
"Zhen",
""
],
[
"Chen",
"Xiaobo",
""
],
[
"Zhang",
"Yu",
""
],
[
"Qiao",
"Lishan",
""
],
[
"Yu",
"Renping",
""
],
[
"Pan",
"Gang",
""
],
[
"Zhang",
"Han",
""
],
[
"Shen",
"Dinggang",
""
]
] | Brain functional networks have become an increasingly used approach in understanding brain functions and diseases. Many network construction methods have been developed, whereas the majority of the studies still used static pairwise Pearson's correlation-based functional connectivity. The goal of this work is to introduce a toolbox, named "Brain Network Construction and Classification" (BrainNetClass) to the field to promote more advanced brain network construction methods. It comprises various brain network construction methods, including some state-of-the-art methods that were recently developed to capture more complex interactions among brain regions along with connectome feature extraction, reduction, and parameter optimization towards network-based individualized classification. BrainNetClass is a MATLAB-based, open-source, cross-platform toolbox with graphical user-friendly interfaces for cognitive and clinical neuroscientists to perform rigorous computer-aided diagnosis with interpretable result presentations even though they do not possess neuroimage computing and machine learning knowledge. We demonstrate the implementations of this toolbox on real resting-state functional MRI datasets. BrainNetClass (v1.0) can be downloaded from https://github.com/zzstefan/BrainNetClass.
1810.00954 | Alexandru Hening | Alexandru Hening and Dang H. Nguyen | The competitive exclusion principle in stochastic environments | 26 pages, 4 figures, to appear in Journal of Mathematical Biology | Journal of Mathematical Biology, vol. 80,1323-1351(2020) | null | null | q-bio.PE math.DS math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In its simplest form, the competitive exclusion principle states that a
number of species competing for a smaller number of resources cannot coexist.
However, it has been observed empirically that in some settings it is possible
to have coexistence. One example is Hutchinson's `paradox of the plankton'.
This is an instance where a large number of phytoplankton species coexist while
competing for a very limited number of resources. Both experimental and
theoretical studies have shown that temporal fluctuations of the environment
can facilitate coexistence for competing species. Hutchinson conjectured that
one can get coexistence because nonequilibrium conditions would make it
possible for different species to be favored by the environment at different
times.
In this paper we show in various settings how a variable (stochastic)
environment enables a set of competing species limited by a smaller number of
resources or other density dependent factors to coexist. If the environmental
fluctuations are modeled by white noise, and the per-capita growth rates of the
competitors depend linearly on the resources, we prove that there is
competitive exclusion. However, if either the dependence between the growth
rates and the resources is not linear or the white noise term is nonlinear we
show that coexistence on fewer resources than species is possible. Even more
surprisingly, if the temporal environmental variation comes from switching the
environment at random times between a finite number of possible states, it is
possible for all species to coexist even if the growth rates depend linearly on
the resources. We show in an example (a variant of which first appeared in
Benaim and Lobry '16) that, contrary to Hutchinson's explanation, one can
switch between two environments in which the same species is favored and still
get coexistence.
| [
{
"created": "Mon, 1 Oct 2018 20:05:23 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2019 01:03:40 GMT",
"version": "v2"
}
] | 2021-02-18 | [
[
"Hening",
"Alexandru",
""
],
[
"Nguyen",
"Dang H.",
""
]
] | In its simplest form, the competitive exclusion principle states that a number of species competing for a smaller number of resources cannot coexist. However, it has been observed empirically that in some settings it is possible to have coexistence. One example is Hutchinson's `paradox of the plankton'. This is an instance where a large number of phytoplankton species coexist while competing for a very limited number of resources. Both experimental and theoretical studies have shown that temporal fluctuations of the environment can facilitate coexistence for competing species. Hutchinson conjectured that one can get coexistence because nonequilibrium conditions would make it possible for different species to be favored by the environment at different times. In this paper we show in various settings how a variable (stochastic) environment enables a set of competing species limited by a smaller number of resources or other density dependent factors to coexist. If the environmental fluctuations are modeled by white noise, and the per-capita growth rates of the competitors depend linearly on the resources, we prove that there is competitive exclusion. However, if either the dependence between the growth rates and the resources is not linear or the white noise term is nonlinear we show that coexistence on fewer resources than species is possible. Even more surprisingly, if the temporal environmental variation comes from switching the environment at random times between a finite number of possible states, it is possible for all species to coexist even if the growth rates depend linearly on the resources. We show in an example (a variant of which first appeared in Benaim and Lobry '16) that, contrary to Hutchinson's explanation, one can switch between two environments in which the same species is favored and still get coexistence. |
1402.6971 | Ka-Kit Lam | Ka-Kit Lam, Asif Khalak, David Tse | Near-optimal Assembly for Shotgun Sequencing with Noisy Reads | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work identified the fundamental limits on the information requirements
in terms of read length and coverage depth required for successful de novo
genome reconstruction from shotgun sequencing data, based on the idealistic
assumption of no errors in the reads (noiseless reads). In this work, we show
that even when there is noise in the reads, one can successfully reconstruct
with information requirements close to the noiseless fundamental limit. A new
assembly algorithm, X-phased Multibridging, is designed based on a
probabilistic model of the genome. It is shown through analysis to perform well
on the model, and through simulations to perform well on real genomes.
| [
{
"created": "Thu, 27 Feb 2014 16:58:26 GMT",
"version": "v1"
}
] | 2014-02-28 | [
[
"Lam",
"Ka-Kit",
""
],
[
"Khalak",
"Asif",
""
],
[
"Tse",
"David",
""
]
] | Recent work identified the fundamental limits on the information requirements in terms of read length and coverage depth required for successful de novo genome reconstruction from shotgun sequencing data, based on the idealistic assumption of no errors in the reads (noiseless reads). In this work, we show that even when there is noise in the reads, one can successfully reconstruct with information requirements close to the noiseless fundamental limit. A new assembly algorithm, X-phased Multibridging, is designed based on a probabilistic model of the genome. It is shown through analysis to perform well on the model, and through simulations to perform well on real genomes. |
2406.18240 | Iv\'an Matas Gonzalez | Francisca Silva-Claver\'ia and Carmen Serrano and Iv\'an Matas and
Amalia Serrano and Tom\'as Toledo-Pastrana and David Moreno-Ram\'irez and
Bego\~na Acha | Concordance in basal cell carcinoma diagnosis. Building a proper ground
truth to train Artificial Intelligence tools | Manuscript word count: 3000, Number of figures: 2, Number of tables:
3 | null | null | null | q-bio.QM cs.CV cs.IR stat.ME | http://creativecommons.org/licenses/by/4.0/ | Background: The existence of different basal cell carcinoma (BCC) clinical
criteria cannot be objectively validated. An adequate ground-truth is needed to
train an artificial intelligence (AI) tool that explains the BCC diagnosis by
providing its dermoscopic features. Objectives: To determine the consensus
among dermatologists on dermoscopic criteria of 204 BCC. To analyze the
performance of an AI tool when the ground-truth is inferred. Methods: A single
center, diagnostic and prospective study was conducted to analyze the agreement
in dermoscopic criteria by four dermatologists and then derive a reference
standard. 1434 dermoscopic images were used; they were taken by a primary
health physician, sent via teledermatology, and diagnosed by a dermatologist.
They were randomly selected from the teledermatology platform (2019-2021). 204
of them were tested with an AI tool; the remainder trained it. The performance
of the AI tool trained using the ground-truth of one dermatologist versus the
ground-truth statistically inferred from the consensus of four dermatologists
was analyzed using McNemar's test and Hamming distance. Results: Dermatologists
achieve perfect agreement in the diagnosis of BCC (Fleiss-Kappa=0.9079), and a
high correlation with the biopsy (PPV=0.9670). However, there is low agreement
in detecting some dermoscopic criteria. Statistical differences were found in
the performance of the AI tool trained using the ground-truth of one
dermatologist versus the ground-truth statistically inferred from the consensus
of four dermatologists. Conclusions: Care should be taken when training an AI
tool to determine the BCC patterns present in a lesion. Ground-truth should be
established from multiple dermatologists.
| [
{
"created": "Wed, 26 Jun 2024 10:44:48 GMT",
"version": "v1"
}
] | 2024-06-27 | [
[
"Silva-Clavería",
"Francisca",
""
],
[
"Serrano",
"Carmen",
""
],
[
"Matas",
"Iván",
""
],
[
"Serrano",
"Amalia",
""
],
[
"Toledo-Pastrana",
"Tomás",
""
],
[
"Moreno-Ramírez",
"David",
""
],
[
"Acha",
"Begoña",
""
]
] | Background: The existence of different basal cell carcinoma (BCC) clinical criteria cannot be objectively validated. An adequate ground-truth is needed to train an artificial intelligence (AI) tool that explains the BCC diagnosis by providing its dermoscopic features. Objectives: To determine the consensus among dermatologists on dermoscopic criteria of 204 BCC. To analyze the performance of an AI tool when the ground-truth is inferred. Methods: A single center, diagnostic and prospective study was conducted to analyze the agreement in dermoscopic criteria by four dermatologists and then derive a reference standard. 1434 dermoscopic images were used; they were taken by a primary health physician, sent via teledermatology, and diagnosed by a dermatologist. They were randomly selected from the teledermatology platform (2019-2021). 204 of them were tested with an AI tool; the remainder trained it. The performance of the AI tool trained using the ground-truth of one dermatologist versus the ground-truth statistically inferred from the consensus of four dermatologists was analyzed using McNemar's test and Hamming distance. Results: Dermatologists achieve perfect agreement in the diagnosis of BCC (Fleiss-Kappa=0.9079), and a high correlation with the biopsy (PPV=0.9670). However, there is low agreement in detecting some dermoscopic criteria. Statistical differences were found in the performance of the AI tool trained using the ground-truth of one dermatologist versus the ground-truth statistically inferred from the consensus of four dermatologists. Conclusions: Care should be taken when training an AI tool to determine the BCC patterns present in a lesion. Ground-truth should be established from multiple dermatologists.
1308.6028 | Caterina La Porta AM | Caterina A. M. La Porta, Stefano Zapperi, James P. Sethna | Senescent Cells in Growing Tumors: Population Dynamics and Cancer Stem
Cells | null | PLoS Comput Biol 8(1): e1002316, 2012 | 10.1371/journal.pcbi.1002316 | null | q-bio.TO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumors are defined by their intense proliferation, but sometimes cancer cells
turn senescent and stop replicating. In the stochastic cancer model in which
all cells are tumorigenic, senescence is seen as the result of random
mutations, suggesting that it could represent a barrier to tumor growth. In the
hierarchical cancer model a subset of the cells, the cancer stem cells, divide
indefinitely while other cells eventually turn senescent. Here we formulate
cancer growth in mathematical terms and obtain predictions for the evolution of
senescence. We perform experiments in human melanoma cells which are compatible
with the hierarchical model and show that senescence is a reversible process
controlled by survivin. We conclude that enhancing senescence is unlikely to
provide a useful therapeutic strategy to fight cancer, unless the cancer stem
cells are specifically targeted.
| [
{
"created": "Wed, 28 Aug 2013 02:10:26 GMT",
"version": "v1"
}
] | 2013-08-29 | [
[
"La Porta",
"Caterina A. M.",
""
],
[
"Zapperi",
"Stefano",
""
],
[
"Sethna",
"James P.",
""
]
] | Tumors are defined by their intense proliferation, but sometimes cancer cells turn senescent and stop replicating. In the stochastic cancer model in which all cells are tumorigenic, senescence is seen as the result of random mutations, suggesting that it could represent a barrier to tumor growth. In the hierarchical cancer model a subset of the cells, the cancer stem cells, divide indefinitely while other cells eventually turn senescent. Here we formulate cancer growth in mathematical terms and obtain predictions for the evolution of senescence. We perform experiments in human melanoma cells which are compatible with the hierarchical model and show that senescence is a reversible process controlled by survivin. We conclude that enhancing senescence is unlikely to provide a useful therapeutic strategy to fight cancer, unless the cancer stem cells are specifically targeted.
1509.06926 | Matthias Chung | Matthias Chung, Justin Krueger, and Mihai Pop | Robust Parameter Estimation for Biological Systems: A Study on the
Dynamics of Microbial Communities | null | null | null | null | q-bio.QM math.OC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interest in the study of in-host microbial communities has increased in
recent years due to our improved understanding of the communities' significant
role in host health. As a result, the ability to model these communities using
differential equations, for example, and analyze the results has become
increasingly relevant. The size of the models and limitations in data
collection among many other considerations require that we develop new
parameter estimation methods to address the challenges that arise when using
traditional parameter estimation methods for models of these in-host microbial
communities. In this work, we present the challenges that appear when applying
traditional parameter estimation techniques to differential equation models of
microbial communities, and we provide an original, alternative method to those
techniques. We show the derivation of our method and how our method avoids the
limitations of traditional techniques while including additional benefits. We
also provide simulation studies to demonstrate our method's viability, the
application of our method to a model of intestinal microbial communities to
demonstrate the insights that can be gained from our method, and sample code to
give readers the opportunity to apply our method to their own research.
| [
{
"created": "Wed, 23 Sep 2015 11:24:50 GMT",
"version": "v1"
}
] | 2015-09-24 | [
[
"Chung",
"Matthias",
""
],
[
"Krueger",
"Justin",
""
],
[
"Pop",
"Mihai",
""
]
] | Interest in the study of in-host microbial communities has increased in recent years due to our improved understanding of the communities' significant role in host health. As a result, the ability to model these communities using differential equations, for example, and analyze the results has become increasingly relevant. The size of the models and limitations in data collection among many other considerations require that we develop new parameter estimation methods to address the challenges that arise when using traditional parameter estimation methods for models of these in-host microbial communities. In this work, we present the challenges that appear when applying traditional parameter estimation techniques to differential equation models of microbial communities, and we provide an original, alternative method to those techniques. We show the derivation of our method and how our method avoids the limitations of traditional techniques while including additional benefits. We also provide simulation studies to demonstrate our method's viability, the application of our method to a model of intestinal microbial communities to demonstrate the insights that can be gained from our method, and sample code to give readers the opportunity to apply our method to their own research. |
1803.00338 | Manuel Molano-Mazon | Manuel Molano-Mazon, Arno Onken, Eugenio Piasini, Stefano Panzeri | Synthesizing realistic neural population activity patterns using
Generative Adversarial Networks | Published as a conference paper at ICLR 2018 V2: minor changes in
supp. material | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to synthesize realistic patterns of neural activity is crucial
for studying neural information processing. Here we used the Generative
Adversarial Networks (GANs) framework to simulate the concerted activity of a
population of neurons. We adapted the Wasserstein-GAN variant to facilitate the
generation of unconstrained neural population activity patterns while still
benefiting from parameter sharing in the temporal domain. We demonstrate that
our proposed GAN, which we termed Spike-GAN, generates spike trains that match
accurately the first- and second-order statistics of datasets of tens of
neurons and also approximates well their higher-order statistics. We applied
Spike-GAN to a real dataset recorded from salamander retina and showed that it
performs as well as state-of-the-art approaches based on the maximum entropy
and the dichotomized Gaussian frameworks. Importantly, Spike-GAN does not
require specifying a priori the statistics to be matched by the model, and so
constitutes a more flexible method than these alternative approaches. Finally,
we show how to exploit a trained Spike-GAN to construct 'importance maps' to
detect the most relevant statistical structures present in a spike train.
Spike-GAN provides a powerful, easy-to-use technique for generating realistic
spiking neural activity and for describing the most relevant features of the
large-scale neural population recordings studied in modern systems
neuroscience.
| [
{
"created": "Thu, 1 Mar 2018 12:30:22 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Apr 2018 08:26:53 GMT",
"version": "v2"
}
] | 2018-04-18 | [
[
"Molano-Mazon",
"Manuel",
""
],
[
"Onken",
"Arno",
""
],
[
"Piasini",
"Eugenio",
""
],
[
"Panzeri",
"Stefano",
""
]
] | The ability to synthesize realistic patterns of neural activity is crucial for studying neural information processing. Here we used the Generative Adversarial Networks (GANs) framework to simulate the concerted activity of a population of neurons. We adapted the Wasserstein-GAN variant to facilitate the generation of unconstrained neural population activity patterns while still benefiting from parameter sharing in the temporal domain. We demonstrate that our proposed GAN, which we termed Spike-GAN, generates spike trains that match accurately the first- and second-order statistics of datasets of tens of neurons and also approximates well their higher-order statistics. We applied Spike-GAN to a real dataset recorded from salamander retina and showed that it performs as well as state-of-the-art approaches based on the maximum entropy and the dichotomized Gaussian frameworks. Importantly, Spike-GAN does not require specifying a priori the statistics to be matched by the model, and so constitutes a more flexible method than these alternative approaches. Finally, we show how to exploit a trained Spike-GAN to construct 'importance maps' to detect the most relevant statistical structures present in a spike train. Spike-GAN provides a powerful, easy-to-use technique for generating realistic spiking neural activity and for describing the most relevant features of the large-scale neural population recordings studied in modern systems neuroscience.
1606.02917 | Geir Halnes | Geir Halnes, Klas H. Pettersen, Leiv {\O}yehaug, Marie E. Rognes,
Gaute T. Einevoll | Astrocytic Ion Dynamics: Implications for Potassium Buffering and Liquid
Flow | 27 pages, 7 figures | M. De Pitt\`a and H. Berry (eds.), Computational Glioscience,
Springer Series in Computational Neuroscience, Springer Nature Switzerland AG
2019 | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review modeling of astrocyte ion dynamics with a specific focus on the
implications of so-called spatial potassium buffering, where excess potassium
in the extracellular space (ECS) is transported away to prevent pathological
neural spiking. The recently introduced Kirchoff-Nernst-Planck (KNP) scheme for
modeling ion dynamics in astrocytes (and brain tissue in general) is outlined
and used to study such spatial buffering. We next describe how the ion dynamics
of astrocytes may regulate microscopic liquid flow by osmotic effects and how
such microscopic flow can be linked to whole-brain macroscopic flow. We thus
include the key elements in a putative multiscale theory with astrocytes
linking neural activity on a microscopic scale to macroscopic fluid flow.
| [
{
"created": "Thu, 9 Jun 2016 11:41:49 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2019 23:26:09 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Dec 2019 14:50:57 GMT",
"version": "v3"
}
] | 2019-12-03 | [
[
"Halnes",
"Geir",
""
],
[
"Pettersen",
"Klas H.",
""
],
[
"Øyehaug",
"Leiv",
""
],
[
"Rognes",
"Marie E.",
""
],
[
"Einevoll",
"Gaute T.",
""
]
] | We review modeling of astrocyte ion dynamics with a specific focus on the implications of so-called spatial potassium buffering, where excess potassium in the extracellular space (ECS) is transported away to prevent pathological neural spiking. The recently introduced Kirchoff-Nernst-Planck (KNP) scheme for modeling ion dynamics in astrocytes (and brain tissue in general) is outlined and used to study such spatial buffering. We next describe how the ion dynamics of astrocytes may regulate microscopic liquid flow by osmotic effects and how such microscopic flow can be linked to whole-brain macroscopic flow. We thus include the key elements in a putative multiscale theory with astrocytes linking neural activity on a microscopic scale to macroscopic fluid flow. |
2102.10538 | Zhenyu Han | Zhenyu Han, Fengli Xu, Yong Li, Tao Jiang, Depeng Jin, Jianhua Lu,
James A. Evans | Policy-Aware Mobility Model Explains the Growth of COVID-19 in Cities | null | null | null | null | q-bio.PE cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continued spread of coronavirus, the task of forecasting distinctive
COVID-19 growth curves in different cities, which remain inadequately explained
by standard epidemiological models, is critical for medical supply and
treatment. Predictions must take into account non-pharmaceutical interventions
to slow the spread of coronavirus, including stay-at-home orders, social
distancing, quarantine and compulsory mask-wearing, leading to reductions in
intra-city mobility and viral transmission. Moreover, recent work associating
coronavirus with human mobility and detailed movement data suggest the need to
consider urban mobility in disease forecasts. Here we show that by
incorporating intra-city mobility and policy adoption into a novel
metapopulation SEIR model, we can accurately predict complex COVID-19 growth
patterns in U.S. cities ($R^2$ = 0.990). Estimated mobility change due to
policy interventions is consistent with empirical observation from Apple
Mobility Trends Reports (Pearson's R = 0.872), suggesting the utility of
model-based predictions where data are limited. Our model also reproduces urban
"superspreading", where a few neighborhoods account for most secondary
infections across urban space, arising from uneven neighborhood populations and
heightened intra-city churn in popular neighborhoods. Therefore, our model can
facilitate location-aware mobility reduction policy that more effectively
mitigates disease transmission at similar social cost. Finally, we demonstrate
our model can serve as a fine-grained analytic and simulation framework that
informs the design of rational non-pharmaceutical interventions policies.
| [
{
"created": "Sun, 21 Feb 2021 07:39:17 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Han",
"Zhenyu",
""
],
[
"Xu",
"Fengli",
""
],
[
"Li",
"Yong",
""
],
[
"Jiang",
"Tao",
""
],
[
"Jin",
"Depeng",
""
],
[
"Lu",
"Jianhua",
""
],
[
"Evans",
"James A.",
""
]
] | With the continued spread of coronavirus, the task of forecasting distinctive COVID-19 growth curves in different cities, which remain inadequately explained by standard epidemiological models, is critical for medical supply and treatment. Predictions must take into account non-pharmaceutical interventions to slow the spread of coronavirus, including stay-at-home orders, social distancing, quarantine and compulsory mask-wearing, leading to reductions in intra-city mobility and viral transmission. Moreover, recent work associating coronavirus with human mobility and detailed movement data suggest the need to consider urban mobility in disease forecasts. Here we show that by incorporating intra-city mobility and policy adoption into a novel metapopulation SEIR model, we can accurately predict complex COVID-19 growth patterns in U.S. cities ($R^2$ = 0.990). Estimated mobility change due to policy interventions is consistent with empirical observation from Apple Mobility Trends Reports (Pearson's R = 0.872), suggesting the utility of model-based predictions where data are limited. Our model also reproduces urban "superspreading", where a few neighborhoods account for most secondary infections across urban space, arising from uneven neighborhood populations and heightened intra-city churn in popular neighborhoods. Therefore, our model can facilitate location-aware mobility reduction policy that more effectively mitigates disease transmission at similar social cost. Finally, we demonstrate our model can serve as a fine-grained analytic and simulation framework that informs the design of rational non-pharmaceutical interventions policies. |