| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2010.01758
|
Claudia Solis-Lemus
|
Claudia Solis-Lemus, Arrigo Coen, Cecile Ane
|
On the Identifiability of Phylogenetic Networks under a Pseudolikelihood
model
| null | null | null | null |
q-bio.PE math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Tree of Life is the graphical structure that represents the evolutionary
process from single-cell organisms at the origin of life to the vast
biodiversity we see today. Reconstructing this tree from genomic sequences is
challenging due to the variety of biological forces that shape the signal in
the data; many of those processes, such as incomplete lineage sorting and
hybridization, can produce confounding information. Here, we present the
mathematical version of the identifiability proofs of phylogenetic networks
under the pseudolikelihood model in SNaQ. We establish that the ability to
detect different hybridization events depends on the number of nodes in the
hybridization blob, with small blobs (corresponding to closely related species)
being the hardest to detect. Our work focuses on level-1 networks, but draws
attention to the importance of identifiability studies on phylogenetic
inference methods for broader classes of networks.
|
[
{
"created": "Mon, 5 Oct 2020 03:28:25 GMT",
"version": "v1"
}
] |
2020-10-06
|
[
[
"Solis-Lemus",
"Claudia",
""
],
[
"Coen",
"Arrigo",
""
],
[
"Ane",
"Cecile",
""
]
] |
The Tree of Life is the graphical structure that represents the evolutionary process from single-cell organisms at the origin of life to the vast biodiversity we see today. Reconstructing this tree from genomic sequences is challenging due to the variety of biological forces that shape the signal in the data; many of those processes, such as incomplete lineage sorting and hybridization, can produce confounding information. Here, we present the mathematical version of the identifiability proofs of phylogenetic networks under the pseudolikelihood model in SNaQ. We establish that the ability to detect different hybridization events depends on the number of nodes in the hybridization blob, with small blobs (corresponding to closely related species) being the hardest to detect. Our work focuses on level-1 networks, but draws attention to the importance of identifiability studies on phylogenetic inference methods for broader classes of networks.
|
1804.04538
|
Priya Ranjan
|
Anju Mishra, Shanu Sharma, Sanjay Kumar, Priya Ranjan, and Amit
Ujlayan
|
Automated Classification of Hand-grip action on Objects using Machine
Learning
|
This is a report on an ongoing project
| null | null | null |
q-bio.NC cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain-computer interfaces (BCIs) are a current area of research aimed at
providing assistance to disabled persons. To cope with the growing needs of BCI
applications, this paper presents an automated classification scheme for
hand-grip actions on objects using electroencephalography (EEG) data. The
presented approach focuses on classifying correct and incorrect hand-grip
responses to objects from recorded EEG patterns. The method starts with
preprocessing of the data, followed by extraction of relevant features from the
epoch data in the form of discrete wavelet transform (DWT) coefficients and
entropy measures. After computing the feature vectors, artificial neural
network classifiers are used to classify the patterns into correct and
incorrect hand-grips on different objects. The proposed method was tested on a
real dataset containing EEG recordings from 14 persons. The results showed that
the proposed approach is effective and may be useful for developing a variety
of BCI-based devices to control hand movements.
|
[
{
"created": "Fri, 9 Mar 2018 11:51:43 GMT",
"version": "v1"
}
] |
2018-04-13
|
[
[
"Mishra",
"Anju",
""
],
[
"Sharma",
"Shanu",
""
],
[
"Kumar",
"Sanjay",
""
],
[
"Ranjan",
"Priya",
""
],
[
"Ujlayan",
"Amit",
""
]
] |
Brain-computer interfaces (BCIs) are a current area of research aimed at providing assistance to disabled persons. To cope with the growing needs of BCI applications, this paper presents an automated classification scheme for hand-grip actions on objects using electroencephalography (EEG) data. The presented approach focuses on classifying correct and incorrect hand-grip responses to objects from recorded EEG patterns. The method starts with preprocessing of the data, followed by extraction of relevant features from the epoch data in the form of discrete wavelet transform (DWT) coefficients and entropy measures. After computing the feature vectors, artificial neural network classifiers are used to classify the patterns into correct and incorrect hand-grips on different objects. The proposed method was tested on a real dataset containing EEG recordings from 14 persons. The results showed that the proposed approach is effective and may be useful for developing a variety of BCI-based devices to control hand movements.
|
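The feature-extraction step described in the abstract above (wavelet coefficients plus entropy measures from an EEG epoch) can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual pipeline: it uses a one-level Haar DWT and a Shannon entropy of normalised coefficient energies, and the toy `epoch` values are made up.

```python
import math

# Sketch (not the paper's exact pipeline): one level of a Haar discrete
# wavelet transform plus a Shannon-entropy feature, the kind of features the
# abstract describes extracting from EEG epochs before classification.

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def shannon_entropy(coeffs, eps=1e-12):
    """Entropy of the normalised coefficient energies, a common wavelet feature."""
    energy = [c * c for c in coeffs]
    total = sum(energy) + eps
    p = [e / total for e in energy]
    return -sum(pi * math.log2(pi + eps) for pi in p)

epoch = [0.1, 0.4, -0.2, 0.3, 0.8, -0.5, 0.0, 0.2]  # toy EEG epoch (made up)
approx, detail = haar_dwt(epoch)
features = [shannon_entropy(approx), shannon_entropy(detail)]
```

In a full pipeline these features would be stacked per channel and fed to the neural network classifier mentioned in the abstract.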
1003.5839
|
Arne Traulsen
|
Chaitanya S. Gokhale and Arne Traulsen
|
Evolutionary games in the multiverse
| null |
PNAS 107, 5500-5504 (2010)
|
10.1073/pnas.0912214107
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolutionary game dynamics of two players with two strategies has been
studied in great detail. These games have been used to model many biologically
relevant scenarios, ranging from social dilemmas in mammals to microbial
diversity. Some of these games may in fact take place between a number of
individuals and not just between two. Here, we address one-shot games with
multiple players. As long as we have only two strategies, many results from
two-player games can be generalized to multiple players. For games with
multiple players and more than two strategies, we show that statements derived
for pairwise interactions no longer hold. For two-player games with any number
of strategies there can be at most one isolated internal equilibrium. For any
number of players $d$ with any number of strategies $n$, there can be at most
$(d-1)^{n-1}$ isolated internal equilibria. Multiplayer games show a great
dynamical complexity that cannot be captured based on pairwise interactions.
Our results hold for any game and can easily be applied to specific cases,
e.g. public goods games or multiplayer stag hunts.
|
[
{
"created": "Tue, 30 Mar 2010 15:05:56 GMT",
"version": "v1"
}
] |
2010-03-31
|
[
[
"Gokhale",
"Chaitanya S.",
""
],
[
"Traulsen",
"Arne",
""
]
] |
Evolutionary game dynamics of two players with two strategies has been studied in great detail. These games have been used to model many biologically relevant scenarios, ranging from social dilemmas in mammals to microbial diversity. Some of these games may in fact take place between a number of individuals and not just between two. Here, we address one-shot games with multiple players. As long as we have only two strategies, many results from two-player games can be generalized to multiple players. For games with multiple players and more than two strategies, we show that statements derived for pairwise interactions no longer hold. For two-player games with any number of strategies there can be at most one isolated internal equilibrium. For any number of players $d$ with any number of strategies $n$, there can be at most $(d-1)^{n-1}$ isolated internal equilibria. Multiplayer games show a great dynamical complexity that cannot be captured based on pairwise interactions. Our results hold for any game and can easily be applied to specific cases, e.g. public goods games or multiplayer stag hunts.
|
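The central bound in the abstract above — at most $(d-1)^{n-1}$ isolated internal equilibria for $d$ players and $n$ strategies — is simple enough to evaluate directly. A tiny illustrative helper (the function name is my own, not from the paper):

```python
# Illustration of the bound stated in the abstract: a d-player, n-strategy
# game has at most (d-1)^(n-1) isolated internal equilibria.

def max_internal_equilibria(d: int, n: int) -> int:
    """Upper bound on isolated internal equilibria for d players, n strategies."""
    if d < 2 or n < 2:
        raise ValueError("need at least 2 players and 2 strategies")
    return (d - 1) ** (n - 1)

# Two-player games (d = 2): at most one internal equilibrium, for any n.
print(max_internal_equilibria(2, 5))   # -> 1
# Multiplayer games grow quickly: 4 players, 3 strategies.
print(max_internal_equilibria(4, 3))   # -> 9
```

The $d = 2$ case recovers the classical statement that pairwise games with any number of strategies admit at most one isolated internal equilibrium.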
1212.0662
|
Nicolas Perony
|
Nicolas Perony, Barbara König, and Frank Schweitzer
|
A stochastic model of social interaction in wild house mice
|
12 pages, 5 figures, 2 tables. Originally published in the
Proceedings of the European Conference on Complex Systems 2010 (ECCS'10),
Lisbon, September 13-17, 2010
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate to what extent the interaction dynamics of a population of
wild house mice (Mus musculus domesticus) in their environment can be
explained by a simple stochastic model. We use a Markov chain model to describe
the transitions of mice in a discrete space of nestboxes, and implement a
multi-agent simulation of the model. We find that some important features of
our behavioural dataset can be reproduced using this simplified stochastic
representation, and discuss the improvements that could be made to our model in
order to increase the accuracy of its predictions. Our findings have
implications for the understanding of the complexity underlying social
behaviour in the animal kingdom and the cognitive requirements of such
behaviour.
|
[
{
"created": "Tue, 4 Dec 2012 10:07:56 GMT",
"version": "v1"
}
] |
2012-12-05
|
[
[
"Perony",
"Nicolas",
""
],
[
"König",
"Barbara",
""
],
[
"Schweitzer",
"Frank",
""
]
] |
We investigate to what extent the interaction dynamics of a population of wild house mice (Mus musculus domesticus) in their environment can be explained by a simple stochastic model. We use a Markov chain model to describe the transitions of mice in a discrete space of nestboxes, and implement a multi-agent simulation of the model. We find that some important features of our behavioural dataset can be reproduced using this simplified stochastic representation, and discuss the improvements that could be made to our model in order to increase the accuracy of its predictions. Our findings have implications for the understanding of the complexity underlying social behaviour in the animal kingdom and the cognitive requirements of such behaviour.
|
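The modelling idea in the abstract above — a Markov chain over a discrete space of nestboxes, simulated agent by agent — can be sketched compactly. The transition probabilities below are invented for illustration; in the paper they would be estimated from the behavioural data.

```python
import random

# Minimal sketch of a Markov-chain nestbox model: a mouse moves among
# discrete nestboxes according to a transition matrix. The probabilities
# here are made up; the study estimates such quantities from data.

TRANSITIONS = {          # P(next box | current box); each row sums to 1
    "A": {"A": 0.7, "B": 0.2, "C": 0.1},
    "B": {"A": 0.3, "B": 0.5, "C": 0.2},
    "C": {"A": 0.1, "B": 0.3, "C": 0.6},
}

def step(box, rng):
    """Draw the next nestbox from the current box's transition distribution."""
    boxes, probs = zip(*TRANSITIONS[box].items())
    return rng.choices(boxes, weights=probs)[0]

def simulate(start, n_steps, seed=0):
    """Trajectory of one agent (mouse) through the nestbox space."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_steps):
        path.append(step(path[-1], rng))
    return path

path = simulate("A", 10)
```

A multi-agent version, as in the paper, would run many such trajectories in parallel and compare occupancy statistics against the observed dataset.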
q-bio/0701009
|
Johannes Wollbold
|
Johannes Wollbold
|
Attribute Exploration of Discrete Temporal Transitions
|
Only the email address and reference have been replaced
|
In: Gely, A. et al.. Contributions to ICFCA 2007 - 5th
International Conference on Formal Concept Analysis. Clermont-Ferrand 2007,
121-130
| null | null |
q-bio.QM cs.AI q-bio.MN
| null |
Discrete temporal transitions occur in a variety of domains, but this work is
mainly motivated by applications in molecular biology: explaining and analyzing
observed transcriptome and proteome time series using literature and database
knowledge. The starting point of a formal concept analysis model is presented.
The objects of a formal context are states of the entities of interest, and the
attributes are the variable properties defining the current state (e.g. the
observed presence or absence of proteins). Temporal transitions define a
relation on the objects, given by deterministic or non-deterministic
transition rules between sets of pre- and postconditions. This relation can be
generalized to its transitive closure, i.e. states are related if one results
from the other by a transition sequence of arbitrary length. The focus of the
work is the adaptation of the attribute exploration algorithm to such a
relational context, so that questions concerning temporal dependencies can be
asked during the exploration process and answered from the computed stem
base. Results are given for the abstract example of a game and for a small gene
regulatory network relevant to a biomedical question.
|
[
{
"created": "Thu, 4 Jan 2007 14:10:51 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Sep 2007 08:46:00 GMT",
"version": "v2"
}
] |
2007-09-18
|
[
[
"Wollbold",
"Johannes",
""
]
] |
Discrete temporal transitions occur in a variety of domains, but this work is mainly motivated by applications in molecular biology: explaining and analyzing observed transcriptome and proteome time series using literature and database knowledge. The starting point of a formal concept analysis model is presented. The objects of a formal context are states of the entities of interest, and the attributes are the variable properties defining the current state (e.g. the observed presence or absence of proteins). Temporal transitions define a relation on the objects, given by deterministic or non-deterministic transition rules between sets of pre- and postconditions. This relation can be generalized to its transitive closure, i.e. states are related if one results from the other by a transition sequence of arbitrary length. The focus of the work is the adaptation of the attribute exploration algorithm to such a relational context, so that questions concerning temporal dependencies can be asked during the exploration process and answered from the computed stem base. Results are given for the abstract example of a game and for a small gene regulatory network relevant to a biomedical question.
|
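The transitive-closure step in the abstract above — relating two states whenever one reaches the other by a transition sequence of arbitrary length — has a standard construction. A small sketch (the state names are hypothetical):

```python
# Transitive closure of a transition relation given as (from, to) pairs:
# iterate until no new reachable pair can be added (a naive Warshall-style
# fixed-point; fine for the small state spaces discussed in the abstract).

def transitive_closure(pairs):
    """Return the transitive closure of a binary relation as a set of pairs."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        new = {(a, d) for (a, b) in closure for (c, d) in closure
               if b == c and (a, d) not in closure}
        if new:
            closure |= new
            changed = True
    return closure

# Toy state graph: s0 -> s1 -> s2; the closure additionally relates s0 to s2.
rel = transitive_closure({("s0", "s1"), ("s1", "s2")})
```

Attribute exploration would then be run over this closed relation, so that implications about reachability (not just single-step transitions) can be tested.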
1810.06831
|
Andrew Francis
|
Michael Hendriksen and Andrew Francis
|
Lattice consensus: A partial order on phylogenetic trees that induces an
associatively stable consensus method
|
The paper has an error in the proof of Theorem 5.3, and this affects
5.4, which is incorrect (there is a counterexample to the Theorem statement).
As a consequence the results in Section 6 about a consensus method are
vacuous. Some results in the paper stand, for instance the results in
Sections 2, 3, 4, and 7
| null | null | null |
q-bio.PE math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a long tradition of the axiomatic study of consensus methods in
phylogenetics that satisfy certain desirable properties. One
recently-introduced property is associative stability, which is desirable
because it confers a computational advantage, in that the consensus method only
needs to be computed "pairwise". In this paper, we introduce a phylogenetic
consensus method that satisfies this property, in addition to being "regular".
The method is based on the introduction of a partial order on the set of rooted
phylogenetic trees, itself based on the notion of a hierarchy-preserving map
between trees. This partial order may be of independent interest. We call the
method "lattice consensus", because it takes the unique maximal element in a
lattice of trees defined by the partial order. Aside from being associatively
stable, lattice consensus also satisfies the property of being Pareto on rooted
triples, answering in the affirmative a question of Bryant et al (2017). We
conclude the paper with an answer to another question of Bryant et al, showing
that there is no regular extension stable consensus method for binary trees.
|
[
{
"created": "Tue, 16 Oct 2018 06:26:40 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Oct 2018 07:14:08 GMT",
"version": "v2"
}
] |
2018-10-22
|
[
[
"Hendriksen",
"Michael",
""
],
[
"Francis",
"Andrew",
""
]
] |
There is a long tradition of the axiomatic study of consensus methods in phylogenetics that satisfy certain desirable properties. One recently-introduced property is associative stability, which is desirable because it confers a computational advantage, in that the consensus method only needs to be computed "pairwise". In this paper, we introduce a phylogenetic consensus method that satisfies this property, in addition to being "regular". The method is based on the introduction of a partial order on the set of rooted phylogenetic trees, itself based on the notion of a hierarchy-preserving map between trees. This partial order may be of independent interest. We call the method "lattice consensus", because it takes the unique maximal element in a lattice of trees defined by the partial order. Aside from being associatively stable, lattice consensus also satisfies the property of being Pareto on rooted triples, answering in the affirmative a question of Bryant et al (2017). We conclude the paper with an answer to another question of Bryant et al, showing that there is no regular extension stable consensus method for binary trees.
|
2012.02246
|
Gabriel Schamberg
|
Gabriel Schamberg, Sourish Chakravarty, Taylor E. Baum, Emery N. Brown
|
Inferring neural dynamics during burst suppression using a
neurophysiology-inspired switching state-space model
|
To appear in the proceedings of the 2020 IEEE Asilomar Conference on
Signals, Systems, and Computers
| null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Burst suppression is an electroencephalography (EEG) pattern associated with
profoundly inactivated brain states characterized by cerebral metabolic
depression. Its distinctive feature is alternation between short temporal
segments of near-isoelectric inactivity (suppressions) and relatively
high-voltage activity (bursts). Prior modeling studies suggest that
burst-suppression EEG is a manifestation of two alternating brain states
associated with consumption (during a burst) and production (during a
suppression) of adenosine triphosphate (ATP). This finding motivates us to
infer latent states characterizing alternating brain states and underlying ATP
kinetics from instantaneous power of multichannel EEG using a switching
state-space model. Our model assumes Gaussian-distributed data as a broadcast
network manifestation of one of two global brain states. The two brain states
are allowed to stochastically alternate with transition probabilities that
depend on the instantaneous ATP level, which evolves according to first-order
kinetics. The rate constants governing the ATP kinetics are allowed to vary as
first-order autoregressive processes. Our latent state estimates are determined
from data using a sequential Monte Carlo algorithm. Our
neurophysiology-informed model not only provides unsupervised segmentation of
multichannel burst-suppression EEG but can also generate additional insights
into the level of brain inactivation during anesthesia.
|
[
{
"created": "Thu, 3 Dec 2020 20:30:46 GMT",
"version": "v1"
}
] |
2020-12-07
|
[
[
"Schamberg",
"Gabriel",
""
],
[
"Chakravarty",
"Sourish",
""
],
[
"Baum",
"Taylor E.",
""
],
[
"Brown",
"Emery N.",
""
]
] |
Burst suppression is an electroencephalography (EEG) pattern associated with profoundly inactivated brain states characterized by cerebral metabolic depression. Its distinctive feature is alternation between short temporal segments of near-isoelectric inactivity (suppressions) and relatively high-voltage activity (bursts). Prior modeling studies suggest that burst-suppression EEG is a manifestation of two alternating brain states associated with consumption (during a burst) and production (during a suppression) of adenosine triphosphate (ATP). This finding motivates us to infer latent states characterizing alternating brain states and underlying ATP kinetics from instantaneous power of multichannel EEG using a switching state-space model. Our model assumes Gaussian-distributed data as a broadcast network manifestation of one of two global brain states. The two brain states are allowed to stochastically alternate with transition probabilities that depend on the instantaneous ATP level, which evolves according to first-order kinetics. The rate constants governing the ATP kinetics are allowed to vary as first-order autoregressive processes. Our latent state estimates are determined from data using a sequential Monte Carlo algorithm. Our neurophysiology-informed model not only provides unsupervised segmentation of multichannel burst-suppression EEG but can also generate additional insights into the level of brain inactivation during anesthesia.
|
2007.03157
|
Shiladitya Banerjee
|
Jake Cornwall Scoones, Deb Sankar Banerjee, Shiladitya Banerjee
|
Size-regulated symmetry breaking in reaction-diffusion models of
developmental transitions
|
11 pages, 5 figures, Perspective Article
| null | null | null |
q-bio.TO nlin.PS physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The development of multicellular organisms proceeds through a series of
morphogenetic and cell-state transitions, transforming homogeneous zygotes into
complex adults by a process of self-organization. Many of these transitions are
achieved by spontaneous symmetry breaking mechanisms, allowing cells and
tissues to acquire pattern and polarity by virtue of local interactions without
an upstream supply of information. The combined work of theory and experiment
has elucidated how these systems break symmetry during developmental
transitions. Given such transitions are multiple and their temporal ordering is
crucial, an equally important question is how these developmental transitions
are coordinated in time. Using a minimal mass-conserved substrate-depletion
model for symmetry breaking as our case study, we elucidate mechanisms by which
cells and tissues can couple reaction-diffusion driven symmetry breaking to the
timing of developmental transitions, arguing that the dependence of patterning
mode on system size may be a generic principle by which developing organisms
measure time. By analyzing different regimes of our model, simulated on growing
domains, we elaborate three distinct behaviours, allowing for clock-, timer-,
or switch-like dynamics. By relating these behaviours to experimentally
documented case studies of developmental timing, we provide a minimal
conceptual framework to interrogate how developing organisms coordinate
developmental transitions.
|
[
{
"created": "Tue, 7 Jul 2020 01:29:09 GMT",
"version": "v1"
}
] |
2020-07-08
|
[
[
"Scoones",
"Jake Cornwall",
""
],
[
"Banerjee",
"Deb Sankar",
""
],
[
"Banerjee",
"Shiladitya",
""
]
] |
The development of multicellular organisms proceeds through a series of morphogenetic and cell-state transitions, transforming homogeneous zygotes into complex adults by a process of self-organization. Many of these transitions are achieved by spontaneous symmetry breaking mechanisms, allowing cells and tissues to acquire pattern and polarity by virtue of local interactions without an upstream supply of information. The combined work of theory and experiment has elucidated how these systems break symmetry during developmental transitions. Given such transitions are multiple and their temporal ordering is crucial, an equally important question is how these developmental transitions are coordinated in time. Using a minimal mass-conserved substrate-depletion model for symmetry breaking as our case study, we elucidate mechanisms by which cells and tissues can couple reaction-diffusion driven symmetry breaking to the timing of developmental transitions, arguing that the dependence of patterning mode on system size may be a generic principle by which developing organisms measure time. By analyzing different regimes of our model, simulated on growing domains, we elaborate three distinct behaviours, allowing for clock-, timer-, or switch-like dynamics. By relating these behaviours to experimentally documented case studies of developmental timing, we provide a minimal conceptual framework to interrogate how developing organisms coordinate developmental transitions.
|
2308.05685
|
Netta Haroush
|
Netta Haroush, Michal Levo, Eric Wieschaus and Thomas Gregor
|
Functional analysis of a gene locus in response to non-canonical
combinations of transcription factors
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transcription factor combinations determine gene locus activity and thereby
cell identity. However, the precise link between the concentrations of such
activating transcription factors and target-gene activity remains ambiguous.
Here we investigate this link for the gap-gene-dependent activation of the
even-skipped (eve) locus in the Drosophila embryo. We simultaneously measure
the spatiotemporal gap gene concentrations in hemizygous and homozygous gap
mutants, and link these to eve activity. Although changes in expression extend
well beyond the genetically manipulated gene, nearly all expression
alterations approximate the canonical combinations of activating levels seen in
wild-type, sometimes necessitating pattern shifts. Expression levels that
diverge from the wild-type repertoire still drive locus activation. Specific
stripes in the homozygous mutants show partial penetrance, consistent with
their renowned variable phenotypes. However, all eve stripes appear at highly
reproducible positions, even though a broader span of gap gene expression
levels activates eve. Our results suggest a correction capacity of the gap gene
network and set constraints on the activity of multi-enhancer gene loci.
|
[
{
"created": "Thu, 10 Aug 2023 16:38:59 GMT",
"version": "v1"
}
] |
2023-08-11
|
[
[
"Haroush",
"Netta",
""
],
[
"Levo",
"Michal",
""
],
[
"Wieschaus",
"Eric",
""
],
[
"Gregor",
"Thomas",
""
]
] |
Transcription factor combinations determine gene locus activity and thereby cell identity. However, the precise link between the concentrations of such activating transcription factors and target-gene activity remains ambiguous. Here we investigate this link for the gap-gene-dependent activation of the even-skipped (eve) locus in the Drosophila embryo. We simultaneously measure the spatiotemporal gap gene concentrations in hemizygous and homozygous gap mutants, and link these to eve activity. Although changes in expression extend well beyond the genetically manipulated gene, nearly all expression alterations approximate the canonical combinations of activating levels seen in wild-type, sometimes necessitating pattern shifts. Expression levels that diverge from the wild-type repertoire still drive locus activation. Specific stripes in the homozygous mutants show partial penetrance, consistent with their renowned variable phenotypes. However, all eve stripes appear at highly reproducible positions, even though a broader span of gap gene expression levels activates eve. Our results suggest a correction capacity of the gap gene network and set constraints on the activity of multi-enhancer gene loci.
|
1602.05177
|
Olivier Sperandio
|
G Moroy (UMR S973, UP7), O Sperandio (UMR S973, UP7), S Rielland (UMR
S973, UP7), S Khemka (LBPA), K Druart (UMR S973, UP7), D. Goyal (LBPA), D.
Perahia (LBPA), M. A. Miteva (UMR S973, UP7)
|
Sampling of conformational ensemble for virtual screening using
molecular dynamics simulations and normal mode analysis
| null |
Future Medicinal Chemistry, 2015, 7 (17), pp.2317-2331
|
10.4155/fmc.15.150
| null |
q-bio.QM q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aim: Molecular dynamics simulations and normal mode analysis are
well-established approaches for generating receptor conformational ensembles
(RCEs) for ligand docking and virtual screening. Here, we report new fast
molecular dynamics-based and normal mode analysis-based protocols, combined
with conformational pocket classifications, to efficiently generate RCEs.
Materials & methods: We assessed our protocols on two well-characterized
protein targets: dihydrofolate reductase, showing local active-site
flexibility, and CDK2, showing large collective movements. The performance of
the RCEs was validated by distinguishing known ligands of dihydrofolate
reductase and CDK2 within a dataset of diverse chemical decoys. Results &
discussion: Our results show that different simulation protocols can be
efficient for the generation of RCEs, depending on the kind of protein
flexibility involved.
|
[
{
"created": "Tue, 16 Feb 2016 20:42:10 GMT",
"version": "v1"
}
] |
2016-02-17
|
[
[
"Moroy",
"G",
"",
"UMR S973, UP7"
],
[
"Sperandio",
"O",
"",
"UMR S973, UP7"
],
[
"Rielland",
"S",
"",
"UMR\n S973, UP7"
],
[
"Khemka",
"S",
"",
"LBPA"
],
[
"Druart",
"K",
"",
"UMR S973, UP7"
],
[
"Goyal",
"D.",
"",
"LBPA"
],
[
"Perahia",
"D.",
"",
"LBPA"
],
[
"Miteva",
"M. A.",
"",
"UMR S973, UP7"
]
] |
Aim: Molecular dynamics simulations and normal mode analysis are well-established approaches for generating receptor conformational ensembles (RCEs) for ligand docking and virtual screening. Here, we report new fast molecular dynamics-based and normal mode analysis-based protocols, combined with conformational pocket classifications, to efficiently generate RCEs. Materials & methods: We assessed our protocols on two well-characterized protein targets: dihydrofolate reductase, showing local active-site flexibility, and CDK2, showing large collective movements. The performance of the RCEs was validated by distinguishing known ligands of dihydrofolate reductase and CDK2 within a dataset of diverse chemical decoys. Results & discussion: Our results show that different simulation protocols can be efficient for the generation of RCEs, depending on the kind of protein flexibility involved.
|
2310.13598
|
Laurent Gatto
|
Samuel Grégoire and Christophe Vanderaa and Sébastien Pyr dit Ruys
and Gabriel Mazzucchelli and Christopher Kune and Didier Vertommen and
Laurent Gatto
|
Standardised workflow for mass spectrometry-based single-cell proteomics
data processing and analysis using the scp package
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Mass spectrometry (MS)-based single-cell proteomics (SCP) explores cellular
heterogeneity by focusing on the functional effectors of the cell: proteins.
However, extracting meaningful biological information from MS data is far from
trivial, especially with single cells. Currently, data analysis workflows
differ substantially from one research team to another. Moreover, it is
difficult to evaluate pipelines, as ground truths are missing. Our team has
developed the R/Bioconductor package scp to provide a standardised framework
for SCP data analysis. It relies on the widely used QFeatures and
SingleCellExperiment data structures. In addition, we used a design containing
cell lines mixed in known proportions to generate controlled variability for
data analysis benchmarking. In this work, we provide a flexible data analysis
protocol for SCP data using the scp package, together with comprehensive
explanations at each step of the processing. Our main steps are quality control
at the feature and cell level, aggregation of the raw data into peptides and
proteins, normalisation, and batch correction. We validate our workflow using
our ground-truth data set. We illustrate how to use this modular, standardised
framework and highlight some crucial steps.
|
[
{
"created": "Fri, 20 Oct 2023 15:46:51 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Dec 2023 16:47:50 GMT",
"version": "v2"
}
] |
2023-12-14
|
[
[
"Grégoire",
"Samuel",
""
],
[
"Vanderaa",
"Christophe",
""
],
[
"Ruys",
"Sébastien Pyr dit",
""
],
[
"Mazzucchelli",
"Gabriel",
""
],
[
"Kune",
"Christopher",
""
],
[
"Vertommen",
"Didier",
""
],
[
"Gatto",
"Laurent",
""
]
] |
Mass spectrometry (MS)-based single-cell proteomics (SCP) explores cellular heterogeneity by focusing on the functional effectors of the cell: proteins. However, extracting meaningful biological information from MS data is far from trivial, especially with single cells. Currently, data analysis workflows differ substantially from one research team to another. Moreover, it is difficult to evaluate pipelines, as ground truths are missing. Our team has developed the R/Bioconductor package scp to provide a standardised framework for SCP data analysis. It relies on the widely used QFeatures and SingleCellExperiment data structures. In addition, we used a design containing cell lines mixed in known proportions to generate controlled variability for data analysis benchmarking. In this work, we provide a flexible data analysis protocol for SCP data using the scp package, together with comprehensive explanations at each step of the processing. Our main steps are quality control at the feature and cell level, aggregation of the raw data into peptides and proteins, normalisation, and batch correction. We validate our workflow using our ground-truth data set. We illustrate how to use this modular, standardised framework and highlight some crucial steps.
|
2012.03720
|
Leon Avery
|
Leon Avery, Brian Ingalls, Catherine Dumur, Alexander Artyukhin
|
A Keller-Segel model for C elegans L1 aggregation
| null | null |
10.1371/journal.pcbi.1009231
| null |
q-bio.MN q-bio.QM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We describe a mathematical model for the aggregation of starved first-stage C
elegans larvae (L1s). We propose that starved L1s produce and respond
chemotactically to two labile diffusible chemical signals, a short-range
attractant and a longer range repellent. This model takes the mathematical form
of three coupled partial differential equations, one that describes the
movement of the worms and one for each of the chemical signals. Numerical
solution of these equations produced a pattern of aggregates that resembled
that of worm aggregates observed in experiments. We also describe the
identification of a sensory receptor gene, srh-2, whose expression is induced
under conditions that promote L1 aggregation. Worms whose srh-2 gene has been
knocked out form irregularly shaped aggregates. Our model suggests this
phenotype may be explained by the mutant worms slowing their movement more
quickly than the wild type.
|
[
{
"created": "Fri, 4 Dec 2020 17:27:23 GMT",
"version": "v1"
},
{
"created": "Fri, 7 May 2021 16:03:43 GMT",
"version": "v2"
}
] |
2021-09-15
|
[
[
"Avery",
"Leon",
""
],
[
"Ingalls",
"Brian",
""
],
[
"Dumur",
"Catherine",
""
],
[
"Artyukhin",
"Alexander",
""
]
] |
We describe a mathematical model for the aggregation of starved first-stage C elegans larvae (L1s). We propose that starved L1s produce and respond chemotactically to two labile diffusible chemical signals, a short-range attractant and a longer range repellent. This model takes the mathematical form of three coupled partial differential equations, one that describes the movement of the worms and one for each of the chemical signals. Numerical solution of these equations produced a pattern of aggregates that resembled that of worm aggregates observed in experiments. We also describe the identification of a sensory receptor gene, srh-2, whose expression is induced under conditions that promote L1 aggregation. Worms whose srh-2 gene has been knocked out form irregularly shaped aggregates. Our model suggests this phenotype may be explained by the mutant worms slowing their movement more quickly than the wild type.
|
1111.2019
|
Santiago Raúl Doyle
|
Santiago R. Doyle, Florencia Carusela, Sebastián Guala and Fernando
Momo
|
A null model for testing thermodynamic optimization in ecological
systems
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several authors have hypothesized that ecological systems are subject to
thermodynamic optimization, which, if proven correct, could represent a long
sought general principle of organization in ecology. Although there have been
recent advances, this still remains an unresolved topic, and ecologists lack
a general method to test thermodynamic optimization hypotheses in specific
systems. Here we present a general, novel approach that allows generating a
null model for testing thermodynamic optimization on ecological systems. We
first describe the general methodology, which is based on the analysis of a
parametrized mathematical model of the system and the explicit consideration of
constraints. Next we present an application example to an animal population
using a general age-structured population model and physiological parameters
from the literature. We conclude by discussing the relevance of this work in the
context of the current state of ecology, and implications for the further
development of a thermodynamic ecological theory.
|
[
{
"created": "Tue, 8 Nov 2011 19:14:37 GMT",
"version": "v1"
}
] |
2011-11-09
|
[
[
"Doyle",
"Santiago R.",
""
],
[
"Carusela",
"Florencia",
""
],
[
"Guala",
"Sebastián",
""
],
[
"Momo",
"Fernando",
""
]
] |
Several authors have hypothesized that ecological systems are subject to thermodynamic optimization, which, if proven correct, could represent a long sought general principle of organization in ecology. Although there have been recent advances, this still remains an unresolved topic, and ecologists lack a general method to test thermodynamic optimization hypotheses in specific systems. Here we present a general, novel approach that allows generating a null model for testing thermodynamic optimization on ecological systems. We first describe the general methodology, which is based on the analysis of a parametrized mathematical model of the system and the explicit consideration of constraints. Next we present an application example to an animal population using a general age-structured population model and physiological parameters from the literature. We conclude by discussing the relevance of this work in the context of the current state of ecology, and implications for the further development of a thermodynamic ecological theory.
|
2102.00002
|
Elvira Di Nardo Prof.
|
A. Buonocore, A. Di Crescenzo, E. Di Nardo
|
Input-output behaviour of a model neuron with alternating drift
| null |
BioSystems (2002) 67, 27-34
|
10.1016/S0303-2647(02)00060-6
| null |
q-bio.NC math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The input-output behaviour of the Wiener neuronal model subject to
alternating input is studied under the assumption that the effect of such an
input is to make the drift itself of an alternating type. Firing densities and
related statistics are obtained via simulations of the sample-paths of the
process in the following three cases: the drift changes occur during random
periods characterized by (i) exponential distribution, (ii) Erlang distribution
with a preassigned shape parameter, and (iii) deterministic distribution. The
obtained results are compared with those holding for the Wiener neuronal model
subject to sinusoidal input.
|
[
{
"created": "Sun, 31 Jan 2021 16:30:22 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Buonocore",
"A.",
""
],
[
"Di Crescenzo",
"A.",
""
],
[
"Di Nardo",
"E.",
""
]
] |
The input-output behaviour of the Wiener neuronal model subject to alternating input is studied under the assumption that the effect of such an input is to make the drift itself of an alternating type. Firing densities and related statistics are obtained via simulations of the sample-paths of the process in the following three cases: the drift changes occur during random periods characterized by (i) exponential distribution, (ii) Erlang distribution with a preassigned shape parameter, and (iii) deterministic distribution. The obtained results are compared with those holding for the Wiener neuronal model subject to sinusoidal input.
|
0801.3675
|
Paolo Ribeca
|
Paolo Ribeca and Emanuele Raineri
|
Faster exact Markovian probability functions for motif occurrences: a
DFA-only approach
|
18 pages, 7 figures and 2 tables
| null |
10.1093/bioinformatics/btn525
| null |
q-bio.GN q-bio.QM
| null |
Background: The computation of the statistical properties of motif
occurrences has an obviously relevant practical application: for example,
patterns that are significantly over- or under-represented in the genome are
interesting candidates for biological roles. However, the problem is
computationally hard; as a result, virtually all the existing pipelines use
fast but approximate scoring functions, in spite of the fact that they have
been shown to systematically produce incorrect results. A few interesting exact
approaches are known, but they are very slow and hence not practical in the
case of realistic sequences. Results: We give an exact solution, solely based
on deterministic finite-state automata (DFAs), to the problem of finding not
only the p-value, but the whole relevant part of the Markovian probability
distribution function of a motif in a biological sequence. In particular, the
time complexity of the algorithm in the most interesting regimes is far better
than that of Nuel (2006), which was the fastest similar exact algorithm known
to date; in many cases, even approximate methods are outperformed. Conclusions:
DFAs are a standard tool of computer science for the study of patterns, but so
far they have been sparingly used in the study of biological motifs. Previous
works do propose algorithms involving automata, but there they are used
respectively as a first step to build a Finite Markov Chain Imbedding (FMCI),
or to write a generating function: whereas we only rely on the concept of DFA
to perform the calculations. This innovative approach can realistically be used
for exact statistical studies of very long genomes and protein sequences, as we
illustrate with some examples on the scale of the human genome.
|
[
{
"created": "Thu, 24 Jan 2008 15:39:48 GMT",
"version": "v1"
}
] |
2021-11-01
|
[
[
"Ribeca",
"Paolo",
""
],
[
"Raineri",
"Emanuele",
""
]
] |
Background: The computation of the statistical properties of motif occurrences has an obviously relevant practical application: for example, patterns that are significantly over- or under-represented in the genome are interesting candidates for biological roles. However, the problem is computationally hard; as a result, virtually all the existing pipelines use fast but approximate scoring functions, in spite of the fact that they have been shown to systematically produce incorrect results. A few interesting exact approaches are known, but they are very slow and hence not practical in the case of realistic sequences. Results: We give an exact solution, solely based on deterministic finite-state automata (DFAs), to the problem of finding not only the p-value, but the whole relevant part of the Markovian probability distribution function of a motif in a biological sequence. In particular, the time complexity of the algorithm in the most interesting regimes is far better than that of Nuel (2006), which was the fastest similar exact algorithm known to date; in many cases, even approximate methods are outperformed. Conclusions: DFAs are a standard tool of computer science for the study of patterns, but so far they have been sparingly used in the study of biological motifs. Previous works do propose algorithms involving automata, but there they are used respectively as a first step to build a Finite Markov Chain Imbedding (FMCI), or to write a generating function: whereas we only rely on the concept of DFA to perform the calculations. This innovative approach can realistically be used for exact statistical studies of very long genomes and protein sequences, as we illustrate with some examples on the scale of the human genome.
|
1112.3640
|
Christopher L. Henley
|
Hanrong Chen, C. L. Henley, and B. Xu
|
Propagating left/right asymmetry in the zebrafish embryo:
one-dimensional model
|
13 pages, 5 figures
| null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During embryonic development in vertebrates, left-right (L/R) asymmetry is
reliably generated by a conserved mechanism: a L/R asymmetric signal is
transmitted from the embryonic node to other parts of the embryo by the L/R
asymmetric expression and diffusion of the TGF-$\beta$ related proteins Nodal
and Lefty via propagating gene expression fronts in the lateral plate mesoderm
(LPM) and midline. In zebrafish embryos, Nodal and Lefty expression can only
occur along 3 narrow stripes that express the co-receptor \emph{one-eyed
pinhead} (oep): Nodal along stripes in the left and right LPM, and Lefty along
the midline. In wild-type embryos, Nodal is only expressed in the left LPM but
not the right, because of inhibition by Lefty from the midline; however,
bilateral Nodal expression occurs in loss-of-handedness mutants. A
two-dimensional model of the zebrafish embryo predicts this loss of L/R
asymmetry in oep mutants \cite{henley-xu-burdine}. In this paper, we simplify
this two-dimensional picture to a one-dimensional model of Nodal and Lefty
front propagation along the oep-expressing stripes. We represent Nodal and
Lefty production by step functions that turn on when a linear function of Nodal
and Lefty densities crosses a threshold. We do a parameter exploration of front
propagation behavior, and find the existence of \emph{pinned} intervals, along
which the linear function underlying production is pinned to the threshold.
Finally, we find parameter regimes for which spatially uniform oscillating
solutions are possible.
|
[
{
"created": "Thu, 15 Dec 2011 20:29:10 GMT",
"version": "v1"
}
] |
2011-12-16
|
[
[
"Chen",
"Hanrong",
""
],
[
"Henley",
"C. L.",
""
],
[
"Xu",
"B.",
""
]
] |
During embryonic development in vertebrates, left-right (L/R) asymmetry is reliably generated by a conserved mechanism: a L/R asymmetric signal is transmitted from the embryonic node to other parts of the embryo by the L/R asymmetric expression and diffusion of the TGF-$\beta$ related proteins Nodal and Lefty via propagating gene expression fronts in the lateral plate mesoderm (LPM) and midline. In zebrafish embryos, Nodal and Lefty expression can only occur along 3 narrow stripes that express the co-receptor \emph{one-eyed pinhead} (oep): Nodal along stripes in the left and right LPM, and Lefty along the midline. In wild-type embryos, Nodal is only expressed in the left LPM but not the right, because of inhibition by Lefty from the midline; however, bilateral Nodal expression occurs in loss-of-handedness mutants. A two-dimensional model of the zebrafish embryo predicts this loss of L/R asymmetry in oep mutants \cite{henley-xu-burdine}. In this paper, we simplify this two-dimensional picture to a one-dimensional model of Nodal and Lefty front propagation along the oep-expressing stripes. We represent Nodal and Lefty production by step functions that turn on when a linear function of Nodal and Lefty densities crosses a threshold. We do a parameter exploration of front propagation behavior, and find the existence of \emph{pinned} intervals, along which the linear function underlying production is pinned to the threshold. Finally, we find parameter regimes for which spatially uniform oscillating solutions are possible.
|
1311.5517
|
Helene Hill
|
Joel H Pitt and Helene Z Hill
|
Statistical Detection of Potentially Fabricated Data
|
31 pages of text including 2 figures, 3 tables and an Appendix
containing the mathematical derivation of a model for detecting and
quantifying the probability for the occurrence of the average of 3 counts as
one of those counts. 166 pages of raw data that were used in the analyses
| null | null | null |
q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scientific fraud is an increasingly vexing problem. Many current programs for
fraud detection focus on image manipulation, while techniques for detection
based on anomalous patterns that may be discoverable in the underlying
numerical data get much less attention, even though these techniques are often
easy to apply. We employed three such techniques in a case study in which we
considered data sets from several hundred experiments. We compared patterns in
the data sets from one research teaching specialist (RTS) to those of 9 other
members of the same laboratory and from 3 outside laboratories. Application of
two conventional statistical tests and a newly developed test for anomalous
patterns in the triplicate data commonly produced in such research to various
data sets reported by the RTS resulted in repeated rejection of the hypotheses
(often at p-levels well below 0.001) that anomalous patterns in his data may
have occurred by chance. This analysis emphasizes the importance of access to
raw data that form the bases of publications, reports and grant applications in
order to evaluate the correctness of the conclusions, as well as the utility of
methods for detecting anomalous, especially fabricated, numerical results.
|
[
{
"created": "Thu, 21 Nov 2013 19:03:55 GMT",
"version": "v1"
}
] |
2013-11-22
|
[
[
"Pitt",
"Joel H",
""
],
[
"Hill",
"Helene Z",
""
]
] |
Scientific fraud is an increasingly vexing problem. Many current programs for fraud detection focus on image manipulation, while techniques for detection based on anomalous patterns that may be discoverable in the underlying numerical data get much less attention, even though these techniques are often easy to apply. We employed three such techniques in a case study in which we considered data sets from several hundred experiments. We compared patterns in the data sets from one research teaching specialist (RTS) to those of 9 other members of the same laboratory and from 3 outside laboratories. Application of two conventional statistical tests and a newly developed test for anomalous patterns in the triplicate data commonly produced in such research to various data sets reported by the RTS resulted in repeated rejection of the hypotheses (often at p-levels well below 0.001) that anomalous patterns in his data may have occurred by chance. This analysis emphasizes the importance of access to raw data that form the bases of publications, reports and grant applications in order to evaluate the correctness of the conclusions, as well as the utility of methods for detecting anomalous, especially fabricated, numerical results.
|
2302.12455
|
Eitan Lerner
|
Evelyn Ploetz, Benjamin Ambrose, Anders Barth, Richard Börner, Felix
Erichson, Achillefs N. Kapanidis, Harold D. Kim, Marcia Levitus, Timothy M.
Lohman, Abhishek Mazumder, David S. Rueda, Fabio D. Steffen, Thorben Cordes,
Steven W. Magennis and Eitan Lerner
|
A new twist on PIFE: photoisomerisation-related fluorescence enhancement
|
No Comments
| null |
10.1088/2050-6120/acfb58
| null |
q-bio.BM q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
PIFE was first used as an acronym for protein-induced fluorescence
enhancement, which refers to the increase in fluorescence observed upon the
interaction of a fluorophore, such as a cyanine, with a protein. This
fluorescence enhancement is due to changes in the rate of cis/trans
photoisomerisation. It is clear now that this mechanism is generally applicable
to interactions with any biomolecule and, in this review, we propose that PIFE
is thereby renamed according to its fundamental working principle as
photoisomerisation-related fluorescence enhancement, keeping the PIFE acronym
intact. We discuss the photochemistry of cyanine fluorophores, the mechanism of
PIFE, its advantages and limitations, and recent approaches to turn PIFE into a
quantitative assay. We provide an overview of its current applications to
different biomolecules and discuss potential future uses, including the study
of protein-protein interactions, protein-ligand interactions and conformational
changes in biomolecules.
|
[
{
"created": "Fri, 24 Feb 2023 05:11:25 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jul 2023 10:01:48 GMT",
"version": "v2"
}
] |
2023-10-24
|
[
[
"Ploetz",
"Evelyn",
""
],
[
"Ambrose",
"Benjamin",
""
],
[
"Barth",
"Anders",
""
],
[
"Börner",
"Richard",
""
],
[
"Erichson",
"Felix",
""
],
[
"Kapanidis",
"Achillefs N.",
""
],
[
"Kim",
"Harold D.",
""
],
[
"Levitus",
"Marcia",
""
],
[
"Lohman",
"Timothy M.",
""
],
[
"Mazumder",
"Abhishek",
""
],
[
"Rueda",
"David S.",
""
],
[
"Steffen",
"Fabio D.",
""
],
[
"Cordes",
"Thorben",
""
],
[
"Magennis",
"Steven W.",
""
],
[
"Lerner",
"Eitan",
""
]
] |
PIFE was first used as an acronym for protein-induced fluorescence enhancement, which refers to the increase in fluorescence observed upon the interaction of a fluorophore, such as a cyanine, with a protein. This fluorescence enhancement is due to changes in the rate of cis/trans photoisomerisation. It is clear now that this mechanism is generally applicable to interactions with any biomolecule and, in this review, we propose that PIFE is thereby renamed according to its fundamental working principle as photoisomerisation-related fluorescence enhancement, keeping the PIFE acronym intact. We discuss the photochemistry of cyanine fluorophores, the mechanism of PIFE, its advantages and limitations, and recent approaches to turn PIFE into a quantitative assay. We provide an overview of its current applications to different biomolecules and discuss potential future uses, including the study of protein-protein interactions, protein-ligand interactions and conformational changes in biomolecules.
|
1305.4354
|
Steven Frank
|
Steven A. Frank
|
Natural selection. VII. History and interpretation of kin selection
theory
| null | null |
10.1111/jeb.12131
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Kin selection theory is a kind of causal analysis. The initial form of kin
selection ascribed cause to costs, benefits, and genetic relatedness. The
theory then slowly developed a deeper and more sophisticated approach to
partitioning the causes of social evolution. Controversy followed because
causal analysis inevitably attracts opposing views. It is always possible to
separate total effects into different component causes. Alternative causal
schemes emphasize different aspects of a problem, reflecting the distinct
goals, interests, and biases of different perspectives. For example, group
selection is a particular causal scheme with certain advantages and significant
limitations. Ultimately, to use kin selection theory to analyze natural
patterns and to understand the history of debates over different approaches,
one must follow the underlying history of causal analysis. This article
describes the history of kin selection theory, with emphasis on how the causal
perspective improved through the study of key patterns of natural history, such
as dispersal and sex ratio, and through a unified approach to demographic and
social processes. Independent historical developments in the multivariate
analysis of quantitative traits merged with the causal analysis of social
evolution by kin selection.
|
[
{
"created": "Sun, 19 May 2013 12:18:53 GMT",
"version": "v1"
}
] |
2014-06-18
|
[
[
"Frank",
"Steven A.",
""
]
] |
Kin selection theory is a kind of causal analysis. The initial form of kin selection ascribed cause to costs, benefits, and genetic relatedness. The theory then slowly developed a deeper and more sophisticated approach to partitioning the causes of social evolution. Controversy followed because causal analysis inevitably attracts opposing views. It is always possible to separate total effects into different component causes. Alternative causal schemes emphasize different aspects of a problem, reflecting the distinct goals, interests, and biases of different perspectives. For example, group selection is a particular causal scheme with certain advantages and significant limitations. Ultimately, to use kin selection theory to analyze natural patterns and to understand the history of debates over different approaches, one must follow the underlying history of causal analysis. This article describes the history of kin selection theory, with emphasis on how the causal perspective improved through the study of key patterns of natural history, such as dispersal and sex ratio, and through a unified approach to demographic and social processes. Independent historical developments in the multivariate analysis of quantitative traits merged with the causal analysis of social evolution by kin selection.
|
1409.4404
|
Almaz Mustafin
|
Almaz Mustafin
|
Awakened oscillations in coupled consumer-resource pairs
|
31 pages, 8 figures, 2 tables, 48 references
|
Journal of Applied Mathematics 2014 (2014), Article ID 561958,
pages 1-20
|
10.1155/2014/561958
| null |
q-bio.PE nlin.AO physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper concerns two interacting consumer-resource pairs based on
chemostat-like equations under the assumption that the dynamics of the resource
is considerably slower than that of the consumer. The presence of two different
time scales enables us to carry out a fairly complete analysis of the problem.
This is done by treating consumers and resources in the coupled system as
fast-scale and slow-scale variables respectively and subsequently considering
developments in phase planes of these variables, fast and slow, as if they are
independent. When uncoupled, each pair has a unique asymptotically stable steady
state and no self-sustained oscillatory behavior (although damped oscillations
about the equilibrium are admitted). When the consumer-resource pairs are
weakly coupled through direct reciprocal inhibition of consumers, the whole
system exhibits self-sustained relaxation oscillations with a period that can
be significantly longer than the intrinsic relaxation time of either pair. It is
shown that the model equations adequately describe locally linked
consumer-resource systems of quite different nature: living populations under
interspecific interference competition and lasers coupled via their cavity
losses.
|
[
{
"created": "Sun, 14 Sep 2014 07:32:22 GMT",
"version": "v1"
}
] |
2014-09-17
|
[
[
"Mustafin",
"Almaz",
""
]
] |
The paper concerns two interacting consumer-resource pairs based on chemostat-like equations under the assumption that the dynamics of the resource is considerably slower than that of the consumer. The presence of two different time scales enables us to carry out a fairly complete analysis of the problem. This is done by treating consumers and resources in the coupled system as fast-scale and slow-scale variables respectively and subsequently considering developments in phase planes of these variables, fast and slow, as if they are independent. When uncoupled, each pair has a unique asymptotically stable steady state and no self-sustained oscillatory behavior (although damped oscillations about the equilibrium are admitted). When the consumer-resource pairs are weakly coupled through direct reciprocal inhibition of consumers, the whole system exhibits self-sustained relaxation oscillations with a period that can be significantly longer than the intrinsic relaxation time of either pair. It is shown that the model equations adequately describe locally linked consumer-resource systems of quite different nature: living populations under interspecific interference competition and lasers coupled via their cavity losses.
|
2012.00281
|
Muriel Gros-Balthazard
|
Muriel Gros-Balthazard and Jonathan M. Flowers
|
A brief history of the origin of domesticated date palms
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
The study of the origins of crops is of interest both from a fundamental
evolutionary standpoint and from an applied agricultural
technology perspective. The date palm (Phoenix dactylifera L.) is the iconic
fruit crop of hot and arid regions of North Africa and the Middle East,
producing sugar-rich fruits, known as dates. There are many different cultivars
each with distinctive fruit traits, and there are many wild Phoenix species
too, which in total form a complex of related species. The understanding of
plant domestication involves multiple disciplines, including phylogeography,
population genetics and archaeology. In the past decade, these disciplines have prompted new
discoveries on the evolutionary history of date palm, but a complete
understanding of its origins remains to be elucidated, along with the genetic
architecture of its domestication syndrome. In this chapter, we review the
current state of the art regarding the origins of the domesticated date palm.
We first discuss whether date palms are domesticated, and highlight how they
diverge from their wild Phoenix relatives. We then outline patterns in the
population genetic and archaeobotanical data, and review different models for
the origins of domesticated date palms by highlighting sources of evidence that
are either consistent or inconsistent with each model. We then review the
process of date palm domestication, and emphasize the human activities that
have prompted its domestication. We particularly focus on the evolution of
fruit traits.
|
[
{
"created": "Tue, 1 Dec 2020 05:40:54 GMT",
"version": "v1"
}
] |
2020-12-02
|
[
[
"Gros-Balthazard",
"Muriel",
""
],
[
"Flowers",
"Jonathan M.",
""
]
] |
The study of the origins of crops is of interest both from a fundamental evolutionary standpoint and from an applied agricultural technology perspective. The date palm (Phoenix dactylifera L.) is the iconic fruit crop of hot and arid regions of North Africa and the Middle East, producing sugar-rich fruits, known as dates. There are many different cultivars each with distinctive fruit traits, and there are many wild Phoenix species too, which in total form a complex of related species. The understanding of plant domestication involves multiple disciplines, including phylogeography, population genetics and archaeology. In the past decade, these disciplines have prompted new discoveries on the evolutionary history of date palm, but a complete understanding of its origins remains to be elucidated, along with the genetic architecture of its domestication syndrome. In this chapter, we review the current state of the art regarding the origins of the domesticated date palm. We first discuss whether date palms are domesticated, and highlight how they diverge from their wild Phoenix relatives. We then outline patterns in the population genetic and archaeobotanical data, and review different models for the origins of domesticated date palms by highlighting sources of evidence that are either consistent or inconsistent with each model. We then review the process of date palm domestication, and emphasize the human activities that have prompted its domestication. We particularly focus on the evolution of fruit traits.
|
0903.4168
|
Satoru Hayasaka
|
Satoru Hayasaka, Paul J. Laurienti
|
Degree distributions in mesoscopic and macroscopic functional brain
networks
| null | null | null | null |
q-bio.NC q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigated the degree distribution of brain networks extracted from
functional magnetic resonance imaging of the human brain. In particular, the
distributions are compared between macroscopic brain networks using
region-based nodes and mesoscopic brain networks using voxel-based nodes. We
found that the distribution from these networks follow the same family of
distributions and represent a continuum of exponentially truncated power law
distributions.
|
[
{
"created": "Tue, 24 Mar 2009 19:42:36 GMT",
"version": "v1"
}
] |
2009-03-25
|
[
[
"Hayasaka",
"Satoru",
""
],
[
"Laurienti",
"Paul J.",
""
]
] |
We investigated the degree distribution of brain networks extracted from functional magnetic resonance imaging of the human brain. In particular, the distributions are compared between macroscopic brain networks using region-based nodes and mesoscopic brain networks using voxel-based nodes. We found that the distributions from these networks follow the same family of distributions and represent a continuum of exponentially truncated power law distributions.
|
2101.08211
|
Xinwei Yu
|
Xinwei Yu, Matthew S. Creamer, Francesco Randi, Anuj K. Sharma, Scott
W. Linderman, Andrew M. Leifer
|
Fast deep learning correspondence for neuron tracking and identification
in C.elegans using synthetic training
|
5 figures
|
eLife 2021;10:e66410
|
10.7554/eLife.66410
| null |
q-bio.QM cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present an automated method to track and identify neurons in C. elegans,
called "fast Deep Learning Correspondence" or fDLC, based on the transformer
network architecture. The model is trained once on empirically derived
synthetic data and then predicts neural correspondence across held-out real
animals via transfer learning. The same pre-trained model both tracks neurons
across time and identifies corresponding neurons across individuals.
Performance is evaluated against hand-annotated datasets, including NeuroPAL
[1]. Using only position information, the method achieves 80.0% accuracy at
tracking neurons within an individual and 65.8% accuracy at identifying neurons
across individuals. Accuracy is even higher on a published dataset [2].
Accuracy reaches 76.5% when using color information from NeuroPAL. Unlike
previous methods, fDLC does not require straightening or transforming the
animal into a canonical coordinate system. The method is fast and predicts
correspondence in 10 ms, making it suitable for future real-time applications.
|
[
{
"created": "Wed, 20 Jan 2021 16:46:37 GMT",
"version": "v1"
}
] |
2021-07-16
|
[
[
"Yu",
"Xinwei",
""
],
[
"Creamer",
"Matthew S.",
""
],
[
"Randi",
"Francesco",
""
],
[
"Sharma",
"Anuj K.",
""
],
[
"Linderman",
"Scott W.",
""
],
[
"Leifer",
"Andrew M.",
""
]
] |
We present an automated method to track and identify neurons in C. elegans, called "fast Deep Learning Correspondence" or fDLC, based on the transformer network architecture. The model is trained once on empirically derived synthetic data and then predicts neural correspondence across held-out real animals via transfer learning. The same pre-trained model both tracks neurons across time and identifies corresponding neurons across individuals. Performance is evaluated against hand-annotated datasets, including NeuroPAL [1]. Using only position information, the method achieves 80.0% accuracy at tracking neurons within an individual and 65.8% accuracy at identifying neurons across individuals. Accuracy is even higher on a published dataset [2]. Accuracy reaches 76.5% when using color information from NeuroPAL. Unlike previous methods, fDLC does not require straightening or transforming the animal into a canonical coordinate system. The method is fast and predicts correspondence in 10 ms, making it suitable for future real-time applications.
|
q-bio/0508040
|
Chih-Yuan Tseng
|
Hung-I Pai, Chih-Yuan Tseng and HC Lee
|
Identifying Biomagnetic Sources in the Brain by the Maximum Entropy
Approach
|
8 pages, 8 figures. Presented at 25th International Workshop on
Bayesian Inference and Maximum Entropy Methods in Science and Engineering,
San Jose, CA, USA Aug 7-12, 2005
|
p. 527 in "Bayesian Inference and Maximum Entropy Methods in
Science and Engineering" ed. by K. H. Knuth, A. E. Abbda, R. D. Moriss, and
J. P. Castle (A.I.P. Vol. 803, 2005)
|
10.1063/1.2149834
| null |
q-bio.NC q-bio.QM
| null |
Magnetoencephalographic (MEG) measurements record magnetic fields generated
from neurons while information is being processed in the brain. The inverse
problem of identifying sources of biomagnetic fields and deducing their
intensities from MEG measurements is ill-posed when the number of field
detectors is far less than the number of sources. This problem is less severe
if there is already a reasonable prior knowledge in the form of a distribution
in the intensity of source activation. In this case the problem of identifying
and deducing source intensities may be transformed to one of using the MEG data
to update a prior distribution to a posterior distribution. Here we report on
some work done using the maximum entropy method (ME) as an updating tool.
Specifically, we propose an implementation of the ME method in cases when the
prior contains almost no knowledge of source activation. Two examples are
studied, in which part of motor cortex is activated with uniform and varying
intensities, respectively.
|
[
{
"created": "Mon, 29 Aug 2005 03:21:25 GMT",
"version": "v1"
}
] |
2009-11-11
|
[
[
"Pai",
"Hung-I",
""
],
[
"Tseng",
"Chih-Yuan",
""
],
[
"Lee",
"HC",
""
]
] |
Magnetoencephalographic (MEG) measurements record magnetic fields generated from neurons while information is being processed in the brain. The inverse problem of identifying sources of biomagnetic fields and deducing their intensities from MEG measurements is ill-posed when the number of field detectors is far less than the number of sources. This problem is less severe if there is already a reasonable prior knowledge in the form of a distribution in the intensity of source activation. In this case the problem of identifying and deducing source intensities may be transformed to one of using the MEG data to update a prior distribution to a posterior distribution. Here we report on some work done using the maximum entropy method (ME) as an updating tool. Specifically, we propose an implementation of the ME method in cases when the prior contains almost no knowledge of source activation. Two examples are studied, in which part of motor cortex is activated with uniform and varying intensities, respectively.
|
1612.05463
|
Thomas Gueudr\'e PhD
|
Thomas Gueudr\'e
|
Growth over time-correlated disorder: a spectral approach to Mean-field
|
10 pages + Appendix
|
Phys. Rev. E 95, 042134 (2017)
|
10.1103/PhysRevE.95.042134
| null |
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We generalize a model of growth over a disordered environment to a large
class of It\=o processes. In particular, we study how the microscopic
properties of the noise influence the macroscopic growth rate. The present
model can account for growth processes in large dimensions, and provides a
testbed for better understanding the trade-off between exploration and
exploitation. An additional mapping to the Schr\"odinger equation readily
provides a set of disorders for which this model can be solved exactly. This
mean-field approach exhibits interesting features, such as a freezing
transition and an optimal point of growth, that can be studied in detail, and
gives yet another explanation for the occurrence of the $\textit{Zipf law}$ in
complex, well-connected systems.
|
[
{
"created": "Fri, 16 Dec 2016 13:38:24 GMT",
"version": "v1"
}
] |
2017-04-26
|
[
[
"Gueudré",
"Thomas",
""
]
] |
We generalize a model of growth over a disordered environment to a large class of It\=o processes. In particular, we study how the microscopic properties of the noise influence the macroscopic growth rate. The present model can account for growth processes in large dimensions, and provides a testbed for better understanding the trade-off between exploration and exploitation. An additional mapping to the Schr\"odinger equation readily provides a set of disorders for which this model can be solved exactly. This mean-field approach exhibits interesting features, such as a freezing transition and an optimal point of growth, that can be studied in detail, and gives yet another explanation for the occurrence of the $\textit{Zipf law}$ in complex, well-connected systems.
|
2402.18583
|
Ling Yang
|
Zhilin Huang, Ling Yang, Zaixi Zhang, Xiangxin Zhou, Yu Bao, Xiawu
Zheng, Yuwei Yang, Yu Wang, Wenming Yang
|
Binding-Adaptive Diffusion Models for Structure-Based Drug Design
|
Accepted by AAAI 2024. Project:
https://github.com/YangLing0818/BindDM
| null | null | null |
q-bio.BM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Structure-based drug design (SBDD) aims to generate 3D ligand molecules that
bind to specific protein targets. Existing 3D deep generative models including
diffusion models have shown great promise for SBDD. However, it is complex to
capture the essential protein-ligand interactions exactly in 3D space for
molecular generation. To address this problem, we propose a novel framework,
namely Binding-Adaptive Diffusion Models (BindDM). In BindDM, we adaptively
extract the subcomplex, the essential part of binding sites responsible for
protein-ligand interactions. Then the selected protein-ligand subcomplex is
processed with SE(3)-equivariant neural networks, and transmitted back to each
atom of the complex for augmenting the target-aware 3D molecule diffusion
generation with binding interaction information. We iterate this hierarchical
complex-subcomplex process with a cross-hierarchy interaction node for adequately
fusing global binding context between the complex and its corresponding
subcomplex. Empirical studies on the CrossDocked2020 dataset show BindDM can
generate molecules with more realistic 3D structures and higher binding
affinities towards the protein targets, with up to -5.92 Avg. Vina Score, while
maintaining proper molecular properties. Our code is available at
https://github.com/YangLing0818/BindDM
|
[
{
"created": "Mon, 15 Jan 2024 00:34:00 GMT",
"version": "v1"
}
] |
2024-03-01
|
[
[
"Huang",
"Zhilin",
""
],
[
"Yang",
"Ling",
""
],
[
"Zhang",
"Zaixi",
""
],
[
"Zhou",
"Xiangxin",
""
],
[
"Bao",
"Yu",
""
],
[
"Zheng",
"Xiawu",
""
],
[
"Yang",
"Yuwei",
""
],
[
"Wang",
"Yu",
""
],
[
"Yang",
"Wenming",
""
]
] |
Structure-based drug design (SBDD) aims to generate 3D ligand molecules that bind to specific protein targets. Existing 3D deep generative models including diffusion models have shown great promise for SBDD. However, it is complex to capture the essential protein-ligand interactions exactly in 3D space for molecular generation. To address this problem, we propose a novel framework, namely Binding-Adaptive Diffusion Models (BindDM). In BindDM, we adaptively extract the subcomplex, the essential part of binding sites responsible for protein-ligand interactions. Then the selected protein-ligand subcomplex is processed with SE(3)-equivariant neural networks, and transmitted back to each atom of the complex for augmenting the target-aware 3D molecule diffusion generation with binding interaction information. We iterate this hierarchical complex-subcomplex process with a cross-hierarchy interaction node for adequately fusing global binding context between the complex and its corresponding subcomplex. Empirical studies on the CrossDocked2020 dataset show BindDM can generate molecules with more realistic 3D structures and higher binding affinities towards the protein targets, with up to -5.92 Avg. Vina Score, while maintaining proper molecular properties. Our code is available at https://github.com/YangLing0818/BindDM
|
2110.14602
|
Vitaly Vanchurin
|
Vitaly Vanchurin, Yuri I. Wolf, Mikhail I. Katsnelson, Eugene V.
Koonin
|
Towards a Theory of Evolution as Multilevel Learning
|
29 pages, 3 figures
| null |
10.1073/pnas.2120037119
| null |
q-bio.PE cond-mat.dis-nn cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We apply the theory of learning to physically renormalizable systems in an
attempt to develop a theory of biological evolution, including the origin of
life, as multilevel learning. We formulate seven fundamental principles of
evolution that appear to be necessary and sufficient to render a universe
observable and show that they entail the major features of biological
evolution, including replication and natural selection. These principles also
follow naturally from the theory of learning. We formulate the theory of
evolution using the mathematical framework of neural networks, which provides
for detailed analysis of evolutionary phenomena. To demonstrate the potential
of the proposed theoretical framework, we derive a generalized version of the
Central Dogma of molecular biology by analyzing the flow of information during
learning (back-propagation) and predicting (forward-propagation) the
environment by evolving organisms. The more complex evolutionary phenomena,
such as major transitions in evolution, in particular, the origin of life, have
to be analyzed in the thermodynamic limit, which is described in detail in the
accompanying paper.
|
[
{
"created": "Wed, 27 Oct 2021 17:21:16 GMT",
"version": "v1"
}
] |
2022-10-12
|
[
[
"Vanchurin",
"Vitaly",
""
],
[
"Wolf",
"Yuri I.",
""
],
[
"Katsnelson",
"Mikhail I.",
""
],
[
"Koonin",
"Eugene V.",
""
]
] |
We apply the theory of learning to physically renormalizable systems in an attempt to develop a theory of biological evolution, including the origin of life, as multilevel learning. We formulate seven fundamental principles of evolution that appear to be necessary and sufficient to render a universe observable and show that they entail the major features of biological evolution, including replication and natural selection. These principles also follow naturally from the theory of learning. We formulate the theory of evolution using the mathematical framework of neural networks, which provides for detailed analysis of evolutionary phenomena. To demonstrate the potential of the proposed theoretical framework, we derive a generalized version of the Central Dogma of molecular biology by analyzing the flow of information during learning (back-propagation) and predicting (forward-propagation) the environment by evolving organisms. The more complex evolutionary phenomena, such as major transitions in evolution, in particular, the origin of life, have to be analyzed in the thermodynamic limit, which is described in detail in the accompanying paper.
|
1308.6240
|
Wentian Li
|
Wentian Li, Jan Freudenberg, Pedro Miramontes
|
Diminishing Return for Increased Mappability with Longer Sequencing
Reads: Implications of the k-mer Distributions in the Human Genome
|
5 figures
|
BMC Bioinformatics, 15:2 (2014)
|
10.1186/1471-2105-15-2
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The amount of non-unique sequence (non-singletons) in a genome directly
affects the difficulty of read alignment to a reference assembly for high
throughput-sequencing data. Although a greater length increases the chance for
reads being uniquely mapped to the reference genome, a quantitative analysis of
the influence of read lengths on mappability has been lacking. To address this
question, we evaluate the k-mer distribution of the human reference genome. The
k-mer frequency is determined for k ranging from 20 to 1000 basepairs. We use
the proportion of non-singleton k-mers to evaluate the mappability of reads for
a corresponding read length. We observe that the proportion of non-singletons
decreases slowly with increasing k, and can be fitted by piecewise power-law
functions with different exponents at different k ranges. A faster decay at
smaller values for k indicates more limited gains for read lengths > 200
basepairs. The frequency distributions of k-mers exhibit long tails in a
power-law-like trend, and rank frequency plots exhibit a concave Zipf's curve.
The location of the most frequent 1000-mers comprises 172 kilobase-ranged
regions, including four large stretches on chromosomes 1 and X, containing
genes with biomedical implications. Even the read length 1000 would be
insufficient to reliably sequence these specific regions.
|
[
{
"created": "Wed, 28 Aug 2013 18:14:47 GMT",
"version": "v1"
}
] |
2017-03-03
|
[
[
"Li",
"Wentian",
""
],
[
"Freudenberg",
"Jan",
""
],
[
"Miramontes",
"Pedro",
""
]
] |
The amount of non-unique sequence (non-singletons) in a genome directly affects the difficulty of read alignment to a reference assembly for high throughput-sequencing data. Although a greater length increases the chance for reads being uniquely mapped to the reference genome, a quantitative analysis of the influence of read lengths on mappability has been lacking. To address this question, we evaluate the k-mer distribution of the human reference genome. The k-mer frequency is determined for k ranging from 20 to 1000 basepairs. We use the proportion of non-singleton k-mers to evaluate the mappability of reads for a corresponding read length. We observe that the proportion of non-singletons decreases slowly with increasing k, and can be fitted by piecewise power-law functions with different exponents at different k ranges. A faster decay at smaller values for k indicates more limited gains for read lengths > 200 basepairs. The frequency distributions of k-mers exhibit long tails in a power-law-like trend, and rank frequency plots exhibit a concave Zipf's curve. The location of the most frequent 1000-mers comprises 172 kilobase-ranged regions, including four large stretches on chromosomes 1 and X, containing genes with biomedical implications. Even the read length 1000 would be insufficient to reliably sequence these specific regions.
|
2302.04338
|
Viren Shah
|
Viren Shah, Justin Womack, Anthony E. Zamora, Scott S. Terhune, and
Ranjan K. Dash
|
Simulating the Evolution of Signaling Signatures during CART-Cell --
Tumor Cell Interactions
| null | null | null | null |
q-bio.MN
|
http://creativecommons.org/licenses/by/4.0/
|
Immunotherapies have been proven to have significant therapeutic efficacy in
the treatment of cancer. The last decade has seen adoptive cell therapies, such
as chimeric antigen receptor T-cell (CART-cell) therapy, gain FDA approval
against specific cancers. Additionally, there are numerous clinical trials
ongoing investigating additional designs and targets. Nevertheless, despite the
excitement and promising potential of CART-cell therapy, response rates to
therapy vary greatly between studies, patients, and cancers. There remains an
unmet need to develop computational frameworks that more accurately predict
CART-cell function and clinical efficacy. Here we present a coarse-grained
model simulated with logical rules that demonstrates the evolution of signaling
signatures following the interaction between CART-cells and tumor cells and
allows for in silico based prediction of CART-cell functionality prior to
experimentation.
|
[
{
"created": "Wed, 8 Feb 2023 21:10:58 GMT",
"version": "v1"
}
] |
2023-02-10
|
[
[
"Shah",
"Viren",
""
],
[
"Womack",
"Justin",
""
],
[
"Zamora",
"Anthony E.",
""
],
[
"Terhune",
"Scott S.",
""
],
[
"Dash",
"Ranjan K.",
""
]
] |
Immunotherapies have been proven to have significant therapeutic efficacy in the treatment of cancer. The last decade has seen adoptive cell therapies, such as chimeric antigen receptor T-cell (CART-cell) therapy, gain FDA approval against specific cancers. Additionally, there are numerous clinical trials ongoing investigating additional designs and targets. Nevertheless, despite the excitement and promising potential of CART-cell therapy, response rates to therapy vary greatly between studies, patients, and cancers. There remains an unmet need to develop computational frameworks that more accurately predict CART-cell function and clinical efficacy. Here we present a coarse-grained model simulated with logical rules that demonstrates the evolution of signaling signatures following the interaction between CART-cells and tumor cells and allows for in silico based prediction of CART-cell functionality prior to experimentation.
|
0801.0253
|
William Bialek
|
Greg J. Stephens and William Bialek
|
Toward a statistical mechanics of four letter words
| null | null |
10.1103/PhysRevE.81.066119
| null |
q-bio.NC cs.CL physics.data-an physics.soc-ph
| null |
We consider words as a network of interacting letters, and approximate the
probability distribution of states taken on by this network. Despite the
intuition that the rules of English spelling are highly combinatorial (and
arbitrary), we find that maximum entropy models consistent with pairwise
correlations among letters provide a surprisingly good approximation to the
full statistics of four letter words, capturing ~92% of the multi-information
among letters and even "discovering" real words that were not represented in
the data from which the pairwise correlations were estimated. The maximum
entropy model defines an energy landscape on the space of possible words, and
local minima in this landscape account for nearly two-thirds of words used in
written English.
|
[
{
"created": "Mon, 31 Dec 2007 23:51:51 GMT",
"version": "v1"
}
] |
2013-05-29
|
[
[
"Stephens",
"Greg J.",
""
],
[
"Bialek",
"William",
""
]
] |
We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial (and arbitrary), we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of four letter words, capturing ~92% of the multi-information among letters and even "discovering" real words that were not represented in the data from which the pairwise correlations were estimated. The maximum entropy model defines an energy landscape on the space of possible words, and local minima in this landscape account for nearly two-thirds of words used in written English.
|
1011.2939
|
Bob Eisenberg
|
Bob Eisenberg
|
From Structure to Function in Open Ionic Channels
|
Nearly final version of publication
|
Journal of Membrane Biol. 171, 1-24 (1999)
|
10.1007/s002329900554
| null |
q-bio.BM cond-mat.soft math-ph math.MP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a simple working hypothesis that all permeation properties of
open ionic channels can be predicted by understanding electrodiffusion in fixed
structures, without invoking conformation changes, or changes in chemical
bonds. We know, of course, that ions can bind to specific protein structures,
and that this binding is not easily described by the traditional electrostatic
equations of physics textbooks, that describe average electric fields, the
so-called `mean field'. The question is which specific properties can be
explained just by mean field electrostatics and which cannot. I believe the
best way to uncover the specific chemical properties of channels is to invoke
them as little as possible, seeking to explain with mean field electrostatics
first. Then, when phenomena appear that cannot be described that way, by the
mean field alone, we turn to chemically specific explanations, seeking the
appropriate tools (of electrochemistry, Langevin, or molecular dynamics, for
example) to understand them. In this spirit, we turn now to the structure of
open ionic channels, apply the laws of electrodiffusion to them, and see how
many of their properties we can predict just that way.
|
[
{
"created": "Fri, 12 Nov 2010 15:10:07 GMT",
"version": "v1"
}
] |
2015-03-17
|
[
[
"Eisenberg",
"Bob",
""
]
] |
We consider a simple working hypothesis that all permeation properties of open ionic channels can be predicted by understanding electrodiffusion in fixed structures, without invoking conformation changes, or changes in chemical bonds. We know, of course, that ions can bind to specific protein structures, and that this binding is not easily described by the traditional electrostatic equations of physics textbooks, that describe average electric fields, the so-called `mean field'. The question is which specific properties can be explained just by mean field electrostatics and which cannot. I believe the best way to uncover the specific chemical properties of channels is to invoke them as little as possible, seeking to explain with mean field electrostatics first. Then, when phenomena appear that cannot be described that way, by the mean field alone, we turn to chemically specific explanations, seeking the appropriate tools (of electrochemistry, Langevin, or molecular dynamics, for example) to understand them. In this spirit, we turn now to the structure of open ionic channels, apply the laws of electrodiffusion to them, and see how many of their properties we can predict just that way.
|
2312.15055
|
Kexuan Li
|
Kexuan Li
|
Deep Learning for Efficient GWAS Feature Selection
| null | null | null | null |
q-bio.GN cs.LG stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome-Wide Association Studies (GWAS) face unique challenges in the era of
big genomics data, particularly when dealing with ultra-high-dimensional
datasets where the number of genetic features significantly exceeds the
available samples. This paper introduces an extension to the feature selection
methodology proposed by Mirzaei et al. (2020), specifically tailored to tackle
the intricacies associated with ultra-high-dimensional GWAS data. Our extended
approach enhances the original method by introducing a Frobenius norm penalty
into the student network, augmenting its capacity to adapt to scenarios
characterized by a multitude of features and limited samples. Operating
seamlessly in both supervised and unsupervised settings, our method employs two
key neural networks. The first leverages an autoencoder or supervised
autoencoder for dimension reduction, extracting salient features from the
ultra-high-dimensional genomic data. The second network, a regularized
feed-forward model with a single hidden layer, is designed for precise feature
selection. The introduction of the Frobenius norm penalty in the student
network significantly boosts the method's resilience to the challenges posed by
ultra-high-dimensional GWAS datasets. Experimental results showcase the
efficacy of our approach in feature selection for GWAS data. The method not
only handles the inherent complexities of ultra-high-dimensional settings but
also demonstrates superior adaptability to the nuanced structures present in
genomics data. The flexibility and versatility of our proposed methodology are
underscored by its successful performance across a spectrum of experiments.
|
[
{
"created": "Fri, 22 Dec 2023 20:35:47 GMT",
"version": "v1"
}
] |
2023-12-27
|
[
[
"Li",
"Kexuan",
""
]
] |
Genome-Wide Association Studies (GWAS) face unique challenges in the era of big genomics data, particularly when dealing with ultra-high-dimensional datasets where the number of genetic features significantly exceeds the available samples. This paper introduces an extension to the feature selection methodology proposed by Mirzaei et al. (2020), specifically tailored to tackle the intricacies associated with ultra-high-dimensional GWAS data. Our extended approach enhances the original method by introducing a Frobenius norm penalty into the student network, augmenting its capacity to adapt to scenarios characterized by a multitude of features and limited samples. Operating seamlessly in both supervised and unsupervised settings, our method employs two key neural networks. The first leverages an autoencoder or supervised autoencoder for dimension reduction, extracting salient features from the ultra-high-dimensional genomic data. The second network, a regularized feed-forward model with a single hidden layer, is designed for precise feature selection. The introduction of the Frobenius norm penalty in the student network significantly boosts the method's resilience to the challenges posed by ultra-high-dimensional GWAS datasets. Experimental results showcase the efficacy of our approach in feature selection for GWAS data. The method not only handles the inherent complexities of ultra-high-dimensional settings but also demonstrates superior adaptability to the nuanced structures present in genomics data. The flexibility and versatility of our proposed methodology are underscored by its successful performance across a spectrum of experiments.
|
1612.02116
|
Tarunendu Mapder
|
Tarunendu Mapder
|
Signal Manifestation Trade-offs in Incoherent Feed-Forward Loops
|
10 pages, 4 figures
| null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Signal processing in biological systems is delicately executed by specialised
networks, which are modular assemblies of network motifs. The motifs are
independently functional circuits found in enormous numbers in any living cell.
A very common network motif is the feed-forward loop (FFL), which regulates a
downstream node by an upstream one in a direct and an indirect way within the
network. If the direct and indirect regulations go antagonistic, the motif is
known as an incoherent FFL (ICFFL). The current study is aimed at exploring the
reason for the variation in the evolutionary selection of the four types of
ICFFLs. As comparative measures, I compute sensitivity amplification,
adaptation precision and efficiency from the temporal dynamics and mutual
information between the input-output nodes of the motifs at steady state. The
ICFFL II performs very efficiently in adaptation but poorly in information
processing. On the other hand, ICFFLs I and III are better in information
transmission than in adaptation efficiency. Which is the fittest among them
under the pressure of natural selection? To resolve this puzzle, I turn to
multi-objective Pareto efficiency. The results, found in the Pareto
task space, are in good agreement with the reported abundance levels of all the
types in eukaryotes as well as prokaryotes.
|
[
{
"created": "Wed, 7 Dec 2016 05:18:04 GMT",
"version": "v1"
}
] |
2016-12-08
|
[
[
"Mapder",
"Tarunendu",
""
]
] |
Signal processing in biological systems is delicately executed by specialised networks, which are modular assemblies of network motifs. The motifs are independently functional circuits found in enormous numbers in any living cell. A very common network motif is the feed-forward loop (FFL), which regulates a downstream node by an upstream one in a direct and an indirect way within the network. If the direct and indirect regulations go antagonistic, the motif is known as an incoherent FFL (ICFFL). The current study is aimed at exploring the reason for the variation in the evolutionary selection of the four types of ICFFLs. As comparative measures, I compute sensitivity amplification, adaptation precision and efficiency from the temporal dynamics and mutual information between the input-output nodes of the motifs at steady state. The ICFFL II performs very efficiently in adaptation but poorly in information processing. On the other hand, ICFFLs I and III are better in information transmission than in adaptation efficiency. Which is the fittest among them under the pressure of natural selection? To resolve this puzzle, I turn to multi-objective Pareto efficiency. The results, found in the Pareto task space, are in good agreement with the reported abundance levels of all the types in eukaryotes as well as prokaryotes.
|
1011.2699
|
Sang Hoon Lee
|
Sang Hoon Lee, Pan-Jun Kim, Hawoong Jeong
|
Global organization of protein complexome in the yeast Saccharomyces
cerevisiae
|
48 pages, 6 figures, 3 tables, 8 additional files (3 supporting
tables and 5 supporting figures) on the Web
|
BMC Syst. Biol. 5, 126 (2011)
|
10.1186/1752-0509-5-126
| null |
q-bio.QM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proteins in organisms, rather than acting alone, usually form protein complexes
to perform cellular functions. We analyze the topological network structure of
protein complexes and their component proteins in the budding yeast in terms of
the bipartite network and its projections, where the complexes and proteins are
its two distinct components. Compared to conventional protein-protein
interaction networks, the networks from the protein complexes show more
homogeneous structures than those of the binary protein interactions, implying
the formation of complexes that cause a relatively more uniform number of
interaction partners. In addition, we suggest a new optimization method to
determine the abundance and function of protein complexes, based on the
information of their global organization. Estimating abundance and biological
functions is of great importance for many research areas, providing a
quantitative description of cell behaviors instead of just a "catalogue" of
protein interactions. With our new optimization method, we present
genome-wide assignments of abundance and biological functions for complexes, as
well as previously unknown abundance and functions of proteins, which can
provide significant information for further investigations in proteomics. It is
strongly supported by a number of biologically relevant examples, such as the
relationship between the cytoskeleton proteins and signal transduction and the
metabolic enzyme Eno2's involvement in the cell division process. We believe
that our methods and findings are applicable not only to the specific area of
proteomics, but also to much broader areas of systems biology through the
concept of an optimization principle.
|
[
{
"created": "Thu, 11 Nov 2010 16:18:03 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Apr 2011 18:32:49 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Aug 2011 17:14:49 GMT",
"version": "v3"
},
{
"created": "Mon, 15 Aug 2011 10:35:04 GMT",
"version": "v4"
}
] |
2011-08-16
|
[
[
"Lee",
"Sang Hoon",
""
],
[
"Kim",
"Pan-Jun",
""
],
[
"Jeong",
"Hawoong",
""
]
] |
Proteins in organisms, rather than acting alone, usually form protein complexes to perform cellular functions. We analyze the topological network structure of protein complexes and their component proteins in the budding yeast in terms of the bipartite network and its projections, where the complexes and proteins are its two distinct components. Compared to conventional protein-protein interaction networks, the networks from the protein complexes show more homogeneous structures than those of the binary protein interactions, implying the formation of complexes that cause a relatively more uniform number of interaction partners. In addition, we suggest a new optimization method to determine the abundance and function of protein complexes, based on the information of their global organization. Estimating abundance and biological functions is of great importance for much research, as it provides a quantitative description of cell behaviors, instead of just a "catalogue" of the lists of protein interactions. With our new optimization method, we present genome-wide assignments of abundance and biological functions for complexes, as well as previously unknown abundance and functions of proteins, which can provide significant information for further investigations in proteomics. It is strongly supported by a number of biologically relevant examples, such as the relationship between the cytoskeleton proteins and signal transduction and the metabolic enzyme Eno2's involvement in the cell division process. We believe that our methods and findings are applicable not only to the specific area of proteomics, but also to much broader areas of systems biology with the concept of the optimization principle.
|
1010.2829
|
Andrew Noble
|
Andrew E. Noble, Nico M. Temme, William F. Fagan, Timothy H. Keitt
|
A sampling theory for asymmetric communities
|
46 pages, 3 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first analytical model of asymmetric community dynamics to
yield Hubbell's neutral theory in the limit of functional equivalence among all
species. Our focus centers on an asymmetric extension of Hubbell's local
community dynamics, while an analogous extension of Hubbell's metacommunity
dynamics is deferred to an appendix. We find that mass-effects may facilitate
coexistence in asymmetric local communities and generate unimodal species
abundance distributions indistinguishable from those of symmetric communities.
Multiple modes, however, only arise from asymmetric processes and provide a
strong indication of non-neutral dynamics. Although the exact stationary
distributions of fully asymmetric communities must be calculated numerically,
we derive approximate sampling distributions for the general case and for
nearly neutral communities where symmetry is broken by a single species
distinct from all others in ecological fitness and dispersal ability. In the
latter case, our approximate distributions are fully normalized, and novel
asymptotic expansions of the required hypergeometric functions are provided to
make evaluations tractable for large communities. Employing these results in a
Bayesian analysis may provide a novel statistical test to assess the
consistency of species abundance data with the neutral hypothesis.
|
[
{
"created": "Thu, 14 Oct 2010 05:17:57 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Dec 2010 02:31:07 GMT",
"version": "v2"
}
] |
2010-12-06
|
[
[
"Noble",
"Andrew E.",
""
],
[
"Temme",
"Nico M.",
""
],
[
"Fagan",
"William F.",
""
],
[
"Keitt",
"Timothy H.",
""
]
] |
We introduce the first analytical model of asymmetric community dynamics to yield Hubbell's neutral theory in the limit of functional equivalence among all species. Our focus centers on an asymmetric extension of Hubbell's local community dynamics, while an analogous extension of Hubbell's metacommunity dynamics is deferred to an appendix. We find that mass-effects may facilitate coexistence in asymmetric local communities and generate unimodal species abundance distributions indistinguishable from those of symmetric communities. Multiple modes, however, only arise from asymmetric processes and provide a strong indication of non-neutral dynamics. Although the exact stationary distributions of fully asymmetric communities must be calculated numerically, we derive approximate sampling distributions for the general case and for nearly neutral communities where symmetry is broken by a single species distinct from all others in ecological fitness and dispersal ability. In the latter case, our approximate distributions are fully normalized, and novel asymptotic expansions of the required hypergeometric functions are provided to make evaluations tractable for large communities. Employing these results in a Bayesian analysis may provide a novel statistical test to assess the consistency of species abundance data with the neutral hypothesis.
|
q-bio/0402018
|
Peng-Ye Wang
|
Ping Xie, Shuo-Xing Dou, Peng-Ye Wang
|
Dynamics of heterodimeric kinesins and cooperation of kinesins
|
18 pages, 5 figures
| null | null | null |
q-bio.BM
| null |
Using the model for the processive movement of a dimeric kinesin we proposed
before, we study the dynamics of a number of mutant homodimeric and
heterodimeric kinesins that were constructed by Kaseda et al. (Kaseda, K.,
Higuchi, H. and Hirose, K. PNAS 99, 16058 (2002)). The theoretical results of
ATPase rate per head, moving velocity, and stall force of the motors show good
agreement with the experimental results by Kaseda et al.: The puzzling dynamic
behaviors of heterodimeric kinesin that consists of two distinct heads compared
with its parent homodimers can be easily explained by using independent ATPase
rates of the two heads in our model. We also study the collective kinetic
behaviors of kinesins in MT-gliding motility. The results explain well that
the average MT-gliding velocity is independent of the number of bound motors
and is equal to the moving velocity of a single kinesin relative to MT.
|
[
{
"created": "Mon, 9 Feb 2004 05:32:00 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Xie",
"Ping",
""
],
[
"Dou",
"Shuo-Xing",
""
],
[
"Wang",
"Peng-Ye",
""
]
] |
Using the model for the processive movement of a dimeric kinesin we proposed before, we study the dynamics of a number of mutant homodimeric and heterodimeric kinesins that were constructed by Kaseda et al. (Kaseda, K., Higuchi, H. and Hirose, K. PNAS 99, 16058 (2002)). The theoretical results of ATPase rate per head, moving velocity, and stall force of the motors show good agreement with the experimental results by Kaseda et al.: The puzzling dynamic behaviors of heterodimeric kinesin that consists of two distinct heads compared with its parent homodimers can be easily explained by using independent ATPase rates of the two heads in our model. We also study the collective kinetic behaviors of kinesins in MT-gliding motility. The results explain well that the average MT-gliding velocity is independent of the number of bound motors and is equal to the moving velocity of a single kinesin relative to MT.
|
1404.4005
|
Premal Shah
|
Premal Shah, David M. McCandlish and Joshua B. Plotkin
|
Historical contingency and entrenchment in protein evolution under
purifying selection
|
42 pages, 13 figures
| null |
10.1073/pnas.1412933112
| null |
q-bio.PE q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The fitness contribution of an allele at one genetic site may depend on
alleles at other sites, a phenomenon known as epistasis. Epistasis can
profoundly influence the process of evolution in populations under selection,
and can shape the course of protein evolution across divergent species. Whereas
epistasis between adaptive substitutions has been the subject of extensive
study, relatively little is known about epistasis under purifying selection.
Here we use mechanistic models of thermodynamic stability in a ligand-binding
protein to explore the structure of epistatic interactions between
substitutions that fix in protein sequences under purifying selection. We find
that the selection coefficients of mutations that are nearly-neutral when they
fix are highly contingent on the presence of preceding mutations. Conversely,
mutations that are nearly-neutral when they fix are subsequently entrenched due
to epistasis with later substitutions. Our evolutionary model includes
insertions and deletions, as well as point mutations, and so it allows us to
quantify epistasis within each of these classes of mutations, and also to study
the evolution of protein length. We find that protein length remains largely
constant over time, because indels are more deleterious than point mutations.
Our results imply that, even under purifying selection, protein sequence
evolution is highly contingent on history and so it cannot be predicted by the
phenotypic effects of mutations assayed in the wild-type sequence.
|
[
{
"created": "Tue, 15 Apr 2014 18:18:39 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Apr 2014 17:03:56 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Jul 2014 19:36:56 GMT",
"version": "v3"
}
] |
2015-06-11
|
[
[
"Shah",
"Premal",
""
],
[
"McCandlish",
"David M.",
""
],
[
"Plotkin",
"Joshua B.",
""
]
] |
The fitness contribution of an allele at one genetic site may depend on alleles at other sites, a phenomenon known as epistasis. Epistasis can profoundly influence the process of evolution in populations under selection, and can shape the course of protein evolution across divergent species. Whereas epistasis between adaptive substitutions has been the subject of extensive study, relatively little is known about epistasis under purifying selection. Here we use mechanistic models of thermodynamic stability in a ligand-binding protein to explore the structure of epistatic interactions between substitutions that fix in protein sequences under purifying selection. We find that the selection coefficients of mutations that are nearly-neutral when they fix are highly contingent on the presence of preceding mutations. Conversely, mutations that are nearly-neutral when they fix are subsequently entrenched due to epistasis with later substitutions. Our evolutionary model includes insertions and deletions, as well as point mutations, and so it allows us to quantify epistasis within each of these classes of mutations, and also to study the evolution of protein length. We find that protein length remains largely constant over time, because indels are more deleterious than point mutations. Our results imply that, even under purifying selection, protein sequence evolution is highly contingent on history and so it cannot be predicted by the phenotypic effects of mutations assayed in the wild-type sequence.
|
q-bio/0701028
|
Francesco Pederiva
|
M. Sega, P. Faccioli, F. Pederiva, G. Garberoglio, H. Orland
|
Quantitative Protein Dynamics from Dominant Folding Pathways
|
4 pages, 1 figure
| null |
10.1103/PhysRevLett.99.118102
| null |
q-bio.QM cond-mat.soft q-bio.BM
| null |
We develop a theoretical approach to the protein folding problem based on
out-of-equilibrium stochastic dynamics. Within this framework, the
computational difficulties related to the existence of large time scale gaps in
the protein folding problem are removed, and simulating the entire reaction in
atomistic detail using existing computers becomes feasible. In addition, this
formalism provides a natural framework to investigate the relationships between
thermodynamical and kinetic aspects of the folding. For example, it is possible
to show that, in order to have a large probability to remain unchanged under
Langevin diffusion, the native state has to be characterized by a small
conformational entropy. We discuss how to determine the most probable folding
pathway, to identify configurations representative of the transition state and
to compute the most probable transition time. We perform an illustrative
application of these ideas, studying the conformational evolution of alanine
di-peptide, within an all-atom model based on the empirical GROMOS96 force field.
|
[
{
"created": "Thu, 18 Jan 2007 10:18:25 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Sega",
"M.",
""
],
[
"Faccioli",
"P.",
""
],
[
"Pederiva",
"F.",
""
],
[
"Garberoglio",
"G.",
""
],
[
"Orland",
"H.",
""
]
] |
We develop a theoretical approach to the protein folding problem based on out-of-equilibrium stochastic dynamics. Within this framework, the computational difficulties related to the existence of large time scale gaps in the protein folding problem are removed, and simulating the entire reaction in atomistic detail using existing computers becomes feasible. In addition, this formalism provides a natural framework to investigate the relationships between thermodynamical and kinetic aspects of the folding. For example, it is possible to show that, in order to have a large probability to remain unchanged under Langevin diffusion, the native state has to be characterized by a small conformational entropy. We discuss how to determine the most probable folding pathway, to identify configurations representative of the transition state and to compute the most probable transition time. We perform an illustrative application of these ideas, studying the conformational evolution of alanine di-peptide, within an all-atom model based on the empirical GROMOS96 force field.
|
0807.1059
|
Michel Aoun
|
Michel Aoun, Jean-Yves Cabon, Annick Hourmant
|
Potential Phytoextraction with in-vitro regenerated plantlets of
Brassica juncea (L.) Czern. in presence of CdCl$_2$: Cadmium accumulation and
physiological parameter measurement
|
12 pages, 2 figures and 2 tables
| null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heavy metal contamination of agricultural land is partly responsible for
limiting crop productivity. Cd$^{2+}$ is known as a non-essential HM that can
be harmful to plants even at low concentrations. Brassica juncea (L.) is able
to accumulate more than 400 $\mu$g.g$^{-1}$ D.W. in the shoot, a physiological
trait which may be exploited for the phytoremediation of contaminated soils and
waters. The application of 75 $\mu$M CdCl$_2$ for three days shows no effect
on the B. juncea growth parameters (F.W. and D.W.), whatever the type of
plantlets. This application also decreases the contents of chlorophyll a,
carotenoids and the Chl a/b ratio (2.26) for plantlets regenerated in the absence
of CdCl$_2$, but not those of plantlets regenerated in its presence. Roots have
the highest contents (3071; 1544 $\mu$g.g$^{-1}$ D.W.) followed by stems (850;
687$\mu$g.g$^{-1}$ D.W.) and leaves (463; 264$\mu$g.g$^{-1}$ D.W.)
respectively. Under our conditions, we suggest that the low accumulation in the
plantlets regenerated in the presence of CdCl$_2$ by means of in-vitro
regeneration technology is still beneficial, to some extent, for the
phytoextraction process and seems to be an interesting technology that allows
the cultivation of these plantlets in contaminated soils with low accumulation
of metal in their shoots and probably in their seeds used in many food
technologies.
|
[
{
"created": "Mon, 7 Jul 2008 16:13:56 GMT",
"version": "v1"
}
] |
2008-07-08
|
[
[
"Aoun",
"Michel",
""
],
[
"Cabon",
"Jean-Yves",
""
],
[
"Hourmant",
"Annick",
""
]
] |
Heavy metal contamination of agricultural land is partly responsible for limiting crop productivity. Cd$^{2+}$ is known as a non-essential HM that can be harmful to plants even at low concentrations. Brassica juncea (L.) is able to accumulate more than 400 $\mu$g.g$^{-1}$ D.W. in the shoot, a physiological trait which may be exploited for the phytoremediation of contaminated soils and waters. The application of 75 $\mu$M CdCl$_2$ for three days shows no effect on the B. juncea growth parameters (F.W. and D.W.), whatever the type of plantlets. This application also decreases the contents of chlorophyll a, carotenoids and the Chl a/b ratio (2.26) for plantlets regenerated in the absence of CdCl$_2$, but not those of plantlets regenerated in its presence. Roots have the highest contents (3071; 1544 $\mu$g.g$^{-1}$ D.W.) followed by stems (850; 687$\mu$g.g$^{-1}$ D.W.) and leaves (463; 264$\mu$g.g$^{-1}$ D.W.) respectively. Under our conditions, we suggest that the low accumulation in the plantlets regenerated in the presence of CdCl$_2$ by means of in-vitro regeneration technology is still beneficial, to some extent, for the phytoextraction process and seems to be an interesting technology that allows the cultivation of these plantlets in contaminated soils with low accumulation of metal in their shoots and probably in their seeds used in many food technologies.
|
1903.10131
|
Zachary Kilpatrick PhD
|
Nicholas W. Barendregt, Kre\v{s}imir Josi\'c, and Zachary P.
Kilpatrick
|
Analyzing dynamic decision-making models using Chapman-Kolmogorov
equations
|
24 pages, 9 figures
| null | null | null |
q-bio.NC math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision-making in dynamic environments typically requires adaptive evidence
accumulation that weights new evidence more heavily than old observations.
Recent experimental studies of dynamic decision tasks require subjects to make
decisions for which the correct choice switches stochastically throughout a
single trial. In such cases, an ideal observer's belief is described by an
evolution equation that is doubly stochastic, reflecting stochasticity in the
both observations and environmental changes. In these contexts, we show that
the probability density of the belief can be represented using differential
Chapman-Kolmogorov equations, allowing efficient computation of ensemble
statistics. This allows us to reliably compare normative models to
near-normative approximations using, as model performance metrics, decision
response accuracy and Kullback-Leibler divergence of the belief distributions.
Such belief distributions could be obtained empirically from subjects by asking
them to report their decision confidence. We also study how response accuracy
is affected by additional internal noise, showing optimality requires longer
integration timescales as more noise is added. Lastly, we demonstrate that our
method can be applied to tasks in which evidence arrives in a discrete,
pulsatile fashion, rather than continuously.
|
[
{
"created": "Mon, 25 Mar 2019 04:29:34 GMT",
"version": "v1"
}
] |
2019-03-26
|
[
[
"Barendregt",
"Nicholas W.",
""
],
[
"Josić",
"Krešimir",
""
],
[
"Kilpatrick",
"Zachary P.",
""
]
] |
Decision-making in dynamic environments typically requires adaptive evidence accumulation that weights new evidence more heavily than old observations. Recent experimental studies of dynamic decision tasks require subjects to make decisions for which the correct choice switches stochastically throughout a single trial. In such cases, an ideal observer's belief is described by an evolution equation that is doubly stochastic, reflecting stochasticity in both the observations and environmental changes. In these contexts, we show that the probability density of the belief can be represented using differential Chapman-Kolmogorov equations, allowing efficient computation of ensemble statistics. This allows us to reliably compare normative models to near-normative approximations using, as model performance metrics, decision response accuracy and Kullback-Leibler divergence of the belief distributions. Such belief distributions could be obtained empirically from subjects by asking them to report their decision confidence. We also study how response accuracy is affected by additional internal noise, showing optimality requires longer integration timescales as more noise is added. Lastly, we demonstrate that our method can be applied to tasks in which evidence arrives in a discrete, pulsatile fashion, rather than continuously.
|
2212.12542
|
Antoine Villie
|
Antoine Villi\'e, Philippe Veber, Yohann de Castro, Laurent Jacob
|
Neural Networks beyond explainability: Selective inference for sequence
motifs
| null | null | null | null |
q-bio.GN cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Over the past decade, neural networks have been successful at making
predictions from biological sequences, especially in the context of regulatory
genomics. As in other fields of deep learning, tools have been devised to
extract features such as sequence motifs that can explain the predictions made
by a trained network. Here we intend to go beyond explainable machine learning
and introduce SEISM, a selective inference procedure to test the association
between these extracted features and the predicted phenotype. In particular, we
discuss how training a one-layer convolutional network is formally equivalent
to selecting motifs maximizing some association score. We adapt existing
sampling-based selective inference procedures by quantizing this selection over
an infinite set to a large but finite grid. Finally, we show that sampling
under a specific choice of parameters is sufficient to characterize the
composite null hypothesis typically used for selective inference, a result that
goes well beyond our particular framework. We illustrate the behavior of our
method in terms of calibration, power and speed and discuss its power/speed
trade-off with a simpler data-split strategy. SEISM paves the way to an easier
analysis of neural networks used in regulatory genomics, and to more powerful
methods for genome wide association studies (GWAS).
|
[
{
"created": "Fri, 23 Dec 2022 10:49:07 GMT",
"version": "v1"
}
] |
2022-12-27
|
[
[
"Villié",
"Antoine",
""
],
[
"Veber",
"Philippe",
""
],
[
"de Castro",
"Yohann",
""
],
[
"Jacob",
"Laurent",
""
]
] |
Over the past decade, neural networks have been successful at making predictions from biological sequences, especially in the context of regulatory genomics. As in other fields of deep learning, tools have been devised to extract features such as sequence motifs that can explain the predictions made by a trained network. Here we intend to go beyond explainable machine learning and introduce SEISM, a selective inference procedure to test the association between these extracted features and the predicted phenotype. In particular, we discuss how training a one-layer convolutional network is formally equivalent to selecting motifs maximizing some association score. We adapt existing sampling-based selective inference procedures by quantizing this selection over an infinite set to a large but finite grid. Finally, we show that sampling under a specific choice of parameters is sufficient to characterize the composite null hypothesis typically used for selective inference, a result that goes well beyond our particular framework. We illustrate the behavior of our method in terms of calibration, power and speed and discuss its power/speed trade-off with a simpler data-split strategy. SEISM paves the way to an easier analysis of neural networks used in regulatory genomics, and to more powerful methods for genome wide association studies (GWAS).
|
2303.04902
|
Yamin Li
|
Yamin Li, Saishuang Wu, Jiayang Xu, Haiwa Wang, Qi Zhu, Wen Shi, Yue
Fang, Fan Jiang, Shanbao Tong, Yunting Zhang, Xiaoli Guo
|
Inter-brain substrates of role switching during mother-child interaction
| null | null |
10.1002/hbm.26672
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mother-child interaction is highly dynamic and reciprocal. Switching roles in
these back-and-forth interactions serves as a crucial feature of reciprocal
behaviors while the underlying neural entrainment is still not well-studied.
Here, we designed a role-controlled cooperative task with dual EEG recording to
study how differently two brains interact when mothers and children hold
different roles. When children were actors and mothers were observers,
mother-child inter-brain synchrony emerged within the theta oscillations and
the frontal lobe, which highly correlated with children's attachment to their
mothers. When their roles were reversed, this synchrony was shifted to the
alpha oscillations and the central area and associated with mothers' perception
of their relationship with their children. The results suggested an
observer-actor neural alignment within the actor's oscillations, which was
modulated by the actor-toward-observer emotional bonding. Our findings
contribute to the understanding of how inter-brain synchrony is established and
dynamically changed during mother-child reciprocal interaction.
|
[
{
"created": "Wed, 8 Mar 2023 21:43:26 GMT",
"version": "v1"
}
] |
2024-04-04
|
[
[
"Li",
"Yamin",
""
],
[
"Wu",
"Saishuang",
""
],
[
"Xu",
"Jiayang",
""
],
[
"Wang",
"Haiwa",
""
],
[
"Zhu",
"Qi",
""
],
[
"Shi",
"Wen",
""
],
[
"Fang",
"Yue",
""
],
[
"Jiang",
"Fan",
""
],
[
"Tong",
"Shanbao",
""
],
[
"Zhang",
"Yunting",
""
],
[
"Guo",
"Xiaoli",
""
]
] |
Mother-child interaction is highly dynamic and reciprocal. Switching roles in these back-and-forth interactions serves as a crucial feature of reciprocal behaviors while the underlying neural entrainment is still not well-studied. Here, we designed a role-controlled cooperative task with dual EEG recording to study how differently two brains interact when mothers and children hold different roles. When children were actors and mothers were observers, mother-child inter-brain synchrony emerged within the theta oscillations and the frontal lobe, which highly correlated with children's attachment to their mothers. When their roles were reversed, this synchrony was shifted to the alpha oscillations and the central area and associated with mothers' perception of their relationship with their children. The results suggested an observer-actor neural alignment within the actor's oscillations, which was modulated by the actor-toward-observer emotional bonding. Our findings contribute to the understanding of how inter-brain synchrony is established and dynamically changed during mother-child reciprocal interaction.
|
1402.4824
|
Sergio G\'omez
|
Sara Teller, Clara Granell, Manlio De Domenico, Jordi Soriano, Sergio
Gomez, Alex Arenas
|
Emergence of assortative mixing between clusters of cultured neurons
|
33 pages, 10 figures
|
PLOS Comput. Biol. 10(9) (2014) e1003796
|
10.1371/journal.pcbi.1003796
| null |
q-bio.NC cond-mat.dis-nn physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of the activity of neuronal cultures is considered to be a good
proxy of the functional connectivity of in vivo neuronal tissues. Thus, the
functional complex network inferred from activity patterns is a promising way
to unravel the interplay between structure and functionality of neuronal
systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal
cultures formed by interconnected aggregates of neurons (clusters). Dynamics is
characterized by the fast activation of groups of clusters in sequences termed
bursts. The analysis of the time delays between clusters' activations within
the bursts allows the reconstruction of the directed functional connectivity of
the network. We propose a method to statistically infer this connectivity and
analyze the resulting properties of the associated complex networks.
Surprisingly enough, in contrast to what has been reported for many biological
networks, the clustered neuronal cultures present assortative mixing
connectivity values, as well as a rich-club core, meaning that there is a
preference for clusters to link to other clusters that share similar functional
connectivity, which shapes a `connectivity backbone' in the network. These
results point out that the grouping of neurons and the assortative connectivity
between clusters are intrinsic survival mechanisms of the culture.
|
[
{
"created": "Wed, 19 Feb 2014 21:05:25 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jul 2014 19:13:46 GMT",
"version": "v2"
}
] |
2014-09-09
|
[
[
"Teller",
"Sara",
""
],
[
"Granell",
"Clara",
""
],
[
"De Domenico",
"Manlio",
""
],
[
"Soriano",
"Jordi",
""
],
[
"Gomez",
"Sergio",
""
],
[
"Arenas",
"Alex",
""
]
] |
The analysis of the activity of neuronal cultures is considered to be a good proxy of the functional connectivity of in vivo neuronal tissues. Thus, the functional complex network inferred from activity patterns is a promising way to unravel the interplay between structure and functionality of neuronal systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal cultures formed by interconnected aggregates of neurons (clusters). Dynamics is characterized by the fast activation of groups of clusters in sequences termed bursts. The analysis of the time delays between clusters' activations within the bursts allows the reconstruction of the directed functional connectivity of the network. We propose a method to statistically infer this connectivity and analyze the resulting properties of the associated complex networks. Surprisingly enough, in contrast to what has been reported for many biological networks, the clustered neuronal cultures present assortative mixing connectivity values, as well as a rich-club core, meaning that there is a preference for clusters to link to other clusters that share similar functional connectivity, which shapes a `connectivity backbone' in the network. These results point out that the grouping of neurons and the assortative connectivity between clusters are intrinsic survival mechanisms of the culture.
|
0910.2660
|
John Hopfield
|
J. J. Hopfield and Carlos D. Brody
|
Sequence reproduction, single trial learning, and mimicry based on a
mammalian-like distributed code for time
|
18 pages
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Animals learn tasks requiring a sequence of actions over time. Waiting a
given time before taking an action is a simple example. Mimicry is a complex
example, e.g. in humans, humming a brief tune you have just heard.
Re-experiencing a sensory pattern mentally must involve reproducing a sequence
of neural activities over time. In mammals, neurons in prefrontal cortex have
time-dependent firing rates that vary smoothly and slowly in a stereotyped
fashion. We show through modeling that a Many are Equal computation can use
such slowly-varying activities to identify each timepoint in a sequence by the
population pattern of activity at the timepoint. The MAE operation implemented
here is facilitated by a common inhibitory conductivity due to a theta rhythm.
Sequences of analog values of discrete events, exemplified by a brief tune
having notes of different durations and intensities, can be learned in a single
trial through STDP. An action sequence can be played back sped up, slowed down,
or reversed by modulating the system that generates the slowly changing
stereotyped activities. Synaptic adaptation and cellular post-hyperpolarization
rebound contribute to robustness. An ability to mimic a sequence only seconds
after observing it requires the STDP to be effective within seconds.
|
[
{
"created": "Wed, 14 Oct 2009 16:12:16 GMT",
"version": "v1"
}
] |
2009-10-15
|
[
[
"Hopfield",
"J. J.",
""
],
[
"Brody",
"Carlos D.",
""
]
] |
Animals learn tasks requiring a sequence of actions over time. Waiting a given time before taking an action is a simple example. Mimicry is a complex example, e.g. in humans, humming a brief tune you have just heard. Re-experiencing a sensory pattern mentally must involve reproducing a sequence of neural activities over time. In mammals, neurons in prefrontal cortex have time-dependent firing rates that vary smoothly and slowly in a stereotyped fashion. We show through modeling that a Many are Equal computation can use such slowly-varying activities to identify each timepoint in a sequence by the population pattern of activity at the timepoint. The MAE operation implemented here is facilitated by a common inhibitory conductivity due to a theta rhythm. Sequences of analog values of discrete events, exemplified by a brief tune having notes of different durations and intensities, can be learned in a single trial through STDP. An action sequence can be played back sped up, slowed down, or reversed by modulating the system that generates the slowly changing stereotyped activities. Synaptic adaptation and cellular post-hyperpolarization rebound contribute to robustness. An ability to mimic a sequence only seconds after observing it requires the STDP to be effective within seconds.
|
2104.08334
|
Cem \"Ozel
|
Cem \"Ozel, Muharrem Erdem Bo\u{g}o\c{c}lu, Ceren Ke\c{c}eciler, Ecem
Kaplan and Sevil Y\"ucel
|
Utilization of the simulated flue gas on the cultivation of Chlorella
protothecoides
|
6 pages, 6 figures
|
Journal of the Indian Chemical Society (2019), 96, 1137-1142
| null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, fossil-based fuels have been used to supply the energy needs
of the world. Fossil-based fuels induce accumulation of atmospheric CO2, which
causes global warming. One CO2 source is flue gas emission from power plants.
Microalgae have been considered excellent biological materials for CO2
reduction owing to their photosynthetic ability. In this study, air, 10% CO2,
15% CO2 and simulated flue gas (containing 15% CO2) feeds were used to observe
the effect of CO2 concentration and flue gas on cell growth and lipid content
of Chlorella protothecoides. The highest dry cell weight (1.5 g/L) and lipid
content (45%) values were obtained with the 15% CO2 feed, while the highest
growth rate (1.10), biomass productivity (0.125 g/L/day), and lipid weight
(0.63 g/g) were observed with the 10% CO2 feed. Flue gas feed did not inhibit
C. protothecoides growth and gave results similar to those of the 15% CO2 feed
in terms of growth rate, dry cell weight, biomass productivity and lipid
content. These results show that C. protothecoides has great potential for
reducing CO2 emissions from flue gas.
|
[
{
"created": "Fri, 16 Apr 2021 19:31:43 GMT",
"version": "v1"
}
] |
2021-04-20
|
[
[
"Özel",
"Cem",
""
],
[
"Boğoçlu",
"Muharrem Erdem",
""
],
[
"Keçeciler",
"Ceren",
""
],
[
"Kaplan",
"Ecem",
""
],
[
"Yücel",
"Sevil",
""
]
] |
In recent years, fossil-based fuels have been used to supply the energy needs of the world. Fossil-based fuels induce accumulation of atmospheric CO2, which causes global warming. One CO2 source is flue gas emission from power plants. Microalgae have been considered excellent biological materials for CO2 reduction owing to their photosynthetic ability. In this study, air, 10% CO2, 15% CO2 and simulated flue gas (containing 15% CO2) feeds were used to observe the effect of CO2 concentration and flue gas on cell growth and lipid content of Chlorella protothecoides. The highest dry cell weight (1.5 g/L) and lipid content (45%) values were obtained with the 15% CO2 feed, while the highest growth rate (1.10), biomass productivity (0.125 g/L/day), and lipid weight (0.63 g/g) were observed with the 10% CO2 feed. Flue gas feed did not inhibit C. protothecoides growth and gave results similar to those of the 15% CO2 feed in terms of growth rate, dry cell weight, biomass productivity and lipid content. These results show that C. protothecoides has great potential for reducing CO2 emissions from flue gas.
|
0905.1458
|
Michael Krumin
|
Michael Krumin, Avner Shimron and Shy Shoham
|
Correlation-distortion based identification of Linear-Nonlinear-Poisson
models
| null | null |
10.1007/s10827-009-0184-0
| null |
q-bio.NC q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear-Nonlinear-Poisson (LNP) models are a popular and powerful tool for
describing encoding (stimulus-response) transformations by single sensory as
well as motor neurons. Recently, there has been rising interest in the second-
and higher-order correlation structure of neural spike trains, and how it may
be related to specific encoding relationships. The distortion of signal
correlations as they are transformed through particular LNP models is
predictable and in some cases analytically tractable and invertible. Here, we
propose that LNP encoding models can potentially be identified strictly from
the correlation transformations they induce, and develop a computational method
for identifying minimum-phase single-neuron temporal kernels under white and
colored random Gaussian excitation. Unlike reverse-correlation or
maximum-likelihood, correlation-distortion based identification does not
require the simultaneous observation of stimulus-response pairs - only their
respective second order statistics. Although in principle filter kernels are
not necessarily minimum-phase, and only their spectral amplitude can be
uniquely determined from output correlations, we show that in practice this
method provides excellent estimates of kernels from a range of parametric
models of neural systems. We conclude by discussing how this approach could
potentially enable neural models to be estimated from a much wider variety of
experimental conditions and systems, and its limitations.
|
[
{
"created": "Sun, 10 May 2009 09:01:18 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Oct 2009 01:00:23 GMT",
"version": "v2"
}
] |
2016-09-08
|
[
[
"Krumin",
"Michael",
""
],
[
"Shimron",
"Avner",
""
],
[
"Shoham",
"Shy",
""
]
] |
Linear-Nonlinear-Poisson (LNP) models are a popular and powerful tool for describing encoding (stimulus-response) transformations by single sensory as well as motor neurons. Recently, there has been rising interest in the second- and higher-order correlation structure of neural spike trains, and how it may be related to specific encoding relationships. The distortion of signal correlations as they are transformed through particular LNP models is predictable and in some cases analytically tractable and invertible. Here, we propose that LNP encoding models can potentially be identified strictly from the correlation transformations they induce, and develop a computational method for identifying minimum-phase single-neuron temporal kernels under white and colored random Gaussian excitation. Unlike reverse-correlation or maximum-likelihood, correlation-distortion based identification does not require the simultaneous observation of stimulus-response pairs - only their respective second order statistics. Although in principle filter kernels are not necessarily minimum-phase, and only their spectral amplitude can be uniquely determined from output correlations, we show that in practice this method provides excellent estimates of kernels from a range of parametric models of neural systems. We conclude by discussing how this approach could potentially enable neural models to be estimated from a much wider variety of experimental conditions and systems, and its limitations.
|
2112.03151
|
Refath Bari
|
Refath Bari
|
A Neuronal Noise Critique of Integrated Information Theory
|
Submitted to PLoS ONE
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Integrated Information Theory (IIT) is an audacious attempt to pin down the
abstract, phenomenological experiences of consciousness into a rigorous,
mathematical framework. We show that IIT's stance with regard to neuronal noise
is inconsistent with experimental data demonstrating that neuronal noise in the
brain plays a critical role in learning, visual recognition, and even
categorical representation. IIT predicts that entropy due to noise will reduce
the information integration of a physical system, which is inconsistent with
experimental data demonstrating that decision-related noise is a necessary
condition for learning and visual recognition tasks. IIT must therefore be
reformulated to accommodate experimental evidence showing both the successes
and failures of noise.
|
[
{
"created": "Mon, 6 Dec 2021 16:37:39 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Dec 2021 15:38:19 GMT",
"version": "v2"
}
] |
2021-12-10
|
[
[
"Bari",
"Refath",
""
]
] |
Integrated Information Theory (IIT) is an audacious attempt to pin down the abstract, phenomenological experiences of consciousness into a rigorous, mathematical framework. We show that IIT's stance with regard to neuronal noise is inconsistent with experimental data demonstrating that neuronal noise in the brain plays a critical role in learning, visual recognition, and even categorical representation. IIT predicts that entropy due to noise will reduce the information integration of a physical system, which is inconsistent with experimental data demonstrating that decision-related noise is a necessary condition for learning and visual recognition tasks. IIT must therefore be reformulated to accommodate experimental evidence showing both the successes and failures of noise.
|
2007.03678
|
Ada Sedova
|
Scott LeGrand, Aaron Scheinberg, Andreas F. Tillack, Mathialakan
Thavappiragasam, Josh V. Vermaas, Rupesh Agarwal, Jeff Larkin, Duncan Poole,
Diogo Santos-Martins, Leonardo Solis-Vasquez, Andreas Koch, Stefano Forli,
Oscar Hernandez, Jeremy C. Smith and Ada Sedova
|
GPU-Accelerated Drug Discovery with Docking on the Summit Supercomputer:
Porting, Optimization, and Application to COVID-19 Research
| null | null |
10.1145/3388440.3412472
| null |
q-bio.BM q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein-ligand docking is an in silico tool used to screen potential drug
compounds for their ability to bind to a given protein receptor within a
drug-discovery campaign. Experimental drug screening is expensive and time
consuming, and it is desirable to carry out large scale docking calculations in
a high-throughput manner to narrow the experimental search space. Few of the
existing computational docking tools were designed with high performance
computing in mind. Therefore, optimizations to maximize use of high-performance
computational resources available at leadership-class computing facilities
enable these facilities to be leveraged for drug discovery. Here we present
the porting, optimization, and validation of the AutoDock-GPU program for the
Summit supercomputer, and its application to initial compound screening efforts
to target proteins of the SARS-CoV-2 virus responsible for the current COVID-19
pandemic.
|
[
{
"created": "Mon, 6 Jul 2020 20:31:12 GMT",
"version": "v1"
}
] |
2020-11-16
|
[
[
"LeGrand",
"Scott",
""
],
[
"Scheinberg",
"Aaron",
""
],
[
"Tillack",
"Andreas F.",
""
],
[
"Thavappiragasam",
"Mathialakan",
""
],
[
"Vermaas",
"Josh V.",
""
],
[
"Agarwal",
"Rupesh",
""
],
[
"Larkin",
"Jeff",
""
],
[
"Poole",
"Duncan",
""
],
[
"Santos-Martins",
"Diogo",
""
],
[
"Solis-Vasquez",
"Leonardo",
""
],
[
"Koch",
"Andreas",
""
],
[
"Forli",
"Stefano",
""
],
[
"Hernandez",
"Oscar",
""
],
[
"Smith",
"Jeremy C.",
""
],
[
"Sedova",
"Ada",
""
]
] |
Protein-ligand docking is an in silico tool used to screen potential drug compounds for their ability to bind to a given protein receptor within a drug-discovery campaign. Experimental drug screening is expensive and time consuming, and it is desirable to carry out large scale docking calculations in a high-throughput manner to narrow the experimental search space. Few of the existing computational docking tools were designed with high performance computing in mind. Therefore, optimizations to maximize use of high-performance computational resources available at leadership-class computing facilities enable these facilities to be leveraged for drug discovery. Here we present the porting, optimization, and validation of the AutoDock-GPU program for the Summit supercomputer, and its application to initial compound screening efforts to target proteins of the SARS-CoV-2 virus responsible for the current COVID-19 pandemic.
|
1511.04470
|
Richard Barnes
|
Richard Barnes, Clarence Lehman
|
Modeling of Bovine Spongiform Encephalopathy in a Two-Species Feedback
Loop
|
12 pages, 4 figures
|
Epidemics. Vol. 5, Issue 2, June 2013, pp 85--91
|
10.1016/j.epidem.2013.04.001
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bovine spongiform encephalopathy, otherwise known as mad cow disease, can
spread when an individual cow consumes feed containing the infected tissues of
another individual, forming a one-species feedback loop. Such feedback is the
primary means of transmission for BSE during epidemic conditions. Following
outbreaks in the European Union and elsewhere, many governments enacted
legislation designed to limit the spread of such diseases via elimination or
reduction of one-species feedback loops in agricultural systems. However,
two-species feedback loops---those in which infectious material from
one species is consumed by a secondary species whose tissue is then consumed by
the first species---were not universally prohibited and have not been studied
before. Here we present a basic ecological disease model which examines the
role feedback loops may play in the spread of BSE and related diseases. Our
model shows that there are critical thresholds between the infection's
expansion and decrease related to the lifespan of the hosts, the growth rate of
the prions, and the amount of prions circulating between hosts. The ecological
disease dynamics can be intrinsically oscillatory, having outbreaks as well as
refractory periods which can make it appear that the disease is under control
while it is still increasing. We show that non-susceptible species that have
been intentionally inserted into a feedback loop to stop the spread of disease
do not, strictly by themselves, guarantee its control, though they may give
that appearance by increasing the refractory period of an epidemic's
oscillations. We suggest ways in which age-related dynamics and cross-species
coupling should be considered in continuing evaluations aimed at maintaining a
safe food supply.
|
[
{
"created": "Fri, 13 Nov 2015 22:01:12 GMT",
"version": "v1"
}
] |
2015-11-17
|
[
[
"Barnes",
"Richard",
""
],
[
"Lehman",
"Clarence",
""
]
] |
Bovine spongiform encephalopathy, otherwise known as mad cow disease, can spread when an individual cow consumes feed containing the infected tissues of another individual, forming a one-species feedback loop. Such feedback is the primary means of transmission for BSE during epidemic conditions. Following outbreaks in the European Union and elsewhere, many governments enacted legislation designed to limit the spread of such diseases via elimination or reduction of one-species feedback loops in agricultural systems. However, two-species feedback loops---those in which infectious material from one species is consumed by a secondary species whose tissue is then consumed by the first species---were not universally prohibited and have not been studied before. Here we present a basic ecological disease model which examines the role feedback loops may play in the spread of BSE and related diseases. Our model shows that there are critical thresholds between the infection's expansion and decrease related to the lifespan of the hosts, the growth rate of the prions, and the amount of prions circulating between hosts. The ecological disease dynamics can be intrinsically oscillatory, having outbreaks as well as refractory periods which can make it appear that the disease is under control while it is still increasing. We show that non-susceptible species that have been intentionally inserted into a feedback loop to stop the spread of disease do not, strictly by themselves, guarantee its control, though they may give that appearance by increasing the refractory period of an epidemic's oscillations. We suggest ways in which age-related dynamics and cross-species coupling should be considered in continuing evaluations aimed at maintaining a safe food supply.
|
2001.06773
|
Wei Zhao
|
Qing Nie, Lingxia Qiao, Yuchi Qiu, Lei Zhang and Wei Zhao
|
Noise control and utility: from regulatory network to spatial patterning
| null | null | null | null |
q-bio.MN q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochasticity (or noise) at cellular and molecular levels has been observed
extensively as a universal feature for living systems. However, how living
systems deal with noise while performing desirable biological functions remains
a major mystery. Regulatory network configurations, such as their topology and
timescale, are shown to be critical in attenuating noise, and noise is also
found to facilitate cell fate decision. Here we review major recent findings on
noise attenuation through regulatory control, the benefit of noise via
noise-induced cellular plasticity during developmental patterning, and
summarize key principles underlying noise control.
|
[
{
"created": "Sun, 19 Jan 2020 04:39:38 GMT",
"version": "v1"
}
] |
2020-01-22
|
[
[
"Nie",
"Qing",
""
],
[
"Qiao",
"Lingxia",
""
],
[
"Qiu",
"Yuchi",
""
],
[
"Zhang",
"Lei",
""
],
[
"Zhao",
"Wei",
""
]
] |
Stochasticity (or noise) at cellular and molecular levels has been observed extensively as a universal feature for living systems. However, how living systems deal with noise while performing desirable biological functions remains a major mystery. Regulatory network configurations, such as their topology and timescale, are shown to be critical in attenuating noise, and noise is also found to facilitate cell fate decision. Here we review major recent findings on noise attenuation through regulatory control, the benefit of noise via noise-induced cellular plasticity during developmental patterning, and summarize key principles underlying noise control.
|
1710.08149
|
Mo Zhang
|
Mo Zhang, Xiang Li, Mengjia Xu, Quanzheng Li
|
Image Segmentation and Classification for Sickle Cell Disease using
Deformable U-Net
| null | null | null | null |
q-bio.CB cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reliable cell segmentation and classification from biomedical images is a
crucial step for both scientific research and clinical practice. A major
challenge for more robust segmentation and classification methods is the large
variations in the size, shape and viewpoint of the cells, combined with the
low image quality caused by noise and artifacts. To address this issue, in this
work we propose a learning-based, simultaneous cell segmentation and
classification method based on the deep U-Net structure with deformable
convolution layers. The U-Net architecture for deep learning has been shown to
offer precise localization for image semantic segmentation. Moreover,
deformable convolution layers enable free-form deformation of the feature
learning process, thus making the whole network more robust to various cell
morphologies and image settings. The proposed method is tested on microscopic
red blood cell images from patients with sickle cell disease. The results show
that U-Net with deformable convolution achieves the highest accuracy for
segmentation and classification, compared with the original U-Net structure.
|
[
{
"created": "Mon, 23 Oct 2017 08:53:07 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2017 02:26:00 GMT",
"version": "v2"
},
{
"created": "Sun, 29 Oct 2017 04:02:32 GMT",
"version": "v3"
}
] |
2017-10-31
|
[
[
"Zhang",
"Mo",
""
],
[
"Li",
"Xiang",
""
],
[
"Xu",
"Mengjia",
""
],
[
"Li",
"Quanzheng",
""
]
] |
Reliable cell segmentation and classification from biomedical images is a crucial step for both scientific research and clinical practice. A major challenge for more robust segmentation and classification methods is the large variations in the size, shape and viewpoint of the cells, combined with the low image quality caused by noise and artifacts. To address this issue, in this work we propose a learning-based, simultaneous cell segmentation and classification method based on the deep U-Net structure with deformable convolution layers. The U-Net architecture for deep learning has been shown to offer precise localization for image semantic segmentation. Moreover, deformable convolution layers enable free-form deformation of the feature learning process, thus making the whole network more robust to various cell morphologies and image settings. The proposed method is tested on microscopic red blood cell images from patients with sickle cell disease. The results show that U-Net with deformable convolution achieves the highest accuracy for segmentation and classification, compared with the original U-Net structure.
|
2208.10545
|
Miguel Ib\'a\~nez Berganza
|
Miguel Ib\'a\~nez-Berganza, Carlo Lucibello, Luca Mariani, Giovanni
Pezzulo
|
Information-theoretical analysis of the neural code for decoupled face
representation
|
26 pages, 8 figures (+11 pages, 7 figures in the supporting
information section). In v3: new figure 8 in section 3.2.3; further details
added to the supporting information; title changed
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Processing faces accurately and efficiently is a key capability of humans and
other animals that engage in sophisticated social tasks. Recent studies
reported a decoupled coding for faces in the primate inferotemporal cortex,
with two separate neural populations coding for the geometric position of
(texture-free) facial landmarks and for the image texture at fixed landmark
positions, respectively. Here, we formally assess the efficiency of this
decoupled coding by appealing to the information-theoretic notion of
description length, which quantifies the amount of information that is saved
when encoding novel facial images, with a given precision. We show that,
although decoupled coding describes the facial images in terms of two sets of
principal components (of landmark shape and image texture), it is more
efficient (i.e.,
yields more information compression) than the encoding in terms of the image
principal components only, which corresponds to the widely used eigenface
method. The advantage of decoupled coding over eigenface coding increases with
image resolution and is especially prominent when coding variants of training
set images that only differ in facial expressions. Moreover, we demonstrate
that decoupled coding entails better performance in three different tasks: the
representation of facial images, the (daydream) sampling of novel facial
images, and the recognition of facial identities and gender. In summary, our
study provides a first-principles perspective on the efficiency and accuracy of
the decoupled coding of facial stimuli reported in the primate inferotemporal
cortex.
|
[
{
"created": "Mon, 22 Aug 2022 18:50:34 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Aug 2022 06:25:07 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Jan 2023 14:33:21 GMT",
"version": "v3"
}
] |
2023-01-19
|
[
[
"Ibáñez-Berganza",
"Miguel",
""
],
[
"Lucibello",
"Carlo",
""
],
[
"Mariani",
"Luca",
""
],
[
"Pezzulo",
"Giovanni",
""
]
] |
Processing faces accurately and efficiently is a key capability of humans and other animals that engage in sophisticated social tasks. Recent studies reported a decoupled coding for faces in the primate inferotemporal cortex, with two separate neural populations coding for the geometric position of (texture-free) facial landmarks and for the image texture at fixed landmark positions, respectively. Here, we formally assess the efficiency of this decoupled coding by appealing to the information-theoretic notion of description length, which quantifies the amount of information that is saved when encoding novel facial images, with a given precision. We show that, although decoupled coding describes the facial images in terms of two sets of principal components (of landmark shape and image texture), it is more efficient (i.e., yields more information compression) than the encoding in terms of the image principal components only, which corresponds to the widely used eigenface method. The advantage of decoupled coding over eigenface coding increases with image resolution and is especially prominent when coding variants of training set images that only differ in facial expressions. Moreover, we demonstrate that decoupled coding entails better performance in three different tasks: the representation of facial images, the (daydream) sampling of novel facial images, and the recognition of facial identities and gender. In summary, our study provides a first-principles perspective on the efficiency and accuracy of the decoupled coding of facial stimuli reported in the primate inferotemporal cortex.
|
1212.3807
|
Irina Kareva
|
Irina Kareva, Benjamin Morin, Georgy Karev
|
Preventing the tragedy of the commons through punishment of
over-consumers and encouragement of under-consumers
| null | null | null | null |
q-bio.PE math.CA
|
http://creativecommons.org/licenses/publicdomain/
|
The conditions that can lead to the exploitative depletion of a shared
resource, i.e., the tragedy of the commons, can be reformulated as a game of
prisoner's dilemma: while preserving the common resource is in the best
interest of the group, over-consumption is in the interest of each particular
individual at any given point in time. One way to try and prevent the tragedy
of the commons is through infliction of punishment for over-consumption and/or
encouraging under-consumption, thus selecting against over-consumers. Here, the
effectiveness of various punishment functions in an evolving consumer-resource
system is evaluated within a framework of a parametrically heterogeneous system
of ordinary differential equations (ODEs). Conditions leading to the
possibility of sustainable coexistence with the common resource for a subset of
cases are identified analytically using adaptive dynamics; the effects of
punishment on heterogeneous populations with different initial composition are
evaluated using the Reduction theorem for replicator equations. Obtained
results suggest that one cannot prevent the tragedy of the commons through
rewarding of under-consumers alone - there must also be an implementation of
some degree of punishment that increases in a non-linear fashion with respect
to over-consumption and which may vary depending on the initial distribution of
clones in the population.
|
[
{
"created": "Sun, 16 Dec 2012 17:28:41 GMT",
"version": "v1"
}
] |
2012-12-18
|
[
[
"Kareva",
"Irina",
""
],
[
"Morin",
"Benjamin",
""
],
[
"Karev",
"Georgy",
""
]
] |
The conditions that can lead to the exploitative depletion of a shared resource, i.e., the tragedy of the commons, can be reformulated as a game of prisoner's dilemma: while preserving the common resource is in the best interest of the group, over-consumption is in the interest of each particular individual at any given point in time. One way to try and prevent the tragedy of the commons is through infliction of punishment for over-consumption and/or encouraging under-consumption, thus selecting against over-consumers. Here, the effectiveness of various punishment functions in an evolving consumer-resource system is evaluated within a framework of a parametrically heterogeneous system of ordinary differential equations (ODEs). Conditions leading to the possibility of sustainable coexistence with the common resource for a subset of cases are identified analytically using adaptive dynamics; the effects of punishment on heterogeneous populations with different initial composition are evaluated using the Reduction theorem for replicator equations. Obtained results suggest that one cannot prevent the tragedy of the commons through rewarding of under-consumers alone - there must also be an implementation of some degree of punishment that increases in a non-linear fashion with respect to over-consumption and which may vary depending on the initial distribution of clones in the population.
|
2305.02082
|
Jacob Thorstensen
|
Jacob Thorstensen, Tyler Henderson and Justin Kavanagh
|
Serotonergic and noradrenergic contributions to human motor cortical and
spinal motoneuronal excitability
|
38 pages, 3 tables, no figures
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Animal models indicate that motor behaviour is shaped by monoamine
neurotransmitters released diffusely throughout the brain and spinal cord. We
present strong evidence that human motor pathways are equally affected by
neuromodulation through noradrenergic and serotonergic projections arising from
the brainstem. To do so, we have identified and collated human experiments
examining the off-label effects of well-characterised serotonergic and
noradrenergic drugs on lab-based electrophysiology measures of
corticospinal-motoneuronal excitability. Specifically, we focus on the effects
that serotonin and noradrenaline associated drugs have on muscle responses to
magnetic or electrical stimulation of the motor cortex and peripheral nerves,
and other closely related tests of motoneuron excitability, to best segment
drug effects to a supraspinal or spinal locus. We find that serotonin enhancing
drugs tend to reduce the excitability of the human motor cortex, but that
augmented noradrenergic transmission increases motor cortical excitability by
enhancing measures of intracortical facilitation and reducing inhibition. Both
monoamines tend to enhance the excitability of human motoneurons. Overall, this
work details the importance of neuromodulators for the output of human motor
pathways and suggests that commonly prescribed monoaminergic drugs have
off-label motor control uses outside of their typical psychiatric/neurological
indications.
|
[
{
"created": "Thu, 27 Apr 2023 23:10:10 GMT",
"version": "v1"
}
] |
2023-05-04
|
[
[
"Thorstensen",
"Jacob",
""
],
[
"Henderson",
"Tyler",
""
],
[
"Kavanagh",
"Justin",
""
]
] |
Animal models indicate that motor behaviour is shaped by monoamine neurotransmitters released diffusely throughout the brain and spinal cord. We present strong evidence that human motor pathways are equally affected by neuromodulation through noradrenergic and serotonergic projections arising from the brainstem. To do so, we have identified and collated human experiments examining the off-label effects of well-characterised serotonergic and noradrenergic drugs on lab-based electrophysiology measures of corticospinal-motoneuronal excitability. Specifically, we focus on the effects that serotonin and noradrenaline associated drugs have on muscle responses to magnetic or electrical stimulation of the motor cortex and peripheral nerves, and other closely related tests of motoneuron excitability, to best segment drug effects to a supraspinal or spinal locus. We find that serotonin enhancing drugs tend to reduce the excitability of the human motor cortex, but that augmented noradrenergic transmission increases motor cortical excitability by enhancing measures of intracortical facilitation and reducing inhibition. Both monoamines tend to enhance the excitability of human motoneurons. Overall, this work details the importance of neuromodulators for the output of human motor pathways and suggests that commonly prescribed monoaminergic drugs have off-label motor control uses outside of their typical psychiatric/neurological indications.
|
2309.15950
|
Susan Martonosi
|
Abraham Holleran and Susan E. Martonosi and Michael Veatch
|
To Give or Not To Give: Pandemic Vaccine Donation Policy
|
21 pages, 4 figures. arXiv admin note: substantial text overlap with
arXiv:2303.05917
| null | null | null |
q-bio.PE math.OC physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
The global SARS-CoV-2 (COVID-19) pandemic highlighted the challenge of
equitable vaccine distribution between high- and low-income countries. Many
high-income countries were reluctant or slow to distribute extra doses of the
vaccine to lower-income countries via the COVID-19 Vaccines Global Access
(COVAX) collaboration. In addition to moral objections to such vaccine
nationalism, vaccine inequity during a pandemic could contribute to the
evolution of new variants of the virus and possibly increase total deaths,
including in the high-income countries. Using the COVID-19 pandemic as a case
study, we use the epidemiological model of Holleran et al. that incorporates
virus mutation. We identify realistic scenarios under which a donor country
prefers to donate vaccines before distributing them locally in order to
minimize local deaths during a pandemic. We demonstrate that a nondonor-first
vaccination policy can delay, sometimes dramatically, the emergence of
more-contagious variants. Even more surprisingly, donating all vaccines is
sometimes better for the donor country than a sharing policy in which half of
the vaccines are donated and half are retained because of the impact donation
can have on delaying the emergence of a more contagious virus. Nondonor-first
vaccine allocation is optimal in scenarios in which the local health impact of
the vaccine is limited or when delaying emergence of a variant is especially
valuable. In all cases, we find that vaccine distribution is not a zero-sum
game between donor and nondonor countries. Thus, in addition to moral reasons
to avoid vaccine nationalism, donor nations can also realize local health
benefits from donating vaccines. The insights yielded by this framework can be
used to guide equitable vaccine distribution in future pandemics.
|
[
{
"created": "Wed, 27 Sep 2023 19:08:35 GMT",
"version": "v1"
}
] |
2023-09-29
|
[
[
"Holleran",
"Abraham",
""
],
[
"Martonosi",
"Susan E.",
""
],
[
"Veatch",
"Michael",
""
]
] |
The global SARS-CoV-2 (COVID-19) pandemic highlighted the challenge of equitable vaccine distribution between high- and low-income countries. Many high-income countries were reluctant or slow to distribute extra doses of the vaccine to lower-income countries via the COVID-19 Vaccines Global Access (COVAX) collaboration. In addition to moral objections to such vaccine nationalism, vaccine inequity during a pandemic could contribute to the evolution of new variants of the virus and possibly increase total deaths, including in the high-income countries. Using the COVID-19 pandemic as a case study, we use the epidemiological model of Holleran et al. that incorporates virus mutation. We identify realistic scenarios under which a donor country prefers to donate vaccines before distributing them locally in order to minimize local deaths during a pandemic. We demonstrate that a nondonor-first vaccination policy can delay, sometimes dramatically, the emergence of more-contagious variants. Even more surprising, donating all vaccines is sometimes better for the donor country than a sharing policy in which half of the vaccines are donated and half are retained because of the impact donation can have on delaying the emergence of a more contagious virus. Nondonor-first vaccine allocation is optimal in scenarios in which the local health impact of the vaccine is limited or when delaying emergence of a variant is especially valuable. In all cases, we find that vaccine distribution is not a zero-sum game between donor and nondonor countries. Thus, in addition to moral reasons to avoid vaccine nationalism, donor nations can also realize local health benefits from donating vaccines. The insights yielded by this framework can be used to guide equitable vaccine distribution in future pandemics.
|
2407.08751
|
Auguste Schulz
|
Jaivardhan Kapoor, Auguste Schulz, Julius Vetter, Felix Pei, Richard
Gao, Jakob H. Macke
|
Latent Diffusion for Neural Spiking Data
| null | null | null | null |
q-bio.NC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Modern datasets in neuroscience enable unprecedented inquiries into the
relationship between complex behaviors and the activity of many simultaneously
recorded neurons. While latent variable models can successfully extract
low-dimensional embeddings from such recordings, using them to generate
realistic spiking data, especially in a behavior-dependent manner, still poses
a challenge. Here, we present Latent Diffusion for Neural Spiking data (LDNS),
a diffusion-based generative model with a low-dimensional latent space: LDNS
employs an autoencoder with structured state-space (S4) layers to project
discrete high-dimensional spiking data into continuous time-aligned latents. On
these inferred latents, we train expressive (conditional) diffusion models,
enabling us to sample neural activity with realistic single-neuron and
population spiking statistics. We validate LDNS on synthetic data, accurately
recovering latent structure, firing rates, and spiking statistics. Next, we
demonstrate its flexibility by generating variable-length data that mimics
human cortical activity during attempted speech. We show how to equip LDNS with
an expressive observation model that accounts for single-neuron dynamics not
mediated by the latent state, further increasing the realism of generated
samples. Finally, conditional LDNS trained on motor cortical activity during
diverse reaching behaviors can generate realistic spiking data given reach
direction or unseen reach trajectories. In summary, LDNS simultaneously enables
inference of low-dimensional latents and realistic conditional generation of
neural spiking datasets, opening up further possibilities for simulating
experimentally testable hypotheses.
|
[
{
"created": "Thu, 27 Jun 2024 13:47:06 GMT",
"version": "v1"
}
] |
2024-07-15
|
[
[
"Kapoor",
"Jaivardhan",
""
],
[
"Schulz",
"Auguste",
""
],
[
"Vetter",
"Julius",
""
],
[
"Pei",
"Felix",
""
],
[
"Gao",
"Richard",
""
],
[
"Macke",
"Jakob H.",
""
]
] |
Modern datasets in neuroscience enable unprecedented inquiries into the relationship between complex behaviors and the activity of many simultaneously recorded neurons. While latent variable models can successfully extract low-dimensional embeddings from such recordings, using them to generate realistic spiking data, especially in a behavior-dependent manner, still poses a challenge. Here, we present Latent Diffusion for Neural Spiking data (LDNS), a diffusion-based generative model with a low-dimensional latent space: LDNS employs an autoencoder with structured state-space (S4) layers to project discrete high-dimensional spiking data into continuous time-aligned latents. On these inferred latents, we train expressive (conditional) diffusion models, enabling us to sample neural activity with realistic single-neuron and population spiking statistics. We validate LDNS on synthetic data, accurately recovering latent structure, firing rates, and spiking statistics. Next, we demonstrate its flexibility by generating variable-length data that mimics human cortical activity during attempted speech. We show how to equip LDNS with an expressive observation model that accounts for single-neuron dynamics not mediated by the latent state, further increasing the realism of generated samples. Finally, conditional LDNS trained on motor cortical activity during diverse reaching behaviors can generate realistic spiking data given reach direction or unseen reach trajectories. In summary, LDNS simultaneously enables inference of low-dimensional latents and realistic conditional generation of neural spiking datasets, opening up further possibilities for simulating experimentally testable hypotheses.
|
q-bio/0506012
|
Yongyun Ji
|
Yong-Yun Ji, You-Quan Li, Jun-Wen Mao and Xiao-Wei Tang
|
The prion-like folding behavior in aggregated proteins
|
7 pages, 6 figures
|
Physical Review E 72, 041912 (2005), Virtual Journal of Biological
Physics Research(October 15, 2005)
|
10.1103/PhysRevE.72.041912
| null |
q-bio.BM
| null |
We investigate the folding behavior of protein sequences by numerically
studying all sequences with a maximally compact lattice model through
exhaustive enumeration. We find prion-like behavior in protein folding:
individual proteins that remain stable in the isolated native state may change
their conformations when they aggregate. We observe the folding properties as
the interfacial interaction strength changes, and find that the strength must
be strong enough before the propagation of the most stable structures occurs.
|
[
{
"created": "Thu, 9 Jun 2005 05:11:24 GMT",
"version": "v1"
}
] |
2014-11-18
|
[
[
"Ji",
"Yong-Yun",
""
],
[
"Li",
"You-Quan",
""
],
[
"Mao",
"Jun-Wen",
""
],
[
"Tang",
"Xiao-Wei",
""
]
] |
We investigate the folding behavior of protein sequences by numerically studying all sequences with a maximally compact lattice model through exhaustive enumeration. We find prion-like behavior in protein folding: individual proteins that remain stable in the isolated native state may change their conformations when they aggregate. We observe the folding properties as the interfacial interaction strength changes, and find that the strength must be strong enough before the propagation of the most stable structures occurs.
|
2109.06011
|
Jan Sosulski
|
Jan Sosulski, David H\"ubner, Aaron Klein, Michael Tangermann
|
Online Optimization of Stimulation Speed in an Auditory Brain-Computer
Interface under Time Constraints
| null | null | null | null |
q-bio.NC cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The decoding of brain signals recorded via, e.g., an electroencephalogram,
using machine learning is key to brain-computer interfaces (BCIs). Stimulation
parameters or other experimental settings of the BCI protocol typically are
chosen according to the literature. The decoding performance directly depends
on the choice of parameters, as they influence the elicited brain signals and
optimal parameters are subject-dependent. Thus a fast and automated selection
procedure for experimental parameters could greatly improve the usability of
BCIs.
We evaluate a standalone random search and a combined Bayesian optimization
with random search in a closed-loop auditory event-related potential protocol.
We aimed at finding the individually best stimulation speed -- also known as
stimulus onset asynchrony (SOA) -- that maximizes the classification
performance of a regularized linear discriminant analysis. To make the Bayesian
optimization feasible under noise and the time pressure posed by an online BCI
experiment, we first used offline simulations to initialize and constrain the
internal optimization model. Then we evaluated our approach online with 13
healthy subjects.
We could show that for 8 out of 13 subjects, the proposed approach using
Bayesian optimization succeeded to select the individually optimal SOA out of
multiple evaluated SOA values. Our data suggests, however, that subjects were
influenced to very different degrees by the SOA parameter. This makes the
automatic parameter selection infeasible for subjects where the influence is
limited.
Our work proposes an approach to exploit the benefits of individualized
experimental protocols and evaluated it in an auditory BCI. When applied to
other experimental parameters our approach could enhance the usability of BCI
for different target groups -- specifically if an individual's disease
progression may prevent the use of standard parameters.
|
[
{
"created": "Thu, 26 Aug 2021 08:18:03 GMT",
"version": "v1"
}
] |
2021-09-14
|
[
[
"Sosulski",
"Jan",
""
],
[
"Hübner",
"David",
""
],
[
"Klein",
"Aaron",
""
],
[
"Tangermann",
"Michael",
""
]
] |
The decoding of brain signals recorded via, e.g., an electroencephalogram, using machine learning is key to brain-computer interfaces (BCIs). Stimulation parameters or other experimental settings of the BCI protocol typically are chosen according to the literature. The decoding performance directly depends on the choice of parameters, as they influence the elicited brain signals and optimal parameters are subject-dependent. Thus a fast and automated selection procedure for experimental parameters could greatly improve the usability of BCIs. We evaluate a standalone random search and a combined Bayesian optimization with random search in a closed-loop auditory event-related potential protocol. We aimed at finding the individually best stimulation speed -- also known as stimulus onset asynchrony (SOA) -- that maximizes the classification performance of a regularized linear discriminant analysis. To make the Bayesian optimization feasible under noise and the time pressure posed by an online BCI experiment, we first used offline simulations to initialize and constrain the internal optimization model. Then we evaluated our approach online with 13 healthy subjects. We could show that for 8 out of 13 subjects, the proposed approach using Bayesian optimization succeeded to select the individually optimal SOA out of multiple evaluated SOA values. Our data suggests, however, that subjects were influenced to very different degrees by the SOA parameter. This makes the automatic parameter selection infeasible for subjects where the influence is limited. Our work proposes an approach to exploit the benefits of individualized experimental protocols and evaluated it in an auditory BCI. When applied to other experimental parameters our approach could enhance the usability of BCI for different target groups -- specifically if an individual's disease progression may prevent the use of standard parameters.
|
2209.08402
|
Heyrim Cho
|
Heyrim Cho, Allison L. Lewis, Kathleen M. Storey, Helen M. Byrne
|
Designing experimental conditions to use the Lotka-Volterra model to
infer tumor cell line interaction types
|
25 pages, 18 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
The Lotka-Volterra model is widely used to model interactions between two
species. Here, we generate synthetic data mimicking competitive, mutualistic
and antagonistic interactions between two tumor cell lines, and then use the
Lotka-Volterra model to infer the interaction type. Structural identifiability
of the Lotka-Volterra model is confirmed, and practical identifiability is
assessed for three experimental designs: (a) use of a single data set, with a
mixture of both cell lines observed over time, (b) a sequential design where
growth rates and carrying capacities are estimated using data from experiments
in which each cell line is grown in isolation, and then interaction parameters
are estimated from an experiment involving a mixture of both cell lines, and
(c) a parallel experimental design where all model parameters are fitted to
data from two mixtures simultaneously. In addition to assessing each design for
practical identifiability, we investigate how the predictive power of the
model (i.e., its ability to fit data for initial ratios other than those to
which it was calibrated) is affected by the choice of experimental design. The
parallel calibration procedure is found to be optimal and is further tested on
in silico data generated from a spatially-resolved cellular automaton model,
which accounts for oxygen consumption and allows for variation in the intensity
level of the interaction between the two cell lines. We use this study to
highlight the care that must be taken when interpreting parameter estimates for
the spatially-averaged Lotka-Volterra model when it is calibrated against data
produced by the spatially-resolved cellular automaton model, since baseline
competition for space and resources in the CA model may contribute to a
discrepancy between the type of interaction used to generate the CA data and
the type of interaction inferred by the LV model.
|
[
{
"created": "Sat, 17 Sep 2022 20:59:42 GMT",
"version": "v1"
}
] |
2022-09-20
|
[
[
"Cho",
"Heyrim",
""
],
[
"Lewis",
"Allison L.",
""
],
[
"Storey",
"Kathleen M.",
""
],
[
"Byrne",
"Helen M.",
""
]
] |
The Lotka-Volterra model is widely used to model interactions between two species. Here, we generate synthetic data mimicking competitive, mutualistic and antagonistic interactions between two tumor cell lines, and then use the Lotka-Volterra model to infer the interaction type. Structural identifiability of the Lotka-Volterra model is confirmed, and practical identifiability is assessed for three experimental designs: (a) use of a single data set, with a mixture of both cell lines observed over time, (b) a sequential design where growth rates and carrying capacities are estimated using data from experiments in which each cell line is grown in isolation, and then interaction parameters are estimated from an experiment involving a mixture of both cell lines, and (c) a parallel experimental design where all model parameters are fitted to data from two mixtures simultaneously. In addition to assessing each design for practical identifiability, we investigate how the predictive power of the model (i.e., its ability to fit data for initial ratios other than those to which it was calibrated) is affected by the choice of experimental design. The parallel calibration procedure is found to be optimal and is further tested on in silico data generated from a spatially-resolved cellular automaton model, which accounts for oxygen consumption and allows for variation in the intensity level of the interaction between the two cell lines. We use this study to highlight the care that must be taken when interpreting parameter estimates for the spatially-averaged Lotka-Volterra model when it is calibrated against data produced by the spatially-resolved cellular automaton model, since baseline competition for space and resources in the CA model may contribute to a discrepancy between the type of interaction used to generate the CA data and the type of interaction inferred by the LV model.
|
0910.1953
|
Yunfeng Shan Dr.
|
Yunfeng Shan, and Xiu-Qing Li
|
GeneSupport Maximum Gene-Support Tree Approach to Species Phylogeny
Inference
|
Application note
| null | null | null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Summary: GeneSupport implements a genome-scale algorithm, Maximum
Gene-Support Tree, to estimate a species tree from gene trees based on
multilocus sequences. It provides a new option for inferring a species tree
from multiple genes. It is incorporated into the popular phylogenetic program
package PHYLIP with the same usage and user interface. It is suitable for
phylogenetic methods such as maximum parsimony, maximum likelihood, Bayesian
and neighbour-joining, which are first used to reconstruct single-gene trees
with a variety of phylogenetic inference programs.
|
[
{
"created": "Sat, 10 Oct 2009 22:31:55 GMT",
"version": "v1"
}
] |
2009-10-13
|
[
[
"Shan",
"Yunfeng",
""
],
[
"Li",
"Xiu-Qing",
""
]
] |
Summary: GeneSupport implements a genome-scale algorithm, Maximum Gene-Support Tree, to estimate a species tree from gene trees based on multilocus sequences. It provides a new option for inferring a species tree from multiple genes. It is incorporated into the popular phylogenetic program package PHYLIP with the same usage and user interface. It is suitable for phylogenetic methods such as maximum parsimony, maximum likelihood, Bayesian and neighbour-joining, which are first used to reconstruct single-gene trees with a variety of phylogenetic inference programs.
|
2012.05538
|
Qiyao Peng
|
Qiyao Peng, Fred Vermolen, Daphne Weihs
|
A Formalism for Modelling Traction forces and Cell Shape Evolution
during Cell Migration in Various Biomedical Processes
| null | null |
10.1007/s10237-021-01456-2
| null |
q-bio.CB cs.NA math.NA
|
http://creativecommons.org/licenses/by/4.0/
|
The phenomenological model for cell shape deformation and cell migration
(Chen et al. 2018; Vermolen and Gefen 2012) is extended with the incorporation
of cell traction forces and the evolution of cell equilibrium shapes as a
result of cell differentiation. Plastic deformations of the extracellular
matrix are modelled using morphoelasticity theory. The resulting partial
differential equations are solved using the finite element method. The paper
treats various biological scenarios that entail cell migration and cell shape
evolution. The experimental observations in Mak et al. (2013), where
transmigration of cancer cells through narrow apertures is studied, are
reproduced using a Monte Carlo framework.
|
[
{
"created": "Thu, 10 Dec 2020 09:29:44 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2021 14:08:25 GMT",
"version": "v2"
}
] |
2021-04-27
|
[
[
"Peng",
"Qiyao",
""
],
[
"Vermolen",
"Fred",
""
],
[
"Weihs",
"Daphne",
""
]
] |
The phenomenological model for cell shape deformation and cell migration (Chen et al. 2018; Vermolen and Gefen 2012) is extended with the incorporation of cell traction forces and the evolution of cell equilibrium shapes as a result of cell differentiation. Plastic deformations of the extracellular matrix are modelled using morphoelasticity theory. The resulting partial differential equations are solved using the finite element method. The paper treats various biological scenarios that entail cell migration and cell shape evolution. The experimental observations in Mak et al. (2013), where transmigration of cancer cells through narrow apertures is studied, are reproduced using a Monte Carlo framework.
|
2009.00359
|
Eva Smij\'akov\'a
|
Lubos Brim, Samuel Pastva, David Safranek, Eva Smijakova
|
Parallel One-Step Control of Parametrised Boolean Networks
| null | null | null | null |
q-bio.MN cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Boolean network (BN) is a simple model widely used to study the complex
dynamic behaviour of biological systems. Nonetheless, it might be difficult to
gather enough data to precisely capture the behaviour of a biological system
in a set of Boolean functions. These issues can be dealt with to some extent
using parametrised Boolean networks (ParBNs), which allow some update
functions to be left unspecified. In this paper, we attack the control problem
for ParBNs with asynchronous semantics. While there is extensive work on
controlling BNs without parameters, the problem of control for ParBNs has in
fact not been addressed yet. The goal of control is to ensure the
stabilisation of a system in a given state using as few interventions as
possible. There are many ways to control BN dynamics. Here, we consider the
one-step approach, in which the system is instantaneously perturbed out of its
actual state. A naive approach to handling control of ParBNs is to use a
parameter scan and solve the control problem for each parameter valuation
separately using known techniques for non-parametrised BNs. This approach is,
however, highly inefficient, as the parameter space of ParBNs grows doubly
exponentially in the worst case. In this paper, we propose a novel
semi-symbolic algorithm for the one-step control problem of ParBNs that builds
on symbolic data structures to avoid scanning individual parameters. We
evaluate the performance of our approach on real biological models.
|
[
{
"created": "Tue, 1 Sep 2020 11:29:43 GMT",
"version": "v1"
}
] |
2020-09-02
|
[
[
"Brim",
"Lubos",
""
],
[
"Pastva",
"Samuel",
""
],
[
"Safranek",
"David",
""
],
[
"Smijakova",
"Eva",
""
]
] |
A Boolean network (BN) is a simple model widely used to study the complex dynamic behaviour of biological systems. Nonetheless, it might be difficult to gather enough data to precisely capture the behaviour of a biological system in a set of Boolean functions. These issues can be dealt with to some extent using parametrised Boolean networks (ParBNs), which allow some update functions to be left unspecified. In this paper, we attack the control problem for ParBNs with asynchronous semantics. While there is extensive work on controlling BNs without parameters, the problem of control for ParBNs has in fact not been addressed yet. The goal of control is to ensure the stabilisation of a system in a given state using as few interventions as possible. There are many ways to control BN dynamics. Here, we consider the one-step approach, in which the system is instantaneously perturbed out of its actual state. A naive approach to handling control of ParBNs is to use a parameter scan and solve the control problem for each parameter valuation separately using known techniques for non-parametrised BNs. This approach is, however, highly inefficient, as the parameter space of ParBNs grows doubly exponentially in the worst case. In this paper, we propose a novel semi-symbolic algorithm for the one-step control problem of ParBNs that builds on symbolic data structures to avoid scanning individual parameters. We evaluate the performance of our approach on real biological models.
|
1611.04872
|
Emanuela Merelli
|
Marco Piangerelli, Matteo Rucco and Emanuela Merelli
|
Topological classifier for detecting the emergence of epileptic seizures
|
Open data: Physionet data-set
|
BMC Res Notes 11, 392, 2018
|
10.1186/s13104-018-3482-7
| null |
q-bio.NC cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we study how to apply topological data analysis to create a
method suitable for classifying the EEGs of patients affected by epilepsy. The
topological space constructed from the collection of EEG signals is analyzed
by Persistent Entropy acting as a global topological feature for
discriminating between healthy and epileptic signals. The Physionet data-set
has been used for testing the classifier.
|
[
{
"created": "Sat, 12 Nov 2016 10:11:30 GMT",
"version": "v1"
}
] |
2020-09-14
|
[
[
"Piangerelli",
"Marco",
""
],
[
"Rucco",
"Matteo",
""
],
[
"Merelli",
"Emanuela",
""
]
] |
In this work we study how to apply topological data analysis to create a method suitable for classifying the EEGs of patients affected by epilepsy. The topological space constructed from the collection of EEG signals is analyzed by Persistent Entropy acting as a global topological feature for discriminating between healthy and epileptic signals. The Physionet data-set has been used for testing the classifier.
|
2211.16599
|
Dale Zhou
|
Dale Zhou, Jason Z. Kim, Adam R. Pines, Valerie J. Sydnor, David R.
Roalf, John A. Detre, Ruben C. Gur, Raquel E. Gur, Theodore D. Satterthwaite,
Dani S. Bassett
|
Compression supports low-dimensional representations of behavior across
neural circuits
|
arXiv admin note: text overlap with arXiv:2001.05078
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dimensionality reduction, a form of compression, can simplify representations
of information to increase efficiency and reveal general patterns. Yet, this
simplification also forfeits information, thereby reducing representational
capacity. Hence, the brain may benefit from generating both compressed and
uncompressed activity, and may do so in a heterogeneous manner across diverse
neural circuits that represent low-level (sensory) or high-level (cognitive)
stimuli. However, precisely how compression and representational capacity
differ across the cortex remains unknown. Here we predict different levels of
compression across regional circuits by using random walks on networks to model
activity flow and to formulate rate-distortion functions, which are the basis
of lossy compression. Using a large sample of youth ($n=1,040$), we test
predictions in two ways: by measuring the dimensionality of spontaneous
activity from sensorimotor to association cortex, and by assessing the
representational capacity for 24 behaviors in neural circuits and 20 cognitive
variables in recurrent neural networks. Our network theory of compression
predicts the dimensionality of activity ($t=12.13, p<0.001$) and the
representational capacity of biological ($r=0.53, p=0.016$) and artificial
($r=0.61, p<0.001$) networks. The model suggests how a basic form of
compression is an emergent property of activity flow between distributed
circuits that communicate with the rest of the network.
|
[
{
"created": "Tue, 29 Nov 2022 21:26:10 GMT",
"version": "v1"
}
] |
2022-12-01
|
[
[
"Zhou",
"Dale",
""
],
[
"Kim",
"Jason Z.",
""
],
[
"Pines",
"Adam R.",
""
],
[
"Sydnor",
"Valerie J.",
""
],
[
"Roalf",
"David R.",
""
],
[
"Detre",
"John A.",
""
],
[
"Gur",
"Ruben C.",
""
],
[
"Gur",
"Raquel E.",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Bassett",
"Dani S.",
""
]
] |
Dimensionality reduction, a form of compression, can simplify representations of information to increase efficiency and reveal general patterns. Yet, this simplification also forfeits information, thereby reducing representational capacity. Hence, the brain may benefit from generating both compressed and uncompressed activity, and may do so in a heterogeneous manner across diverse neural circuits that represent low-level (sensory) or high-level (cognitive) stimuli. However, precisely how compression and representational capacity differ across the cortex remains unknown. Here we predict different levels of compression across regional circuits by using random walks on networks to model activity flow and to formulate rate-distortion functions, which are the basis of lossy compression. Using a large sample of youth ($n=1,040$), we test predictions in two ways: by measuring the dimensionality of spontaneous activity from sensorimotor to association cortex, and by assessing the representational capacity for 24 behaviors in neural circuits and 20 cognitive variables in recurrent neural networks. Our network theory of compression predicts the dimensionality of activity ($t=12.13, p<0.001$) and the representational capacity of biological ($r=0.53, p=0.016$) and artificial ($r=0.61, p<0.001$) networks. The model suggests how a basic form of compression is an emergent property of activity flow between distributed circuits that communicate with the rest of the network.
|
1511.01956
|
Elizabeth Allman
|
Elizabeth S. Allman, John A. Rhodes, Seth Sullivant
|
Statistically-Consistent k-mer Methods for Phylogenetic Tree
Reconstruction
|
25 pages, 9 figures; figure added; to appear in JCB
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Frequencies of $k$-mers in sequences are sometimes used as a basis for
inferring phylogenetic trees without first obtaining a multiple sequence
alignment. We show that a standard approach of using the squared-Euclidean
distance between $k$-mer vectors to approximate a tree metric can be
statistically inconsistent. To remedy this, we derive model-based distance
corrections for orthologous sequences without gaps, which lead to consistent
tree inference. The identifiability of model parameters from $k$-mer
frequencies is also studied. Finally, we report simulations showing the
corrected distance out-performs many other $k$-mer methods, even when sequences
are generated with an insertion and deletion process. These results have
implications for multiple sequence alignment as well, since $k$-mer methods are
usually the first step in constructing a guide tree for such algorithms.
|
[
{
"created": "Thu, 5 Nov 2015 23:46:49 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jan 2016 18:48:11 GMT",
"version": "v2"
}
] |
2016-01-15
|
[
[
"Allman",
"Elizabeth S.",
""
],
[
"Rhodes",
"John A.",
""
],
[
"Sullivant",
"Seth",
""
]
] |
Frequencies of $k$-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared-Euclidean distance between $k$-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from $k$-mer frequencies is also studied. Finally, we report simulations showing the corrected distance out-performs many other $k$-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well, since $k$-mer methods are usually the first step in constructing a guide tree for such algorithms.
|
2305.08316
|
Ziyuan Zhao
|
Ziyuan Zhao, Peisheng Qian, Xulei Yang, Zeng Zeng, Cuntai Guan, Wai
Leong Tam, Xiaoli Li
|
SemiGNN-PPI: Self-Ensembling Multi-Graph Neural Network for Efficient
and Generalizable Protein-Protein Interaction Prediction
|
Accepted by IJCAI 2023
| null | null | null |
q-bio.MN cs.AI cs.CE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein-protein interactions (PPIs) are crucial in various biological
processes and their study has significant implications for drug development and
disease diagnosis. Existing deep learning methods suffer from significant
performance degradation under complex real-world scenarios due to various
factors, e.g., label scarcity and domain shift. In this paper, we propose a
self-ensembling multigraph neural network (SemiGNN-PPI) that can effectively
predict PPIs while being both efficient and generalizable. In SemiGNN-PPI, we
not only model the protein correlations but explore the label dependencies by
constructing and processing multiple graphs from the perspectives of both
features and labels in the graph learning process. We further marry GNN with
Mean Teacher to effectively leverage unlabeled graph-structured PPI data for
self-ensemble graph learning. We also design multiple graph consistency
constraints to align the student and teacher graphs in the feature embedding
space, enabling the student model to better learn from the teacher model by
incorporating more relationships. Extensive experiments on PPI datasets of
different scales with different evaluation settings demonstrate that
SemiGNN-PPI outperforms state-of-the-art PPI prediction methods, particularly
in challenging scenarios such as training with limited annotations and testing
on unseen data.
|
[
{
"created": "Mon, 15 May 2023 03:06:44 GMT",
"version": "v1"
}
] |
2023-05-16
|
[
[
"Zhao",
"Ziyuan",
""
],
[
"Qian",
"Peisheng",
""
],
[
"Yang",
"Xulei",
""
],
[
"Zeng",
"Zeng",
""
],
[
"Guan",
"Cuntai",
""
],
[
"Tam",
"Wai Leong",
""
],
[
"Li",
"Xiaoli",
""
]
] |
Protein-protein interactions (PPIs) are crucial in various biological processes and their study has significant implications for drug development and disease diagnosis. Existing deep learning methods suffer from significant performance degradation under complex real-world scenarios due to various factors, e.g., label scarcity and domain shift. In this paper, we propose a self-ensembling multigraph neural network (SemiGNN-PPI) that can effectively predict PPIs while being both efficient and generalizable. In SemiGNN-PPI, we not only model the protein correlations but explore the label dependencies by constructing and processing multiple graphs from the perspectives of both features and labels in the graph learning process. We further marry GNN with Mean Teacher to effectively leverage unlabeled graph-structured PPI data for self-ensemble graph learning. We also design multiple graph consistency constraints to align the student and teacher graphs in the feature embedding space, enabling the student model to better learn from the teacher model by incorporating more relationships. Extensive experiments on PPI datasets of different scales with different evaluation settings demonstrate that SemiGNN-PPI outperforms state-of-the-art PPI prediction methods, particularly in challenging scenarios such as training with limited annotations and testing on unseen data.
|
2405.09327
|
C\'ecile An\'e
|
Benjamin Teo, Paul Bastide, C\'ecile An\'e
|
Leveraging graphical model techniques to study evolution on phylogenetic
networks
| null | null | null | null |
q-bio.PE stat.CO
|
http://creativecommons.org/licenses/by/4.0/
|
The evolution of molecular and phenotypic traits is commonly modelled using
Markov processes along a rooted phylogeny. This phylogeny can be a tree, or a
network if it includes reticulations, representing events such as hybridization
or admixture. Computing the likelihood of data observed at the leaves is costly
as the size and complexity of the phylogeny grows. Efficient algorithms exist
for trees, but cannot be applied to networks. We show that a vast array of
models for trait evolution along phylogenetic networks can be reformulated as
graphical models, for which efficient belief propagation algorithms exist. We
provide a brief review of belief propagation on general graphical models, then
focus on linear Gaussian models for continuous traits. We show how belief
propagation techniques can be applied for exact or approximate (but more
scalable) likelihood and gradient calculations, and prove novel results for
efficient parameter inference of some models. We highlight the possible
fruitful interactions between graphical models and phylogenetic methods. For
example, approximate likelihood approaches have the potential to greatly reduce
computational costs for phylogenies with reticulations.
|
[
{
"created": "Wed, 15 May 2024 13:27:03 GMT",
"version": "v1"
}
] |
2024-05-16
|
[
[
"Teo",
"Benjamin",
""
],
[
"Bastide",
"Paul",
""
],
[
"Ané",
"Cécile",
""
]
] |
The evolution of molecular and phenotypic traits is commonly modelled using Markov processes along a rooted phylogeny. This phylogeny can be a tree, or a network if it includes reticulations, representing events such as hybridization or admixture. Computing the likelihood of data observed at the leaves is costly as the size and complexity of the phylogeny grows. Efficient algorithms exist for trees, but cannot be applied to networks. We show that a vast array of models for trait evolution along phylogenetic networks can be reformulated as graphical models, for which efficient belief propagation algorithms exist. We provide a brief review of belief propagation on general graphical models, then focus on linear Gaussian models for continuous traits. We show how belief propagation techniques can be applied for exact or approximate (but more scalable) likelihood and gradient calculations, and prove novel results for efficient parameter inference of some models. We highlight the possible fruitful interactions between graphical models and phylogenetic methods. For example, approximate likelihood approaches have the potential to greatly reduce computational costs for phylogenies with reticulations.
|
2104.01468
|
Nathan Ranno
|
Nathan Ranno, Dong Si
|
Neural Representations of Cryo-EM Maps and a Graph-Based Interpretation
|
15 pages, 8 figures
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Advances in imagery at atomic and near-atomic resolution, such as cryogenic
electron microscopy (cryo-EM), have led to an influx of high resolution images
of proteins and other macromolecular structures to data banks worldwide.
Producing a protein structure from the discrete voxel grid data of cryo-EM maps
involves interpolation into the continuous spatial domain. We present a novel
data format called the neural cryo-EM map, which is formed from a set of neural
networks that accurately parameterize cryo-EM maps and provide native,
spatially continuous data for density and gradient. As a case study of this
data format, we create graph-based interpretations of high resolution
experimental cryo-EM maps. Normalized cryo-EM map values interpolated using the
non-linear neural cryo-EM format are more accurate than a conventional
tri-linear interpolation, consistently scoring less than 0.01 mean absolute
error versus up to 0.12 mean absolute error. Our graph-based interpretations of
115 experimental cryo-EM maps from 1.15 to 4.0 Angstrom resolution provide high
coverage of the underlying amino acid residue locations, while accuracy of
nodes is correlated with resolution. The nodes of graphs created from atomic
resolution maps (higher than 1.6 Angstroms) provide greater than 99% residue
coverage as well as 85% full atomic coverage with a mean of less than 0.19 Angstrom
root mean squared deviation (RMSD). Other graphs have a mean 84% residue
coverage with less specificity of the nodes due to experimental noise and
differences of density context at lower resolutions. This work may be
generalized for transforming any 3D grid-based data format into a non-linear,
continuous, and differentiable format for downstream geometric deep
learning applications.
|
[
{
"created": "Sat, 3 Apr 2021 19:49:16 GMT",
"version": "v1"
}
] |
2021-04-06
|
[
[
"Ranno",
"Nathan",
""
],
[
"Si",
"Dong",
""
]
] |
Advances in imagery at atomic and near-atomic resolution, such as cryogenic electron microscopy (cryo-EM), have led to an influx of high resolution images of proteins and other macromolecular structures to data banks worldwide. Producing a protein structure from the discrete voxel grid data of cryo-EM maps involves interpolation into the continuous spatial domain. We present a novel data format called the neural cryo-EM map, which is formed from a set of neural networks that accurately parameterize cryo-EM maps and provide native, spatially continuous data for density and gradient. As a case study of this data format, we create graph-based interpretations of high resolution experimental cryo-EM maps. Normalized cryo-EM map values interpolated using the non-linear neural cryo-EM format are more accurate than a conventional tri-linear interpolation, consistently scoring less than 0.01 mean absolute error versus up to 0.12 mean absolute error. Our graph-based interpretations of 115 experimental cryo-EM maps from 1.15 to 4.0 Angstrom resolution provide high coverage of the underlying amino acid residue locations, while accuracy of nodes is correlated with resolution. The nodes of graphs created from atomic resolution maps (higher than 1.6 Angstroms) provide greater than 99% residue coverage as well as 85% full atomic coverage with a mean of less than 0.19 Angstrom root mean squared deviation (RMSD). Other graphs have a mean 84% residue coverage with less specificity of the nodes due to experimental noise and differences of density context at lower resolutions. This work may be generalized for transforming any 3D grid-based data format into a non-linear, continuous, and differentiable format for downstream geometric deep learning applications.
|
2303.14248
|
Mattia Sensi
|
Rossella Della Marca, Alberto d'Onofrio, Mattia Sensi, Sara Sottile
|
A geometric analysis of the impact of large but finite switching rates
on vaccination evolutionary games
|
26 pages, 6 figures
|
Nonlinear Analysis: Real World Applications, Volume 75, February
2024, 103986
|
10.1016/j.nonrwa.2023.103986
| null |
q-bio.PE math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In contemporary society, social networks accelerate decision dynamics causing
a rapid switch of opinions in a number of fields, including the prevention of
infectious diseases by means of vaccines. This means that opinion dynamics can
nowadays be much faster than the spread of epidemics. Hence, we propose a
Susceptible-Infectious-Removed epidemic model coupled with an evolutionary
vaccination game embedding the public health system efforts to increase vaccine
uptake. This results in a global system ``epidemic model + evolutionary game''.
The epidemiological novelty of this work is that we assume that the switching
to the strategy ``pro vaccine'' depends on the incidence of the disease. As a
consequence of the above-mentioned accelerated decisions, the dynamics of the
system acts on two different scales: a fast scale for the vaccine decisions and
a slower scale for the spread of the disease. Another, and more methodological,
element of novelty is that we apply Geometrical Singular Perturbation Theory
(GSPT) to such a two-scale model and we then compare the geometric analysis
with the Quasi-Steady-State Approximation (QSSA) approach, showing a
criticality in the latter. Later, we apply the GSPT approach to the disease
prevalence-based model already studied in (Della Marca and d'Onofrio, Comm Nonl
Sci Num Sim, 2021) via the QSSA approach by considering medium-large values of
the strategy switching parameter.
|
[
{
"created": "Fri, 24 Mar 2023 19:26:51 GMT",
"version": "v1"
}
] |
2023-11-06
|
[
[
"Della Marca",
"Rossella",
""
],
[
"d'Onofrio",
"Alberto",
""
],
[
"Sensi",
"Mattia",
""
],
[
"Sottile",
"Sara",
""
]
] |
In contemporary society, social networks accelerate decision dynamics causing a rapid switch of opinions in a number of fields, including the prevention of infectious diseases by means of vaccines. This means that opinion dynamics can nowadays be much faster than the spread of epidemics. Hence, we propose a Susceptible-Infectious-Removed epidemic model coupled with an evolutionary vaccination game embedding the public health system efforts to increase vaccine uptake. This results in a global system ``epidemic model + evolutionary game''. The epidemiological novelty of this work is that we assume that the switching to the strategy ``pro vaccine'' depends on the incidence of the disease. As a consequence of the above-mentioned accelerated decisions, the dynamics of the system acts on two different scales: a fast scale for the vaccine decisions and a slower scale for the spread of the disease. Another, and more methodological, element of novelty is that we apply Geometrical Singular Perturbation Theory (GSPT) to such a two-scale model and we then compare the geometric analysis with the Quasi-Steady-State Approximation (QSSA) approach, showing a criticality in the latter. Later, we apply the GSPT approach to the disease prevalence-based model already studied in (Della Marca and d'Onofrio, Comm Nonl Sci Num Sim, 2021) via the QSSA approach by considering medium-large values of the strategy switching parameter.
|
2310.20601
|
Jacob Tanner
|
Jacob Tanner, Sina Mansour L., Ludovico Coletta, Alessandro Gozzi,
Richard F. Betzel
|
Functional connectivity modules in recurrent neural networks: function,
origin and dynamics
| null | null | null | null |
q-bio.NC cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Understanding the ubiquitous phenomenon of neural synchronization across
species and organizational levels is crucial for decoding brain function.
Despite its prevalence, the specific functional role, origin, and dynamical
implications of modular structures in correlation-based networks remain
ambiguous. Using recurrent neural networks trained on systems neuroscience
tasks, this study investigates these important characteristics of modularity in
correlation networks. We demonstrate that modules are functionally coherent
units that contribute to specialized information processing. We show that
modules form spontaneously from asymmetries in the sign and weight of
projections from the input layer to the recurrent layer. Moreover, we show that
modules define connections with similar roles in governing system behavior and
dynamics. Collectively, our findings clarify the function, formation, and
operational significance of functional connectivity modules, offering insights
into cortical function and laying the groundwork for further studies on brain
function, development, and dynamics.
|
[
{
"created": "Tue, 31 Oct 2023 16:37:01 GMT",
"version": "v1"
}
] |
2023-11-01
|
[
[
"Tanner",
"Jacob",
""
],
[
"L.",
"Sina Mansour",
""
],
[
"Coletta",
"Ludovico",
""
],
[
"Gozzi",
"Alessandro",
""
],
[
"Betzel",
"Richard F.",
""
]
] |
Understanding the ubiquitous phenomenon of neural synchronization across species and organizational levels is crucial for decoding brain function. Despite its prevalence, the specific functional role, origin, and dynamical implications of modular structures in correlation-based networks remain ambiguous. Using recurrent neural networks trained on systems neuroscience tasks, this study investigates these important characteristics of modularity in correlation networks. We demonstrate that modules are functionally coherent units that contribute to specialized information processing. We show that modules form spontaneously from asymmetries in the sign and weight of projections from the input layer to the recurrent layer. Moreover, we show that modules define connections with similar roles in governing system behavior and dynamics. Collectively, our findings clarify the function, formation, and operational significance of functional connectivity modules, offering insights into cortical function and laying the groundwork for further studies on brain function, development, and dynamics.
|
2311.13801
|
Sikta Das Adhikari
|
Sikta Das Adhikari, Jiaxin Yang, Jianrong Wang, Yuehua Cui
|
A selective review of recent developments in spatially variable gene
detection for spatial transcriptomics
| null | null |
10.1016/j.csbj.2024.01.016
| null |
q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
With the emergence of advanced spatial transcriptomic technologies, there has
been a surge in research papers dedicated to analyzing spatial transcriptomics
data, resulting in significant contributions to our understanding of biology.
The initial stage of downstream analysis of spatial transcriptomic data has
centered on identifying spatially variable genes (SVGs) or genes expressed with
specific spatial patterns across the tissue. SVG detection is an important task
since many downstream analyses depend on these selected SVGs. Over the past few
years, a plethora of new methods have been proposed for the detection of SVGs,
accompanied by numerous innovative concepts and discussions. This article
provides a selective review of methods and their practical implementations,
offering valuable insights into the current literature in this field.
|
[
{
"created": "Thu, 23 Nov 2023 04:20:14 GMT",
"version": "v1"
}
] |
2024-04-11
|
[
[
"Adhikari",
"Sikta Das",
""
],
[
"Yang",
"Jiaxin",
""
],
[
"Wang",
"Jianrong",
""
],
[
"Cui",
"Yuehua",
""
]
] |
With the emergence of advanced spatial transcriptomic technologies, there has been a surge in research papers dedicated to analyzing spatial transcriptomics data, resulting in significant contributions to our understanding of biology. The initial stage of downstream analysis of spatial transcriptomic data has centered on identifying spatially variable genes (SVGs) or genes expressed with specific spatial patterns across the tissue. SVG detection is an important task since many downstream analyses depend on these selected SVGs. Over the past few years, a plethora of new methods have been proposed for the detection of SVGs, accompanied by numerous innovative concepts and discussions. This article provides a selective review of methods and their practical implementations, offering valuable insights into the current literature in this field.
|
1812.05668
|
Jiansheng Wu
|
Hang Yu, Ziyi Liu, Jiansheng Wu
|
Forgetting in order to Remember Better
|
4 pages, 2 figures
| null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In human memory, forgetting occurs rapidly after remembering, and the rate of
forgetting slows down as time goes on. This is the so-called Ebbinghaus
forgetting curve. There are many explanations of how this curve arises based on
the properties of the brain. In this article, we use a simple mathematical
model to explain the mechanism of forgetting based on the rearrangement
inequality, obtain a general formalism for short-term and long-term memory, and
use it to fit the Ebbinghaus forgetting curve. We also find that forgetting is
not a flaw; instead, it helps to improve the efficiency of remembering when
humans confront different situations by reducing the interference of
information and reducing the number of retrievals. Furthermore, we find that
the interference of information limits the capacity of human memory, which is
the "magic number seven".
|
[
{
"created": "Wed, 12 Dec 2018 18:54:27 GMT",
"version": "v1"
}
] |
2018-12-17
|
[
[
"Yu",
"Hang",
""
],
[
"Liu",
"Ziyi",
""
],
[
"Wu",
"Jiansheng",
""
]
] |
In human memory, forgetting occurs rapidly after remembering, and the rate of forgetting slows down as time goes on. This is the so-called Ebbinghaus forgetting curve. There are many explanations of how this curve arises based on the properties of the brain. In this article, we use a simple mathematical model to explain the mechanism of forgetting based on the rearrangement inequality, obtain a general formalism for short-term and long-term memory, and use it to fit the Ebbinghaus forgetting curve. We also find that forgetting is not a flaw; instead, it helps to improve the efficiency of remembering when humans confront different situations by reducing the interference of information and reducing the number of retrievals. Furthermore, we find that the interference of information limits the capacity of human memory, which is the "magic number seven".
|
2201.08980
|
Thomas Sturm
|
Christoph L\"uders, Thomas Sturm, Ovidiu Radulescu
|
ODEbase: A Repository of ODE Systems for Systems Biology
| null | null | null | null |
q-bio.MN cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, symbolic computation and computer algebra systems have been
successfully applied in systems biology, especially in chemical reaction
network theory. One advantage of symbolic computation is its potential for
qualitative answers to biological questions. Qualitative methods analyze
dynamical input systems as formal objects, in contrast to investigating only
part of the state space, as is the case with numerical simulation. However,
symbolic computation tools and libraries have a different set of requirements
for their input data than their numerical counterparts. A common format used in
mathematical modeling of biological processes is SBML. We illustrate that the
use of SBML data in symbolic computation requires significant pre-processing,
incorporating external biological and mathematical expertise. ODEbase provides
high quality symbolic computation input data derived from established existing
biomodels, covering in particular the BioModels database.
|
[
{
"created": "Sat, 22 Jan 2022 07:22:01 GMT",
"version": "v1"
}
] |
2022-01-25
|
[
[
"Lüders",
"Christoph",
""
],
[
"Sturm",
"Thomas",
""
],
[
"Radulescu",
"Ovidiu",
""
]
] |
Recently, symbolic computation and computer algebra systems have been successfully applied in systems biology, especially in chemical reaction network theory. One advantage of symbolic computation is its potential for qualitative answers to biological questions. Qualitative methods analyze dynamical input systems as formal objects, in contrast to investigating only part of the state space, as is the case with numerical simulation. However, symbolic computation tools and libraries have a different set of requirements for their input data than their numerical counterparts. A common format used in mathematical modeling of biological processes is SBML. We illustrate that the use of SBML data in symbolic computation requires significant pre-processing, incorporating external biological and mathematical expertise. ODEbase provides high quality symbolic computation input data derived from established existing biomodels, covering in particular the BioModels database.
|
0710.1333
|
Jesus M. Cortes
|
J.M. Cortes, A. Greve, A.B. Barrett and M.C.W. van Rossum
|
Dynamics and robustness of familiarity memory
|
22 pages, 3 figures
| null | null | null |
q-bio.NC
| null |
When one is presented with an item or a face, one can sometimes have a sense
of recognition without being able to recall where or when one has encountered
it before. This sense of recognition is known as familiarity. Following
previous computational models of familiarity memory we investigate the
dynamical properties of familiarity discrimination, and contrast two different
familiarity discriminators: one based on the energy of the neural network, and
the other based on the time derivative of the energy. We show how the
familiarity signal decays after a stimulus is presented, and examine the
robustness of the familiarity discriminator in the presence of random
fluctuations in neural activity. For both discriminators we establish, via a
combined method of signal-to-noise ratio and mean field analysis, how the
maximum number of successfully discriminated stimuli depends on the noise
level.
|
[
{
"created": "Sun, 7 Oct 2007 00:14:07 GMT",
"version": "v1"
}
] |
2007-10-09
|
[
[
"Cortes",
"J. M.",
""
],
[
"Greve",
"A.",
""
],
[
"Barrett",
"A. B.",
""
],
[
"van Rossum",
"M. C. W.",
""
]
] |
When one is presented with an item or a face, one can sometimes have a sense of recognition without being able to recall where or when one has encountered it before. This sense of recognition is known as familiarity. Following previous computational models of familiarity memory we investigate the dynamical properties of familiarity discrimination, and contrast two different familiarity discriminators: one based on the energy of the neural network, and the other based on the time derivative of the energy. We show how the familiarity signal decays after a stimulus is presented, and examine the robustness of the familiarity discriminator in the presence of random fluctuations in neural activity. For both discriminators we establish, via a combined method of signal-to-noise ratio and mean field analysis, how the maximum number of successfully discriminated stimuli depends on the noise level.
|
q-bio/0406004
|
Manoj Gopalakrishnan
|
Manoj Gopalakrishnan, Kimberly Forsten-Williams, Theressa R. Cassino,
Luz Padro, Thomas E. Ryan and Uwe C. Tauber
|
Ligand Rebinding: Self-consistent Mean-field Theory and Numerical
Simulations Applied to SPR Studies
|
minor errors in notation corrected, added figure, appendix and
glossary, 37 pages, to appear in Eur. Biophys. J
|
Eur. Biophys. J. 34 (2005) 943
| null | null |
q-bio.QM cond-mat.stat-mech q-bio.SC
| null |
Rebinding of dissociated ligands from cell surface proteins can confound
quantitative measurements of dissociation rates important for characterizing
the affinity of binding interactions. This can be true also for in vitro
techniques such as surface plasmon resonance (SPR). We present experimental
results using SPR for the interaction of insulin-like growth factor-I (IGF-I)
with one of its binding proteins, IGF binding protein-3 (IGFBP-3), and show
that rebinding, even with the addition of soluble heparin in the dissociation
phase, does not exhibit the expected exponential decay characteristic of a 1:1
binding reaction. We thus consider the effect of (multiple) rebinding events
and, within a self-consistent mean-field approximation, we derive the complete
mathematical form for the fraction of bound ligand as a function of time. We
show that, except for very low surface coverage/association rate, this function
is non-exponential at all times, indicating that multiple rebinding events
strongly influence dissociation even at early times. We compare the mean-field
results with numerical simulations and find good agreement, although deviations
are measurable in certain cases. Our analysis of the IGF-I-IGFBP-3 data
indicates that rebinding is prominent for this system and that the theoretical
predictions fit the experimental data well. Our results provide a means for
analyzing SPR biosensor data where rebinding is problematic and a methodology
to do so is presented.
|
[
{
"created": "Wed, 2 Jun 2004 07:05:42 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Oct 2004 08:23:48 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Feb 2005 12:16:39 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Gopalakrishnan",
"Manoj",
""
],
[
"Forsten-Williams",
"Kimberly",
""
],
[
"Cassino",
"Theressa R.",
""
],
[
"Padro",
"Luz",
""
],
[
"Ryan",
"Thomas E.",
""
],
[
"Tauber",
"Uwe C.",
""
]
] |
Rebinding of dissociated ligands from cell surface proteins can confound quantitative measurements of dissociation rates important for characterizing the affinity of binding interactions. This can be true also for in vitro techniques such as surface plasmon resonance (SPR). We present experimental results using SPR for the interaction of insulin-like growth factor-I (IGF-I) with one of its binding proteins, IGF binding protein-3 (IGFBP-3), and show that rebinding, even with the addition of soluble heparin in the dissociation phase, does not exhibit the expected exponential decay characteristic of a 1:1 binding reaction. We thus consider the effect of (multiple) rebinding events and, within a self-consistent mean-field approximation, we derive the complete mathematical form for the fraction of bound ligand as a function of time. We show that, except for very low surface coverage/association rate, this function is non-exponential at all times, indicating that multiple rebinding events strongly influence dissociation even at early times. We compare the mean-field results with numerical simulations and find good agreement, although deviations are measurable in certain cases. Our analysis of the IGF-I-IGFBP-3 data indicates that rebinding is prominent for this system and that the theoretical predictions fit the experimental data well. Our results provide a means for analyzing SPR biosensor data where rebinding is problematic and a methodology to do so is presented.
|
2105.02144
|
Sandeep Juneja
|
Sandeep Juneja and Daksh Mittal
|
Modelling the Second Covid-19 Wave in Mumbai
|
34 pages, 33 figures (including 3 tables)
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
India has been hit by a huge second wave of Covid-19 that started in
mid-February 2021. Mumbai was amongst the first cities to see the increase. In
this report, we use our agent based simulator to computationally study the
second wave in Mumbai. We build upon our earlier analysis, where projections
were made from November 2020 onwards. We use our simulator to conduct an
extensive scenario analysis - we play out many plausible scenarios through
varying economic activity, reinfection levels, population compliance,
infectiveness, prevalence and lethality of the possible variant strains, and
infection spread via local trains to arrive at those that may better explain
the second wave fatality numbers. We observe and highlight that timings of peak
and valley of the fatalities in the second wave are robust to many plausible
scenarios, suggesting that they are likely to be accurate projections for
Mumbai. During the second wave, the observed fatalities were low in February
and mid-March and saw a phase change or a steep increase in the growth rate
after around late March. We conduct extensive experiments to replicate this
observed sharp convexity. This is not an easy phenomenon to replicate, and we
find that explanations such as increased laxity in the population, increased
reinfections, increased intensity of infections in Mumbai transportation,
increased lethality in the virus, or a combination amongst them, generally do a
poor job of matching this pattern. We find that the most likely explanation is
the presence of a small amount of an extremely infective variant on February 1
that grows rapidly thereafter and becomes a dominant strain by mid-March. From a
prescriptive view, this points to an urgent need for extensive and continuous
genome sequencing to establish existence and prevalence of different virus
strains in Mumbai and in India, as they evolve over time.
|
[
{
"created": "Wed, 5 May 2021 15:51:56 GMT",
"version": "v1"
}
] |
2021-05-06
|
[
[
"Juneja",
"Sandeep",
""
],
[
"Mittal",
"Daksh",
""
]
] |
India has been hit by a huge second wave of Covid-19 that started in mid-February 2021. Mumbai was amongst the first cities to see the increase. In this report, we use our agent based simulator to computationally study the second wave in Mumbai. We build upon our earlier analysis, where projections were made from November 2020 onwards. We use our simulator to conduct an extensive scenario analysis - we play out many plausible scenarios through varying economic activity, reinfection levels, population compliance, infectiveness, prevalence and lethality of the possible variant strains, and infection spread via local trains to arrive at those that may better explain the second wave fatality numbers. We observe and highlight that timings of peak and valley of the fatalities in the second wave are robust to many plausible scenarios, suggesting that they are likely to be accurate projections for Mumbai. During the second wave, the observed fatalities were low in February and mid-March and saw a phase change or a steep increase in the growth rate after around late March. We conduct extensive experiments to replicate this observed sharp convexity. This is not an easy phenomenon to replicate, and we find that explanations such as increased laxity in the population, increased reinfections, increased intensity of infections in Mumbai transportation, increased lethality in the virus, or a combination amongst them, generally do a poor job of matching this pattern. We find that the most likely explanation is the presence of a small amount of an extremely infective variant on February 1 that grows rapidly thereafter and becomes a dominant strain by mid-March. From a prescriptive view, this points to an urgent need for extensive and continuous genome sequencing to establish existence and prevalence of different virus strains in Mumbai and in India, as they evolve over time.
|
2304.12825
|
Fang Sun
|
Fang Sun, Zhihao Zhan, Hongyu Guo, Ming Zhang, Jian Tang
|
GraphVF: Controllable Protein-Specific 3D Molecule Generation with
Variational Flow
|
15 pages, 8 figures
| null | null | null |
q-bio.BM cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Designing molecules that bind to specific target proteins is a fundamental
task in drug discovery. Recent models leverage geometric constraints to
generate ligand molecules that bind cohesively with specific protein pockets.
However, these models cannot effectively generate 3D molecules with 2D skeletal
curtailments and property constraints, which are pivotal to drug potency and
development. To tackle this challenge, we propose GraphVF, a variational
flow-based framework that combines 2D topology and 3D geometry, for
controllable generation of binding 3D molecules. Empirically, our method
achieves state-of-the-art binding affinity and realistic sub-structural layouts
for protein-specific generation. In particular, GraphVF represents the first
controllable geometry-aware, protein-specific molecule generation method, which
can generate binding 3D molecules with tailored sub-structures and
physio-chemical properties. Our code is available at
https://github.com/Franco-Solis/GraphVF-code.
|
[
{
"created": "Thu, 23 Feb 2023 17:32:49 GMT",
"version": "v1"
}
] |
2023-04-26
|
[
[
"Sun",
"Fang",
""
],
[
"Zhan",
"Zhihao",
""
],
[
"Guo",
"Hongyu",
""
],
[
"Zhang",
"Ming",
""
],
[
"Tang",
"Jian",
""
]
] |
Designing molecules that bind to specific target proteins is a fundamental task in drug discovery. Recent models leverage geometric constraints to generate ligand molecules that bind cohesively with specific protein pockets. However, these models cannot effectively generate 3D molecules with 2D skeletal curtailments and property constraints, which are pivotal to drug potency and development. To tackle this challenge, we propose GraphVF, a variational flow-based framework that combines 2D topology and 3D geometry, for controllable generation of binding 3D molecules. Empirically, our method achieves state-of-the-art binding affinity and realistic sub-structural layouts for protein-specific generation. In particular, GraphVF represents the first controllable geometry-aware, protein-specific molecule generation method, which can generate binding 3D molecules with tailored sub-structures and physio-chemical properties. Our code is available at https://github.com/Franco-Solis/GraphVF-code.
|
2209.00380
|
Nathalie Buonviso
|
Maxime Juventin, Mickael Zbili, Nicolas Fourcaud-Trocm\'e, Samuel
Garcia, Nathalie Buonviso (CRNL), Corine Amat
|
Respiratory rhythm entrains membrane potential and spiking of
non-olfactory neurons
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, several studies have tended to show a respiratory drive in
numerous brain areas, so that the respiratory rhythm could be considered a
master clock promoting communication between distant brain areas. However,
outside of the olfactory system it is not known whether respiration-related
oscillations (RRo) exist in the membrane potential (MP) of neurons, nor
whether they can structure spiking discharge. To fill this gap, we
co-recorded MP and LFP activities in different non-olfactory brain areas:
medial prefrontal cortex (mPFC), primary somatosensory cortex (S1), primary
visual cortex (V1), and hippocampus (HPC), in urethane-anesthetized rats.
Using a respiratory cycle-by-cycle analysis, we observed that respiration
could modulate both MP and spiking discharges in all recorded areas. Further
quantification revealed that RRo episodes were transient in most neurons (5
consecutive cycles on average). RRo development in MP was largely influenced
by the presence of respiratory modulation in the LFP. Finally, moderate
hyperpolarization reduced RRo occurrence within cells of mPFC and S1. By
showing that the respiratory rhythm influences brain activity as deep as the
MP of non-olfactory neurons, our data support the idea that the respiratory
rhythm could mediate long-range communication.
|
[
{
"created": "Thu, 1 Sep 2022 11:49:58 GMT",
"version": "v1"
}
] |
2022-09-02
|
[
[
"Juventin",
"Maxime",
"",
"CRNL"
],
[
"Zbili",
"Mickael",
"",
"CRNL"
],
[
"Fourcaud-Trocmé",
"Nicolas",
"",
"CRNL"
],
[
"Garcia",
"Samuel",
"",
"CRNL"
],
[
"Buonviso",
"Nathalie",
"",
"CRNL"
],
[
"Amat",
"Corine",
""
]
] |
In recent years, several studies have tended to show a respiratory drive in numerous brain areas, so that the respiratory rhythm could be considered a master clock promoting communication between distant brain areas. However, outside of the olfactory system it is not known whether respiration-related oscillations (RRo) exist in the membrane potential (MP) of neurons, nor whether they can structure spiking discharge. To fill this gap, we co-recorded MP and LFP activities in different non-olfactory brain areas: medial prefrontal cortex (mPFC), primary somatosensory cortex (S1), primary visual cortex (V1), and hippocampus (HPC), in urethane-anesthetized rats. Using a respiratory cycle-by-cycle analysis, we observed that respiration could modulate both MP and spiking discharges in all recorded areas. Further quantification revealed that RRo episodes were transient in most neurons (5 consecutive cycles on average). RRo development in MP was largely influenced by the presence of respiratory modulation in the LFP. Finally, moderate hyperpolarization reduced RRo occurrence within cells of mPFC and S1. By showing that the respiratory rhythm influences brain activity as deep as the MP of non-olfactory neurons, our data support the idea that the respiratory rhythm could mediate long-range communication.
|
1105.0515
|
Yunkyu Sohn
|
Yunkyu Sohn, Jung-Kyoo Choi and T.K. Ahn
|
Core-Periphery Segregation in Evolving Prisoner's Dilemma Networks
| null | null | null | null |
q-bio.PE cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense cooperative networks are an essential element of social capital for a
prosperous society. These networks enable individuals to overcome collective
action dilemmas by enhancing trust. In many biological and social settings,
network structures evolve endogenously as agents exit relationships and build
new ones. However, the process by which evolutionary dynamics lead to
self-organization of dense cooperative networks has not been explored. Our
large group prisoner's dilemma experiments with exit and partner choice options
show that core-periphery segregation of cooperators and defectors drives the
emergence of cooperation. Cooperators' Quit-for-Tat and defectors' Roving
strategy lead to a highly asymmetric core and periphery structure. Densely
connected to each other, cooperators successfully isolate defectors and earn
larger payoffs than defectors. Our analysis of the topological characteristics
of evolving networks illuminates how social capital is generated.
|
[
{
"created": "Tue, 3 May 2011 08:40:31 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Dec 2012 14:38:34 GMT",
"version": "v2"
}
] |
2012-12-11
|
[
[
"Sohn",
"Yunkyu",
""
],
[
"Choi",
"Jung-Kyoo",
""
],
[
"Ahn",
"T. K.",
""
]
] |
Dense cooperative networks are an essential element of social capital for a prosperous society. These networks enable individuals to overcome collective action dilemmas by enhancing trust. In many biological and social settings, network structures evolve endogenously as agents exit relationships and build new ones. However, the process by which evolutionary dynamics lead to self-organization of dense cooperative networks has not been explored. Our large group prisoner's dilemma experiments with exit and partner choice options show that core-periphery segregation of cooperators and defectors drives the emergence of cooperation. Cooperators' Quit-for-Tat and defectors' Roving strategy lead to a highly asymmetric core and periphery structure. Densely connected to each other, cooperators successfully isolate defectors and earn larger payoffs than defectors. Our analysis of the topological characteristics of evolving networks illuminates how social capital is generated.
|
2004.01011
|
Ellen Baake
|
Ellen Baake and Anton Wakolbinger
|
Microbial populations under selection
|
to appear in: Probabilistic Structures in Evolution, E.~Baake and
A.~Wakolbinger (eds.), EMS Publishing House, Zurich
|
in: Probabilistic Structures in Evolution (E. Baake, A.
Wakolbinger, eds.), EMS Press, Berlin, 2021, pp. 43-68
| null | null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This chapter gives a synopsis of recent approaches to model and analyse the
evolution of microbial populations under selection. The first part reviews two
population genetic models of Lenski's long-term evolution experiment with
Escherichia coli, where models aim at explaining the observed curve of the
evolution of the mean fitness. The second part describes a model of a
host-pathogen system where the population of pathogens experiences balancing
selection, migration, and mutation, as motivated by observations of the genetic
diversity of HCMV (the human cytomegalovirus) across hosts.
|
[
{
"created": "Thu, 2 Apr 2020 14:04:27 GMT",
"version": "v1"
}
] |
2021-06-30
|
[
[
"Baake",
"Ellen",
""
],
[
"Wakolbinger",
"Anton",
""
]
] |
This chapter gives a synopsis of recent approaches to model and analyse the evolution of microbial populations under selection. The first part reviews two population genetic models of Lenski's long-term evolution experiment with Escherichia coli, where models aim at explaining the observed curve of the evolution of the mean fitness. The second part describes a model of a host-pathogen system where the population of pathogens experiences balancing selection, migration, and mutation, as motivated by observations of the genetic diversity of HCMV (the human cytomegalovirus) across hosts.
|
1304.2960
|
Kieran Smallbone
|
Kieran Smallbone
|
Standardized network reconstruction of E. coli metabolism
| null | null | null | null |
q-bio.MN
|
http://creativecommons.org/licenses/publicdomain/
|
We have created a genome-scale network reconstruction of Escherichia coli
metabolism. Existing reconstructions were improved in terms of annotation
standards, to facilitate their subsequent use in dynamic modelling. The
resultant network is available from EcoliNet (http://ecoli.sf.net/).
|
[
{
"created": "Tue, 9 Apr 2013 09:07:13 GMT",
"version": "v1"
}
] |
2013-04-11
|
[
[
"Smallbone",
"Kieran",
""
]
] |
We have created a genome-scale network reconstruction of Escherichia coli metabolism. Existing reconstructions were improved in terms of annotation standards, to facilitate their subsequent use in dynamic modelling. The resultant network is available from EcoliNet (http://ecoli.sf.net/).
|
2108.12386
|
Kapila Gunasekera PhD
|
Daniel Nilsson, Kapila Gunasekera, Jan Mani, Magne Osteras, Laurent
Farinelli, Loic Baerlocher, Isabel Roditi, Torsten Ochsenreiter
|
Spliced Leader Trapping Reveals Widespread Alternative Splicing Patterns
in the Highly Dynamic Transcriptome of Trypanosoma brucei
|
13 pages, 8 figures
| null |
10.1371/journal.ppat.1001037
| null |
q-bio.GN q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Trans-splicing of leader sequences onto the 5' ends of mRNAs is a widespread
phenomenon in protozoa, nematodes and some chordates. Using parallel sequencing
we have developed a method to simultaneously map 5' splice sites and analyze the
corresponding gene expression profile, that we term spliced leader trapping
(SLT). The method can be applied to any organism with a sequenced genome and
trans-splicing of a conserved leader sequence. We analyzed the expression
profiles and splicing patterns of bloodstream and insect forms of the parasite
Trypanosoma brucei. We detected the 5' splice sites of 85% of the annotated
protein-coding genes and, contrary to previous reports, found up to 40% of
transcripts to be differentially expressed. Furthermore, we discovered more
than 2500 alternative splicing events, many of which appear to be
stage-regulated. Based on our findings we hypothesize that alternatively
spliced transcripts present a new means of regulating gene expression and could
potentially contribute to protein diversity in the parasite. The entire dataset
can be accessed online at TriTrypDB or through: http://splicer.unibe.ch/.
|
[
{
"created": "Fri, 27 Aug 2021 16:48:46 GMT",
"version": "v1"
}
] |
2021-08-30
|
[
[
"Nilsson",
"Daniel",
""
],
[
"Gunasekera",
"Kapila",
""
],
[
"Mani",
"Jan",
""
],
[
"Osteras",
"Magne",
""
],
[
"Farinelli",
"Laurent",
""
],
[
"Baerlocher",
"Loic",
""
],
[
"Roditi",
"Isabel",
""
],
[
"Ochsenreiter",
"Torsten",
""
]
] |
Trans-splicing of leader sequences onto the 5' ends of mRNAs is a widespread phenomenon in protozoa, nematodes and some chordates. Using parallel sequencing we have developed a method to simultaneously map 5' splice sites and analyze the corresponding gene expression profile, that we term spliced leader trapping (SLT). The method can be applied to any organism with a sequenced genome and trans-splicing of a conserved leader sequence. We analyzed the expression profiles and splicing patterns of bloodstream and insect forms of the parasite Trypanosoma brucei. We detected the 5' splice sites of 85% of the annotated protein-coding genes and, contrary to previous reports, found up to 40% of transcripts to be differentially expressed. Furthermore, we discovered more than 2500 alternative splicing events, many of which appear to be stage-regulated. Based on our findings we hypothesize that alternatively spliced transcripts present a new means of regulating gene expression and could potentially contribute to protein diversity in the parasite. The entire dataset can be accessed online at TriTrypDB or through: http://splicer.unibe.ch/.
|
1008.4938
|
Randen Patterson
|
Yoojin Hong, Kyung Dae Ko, Gaurav Bhardwaj, Zhenhai Zhang, Damian B.
van Rossum, and Randen L. Patterson
|
Towards Solving the Inverse Protein Folding Problem
|
22 pages, 11 figures
| null | null | null |
q-bio.QM cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately assigning folds for divergent protein sequences is a major
obstacle to structural studies and underlies the inverse protein folding
problem. Herein, we outline our theories for fold-recognition in the
"twilight-zone" of sequence similarity (<25% identity). Our analyses
demonstrate that structural sequence profiles built using Position-Specific
Scoring Matrices (PSSMs) significantly outperform multiple popular
homology-modeling algorithms for relating and predicting structures given only
their amino acid sequences. Importantly, structural sequence profiles
reconstitute SCOP fold classifications in control and test datasets. Results
from our experiments suggest that structural sequence profiles can be used to
rapidly annotate protein folds at proteomic scales. We propose that encoding
the entire Protein DataBank (~1070 folds) into structural sequence profiles
would extract interoperable information capable of improving most if not all
methods of structural modeling.
|
[
{
"created": "Sun, 29 Aug 2010 15:34:02 GMT",
"version": "v1"
}
] |
2010-08-31
|
[
[
"Hong",
"Yoojin",
""
],
[
"Ko",
"Kyung Dae",
""
],
[
"Bhardwaj",
"Gaurav",
""
],
[
"Zhang",
"Zhenhai",
""
],
[
"van Rossum",
"Damian B.",
""
],
[
"Patterson",
"Randen L.",
""
]
] |
Accurately assigning folds for divergent protein sequences is a major obstacle to structural studies and underlies the inverse protein folding problem. Herein, we outline our theories for fold-recognition in the "twilight-zone" of sequence similarity (<25% identity). Our analyses demonstrate that structural sequence profiles built using Position-Specific Scoring Matrices (PSSMs) significantly outperform multiple popular homology-modeling algorithms for relating and predicting structures given only their amino acid sequences. Importantly, structural sequence profiles reconstitute SCOP fold classifications in control and test datasets. Results from our experiments suggest that structural sequence profiles can be used to rapidly annotate protein folds at proteomic scales. We propose that encoding the entire Protein DataBank (~1070 folds) into structural sequence profiles would extract interoperable information capable of improving most if not all methods of structural modeling.
|
1307.4789
|
Wes Maciejewski
|
Wes Maciejewski
|
Reproductive Value in Graph-structured Populations
| null |
Journal of Theoretical Biology, (2014), vol.340, pp.285-293
|
10.1016/j.jtbi.2013.09.032
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evolutionary graph theory has grown to be an area of intense study. Despite
the amount of interest in the field, it seems to have grown separate from other
subfields of population genetics and evolution. In the current work I introduce
the concept of Fisher's (1930) reproductive value into the study of evolution
on graphs. Reproductive value is a measure of the expected genetic contribution
of an individual to a distant future generation. In a heterogeneous
graph-structured population, differences in the number of connections among
individuals translate into differences in the expected number of offspring,
even if all individuals have the same fecundity. These differences are
accounted for by reproductive value. The introduction of reproductive value
permits the calculation of the fixation probability of a mutant in a neutral
evolutionary process in any graph-structured population for either the Moran
birth-death or death-birth process.
|
[
{
"created": "Wed, 17 Jul 2013 20:56:44 GMT",
"version": "v1"
}
] |
2014-07-30
|
[
[
"Maciejewski",
"Wes",
""
]
] |
Evolutionary graph theory has grown to be an area of intense study. Despite the amount of interest in the field, it seems to have grown separate from other subfields of population genetics and evolution. In the current work I introduce the concept of Fisher's (1930) reproductive value into the study of evolution on graphs. Reproductive value is a measure of the expected genetic contribution of an individual to a distant future generation. In a heterogeneous graph-structured population, differences in the number of connections among individuals translate into differences in the expected number of offspring, even if all individuals have the same fecundity. These differences are accounted for by reproductive value. The introduction of reproductive value permits the calculation of the fixation probability of a mutant in a neutral evolutionary process in any graph-structured population for either the Moran birth-death or death-birth process.
|
2107.12799
|
Mohammad Reza Dayer
|
Mohammad Reza Dayer
|
New Candidates for Furin Inhibition as Probable Treat for COVID-19:
Docking Output
| null | null | null | null |
q-bio.BM q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Furin is a serine protease that takes part in the processing and activation
of host cell pre-proteins. The enzyme also plays an important role in the
activation of several viruses, like the newly emerging SARS-CoV-2 virus that
causes COVID-19, a disease with a high rate of virulence and mortality.
Unlike viral enzymes, furin has a constant sequence and active-site
characteristics, and seems to be a better target for drug design for
COVID-19 treatment. Taking the furin active site as the receptor, and as
ligands some approved drugs from different classes (antivirals, antibiotics,
and anti-protozoal/anti-parasitic agents) with suspected beneficial effects
on COVID-19, we carried out docking experiments in HEX software to pick out
those capable of binding the furin active site with high affinity and to
suggest them as probable candidates for clinical trial assessment. Our
docking experiments show that saquinavir, nelfinavir, and atazanavir, with
cumulative inhibitory effects of 2.52, 2.16, and 2.13 respectively, seem to
be the best candidates for furin inhibition, even in severe cases of
COVID-19 as adjuvant therapy, while clarithromycin, niclosamide, and
erythromycin, with cumulative inhibitory indices of 1.97, 1.90, and 1.84
respectively and lower side effects than antiviral drugs, could be suggested
as prophylaxes for the first stage of COVID-19 as a promising treatment.
|
[
{
"created": "Tue, 27 Jul 2021 13:12:57 GMT",
"version": "v1"
}
] |
2021-07-28
|
[
[
"Dayer",
"Mohammad Reza",
""
]
] |
Furin is a serine protease that takes part in the processing and activation of host cell pre-proteins. The enzyme also plays an important role in the activation of several viruses, like the newly emerging SARS-CoV-2 virus that causes COVID-19, a disease with a high rate of virulence and mortality. Unlike viral enzymes, furin has a constant sequence and active-site characteristics, and seems to be a better target for drug design for COVID-19 treatment. Taking the furin active site as the receptor, and as ligands some approved drugs from different classes (antivirals, antibiotics, and anti-protozoal/anti-parasitic agents) with suspected beneficial effects on COVID-19, we carried out docking experiments in HEX software to pick out those capable of binding the furin active site with high affinity and to suggest them as probable candidates for clinical trial assessment. Our docking experiments show that saquinavir, nelfinavir, and atazanavir, with cumulative inhibitory effects of 2.52, 2.16, and 2.13 respectively, seem to be the best candidates for furin inhibition, even in severe cases of COVID-19 as adjuvant therapy, while clarithromycin, niclosamide, and erythromycin, with cumulative inhibitory indices of 1.97, 1.90, and 1.84 respectively and lower side effects than antiviral drugs, could be suggested as prophylaxes for the first stage of COVID-19 as a promising treatment.
|
2008.01810
|
Neta Maimon
|
Assaf Suberry, Neta B. Maimon and Zohar Eitan
|
Sad syntax? Tonal closure Affects Children's Perception of Emotional
Valence
|
44 pages, 7 figures, 1 table, 1 appendix
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Western music is largely governed by tonality, a quasi-syntactic system
regulating musical continuity and closure. Converging measures have
established the psychological reality of tonality as a cognitive schema
raising distinct expectancies for both adults and children. However, while
tonal expectations have been associated with emotion in adults, little is
known about the emotional effects of tonality in children. Here we examine
whether children associate levels of tonal closure with emotional valence,
whether such associations are age-dependent, and how they interact with
other musical dimensions. 52 children, aged 7 and 11, listened to chord
progressions implying closure, followed by a probe tone. Probes could
realize closure (tonic note), violate it mildly (unstable diatonic note), or
violate it extremely (out-of-key note). Three timbres (piano, guitar,
woodwinds) and three pitch heights were used for each closure level. Stimuli
were described to participants as exchanges between two children (chords,
probe). Participants chose one of two emojis, suggesting positive or
negative emotions, as representing the second child's response. A
significant effect of tonal closure was found, with no interactions with
age, instrument, or pitch height. Results suggest that tonality, a
non-referential cognitive schema, affects children's perception of emotion
in music early, robustly, and independently of basic musical dimensions.
|
[
{
"created": "Tue, 4 Aug 2020 20:13:53 GMT",
"version": "v1"
}
] |
2020-08-06
|
[
[
"Suberry",
"Assaf",
""
],
[
"Maimon",
"Neta B.",
""
],
[
"Eitan",
"Zohar",
""
]
] |
Western music is largely governed by tonality, a quasi-syntactic system regulating musical continuity and closure. Converging measures have established the psychological reality of tonality as a cognitive schema raising distinct expectancies for both adults and children. However, while tonal expectations have been associated with emotion in adults, little is known about the emotional effects of tonality in children. Here we examine whether children associate levels of tonal closure with emotional valence, whether such associations are age-dependent, and how they interact with other musical dimensions. 52 children, aged 7 and 11, listened to chord progressions implying closure, followed by a probe tone. Probes could realize closure (tonic note), violate it mildly (unstable diatonic note), or violate it extremely (out-of-key note). Three timbres (piano, guitar, woodwinds) and three pitch heights were used for each closure level. Stimuli were described to participants as exchanges between two children (chords, probe). Participants chose one of two emojis, suggesting positive or negative emotions, as representing the second child's response. A significant effect of tonal closure was found, with no interactions with age, instrument, or pitch height. Results suggest that tonality, a non-referential cognitive schema, affects children's perception of emotion in music early, robustly, and independently of basic musical dimensions.
|
2405.09953
|
Jessica Thompson
|
Jessica A.F. Thompson, Hannah Sheahan, Tsvetomira Dumbalska, Julian
Sandbrink, Manuela Piazza, Christopher Summerfield
|
Zero-shot counting with a dual-stream neural network model
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks have provided a computational framework for
understanding object recognition, grounded in the neurophysiology of the
primate ventral stream, but fail to account for how we process relational
aspects of a scene. For example, deep neural networks fail at problems that
involve enumerating the number of elements in an array, a problem that in
humans relies on parietal cortex. Here, we build a 'dual-stream' neural network
model which, equipped with both dorsal and ventral streams, can generalise its
counting ability to wholly novel items ('zero-shot' counting). In doing so, it
forms spatial response fields and lognormal number codes that resemble those
observed in macaque posterior parietal cortex. We use the dual-stream network
to make successful predictions about behavioural studies of the human gaze
during similar counting tasks.
|
[
{
"created": "Thu, 16 May 2024 09:56:37 GMT",
"version": "v1"
}
] |
2024-05-17
|
[
[
"Thompson",
"Jessica A. F.",
""
],
[
"Sheahan",
"Hannah",
""
],
[
"Dumbalska",
"Tsvetomira",
""
],
[
"Sandbrink",
"Julian",
""
],
[
"Piazza",
"Manuela",
""
],
[
"Summerfield",
"Christopher",
""
]
] |
Deep neural networks have provided a computational framework for understanding object recognition, grounded in the neurophysiology of the primate ventral stream, but fail to account for how we process relational aspects of a scene. For example, deep neural networks fail at problems that involve enumerating the number of elements in an array, a problem that in humans relies on parietal cortex. Here, we build a 'dual-stream' neural network model which, equipped with both dorsal and ventral streams, can generalise its counting ability to wholly novel items ('zero-shot' counting). In doing so, it forms spatial response fields and lognormal number codes that resemble those observed in macaque posterior parietal cortex. We use the dual-stream network to make successful predictions about behavioural studies of the human gaze during similar counting tasks.
|
2003.05694
|
Maryam Al Shehhi Dr
|
Maryam R. Al Shehhi, David Nelson, Rashid R Alkhori, Rashid Alshihi,
and Kourosh Salehi-Ashtiani
|
Characterizing Algal blooms in a shallow and a deep channel over a
decade (2008-2018)
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Outbreaks of algal blooms occur in both shallow and deep water bodies. To
compare the characteristics of algal blooms in shallow and deep water, we
consider the Arabian Gulf and the Sea of Oman as a case study. While the
Arabian Gulf is a shallow region dominated by advective features, the Sea of
Oman is a deep channel dominated by numerous eddies. Both regions have
rarely been studied due to a lack of available data, preventing the
characterization of phenomena such as algal blooms. Nonetheless, a recent
unique and comprehensive dataset of the frequent algal blooms collected over
the last decade in the Arabian Gulf and the Sea of Oman is utilized in this
study to analyze the spatio-temporal variability of the blooms. These data
are also used to characterize the algal bloom species and to analyze the
relationship between water properties and algal blooms in shallow and deep
waters. There is a general decreasing trend in algal bloom events from 2010
to 2018 in the Arabian Gulf, while in the Sea of Oman there is an increasing
trend. We reveal a clear seasonality, with the highest frequency of algal
blooms during winter and spring. We have noticed that algal blooms follow an
annual cycle, with initial blooms occurring in November-December and
December-January in the Arabian Gulf and the Sea of Oman, respectively. The
analysis further demonstrates that algal blooms grow best at salinity
levels of 39-40 psu/37-37.5 psu, a temperature of 23-24 °C, and a pH of 8 in
the Arabian Gulf/Sea of Oman. Findings of this study provide insight into
the relationship between water properties and algal bloom frequency, and a
basis for future research into the drivers behind these observed
spatio-temporal trends.
|
[
{
"created": "Thu, 12 Mar 2020 10:30:41 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Mar 2020 14:12:33 GMT",
"version": "v2"
}
] |
2020-03-17
|
[
[
"Shehhi",
"Maryam R. Al",
""
],
[
"Nelson",
"David",
""
],
[
"Alkhori",
"Rashid R",
""
],
[
"Alshihi",
"Rashid",
""
],
[
"Salehi-Ashtiani",
"Kourosh",
""
]
] |
Outbreaks of algal blooms occur in both shallow and deep water bodies. To compare the characteristics of algal blooms in shallow and deep water, we consider the Arabian Gulf and the Sea of Oman as a case study. While the Arabian Gulf is a shallow region dominated by advective features, the Sea of Oman is a deep channel dominated by numerous eddies. Both regions have rarely been studied due to a lack of available data, preventing the characterization of phenomena such as algal blooms. Nonetheless, a recent unique and comprehensive dataset of the frequent algal blooms collected over the last decade in the Arabian Gulf and the Sea of Oman is utilized in this study to analyze the spatio-temporal variability of the blooms. These data are also used to characterize the algal bloom species and to analyze the relationship between water properties and algal blooms in shallow and deep waters. There is a general decreasing trend in algal bloom events from 2010 to 2018 in the Arabian Gulf, while in the Sea of Oman there is an increasing trend. We reveal a clear seasonality, with the highest frequency of algal blooms during winter and spring. We have noticed that algal blooms follow an annual cycle, with initial blooms occurring in November-December and December-January in the Arabian Gulf and the Sea of Oman, respectively. The analysis further demonstrates that algal blooms grow best at salinity levels of 39-40 psu/37-37.5 psu, a temperature of 23-24 °C, and a pH of 8 in the Arabian Gulf/Sea of Oman. Findings of this study provide insight into the relationship between water properties and algal bloom frequency, and a basis for future research into the drivers behind these observed spatio-temporal trends.
|
1402.0451
|
Adriano Barra Dr.
|
Elena Agliari and Elena Biselli and Adele De Ninno and Giovanna
Schiavoni and Lucia Gabriele and Anna Gerardino and Fabrizio Mattei and
Adriano Barra and Luca Businaro
|
Cancer-driven dynamics of immune cells in a microfluidic environment
| null |
Nature Scientific Reports 4, 6639 (2014)
|
10.1038/srep06639
|
Roma01.Math
|
q-bio.CB cond-mat.dis-nn physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scope of the present work is to frame into a rigorous, quantitative scaffold
- stemming from stochastic process theory - two sets of experiments designed to
infer the spontaneous organization of leukocytes against cancer cells, namely
mice splenocytes vs. B16 mouse tumor cells, and embedded in an "ad hoc"
microfluidic environment developed on a LabOnChip technology. In the former,
splenocytes from knocked out (KO) mice engineered to silence the transcription
factor IRF-8, crucial for the development and function of several immune
populations, were used. In this case lymphocytes and cancer cells exhibited a
poor reciprocal exchange, resulting in the inability of coordinating or
mounting an effective immune response against melanoma. In the second class of
tests, wild type (WT) splenocytes were able to interact with and to coordinate
a response against the tumor cells through physical interaction. The
environment where cells moved was built of two different chambers,
containing respectively melanoma cells and splenocytes, connected by capillary
migration channels allowing leucocytes to migrate from their chamber toward the
melanoma one. We collected and analyzed data on the motility of the cells and
found that the first ensemble of IRF-8 KO cells performed pure uncorrelated
random walks, while WT splenocytes were able to make singular drifted random
walks that, averaged over the ensemble of cells, collapsed on a straight
ballistic motion for the system as a whole. At a finer level of investigation,
we found that IRF-8 KO splenocytes moved rather uniformly since their step
lengths were exponentially distributed, while the WT counterpart displayed a
qualitatively broader motion as their step lengths along the direction of the
melanoma were log-normally distributed.
|
[
{
"created": "Mon, 3 Feb 2014 18:11:20 GMT",
"version": "v1"
}
] |
2016-02-02
|
[
[
"Agliari",
"Elena",
""
],
[
"Biselli",
"Elena",
""
],
[
"De Ninno",
"Adele",
""
],
[
"Schiavoni",
"Giovanna",
""
],
[
"Gabriele",
"Lucia",
""
],
[
"Gerardino",
"Anna",
""
],
[
"Mattei",
"Fabrizio",
""
],
[
"Barra",
"Adriano",
""
],
[
"Businaro",
"Luca",
""
]
] |
Scope of the present work is to frame into a rigorous, quantitative scaffold - stemming from stochastic process theory - two sets of experiments designed to infer the spontaneous organization of leukocytes against cancer cells, namely mice splenocytes vs. B16 mouse tumor cells, and embedded in an "ad hoc" microfluidic environment developed on a LabOnChip technology. In the former, splenocytes from knocked out (KO) mice engineered to silence the transcription factor IRF-8, crucial for the development and function of several immune populations, were used. In this case lymphocytes and cancer cells exhibited a poor reciprocal exchange, resulting in the inability of coordinating or mounting an effective immune response against melanoma. In the second class of tests, wild type (WT) splenocytes were able to interact with and to coordinate a response against the tumor cells through physical interaction. The environment where cells moved was built of two different chambers, containing respectively melanoma cells and splenocytes, connected by capillary migration channels allowing leucocytes to migrate from their chamber toward the melanoma one. We collected and analyzed data on the motility of the cells and found that the first ensemble of IRF-8 KO cells performed pure uncorrelated random walks, while WT splenocytes were able to make singular drifted random walks that, averaged over the ensemble of cells, collapsed on a straight ballistic motion for the system as a whole. At a finer level of investigation, we found that IRF-8 KO splenocytes moved rather uniformly since their step lengths were exponentially distributed, while the WT counterpart displayed a qualitatively broader motion as their step lengths along the direction of the melanoma were log-normally distributed.
|
0910.1830
|
Navodit Misra
|
Navodit Misra, Guy Blelloch, R. Ravi and Russell Schwartz
|
Generalized Buneman pruning for inferring the most parsimonious
multi-state phylogeny
|
15 pages
| null |
10.1007/978-3-642-12683-3_24
| null |
q-bio.PE q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate reconstruction of phylogenies remains a key challenge in
evolutionary biology. Most biologically plausible formulations of the problem
are formally NP-hard, with no known efficient solution. The standard in
practice is fast heuristic methods that are empirically known to work very
well in general, but can yield results arbitrarily far from optimal. Practical
exact methods, which yield exponential worst-case running times but generally
much better times in practice, provide an important alternative. We report
progress in this direction by introducing a provably optimal method for the
weighted multi-state maximum parsimony phylogeny problem. The method is based
on generalizing the notion of the Buneman graph, a construction key to
efficient exact methods for binary sequences, so as to apply to sequences with
arbitrary finite numbers of states with arbitrary state transition weights. We
implement an integer linear programming (ILP) method for the multi-state
problem using this generalized Buneman graph and demonstrate that the resulting
method is able to solve data sets that are intractable by prior exact methods
in run times comparable with popular heuristics. Our work provides the first
method for provably optimal maximum parsimony phylogeny inference that is
practical for multi-state data sets of more than a few characters.
|
[
{
"created": "Fri, 9 Oct 2009 19:59:00 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Oct 2009 20:11:16 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Apr 2010 01:46:04 GMT",
"version": "v3"
}
] |
2015-05-14
|
[
[
"Misra",
"Navodit",
""
],
[
"Blelloch",
"Guy",
""
],
[
"Ravi",
"R.",
""
],
[
"Schwartz",
"Russell",
""
]
] |
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
|
q-bio/0601038
|
Michele Bezzi
|
Michele Bezzi
|
Quantifying the information transmitted in a single stimulus
|
13 pages, 4 figures
| null | null | null |
q-bio.NC q-bio.QM
| null |
Shannon mutual information provides a measure of how much information is, on
average, contained in a set of neural activities about a set of stimuli. It has
been extensively used to study neural coding in different brain areas. To apply
a similar approach to investigate single stimulus encoding, we need to
introduce a quantity specific for a single stimulus. This quantity has been
defined in the literature by four different measures, but none of them satisfies
the intuitive properties (non-negativity, additivity) that characterize
mutual information. We present here a detailed analysis of the different
meanings and properties of these four definitions. We show that all these
measures satisfy, at least, a weaker additivity condition, i.e. limited to the
response set. This allows us to use them for analysing correlated coding, as we
illustrate in a toy-example from hippocampal place cells.
|
[
{
"created": "Mon, 23 Jan 2006 13:51:28 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Bezzi",
"Michele",
""
]
] |
Shannon mutual information provides a measure of how much information is, on average, contained in a set of neural activities about a set of stimuli. It has been extensively used to study neural coding in different brain areas. To apply a similar approach to investigate single stimulus encoding, we need to introduce a quantity specific for a single stimulus. This quantity has been defined in the literature by four different measures, but none of them satisfies the intuitive properties (non-negativity, additivity) that characterize mutual information. We present here a detailed analysis of the different meanings and properties of these four definitions. We show that all these measures satisfy, at least, a weaker additivity condition, i.e. limited to the response set. This allows us to use them for analysing correlated coding, as we illustrate in a toy-example from hippocampal place cells.
|
2402.10387
|
Peter Eckmann
|
Peter Eckmann, Dongxia Wu, Germano Heinzelmann, Michael K Gilson, Rose
Yu
|
MFBind: a Multi-Fidelity Approach for Evaluating Drug Compounds in
Practical Generative Modeling
|
9 pages, 4 figures
| null | null | null |
q-bio.BM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current generative models for drug discovery primarily use molecular docking
to evaluate the quality of generated compounds. However, such models are often
not useful in practice because even compounds with high docking scores do not
consistently show experimental activity. More accurate methods for activity
prediction exist, such as molecular dynamics based binding free energy
calculations, but they are too computationally expensive to use in a generative
model. We propose a multi-fidelity approach, Multi-Fidelity Bind (MFBind), to
achieve the optimal trade-off between accuracy and computational cost. MFBind
integrates docking and binding free energy simulators to train a multi-fidelity
deep surrogate model with active learning. Our deep surrogate model utilizes a
pretraining technique and linear prediction heads to efficiently fit small
amounts of high-fidelity data. We perform extensive experiments and show that
MFBind (1) outperforms other state-of-the-art single and multi-fidelity
baselines in surrogate modeling, and (2) boosts the performance of generative
models with markedly higher quality compounds.
|
[
{
"created": "Fri, 16 Feb 2024 00:48:20 GMT",
"version": "v1"
}
] |
2024-02-19
|
[
[
"Eckmann",
"Peter",
""
],
[
"Wu",
"Dongxia",
""
],
[
"Heinzelmann",
"Germano",
""
],
[
"Gilson",
"Michael K",
""
],
[
"Yu",
"Rose",
""
]
] |
Current generative models for drug discovery primarily use molecular docking to evaluate the quality of generated compounds. However, such models are often not useful in practice because even compounds with high docking scores do not consistently show experimental activity. More accurate methods for activity prediction exist, such as molecular dynamics based binding free energy calculations, but they are too computationally expensive to use in a generative model. We propose a multi-fidelity approach, Multi-Fidelity Bind (MFBind), to achieve the optimal trade-off between accuracy and computational cost. MFBind integrates docking and binding free energy simulators to train a multi-fidelity deep surrogate model with active learning. Our deep surrogate model utilizes a pretraining technique and linear prediction heads to efficiently fit small amounts of high-fidelity data. We perform extensive experiments and show that MFBind (1) outperforms other state-of-the-art single and multi-fidelity baselines in surrogate modeling, and (2) boosts the performance of generative models with markedly higher quality compounds.
|
1906.01224
|
Tomokazu Konishi
|
Tomokazu Konishi, Haruna Ohrui
|
A distribution-dependent analysis of open-field test movies
|
30 pages, 3 Figures, including supplementary data
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although the open-field test has been widely used, its reliability and
compatibility are frequently questioned. Many indicator parameters have been
introduced for this test, but they did not take data distributions into
consideration. This oversight may have caused the problems mentioned above.
Here, an exploratory approach for the analysis of video records of tests of
elderly mice was taken that described the distributions using the least number
of parameters. First, the locomotor activity of the animals was separated into
two clusters: dash and search. The accelerations found in each of the clusters
were distributed normally. The speed and the duration of the clusters exhibited
an exponential distribution. Although the exponential model includes a single
parameter, an additional parameter that indicated instability of the behaviour
was required in many cases for fitting to the data. As this instability
parameter exhibited an inverse correlation with speed, the function of the
brain that maintained stability would be required for a better performance.
According to the distributions, the travel distance, which has been regarded as
an important indicator, was not a robust estimator of the animals' condition.
|
[
{
"created": "Tue, 4 Jun 2019 06:49:49 GMT",
"version": "v1"
}
] |
2019-06-05
|
[
[
"Konishi",
"Tomokazu",
""
],
[
"Ohrui",
"Haruna",
""
]
] |
Although the open-field test has been widely used, its reliability and compatibility are frequently questioned. Many indicator parameters have been introduced for this test, but they did not take data distributions into consideration. This oversight may have caused the problems mentioned above. Here, an exploratory approach for the analysis of video records of tests of elderly mice was taken that described the distributions using the least number of parameters. First, the locomotor activity of the animals was separated into two clusters: dash and search. The accelerations found in each of the clusters were distributed normally. The speed and the duration of the clusters exhibited an exponential distribution. Although the exponential model includes a single parameter, an additional parameter that indicated instability of the behaviour was required in many cases for fitting to the data. As this instability parameter exhibited an inverse correlation with speed, the function of the brain that maintained stability would be required for a better performance. According to the distributions, the travel distance, which has been regarded as an important indicator, was not a robust estimator of the animals' condition.
|
0704.2454
|
Vahid Rezania
|
Vahid Rezania, Jack Tuszynski, Michael Hendzel
|
Modeling transcription factor binding events to DNA using a random
walker/jumper representation on a 1D/2D lattice with different affinity sites
|
24 pages, 9 figures
|
Physical Biology, 4, 256-267 (2007)
|
10.1088/1478-3975/4/4/003
| null |
q-bio.QM q-bio.BM
| null |
Surviving in a diverse environment requires corresponding organism responses.
At the cellular level, such adjustment relies on the transcription factors
(TFs) which must rapidly find their target sequences amidst a vast amount of
non-relevant sequences on DNA molecules. Whether these transcription factors
locate their target sites through a 1D or 3D pathway is still a matter of
speculation. It has been suggested that the optimum search time is when the
protein equally shares its search time between 1D and 3D diffusions. In this
paper, we study the above problem using a Monte Carlo simulation by considering
a very simple physical model. A 1D strip, representing a DNA, with a number of
low affinity sites, corresponding to non-target sites, and high affinity sites,
corresponding to target sites, is considered and later extended to a 2D strip.
We study the 1D and 3D exploration pathways, and combinations of the two modes
by considering three different types of molecules: a walker that randomly walks
along the strip with no dissociation; a jumper that represents dissociation and
then re-association of a TF with the strip at a later time at a distant site; and
a hopper that is similar to the jumper but it dissociates and then
re-associates at a faster rate than the jumper. We analyze the final
probability distribution of molecules for each case and find that TFs can
locate their targets fast enough even if they spend 15% of their search time
diffusing freely in the solution. This indeed agrees with recent experimental
results obtained by Elf et al. 2007 and is in contrast with theoretical
expectations.
|
[
{
"created": "Thu, 19 Apr 2007 03:20:02 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Aug 2007 17:44:57 GMT",
"version": "v2"
}
] |
2009-11-13
|
[
[
"Rezania",
"Vahid",
""
],
[
"Tuszynski",
"Jack",
""
],
[
"Hendzel",
"Michael",
""
]
] |
Surviving in a diverse environment requires corresponding organism responses. At the cellular level, such adjustment relies on the transcription factors (TFs) which must rapidly find their target sequences amidst a vast amount of non-relevant sequences on DNA molecules. Whether these transcription factors locate their target sites through a 1D or 3D pathway is still a matter of speculation. It has been suggested that the optimum search time is when the protein equally shares its search time between 1D and 3D diffusions. In this paper, we study the above problem using a Monte Carlo simulation by considering a very simple physical model. A 1D strip, representing a DNA, with a number of low affinity sites, corresponding to non-target sites, and high affinity sites, corresponding to target sites, is considered and later extended to a 2D strip. We study the 1D and 3D exploration pathways, and combinations of the two modes by considering three different types of molecules: a walker that randomly walks along the strip with no dissociation; a jumper that represents dissociation and then re-association of a TF with the strip at a later time at a distant site; and a hopper that is similar to the jumper but it dissociates and then re-associates at a faster rate than the jumper. We analyze the final probability distribution of molecules for each case and find that TFs can locate their targets fast enough even if they spend 15% of their search time diffusing freely in the solution. This indeed agrees with recent experimental results obtained by Elf et al. 2007 and is in contrast with theoretical expectations.
|
2406.09094
|
Luis Aniello La Rocca
|
Luis A. La Rocca, Konrad Gerischer, Anton Bovier and Peter M. Krawitz
|
Refining the drift barrier hypothesis: a role of recessive gene count
and an inhomogeneous Muller's ratchet
|
21 pages, 4 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The drift-barrier hypothesis states that random genetic drift constrains the
refinement of a phenotype under natural selection. The influence of effective
population size and the genome-wide deleterious mutation rate were studied
theoretically, and an inverse relationship between mutation rate and genome
size has been observed for many species. However, the effect of the recessive
gene count, an important feature of the genomic architecture, is unknown. In a
Wright-Fisher model, we studied the mutation burden for a growing number $N$ of
completely recessive and lethal disease genes. Diploid individuals are
represented with a binary $2 \times N$ matrix denoting wild-type and mutated
alleles. Analytic results for specific cases were complemented by simulations
across a broad parameter regime for gene count, mutation and recombination
rates. Simulations revealed transitions to higher mutation burden and
prevalence within a few generations that were linked to the extinction of the
wild-type haplotype (least-loaded class). This metastability, that is, phases
of quasi-equilibrium with intermittent transitions, persists over $100\,000$
generations. The drift-barrier hypothesis is confirmed by a high mutation
burden resulting in population collapse. Simulations showed the emergence of
mutually exclusive haplotypes for a mutation rate above 0.02 lethal equivalents
per generation for a genomic architecture and population size representing
complex multicellular organisms such as humans. In such systems, recombination
proves pivotal, preventing population collapse and maintaining a mutation
burden below 10. This study advances our understanding of gene pool stability,
and particularly the role of the number of recessive disorders. Insights into
Muller's ratchet dynamics are provided, and the essential role of recombination
in curbing mutation burden and stabilizing the gene pool is demonstrated.
|
[
{
"created": "Thu, 13 Jun 2024 13:22:41 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 08:58:03 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Jul 2024 09:37:34 GMT",
"version": "v3"
},
{
"created": "Wed, 24 Jul 2024 10:23:03 GMT",
"version": "v4"
}
] |
2024-07-25
|
[
[
"La Rocca",
"Luis A.",
""
],
[
"Gerischer",
"Konrad",
""
],
[
"Bovier",
"Anton",
""
],
[
"Krawitz",
"Peter M.",
""
]
] |
The drift-barrier hypothesis states that random genetic drift constrains the refinement of a phenotype under natural selection. The influence of effective population size and the genome-wide deleterious mutation rate were studied theoretically, and an inverse relationship between mutation rate and genome size has been observed for many species. However, the effect of the recessive gene count, an important feature of the genomic architecture, is unknown. In a Wright-Fisher model, we studied the mutation burden for a growing number $N$ of completely recessive and lethal disease genes. Diploid individuals are represented with a binary $2 \times N$ matrix denoting wild-type and mutated alleles. Analytic results for specific cases were complemented by simulations across a broad parameter regime for gene count, mutation and recombination rates. Simulations revealed transitions to higher mutation burden and prevalence within a few generations that were linked to the extinction of the wild-type haplotype (least-loaded class). This metastability, that is, phases of quasi-equilibrium with intermittent transitions, persists over $100\,000$ generations. The drift-barrier hypothesis is confirmed by a high mutation burden resulting in population collapse. Simulations showed the emergence of mutually exclusive haplotypes for a mutation rate above 0.02 lethal equivalents per generation for a genomic architecture and population size representing complex multicellular organisms such as humans. In such systems, recombination proves pivotal, preventing population collapse and maintaining a mutation burden below 10. This study advances our understanding of gene pool stability, and particularly the role of the number of recessive disorders. Insights into Muller's ratchet dynamics are provided, and the essential role of recombination in curbing mutation burden and stabilizing the gene pool is demonstrated.
|
2012.06848
|
Arnaud Liehrmann
|
Arnaud Liehrmann, Guillem Rigaill and Toby Dylan Hocking
|
Increased peak detection accuracy in over-dispersed ChIP-seq data with
supervised segmentation models
|
20 pages, 8 figures; updated broken citations and references
| null | null | null |
q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivation: Histone modification constitutes a basic mechanism for the
genetic regulation of gene expression. In the early 2000s, a powerful technique
emerged that couples chromatin immunoprecipitation with high-throughput
sequencing (ChIP-seq). This technique provides a direct survey of the DNA
regions associated with these modifications. In order to realize the full
potential of this technique, increasingly sophisticated statistical algorithms
have been developed or adapted to analyze the massive amount of data it
generates. Many of these algorithms were built around natural assumptions such
as the Poisson one to model the noise in the count data. In this work we start
from these natural assumptions and show that it is possible to improve upon
them. Results: The results of our comparisons on seven reference datasets of
histone modifications (H3K36me3 and H3K4me3) suggest that natural assumptions
are not always realistic under application conditions. We show that the
unconstrained multiple changepoint detection model, with alternative noise
assumptions and a suitable setup, reduces the over-dispersion exhibited by
count data and turns out to detect peaks more accurately than algorithms which
rely on these natural assumptions.
|
[
{
"created": "Sat, 12 Dec 2020 16:03:27 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Dec 2020 12:34:48 GMT",
"version": "v2"
}
] |
2020-12-16
|
[
[
"Liehrmann",
"Arnaud",
""
],
[
"Rigaill",
"Guillem",
""
],
[
"Hocking",
"Toby Dylan",
""
]
] |
Motivation: Histone modification constitutes a basic mechanism for the genetic regulation of gene expression. In the early 2000s, a powerful technique emerged that couples chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq). This technique provides a direct survey of the DNA regions associated with these modifications. In order to realize the full potential of this technique, increasingly sophisticated statistical algorithms have been developed or adapted to analyze the massive amount of data it generates. Many of these algorithms were built around natural assumptions such as the Poisson one to model the noise in the count data. In this work we start from these natural assumptions and show that it is possible to improve upon them. Results: The results of our comparisons on seven reference datasets of histone modifications (H3K36me3 and H3K4me3) suggest that natural assumptions are not always realistic under application conditions. We show that the unconstrained multiple changepoint detection model, with alternative noise assumptions and a suitable setup, reduces the over-dispersion exhibited by count data and turns out to detect peaks more accurately than algorithms which rely on these natural assumptions.
|
2005.12446
|
Massimo Marchiori
|
Massimo Marchiori
|
COVID-19 and the Social Distancing Paradox: dangers and solutions
|
8 pages with 4 figures
| null | null | null |
q-bio.PE eess.SP physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Without proven effective treatments and vaccines, Social Distancing
is the key protection factor against COVID-19. Social distancing alone should
have been enough to protect against the virus, yet things have gone very
differently, with a big mismatch between theory and practice. What are the
reasons? A big problem is that there is no actual social distancing data, and
the corresponding people behavior in a pandemic is unknown. We collect the
world-first dataset on social distancing during the COVID-19 outbreak, so to
see for the first time how people really implement social distancing, identify
dangers of the current situation, and find solutions against this and future
pandemics.
Methods: Using a sensor-based social distancing belt we collected social
distance data from people in Italy for over two months during the most critical
COVID-19 outbreak. Additionally, we investigated if and how wearing various
Personal Protection Equipment, like masks, influences social distancing.
Results: Without masks, people adopt a counter-intuitively dangerous
strategy, a paradox that could explain the relative lack of effectiveness of
social distancing. Using masks radically changes the situation, breaking the
paradoxical behavior and leading to a safe social distance behavior. In
shortage of masks, DIY (Do It Yourself) masks can also be used: even without
filtering protection, they provide social distancing protection. Goggles should
be recommended for general use, as they give an extra powerful safety boost.
Generic Public Health policies and media campaigns do not work well on social
distancing: explicit focus on the behavioral problems of necessary mobility are
needed.
|
[
{
"created": "Tue, 26 May 2020 00:01:53 GMT",
"version": "v1"
}
] |
2020-05-27
|
[
[
"Marchiori",
"Massimo",
""
]
] |
Background: Without proven effective treatments and vaccines, Social Distancing is the key protection factor against COVID-19. Social distancing alone should have been enough to protect against the virus, yet things have gone very differently, with a big mismatch between theory and practice. What are the reasons? A big problem is that there is no actual social distancing data, and the corresponding people behavior in a pandemic is unknown. We collect the world-first dataset on social distancing during the COVID-19 outbreak, so to see for the first time how people really implement social distancing, identify dangers of the current situation, and find solutions against this and future pandemics. Methods: Using a sensor-based social distancing belt we collected social distance data from people in Italy for over two months during the most critical COVID-19 outbreak. Additionally, we investigated if and how wearing various Personal Protection Equipment, like masks, influences social distancing. Results: Without masks, people adopt a counter-intuitively dangerous strategy, a paradox that could explain the relative lack of effectiveness of social distancing. Using masks radically changes the situation, breaking the paradoxical behavior and leading to a safe social distance behavior. In shortage of masks, DIY (Do It Yourself) masks can also be used: even without filtering protection, they provide social distancing protection. Goggles should be recommended for general use, as they give an extra powerful safety boost. Generic Public Health policies and media campaigns do not work well on social distancing: explicit focus on the behavioral problems of necessary mobility are needed.
|
0811.2837
|
Tom Chou
|
Pak-Wing Fok, Chin-Lin Guo, Tom Chou
|
Charge transport-mediated recruitment of DNA repair enzymes
|
9 Figures, Accepted to J. Chem. Phys
|
Journal of Chemical Physics, 129, 235101, (2008)
|
10.1063/1.3026735
| null |
q-bio.BM q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Damaged or mismatched bases in DNA can be repaired by Base Excision Repair
(BER) enzymes that replace the defective base. Although the detailed molecular
structures of many BER enzymes are known, how they colocalize to lesions
remains unclear. One hypothesis involves charge transport (CT) along DNA
[Yavin, {\it et al.}, PNAS, {\bf 102}, 3546, (2005)]. In this CT mechanism,
electrons are released by recently adsorbed BER enzymes and travel along the
DNA. The electrons can scatter (by heterogeneities along the DNA) back to the
enzyme, destabilizing and knocking it off the DNA, or, they can be absorbed by
nearby lesions and guanine radicals. We develop a stochastic model to describe
the electron dynamics, and compute probabilities of electron capture by guanine
radicals and repair enzymes. We also calculate first passage times of electron
return, and ensemble-average these results over guanine radical distributions.
Our statistical results provide the rules that enable us to perform
implicit-electron Monte-Carlo simulations of repair enzyme binding and
redistribution near lesions. When lesions are electron absorbing, we show that
the CT mechanism suppresses wasteful buildup of enzymes along intact portions
of the DNA, maximizing enzyme concentration near lesions.
|
[
{
"created": "Tue, 18 Nov 2008 04:38:18 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Fok",
"Pak-Wing",
""
],
[
"Guo",
"Chin-Lin",
""
],
[
"Chou",
"Tom",
""
]
] |
Damaged or mismatched bases in DNA can be repaired by Base Excision Repair (BER) enzymes that replace the defective base. Although the detailed molecular structures of many BER enzymes are known, how they colocalize to lesions remains unclear. One hypothesis involves charge transport (CT) along DNA [Yavin, {\it et al.}, PNAS, {\bf 102}, 3546, (2005)]. In this CT mechanism, electrons are released by recently adsorbed BER enzymes and travel along the DNA. The electrons can scatter (by heterogeneities along the DNA) back to the enzyme, destabilizing and knocking it off the DNA, or, they can be absorbed by nearby lesions and guanine radicals. We develop a stochastic model to describe the electron dynamics, and compute probabilities of electron capture by guanine radicals and repair enzymes. We also calculate first passage times of electron return, and ensemble-average these results over guanine radical distributions. Our statistical results provide the rules that enable us to perform implicit-electron Monte-Carlo simulations of repair enzyme binding and redistribution near lesions. When lesions are electron absorbing, we show that the CT mechanism suppresses wasteful buildup of enzymes along intact portions of the DNA, maximizing enzyme concentration near lesions.
|
1304.5836
|
Bin Ao
|
Bin Ao, Sheng Zhang, Caiyong Ye, Lei Chang, Guangming Zhou, Lei Yang
|
Oscillation in microRNA Feedback Loop
|
There were some mistakes in the analysis in this paper's first
version submitted on 22 Apr 2013. We corrected them in the second version
submitted on 23 Sep 2013. So please delete the "v1" version of this paper,
Thank you!
| null | null | null |
q-bio.MN nlin.PS
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The dynamic behaviors of microRNA and mRNA under external stress are studied
with biological experiments and mathematical models. In this study, we developed
a mathematical model to describe the biological phenomenon and for the first
time reported that, as responses to external stress, the expression levels of
microRNA and mRNA show sustained oscillation. The period of the oscillation is
much shorter than that of several reported transcriptional-regulation negative
feedback loops.
|
[
{
"created": "Mon, 22 Apr 2013 05:39:26 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Sep 2013 09:36:46 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Dec 2013 07:52:20 GMT",
"version": "v3"
}
] |
2013-12-17
|
[
[
"Ao",
"Bin",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Ye",
"Caiyong",
""
],
[
"Chang",
"Lei",
""
],
[
"Zhou",
"Guangming",
""
],
[
"Yang",
"Lei",
""
]
] |
The dynamic behaviors of microRNA and mRNA under external stress are studied with biological experiments and mathematical models. In this study, we developed a mathematical model to describe the biological phenomenon and for the first time reported that, as responses to external stress, the expression levels of microRNA and mRNA show sustained oscillation. The period of the oscillation is much shorter than that of several reported transcriptional-regulation negative feedback loops.
|
2108.01982
|
Farzad Fatehi
|
Farzad Fatehi, Richard J. Bingham, Eric C. Dykeman, Peter G. Stockley,
and Reidun Twarock
|
An age-structured model of hepatitis B viral infection highlights the
potential of different therapeutic strategies
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hepatitis B virus is a global health threat, and its elimination by 2030 has
been prioritised by the World Health Organisation. Here we present an
age-structured model for the immune response to an HBV infection, which takes
into account contributions from both cell-mediated and humoral immunity. The
model has been validated using published patient data recorded during acute
infection. It has been adapted to the scenarios of chronic infection, clearance
of infection, and flare-ups via variation of the immune response parameters.
The impacts of immune response exhaustion and non-infectious subviral particles
on the immune response dynamics are analysed. A comparison of different
treatment options in the context of this model reveals that drugs targeting
aspects of the viral life cycle are more effective than exhaustion therapy, a
form of therapy mitigating immune response exhaustion. Our results suggest that
antiviral treatment is best started when viral load is declining rather than in
a flare-up. The model suggests that a fast antibody production rate always leads
to viral clearance, highlighting the promise of antibody therapies currently in
clinical trials.
|
[
{
"created": "Wed, 4 Aug 2021 11:45:12 GMT",
"version": "v1"
}
] |
2021-08-05
|
[
[
"Fatehi",
"Farzad",
""
],
[
"Bingham",
"Richard J.",
""
],
[
"Dykeman",
"Eric C.",
""
],
[
"Stockley",
"Peter G.",
""
],
[
"Twarock",
"Reidun",
""
]
] |
Hepatitis B virus is a global health threat, and its elimination by 2030 has been prioritised by the World Health Organisation. Here we present an age-structured model for the immune response to an HBV infection, which takes into account contributions from both cell-mediated and humoral immunity. The model has been validated using published patient data recorded during acute infection. It has been adapted to the scenarios of chronic infection, clearance of infection, and flare-ups via variation of the immune response parameters. The impacts of immune response exhaustion and non-infectious subviral particles on the immune response dynamics are analysed. A comparison of different treatment options in the context of this model reveals that drugs targeting aspects of the viral life cycle are more effective than exhaustion therapy, a form of therapy mitigating immune response exhaustion. Our results suggest that antiviral treatment is best started when viral load is declining rather than in a flare-up. The model suggests that a fast antibody production rate always leads to viral clearance, highlighting the promise of antibody therapies currently in clinical trials.
|