id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0411012 | Rajesh Karmakar | Rajesh Karmakar and Indrani Bose | Graded and Binary Responses in Stochastic Gene Expression | to be published in Physical Biology | Phys. Biol. 1 (2004) 197-204 | 10.1088/1478-3967/1/4/001 | null | q-bio.OT cond-mat.stat-mech | null | Recently, several theoretical and experimental studies have been undertaken
to probe the effect of stochasticity on gene expression (GE). In experiments,
the GE response to an inducing signal in a cell, measured by the amount of
mRNAs/proteins synthesized, is found to be either graded or binary. The latter
type of response gives rise to a bimodal distribution in protein levels in an
ensemble of cells. One possible origin of binary response is cellular
bistability achieved through positive feedback or autoregulation. In this
paper, we study a simple, stochastic model of GE and show that the origin of
binary response lies exclusively in stochasticity. The transitions between the
active and inactive states of the gene are random in nature. Graded and binary
responses occur in the model depending on the relative stability of the
activated and deactivated gene states with respect to that of
mRNAs/proteins. The theoretical results on binary response provide a good
description of the ``all-or-none'' phenomenon observed in a eukaryotic system.
| [
{
"created": "Wed, 3 Nov 2004 10:16:02 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Karmakar",
"Rajesh",
""
],
[
"Bose",
"Indrani",
""
]
] | Recently, several theoretical and experimental studies have been undertaken to probe the effect of stochasticity on gene expression (GE). In experiments, the GE response to an inducing signal in a cell, measured by the amount of mRNAs/proteins synthesized, is found to be either graded or binary. The latter type of response gives rise to a bimodal distribution in protein levels in an ensemble of cells. One possible origin of binary response is cellular bistability achieved through positive feedback or autoregulation. In this paper, we study a simple, stochastic model of GE and show that the origin of binary response lies exclusively in stochasticity. The transitions between the active and inactive states of the gene are random in nature. Graded and binary responses occur in the model depending on the relative stability of the activated and deactivated gene states with respect to that of mRNAs/proteins. The theoretical results on binary response provide a good description of the ``all-or-none'' phenomenon observed in a eukaryotic system. |
1712.06353 | Christoph Metzner | Christoph Metzner, Tuomo M\"aki-Marttunen, Bartosz Zurowski and Volker
Steuber | Modules for Automated Validation and Comparison of Models of
Neurophysiological and Neurocognitive Biomarkers of Psychiatric Disorders:
ASSRUnit - A Case Study | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The characterisation of biomarkers and endophenotypic measures has been a
central goal of research in psychiatry over recent years. While most of this
research has focused on the identification of biomarkers and endophenotypes,
using various experimental approaches, it has been recognised that their
instantiations, through computational models, have a great potential to help us
understand and interpret these experimental results. However, the enormous
increase in available neurophysiological and neurocognitive as well as
computational data also poses new challenges. How can a researcher stay on top
of the experimental literature? How can computational modelling data be
efficiently compared to experimental data? How can computational modelling most
effectively inform experimentalists? Recently, a general scientific framework
for the generation of executable tests that automatically compare model results
to experimental observations, SciUnit, has been proposed. Here we exploit this
framework for research in psychiatry to address the challenges mentioned above.
We extend the SciUnit framework by adding an experimental database, which
contains a comprehensive collection of relevant experimental observations, and
a prediction database, which contains a collection of predictions generated by
computational models. Together with appropriately designed SciUnit tests and
methods to mine and visualise the databases, model data and test results, this
extended framework has the potential to greatly facilitate the use of
computational models in psychiatry. As an initial example we present ASSRUnit,
a module for auditory steady-state response deficits in psychiatric disorders.
| [
{
"created": "Mon, 18 Dec 2017 11:58:35 GMT",
"version": "v1"
}
] | 2017-12-19 | [
[
"Metzner",
"Christoph",
""
],
[
"Mäki-Marttunen",
"Tuomo",
""
],
[
"Zurowski",
"Bartosz",
""
],
[
"Steuber",
"Volker",
""
]
] | The characterisation of biomarkers and endophenotypic measures has been a central goal of research in psychiatry over the last years. While most of this research has focused on the identification of biomarkers and endophenotypes, using various experimental approaches, it has been recognised that their instantiations, through computational models, have a great potential to help us understand and interpret these experimental results. However, the enormous increase in available neurophysiological and neurocognitive as well as computational data also poses new challenges. How can a researcher stay on top of the experimental literature? How can computational modelling data be efficiently compared to experimental data? How can computational modelling most effectively inform experimentalists? Recently, a general scientific framework for the generation of executable tests that automatically compare model results to experimental observations, SciUnit, has been proposed. Here we exploit this framework for research in psychiatry to address the challenges mentioned above. We extend the SciUnit framework by adding an experimental database, which contains a comprehensive collection of relevant experimental observations, and a prediction database, which contains a collection of predictions generated by computational models. Together with appropriately designed SciUnit tests and methods to mine and visualise the databases, model data and test results, this extended framework has the potential to greatly facilitate the use of computational models in psychiatry. As an initial example we present ASSRUnit, a module for auditory steady-state response deficits in psychiatric disorders. |
2107.04421 | Breno De Oliveira Ferraz | P. P. Avelino, B. F. de Oliveira and R. S. Trintin | Impact of parity in rock-paper-scissors type models | 7 pages, 7 figures | null | null | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | We investigate the impact of parity on the abundance of weak species in the
context of the simplest generalization of the rock-paper-scissors model to an
arbitrary number of species -- we consider models with a total number of
species ($N_S$) between 3 and 12, having one or more (weak) species
characterized by a reduced predation probability (by a factor of ${\mathcal
P}_w$ with respect to the other species). We show, using lattice-based spatial
stochastic simulations with random initial conditions, large enough for
coexistence to prevail, that parity effects are significant. We find that the
performance of weak species is dependent on whether the total number of species
is even or odd, especially for $N_S \le 8$, with odd numbers of species being
on average more favourable to weak species than even ones. We further show
that, despite the significant dispersion observed among individual models, a
weak species has on average a higher abundance than a strong one if ${\mathcal
P}_w$ is sufficiently smaller than unity -- the notable exception being the
four species case.
| [
{
"created": "Fri, 9 Jul 2021 13:13:25 GMT",
"version": "v1"
}
] | 2021-07-12 | [
[
"Avelino",
"P. P.",
""
],
[
"de Oliveira",
"B. F.",
""
],
[
"Trintin",
"R. S.",
""
]
] | We investigate the impact of parity on the abundance of weak species in the context of the simplest generalization of the rock-paper-scissors model to an arbitrary number of species -- we consider models with a total number of species ($N_S$) between 3 and 12, having one or more (weak) species characterized by a reduced predation probability (by a factor of ${\mathcal P}_w$ with respect to the other species). We show, using lattice based spatial stochastic simulations with random initial conditions, large enough for coexistence to prevail, that parity effects are significant. We find that the performance of weak species is dependent on whether the total number of species is even or odd, especially for $N_S \le 8$, with odd numbers of species being on average more favourable to weak species than even ones. We further show that, despite the significant dispersion observed among individual models, a weak species has on average a higher abundance than a strong one if ${\mathcal P}_w$ is sufficiently smaller than unity -- the notable exception being the four species case. |
q-bio/0607030 | Nki Echenim | Nki Echenim (INRIA Rocquencourt), Frederique Clement (INRIA
Rocquencourt), Michel Sorine (INRIA Rocquencourt) | Multi-scale modeling of follicular ovulation as a reachability problem | null | Multiscale Modeling & Simulation 6, 3 (2007) 895--912 | 10.1137/060664495 | null | q-bio.TO math.AP q-bio.QM | null | During each ovarian cycle, only a definite number of follicles ovulate, while
the others undergo a degeneration process called atresia. We have designed a
multi-scale mathematical model where ovulation and atresia result from a
hormonally controlled selection process. A 2D-conservation law describes the age
and maturity structuration of the follicular cell population. In this paper, we
focus on the operating mode of the control, through the study of the
characteristics of the conservation law. We describe in particular the set of
microscopic initial conditions leading to the macroscopic phenomenon of either
ovulation or atresia, in the framework of backwards reachable sets theory.
| [
{
"created": "Thu, 20 Jul 2006 06:47:45 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Feb 2007 08:03:55 GMT",
"version": "v2"
}
] | 2007-10-22 | [
[
"Echenim",
"Nki",
"",
"INRIA Rocquencourt"
],
[
"Clement",
"Frederique",
"",
        "INRIA Rocquencourt"
],
[
"Sorine",
"Michel",
"",
"INRIA Rocquencourt"
]
] | During each ovarian cycle, only a definite number of follicles ovulate, while the others undergo a degeneration process called atresia. We have designed a multi-scale mathematical model where ovulation and atresia result from a hormonally controlled selection process. A 2D-conservation law describes the age and maturity structuration of the follicular cell population. In this paper, we focus on the operating mode of the control, through the study of the characteristics of the conservation law. We describe in particular the set of microscopic initial conditions leading to the macroscopic phenomenon of either ovulation or atresia, in the framework of backwards reachable sets theory. |
0903.0816 | Patrick Warren | Patrick B. Warren | Cells, cancer, and rare events: homeostatic metastability in stochastic
non-linear dynamics models of skin cell proliferation | 4 pages, 2 figures, RevTeX 4.0 | null | null | null | q-bio.CB cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recently proposed single progenitor cell model for skin cell proliferation
[Clayton et al., Nature v446, 185 (2007)] is extended to incorporate
homeostasis as a fixed point of the dynamics. Unlimited cell proliferation in
such a model can be viewed as a paradigm for the onset of cancer. A novel way
in which this can arise is if the homeostatic fixed point becomes metastable,
so that the cell populations can escape from the homeostatic basin of
attraction by a large but rare stochastic fluctuation. Such an event can be
viewed as the final step in a multi-stage model of carcinogenesis. This offers
a possible explanation for the peculiar epidemiology of lung cancer in
ex-smokers.
| [
{
"created": "Wed, 4 Mar 2009 17:08:09 GMT",
"version": "v1"
}
] | 2009-03-05 | [
[
"Warren",
"Patrick B.",
""
]
] | A recently proposed single progenitor cell model for skin cell proliferation [Clayton et al., Nature v446, 185 (2007)] is extended to incorporate homeostasis as a fixed point of the dynamics. Unlimited cell proliferation in such a model can be viewed as a paradigm for the onset of cancer. A novel way in which this can arise is if the homeostatic fixed point becomes metastable, so that the cell populations can escape from the homeostatic basin of attraction by a large but rare stochastic fluctuation. Such an event can be viewed as the final step in a multi-stage model of carcinogenesis. This offers a possible explanation for the peculiar epidemiology of lung cancer in ex-smokers. |
2401.12616 | Zilong Li | Zilong Li, Cong Liu, Xin Pan, Guosheng Ding, Ruiming Wang | The stability and instability of the language control network: a
longitudinal resting-state functional magnetic resonance imaging study | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The language control network is vital among language-related networks
responsible for solving the problem of multiple language switching. Researchers
have expressed concerns about the instability of the language control network
when exposed to external influences (e.g., long-term second language learning).
However, some studies have suggested that the language control network is
stable. Therefore, whether the language control network is stable or not
remains unclear. In the present study, we directly evaluated the stability and
instability of the language control network using resting-state functional
magnetic resonance imaging (rs-fMRI). We employed cohorts of Chinese first-year
college students majoring in English who underwent second language (L2)
acquisition courses at a university and those who did not. Two resting-state
fMRI scans were acquired approximately 1 year apart. We found that the language
control network was both moderately stable and unstable. We further
investigated the morphological coexistence patterns of stability and
instability within the language control network. First, we extracted
connections representing stability and plasticity from the entire network. We
then evaluated whether the coexistence patterns were modular (stability and
instability involve different brain regions) or non-modular (stability and
plasticity involve the same brain regions but have unique connectivity
patterns). We found that both stability and instability coexisted in a
non-modular pattern. Compared with the non-English major group, the English
major group has a more non-modular coexistence pattern. These findings provide
preliminary evidence of the coexistence of stability and instability in the
language control network.
| [
{
"created": "Tue, 23 Jan 2024 10:16:36 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 14:15:18 GMT",
"version": "v2"
}
] | 2024-03-08 | [
[
"Li",
"Zilong",
""
],
[
"Liu",
"Cong",
""
],
[
"Pan",
"Xin",
""
],
[
"Ding",
"Guosheng",
""
],
[
"Wang",
"Ruiming",
""
]
] | The language control network is vital among language-related networks responsible for solving the problem of multiple language switching. Researchers have expressed concerns about the instability of the language control network when exposed to external influences (e.g., long-term second language learning). However, some studies have suggested that the language control network is stable. Therefore, whether the language control network is stable or not remains unclear. In the present study, we directly evaluated the stability and instability of the language control network using resting-state functional magnetic resonance imaging (rs-fMRI). We employed cohorts of Chinese first-year college students majoring in English who underwent second language (L2) acquisition courses at a university and those who did not. Two resting-state fMRI scans were acquired approximately 1 year apart. We found that the language control network was both moderately stable and unstable. We further investigated the morphological coexistence patterns of stability and instability within the language control network. First, we extracted connections representing stability and plasticity from the entire network. We then evaluated whether the coexistence patterns were modular (stability and instability involve different brain regions) or non-modular (stability and plasticity involve the same brain regions but have unique connectivity patterns). We found that both stability and instability coexisted in a non-modular pattern. Compared with the non-English major group, the English major group has a more non-modular coexistence pattern. These findings provide preliminary evidence of the coexistence of stability and instability in the language control network. |
1607.06754 | Mahmoud Hassan | N. Nader, M. Hassan, W. Falou, C. Marque and M. Khalil | A node-wise analysis of the uterine muscle networks for pregnancy
monitoring | 4 pages, 3 figures, accepted in the IEEE EMBC conferance | null | 10.1109/EMBC.2016.7590801 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent past years have seen a noticeable increase of interest in the
correlation analysis of electrohysterographic (EHG) signals in the perspective
of improving the pregnancy monitoring. Here we propose a new approach based on
the functional connectivity between multichannel (4x4 matrix) EHG signals
recorded from the women abdomen. The proposed pipeline includes i) the
computation of the statistical couplings between the multichannel EHG signals,
ii) the characterization of the connectivity matrices, computed by using the
imaginary part of the coherence, based on the graph-theory analysis and iii)
the use of these measures for pregnancy monitoring. The method was evaluated on
a dataset of EHGs, in order to track the correlation between EHGs collected by
each electrode of the matrix (called node-wise analysis) and follow their
evolution along weeks before labor. Results showed that the strength of each
node significantly increases from pregnancy to labor. Electrodes located on the
median vertical axis of the uterus seemed to be the most discriminant. We
speculate that the network-based analysis can be a very promising tool to
improve pregnancy monitoring.
| [
{
"created": "Thu, 2 Jun 2016 09:28:00 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Nader",
"N.",
""
],
[
"Hassan",
"M.",
""
],
[
"Falou",
"W.",
""
],
[
"Marque",
"C.",
""
],
[
"Khalil",
"M.",
""
]
] | Recent years have seen a noticeable increase of interest in the correlation analysis of electrohysterographic (EHG) signals, with a view to improving pregnancy monitoring. Here we propose a new approach based on the functional connectivity between multichannel (4x4 matrix) EHG signals recorded from the woman's abdomen. The proposed pipeline includes i) the computation of the statistical couplings between the multichannel EHG signals, ii) the characterization of the connectivity matrices, computed by using the imaginary part of the coherence, based on the graph-theory analysis and iii) the use of these measures for pregnancy monitoring. The method was evaluated on a dataset of EHGs, in order to track the correlation between EHGs collected by each electrode of the matrix (called node-wise analysis) and follow their evolution along weeks before labor. Results showed that the strength of each node significantly increases from pregnancy to labor. Electrodes located on the median vertical axis of the uterus seemed to be the most discriminant. We speculate that the network-based analysis can be a very promising tool to improve pregnancy monitoring. |
1704.05883 | Duc Nguyen | Duc Duy Nguyen, Tian Xiao, Menglun Wang, Guo-Wei Wei | Rigidity strengthening is a vital mechanism for protein-ligand binding | 9 pages, 6 figures | null | null | null | q-bio.BM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-ligand binding is essential to almost all life processes. The
understanding of protein-ligand interactions is fundamentally important to
rational drug design and protein design. Based on large scale data sets, we
show that protein rigidity strengthening or flexibility reduction is a pivotal
mechanism in protein-ligand binding. Our approach based solely on rigidity is
able to unveil a surprisingly long range contribution of four residue layers to
protein-ligand binding, which has a ramification for drug and protein design.
Additionally, the present work reveals that among various pairwise
interactions, the short range ones within the distance of the van der Waals
diameter are most important. It is found that the present approach outperforms
all the other state-of-the-art scoring functions for protein-ligand binding
affinity predictions of two benchmark data sets.
| [
{
"created": "Fri, 31 Mar 2017 14:59:32 GMT",
"version": "v1"
}
] | 2017-04-21 | [
[
"Nguyen",
"Duc Duy",
""
],
[
"Xiao",
"Tian",
""
],
[
"Wang",
"Menglun",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Protein-ligand binding is essential to almost all life processes. The understanding of protein-ligand interactions is fundamentally important to rational drug design and protein design. Based on large scale data sets, we show that protein rigidity strengthening or flexibility reduction is a pivotal mechanism in protein-ligand binding. Our approach based solely on rigidity is able to unveil a surprisingly long range contribution of four residue layers to protein-ligand binding, which has a ramification for drug and protein design. Additionally, the present work reveals that among various pairwise interactions, the short range ones within the distance of the van der Waals diameter are most important. It is found that the present approach outperforms all the other state-of-the-art scoring functions for protein-ligand binding affinity predictions of two benchmark data sets. |
2206.02524 | Miriam Van Mersbergen | Miriam van Mersbergen, Jeffrey Marchetta, Daniel Foti, Eric Pillow,
Apartim Dasgupta, Chandler Cain, Stephen Morvant | Comparison of aerosol emissions during specific speech tasks | 22 double spaces pages, 6 figures, 2 tables | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The study of aerosols and droplets emitted from the oral cavity has become
increasingly important throughout the COVID-19 pandemic. Studies show
particulates emitted while speaking were generally much smaller compared to
coughing or sneezing. However, recent investigations revealed that they are
large enough to carry respiratory contagions. Although studies have shown that
particulate emissions do indeed occur during speech, to date, there is little
information about the relative contribution of different speech sounds in
producing particle emissions. This study compares airborne aerosol generation
in participants producing isolated speech sounds: fricative consonants, plosive
consonants, and vowel sounds. While participants produced isolated speech
tasks, a planar beam of laser light, a high-speed camera, and image software
calculated the number of particulates detected over time. This study compares
airborne aerosols emitted by human participants at a distance of 2.54 cm
between the laser sheet and the mouth and reveals statistically significant
increases in particulate counts over ambient dust distribution for all speech
sounds. Vowel sounds were statistically greater than consonants, suggesting
that mouth opening, as opposed to place of vocal tract constriction or manner
of sound production, might be the primary influence in the degree to which
particulates become aerosolized during speech. Results of this research will
inform boundary conditions for computational models of aerosolized particulates
during speech.
| [
{
"created": "Wed, 18 May 2022 19:48:13 GMT",
"version": "v1"
}
] | 2022-06-07 | [
[
"van Mersbergen",
"Miriam",
""
],
[
"Marchetta",
"Jeffrey",
""
],
[
"Foti",
"Daniel",
""
],
[
"Pillow",
"Eric",
""
],
[
"Dasgupta",
"Apartim",
""
],
[
"Cain",
"Chandler",
""
],
[
"Morvant",
"Stephen",
""
... | The study of aerosols and droplets emitted from the oral cavity has become increasingly important throughout the COVID-19 pandemic. Studies show particulates emitted while speaking were generally much smaller compared to coughing or sneezing. However, recent investigations revealed that they are large enough to carry respiratory contagions. Although studies have shown that particulate emissions do indeed occur during speech, to date, there is little information about the relative contribution of different speech sounds in producing particle emissions. This study compares airborne aerosol generation in participants producing isolated speech sounds: fricative consonants, plosive consonants, and vowel sounds. While participants produced isolated speech tasks, a planar beam of laser light, a high-speed camera, and image software calculated the number of particulates detected over time. This study compares airborne aerosols emitted by human participants at a distance of 2.54 cm between the laser sheet and the mouth and reveals statistically significant increases in particulate counts over ambient dust distribution for all speech sounds. Vowel sounds were statistically greater than consonants, suggesting that mouth opening, as opposed to place of vocal tract constriction or manner of sound production, might be the primary influence in the degree to which particulates become aerosolized during speech. Results of this research will inform boundary conditions for computational models of aerosolized particulates during speech. |
1803.04999 | Elad Noor | Elad Noor | Removing both Internal and Unrealistic Energy-Generating Cycles in Flux
Balance Analysis | 13 pages, 3 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constraint-based stoichiometric models are ubiquitous in metabolic research,
with Flux Balance Analysis (FBA) being the most widely used method to describe
metabolic phenotypes of cells growing in steady-state. Of the many variants of
constraint-based modelling methods published throughout the years, only a few have
focused on thermodynamic issues, in particular the elimination of non-physical
and non-physiological cyclic fluxes. In this work, we revisit two of these
methods, namely thermodynamic FBA and loopless FBA, and analyze the strengths
and weaknesses of each one. Finally, we suggest a compromise denoted
semi-thermodynamic FBA (st-FBA) which imposes stronger thermodynamic constraints
on the flux polytope compared to loopless FBA, without requiring a large set of
thermodynamic parameters as in the case of thermodynamic FBA. We show that
st-FBA is a useful and simple way to eliminate thermodynamically infeasible
cycles that generate ATP.
| [
{
"created": "Tue, 13 Mar 2018 18:30:00 GMT",
"version": "v1"
}
] | 2018-03-15 | [
[
"Noor",
"Elad",
""
]
] | Constraint-based stoichiometric models are ubiquitous in metabolic research, with Flux Balance Analysis (FBA) being the most widely used method to describe metabolic phenotypes of cells growing in steady-state. Of the many variants of constraint-based modelling methods published throughout the years, only a few have focused on thermodynamic issues, in particular the elimination of non-physical and non-physiological cyclic fluxes. In this work, we revisit two of these methods, namely thermodynamic FBA and loopless FBA, and analyze the strengths and weaknesses of each one. Finally, we suggest a compromise denoted semi-thermodynamic FBA (st-FBA) which imposes stronger thermodynamic constraints on the flux polytope compared to loopless FBA, without requiring a large set of thermodynamic parameters as in the case of thermodynamic FBA. We show that st-FBA is a useful and simple way to eliminate thermodynamically infeasible cycles that generate ATP. |
1306.2734 | Marios Kyriazis Dr | Marios Kyriazis | Reversal of informational entropy and the acquisition of germ-like
immortality by somatic cells | 2 Figures, 1 Table, Appendix | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We live within an increasingly technological, information-laden environment
for the first time in human evolution. This subjects us, and will continue to
subject us in an accelerating fashion, to an unremitting exposure to meaningful
information that requires action. Directly dependent upon this new environment
are novel evolutionary pressures, which can modify existing resource allocation
mechanisms and may eventually favor the survival of somatic cells, particularly
neurons, at the expense of germ line cells. Here it is argued that persistent,
structured information-sharing in both virtual and real domains leads to
increased biological complexity and functionality, which reflects upon human
survival characteristics. Certain immortalisation mechanisms currently employed
by germ cells may thus need to be downgraded in order to enable somatic cells
to manage these new energy demands placed by our modern environment. Relevant
concepts from a variety of disciplines such as the evolution of complex
adaptive systems, information theory, digital hyper-connectivity, and cell
immortalisation will be reviewed.
| [
{
"created": "Wed, 12 Jun 2013 07:46:43 GMT",
"version": "v1"
}
] | 2013-06-13 | [
[
"Kyriazis",
"Marios",
""
]
] | We live within an increasingly technological, information-laden environment for the first time in human evolution. This subjects us, and will continue to subject us in an accelerating fashion, to an unremitting exposure to meaningful information that requires action. Directly dependent upon this new environment are novel evolutionary pressures, which can modify existing resource allocation mechanisms and may eventually favor the survival of somatic cells, particularly neurons, at the expense of germ line cells. Here it is argued that persistent, structured information-sharing in both virtual and real domains leads to increased biological complexity and functionality, which reflects upon human survival characteristics. Certain immortalisation mechanisms currently employed by germ cells may thus need to be downgraded in order to enable somatic cells to manage these new energy demands placed by our modern environment. Relevant concepts from a variety of disciplines such as the evolution of complex adaptive systems, information theory, digital hyper-connectivity, and cell immortalisation will be reviewed. |
2404.00111 | Zhuojun Yu | Zhuojun Yu and Peter J. Thomas | Variational design of sensory feedback for powerstroke-recovery systems | 48 pages, 17 figures, 3 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the raison d'etre of the brain is the survival of the body, there
are relatively few theoretical studies of closed-loop rhythmic motor control
systems. In this paper we provide a unified framework, based on variational
analysis, for investigating the dual goals of performance and robustness in
powerstroke-recovery systems. We augment two previously published closed-loop
motor control models by equipping each model with a performance measure based
on the rate of progress of the system relative to a spatially extended external
substrate -- such as progress relative to the ground for a locomotor task. The
sensitivity measure quantifies the ability of the system to maintain
performance in response to external perturbations. Motivated by a search for
optimal design principles for feedback control achieving the complementary
requirements of efficiency and robustness, we discuss the
performance-sensitivity patterns of the systems featuring different sensory
feedback architectures. In a paradigmatic half-center oscillator (HCO)-motor
system, we observe that the excitation-inhibition property of feedback
mechanisms determines the sensitivity pattern while the activation-inactivation
property determines the performance pattern. Moreover, we show that the
nonlinearity of the sigmoid activation of feedback signals allows the existence
of optimal combinations of performance and sensitivity. In a detailed hindlimb
locomotor system, we find that a force-dependent feedback can simultaneously
optimize both performance and robustness, while length-dependent feedback
variations result in significant performance-versus-sensitivity tradeoffs.
Thus, this work provides an analytical framework for studying feedback control
of oscillations in nonlinear dynamical systems, leading to several insights
that have the potential to inform the design of control or rehabilitation
systems.
| [
{
"created": "Fri, 29 Mar 2024 18:56:16 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Yu",
"Zhuojun",
""
],
[
"Thomas",
"Peter J.",
""
]
] | Although the raison d'etre of the brain is the survival of the body, there are relatively few theoretical studies of closed-loop rhythmic motor control systems. In this paper we provide a unified framework, based on variational analysis, for investigating the dual goals of performance and robustness in powerstroke-recovery systems. We augment two previously published closed-loop motor control models by equipping each model with a performance measure based on the rate of progress of the system relative to a spatially extended external substrate -- such as progress relative to the ground for a locomotor task. The sensitivity measure quantifies the ability of the system to maintain performance in response to external perturbations. Motivated by a search for optimal design principles for feedback control achieving the complementary requirements of efficiency and robustness, we discuss the performance-sensitivity patterns of the systems featuring different sensory feedback architectures. In a paradigmatic half-center oscillator (HCO)-motor system, we observe that the excitation-inhibition property of feedback mechanisms determines the sensitivity pattern while the activation-inactivation property determines the performance pattern. Moreover, we show that the nonlinearity of the sigmoid activation of feedback signals allows the existence of optimal combinations of performance and sensitivity. In a detailed hindlimb locomotor system, we find that a force-dependent feedback can simultaneously optimize both performance and robustness, while length-dependent feedback variations result in significant performance-versus-sensitivity tradeoffs. Thus, this work provides an analytical framework for studying feedback control of oscillations in nonlinear dynamical systems, leading to several insights that have the potential to inform the design of control or rehabilitation systems. |
2201.06792 | Victor Vikram Odouard | Victor Vikram Odouard and Michael Holton Price | Tit for Tattling: Cooperation, communication, and how each could
stabilize the other | 27 pages, 3-page appendix | null | null | null | q-bio.PE econ.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indirect reciprocity is a mechanism by which individuals cooperate with those
who have cooperated with others. This creates a regime in which repeated
interactions are not necessary to incent cooperation (as would be required for
direct reciprocity). However, indirect reciprocity creates a new problem: how
do agents know who has cooperated with others? To know this, agents would need
to access some form of reputation information. Perhaps there is a communication
system to disseminate reputation information, but how does it remain truthful
and informative? Most papers assume the existence of a truthful, forthcoming,
and informative communication system; in this paper, we seek to explain how
such a communication system could remain evolutionarily stable in the absence
of exogenous pressures. Specifically, we present three conditions that together
maintain both the truthfulness of the communication system and the prevalence
of cooperation: individuals (1) use a norm that rewards the behaviors that it
prescribes (an aligned norm), (2) can signal not only about the actions of
other agents, but also about their truthfulness (by acting as third party
observers to an interaction), and (3) make occasional mistakes, demonstrating
how error can create stability by introducing diversity.
| [
{
"created": "Tue, 18 Jan 2022 07:44:11 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Apr 2022 20:58:50 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Dec 2022 05:46:01 GMT",
"version": "v3"
},
{
"created": "Fri, 4 Aug 2023 20:38:38 GMT",
"version": "v4"
}
] | 2023-08-08 | [
[
"Odouard",
"Victor Vikram",
""
],
[
"Price",
"Michael Holton",
""
]
] | Indirect reciprocity is a mechanism by which individuals cooperate with those who have cooperated with others. This creates a regime in which repeated interactions are not necessary to incent cooperation (as would be required for direct reciprocity). However, indirect reciprocity creates a new problem: how do agents know who has cooperated with others? To know this, agents would need to access some form of reputation information. Perhaps there is a communication system to disseminate reputation information, but how does it remain truthful and informative? Most papers assume the existence of a truthful, forthcoming, and informative communication system; in this paper, we seek to explain how such a communication system could remain evolutionarily stable in the absence of exogenous pressures. Specifically, we present three conditions that together maintain both the truthfulness of the communication system and the prevalence of cooperation: individuals (1) use a norm that rewards the behaviors that it prescribes (an aligned norm), (2) can signal not only about the actions of other agents, but also about their truthfulness (by acting as third party observers to an interaction), and (3) make occasional mistakes, demonstrating how error can create stability by introducing diversity. |
2311.18376 | Zahra Kavian | Zahra Kavian, Kimia Hajisadeghi, Yashar Rezazadeh, Mehrbod Faraji,
Reza Ebrahimpour | Age Effects on Decision-Making, Drift Diffusion Model | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Training can improve human decision-making performance. After several
training sessions, a person can quickly and accurately complete a task.
However, decision-making is always a trade-off between accuracy and response
time. Factors such as age and drug abuse can affect the decision-making
process. This study examines how training can improve the performance of
different age groups in completing a random dot motion (RDM) task. The
participants are divided into two groups: old and young. They undergo a
three-phase training and then repeat the same RDM task. The hierarchical
drift-diffusion model analyzes the subjects' responses and determines how the
model's parameters change after training for both age groups. The results show
that after training, the participants were able to accumulate sensory
information faster, and the model drift rate increased. However, their decision
boundary decreased as they became more confident and had a lower
decision-making threshold. Additionally, the old group had a higher boundary
and lower drift rate in both pre- and post-training, and there was less
difference between the two groups' parameters after training.
| [
{
"created": "Thu, 30 Nov 2023 09:19:12 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Kavian",
"Zahra",
""
],
[
"Hajisadeghi",
"Kimia",
""
],
[
"Rezazadeh",
"Yashar",
""
],
[
"Faraji",
"Mehrbod",
""
],
[
"Ebrahimpour",
"Reza",
""
]
] | Training can improve human decision-making performance. After several training sessions, a person can quickly and accurately complete a task. However, decision-making is always a trade-off between accuracy and response time. Factors such as age and drug abuse can affect the decision-making process. This study examines how training can improve the performance of different age groups in completing a random dot motion (RDM) task. The participants are divided into two groups: old and young. They undergo a three-phase training and then repeat the same RDM task. The hierarchical drift-diffusion model analyzes the subjects' responses and determines how the model's parameters change after training for both age groups. The results show that after training, the participants were able to accumulate sensory information faster, and the model drift rate increased. However, their decision boundary decreased as they became more confident and had a lower decision-making threshold. Additionally, the old group had a higher boundary and lower drift rate in both pre- and post-training, and there was less difference between the two groups' parameters after training. |
q-bio/0702021 | Patrick Warren | Patrick B. Warren, Janette L. Jones | Duality, thermodynamics, and the linear programming problem in
constraint-based models of metabolism | 4 pages, 2 figures, 1 table, RevTeX 4, final accepted version | null | 10.1103/PhysRevLett.99.108101 | null | q-bio.SC | null | It is shown that the dual to the linear programming problem that arises in
constraint-based models of metabolism can be given a thermodynamic
interpretation in which the shadow prices are chemical potential analogues, and
the objective is to minimise free energy consumption given a free energy drain
corresponding to growth. The interpretation is distinct from conventional
non-equilibrium thermodynamics, although it does satisfy a minimum entropy
production principle. It can be used to motivate extensions of constraint-based
modelling, for example to microbial ecosystems.
| [
{
"created": "Fri, 9 Feb 2007 14:57:25 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Aug 2007 10:21:38 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Warren",
"Patrick B.",
""
],
[
"Jones",
"Janette L.",
""
]
] | It is shown that the dual to the linear programming problem that arises in constraint-based models of metabolism can be given a thermodynamic interpretation in which the shadow prices are chemical potential analogues, and the objective is to minimise free energy consumption given a free energy drain corresponding to growth. The interpretation is distinct from conventional non-equilibrium thermodynamics, although it does satisfy a minimum entropy production principle. It can be used to motivate extensions of constraint-based modelling, for example to microbial ecosystems. |
2403.09182 | Adam Malik | Adam A. Malik, Cecilia Krona, Soumi Kundu, Philip Gerlee, Sven
Nelander | Anatomically aware simulation of patient-specific glioblastoma
xenografts | null | null | null | null | q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Patient-derived cells (PDC) mouse xenografts are increasingly important tools
in glioblastoma (GBM) research, essential to investigate case-specific growth
patterns and treatment responses. Despite the central role of xenograft models
in the field, few good simulation models are available to probe the dynamics of
tumor growth and to support therapy design. We therefore propose a new
framework for the patient-specific simulation of GBM in the mouse brain. Unlike
existing methods, our simulations leverage a high-resolution map of the mouse
brain anatomy to yield patient-specific results that are in good agreement with
experimental observations. To facilitate the fitting of our model to
histological data, we use Approximate Bayesian Computation. Because our model
uses few parameters, reflecting growth, invasion and niche dependencies, it is
well suited for case comparisons and for probing treatment effects. We
demonstrate how our model can be used to simulate different treatments by
perturbing the different model parameters. We expect that in silico replicates
of mouse xenograft tumors can improve the assessment of therapeutic outcomes and
mouse xenograft tumors can improve the assessment of therapeutic outcomes and
boost the statistical power of preclinical GBM studies.
| [
{
"created": "Thu, 14 Mar 2024 08:52:43 GMT",
"version": "v1"
}
] | 2024-03-15 | [
[
"Malik",
"Adam A.",
""
],
[
"Krona",
"Cecilia",
""
],
[
"Kundu",
"Soumi",
""
],
[
"Gerlee",
"Philip",
""
],
[
"Nelander",
"Sven",
""
]
] | Patient-derived cells (PDC) mouse xenografts are increasingly important tools in glioblastoma (GBM) research, essential to investigate case-specific growth patterns and treatment responses. Despite the central role of xenograft models in the field, few good simulation models are available to probe the dynamics of tumor growth and to support therapy design. We therefore propose a new framework for the patient-specific simulation of GBM in the mouse brain. Unlike existing methods, our simulations leverage a high-resolution map of the mouse brain anatomy to yield patient-specific results that are in good agreement with experimental observations. To facilitate the fitting of our model to histological data, we use Approximate Bayesian Computation. Because our model uses few parameters, reflecting growth, invasion and niche dependencies, it is well suited for case comparisons and for probing treatment effects. We demonstrate how our model can be used to simulate different treatments by perturbing the different model parameters. We expect that in silico replicates of mouse xenograft tumors can improve the assessment of therapeutic outcomes and boost the statistical power of preclinical GBM studies. |
1902.05568 | Philippe Poulin | Philippe Poulin, Daniel J\"orgens, Pierre-Marc Jodoin, Maxime
Descoteaux | Tractography and machine learning: Current state and open challenges | null | null | 10.1016/j.mri.2019.04.013 | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Supervised machine learning (ML) algorithms have recently been proposed as an
alternative to traditional tractography methods in order to address some of
their weaknesses. They can be path-based and local-model-free, and easily
incorporate anatomical priors to make contextual and non-local decisions that
should help the tracking process. ML-based techniques have thus shown promising
reconstructions of larger spatial extent of existing white matter bundles,
promising reconstructions with fewer false positives, and promising robustness to
known position and shape biases of current tractography techniques. But as of
today, none of these ML-based methods have shown conclusive performances or
have been adopted as a de facto solution to tractography. One reason for this
might be the lack of well-defined and extensive frameworks to train, evaluate,
and compare these methods.
In this paper, we describe several datasets and evaluation tools that contain
useful features for ML algorithms, along with the various methods proposed in
the recent years. We then discuss the strategies that are used to evaluate and
compare those methods, as well as their shortcomings. Finally, we describe the
particular needs of ML tractography methods and discuss tangible solutions for
future works.
| [
{
"created": "Thu, 14 Feb 2019 19:12:32 GMT",
"version": "v1"
}
] | 2019-05-22 | [
[
"Poulin",
"Philippe",
""
],
[
"Jörgens",
"Daniel",
""
],
[
"Jodoin",
"Pierre-Marc",
""
],
[
"Descoteaux",
"Maxime",
""
]
] | Supervised machine learning (ML) algorithms have recently been proposed as an alternative to traditional tractography methods in order to address some of their weaknesses. They can be path-based and local-model-free, and easily incorporate anatomical priors to make contextual and non-local decisions that should help the tracking process. ML-based techniques have thus shown promising reconstructions of larger spatial extent of existing white matter bundles, promising reconstructions with fewer false positives, and promising robustness to known position and shape biases of current tractography techniques. But as of today, none of these ML-based methods have shown conclusive performances or have been adopted as a de facto solution to tractography. One reason for this might be the lack of well-defined and extensive frameworks to train, evaluate, and compare these methods. In this paper, we describe several datasets and evaluation tools that contain useful features for ML algorithms, along with the various methods proposed in the recent years. We then discuss the strategies that are used to evaluate and compare those methods, as well as their shortcomings. Finally, we describe the particular needs of ML tractography methods and discuss tangible solutions for future works. |
2210.10911 | Jaime Cofre | Jaime Cofre | The Neoplasia as embryological phenomenon and its implication in the
animal evolution and the origin of cancer. III. The role of flagellated cell
fusion in the formation of the first animal and evolutionary clues to the
Warburg effect | 49 pages, 2 figures. Keywords: Cancer; Neoplasia; Evolution;
Embryology; Multiflagellate fusion; Warburg effect; Ediacaran period | null | null | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Cytasters have been underestimated in terms of their potential relevance to
embryonic development and evolution. From the perspective discussed herein,
structures such as the multiciliated cells of comb rows and balancers
supporting mineralized statoliths and macrocilia in Beroe ovata point to a past
event of multiflagellate fusion in the origin of metazoans. These structures,
which are unique in evolutionary history, indicate that early animals handled
basal bodies and their duplication in a manner consistent with a "developmental
program" that originated in the Ctenophora. Furthermore, the fact that centrosome
amplification leads to spontaneous tumorigenesis suggests that the centrosome
regulation process was co-opted into a neoplastic functional module.
Multicilia, cilia, and flagella are deeply rooted in the evolution of animals
and Neoplasia. The fusion of several flagellated microgametes into a cell with
a subsequent phase of zygotic (haplontic) meiosis might have been at the origin
of both animal evolution and the neoplastic process. In the Ediacaran ocean, we
also encounter evolutionary links between the Warburg effect and Neoplasia.
| [
{
"created": "Wed, 19 Oct 2022 22:20:51 GMT",
"version": "v1"
}
] | 2022-10-21 | [
[
"Cofre",
"Jaime",
""
]
] | Cytasters have been underestimated in terms of their potential relevance to embryonic development and evolution. From the perspective discussed herein, structures such as the multiciliated cells of comb rows and balancers supporting mineralized statoliths and macrocilia in Beroe ovata point to a past event of multiflagellate fusion in the origin of metazoans. These structures, which are unique in evolutionary history, indicate that early animals handled basal bodies and their duplication in a manner consistent with a "developmental program" that originated in the Ctenophora. Furthermore, the fact that centrosome amplification leads to spontaneous tumorigenesis suggests that the centrosome regulation process was co-opted into a neoplastic functional module. Multicilia, cilia, and flagella are deeply rooted in the evolution of animals and Neoplasia. The fusion of several flagellated microgametes into a cell with a subsequent phase of zygotic (haplontic) meiosis might have been at the origin of both animal evolution and the neoplastic process. In the Ediacaran ocean, we also encounter evolutionary links between the Warburg effect and Neoplasia. |
0910.3146 | Soumya Banerjee | Soumya Banerjee, Melanie Moses | A Hybrid Agent Based and Differential Equation Model of Body Size
Effects on Pathogen Replication and Immune System Response | 3 pages | The 8th International Conference on Artificial Immune Systems
(ICARIS), Volume 5666-014, 14-18, 2009 | 10.1007/978-3-642-03246-2 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many emerging pathogens infect multiple host species, and multi-host
pathogens may have very different dynamics in different host species. This
research addresses how pathogen replication rates and Immune System (IS)
response times are constrained by host body size. An Ordinary Differential
Equation (ODE) model is used to show that pathogen replication rates decline
with host body size but IS response rates remain invariant with body size. An
Agent-Based Model (ABM) is used to investigate two models of IS architecture
that could explain scale invariance of IS response rates. A stage structured
hybrid model is proposed that strikes a balance between the detailed
representation of an ABM and computational tractability of an ODE, by using
them in the initial and latter stages of an infection, respectively.
| [
{
"created": "Fri, 16 Oct 2009 15:50:53 GMT",
"version": "v1"
}
] | 2009-10-19 | [
[
"Banerjee",
"Soumya",
""
],
[
"Moses",
"Melanie",
""
]
] | Many emerging pathogens infect multiple host species, and multi-host pathogens may have very different dynamics in different host species. This research addresses how pathogen replication rates and Immune System (IS) response times are constrained by host body size. An Ordinary Differential Equation (ODE) model is used to show that pathogen replication rates decline with host body size but IS response rates remain invariant with body size. An Agent-Based Model (ABM) is used to investigate two models of IS architecture that could explain scale invariance of IS response rates. A stage structured hybrid model is proposed that strikes a balance between the detailed representation of an ABM and computational tractability of an ODE, by using them in the initial and latter stages of an infection, respectively. |
1511.07266 | Davide Valenti | D. Valenti, A. Giuffrida, G. Denaro, N. Pizzolato, L. Curcio, B.
Spagnolo, S. Mazzola, G. Basilone, A. Bonanno | Noise Induced Phenomena in the Dynamics of Two Competing Species | 23 pages, 8 figures; to be published in Math. Model. Nat. Phenom.
(2016) | null | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Noise through its interaction with the nonlinearity of the living systems can
give rise to counter-intuitive phenomena. In this paper we briefly review noise
induced effects in different ecosystems, in which two populations compete for
the same resources. We also present new results on spatial patterns of two
populations, while modeling real distributions of anchovies and sardines. The
transient dynamics of these ecosystems are analyzed through generalized
Lotka-Volterra equations in the presence of multiplicative noise, which models
the interaction between the species and the environment. We find noise induced
phenomena such as quasi-deterministic oscillations, stochastic resonance, noise
delayed extinction, and noise induced pattern formation. In addition, our
theoretical results are validated with experimental findings. Specifically the
results, obtained by a coupled map lattice model, well reproduce the spatial
distributions of anchovies and sardines, observed in a marine ecosystem.
Moreover, the experimental dynamical behavior of two competing bacterial
populations in a meat product and the probability distribution at long times of
one of them are well reproduced by a stochastic microbial predictive model.
| [
{
"created": "Mon, 23 Nov 2015 15:18:14 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Nov 2015 08:16:27 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Sep 2016 16:08:34 GMT",
"version": "v3"
}
] | 2016-09-26 | [
[
"Valenti",
"D.",
""
],
[
"Giuffrida",
"A.",
""
],
[
"Denaro",
"G.",
""
],
[
"Pizzolato",
"N.",
""
],
[
"Curcio",
"L.",
""
],
[
"Spagnolo",
"B.",
""
],
[
"Mazzola",
"S.",
""
],
[
"Basilone",
"G."... | Noise through its interaction with the nonlinearity of the living systems can give rise to counter-intuitive phenomena. In this paper we briefly review noise induced effects in different ecosystems, in which two populations compete for the same resources. We also present new results on spatial patterns of two populations, while modeling real distributions of anchovies and sardines. The transient dynamics of these ecosystems are analyzed through generalized Lotka-Volterra equations in the presence of multiplicative noise, which models the interaction between the species and the environment. We find noise induced phenomena such as quasi-deterministic oscillations, stochastic resonance, noise delayed extinction, and noise induced pattern formation. In addition, our theoretical results are validated with experimental findings. Specifically the results, obtained by a coupled map lattice model, well reproduce the spatial distributions of anchovies and sardines, observed in a marine ecosystem. Moreover, the experimental dynamical behavior of two competing bacterial populations in a meat product and the probability distribution at long times of one of them are well reproduced by a stochastic microbial predictive model. |
q-bio/0312012 | Reka Albert | Reka Albert, Yu-wen Chiu and Hans G. Othmer | Dynamic receptor team formation can explain the high signal transduction
gain in E. coli | Accepted for publication in the Biophysical Journal | Biophysical Journal 86, 2650-2659 (2004) | 10.1016/S0006-3495(04)74321-0 | null | q-bio.MN q-bio.QM | null | Evolution has provided many organisms with sophisticated sensory systems that
enable them to respond to signals in their environment. The response frequently
involves alteration in the pattern of movement, such as the chemokinesis of the
bacterium Escherichia coli, which swims by rotating its flagella. When rotated
counterclockwise (CCW) the flagella coalesce into a propulsive bundle,
producing a relatively straight ``run'', and when rotated clockwise (CW) they
fly apart, resulting in a ``tumble'' which reorients the cell with little
translocation. A stochastic process generates the runs and tumbles, and in a
chemoeffector gradient runs that carry the cell in a favorable direction are
extended. The overall structure of the signal transduction pathways is
well-characterized in E. coli, but important details are still not understood.
Only recently has a source of gain in the signal transduction network been
identified experimentally, and here we present a mathematical model based on
dynamic assembly of receptor teams that can explain this observation.
| [
{
"created": "Mon, 8 Dec 2003 19:39:59 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Albert",
"Reka",
""
],
[
"Chiu",
"Yu-wen",
""
],
[
"Othmer",
"Hans G.",
""
]
] | Evolution has provided many organisms with sophisticated sensory systems that enable them to respond to signals in their environment. The response frequently involves alteration in the pattern of movement, such as the chemokinesis of the bacterium Escherichia coli, which swims by rotating its flagella. When rotated counterclockwise (CCW) the flagella coalesce into a propulsive bundle, producing a relatively straight ``run'', and when rotated clockwise (CW) they fly apart, resulting in a ``tumble'' which reorients the cell with little translocation. A stochastic process generates the runs and tumbles, and in a chemoeffector gradient runs that carry the cell in a favorable direction are extended. The overall structure of the signal transduction pathways is well-characterized in E. coli, but important details are still not understood. Only recently has a source of gain in the signal transduction network been identified experimentally, and here we present a mathematical model based on dynamic assembly of receptor teams that can explain this observation. |
2302.07541 | Ziqiao Zhang | Ziqiao Zhang, Bangyi Zhao, Ailin Xie, Yatao Bian, Shuigeng Zhou | Activity Cliff Prediction: Dataset and Benchmark | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Activity cliffs (ACs), which are generally defined as pairs of structurally
similar molecules that are active against the same bio-target but significantly
different in the binding potency, are of great importance to drug discovery. To
date, the AC prediction problem, i.e., to predict whether a pair of molecules
exhibits the AC relationship, has not yet been fully explored. In this
paper, we first introduce ACNet, a large-scale dataset for AC prediction. ACNet
curates over 400K Matched Molecular Pairs (MMPs) against 190 targets, including
over 20K MMP-cliffs and 380K non-AC MMPs, and provides five subsets for model
development and evaluation. Then, we propose a baseline framework to benchmark
the predictive performance of molecular representations encoded by deep neural
networks for AC prediction, and 16 models are evaluated in experiments. Our
experimental results show that deep learning models can achieve good
performance when the models are trained on tasks with an adequate amount of data,
while the imbalanced, low-data and out-of-distribution features of the ACNet
dataset still make it challenging for deep neural networks to cope with. In
addition, the traditional ECFP method shows a natural advantage on MMP-cliff
prediction, and outperforms other deep learning models on most of the data
subsets. To the best of our knowledge, our work constructs the first
large-scale dataset for AC prediction, which may stimulate the study of AC
prediction models and prompt further breakthroughs in AI-aided drug discovery.
The codes and dataset can be accessed at https://drugai.github.io/ACNet/.
| [
{
"created": "Wed, 15 Feb 2023 09:19:07 GMT",
"version": "v1"
}
] | 2023-02-16 | [
[
"Zhang",
"Ziqiao",
""
],
[
"Zhao",
"Bangyi",
""
],
[
"Xie",
"Ailin",
""
],
[
"Bian",
"Yatao",
""
],
[
"Zhou",
"Shuigeng",
""
]
] | Activity cliffs (ACs), which are generally defined as pairs of structurally similar molecules that are active against the same bio-target but significantly different in the binding potency, are of great importance to drug discovery. To date, the AC prediction problem, i.e., to predict whether a pair of molecules exhibits the AC relationship, has not yet been fully explored. In this paper, we first introduce ACNet, a large-scale dataset for AC prediction. ACNet curates over 400K Matched Molecular Pairs (MMPs) against 190 targets, including over 20K MMP-cliffs and 380K non-AC MMPs, and provides five subsets for model development and evaluation. Then, we propose a baseline framework to benchmark the predictive performance of molecular representations encoded by deep neural networks for AC prediction, and 16 models are evaluated in experiments. Our experimental results show that deep learning models can achieve good performance when the models are trained on tasks with an adequate amount of data, while the imbalanced, low-data and out-of-distribution features of the ACNet dataset still make it challenging for deep neural networks to cope with. In addition, the traditional ECFP method shows a natural advantage on MMP-cliff prediction, and outperforms other deep learning models on most of the data subsets. To the best of our knowledge, our work constructs the first large-scale dataset for AC prediction, which may stimulate the study of AC prediction models and prompt further breakthroughs in AI-aided drug discovery. The codes and dataset can be accessed at https://drugai.github.io/ACNet/. |
2310.01457 | Nicolas Grimault | Julie Th\'evenet, L\'eo Papet, G\'erard Coureaud, Nicolas Boyer,
Florence Levr\'ero, Nicolas Grimault (CRNL), Nicolas Mathevon | Crocodile perception of distress in hominid baby cries | null | Proceedings of the Royal Society B: Biological Sciences, 2023, 290
(2004) | 10.1098/rspb.2023.0201 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is generally argued that distress vocalizations, a common modality for
alerting conspecifics across a wide range of terrestrial vertebrates, share
acoustic features that allow heterospecific communication. Yet studies suggest
that the acoustic traits used to decode distress may vary between species,
leading to decoding errors. Here we found through playback experiments that
Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and
human), and that the intensity of crocodile response depends critically on a
set of specific acoustic features (mainly deterministic chaos, harmonicity and
spectral prominences). Our results suggest that crocodiles are sensitive to the
degree of distress encoded in the vocalizations of phylogenetically very
distant vertebrates. A comparison of these results with those obtained with
human subjects confronted with the same stimuli further indicates that
crocodiles and humans use different acoustic criteria to assess the distress
encoded in infant cries. Interestingly, the acoustic features driving crocodile
reaction are likely to be more reliable markers of distress than those used by
humans. These results highlight that the acoustic features encoding information
in vertebrate sound signals are not necessarily identical across species.
| [
{
"created": "Mon, 2 Oct 2023 11:49:08 GMT",
"version": "v1"
}
] | 2023-10-04 | [
[
"Thévenet",
"Julie",
"",
"CRNL"
],
[
"Papet",
"Léo",
"",
"CRNL"
],
[
"Coureaud",
"Gérard",
"",
"CRNL"
],
[
"Boyer",
"Nicolas",
"",
"CRNL"
],
[
"Levréro",
"Florence",
"",
"CRNL"
],
[
"Grimault",
"Nicolas",
... | It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species. |
2005.12402 | Eric Jones | Eric W. Jones, Jiming Sheng, Jean M. Carlson, Shenshen Wang | Aging-induced fragility of the immune system | 23 pages, 7 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The adaptive and innate branches of the vertebrate immune system work in
close collaboration to protect organisms from harmful pathogens. As an organism
ages its immune system undergoes immunosenescence, characterized by declined
performance or malfunction in either immune branch, which can lead to disease
and death. In this study we develop a mathematical framework of coupled innate
and adaptive immune responses, namely the integrated immune branch (IIB) model.
This model describes dynamics of immune components in both branches, uses a
shape-space representation to encode pathogen-specific immune memory, and
exhibits three steady states -- health, septic death, and chronic inflammation
-- qualitatively similar to clinically-observed immune outcomes. In this model,
the immune system (initialized in the health state) is subjected to a sequence
of pathogen encounters, and we use the number of prior pathogen encounters as a
proxy for the "age" of the immune system. We find that repeated pathogen
encounters may trigger a fragility in which any encounter with a novel pathogen
will cause the system to irreversibly switch from health to chronic
inflammation. This transition is consistent with the onset of "inflammaging", a
condition observed in aged individuals who experience chronic low-grade
inflammation even in the absence of pathogens. The IIB model predicts that the
onset of chronic inflammation strongly depends on the history of encountered
pathogens; the timing of onset differs drastically when the same set of
infections occurs in a different order. Lastly, the coupling between the innate
and adaptive immune branches generates a trade-off between rapid pathogen
clearance and a delayed onset of immunosenescence.
| [
{
"created": "Mon, 18 May 2020 01:10:49 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Aug 2020 21:28:52 GMT",
"version": "v2"
}
] | 2020-08-28 | [
[
"Jones",
"Eric W.",
""
],
[
"Sheng",
"Jiming",
""
],
[
"Carlson",
"Jean M.",
""
],
[
"Wang",
"Shenshen",
""
]
] | The adaptive and innate branches of the vertebrate immune system work in close collaboration to protect organisms from harmful pathogens. As an organism ages its immune system undergoes immunosenescence, characterized by declined performance or malfunction in either immune branch, which can lead to disease and death. In this study we develop a mathematical framework of coupled innate and adaptive immune responses, namely the integrated immune branch (IIB) model. This model describes dynamics of immune components in both branches, uses a shape-space representation to encode pathogen-specific immune memory, and exhibits three steady states -- health, septic death, and chronic inflammation -- qualitatively similar to clinically-observed immune outcomes. In this model, the immune system (initialized in the health state) is subjected to a sequence of pathogen encounters, and we use the number of prior pathogen encounters as a proxy for the "age" of the immune system. We find that repeated pathogen encounters may trigger a fragility in which any encounter with a novel pathogen will cause the system to irreversibly switch from health to chronic inflammation. This transition is consistent with the onset of "inflammaging", a condition observed in aged individuals who experience chronic low-grade inflammation even in the absence of pathogens. The IIB model predicts that the onset of chronic inflammation strongly depends on the history of encountered pathogens; the timing of onset differs drastically when the same set of infections occurs in a different order. Lastly, the coupling between the innate and adaptive immune branches generates a trade-off between rapid pathogen clearance and a delayed onset of immunosenescence. |
2308.01830 | Nicolas Deperrois | Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, and Jakob Jordan | Learning beyond sensations: how dreams organize neuronal representations | 16 pages, 3 figures, perspective article | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Semantic representations in higher sensory cortices form the basis for
robust, yet flexible behavior. These representations are acquired over the
course of development in an unsupervised fashion and continuously maintained
over an organism's lifespan. Predictive learning theories propose that these
representations emerge from predicting or reconstructing sensory inputs.
However, brains are known to generate virtual experiences, such as during
imagination and dreaming, that go beyond previously experienced inputs. Here,
we suggest that virtual experiences may be just as relevant as actual sensory
inputs in shaping cortical representations. In particular, we discuss two
complementary learning principles that organize representations through the
generation of virtual experiences. First, "adversarial dreaming" proposes that
creative dreams support a cortical implementation of adversarial learning in
which feedback and feedforward pathways engage in a productive game of trying
to fool each other. Second, "contrastive dreaming" proposes that the invariance
of neuronal representations to irrelevant factors of variation is acquired by
trying to map similar virtual experiences together via a contrastive learning
process. These principles are compatible with known cortical structure and
dynamics and the phenomenology of sleep thus providing promising directions to
explain cortical learning beyond the classical predictive learning paradigm.
| [
{
"created": "Thu, 3 Aug 2023 15:45:12 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Dec 2023 12:20:33 GMT",
"version": "v2"
}
] | 2023-12-06 | [
[
"Deperrois",
"Nicolas",
""
],
[
"Petrovici",
"Mihai A.",
""
],
[
"Senn",
"Walter",
""
],
[
"Jordan",
"Jakob",
""
]
] | Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive learning theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and the phenomenology of sleep thus providing promising directions to explain cortical learning beyond the classical predictive learning paradigm. |
2011.09115 | Pradeep Lam | Pradeep Lam, Alyssa H. Zhu, Iyad Ba Gari, Neda Jahanshad, Paul M.
Thompson | 3D Grid-Attention Networks for Interpretable Age and Alzheimer's Disease
Prediction from Structural MRI | null | null | null | null | q-bio.TO eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | We propose an interpretable 3D Grid-Attention deep neural network that can
accurately predict a person's age and whether they have Alzheimer's disease
(AD) from a structural brain MRI scan. Building on a 3D convolutional neural
network, we added two attention modules at different layers of abstraction, so
that features learned are spatially related to the global features for the
task. The attention layers allow the network to focus on brain regions relevant
to the task, while masking out irrelevant or noisy regions. In evaluations
based on 4,561 3-Tesla T1-weighted MRI scans from 4 phases of the Alzheimer's
Disease Neuroimaging Initiative (ADNI), salience maps for age and AD prediction
partially overlapped, but lower-level features overlapped more than
higher-level features. The brain age prediction network also distinguished AD
and healthy control groups better than another state-of-the-art method. The
resulting visual analyses can distinguish interpretable feature patterns that
are important for predicting clinical diagnosis. Future work is needed to test
performance across scanners and populations.
| [
{
"created": "Wed, 18 Nov 2020 06:41:20 GMT",
"version": "v1"
}
] | 2020-11-19 | [
[
"Lam",
"Pradeep",
""
],
[
"Zhu",
"Alyssa H.",
""
],
[
"Gari",
"Iyad Ba",
""
],
[
"Jahanshad",
"Neda",
""
],
[
"Thompson",
"Paul M.",
""
]
] | We propose an interpretable 3D Grid-Attention deep neural network that can accurately predict a person's age and whether they have Alzheimer's disease (AD) from a structural brain MRI scan. Building on a 3D convolutional neural network, we added two attention modules at different layers of abstraction, so that features learned are spatially related to the global features for the task. The attention layers allow the network to focus on brain regions relevant to the task, while masking out irrelevant or noisy regions. In evaluations based on 4,561 3-Tesla T1-weighted MRI scans from 4 phases of the Alzheimer's Disease Neuroimaging Initiative (ADNI), salience maps for age and AD prediction partially overlapped, but lower-level features overlapped more than higher-level features. The brain age prediction network also distinguished AD and healthy control groups better than another state-of-the-art method. The resulting visual analyses can distinguish interpretable feature patterns that are important for predicting clinical diagnosis. Future work is needed to test performance across scanners and populations. |
1808.07149 | Anne-Florence Bitbol | Shou-Wen Wang, Anne-Florence Bitbol and Ned S. Wingreen | Revealing evolutionary constraints on proteins through sequence analysis | 37 pages, 28 figures | PLoS Comput. Biol. 15(4): e1007010 (2019) | 10.1371/journal.pcbi.1007010 | null | q-bio.BM physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical analysis of alignments of large numbers of protein sequences has
revealed "sectors" of collectively coevolving amino acids in several protein
families. Here, we show that selection acting on any functional property of a
protein, represented by an additive trait, can give rise to such a sector. As
an illustration of a selected trait, we consider the elastic energy of an
important conformational change within an elastic network model, and we show
that selection acting on this energy leads to correlations among residues. For
this concrete example and more generally, we demonstrate that the main
signature of functional sectors lies in the small-eigenvalue modes of the
covariance matrix of the selected sequences. However, secondary signatures of
these functional sectors also exist in the extensively-studied large-eigenvalue
modes. Our simple, general model leads us to propose a principled method to
identify functional sectors, along with the magnitudes of mutational effects,
from sequence data. We further demonstrate the robustness of these functional
sectors to various forms of selection, and the robustness of our approach to
the identification of multiple selected traits.
| [
{
"created": "Tue, 21 Aug 2018 22:17:26 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Mar 2019 23:10:28 GMT",
"version": "v2"
}
] | 2019-04-26 | [
[
"Wang",
"Shou-Wen",
""
],
[
"Bitbol",
"Anne-Florence",
""
],
[
"Wingreen",
"Ned S.",
""
]
] | Statistical analysis of alignments of large numbers of protein sequences has revealed "sectors" of collectively coevolving amino acids in several protein families. Here, we show that selection acting on any functional property of a protein, represented by an additive trait, can give rise to such a sector. As an illustration of a selected trait, we consider the elastic energy of an important conformational change within an elastic network model, and we show that selection acting on this energy leads to correlations among residues. For this concrete example and more generally, we demonstrate that the main signature of functional sectors lies in the small-eigenvalue modes of the covariance matrix of the selected sequences. However, secondary signatures of these functional sectors also exist in the extensively-studied large-eigenvalue modes. Our simple, general model leads us to propose a principled method to identify functional sectors, along with the magnitudes of mutational effects, from sequence data. We further demonstrate the robustness of these functional sectors to various forms of selection, and the robustness of our approach to the identification of multiple selected traits. |
2305.17470 | Dipam Das | Debasish Bhattacharjee, Dipam Das, Santanu Acharjee, Tarini Kumar
Dutta | Two predators one prey model that integrates the effect of supplementary
food resources due to one predator's kleptoparasitism under the possibility
of retribution by the other predator | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In ecology, foraging requires animals to expend energy in order to obtain
resources. The cost of foraging can be reduced through kleptoparasitism, the
theft of a resource that another individual has expended effort to acquire.
Thus, kleptoparasitism is one of the most significant feeding techniques in
ecology. In this study, we investigate a two predator one prey paradigm in
which one predator acts as a kleptoparasite and the other as a host. This
research considers the post-kleptoparasitism scenario, which has received
little attention in the literature. Parametric requirements for the existence
as well as local and global stability of biologically viable equilibria have
been proposed. The occurrences of various one parametric bifurcations, such as
saddle-node bifurcation, transcritical bifurcation, and Hopf bifurcation, as
well as two parametric bifurcations, such as Bautin bifurcation, are explored
in depth. A relatively low growth rate of the first predator induces a
subcritical Hopf bifurcation, although a supercritical Hopf bifurcation occurs
at a relatively high growth rate of the first predator, making coexistence of
all three species possible. Some numerical simulations have been provided for
the purpose of
verifying our theoretical conclusions.
| [
{
"created": "Sat, 27 May 2023 13:05:36 GMT",
"version": "v1"
}
] | 2023-05-30 | [
[
"Bhattacharjee",
"Debasish",
""
],
[
"Das",
"Dipam",
""
],
[
"Acharjee",
"Santanu",
""
],
[
"Dutta",
"Tarini Kumar",
""
]
] | In ecology, foraging requires animals to expend energy in order to obtain resources. The cost of foraging can be reduced through kleptoparasitism, the theft of a resource that another individual has expended effort to acquire. Thus, kleptoparasitism is one of the most significant feeding techniques in ecology. In this study, we investigate a two predator one prey paradigm in which one predator acts as a kleptoparasite and the other as a host. This research considers the post-kleptoparasitism scenario, which has received little attention in the literature. Parametric requirements for the existence as well as local and global stability of biologically viable equilibria have been proposed. The occurrences of various one parametric bifurcations, such as saddle-node bifurcation, transcritical bifurcation, and Hopf bifurcation, as well as two parametric bifurcations, such as Bautin bifurcation, are explored in depth. A relatively low growth rate of the first predator induces a subcritical Hopf bifurcation, although a supercritical Hopf bifurcation occurs at a relatively high growth rate of the first predator, making coexistence of all three species possible. Some numerical simulations have been provided for the purpose of verifying our theoretical conclusions. |
1501.06890 | Michael Spanner | G.S. Thekkadath and Michael Spanner | Comment on "Human Time-Frequency Acuity Beats the Fourier Uncertainty
Principle" | 2 pages, 1 figure, accepted to Phys. Rev. Lett | null | 10.1103/PhysRevLett.114.069401 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the initial article [Phys. Rev. Lett. 110, 044301 (2013), arXiv:1208.4611]
it was claimed that human hearing can beat the Fourier uncertainty principle.
In this Comment, we demonstrate that the experiment designed and implemented in
the original article was ill-chosen to test Fourier uncertainty in human
hearing.
| [
{
"created": "Tue, 27 Jan 2015 20:03:50 GMT",
"version": "v1"
}
] | 2015-06-23 | [
[
"Thekkadath",
"G. S.",
""
],
[
"Spanner",
"Michael",
""
]
] | In the initial article [Phys. Rev. Lett. 110, 044301 (2013), arXiv:1208.4611] it was claimed that human hearing can beat the Fourier uncertainty principle. In this Comment, we demonstrate that the experiment designed and implemented in the original article was ill-chosen to test Fourier uncertainty in human hearing. |
1507.08232 | Augusto Gonzalez | Roberto Herrero, Dario A. Leon and Augusto Gonzalez | Levy model of cancer | arXiv admin note: text overlap with arXiv:1507.06920 | Scientific Reports volume 12, Article number: 4748 (2022) | 10.1038/s41598-022-08502-8 | null | q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A small portion of a tissue defines a microstate in gene expression space.
Mutations, epigenetic events or external factors cause microstate displacements
which are modeled by combining small independent gene expression variations and
large Levy jumps, resulting from the collective variations of a set of genes.
The risk of cancer in a tissue is estimated as the microstate probability to
transit from the normal to the tumor region in gene expression space. The
formula coming from the contribution of large Levy jumps seems to provide a
qualitatively correct description of the lifetime risk of cancer, and reveals
an interesting connection between the risk and the way the tissue is protected
against infections.
| [
{
"created": "Wed, 29 Jul 2015 17:33:30 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2015 14:56:12 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Sep 2015 16:16:37 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Jul 2020 12:19:03 GMT",
"version": "v4"
}
] | 2022-05-19 | [
[
"Herrero",
"Roberto",
""
],
[
"Leon",
"Dario A.",
""
],
[
"Gonzalez",
"Augusto",
""
]
] | A small portion of a tissue defines a microstate in gene expression space. Mutations, epigenetic events or external factors cause microstate displacements which are modeled by combining small independent gene expression variations and large Levy jumps, resulting from the collective variations of a set of genes. The risk of cancer in a tissue is estimated as the microstate probability to transit from the normal to the tumor region in gene expression space. The formula coming from the contribution of large Levy jumps seems to provide a qualitatively correct description of the lifetime risk of cancer, and reveals an interesting connection between the risk and the way the tissue is protected against infections. |
1506.00354 | Yasser Roudi | Yasser Roudi and Graham Taylor | Learning with hidden variables | revised version accepted in Current Opinion in Neurobiology | Current Opinion in Neurobiology (2015), 35: 110-118 | 10.1016/j.conb.2015.07.006 | null | q-bio.NC cond-mat.dis-nn cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning and inferring features that generate sensory input is a task
continuously performed by cortex. In recent years, novel algorithms and
learning rules have been proposed that allow neural network models to learn
such features from natural images, written text, audio signals, etc. These
networks usually involve deep architectures with many layers of hidden neurons.
Here we review recent advancements in this area emphasizing, amongst other
things, the processing of dynamical inputs by networks with hidden nodes and
the role of single neuron models. These points and the questions they raise can
provide conceptual advancements in understanding of learning in the cortex and
the relationship between machine learning approaches to learning with hidden
nodes and those in cortical circuits.
| [
{
"created": "Mon, 1 Jun 2015 05:36:19 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jul 2015 20:37:48 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Roudi",
"Yasser",
""
],
[
"Taylor",
"Graham",
""
]
] | Learning and inferring features that generate sensory input is a task continuously performed by cortex. In recent years, novel algorithms and learning rules have been proposed that allow neural network models to learn such features from natural images, written text, audio signals, etc. These networks usually involve deep architectures with many layers of hidden neurons. Here we review recent advancements in this area emphasizing, amongst other things, the processing of dynamical inputs by networks with hidden nodes and the role of single neuron models. These points and the questions they raise can provide conceptual advancements in understanding of learning in the cortex and the relationship between machine learning approaches to learning with hidden nodes and those in cortical circuits. |
2005.01001 | Joydeep Munshi | Joydeep Munshi, Indranil Roy and Ganesh Balasubramanian | Spatiotemporal dynamics in demography-sensitive disease transmission:
COVID-19 spread in NY as a case study | 14 pages, 6 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid transmission of the highly contagious novel coronavirus has been
represented through several data-guided approaches across targeted geographies,
in an attempt to understand when the pandemic will be under control and imposed
lockdown measures can be relaxed. However, these epidemiological models
predominantly based on training data employing number of cases and fatalities
are limited in that they do not account for the spatiotemporal population
dynamics that principally contributes to the disease spread. Here, a stochastic
cellular automata enabled predictive model is presented that is able to
accurately describe the effect of demography-dependent population dynamics on
disease transmission. Using the spread of coronavirus in the state of New York
as a case study, results from the computational framework remarkably agree with
the actual count for infected cases and deaths as reported across
organizations. The predictions suggest that an extended lockdown in some form,
for up to 180 days, can significantly reduce the risk of a second wave of the
outbreak. In addition, increased availability of medical testing is able to
reduce the number of infected patients, even when less stringent social
distancing guidelines are imposed. Equipping this stochastic approach with
demographic factors such as age ratio and pre-existing health conditions
robustifies the model to predict the transmissibility of future outbreaks
before
they transform into an epidemic.
| [
{
"created": "Sun, 3 May 2020 06:28:37 GMT",
"version": "v1"
}
] | 2020-05-05 | [
[
"Munshi",
"Joydeep",
""
],
[
"Roy",
"Indranil",
""
],
[
"Balasubramanian",
"Ganesh",
""
]
] | The rapid transmission of the highly contagious novel coronavirus has been represented through several data-guided approaches across targeted geographies, in an attempt to understand when the pandemic will be under control and imposed lockdown measures can be relaxed. However, these epidemiological models predominantly based on training data employing number of cases and fatalities are limited in that they do not account for the spatiotemporal population dynamics that principally contributes to the disease spread. Here, a stochastic cellular automata enabled predictive model is presented that is able to accurately describe the effect of demography-dependent population dynamics on disease transmission. Using the spread of coronavirus in the state of New York as a case study, results from the computational framework remarkably agree with the actual count for infected cases and deaths as reported across organizations. The predictions suggest that an extended lockdown in some form, for up to 180 days, can significantly reduce the risk of a second wave of the outbreak. In addition, increased availability of medical testing is able to reduce the number of infected patients, even when less stringent social distancing guidelines are imposed. Equipping this stochastic approach with demographic factors such as age ratio and pre-existing health conditions robustifies the model to predict the transmissibility of future outbreaks before they transform into an epidemic. |
2101.12559 | Francesca Pitolli | D. Calvetti, B. Johnson, A. Pascarella, F. Pitolli, E. Somersalo, B.
Vantaggi | Mining the Mind: Linear Discriminant Analysis of MEG source
reconstruction time series supports dynamic changes in deep brain regions
during meditation sessions | 34 pages, 15 figures | Brain Topography 34, 840-862 (2021) | 10.1007/s10548-021-00874-w | null | q-bio.NC cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Meditation practices have been claimed to have a positive effect on the
regulation of mood and emotion for quite some time by practitioners, and in
recent times there has been a sustained effort to provide a more precise
description of the changes induced by meditation on the human brain.
Longitudinal studies have reported morphological changes in cortical thickness
and volume in selected brain regions due to meditation practice, which is
interpreted as evidence for its effectiveness beyond subjective self-reporting.
Evidence based on real-time monitoring of the meditating brain by functional
imaging modalities such as MEG or EEG remains a challenge. In this article we consider
MEG data collected during meditation sessions of experienced Buddhist monks
practicing focused attention (Samatha) and open monitoring (Vipassana)
meditation, contrasted by resting state with eyes closed. The MEG data is first
mapped to time series of brain activity averaged over brain regions
corresponding to a standard Destrieux brain atlas, and further by bootstrapping
and spectral analysis to data matrices representing a random sample of power
spectral densities over bandwidths corresponding to $\alpha$, $\beta$,
$\gamma$, and $\theta$ bands in the spectral range. We demonstrate using linear
discriminant analysis (LDA) that the samples corresponding to different
meditative or resting states contain enough fingerprints of the brain state to
allow a separation between different states, and we identify the brain regions
that appear to contribute to the separation. Our findings suggest that
cingulate cortex, insular cortex and some of the internal structures, most
notably accumbens, caudate and putamen nuclei, thalamus and amygdalae stand out
as separating regions, which seems to correlate well with earlier findings
based on longitudinal studies.
| [
{
"created": "Fri, 29 Jan 2021 13:22:36 GMT",
"version": "v1"
}
] | 2022-04-27 | [
[
"Calvetti",
"D.",
""
],
[
"Johnson",
"B.",
""
],
[
"Pascarella",
"A.",
""
],
[
"Pitolli",
"F.",
""
],
[
"Somersalo",
"E.",
""
],
[
"Vantaggi",
"B.",
""
]
] | Meditation practices have been claimed to have a positive effect on the regulation of mood and emotion for quite some time by practitioners, and in recent times there has been a sustained effort to provide a more precise description of the changes induced by meditation on the human brain. Longitudinal studies have reported morphological changes in cortical thickness and volume in selected brain regions due to meditation practice, which is interpreted as evidence for its effectiveness beyond subjective self-reporting. Evidence based on real-time monitoring of the meditating brain by functional imaging modalities such as MEG or EEG remains a challenge. In this article we consider MEG data collected during meditation sessions of experienced Buddhist monks practicing focused attention (Samatha) and open monitoring (Vipassana) meditation, contrasted by resting state with eyes closed. The MEG data is first mapped to time series of brain activity averaged over brain regions corresponding to a standard Destrieux brain atlas, and further by bootstrapping and spectral analysis to data matrices representing a random sample of power spectral densities over bandwidths corresponding to $\alpha$, $\beta$, $\gamma$, and $\theta$ bands in the spectral range. We demonstrate using linear discriminant analysis (LDA) that the samples corresponding to different meditative or resting states contain enough fingerprints of the brain state to allow a separation between different states, and we identify the brain regions that appear to contribute to the separation. Our findings suggest that cingulate cortex, insular cortex and some of the internal structures, most notably accumbens, caudate and putamen nuclei, thalamus and amygdalae stand out as separating regions, which seems to correlate well with earlier findings based on longitudinal studies. |
0806.1264 | Nicolas Vuillerme | Nicolas Vuillerme (TIMC), Olivier Chenu (TIMC), Nicolas Pinsault
(TIMC), Anthony Fleury (TIMC), Jacques Demongeot (TIMC), Yohan Payan (TIMC) | Can a Plantar Pressure-Based Tongue-Placed Electrotactile Biofeedback
Improve Postural Control Under Altered Vestibular and Neck Proprioceptive
Conditions? | null | Neuroscience / Neurosciences (2008) | 10.1016/j.neuroscience.2008.05.018 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigated the effects of a plantar pressure-based tongue-placed
electrotactile biofeedback on postural control during quiet standing under
normal and altered vestibular and neck proprioceptive conditions. To achieve
this goal, fourteen young healthy adults were asked to stand upright as
immobile as possible with their eyes closed in two Neutral and Extended head
postures and two conditions of No-biofeedback and Biofeedback. The underlying
principle of the biofeedback consisted of providing supplementary information
related to foot sole pressure distribution through a wireless embedded
tongue-placed tactile output device. Centre of foot pressure (CoP)
displacements were recorded using a plantar pressure data acquisition system.
Results showed that (1) the Extended head posture yielded increased CoP
displacements relative to the Neutral head posture in the No-biofeedback
condition, with a greater effect along the anteroposterior than mediolateral
axis, whereas (2) no significant difference between the two Neutral and
Extended head postures was observed in the Biofeedback condition. The present
findings suggested that the availability of the plantar pressure-based
tongue-placed electrotactile biofeedback allowed the subjects to suppress the
destabilizing effect induced by the disruption of vestibular and neck
proprioceptive inputs associated with the head extended posture. These results
are discussed according to the sensory re-weighting hypothesis, whereby the
central nervous system would dynamically and selectively adjust the relative
contributions of sensory inputs (i.e., the sensory weights) to maintain upright
stance depending on the sensory contexts and the neuromuscular constraints
acting on the subject.
| [
{
"created": "Sat, 7 Jun 2008 06:20:40 GMT",
"version": "v1"
}
] | 2008-12-18 | [
[
"Vuillerme",
"Nicolas",
"",
"TIMC"
],
[
"Chenu",
"Olivier",
"",
"TIMC"
],
[
"Pinsault",
"Nicolas",
"",
"TIMC"
],
[
"Fleury",
"Anthony",
"",
"TIMC"
],
[
"Demongeot",
"Jacques",
"",
"TIMC"
],
[
"Payan",
"Yoha... | We investigated the effects of a plantar pressure-based tongue-placed electrotactile biofeedback on postural control during quiet standing under normal and altered vestibular and neck proprioceptive conditions. To achieve this goal, fourteen young healthy adults were asked to stand upright as immobile as possible with their eyes closed in two Neutral and Extended head postures and two conditions of No-biofeedback and Biofeedback. The underlying principle of the biofeedback consisted of providing supplementary information related to foot sole pressure distribution through a wireless embedded tongue-placed tactile output device. Centre of foot pressure (CoP) displacements were recorded using a plantar pressure data acquisition system. Results showed that (1) the Extended head posture yielded increased CoP displacements relative to the Neutral head posture in the No-biofeedback condition, with a greater effect along the anteroposterior than mediolateral axis, whereas (2) no significant difference between the two Neutral and Extended head postures was observed in the Biofeedback condition. The present findings suggested that the availability of the plantar pressure-based tongue-placed electrotactile biofeedback allowed the subjects to suppress the destabilizing effect induced by the disruption of vestibular and neck proprioceptive inputs associated with the head extended posture. These results are discussed according to the sensory re-weighting hypothesis, whereby the central nervous system would dynamically and selectively adjust the relative contributions of sensory inputs (i.e., the sensory weights) to maintain upright stance depending on the sensory contexts and the neuromuscular constraints acting on the subject. |
2002.04739 | Juan Aurelio Tamayo | Javier Gamero, Juan A. Tamayo and Juan A. Martinez-Roman | Forecast of the evolution of the contagious disease caused by novel
coronavirus (2019-nCoV) in China | 8 pages, 7 figures, 1 table | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outbreak of novel coronavirus (2019-nCoV) in China has caused a viral
epidemic affecting tens of thousands of persons. Though the danger of this
epidemic is evident, the statistical analysis of data offered in this paper
indicates that the increase in new cases in China will stabilize in the coming
days or weeks. Our forecast could serve to evaluate risks and control the
evolution of this disease.
| [
{
"created": "Wed, 12 Feb 2020 00:15:20 GMT",
"version": "v1"
}
] | 2020-02-19 | [
[
"Gamero",
"Javier",
""
],
[
"Tamayo",
"Juan A.",
""
],
[
"Martinez-Roman",
"Juan A.",
""
]
] | The outbreak of novel coronavirus (2019-nCoV) in China has caused a viral epidemic affecting tens of thousands of persons. Though the danger of this epidemic is evident, the statistical analysis of data offered in this paper indicates that the increase in new cases in China will stabilize in the coming days or weeks. Our forecast could serve to evaluate risks and control the evolution of this disease. |
2402.14213 | Xinke Shen | Xinke Shen, Lingyi Tao, Xuyang Chen, Sen Song, Quanying Liu, Dan Zhang | Contrastive Learning of Shared Spatiotemporal EEG Representations Across
Individuals for Naturalistic Neuroscience | 54 pages, 17 figures | null | null | null | q-bio.NC cs.LG eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neural representations induced by naturalistic stimuli offer insights into
how humans respond to stimuli in daily life. Understanding neural mechanisms
underlying naturalistic stimuli processing hinges on the precise identification
and extraction of the shared neural patterns that are consistently present
across individuals. Targeting the Electroencephalogram (EEG) technique, known
for its rich spatial and temporal information, this study presents a framework
for Contrastive Learning of Shared SpatioTemporal EEG Representations across
individuals (CL-SSTER). CL-SSTER utilizes contrastive learning to maximize the
similarity of EEG representations across individuals for identical stimuli,
contrasting with those for varied stimuli. The network employed spatial and
temporal convolutions to simultaneously learn the spatial and temporal patterns
inherent in EEG. The versatility of CL-SSTER was demonstrated on three EEG
datasets, including a synthetic dataset, a natural speech comprehension EEG
dataset, and an emotional video watching EEG dataset. CL-SSTER attained the
highest inter-subject correlation (ISC) values compared to the state-of-the-art
ISC methods. The latent representations generated by CL-SSTER exhibited
reliable spatiotemporal EEG patterns, which can be explained by properties of
the naturalistic stimuli. CL-SSTER serves as an interpretable and scalable
framework for the identification of inter-subject shared neural representations
in naturalistic neuroscience.
| [
{
"created": "Thu, 22 Feb 2024 01:42:12 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Jul 2024 03:48:28 GMT",
"version": "v2"
}
] | 2024-07-16 | [
[
"Shen",
"Xinke",
""
],
[
"Tao",
"Lingyi",
""
],
[
"Chen",
"Xuyang",
""
],
[
"Song",
"Sen",
""
],
[
"Liu",
"Quanying",
""
],
[
"Zhang",
"Dan",
""
]
] | Neural representations induced by naturalistic stimuli offer insights into how humans respond to stimuli in daily life. Understanding neural mechanisms underlying naturalistic stimuli processing hinges on the precise identification and extraction of the shared neural patterns that are consistently present across individuals. Targeting the Electroencephalogram (EEG) technique, known for its rich spatial and temporal information, this study presents a framework for Contrastive Learning of Shared SpatioTemporal EEG Representations across individuals (CL-SSTER). CL-SSTER utilizes contrastive learning to maximize the similarity of EEG representations across individuals for identical stimuli, contrasting with those for varied stimuli. The network employed spatial and temporal convolutions to simultaneously learn the spatial and temporal patterns inherent in EEG. The versatility of CL-SSTER was demonstrated on three EEG datasets, including a synthetic dataset, a natural speech comprehension EEG dataset, and an emotional video watching EEG dataset. CL-SSTER attained the highest inter-subject correlation (ISC) values compared to the state-of-the-art ISC methods. The latent representations generated by CL-SSTER exhibited reliable spatiotemporal EEG patterns, which can be explained by properties of the naturalistic stimuli. CL-SSTER serves as an interpretable and scalable framework for the identification of inter-subject shared neural representations in naturalistic neuroscience. |
q-bio/0607013 | Matthias Keil | Matthias S. Keil | Pushing it to the Limit: Adaptation With Dynamically Switching Gain
Control | 10 pages,11 figures. This manuscript (co-authored with Jordi Vitria)
has been submitted to "EURASIP Journal on Applied Signal Processing" -
Special Issue on Image Perception, to appear 3rd Quarter 2006 | null | 10.1155/2007/51684 | null | q-bio.NC | null | With this paper we propose a model to simulate the functional aspects of
light adaptation in retinal photoreceptors. Our model, however, does not link
specific stages to the detailed molecular processes which are thought to
mediate adaptation in real photoreceptors. We rather model the photoreceptor as
a self-adjusting integration device, which adds up properly amplified luminance
signals. The integration process and the amplification obey a switching
behavior that acts to locally shut down the integration process in dependence
on the internal state of the receptor. The mathematical structure of our model
is quite simple, and its computational complexity is quite low. We present
results of computer simulations which demonstrate that our model adapts
properly to at least four orders of input magnitude.
| [
{
"created": "Mon, 10 Jul 2006 16:04:00 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Keil",
"Matthias S.",
""
]
] | With this paper we propose a model to simulate the functional aspects of light adaptation in retinal photoreceptors. Our model, however, does not link specific stages to the detailed molecular processes which are thought to mediate adaptation in real photoreceptors. We rather model the photoreceptor as a self-adjusting integration device, which adds up properly amplified luminance signals. The integration process and the amplification obey a switching behavior that acts to locally shut down the integration process in dependence on the internal state of the receptor. The mathematical structure of our model is quite simple, and its computational complexity is quite low. We present results of computer simulations which demonstrate that our model adapts properly to at least four orders of input magnitude. |
2007.12261 | Ramon Gomes da Silva | Matheus Henrique Dal Molin Ribeiro, Ramon Gomes da Silva, Viviana
Cocco Mariani, Leandro dos Santos Coelho | Short-term forecasting COVID-19 cumulative confirmed cases: Perspectives
for Brazil | 17 pages, 5 figures. Published paper. arXiv admin note: substantial
text overlap with arXiv:2007.10981 | Chaos, Solitons & Fractals. 135 (2020) 109853 | 10.1016/j.chaos.2020.109853 | null | q-bio.PE cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | The new Coronavirus (COVID-19) is an emerging disease responsible for
infecting millions of people from the first notification until the present day.
Developing efficient short-term forecasting models allows the number of future
cases to be anticipated. In this context, it is possible to develop strategic
planning in the public health system to avoid deaths. In this paper,
autoregressive integrated moving average (ARIMA), cubist (CUBIST), random
forest (RF), ridge regression (RIDGE), support vector regression (SVR), and
stacking-ensemble learning are evaluated in the task of forecasting the
COVID-19 cumulative confirmed cases one, three, and six days ahead in ten
Brazilian states with a high daily incidence. In the stacking learning
approach, the CUBIST, RF, RIDGE, and SVR models are adopted as base-learners
and a Gaussian process (GP) as meta-learner. The models' effectiveness is
evaluated based on the improvement index, mean absolute error, and symmetric
mean absolute percentage error criteria. In most cases, SVR and
stacking-ensemble learning achieve better performance on the adopted criteria
than the compared models. In general, the developed models can generate
accurate forecasts, achieving errors in the ranges of 0.87% - 3.51%, 1.02% -
5.63%, and 0.95% - 6.90% for one-, three-, and six-days-ahead forecasts,
respectively. The ranking of models in all scenarios is SVR, stacking-ensemble
learning, ARIMA, CUBIST, RIDGE, and RF. The evaluated models are recommended
for forecasting and monitoring the ongoing growth of COVID-19 cases, since they
can assist managers in decision-making support systems.
| [
{
"created": "Tue, 21 Jul 2020 17:58:58 GMT",
"version": "v1"
}
] | 2020-07-27 | [
[
"Ribeiro",
"Matheus Henrique Dal Molin",
""
],
[
"da Silva",
"Ramon Gomes",
""
],
[
"Mariani",
"Viviana Cocco",
""
],
[
"Coelho",
"Leandro dos Santos",
""
]
] | The new Coronavirus (COVID-19) is an emerging disease responsible for infecting millions of people from the first notification until the present day. Developing efficient short-term forecasting models allows the number of future cases to be anticipated. In this context, it is possible to develop strategic planning in the public health system to avoid deaths. In this paper, autoregressive integrated moving average (ARIMA), cubist (CUBIST), random forest (RF), ridge regression (RIDGE), support vector regression (SVR), and stacking-ensemble learning are evaluated in the task of forecasting the COVID-19 cumulative confirmed cases one, three, and six days ahead in ten Brazilian states with a high daily incidence. In the stacking learning approach, the CUBIST, RF, RIDGE, and SVR models are adopted as base-learners and a Gaussian process (GP) as meta-learner. The models' effectiveness is evaluated based on the improvement index, mean absolute error, and symmetric mean absolute percentage error criteria. In most cases, SVR and stacking-ensemble learning achieve better performance on the adopted criteria than the compared models. In general, the developed models can generate accurate forecasts, achieving errors in the ranges of 0.87% - 3.51%, 1.02% - 5.63%, and 0.95% - 6.90% for one-, three-, and six-days-ahead forecasts, respectively. The ranking of models in all scenarios is SVR, stacking-ensemble learning, ARIMA, CUBIST, RIDGE, and RF. The evaluated models are recommended for forecasting and monitoring the ongoing growth of COVID-19 cases, since they can assist managers in decision-making support systems. |
q-bio/0607031 | Tam\'as Kiss | Peter Erdi, Tamas Kiss, Janos Toth, Balazs Ujfalussy, Laszlo Zalanyi | From systems biology to dynamical neuropharmacology: proposal for a new
methodology | For an opinion paper see: Aradi and Erdi: Computational
neuropharmacology: dynamical approaches in drug discovery TRENDS in
Pharmacological Sciences 27(5) 240-243 | null | null | null | q-bio.NC q-bio.QM | null | The concepts and methods of Systems Biology are being extended to
neuropharmacology, to test and design drugs against neurological and
psychiatric disorders. Computational modeling that integrates the compartmental
neural modeling technique with a detailed kinetic description of the
pharmacological modulation of transmitter-receptor interaction is offered as a
method to test the electrophysiological and behavioral effects of putative
drugs. Moreover, an inverse method is suggested for controlling a neural system
to realize a prescribed temporal pattern. In particular, as an application of
the proposed new methodology, a computational platform is offered to analyze in
more detail the generation and pharmacological modulation of the theta rhythm
related to anxiety.
| [
{
"created": "Thu, 20 Jul 2006 08:25:26 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Erdi",
"Peter",
""
],
[
"Kiss",
"Tamas",
""
],
[
"Toth",
"Janos",
""
],
[
"Ujfalussy",
"Balazs",
""
],
[
"Zalanyi",
"Laszlo",
""
]
] | The concepts and methods of Systems Biology are being extended to neuropharmacology, to test and design drugs against neurological and psychiatric disorders. Computational modeling that integrates the compartmental neural modeling technique with a detailed kinetic description of the pharmacological modulation of transmitter-receptor interaction is offered as a method to test the electrophysiological and behavioral effects of putative drugs. Moreover, an inverse method is suggested for controlling a neural system to realize a prescribed temporal pattern. In particular, as an application of the proposed new methodology, a computational platform is offered to analyze in more detail the generation and pharmacological modulation of the theta rhythm related to anxiety. |
1511.07827 | Robert Leech | Romy Lorenz, Ricardo P Monti, Ines R Violante, Aldo A Faisal,
Christoforos Anagnostopoulos, Robert Leech and Giovanni Montana | Stopping criteria for boosting automatic experimental design using
real-time fMRI with Bayesian optimization | Oral presentation at MLINI 2015 - 5th NIPS Workshop on Machine
Learning and Interpretation in Neuroimaging: Beyond the Scanner | null | null | MLINI/2015/15 | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian optimization has been proposed as a practical and efficient tool
through which to tune parameters in many difficult settings. Recently, such
techniques have been combined with real-time fMRI to propose a novel framework
which turns on its head the conventional functional neuroimaging approach. This
closed-loop method automatically designs the optimal experiment to evoke a
desired target brain pattern. One of the challenges associated with extending
such methods to real-time brain imaging is the need for adequate stopping
criteria, an aspect of Bayesian optimization which has received limited
attention. In light of high scanning costs and the limited attentional
capacities of subjects, an accurate and reliable stopping criterion is
essential. In order to address this issue, we propose and empirically study the
performance of two
stopping criteria.
| [
{
"created": "Tue, 24 Nov 2015 18:28:54 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Mar 2016 16:54:36 GMT",
"version": "v2"
}
] | 2016-06-10 | [
[
"Lorenz",
"Romy",
""
],
[
"Monti",
"Ricardo P",
""
],
[
"Violante",
"Ines R",
""
],
[
"Faisal",
"Aldo A",
""
],
[
"Anagnostopoulos",
"Christoforos",
""
],
[
"Leech",
"Robert",
""
],
[
"Montana",
"Giovanni",
... | Bayesian optimization has been proposed as a practical and efficient tool through which to tune parameters in many difficult settings. Recently, such techniques have been combined with real-time fMRI to propose a novel framework which turns on its head the conventional functional neuroimaging approach. This closed-loop method automatically designs the optimal experiment to evoke a desired target brain pattern. One of the challenges associated with extending such methods to real-time brain imaging is the need for adequate stopping criteria, an aspect of Bayesian optimization which has received limited attention. In light of high scanning costs and the limited attentional capacities of subjects, an accurate and reliable stopping criterion is essential. In order to address this issue, we propose and empirically study the performance of two stopping criteria. |
0710.3889 | Anirvan M. Sengupta | Mohammad Sedighi and Anirvan M. Sengupta | Epigenetic Chromatin Silencing: Bistability and Front Propagation | 19 pages, 5 figures | null | 10.1088/1478-3975/4/4/002 | null | q-bio.MN | null | The role of post-translational modification of histones in eukaryotic gene
regulation is well recognized. Epigenetic silencing of genes via heritable
chromatin modifications plays a major role in cell fate specification in higher
organisms. We formulate a coarse-grained model of chromatin silencing in yeast
and study the conditions under which the system becomes bistable, allowing for
different epigenetic states. We also study the dynamics of the boundary between
the two locally stable states of chromatin: silenced and unsilenced. The model
could be of use in guiding the discussion on chromatin silencing in general. In
the context of silencing in budding yeast, it helps us understand the phenotype
of various mutants, some of which may be non-trivial to see without the help of
a mathematical model. One such example is a mutation that reduces the rate of
background acetylation of particular histone side-chains that competes with the
deacetylation by Sir2p. The resulting negative feedback due to a Sir protein
depletion effect gives rise to interesting counter-intuitive consequences. Our
mathematical analysis brings forth the different dynamical behaviors possible
within the same molecular model and guides the formulation of more refined
hypotheses that could be addressed experimentally.
| [
{
"created": "Mon, 22 Oct 2007 15:43:56 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Sedighi",
"Mohammad",
""
],
[
"Sengupta",
"Anirvan M.",
""
]
] | The role of post-translational modification of histones in eukaryotic gene regulation is well recognized. Epigenetic silencing of genes via heritable chromatin modifications plays a major role in cell fate specification in higher organisms. We formulate a coarse-grained model of chromatin silencing in yeast and study the conditions under which the system becomes bistable, allowing for different epigenetic states. We also study the dynamics of the boundary between the two locally stable states of chromatin: silenced and unsilenced. The model could be of use in guiding the discussion on chromatin silencing in general. In the context of silencing in budding yeast, it helps us understand the phenotype of various mutants, some of which may be non-trivial to see without the help of a mathematical model. One such example is a mutation that reduces the rate of background acetylation of particular histone side-chains that competes with the deacetylation by Sir2p. The resulting negative feedback due to a Sir protein depletion effect gives rise to interesting counter-intuitive consequences. Our mathematical analysis brings forth the different dynamical behaviors possible within the same molecular model and guides the formulation of more refined hypotheses that could be addressed experimentally. |
2010.11370 | Gelareh Mohammadi | Gelareh Mohammadi and Patrik Vuilleumier | A Multi-Componential Approach to Emotion Recognition and the Effect of
Personality | 13 pages | IEEE Transactions on Affective Computing, 2020 | 10.1109/TAFFC.2020.3028109 | null | q-bio.NC cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emotions are an inseparable part of human nature affecting our behavior in
response to the outside world. Although most empirical studies have been
dominated by two theoretical models including discrete categories of emotion
and dichotomous dimensions, results from neuroscience approaches suggest a
multi-processes mechanism underpinning emotional experience with a large
overlap across different emotions. While these findings are consistent with the
influential theories of emotion in psychology that emphasize a role for
multiple component processes to generate emotion episodes, few studies have
systematically investigated the relationship between discrete emotions and a
full componential view. This paper applies a componential framework with a
data-driven approach to characterize emotional experiences evoked during movie
watching. The results suggest that differences between various emotions can be
captured by a few (at least 6) latent dimensions, each defined by features
associated with component processes, including appraisal, expression,
physiology, motivation, and feeling. In addition, the link between discrete
emotions and component model is explored and results show that a componential
model with a limited number of descriptors is still able to predict the level
of experienced discrete emotion(s) to a satisfactory level. Finally, as
appraisals may vary according to individual dispositions and biases, we also
study the relationship between personality traits and emotions in our
computational framework and show that the role of personality on discrete
emotion differences can be better justified using the component model.
| [
{
"created": "Thu, 22 Oct 2020 01:27:23 GMT",
"version": "v1"
}
] | 2020-10-23 | [
[
"Mohammadi",
"Gelareh",
""
],
[
"Vuilleumier",
"Patrik",
""
]
] | Emotions are an inseparable part of human nature affecting our behavior in response to the outside world. Although most empirical studies have been dominated by two theoretical models including discrete categories of emotion and dichotomous dimensions, results from neuroscience approaches suggest a multi-processes mechanism underpinning emotional experience with a large overlap across different emotions. While these findings are consistent with the influential theories of emotion in psychology that emphasize a role for multiple component processes to generate emotion episodes, few studies have systematically investigated the relationship between discrete emotions and a full componential view. This paper applies a componential framework with a data-driven approach to characterize emotional experiences evoked during movie watching. The results suggest that differences between various emotions can be captured by a few (at least 6) latent dimensions, each defined by features associated with component processes, including appraisal, expression, physiology, motivation, and feeling. In addition, the link between discrete emotions and component model is explored and results show that a componential model with a limited number of descriptors is still able to predict the level of experienced discrete emotion(s) to a satisfactory level. Finally, as appraisals may vary according to individual dispositions and biases, we also study the relationship between personality traits and emotions in our computational framework and show that the role of personality on discrete emotion differences can be better justified using the component model. |
0710.4235 | George F. R. Ellis | G. Auletta, G. F. R. Ellis, and L. Jaeger | Top-Down Causation by Information Control: From a Philosophical Problem
to a Scientific Research Program | Revised version to meet referee's comments, and responding to a paper
by Wegscheid et al that was not mentioned in the previous version. 23 pages,
9 figures | null | null | null | q-bio.OT | null | It has been claimed that different types of causes must be considered in
biological systems, including top-down as well as same-level and bottom-up
causation, thus enabling the top levels to be causally efficacious in their own
right. To clarify this issue, important distinctions between information and
signs are introduced here and the concepts of information control and
functional equivalence classes in those systems are rigorously defined and used
to characterise when top down causation by feedback control happens, in a way
that is testable. The causally significant elements we consider are equivalence
classes of lower level processes, realised in biological systems through
different operations having the same outcome within the context of information
control and networks.
| [
{
"created": "Tue, 23 Oct 2007 10:59:08 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jan 2008 08:38:25 GMT",
"version": "v2"
}
] | 2011-11-10 | [
[
"Auletta",
"G.",
""
],
[
"Ellis",
"G. F. R.",
""
],
[
"Jaeger",
"L.",
""
]
] | It has been claimed that different types of causes must be considered in biological systems, including top-down as well as same-level and bottom-up causation, thus enabling the top levels to be causally efficacious in their own right. To clarify this issue, important distinctions between information and signs are introduced here and the concepts of information control and functional equivalence classes in those systems are rigorously defined and used to characterise when top down causation by feedback control happens, in a way that is testable. The causally significant elements we consider are equivalence classes of lower level processes, realised in biological systems through different operations having the same outcome within the context of information control and networks. |
2211.03992 | Takuma Usuzaki Dr | Takuma Usuzaki | Splitting expands the application range of Vision Transformer --
variable Vision Transformer (vViT) | 11 pages, 3 figures, 2 Tables | null | null | null | q-bio.QM eess.IV | http://creativecommons.org/licenses/by/4.0/ | Vision Transformer (ViT) has achieved outstanding results in computer vision.
Although there are many Transformer-based architectures derived from the
original ViT, the dimensions of the patches are usually identical to one
another. This restriction limits the application range in the medical field,
where datasets differ in dimension from one another, e.g. medical images,
patients' personal information, laboratory tests, and so on.
To overcome this limitation, we develop a new derived type of ViT termed
variable Vision Transformer (vViT). The aim of this study is to introduce vViT
and to apply vViT to radiomics using T1 weighted magnetic resonance image (MRI)
of glioma. In predicting 365-day survival among glioma patients using
radiomics, vViT achieved 0.83, 0.82, 0.81, and 0.76 in sensitivity,
specificity, accuracy, and AUC-ROC, respectively. vViT has the potential to
handle different types of medical information at once.
| [
{
"created": "Tue, 8 Nov 2022 04:05:17 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Nov 2022 10:33:13 GMT",
"version": "v2"
},
{
"created": "Sat, 25 Mar 2023 05:11:43 GMT",
"version": "v3"
}
] | 2023-03-28 | [
[
"Usuzaki",
"Takuma",
""
]
] | Vision Transformer (ViT) has achieved outstanding results in computer vision. Although there are many Transformer-based architectures derived from the original ViT, the dimensions of the patches are usually identical to one another. This restriction limits the application range in the medical field, where datasets differ in dimension from one another, e.g. medical images, patients' personal information, laboratory tests, and so on. To overcome this limitation, we develop a new derived type of ViT termed variable Vision Transformer (vViT). The aim of this study is to introduce vViT and to apply vViT to radiomics using T1 weighted magnetic resonance image (MRI) of glioma. In predicting 365-day survival among glioma patients using radiomics, vViT achieved 0.83, 0.82, 0.81, and 0.76 in sensitivity, specificity, accuracy, and AUC-ROC, respectively. vViT has the potential to handle different types of medical information at once. |
2308.06392 | Pekka Orponen | Malgorzata Nowicka, Vinay K. Gautam, Pekka Orponen | Automated rendering of multi-stranded DNA complexes with pseudoknots | 12 pages, 7 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | We present a general method for rendering representations of multi-stranded
DNA complexes from textual descriptions into 2D diagrams. The complexes can be
arbitrarily pseudoknotted, and if a planar rendering is possible, the method
will determine one in time which is essentially linear in the size of the
textual description. (That is, except for a final stochastic fine-tuning step.)
If a planar rendering is not possible, the method will compute a visually
pleasing approximate rendering in quadratic time. Examples of diagrams produced
by the method are presented in the paper.
| [
{
"created": "Fri, 11 Aug 2023 21:19:11 GMT",
"version": "v1"
}
] | 2023-08-15 | [
[
"Nowicka",
"Malgorzata",
""
],
[
"Gautam",
"Vinay K.",
""
],
[
"Orponen",
"Pekka",
""
]
] | We present a general method for rendering representations of multi-stranded DNA complexes from textual descriptions into 2D diagrams. The complexes can be arbitrarily pseudoknotted, and if a planar rendering is possible, the method will determine one in time which is essentially linear in the size of the textual description. (That is, except for a final stochastic fine-tuning step.) If a planar rendering is not possible, the method will compute a visually pleasing approximate rendering in quadratic time. Examples of diagrams produced by the method are presented in the paper. |
1001.2499 | Ralph Stern | Ralph H. Stern | The Discordance of Individual Risk Estimates and the Reference Class
Problem | 13 pages, 2 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multivariate methods that relate outcomes to risk factors have been adopted
clinically to individualize treatment. This has promoted the belief that
individuals have a true or unique risk.
The logic of assigning an individual a single risk value has been criticized
since 1866. The reason is that any individual can be simultaneously considered
a member of different groups, with each group having its own risk level (the
reference class problem).
Lemeshow et al. provided well-documented examples of remarkable discordance
between predictions for an individual by different valid predictive methods
utilizing different risk factors. The prevalence of such discordance is unknown
as it is rarely evaluated, but must be substantial due to the abundance of risk
factors.
Lemeshow et al. cautioned against using ICU mortality predictions for the
provision of care to individual patients. If individual risk estimates are used
clinically, users should be aware that valid methods may give very different
results.
| [
{
"created": "Thu, 14 Jan 2010 20:11:40 GMT",
"version": "v1"
}
] | 2010-01-15 | [
[
"Stern",
"Ralph H.",
""
]
] | Multivariate methods that relate outcomes to risk factors have been adopted clinically to individualize treatment. This has promoted the belief that individuals have a true or unique risk. The logic of assigning an individual a single risk value has been criticized since 1866. The reason is that any individual can be simultaneously considered a member of different groups, with each group having its own risk level (the reference class problem). Lemeshow et al. provided well-documented examples of remarkable discordance between predictions for an individual by different valid predictive methods utilizing different risk factors. The prevalence of such discordance is unknown as it is rarely evaluated, but must be substantial due to the abundance of risk factors. Lemeshow et al. cautioned against using ICU mortality predictions for the provision of care to individual patients. If individual risk estimates are used clinically, users should be aware that valid methods may give very different results. |
1806.00070 | Paul M\"uller | Paul M\"uller, Petra Schwille, Thomas Weidemann | Scanning Fluorescence Correlation Spectroscopy (SFCS) with a Scan Path
Perpendicular to the Membrane Plane | 16 pages, 4 figures | null | 10.1007/978-1-62703-649-8_29 | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scanning fluorescence correlation spectroscopy (SFCS) with a scan path
perpendicular to the membrane plane was introduced to measure diffusion and
interactions of fluorescent components in free standing biomembranes. Using a
confocal laser scanning microscope (CLSM) the open detection volume is moved
laterally with kHz frequency through the membrane and the photon events are
continuously recorded and stored in a file. While the accessory hardware
requirements for a conventional CLSM are minimal, data evaluation can pose a
bottleneck. The photon events must be assigned to each scan, in which the
maximum signal intensities have to be detected, binned, and aligned between the
scans, in order to derive the membrane related intensity fluctuations of one
spot. Finally, this time-dependent signal must be correlated and evaluated by
well known FCS model functions. Here we provide two platform independent, open
source software tools (PyScanFCS and PyCorrFit) that allow the user to perform all of
these steps and to establish perpendicular SFCS in its one- or two-focus as
well as its single- or dual-colour modality.
| [
{
"created": "Thu, 31 May 2018 19:48:09 GMT",
"version": "v1"
}
] | 2018-06-04 | [
[
"Müller",
"Paul",
""
],
[
"Schwille",
"Petra",
""
],
[
"Weidemann",
"Thomas",
""
]
] | Scanning fluorescence correlation spectroscopy (SFCS) with a scan path perpendicular to the membrane plane was introduced to measure diffusion and interactions of fluorescent components in free standing biomembranes. Using a confocal laser scanning microscope (CLSM) the open detection volume is moved laterally with kHz frequency through the membrane and the photon events are continuously recorded and stored in a file. While the accessory hardware requirements for a conventional CLSM are minimal, data evaluation can pose a bottleneck. The photon events must be assigned to each scan, in which the maximum signal intensities have to be detected, binned, and aligned between the scans, in order to derive the membrane related intensity fluctuations of one spot. Finally, this time-dependent signal must be correlated and evaluated by well known FCS model functions. Here we provide two platform independent, open source software tools (PyScanFCS and PyCorrFit) that allow the user to perform all of these steps and to establish perpendicular SFCS in its one- or two-focus as well as its single- or dual-colour modality. |
2307.07654 | Friedrich Schuessler | Friedrich Schuessler, Francesca Mastrogiuseppe, Srdjan Ostojic, Omri
Barak | Aligned and oblique dynamics in recurrent neural networks | Reviewed article (Elife) | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relation between neural activity and behaviorally relevant variables is
at the heart of neuroscience research. When strong, this relation is termed a
neural representation. There is increasing evidence, however, for partial
dissociations between activity in an area and relevant external variables.
While many explanations have been proposed, a theoretical framework for the
relationship between external and internal variables is lacking. Here, we
utilize recurrent neural networks (RNNs) to explore the question of when and
how neural dynamics and the network's output are related from a geometrical
point of view. We find that training RNNs can lead to two dynamical regimes:
dynamics can either be aligned with the directions that generate output
variables, or oblique to them. We show that the choice of readout weight
magnitude before training can serve as a control knob between the regimes,
similar to recent findings in feedforward networks. These regimes are
functionally distinct. Oblique networks are more heterogeneous and suppress
noise in their output directions. They are furthermore more robust to
perturbations along the output directions. Crucially, the oblique regime is
specific to recurrent (but not feedforward) networks, arising from dynamical
stability considerations. Finally, we show that tendencies towards the aligned
or the oblique regime can be dissociated in neural recordings. Altogether, our
results open a new perspective for interpreting neural activity by relating
network dynamics and their output.
| [
{
"created": "Fri, 14 Jul 2023 23:14:50 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Sep 2023 16:20:09 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Aug 2024 10:35:04 GMT",
"version": "v3"
}
] | 2024-08-05 | [
[
"Schuessler",
"Friedrich",
""
],
[
"Mastrogiuseppe",
"Francesca",
""
],
[
"Ostojic",
"Srdjan",
""
],
[
"Barak",
"Omri",
""
]
] | The relation between neural activity and behaviorally relevant variables is at the heart of neuroscience research. When strong, this relation is termed a neural representation. There is increasing evidence, however, for partial dissociations between activity in an area and relevant external variables. While many explanations have been proposed, a theoretical framework for the relationship between external and internal variables is lacking. Here, we utilize recurrent neural networks (RNNs) to explore the question of when and how neural dynamics and the network's output are related from a geometrical point of view. We find that training RNNs can lead to two dynamical regimes: dynamics can either be aligned with the directions that generate output variables, or oblique to them. We show that the choice of readout weight magnitude before training can serve as a control knob between the regimes, similar to recent findings in feedforward networks. These regimes are functionally distinct. Oblique networks are more heterogeneous and suppress noise in their output directions. They are furthermore more robust to perturbations along the output directions. Crucially, the oblique regime is specific to recurrent (but not feedforward) networks, arising from dynamical stability considerations. Finally, we show that tendencies towards the aligned or the oblique regime can be dissociated in neural recordings. Altogether, our results open a new perspective for interpreting neural activity by relating network dynamics and their output. |
2003.04890 | Giuseppe De Vito | Giuseppe De Vito, Paola Parlanti, Roberta Cecchi, Stefano Luin,
Valentina Cappello, Ilaria Tonazzini, Vincenzo Piazza | Effects of fixatives on myelin molecular order probed with RP-CARS
microscopy | 11 pages, 4 figures. This is the peer reviewed version of an article
that has been published in final form by Applied Optics. Please note that the
copyright statement on the front page of the article allows posting of this
manuscript by authors on arXiv, as specified here:
https://www.osapublishing.org/submit/review/copyright_permissions.cfm | Applied Optics 59 (2020) 1756-1762 | 10.1364/AO.384662 | null | q-bio.TO q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When live imaging is not feasible, sample fixation allows preserving the
ultrastructure of biological samples for subsequent microscopy analysis. This
process can be performed with various methods, each one affecting the
biological structure of the sample differently. While these alterations were
well-characterized using traditional microscopy, little information is
available about the effects of the fixatives on the spatial molecular
orientation of the biological tissue. We tackled this issue by employing
Rotating-Polarization Coherent Anti-Stokes Raman Scattering (RP-CARS)
microscopy to study the effects of different fixatives on the myelin
sub-micrometric molecular order and micrometric morphology. RP-CARS is a novel
technique derived from CARS microscopy that allows probing spatial orientation
of molecular bonds while maintaining the intrinsic chemical selectivity of CARS
microscopy. By characterizing the effects of the fixation procedures, the
present work represents a useful guide for the choice of the best fixation
technique(s), in particular for polarisation-resolved CARS microscopy. Finally,
we show that the combination of paraformaldehyde and glutaraldehyde can be
effectively employed as a fixative for RP-CARS microscopy, as long as the
effects on the molecular spatial distribution, here characterized, are taken
into account.
| [
{
"created": "Mon, 9 Mar 2020 19:16:17 GMT",
"version": "v1"
}
] | 2020-03-12 | [
[
"De Vito",
"Giuseppe",
""
],
[
"Parlanti",
"Paola",
""
],
[
"Cecchi",
"Roberta",
""
],
[
"Luin",
"Stefano",
""
],
[
"Cappello",
"Valentina",
""
],
[
"Tonazzini",
"Ilaria",
""
],
[
"Piazza",
"Vincenzo",
""
... | When live imaging is not feasible, sample fixation allows preserving the ultrastructure of biological samples for subsequent microscopy analysis. This process can be performed with various methods, each one affecting the biological structure of the sample differently. While these alterations were well-characterized using traditional microscopy, little information is available about the effects of the fixatives on the spatial molecular orientation of the biological tissue. We tackled this issue by employing Rotating-Polarization Coherent Anti-Stokes Raman Scattering (RP-CARS) microscopy to study the effects of different fixatives on the myelin sub-micrometric molecular order and micrometric morphology. RP-CARS is a novel technique derived from CARS microscopy that allows probing spatial orientation of molecular bonds while maintaining the intrinsic chemical selectivity of CARS microscopy. By characterizing the effects of the fixation procedures, the present work represents a useful guide for the choice of the best fixation technique(s), in particular for polarisation-resolved CARS microscopy. Finally, we show that the combination of paraformaldehyde and glutaraldehyde can be effectively employed as a fixative for RP-CARS microscopy, as long as the effects on the molecular spatial distribution, here characterized, are taken into account. |
2205.11676 | Aleksandr Brechalov | Anastasia Razdaibiedina and Alexander Brechalov | Learning multi-scale functional representations of proteins from
single-cell microscopy data | ICLR MLDD 2022 | null | null | null | q-bio.QM cs.CV cs.LG q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Protein function is inherently linked to its localization within the cell,
and fluorescent microscopy data is an indispensable resource for learning
representations of proteins. Despite major developments in molecular
representation learning, extracting functional information from biological
images remains a non-trivial computational task. Current state-of-the-art
approaches use autoencoder models to learn high-quality features by
reconstructing images. However, such methods are prone to capturing noise and
imaging artifacts. In this work, we revisit deep learning models used for
classifying major subcellular localizations, and evaluate representations
extracted from their final layers. We show that simple convolutional networks
trained on localization classification can learn protein representations that
encapsulate diverse functional information, and significantly outperform
autoencoder-based models. We also propose a robust evaluation strategy to
assess quality of protein representations across different scales of biological
function.
| [
{
"created": "Tue, 24 May 2022 00:00:07 GMT",
"version": "v1"
}
] | 2022-05-25 | [
[
"Razdaibiedina",
"Anastasia",
""
],
[
"Brechalov",
"Alexander",
""
]
] | Protein function is inherently linked to its localization within the cell, and fluorescent microscopy data is an indispensable resource for learning representations of proteins. Despite major developments in molecular representation learning, extracting functional information from biological images remains a non-trivial computational task. Current state-of-the-art approaches use autoencoder models to learn high-quality features by reconstructing images. However, such methods are prone to capturing noise and imaging artifacts. In this work, we revisit deep learning models used for classifying major subcellular localizations, and evaluate representations extracted from their final layers. We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information, and significantly outperform autoencoder-based models. We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function. |
2111.00287 | Linus Zhang | Ruijie Xu, Lin Zhang, Yu Chen | CdtGRN: Construction of qualitative time-delayed gene regulatory
networks with a deep learning method | null | null | null | q-bio.MN q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Gene regulations often change over time rather than being
constant, yet many of the gene regulatory networks extracted from databases are
static. The tumor suppressor gene $P53$ is involved in the pathogenesis of many
tumors, and its inhibition effects occur after a certain period. Therefore, it
is of great significance to elucidate the regulation mechanism over time
points. Result: A qualitative method for representing dynamic gene regulatory
networks, called CdtGRN, is developed. It adopts a combination of convolutional
neural networks (CNN) and fully connected networks (DNN) as the core mechanism
of prediction, inferring regulatory relations from microarray gene expression
levels. The ionizing radiation Affymetrix dataset (E-MEXP-549) was obtained
from ArrayExpress. CdtGRN is tested against a time-delayed gene regulatory
network with $22,284$ genes related to $P53$. The accuracy of CdtGRN reaches
92.07$\%$ on the classification of the conservative verification set, with a
kappa coefficient of $0.84$ and an average AUC of 94.25$\%$. Conclusion: The
algorithm and program we developed in this study would be useful for
identifying dynamic gene regulatory networks and for objectively analyzing the
delay of a regulatory relationship from gene expression levels at different
time points. The time-delayed gene regulatory network of $P53$ is also inferred
and represented qualitatively, which is helpful for understanding the
pathological mechanism of tumors.
| [
{
"created": "Sat, 30 Oct 2021 16:37:04 GMT",
"version": "v1"
}
] | 2021-11-02 | [
[
"Xu",
"Ruijie",
""
],
[
"Zhang",
"Lin",
""
],
[
"Chen",
"Yu",
""
]
] | Background: Gene regulations often change over time rather than being constant, yet many of the gene regulatory networks extracted from databases are static. The tumor suppressor gene $P53$ is involved in the pathogenesis of many tumors, and its inhibition effects occur after a certain period. Therefore, it is of great significance to elucidate the regulation mechanism over time points. Result: A qualitative method for representing dynamic gene regulatory networks, called CdtGRN, is developed. It adopts a combination of convolutional neural networks (CNN) and fully connected networks (DNN) as the core mechanism of prediction, inferring regulatory relations from microarray gene expression levels. The ionizing radiation Affymetrix dataset (E-MEXP-549) was obtained from ArrayExpress. CdtGRN is tested against a time-delayed gene regulatory network with $22,284$ genes related to $P53$. The accuracy of CdtGRN reaches 92.07$\%$ on the classification of the conservative verification set, with a kappa coefficient of $0.84$ and an average AUC of 94.25$\%$. Conclusion: The algorithm and program we developed in this study would be useful for identifying dynamic gene regulatory networks and for objectively analyzing the delay of a regulatory relationship from gene expression levels at different time points. The time-delayed gene regulatory network of $P53$ is also inferred and represented qualitatively, which is helpful for understanding the pathological mechanism of tumors. |
q-bio/0609009 | Henrik Jeldtoft Jensen | Daniel John Lawson and Henrik Jeldtoft Jensen | Neutral Evolution as Diffusion in phenotype space: reproduction with
mutation but without selection | 4 pages, 2 figures. Paper now representative of published article | Phys. Rev. Lett. 98, 098102 (2007) | 10.1103/PhysRevLett.98.098102 | null | q-bio.PE | null | The process of `Evolutionary Diffusion', i.e. reproduction with local
mutation but without selection in a biological population, resembles standard
Diffusion in many ways. However, Evolutionary Diffusion allows the formation of
local peaks with a characteristic width that undergo drift, even in the
infinite population limit. We analytically calculate the mean peak width and
the effective random walk step size, and obtain the distribution of the peak
width which has a power law tail. We find that independent local mutations act
as a diffusion of interacting particles with increased stepsize.
| [
{
"created": "Thu, 7 Sep 2006 14:07:28 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Sep 2006 12:05:57 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Oct 2006 16:06:08 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Mar 2007 11:15:22 GMT",
"version": "v4"
}
] | 2007-05-23 | [
[
"Lawson",
"Daniel John",
""
],
[
"Jensen",
"Henrik Jeldtoft",
""
]
] | The process of `Evolutionary Diffusion', i.e. reproduction with local mutation but without selection in a biological population, resembles standard Diffusion in many ways. However, Evolutionary Diffusion allows the formation of local peaks with a characteristic width that undergo drift, even in the infinite population limit. We analytically calculate the mean peak width and the effective random walk step size, and obtain the distribution of the peak width which has a power law tail. We find that independent local mutations act as a diffusion of interacting particles with increased stepsize. |
1907.09586 | Vince Grolmusz | Mate Fellner and Balint Varga and Vince Grolmusz | Good Neighbors, Bad Neighbors: The Frequent Network Neighborhood Mapping
of the Hippocampus Enlightens Several Structural Factors of the Human
Intelligence on a 414-Subject Cohort | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human connectome has become the very frequent subject of study of
brain-scientists, psychologists, and imaging experts in the last decade. With
diffusion magnetic resonance imaging techniques, unified with advanced data
processing algorithms, today we are able to compute braingraphs with several
hundred, anatomically identified nodes and thousands of edges, corresponding to
the anatomical connections of the brain. The analysis of these graphs without
refined mathematical tools is hopeless. These tools need to address the high
error rate of the MRI processing workflow, and need to find structural causes
or at least correlations of psychological properties and cerebral connections.
Until now, structural connectomics has only rarely been able to identify such
causes or correlations. In the present work, we study the frequent neighbor sets of
the most deeply investigated brain area, the hippocampus. By applying the
Frequent Network Neighborhood mapping method, we identified frequent
neighbor-sets of the hippocampus, which may influence numerous psychological
parameters, including intelligence-related ones. We have found neighbor sets,
which have significantly higher frequency in subjects with high-scored Penn
Matrix tests, and with low-scored Penn Word Memory tests. Our study utilizes
the braingraphs, computed from the imaging data of the Human Connectome
Project's 414 subjects, each with 463 anatomically identified nodes.
| [
{
"created": "Mon, 22 Jul 2019 21:30:44 GMT",
"version": "v1"
}
] | 2019-07-24 | [
[
"Fellner",
"Mate",
""
],
[
"Varga",
"Balint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | The human connectome has become the very frequent subject of study of brain-scientists, psychologists, and imaging experts in the last decade. With diffusion magnetic resonance imaging techniques, unified with advanced data processing algorithms, today we are able to compute braingraphs with several hundred, anatomically identified nodes and thousands of edges, corresponding to the anatomical connections of the brain. The analysis of these graphs without refined mathematical tools is hopeless. These tools need to address the high error rate of the MRI processing workflow, and need to find structural causes or at least correlations of psychological properties and cerebral connections. Until now, structural connectomics has only rarely been able to identify such causes or correlations. In the present work, we study the frequent neighbor sets of the most deeply investigated brain area, the hippocampus. By applying the Frequent Network Neighborhood mapping method, we identified frequent neighbor-sets of the hippocampus, which may influence numerous psychological parameters, including intelligence-related ones. We have found neighbor sets, which have significantly higher frequency in subjects with high-scored Penn Matrix tests, and with low-scored Penn Word Memory tests. Our study utilizes the braingraphs, computed from the imaging data of the Human Connectome Project's 414 subjects, each with 463 anatomically identified nodes. |
2407.02634 | Daniel Rickert | Daniel Rickert, Wai-Tong Louis Fan, Matthew Hahn | Inconsistency of parsimony under the multispecies coalescent | 19 pages, 8 figures, 1 table (v2: resolved PDF error; removed
endfloat) | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | While it is known that parsimony can be statistically inconsistent under
certain models of evolution due to high levels of homoplasy, the consistency of
parsimony under the multispecies coalescent (MSC) is less well studied.
Previous studies have shown the consistency of concatenated parsimony
(parsimony applied to concatenated alignments) under the MSC for the rooted
4-taxa case under an infinite-sites model of mutation; on the other hand, other
work has also established the inconsistency of concatenated parsimony for the
unrooted 6-taxa case. These seemingly contradictory results suggest that
concatenated parsimony may fail to be consistent for trees with more than 5
taxa, for all unrooted trees, or for some combination of the two. Here, we
present a technique for computing the expected internal branch lengths of gene
trees under the MSC. This technique allows us to determine the regions of the
parameter space of the species tree under which concatenated parsimony fails
for different numbers of taxa, for rooted or unrooted trees. We use our new
approach to demonstrate that there are always regions of statistical
inconsistency for concatenated parsimony for the 5- and 6-taxa cases,
regardless of rooting. Our results therefore suggest that parsimony is not
generally dependable under the MSC.
| [
{
"created": "Tue, 2 Jul 2024 20:02:25 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Jul 2024 17:17:43 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Rickert",
"Daniel",
""
],
[
"Fan",
"Wai-Tong Louis",
""
],
[
"Hahn",
"Matthew",
""
]
] | While it is known that parsimony can be statistically inconsistent under certain models of evolution due to high levels of homoplasy, the consistency of parsimony under the multispecies coalescent (MSC) is less well studied. Previous studies have shown the consistency of concatenated parsimony (parsimony applied to concatenated alignments) under the MSC for the rooted 4-taxa case under an infinite-sites model of mutation; on the other hand, other work has also established the inconsistency of concatenated parsimony for the unrooted 6-taxa case. These seemingly contradictory results suggest that concatenated parsimony may fail to be consistent for trees with more than 5 taxa, for all unrooted trees, or for some combination of the two. Here, we present a technique for computing the expected internal branch lengths of gene trees under the MSC. This technique allows us to determine the regions of the parameter space of the species tree under which concatenated parsimony fails for different numbers of taxa, for rooted or unrooted trees. We use our new approach to demonstrate that there are always regions of statistical inconsistency for concatenated parsimony for the 5- and 6-taxa cases, regardless of rooting. Our results therefore suggest that parsimony is not generally dependable under the MSC. |
1508.00104 | Loic Chaumont | Lo\"ic Chaumont, Val\'ery Mal\'ecot, Richard Pymar, Chaker Sbai | Reconstructing pedigrees using probabilistic analysis of ISSR
amplification | 5 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data obtained from ISSR amplification may readily be extracted but only
allows us to know, for each gene, if a specific allele is present or not. From
this partial information we provide a probabilistic method to reconstruct the
pedigree corresponding to some families of diploid cultivars. This method
consists in determining, for each individual, the most likely pair of parents
amongst all older individuals, according to some probability measure. The
construction of this measure bears on the fact that the probability of
observing the specific alleles in the child, given the status of the parents,
does not depend on the generation and is the same for each gene.
This assumption is then justified from a convergence result of gene frequencies
which is proved here. Our reconstruction method is applied to a family of 85
living accessions representing the common broom {\it Cytisus scoparius}.
| [
{
"created": "Sat, 1 Aug 2015 10:12:55 GMT",
"version": "v1"
}
] | 2015-08-04 | [
[
"Chaumont",
"Loïc",
""
],
[
"Malécot",
"Valéry",
""
],
[
"Pymar",
"Richard",
""
],
[
"Sbai",
"Chaker",
""
]
] | Data obtained from ISSR amplification may readily be extracted but only allows us to know, for each gene, if a specific allele is present or not. From this partial information we provide a probabilistic method to reconstruct the pedigree corresponding to some families of diploid cultivars. This method consists in determining, for each individual, the most likely pair of parents amongst all older individuals, according to some probability measure. The construction of this measure bears on the fact that the probability of observing the specific alleles in the child, given the status of the parents, does not depend on the generation and is the same for each gene. This assumption is then justified from a convergence result of gene frequencies which is proved here. Our reconstruction method is applied to a family of 85 living accessions representing the common broom {\it Cytisus scoparius}. |
2404.06459 | David Morselli | David Morselli, Marcello E. Delitala, Adrianne L. Jenner, Federico
Frascoli | A hybrid discrete-continuum modelling approach for the interactions of
the immune system with oncolytic viral infections | 29 pages, 14 figures. Supplementary material available at
https://drive.google.com/drive/folders/1F2v172sQQRssDEi7XRFALmzH6KwK9wIt?usp=sharing | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Oncolytic virotherapy, utilizing genetically modified viruses to combat
cancer and trigger anti-cancer immune responses, has garnered significant
attention in recent years. In our previous work arXiv:2305.12386, we developed
a stochastic agent-based model elucidating the spatial dynamics of infected and
uninfected cells within solid tumours. Building upon this foundation, we
present a novel stochastic agent-based model to describe the intricate
interplay between the virus and the immune system; the agents' dynamics are
coupled with a balance equation for the concentration of the chemoattractant
that guides the movement of immune cells. We formally derive the continuum
limit of the model and carry out a systematic quantitative comparison between
this system of PDEs and the individual-based model in two spatial dimensions.
Furthermore, we describe the traveling waves of the three populations, with the
uninfected proliferative cells trying to escape from the infected cells while
immune cells infiltrate the tumour.
Simulations show a good agreement between agent-based approaches and
numerical results for the continuum model. Some parameter ranges give rise to
oscillations of cell number in both models, in line with the behaviour of the
corresponding nonspatial model, which presents Hopf bifurcations. Nevertheless,
in some situations the behaviours of the two models may differ significantly,
suggesting that stochasticity plays a key role in the dynamics. Our results
highlight that an overly rapid immune response, before the infection is
well established, appears to decrease the efficacy of the therapy, and thus some
care is needed when oncolytic virotherapy is combined with immunotherapy. This
further suggests the importance of clinically improving the modulation of the
immune response according to the tumour's characteristics and to the immune
capabilities of the patients.
| [
{
"created": "Tue, 9 Apr 2024 17:00:33 GMT",
"version": "v1"
}
] | 2024-04-10 | [
[
"Morselli",
"David",
""
],
[
"Delitala",
"Marcello E.",
""
],
[
"Jenner",
"Adrianne L.",
""
],
[
"Frascoli",
"Federico",
""
]
] | Oncolytic virotherapy, utilizing genetically modified viruses to combat cancer and trigger anti-cancer immune responses, has garnered significant attention in recent years. In our previous work arXiv:2305.12386, we developed a stochastic agent-based model elucidating the spatial dynamics of infected and uninfected cells within solid tumours. Building upon this foundation, we present a novel stochastic agent-based model to describe the intricate interplay between the virus and the immune system; the agents' dynamics are coupled with a balance equation for the concentration of the chemoattractant that guides the movement of immune cells. We formally derive the continuum limit of the model and carry out a systematic quantitative comparison between this system of PDEs and the individual-based model in two spatial dimensions. Furthermore, we describe the traveling waves of the three populations, with the uninfected proliferative cells trying to escape from the infected cells while immune cells infiltrate the tumour. Simulations show a good agreement between agent-based approaches and numerical results for the continuum model. Some parameter ranges give rise to oscillations of cell number in both models, in line with the behaviour of the corresponding nonspatial model, which presents Hopf bifurcations. Nevertheless, in some situations the behaviours of the two models may differ significantly, suggesting that stochasticity plays a key role in the dynamics. Our results highlight that an overly rapid immune response, before the infection is well established, appears to decrease the efficacy of the therapy, and thus some care is needed when oncolytic virotherapy is combined with immunotherapy. This further suggests the importance of clinically improving the modulation of the immune response according to the tumour's characteristics and to the immune capabilities of the patients. |
0809.3938 | Nils Becker | Nils B. Becker and Ralf Everaers | DNA nano-mechanics: how proteins deform the double helix | accepted for publication in JCP; some minor changes in response to
review. 18 pages, 5 figures + supplement: 4 pages, 3 figures | null | 10.1063/1.3082157 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is a standard exercise in mechanical engineering to infer the external
forces and torques on a body from its static shape and known elastic
properties. Here we apply this kind of analysis to distorted double-helical DNA
in complexes with proteins. We extract the local mean forces and torques acting
on each base-pair of bound DNA from high-resolution complex structures. Our
method relies on known elastic potentials and a careful choice of coordinates
of the well-established rigid base-pair model of DNA. The results are robust
with respect to parameter and conformation uncertainty. They reveal the complex
nano-mechanical patterns of interaction between proteins and DNA. Being
non-trivially and non-locally related to observed DNA conformations, base-pair
forces and torques provide a new view on DNA-protein binding that complements
structural analysis.
| [
{
"created": "Tue, 23 Sep 2008 15:10:16 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Dec 2008 11:04:17 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Becker",
"Nils B.",
""
],
[
"Everaers",
"Ralf",
""
]
] | It is a standard exercise in mechanical engineering to infer the external forces and torques on a body from its static shape and known elastic properties. Here we apply this kind of analysis to distorted double-helical DNA in complexes with proteins. We extract the local mean forces and torques acting on each base-pair of bound DNA from high-resolution complex structures. Our method relies on known elastic potentials and a careful choice of coordinates of the well-established rigid base-pair model of DNA. The results are robust with respect to parameter and conformation uncertainty. They reveal the complex nano-mechanical patterns of interaction between proteins and DNA. Being non-trivially and non-locally related to observed DNA conformations, base-pair forces and torques provide a new view on DNA-protein binding that complements structural analysis. |
1510.06115 | Qing Wan | Changjin Wan, Liqiang Zhu, Yanghui Liu, Ping Feng, Zhaoping Liu,
Hailiang Cao, Peng Xiao, Yi Shi, and Qing Wan | Proton Conducting Graphene Oxide Coupled Neuron Transistors for
Brain-Inspired Cognitive Systems | arXiv admin note: text overlap with arXiv:1506.04658 | null | null | null | q-bio.NC cond-mat.mtrl-sci cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The neuron is the most important building block in our brain, and information
processing in an individual neuron involves the transformation of input synaptic
spike trains into an appropriate output spike train. Hardware implementation of
a neuron by an individual ionic/electronic hybrid device is of great significance
for enhancing our understanding of the brain and solving sensory processing and
complex recognition tasks. Here, we provide a proof-of-principle artificial
neuron based on a proton conducting graphene oxide (GO) coupled oxide-based
electric-double-layer (EDL) transistor with multiple driving inputs and one
modulatory input terminal. Paired-pulse facilitation, dendritic integration and
orientation tuning were successfully emulated. Additionally, neuronal gain
control (arithmetic) in the scheme of rate coding is also experimentally
demonstrated. Our results provide a new-concept approach for building
brain-inspired cognitive systems.
| [
{
"created": "Wed, 21 Oct 2015 02:55:56 GMT",
"version": "v1"
}
] | 2015-11-19 | [
[
"Wan",
"Changjin",
""
],
[
"Zhu",
"Liqiang",
""
],
[
"Liu",
"Yanghui",
""
],
[
"Feng",
"Ping",
""
],
[
"Liu",
"Zhaoping",
""
],
[
"Cao",
"Hailiang",
""
],
[
"Xiao",
"Peng",
""
],
[
"Shi",
"Yi",
... | The neuron is the most important building block in our brain, and information processing in an individual neuron involves the transformation of input synaptic spike trains into an appropriate output spike train. Hardware implementation of a neuron by an individual ionic/electronic hybrid device is of great significance for enhancing our understanding of the brain and solving sensory processing and complex recognition tasks. Here, we provide a proof-of-principle artificial neuron based on a proton conducting graphene oxide (GO) coupled oxide-based electric-double-layer (EDL) transistor with multiple driving inputs and one modulatory input terminal. Paired-pulse facilitation, dendritic integration and orientation tuning were successfully emulated. Additionally, neuronal gain control (arithmetic) in the scheme of rate coding is also experimentally demonstrated. Our results provide a new-concept approach for building brain-inspired cognitive systems. |
0806.2763 | Petter Holme | Petter Holme, Mikael Huss | Substance graphs are optimal simple-graph representations of metabolism | null | Chinese Sci. Bull. 55, 3161-3168 (2010) | 10.1007/s11434-010-4086-3 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One approach to studying the system-wide organization of biochemistry is to
use statistical graph theory. Even in such a heavily simplified method, which
disregards most of the dynamic aspects of biochemistry, one is faced with
fundamental questions, such as how the chemical reaction systems should be
reduced to a graph retaining as much functional information as possible from
the original reaction system. In such graph representations, should the edges
go between substrates and products, or substrates and substrates, or both?
Should vertices represent substances or reactions? Different definitions encode
different information about the reaction system. In this paper we evaluate four
different graph representations of metabolism, applied to data from different
organisms and databases. The graph representations are evaluated by comparing
the overlap between clusters (network modules) and annotated functions, and
also by comparing the set of identified currency metabolites with those that
other authors have identified using qualitative biological arguments. We find
that a "substance network," where all metabolites participating in a reaction
are connected, is relatively better than others, evaluated both with respect to
the functional overlap between modules and functions and to the number and
identity of identified currency metabolites.
| [
{
"created": "Tue, 17 Jun 2008 11:47:06 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Dec 2008 12:31:24 GMT",
"version": "v2"
}
] | 2010-08-17 | [
[
"Holme",
"Petter",
""
],
[
"Huss",
"Mikael",
""
]
] | One approach to studying the system-wide organization of biochemistry is to use statistical graph theory. Even in such a heavily simplified method, which disregards most of the dynamic aspects of biochemistry, one is faced with fundamental questions, such as how the chemical reaction systems should be reduced to a graph retaining as much functional information as possible from the original reaction system. In such graph representations, should the edges go between substrates and products, or substrates and substrates, or both? Should vertices represent substances or reactions? Different definitions encode different information about the reaction system. In this paper we evaluate four different graph representations of metabolism, applied to data from different organisms and databases. The graph representations are evaluated by comparing the overlap between clusters (network modules) and annotated functions, and also by comparing the set of identified currency metabolites with those that other authors have identified using qualitative biological arguments. We find that a "substance network," where all metabolites participating in a reaction are connected, is relatively better than others, evaluated both with respect to the functional overlap between modules and functions and to the number and identity of identified currency metabolites. |
2009.01354 | M. Gabriela M. Gomes | M. Gabriela M. Gomes, Nicholas A. Feasey, Marcelo U. Ferreira, E.
James LaCourse, Kate E. Langwig, Lisa Reimer, Beate Ringwald, Jamie Rylance,
J. Russell Stothard, Miriam Taegtmeyer, Dianne J. Terlouw, Rachel Tolhurst,
Tom Wingfield, Stephen B. Gordon | Unfolding selection to infer individual risk heterogeneity for
optimising disease forecasts and policy development | 10 pages, 3 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mathematical models are increasingly adopted for setting targets for disease
prevention and control. As model-informed policies are implemented, however,
the inaccuracies of some forecasts become apparent, for example overprediction
of infection burdens and overestimation of intervention impacts. Here, we
attribute these discrepancies to methodological limitations in capturing the
heterogeneities of real-world systems. The mechanisms underpinning single
factors for infection and their interactions determine individual propensities
to acquire disease. These are potentially so numerous that attaining a full
mechanistic description may be unfeasible. To contribute constructively to the
development of health policies, model developers either leave factors out
(reductionism) or adopt a broader but coarse description (holism). In our view,
predictive capacity requires holistic descriptions of heterogeneity which are
currently underutilised in infectious disease epidemiology but common in other
disciplines.
| [
{
"created": "Wed, 2 Sep 2020 21:35:09 GMT",
"version": "v1"
},
{
"created": "Mon, 16 May 2022 17:12:53 GMT",
"version": "v2"
}
] | 2022-05-17 | [
[
"Gomes",
"M. Gabriela M.",
""
],
[
"Feasey",
"Nicholas A.",
""
],
[
"Ferreira",
"Marcelo U.",
""
],
[
"LaCourse",
"E. James",
""
],
[
"Langwig",
"Kate E.",
""
],
[
"Reimer",
"Lisa",
""
],
[
"Ringwald",
"Beate",... | Mathematical models are increasingly adopted for setting targets for disease prevention and control. As model-informed policies are implemented, however, the inaccuracies of some forecasts become apparent, for example overprediction of infection burdens and overestimation of intervention impacts. Here, we attribute these discrepancies to methodological limitations in capturing the heterogeneities of real-world systems. The mechanisms underpinning single factors for infection and their interactions determine individual propensities to acquire disease. These are potentially so numerous that attaining a full mechanistic description may be unfeasible. To contribute constructively to the development of health policies, model developers either leave factors out (reductionism) or adopt a broader but coarse description (holism). In our view, predictive capacity requires holistic descriptions of heterogeneity which are currently underutilised in infectious disease epidemiology but common in other disciplines. |
1812.01112 | William H. Press | William H. Press and John A. Hawkins | An Indel-Resistant Error-Correcting Code for DNA-Based Information
Storage | 24 pages, 8 figures, 22 references | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic DNA can in principle be used for the archival storage of arbitrary
data. Because errors are introduced during DNA synthesis, storage, and
sequencing, an error-correcting code (ECC) is necessary for error-free recovery
of the data. Previous work has utilized ECCs that can correct substitution
errors, but not insertion or deletion errors (indels), instead relying on
sequencing depth and multiple alignment to detect and correct indels -- in
effect an inefficient multiple-repetition code. This paper describes an ECC,
termed "HEDGES", that corrects simultaneously for substitutions, insertions,
and deletions in a single read. Varying code rates allow for correction of up
to ~10% nucleotide errors and achieve 50% or better of the estimated Shannon
limit.
| [
{
"created": "Mon, 3 Dec 2018 22:21:21 GMT",
"version": "v1"
}
] | 2018-12-05 | [
[
"Press",
"William H.",
""
],
[
"Hawkins",
"John A.",
""
]
] | Synthetic DNA can in principle be used for the archival storage of arbitrary data. Because errors are introduced during DNA synthesis, storage, and sequencing, an error-correcting code (ECC) is necessary for error-free recovery of the data. Previous work has utilized ECCs that can correct substitution errors, but not insertion or deletion errors (indels), instead relying on sequencing depth and multiple alignment to detect and correct indels -- in effect an inefficient multiple-repetition code. This paper describes an ECC, termed "HEDGES", that corrects simultaneously for substitutions, insertions, and deletions in a single read. Varying code rates allow for correction of up to ~10% nucleotide errors and achieve 50% or better of the estimated Shannon limit. |
0812.4191 | Tobias Reichenbach | Maximilian Berr, Tobias Reichenbach, Martin Schottenloher, and Erwin
Frey | Zero-one survival behavior of cyclically competing species | 4 pages, 3 figures | Phys. Rev. Lett. 102, 048102 (2009) | 10.1103/PhysRevLett.102.048102 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coexistence of competing species is, due to unavoidable fluctuations, always
transient. In this Letter, we investigate the ultimate survival probabilities
characterizing different species in cyclic competition. We show that they often
obey a surprisingly simple, though non-trivial behavior. Within a model where
coexistence is neutrally stable, we demonstrate a robust zero-one law: When the
interactions between the three species are (generically) asymmetric, the
`weakest' species survives with a probability that tends to one for large
population sizes, while the other two are guaranteed to go extinct. We rationalize
our findings from stochastic simulations by an analytic approach.
| [
{
"created": "Mon, 22 Dec 2008 14:13:19 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jan 2009 15:21:40 GMT",
"version": "v2"
}
] | 2009-01-30 | [
[
"Berr",
"Maximilian",
""
],
[
"Reichenbach",
"Tobias",
""
],
[
"Schottenloher",
"Martin",
""
],
[
"Frey",
"Erwin",
""
]
] | Coexistence of competing species is, due to unavoidable fluctuations, always transient. In this Letter, we investigate the ultimate survival probabilities characterizing different species in cyclic competition. We show that they often obey a surprisingly simple, though non-trivial behavior. Within a model where coexistence is neutrally stable, we demonstrate a robust zero-one law: When the interactions between the three species are (generically) asymmetric, the `weakest' species survives with a probability that tends to one for large population sizes, while the other two are guaranteed to go extinct. We rationalize our findings from stochastic simulations by an analytic approach. |
2405.06708 | Suyuan Zhao | Suyuan Zhao, Jiahuan Zhang, Yushuai Wu, Yizhen Luo, Zaiqing Nie | LangCell: Language-Cell Pre-training for Cell Identity Understanding | Accepted by ICML 2024, code released | null | null | null | q-bio.GN cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Cell identity encompasses various semantic aspects of a cell, including cell
type, pathway information, disease information, and more, which are essential
for biologists to gain insights into its biological characteristics.
Understanding cell identity from the transcriptomic data, such as annotating
cell types, has become an important task in bioinformatics. As these semantic
aspects are determined by human experts, it is impossible for AI models to
effectively carry out cell identity understanding tasks without the supervision
signals provided by single-cell and label pairs. The single-cell pre-trained
language models (PLMs) currently used for this task are trained only on a
single modality, transcriptomics data, and thus lack an understanding of cell
identity knowledge. As a result, they have to be fine-tuned for downstream tasks and
struggle when lacking labeled data with the desired semantic labels. To address
this issue, we propose an innovative solution by constructing a unified
representation of single-cell data and natural language during the pre-training
phase, allowing the model to directly incorporate insights related to cell
identity. More specifically, we introduce $\textbf{LangCell}$, the first
$\textbf{Lang}$uage-$\textbf{Cell}$ pre-training framework. LangCell utilizes
texts enriched with cell identity information to gain a profound comprehension
of cross-modal knowledge. Results from experiments conducted on different
benchmarks show that LangCell is the only single-cell PLM that can work
effectively in zero-shot cell identity understanding scenarios, and also
significantly outperforms existing models in few-shot and fine-tuning cell
identity understanding scenarios.
| [
{
"created": "Thu, 9 May 2024 10:04:05 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 06:43:18 GMT",
"version": "v2"
},
{
"created": "Wed, 29 May 2024 02:54:47 GMT",
"version": "v3"
},
{
"created": "Fri, 7 Jun 2024 02:22:54 GMT",
"version": "v4"
},
{
"cre... | 2024-06-12 | [
[
"Zhao",
"Suyuan",
""
],
[
"Zhang",
"Jiahuan",
""
],
[
"Wu",
"Yushuai",
""
],
[
"Luo",
"Yizhen",
""
],
[
"Nie",
"Zaiqing",
""
]
] | Cell identity encompasses various semantic aspects of a cell, including cell type, pathway information, disease information, and more, which are essential for biologists to gain insights into its biological characteristics. Understanding cell identity from the transcriptomic data, such as annotating cell types, has become an important task in bioinformatics. As these semantic aspects are determined by human experts, it is impossible for AI models to effectively carry out cell identity understanding tasks without the supervision signals provided by single-cell and label pairs. The single-cell pre-trained language models (PLMs) currently used for this task are trained only on a single modality, transcriptomics data, and thus lack an understanding of cell identity knowledge. As a result, they have to be fine-tuned for downstream tasks and struggle when lacking labeled data with the desired semantic labels. To address this issue, we propose an innovative solution by constructing a unified representation of single-cell data and natural language during the pre-training phase, allowing the model to directly incorporate insights related to cell identity. More specifically, we introduce $\textbf{LangCell}$, the first $\textbf{Lang}$uage-$\textbf{Cell}$ pre-training framework. LangCell utilizes texts enriched with cell identity information to gain a profound comprehension of cross-modal knowledge. Results from experiments conducted on different benchmarks show that LangCell is the only single-cell PLM that can work effectively in zero-shot cell identity understanding scenarios, and also significantly outperforms existing models in few-shot and fine-tuning cell identity understanding scenarios. |
2010.16253 | Faezeh Movahedi | Faezeh Movahedi, Rema Padman, James F. Antaki | Limitations of ROC on Imbalanced Data: Evaluation of LVAD Mortality Risk
Scores | Submitted to JACC Heart Failure | The Journal of Thoracic and Cardiovascular Surgery. 2021 Jul 30 | 10.1016/j.jtcvs.2021.07.041 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: This study illustrates the ambiguity of ROC in evaluating two
classifiers of 90-day LVAD mortality. This paper also introduces the precision
recall curve (PRC) as a supplemental metric that is more representative of LVAD
classifiers' performance in predicting the minority class.
Background: In the LVAD domain, the receiver operating characteristic (ROC)
is a commonly applied metric of performance of classifiers. However, ROC can
provide a distorted view of classifiers' ability to predict short-term mortality
due to the overwhelmingly greater proportion of patients who survive, i.e.
imbalanced data.
Methods: This study compared the ROC and PRC for the outcome of two
classifiers for 90-day LVAD mortality for 800 patients (test group) recorded in
INTERMACS who received a continuous-flow LVAD between 2006 and 2016 (mean age
of 59 years; 146 females vs. 654 males) in which the mortality rate is only 8% at
90 days (imbalanced data). The two classifiers were the HeartMate Risk Score (HMRS)
and a Random Forest (RF).
Results: The ROC indicates fairly good performance of the RF and HMRS classifiers
with Areas Under the Curve (AUC) of 0.77 vs. 0.63, respectively. This is in
contrast with their PRC, with AUC of 0.43 vs. 0.16 for RF and HMRS,
respectively. The PRC for HMRS showed the precision rapidly dropped to only 10%
with slightly increasing sensitivity.
Conclusion: The ROC can portray an overly-optimistic performance of a
classifier or risk score when applied to imbalanced data. The PRC provides
better insight about the performance of a classifier by focusing on the
minority class.
| [
{
"created": "Thu, 29 Oct 2020 11:10:15 GMT",
"version": "v1"
}
] | 2022-04-21 | [
[
"Movahedi",
"Faezeh",
""
],
[
"Padman",
"Rema",
""
],
[
"Antaki",
"James F.",
""
]
] | Objective: This study illustrates the ambiguity of ROC in evaluating two classifiers of 90-day LVAD mortality. This paper also introduces the precision recall curve (PRC) as a supplemental metric that is more representative of LVAD classifiers' performance in predicting the minority class. Background: In the LVAD domain, the receiver operating characteristic (ROC) is a commonly applied metric of performance of classifiers. However, ROC can provide a distorted view of classifiers' ability to predict short-term mortality due to the overwhelmingly greater proportion of patients who survive, i.e. imbalanced data. Methods: This study compared the ROC and PRC for the outcome of two classifiers for 90-day LVAD mortality for 800 patients (test group) recorded in INTERMACS who received a continuous-flow LVAD between 2006 and 2016 (mean age of 59 years; 146 females vs. 654 males) in which the mortality rate is only 8% at 90 days (imbalanced data). The two classifiers were the HeartMate Risk Score (HMRS) and a Random Forest (RF). Results: The ROC indicates fairly good performance of the RF and HMRS classifiers with Areas Under the Curve (AUC) of 0.77 vs. 0.63, respectively. This is in contrast with their PRC, with AUC of 0.43 vs. 0.16 for RF and HMRS, respectively. The PRC for HMRS showed the precision rapidly dropped to only 10% with slightly increasing sensitivity. Conclusion: The ROC can portray an overly-optimistic performance of a classifier or risk score when applied to imbalanced data. The PRC provides better insight about the performance of a classifier by focusing on the minority class. |
1310.5166 | Darren Cusanovich | Darren A. Cusanovich, Bryan Pavlovic, Jonathan K. Pritchard, Yoav
Gilad | The Functional Consequences of Variation in Transcription Factor Binding | 30 pages, 6 figures (7 supplemental figures and 6 supplemental tables
available upon request to cusanovich@uchicago.edu). Submitted to PLoS
Genetics | PLoS Genet 10(3) (2014) e1004226 | 10.1371/journal.pgen.1004226 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One goal of human genetics is to understand how the information for precise
and dynamic gene expression programs is encoded in the genome. The interactions
of transcription factors (TFs) with DNA regulatory elements clearly play an
important role in determining gene expression outputs, yet the regulatory logic
underlying functional transcription factor binding is poorly understood. Many
studies have focused on characterizing the genomic locations of TF binding, yet
it is unclear to what extent TF binding at any specific locus has functional
consequences with respect to gene expression output. To evaluate the context of
functional TF binding we knocked down 59 TFs and chromatin modifiers in one
HapMap lymphoblastoid cell line. We then identified genes whose expression was
affected by the knockdowns. We intersected the gene expression data with
transcription factor binding data (based on ChIP-seq and DNase-seq) within 10
kb of the transcription start sites of expressed genes. This combination of
data allowed us to infer functional TF binding. On average, 14.7% of genes
bound by a factor were differentially expressed following the knockdown of that
factor, suggesting that most interactions between TF and chromatin do not
result in measurable changes in gene expression levels of putative target
genes. We found that functional TF binding is enriched in regulatory elements
that harbor a large number of TF binding sites, at sites with predicted higher
binding affinity, and at sites that are enriched in genomic regions annotated
as active enhancers.
| [
{
"created": "Fri, 18 Oct 2013 21:19:26 GMT",
"version": "v1"
}
] | 2014-04-15 | [
[
"Cusanovich",
"Darren A.",
""
],
[
"Pavlovic",
"Bryan",
""
],
[
"Pritchard",
"Jonathan K.",
""
],
[
"Gilad",
"Yoav",
""
]
] | One goal of human genetics is to understand how the information for precise and dynamic gene expression programs is encoded in the genome. The interactions of transcription factors (TFs) with DNA regulatory elements clearly play an important role in determining gene expression outputs, yet the regulatory logic underlying functional transcription factor binding is poorly understood. Many studies have focused on characterizing the genomic locations of TF binding, yet it is unclear to what extent TF binding at any specific locus has functional consequences with respect to gene expression output. To evaluate the context of functional TF binding we knocked down 59 TFs and chromatin modifiers in one HapMap lymphoblastoid cell line. We then identified genes whose expression was affected by the knockdowns. We intersected the gene expression data with transcription factor binding data (based on ChIP-seq and DNase-seq) within 10 kb of the transcription start sites of expressed genes. This combination of data allowed us to infer functional TF binding. On average, 14.7% of genes bound by a factor were differentially expressed following the knockdown of that factor, suggesting that most interactions between TF and chromatin do not result in measurable changes in gene expression levels of putative target genes. We found that functional TF binding is enriched in regulatory elements that harbor a large number of TF binding sites, at sites with predicted higher binding affinity, and at sites that are enriched in genomic regions annotated as active enhancers. |
1503.08224 | Hsin-Ta Wu | Mark D.M. Leiserson, Hsin-Ta Wu, Fabio Vandin, Benjamin J. Raphael | CoMEt: A Statistical Approach to Identify Combinations of Mutually
Exclusive Alterations in Cancer | Accepted to RECOMB 2015 | null | null | null | q-bio.QM stat.CO | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Cancer is a heterogeneous disease with different combinations of genetic and
epigenetic alterations driving the development of cancer in different
individuals. While these alterations are believed to converge on genes in key
cellular signaling and regulatory pathways, our knowledge of these pathways
remains incomplete, making it difficult to identify driver alterations by their
recurrence across genes or known pathways. We introduce Combinations of
Mutually Exclusive Alterations (CoMEt), an algorithm to identify combinations
of alterations de novo, without any prior biological knowledge (e.g. pathways
or protein interactions). CoMEt searches for combinations of mutations that
exhibit mutual exclusivity, a pattern expected for mutations in pathways. CoMEt
has several important features that distinguish it from existing approaches to
analyze mutual exclusivity among alterations. These include: an exact
statistical test for mutual exclusivity that is more sensitive in detecting
combinations containing rare alterations; simultaneous identification of
collections of one or more combinations of mutually exclusive alterations;
simultaneous analysis of subtype-specific mutations; and summarization over an
ensemble of collections of mutually exclusive alterations. These features
enable CoMEt to robustly identify alterations affecting multiple pathways, or
hallmarks of cancer. We show that CoMEt outperforms existing approaches on
simulated and real data. Application of CoMEt to hundreds of samples from four
different cancer types from TCGA reveals multiple mutually exclusive sets
within each cancer type. Many of these overlap known pathways, but others
reveal novel putative cancer genes. *Equal contribution.
| [
{
"created": "Fri, 27 Mar 2015 20:39:32 GMT",
"version": "v1"
}
] | 2015-03-31 | [
[
"Leiserson",
"Mark D. M.",
""
],
[
"Wu",
"Hsin-Ta",
""
],
[
"Vandin",
"Fabio",
""
],
[
"Raphael",
"Benjamin J.",
""
]
] | Cancer is a heterogeneous disease with different combinations of genetic and epigenetic alterations driving the development of cancer in different individuals. While these alterations are believed to converge on genes in key cellular signaling and regulatory pathways, our knowledge of these pathways remains incomplete, making it difficult to identify driver alterations by their recurrence across genes or known pathways. We introduce Combinations of Mutually Exclusive Alterations (CoMEt), an algorithm to identify combinations of alterations de novo, without any prior biological knowledge (e.g. pathways or protein interactions). CoMEt searches for combinations of mutations that exhibit mutual exclusivity, a pattern expected for mutations in pathways. CoMEt has several important features that distinguish it from existing approaches to analyze mutual exclusivity among alterations. These include: an exact statistical test for mutual exclusivity that is more sensitive in detecting combinations containing rare alterations; simultaneous identification of collections of one or more combinations of mutually exclusive alterations; simultaneous analysis of subtype-specific mutations; and summarization over an ensemble of collections of mutually exclusive alterations. These features enable CoMEt to robustly identify alterations affecting multiple pathways, or hallmarks of cancer. We show that CoMEt outperforms existing approaches on simulated and real data. Application of CoMEt to hundreds of samples from four different cancer types from TCGA reveals multiple mutually exclusive sets within each cancer type. Many of these overlap known pathways, but others reveal novel putative cancer genes. *Equal contribution. |
2311.10869 | James Hazelden | James Hazelden, Yuhan Helena Liu, Eli Shlizerman, Eric Shea-Brown | Evolutionary algorithms as an alternative to backpropagation for
supervised training of Biophysical Neural Networks and Neural ODEs | null | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training networks consisting of biophysically accurate neuron models could
allow for new insights into how brain circuits can organize and solve tasks. We
begin by analyzing the extent to which the central algorithm for neural network
learning -- stochastic gradient descent through backpropagation (BP) -- can be
used to train such networks. We find that properties of biophysically based
neural network models needed for accurate modelling such as stiffness, high
nonlinearity and long evaluation timeframes relative to spike times make BP
unstable and divergent in a variety of cases. To address these instabilities
and inspired by recent work, we investigate the use of "gradient-estimating"
evolutionary algorithms (EAs) for training biophysically based neural networks.
We find that EAs have several advantages making them desirable over direct BP,
including being forward-pass only, robust to noisy and rigid losses, allowing
for discrete loss formulations, and potentially facilitating a more global
exploration of parameters. We apply our method to train a recurrent network of
Morris-Lecar neuron models on a stimulus integration and working memory task,
and show how it can succeed in cases where direct BP is inapplicable. To expand
on the viability of EAs in general, we apply them to a general neural ODE
problem and a stiff neural ODE benchmark and find again that EAs can
out-perform direct BP here, especially for the over-parameterized regime. Our
findings suggest that biophysical neurons could provide useful benchmarks for
testing the limits of BP-adjacent methods, and demonstrate the viability of EAs
for training networks with complex components.
| [
{
"created": "Fri, 17 Nov 2023 20:59:57 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Nov 2023 02:49:07 GMT",
"version": "v2"
}
] | 2023-11-22 | [
[
"Hazelden",
"James",
""
],
[
"Liu",
"Yuhan Helena",
""
],
[
"Shlizerman",
"Eli",
""
],
[
"Shea-Brown",
"Eric",
""
]
] | Training networks consisting of biophysically accurate neuron models could allow for new insights into how brain circuits can organize and solve tasks. We begin by analyzing the extent to which the central algorithm for neural network learning -- stochastic gradient descent through backpropagation (BP) -- can be used to train such networks. We find that properties of biophysically based neural network models needed for accurate modelling such as stiffness, high nonlinearity and long evaluation timeframes relative to spike times make BP unstable and divergent in a variety of cases. To address these instabilities and inspired by recent work, we investigate the use of "gradient-estimating" evolutionary algorithms (EAs) for training biophysically based neural networks. We find that EAs have several advantages making them desirable over direct BP, including being forward-pass only, robust to noisy and rigid losses, allowing for discrete loss formulations, and potentially facilitating a more global exploration of parameters. We apply our method to train a recurrent network of Morris-Lecar neuron models on a stimulus integration and working memory task, and show how it can succeed in cases where direct BP is inapplicable. To expand on the viability of EAs in general, we apply them to a general neural ODE problem and a stiff neural ODE benchmark and find again that EAs can out-perform direct BP here, especially for the over-parameterized regime. Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP-adjacent methods, and demonstrate the viability of EAs for training networks with complex components. |
2004.04463 | Karl Friston | Karl J. Friston, Thomas Parr, Peter Zeidman, Adeel Razi, Guillaume
Flandin, Jean Daunizeau, Oliver J. Hulme, Alexander J. Billig, Vladimir
Litvak, Rosalyn J. Moran, Cathy J. Price and Christian Lambert | Dynamic causal modelling of COVID-19 | Technical report: 40 pages, 13 figures and 2 tables | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | This technical report describes a dynamic causal model of the spread of
coronavirus through a population. The model is based upon ensemble or
population dynamics that generate outcomes, like new cases and deaths over
time. The purpose of this model is to quantify the uncertainty that attends
predictions of relevant outcomes. By assuming suitable conditional
dependencies, one can model the effects of interventions (e.g., social
distancing) and differences among populations (e.g., herd immunity) to predict
what might happen in different circumstances. Technically, this model leverages
state-of-the-art variational (Bayesian) model inversion and comparison
procedures, originally developed to characterise the responses of neuronal
ensembles to perturbations. Here, this modelling is applied to epidemiological
populations to illustrate the kind of inferences that are supported and how the
model per se can be optimised given timeseries data. Although the purpose of
this paper is to describe a modelling protocol, the results illustrate some
interesting perspectives on the current pandemic; for example, the nonlinear
effects of herd immunity that speak to a self-organised mitigation process.
| [
{
"created": "Thu, 9 Apr 2020 10:13:44 GMT",
"version": "v1"
}
] | 2020-04-10 | [
[
"Friston",
"Karl J.",
""
],
[
"Parr",
"Thomas",
""
],
[
"Zeidman",
"Peter",
""
],
[
"Razi",
"Adeel",
""
],
[
"Flandin",
"Guillaume",
""
],
[
"Daunizeau",
"Jean",
""
],
[
"Hulme",
"Oliver J.",
""
],
[
... | This technical report describes a dynamic causal model of the spread of coronavirus through a population. The model is based upon ensemble or population dynamics that generate outcomes, like new cases and deaths over time. The purpose of this model is to quantify the uncertainty that attends predictions of relevant outcomes. By assuming suitable conditional dependencies, one can model the effects of interventions (e.g., social distancing) and differences among populations (e.g., herd immunity) to predict what might happen in different circumstances. Technically, this model leverages state-of-the-art variational (Bayesian) model inversion and comparison procedures, originally developed to characterise the responses of neuronal ensembles to perturbations. Here, this modelling is applied to epidemiological populations to illustrate the kind of inferences that are supported and how the model per se can be optimised given timeseries data. Although the purpose of this paper is to describe a modelling protocol, the results illustrate some interesting perspectives on the current pandemic; for example, the nonlinear effects of herd immunity that speak to a self-organised mitigation process. |
0912.2326 | Junhyong Kim | Sheng Guo (1,2), Li-San Wang (1,3), Junhyong Kim (1,2, 4) ((1) Penn
Center for Bioinformatics, (2) Genomics and Computational Biology Graduate
Group, (3) Institute on Aging, (4) Penn Genome Frontiers Institute) | Large-scale simulation of RNA macroevolution by an energy-dependent
fitness model | null | null | null | null | q-bio.PE q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulated nucleotide sequences are widely used in theoretical and empirical
molecular evolution studies. Conventional simulators generally use a
fixed-parameter, time-homogeneous Markov model for sequence evolution. In this work,
we use the folding free energy of the secondary structure of an RNA as a proxy
for its phenotypic fitness, and simulate RNA macroevolution by a
mutation-selection population genetics model. Because the two-step process is
conditioned on an RNA and its mutant ensemble, we no longer have a global
substitution matrix, nor do we explicitly assume any for this inhomogeneous
stochastic process. After introducing the base model of RNA evolution, we
outline the heuristic implementation algorithm and several model improvements.
We then discuss the calibration of the model parameters and demonstrate that in
phylogeny reconstruction with both the parsimony method and the likelihood
method, the sequences generated by our simulator, rnasim, have greater
statistical complexity than those by two standard simulators, ROSE and Seq-Gen,
and are close to empirical sequences.
| [
{
"created": "Fri, 11 Dec 2009 19:47:19 GMT",
"version": "v1"
}
] | 2009-12-14 | [
[
"Guo",
"Sheng",
""
],
[
"Wang",
"Li-San",
""
],
[
"Kim",
"Junhyong",
""
]
] | Simulated nucleotide sequences are widely used in theoretical and empirical molecular evolution studies. Conventional simulators generally use a fixed-parameter, time-homogeneous Markov model for sequence evolution. In this work, we use the folding free energy of the secondary structure of an RNA as a proxy for its phenotypic fitness, and simulate RNA macroevolution by a mutation-selection population genetics model. Because the two-step process is conditioned on an RNA and its mutant ensemble, we no longer have a global substitution matrix, nor do we explicitly assume any for this inhomogeneous stochastic process. After introducing the base model of RNA evolution, we outline the heuristic implementation algorithm and several model improvements. We then discuss the calibration of the model parameters and demonstrate that in phylogeny reconstruction with both the parsimony method and the likelihood method, the sequences generated by our simulator, rnasim, have greater statistical complexity than those by two standard simulators, ROSE and Seq-Gen, and are close to empirical sequences. |
0810.3488 | Emilio Hernandez-Garcia | Alejandro F. Rozenfeld (1), Sophie Arnaud-Haond (2,4), Emilio
Hernandez-Garcia (3), Victor M. Eguiluz (3), Ester A. Serrao (2) and Carlos
M. Duarte (1) ((1) IMEDEA, Mallorca, Spain. (2) CCMAR, Faro, Portugal. (3)
IFISC, Mallorca, Spain. (4) IFREMER, Brest, France) | Network analysis identifies weak and strong links in a metapopulation
system | 26 pages. To appear in PNAS. An additional movie can be found at
http://ifisc.uib-csic.es/publications/publication-detail.php?indice=1863 | Proceedings of the National Academy of Sciences of the USA (PNAS)
105, 18824-18829 (2008) | 10.1073/pnas.0805571105 | null | q-bio.PE cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of key populations shaping the structure and connectivity
of metapopulation systems is a major challenge in population ecology. The use
of molecular markers in the theoretical framework of population genetics has
allowed great advances in this field, but the prime question of quantifying the
role of each population in the system remains unresolved. Furthermore, the use
and interpretation of classical methods are still bounded by the need for a
priori information and underlying assumptions that are seldom respected in
natural systems. Network theory was applied to map the genetic structure in a
metapopulation system using microsatellite data from populations of a
threatened seagrass, Posidonia oceanica, across its whole geographical range.
The network approach, free from a priori assumptions and of usual underlying
hypothesis required for the interpretation of classical analysis, allows both
the straightforward characterization of hierarchical population structure and
the detection of populations acting as hubs critical for relaying gene flow or
sustaining the metapopulation system. This development opens major perspectives
in ecology and evolution in general, particularly in areas such as conservation
biology and epidemiology, where targeting specific populations is crucial.
| [
{
"created": "Mon, 20 Oct 2008 08:19:12 GMT",
"version": "v1"
}
] | 2008-11-21 | [
[
"Rozenfeld",
"Alejandro F.",
""
],
[
"Arnaud-Haond",
"Sophie",
""
],
[
"Hernandez-Garcia",
"Emilio",
""
],
[
"Eguiluz",
"Victor M.",
""
],
[
"Serrao",
"Ester A.",
""
],
[
"Duarte",
"Carlos M.",
""
]
] | The identification of key populations shaping the structure and connectivity of metapopulation systems is a major challenge in population ecology. The use of molecular markers in the theoretical framework of population genetics has allowed great advances in this field, but the prime question of quantifying the role of each population in the system remains unresolved. Furthermore, the use and interpretation of classical methods are still bounded by the need for a priori information and underlying assumptions that are seldom respected in natural systems. Network theory was applied to map the genetic structure in a metapopulation system using microsatellite data from populations of a threatened seagrass, Posidonia oceanica, across its whole geographical range. The network approach, free from a priori assumptions and of usual underlying hypothesis required for the interpretation of classical analysis, allows both the straightforward characterization of hierarchical population structure and the detection of populations acting as hubs critical for relaying gene flow or sustaining the metapopulation system. This development opens major perspectives in ecology and evolution in general, particularly in areas such as conservation biology and epidemiology, where targeting specific populations is crucial. |
1403.1228 | Bhaskar DasGupta | Reka Albert, Bhaskar DasGupta and Nasim Mobasheri | Topological implications of negative curvature for biological and social
networks | Physical Review E, 2014 | Physical Review E, 89 (3), 032811, 2014 | 10.1103/PhysRevE.89.032811 | null | q-bio.MN cs.DM cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network measures that reflect the most salient properties of complex
large-scale networks are in high demand in the network research community. In
this paper we adapt a combinatorial measure of negative curvature (also called
hyperbolicity) to parameterized finite networks, and show that a variety of
biological and social networks are hyperbolic. This hyperbolicity property has
strong implications on the higher-order connectivity and other topological
properties of these networks. Specifically, we derive and prove bounds on the
distance among shortest or approximately shortest paths in hyperbolic networks.
We describe two implications of these bounds to cross-talk in biological
networks, and to the existence of central, influential neighborhoods in both
biological and social networks.
| [
{
"created": "Wed, 5 Mar 2014 19:05:36 GMT",
"version": "v1"
}
] | 2014-03-25 | [
[
"Albert",
"Reka",
""
],
[
"DasGupta",
"Bhaskar",
""
],
[
"Mobasheri",
"Nasim",
""
]
] | Network measures that reflect the most salient properties of complex large-scale networks are in high demand in the network research community. In this paper we adapt a combinatorial measure of negative curvature (also called hyperbolicity) to parameterized finite networks, and show that a variety of biological and social networks are hyperbolic. This hyperbolicity property has strong implications on the higher-order connectivity and other topological properties of these networks. Specifically, we derive and prove bounds on the distance among shortest or approximately shortest paths in hyperbolic networks. We describe two implications of these bounds to cross-talk in biological networks, and to the existence of central, influential neighborhoods in both biological and social networks. |
2308.08662 | Steven Rossi | Steven P. Rossi, Yanjun Wang, Cornelia E. den Heyer, Hugues P.
Beno\^it | Evaluating the potential impacts of grey seal predation and fishery
bycatch/discards on cod productivity on the Western Scotian Shelf and in the
Bay of Fundy | 27 pages, 4 tables, 10 figures, 2 appendices | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recovery of many groundfish stocks throughout the Northwest Atlantic has
been impeded by elevated natural (i.e., non-fishing) mortality (M) among
older/larger individuals. The causes of elevated mortality are not well known,
though predation by rapidly growing grey seal herds and unreported fishing are
thought to be possible drivers of mortality for Atlantic Cod (Gadus morhua) on
the Western Scotian Shelf and in the Bay of Fundy (known as "4X5Y cod") and in
nearby ecosystems. We developed a statistical catch-at-age model for 4X5Y cod
that accounted for both grey seal predation and estimated bycatch/discards to
evaluate the degree to which either of these factors may influence cod
mortality. The model was fit over a range of predation and discarding scenarios
to account for uncertainties and a lack of data for these processes. We found
that most cod M remained unaccounted for unless cod comprised a large
proportion (>0.45) of the grey seal diet by weight. If the reported bycatch
estimates are taken as accurate, then the magnitude of cod discards from
non-directed fisheries was minor, though these estimates are highly uncertain.
| [
{
"created": "Wed, 16 Aug 2023 20:26:13 GMT",
"version": "v1"
}
] | 2023-08-21 | [
[
"Rossi",
"Steven P.",
""
],
[
"Wang",
"Yanjun",
""
],
[
"Heyer",
"Cornelia E. den",
""
],
[
"Benoît",
"Hugues P.",
""
]
] | The recovery of many groundfish stocks throughout the Northwest Atlantic has been impeded by elevated natural (i.e., non-fishing) mortality (M) among older/larger individuals. The causes of elevated mortality are not well known, though predation by rapidly growing grey seal herds and unreported fishing are thought to be possible drivers of mortality for Atlantic Cod (Gadus morhua) on the Western Scotian Shelf and in the Bay of Fundy (known as "4X5Y cod") and in nearby ecosystems. We developed a statistical catch-at-age model for 4X5Y cod that accounted for both grey seal predation and estimated bycatch/discards to evaluate the degree to which either of these factors may influence cod mortality. The model was fit over a range of predation and discarding scenarios to account for uncertainties and a lack of data for these processes. We found that most cod M remained unaccounted for unless cod comprised a large proportion (>0.45) of the grey seal diet by weight. If the reported bycatch estimates are taken as accurate, then the magnitude of cod discards from non-directed fisheries was minor, though these estimates are highly uncertain. |
q-bio/0311030 | Andras Lorincz | A. Lorincz | Comparation based bottom-up and top-down filtering model of the
hippocampus and its environment | 24 pages 7 figures | null | null | null | q-bio.NC | null | Two rate code models -- a reconstruction network model and a control model --
of the hippocampal-entorhinal loop are merged. The hippocampal-entorhinal loop
plays a double role in the unified model, it is part of a reconstruction
network and a controller, too. This double role turns the bottom-up information
flow into top-down control like signals. The role of bottom-up filtering is
information maximization, noise filtering, temporal integration and prediction,
whereas the role of top-down filtering is emphasizing, i.e., highlighting or
`paving of the way' as well as context based pattern completion. In the joined
model, the control task is performed by cortical areas, whereas reconstruction
networks can be found between cortical areas. While the controller is highly
non-linear, the reconstruction network is an almost linear architecture, which
is optimized for noise estimation and noise filtering. A conjecture of the
reconstruction network model -- that the long-term memory of the visual stream
is the linear feedback connections between neocortical areas -- is reinforced
by the joined model. Falsifying predictions are presented; some of them have
recent experimental support. Connections to attention and to awareness are
made.
| [
{
"created": "Fri, 21 Nov 2003 14:45:51 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Lorincz",
"A.",
""
]
] | Two rate code models -- a reconstruction network model and a control model -- of the hippocampal-entorhinal loop are merged. The hippocampal-entorhinal loop plays a double role in the unified model, it is part of a reconstruction network and a controller, too. This double role turns the bottom-up information flow into top-down control like signals. The role of bottom-up filtering is information maximization, noise filtering, temporal integration and prediction, whereas the role of top-down filtering is emphasizing, i.e., highlighting or `paving of the way' as well as context based pattern completion. In the joined model, the control task is performed by cortical areas, whereas reconstruction networks can be found between cortical areas. While the controller is highly non-linear, the reconstruction network is an almost linear architecture, which is optimized for noise estimation and noise filtering. A conjecture of the reconstruction network model -- that the long-term memory of the visual stream is the linear feedback connections between neocortical areas -- is reinforced by the joined model. Falsifying predictions are presented; some of them have recent experimental support. Connections to attention and to awareness are made. |
1110.2982 | Troy Day | Troy Day | Computability, G\"odel's Incompleteness Theorem, and an inherent limit
on the predictability of evolution | Journal of the Royal Society, Interface 2011 | null | 10.1098/rsif.2011.0479 | null | q-bio.PE cs.LO math.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process of evolutionary diversification unfolds in a vast genotypic space
of potential outcomes. During the past century there have been remarkable
advances in the development of theory for this diversification, and the
theory's success rests, in part, on the scope of its applicability. A great
deal of this theory focuses on a relatively small subset of the space of
potential genotypes, chosen largely based on historical or contemporary
patterns, and then predicts the evolutionary dynamics within this pre-defined
set. To what extent can such an approach be pushed to a broader perspective
that accounts for the potential open-endedness of evolutionary diversification?
There have been a number of significant theoretical developments along these
lines but the question of how far such theory can be pushed has not been
addressed. Here a theorem is proven demonstrating that, because of the digital
nature of inheritance, there are inherent limits on the kinds of questions that
can be answered using such an approach. In particular, even in extremely simple
evolutionary systems a complete theory accounting for the potential
open-endedness of evolution is unattainable unless evolution is progressive.
The theorem is closely related to G\"odel's Incompleteness Theorem and to the
Halting Problem from computability theory.
| [
{
"created": "Thu, 13 Oct 2011 15:43:28 GMT",
"version": "v1"
}
] | 2011-10-14 | [
[
"Day",
"Troy",
""
]
] | The process of evolutionary diversification unfolds in a vast genotypic space of potential outcomes. During the past century there have been remarkable advances in the development of theory for this diversification, and the theory's success rests, in part, on the scope of its applicability. A great deal of this theory focuses on a relatively small subset of the space of potential genotypes, chosen largely based on historical or contemporary patterns, and then predicts the evolutionary dynamics within this pre-defined set. To what extent can such an approach be pushed to a broader perspective that accounts for the potential open-endedness of evolutionary diversification? There have been a number of significant theoretical developments along these lines but the question of how far such theory can be pushed has not been addressed. Here a theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using such an approach. In particular, even in extremely simple evolutionary systems a complete theory accounting for the potential open-endedness of evolution is unattainable unless evolution is progressive. The theorem is closely related to G\"odel's Incompleteness Theorem and to the Halting Problem from computability theory. |
1701.09104 | Pavel Sumazin | Hua-Sheng Chiu, Mar\'ia Rodr\'iguez Mart\'inez, Mukesh Bansal, Aravind
Subramanian, Todd R. Golub, Xuerui Yang, Pavel Sumazin, and Andrea Califano | High-throughput validation of ceRNA regulatory networks | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: MicroRNAs (miRNAs) play multiple roles in tumor biology [1].
Interestingly, reports from multiple groups suggest that miRNA targets may be
coupled through competitive stoichiometric sequestration [2]. Specifically,
computational models predicted [3, 4] and experimental assays confirmed [5, 6]
that miRNA activity is dependent on miRNA target abundance, and consequently,
changes to the abundance of some miRNA targets lead to changes to the
regulation and abundance of their other targets. The resulting indirect
regulatory influence between miRNA targets resembles competition and has been
dubbed competitive endogenous RNA (ceRNA) [5, 7, 8]. Recent studies have
questioned the physiological relevance of ceRNA interactions [9], researchers
ability to accurately predict these interactions [10], and the number of genes
that are impacted by ceRNA interactions in specific cellular contexts [11].
Results: To address these concerns, we reverse engineered ceRNA networks
(ceRNETs) in breast and prostate adenocarcinomas using context-specific TCGA
profiles [12-14], and tested whether ceRNA interactions can predict the effects
of RNAi-mediated gene silencing perturbations in PC3 and MCF7 cells. Our
results, based on tests of thousands of inferred ceRNA interactions that are
predicted to alter hundreds of cancer genes in each of the two tumor contexts,
confirmed statistically significant effects for half of the predicted targets.
Conclusions: Our results suggest that the expression of a significant fraction
of cancer genes may be regulated by ceRNA interactions in each of the two tumor
contexts.
| [
{
"created": "Tue, 31 Jan 2017 16:07:24 GMT",
"version": "v1"
}
] | 2017-02-01 | [
[
"Chiu",
"Hua-Sheng",
""
],
[
"Martínez",
"María Rodríguez",
""
],
[
"Bansal",
"Mukesh",
""
],
[
"Subramanian",
"Aravind",
""
],
[
"Golub",
"Todd R.",
""
],
[
"Yang",
"Xuerui",
""
],
[
"Sumazin",
"Pavel",
""... | Background: MicroRNAs (miRNAs) play multiple roles in tumor biology [1]. Interestingly, reports from multiple groups suggest that miRNA targets may be coupled through competitive stoichiometric sequestration [2]. Specifically, computational models predicted [3, 4] and experimental assays confirmed [5, 6] that miRNA activity is dependent on miRNA target abundance, and consequently, changes to the abundance of some miRNA targets lead to changes to the regulation and abundance of their other targets. The resulting indirect regulatory influence between miRNA targets resembles competition and has been dubbed competitive endogenous RNA (ceRNA) [5, 7, 8]. Recent studies have questioned the physiological relevance of ceRNA interactions [9], researchers ability to accurately predict these interactions [10], and the number of genes that are impacted by ceRNA interactions in specific cellular contexts [11]. Results: To address these concerns, we reverse engineered ceRNA networks (ceRNETs) in breast and prostate adenocarcinomas using context-specific TCGA profiles [12-14], and tested whether ceRNA interactions can predict the effects of RNAi-mediated gene silencing perturbations in PC3 and MCF7 cells. Our results, based on tests of thousands of inferred ceRNA interactions that are predicted to alter hundreds of cancer genes in each of the two tumor contexts, confirmed statistically significant effects for half of the predicted targets. Conclusions: Our results suggest that the expression of a significant fraction of cancer genes may be regulated by ceRNA interactions in each of the two tumor contexts. |
2308.11927 | Enze Ye | Enze Ye, Yuhang Wang, Hong Zhang, Yiqin Gao, Huan Wang, He Sun | Recovering a Molecule's 3D Dynamics from Liquid-phase Electron
Microscopy Movies | null | null | null | null | q-bio.QM cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of biomolecules are crucial for our understanding of their
functioning in living systems. However, current 3D imaging techniques, such as
cryogenic electron microscopy (cryo-EM), require freezing the sample, which
limits the observation of their conformational changes in real time. The
innovative liquid-phase electron microscopy (liquid-phase EM) technique allows
molecules to be placed in the native liquid environment, providing a unique
opportunity to observe their dynamics. In this paper, we propose TEMPOR, a
Temporal Electron MicroscoPy Object Reconstruction algorithm for liquid-phase
EM that leverages an implicit neural representation (INR) and a dynamical
variational auto-encoder (DVAE) to recover time series of molecular structures.
We demonstrate its advantages in recovering different motion dynamics from two
simulated datasets, 7bcq and Cas9. To our knowledge, our work is the first
attempt to directly recover 3D structures of a temporally-varying particle from
liquid-phase EM movies. It provides a promising new approach for studying
molecules' 3D dynamics in structural biology.
| [
{
"created": "Wed, 23 Aug 2023 05:26:27 GMT",
"version": "v1"
}
] | 2023-08-24 | [
[
"Ye",
"Enze",
""
],
[
"Wang",
"Yuhang",
""
],
[
"Zhang",
"Hong",
""
],
[
"Gao",
"Yiqin",
""
],
[
"Wang",
"Huan",
""
],
[
"Sun",
"He",
""
]
] | The dynamics of biomolecules are crucial for our understanding of their functioning in living systems. However, current 3D imaging techniques, such as cryogenic electron microscopy (cryo-EM), require freezing the sample, which limits the observation of their conformational changes in real time. The innovative liquid-phase electron microscopy (liquid-phase EM) technique allows molecules to be placed in the native liquid environment, providing a unique opportunity to observe their dynamics. In this paper, we propose TEMPOR, a Temporal Electron MicroscoPy Object Reconstruction algorithm for liquid-phase EM that leverages an implicit neural representation (INR) and a dynamical variational auto-encoder (DVAE) to recover time series of molecular structures. We demonstrate its advantages in recovering different motion dynamics from two simulated datasets, 7bcq and Cas9. To our knowledge, our work is the first attempt to directly recover 3D structures of a temporally-varying particle from liquid-phase EM movies. It provides a promising new approach for studying molecules' 3D dynamics in structural biology. |
1310.4223 | Hamidreza Chitsaz | Hamidreza Chitsaz, Mohammad Aminisharifabad | Exact Learning of RNA Energy Parameters From Structure | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of exact learning of parameters of a linear RNA
energy model from secondary structure data. A necessary and sufficient
condition for learnability of parameters is derived, which is based on
computing the convex hull of union of translated Newton polytopes of input
sequences. The set of learned energy parameters is characterized as the convex
cone generated by the normal vectors to those facets of the resulting polytope
that are incident to the origin. In practice, the sufficient condition may not
be satisfied by the entire training data set; hence, computing a maximal subset
of training data for which the sufficient condition is satisfied is often
desired. We show that the problem is NP-hard in general for an arbitrary
dimensional feature space. Using a randomized greedy algorithm, we select a
subset of RNA STRAND v2.0 database that satisfies the sufficient condition for
separate A-U, C-G, G-U base pair counting model. The set of learned energy
parameters includes experimentally measured energies of A-U, C-G, and G-U
pairs; hence, our parameter set is in agreement with the Turner parameters.
| [
{
"created": "Tue, 15 Oct 2013 23:04:00 GMT",
"version": "v1"
}
] | 2013-10-17 | [
[
"Chitsaz",
"Hamidreza",
""
],
[
"Aminisharifabad",
"Mohammad",
""
]
] | We consider the problem of exact learning of parameters of a linear RNA energy model from secondary structure data. A necessary and sufficient condition for learnability of parameters is derived, which is based on computing the convex hull of union of translated Newton polytopes of input sequences. The set of learned energy parameters is characterized as the convex cone generated by the normal vectors to those facets of the resulting polytope that are incident to the origin. In practice, the sufficient condition may not be satisfied by the entire training data set; hence, computing a maximal subset of training data for which the sufficient condition is satisfied is often desired. We show that this problem is NP-hard in general for an arbitrary dimensional feature space. Using a randomized greedy algorithm, we select a subset of the RNA STRAND v2.0 database that satisfies the sufficient condition for the separate A-U, C-G, G-U base pair counting model. The set of learned energy parameters includes experimentally measured energies of A-U, C-G, and G-U pairs; hence, our parameter set is in agreement with the Turner parameters. |
0712.3042 | Ulrich S. Schwarz | T. Erdmann, S. Pierrat, P. Nassoy and U. S. Schwarz | Dynamic force spectroscopy on multiple bonds: experiments and model | to appear in Europhysics Letters | Europhys. Lett., 81:48001, 2008 | 10.1209/0295-5075/81/48001 | null | q-bio.BM cond-mat.soft physics.bio-ph | null | We probe the dynamic strength of multiple biotin-streptavidin adhesion bonds
under linear loading using the biomembrane force probe setup for dynamic force
spectroscopy. Measured rupture force histograms are compared to results from a
master equation model for the stochastic dynamics of bond rupture under load.
This allows us to extract the distribution of the number of initially closed
bonds. We also extract the molecular parameters of the adhesion bonds, in good
agreement with earlier results from single bond experiments. Our analysis shows
that the peaks in the measured histograms are not simple multiples of the
single bond values, but follow from a superposition procedure which generates
different peak positions.
| [
{
"created": "Tue, 18 Dec 2007 20:37:48 GMT",
"version": "v1"
}
] | 2010-02-24 | [
[
"Erdmann",
"T.",
""
],
[
"Pierrat",
"S.",
""
],
[
"Nassoy",
"P.",
""
],
[
"Schwarz",
"U. S.",
""
]
] | We probe the dynamic strength of multiple biotin-streptavidin adhesion bonds under linear loading using the biomembrane force probe setup for dynamic force spectroscopy. Measured rupture force histograms are compared to results from a master equation model for the stochastic dynamics of bond rupture under load. This allows us to extract the distribution of the number of initially closed bonds. We also extract the molecular parameters of the adhesion bonds, in good agreement with earlier results from single bond experiments. Our analysis shows that the peaks in the measured histograms are not simple multiples of the single bond values, but follow from a superposition procedure which generates different peak positions. |
2308.16019 | Joseph Davis | Samantha M. Webster, Mira B. May, Barrett M. Powell, Joseph H. Davis | Imaging structurally dynamic ribosomes with cryogenic electron
microscopy | 15 pages, 6 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Throughout the history of electron microscopy, ribosomes have served as an
ideal subject for imaging and technological development, which in turn has
driven our understanding of ribosomal biology. Here, we provide a historical
perspective at the intersection of electron microscopy technology development
and ribosome biology and reflect on how this technique has shed light on each
stage of the life cycle of this dynamic macromolecular machine. With an
emphasis on prokaryotic systems, we specifically describe how pairing cryo-EM
with clever experimental design, time-resolved techniques, and next-generation
heterogeneous structural analysis has afforded insights into the modular nature
of assembly, the roles of the many transient biogenesis and translation
co-factors, and the subtle variations in structure and function between strains
and species. The work concludes with a prospective outlook on the field,
highlighting the pivotal role cryogenic electron tomography is playing in
adding cellular context to our understanding of ribosomal life cycles, and
noting how this exciting technology promises to bridge the gap between cellular
and structural biology.
| [
{
"created": "Wed, 30 Aug 2023 13:21:11 GMT",
"version": "v1"
}
] | 2023-08-31 | [
[
"Webster",
"Samantha M.",
""
],
[
"May",
"Mira B.",
""
],
[
"Powell",
"Barrett M.",
""
],
[
"Davis",
"Joseph H.",
""
]
] | Throughout the history of electron microscopy, ribosomes have served as an ideal subject for imaging and technological development, which in turn has driven our understanding of ribosomal biology. Here, we provide a historical perspective at the intersection of electron microscopy technology development and ribosome biology and reflect on how this technique has shed light on each stage of the life cycle of this dynamic macromolecular machine. With an emphasis on prokaryotic systems, we specifically describe how pairing cryo-EM with clever experimental design, time-resolved techniques, and next-generation heterogeneous structural analysis has afforded insights into the modular nature of assembly, the roles of the many transient biogenesis and translation co-factors, and the subtle variations in structure and function between strains and species. The work concludes with a prospective outlook on the field, highlighting the pivotal role cryogenic electron tomography is playing in adding cellular context to our understanding of ribosomal life cycles, and noting how this exciting technology promises to bridge the gap between cellular and structural biology. |
2212.05491 | Lu Wang | Lu Wang, Bofu Tang, Feifei Liu, Zhenyu Jiang, Xianmei Meng | The diagnostic utility of endocytoscopy for the detection of esophageal
lesions: a systematic review and meta-analysis | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: To systematically evaluate the value of endocytoscopy (ECS) in the
diagnosis of early esophageal cancer (EC). Methods: Pubmed, Ovid and EMbase
databases were searched to collect diagnostic tests of ECS-assisted diagnosis
of early EC. The retrieval time was from the establishment of the database to
August 2022. Review manager 5.4, Stata 16.0 and Meta-Disc 1.4 were used for
meta-analysis after two researchers independently screened literature,
extracted data and evaluated the bias risk of included studies. Results: A
total of 7 studies were included, including 520 lesions. Meta-analysis results
showed that the combined sensitivity (SE), specificity (SP), positive likelihood
ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and
positive posterior probability (PPP) of ECS screening for early EC were
0.95 [95%CI: 0.84, 0.98], 0.92 [95%CI: 0.83, 0.96], 11.8 [95%CI: 5.3, 26.1],
0.06 [95%CI: 0.02, 0.18], 203 [95%CI: 50, 816], and 75%, respectively. The area
under the summary receiver operating characteristic (SROC) curve (AUC) was
0.98 [95%CI: 0.96, 0.99]. Conclusions: Current evidence suggests that ECS can be used as an
effective screening tool for early EC. Due to the limited number and quality of
included studies, it is imperative to conduct more high-quality studies to
verify the above conclusions.
| [
{
"created": "Sun, 11 Dec 2022 12:36:10 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Jan 2023 03:50:51 GMT",
"version": "v2"
}
] | 2023-01-10 | [
[
"Wang",
"Lu",
""
],
[
"Tang",
"Bofu",
""
],
[
"Liu",
"Feifei",
""
],
[
"Jiang",
"Zhenyu",
""
],
[
"Meng",
"Xianmei",
""
]
] | Objective: To systematically evaluate the value of endocytoscopy (ECS) in the diagnosis of early esophageal cancer (EC). Methods: Pubmed, Ovid and EMbase databases were searched to collect diagnostic tests of ECS-assisted diagnosis of early EC. The retrieval time was from the establishment of the database to August 2022. Review manager 5.4, Stata 16.0 and Meta-Disc 1.4 were used for meta-analysis after two researchers independently screened literature, extracted data and evaluated the bias risk of included studies. Results: A total of 7 studies were included, including 520 lesions. Meta-analysis results showed that the combined sensitivity (SE), specificity (SP), positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR) and positive posterior probability (PPP) of ECS screening for early EC were 0.95 [95%CI: 0.84, 0.98], 0.92 [95%CI: 0.83, 0.96], 11.8 [95%CI: 5.3, 26.1], 0.06 [95%CI: 0.02, 0.18], 203 [95%CI: 50, 816], and 75%, respectively. The area under the summary receiver operating characteristic (SROC) curve (AUC) was 0.98 [95%CI: 0.96, 0.99]. Conclusions: Current evidence suggests that ECS can be used as an effective screening tool for early EC. Due to the limited number and quality of included studies, it is imperative to conduct more high-quality studies to verify the above conclusions. |
2005.11420 | Juarez Azevedo | Suzete Afonso, Juarez Azevedo, Mariana Pinheiro | Epidemic analysis of COVID-19 in Brazil by a generalized SEIR model | 12 pages, 10 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We shall apply a generalized SEIR model to study the outbreak of COVID-19 in
Brazil. In particular, we would like to explain the projections of the increase
in the level of infection over a long period of time, overlapping large local
outbreaks in the most populous states in the country. A time-dependent dynamic
SEIR model, inspired by a model previously used during the outbreak in China,
was used to analyse the time trajectories of infected, recovered, and death
cases. The model has parameters that vary with time and are fitted using a
nonlinear least-squares method. The simulations, starting from April 8, 2020,
concluded that the peak in Brazil will occur on July 21, 2020, with total
cumulative infected cases around 982K people; in addition, the estimated total
death count will reach 192K in the end. Besides that, Brazil will reach a peak
in daily new infected and death cases around the middle of July, with 50K daily
new infections and almost 6.0K daily deaths.
| [
{
"created": "Fri, 22 May 2020 23:26:55 GMT",
"version": "v1"
}
] | 2020-05-26 | [
[
"Afonso",
"Suzete",
""
],
[
"Azevedo",
"Juarez",
""
],
[
"Pinheiro",
"Mariana",
""
]
] | We shall apply a generalized SEIR model to study the outbreak of COVID-19 in Brazil. In particular, we would like to explain the projections of the increase in the level of infection over a long period of time, overlapping large local outbreaks in the most populous states in the country. A time-dependent dynamic SEIR model, inspired by a model previously used during the outbreak in China, was used to analyse the time trajectories of infected, recovered, and death cases. The model has parameters that vary with time and are fitted using a nonlinear least-squares method. The simulations, starting from April 8, 2020, concluded that the peak in Brazil will occur on July 21, 2020, with total cumulative infected cases around 982K people; in addition, the estimated total death count will reach 192K in the end. Besides that, Brazil will reach a peak in daily new infected and death cases around the middle of July, with 50K daily new infections and almost 6.0K daily deaths. |
1707.04648 | Jan H. Kirchner | Jan H. Kirchner | Large Deviation Theory for Parameter Estimation in Simple Neuron Models | Bachelor thesis completed in compliance with the requirements of the
BSc. Cognitive Science of the University of Osnabr\"uck and under the
supervision of Johannes Leugering and Prof. Gordon Pipa | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To investigate the complex dynamics of a biological neuron that is subject to
small random perturbations we can use stochastic neuron models. While many
techniques have already been developed to study properties of such models,
especially the analysis of the (expected) first-passage time or (E)FPT remains
difficult. In this thesis I apply the large deviation theory (LDT), which is
already well-established in physics and finance, to the problem of determining
the EFPT of the mean-reverting Ornstein-Uhlenbeck (OU) process. The OU process
instantiates the Stochastic Leaky Integrate and Fire model and thus serves as
an example of a biologically inspired mathematical neuron model. I derive
several classical results using much simpler mathematics than the original
publications from neuroscience and I provide a few conceivable interpretations
and perspectives on these derivations. Using these results I explore some
possible applications for parameter estimation and I provide an additional
mathematical justification for using a Poisson process as a small-noise
approximation of the full model. Finally I perform several simulations to
verify these results and to reveal systematic biases of this estimator.
| [
{
"created": "Fri, 14 Jul 2017 21:47:00 GMT",
"version": "v1"
}
] | 2017-07-18 | [
[
"Kirchner",
"Jan H.",
""
]
] | To investigate the complex dynamics of a biological neuron that is subject to small random perturbations we can use stochastic neuron models. While many techniques have already been developed to study properties of such models, especially the analysis of the (expected) first-passage time or (E)FPT remains difficult. In this thesis I apply the large deviation theory (LDT), which is already well-established in physics and finance, to the problem of determining the EFPT of the mean-reverting Ornstein-Uhlenbeck (OU) process. The OU process instantiates the Stochastic Leaky Integrate and Fire model and thus serves as an example of a biologically inspired mathematical neuron model. I derive several classical results using much simpler mathematics than the original publications from neuroscience and I provide a few conceivable interpretations and perspectives on these derivations. Using these results I explore some possible applications for parameter estimation and I provide an additional mathematical justification for using a Poisson process as a small-noise approximation of the full model. Finally I perform several simulations to verify these results and to reveal systematic biases of this estimator. |
2012.06139 | Arun Sharma Dr | Arun Dev Sharma and Inderjeet Kaur | Calotropin from milk of Calotropis gigantean a potent inhibitor of COVID
19 corona virus infection by Molecular docking studies | arXiv admin note: substantial text overlap with arXiv:2004.00217 | Research & Reviews in Biotechnology & Biosciences Website:
www.biotechjournal.in Volume: 7, Issue: 2, Year: 2020 PP: 52-57 | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | SARS-CoV-2 (COVID-19), a positive single-stranded RNA virus and member of the
coronavirus family, is spreading across the world due to the present lack of
drugs. Being associated with cough, fever, and respiratory distress, this
disease has caused more than 15% mortality worldwide. Due to its vital role in
virus replication, Mpro/3CLpro has recently been regarded as a suitable target
for drug design. The current study focused on the inhibitory activity of
Calotropin, a component from the milk of Calotropis gigantean, against the Mpro
protein from SARS-CoV-2. To date, no work has been undertaken on in-silico
analysis of this compound against the COVID-19 Mpro protein. In the present
study, molecular docking studies were conducted using the PatchDock tool. The
Protein Interactions tool was used for protein interactions. The calculated
parameters, such as the docking score, indicated effective binding of
Calotropin to the Mpro protein. Interaction results indicated that
Mpro/Calotropin complexes form hydrophobic interactions. Therefore, Calotropin
may represent a potential herbal treatment acting as a COVID-19 Mpro inhibitor.
However, further research is necessary to investigate its potential medicinal use.
| [
{
"created": "Fri, 11 Dec 2020 05:45:22 GMT",
"version": "v1"
}
] | 2020-12-15 | [
[
"Sharma",
"Arun Dev",
""
],
[
"Kaur",
"Inderjeet",
""
]
] | SARS-CoV-2 (COVID-19), a positive single-stranded RNA virus and member of the coronavirus family, is spreading across the world due to the present lack of drugs. Being associated with cough, fever, and respiratory distress, this disease has caused more than 15% mortality worldwide. Due to its vital role in virus replication, Mpro/3CLpro has recently been regarded as a suitable target for drug design. The current study focused on the inhibitory activity of Calotropin, a component from the milk of Calotropis gigantean, against the Mpro protein from SARS-CoV-2. To date, no work has been undertaken on in-silico analysis of this compound against the COVID-19 Mpro protein. In the present study, molecular docking studies were conducted using the PatchDock tool. The Protein Interactions tool was used for protein interactions. The calculated parameters, such as the docking score, indicated effective binding of Calotropin to the Mpro protein. Interaction results indicated that Mpro/Calotropin complexes form hydrophobic interactions. Therefore, Calotropin may represent a potential herbal treatment acting as a COVID-19 Mpro inhibitor. However, further research is necessary to investigate its potential medicinal use. |
1806.03194 | Lulu Gong | Lulu Gong, Jian Gao and Ming Cao | Evolutionary Game Dynamics for Two Interacting Populations under
Environmental Feedback | 7 pages, submitted to a conference | 2018 IEEE Conference on Decision and Control (CDC) | 10.1109/CDC.2018.8619801 | null | q-bio.PE math.DS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the evolutionary dynamics of games under environmental feedback
using replicator equations for two interacting populations. One key feature is
to consider jointly the co-evolution of the dynamic payoff matrices and the
state of the environment: the payoff matrix varies with the changing
environment and at the same time, the state of the environment is affected
indirectly by the changing payoff matrix through the evolving population
profiles. For such co-evolutionary dynamics, we investigate whether convergence
will take place, and if so, how. In particular, we identify the scenarios where
oscillation offers the best predictions of long-run behavior by using
reversible system theory. The obtained results are useful to describe the
evolution of multi-community societies in which individuals' payoffs and
societal feedback interact.
| [
{
"created": "Fri, 8 Jun 2018 14:40:06 GMT",
"version": "v1"
}
] | 2021-05-20 | [
[
"Gong",
"Lulu",
""
],
[
"Gao",
"Jian",
""
],
[
"Cao",
"Ming",
""
]
] | We study the evolutionary dynamics of games under environmental feedback using replicator equations for two interacting populations. One key feature is to consider jointly the co-evolution of the dynamic payoff matrices and the state of the environment: the payoff matrix varies with the changing environment and at the same time, the state of the environment is affected indirectly by the changing payoff matrix through the evolving population profiles. For such co-evolutionary dynamics, we investigate whether convergence will take place, and if so, how. In particular, we identify the scenarios where oscillation offers the best predictions of long-run behavior by using reversible system theory. The obtained results are useful to describe the evolution of multi-community societies in which individuals' payoffs and societal feedback interact. |
1503.02336 | Lorenzo Livi | Lorenzo Livi, Enrico Maiorino, Alessandro Giuliani, Antonello Rizzi,
Alireza Sadeghian | A generative model for protein contact networks | 18 pages, 67 references | null | 10.1080/07391102.2015.1077736 | null | q-bio.BM physics.data-an q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a generative model for protein contact networks. The
soundness of the proposed model is investigated by focusing primarily on
mesoscopic properties elaborated from the spectra of the graph Laplacian. To
complement the analysis, we also study classical topological descriptors, such
as statistics of the shortest paths and the important feature of modularity.
Our experiments show that the proposed model results in a considerable
improvement with respect to two suitably chosen generative mechanisms,
mimicking with better approximation real protein contact networks in terms of
diffusion properties elaborated from the Laplacian spectra. However, as well as
the other considered models, it does not reproduce with sufficient accuracy the
shortest paths structure. To compensate this drawback, we designed a second
step involving a targeted edge reconfiguration process. The ensemble of
reconfigured networks shows improvements that are statistically significant.
As a byproduct of our study, we demonstrate that modularity, a well-known
property of proteins, does not entirely explain the actual network architecture
characterizing protein contact networks. In fact, we conclude that modularity,
intended as a quantification of an underlying community structure, should be
considered as an emergent property of the structural organization of proteins.
Interestingly, such a property is suitably optimized in protein contact
networks together with the feature of path efficiency.
| [
{
"created": "Sun, 8 Mar 2015 22:23:26 GMT",
"version": "v1"
}
] | 2015-07-30 | [
[
"Livi",
"Lorenzo",
""
],
[
"Maiorino",
"Enrico",
""
],
[
"Giuliani",
"Alessandro",
""
],
[
"Rizzi",
"Antonello",
""
],
[
"Sadeghian",
"Alireza",
""
]
] | In this paper we present a generative model for protein contact networks. The soundness of the proposed model is investigated by focusing primarily on mesoscopic properties elaborated from the spectra of the graph Laplacian. To complement the analysis, we also study classical topological descriptors, such as statistics of the shortest paths and the important feature of modularity. Our experiments show that the proposed model results in a considerable improvement with respect to two suitably chosen generative mechanisms, mimicking with better approximation real protein contact networks in terms of diffusion properties elaborated from the Laplacian spectra. However, as well as the other considered models, it does not reproduce with sufficient accuracy the shortest paths structure. To compensate this drawback, we designed a second step involving a targeted edge reconfiguration process. The ensemble of reconfigured networks shows improvements that are statistically significant. As a byproduct of our study, we demonstrate that modularity, a well-known property of proteins, does not entirely explain the actual network architecture characterizing protein contact networks. In fact, we conclude that modularity, intended as a quantification of an underlying community structure, should be considered as an emergent property of the structural organization of proteins. Interestingly, such a property is suitably optimized in protein contact networks together with the feature of path efficiency. |
q-bio/0512032 | Raffaele Vardavas | Raffaele Vardavas and Sally Blower | The WHO surveillance threshold and the emergence of drug-resistant HIV
strains in Botswana | null | null | null | null | q-bio.PE q-bio.OT | null | Background: Approximately 40% of adults in Botswana are HIV-infected. The
Botswana antiretroviral program began in 2002 and currently treats 34,000
patients with a goal of treating 85,000 patients (~30% of HIV-infected adults)
by 2009. We predict the evolution of drug-resistant strains of HIV that may
emerge as a consequence of this treatment program. We discuss the implications
of our results for the World Health Organization's (WHO's) proposed
surveillance system for detecting drug-resistant strains of HIV in Africa.
Methods: We use a mathematical model of the emergence of drug resistance. We
incorporate demographic and treatment data to make specific predictions as to
when the WHO surveillance threshold is likely to be exceeded. Results: Our
results show - even if rates of acquired resistance are high, but the
drug-resistant strains that evolve are only half as transmissible as wild-type
strains - that transmission of drug-resistant strains will remain low (< 5% by
2009) and are unlikely to exceed the WHO's surveillance threshold. However, our
results show that transmission of drug-resistant strains in Botswana could
increase to ~15% by 2009 if resistant strains are as transmissible as wild-type
strains. Conclusion: The WHO's surveillance system is designed to detect
transmitted resistance that exceeds a threshold level of 5%. Whether this
system will detect drug-resistant strains in Botswana by 2009 will depend upon
the transmissibility of the strains that emerge. Our results imply that it
could be many years before the WHO detects transmitted resistance in other
sub-Saharan African countries with less ambitious treatment programs than
Botswana.
| [
{
"created": "Thu, 15 Dec 2005 00:48:22 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Vardavas",
"Raffaele",
""
],
[
"Blower",
"Sally",
""
]
] | Background: Approximately 40% of adults in Botswana are HIV-infected. The Botswana antiretroviral program began in 2002 and currently treats 34,000 patients with a goal of treating 85,000 patients (~30% of HIV-infected adults) by 2009. We predict the evolution of drug-resistant strains of HIV that may emerge as a consequence of this treatment program. We discuss the implications of our results for the World Health Organization's (WHO's) proposed surveillance system for detecting drug-resistant strains of HIV in Africa. Methods: We use a mathematical model of the emergence of drug resistance. We incorporate demographic and treatment data to make specific predictions as to when the WHO surveillance threshold is likely to be exceeded. Results: Our results show - even if rates of acquired resistance are high, but the drug-resistant strains that evolve are only half as transmissible as wild-type strains - that transmission of drug-resistant strains will remain low (< 5% by 2009) and are unlikely to exceed the WHO's surveillance threshold. However, our results show that transmission of drug-resistant strains in Botswana could increase to ~15% by 2009 if resistant strains are as transmissible as wild-type strains. Conclusion: The WHO's surveillance system is designed to detect transmitted resistance that exceeds a threshold level of 5%. Whether this system will detect drug-resistant strains in Botswana by 2009 will depend upon the transmissibility of the strains that emerge. Our results imply that it could be many years before the WHO detects transmitted resistance in other sub-Saharan African countries with less ambitious treatment programs than Botswana. |
1411.4908 | Valentina Agoni | Valentina Agoni | Beta sheet propensity and the genetic code | null | null | null | null | q-bio.OT q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | So far, mutation analysis has been performed in terms of transitions and
transversions, i.e. on the basis of the molecule, or in terms of GC-content and
isochores, through the quantification of GC->AT mutations over AT->GC mutations.
We tried a different approach, hypothesizing that we are probably partially
protected (by the genetic code) from amyloid fibril formation.
| [
{
"created": "Sun, 9 Nov 2014 18:03:55 GMT",
"version": "v1"
}
] | 2014-11-19 | [
[
"Agoni",
"Valentina",
""
]
] | So far, mutation analysis has been performed in terms of transitions and transversions, i.e. on the basis of the molecule, or in terms of GC-content and isochores, through the quantification of GC->AT mutations over AT->GC mutations. We tried a different approach, hypothesizing that we are probably partially protected (by the genetic code) from amyloid fibril formation. |
2403.11753 | Julia Theresa Kamml | Julia Kamml, Claire Acevedo, and David Kammer | Mineral and cross-linking in collagen fibrils: The mechanical behavior
of bone tissue at the nano-scale | 11 pages, 8 figures | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The mineralized collagen fibril is the main building block of hard tissues
and it directly affects the macroscopic mechanics of biological tissues such as
bone. The mechanical behavior of the fibril itself is determined by its
structure: the content of collagen molecules, minerals, and cross-links, and
the mechanical interactions and properties of these components.
Advanced-Glycation-Endproducts (AGEs) cross-linking between tropocollagen
molecules within the collagen fibril is one important factor that is believed
to have a major influence on the tissue. For instance, it has been shown that
brittleness in bone correlates with increased AGEs densities. However, the
underlying nano-scale mechanisms within the mineralized collagen fibril remain
unknown. Here, we study the effect of mineral and AGEs cross-linking on fibril
deformation and fracture behavior by performing destructive tensile tests using
coarse-grained molecular dynamics simulations. Our results demonstrate that
once the mineral content exceeds a critical value, the mineral induces
stiffening of the collagen fibril at high strain levels. We show that mineral morphology and
location affect collagen fibril mechanics: The mineral content at which this
stiffening occurs depends on the mineral's location and morphology. Further,
both increasing AGEs density and mineral content lead to stiffening and
increased peak stresses. At low mineral contents, the mechanical response of
the fibril is dominated by the AGEs, while at high mineral contents, the
mineral itself determines fibril mechanics.
| [
{
"created": "Mon, 18 Mar 2024 13:02:36 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Kamml",
"Julia",
""
],
[
"Acevedo",
"Claire",
""
],
[
"Kammer",
"David",
""
]
] | The mineralized collagen fibril is the main building block of hard tissues and it directly affects the macroscopic mechanics of biological tissues such as bone. The mechanical behavior of the fibril itself is determined by its structure: the content of collagen molecules, minerals, and cross-links, and the mechanical interactions and properties of these components. Advanced-Glycation-Endproducts (AGEs) cross-linking between tropocollagen molecules within the collagen fibril is one important factor that is believed to have a major influence on the tissue. For instance, it has been shown that brittleness in bone correlates with increased AGEs densities. However, the underlying nano-scale mechanisms within the mineralized collagen fibril remain unknown. Here, we study the effect of mineral and AGEs cross-linking on fibril deformation and fracture behavior by performing destructive tensile tests using coarse-grained molecular dynamics simulations. Our results demonstrate that once the mineral content exceeds a critical value, the mineral induces stiffening of the collagen fibril at high strain levels. We show that mineral morphology and location affect collagen fibril mechanics: The mineral content at which this stiffening occurs depends on the mineral's location and morphology. Further, both increasing AGEs density and mineral content lead to stiffening and increased peak stresses. At low mineral contents, the mechanical response of the fibril is dominated by the AGEs, while at high mineral contents, the mineral itself determines fibril mechanics. |
1903.11448 | Jinzhi Lei | Jinzhi Lei | A general mathematical framework for understanding the behavior of
heterogeneous stem cell regeneration | 36 pages, 7 figures | Journal of Theoretical Biology, 2020 | 10.1016/j.jtbi.2020.110196 | null | q-bio.PE q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Stem cell heterogeneity is essential for the homeostasis in tissue
development. This paper established a general formulation for understanding the
dynamics of stem cell regeneration with cell heterogeneity and random
transitions of epigenetic states. The model generalizes the classical G0 cell
cycle model, and incorporates the epigenetic states of stem cells that are
represented by a continuous multidimensional variable and the kinetic rates of
cell behaviors, including proliferation, differentiation, and apoptosis, that
are dependent on their epigenetic states. Moreover, the random transition of
epigenetic states is represented by an inheritance probability that can be
described as a conditional beta distribution. This model can be extended to
investigate gene mutation-induced tumor development. The proposed formulation is
a general framework that helps us to understand various dynamic processes of
stem cell regeneration, including tissue development, degeneration, and
abnormal growth.
| [
{
"created": "Wed, 27 Mar 2019 14:26:13 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jul 2024 13:26:37 GMT",
"version": "v2"
}
] | 2024-07-12 | [
[
"Lei",
"Jinzhi",
""
]
] | Stem cell heterogeneity is essential for the homeostasis in tissue development. This paper established a general formulation for understanding the dynamics of stem cell regeneration with cell heterogeneity and random transitions of epigenetic states. The model generalizes the classical G0 cell cycle model, and incorporates the epigenetic states of stem cells that are represented by a continuous multidimensional variable and the kinetic rates of cell behaviors, including proliferation, differentiation, and apoptosis, that are dependent on their epigenetic states. Moreover, the random transition of epigenetic states is represented by an inheritance probability that can be described as a conditional beta distribution. This model can be extended to investigate gene mutation-induced tumor development. The proposed formulation is a general framework that helps us to understand various dynamic processes of stem cell regeneration, including tissue development, degeneration, and abnormal growth. |
2103.09198 | Andrew Sornborger | Oleksandr Iaroshenko, Andrew T. Sornborger | Binary Operations on Neuromorphic Hardware with Application to Linear
Algebraic Operations and Stochastic Equations | null | null | null | LA-UR-21-22286 | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Non-von Neumann computational hardware, based on neuron-inspired, non-linear
elements connected via linear, weighted synapses -- so-called neuromorphic
systems -- is a viable computational substrate. Since neuromorphic systems have
been shown to use less power than CPUs for many applications, they are of
potential use in autonomous systems such as robots, drones, and satellites, for
which power resources are at a premium. The power used by neuromorphic systems
is approximately proportional to the number of spiking events produced by
neurons on-chip. However, typical information encoding on these chips is in the
form of firing rates that unarily encode information. That is, the number of
spikes generated by a neuron is meant to be proportional to an encoded value
used in a computation or algorithm. Unary encoding is less efficient (produces
more spikes) than binary encoding. For this reason, here we present
neuromorphic computational mechanisms for implementing binary two's complement
operations. We use the mechanisms to construct a neuromorphic, binary matrix
multiplication algorithm that may be used as a primitive for linear
differential equation integration, deep networks, and other standard
calculations. We also construct a random walk circuit and apply it in Brownian
motion simulations. We study how both algorithms scale in circuit size and
iteration time.
| [
{
"created": "Tue, 16 Mar 2021 17:03:10 GMT",
"version": "v1"
}
] | 2021-03-17 | [
[
"Iaroshenko",
"Oleksandr",
""
],
[
"Sornborger",
"Andrew T.",
""
]
] | Non-von Neumann computational hardware, based on neuron-inspired, non-linear elements connected via linear, weighted synapses -- so-called neuromorphic systems -- is a viable computational substrate. Since neuromorphic systems have been shown to use less power than CPUs for many applications, they are of potential use in autonomous systems such as robots, drones, and satellites, for which power resources are at a premium. The power used by neuromorphic systems is approximately proportional to the number of spiking events produced by neurons on-chip. However, typical information encoding on these chips is in the form of firing rates that unarily encode information. That is, the number of spikes generated by a neuron is meant to be proportional to an encoded value used in a computation or algorithm. Unary encoding is less efficient (produces more spikes) than binary encoding. For this reason, here we present neuromorphic computational mechanisms for implementing binary two's complement operations. We use the mechanisms to construct a neuromorphic, binary matrix multiplication algorithm that may be used as a primitive for linear differential equation integration, deep networks, and other standard calculations. We also construct a random walk circuit and apply it in Brownian motion simulations. We study how both algorithms scale in circuit size and iteration time. |
0806.4518 | Patrick Schiavone | Tzvetelina Tzvetkova-Chevolleau (LTM, TIMC), Ang\'elique St\'ephanou
(TIMC), David Fuard (LTM), Jacques Ohayon (TIMC), Patrick Schiavone (LTM,
TIMC), Philippe Tracqui (TIMC) | The motility of normal and cancer cells in response to the combined
influence of substrate rigidity and anisotropic microstructure | null | Biomaterials 29, 10 (2008) 1541--1551 | 10.1016/j.biomaterials.2007.12.016 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell adhesion and migration are strongly influenced by extracellular matrix
(ECM) architecture and rigidity, but little is known about the concomitant
influence of such environmental signals on cell responses, especially when
considering cells of similar origin and morphology, but exhibiting a normal or
cancerous phenotype. Using micropatterned polydimethylsiloxane substrates
(PDMS) with tuneable stiffness (500kPa, 750kPa, 2000kPa) and topography (lines,
pillars or unpatterned), we systematically analyse the differential response of
normal (3T3) and cancer (SaI/N) fibroblastic cells. Our results demonstrate
that both cells exhibit differential morphology and motility responses to
changes in substrate rigidity and microtopography. 3T3 polarization and
spreading are influenced by substrate microtopography and rigidity. The cells
exhibit a persistent type of migration, which depends on the substrate
anisotropy. In contrast, the dynamics of SaI/N spreading are strongly modified by
the substrate topography but not by substrate rigidity. SaI/N morphology and
migration seem to escape from extracellular cues: the cells exhibit
uncorrelated migration trajectories and a large dispersion of their migration
speed, which increases with substrate rigidity.
| [
{
"created": "Fri, 27 Jun 2008 12:57:20 GMT",
"version": "v1"
}
] | 2008-12-18 | [
[
"Tzvetkova-Chevolleau",
"Tzvetelina",
"",
"LTM, TIMC"
],
[
"Stéphanou",
"Angélique",
"",
"TIMC"
],
[
"Fuard",
"David",
"",
"LTM"
],
[
"Ohayon",
"Jacques",
"",
"TIMC"
],
[
"Schiavone",
"Patrick",
"",
"LTM,\n TIMC"
],... | Cell adhesion and migration are strongly influenced by extracellular matrix (ECM) architecture and rigidity, but little is known about the concomitant influence of such environmental signals on cell responses, especially when considering cells of similar origin and morphology, but exhibiting a normal or cancerous phenotype. Using micropatterned polydimethylsiloxane substrates (PDMS) with tuneable stiffness (500kPa, 750kPa, 2000kPa) and topography (lines, pillars or unpatterned), we systematically analyse the differential response of normal (3T3) and cancer (SaI/N) fibroblastic cells. Our results demonstrate that both cells exhibit differential morphology and motility responses to changes in substrate rigidity and microtopography. 3T3 polarization and spreading are influenced by substrate microtopography and rigidity. The cells exhibit a persistent type of migration, which depends on the substrate anisotropy. In contrast, the dynamics of SaI/N spreading are strongly modified by the substrate topography but not by substrate rigidity. SaI/N morphology and migration seem to escape from extracellular cues: the cells exhibit uncorrelated migration trajectories and a large dispersion of their migration speed, which increases with substrate rigidity. |
1111.5468 | A. E. Sitnitsky | A. E. Sitnitsky | Model for crankshaft motion of protein backbone in nonspecific binding
site of serine proteases | 20 pages, 8 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The consequences of the recent experimental finding that hydrogen bonds of the
anti-parallel $\beta $-sheet in nonspecific binding site of serine proteases
become significantly shorter and stronger synchronously with the catalytic act
are examined. We investigate the effect of the transformation of an ordinary
hydrogen bond into a low-barrier one on the crankshaft motion of a peptide group
in the anti-parallel $\beta $-sheet. For this purpose we make use of a
realistic model of the peptide chain with stringent microscopically derived
coupling interaction potential and effective on-site potential. The coupling
interaction characterizing the peptide chain rigidity is found to be
surprisingly weak and repulsive in character. The effective on-site potential
is found to be a hard one, i.e., it rises more steeply than a harmonic one. At
transformation of the ordinary hydrogen bond into the low-barrier one the
frequency of crankshaft motion of the corresponding peptide group in the
anti-parallel $\beta $-sheet is roughly doubled.
| [
{
"created": "Wed, 23 Nov 2011 11:59:23 GMT",
"version": "v1"
}
] | 2011-11-24 | [
[
"Sitnitsky",
"A. E.",
""
]
] | The consequences of the recent experimental finding that hydrogen bonds of the anti-parallel $\beta $-sheet in nonspecific binding site of serine proteases become significantly shorter and stronger synchronously with the catalytic act are examined. We investigate the effect of the transformation of an ordinary hydrogen bond into a low-barrier one on the crankshaft motion of a peptide group in the anti-parallel $\beta $-sheet. For this purpose we make use of a realistic model of the peptide chain with stringent microscopically derived coupling interaction potential and effective on-site potential. The coupling interaction characterizing the peptide chain rigidity is found to be surprisingly weak and repulsive in character. The effective on-site potential is found to be a hard one, i.e., it rises more steeply than a harmonic one. At transformation of the ordinary hydrogen bond into the low-barrier one the frequency of crankshaft motion of the corresponding peptide group in the anti-parallel $\beta $-sheet is roughly doubled. |
2308.16046 | Bartlomiej Waclaw Dr | Witold Postek, Klaudia Staskiewicz, Elin Lilja, Bartlomiej Waclaw | Substrate geometry affects population dynamics in a bacterial biofilm | 15 pages and 4 figures (main text), 13 pages and 6 figures
(supplementary material) | null | null | null | q-bio.PE cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Biofilms inhabit a range of environments, such as dental plaques or soil
micropores, often characterized by intricate, non-even surfaces. However, the
impact of surface irregularities on the population dynamics of biofilms remains
elusive as most biofilm experiments are conducted on flat surfaces. Here, we
show that the shape of the surface on which a biofilm grows influences genetic
drift and selection within the biofilm. We culture E. coli biofilms in
micro-wells with an undulating bottom surface and observe the emergence of
clonal sectors whose size corresponds to that of the undulations, despite no
physical barrier separating different areas of the biofilm. The sectors are
remarkably stable over time and do not invade each other; we attribute this
stability to the characteristics of the velocity field within the growing
biofilm, which hinders mixing and clonal expansion. A microscopically-detailed
computer model fully reproduces these findings and highlights the role of
mechanical (physical) interactions such as adhesion and friction in microbial
evolution. The model also predicts clonal expansion to be severely limited even
for clones with a significant growth advantage - a finding which we
subsequently confirm experimentally using a mixture of antibiotic-sensitive and
antibiotic-resistant mutants in the presence of sub-lethal concentrations of
the antibiotic rifampicin. The strong suppression of selection contrasts
sharply with the behavior seen in bacterial colonies on agar commonly used to
study range expansion and evolution in biofilms. Our results show that biofilm
population dynamics can be controlled by patterning the surface, and
demonstrate how a better understanding of the physics of bacterial growth can
pave the way for new strategies in steering microbial evolution.
| [
{
"created": "Wed, 30 Aug 2023 14:14:46 GMT",
"version": "v1"
}
] | 2023-08-31 | [
[
"Postek",
"Witold",
""
],
[
"Staskiewicz",
"Klaudia",
""
],
[
"Lilja",
"Elin",
""
],
[
"Waclaw",
"Bartlomiej",
""
]
] | Biofilms inhabit a range of environments, such as dental plaques or soil micropores, often characterized by intricate, non-even surfaces. However, the impact of surface irregularities on the population dynamics of biofilms remains elusive as most biofilm experiments are conducted on flat surfaces. Here, we show that the shape of the surface on which a biofilm grows influences genetic drift and selection within the biofilm. We culture E. coli biofilms in micro-wells with an undulating bottom surface and observe the emergence of clonal sectors whose size corresponds to that of the undulations, despite no physical barrier separating different areas of the biofilm. The sectors are remarkably stable over time and do not invade each other; we attribute this stability to the characteristics of the velocity field within the growing biofilm, which hinders mixing and clonal expansion. A microscopically-detailed computer model fully reproduces these findings and highlights the role of mechanical (physical) interactions such as adhesion and friction in microbial evolution. The model also predicts clonal expansion to be severely limited even for clones with a significant growth advantage - a finding which we subsequently confirm experimentally using a mixture of antibiotic-sensitive and antibiotic-resistant mutants in the presence of sub-lethal concentrations of the antibiotic rifampicin. The strong suppression of selection contrasts sharply with the behavior seen in bacterial colonies on agar commonly used to study range expansion and evolution in biofilms. Our results show that biofilm population dynamics can be controlled by patterning the surface, and demonstrate how a better understanding of the physics of bacterial growth can pave the way for new strategies in steering microbial evolution. |
2007.00135 | Bastien Pasdeloup | Yassine El Ouahidi, Matis Feller, Matthieu Talagas, Bastien Pasdeloup | An Approach for Clustering Subjects According to Similarities in Cell
Distributions within Biopsies | null | null | null | null | q-bio.QM cs.LG q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a novel and interpretable methodology to cluster
subjects suffering from cancer, based on features extracted from their
biopsies. Contrary to existing approaches, we propose here to capture complex
patterns in the distributions of their cells using histograms, and compare
subjects on the basis of these distributions. We describe here our complete
workflow, including creation of the database, cell segmentation and
phenotyping, computation of complex features, choice of a distance function
between features, clustering between subjects using that distance, and survival
analysis of obtained clusters. We illustrate our approach on a database of
hematoxylin and eosin (H&E)-stained tissues of subjects suffering from Stage I
lung adenocarcinoma, where our results match existing knowledge in prognosis
estimation with high confidence.
| [
{
"created": "Tue, 30 Jun 2020 22:30:58 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Jul 2020 13:34:15 GMT",
"version": "v2"
}
] | 2020-07-07 | [
[
"Ouahidi",
"Yassine El",
""
],
[
"Feller",
"Matis",
""
],
[
"Talagas",
"Matthieu",
""
],
[
"Pasdeloup",
"Bastien",
""
]
] | In this paper, we introduce a novel and interpretable methodology to cluster subjects suffering from cancer, based on features extracted from their biopsies. Contrary to existing approaches, we propose here to capture complex patterns in the distributions of their cells using histograms, and compare subjects on the basis of these distributions. We describe here our complete workflow, including creation of the database, cell segmentation and phenotyping, computation of complex features, choice of a distance function between features, clustering between subjects using that distance, and survival analysis of obtained clusters. We illustrate our approach on a database of hematoxylin and eosin (H&E)-stained tissues of subjects suffering from Stage I lung adenocarcinoma, where our results match existing knowledge in prognosis estimation with high confidence. |
1204.6176 | Lucilla de Arcangelis | F. Lombardi, H. J. Herrmann, C. Perrone-Capano, D. Plenz and L. de
Arcangelis | The balance between excitation and inhibition controls the temporal
organization of neuronal avalanches | 5 pages, 3 figures, to appear on Physical Review Letters | null | null | null | q-bio.NC cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal avalanches, measured in vitro and in vivo, exhibit a robust critical
behaviour. Their temporal organization hides the presence of correlations. Here
we present experimental measurements of the waiting time distribution between
successive avalanches in the rat cortex in vitro. This exhibits a non-monotonic
behaviour, not usually found in other natural processes. Numerical simulations
provide evidence that this behaviour is a consequence of the alternation
between states of high and low activity, named up and down states, leading to a
balance between excitation and inhibition controlled by a single parameter.
During these periods both the single neuron state and the network excitability
level, keeping memory of past activity, are tuned by homeostatic mechanisms.
| [
{
"created": "Fri, 27 Apr 2012 11:51:56 GMT",
"version": "v1"
}
] | 2012-04-30 | [
[
"Lombardi",
"F.",
""
],
[
"Herrmann",
"H. J.",
""
],
[
"Perrone-Capano",
"C.",
""
],
[
"Plenz",
"D.",
""
],
[
"de Arcangelis",
"L.",
""
]
] | Neuronal avalanches, measured in vitro and in vivo, exhibit a robust critical behaviour. Their temporal organization hides the presence of correlations. Here we present experimental measurements of the waiting time distribution between successive avalanches in the rat cortex in vitro. This exhibits a non-monotonic behaviour, not usually found in other natural processes. Numerical simulations provide evidence that this behaviour is a consequence of the alternation between states of high and low activity, named up and down states, leading to a balance between excitation and inhibition controlled by a single parameter. During these periods both the single neuron state and the network excitability level, keeping memory of past activity, are tuned by homeostatic mechanisms. |
2005.09975 | Carlos Roberto Pena Ruano | P. Hern\'andez, C. Pena, A. Ramos and J.J. G\'omez-Cadenas | A new formulation of compartmental epidemic modelling for arbitrary
distributions of incubation and removal times | 21 pages, 11 figures. v2 matches published version: improved
presentation (including title, abstract and references), results and
conclusions unchanged | PLoS ONE 16(2): e0244107 (2021) | 10.1371/journal.pone.0244107 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paradigm for compartment models in epidemiology assumes exponentially
distributed incubation and removal times, which is not realistic in actual
populations. Commonly used variations with multiple exponentially distributed
variables are more flexible, yet do not allow for arbitrary distributions. We
present a new formulation, focussing on the SEIR concept, which allows the inclusion of
general distributions of incubation and removal times. We compare the solution
to two types of agent-based model simulations, a spatially homogeneous one
where infection occurs by proximity, and a model on a scale-free network with
varying clustering properties, where the infection between any two agents
occurs via their link if it exists. We find good agreement in both cases.
Furthermore, a family of asymptotic solutions of the equations is found in terms
of a logistic curve, which, after a non-universal time shift, fits extremely
well all the microdynamical simulations. The formulation allows for a simple
numerical approach; software in Julia and Python is provided.
| [
{
"created": "Wed, 20 May 2020 11:37:38 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 09:50:06 GMT",
"version": "v2"
}
] | 2021-02-10 | [
[
"Hernández",
"P.",
""
],
[
"Pena",
"C.",
""
],
[
"Ramos",
"A.",
""
],
[
"Gómez-Cadenas",
"J. J.",
""
]
] | The paradigm for compartment models in epidemiology assumes exponentially distributed incubation and removal times, which is not realistic in actual populations. Commonly used variations with multiple exponentially distributed variables are more flexible, yet do not allow for arbitrary distributions. We present a new formulation, focussing on the SEIR concept, which allows the inclusion of general distributions of incubation and removal times. We compare the solution to two types of agent-based model simulations, a spatially homogeneous one where infection occurs by proximity, and a model on a scale-free network with varying clustering properties, where the infection between any two agents occurs via their link if it exists. We find good agreement in both cases. Furthermore, a family of asymptotic solutions of the equations is found in terms of a logistic curve, which, after a non-universal time shift, fits extremely well all the microdynamical simulations. The formulation allows for a simple numerical approach; software in Julia and Python is provided. |
1304.5756 | Kevin J. Black | Kevin J. Black (1 and 2), Jonathan M. Koller (1), Brad D. Miller (1)
((1) Department of Psychiatry, Washington University in St. Louis, (2)
Departments of Neurology, Radiology and Anatomy & Neurobiology, Washington
University in St. Louis) | Rapid quantitative pharmacodynamic imaging by a novel method: theory,
simulation testing and proof of principle | 26 pages total, 4 tables, 10 figures. The original PDF file at
https://peerj.com/articles/117/ includes active hyperlinks. This version is
the final published version. (Differs from v2 only in that I corrected the
abstract on the arXiv.org page.) | PeerJ 1:e117, 2013 | 10.7717/peerj.117 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pharmacological challenge imaging has mapped, but rarely quantified, the
sensitivity of a biological system to a given drug. We describe a novel method
called rapid quantitative pharmacodynamic imaging. This method combines
pharmacokinetic-pharmacodynamic modeling, repeated small doses of a challenge
drug over a short time scale, and functional imaging to rapidly provide
quantitative estimates of drug sensitivity including EC50 (the concentration of
drug that produces half the maximum possible effect). We first test the method
with simulated data, assuming a typical sigmoidal dose-response curve and
assuming imperfect imaging that includes artifactual baseline signal drift and
random error. With these few assumptions, rapid quantitative pharmacodynamic
imaging reliably estimates EC50 from the simulated data, except when noise
overwhelms the drug effect or when the effect occurs only at high doses. In
preliminary fMRI studies of primate brain using a dopamine agonist, the
observed noise level is modest compared with observed drug effects, and a
quantitative EC50 can be obtained from some regional time-signal curves. Taken
together, these results suggest that research and clinical applications for
rapid quantitative pharmacodynamic imaging are realistic.
| [
{
"created": "Sun, 21 Apr 2013 15:53:05 GMT",
"version": "v1"
},
{
"created": "Wed, 7 May 2014 17:41:37 GMT",
"version": "v2"
},
{
"created": "Tue, 27 May 2014 00:44:24 GMT",
"version": "v3"
}
] | 2014-05-28 | [
[
"Black",
"Kevin J.",
"",
"1 and 2"
],
[
"Koller",
"Jonathan M.",
""
],
[
"Miller",
"Brad D.",
""
]
] | Pharmacological challenge imaging has mapped, but rarely quantified, the sensitivity of a biological system to a given drug. We describe a novel method called rapid quantitative pharmacodynamic imaging. This method combines pharmacokinetic-pharmacodynamic modeling, repeated small doses of a challenge drug over a short time scale, and functional imaging to rapidly provide quantitative estimates of drug sensitivity including EC50 (the concentration of drug that produces half the maximum possible effect). We first test the method with simulated data, assuming a typical sigmoidal dose-response curve and assuming imperfect imaging that includes artifactual baseline signal drift and random error. With these few assumptions, rapid quantitative pharmacodynamic imaging reliably estimates EC50 from the simulated data, except when noise overwhelms the drug effect or when the effect occurs only at high doses. In preliminary fMRI studies of primate brain using a dopamine agonist, the observed noise level is modest compared with observed drug effects, and a quantitative EC50 can be obtained from some regional time-signal curves. Taken together, these results suggest that research and clinical applications for rapid quantitative pharmacodynamic imaging are realistic. |
1309.1892 | Vince Grolmusz | Gabor Ivan and Vince Grolmusz | Dimension reduction of clustering results in bioinformatics | null | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | OPTICS is a density-based clustering algorithm that performs well in a wide
variety of applications. For a set of input objects, the algorithm creates a
so-called reachability plot that can be either used to produce cluster
membership assignments, or interpreted itself as an expressive two-dimensional
representation of the density-based clustering structure of the input set, even
if the input set is embedded in higher dimensions. The main focus of this work
is a visualization method that can be used to assign colours to all entries of
the input database, based on hierarchically represented a-priori knowledge
available for each of these objects. Based on two different,
bioinformatics-related applications we illustrate how the proposed method can
be efficiently used to identify clusters with proven real-life relevance.
| [
{
"created": "Sat, 7 Sep 2013 18:39:53 GMT",
"version": "v1"
}
] | 2013-09-10 | [
[
"Ivan",
"Gabor",
""
],
[
"Grolmusz",
"Vince",
""
]
] | OPTICS is a density-based clustering algorithm that performs well in a wide variety of applications. For a set of input objects, the algorithm creates a so-called reachability plot that can be either used to produce cluster membership assignments, or interpreted itself as an expressive two-dimensional representation of the density-based clustering structure of the input set, even if the input set is embedded in higher dimensions. The main focus of this work is a visualization method that can be used to assign colours to all entries of the input database, based on hierarchically represented a-priori knowledge available for each of these objects. Based on two different, bioinformatics-related applications we illustrate how the proposed method can be efficiently used to identify clusters with proven real-life relevance. |
2405.15489 | Yeqing Lin | Yeqing Lin, Minji Lee, Zhao Zhang, Mohammed AlQuraishi | Out of Many, One: Designing and Scaffolding Proteins at the Scale of the
Structural Universe with Genie 2 | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Protein diffusion models have emerged as a promising approach for protein
design. One such pioneering model is Genie, a method that asymmetrically
represents protein structures during the forward and backward processes, using
simple Gaussian noising for the former and expressive SE(3)-equivariant
attention for the latter. In this work we introduce Genie 2, extending Genie to
capture a larger and more diverse protein structure space through architectural
innovations and massive data augmentation. Genie 2 adds motif scaffolding
capabilities via a novel multi-motif framework that designs co-occurring motifs
with unspecified inter-motif positions and orientations. This makes possible
complex protein designs that engage multiple interaction partners and perform
multiple functions. On both unconditional and conditional generation, Genie 2
achieves state-of-the-art performance, outperforming all known methods on key
design metrics including designability, diversity, and novelty. Genie 2 also
solves more motif scaffolding problems than other methods and does so with more
unique and varied solutions. Taken together, these advances set a new standard
for structure-based protein design. Genie 2 inference and training code, as
well as model weights, are freely available at:
https://github.com/aqlaboratory/genie2.
| [
{
"created": "Fri, 24 May 2024 12:11:41 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Lin",
"Yeqing",
""
],
[
"Lee",
"Minji",
""
],
[
"Zhang",
"Zhao",
""
],
[
"AlQuraishi",
"Mohammed",
""
]
] | Protein diffusion models have emerged as a promising approach for protein design. One such pioneering model is Genie, a method that asymmetrically represents protein structures during the forward and backward processes, using simple Gaussian noising for the former and expressive SE(3)-equivariant attention for the latter. In this work we introduce Genie 2, extending Genie to capture a larger and more diverse protein structure space through architectural innovations and massive data augmentation. Genie 2 adds motif scaffolding capabilities via a novel multi-motif framework that designs co-occurring motifs with unspecified inter-motif positions and orientations. This makes possible complex protein designs that engage multiple interaction partners and perform multiple functions. On both unconditional and conditional generation, Genie 2 achieves state-of-the-art performance, outperforming all known methods on key design metrics including designability, diversity, and novelty. Genie 2 also solves more motif scaffolding problems than other methods and does so with more unique and varied solutions. Taken together, these advances set a new standard for structure-based protein design. Genie 2 inference and training code, as well as model weights, are freely available at: https://github.com/aqlaboratory/genie2. |
1209.1494 | Serik Sagitov | Krzysztof Bartoszek, Graham Jones, Bengt Oxelman, and Serik Sagitov | Time to a single hybridization event in a group of species with unknown
ancestral history | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a stochastic process for the generation of species which combines
a Yule process with a simple model for hybridization between pairs of
co-existent species. We assume that the origin of the process, when there was
one species, occurred at an unknown time in the past, and we condition the
process on producing n species via the Yule process and a single hybridization
event. We prove results about the distribution of the time of the hybridization
event. In particular we calculate a formula for all moments, and show that
under various conditions, the distribution tends to an exponential with rate
twice that of the birth rate for the Yule process.
| [
{
"created": "Fri, 7 Sep 2012 11:04:00 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Sep 2012 05:04:13 GMT",
"version": "v2"
}
] | 2012-09-11 | [
[
"Bartoszek",
"Krzysztof",
""
],
[
"Jones",
"Graham",
""
],
[
"Oxelman",
"Bengt",
""
],
[
"Sagitov",
"Serik",
""
]
] | We consider a stochastic process for the generation of species which combines a Yule process with a simple model for hybridization between pairs of co-existent species. We assume that the origin of the process, when there was one species, occurred at an unknown time in the past, and we condition the process on producing n species via the Yule process and a single hybridization event. We prove results about the distribution of the time of the hybridization event. In particular we calculate a formula for all moments, and show that under various conditions, the distribution tends to an exponential with rate twice that of the birth rate for the Yule process. |