id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0801.0406 | Przemyslaw Biecek | Marta Zawierta, Wojciech Waga, Dorota Mackiewicz, Przemyslaw Biecek,
Stanislaw Cebrat | Phase Transition in Sexual Reproduction and Biological Evolution | 13 pages, 8 figures | null | 10.1142/S0129183108012595 | null | q-bio.PE | null | Using a Monte Carlo model of biological evolution, we have discovered that
populations can switch between two different strategies of their genomes'
evolution: Darwinian purifying selection and complementing the haplotypes. The
first is exploited in large panmictic populations, the second in small, highly
inbred populations. The choice depends on the crossover
frequency. There is a power law relation between the critical value of
crossover frequency and the size of the panmictic population. Under constant
inbreeding, this critical value of crossover does not depend on the population
size and has the character of a phase transition. Close to this value, sympatric
speciation is observed.
| [
{
"created": "Wed, 2 Jan 2008 15:34:38 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Zawierta",
"Marta",
""
],
[
"Waga",
"Wojciech",
""
],
[
"Mackiewicz",
"Dorota",
""
],
[
"Biecek",
"Przemyslaw",
""
],
[
"Cebrat",
"Stanislaw",
""
]
] | Using a Monte Carlo model of biological evolution, we have discovered that populations can switch between two different strategies of their genomes' evolution: Darwinian purifying selection and complementing the haplotypes. The first is exploited in large panmictic populations, the second in small, highly inbred populations. The choice depends on the crossover frequency. There is a power law relation between the critical value of crossover frequency and the size of the panmictic population. Under constant inbreeding, this critical value of crossover does not depend on the population size and has the character of a phase transition. Close to this value, sympatric speciation is observed. |
2003.06793 | Tat Dat Tran | Omri Tal and Tat Dat Tran | Adaptive Bet-Hedging Revisited: Considerations of Risk and Time Horizon | Accepted for publication in Bulletin of Mathematical Biology | null | null | null | q-bio.PE cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of adaptive bet-hedging commonly adopt insights from Kelly's famous
work on optimal gambling strategies and the financial value of information. In
particular, such models seek evolutionary solutions that maximize long term
average growth rate of lineages, even in the face of highly stochastic growth
trajectories. Here, we argue for extensive departures from the standard
approach to better account for evolutionary contingencies. Crucially, we
incorporate considerations of volatility minimization, motivated by interim
extinction risk in finite populations, within a finite time horizon approach to
growth maximization. We find that a game-theoretic competitive-optimality
approach best captures these additional constraints, and derive the equilibria
solutions under straightforward fitness payoff functions and extinction risks.
We show that for both maximal growth and minimal time relative payoffs the
log-optimal strategy is a unique pure-strategy symmetric equilibrium, invariant
with evolutionary time horizon and robust to low extinction risks.
| [
{
"created": "Sun, 15 Mar 2020 11:09:08 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Mar 2020 13:57:20 GMT",
"version": "v2"
}
] | 2020-03-18 | [
[
"Tal",
"Omri",
""
],
[
"Tran",
"Tat Dat",
""
]
] | Models of adaptive bet-hedging commonly adopt insights from Kelly's famous work on optimal gambling strategies and the financial value of information. In particular, such models seek evolutionary solutions that maximize long term average growth rate of lineages, even in the face of highly stochastic growth trajectories. Here, we argue for extensive departures from the standard approach to better account for evolutionary contingencies. Crucially, we incorporate considerations of volatility minimization, motivated by interim extinction risk in finite populations, within a finite time horizon approach to growth maximization. We find that a game-theoretic competitive-optimality approach best captures these additional constraints, and derive the equilibria solutions under straightforward fitness payoff functions and extinction risks. We show that for both maximal growth and minimal time relative payoffs the log-optimal strategy is a unique pure-strategy symmetric equilibrium, invariant with evolutionary time horizon and robust to low extinction risks. |
1504.01142 | Nam-phuong Nguyen | Nam-phuong Nguyen, Siavash Mirarab, Keerthana Kumar, Tandy Warnow | Ultra-large alignments using Phylogeny-aware Profiles | Online supplemental materials and data are available at
http://www.cs.utexas.edu/users/phylo/software/upp/ | null | null | null | q-bio.GN cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many biological questions, including the estimation of deep evolutionary
histories and the detection of remote homology between protein sequences, rely
upon multiple sequence alignments (MSAs) and phylogenetic trees of large
datasets. However, accurate large-scale multiple sequence alignment is very
difficult, especially when the dataset contains fragmentary sequences. We
present UPP, an MSA method that uses a new machine learning technique - the
Ensemble of Hidden Markov Models - that we propose here. UPP produces highly
accurate alignments for both nucleotide and amino acid sequences, even on
ultra-large datasets or datasets containing fragmentary sequences. UPP is
available at https://github.com/smirarab/sepp.
| [
{
"created": "Sun, 5 Apr 2015 17:15:38 GMT",
"version": "v1"
}
] | 2015-04-07 | [
[
"Nguyen",
"Nam-phuong",
""
],
[
"Mirarab",
"Siavash",
""
],
[
"Kumar",
"Keerthana",
""
],
[
"Warnow",
"Tandy",
""
]
] | Many biological questions, including the estimation of deep evolutionary histories and the detection of remote homology between protein sequences, rely upon multiple sequence alignments (MSAs) and phylogenetic trees of large datasets. However, accurate large-scale multiple sequence alignment is very difficult, especially when the dataset contains fragmentary sequences. We present UPP, an MSA method that uses a new machine learning technique - the Ensemble of Hidden Markov Models - that we propose here. UPP produces highly accurate alignments for both nucleotide and amino acid sequences, even on ultra-large datasets or datasets containing fragmentary sequences. UPP is available at https://github.com/smirarab/sepp. |
1909.03109 | Qiongge Li | Qiongge Li, Luca Pasquini, Gino Del Ferraro, Madeleine Gene, Kyung K.
Peck, Hern\'an A. Makse and Andrei I. Holodny | Monolingual and bilingual language networks in healthy subjects using
functional MRI and graph theory | 17 pages, 8 figures | null | null | null | q-bio.NC physics.bio-ph physics.data-an physics.med-ph physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-surgical language mapping with functional magnetic resonance imaging
(fMRI) is routinely conducted to assist the neurosurgeon in preventing damage
to brain regions responsible for language. Functional differences exist between
the monolingual and the bilingual brain, whereas clinical fMRI tasks are
typically conducted in a single language. The presence of secondary language
processing mechanisms is a potential source of error in the inferred language
map. From fMRI data of healthy bilingual and monolingual subjects we obtain
language maps as functional networks. Our results show a sub-network "core"
architecture consisting of the Broca's, pre-supplementary motor, and premotor
areas present across all subjects. Wernicke's Area (WA) was found to connect to
the "core" to a different extent across groups. The $k$-core centrality measure
shows that "core" areas belong to the maximum core, while WA and other fROIs vary
across groups. The results may provide a benchmark to preserve equal treatment
outcomes for bilingual patients.
| [
{
"created": "Fri, 6 Sep 2019 19:59:04 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 16:27:18 GMT",
"version": "v2"
}
] | 2020-06-16 | [
[
"Li",
"Qiongge",
""
],
[
"Pasquini",
"Luca",
""
],
[
"Del Ferraro",
"Gino",
""
],
[
"Gene",
"Madeleine",
""
],
[
"Peck",
"Kyung K.",
""
],
[
"Makse",
"Hernán A.",
""
],
[
"Holodny",
"Andrei I.",
""
]
] | Pre-surgical language mapping with functional magnetic resonance imaging (fMRI) is routinely conducted to assist the neurosurgeon in preventing damage to brain regions responsible for language. Functional differences exist between the monolingual and the bilingual brain, whereas clinical fMRI tasks are typically conducted in a single language. The presence of secondary language processing mechanisms is a potential source of error in the inferred language map. From fMRI data of healthy bilingual and monolingual subjects we obtain language maps as functional networks. Our results show a sub-network "core" architecture consisting of the Broca's, pre-supplementary motor, and premotor areas present across all subjects. Wernicke's Area (WA) was found to connect to the "core" to a different extent across groups. The $k$-core centrality measure shows that "core" areas belong to the maximum core, while WA and other fROIs vary across groups. The results may provide a benchmark to preserve equal treatment outcomes for bilingual patients. |
1405.5025 | Markus F. Weber | Markus F. Weber, Gabriele Poxleitner, Elke Hebisch, Erwin Frey and
Madeleine Opitz | Chemical warfare and survival strategies in bacterial range expansions | 31 pages, 5 figures | J. R. Soc. Interface 11, 20140172 (2014) | 10.1098/rsif.2014.0172 | LMU-ASC 31/14 | q-bio.PE physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dispersal of species is a fundamental ecological process in the evolution and
maintenance of biodiversity. Limited control over ecological parameters has
hindered progress in understanding what enables species to colonise new
areas, as well as the importance of inter-species interactions. Such control is
necessary to construct reliable mathematical models of ecosystems. In our work,
we studied dispersal in the context of bacterial range expansions and
identified the major determinants of species coexistence for a bacterial model
system of three Escherichia coli strains (toxin producing, sensitive, and
resistant). Genetic engineering allowed us to tune strain growth rates and to
design different ecological scenarios (cyclic and hierarchical). We found that
coexistence of all strains depended on three strongly interdependent factors:
composition of inoculum, relative strain growth rates, and effective toxin
range. Robust agreement between our experiments and a thoroughly calibrated
computational model enabled us to extrapolate these intricate interdependencies
in terms of phenomenological biodiversity laws. Our mathematical analysis also
suggested that cyclic dominance between strains is not a prerequisite for
coexistence in competitive range expansions. Instead, robust three-strain
coexistence required a balance between growth rates and either a reduced
initial ratio of the toxin-producing strain, or a sufficiently short toxin
range.
| [
{
"created": "Tue, 20 May 2014 10:38:22 GMT",
"version": "v1"
}
] | 2014-05-21 | [
[
"Weber",
"Markus F.",
""
],
[
"Poxleitner",
"Gabriele",
""
],
[
"Hebisch",
"Elke",
""
],
[
"Frey",
"Erwin",
""
],
[
"Opitz",
"Madeleine",
""
]
] | Dispersal of species is a fundamental ecological process in the evolution and maintenance of biodiversity. Limited control over ecological parameters has hindered progress in understanding what enables species to colonise new areas, as well as the importance of inter-species interactions. Such control is necessary to construct reliable mathematical models of ecosystems. In our work, we studied dispersal in the context of bacterial range expansions and identified the major determinants of species coexistence for a bacterial model system of three Escherichia coli strains (toxin producing, sensitive, and resistant). Genetic engineering allowed us to tune strain growth rates and to design different ecological scenarios (cyclic and hierarchical). We found that coexistence of all strains depended on three strongly interdependent factors: composition of inoculum, relative strain growth rates, and effective toxin range. Robust agreement between our experiments and a thoroughly calibrated computational model enabled us to extrapolate these intricate interdependencies in terms of phenomenological biodiversity laws. Our mathematical analysis also suggested that cyclic dominance between strains is not a prerequisite for coexistence in competitive range expansions. Instead, robust three-strain coexistence required a balance between growth rates and either a reduced initial ratio of the toxin-producing strain, or a sufficiently short toxin range. |
2007.01295 | Diego Paolo Ferruzzo Correa PhD. | Cristiane M. Batistela, Diego P. F. Correa, \'Atila M Bueno, and
Jos\'e R. C. Piqueira | Compartmental model with loss of immunity: analysis and parameters
estimation for Covid-19 | Second version. Under review | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The outbreak of Covid-19 led the world to an unprecedented health and
economic crisis. In an attempt to respond to this emergency, researchers
worldwide are intensively studying the Covid-19 pandemic dynamics. In this
work, a SIRSi compartmental model is proposed, which is a modification of the
known classical SIR model. The proposed SIRSi model considers differences in
the immunization within a population, and the possibility of unreported or
asymptomatic cases. The model is fitted to data from three major cities of S\~ao
Paulo State, in Brazil, namely, S\~ao Paulo, Santos and Campinas, providing estimates
on the duration and peaks of the outbreak.
| [
{
"created": "Thu, 2 Jul 2020 17:59:37 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Jul 2020 11:46:25 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Aug 2020 17:10:50 GMT",
"version": "v3"
}
] | 2020-08-07 | [
[
"Batistela",
"Cristiane M.",
""
],
[
"Correa",
"Diego P. F.",
""
],
[
"Bueno",
"Átila M",
""
],
[
"Piqueira",
"José R. C.",
""
]
] | The outbreak of Covid-19 led the world to an unprecedented health and economic crisis. In an attempt to respond to this emergency, researchers worldwide are intensively studying the Covid-19 pandemic dynamics. In this work, a SIRSi compartmental model is proposed, which is a modification of the known classical SIR model. The proposed SIRSi model considers differences in the immunization within a population, and the possibility of unreported or asymptomatic cases. The model is fitted to data from three major cities of S\~ao Paulo State, in Brazil, namely, S\~ao Paulo, Santos and Campinas, providing estimates on the duration and peaks of the outbreak. |
2005.11085 | Francesco Piazza | Timoteo Carletti, Duccio Fanelli, Francesco Piazza | COVID-19: The unreasonable effectiveness of simple models | main paper + supplementary material | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When the novel coronavirus disease SARS-CoV2 (COVID-19) was officially
declared a pandemic by the WHO in March 2020, the scientific community had
already braced up in the effort of making sense of the fast-growing wealth of
data gathered by national authorities all over the world. However, despite the
diversity of novel theoretical approaches and the comprehensiveness of many
widely established models, the official figures that recount the course of the
outbreak still sketch a largely elusive and intimidating picture. Here we show
unambiguously that the dynamics of the COVID-19 outbreak belongs to the simple
universality class of the SIR model and extensions thereof. Our analysis
naturally leads us to establish that there exists a fundamental limitation to
any theoretical approach, namely the unpredictable non-stationarity of the
testing frames behind the reported figures. However, we show how such bias can
be quantified self-consistently and employed to mine useful and accurate
information from the data. In particular, we describe how the time evolution of
the reporting rates controls the occurrence of the apparent epidemic peak,
which typically follows the true one in countries that were not vigorous enough
in their testing at the onset of the outbreak. The importance of testing early
and resolutely appears as a natural corollary of our analysis, as countries
that tested massively at the start clearly had their true peak earlier and fewer
deaths overall.
| [
{
"created": "Fri, 22 May 2020 10:04:41 GMT",
"version": "v1"
}
] | 2020-05-25 | [
[
"Carletti",
"Timoteo",
""
],
[
"Fanelli",
"Duccio",
""
],
[
"Piazza",
"Francesco",
""
]
] | When the novel coronavirus disease SARS-CoV2 (COVID-19) was officially declared a pandemic by the WHO in March 2020, the scientific community had already braced up in the effort of making sense of the fast-growing wealth of data gathered by national authorities all over the world. However, despite the diversity of novel theoretical approaches and the comprehensiveness of many widely established models, the official figures that recount the course of the outbreak still sketch a largely elusive and intimidating picture. Here we show unambiguously that the dynamics of the COVID-19 outbreak belongs to the simple universality class of the SIR model and extensions thereof. Our analysis naturally leads us to establish that there exists a fundamental limitation to any theoretical approach, namely the unpredictable non-stationarity of the testing frames behind the reported figures. However, we show how such bias can be quantified self-consistently and employed to mine useful and accurate information from the data. In particular, we describe how the time evolution of the reporting rates controls the occurrence of the apparent epidemic peak, which typically follows the true one in countries that were not vigorous enough in their testing at the onset of the outbreak. The importance of testing early and resolutely appears as a natural corollary of our analysis, as countries that tested massively at the start clearly had their true peak earlier and fewer deaths overall. |
1507.05970 | Pengsheng Zheng | Pengsheng Zheng | Chaotic Neuronal Oscillations in Spontaneous Cortical-Subcortical
Networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oscillatory activities are widely observed in specific frequency bands of
recorded field potentials in different brain regions, and play critical roles
in processing neural information. Understanding the structure of these
oscillatory activities is essential for understanding brain function. So
far many details remain elusive about their rhythmic structures and how these
oscillations are generated. We show that many oscillatory activities in
spontaneous cortical-subcortical networks, such as delta, spindle, gamma,
high-gamma and sharp wave ripple bands in different brain regions, are genuine
chaotic time series which can be reconstructed as chaotic attractors through
appropriately selected embedding delay and dimension. The reconstructed
attractors are approximated by a simple radial basis function, enabling
high-precision short-term prediction. Simultaneously recorded oscillatory activities
in multiple brain regions differ greatly in terms of temporal phase and
amplitude but can be approximated by the same function. Our results suggest
that neural oscillations are produced by deterministic chaotic systems. The
occurrence of neural oscillation events is predetermined, and the brain
possibly knows when and where the information will be processed and transferred
in the future as a result of the deterministic dynamics.
| [
{
"created": "Tue, 21 Jul 2015 20:05:42 GMT",
"version": "v1"
}
] | 2015-07-23 | [
[
"Zheng",
"Pengsheng",
""
]
] | Oscillatory activities are widely observed in specific frequency bands of recorded field potentials in different brain regions, and play critical roles in processing neural information. Understanding the structure of these oscillatory activities is essential for understanding brain function. So far many details remain elusive about their rhythmic structures and how these oscillations are generated. We show that many oscillatory activities in spontaneous cortical-subcortical networks, such as delta, spindle, gamma, high-gamma and sharp wave ripple bands in different brain regions, are genuine chaotic time series which can be reconstructed as chaotic attractors through appropriately selected embedding delay and dimension. The reconstructed attractors are approximated by a simple radial basis function, enabling high-precision short-term prediction. Simultaneously recorded oscillatory activities in multiple brain regions differ greatly in terms of temporal phase and amplitude but can be approximated by the same function. Our results suggest that neural oscillations are produced by deterministic chaotic systems. The occurrence of neural oscillation events is predetermined, and the brain possibly knows when and where the information will be processed and transferred in the future as a result of the deterministic dynamics. |
2305.11917 | Adam Lamson | Zijun Zhang, Adam R. Lamson, Michael Shelley, Olga Troyanskaya | Interpretable neural architecture search and transfer learning for
understanding CRISPR/Cas9 off-target enzymatic reactions | 23 pages, 4 figures | null | null | null | q-bio.MN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Finely-tuned enzymatic pathways control cellular processes, and their
dysregulation can lead to disease. Creating predictive and interpretable models
for these pathways is challenging because of the complexity of the pathways and
of the cellular and genomic contexts. Here we introduce Elektrum, a deep
learning framework which addresses these challenges with data-driven and
biophysically interpretable models for determining the kinetics of biochemical
systems. First, it uses in vitro kinetic assays to rapidly hypothesize an
ensemble of high-quality Kinetically Interpretable Neural Networks (KINNs) that
predict reaction rates. It then employs a novel transfer learning step, where
the KINNs are inserted as intermediary layers into deeper convolutional neural
networks, fine-tuning the predictions for reaction-dependent in vivo outcomes.
Elektrum makes effective use of the limited, but clean in vitro data and the
complex, yet plentiful in vivo data that captures cellular context. We apply
Elektrum to predict CRISPR-Cas9 off-target editing probabilities and
demonstrate that Elektrum achieves state-of-the-art performance, regularizes
neural network architectures, and maintains physical interpretability.
| [
{
"created": "Thu, 18 May 2023 23:49:42 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Sep 2023 16:36:45 GMT",
"version": "v2"
}
] | 2023-10-02 | [
[
"Zhang",
"Zijun",
""
],
[
"Lamson",
"Adam R.",
""
],
[
"Shelley",
"Michael",
""
],
[
"Troyanskaya",
"Olga",
""
]
] | Finely-tuned enzymatic pathways control cellular processes, and their dysregulation can lead to disease. Creating predictive and interpretable models for these pathways is challenging because of the complexity of the pathways and of the cellular and genomic contexts. Here we introduce Elektrum, a deep learning framework which addresses these challenges with data-driven and biophysically interpretable models for determining the kinetics of biochemical systems. First, it uses in vitro kinetic assays to rapidly hypothesize an ensemble of high-quality Kinetically Interpretable Neural Networks (KINNs) that predict reaction rates. It then employs a novel transfer learning step, where the KINNs are inserted as intermediary layers into deeper convolutional neural networks, fine-tuning the predictions for reaction-dependent in vivo outcomes. Elektrum makes effective use of the limited, but clean in vitro data and the complex, yet plentiful in vivo data that captures cellular context. We apply Elektrum to predict CRISPR-Cas9 off-target editing probabilities and demonstrate that Elektrum achieves state-of-the-art performance, regularizes neural network architectures, and maintains physical interpretability. |
1908.05261 | Baltazar Espinoza | Baltazar Espinoza, Carlos Castillo-Chavez, Charles Perrings | Mobility restrictions for the control of epidemics: When do they work? | null | null | 10.1371/journal.pone.0235731 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobility restrictions - travel advisories, trade and travel bans, border
closures and, in extreme cases, area quarantines or cordons sanitaires - are
among the most widely used measures to control infectious diseases.
Restrictions of this kind were important in the response to epidemics of SARS
(2003), H1N1 influenza (2009), and Ebola (2014). However, they do not always
work as expected. The imposition of a cordon sanitaire to control the 2014 West
African Ebola outbreak, for example, is argued to have led to a
higher-than-expected number of cases in the quarantined area. To determine when
mobility restrictions reduce the size of an epidemic, we use a model of disease
transmission within and between economically heterogeneous locally connected
communities. One community comprises a low-risk, resource-rich, low-density
population with access to effective medical resources. The other comprises a
high-risk, resource-poor, high-density population without access to effective
medical resources. We find that the overall size of an epidemic centered in the
high-risk community is sensitive to the stringency of mobility restrictions
between the two communities. Unrestricted mobility between the two risk
communities increases the number of secondary cases in the low-risk community
but reduces the overall epidemic size. By contrast, the imposition of a cordon
sanitaire around the high-risk community reduces the number of secondary
infections in the low-risk community but increases the overall epidemic size.
The degree to which mobility restrictions increase or decrease the overall
epidemic size depends on the level of risk in each community and the
characteristics of the disease.
| [
{
"created": "Wed, 14 Aug 2019 17:38:55 GMT",
"version": "v1"
}
] | 2020-09-09 | [
[
"Espinoza",
"Baltazar",
""
],
[
"Castillo-Chavez",
"Carlos",
""
],
[
"Perrings",
"Charles",
""
]
] | Mobility restrictions - travel advisories, trade and travel bans, border closures and, in extreme cases, area quarantines or cordons sanitaires - are among the most widely used measures to control infectious diseases. Restrictions of this kind were important in the response to epidemics of SARS (2003), H1N1 influenza (2009), and Ebola (2014). However, they do not always work as expected. The imposition of a cordon sanitaire to control the 2014 West African Ebola outbreak, for example, is argued to have led to a higher-than-expected number of cases in the quarantined area. To determine when mobility restrictions reduce the size of an epidemic, we use a model of disease transmission within and between economically heterogeneous locally connected communities. One community comprises a low-risk, resource-rich, low-density population with access to effective medical resources. The other comprises a high-risk, resource-poor, high-density population without access to effective medical resources. We find that the overall size of an epidemic centered in the high-risk community is sensitive to the stringency of mobility restrictions between the two communities. Unrestricted mobility between the two risk communities increases the number of secondary cases in the low-risk community but reduces the overall epidemic size. By contrast, the imposition of a cordon sanitaire around the high-risk community reduces the number of secondary infections in the low-risk community but increases the overall epidemic size. The degree to which mobility restrictions increase or decrease the overall epidemic size depends on the level of risk in each community and the characteristics of the disease. |
1706.04253 | Artem Novozhilov | Yuri S. Semenov, Artem S. Novozhilov | Generalized Quasispecies Model on Finite Metric Spaces: Isometry Groups
and Spectral Properties of Evolutionary Matrices | 32 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quasispecies model introduced by Eigen in 1971 has close connections with
the isometry group of the space of binary sequences relative to the Hamming
distance metric. Generalizing this observation we introduce an abstract
quasispecies model on a finite metric space $X$ together with a group of
isometries $\Gamma$ acting transitively on $X$. We show that if the domain of
the fitness function has a natural decomposition into the union of $t$
$G$-orbits, $G$ being a subgroup of $\Gamma$, then the dominant eigenvalue of
the evolutionary matrix satisfies an algebraic equation of degree at most
$t\cdot {\rm rk}_{\mathbf Z} R$, where $R$ is what we call the orbital ring.
The general theory is illustrated by two examples, in both of which $X$ is
taken to be the metric space of vertices of a regular polytope with the "edge"
metric; namely, the case of a regular $m$-gon and of a hyperoctahedron are
considered.
| [
{
"created": "Tue, 13 Jun 2017 20:58:54 GMT",
"version": "v1"
}
] | 2017-06-15 | [
[
"Semenov",
"Yuri S.",
""
],
[
"Novozhilov",
"Artem S.",
""
]
] | The quasispecies model introduced by Eigen in 1971 has close connections with the isometry group of the space of binary sequences relative to the Hamming distance metric. Generalizing this observation we introduce an abstract quasispecies model on a finite metric space $X$ together with a group of isometries $\Gamma$ acting transitively on $X$. We show that if the domain of the fitness function has a natural decomposition into the union of $t$ $G$-orbits, $G$ being a subgroup of $\Gamma$, then the dominant eigenvalue of the evolutionary matrix satisfies an algebraic equation of degree at most $t\cdot {\rm rk}_{\mathbf Z} R$, where $R$ is what we call the orbital ring. The general theory is illustrated by two examples, in both of which $X$ is taken to be the metric space of vertices of a regular polytope with the "edge" metric; namely, the case of a regular $m$-gon and of a hyperoctahedron are considered. |
q-bio/0701029 | Horacio Ceva | Diego Medan, Roberto P.J. Perazzo, Mariano Devoto, Enrique Burgos,
Martin G. Zimmermann, Horacio Ceva, Ana M. Delbue | Analysis and Assembling of Network Structure in Mutualistic Systems | J. of. Theor. Biology, in press | Journal of Theoretical Biology 246 (2007) 510-521 | 10.1016/j.jtbi.2006.12.033 | null | q-bio.PE | null | It has been observed that mutualistic bipartite networks have a nested
structure of interactions. In addition, the degree distributions associated
with the two guilds involved in such networks (e.g. plants & pollinators or
plants & seed dispersers) approximately follow a truncated power law. We show
that nestedness and truncated power law distributions are intimately linked,
and that any biological reasons for such truncation are superimposed to finite
size effects . We further explore the internal organization of bipartite
networks by developing a self-organizing network model (SNM) that reproduces
empirical observations of pollination systems of widely different sizes. Since
the only inputs to the SNM are numbers of plant and animal species, and their
interactions (i.e., no data on local abundance of the interacting species are
needed), we suggest that the well-known association between species frequency
of interaction and species degree is a consequence, rather than a cause, of the
observed network structure.
| [
{
"created": "Thu, 18 Jan 2007 20:42:31 GMT",
"version": "v1"
}
] | 2007-09-20 | [
[
"Medan",
"Diego",
""
],
[
"Perazzo",
"Roberto P. J.",
""
],
[
"Devoto",
"Mariano",
""
],
[
"Burgos",
"Enrique",
""
],
[
"Zimmermann",
"Martin G.",
""
],
[
"Ceva",
"Horacio",
""
],
[
"Delbue",
"Ana M.",
""
]
] | It has been observed that mutualistic bipartite networks have a nested structure of interactions. In addition, the degree distributions associated with the two guilds involved in such networks (e.g. plants & pollinators or plants & seed dispersers) approximately follow a truncated power law. We show that nestedness and truncated power law distributions are intimately linked, and that any biological reasons for such truncation are superimposed to finite size effects . We further explore the internal organization of bipartite networks by developing a self-organizing network model (SNM) that reproduces empirical observations of pollination systems of widely different sizes. Since the only inputs to the SNM are numbers of plant and animal species, and their interactions (i.e., no data on local abundance of the interacting species are needed), we suggest that the well-known association between species frequency of interaction and species degree is a consequence rather than a cause, of the observed network structure. |
2311.04807 | Romain Rieger | Romain Rieger, Sema Kaderli, Caroline Boulocher | In vivo impact on rabbit subchondral bone of viscosupplementation with a
hyaluronic acid antioxidant conjugate | 17 pages, 3 figures, 1 table | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | To assess the impact of an antioxidant-conjugated Hyaluronic Acid (HA) on
articular cartilage and subchondral bone in the context of osteoarthritis (OA),
we conducted a study using a hydrogel composed of HA-4-aminoresorcinol (HA4AR)
and compared it to a commercially available high molecular weight HA
formulation in a rabbit model of OA. Eighteen rabbits underwent unilateral
anterior cruciate ligament transection (ACLT) and were categorized into three
groups of six rabbits (Saline-group, HA-group and HA4AR-group) depending on the
intra-articular injection compound. Eight contralateral knees were used as
non-operated reference points (Contralateral-group). Iodine-enhanced
micro-computed tomography imaging was performed six weeks post-surgery to study
the articular cartilage volume and thickness as well as the subchondral bone
microarchitectural parameters and mineral density. In the HA and HA4AR groups,
the mean cartilage thickness was found to be similar to that of the
Contralateral-group. However, when we compared the HA-group to the HA4AR-group,
we observed a significant reduction in subchondral bone plate tissue mineral
density (p<0.05). In contrast, when we compared the HA4AR-group to the
Saline-group, no significant differences were noted in trabecular subchondral
bone microarchitectural parameters and subchondral bone plate and trabecular
bone mineral densities. Additionally, when the HA-group was compared to the
Saline-group, a notable decrease in subchondral bone plate tissue mineral
density was evident (p<0.01). Notably, the HA4AR hydrogel, comprising
HA-antioxidant conjugate, effectively preserved subchondral bone plate tissue
mineral density when compared to HA alone. Nevertheless, other aspects of bone
microarchitectural parameters remained unaltered, resulting in subchondral bone
mineral loss six weeks after surgery in the rabbit model.
| [
{
"created": "Wed, 8 Nov 2023 16:30:19 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2024 17:57:28 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jan 2024 15:14:27 GMT",
"version": "v3"
},
{
"created": "Mon, 8 Jul 2024 14:48:02 GMT",
"version": "v4"
}
] | 2024-07-09 | [
[
"Rieger",
"Romain",
""
],
[
"Kaderli",
"Sema",
""
],
[
"Boulocher",
"Caroline",
""
]
] | To assess the impact of an antioxidant-conjugated Hyaluronic Acid (HA) on articular cartilage and subchondral bone in the context of osteoarthritis (OA), we conducted a study using a hydrogel composed of HA-4-aminoresorcinol (HA4AR) and compared it to a commercially available high molecular weight HA formulation in a rabbit model of OA. Eighteen rabbits underwent unilateral anterior cruciate ligament transection (ACLT) and were categorized into three groups of six rabbits (Saline-group, HA-group and HA4AR-group) depending on the intra-articular injection compound. Eight contralateral knees were used as non-operated reference points (Contralateral-group). Iodine-enhanced micro-computed tomography imaging was performed six weeks post-surgery to study the articular cartilage volume and thickness as well as the subchondral bone microarchitectural parameters and mineral density. In the HA and HA4AR groups, the mean cartilage thickness was found to be similar to that of the Contralateral-group. However, when we compared the HA-group to the HA4AR-group, we observed a significant reduction in subchondral bone plate tissue mineral density (p<0.05). In contrast, when we compared the HA4AR-group to the Saline-group, no significant differences were noted in trabecular subchondral bone microarchitectural parameters and subchondral bone plate and trabecular bone mineral densities. Additionally, when the HA-group was compared to the Saline-group, a notable decrease in subchondral bone plate tissue mineral density was evident (p<0.01). Notably, the HA4AR hydrogel, comprising HA-antioxidant conjugate, effectively preserved subchondral bone plate tissue mineral density when compared to HA alone. Nevertheless, other aspects of bone microarchitectural parameters remained unaltered, resulting in subchondral bone mineral loss six weeks after surgery in the rabbit model. |
0704.2200 | Stefan Bornholdt | Maria I. Davidich, Stefan Bornholdt | Boolean network model predicts cell cycle sequence of fission yeast | 10 pages, 3 figures | null | 10.1371/journal.pone.0001672 | null | q-bio.MN | null | A Boolean network model of the cell-cycle regulatory network of fission yeast
(Schizosaccharomyces Pombe) is constructed solely on the basis of the known
biochemical interaction topology. Simulating the model in the computer
faithfully reproduces the known sequence of regulatory activity patterns along
the cell cycle of the living cell. Contrary to existing differential equation
models, no parameters enter the model except the structure of the regulatory
circuitry. The dynamical properties of the model indicate that the biological
dynamical sequence is robustly implemented in the regulatory network, with the
biological stationary state G1 corresponding to the dominant attractor in state
space, and with the biological regulatory sequence being a strongly attractive
trajectory. Comparing the fission yeast cell-cycle model to a similar model of
the corresponding network in S. cerevisiae, a remarkable difference in
circuitry, as well as dynamics is observed. While the latter operates in a
strongly damped mode, driven by external excitation, the S. pombe network
represents an auto-excited system with external damping.
| [
{
"created": "Tue, 17 Apr 2007 18:47:31 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Davidich",
"Maria I.",
""
],
[
"Bornholdt",
"Stefan",
""
]
] | A Boolean network model of the cell-cycle regulatory network of fission yeast (Schizosaccharomyces Pombe) is constructed solely on the basis of the known biochemical interaction topology. Simulating the model in the computer, faithfully reproduces the known sequence of regulatory activity patterns along the cell cycle of the living cell. Contrary to existing differential equation models, no parameters enter the model except the structure of the regulatory circuitry. The dynamical properties of the model indicate that the biological dynamical sequence is robustly implemented in the regulatory network, with the biological stationary state G1 corresponding to the dominant attractor in state space, and with the biological regulatory sequence being a strongly attractive trajectory. Comparing the fission yeast cell-cycle model to a similar model of the corresponding network in S. cerevisiae, a remarkable difference in circuitry, as well as dynamics is observed. While the latter operates in a strongly damped mode, driven by external excitation, the S. pombe network represents an auto-excited system with external damping. |
1105.4705 | Marcus Kaiser | Marcus Kaiser | A Tutorial in Connectome Analysis: Topological and Spatial Features of
Brain Networks | Neuroimage, in press | Neuroimage. 2011 Aug 1;57(3):892-907 | 10.1016/j.neuroimage.2011.05.025 | null | q-bio.NC cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-throughput methods for yielding the set of connections in a neural
system, the connectome, are now being developed. This tutorial describes ways
to analyze the topological and spatial organization of the connectome at the
macroscopic level of connectivity between brain regions as well as the
microscopic level of connectivity between neurons. We will describe topological
features at three different levels: the local scale of individual nodes, the
regional scale of sets of nodes, and the global scale of the complete set of
nodes in a network. Such features can be used to characterize components of a
network and to compare different networks, e.g. the connectome of patients and
control subjects for clinical studies. At the global scale, different types of
networks can be distinguished and we will describe Erd\"os-R\'enyi random,
scale-free, small-world, modular, and hierarchical archetypes of networks.
Finally, the connectome also has a spatial organization and we describe methods
for analyzing wiring lengths of neural systems. As an introduction for new
researchers in the field of connectome analysis, we discuss the benefits and
limitations of each analysis approach.
| [
{
"created": "Tue, 24 May 2011 08:22:36 GMT",
"version": "v1"
}
] | 2011-12-23 | [
[
"Kaiser",
"Marcus",
""
]
] | High-throughput methods for yielding the set of connections in a neural system, the connectome, are now being developed. This tutorial describes ways to analyze the topological and spatial organization of the connectome at the macroscopic level of connectivity between brain regions as well as the microscopic level of connectivity between neurons. We will describe topological features at three different levels: the local scale of individual nodes, the regional scale of sets of nodes, and the global scale of the complete set of nodes in a network. Such features can be used to characterize components of a network and to compare different networks, e.g. the connectome of patients and control subjects for clinical studies. At the global scale, different types of networks can be distinguished and we will describe Erd\"os-R\'enyi random, scale-free, small-world, modular, and hierarchical archetypes of networks. Finally, the connectome also has a spatial organization and we describe methods for analyzing wiring lengths of neural systems. As an introduction for new researchers in the field of connectome analysis, we discuss the benefits and limitations of each analysis approach. |
1812.11884 | Gasper Tkacik | Sarah A Cepeda-Humerez and Jakob Ruess and Ga\v{s}per Tka\v{c}ik | Estimating information in time-varying signals | 32 pages | null | 10.1371/journal.pcbi.1007290 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Across diverse biological systems -- ranging from neural networks to
intracellular signaling and genetic regulatory networks -- the information
about changes in the environment is frequently encoded in the full temporal
dynamics of the network nodes. A pressing data-analysis challenge has thus been
to efficiently estimate the amount of information that these dynamics convey
from experimental data. Here we develop and evaluate decoding-based estimation
methods to lower bound the mutual information about a finite set of inputs,
encoded in single-cell high-dimensional time series data. For biological
reaction networks governed by the chemical Master equation, we derive
model-based information approximations and analytical upper bounds, against
which we benchmark our proposed model-free decoding estimators. In contrast to
the frequently-used k-nearest-neighbor estimator, decoding-based estimators
robustly extract a large fraction of the available information from
high-dimensional trajectories with a realistic number of data samples. We apply
these estimators to previously published data on Erk and Ca signaling in
mammalian cells and to yeast stress-response, and find that a substantial amount
of information about environmental state can be encoded by non-trivial response
statistics even in stationary signals. We argue that these single-cell,
decoding-based information estimates, rather than the commonly-used tests for
significant differences between selected population response statistics,
provide a proper and unbiased measure for the performance of biological
signaling networks.
| [
{
"created": "Mon, 31 Dec 2018 16:31:32 GMT",
"version": "v1"
}
] | 2020-07-01 | [
[
"Cepeda-Humerez",
"Sarah A",
""
],
[
"Ruess",
"Jakob",
""
],
[
"Tkačik",
"Gašper",
""
]
] | Across diverse biological systems -- ranging from neural networks to intracellular signaling and genetic regulatory networks -- the information about changes in the environment is frequently encoded in the full temporal dynamics of the network nodes. A pressing data-analysis challenge has thus been to efficiently estimate the amount of information that these dynamics convey from experimental data. Here we develop and evaluate decoding-based estimation methods to lower bound the mutual information about a finite set of inputs, encoded in single-cell high-dimensional time series data. For biological reaction networks governed by the chemical Master equation, we derive model-based information approximations and analytical upper bounds, against which we benchmark our proposed model-free decoding estimators. In contrast to the frequently-used k-nearest-neighbor estimator, decoding-based estimators robustly extract a large fraction of the available information from high-dimensional trajectories with a realistic number of data samples. We apply these estimators to previously published data on Erk and Ca signaling in mammalian cells and to yeast stress-response, and find that substantial amount of information about environmental state can be encoded by non-trivial response statistics even in stationary signals. We argue that these single-cell, decoding-based information estimates, rather than the commonly-used tests for significant differences between selected population response statistics, provide a proper and unbiased measure for the performance of biological signaling networks. |
2205.04715 | Keith Li Chambers | Keith L Chambers, Michael G Watson and Mary R Myerscough | A lipid-structured mathematical model of atherosclerosis with macrophage
proliferation | 29 pages, 8 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the lipid-structured model for atherosclerotic plaque development
of Ford et al. (2019) to account for macrophage proliferation. Proliferation is
modelled as a non-local decrease in the lipid structural variable that is
similar to the treatment of cell division in size-structured models (e.g.
Efendiev et al. (2018)). Steady state analysis indicates that proliferation
assists in reducing eventual necrotic core size and acts to spread the lipid
load of the macrophage population amongst the cells. The relative contribution
of plaque macrophages by proliferation and recruitment from the bloodstream is
also examined. The model suggests that a more proliferative plaque differs from
an equivalent (same lipid content and cell count) recruitment-dominant plaque
only in the way lipid is distributed amongst the macrophages.
| [
{
"created": "Tue, 10 May 2022 07:37:05 GMT",
"version": "v1"
}
] | 2022-05-11 | [
[
"Chambers",
"Keith L",
""
],
[
"Watson",
"Michael G",
""
],
[
"Myerscough",
"Mary R",
""
]
] | We extend the lipid-structured model for atherosclerotic plaque development of Ford et al. (2019) to account for macrophage proliferation. Proliferation is modelled as a non-local decrease in the lipid structural variable that is similar to the treatment of cell division in size-structured models (e.g. Efendiev et al. (2018)). Steady state analysis indicates that proliferation assists in reducing eventual necrotic core size and acts to spread the lipid load of the macrophage population amongst the cells. The relative contribution of plaque macrophages by proliferation and recruitment from the bloodstream is also examined. The model suggests that a more proliferative plaque differs from an equivalent (same lipid content and cell count) recruitment-dominant plaque only in the way lipid is distributed amongst the macrophages. |
1809.09014 | Venelin Mitov | Venelin Mitov, Krzysztof Bartoszek, Georgios Asimomitis, Tanja Stadler | Fast likelihood evaluation for multivariate phylogenetic comparative
methods: the PCMBase R package | 34 pages, 6 figures | null | 10.1016/j.tpb.2019.11.005 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an R package, PCMBase, to rapidly calculate the likelihood for
multivariate phylogenetic comparative methods. The package is not specific to
particular models but offers the user the functionality to very easily
implement a wide range of models where the transition along a branch is
multivariate normal. We demonstrate the package's possibilities on the now
standard, multitrait Ornstein-Uhlenbeck process as well as the novel
multivariate punctuated equilibrium model. The package can handle trees of
various types (e.g. ultrametric, nonultrametric, polytomies, etc.), as well
as measurement error, missing measurements or non-existing traits for some of
the species in the tree.
| [
{
"created": "Mon, 24 Sep 2018 15:48:21 GMT",
"version": "v1"
}
] | 2019-12-13 | [
[
"Mitov",
"Venelin",
""
],
[
"Bartoszek",
"Krzysztof",
""
],
[
"Asimomitis",
"Georgios",
""
],
[
"Stadler",
"Tanja",
""
]
] | We introduce an R package, PCMBase, to rapidly calculate the likelihood for multivariate phylogenetic comparative methods. The package is not specific to particular models but offers the user the functionality to very easily implement a wide range of models where the transition along a branch is multivariate normal. We demonstrate the package's possibilities on the now standard, multitrait Ornstein-Uhlenbeck process as well as the novel multivariate punctuated equilibrium model. The package can handle trees of various types (e.g. ultrametric, nonultrametric, polytomies, e.t.c.), as well as measurement error, missing measurements or non-existing traits for some of the species in the tree. |
1802.07922 | Chuan Zhang | Chuan Zhang (1 and 2 and 3), Lulu Ge (1 and 2 and 3), Xiaohu You (2)
((1) Lab of Efficient Architectures for Digital-communication and
Signal-processing (LEADS), (2) National Mobile Communications Research
Laboratory, (3) Quantum Information Center, Southeast University, China) | Synthesizing a Clock Signal with Reactions---Part II: Frequency
Alteration Based on Gears | null | null | null | null | q-bio.MN physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On a chassis of gear model, we have offered a quantitative description for
our method to synthesize a chemical clock signal with various duty cycles in
Part I. As Part II of the study, this paper devotes itself to proposing a
design methodology to handle frequency alteration issues for the chemical
clock, including both frequency division and frequency multiplication. Several
interesting examples are provided for a better explanation of our contribution.
All the simulation results verify and validate the correctness and efficiency
of our proposal.
| [
{
"created": "Thu, 22 Feb 2018 07:30:03 GMT",
"version": "v1"
}
] | 2018-02-23 | [
[
"Zhang",
"Chuan",
"",
"1 and 2 and 3"
],
[
"Ge",
"Lulu",
"",
"1 and 2 and 3"
],
[
"You",
"Xiaohu",
""
]
] | On a chassis of gear model, we have offered a quantitative description for our method to synthesize a chemical clock signal with various duty cycles in Part I. As Part II of the study, this paper devotes itself in proposing a design methodology to handle frequency alteration issues for the chemical clock, including both frequency division and frequency multiplication. Several interesting examples are provided for a better explanation of our contribution. All the simulation results verify and validate the correctness and efficiency of our proposal. |
2205.13644 | Lu Zhang | Lu Zhang, Xiaowei Yu, Yanjun Lyu, Zhengwang Wu, Haixing Dai, Lin Zhao,
Li Wang, Gang Li, Tianming Liu, Dajiang Zhu | Representing Brain Anatomical Regularity and Variability by Few-Shot
Embedding | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective representation of brain anatomical architecture is fundamental in
understanding brain regularity and variability. Despite numerous efforts, it is
still difficult to infer reliable anatomical correspondence at finer scale,
given the tremendous individual variability in cortical folding patterns. It is
even more challenging to disentangle common and individual patterns when
comparing brains at different neuro-developmental stages. In this work, we
developed a novel learning-based few-shot embedding framework to encode the
cortical folding patterns into a latent space represented by a group of
anatomically meaningful embedding vectors. Specifically, we adopted 3-hinge
(3HG) network as the substrate and designed an autoencoder-based embedding
framework to learn a common embedding vector for each 3HG's multi-hop feature:
each 3HG can be represented as a combination of these feature embeddings via a
set of individual specific coefficients to characterize individualized
anatomical information. That is, the regularity of folding patterns is encoded
into the embeddings, while the individual variations are preserved by the
multi=hop combination coefficients. To effectively learn the embeddings for the
population with very limited samples, few-shot learning was adopted. We applied
our method on adult HCP and pediatric datasets with 1,000+ brains (from 34
gestational weeks to young adult). Our experimental results show that: 1) the
learned embedding vectors can quantitatively encode the commonality and
individuality of cortical folding patterns; 2) with the embeddings we can
robustly infer the complicated many-to-many anatomical correspondences among
different brains and 3) our model can be successfully transferred to new
populations with very limited training samples.
| [
{
"created": "Thu, 26 May 2022 21:38:26 GMT",
"version": "v1"
}
] | 2022-05-30 | [
[
"Zhang",
"Lu",
""
],
[
"Yu",
"Xiaowei",
""
],
[
"Lyu",
"Yanjun",
""
],
[
"Wu",
"Zhengwang",
""
],
[
"Dai",
"Haixing",
""
],
[
"Zhao",
"Lin",
""
],
[
"Wang",
"Li",
""
],
[
"Li",
"Gang",
""
],
[
"Liu",
"Tianming",
""
],
[
"Zhu",
"Dajiang",
""
]
] | Effective representation of brain anatomical architecture is fundamental in understanding brain regularity and variability. Despite numerous efforts, it is still difficult to infer reliable anatomical correspondence at finer scale, given the tremendous individual variability in cortical folding patterns. It is even more challenging to disentangle common and individual patterns when comparing brains at different neuro-developmental stages. In this work, we developed a novel learning-based few-shot embedding framework to encode the cortical folding patterns into a latent space represented by a group of anatomically meaningful embedding vectors. Specifically, we adopted 3-hinge (3HG) network as the substrate and designed an autoencoder-based embedding framework to learn a common embedding vector for each 3HG's multi-hop feature: each 3HG can be represented as a combination of these feature embeddings via a set of individual specific coefficients to characterize individualized anatomical information. That is, the regularity of folding patterns is encoded into the embeddings, while the individual variations are preserved by the multi=hop combination coefficients. To effectively learn the embeddings for the population with very limited samples, few-shot learning was adopted. We applied our method on adult HCP and pediatric datasets with 1,000+ brains (from 34 gestational weeks to young adult). Our experimental results show that: 1) the learned embedding vectors can quantitatively encode the commonality and individuality of cortical folding patterns; 2) with the embeddings we can robustly infer the complicated many-to-many anatomical correspondences among different brains and 3) our model can be successfully transferred to new populations with very limited training samples. |
1804.05695 | Rolf Bader | Rolf Bader, Robert Mores | Cochlear detection of double-slip motion in cello bowing | 9 pages, 7 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A double-slip motion of a cello sound is investigated experimentally with a
bowing machine and analyzed using a Finite-Difference Time Domain (FDTD)
cochlear model. A double-slip sound is investigated. Here the sawtooth motion
of normal bowing is basically present, but within each period the bow hair
tears off the strings once more, resulting in a blurred
sound. This additional intermediate slip appears around the middle of each
period and drifts temporally around while the sound progresses. When the
double-slip is perfectly in the middle of one period the sound is that of a
regular sawtooth motion. If not, two periodicities are present around double
the fundamental periodicity, making the sound arbitrary. Analyzing the sound
with a Wavelet-transform, the expected double-peak of two periodicities around
the second partial cannot be found. Analyzing the tone with a cochlear FDTD
model including the transfer of mechanical energy into spikes, the doubling and
even more complex behaviour is perfectly represented in the Interspike Interval
(ISI) of two adjacent spikes. This cochlear spike representation fits perfectly
to an amplitude peak detection algorithm, tracking the precise time point of
the double-slip within the fundamental period. Therefore the ear is able to
detect the double-slip motion right at the transition from the basilar membrane
motion into electrical spikes.
| [
{
"created": "Mon, 16 Apr 2018 14:10:57 GMT",
"version": "v1"
}
] | 2018-04-17 | [
[
"Bader",
"Rolf",
""
],
[
"Mores",
"Robert",
""
]
] | A double-slip motion of a cello sound is investigated experimentally with a bowing machine and analyzed using a Finite-Difference Time Domain (FDTD) cochlear model. A double-slip sound is investigated. Here the sawtooth motion of normal bowing is basically present, but within each period the bow hair tears off the strings once more within the period, resulting in a blurred sound. This additional intermediate slip appears around the middle of each period and drifts temporally around while the sound progresses. When the double-slip is perfectly in the middle of one period the sound is that of a regular sawtooth motion. If not, two periodicities are present around double the fundamental periodicity, making the sound arbitrary. Analyzing the sound with a Wavelet-transform, the expected double-peak of two periodicities around the second partial cannot be found. Analyzing the tone with a cochlear FDTD model including the transfer of mechanical energy into spikes, the doubling and even more complex behaviour is perfectly represented in the Interspike Interval (ISI) of two adjacent spikes. This cochlear spike representation fits perfectly to an amplitude peak detection algorithm, tracking the precise time point of the double-slip within the fundamental period. Therefore the ear is able to detect the double-slip motion right at the transition from the basilar membrane motion into electrical spikes. |
1503.03384 | Karunia Putra Wijaya | Karunia Putra Wijaya, Thomas Goetz, Edy Soewono | An optimal control model of mosquito reduction management in a dengue
endemic region | null | International Journal of Biomathematics, 7(5): 7, 1450056 (2014) | 10.1142/S1793524514500569 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aedes aegypti is known as the responsible vector transmitting dengue
flavivirus. Unavailability of medication to cure the transmission of the virus
in the human blood has become a global health issue in recent decades. World
epidemiologists are encouraged to focus on the investigation of effective
and inexpensive ways to prevent dengue transmission, i.e. mosquito
this paper, we present a model depicting the dynamics of mosquito population
based on indoor-outdoor life cycle classification. The basic mosquito offspring
number was obtained and analysis of equilibria was shown. We brought along a
discussion on the application of optimal control to the model in which two
simultaneous schemes were introduced. The first scheme is done by disseminating
chemical like temephos in spots where eggs and larvae develop, meanwhile the
second scheme is done by deploying fumigation through areas where adult
mosquitoes prevalently nest, indoor as well as outdoor. A version of the
gradient-based method was presented to set up a workflow in minimizing the
objective functional with respect to some control variables. Numerical results
from the analysis of the basic mosquito offspring number with constant control
and from that with optimal control suggested that the application of fumigation
is preferable over that of temephos. It was also suggested that applying both
control schemes simultaneously gives the most significant reduction in the
population.
| [
{
"created": "Mon, 9 Mar 2015 19:10:40 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2017 20:51:28 GMT",
"version": "v2"
}
] | 2017-12-04 | [
[
"Wijaya",
"Karunia Putra",
""
],
[
"Goetz",
"Thomas",
""
],
[
"Soewono",
"Edy",
""
]
] | Aedes aegypti is known as the responsible vector transmitting dengue flavivirus. Unavailability of medication to cure the transmission of the virus in the human blood becomes a global health issue in recent decades. World epidemiologists are encouraged to focus on the investigation over the effective and inexpensive way to prevent dengue transmission, i.e. mosquito control. In this paper, we present a model depicting the dynamics of mosquito population based on indoor-outdoor life cycle classification. The basic mosquito offspring number was obtained and analysis of equilibria was shown. We brought along a discussion on the application of optimal control to the model in which two simultaneous schemes were introduced. The first scheme is done by disseminating chemical like temephos in spots where eggs and larvae develop, meanwhile the second scheme is done by deploying fumigation through areas where adult mosquitoes prevalently nest, indoor as well as outdoor. A version of the gradient-based method was presented to set up a workflow in minimizing the objective functional with respect to some control variables. Numerical results from the analysis of the basic mosquito offspring number with constant control and from that with optimal control suggested that the application of fumigation is preferable over that of temephos. It was also suggested that applying both control schemes simultaneously gives the most significant reduction in the population. |
1406.3398 | Nikolai Slavov | Nikolai Slavov, David Botstein, Amy Caudy | Extensive regulation of metabolism and growth during the cell division
cycle | 34 pages, 7 figures | null | null | null | q-bio.GN nlin.AO physics.bio-ph q-bio.CB q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Yeast cells grown in culture can spontaneously synchronize their respiration,
metabolism, gene expression and cell division. Such metabolic oscillations in
synchronized cultures reflect single-cell oscillations, but the relationship
between the oscillations in single cells and synchronized cultures is poorly
understood. To understand this relationship and the coordination between
metabolism and cell division, we collected and analyzed DNA-content,
gene-expression and physiological data, at hundreds of time-points, from
cultures metabolically-synchronized at different growth rates, carbon sources
and biomass densities. The data enabled us to extend and generalize an
ensemble-average-over-phases (EAP) model that connects the population-average
gene-expression of asynchronous cultures to the gene-expression dynamics in the
single-cells comprising the cultures. The extended model explains the
carbon-source specific growth-rate responses of hundreds of genes. Our data
demonstrate that for a given growth rate, the frequency of metabolic cycling in
synchronized cultures increases with the biomass density. This observation
underscores the difference between metabolic cycling in synchronized cultures
and in single cells and suggests entraining of the single-cell cycle by a
quorum-sensing mechanism. Constant levels of residual glucose during the
metabolic cycling of synchronized cultures indicate that storage carbohydrates
are required to fuel not only the G1/S transition of the division cycle but
also the metabolic cycle. Despite the large variation in profiled conditions
and in the time-scale of their dynamics, most genes preserve invariant dynamics
of coordination with each other and with the rate of oxygen consumption.
Similarly, the G1/S transition always occurs at the beginning, middle or end of
the high oxygen consumption phases, analogous to observations in human and
drosophila cells.
| [
{
"created": "Fri, 13 Jun 2014 01:07:17 GMT",
"version": "v1"
}
] | 2014-06-16 | [
[
"Slavov",
"Nikolai",
""
],
[
"Botstein",
"David",
""
],
[
"Caudy",
"Amy",
""
]
] | Yeast cells grown in culture can spontaneously synchronize their respiration, metabolism, gene expression and cell division. Such metabolic oscillations in synchronized cultures reflect single-cell oscillations, but the relationship between the oscillations in single cells and synchronized cultures is poorly understood. To understand this relationship and the coordination between metabolism and cell division, we collected and analyzed DNA-content, gene-expression and physiological data, at hundreds of time-points, from cultures metabolically-synchronized at different growth rates, carbon sources and biomass densities. The data enabled us to extend and generalize an ensemble-average-over-phases (EAP) model that connects the population-average gene-expression of asynchronous cultures to the gene-expression dynamics in the single-cells comprising the cultures. The extended model explains the carbon-source specific growth-rate responses of hundreds of genes. Our data demonstrate that for a given growth rate, the frequency of metabolic cycling in synchronized cultures increases with the biomass density. This observation underscores the difference between metabolic cycling in synchronized cultures and in single cells and suggests entraining of the single-cell cycle by a quorum-sensing mechanism. Constant levels of residual glucose during the metabolic cycling of synchronized cultures indicate that storage carbohydrates are required to fuel not only the G1/S transition of the division cycle but also the metabolic cycle. Despite the large variation in profiled conditions and in the time-scale of their dynamics, most genes preserve invariant dynamics of coordination with each other and with the rate of oxygen consumption. Similarly, the G1/S transition always occurs at the beginning, middle or end of the high oxygen consumption phases, analogous to observations in human and drosophila cells. |
2201.07552 | Rion Brattig Correia | Ian B. Wood and Rion Brattig Correia and Wendy R. Miller and Luis M.
Rocha | Small Cohort of Epilepsy Patients Showed Increased Activity on Facebook
before Sudden Unexpected Death | Submitted to Epilepsy & Behavior | null | null | null | q-bio.QM cs.CY cs.SI stat.CO | http://creativecommons.org/licenses/by-sa/4.0/ | Sudden Unexpected Death in Epilepsy (SUDEP) remains a leading cause of death
in people with epilepsy. Despite the constant risk for patients and bereavement
to family members, to date the physiological mechanisms of SUDEP remain
unknown. Here we explore the potential to identify putative predictive signals
of SUDEP from online digital behavioral data using text and sentiment analysis.
Specifically, we analyze Facebook timelines of six epilepsy patients deceased
due to SUDEP, donated by surviving family members. We find preliminary evidence
for behavioral changes detectable by text and sentiment analysis tools. Namely,
in the months preceding their SUDEP event patient social media timelines show:
i) increase in verbosity; ii) increased use of functional words; and iii)
sentiment shifts as measured by different sentiment analysis tools. Combined,
these results suggest that social media engagement, as well as its sentiment,
may serve as possible early-warning signals for SUDEP in people with epilepsy.
While the small sample of patient timelines analyzed in this study prevents
generalization, our preliminary investigation demonstrates the potential of
social media data as complementary data in larger studies of SUDEP and
epilepsy.
| [
{
"created": "Wed, 19 Jan 2022 12:11:04 GMT",
"version": "v1"
}
] | 2022-01-20 | [
[
"Wood",
"Ian B.",
""
],
[
"Correia",
"Rion Brattig",
""
],
[
"Miller",
"Wendy R.",
""
],
[
"Rocha",
"Luis M.",
""
]
] | Sudden Unexpected Death in Epilepsy (SUDEP) remains a leading cause of death in people with epilepsy. Despite the constant risk for patients and bereavement to family members, to date the physiological mechanisms of SUDEP remain unknown. Here we explore the potential to identify putative predictive signals of SUDEP from online digital behavioral data using text and sentiment analysis. Specifically, we analyze Facebook timelines of six epilepsy patients deceased due to SUDEP, donated by surviving family members. We find preliminary evidence for behavioral changes detectable by text and sentiment analysis tools. Namely, in the months preceding their SUDEP event patient social media timelines show: i) increase in verbosity; ii) increased use of functional words; and iii) sentiment shifts as measured by different sentiment analysis tools. Combined, these results suggest that social media engagement, as well as its sentiment, may serve as possible early-warning signals for SUDEP in people with epilepsy. While the small sample of patient timelines analyzed in this study prevents generalization, our preliminary investigation demonstrates the potential of social media data as complementary data in larger studies of SUDEP and epilepsy. |
1810.11891 | Juntang Zhuang | Juntang Zhuang, Nicha C. Dvornek, Xiaoxiao Li, Pamela Ventola, James
S. Duncan | Prediction of severity and treatment outcome for ASD from fMRI | null | International Workshop on Predictive Intelligence In Medicine, pp
9-17, 2018, Springer | 10.1007/978-3-030-00320-3_2 | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autism spectrum disorder (ASD) is a complex neurodevelopmental syndrome.
Early diagnosis and precise treatment are essential for ASD patients. Although
researchers have built many analytical models, there has been limited progress
in accurate predictive models for early diagnosis. In this project, we aim to
build an accurate model to predict treatment outcome and ASD severity from
early stage functional magnetic resonance imaging (fMRI) scans. The difficulty
in building large databases of patients who have received specific treatments
and the high dimensionality of medical image analysis problems are challenges
in this work. We propose a generic and accurate two-level approach for
high-dimensional regression problems in medical image analysis. First, we
perform region-level feature selection using a predefined brain parcellation.
Based on the assumption that voxels within one region in the brain have similar
values, for each region we use the bootstrapped mean of voxels within it as a
feature. In this way, the dimension of data is reduced from number of voxels to
number of regions. Then we detect predictive regions by various feature
selection methods. Second, we extract voxels within selected regions, and
perform voxel-level feature selection. To use this model in both linear and
non-linear cases with limited training examples, we apply two-level elastic net
regression and random forest (RF) models respectively. To validate accuracy and
robustness of this approach, we perform experiments on both task-fMRI and
resting state fMRI datasets. Furthermore, we visualize the influence of each
region, and show that the results match well with other findings.
| [
{
"created": "Sun, 28 Oct 2018 21:48:21 GMT",
"version": "v1"
}
] | 2018-10-30 | [
[
"Zhuang",
"Juntang",
""
],
[
"Dvornek",
"Nicha C.",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Ventola",
"Pamela",
""
],
[
"Duncan",
"James S.",
""
]
] | Autism spectrum disorder (ASD) is a complex neurodevelopmental syndrome. Early diagnosis and precise treatment are essential for ASD patients. Although researchers have built many analytical models, there has been limited progress in accurate predictive models for early diagnosis. In this project, we aim to build an accurate model to predict treatment outcome and ASD severity from early stage functional magnetic resonance imaging (fMRI) scans. The difficulty in building large databases of patients who have received specific treatments and the high dimensionality of medical image analysis problems are challenges in this work. We propose a generic and accurate two-level approach for high-dimensional regression problems in medical image analysis. First, we perform region-level feature selection using a predefined brain parcellation. Based on the assumption that voxels within one region in the brain have similar values, for each region we use the bootstrapped mean of voxels within it as a feature. In this way, the dimension of data is reduced from number of voxels to number of regions. Then we detect predictive regions by various feature selection methods. Second, we extract voxels within selected regions, and perform voxel-level feature selection. To use this model in both linear and non-linear cases with limited training examples, we apply two-level elastic net regression and random forest (RF) models respectively. To validate accuracy and robustness of this approach, we perform experiments on both task-fMRI and resting state fMRI datasets. Furthermore, we visualize the influence of each region, and show that the results match well with other findings. |
2407.16375 | Alexandre Bonvin | Xiaotong Xu and Alexandre M.J. J. Bonvin | Ranking protein-protein models with large language models and graph
neural networks | 14 pages. Detailed protocol to use our DeepRank-GNN-esm software to
analyse models of protein-protein complexes | null | null | null | q-bio.BM cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Protein-protein interactions (PPIs) are associated with various diseases,
including cancer, infections, and neurodegenerative disorders. Obtaining
three-dimensional structural information on these PPIs serves as a foundation
to interfere with those or to guide drug design. Various strategies can be
followed to model those complexes, all typically resulting in a large number of
models. A challenging step in this process is the identification of good models
(near-native PPI conformations) from the large pool of generated models. To
address this challenge, we previously developed DeepRank-GNN-esm, a graph-based
deep learning algorithm for ranking modelled PPI structures harnessing the
power of protein language models. Here, we detail the use of our software with
examples. DeepRank-GNN-esm is freely available at
https://github.com/haddocking/DeepRank-GNN-esm
| [
{
"created": "Tue, 23 Jul 2024 10:51:35 GMT",
"version": "v1"
}
] | 2024-07-24 | [
[
"Xu",
"Xiaotong",
""
],
[
"Bonvin",
"Alexandre M. J. J.",
""
]
] | Protein-protein interactions (PPIs) are associated with various diseases, including cancer, infections, and neurodegenerative disorders. Obtaining three-dimensional structural information on these PPIs serves as a foundation to interfere with those or to guide drug design. Various strategies can be followed to model those complexes, all typically resulting in a large number of models. A challenging step in this process is the identification of good models (near-native PPI conformations) from the large pool of generated models. To address this challenge, we previously developed DeepRank-GNN-esm, a graph-based deep learning algorithm for ranking modelled PPI structures harnessing the power of protein language models. Here, we detail the use of our software with examples. DeepRank-GNN-esm is freely available at https://github.com/haddocking/DeepRank-GNN-esm |
0810.3342 | Hideo Hasegawa | Hideo Hasegawa (Tokyo Gakugei Univ.) | Population rate codes carried by mean, fluctuation and synchrony of
neuronal firings | 20 pages, 10 figures, accepted in Physica A (revised version of
arXiv:0706.3489) | Physica A 388 (2009) 499-513 | 10.1016/j.physa.2008.10.033 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A population of firing neurons is expected to carry information not only by
mean firing rate but also by fluctuation and synchrony among neurons. In order
to examine this possibility, we have studied responses of neuronal ensembles to
three kinds of inputs: mean-, fluctuation- and synchrony-driven inputs. The
generalized rate-code model including additive and multiplicative noise (H.
Hasegawa, Phys. Rev. E {\bf 75} (2007) 051904) has been studied by direct
simulations (DSs) and the augmented moment method (AMM) in which equations of
motion for mean firing rate, fluctuation and synchrony are derived. Results
calculated by the AMM are in good agreement with those by DSs. The independent
component analysis (ICA) of our results has shown that mean firing rate,
fluctuation (or variability) and synchrony may carry independent information in
the population rate-code model. The input-output relation of mean firing rates
is shown to have higher sensitivity for larger multiplicative noise, as
recently observed in prefrontal cortex. A comparison is made between results
obtained by the integrate-and-fire (IF) model and our rate-code model.
| [
{
"created": "Sat, 18 Oct 2008 19:32:29 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Hasegawa",
"Hideo",
"",
"Tokyo Gakugei Univ."
]
] | A population of firing neurons is expected to carry information not only by mean firing rate but also by fluctuation and synchrony among neurons. In order to examine this possibility, we have studied responses of neuronal ensembles to three kinds of inputs: mean-, fluctuation- and synchrony-driven inputs. The generalized rate-code model including additive and multiplicative noise (H. Hasegawa, Phys. Rev. E {\bf 75} (2007) 051904) has been studied by direct simulations (DSs) and the augmented moment method (AMM) in which equations of motion for mean firing rate, fluctuation and synchrony are derived. Results calculated by the AMM are in good agreement with those by DSs. The independent component analysis (ICA) of our results has shown that mean firing rate, fluctuation (or variability) and synchrony may carry independent information in the population rate-code model. The input-output relation of mean firing rates is shown to have higher sensitivity for larger multiplicative noise, as recently observed in prefrontal cortex. A comparison is made between results obtained by the integrate-and-fire (IF) model and our rate-code model. |
1311.5262 | Jan Smrek | Jonathan D. Halverson, Jan Smrek, Kurt Kremer, and Alexander Y.
Grosberg | From a melt of rings to chromosome territories: The role of topological
constraints in genome folding | 26 pages, 5 figures Added and updated references, corrected typos.
Figures 2 and 4 improved, content remained unchanged. Figure 5 edited to
contain only human cell data. Figure captions improved to reflect the
changes. Some paragraphs in sections IV B 2, IV B 3, IV D, VI F 2, VII are
edited or added for clarity and to be up to date. All results are unchanged | Rep. Prog. Phys. 77 (2014) 022601 | 10.1088/0034-4885/77/2/022601 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review pro and contra of the hypothesis that generic polymer properties of
topological constraints are behind many aspects of chromatin folding in
eukaryotic cells. For that purpose, we review, first, recent theoretical and
computational findings in polymer physics related to concentrated,
topologically-simple (unknotted and unlinked) chains or a system of chains.
Second, we review recent experimental discoveries related to genome folding.
Understanding in these fields is far from complete, but we show how looking at
them in parallel sheds new light on both.
| [
{
"created": "Wed, 20 Nov 2013 23:13:48 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jan 2014 14:56:23 GMT",
"version": "v2"
}
] | 2015-06-18 | [
[
"Halverson",
"Jonathan D.",
""
],
[
"Smrek",
"Jan",
""
],
[
"Kremer",
"Kurt",
""
],
[
"Grosberg",
"Alexander Y.",
""
]
] | We review pro and contra of the hypothesis that generic polymer properties of topological constraints are behind many aspects of chromatin folding in eukaryotic cells. For that purpose, we review, first, recent theoretical and computational findings in polymer physics related to concentrated, topologically-simple (unknotted and unlinked) chains or a system of chains. Second, we review recent experimental discoveries related to genome folding. Understanding in these fields is far from complete, but we show how looking at them in parallel sheds new light on both. |
1207.1228 | Claus Metzner | Claus Metzner | 1D analysis of 2D isotropic random walks | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many stochastic systems in physics and biology are investigated by recording
the two-dimensional (2D) positions of a moving test particle in regular time
intervals. The resulting sample trajectories are then used to induce the
properties of the underlying stochastic process. Often, it can be assumed a
priori that the underlying discrete-time random walk model is independent from
absolute position (homogeneity), direction (isotropy) and time (stationarity),
as well as ergodic. In this article we first review some common statistical
methods for analyzing 2D trajectories, based on quantities with built-in
rotational invariance. We then discuss an alternative approach in which the
two-dimensional trajectories are reduced to one dimension by projection onto an
arbitrary axis and rotational averaging. Each step of the resulting 1D
trajectory is further factorized into sign and magnitude. The statistical
properties of the signs and magnitudes are mathematically related to those of
the step lengths and turning angles of the original 2D trajectories,
demonstrating that no essential information is lost by this data reduction. The
resulting binary sequence of signs lends itself for a pattern counting
analysis, revealing temporal properties of the random process that are not
easily deduced from conventional measures such as the velocity autocorrelation
function. In order to highlight this simplified 1D description, we apply it to
a 2D random walk with restricted turning angles (RTA model), defined by a
finite-variance distribution $p(L)$ of step length and a narrow turning angle
distribution $p(\phi)$, assuming that the lengths and directions of the steps
are independent.
| [
{
"created": "Thu, 5 Jul 2012 11:32:34 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Aug 2012 08:10:33 GMT",
"version": "v2"
}
] | 2012-08-27 | [
[
"Metzner",
"Claus",
""
]
] | Many stochastic systems in physics and biology are investigated by recording the two-dimensional (2D) positions of a moving test particle in regular time intervals. The resulting sample trajectories are then used to induce the properties of the underlying stochastic process. Often, it can be assumed a priori that the underlying discrete-time random walk model is independent from absolute position (homogeneity), direction (isotropy) and time (stationarity), as well as ergodic. In this article we first review some common statistical methods for analyzing 2D trajectories, based on quantities with built-in rotational invariance. We then discuss an alternative approach in which the two-dimensional trajectories are reduced to one dimension by projection onto an arbitrary axis and rotational averaging. Each step of the resulting 1D trajectory is further factorized into sign and magnitude. The statistical properties of the signs and magnitudes are mathematically related to those of the step lengths and turning angles of the original 2D trajectories, demonstrating that no essential information is lost by this data reduction. The resulting binary sequence of signs lends itself for a pattern counting analysis, revealing temporal properties of the random process that are not easily deduced from conventional measures such as the velocity autocorrelation function. In order to highlight this simplified 1D description, we apply it to a 2D random walk with restricted turning angles (RTA model), defined by a finite-variance distribution $p(L)$ of step length and a narrow turning angle distribution $p(\phi)$, assuming that the lengths and directions of the steps are independent. |
1205.0793 | Jennifer Listgarten | Jennifer Listgarten, Christoph Lippert, Eun Yong Kang, Jing Xiang,
Carl M. Kadie and David Heckerman | A powerful and efficient set test for genetic markers that handles
confounders | * denotes equal contributions | null | 10.1093/bioinformatics/btt177 | null | q-bio.GN stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approaches for testing sets of variants, such as a set of rare or common
variants within a gene or pathway, for association with complex traits are
important. In particular, set tests allow for aggregation of weak signal within
a set, can capture interplay among variants, and reduce the burden of multiple
hypothesis testing. Until now, these approaches did not address confounding by
family relatedness and population structure, a problem that is becoming more
important as larger data sets are used to increase power.
Results: We introduce a new approach for set tests that handles confounders.
Our model is based on the linear mixed model and uses two random effects-one to
capture the set association signal and one to capture confounders. We also
introduce a computational speedup for two-random-effects models that makes this
approach feasible even for extremely large cohorts. Using this model with both
the likelihood ratio test and score test, we find that the former yields more
power while controlling type I error. Application of our approach to richly
structured GAW14 data demonstrates that our method successfully corrects for
population structure and family relatedness, while application of our method to
a 15,000 individual Crohn's disease case-control cohort demonstrates that it
additionally recovers genes not recoverable by univariate analysis.
Availability: A Python-based library implementing our approach is available
at http://mscompbio.codeplex.com
| [
{
"created": "Thu, 3 May 2012 19:05:38 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Sep 2012 16:49:49 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Apr 2013 04:30:32 GMT",
"version": "v3"
}
] | 2013-05-28 | [
[
"Listgarten",
"Jennifer",
""
],
[
"Lippert",
"Christoph",
""
],
[
"Kang",
"Eun Yong",
""
],
[
"Xiang",
"Jing",
""
],
[
"Kadie",
"Carl M.",
""
],
[
"Heckerman",
"David",
""
]
] | Approaches for testing sets of variants, such as a set of rare or common variants within a gene or pathway, for association with complex traits are important. In particular, set tests allow for aggregation of weak signal within a set, can capture interplay among variants, and reduce the burden of multiple hypothesis testing. Until now, these approaches did not address confounding by family relatedness and population structure, a problem that is becoming more important as larger data sets are used to increase power. Results: We introduce a new approach for set tests that handles confounders. Our model is based on the linear mixed model and uses two random effects-one to capture the set association signal and one to capture confounders. We also introduce a computational speedup for two-random-effects models that makes this approach feasible even for extremely large cohorts. Using this model with both the likelihood ratio test and score test, we find that the former yields more power while controlling type I error. Application of our approach to richly structured GAW14 data demonstrates that our method successfully corrects for population structure and family relatedness, while application of our method to a 15,000 individual Crohn's disease case-control cohort demonstrates that it additionally recovers genes not recoverable by univariate analysis. Availability: A Python-based library implementing our approach is available at http://mscompbio.codeplex.com |
1307.0869 | Dongying Wu | Dongying Wu, Guillaume Jospin, Jonathan A. Eisen | Systematic identification of gene families for use as markers for
phylogenetic and phylogeny- driven ecological studies of bacteria and archaea
and their major subgroups | 24 pages, 3 figures | null | 10.1371/journal.pone.0077033 | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | With the astonishing rate that the genomic and metagenomic sequence data sets
are accumulating, there are many reasons to constrain the data analyses. One
approach to such constrained analyses is to focus on select subsets of gene
families that are particularly well suited for the tasks at hand. Such gene
families have generally been referred to as marker genes. We are particularly
interested in identifying and using such marker genes for phylogenetic and
phylogeny-driven ecological studies of microbes and their communities. We
therefore refer to these as PhyEco (for phylogenetic and phylogenetic ecology)
markers. The dual use of these PhyEco markers means that we needed to develop
and apply a set of somewhat novel criteria for identification of the best
candidates for such markers. The criteria we focused on included universality
across the taxa of interest, ability to be used to produce robust phylogenetic
trees that reflect as much as possible the evolution of the species from which
the genes come, and low variation in copy number across taxa. We describe here
an automated protocol for identifying potential PhyEco markers from a set of
complete genome sequences. The protocol combines rapid searching, clustering
and phylogenetic tree building algorithms to generate protein families that
meet the criteria listed above. We report here the identification of PhyEco
markers for different taxonomic levels including 40 for all bacteria and
archaea, 114 for all bacteria, and much more for some of the individual phyla
of bacteria. This new list of PhyEco markers should allow much more detailed
automated phylogenetic and phylogenetic ecology analyses of these groups than
possible previously.
| [
{
"created": "Tue, 2 Jul 2013 22:16:16 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Wu",
"Dongying",
""
],
[
"Jospin",
"Guillaume",
""
],
[
"Eisen",
"Jonathan A.",
""
]
] | With the astonishing rate that the genomic and metagenomic sequence data sets are accumulating, there are many reasons to constrain the data analyses. One approach to such constrained analyses is to focus on select subsets of gene families that are particularly well suited for the tasks at hand. Such gene families have generally been referred to as marker genes. We are particularly interested in identifying and using such marker genes for phylogenetic and phylogeny-driven ecological studies of microbes and their communities. We therefore refer to these as PhyEco (for phylogenetic and phylogenetic ecology) markers. The dual use of these PhyEco markers means that we needed to develop and apply a set of somewhat novel criteria for identification of the best candidates for such markers. The criteria we focused on included universality across the taxa of interest, ability to be used to produce robust phylogenetic trees that reflect as much as possible the evolution of the species from which the genes come, and low variation in copy number across taxa. We describe here an automated protocol for identifying potential PhyEco markers from a set of complete genome sequences. The protocol combines rapid searching, clustering and phylogenetic tree building algorithms to generate protein families that meet the criteria listed above. We report here the identification of PhyEco markers for different taxonomic levels including 40 for all bacteria and archaea, 114 for all bacteria, and much more for some of the individual phyla of bacteria. This new list of PhyEco markers should allow much more detailed automated phylogenetic and phylogenetic ecology analyses of these groups than possible previously. |
2310.20599 | Tahereh Toosi | Tahereh Toosi and Elias B. Issa | Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward
Alignment | null | null | null | null | q-bio.NC cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In natural vision, feedback connections support versatile visual inference
capabilities such as making sense of the occluded or noisy bottom-up sensory
information or mediating pure top-down processes such as imagination. However,
the mechanisms by which the feedback pathway learns to give rise to these
capabilities flexibly are not clear. We propose that top-down effects emerge
through alignment between feedforward and feedback pathways, each optimizing
its own objectives. To achieve this co-optimization, we introduce
Feedback-Feedforward Alignment (FFA), a learning algorithm that leverages
feedback and feedforward pathways as mutual credit assignment computational
graphs, enabling alignment. In our study, we demonstrate the effectiveness of
FFA in co-optimizing classification and reconstruction tasks on widely used
MNIST and CIFAR10 datasets. Notably, the alignment mechanism in FFA endows
feedback connections with emergent visual inference functions, including
denoising, resolving occlusions, hallucination, and imagination. Moreover, FFA
offers bio-plausibility compared to traditional backpropagation (BP) methods in
implementation. By repurposing the computational graph of credit assignment
into a goal-driven feedback pathway, FFA alleviates weight transport problems
encountered in BP, enhancing the bio-plausibility of the learning algorithm.
Our study presents FFA as a promising proof-of-concept for the mechanisms
underlying how feedback connections in the visual cortex support flexible
visual functions. This work also contributes to the broader field of visual
inference underlying perceptual phenomena and has implications for developing
more biologically inspired learning algorithms.
| [
{
"created": "Tue, 31 Oct 2023 16:35:27 GMT",
"version": "v1"
}
] | 2023-11-01 | [
[
"Toosi",
"Tahereh",
""
],
[
"Issa",
"Elias B.",
""
]
] | In natural vision, feedback connections support versatile visual inference capabilities such as making sense of the occluded or noisy bottom-up sensory information or mediating pure top-down processes such as imagination. However, the mechanisms by which the feedback pathway learns to give rise to these capabilities flexibly are not clear. We propose that top-down effects emerge through alignment between feedforward and feedback pathways, each optimizing its own objectives. To achieve this co-optimization, we introduce Feedback-Feedforward Alignment (FFA), a learning algorithm that leverages feedback and feedforward pathways as mutual credit assignment computational graphs, enabling alignment. In our study, we demonstrate the effectiveness of FFA in co-optimizing classification and reconstruction tasks on widely used MNIST and CIFAR10 datasets. Notably, the alignment mechanism in FFA endows feedback connections with emergent visual inference functions, including denoising, resolving occlusions, hallucination, and imagination. Moreover, FFA offers bio-plausibility compared to traditional backpropagation (BP) methods in implementation. By repurposing the computational graph of credit assignment into a goal-driven feedback pathway, FFA alleviates weight transport problems encountered in BP, enhancing the bio-plausibility of the learning algorithm. Our study presents FFA as a promising proof-of-concept for the mechanisms underlying how feedback connections in the visual cortex support flexible visual functions. This work also contributes to the broader field of visual inference underlying perceptual phenomena and has implications for developing more biologically inspired learning algorithms. |
2010.00308 | Takuya Hayashi | Takuya Hayashi, Yujie Hou, Matthew F Glasser, Joonas A Autio, Kenneth
Knoblauch, Miho Inoue-Murayama, Tim Coalson, Essa Yacoub, Stephen Smith,
Henry Kennedy, and David C Van Essen | The NonHuman Primate Neuroimaging & Neuroanatomy Project | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Multi-modal neuroimaging projects are advancing our understanding of human
brain architecture, function, connectivity using high-quality non-invasive data
from many subjects. However, ground truth validation of connectivity using
invasive tracers is not feasible in humans. Our NonHuman Primate Neuroimaging &
Neuroanatomy Project (NHP_NNP) is an international effort (6 laboratories in 5
countries) to: (i) acquire and analyze high-quality multi-modal brain imaging
data of macaque and marmoset monkeys using protocols and methods adapted from
the HCP; (ii) acquire quantitative invasive tract-tracing data for cortical and
subcortical projections to cortical areas; and (iii) map the distributions of
different brain cell types with immunocytochemical stains to better define
brain areal boundaries. We are acquiring high-resolution structural,
functional, and diffusion MRI data together with behavioral measures from over
100 individual macaques and marmosets in order to generate non-invasive
measures of brain architecture such as myelin and cortical thickness maps, as
well as functional and diffusion tractography-based connectomes. We are using
classical and next-generation anatomical tracers to generate quantitative
connectivity maps based on brain-wide counting of labeled cortical and
subcortical neurons, providing ground truth measures of connectivity. Advanced
statistical modeling techniques address the consistency of both kinds of data
across individuals, allowing comparison of tracer-based and non-invasive
MRI-based connectivity measures. We aim to develop improved cortical and
subcortical areal atlases by combining histological and imaging methods.
Finally, we are collecting genetic and sociality-associated behavioral data in
all animals in an effort to understand how genetic variation shapes the
connectome and behavior.
| [
{
"created": "Thu, 1 Oct 2020 11:38:14 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Dec 2020 04:42:11 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Jan 2021 02:24:33 GMT",
"version": "v3"
}
] | 2021-01-12 | [
[
"Hayashi",
"Takuya",
""
],
[
"Hou",
"Yujie",
""
],
[
"Glasser",
"Matthew F",
""
],
[
"Autio",
"Joonas A",
""
],
[
"Knoblauch",
"Kenneth",
""
],
[
"Inoue-Murayama",
"Miho",
""
],
[
"Coalson",
"Tim",
""
],
[
"Yacoub",
"Essa",
""
],
[
"Smith",
"Stephen",
""
],
[
"Kennedy",
"Henry",
""
],
[
"Van Essen",
"David C",
""
]
] | Multi-modal neuroimaging projects are advancing our understanding of human brain architecture, function, connectivity using high-quality non-invasive data from many subjects. However, ground truth validation of connectivity using invasive tracers is not feasible in humans. Our NonHuman Primate Neuroimaging & Neuroanatomy Project (NHP_NNP) is an international effort (6 laboratories in 5 countries) to: (i) acquire and analyze high-quality multi-modal brain imaging data of macaque and marmoset monkeys using protocols and methods adapted from the HCP; (ii) acquire quantitative invasive tract-tracing data for cortical and subcortical projections to cortical areas; and (iii) map the distributions of different brain cell types with immunocytochemical stains to better define brain areal boundaries. We are acquiring high-resolution structural, functional, and diffusion MRI data together with behavioral measures from over 100 individual macaques and marmosets in order to generate non-invasive measures of brain architecture such as myelin and cortical thickness maps, as well as functional and diffusion tractography-based connectomes. We are using classical and next-generation anatomical tracers to generate quantitative connectivity maps based on brain-wide counting of labeled cortical and subcortical neurons, providing ground truth measures of connectivity. Advanced statistical modeling techniques address the consistency of both kinds of data across individuals, allowing comparison of tracer-based and non-invasive MRI-based connectivity measures. We aim to develop improved cortical and subcortical areal atlases by combining histological and imaging methods. Finally, we are collecting genetic and sociality-associated behavioral data in all animals in an effort to understand how genetic variation shapes the connectome and behavior. |
2402.07684 | Saurabh Sihag | Saurabh Sihag, Gonzalo Mateos, Alejandro Ribeiro | Towards a Foundation Model for Brain Age Prediction using coVariance
Neural Networks | Preliminary work. Contact sihag.saurabh@gmail.com for the NeuroVNN
model and code used for results reported in this manuscript | null | null | null | q-bio.QM cs.LG stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Brain age is the estimate of biological age derived from neuroimaging
datasets using machine learning algorithms. Increasing brain age with respect
to chronological age can reflect increased vulnerability to neurodegeneration
and cognitive decline. In this paper, we study NeuroVNN, based on coVariance
neural networks, as a paradigm for foundation model for the brain age
prediction application. NeuroVNN is pre-trained as a regression model on
healthy population to predict chronological age using cortical thickness
features and fine-tuned to estimate brain age in different neurological
contexts. Importantly, NeuroVNN adds anatomical interpretability to brain age
and has a `scale-free' characteristic that allows its transference to datasets
curated according to any arbitrary brain atlas. Our results demonstrate that
NeuroVNN can extract biologically plausible brain age estimates in different
populations, as well as transfer successfully to datasets of dimensionalities
distinct from that for the dataset used to train NeuroVNN.
| [
{
"created": "Mon, 12 Feb 2024 14:46:31 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Sihag",
"Saurabh",
""
],
[
"Mateos",
"Gonzalo",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] | Brain age is the estimate of biological age derived from neuroimaging datasets using machine learning algorithms. Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline. In this paper, we study NeuroVNN, based on coVariance neural networks, as a paradigm for foundation model for the brain age prediction application. NeuroVNN is pre-trained as a regression model on healthy population to predict chronological age using cortical thickness features and fine-tuned to estimate brain age in different neurological contexts. Importantly, NeuroVNN adds anatomical interpretability to brain age and has a `scale-free' characteristic that allows its transference to datasets curated according to any arbitrary brain atlas. Our results demonstrate that NeuroVNN can extract biologically plausible brain age estimates in different populations, as well as transfer successfully to datasets of dimensionalities distinct from that for the dataset used to train NeuroVNN. |
1912.12379 | Yingcheng Sun | Yingcheng Sun, Xiangru Liang, Kenneth Loparo | A Common Gene Expression Signature Analysis Method for Multiple Types of
Cancer | null | 2019 19th Industrial Conference on Data Mining | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining gene expression profiles has proven valuable for identifying
signatures serving as surrogates of cancer phenotypes. However, the
similarities of such signatures across different cancer types have not been
strong enough to conclude that they represent a universal biological mechanism
shared among multiple cancer types. Here we describe a network-based approach
that explores gene-to-gene connections in multiple cancer datasets while
maximizing the overall association of the subnetwork with clinical outcomes.
With the dataset of The Cancer Genome Atlas (TCGA), we studied the
characteristics of common gene expression of three types of cancers: Rectum
adenocarcinoma (READ), Breast invasive carcinoma (BRCA) and Colon
adenocarcinoma (COAD). By analyzing several pairs of highly correlated genes
after filtering and clustering work, we found that the co-expressed genes
across multiple types of cancers point to particular biological mechanisms
related to cancer cell progression, suggesting that they represent important
attributes of cancer in need of being elucidated for potential applications in
diagnostic, prognostic and therapeutic products applicable to multiple cancer
types.
| [
{
"created": "Sat, 28 Dec 2019 01:07:36 GMT",
"version": "v1"
}
] | 2020-01-01 | [
[
"Sun",
"Yingcheng",
""
],
[
"Liang",
"Xiangru",
""
],
[
"Loparo",
"Kenneth",
""
]
] | Mining gene expression profiles has proven valuable for identifying signatures serving as surrogates of cancer phenotypes. However, the similarities of such signatures across different cancer types have not been strong enough to conclude that they represent a universal biological mechanism shared among multiple cancer types. Here we describe a network-based approach that explores gene-to-gene connections in multiple cancer datasets while maximizing the overall association of the subnetwork with clinical outcomes. With the dataset of The Cancer Genome Atlas (TCGA), we studied the characteristics of common gene expression of three types of cancers: Rectum adenocarcinoma (READ), Breast invasive carcinoma (BRCA) and Colon adenocarcinoma (COAD). By analyzing several pairs of highly correlated genes after filtering and clustering work, we found that the co-expressed genes across multiple types of cancers point to particular biological mechanisms related to cancer cell progression , suggesting that they represent important attributes of cancer in need of being elucidated for potential applications in diagnostic, prognostic and therapeutic products applicable to multiple cancer types. |
q-bio/0611031 | Katsuhiko Sato | Katsuhiko Sato and Kunihiko Kaneko | Evolution Equation of Phenotype Distribution: General Formulation and
Application to Error Catastrophe | 22 pages, 2 figures | null | 10.1103/PhysRevE.75.061909 | null | q-bio.PE cond-mat.stat-mech | null | An equation describing the evolution of phenotypic distribution is derived
using methods developed in statistical physics. The equation is solved by using
the singular perturbation method, and assuming that the number of bases in the
genetic sequence is large. Applying the equation to the mutation-selection
model by Eigen provides the critical mutation rate for the error catastrophe.
Phenotypic fluctuation of clones (individuals sharing the same gene) is
introduced into this evolution equation. With this formalism, it is found that
the critical mutation rate is sometimes increased by the phenotypic
fluctuations, i.e., noise can enhance robustness of a fitted state to mutation.
Our formalism is systematic and general, while approximations to derive more
tractable evolution equations are also discussed.
| [
{
"created": "Wed, 8 Nov 2006 20:38:35 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Nov 2006 02:13:16 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Sato",
"Katsuhiko",
""
],
[
"Kaneko",
"Kunihiko",
""
]
] | An equation describing the evolution of phenotypic distribution is derived using methods developed in statistical physics. The equation is solved by using the singular perturbation method, and assuming that the number of bases in the genetic sequence is large. Applying the equation to the mutation-selection model by Eigen provides the critical mutation rate for the error catastrophe. Phenotypic fluctuation of clones (individuals sharing the same gene) is introduced into this evolution equation. With this formalism, it is found that the critical mutation rate is sometimes increased by the phenotypic fluctuations, i.e., noise can enhance robustness of a fitted state to mutation. Our formalism is systematic and general, while approximations to derive more tractable evolution equations are also discussed. |
q-bio/0703065 | Eleonora Alfinito Dr. | E.Alfinito, C. Pennetta, and L.Reggiani | A network model to investigate structural and electrical properties of
proteins | 25 pages, 12 figures | Nanotechnology, 19, 065202 (2008) | 10.1088/0957-4484/19/6/065202 | null | q-bio.QM cond-mat.soft physics.bio-ph | null | One of the main trend in to date research and development is the
miniaturization of electronic devices. In this perspective, integrated
nanodevices based on proteins or biomolecules are attracting a major interest.
In fact, it has been shown that proteins like bacteriorhodopsin and azurin,
manifest electrical properties which are promising for the development of
active components in the field of molecular electronics. Here we focus on two
relevant kinds of proteins: The bovine rhodopsin, prototype of GPCR protein,
and the enzyme acetylcholinesterase (AChE), whose inhibition is one of the most
qualified treatments of Alzheimer disease. Both these proteins exert their
functioning starting with a conformational change of their native structure.
Our guess is that such a change should be accompanied with a detectable
variation of their electrical properties. To investigate this conjecture, we
present an impedance network model of proteins, able to estimate the different
electrical response associated with the different configurations. The model
resolution of the electrical response is found able to monitor the structure
and the conformational change of the given protein. In this respect, rhodopsin
exhibits a better differential response than AChE. This result gives room to
different interpretations of the degree of conformational change and in
particular supports a recent hypothesis on the existence of a mixed state
already in the native configuration of the protein.
| [
{
"created": "Fri, 30 Mar 2007 07:43:47 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Alfinito",
"E.",
""
],
[
"Pennetta",
"C.",
""
],
[
"Reggiani",
"L.",
""
]
] | One of the main trend in to date research and development is the miniaturization of electronic devices. In this perspective, integrated nanodevices based on proteins or biomolecules are attracting a major interest. In fact, it has been shown that proteins like bacteriorhodopsin and azurin, manifest electrical properties which are promising for the development of active components in the field of molecular electronics. Here we focus on two relevant kinds of proteins: The bovine rhodopsin, prototype of GPCR protein, and the enzyme acetylcholinesterase (AChE), whose inhibition is one of the most qualified treatments of Alzheimer disease. Both these proteins exert their functioning starting with a conformational change of their native structure. Our guess is that such a change should be accompanied with a detectable variation of their electrical properties. To investigate this conjecture, we present an impedance network model of proteins, able to estimate the different electrical response associated with the different configurations. The model resolution of the electrical response is found able to monitor the structure and the conformational change of the given protein. In this respect, rhodopsin exhibits a better differential response than AChE. This result gives room to different interpretations of the degree of conformational change and in particular supports a recent hypothesis on the existence of a mixed state already in the native configuration of the protein. |
1404.7241 | Ralph Brinks | Ralph Brinks | Change rates and prevalence of a dichotomous variable: simulations and
applications | 26 pages, 8 figures | null | 10.1371/journal.pone.0118955 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: A common modelling approach in public health and epidemiology
divides the population under study into compartments containing persons that
share the same status. Here we consider a three-state model with the
compartments: A, B and Dead. States A and B may be the states of any
dichotomous variable, for example, Healthy and Ill, respectively. The
transitions between the states are described by change rates (or synonymously:
densities), which depend on calendar time and on age. So far, a rigorous
mathematical calculation of the prevalence of property B has been difficult,
which has limited the use of the model in epidemiology and public health.
Methods: We develop an equation that simplifies the use of the three-state
model. To demonstrate the broad applicability and the validity of the equation,
it is applied to simulated data and real world data from different
health-related topics.
Results: The three-state model is governed by a partial differential equation
(PDE) that links the prevalence with the change rates between the states. The
validity of the PDE has been shown in two simulation studies, one about a
hypothetical chronic disease and one about dementia. In two further
applications, the equation may provide insights into smoking behaviour of males
in Germany and the knowledge about the ovulatory cycle in Egyptian women.
Conclusions: We have found a simple equation that links the prevalence of a
dichotomous variable with the transmission rates in the three-state model. The
equation has a broad applicability in epidemiology and public health. Examples
are the estimation of incidence rates from cross-sectional surveys, the
prediction of the future prevalence of chronic diseases, and planning of
interventions against risky behaviour (e.g., smoking).
| [
{
"created": "Tue, 29 Apr 2014 05:41:27 GMT",
"version": "v1"
}
] | 2015-06-19 | [
[
"Brinks",
"Ralph",
""
]
] | Background: A common modelling approach in public health and epidemiology divides the population under study into compartments containing persons that share the same status. Here we consider a three-state model with the compartments: A, B and Dead. States A and B may be the states of any dichotomous variable, for example, Healthy and Ill, respectively. The transitions between the states are described by change rates (or synonymously: densities), which depend on calendar time and on age. So far, a rigorous mathematical calculation of the prevalence of property B has been difficult, which has limited the use of the model in epidemiology and public health. Methods: We develop an equation that simplifies the use of the three-state model. To demonstrate the broad applicability and the validity of the equation, it is applied to simulated data and real world data from different health-related topics. Results: The three-state model is governed by a partial differential equation (PDE) that links the prevalence with the change rates between the states. The validity of the PDE has been shown in two simulation studies, one about a hypothetical chronic disease and one about dementia. In two further applications, the equation may provide insights into smoking behaviour of males in Germany and the knowledge about the ovulatory cycle in Egyptian women. Conclusions: We have found a simple equation that links the prevalence of a dichotomous variable with the transmission rates in the three-state model. The equation has a broad applicability in epidemiology and public health. Examples are the estimation of incidence rates from cross-sectional surveys, the prediction of the future prevalence of chronic diseases, and planning of interventions against risky behaviour (e.g., smoking). |
2008.00531 | Luis F Seoane PhD | Lu\'is F Seoane | Fate of Duplicated Neural Structures | Review with novel results. Position paper. 16 pages, 3 figures | Entropy 22, 928 (2020) | 10.3390/e22090928 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical mechanics determines the abundance of different arrangements of
matter depending on cost-benefit balances. Its formalism and phenomenology
percolate throughout biological processes and set limits to effective
computation. Under specific conditions, self-replicating and computationally
complex patterns become favored, yielding life, cognition, and Darwinian
evolution. Neurons and neural circuits sit at a crossroads between statistical
mechanics, computation, and (through their role in cognition) natural
selection. Can we establish a {\em statistical physics} of neural circuits?
Such theory would tell what kinds of brains to expect under set energetic,
evolutionary, and computational conditions. With this big picture in mind, we
focus on the fate of duplicated neural circuits. We look at examples from
central nervous systems, with a stress on computational thresholds that might
prompt this redundancy. We also study a naive cost-benefit balance for
duplicated circuits implementing complex phenotypes. From this we derive {\em
phase diagrams} and (phase-like) transitions between single and duplicated
circuits, which constrain evolutionary paths to complex cognition. Back to the
big picture, similar phase diagrams and transitions might constrain I/O and
internal connectivity patterns of neural circuits at large. The formalism of
statistical mechanics seems a natural framework for this worthy line of
research.
| [
{
"created": "Sun, 2 Aug 2020 17:43:57 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Aug 2020 14:12:28 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Feb 2021 19:06:11 GMT",
"version": "v3"
}
] | 2021-02-03 | [
[
"Seoane",
"Luís F",
""
]
] ] | Statistical mechanics determines the abundance of different arrangements of matter depending on cost-benefit balances. Its formalism and phenomenology percolate throughout biological processes and set limits to effective computation. Under specific conditions, self-replicating and computationally complex patterns become favored, yielding life, cognition, and Darwinian evolution. Neurons and neural circuits sit at a crossroads between statistical mechanics, computation, and (through their role in cognition) natural selection. Can we establish a {\em statistical physics} of neural circuits? Such theory would tell what kinds of brains to expect under set energetic, evolutionary, and computational conditions. With this big picture in mind, we focus on the fate of duplicated neural circuits. We look at examples from central nervous systems, with a stress on computational thresholds that might prompt this redundancy. We also study a naive cost-benefit balance for duplicated circuits implementing complex phenotypes. From this we derive {\em phase diagrams} and (phase-like) transitions between single and duplicated circuits, which constrain evolutionary paths to complex cognition. Back to the big picture, similar phase diagrams and transitions might constrain I/O and internal connectivity patterns of neural circuits at large. The formalism of statistical mechanics seems a natural framework for this worthy line of research. |
1911.09920 | Frederic Peruch | Dounia Arcens (LCPO), Etienne Grau (LCPO), St\'ephane Grelier (LCPO),
Henri Cramail (LCPO), Fr\'ed\'eric Peruch (LCPO) | 6-O-glucose palmitate synthesis with lipase: Investigation of some key
parameters | null | Journal of Molecular Catalysis, Elsevier, 2018, 460, pp.63 - 68 | 10.1016/j.mcat.2018.09.013 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fatty acid sugar esters represent an important class of non-ionic bio-based
surfactants. They can be synthesized from vinyl fatty acids and sugars with
enzyme as a catalyst. Herein, the influence of the solvent, the lipase and the
temperature on a model reaction between vinyl palmitate and glucose via
enzymatic catalysis has been investigated and the reaction conditions
optimized. Full conversion into 6-O-glucose palmitate was reached in 40 hours
in acetonitrile starting from a reactant ratio 1:1, at only 5%-wt loading of
lipase from Candida antarctica B (CALB) without the presence of molecular
sieves.
| [
{
"created": "Fri, 22 Nov 2019 08:40:14 GMT",
"version": "v1"
}
] | 2019-11-25 | [
[
"Arcens",
"Dounia",
"",
"LCPO"
],
[
"Grau",
"Etienne",
"",
"LCPO"
],
[
"Grelier",
"Stéphane",
"",
"LCPO"
],
[
"Cramail",
"Henri",
"",
"LCPO"
],
[
"Peruch",
"Frédéric",
"",
"LCPO"
]
] | Fatty acid sugar esters represent an important class of non-ionic bio-based surfactants. They can be synthesized from vinyl fatty acids and sugars with enzyme as a catalyst. Herein, the influence of the solvent, the lipase and the temperature on a model reaction between vinyl palmitate and glucose via enzymatic catalysis has been investigated and the reaction conditions optimized. Full conversion into 6-O-glucose palmitate was reached in 40 hours in acetonitrile starting from a reactant ratio 1:1, at only 5%-wt loading of lipase from Candida antarctica B (CALB) without the presence of molecular sieves. |
2402.17704 | Ruby Sedgwick | Ruby Sedgwick, John P. Goertz, Molly M. Stevens, Ruth Misener, Mark
van der Wilk | Transfer Learning Bayesian Optimization to Design Competitor DNA
Molecules for Use in Diagnostic Assays | null | null | null | null | q-bio.QM cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the rise in engineered biomolecular devices, there is an increased need
for tailor-made biological sequences. Often, many similar biological sequences
need to be made for a specific application meaning numerous, sometimes
prohibitively expensive, lab experiments are necessary for their optimization.
This paper presents a transfer learning design of experiments workflow to make
this development feasible. By combining a transfer learning surrogate model
with Bayesian optimization, we show how the total number of experiments can be
reduced by sharing information between optimization tasks. We demonstrate the
reduction in the number of experiments using data from the development of DNA
competitors for use in an amplification-based diagnostic assay. We use
cross-validation to compare the predictive accuracy of different transfer
learning models, and then compare the performance of the models for both single
objective and penalized optimization tasks.
| [
{
"created": "Tue, 27 Feb 2024 17:30:33 GMT",
"version": "v1"
}
] | 2024-02-28 | [
[
"Sedgwick",
"Ruby",
""
],
[
"Goertz",
"John P.",
""
],
[
"Stevens",
"Molly M.",
""
],
[
"Misener",
"Ruth",
""
],
[
"van der Wilk",
"Mark",
""
]
] | With the rise in engineered biomolecular devices, there is an increased need for tailor-made biological sequences. Often, many similar biological sequences need to be made for a specific application meaning numerous, sometimes prohibitively expensive, lab experiments are necessary for their optimization. This paper presents a transfer learning design of experiments workflow to make this development feasible. By combining a transfer learning surrogate model with Bayesian optimization, we show how the total number of experiments can be reduced by sharing information between optimization tasks. We demonstrate the reduction in the number of experiments using data from the development of DNA competitors for use in an amplification-based diagnostic assay. We use cross-validation to compare the predictive accuracy of different transfer learning models, and then compare the performance of the models for both single objective and penalized optimization tasks. |
2005.08035 | Matthew Leming | Matthew Leming, Simon Baron-Cohen, John Suckling | Single-participant structural connectivity matrices lead to greater
accuracy in classification of participants than function in autism in MRI | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce a technique of deriving symmetric connectivity
matrices from regional histograms of grey-matter volume estimated from
T1-weighted MRIs. We then validated the technique by inputting the connectivity
matrices into a convolutional neural network (CNN) to classify between
participants with autism and age-, motion-, and intracranial-volume-matched
controls from six different databases (29,288 total connectomes, mean age =
30.72, range 0.42-78.00, including 1555 subjects with autism). We compared this
method to similar classifications of the same participants using fMRI
connectivity matrices as well as univariate estimates of grey-matter volumes.
We further applied graph-theoretical metrics on output class activation maps to
identify areas of the matrices that the CNN preferentially used to make the
classification, focusing particularly on hubs. Our results gave AUROCs of
0.7298 (69.71% accuracy) when classifying by only structural connectivity,
0.6964 (67.72% accuracy) when classifying by only functional connectivity, and
0.7037 (66.43% accuracy) when classifying by univariate grey matter volumes.
Combining structural and functional connectivities gave an AUROC of 0.7354
(69.40% accuracy). Graph analysis of class activation maps revealed no
distinguishable network patterns for functional inputs, but did reveal
localized differences between groups in bilateral Heschl's gyrus and upper
vermis for structural connectivity. This work provides a simple means of
feature extraction for inputting large numbers of structural MRIs into machine
learning models.
| [
{
"created": "Sat, 16 May 2020 16:36:06 GMT",
"version": "v1"
},
{
"created": "Wed, 27 May 2020 16:09:21 GMT",
"version": "v2"
}
] | 2020-05-28 | [
[
"Leming",
"Matthew",
""
],
[
"Baron-Cohen",
"Simon",
""
],
[
"Suckling",
"John",
""
]
] | In this work, we introduce a technique of deriving symmetric connectivity matrices from regional histograms of grey-matter volume estimated from T1-weighted MRIs. We then validated the technique by inputting the connectivity matrices into a convolutional neural network (CNN) to classify between participants with autism and age-, motion-, and intracranial-volume-matched controls from six different databases (29,288 total connectomes, mean age = 30.72, range 0.42-78.00, including 1555 subjects with autism). We compared this method to similar classifications of the same participants using fMRI connectivity matrices as well as univariate estimates of grey-matter volumes. We further applied graph-theoretical metrics on output class activation maps to identify areas of the matrices that the CNN preferentially used to make the classification, focusing particularly on hubs. Our results gave AUROCs of 0.7298 (69.71% accuracy) when classifying by only structural connectivity, 0.6964 (67.72% accuracy) when classifying by only functional connectivity, and 0.7037 (66.43% accuracy) when classifying by univariate grey matter volumes. Combining structural and functional connectivities gave an AUROC of 0.7354 (69.40% accuracy). Graph analysis of class activation maps revealed no distinguishable network patterns for functional inputs, but did reveal localized differences between groups in bilateral Heschl's gyrus and upper vermis for structural connectivity. This work provides a simple means of feature extraction for inputting large numbers of structural MRIs into machine learning models. |
2111.08693 | Maeliss Jallais | Ma\"eliss Jallais (PARIETAL), Pedro Luiz Coelho Rodrigues (STATIFY),
Alexandre Gramfort (PARIETAL), Demian Wassermann (PARIETAL) | Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements | null | Journal of Machine Learning for Biomedical Imaging, Melba editors,
2022, pp.1-27 | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective characterisation of the brain grey matter cytoarchitecture with
quantitative sensitivity to soma density and volume remains an unsolved
challenge in diffusion MRI (dMRI). Solving the problem of relating the dMRI
signal with cytoarchitectural characteristics calls for the definition of a
mathematical model that describes brain tissue via a handful of
physiologically-relevant parameters and an algorithm for inverting the model.
To address this issue, we propose a new forward model, specifically a new
system of equations, requiring a few relatively sparse b-shells. We then apply
modern tools from Bayesian analysis known as likelihood-free inference (LFI) to
invert our proposed model. As opposed to other approaches from the literature,
our algorithm yields not only an estimation of the parameter vector $\theta$
that best describes a given observed data point $x_0$, but also a full
posterior distribution $p(\theta|x_0)$ over the parameter space. This enables a
richer description of the model inversion, providing indicators such as
credible intervals for the estimated parameters and a complete characterization
of the parameter regions where the model may present indeterminacies. We
approximate the posterior distribution using deep neural density estimators,
known as normalizing flows, and fit them using a set of repeated simulations
from the forward model. We validate our approach on simulations using dmipy and
then apply the whole pipeline on two publicly available datasets.
| [
{
"created": "Mon, 15 Nov 2021 09:08:27 GMT",
"version": "v1"
},
{
"created": "Wed, 4 May 2022 11:16:21 GMT",
"version": "v2"
}
] | 2022-05-05 | [
[
"Jallais",
"Maëliss",
"",
"PARIETAL"
],
[
"Rodrigues",
"Pedro Luiz Coelho",
"",
"STATIFY"
],
[
"Gramfort",
"Alexandre",
"",
"PARIETAL"
],
[
"Wassermann",
"Demian",
"",
"PARIETAL"
]
] | Effective characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI). Solving the problem of relating the dMRI signal with cytoarchitectural characteristics calls for the definition of a mathematical model that describes brain tissue via a handful of physiologically-relevant parameters and an algorithm for inverting the model. To address this issue, we propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells. We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model. As opposed to other approaches from the literature, our algorithm yields not only an estimation of the parameter vector $\theta$ that best describes a given observed data point $x_0$, but also a full posterior distribution $p(\theta|x_0)$ over the parameter space. This enables a richer description of the model inversion, providing indicators such as credible intervals for the estimated parameters and a complete characterization of the parameter regions where the model may present indeterminacies. We approximate the posterior distribution using deep neural density estimators, known as normalizing flows, and fit them using a set of repeated simulations from the forward model. We validate our approach on simulations using dmipy and then apply the whole pipeline on two publicly available datasets. |
1601.04183 | Peter Diehl Peter U. Diehl | Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre
Neftci and Guido Zarrella | TrueHappiness: Neuromorphic Emotion Recognition on TrueNorth | null | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to constructing a neuromorphic device that responds to
language input by producing neuron spikes in proportion to the strength of the
appropriate positive or negative emotional response. Specifically, we perform a
fine-grained sentiment analysis task with implementations on two different
systems: one using conventional spiking neural network (SNN) simulators and the
other one using IBM's Neurosynaptic System TrueNorth. Input words are projected
into a high-dimensional semantic space and processed through a fully-connected
neural network (FCNN) containing rectified linear units trained via
backpropagation. After training, this FCNN is converted to a SNN by
substituting the ReLUs with integrate-and-fire neurons. We show that there is
practically no performance loss due to conversion to a spiking network on a
sentiment analysis test set, i.e. correlations between predictions and human
annotations differ by less than 0.02 comparing the original DNN and its spiking
equivalent. Additionally, we show that the SNN generated with this technique
can be mapped to existing neuromorphic hardware -- in our case, the TrueNorth
chip. Mapping to the chip involves 4-bit synaptic weight discretization and
adjustment of the neuron thresholds. The resulting end-to-end system can take a
user input, i.e. a word in a vocabulary of over 300,000 words, and estimate its
sentiment on TrueNorth with a power consumption of approximately 50 uW.
| [
{
"created": "Sat, 16 Jan 2016 17:04:25 GMT",
"version": "v1"
}
] | 2016-01-19 | [
[
"Diehl",
"Peter U.",
""
],
[
"Pedroni",
"Bruno U.",
""
],
[
"Cassidy",
"Andrew",
""
],
[
"Merolla",
"Paul",
""
],
[
"Neftci",
"Emre",
""
],
[
"Zarrella",
"Guido",
""
]
] | We present an approach to constructing a neuromorphic device that responds to language input by producing neuron spikes in proportion to the strength of the appropriate positive or negative emotional response. Specifically, we perform a fine-grained sentiment analysis task with implementations on two different systems: one using conventional spiking neural network (SNN) simulators and the other one using IBM's Neurosynaptic System TrueNorth. Input words are projected into a high-dimensional semantic space and processed through a fully-connected neural network (FCNN) containing rectified linear units trained via backpropagation. After training, this FCNN is converted to a SNN by substituting the ReLUs with integrate-and-fire neurons. We show that there is practically no performance loss due to conversion to a spiking network on a sentiment analysis test set, i.e. correlations between predictions and human annotations differ by less than 0.02 comparing the original DNN and its spiking equivalent. Additionally, we show that the SNN generated with this technique can be mapped to existing neuromorphic hardware -- in our case, the TrueNorth chip. Mapping to the chip involves 4-bit synaptic weight discretization and adjustment of the neuron thresholds. The resulting end-to-end system can take a user input, i.e. a word in a vocabulary of over 300,000 words, and estimate its sentiment on TrueNorth with a power consumption of approximately 50 uW. |
1506.02076 | Akira Kinjo | Akira R. Kinjo | A unified statistical model of protein multiple sequence alignment
integrating direct coupling and insertions | 21 pages (2-column), 13 figures | Biophysics and Physicobiology Vol. 13, pp. 45-62 (2016) | 10.2142/biophysico.13.0_45 | null | q-bio.BM physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The multiple sequence alignment (MSA) of a protein family provides a wealth
of information in terms of the conservation pattern of amino acid residues not
only at each alignment site but also between distant sites. In order to
statistically model the MSA incorporating both short-range and long-range
correlations as well as insertions, I have derived a lattice gas model of the
MSA based on the principle of maximum entropy. The partition function, obtained
by the transfer matrix method with a mean-field approximation, accounts for all
possible alignments with all possible sequences. The model parameters for
short-range and long-range interactions were determined by a self-consistent
condition and by a Gaussian approximation, respectively. Using this model with
and without long-range interactions, I analyzed the globin and V-set domains by
increasing the "temperature" and by "mutating" a site. The correlations between
residue conservation and various measures of the system's stability indicate
that the long-range interactions make the conservation pattern more specific to
the structure, and increasingly stabilize better conserved residues.
| [
{
"created": "Fri, 5 Jun 2015 22:26:24 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Oct 2015 23:49:02 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Dec 2015 04:57:10 GMT",
"version": "v3"
}
] | 2016-04-27 | [
[
"Kinjo",
"Akira R.",
""
]
] | The multiple sequence alignment (MSA) of a protein family provides a wealth of information in terms of the conservation pattern of amino acid residues not only at each alignment site but also between distant sites. In order to statistically model the MSA incorporating both short-range and long-range correlations as well as insertions, I have derived a lattice gas model of the MSA based on the principle of maximum entropy. The partition function, obtained by the transfer matrix method with a mean-field approximation, accounts for all possible alignments with all possible sequences. The model parameters for short-range and long-range interactions were determined by a self-consistent condition and by a Gaussian approximation, respectively. Using this model with and without long-range interactions, I analyzed the globin and V-set domains by increasing the "temperature" and by "mutating" a site. The correlations between residue conservation and various measures of the system's stability indicate that the long-range interactions make the conservation pattern more specific to the structure, and increasingly stabilize better conserved residues. |
1704.04039 | Vladimir Golkov | Vladimir Golkov, Marcin J. Skwark, Atanas Mirchev, Georgi Dikov,
Alexander R. Geanes, Jeffrey Mendenhall, Jens Meiler and Daniel Cremers | 3D Deep Learning for Biological Function Prediction from Physical Fields | null | null | null | null | q-bio.BM cs.LG q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the biological function of molecules, be it proteins or drug-like
compounds, from their atomic structure is an important and long-standing
problem. Function is dictated by structure, since it is by spatial interactions
that molecules interact with each other, both in terms of steric
complementarity, as well as intermolecular forces. Thus, the electron density
field and electrostatic potential field of a molecule contain the "raw
fingerprint" of how this molecule can fit to binding partners. In this paper,
we show that deep learning can predict biological function of molecules
directly from their raw 3D approximated electron density and electrostatic
potential fields. Protein function based on EC numbers is predicted from the
approximated electron density field. In another experiment, the activity of
small molecules is predicted with quality comparable to state-of-the-art
descriptor-based methods. We propose several alternative computational models
for the GPU with different memory and runtime requirements for different sizes
of molecules and of databases. We also propose application-specific
multi-channel data representations. With future improvements of training
datasets and neural network settings in combination with complementary
information sources (sequence, genomic context, expression level), deep
learning can be expected to show its generalization power and revolutionize the
field of molecular function prediction.
| [
{
"created": "Thu, 13 Apr 2017 09:11:23 GMT",
"version": "v1"
}
] | 2017-04-14 | [
[
"Golkov",
"Vladimir",
""
],
[
"Skwark",
"Marcin J.",
""
],
[
"Mirchev",
"Atanas",
""
],
[
"Dikov",
"Georgi",
""
],
[
"Geanes",
"Alexander R.",
""
],
[
"Mendenhall",
"Jeffrey",
""
],
[
"Meiler",
"Jens",
""
],
[
"Cremers",
"Daniel",
""
]
] | Predicting the biological function of molecules, be it proteins or drug-like compounds, from their atomic structure is an important and long-standing problem. Function is dictated by structure, since it is by spatial interactions that molecules interact with each other, both in terms of steric complementarity, as well as intermolecular forces. Thus, the electron density field and electrostatic potential field of a molecule contain the "raw fingerprint" of how this molecule can fit to binding partners. In this paper, we show that deep learning can predict biological function of molecules directly from their raw 3D approximated electron density and electrostatic potential fields. Protein function based on EC numbers is predicted from the approximated electron density field. In another experiment, the activity of small molecules is predicted with quality comparable to state-of-the-art descriptor-based methods. We propose several alternative computational models for the GPU with different memory and runtime requirements for different sizes of molecules and of databases. We also propose application-specific multi-channel data representations. With future improvements of training datasets and neural network settings in combination with complementary information sources (sequence, genomic context, expression level), deep learning can be expected to show its generalization power and revolutionize the field of molecular function prediction. |
1207.5506 | Masashi Fujii | Masashi Fujii, Hiraku Nishimori and Akinori Awazu | Influences of Excluded Volume of Molecules on Signaling Processes on
Biomembrane | 31 pages, 10 figures | null | 10.1371/journal.pone.0062218 | null | q-bio.BM nlin.CG physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the influences of the excluded volume of molecules on
biochemical reaction processes on 2-dimensional surfaces using a model of
signal transduction processes on biomembranes. We perform simulations of the
2-dimensional cell-based model, which describes the reactions and diffusion of
the receptors, signaling proteins, target proteins, and crowders on the cell
membrane. The signaling proteins are activated by receptors, and these
activated signaling proteins activate target proteins that bind autonomously
from the cytoplasm to the membrane, and unbind from the membrane if activated.
If the target proteins bind frequently, the volume fraction of molecules on the
membrane becomes so large that the excluded volume of the molecules for the
reaction and diffusion dynamics cannot be negligible. We find that such
excluded volume effects of the molecules induce non-trivial variations of the
signal flow, defined as the activation frequency of target proteins, as
follows. With an increase in the binding rate of target proteins, the signal
flow varies by i) monotonically increasing; ii) increasing then decreasing in a
bell-shaped curve; or iii) increasing, decreasing, then increasing in an
S-shaped curve. We further demonstrate that the excluded volume of molecules
influences the hierarchical molecular distributions throughout the reaction
processes. In particular, when the system exhibits a large signal flow, the
signaling proteins tend to surround the receptors to form receptor-signaling
protein clusters, and the target proteins tend to become distributed around
such clusters. To explain these phenomena, we analyze the stochastic model of
the local motions of molecules around the receptor.
| [
{
"created": "Tue, 24 Jul 2012 05:17:49 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Oct 2012 10:33:14 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Jan 2013 04:31:26 GMT",
"version": "v3"
}
] | 2013-05-08 | [
[
"Fujii",
"Masashi",
""
],
[
"Nishimori",
"Hiraku",
""
],
[
"Awazu",
"Akinori",
""
]
] | We investigate the influences of the excluded volume of molecules on biochemical reaction processes on 2-dimensional surfaces using a model of signal transduction processes on biomembranes. We perform simulations of the 2-dimensional cell-based model, which describes the reactions and diffusion of the receptors, signaling proteins, target proteins, and crowders on the cell membrane. The signaling proteins are activated by receptors, and these activated signaling proteins activate target proteins that bind autonomously from the cytoplasm to the membrane, and unbind from the membrane if activated. If the target proteins bind frequently, the volume fraction of molecules on the membrane becomes so large that the excluded volume of the molecules for the reaction and diffusion dynamics cannot be negligible. We find that such excluded volume effects of the molecules induce non-trivial variations of the signal flow, defined as the activation frequency of target proteins, as follows. With an increase in the binding rate of target proteins, the signal flow varies by i) monotonically increasing; ii) increasing then decreasing in a bell-shaped curve; or iii) increasing, decreasing, then increasing in an S-shaped curve. We further demonstrate that the excluded volume of molecules influences the hierarchical molecular distributions throughout the reaction processes. In particular, when the system exhibits a large signal flow, the signaling proteins tend to surround the receptors to form receptor-signaling protein clusters, and the target proteins tend to become distributed around such clusters. To explain these phenomena, we analyze the stochastic model of the local motions of molecules around the receptor. |
2006.10212 | Yu Takagi | Yu Takagi, Steven W. Kennerley, Jun-ichiro Hirayama, Laurence T. Hunt | Demixed shared component analysis of neural population data from
multiple brain areas | Accepted at the conference on Neural Information Processing Systems
(NeurIPS 2020, spotlight) | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Recent advances in neuroscience data acquisition allow for the simultaneous
recording of large populations of neurons across multiple brain areas while
subjects perform complex cognitive tasks. Interpreting these data requires us
to index how task-relevant information is shared across brain regions, but this
is often confounded by the mixing of different task parameters at the single
neuron level. Here, inspired by a method developed for a single brain area, we
introduce a new technique for demixing variables across multiple brain areas,
called demixed shared component analysis (dSCA). dSCA decomposes population
activity into a few components, such that the shared components capture the
maximum amount of shared information across brain regions while also depending
on relevant task parameters. This yields interpretable components that express
which variables are shared between different brain regions and when this
information is shared across time. To illustrate our method, we reanalyze two
datasets recorded during decision-making tasks in rodents and macaques. We find
that dSCA provides new insights into the shared computation between different
brain areas in these datasets, relating to several different aspects of
decision formation.
| [
{
"created": "Thu, 18 Jun 2020 00:13:12 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Oct 2020 10:24:35 GMT",
"version": "v2"
}
] | 2020-10-27 | [
[
"Takagi",
"Yu",
""
],
[
"Kennerley",
"Steven W.",
""
],
[
"Hirayama",
"Jun-ichiro",
""
],
[
"Hunt",
"Laurence T.",
""
]
] | Recent advances in neuroscience data acquisition allow for the simultaneous recording of large populations of neurons across multiple brain areas while subjects perform complex cognitive tasks. Interpreting these data requires us to index how task-relevant information is shared across brain regions, but this is often confounded by the mixing of different task parameters at the single neuron level. Here, inspired by a method developed for a single brain area, we introduce a new technique for demixing variables across multiple brain areas, called demixed shared component analysis (dSCA). dSCA decomposes population activity into a few components, such that the shared components capture the maximum amount of shared information across brain regions while also depending on relevant task parameters. This yields interpretable components that express which variables are shared between different brain regions and when this information is shared across time. To illustrate our method, we reanalyze two datasets recorded during decision-making tasks in rodents and macaques. We find that dSCA provides new insights into the shared computation between different brain areas in these datasets, relating to several different aspects of decision formation. |
2205.00057 | Andrew Jensen | Andrew Jensen, Paris Flood, Lindsey Palm-Vlasak, Will Burton, Paul
Rullkoetter, Scott Banks | Joint Track Machine Learning: An autonomous method for measuring 6DOF
TKA kinematics from single-plane x-ray images | 8 pages, 6 figures, 2 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic radiographic measurement of 3D TKA kinematics has provided important
information for implant design and surgical technique for over 30 years.
However, current methods of measuring TKA kinematics are too cumbersome or
time-consuming for practical clinical application. Even state-of-the-art
techniques require human-supervised initialization or human supervision
throughout the entire optimization process. Elimination of human supervision
could potentially bring this technology into clinical practicality. Therefore,
we propose a fully autonomous pipeline for quantifying TKA kinematics from
single-plane imaging. First, a convolutional neural network segments the
femoral and tibial implants from the image. Second, segmented images are
compared to Normalized Fourier Descriptor shape libraries for initial pose
estimates. Lastly, a Lipschitzian optimization routine minimizes the difference
between the segmented image and the projected implant. This technique reliably
reproduces human-supervised kinematics measurements from internal datasets and
external validation studies, with RMS differences of less than 0.7mm and
4{\deg} for internal studies and 0.8mm and 1.7{\deg} for external validation
studies. This performance indicates that it will soon be practical to perform
these measurements in a clinical setting.
| [
{
"created": "Fri, 29 Apr 2022 19:35:56 GMT",
"version": "v1"
}
] | 2022-05-03 | [
[
"Jensen",
"Andrew",
""
],
[
"Flood",
"Paris",
""
],
[
"Palm-Vlasak",
"Lindsey",
""
],
[
"Burton",
"Will",
""
],
[
"Rullkoetter",
"Paul",
""
],
[
"Banks",
"Scott",
""
]
] | Dynamic radiographic measurement of 3D TKA kinematics has provided important information for implant design and surgical technique for over 30 years. However, current methods of measuring TKA kinematics are too cumbersome or time-consuming for practical clinical application. Even state-of-the-art techniques require human-supervised initialization or human supervision throughout the entire optimization process. Elimination of human supervision could potentially bring this technology into clinical practicality. Therefore, we propose a fully autonomous pipeline for quantifying TKA kinematics from single-plane imaging. First, a convolutional neural network segments the femoral and tibial implants from the image. Second, segmented images are compared to Normalized Fourier Descriptor shape libraries for initial pose estimates. Lastly, a Lipschitzian optimization routine minimizes the difference between the segmented image and the projected implant. This technique reliably reproduces human-supervised kinematics measurements from internal datasets and external validation studies, with RMS differences of less than 0.7mm and 4{\deg} for internal studies and 0.8mm and 1.7{\deg} for external validation studies. This performance indicates that it will soon be practical to perform these measurements in a clinical setting. |
2403.11015 | Alireza Rowhanimanesh | Alireza Rowhanimanesh | Identifying the Attractors of Gene Regulatory Networks from Expression
Data under Uncertainty: An Interpretable Approach | null | null | null | null | q-bio.MN cs.AI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In systems biology, attractor landscape analysis of gene regulatory networks
is recognized as a powerful computational tool for studying various cellular
states from proliferation and differentiation to senescence and apoptosis.
Therefore, accurate identification of attractors plays a critical role in
determination of the cell fates. On the other hand, in a real biological
circuit, genetic/epigenetic alterations as well as varying environmental
factors drastically take effect on the location, characteristics, and even the
number of attractors. The central question is: Given a temporal gene expression
profile of a real gene regulatory network, how can the attractors be robustly
identified in the presence of huge amount of uncertainty? This paper addresses
this question using a novel approach based on Zadeh Computing with Words. The
proposed scheme could effectively identify the attractors from temporal gene
expression data in terms of both fuzzy logic-based and linguistic descriptions
which are simply interpretable by human experts. Therefore, this method can be
considered as an effective step towards interpretable artificial intelligence.
Without loss of generality, genetic toggle switch is considered as the case
study. The nonlinear dynamics of this benchmark gene regulatory network is
computationally modeled by the notion of uncertain stochastic differential
equations. The results of in-silico study demonstrate the efficiency and
robustness of the proposed method.
| [
{
"created": "Sat, 16 Mar 2024 20:56:22 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Rowhanimanesh",
"Alireza",
""
]
] | In systems biology, attractor landscape analysis of gene regulatory networks is recognized as a powerful computational tool for studying various cellular states from proliferation and differentiation to senescence and apoptosis. Therefore, accurate identification of attractors plays a critical role in determination of the cell fates. On the other hand, in a real biological circuit, genetic/epigenetic alterations as well as varying environmental factors drastically take effect on the location, characteristics, and even the number of attractors. The central question is: Given a temporal gene expression profile of a real gene regulatory network, how can the attractors be robustly identified in the presence of huge amount of uncertainty? This paper addresses this question using a novel approach based on Zadeh Computing with Words. The proposed scheme could effectively identify the attractors from temporal gene expression data in terms of both fuzzy logic-based and linguistic descriptions which are simply interpretable by human experts. Therefore, this method can be considered as an effective step towards interpretable artificial intelligence. Without loss of generality, genetic toggle switch is considered as the case study. The nonlinear dynamics of this benchmark gene regulatory network is computationally modeled by the notion of uncertain stochastic differential equations. The results of in-silico study demonstrate the efficiency and robustness of the proposed method. |
1906.05150 | Dimitris Vavoulis | Dimitrios V Vavoulis | Exploring Bayesian approaches to eQTL mapping through probabilistic
programming | 25 pages, 3 figures; to appear as a book chapter in "eQTL Analysis:
Methods and Protocols", a volume for the series "Methods in Molecular
Biology" published by Springer | null | null | null | q-bio.GN q-bio.QM stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | The discovery of genomic polymorphisms influencing gene expression (also
known as expression quantitative trait loci or eQTLs) can be formulated as a
sparse Bayesian multivariate/multiple regression problem. An important aspect
in the development of such models is the implementation of bespoke inference
methodologies, a process which can become quite laborious, when multiple
candidate models are being considered. We describe automatic, black-box
inference in such models using Stan, a popular probabilistic programming
language. The utilisation of systems like Stan can facilitate model prototyping
and testing, thus accelerating the data modelling process. The code described
in this chapter can be found at https://github.com/dvav/eQTLBookChapter.
| [
{
"created": "Wed, 12 Jun 2019 14:07:58 GMT",
"version": "v1"
}
] | 2019-06-13 | [
[
"Vavoulis",
"Dimitrios V",
""
]
] | The discovery of genomic polymorphisms influencing gene expression (also known as expression quantitative trait loci or eQTLs) can be formulated as a sparse Bayesian multivariate/multiple regression problem. An important aspect in the development of such models is the implementation of bespoke inference methodologies, a process which can become quite laborious, when multiple candidate models are being considered. We describe automatic, black-box inference in such models using Stan, a popular probabilistic programming language. The utilisation of systems like Stan can facilitate model prototyping and testing, thus accelerating the data modelling process. The code described in this chapter can be found at https://github.com/dvav/eQTLBookChapter. |
2310.11759 | Jonathan Vacher | Jonathan Vacher, Pascal Mamassian | Perceptual Scales Predicted by Fisher Information Metrics | 15 pages, 6 figures, 7 appendix | The Twelfth International Conference on Learning Representations.
2024 | null | null | q-bio.NC cs.CV cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Perception is often viewed as a process that transforms physical variables,
external to an observer, into internal psychological variables. Such a process
can be modeled by a function coined perceptual scale. The perceptual scale can
be deduced from psychophysical measurements that consist in comparing the
relative differences between stimuli (i.e. difference scaling experiments).
However, this approach is often overlooked by the modeling and experimentation
communities. Here, we demonstrate the value of measuring the perceptual scale
of classical (spatial frequency, orientation) and less classical physical
variables (interpolation between textures) by embedding it in recent
probabilistic modeling of perception. First, we show that the assumption that
an observer has an internal representation of univariate parameters such as
spatial frequency or orientation while stimuli are high-dimensional does not
lead to contradictory predictions when following the theoretical framework.
Second, we show that the measured perceptual scale corresponds to the
transduction function hypothesized in this framework. In particular, we
demonstrate that it is related to the Fisher information of the generative
model that underlies perception and we test the predictions given by the
generative model of different stimuli in a set of difference scaling
experiments. Our main conclusion is that the perceptual scale is mostly driven
by the stimulus power spectrum. Finally, we propose that this measure of
perceptual scale is a way to push further the notion of perceptual distances by
estimating the perceptual geometry of images i.e. the path between images
instead of simply the distance between those.
| [
{
"created": "Wed, 18 Oct 2023 07:31:47 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 18:44:51 GMT",
"version": "v2"
}
] | 2024-03-19 | [
[
"Vacher",
"Jonathan",
""
],
[
"Mamassian",
"Pascal",
""
]
] | Perception is often viewed as a process that transforms physical variables, external to an observer, into internal psychological variables. Such a process can be modeled by a function coined perceptual scale. The perceptual scale can be deduced from psychophysical measurements that consist in comparing the relative differences between stimuli (i.e. difference scaling experiments). However, this approach is often overlooked by the modeling and experimentation communities. Here, we demonstrate the value of measuring the perceptual scale of classical (spatial frequency, orientation) and less classical physical variables (interpolation between textures) by embedding it in recent probabilistic modeling of perception. First, we show that the assumption that an observer has an internal representation of univariate parameters such as spatial frequency or orientation while stimuli are high-dimensional does not lead to contradictory predictions when following the theoretical framework. Second, we show that the measured perceptual scale corresponds to the transduction function hypothesized in this framework. In particular, we demonstrate that it is related to the Fisher information of the generative model that underlies perception and we test the predictions given by the generative model of different stimuli in a set of difference scaling experiments. Our main conclusion is that the perceptual scale is mostly driven by the stimulus power spectrum. Finally, we propose that this measure of perceptual scale is a way to push further the notion of perceptual distances by estimating the perceptual geometry of images i.e. the path between images instead of simply the distance between those. |
2405.10345 | Divyagna Bavikadi | Divyagna Bavikadi, Ayushi Agarwal, Shashank Ganta, Yunro Chung,
Lusheng Song, Ji Qiu and Paulo Shakarian | Machine Learning Driven Biomarker Selection for Medical Diagnosis | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advances in experimental methods have enabled researchers to collect
data on thousands of analytes simultaneously. This has led to correlational
studies that associated molecular measurements with diseases such as
Alzheimer's, Liver, and Gastric Cancer. However, the use of thousands of
biomarkers selected from the analytes is not practical for real-world medical
diagnosis and is likely undesirable due to potentially formed spurious
correlations. In this study, we evaluate 4 different methods for biomarker
selection and 4 different machine learning (ML) classifiers for identifying
correlations, evaluating 16 approaches in all. We found that contemporary
methods outperform previously reported logistic regression in cases where 3 and
10 biomarkers are permitted. When specificity is fixed at 0.9, ML approaches
produced a sensitivity of 0.240 (3 biomarkers) and 0.520 (10 biomarkers), while
standard logistic regression provided a sensitivity of 0.000 (3 biomarkers) and
0.040 (10 biomarkers). We also noted that causal-based methods for biomarker
selection proved to be the most performant when fewer biomarkers were
permitted, while univariate feature selection was the most performant when a
greater number of biomarkers were permitted.
| [
{
"created": "Thu, 16 May 2024 01:30:47 GMT",
"version": "v1"
}
] | 2024-05-20 | [
[
"Bavikadi",
"Divyagna",
""
],
[
"Agarwal",
"Ayushi",
""
],
[
"Ganta",
"Shashank",
""
],
[
"Chung",
"Yunro",
""
],
[
"Song",
"Lusheng",
""
],
[
"Qiu",
"Ji",
""
],
[
"Shakarian",
"Paulo",
""
]
] | Recent advances in experimental methods have enabled researchers to collect data on thousands of analytes simultaneously. This has led to correlational studies that associated molecular measurements with diseases such as Alzheimer's, Liver, and Gastric Cancer. However, the use of thousands of biomarkers selected from the analytes is not practical for real-world medical diagnosis and is likely undesirable due to potentially formed spurious correlations. In this study, we evaluate 4 different methods for biomarker selection and 4 different machine learning (ML) classifiers for identifying correlations, evaluating 16 approaches in all. We found that contemporary methods outperform previously reported logistic regression in cases where 3 and 10 biomarkers are permitted. When specificity is fixed at 0.9, ML approaches produced a sensitivity of 0.240 (3 biomarkers) and 0.520 (10 biomarkers), while standard logistic regression provided a sensitivity of 0.000 (3 biomarkers) and 0.040 (10 biomarkers). We also noted that causal-based methods for biomarker selection proved to be the most performant when fewer biomarkers were permitted, while univariate feature selection was the most performant when a greater number of biomarkers were permitted. |
1312.1255 | Steven Kelk | Leo van Iersel, Steven Kelk, Nela Lekic, Leen Stougie | A short note on exponential-time algorithms for hybridization number | null | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this short note we prove that, given two (not necessarily binary) rooted
phylogenetic trees T_1, T_2 on the same set of taxa X, where |X|=n, the
hybridization number of T_1 and T_2 can be computed in time O^{*}(2^n) i.e.
O(2^{n} poly(n)). The result also means that a Maximum Acyclic Agreement Forest
(MAAF) can be computed within the same time bound.
| [
{
"created": "Wed, 4 Dec 2013 17:37:37 GMT",
"version": "v1"
}
] | 2013-12-05 | [
[
"van Iersel",
"Leo",
""
],
[
"Kelk",
"Steven",
""
],
[
"Lekic",
"Nela",
""
],
[
"Stougie",
"Leen",
""
]
] | In this short note we prove that, given two (not necessarily binary) rooted phylogenetic trees T_1, T_2 on the same set of taxa X, where |X|=n, the hybridization number of T_1 and T_2 can be computed in time O^{*}(2^n) i.e. O(2^{n} poly(n)). The result also means that a Maximum Acyclic Agreement Forest (MAAF) can be computed within the same time bound. |
1910.03661 | Navid Mohammad Mirzaei | Navid Mohammad Mirzaei and Pak-Wing Fok | Intimal Growth in Cylindrical Arteries: Impact of Anisotropic Growth on
Glagov Remodeling | 29 pages, 15 figures, 2 tables | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we investigate the effect of anisotropic growth on Glagov
remodeling in different cases: pure radial, pure circumferential, pure axial
and general anisotropic growth. We use the theory of morphoelasticity on an
axisymmetric arterial domain. For each case we explore its specific effect on
the Glagov curves and stress and provide the changes in collagen fiber angles
in the intima, media and adventitia. In addition, we compare the strain energy
produced by growth in radial, circumferential and axial direction and deduce
that anisotropic growth generally leads to lower strain energy than isotropic
growth. Therefore, we explore an anisotropic growth regime and use the
resulting model to simulate vessel remodeling. We compare the Glagov curves,
stress, energies and fiber angles in the anisotropic case with those of the
isotropic case. Our results show that the anisotropic growth produces a
remodeling curve more consistent with Glagov's experimental data with gentler
outward remodeling and more realistic stress profiles.
| [
{
"created": "Wed, 2 Oct 2019 00:51:00 GMT",
"version": "v1"
}
] | 2019-10-10 | [
[
"Mirzaei",
"Navid Mohammad",
""
],
[
"Fok",
"Pak-Wing",
""
]
] | In this paper we investigate the effect of anisotropic growth on Glagov remodeling in different cases: pure radial, pure circumferential, pure axial and general anisotropic growth. We use the theory of morphoelasticity on an axisymmetric arterial domain. For each case we explore its specific effect on the Glagov curves and stress and provide the changes in collagen fiber angles in the intima, media and adventitia. In addition, we compare the strain energy produced by growth in radial, circumferential and axial direction and deduce that anisotropic growth generally leads to lower strain energy than isotropic growth. Therefore, we explore an anisotropic growth regime and use the resulting model to simulate vessel remodeling. We compare the Glagov curves, stress, energies and fiber angles in the anisotropic case with those of the isotropic case. Our results show that the anisotropic growth produces a remodeling curve more consistent with Glagov's experimental data with gentler outward remodeling and more realistic stress profiles. |
1906.12222 | Alexander Gorban | Alexander N. Gorban, Valeri A. Makarov, Ivan Y. Tyukin | Symphony of high-dimensional brain | null | Physics of Life Reviews, 2019 | 10.1016/j.plrev.2019.06.003 | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper is the final part of the scientific discussion organised by the
Journal "Physics of Life Reviews" about the simplicity revolution in
neuroscience and AI. This discussion was initiated by the review paper "The
unreasonable effectiveness of small neural ensembles in high-dimensional
brain". Phys Life Rev 2019, doi 10.1016/j.plrev.2018.09.005, arXiv:1809.07656.
The topics of the discussion varied from the necessity to take into account the
difference between the theoretical random distributions and "extremely
non-random" real distributions and revise the common machine learning theory,
to different forms of the curse of dimensionality and high-dimensional pitfalls
in neuroscience. V. K{\r{u}}rkov{\'a}, A. Tozzi and J.F. Peters, R. Quian
Quiroga, P. Varona, R. Barrio, G. Kreiman, L. Fortuna, C. van Leeuwen, R. Quian
Quiroga, and V. Kreinovich, A.N. Gorban, V.A. Makarov, and I.Y. Tyukin
participated in the discussion. In this paper we analyse the symphony of
opinions and the possible outcomes of the simplicity revolution for machine
learning and neuroscience.
| [
{
"created": "Thu, 27 Jun 2019 17:46:24 GMT",
"version": "v1"
}
] | 2019-07-01 | [
[
"Gorban",
"Alexander N.",
""
],
[
"Makarov",
"Valeri A.",
""
],
[
"Tyukin",
"Ivan Y.",
""
]
] | This paper is the final part of the scientific discussion organised by the Journal "Physics of Life Reviews" about the simplicity revolution in neuroscience and AI. This discussion was initiated by the review paper "The unreasonable effectiveness of small neural ensembles in high-dimensional brain". Phys Life Rev 2019, doi 10.1016/j.plrev.2018.09.005, arXiv:1809.07656. The topics of the discussion varied from the necessity to take into account the difference between the theoretical random distributions and "extremely non-random" real distributions and revise the common machine learning theory, to different forms of the curse of dimensionality and high-dimensional pitfalls in neuroscience. V. K{\r{u}}rkov{\'a}, A. Tozzi and J.F. Peters, R. Quian Quiroga, P. Varona, R. Barrio, G. Kreiman, L. Fortuna, C. van Leeuwen, R. Quian Quiroga, and V. Kreinovich, A.N. Gorban, V.A. Makarov, and I.Y. Tyukin participated in the discussion. In this paper we analyse the symphony of opinions and the possible outcomes of the simplicity revolution for machine learning and neuroscience. |
1206.4082 | Pamela Reinagel | Claire B. Discenza and Pamela Reinagel | Dorsal lateral geniculate substructure in the Long-Evans rat: A cholera
toxin B-subunit study | null | Front. Neuroanat. 6:40 (2012) | 10.3389/fnana.2012.00040 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study describes the substructure of the dorsal lateral geniculate
nucleus of the thalamus of the pigmented rat (Rattus norvegicus) based on the
eye-of-origin of its retinal ganglion cell inputs. We made monocular
intra-ocular injections of the B-subunit of cholera toxin (CTB), a sensitive
anterograde tracer, in three adult male Long-Evans rats. In four additional
subjects, we injected fluorophor-conjugated CTB in both eyes, using a different
fluorophor in each eye. Brains of these subjects were fixed and sectioned, and
the labeled retinal ganglion cell termini were imaged with wide-field
sub-micron resolution slide scanners. Retinal termination zones were traced to
reconstruct a three dimensional model of the ipsilateral and contralateral
retinal termination zones in the dLGN on both sides of the brain. The dLGN
volume was 1.58 \pm0.094 mm^{3}, comprising 70 \pm 3% of the volume of the entire
retinorecipient LGN. We find the retinal terminals to be well-segregated by eye
of origin. We consistently found three or four spatially separated
ipsilateral-recipient zones within each dLGN, rather than the single compact
zone expected. It remains to be determined whether these subdomains represent
distinct functional sublaminae.
| [
{
"created": "Mon, 18 Jun 2012 21:38:05 GMT",
"version": "v1"
}
] | 2012-10-15 | [
[
"Discenza",
"Claire B.",
""
],
[
"Reinagel",
"Pamela",
""
]
] | This study describes the substructure of the dorsal lateral geniculate nucleus of the thalamus of the pigmented rat (Rattus norvegicus) based on the eye-of-origin of its retinal ganglion cell inputs. We made monocular intra-ocular injections of the B-subunit of cholera toxin (CTB), a sensitive anterograde tracer, in three adult male Long-Evans rats. In four additional subjects, we injected fluorophor-conjugated CTB in both eyes, using a different fluorophor in each eye. Brains of these subjects were fixed and sectioned, and the labeled retinal ganglion cell termini were imaged with wide-field sub-micron resolution slide scanners. Retinal termination zones were traced to reconstruct a three dimensional model of the ipsilateral and contralateral retinal termination zones in the dLGN on both sides of the brain. The dLGN volume was 1.58 \pm0.094 mm^{3}, comprising 70 \pm 3% of the volume of the entire retinorecipient LGN. We find the retinal terminals to be well-segregated by eye of origin. We consistently found three or four spatially separated ipsilateral-recipient zones within each dLGN, rather than the single compact zone expected. It remains to be determined whether these subdomains represent distinct functional sublaminae. |
1411.2359 | Benjamin M. Friedrich | Steffen Werner, Tom St\"uckemann, Manuel Beir\'an Amigo, Jochen C.
Rink, Frank J\"ulicher, Benjamin M. Friedrich | Scaling and regeneration of self-organized patterns | 5 pages, 3 color figures | Phys. Rev. Lett. 114, 138101, 2015 | 10.1103/PhysRevLett.114.138101 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological patterns generated during development and regeneration often scale
with organism size. Some organisms, e.g., flatworms, can regenerate a rescaled
body plan from tissue fragments of varying sizes. Inspired by these examples,
we introduce a generalization of Turing patterns that is self-organized and
self-scaling. A feedback loop involving diffusing expander molecules regulates
the reaction rates of a Turing system, thereby adjusting pattern length scales
proportional to system size. Our model captures essential features of body plan
regeneration in flatworms as observed in experiments.
| [
{
"created": "Mon, 10 Nov 2014 09:28:30 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Feb 2015 15:04:59 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Apr 2015 22:25:23 GMT",
"version": "v3"
}
] | 2015-04-15 | [
[
"Werner",
"Steffen",
""
],
[
"Stückemann",
"Tom",
""
],
[
"Amigo",
"Manuel Beirán",
""
],
[
"Rink",
"Jochen C.",
""
],
[
"Jülicher",
"Frank",
""
],
[
"Friedrich",
"Benjamin M.",
""
]
] | Biological patterns generated during development and regeneration often scale with organism size. Some organisms, e.g., flatworms, can regenerate a rescaled body plan from tissue fragments of varying sizes. Inspired by these examples, we introduce a generalization of Turing patterns that is self-organized and self-scaling. A feedback loop involving diffusing expander molecules regulates the reaction rates of a Turing system, thereby adjusting pattern length scales proportional to system size. Our model captures essential features of body plan regeneration in flatworms as observed in experiments. |
1302.0503 | Brian Williams Dr | Brian Gerard Williams | Could ART increase the population level incidence of TB? | 3 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HIV increases the likelihood that a person will develop TB. Starting them on
anti-retroviral therapy (ART) reduces their risk of TB but not to the level in
HIV negative people. Since HIV-positive people who are on ART can expect to
live a normal life for several decades this raises the possibility that their
elevated risk of infection, lasting for a long time, could lead to an increase
in the population level incidence of TB. Here we investigate the conditions
under which this could happen and show that provided HIV-positive people start
ART when their CD4+ cell count is greater than 350/microL and that there is
high coverage, ART will not lead to a long-term increase in TB. Only if people
start ART very late and there is low coverage of ART might starting people on
ART increase the population level incidence of TB.
| [
{
"created": "Sun, 3 Feb 2013 15:03:54 GMT",
"version": "v1"
}
] | 2013-02-05 | [
[
"Williams",
"Brian Gerard",
""
]
] | HIV increases the likelihood that a person will develop TB. Starting them on anti-retroviral therapy (ART) reduces their risk of TB but not to the level in HIV negative people. Since HIV-positive people who are on ART can expect to live a normal life for several decades this raises the possibility that their elevated risk of infection, lasting for a long time, could lead to an increase in the population level incidence of TB. Here we investigate the conditions under which this could happen and show that provided HIV-positive people start ART when their CD4+ cell count is greater than 350/microL and that there is high coverage, ART will not lead to a long-term increase in TB. Only if people start ART very late and there is low coverage of ART might starting people on ART increase the population level incidence of TB. |
1906.00889 | Benjamin Lansdell | Benjamin James Lansdell, Prashanth Ravi Prakash, Konrad Paul Kording | Learning to solve the credit assignment problem | 18 pages; 4 figures. (ICLR 2020 version) | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backpropagation is driving today's artificial neural networks (ANNs).
However, despite extensive research, it remains unclear if the brain implements
this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms
are often seen as a realistic alternative: neurons can randomly introduce
change, and use unspecific feedback signals to observe their effect on the cost
and thus approximate their gradient. However, the convergence rate of such
learning scales poorly with the number of involved neurons. Here we propose a
hybrid learning approach. Each neuron uses an RL-type strategy to learn how to
approximate the gradients that backpropagation would provide. We provide proof
that our approach converges to the true gradient for certain classes of
networks. In both feedforward and convolutional networks, we empirically show
that our approach learns to approximate the gradient, and can match or exceed the
performance of exact gradient-based learning. Learning feedback weights
provides a biologically plausible mechanism of achieving good performance,
without the need for precise, pre-specified learning rules.
| [
{
"created": "Mon, 3 Jun 2019 15:48:38 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2019 12:06:38 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Oct 2019 14:11:01 GMT",
"version": "v3"
},
{
"created": "Wed, 22 Apr 2020 20:19:19 GMT",
"version": "v4"
}
] | 2020-04-24 | [
[
"Lansdell",
"Benjamin James",
""
],
[
"Prakash",
"Prashanth Ravi",
""
],
[
"Kording",
"Konrad Paul",
""
]
] | Backpropagation is driving today's artificial neural networks (ANNs). However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative: neurons can randomly introduce change, and use unspecific feedback signals to observe their effect on the cost and thus approximate their gradient. However, the convergence rate of such learning scales poorly with the number of involved neurons. Here we propose a hybrid learning approach. Each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We provide proof that our approach converges to the true gradient for certain classes of networks. In both feedforward and convolutional networks, we empirically show that our approach learns to approximate the gradient, and can match or exceed the performance of exact gradient-based learning. Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules. |
2107.02935 | Alvin Chon | Alvin Chon, Xiaoqiu Huang | Sramm: short read alignment mapping metrics | 7 pages, 2 figures | Vol. 11, No.1/2, June 2021 | 10.5121/ijbb.2021.11201 | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Short Read Alignment Mapping Metrics (SRAMM) is an efficient and versatile
command line tool providing additional short read mapping metrics, filtering,
and graphs. Short read aligners report MAPing Quality (MAPQ), but these methods
generally are neither standardized nor well described in literature or software
manuals. Additionally, third party mapping quality programs are typically
computationally intensive or designed for specific applications. SRAMM
efficiently generates multiple different concept-based mapping scores to
provide for an informative post alignment examination and filtering process of
aligned short reads for various downstream applications. SRAMM is compatible
with Python 2.6+ and Python 3.6+ on all operating systems. It works with any
short read aligner that generates SAM/BAM/CRAM file outputs and reports 'AS'
tags. It is freely available under the MIT license at
http://github.com/achon/sramm.
| [
{
"created": "Tue, 6 Jul 2021 23:18:35 GMT",
"version": "v1"
}
] | 2021-07-08 | [
[
"Chon",
"Alvin",
""
],
[
"Huang",
"Xiaoqiu",
""
]
] | Short Read Alignment Mapping Metrics (SRAMM) is an efficient and versatile command line tool providing additional short read mapping metrics, filtering, and graphs. Short read aligners report MAPing Quality (MAPQ), but these methods generally are neither standardized nor well described in literature or software manuals. Additionally, third party mapping quality programs are typically computationally intensive or designed for specific applications. SRAMM efficiently generates multiple different concept-based mapping scores to provide for an informative post alignment examination and filtering process of aligned short reads for various downstream applications. SRAMM is compatible with Python 2.6+ and Python 3.6+ on all operating systems. It works with any short read aligner that generates SAM/BAM/CRAM file outputs and reports 'AS' tags. It is freely available under the MIT license at http://github.com/achon/sramm. |
1601.08202 | James Herbert-Read | Timothy. M. Schaerf, James E. Herbert-Read, Mary R. Myerscough, David
J. T. Sumpter and Ashley J. W. Ward | Identifying differences in the rules of interaction between individuals
in moving animal groups | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collective movement can be achieved when individuals respond to the local
movements and positions of their neighbours. Some individuals may
disproportionately influence group movement if they occupy particular spatial
positions in the group, for example, positions at the front of the group. We
asked, therefore, what led individuals in moving pairs of fish (Gambusia
holbrooki) to occupy a position in front of their partner. Individuals adjusted
their speed and direction differently in response to their partner's position,
resulting in individuals occupying different positions in the group.
Individuals that were found most often at the front of the pair had greater
mean changes in speed than their partner, and were less likely to turn towards
their partner, compared to those individuals most often found at the back of
the pair. The pair moved faster when led by the individual that was usually at
the front. Our results highlight how differences in the social responsiveness
between individuals can give rise to leadership in free moving groups. They
also demonstrate how the movement characteristics of groups depend on the
spatial configuration of individuals within them.
| [
{
"created": "Thu, 28 Jan 2016 14:59:42 GMT",
"version": "v1"
}
] | 2016-02-01 | [
[
"Schaerf",
"Timothy. M.",
""
],
[
"Herbert-Read",
"James E.",
""
],
[
"Myerscough",
"Mary R.",
""
],
[
"Sumpter",
"David J. T.",
""
],
[
"Ward",
"Ashley J. W.",
""
]
] | Collective movement can be achieved when individuals respond to the local movements and positions of their neighbours. Some individuals may disproportionately influence group movement if they occupy particular spatial positions in the group, for example, positions at the front of the group. We asked, therefore, what led individuals in moving pairs of fish (Gambusia holbrooki) to occupy a position in front of their partner. Individuals adjusted their speed and direction differently in response to their partner's position, resulting in individuals occupying different positions in the group. Individuals that were found most often at the front of the pair had greater mean changes in speed than their partner, and were less likely to turn towards their partner, compared to those individuals most often found at the back of the pair. The pair moved faster when led by the individual that was usually at the front. Our results highlight how differences in the social responsiveness between individuals can give rise to leadership in free moving groups. They also demonstrate how the movement characteristics of groups depend on the spatial configuration of individuals within them. |
2004.11260 | Gissell Estrada-Rodriguez | Luciano Abadias, Gissell Estrada-Rodriguez, Ernesto Estrada | Fractional-order susceptible-infected model: definition and applications
to the study of COVID-19 main protease | 21 pages, 2 figures | null | 10.1515/fca-2020-0033 | Fract. Calc. Appl. Anal. Vol. 23, No 3 (2020), pp. 635--655 | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a model for the transmission of perturbations across the amino
acids of a protein represented as an interaction network. The dynamics consists
of a Susceptible-Infected (SI) model based on the Caputo fractional-order
derivative. We find an upper bound to the analytical solution of this model
which represents the worst-case scenario on the propagation of perturbations
across a protein residue network. This upper bound is expressed in terms of
Mittag-Leffler functions of the adjacency matrix of the network of inter-amino
acids interactions. We then apply this model to the analysis of the propagation
of perturbations produced by inhibitors of the main protease of SARS CoV-2. We
find that the perturbations produced by strong inhibitors of the protease are
propagated far away from the binding site, confirming the long-range nature of
intra-protein communication. On the contrary, the weakest inhibitors only
transmit their perturbations across a close environment around the binding
site. These findings may help the design of drug candidates against this new
coronavirus.
| [
{
"created": "Thu, 23 Apr 2020 15:42:17 GMT",
"version": "v1"
},
{
"created": "Mon, 25 May 2020 16:34:48 GMT",
"version": "v2"
}
] | 2020-07-01 | [
[
"Abadias",
"Luciano",
""
],
[
"Estrada-Rodriguez",
"Gissell",
""
],
[
"Estrada",
"Ernesto",
""
]
] | We propose a model for the transmission of perturbations across the amino acids of a protein represented as an interaction network. The dynamics consists of a Susceptible-Infected (SI) model based on the Caputo fractional-order derivative. We find an upper bound to the analytical solution of this model which represents the worst-case scenario on the propagation of perturbations across a protein residue network. This upper bound is expressed in terms of Mittag-Leffler functions of the adjacency matrix of the network of inter-amino acids interactions. We then apply this model to the analysis of the propagation of perturbations produced by inhibitors of the main protease of SARS CoV-2. We find that the perturbations produced by strong inhibitors of the protease are propagated far away from the binding site, confirming the long-range nature of intra-protein communication. On the contrary, the weakest inhibitors only transmit their perturbations across a close environment around the binding site. These findings may help the design of drug candidates against this new coronavirus. |
q-bio/0506011 | D. Allan Drummond | D. Allan Drummond, Alpan Raval, Claus O. Wilke | A single determinant for the rate of yeast protein evolution | 44 pages, 3 figures; submitted | null | null | null | q-bio.PE q-bio.GN q-bio.QM | null | A gene's rate of sequence evolution is among the most fundamental
evolutionary quantities in common use, but what determines evolutionary rates
has remained unclear. Here, we show that the two most commonly used methods to
disentangle the determinants of evolutionary rate, partial correlation analysis
and ordinary multivariate regression, produce misleading or spurious results
when applied to noisy biological data. To overcome these difficulties, we
employ an alternative method, principal component regression, which is a
multivariate regression of evolutionary rate against the principal components
of the predictor variables. We carry out the first combined analysis of seven
predictors (gene expression level, dispensability, protein abundance, codon
adaptation index, gene length, number of protein-protein interactions, and the
gene's centrality in the interaction network). Strikingly, our analysis reveals
a single dominant component which explains 40-fold more variation in
evolutionary rate than any other, suggesting that protein evolutionary rate has
a single determinant among the seven predictors. The dominant component
explains nearly half the variation in the rate of synonymous and protein
evolution. Our results support the hypothesis that selection against the cost
of translation-error-induced protein misfolding governs the rate of synonymous
and protein sequence evolution in yeast.
| [
{
"created": "Thu, 9 Jun 2005 03:30:48 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Drummond",
"D. Allan",
""
],
[
"Raval",
"Alpan",
""
],
[
"Wilke",
"Claus O.",
""
]
] | A gene's rate of sequence evolution is among the most fundamental evolutionary quantities in common use, but what determines evolutionary rates has remained unclear. Here, we show that the two most commonly used methods to disentangle the determinants of evolutionary rate, partial correlation analysis and ordinary multivariate regression, produce misleading or spurious results when applied to noisy biological data. To overcome these difficulties, we employ an alternative method, principal component regression, which is a multivariate regression of evolutionary rate against the principal components of the predictor variables. We carry out the first combined analysis of seven predictors (gene expression level, dispensability, protein abundance, codon adaptation index, gene length, number of protein-protein interactions, and the gene's centrality in the interaction network). Strikingly, our analysis reveals a single dominant component which explains 40-fold more variation in evolutionary rate than any other, suggesting that protein evolutionary rate has a single determinant among the seven predictors. The dominant component explains nearly half the variation in the rate of synonymous and protein evolution. Our results support the hypothesis that selection against the cost of translation-error-induced protein misfolding governs the rate of synonymous and protein sequence evolution in yeast. |
1902.00486 | Yu-Ting Lin | Cheng-Hsi Chang, Yue-Lin Fang, Yu-Jung Wang, Hau-tieng Wu, Yu-Ting Lin | Differentiation of skin incision and laparoscopic trocar insertion via
quantifying transient bradycardia measured by electrocardiogram | One table and 4 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background. Most surgical procedures involve structures deeper than the skin.
However, the difference in surgical noxious stimulation between skin incision
and laparoscopic trocar insertion is unknown. By analyzing instantaneous heart
rate (IHR) calculated from the electrocardiogram, in particular the transient
bradycardia in response to surgical stimuli, this study investigates surgical
noxious stimuli arising from skin incision and laparoscopic trocar insertion.
Methods. Thirty-five patients undergoing laparoscopic cholecystectomy were
enrolled in this prospective observational study. Sequential surgical steps
including umbilical skin incision (11 mm), umbilical trocar insertion (11 mm),
xiphoid skin incision (5 mm), xiphoid trocar insertion (5 mm), subcostal skin
incision (3 mm), and subcostal trocar insertion (3 mm) were investigated. IHR
was derived from electrocardiography and calculated by the modern time-varying
power spectrum. Similar to the classical heart rate variability analysis, the
time-varying low frequency power (tvLF), time-varying high frequency power
(tvHF), and tvLF-to-tvHF ratio (tvLHR) were calculated. Prediction probability
(PK) analysis and global pointwise F-test were used to compare the performance
between indices and the heart rate readings from the patient monitor. Results.
Analysis of IHR showed that surgical stimulus elicits a transient bradycardia,
followed by the increase of heart rate. Transient bradycardia is more
significant in trocar insertion than skin incision. The IHR change quantifies
differential responses to different surgical intensity. Serial PK analysis
demonstrates de-sensitization in skin incision, but not in laparoscopic trocar
insertion. Conclusions. Quantitative indices present the transient bradycardia
introduced by noxious stimulation. The results indicate different effects
between skin incision and trocar insertion.
| [
{
"created": "Fri, 1 Feb 2019 18:14:08 GMT",
"version": "v1"
}
] | 2019-02-04 | [
[
"Chang",
"Cheng-Hsi",
""
],
[
"Fang",
"Yue-Lin",
""
],
[
"Wang",
"Yu-Jung",
""
],
[
"Wu",
"Hau-tieng",
""
],
[
"Lin",
"Yu-Ting",
""
]
] | Background. Most surgical procedures involve structures deeper than the skin. However, the difference in surgical noxious stimulation between skin incision and laparoscopic trocar insertion is unknown. By analyzing instantaneous heart rate (IHR) calculated from the electrocardiogram, in particular the transient bradycardia in response to surgical stimuli, this study investigates surgical noxious stimuli arising from skin incision and laparoscopic trocar insertion. Methods. Thirty-five patients undergoing laparoscopic cholecystectomy were enrolled in this prospective observational study. Sequential surgical steps including umbilical skin incision (11 mm), umbilical trocar insertion (11 mm), xiphoid skin incision (5 mm), xiphoid trocar insertion (5 mm), subcostal skin incision (3 mm), and subcostal trocar insertion (3 mm) were investigated. IHR was derived from electrocardiography and calculated by the modern time-varying power spectrum. Similar to the classical heart rate variability analysis, the time-varying low frequency power (tvLF), time-varying high frequency power (tvHF), and tvLF-to-tvHF ratio (tvLHR) were calculated. Prediction probability (PK) analysis and global pointwise F-test were used to compare the performance between indices and the heart rate readings from the patient monitor. Results. Analysis of IHR showed that surgical stimulus elicits a transient bradycardia, followed by the increase of heart rate. Transient bradycardia is more significant in trocar insertion than skin incision. The IHR change quantifies differential responses to different surgical intensity. Serial PK analysis demonstrates de-sensitization in skin incision, but not in laparoscopic trocar insertion. Conclusions. Quantitative indices present the transient bradycardia introduced by noxious stimulation. The results indicate different effects between skin incision and trocar insertion. |
1204.3966 | Flion Tang | Changbing Tang, Xiang Li, Lang Cao, Jingyuan Zhan | The \sigma law of evolutionary dynamics in community-structured
populations | 11 pages, 3 figures;Accepted by JTB | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | Evolutionary game dynamics in finite populations provides a new framework to
understand the selection of traits with frequency-dependent fitness. Recently,
a simple but fundamental law of evolutionary dynamics, which we call {\sigma}
law, describes how to determine the selection between two competing strategies:
in most evolutionary processes with two strategies, A and B, strategy A is
favored over B in weak selection if and only if {\sigma}R + S > T + {\sigma}P.
This relationship holds for a wide variety of structured populations with
mutation rate and weak selection under certain assumptions. In this paper, we
propose a model of games based on a community-structured population and revisit
this law under the Moran process. By calculating the average payoffs of A and B
individuals with the method of effective sojourn time, we find that {\sigma}
features not only the structured population characteristics but also the
reaction rate between individuals. That is to say, interactions between two
individuals are not uniform, and we can take {\sigma} as a reaction rate
between any two individuals with the same strategy. We verify this viewpoint by
the modified replicator equation with non-uniform interaction rates in a
simplified version of the prisoner's dilemma game (PDG).
| [
{
"created": "Wed, 18 Apr 2012 03:17:37 GMT",
"version": "v1"
}
] | 2012-04-19 | [
[
"Tang",
"Changbing",
""
],
[
"Li",
"Xiang",
""
],
[
"Cao",
"Lang",
""
],
[
"Zhan",
"Jingyuan",
""
]
] | Evolutionary game dynamics in finite populations provides a new framework to understand the selection of traits with frequency-dependent fitness. Recently, a simple but fundamental law of evolutionary dynamics, which we call {\sigma} law, describes how to determine the selection between two competing strategies: in most evolutionary processes with two strategies, A and B, strategy A is favored over B in weak selection if and only if {\sigma}R + S > T + {\sigma}P. This relationship holds for a wide variety of structured populations with mutation rate and weak selection under certain assumptions. In this paper, we propose a model of games based on a community-structured population and revisit this law under the Moran process. By calculating the average payoffs of A and B individuals with the method of effective sojourn time, we find that {\sigma} features not only the structured population characteristics but also the reaction rate between individuals. That is to say, interactions between two individuals are not uniform, and we can take {\sigma} as a reaction rate between any two individuals with the same strategy. We verify this viewpoint by the modified replicator equation with non-uniform interaction rates in a simplified version of the prisoner's dilemma game (PDG). |
2005.10311 | Peter Cotton | Peter Cotton | Repeat Contacts and the Spread of Disease: An Agent Model with
Compartmental Solution | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using a probability of novel encounter derived from a physical model, we
augment the SIR compartmental model for disease spread. Scenarios with the same
initial trajectories and identical $R_0$ values can diverge greatly depending
on the speed at which our circles of acquaintances grow stale - leading to
order of magnitude differences in final case counts. A momentum effect arises
from variation in the mean time since infection, and this feeds back into new
infection rate and faster decline in the late stages of an outbreak. Rapid
extinction of an outbreak can occur in the early stages, but once this
opportunity is missed, the effect is diminished and then only herd immunity can
help.
| [
{
"created": "Wed, 20 May 2020 18:49:38 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jun 2020 17:23:22 GMT",
"version": "v2"
}
] | 2020-06-15 | [
[
"Cotton",
"Peter",
""
]
] | Using a probability of novel encounter derived from a physical model, we augment the SIR compartmental model for disease spread. Scenarios with the same initial trajectories and identical $R_0$ values can diverge greatly depending on the speed at which our circles of acquaintances grow stale - leading to order of magnitude differences in final case counts. A momentum effect arises from variation in the mean time since infection, and this feeds back into new infection rate and faster decline in the late stages of an outbreak. Rapid extinction of an outbreak can occur in the early stages, but once this opportunity is missed the effect is diminished and then, only herd immunity can help. |
0905.0869 | Carlos P. Roca | Carlos P. Roca, Jos\'e A. Cuesta and Angel S\'anchez | Imperfect Imitation Can Enhance Cooperation | 4 pages, 4 figures | Europhysics Letters 87, 48005 (2009) | 10.1209/0295-5075/87/48005 | null | q-bio.PE cs.GT physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The promotion of cooperation on spatial lattices is an important issue in
evolutionary game theory. This effect clearly depends on the update rule: it
diminishes with stochastic imitative rules whereas it increases with
unconditional imitation. To study the transition between both regimes, we
propose a new evolutionary rule, which stochastically combines unconditional
imitation with another imitative rule. We find that, surprisingly, in many
social dilemmas this rule yields higher cooperative levels than any of the two
original ones. This nontrivial effect occurs because the basic rules induce a
separation of timescales in the microscopic processes at cluster interfaces.
The result is robust in the space of 2x2 symmetric games, on regular lattices
and on scale-free networks.
| [
{
"created": "Wed, 6 May 2009 16:35:54 GMT",
"version": "v1"
}
] | 2009-11-07 | [
[
"Roca",
"Carlos P.",
""
],
[
"Cuesta",
"José A.",
""
],
[
"Sánchez",
"Angel",
""
]
] | The promotion of cooperation on spatial lattices is an important issue in evolutionary game theory. This effect clearly depends on the update rule: it diminishes with stochastic imitative rules whereas it increases with unconditional imitation. To study the transition between both regimes, we propose a new evolutionary rule, which stochastically combines unconditional imitation with another imitative rule. We find that, surprisingly, in many social dilemmas this rule yields higher cooperative levels than any of the two original ones. This nontrivial effect occurs because the basic rules induce a separation of timescales in the microscopic processes at cluster interfaces. The result is robust in the space of 2x2 symmetric games, on regular lattices and on scale-free networks. |
1905.02933 | Namiko Mitarai | Xingyu Zhang and Namiko Mitarai | Finite response time in stripe formation by bacteria with
density-suppressed motility | 7 pages, 4 figures, Introduction and references updated | null | null | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetically engineered bacteria that increase the tumbling frequency of their
run-and-tumble motion at higher local bacterial density form a visible
stripe pattern composed of successive high- and low-density regions on an agar
plate. We propose a model that includes simplified regulatory dynamics of the
tumbling frequency in individual cells to clarify the role of finite response
time. We show that the time-delay due to the response dynamics results in the
instability in a homogeneous steady state allowing a pattern formation. For
further understanding, we propose a simplified two-state model that allows us
to describe the response time dependence of the instability analytically. We
show that the instability occurs at long wave length as long as the response
time is comparable with the tumbling timescale and the non-linearity of the
response function to the change of the density is high enough. The minimum
system size to see the instability grows with the response time $\tau$,
proportional to $\sqrt{\tau}$ in the large delay limit.
| [
{
"created": "Wed, 8 May 2019 06:58:20 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2019 15:00:13 GMT",
"version": "v2"
}
] | 2019-05-13 | [
[
"Zhang",
"Xingyu",
""
],
[
"Mitarai",
"Namiko",
""
]
] | Genetically engineered bacteria that increase the tumbling frequency of their run-and-tumble motion at higher local bacterial density form a visible stripe pattern composed of successive high- and low-density regions on an agar plate. We propose a model that includes simplified regulatory dynamics of the tumbling frequency in individual cells to clarify the role of finite response time. We show that the time delay due to the response dynamics results in an instability of the homogeneous steady state, allowing pattern formation. For further understanding, we propose a simplified two-state model that allows us to describe the response time dependence of the instability analytically. We show that the instability occurs at long wavelengths as long as the response time is comparable with the tumbling timescale and the non-linearity of the response function to the change of the density is high enough. The minimum system size to see the instability grows with the response time $\tau$, proportional to $\sqrt{\tau}$ in the large delay limit. |
2105.02754 | Evan Irving-Pease | Evan K. Irving-Pease, Rasa Muktupavela, Michael Dannemann, Fernando
Racimo | Quantitative Human Paleogenetics: what can ancient DNA tell us about
complex trait evolution? | null | null | 10.3389/fgene.2021.703541 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Genetic association data from national biobanks and large-scale association
studies have provided new prospects for understanding the genetic evolution of
complex traits and diseases in humans. In turn, genomes from ancient human
archaeological remains are now easier than ever to obtain, and provide a direct
window into changes in frequencies of trait-associated alleles in the past.
This has generated a new wave of studies aiming to analyse the genetic
component of traits in historic and prehistoric times using ancient DNA, and to
determine whether any such traits were subject to natural selection. In humans,
however, issues about the portability and robustness of complex trait inference
across different populations are particularly concerning when predictions are
extended to individuals that died thousands of years ago, and for which little,
if any, phenotypic validation is possible. In this review, we discuss the
advantages of incorporating ancient genomes into studies of trait-associated
variants, the need for models that can better accommodate ancient genomes into
quantitative genetic frameworks, and the existing limits to inferences about
complex trait evolution, particularly with respect to past populations.
| [
{
"created": "Thu, 6 May 2021 15:29:37 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jul 2021 11:38:07 GMT",
"version": "v2"
}
] | 2021-08-06 | [
[
"Irving-Pease",
"Evan K.",
""
],
[
"Muktupavela",
"Rasa",
""
],
[
"Dannemann",
"Michael",
""
],
[
"Racimo",
"Fernando",
""
]
] | Genetic association data from national biobanks and large-scale association studies have provided new prospects for understanding the genetic evolution of complex traits and diseases in humans. In turn, genomes from ancient human archaeological remains are now easier than ever to obtain, and provide a direct window into changes in frequencies of trait-associated alleles in the past. This has generated a new wave of studies aiming to analyse the genetic component of traits in historic and prehistoric times using ancient DNA, and to determine whether any such traits were subject to natural selection. In humans, however, issues about the portability and robustness of complex trait inference across different populations are particularly concerning when predictions are extended to individuals that died thousands of years ago, and for which little, if any, phenotypic validation is possible. In this review, we discuss the advantages of incorporating ancient genomes into studies of trait-associated variants, the need for models that can better accommodate ancient genomes into quantitative genetic frameworks, and the existing limits to inferences about complex trait evolution, particularly with respect to past populations. |
1706.02775 | Bashar Ibrahim | Bashar Ibrahim | A Mathematical Framework for Kinetochore-Driven Activation Feedback in
the Mitotic Checkpoint | null | null | 10.1007/s11538-017-0278-1 | null | q-bio.SC math.AP math.CA math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proliferating cells properly divide into their daughter cells through a
process that is mediated by kinetochores, protein-complexes that assemble at
the centromere of each sister chromatid. Each kinetochore has to establish a
tight bipolar attachment to the spindle apparatus before sister-chromatid
separation is initiated. The Spindle Assembly Checkpoint (SAC) links the
biophysical attachment status of the kinetochores to mitotic progression, and
ensures that even a single misaligned kinetochore keeps the checkpoint active.
The mechanism by which this is achieved is still elusive. Current computational
models of the human SAC disregard important biochemical properties by omitting
any kind of feedback loop, proper kinetochore signals, and other spatial
properties such as the stability of the system and diffusion effects. To allow
for more realistic in silico study of the dynamics of the SAC model, a minimal
mathematical framework for SAC activation and silencing is introduced. A
nonlinear ordinary differential equation model successfully reproduces
bifurcation signaling switches with attachment of all 92 kinetochores and
activation of APC/C by kinetochore-driven feedback. A partial differential
equation model and mathematical linear stability analyses indicate the
influence of diffusion and system stability. The conclusion is that
quantitative models of the human SAC should account for the positive feedback
on APC/C activation driven by the kinetochores which is essential for SAC
silencing. Experimental diffusion coefficients for MCC sub-complexes are found
to be insufficient for rapid APC/C inhibition. The presented analysis allows
for systems-level understanding of mitotic control and the minimal new model
can function as a basis for developing further quantitative-integrative models
of the cell division cycle.
| [
{
"created": "Tue, 6 Jun 2017 13:52:01 GMT",
"version": "v1"
}
] | 2017-06-13 | [
[
"Ibrahim",
"Bashar",
""
]
] | Proliferating cells properly divide into their daughter cells through a process that is mediated by kinetochores, protein-complexes that assemble at the centromere of each sister chromatid. Each kinetochore has to establish a tight bipolar attachment to the spindle apparatus before sister-chromatid separation is initiated. The Spindle Assembly Checkpoint (SAC) links the biophysical attachment status of the kinetochores to mitotic progression, and ensures that even a single misaligned kinetochore keeps the checkpoint active. The mechanism by which this is achieved is still elusive. Current computational models of the human SAC disregard important biochemical properties by omitting any kind of feedback loop, proper kinetochore signals, and other spatial properties such as the stability of the system and diffusion effects. To allow for more realistic in silico study of the dynamics of the SAC model, a minimal mathematical framework for SAC activation and silencing is introduced. A nonlinear ordinary differential equation model successfully reproduces bifurcation signaling switches with attachment of all 92 kinetochores and activation of APC/C by kinetochore-driven feedback. A partial differential equation model and mathematical linear stability analyses indicate the influence of diffusion and system stability. The conclusion is that quantitative models of the human SAC should account for the positive feedback on APC/C activation driven by the kinetochores which is essential for SAC silencing. Experimental diffusion coefficients for MCC sub-complexes are found to be insufficient for rapid APC/C inhibition. The presented analysis allows for systems-level understanding of mitotic control and the minimal new model can function as a basis for developing further quantitative-integrative models of the cell division cycle |
2209.14751 | Horacio G. Rotstein | Ulises Chialva, Vicente Gonz\'alez Bosc\'a, Horacio G. Rotstein | Low-dimensional models of single neurons: A review | null | null | null | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classical Hodgkin-Huxley (HH) point-neuron model of action potential
generation is four-dimensional. It consists of four ordinary differential
equations describing the dynamics of the membrane potential and three gating
variables associated to a transient sodium and a delayed-rectifier potassium
ionic currents. Conductance-based models of HH type are higher-dimensional
extensions of the classical HH model. They include a number of supplementary
state variables associated with other ionic current types, and are able to
describe additional phenomena such as sub-threshold oscillations, mixed-mode
oscillations (subthreshold oscillations interspersed with spikes), clustering
and bursting. In this manuscript we discuss biophysically plausible and
phenomenological reduced models that preserve the biophysical and/or dynamic
description of models of HH type and the ability to produce complex phenomena,
but the number of effective dimensions (state variables) is lower. We describe
several representative models. We also describe systematic and heuristic
methods of deriving reduced models from models of HH type.
| [
{
"created": "Thu, 29 Sep 2022 13:07:21 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Feb 2023 17:21:47 GMT",
"version": "v2"
},
{
"created": "Tue, 14 Feb 2023 21:24:12 GMT",
"version": "v3"
}
] | 2023-02-16 | [
[
"Chialva",
"Ulises",
""
],
[
"Boscá",
"Vicente González",
""
],
[
"Rotstein",
"Horacio G.",
""
]
] | The classical Hodgkin-Huxley (HH) point-neuron model of action potential generation is four-dimensional. It consists of four ordinary differential equations describing the dynamics of the membrane potential and three gating variables associated to a transient sodium and a delayed-rectifier potassium ionic currents. Conductance-based models of HH type are higher-dimensional extensions of the classical HH model. They include a number of supplementary state variables associated with other ionic current types, and are able to describe additional phenomena such as sub-threshold oscillations, mixed-mode oscillations (subthreshold oscillations interspersed with spikes), clustering and bursting. In this manuscript we discuss biophysically plausible and phenomenological reduced models that preserve the biophysical and/or dynamic description of models of HH type and the ability to produce complex phenomena, but the number of effective dimensions (state variables) is lower. We describe several representative models. We also describe systematic and heuristic methods of deriving reduced models from models of HH type. |
1811.02478 | Ren\'e Vestergaard | Ren\'e Vestergaard and Emmanuel Pietriga | Proofs of life: molecular-biology reasoning simulates cell behaviors
from first principles | 37 pages, including 9 figures, plus 244 pages of supplementary
information | null | null | null | q-bio.OT cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We axiomatize the molecular-biology reasoning style, show compliance of the
standard reference: Ptashne, A Genetic Switch, and present proof-theory-induced
technologies to help infer phenotypes and to predict life cycles from
genotypes. The key is to note that `reductionist discipline' entails
constructive reasoning: any proof of a compound property can be decomposed to
proofs of constituent properties. Proof theory makes explicit the inner
structure of the axiomatized reasoning style and allows the permissible
dynamics to be presented as a mode of computation that can be executed and
analyzed. Constructivity and execution guarantee simulation when working over
domain-specific languages. Here, we exhibit phenotype properties for genotype
reasons: a molecular-biology argument is an open-system concurrent computation
that results in compartment changes and is performed among processes of
physiology change as determined from the molecular programming of given DNA.
Life cycles are the possible sequentializations of the processes. A main
implication of our construction is that formal correctness provides a
complementary perspective on science that is as fundamental there as for pure
mathematics. The bulk of the presented work has been verified formally correct
by computer.
| [
{
"created": "Tue, 30 Oct 2018 11:29:45 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jan 2019 12:46:36 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2019 09:00:35 GMT",
"version": "v3"
}
] | 2019-03-19 | [
[
"Vestergaard",
"René",
""
],
[
"Pietriga",
"Emmanuel",
""
]
] | We axiomatize the molecular-biology reasoning style, show compliance of the standard reference: Ptashne, A Genetic Switch, and present proof-theory-induced technologies to help infer phenotypes and to predict life cycles from genotypes. The key is to note that `reductionist discipline' entails constructive reasoning: any proof of a compound property can be decomposed to proofs of constituent properties. Proof theory makes explicit the inner structure of the axiomatized reasoning style and allows the permissible dynamics to be presented as a mode of computation that can be executed and analyzed. Constructivity and execution guarantee simulation when working over domain-specific languages. Here, we exhibit phenotype properties for genotype reasons: a molecular-biology argument is an open-system concurrent computation that results in compartment changes and is performed among processes of physiology change as determined from the molecular programming of given DNA. Life cycles are the possible sequentializations of the processes. A main implication of our construction is that formal correctness provides a complementary perspective on science that is as fundamental there as for pure mathematics. The bulk of the presented work has been verified formally correct by computer. |
2404.04041 | Jorge Vila | Jorge A. Vila | The Origin of Mutational Epistasis | 19 pages, 1 figure | null | null | null | q-bio.PE q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The interconnected processes of protein folding, mutations, epistasis, and
evolution have all been the subject of extensive analysis throughout the years
due to their significance for structural and evolutionary biology. The origin
(molecular basis) of epistasis (the non-additive interactions between
mutations) is still, nonetheless, unknown. The existence of a new perspective
on protein folding (a problem that needs to be conceived as an analytic whole)
will enable us to shed light on the origin of mutational epistasis at the
simplest level (within proteins) while also uncovering the reasons why the
genetic background in which they occur (a key component of molecular evolution)
could foster changes in epistasis effects. Additionally, because mutations are
the source of epistasis, more research is needed to determine the impact of
posttranslational modifications (which have the potential to increase the
diversity of the proteome by several orders of magnitude) on both mutational
epistasis and protein evolvability. Finally, a protein evolution
thermodynamic-based analysis that does not consider specific mutational steps
or epistasis effects will be discussed. Our study explores the complex
processes behind the evolution of proteins upon mutations, clearing up some
previously unresolved issues and providing direction for further research in
this area.
| [
{
"created": "Fri, 5 Apr 2024 11:49:48 GMT",
"version": "v1"
}
] | 2024-04-08 | [
[
"Vila",
"Jorge A.",
""
]
] | The interconnected processes of protein folding, mutations, epistasis, and evolution have all been the subject of extensive analysis throughout the years due to their significance for structural and evolutionary biology. The origin (molecular basis) of epistasis (the non-additive interactions between mutations) is still, nonetheless, unknown. The existence of a new perspective on protein folding (a problem that needs to be conceived as an analytic whole) will enable us to shed light on the origin of mutational epistasis at the simplest level (within proteins) while also uncovering the reasons on why the genetic background in which they occur (a key component of molecular evolution) could foster changes in epistasis effects. Additionally, because mutations are the source of epistasis, more research is needed to determine the impact of posttranslational modifications (which have the potential to increase the diversity of the proteome by several orders of magnitude) on both mutational epistasis and protein evolvability. Finally, a protein evolution thermodynamic-based analysis that does not consider specific mutational steps or epistasis effects will be discussed. Our study explores the complex processes behind the evolution of proteins upon mutations, clearing up some previously unresolved issues and providing direction for further research in this area. |
2211.10442 | Alexander Partin | Alexander Partin (1), Thomas S. Brettin (1), Yitan Zhu (1), Oleksandr
Narykov (1), Austin Clyde (1 and 2), Jamie Overbeek (1), Rick L. Stevens (1
and 2) ((1) Division of Data Science and Learning, Argonne National
Laboratory, Argonne, IL, USA, (2) Department of Computer Science, The
University of Chicago, Chicago, IL, USA) | Deep learning methods for drug response prediction in cancer:
predominant and emerging trends | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Cancer claims millions of lives yearly worldwide. While many therapies have
been made available in recent years, by and large cancer remains unsolved.
Exploiting computational predictive models to study and treat cancer holds
great promise in improving drug development and personalized design of
treatment plans, ultimately suppressing tumors, alleviating suffering, and
prolonging lives of patients. A wave of recent papers demonstrates promising
results in predicting cancer response to drug treatments while utilizing deep
learning methods. These papers investigate diverse data representations, neural
network architectures, learning methodologies, and evaluation schemes.
However, deciphering promising predominant and emerging trends is difficult due
to the variety of explored methods and lack of standardized framework for
comparing drug response prediction models. To obtain a comprehensive landscape
of deep learning methods, we conducted an extensive search and analysis of deep
learning models that predict the response to single drug treatments. A total of
60 deep learning-based models have been curated and summary plots were
generated. Based on the analysis, observable patterns and prevalence of methods
have been revealed. This review allows readers to better understand the current state
of the field and identify major challenges and promising solution paths.
| [
{
"created": "Fri, 18 Nov 2022 03:26:31 GMT",
"version": "v1"
}
] | 2022-11-22 | [
[
"Partin",
"Alexander",
"",
"1"
],
[
"Brettin",
"Thomas S.",
"",
"1"
],
[
"Zhu",
"Yitan",
"",
"1"
],
[
"Narykov",
"Oleksandr",
"",
"1"
],
[
"Clyde",
"Austin",
"",
"1 and 2"
],
[
"Overbeek",
"Jamie",
"",
"1"
],
[
"Stevens",
"Rick L.",
"",
"1 and 2"
]
] ] | Cancer claims millions of lives yearly worldwide. While many therapies have been made available in recent years, by and large cancer remains unsolved. Exploiting computational predictive models to study and treat cancer holds great promise in improving drug development and personalized design of treatment plans, ultimately suppressing tumors, alleviating suffering, and prolonging lives of patients. A wave of recent papers demonstrates promising results in predicting cancer response to drug treatments while utilizing deep learning methods. These papers investigate diverse data representations, neural network architectures, learning methodologies, and evaluation schemes. However, deciphering promising predominant and emerging trends is difficult due to the variety of explored methods and the lack of a standardized framework for comparing drug response prediction models. To obtain a comprehensive landscape of deep learning methods, we conducted an extensive search and analysis of deep learning models that predict the response to single drug treatments. A total of 60 deep learning-based models have been curated and summary plots were generated. Based on the analysis, observable patterns and prevalence of methods have been revealed. This review allows readers to better understand the current state of the field and identify major challenges and promising solution paths. |
0806.3741 | Ernest Barreto | Ghanim Ullah, John R. Cressman Jr., Ernest Barreto, and Steven J.
Schiff | The Influence of Sodium and Potassium Dynamics on Excitability,
Seizures, and the Stability of Persistent States: II. Network and Glial
Dynamics | Post-review revision. 5 figures. This is the second of a pair of
related papers; see also arXiv:0806.3738 | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In these companion papers, we study how the interrelated dynamics of sodium
and potassium affect the excitability of neurons, the occurrence of seizures,
and the stability of persistent states of activity. We seek to study these
dynamics with respect to the following compartments: neurons, glia, and
extracellular space. We are particularly interested in the slower time-scale
dynamics that determine overall excitability, and set the stage for transient
episodes of persistent oscillations, working memory, or seizures. In this
second of two companion papers, we present an ionic current network model
composed of populations of Hodgkin-Huxley type excitatory and inhibitory
neurons embedded within extracellular space and glia, in order to investigate
the role of micro-environmental ionic dynamics on the stability of persistent
activity. We show that these networks reproduce seizure-like activity if glial
cells fail to maintain the proper micro-environmental conditions surrounding
neurons, and produce several experimentally testable predictions. Our work
suggests that the stability of persistent states to perturbation is set by
glial activity, and that how the response to such perturbations decays or grows
may be a critical factor in a variety of disparate transient phenomena such as
working memory, burst firing in neonatal brain or spinal cord, up states,
seizures, and cortical oscillations.
| [
{
"created": "Mon, 23 Jun 2008 19:21:17 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Jun 2008 14:40:46 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Oct 2008 16:33:09 GMT",
"version": "v3"
}
] | 2009-09-29 | [
[
"Ullah",
"Ghanim",
""
],
[
"Cressman",
"John R.",
"Jr."
],
[
"Barreto",
"Ernest",
""
],
[
"Schiff",
"Steven J.",
""
]
] | In these companion papers, we study how the interrelated dynamics of sodium and potassium affect the excitability of neurons, the occurrence of seizures, and the stability of persistent states of activity. We seek to study these dynamics with respect to the following compartments: neurons, glia, and extracellular space. We are particularly interested in the slower time-scale dynamics that determine overall excitability, and set the stage for transient episodes of persistent oscillations, working memory, or seizures. In this second of two companion papers, we present an ionic current network model composed of populations of Hodgkin-Huxley type excitatory and inhibitory neurons embedded within extracellular space and glia, in order to investigate the role of micro-environmental ionic dynamics on the stability of persistent activity. We show that these networks reproduce seizure-like activity if glial cells fail to maintain the proper micro-environmental conditions surrounding neurons, and produce several experimentally testable predictions. Our work suggests that the stability of persistent states to perturbation is set by glial activity, and that how the response to such perturbations decays or grows may be a critical factor in a variety of disparate transient phenomena such as working memory, burst firing in neonatal brain or spinal cord, up states, seizures, and cortical oscillations. |
q-bio/0605037 | Chang-Yong Lee | Chang-Yong Lee | Mass Fractal Dimension of the Ribosome and Implication of its Dynamic
Characteristics | 7 pages, 2 figures | Physical Review E 73, 042901 (2006) | 10.1103/PhysRevE.73.042901 | null | q-bio.BM | null | Self-similar properties of the ribosome in terms of the mass fractal
dimension are investigated. We find that the 30S subunit and the 16S rRNA
have fractal dimensions of 2.58 and 2.82, respectively, while the 50S subunit
as well as the 23S rRNA has a mass fractal dimension close to 3, implying a
compact three dimensional macromolecule. This finding supports the dynamic and
active role of the 30S subunit in protein synthesis, in contrast to the
passive role of the 50S subunit.
| [
{
"created": "Tue, 23 May 2006 00:52:41 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Lee",
"Chang-Yong",
""
]
] ] | Self-similar properties of the ribosome in terms of the mass fractal dimension are investigated. We find that the 30S subunit and the 16S rRNA have fractal dimensions of 2.58 and 2.82, respectively, while the 50S subunit as well as the 23S rRNA has a mass fractal dimension close to 3, implying a compact three dimensional macromolecule. This finding supports the dynamic and active role of the 30S subunit in protein synthesis, in contrast to the passive role of the 50S subunit. |
2003.11371 | Reza Sameni | Reza Sameni | Mathematical Modeling of Epidemic Diseases; A Case Study of the COVID-19
Coronavirus | 19 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, we study the propagation patterns of epidemic diseases such
as the COVID-19 coronavirus, from a mathematical modeling perspective. The
study is based on an extension of the well-known
susceptible-infected-recovered (SIR) family of compartmental models. It is
shown how social measures such as distancing, regional lockdowns, quarantine
and global public health vigilance influence the model parameters, which can
eventually change the mortality rates and active contaminated cases over time,
in the real world. As with all mathematical models, the predictive ability of
the model is limited by the accuracy of the available data and by the so-called
\textit{level of abstraction} used for modeling the problem. In order to
provide the broader audience of researchers with a better understanding of
spreading patterns of epidemic diseases, a short introduction to biological systems
modeling is also presented and the Matlab source codes for the simulations are
provided online.
| [
{
"created": "Wed, 25 Mar 2020 12:46:21 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Apr 2020 23:42:26 GMT",
"version": "v2"
},
{
"created": "Tue, 19 May 2020 17:28:34 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Dec 2020 03:18:41 GMT",
"version": "v4"
}
] | 2021-01-01 | [
[
"Sameni",
"Reza",
""
]
] ] | In this research, we study the propagation patterns of epidemic diseases such as the COVID-19 coronavirus, from a mathematical modeling perspective. The study is based on an extension of the well-known susceptible-infected-recovered (SIR) family of compartmental models. It is shown how social measures such as distancing, regional lockdowns, quarantine and global public health vigilance influence the model parameters, which can eventually change the mortality rates and active contaminated cases over time, in the real world. As with all mathematical models, the predictive ability of the model is limited by the accuracy of the available data and by the so-called \textit{level of abstraction} used for modeling the problem. In order to provide the broader audience of researchers with a better understanding of spreading patterns of epidemic diseases, a short introduction to biological systems modeling is also presented and the Matlab source codes for the simulations are provided online. |
1509.03162 | Jannis Schuecker | Jannis Schuecker, Maximilian Schmidt, Sacha J. van Albada, Markus
Diesmann, Moritz Helias | Fundamental activity constraints lead to specific interpretations of the
connectome | J. Schuecker and M. Schmidt contributed equally to this work | PLOS CB 13, 1-25 (2017) | 10.1371/journal.pcbi.1005179 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The continuous integration of experimental data into coherent models of the
brain is an increasing challenge of modern neuroscience. Such models provide a
bridge between structure and activity, and identify the mechanisms giving rise
to experimental observations. Nevertheless, structurally realistic network
models of spiking neurons are necessarily underconstrained even if experimental
data on brain connectivity are incorporated to the best of our knowledge.
Guided by physiological observations, any model must therefore explore the
parameter ranges within the uncertainty of the data. Based on simulation
results alone, however, the mechanisms underlying stable and physiologically
realistic activity often remain obscure. We here employ a mean-field reduction
of the dynamics, which allows us to include activity constraints into the
process of model construction. We shape the phase space of a multi-scale
network model of the vision-related areas of macaque cortex by systematically
refining its connectivity. Fundamental constraints on the activity, i.e.,
prohibiting quiescence and requiring global stability, prove sufficient to
obtain realistic layer- and area-specific activity. Only small adaptations of
the structure are required, showing that the network operates close to an
instability. The procedure identifies components of the network critical to its
collective dynamics and creates hypotheses for structural data and future
experiments. The method can be applied to networks involving any neuron model
with a known gain function.
| [
{
"created": "Thu, 10 Sep 2015 14:01:48 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Feb 2016 16:33:55 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Apr 2016 07:34:37 GMT",
"version": "v3"
},
{
"created": "Thu, 2 Mar 2017 08:03:15 GMT",
"version": "v4"
}
] | 2017-03-03 | [
[
"Schuecker",
"Jannis",
""
],
[
"Schmidt",
"Maximilian",
""
],
[
"van Albada",
"Sacha J.",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Helias",
"Moritz",
""
]
] | The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function. |
1407.6029 | Ngoc Mai Tran | Ila Fiete, David J. Schwab and Ngoc M. Tran | A binary Hopfield network with $1/\log(n)$ information rate and
applications to grid cell decoding | extended abstract, 4 pages, 2 figures | null | null | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Hopfield network is an auto-associative, distributive model of neural
memory storage and retrieval. A form of error-correcting code, the Hopfield
network can learn a set of patterns as stable points of the network dynamic,
and retrieve them from noisy inputs -- thus Hopfield networks are their own
decoders. Unlike in coding theory, where the information rate of a good code
(in the Shannon sense) is finite but the cost of decoding does not play a role
in the rate, the information rate of Hopfield networks trained with
state-of-the-art learning algorithms is of the order ${\log(n)}/{n}$, a
quantity that tends to zero asymptotically with $n$, the number of neurons in
the network. For specially constructed networks, the best information rate
currently achieved is of order ${1}/{\sqrt{n}}$. In this work, we design simple
binary Hopfield networks that have asymptotically vanishing error rates at an
information rate of ${1}/{\log(n)}$. These networks can be added as the
decoders of any neural code with noisy neurons. As an example, we apply our
network to a binary neural decoder of the grid cell code to attain information
rate ${1}/{\log(n)}$.
| [
{
"created": "Tue, 22 Jul 2014 20:32:46 GMT",
"version": "v1"
}
] | 2014-07-24 | [
[
"Fiete",
"Ila",
""
],
[
"Schwab",
"David J.",
""
],
[
"Tran",
"Ngoc M.",
""
]
] | A Hopfield network is an auto-associative, distributive model of neural memory storage and retrieval. A form of error-correcting code, the Hopfield network can learn a set of patterns as stable points of the network dynamic, and retrieve them from noisy inputs -- thus Hopfield networks are their own decoders. Unlike in coding theory, where the information rate of a good code (in the Shannon sense) is finite but the cost of decoding does not play a role in the rate, the information rate of Hopfield networks trained with state-of-the-art learning algorithms is of the order ${\log(n)}/{n}$, a quantity that tends to zero asymptotically with $n$, the number of neurons in the network. For specially constructed networks, the best information rate currently achieved is of order ${1}/{\sqrt{n}}$. In this work, we design simple binary Hopfield networks that have asymptotically vanishing error rates at an information rate of ${1}/{\log(n)}$. These networks can be added as the decoders of any neural code with noisy neurons. As an example, we apply our network to a binary neural decoder of the grid cell code to attain information rate ${1}/{\log(n)}$. |
2109.12798 | Mitsuo Kawato | Mitsuo Kawato (ATR), Aurelio Cortese (ATR/RIKEN) | From internal models toward metacognitive AI | 23 pages, 3 figures. Fund number (ATLA) revised on Dec. 17 | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several papers published in Biological Cybernetics in the 1980s and 1990s,
Kawato and colleagues proposed computational models explaining how internal
models are acquired in the cerebellum. These models were later supported by
neurophysiological experiments using monkeys and neuroimaging experiments
involving humans. These early studies influenced neuroscience from basic,
sensory-motor control to higher cognitive functions. One of the most perplexing
enigmas related to internal models is to understand the neural mechanisms that
enable animals to learn large-dimensional problems with so few trials.
Consciousness and metacognition -- the ability to monitor one's own thoughts --
may be part of the solution to this enigma. Based on literature reviews of the
past 20 years, here we propose a computational neuroscience model of
metacognition. The model comprises a modular hierarchical
reinforcement-learning architecture of parallel and layered, generative-inverse
model pairs. In the prefrontal cortex, a distributed executive network called
the "cognitive reality monitoring network" (CRMN) orchestrates conscious
involvement of generative-inverse model pairs in perception and action. Based
on mismatches between computations by generative and inverse models, as well as
reward prediction errors, CRMN computes a "responsibility signal" that gates
selection and learning of pairs in perception, action, and reinforcement
learning. A high responsibility signal is given to the pairs that best capture
the external world, that are competent in movements (small mismatch), and that
are capable of reinforcement learning (small reward prediction error). CRMN
selects pairs with higher responsibility signals as objects of metacognition,
and consciousness is determined by the entropy of responsibility signals across
all pairs.
| [
{
"created": "Mon, 27 Sep 2021 05:00:56 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Dec 2021 00:45:51 GMT",
"version": "v2"
}
] | 2021-12-22 | [
[
"Kawato",
"Mitsuo",
"",
"ATR"
],
[
"Cortese",
"Aurelio",
"",
"ATR/RIKEN"
]
] ] | In several papers published in Biological Cybernetics in the 1980s and 1990s, Kawato and colleagues proposed computational models explaining how internal models are acquired in the cerebellum. These models were later supported by neurophysiological experiments using monkeys and neuroimaging experiments involving humans. These early studies influenced neuroscience from basic, sensory-motor control to higher cognitive functions. One of the most perplexing enigmas related to internal models is to understand the neural mechanisms that enable animals to learn large-dimensional problems with so few trials. Consciousness and metacognition -- the ability to monitor one's own thoughts -- may be part of the solution to this enigma. Based on literature reviews of the past 20 years, here we propose a computational neuroscience model of metacognition. The model comprises a modular hierarchical reinforcement-learning architecture of parallel and layered, generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" (CRMN) orchestrates conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between computations by generative and inverse models, as well as reward prediction errors, CRMN computes a "responsibility signal" that gates selection and learning of pairs in perception, action, and reinforcement learning. A high responsibility signal is given to the pairs that best capture the external world, that are competent in movements (small mismatch), and that are capable of reinforcement learning (small reward prediction error). CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs. |
2309.06165 | Yousef Jamali | Maryam Saadati, Saba Sadat Khodaei, and Yousef Jamali | Unveiling the Complexity of Neural Populations: Evaluating the Validity
and Limitations of the Wilson-Cowan Model | 25 pages, 10 figures | null | null | null | q-bio.NC nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The population model of Wilson-Cowan is perhaps the most popular in the
history of computational neuroscience. It embraces the nonlinear mean field
dynamics of excitatory and inhibitory neuronal populations provided via a
temporal coarse-graining technique. The traditional Wilson-Cowan equations
exhibit either steady-state regimes or else limit cycle competitions for an
appropriate range of parameters. As these equations lower the resolution of the
neural system and obscure vital information, we assess the validity of
mass-type model approximations for complex neural behaviors. Using a
large-scale network of Hodgkin-Huxley style neurons, we derive implicit average
population dynamics based on mean field assumptions. Our comparison of the
microscopic neural activity with the macroscopic temporal profiles reveals
dependency on the binary state of interacting subpopulations and the random
property of the structural network at the Hopf bifurcation points when
different synaptic weights are considered. For substantial configurations of
stimulus intensity, our model provides further estimates of the neural
population's dynamics, ranging from simple periodic to quasi-periodic
and aperiodic patterns, as well as phase transition regimes. While this shows
its great potential for studying the collective behavior of individual neurons,
particularly concentrating on the occurrence of bifurcation phenomena, we must
accept a quite limited accuracy of the Wilson-Cowan approximations, at least in
some parameter regimes. Additionally, we report that the complexity and
temporal diversity of neural dynamics, especially in terms of limit cycle
trajectory, and synchronization can be induced by either small heterogeneity in
the degree of various types of local excitatory connectivity or considerable
diversity in the external drive to the excitatory pool.
| [
{
"created": "Tue, 12 Sep 2023 12:22:18 GMT",
"version": "v1"
}
] | 2023-09-13 | [
[
"Saadati",
"Maryam",
""
],
[
"Khodaei",
"Saba Sadat",
""
],
[
"Jamali",
"Yousef",
""
]
] ] | The population model of Wilson-Cowan is perhaps the most popular in the history of computational neuroscience. It embraces the nonlinear mean field dynamics of excitatory and inhibitory neuronal populations provided via a temporal coarse-graining technique. The traditional Wilson-Cowan equations exhibit either steady-state regimes or else limit cycle competitions for an appropriate range of parameters. As these equations lower the resolution of the neural system and obscure vital information, we assess the validity of mass-type model approximations for complex neural behaviors. Using a large-scale network of Hodgkin-Huxley style neurons, we derive implicit average population dynamics based on mean field assumptions. Our comparison of the microscopic neural activity with the macroscopic temporal profiles reveals dependency on the binary state of interacting subpopulations and the random property of the structural network at the Hopf bifurcation points when different synaptic weights are considered. For substantial configurations of stimulus intensity, our model provides further estimates of the neural population's dynamics, ranging from simple periodic to quasi-periodic and aperiodic patterns, as well as phase transition regimes. While this shows its great potential for studying the collective behavior of individual neurons, particularly concentrating on the occurrence of bifurcation phenomena, we must accept a quite limited accuracy of the Wilson-Cowan approximations, at least in some parameter regimes. Additionally, we report that the complexity and temporal diversity of neural dynamics, especially in terms of limit cycle trajectory, and synchronization can be induced by either small heterogeneity in the degree of various types of local excitatory connectivity or considerable diversity in the external drive to the excitatory pool. |
2106.04366 | Rodrigo Echeveste | Rodrigo Echeveste, Enzo Ferrante, Diego H. Milone, In\'es Samengo | Bridging physiological and perceptual views of autism by means of
sampling-based Bayesian inference | Accepted for publication in Network Neuroscience | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Theories for autism spectrum disorder (ASD) have been formulated at different
levels: ranging from physiological observations to perceptual and behavioral
descriptions. Understanding the physiological underpinnings of perceptual
traits in ASD remains a significant challenge in the field. Here we show how a
recurrent neural circuit model which was optimized to perform sampling-based
inference and displays characteristic features of cortical dynamics can help
bridge this gap. The model was able to establish a mechanistic link between two
descriptive levels for ASD: a physiological level, in terms of inhibitory
dysfunction, neural variability and oscillations, and a perceptual level, in
terms of hypopriors in Bayesian computations. We took two parallel paths:
inducing hypopriors in the probabilistic model, and an inhibitory dysfunction
in the network model, which led to consistent results in terms of the
represented posteriors, providing support for the view that both descriptions
might constitute two sides of the same coin.
| [
{
"created": "Tue, 8 Jun 2021 14:06:13 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Dec 2021 21:05:43 GMT",
"version": "v2"
}
] | 2021-12-03 | [
[
"Echeveste",
"Rodrigo",
""
],
[
"Ferrante",
"Enzo",
""
],
[
"Milone",
"Diego H.",
""
],
[
"Samengo",
"Inés",
""
]
] ] | Theories for autism spectrum disorder (ASD) have been formulated at different levels: ranging from physiological observations to perceptual and behavioral descriptions. Understanding the physiological underpinnings of perceptual traits in ASD remains a significant challenge in the field. Here we show how a recurrent neural circuit model which was optimized to perform sampling-based inference and displays characteristic features of cortical dynamics can help bridge this gap. The model was able to establish a mechanistic link between two descriptive levels for ASD: a physiological level, in terms of inhibitory dysfunction, neural variability and oscillations, and a perceptual level, in terms of hypopriors in Bayesian computations. We took two parallel paths: inducing hypopriors in the probabilistic model, and an inhibitory dysfunction in the network model, which led to consistent results in terms of the represented posteriors, providing support for the view that both descriptions might constitute two sides of the same coin. |
2404.05329 | Jing Ye | Jing Ye, Minzhi Fan, Xiaoyu Zhang, Shasha Lu, Mengyao Chai, Yunshan
Zhang, Xiaoyu Zhao, Shuang Li, Diming Zhang | In silico bioactivity prediction of proteins interacting with
graphene-based nanomaterials guides rational design of biosensor | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphene based nanomaterials have attracted significant attention for their
potentials in biomedical and biotechnology applications in recent years, owing
to the outstanding physical and chemical properties. However, the interaction
mechanism and impact on biological activity of macro and micro biomolecules
still require more concerns and further research in order to enhance their
applicability in biosensors, etc. Herein, an integrated method has been
developed to predict the protein bioactivity performance when interacting with
nanomaterials for protein based biosensor. Molecular dynamics simulation and
molecular docking technique were consolidated to investigate several
nanomaterials C60 fullerene, single walled carbon nanotube, pristine graphene
and graphene oxide, and their effect when interacting with protein. The
adsorption behavior, secondary structure changes and protein bioactivity
changes were simulated, and the results of protein activity simulation were
verified in combination with atomic force spectrum, circular dichroism spectrum
fluorescence and electrochemical experiments. The best quantification alignment
between bioactivity obtained by simulation and experiment measurements was
further explored. The two proteins, RNase A and Exonuclease III, were regarded
as analysis model for the proof of concept, and the prediction accuracy of
protein bioactivty could reach up to 0.98.
| [
{
"created": "Mon, 8 Apr 2024 09:15:43 GMT",
"version": "v1"
}
] | 2024-04-09 | [
[
"Ye",
"Jing",
""
],
[
"Fan",
"Minzhi",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Lu",
"Shasha",
""
],
[
"Chai",
"Mengyao",
""
],
[
"Zhang",
"Yunshan",
""
],
[
"Zhao",
"Xiaoyu",
""
],
[
"Li",
"Shuang",
""
],
[
"Zhang",
"Diming",
""
]
] ] | Graphene-based nanomaterials have attracted significant attention for their potential in biomedical and biotechnology applications in recent years, owing to their outstanding physical and chemical properties. However, the interaction mechanisms and the impact on the biological activity of macro- and micro-biomolecules still require more attention and further research in order to enhance their applicability in biosensors. Herein, an integrated method has been developed to predict protein bioactivity performance when interacting with nanomaterials for protein-based biosensors. Molecular dynamics simulation and molecular docking techniques were combined to investigate several nanomaterials (C60 fullerene, single-walled carbon nanotubes, pristine graphene and graphene oxide) and their effects when interacting with proteins. The adsorption behavior, secondary structure changes and protein bioactivity changes were simulated, and the results of the protein activity simulations were verified in combination with atomic force spectroscopy, circular dichroism spectroscopy, fluorescence and electrochemical experiments. The best quantitative alignment between the bioactivity obtained by simulation and by experimental measurements was further explored. The two proteins, RNase A and Exonuclease III, served as analysis models for the proof of concept, and the prediction accuracy of protein bioactivity could reach up to 0.98. |
1509.08524 | Lei Meng | Lei Meng, Aaron Striegel, and Tijana Milenkovic | Local versus Global Biological Network Alignment | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network alignment (NA) aims to find regions of similarities between molecular
networks of different species. There exist two NA categories: local (LNA) and
global (GNA). LNA finds small highly conserved network regions and produces a
many-to-many node mapping. GNA finds large conserved regions and produces a
one-to-one node mapping. Given the different outputs of LNA and GNA, when a new
NA method is proposed, it is compared against existing methods from the same
category. However, both NA categories have the same goal: to allow for
transferring functional knowledge from well- to poorly-studied species between
conserved network regions. So, which one to choose, LNA or GNA? To answer this,
we introduce the first systematic evaluation of the two NA categories.
We introduce new measures of alignment quality that allow for fair comparison
of the different LNA and GNA outputs, as such measures do not exist. We provide
user-friendly software for efficient alignment evaluation that implements the
new and existing measures. We evaluate prominent LNA and GNA methods on
synthetic and real-world biological networks. We study the effect on alignment
quality of using different interaction types and confidence levels. We find
that the superiority of one NA category over the other is context-dependent.
Further, when we contrast LNA and GNA in the application of learning novel
protein functional knowledge, the two produce very different predictions,
indicating their complementarity. Our results and software provide guidelines
for future NA method development and evaluation.
| [
{
"created": "Mon, 28 Sep 2015 22:18:03 GMT",
"version": "v1"
}
] | 2015-09-30 | [
[
"Meng",
"Lei",
""
],
[
"Striegel",
"Aaron",
""
],
[
"Milenkovic",
"Tijana",
""
]
] | Network alignment (NA) aims to find regions of similarities between molecular networks of different species. There exist two NA categories: local (LNA) or global (GNA). LNA finds small highly conserved network regions and produces a many-to-many node mapping. GNA finds large conserved regions and produces a one-to-one node mapping. Given the different outputs of LNA and GNA, when a new NA method is proposed, it is compared against existing methods from the same category. However, both NA categories have the same goal: to allow for transferring functional knowledge from well- to poorly-studied species between conserved network regions. So, which one to choose, LNA or GNA? To answer this, we introduce the first systematic evaluation of the two NA categories. We introduce new measures of alignment quality that allow for fair comparison of the different LNA and GNA outputs, as such measures do not exist. We provide user-friendly software for efficient alignment evaluation that implements the new and existing measures. We evaluate prominent LNA and GNA methods on synthetic and real-world biological networks. We study the effect on alignment quality of using different interaction types and confidence levels. We find that the superiority of one NA category over the other is context-dependent. Further, when we contrast LNA and GNA in the application of learning novel protein functional knowledge, the two produce very different predictions, indicating their complementarity. Our results and software provide guidelines for future NA method development and evaluation. |
2210.04520 | Timo Flesch | Timo Flesch, Andrew Saxe, Christopher Summerfield | Continual task learning in natural and artificial agents | 18 pages, 3 figures | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | How do humans and other animals learn new tasks? A wave of brain recording
studies has investigated how neural representations change during task
learning, with a focus on how tasks can be acquired and coded in ways that
minimise mutual interference. We review recent work that has explored the
geometry and dimensionality of neural task representations in neocortex, and
computational models that have exploited these findings to understand how the
brain may partition knowledge between tasks. We discuss how ideas from machine
learning, including those that combine supervised and unsupervised learning,
are helping neuroscientists understand how natural tasks are learned and coded
in biological brains.
| [
{
"created": "Mon, 10 Oct 2022 09:36:08 GMT",
"version": "v1"
}
] | 2022-10-11 | [
[
"Flesch",
"Timo",
""
],
[
"Saxe",
"Andrew",
""
],
[
"Summerfield",
"Christopher",
""
]
] | How do humans and other animals learn new tasks? A wave of brain recording studies has investigated how neural representations change during task learning, with a focus on how tasks can be acquired and coded in ways that minimise mutual interference. We review recent work that has explored the geometry and dimensionality of neural task representations in neocortex, and computational models that have exploited these findings to understand how the brain may partition knowledge between tasks. We discuss how ideas from machine learning, including those that combine supervised and unsupervised learning, are helping neuroscientists understand how natural tasks are learned and coded in biological brains. |
1403.0869 | Alessandra De Rossi | Alessandra De Rossi, Francesco Lisa, Luca Rubini, Alberto Zappavigna,
Ezio Venturino | A food chain ecoepidemic model: infection at the bottom trophic level | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider a three level food web subject to a disease
affecting the bottom prey. The resulting dynamics is much richer with respect
to the purely demographic model, in that it contains more transcritical
bifurcations, gluing together the various equilibria, as well as persistent
limit cycles, which are shown to be absent in the classical case. Finally,
bistability is discovered among some equilibria, leading to situations in which
the computation of their basins of attraction is relevant for the system
outcome in terms of its biological implications.
| [
{
"created": "Tue, 4 Mar 2014 17:37:14 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"De Rossi",
"Alessandra",
""
],
[
"Lisa",
"Francesco",
""
],
[
"Rubini",
"Luca",
""
],
[
"Zappavigna",
"Alberto",
""
],
[
"Venturino",
"Ezio",
""
]
] | In this paper we consider a three level food web subject to a disease affecting the bottom prey. The resulting dynamics is much richer with respect to the purely demographic model, in that it contains more transcritical bifurcations, gluing together the various equilibria, as well as persistent limit cycles, which are shown to be absent in the classical case. Finally, bistability is discovered among some equilibria, leading to situations in which the computation of their basins of attraction is relevant for the system outcome in terms of its biological implications. |
1602.06317 | Yuanhua Huang | Yuanhua Huang and Guido Sanguinetti | Statistical modeling of isoform splicing dynamics from RNA-seq time
series data | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Isoform quantification is an important goal of RNA-seq experiments, yet it
remains problematic for genes with low expression or several isoforms. These
difficulties may in principle be ameliorated by exploiting correlated
experimental designs, such as time series or dosage response experiments. Time
series RNA-seq experiments, in particular, are becoming increasingly popular,
yet there are no methods that explicitly leverage the experimental design to
improve isoform quantification. Here we present DICEseq, the first isoform
quantification method tailored to correlated RNA-seq experiments. DICEseq
explicitly models the correlations between different RNA-seq experiments to
aid the quantification of isoforms across experiments. Numerical experiments on
simulated data sets show that DICEseq yields more accurate results than
state-of-the-art methods, an advantage that can become considerable at low
coverage levels. On real data sets, our results show that DICEseq provides
substantially more reproducible and robust quantifications, increasing the
correlation of estimates from replicate data sets by up to 10% on genes with
low or moderate expression levels (bottom third of all genes). Furthermore,
DICEseq permits quantifying the trade-off between temporal sampling of RNA and
depth of sequencing, frequently an important choice when planning experiments.
Our results have strong implications for the design of RNA-seq experiments,
and offer a novel tool for improved analysis of such data sets. Python code is
freely available at http://diceseq.sf.net.
| [
{
"created": "Fri, 19 Feb 2016 21:35:16 GMT",
"version": "v1"
}
] | 2016-02-23 | [
[
"Huang",
"Yuanhua",
""
],
[
"Sanguinetti",
"Guido",
""
]
] | Isoform quantification is an important goal of RNA-seq experiments, yet it remains problematic for genes with low expression or several isoforms. These difficulties may in principle be ameliorated by exploiting correlated experimental designs, such as time series or dosage response experiments. Time series RNA-seq experiments, in particular, are becoming increasingly popular, yet there are no methods that explicitly leverage the experimental design to improve isoform quantification. Here we present DICEseq, the first isoform quantification method tailored to correlated RNA-seq experiments. DICEseq explicitly models the correlations between different RNA-seq experiments to aid the quantification of isoforms across experiments. Numerical experiments on simulated data sets show that DICEseq yields more accurate results than state-of-the-art methods, an advantage that can become considerable at low coverage levels. On real data sets, our results show that DICEseq provides substantially more reproducible and robust quantifications, increasing the correlation of estimates from replicate data sets by up to 10% on genes with low or moderate expression levels (bottom third of all genes). Furthermore, DICEseq permits quantifying the trade-off between temporal sampling of RNA and depth of sequencing, frequently an important choice when planning experiments. Our results have strong implications for the design of RNA-seq experiments, and offer a novel tool for improved analysis of such data sets. Python code is freely available at http://diceseq.sf.net. |
2007.15554 | Prasannavenkatesan Theerthagiri | Prasannavenkatesan Theerthagiri | Forecasting Hyponatremia in hospitalized patients Using Multilayer
Perceptron and Multivariate Linear Regression Techniques | 19 pages, 5 figure | Concurrency and Computation: Practice and Experience, 2021 | 10.1002/cpe.6248 | null | q-bio.QM cs.LG cs.NE stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The percentage of patients hospitalized due to hyponatremia is getting
higher. Hyponatremia is the deficiency of sodium electrolyte in the human
serum. This deficiency might induce adverse effects and is also associated with
longer hospital stay or mortality if it isn't actively treated and managed.
This work predicts the futuristic sodium levels of patients based on their
history of health problems using multilayer perceptron and multivariate linear
regression algorithms. This work analyses the patient's age, information about
other diseases such as diabetes, pneumonia, liver disease, malignancy,
pulmonary, sepsis, SIADH, and sodium level of the patient during admission to
the hospital. The results of the proposed MLP algorithm are compared with
MLR-based results. The MLP predictions achieve 23-72% higher prediction
accuracy than the MLR algorithm. Thus, the proposed MLP algorithm produced a
57.1% lower mean squared error rate than the MLR results in predicting future
sodium ranges of patients. Further, the proposed MLP algorithm produces a
27-50% higher prediction precision rate.
| [
{
"created": "Wed, 15 Jul 2020 12:47:24 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Aug 2020 04:16:06 GMT",
"version": "v2"
}
] | 2021-06-15 | [
[
"Theerthagiri",
"Prasannavenkatesan",
""
]
] | The percentage of patients hospitalized due to hyponatremia is getting higher. Hyponatremia is the deficiency of sodium electrolyte in the human serum. This deficiency might induce adverse effects and is also associated with longer hospital stay or mortality if it isn't actively treated and managed. This work predicts the futuristic sodium levels of patients based on their history of health problems using multilayer perceptron and multivariate linear regression algorithms. This work analyses the patient's age, information about other diseases such as diabetes, pneumonia, liver disease, malignancy, pulmonary, sepsis, SIADH, and sodium level of the patient during admission to the hospital. The results of the proposed MLP algorithm are compared with MLR-based results. The MLP predictions achieve 23-72% higher prediction accuracy than the MLR algorithm. Thus, the proposed MLP algorithm produced a 57.1% lower mean squared error rate than the MLR results in predicting future sodium ranges of patients. Further, the proposed MLP algorithm produces a 27-50% higher prediction precision rate. |
0910.5371 | Denis Semenov A. | Denis A. Semenov | From the Wobble to Reliable Hypothesis | 6 pages, 1 table | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple explanation for the symmetry and degeneracy of the genetic code has
been suggested. An alternative to the wobble hypothesis has been proposed. This
hypothesis offers explanations for: i) the difference between thymine and
uracil, ii) encoding of tryptophan by only one codon, iii) why E. coli have no
inosine in isoleucine tRNA, but isoleucine is encoded by three codons. The
facts revealed in this study offer a new insight into physical mechanisms of
the functioning of the genetic code.
| [
{
"created": "Wed, 28 Oct 2009 13:32:25 GMT",
"version": "v1"
}
] | 2009-10-29 | [
[
"Semenov",
"Denis A.",
""
]
] | A simple explanation for the symmetry and degeneracy of the genetic code has been suggested. An alternative to the wobble hypothesis has been proposed. This hypothesis offers explanations for: i) the difference between thymine and uracil, ii) encoding of tryptophan by only one codon, iii) why E. coli have no inosine in isoleucine tRNA, but isoleucine is encoded by three codons. The facts revealed in this study offer a new insight into physical mechanisms of the functioning of the genetic code. |
1302.3855 | Renqiang Min | Salim Chowdhury, Yanjun Qi, Alex Stewart, Rachel Ostroff, Renqiang Min | Cancer Diagnosis with QUIRE: QUadratic Interactions among infoRmative
fEatures | null | null | null | NEC Labs America, TR/TN # 2012-TR115 | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Responsible for many complex human diseases including cancers, disrupted or
abnormal gene interactions can be identified through their expression changes
correlating with the progression of a disease. However, the examination of all
possible combinatorial interactions between gene features in a genome-wide
case-control study is computationally infeasible as the search space is
exponential in nature.
In this paper, we propose a novel computational approach, QUIRE, to identify
discriminative complex interactions among informative gene features for cancer
diagnosis. QUIRE works in two stages, where it first identifies functionally
relevant feature groups for the disease, and then explores the search space
capturing the combinatorial relationships among the genes from the selected
informative groups. Using QUIRE, we explore the differential patterns and the
interactions among informative gene features in three different types of
cancers, Renal Cell Carcinoma(RCC), Ovarian Cancer(OVC) and Colorectal Cancer
(CRC). Our experimental results show that QUIRE identifies gene-gene
interactions that can better identify the different cancer stages of samples
and can predict CRC recurrence and death from CRC more successfully, as
compared to other state-of-the-art feature selection methods. A literature
survey shows that many of the interactions identified by QUIRE play important
roles in the development of cancer.
| [
{
"created": "Fri, 15 Feb 2013 19:42:07 GMT",
"version": "v1"
}
] | 2013-02-18 | [
[
"Chowdhury",
"Salim",
""
],
[
"Qi",
"Yanjun",
""
],
[
"Stewart",
"Alex",
""
],
[
"Ostroff",
"Rachel",
""
],
[
"Min",
"Renqiang",
""
]
] | Responsible for many complex human diseases including cancers, disrupted or abnormal gene interactions can be identified through their expression changes correlating with the progression of a disease. However, the examination of all possible combinatorial interactions between gene features in a genome-wide case-control study is computationally infeasible as the search space is exponential in nature. In this paper, we propose a novel computational approach, QUIRE, to identify discriminative complex interactions among informative gene features for cancer diagnosis. QUIRE works in two stages, where it first identifies functionally relevant feature groups for the disease, and then explores the search space capturing the combinatorial relationships among the genes from the selected informative groups. Using QUIRE, we explore the differential patterns and the interactions among informative gene features in three different types of cancers, Renal Cell Carcinoma(RCC), Ovarian Cancer(OVC) and Colorectal Cancer (CRC). Our experimental results show that QUIRE identifies gene-gene interactions that can better identify the different cancer stages of samples and can predict CRC recurrence and death from CRC more successfully, as compared to other state-of-the-art feature selection methods. A literature survey shows that many of the interactions identified by QUIRE play important roles in the development of cancer. |
2408.03809 | Johan Medrano | Johan Medrano, Noor Sajid | A broken duet: multistable dynamics of dyadic interactions | 24 pages, 6 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Misunderstandings in dyadic interactions often persist despite our best
efforts, particularly between native and non-native speakers, resembling a
broken duet that refuses to harmonise. This paper delves into the computational
mechanisms underpinning these misunderstandings through the lens of the broken
Lorenz system -- a continuous dynamical model. By manipulating a specific
parameter regime, we induce bistability within the Lorenz equations, thereby
confining trajectories to distinct attractors based on initial conditions. This
mirrors the persistence of divergent interpretations that often result in
misunderstandings. Our simulations reveal that differing prior beliefs between
interlocutors result in misaligned generative models, leading to stable yet
divergent states of understanding when exposed to the same percept.
Specifically, native speakers equipped with precise (i.e., overconfident)
priors expect inputs to align closely with their internal models, thus
struggling with unexpected variations. Conversely, non-native speakers with
imprecise (i.e., less confident) priors exhibit a greater capacity to adjust
and accommodate unforeseen inputs. Our results underscore the important role of
generative models in facilitating mutual understanding (i.e., establishing a
shared narrative) and highlight the necessity of accounting for multistable
dynamics in dyadic interactions.
| [
{
"created": "Wed, 7 Aug 2024 14:35:14 GMT",
"version": "v1"
}
] | 2024-08-08 | [
[
"Medrano",
"Johan",
""
],
[
"Sajid",
"Noor",
""
]
] | Misunderstandings in dyadic interactions often persist despite our best efforts, particularly between native and non-native speakers, resembling a broken duet that refuses to harmonise. This paper delves into the computational mechanisms underpinning these misunderstandings through the lens of the broken Lorenz system -- a continuous dynamical model. By manipulating a specific parameter regime, we induce bistability within the Lorenz equations, thereby confining trajectories to distinct attractors based on initial conditions. This mirrors the persistence of divergent interpretations that often result in misunderstandings. Our simulations reveal that differing prior beliefs between interlocutors result in misaligned generative models, leading to stable yet divergent states of understanding when exposed to the same percept. Specifically, native speakers equipped with precise (i.e., overconfident) priors expect inputs to align closely with their internal models, thus struggling with unexpected variations. Conversely, non-native speakers with imprecise (i.e., less confident) priors exhibit a greater capacity to adjust and accommodate unforeseen inputs. Our results underscore the important role of generative models in facilitating mutual understanding (i.e., establishing a shared narrative) and highlight the necessity of accounting for multistable dynamics in dyadic interactions. |
1809.09179 | Sabrina Araujo | Sabrina B. L. Araujo, Marcelo Eduardo Borges, Francisco W. von
Hartenthal, Leonardo R. Jorge, Thomas M. Lewinsohn, Paulo R. Guimaraes Jr.
and Minus van Baalen | Coevolutionary patterns caused by prey selection | null | null | 10.1016/j.jtbi.2020.110327 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many theoretical models have been formulated to better understand the
coevolutionary patterns that emerge from antagonistic interactions. These
models usually assume that the attacks by the exploiters are random, so the
effect of victim selection by exploiters on coevolutionary patterns remains
unexplored. Here we analytically studied the payoff for predators and prey
under coevolution assuming that every individual predator can attack only a
small number of prey at any given time, considering two scenarios: (i) predation
occurs at random; (ii) predators select prey according to phenotype matching.
We also develop an individual based model to verify the robustness of our
analytical prediction. We show that both scenarios result in well known similar
coevolutionary patterns if population sizes are sufficiently high: symmetrical
coevolutionary branching and symmetrical coevolutionary cycling (Red Queen
dynamics). However, for small population sizes, prey selection can cause
unexpected coevolutionary patterns. One is the breaking of symmetry of the
coevolutionary pattern, where the phenotypes evolve towards one of two
evolutionarily stable patterns. As population size increases, the phenotypes
oscillate between these two values in a novel form of Red Queen dynamics, the
episodic reversal between the two stable patterns. Thus, prey selection causes
prey phenotypes to evolve towards more extreme values, which reduces the
fitness of both predators and prey, increasing the likelihood of extinction.
| [
{
"created": "Mon, 24 Sep 2018 19:33:08 GMT",
"version": "v1"
},
{
"created": "Tue, 19 May 2020 21:26:01 GMT",
"version": "v2"
}
] | 2020-05-21 | [
[
"Araujo",
"Sabrina B. L.",
""
],
[
"Borges",
"Marcelo Eduardo",
""
],
[
"von Hartenthal",
"Francisco W.",
""
],
[
"Jorge",
"Leonardo R.",
""
],
[
"Lewinsohn",
"Thomas M.",
""
],
[
"Guimaraes",
"Paulo R.",
"Jr."
],
[
"van Baalen",
"Minus",
""
]
] | Many theoretical models have been formulated to better understand the coevolutionary patterns that emerge from antagonistic interactions. These models usually assume that the attacks by the exploiters are random, so the effect of victim selection by exploiters on coevolutionary patterns remains unexplored. Here we analytically studied the payoff for predators and prey under coevolution assuming that every individual predator can attack only a small number of prey at any given time, considering two scenarios: (i) predation occurs at random; (ii) predators select prey according to phenotype matching. We also develop an individual based model to verify the robustness of our analytical prediction. We show that both scenarios result in well known similar coevolutionary patterns if population sizes are sufficiently high: symmetrical coevolutionary branching and symmetrical coevolutionary cycling (Red Queen dynamics). However, for small population sizes, prey selection can cause unexpected coevolutionary patterns. One is the breaking of symmetry of the coevolutionary pattern, where the phenotypes evolve towards one of two evolutionarily stable patterns. As population size increases, the phenotypes oscillate between these two values in a novel form of Red Queen dynamics, the episodic reversal between the two stable patterns. Thus, prey selection causes prey phenotypes to evolve towards more extreme values, which reduces the fitness of both predators and prey, increasing the likelihood of extinction. |
1510.01888 | Steven Watterson | Andrew Parton, Victoria McGilligan, Maurice OKane, Francina R Baldrick
and Steven Watterson | Computational Modelling of Atherosclerosis | in Briefings in Bioinformatics (2015) | null | 10.1093/bib/bbv081 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Atherosclerosis is one of the principal pathologies of cardiovascular disease
with blood cholesterol a significant risk factor. The World Health Organisation
estimates that approximately 2.5 million deaths occur annually due to the risk
from elevated cholesterol with 39% of adults worldwide at future risk.
Atherosclerosis emerges from the combination of many dynamical factors,
including haemodynamics, endothelial damage, innate immunity and sterol
biochemistry. Despite its significance to public health, the dynamics that
drive atherosclerosis remain poorly understood. As a disease that depends on
multiple factors operating on different length scales, the natural framework to
apply to atherosclerosis is mathematical and computational modelling. A
computational model provides an integrated description of the disease and
serves as an in silico experimental system from which we can learn about the
disease and develop therapeutic hypotheses. Although the work completed in this
area to-date has been limited, there are clear signs that interest is growing
and that a nascent field is establishing itself. This paper discusses the
current state of modelling in this area, bringing together many recent results
for the first time. We review the work that has been done, discuss its scope
and highlight the gaps in our understanding that could yield future
opportunities.
| [
{
"created": "Wed, 7 Oct 2015 10:55:22 GMT",
"version": "v1"
}
] | 2015-10-08 | [
[
"Parton",
"Andrew",
""
],
[
"McGilligan",
"Victoria",
""
],
[
"OKane",
"Maurice",
""
],
[
"Baldrick",
"Francina R",
""
],
[
"Watterson",
"Steven",
""
]
] | Atherosclerosis is one of the principal pathologies of cardiovascular disease with blood cholesterol a significant risk factor. The World Health Organisation estimates that approximately 2.5 million deaths occur annually due to the risk from elevated cholesterol with 39% of adults worldwide at future risk. Atherosclerosis emerges from the combination of many dynamical factors, including haemodynamics, endothelial damage, innate immunity and sterol biochemistry. Despite its significance to public health, the dynamics that drive atherosclerosis remain poorly understood. As a disease that depends on multiple factors operating on different length scales, the natural framework to apply to atherosclerosis is mathematical and computational modelling. A computational model provides an integrated description of the disease and serves as an in silico experimental system from which we can learn about the disease and develop therapeutic hypotheses. Although the work completed in this area to-date has been limited, there are clear signs that interest is growing and that a nascent field is establishing itself. This paper discusses the current state of modelling in this area, bringing together many recent results for the first time. We review the work that has been done, discuss its scope and highlight the gaps in our understanding that could yield future opportunities. |
1006.2527 | Alan D. Rendall | Alan D. Rendall | Analysis of a mathematical model for interactions between T cells and
macrophages | 11 pages | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to carry out a mathematical analysis of a system of
ordinary differential equations introduced by R. Lev Bar-Or to model the
interactions between T cells and macrophages. Under certain restrictions on the
parameters of the model, theorems are proved about the number of stationary
solutions and their stability. In some cases the existence of periodic
solutions or heteroclinic cycles is ruled out. Evidence is presented that the
same biological phenomena could be equally well described by a simpler model.
| [
{
"created": "Sun, 13 Jun 2010 11:46:45 GMT",
"version": "v1"
}
] | 2010-06-15 | [
[
"Rendall",
"Alan D.",
""
]
] | The aim of this paper is to carry out a mathematical analysis of a system of ordinary differential equations introduced by R. Lev Bar-Or to model the interactions between T cells and macrophages. Under certain restrictions on the parameters of the model, theorems are proved about the number of stationary solutions and their stability. In some cases the existence of periodic solutions or heteroclinic cycles is ruled out. Evidence is presented that the same biological phenomena could be equally well described by a simpler model. |
1907.10748 | Sandeep Choubey | Sandeep Choubey, Dipjyoti Das, Saptarshi Majumdar | Cell-to-cell variability in organelle abundance reveals mechanisms of
organelle biogenesis | 30 pages, 3 Figures and 1 Table. Supplementary figures are attached | null | 10.1103/PhysRevE.100.022405 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How cells regulate the number of organelles is a fundamental question in cell
biology. While decades of experimental work have uncovered four fundamental
processes that regulate organelle biogenesis, namely, de novo synthesis,
fission, fusion and decay, a comprehensive understanding of how these processes
together control organelle abundance remains elusive. Recent fluorescence
microscopy experiments allow for the counting of organelles at the single-cell
level. These measurements provide information about the cell-to-cell
variability in organelle abundance in addition to the mean level. Motivated by
such measurements, we build upon a recent study and analyze a general
stochastic model of organelle biogenesis. We compute the exact analytical
expressions for the probability distribution of organelle numbers, their mean,
and variance across a population of single cells. It is shown that different
mechanisms of organelle biogenesis lead to distinct signatures in the
distribution of organelle numbers which allows us to discriminate between these
various mechanisms. By comparing our theory against published data for
peroxisome abundance measurements in yeast, we show that a widely believed
model of peroxisome biogenesis that involves de novo synthesis, fission, and
decay is inadequate in explaining the data. Also, our theory predicts
bimodality in certain limits of the model. Overall, the framework developed
here can be harnessed to gain mechanistic insights into the process of
organelle biogenesis.
| [
{
"created": "Wed, 24 Jul 2019 22:10:47 GMT",
"version": "v1"
}
] | 2019-09-04 | [
[
"Choubey",
"Sandeep",
""
],
[
"Das",
"Dipjyoti",
""
],
[
"Majumdar",
"Saptarshi",
""
]
] | How cells regulate the number of organelles is a fundamental question in cell biology. While decades of experimental work have uncovered four fundamental processes that regulate organelle biogenesis, namely, de novo synthesis, fission, fusion and decay, a comprehensive understanding of how these processes together control organelle abundance remains elusive. Recent fluorescence microscopy experiments allow for the counting of organelles at the single-cell level. These measurements provide information about the cell-to-cell variability in organelle abundance in addition to the mean level. Motivated by such measurements, we build upon a recent study and analyze a general stochastic model of organelle biogenesis. We compute the exact analytical expressions for the probability distribution of organelle numbers, their mean, and variance across a population of single cells. It is shown that different mechanisms of organelle biogenesis lead to distinct signatures in the distribution of organelle numbers which allows us to discriminate between these various mechanisms. By comparing our theory against published data for peroxisome abundance measurements in yeast, we show that a widely believed model of peroxisome biogenesis that involves de novo synthesis, fission, and decay is inadequate in explaining the data. Also, our theory predicts bimodality in certain limits of the model. Overall, the framework developed here can be harnessed to gain mechanistic insights into the process of organelle biogenesis. |
2109.02486 | Osama Hourani Dr. | Osama Hourani, Nasrollah Moghadam Charkari, Saeed Jalili | Voxel selection framework based on meta-heuristic search and mutual
information for brain decoding | 20 pages, 6 figures This is the pre-peer reviewed version of the
following article: Hourani O, Charkari NM, Jalili S. Voxel selection
framework based on metaheuristic search and mutual information for brain
decoding. Int J Imaging Syst Technol. 2019;29: 663-676.
https://doi.org/10.1002/ima.22353 which has been published in final form at
https://onlinelibrary.wiley.com/doi/abs/10.1002/ima.22353 | International Journal of Imaging Systems and Technology 29 (2019)
663-676 | 10.1002/ima.22353 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Visual stimulus decoding is an increasingly important challenge in
neuroscience. The goal is to classify the activity patterns from the human
brain during the sighting of visual objects. One of the crucial problems in
building a brain decoder is selecting informative voxels. We propose a
meta-heuristic voxel selection framework for brain decoding. It is composed of
four phases: preprocessing of fMRI data; filtering insignificant voxels;
postprocessing; and meta-heuristic selection. The main contribution is
using a meta-heuristic search algorithm to guide wrapper voxel
selection. The main criterion to nominate a voxel is its mutual
information with the provided stimulus label. The results show impressive
accuracy rates of 90.66 +/- 3.66 and 91.61 +/- 8.24 for DS105 and DS107,
respectively. This outperforms most existing brain decoders under similar
validation conditions. The experimental results are very encouraging and can
be successfully used in the brain-computer interface.
| [
{
"created": "Thu, 29 Jul 2021 13:53:24 GMT",
"version": "v1"
}
] | 2021-09-07 | [
[
"Hourani",
"Osama",
""
],
[
"Charkari",
"Nasrollah Moghadam",
""
],
[
"Jalili",
"Saeed",
""
]
] | Visual stimulus decoding is an increasingly important challenge in neuroscience. The goal is to classify the activity patterns from the human brain during the sighting of visual objects. One of the crucial problems in building a brain decoder is selecting informative voxels. We propose a meta-heuristic voxel selection framework for brain decoding. It is composed of four phases: preprocessing of fMRI data; filtering insignificant voxels; postprocessing; and meta-heuristic selection. The main contribution is using a meta-heuristic search algorithm to guide wrapper voxel selection. The main criterion to nominate a voxel is its mutual information with the provided stimulus label. The results show impressive accuracy rates of 90.66 +/- 3.66 and 91.61 +/- 8.24 for DS105 and DS107, respectively. This outperforms most existing brain decoders under similar validation conditions. The experimental results are very encouraging and can be successfully used in the brain-computer interface. |
2112.13117 | Alan Karr | Alan F. Karr, Jason Hauzel, Adam A. Porter, Marcel Schaefer | Application of Markov Structure of Genomes to Outlier Identification and
Read Classification | null | null | null | null | q-bio.GN cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this paper we apply the structure of genomes as second-order Markov
processes specified by the distributions of successive triplets of bases to two
bioinformatics problems: identification of outliers in genome databases and
read classification in metagenomics, using real coronavirus and adenovirus
data.
| [
{
"created": "Fri, 24 Dec 2021 18:03:38 GMT",
"version": "v1"
}
] | 2021-12-28 | [
[
"Karr",
"Alan F.",
""
],
[
"Hauzel",
"Jason",
""
],
[
"Porter",
"Adam A.",
""
],
[
"Schaefer",
"Marcel",
""
]
] | In this paper we apply the structure of genomes as second-order Markov processes specified by the distributions of successive triplets of bases to two bioinformatics problems: identification of outliers in genome databases and read classification in metagenomics, using real coronavirus and adenovirus data. |
2306.02929 | Artur Yakimovich | Rui Li and Gabriel della Maggiora, Vardan Andriasyan, Anthony
Petkidis, Artsemi Yushkevich, Mikhail Kudryashev, Artur Yakimovich | Microscopy image reconstruction with physics-informed denoising
diffusion probabilistic model | 16 pages, 5 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Light microscopy is a widespread and inexpensive imaging technique
facilitating biomedical discovery and diagnostics. However, the light
diffraction barrier and imperfections in optics limit the level of detail of
the acquired images. The lost details can be reconstructed, among other
approaches, by deep learning models. Yet, deep learning models are prone to introducing artefacts and
hallucinations into the reconstruction. Recent state-of-the-art image synthesis
models like the denoising diffusion probabilistic models (DDPMs) are no
exception to this. We propose to address this by incorporating the physical
problem of microscopy image formation into the model's loss function. To
overcome the lack of microscopy data, we train this model with synthetic data.
We simulate the effects of the microscope optics through the theoretical point
spread function and vary the noise levels to obtain synthetic data.
Furthermore, we incorporate the physical model of a light microscope into the
reverse process of a conditioned DDPM, proposing a physics-informed DDPM
(PI-DDPM). We show consistent improvement and artefact reductions when compared
to model-based methods, deep-learning regression methods and regular
conditioned DDPMs.
| [
{
"created": "Mon, 5 Jun 2023 14:45:51 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jun 2023 17:39:10 GMT",
"version": "v2"
}
] | 2023-06-09 | [
[
"Li",
"Rui",
""
],
[
"della Maggiora",
"Gabriel",
""
],
[
"Andriasyan",
"Vardan",
""
],
[
"Petkidis",
"Anthony",
""
],
[
"Yushkevich",
"Artsemi",
""
],
[
"Kudryashev",
"Mikhail",
""
],
[
"Yakimovich",
"Artur",
""
]
] | Light microscopy is a widespread and inexpensive imaging technique facilitating biomedical discovery and diagnostics. However, the light diffraction barrier and imperfections in optics limit the level of detail of the acquired images. The lost details can be reconstructed, among other approaches, by deep learning models. Yet, deep learning models are prone to introducing artefacts and hallucinations into the reconstruction. Recent state-of-the-art image synthesis models like the denoising diffusion probabilistic models (DDPMs) are no exception to this. We propose to address this by incorporating the physical problem of microscopy image formation into the model's loss function. To overcome the lack of microscopy data, we train this model with synthetic data. We simulate the effects of the microscope optics through the theoretical point spread function and vary the noise levels to obtain synthetic data. Furthermore, we incorporate the physical model of a light microscope into the reverse process of a conditioned DDPM, proposing a physics-informed DDPM (PI-DDPM). We show consistent improvement and artefact reductions when compared to model-based methods, deep-learning regression methods and regular conditioned DDPMs. |
2309.08779 | Michelle Bartolo | Michelle A Bartolo, Alyssa M Taylor-LaPole, Darsh Gandhi, Alexandria
Johnson, Yaqi Li, Emma Slack, Isaiah Stevens, Zachary Turner, Justin D
Weigand, Charles Puelz, Dirk Husmeier, Mette S Olufsen | Computational framework for the generation of one-dimensional vascular
models accounting for uncertainty in networks extracted from medical images | 42 pages, 10 figures | null | null | null | q-bio.TO q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Patient-specific computational modeling is a popular, non-invasive method to
answer medical questions. Medical images are used to extract geometric domains
necessary to create these models, providing a predictive tool for clinicians.
However, in vivo imaging is subject to uncertainty, impacting vessel dimensions
essential to the mathematical modeling process. While there are numerous
programs available to provide information about vessel length, radii, and
position, there is currently no exact way to determine and calibrate these
features. This raises the question: if we are building patient-specific models
based on uncertain measurements, how accurate are the geometries we extract and
how can we best represent a patient's vasculature? In this study, we develop a
novel framework to determine vessel dimensions using change points. We explore
the impact of uncertainty in the network extraction process on hemodynamics by
varying vessel dimensions and segmenting the same images multiple times. Our
analyses reveal that image segmentation, network size, and minor changes in
radius and length have significant impacts on pressure and flow dynamics in
rapidly branching structures and tapering vessels. Accordingly, we conclude
that it is critical to understand how uncertainty in network geometry
propagates to fluid dynamics, especially in clinical applications.
| [
{
"created": "Fri, 15 Sep 2023 21:51:06 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jan 2024 15:30:45 GMT",
"version": "v2"
},
{
"created": "Thu, 9 May 2024 01:42:23 GMT",
"version": "v3"
}
] | 2024-05-10 | [
[
"Bartolo",
"Michelle A",
""
],
[
"Taylor-LaPole",
"Alyssa M",
""
],
[
"Gandhi",
"Darsh",
""
],
[
"Johnson",
"Alexandria",
""
],
[
"Li",
"Yaqi",
""
],
[
"Slack",
"Emma",
""
],
[
"Stevens",
"Isaiah",
""
],
[
"Turner",
"Zachary",
""
],
[
"Weigand",
"Justin D",
""
],
[
"Puelz",
"Charles",
""
],
[
"Husmeier",
"Dirk",
""
],
[
"Olufsen",
"Mette S",
""
]
] | Patient-specific computational modeling is a popular, non-invasive method to answer medical questions. Medical images are used to extract geometric domains necessary to create these models, providing a predictive tool for clinicians. However, in vivo imaging is subject to uncertainty, impacting vessel dimensions essential to the mathematical modeling process. While there are numerous programs available to provide information about vessel length, radii, and position, there is currently no exact way to determine and calibrate these features. This raises the question, if we are building patient-specific models based on uncertain measurements, how accurate are the geometries we extract and how can we best represent a patient's vasculature? In this study, we develop a novel framework to determine vessel dimensions using change points. We explore the impact of uncertainty in the network extraction process on hemodynamics by varying vessel dimensions and segmenting the same images multiple times. Our analyses reveal that image segmentation, network size, and minor changes in radius and length have significant impacts on pressure and flow dynamics in rapidly branching structures and tapering vessels. Accordingly, we conclude that it is critical to understand how uncertainty in network geometry propagates to fluid dynamics, especially in clinical applications. |