id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2408.04501 | Katharina N\"oh | Richard D. Paul and Johannes Seiffarth and Hanno Scharr and Katharina
N\"oh | Robust Approximate Characterization of Single-Cell Heterogeneity in
Microbial Growth | 5 pages, 3 figures, IEEE ISBI Conference Proceedings 2024 | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Live-cell microscopy allows to go beyond measuring average features of
cellular populations to observe, quantify and explain biological heterogeneity.
Deep Learning-based instance segmentation and cell tracking form the gold
standard analysis tools to process the microscopy data collected, but tracking
in particular suffers severely from low temporal resolution. In this work, we
show that approximating cell cycle time distributions in microbial colonies of
C. glutamicum is possible without performing tracking, even at low temporal
resolution. To this end, we infer the parameters of a stochastic multi-stage
birth process model using the Bayesian Synthetic Likelihood method at varying
temporal resolutions by subsampling microscopy sequences, for which ground
truth tracking is available. Our results indicate, that the proposed approach
yields high quality approximations even at very low temporal resolution, where
tracking fails to yield reasonable results.
| [
{
"created": "Thu, 8 Aug 2024 14:55:16 GMT",
"version": "v1"
}
] | 2024-08-09 | [
[
"Paul",
"Richard D.",
""
],
[
"Seiffarth",
"Johannes",
""
],
[
"Scharr",
"Hanno",
""
],
[
"Nöh",
"Katharina",
""
]
] | Live-cell microscopy allows us to go beyond measuring average features of cellular populations to observe, quantify and explain biological heterogeneity. Deep Learning-based instance segmentation and cell tracking form the gold-standard analysis tools for processing the collected microscopy data, but tracking in particular suffers severely from low temporal resolution. In this work, we show that approximating cell cycle time distributions in microbial colonies of C. glutamicum is possible without performing tracking, even at low temporal resolution. To this end, we infer the parameters of a stochastic multi-stage birth process model using the Bayesian Synthetic Likelihood method at varying temporal resolutions by subsampling microscopy sequences for which ground truth tracking is available. Our results indicate that the proposed approach yields high-quality approximations even at very low temporal resolution, where tracking fails to yield reasonable results. |
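The multi-stage birth process in the row above implies Erlang-distributed cell cycle times (a sum of independent exponential stages). A minimal sketch of sampling such a distribution, with an illustrative stage count and rate (not the authors' code or parameters):

```python
import random

def sample_cell_cycle_time(n_stages, rate, rng):
    """One draw from a multi-stage birth process: the cell cycle time is
    the sum of n_stages independent exponential stage durations, i.e. an
    Erlang(n_stages, rate) variate with mean n_stages / rate."""
    return sum(rng.expovariate(rate) for _ in range(n_stages))

rng = random.Random(0)
# Illustrative parameters: 5 stages at rate 5 per hour -> mean 1.0 h
times = [sample_cell_cycle_time(5, 5.0, rng) for _ in range(10_000)]
mean_time = sum(times) / len(times)
```

Fitting the stage count and rate to observed colony statistics, rather than to tracked lineages, is the role played by Bayesian Synthetic Likelihood in the paper.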
1007.4933 | Chrysline Margus Pinol | C.M.N. Pinol and R.S. Banzon | Stability in a population model without random deaths by the Verhulst
factor | null | null | 10.1016/j.physa.2010.11.046 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large amount of population models use the concept of a carrying capacity.
Simulated populations are bounded by invoking finite resources through a
survival probability, commonly referred to as the Verhulst factor. The fact,
however, that resources are not easily accounted for in actual biological
systems makes the carrying capacity parameter ill-defined. Henceforth, we deem
it essential to consider cases for which the parameter is unnecessary. This
work demonstrates the possibility of Verhulst-free steady states using the
Penna aging model, with one semelparous birth per adult. Stable populations are
obtained by setting a mutation threshold that is higher than the reproduction
age.
| [
{
"created": "Wed, 28 Jul 2010 11:51:39 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Sep 2010 04:56:27 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Nov 2010 01:44:59 GMT",
"version": "v3"
}
] | 2015-05-19 | [
[
"Pinol",
"C. M. N.",
""
],
[
"Banzon",
"R. S.",
""
]
] | A large number of population models use the concept of a carrying capacity. Simulated populations are bounded by invoking finite resources through a survival probability, commonly referred to as the Verhulst factor. The fact, however, that resources are not easily accounted for in actual biological systems makes the carrying capacity parameter ill-defined. Hence, we deem it essential to consider cases for which the parameter is unnecessary. This work demonstrates the possibility of Verhulst-free steady states using the Penna aging model, with one semelparous birth per adult. Stable populations are obtained by setting a mutation threshold that is higher than the reproduction age. |
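For reference, the Verhulst factor discussed above is commonly implemented as a per-step survival probability V = 1 - N/N_max. A minimal sketch of a population bounded this way (illustrative birth probability and capacity; this shows the plain Verhulst mechanism the paper dispenses with, not the Penna aging model itself):

```python
import random

def verhulst_step(n, n_max, birth_prob, rng):
    """One time step: each individual survives with the Verhulst
    probability V = 1 - n/n_max, then each survivor reproduces
    with probability birth_prob."""
    survivors = sum(1 for _ in range(n) if rng.random() < 1.0 - n / n_max)
    births = sum(1 for _ in range(survivors) if rng.random() < birth_prob)
    return survivors + births

rng = random.Random(1)
n = 100
for _ in range(200):
    n = verhulst_step(n, n_max=1000, birth_prob=0.5, rng=rng)
# The population fluctuates around the deterministic equilibrium
# n_max * (1 - 1/(1 + birth_prob)) = 1000/3, well below n_max.
```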
1911.01721 | Eirini Troullinou | Eirini Troullinou, Grigorios Tsagkatakis, Ganna Palagina, Maria
Papadopouli, Stelios Manolis Smirnakis, Panagiotis Tsakalides | Adversarial dictionary learning for a robust analysis and modelling of
spontaneous neuronal activity | null | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of neuroscience is experiencing rapid growth in the complexity and
quantity of the recorded neural activity allowing us unprecedented access to
its dynamics in different brain areas. The objective of this work is to
discover directly from the experimental data rich and comprehensible models for
brain function that will be concurrently robust to noise. Considering this task
from the perspective of dimensionality reduction, we develop an innovative,
robust to noise dictionary learning framework based on adversarial training
methods for the identification of patterns of synchronous firing activity as
well as within a time lag. We employ real-world binary datasets describing the
spontaneous neuronal activity of laboratory mice over time, and we aim to their
efficient low-dimensional representation. The results on the classification
accuracy for the discrimination between the clean and the adversarial-noisy
activation patterns obtained by an SVM classifier highlight the efficacy of the
proposed scheme compared to other methods, and the visualization of the
dictionary's distribution demonstrates the multifarious information that we
obtain from it.
| [
{
"created": "Tue, 5 Nov 2019 11:32:06 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Dec 2019 17:18:05 GMT",
"version": "v2"
}
] | 2019-12-25 | [
[
"Troullinou",
"Eirini",
""
],
[
"Tsagkatakis",
"Grigorios",
""
],
[
"Palagina",
"Ganna",
""
],
[
"Papadopouli",
"Maria",
""
],
[
"Smirnakis",
"Stelios Manolis",
""
],
[
"Tsakalides",
"Panagiotis",
""
]
] | The field of neuroscience is experiencing rapid growth in the complexity and quantity of recorded neural activity, allowing unprecedented access to its dynamics in different brain areas. The objective of this work is to discover, directly from the experimental data, rich and comprehensible models of brain function that are simultaneously robust to noise. Considering this task from the perspective of dimensionality reduction, we develop an innovative, noise-robust dictionary learning framework based on adversarial training methods for the identification of patterns of synchronous firing activity, as well as activity within a time lag. We employ real-world binary datasets describing the spontaneous neuronal activity of laboratory mice over time, and we aim at their efficient low-dimensional representation. The classification accuracy for discriminating between clean and adversarial-noisy activation patterns, obtained with an SVM classifier, highlights the efficacy of the proposed scheme compared to other methods, and the visualization of the dictionary's distribution demonstrates the multifarious information that we obtain from it. |
2205.01325 | Nirmal Sivaraman | Nirmal Kumar Sivaraman, Manas Gaur, Shivansh Baijal, Sakthi Balan
Muthiah, Amit Sheth | Exo-SIR: An Epidemiological Model to Analyze the Impact of Exogenous
Spread of Infection | To appear in Springer Nature Journal of Data Science and Analytics.
arXiv admin note: substantial text overlap with arXiv:2008.06335 | null | null | null | q-bio.PE cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Epidemics like Covid-19 and Ebola have impacted people's lives significantly.
The impact of mobility of people across the countries or states in the spread
of epidemics has been significant. The spread of disease due to factors local
to the population under consideration is termed the endogenous spread. The
spread due to external factors like migration, mobility, etc. is called the
exogenous spread. In this paper, we introduce the Exo-SIR model, an extension
of the popular SIR model and a few variants of the model. The novelty in our
model is that it captures both the exogenous and endogenous spread of the
virus. First, we present an analytical study. Second, we simulate the Exo-SIR
model with and without assuming contact network for the population. Third, we
implement the Exo-SIR model on real datasets regarding Covid-19 and Ebola. We
found that endogenous infection is influenced by exogenous infection.
Furthermore, we found that the Exo-SIR model predicts the peak time better than
the SIR model. Hence, the Exo-SIR model would be helpful for governments to
plan policy interventions at the time of a pandemic.
| [
{
"created": "Tue, 3 May 2022 06:22:26 GMT",
"version": "v1"
}
] | 2022-05-04 | [
[
"Sivaraman",
"Nirmal Kumar",
""
],
[
"Gaur",
"Manas",
""
],
[
"Baijal",
"Shivansh",
""
],
[
"Muthiah",
"Sakthi Balan",
""
],
[
"Sheth",
"Amit",
""
]
] | Epidemics like Covid-19 and Ebola have impacted people's lives significantly. The mobility of people across countries or states has had a significant impact on the spread of epidemics. The spread of disease due to factors local to the population under consideration is termed the endogenous spread. The spread due to external factors like migration, mobility, etc. is called the exogenous spread. In this paper, we introduce the Exo-SIR model, an extension of the popular SIR model, along with a few variants of the model. The novelty of our model is that it captures both the exogenous and endogenous spread of the virus. First, we present an analytical study. Second, we simulate the Exo-SIR model with and without assuming a contact network for the population. Third, we apply the Exo-SIR model to real datasets regarding Covid-19 and Ebola. We found that endogenous infection is influenced by exogenous infection. Furthermore, we found that the Exo-SIR model predicts the peak time better than the SIR model. Hence, the Exo-SIR model would be helpful for governments to plan policy interventions at the time of a pandemic. |
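The idea of an exogenous term can be sketched by adding a constant external force of infection to the standard SIR equations. This is an illustrative formulation and parameter set, not necessarily the authors' exact model:

```python
def exo_sir(s, i, r, beta, gamma, beta_ext, i_ext, dt, steps):
    """Euler-integrate SIR with an extra exogenous force of infection
    beta_ext * i_ext acting on susceptibles; s, i, r are fractions of
    a closed unit population, so s + i + r stays 1."""
    for _ in range(steps):
        new_inf = (beta * i + beta_ext * i_ext) * s * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Illustrative comparison: the exogenous pressure enlarges the outbreak.
s0, i0, r0 = 0.999, 0.001, 0.0
_, _, r_endo = exo_sir(s0, i0, r0, beta=0.3, gamma=0.1,
                       beta_ext=0.0, i_ext=0.0, dt=0.1, steps=2000)
_, _, r_exo = exo_sir(s0, i0, r0, beta=0.3, gamma=0.1,
                      beta_ext=0.1, i_ext=0.05, dt=0.1, steps=2000)
```

With `beta_ext = 0` this reduces to the ordinary SIR model, so the same routine covers both the endogenous-only and the exogenous cases.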
1301.6099 | Gerald Weber | Denise Fagundes-Lima and Gerald Weber | CG-content log-ratio distributions of Caenorhabditis elegans and
Drosophila melanogaster mirtrons | 5 pages, 3 figures | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Mirtrons are a special type of pre-miRNA which originate from intronic
regions and are spliced directly from the transcript instead of being processed
by Drosha. The splicing mechanism is better understood for the processing of
mRNA for which was established that there is a characteristic CG content around
splice sites. Here we analyse the CG-content ratio of pre-miRNAs and mirtrons
and compare them with their genomic neighbourhood in an attempt to establish
key properties which are easy to evaluate and to understand their biogenesis.
We propose a simple log-ratio of the CG-content comparing the precursor
sequence and is flanking region. We discovered that Caenorhabditis elegans and
Drosophila melanogaster mirtrons, so far without exception, have smaller
CG-content than their genomic neighbourhood. This is markedly different from
usual pre-miRNAs which mostly have larger CG-content when compared to their
genomic neighbourhood. We also analysed some mammalian and primate mirtrons
which, in contrast the invertebrate mirtrons, have higher CG-content ratio.
| [
{
"created": "Fri, 25 Jan 2013 17:29:39 GMT",
"version": "v1"
}
] | 2013-01-28 | [
[
"Fagundes-Lima",
"Denise",
""
],
[
"Weber",
"Gerald",
""
]
] | Mirtrons are a special type of pre-miRNA which originate from intronic regions and are spliced directly from the transcript instead of being processed by Drosha. The splicing mechanism is better understood for the processing of mRNA, for which it was established that there is a characteristic CG content around splice sites. Here we analyse the CG-content ratio of pre-miRNAs and mirtrons and compare them with their genomic neighbourhood in an attempt to establish key properties which are easy to evaluate, and to understand their biogenesis. We propose a simple log-ratio of the CG content comparing the precursor sequence and its flanking region. We discovered that Caenorhabditis elegans and Drosophila melanogaster mirtrons, so far without exception, have a smaller CG content than their genomic neighbourhood. This is markedly different from usual pre-miRNAs, which mostly have a larger CG content when compared to their genomic neighbourhood. We also analysed some mammalian and primate mirtrons which, in contrast to the invertebrate mirtrons, have a higher CG-content ratio. |
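The proposed statistic can be sketched as a log-ratio of CG fractions; negative values indicate a precursor that is CG-poorer than its flank, as reported above for the invertebrate mirtrons. The sequences below are made up for illustration:

```python
import math

def cg_fraction(seq):
    """Fraction of C and G bases in a DNA/RNA sequence."""
    seq = seq.upper()
    return (seq.count("C") + seq.count("G")) / len(seq)

def cg_log_ratio(precursor, flank):
    """log2 of precursor CG content over flanking-region CG content."""
    return math.log2(cg_fraction(precursor) / cg_fraction(flank))

# Hypothetical toy sequences: a CG-poor precursor in a CG-rich flank.
ratio = cg_log_ratio("ATATCGAT", "GCGCATGC")
```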
2201.10346 | Michael Levin | Michael Levin | Technological Approach to Mind Everywhere (TAME): an
experimentally-grounded framework for understanding diverse bodies and minds | null | null | null | null | q-bio.TO cs.MA q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Synthetic biology and bioengineering provide the opportunity to create novel
embodied cognitive systems (otherwise known as minds) in a very wide variety of
chimeric architectures combining evolved and designed material and software.
These advances are disrupting familiar concepts in the philosophy of mind, and
require new ways of thinking about and comparing truly diverse intelligences,
whose composition and origin are not like any of the available natural model
species. In this Perspective, I introduce TAME - Technological Approach to Mind
Everywhere - a framework for understanding and manipulating cognition in
unconventional substrates. TAME formalizes a non-binary (continuous),
empirically-based approach to strongly embodied agency. When applied to
regenerating/developmental systems, TAME suggests a perspective on
morphogenesis as an example of basal cognition. The deep symmetry between
problem-solving in anatomical, physiological, transcriptional, and 3D
(traditional behavioral) spaces drives specific hypotheses by which cognitive
capacities can scale during evolution. An important medium exploited by
evolution for joining active subunits into greater agents is developmental
bioelectricity, implemented by pre-neural use of ion channels and gap junctions
to scale cell-level feedback loops into anatomical homeostasis. This
architecture of multi-scale competency of biological systems has important
implications for plasticity of bodies and minds, greatly potentiating
evolvability. Considering classical and recent data from the perspectives of
computational science, evolutionary biology, and basal cognition, reveals a
rich research program with many implications for cognitive science,
evolutionary biology, regenerative medicine, and artificial intelligence.
| [
{
"created": "Fri, 24 Dec 2021 14:11:05 GMT",
"version": "v1"
}
] | 2022-01-26 | [
[
"Levin",
"Michael",
""
]
] | Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME - Technological Approach to Mind Everywhere - a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can scale during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence. |
1311.2200 | Guillaume Drion | Guillaume Drion, Alessio Franci, Vincent Seutin and Rodolphe Sepulchre | Modulation and Robustness of Endogenous Neuronal Spiking | 18 pages, 11 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal spiking exhibits an exquisite combination of modulation and
robustness properties, rarely matched in artificial systems. We exploit the
particular interconnection structure of conductance based models to investigate
this remarkable property. We find that much of neuronal modulation and
robustness can be explained by separating the total transmembrane current into
three different components corresponding to the three time scales of neuronal
bursting. Each equivalent current aggregates many ionic contributions into an
equivalent voltage-dependent conductance, which defines a key modulation
parameter. Plugging those equivalent feedback gains in a minimal abstract model
recovers many experimental modulation scenarii as modulatory paths in
elementary two-parameter charts. Likewise, robustness owes to the many possible
physiological realizations of a same equivalent conductance, highlighting the
role of equivalent conductances as prominent targets for neuromodulation and
intrinsic homeostasis.
| [
{
"created": "Sat, 9 Nov 2013 18:50:36 GMT",
"version": "v1"
}
] | 2013-11-12 | [
[
"Drion",
"Guillaume",
""
],
[
"Franci",
"Alessio",
""
],
[
"Seutin",
"Vincent",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | Neuronal spiking exhibits an exquisite combination of modulation and robustness properties, rarely matched in artificial systems. We exploit the particular interconnection structure of conductance-based models to investigate this remarkable property. We find that much of neuronal modulation and robustness can be explained by separating the total transmembrane current into three different components corresponding to the three time scales of neuronal bursting. Each equivalent current aggregates many ionic contributions into an equivalent voltage-dependent conductance, which defines a key modulation parameter. Plugging those equivalent feedback gains into a minimal abstract model recovers many experimental modulation scenarios as modulatory paths in elementary two-parameter charts. Likewise, robustness stems from the many possible physiological realizations of the same equivalent conductance, highlighting the role of equivalent conductances as prominent targets for neuromodulation and intrinsic homeostasis. |
1902.10542 | Pierre Casadebaig | No\'emie Gaudio, Abraham J. Escobar-Guti\'errez, Pierre Casadebaig,
Jochem B. Evers, Fr\'ed\'eric G\'erard, Ga\"etan Louarn, Nathalie Colbach,
Sebastian Munz, Marie Launay, H\'el\`ene Marrou, Romain Barillot, Philippe
Hinsinger, Jacques-Eric Bergez, Didier Combes, Jean-Louis Durand, Ela Frak,
Lo\"ic Pag\`es, Christophe Pradal, S\'ebastien Saint-Jean, Wopke Van Der
Werf, Eric Justes | Current knowledge and future research opportunities for modeling annual
crop mixtures. A review | 42 pages, 5 figures | null | 10.1007/s13593-019-0562-6 | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Growing mixtures of annual arable crop species or genotypes is a promising
way to improve crop production without increasing agricultural inputs. To
design optimal crop mixtures, choices of species, genotypes, sowing proportion,
plant arrangement, and sowing date need to be made but field experiments alone
are not sufficient to explore such a large range of factors. Crop modeling
allows to study, understand and ultimately design cropping systems and is an
established method for sole crops. Recently, modeling started to be applied to
annual crop mixtures as well. Here, we review to what extent crop simulation
models and individual-based models are suitable to capture and predict the
specificities of annual crop mixtures. We argued that: 1) The crop mixture
spatio-temporal heterogeneity (influencing the occurrence of ecological
processes) determines the choice of the modeling approach (plant or crop
centered). 2) Only few crop models (adapted from sole crop models) and
individual-based models currently exist to simulate annual crop mixtures. 3)
Crop models are mainly used to address issues related to crop mixtures
management and to the integration of crop mixtures into larger scales such as
the rotation, whereas individual-based models are mainly used to identify plant
traits involved in crop mixture performance and to quantify the relative
contribution of the different ecological processes (niche complementarity,
facilitation, competition, plasticity) to crop mixture functioning. This review
highlights that modeling of annual crop mixtures is in its infancy and gives to
model users some important keys to choose the model based on the questions they
want to answer, with awareness of the strengths and weaknesses of each of the
modeling approaches.
| [
{
"created": "Wed, 27 Feb 2019 14:07:37 GMT",
"version": "v1"
}
] | 2019-02-28 | [
[
"Gaudio",
"Noémie",
""
],
[
"Escobar-Gutiérrez",
"Abraham J.",
""
],
[
"Casadebaig",
"Pierre",
""
],
[
"Evers",
"Jochem B.",
""
],
[
"Gérard",
"Frédéric",
""
],
[
"Louarn",
"Gaëtan",
""
],
[
"Colbach",
"Nathali... | Growing mixtures of annual arable crop species or genotypes is a promising way to improve crop production without increasing agricultural inputs. To design optimal crop mixtures, choices of species, genotypes, sowing proportion, plant arrangement, and sowing date need to be made, but field experiments alone are not sufficient to explore such a large range of factors. Crop modeling allows us to study, understand and ultimately design cropping systems and is an established method for sole crops. Recently, modeling started to be applied to annual crop mixtures as well. Here, we review to what extent crop simulation models and individual-based models are suitable to capture and predict the specificities of annual crop mixtures. We argue that: 1) The spatio-temporal heterogeneity of the crop mixture (influencing the occurrence of ecological processes) determines the choice of the modeling approach (plant- or crop-centered). 2) Only a few crop models (adapted from sole crop models) and individual-based models currently exist to simulate annual crop mixtures. 3) Crop models are mainly used to address issues related to crop mixture management and to the integration of crop mixtures into larger scales such as the rotation, whereas individual-based models are mainly used to identify plant traits involved in crop mixture performance and to quantify the relative contribution of the different ecological processes (niche complementarity, facilitation, competition, plasticity) to crop mixture functioning. This review highlights that the modeling of annual crop mixtures is in its infancy and gives model users some important keys for choosing a model based on the questions they want to answer, with awareness of the strengths and weaknesses of each of the modeling approaches. |
q-bio/0703036 | Marconi Barbosa Dr | Luciano da Fontoura Costa and Marconi Soares Barbosa | Morphological relationship between axon and dendritic arborizations as
revealed by Minkowski functionals | 4 pages 1 figure | null | null | null | q-bio.QM cond-mat.stat-mech | null | The spatial structure of the axonal and dendritic arborizations is closely
related to the functionality of specific neurons or neuronal subsystems. The
present work describes how multiscale Minkowski functionals can be used in
order to characterize and compare the spatial organization of these two types
of arborizations. The discrimination potential of the method is illustrated
with respect to three classes of cortical neurons.
| [
{
"created": "Thu, 15 Mar 2007 18:11:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Costa",
"Luciano da Fontoura",
""
],
[
"Barbosa",
"Marconi Soares",
""
]
] | The spatial structure of the axonal and dendritic arborizations is closely related to the functionality of specific neurons or neuronal subsystems. The present work describes how multiscale Minkowski functionals can be used to characterize and compare the spatial organization of these two types of arborizations. The discrimination potential of the method is illustrated with respect to three classes of cortical neurons. |
1011.2829 | Byung Mook Weon | Byung Mook Weon and Jung Ho Je | Mathematical link of evolving aging and complexity | 14 pages, 5 figures, submitted to PLoS ONE | null | null | null | q-bio.PE physics.bio-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aging is a fundamental aspect of living systems that undergo a progressive
deterioration of physiological function with age and an increase of
vulnerability to disease and death. Living systems, known as complex systems,
require complexity in interactions among molecules, cells, organs, and
individuals or regulatory mechanisms to perform a variety of activities for
survival. On this basis, aging can be understood in terms of a progressive loss
of complexity with age; this suggests that complexity in living systems would
evolve with age. In general, aging dynamics is mathematically depicted by a
survival function, which monotonically changes from 1 to 0 with age. It would
be then useful to find an adequate survival function to link aging dynamics and
complexity evolution. Here we describe a flexible survival function, which is
derived from the stretched exponential function by adopting an age-dependent
exponent. We note that the exponent is associated with evolving complexity,
i.e., a fractal-like scaling in cumulative mortality. The survival function
well depicts a general feature in survival curves; healthy populations show a
tendency to evolve towards rectangular-like survival curves, as examples in
humans or laboratory animals. This tendency suggests that both aging and
complexity would evolve towards healthy survival in living systems. Our
function to link aging with complexity may contribute to better understanding
of biological aging in terms of complexity evolution.
| [
{
"created": "Fri, 12 Nov 2010 05:03:27 GMT",
"version": "v1"
}
] | 2010-11-15 | [
[
"Weon",
"Byung Mook",
""
],
[
"Je",
"Jung Ho",
""
]
] | Aging is a fundamental aspect of living systems that undergo a progressive deterioration of physiological function with age and an increase of vulnerability to disease and death. Living systems, known as complex systems, require complexity in interactions among molecules, cells, organs, and individuals or regulatory mechanisms to perform a variety of activities for survival. On this basis, aging can be understood in terms of a progressive loss of complexity with age; this suggests that complexity in living systems would evolve with age. In general, aging dynamics is mathematically depicted by a survival function, which monotonically changes from 1 to 0 with age. It would then be useful to find an adequate survival function to link aging dynamics and complexity evolution. Here we describe a flexible survival function, which is derived from the stretched exponential function by adopting an age-dependent exponent. We note that the exponent is associated with evolving complexity, i.e., a fractal-like scaling in cumulative mortality. The survival function well depicts a general feature in survival curves; healthy populations show a tendency to evolve towards rectangular-like survival curves, as examples in humans or laboratory animals. This tendency suggests that both aging and complexity would evolve towards healthy survival in living systems. Our function to link aging with complexity may contribute to a better understanding of biological aging in terms of complexity evolution. |
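The abstract above describes a stretched exponential survival function with an age-dependent exponent, i.e. S(t) = exp[-(t/alpha)^beta(t)]. A sketch with an illustrative linear beta(t) (the paper's actual form of the exponent is not given in this abstract):

```python
import math

def survival(t, alpha, beta_fn):
    """Stretched-exponential survival S(t) = exp(-(t/alpha)**beta(t)),
    with S(0) = 1; an exponent that grows with age 'rectangularizes'
    the survival curve."""
    if t == 0:
        return 1.0
    return math.exp(-((t / alpha) ** beta_fn(t)))

ages = range(0, 121, 10)
# Fixed exponent vs. an illustrative age-dependent exponent
fixed = [survival(t, 80.0, lambda t: 1.0) for t in ages]
growing = [survival(t, 80.0, lambda t: 1.0 + 0.05 * t) for t in ages]
```

With the growing exponent, mid-life survival stays higher and the curve drops more sharply past alpha, mimicking the rectangular-like survival curves of healthy populations mentioned in the abstract.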
2005.04104 | Boris Vishnepolsky | Malak Pirtskhalava, Boris Vishnepolsky, Maya Grigolava | Physicochemical Features and Peculiarities of Interaction of
Antimicrobial Peptides with the Membrane | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antimicrobial peptides (AMPs) are anti-infectives that have potential as a
novel and untapped class of biotherapeutics. Modes of action of antimicrobial
peptides imply interaction with cell envelope. Comprehensive understanding of
peculiarities of interactions of antimicrobial peptides with cell envelope is
necessary to perform the task-oriented design of new biotherapeutics, against
which for microbes it is hard to work out resistance. In order to enable a de
novo design with low costs and in high throughput, in silico predictive models
have to be required. To develop the performant predictive model, comprehensive
knowledge on mechanisms of action of AMPs has to be possessed. The last
knowledge will allow us to encode amino acid sequences expressively and to get
success to the choosing of the accurate classifier of AMPs. A shared protective
layer of microbial cells is inner, plasmatic membrane. The interaction of AMP
with a biological membrane (native and/or artificial) is the most
comprehensively studied. We provide a review of mechanisms and results of
interaction of AMP with the cell membrane, relying on the survey of
physicochemical, aggregative and structural features of AMPs. Potency and
mechanism of action of AMP have presented in the terms of amino acid
compositions and distributions of the polar and apolar residues along the
chain, that is in such physicochemical features of peptides as the
hydrophobicity, hydrophilicity, and amphiphilicity. Many different approaches
were used to classify AMPs. The survey of the knowledge on sequences,
structures, and modes of actions of AMP, allows concluding that, only the
physicochemical features of AMPs give the capability to perform the unambiguous
classification. Comprehensive knowledge of physicochemical features of AMP is
necessary to develop task-oriented methods of design of peptide-based
antibiotics de novo.
| [
{
"created": "Fri, 8 May 2020 15:19:53 GMT",
"version": "v1"
}
] | 2020-05-11 | [
[
"Pirtskhalava",
"Malak",
""
],
[
"Vishnepolsky",
"Boris",
""
],
[
"Grigolava",
"Maya",
""
]
] | Antimicrobial peptides (AMPs) are anti-infectives that have potential as a novel and untapped class of biotherapeutics. The modes of action of antimicrobial peptides imply interaction with the cell envelope. A comprehensive understanding of the peculiarities of the interactions of antimicrobial peptides with the cell envelope is necessary for the task-oriented design of new biotherapeutics against which it is hard for microbes to develop resistance. To enable de novo design at low cost and high throughput, in silico predictive models are required. To develop a performant predictive model, comprehensive knowledge of the mechanisms of action of AMPs is needed. This knowledge allows us to encode amino acid sequences expressively and to choose an accurate classifier of AMPs. A protective layer shared by all microbial cells is the inner, plasma membrane. The interaction of AMPs with biological membranes (native and/or artificial) is the most comprehensively studied. We provide a review of the mechanisms and results of the interaction of AMPs with the cell membrane, relying on a survey of the physicochemical, aggregative and structural features of AMPs. The potency and mechanism of action of AMPs are presented in terms of amino acid compositions and the distributions of polar and apolar residues along the chain, that is, in such physicochemical features of peptides as hydrophobicity, hydrophilicity, and amphiphilicity. Many different approaches have been used to classify AMPs. The survey of the knowledge on the sequences, structures, and modes of action of AMPs allows us to conclude that only the physicochemical features of AMPs enable an unambiguous classification. Comprehensive knowledge of the physicochemical features of AMPs is necessary to develop task-oriented methods for the de novo design of peptide-based antibiotics. |
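Hydrophobicity, one of the physicochemical features discussed above, is often quantified with the Kyte-Doolittle hydropathy scale; the mean per-residue value gives a crude indicator. The peptide sequence in the example is hypothetical:

```python
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def mean_hydropathy(peptide):
    """Mean Kyte-Doolittle hydropathy per residue; positive values
    indicate a net hydrophobic peptide."""
    return sum(KD[aa] for aa in peptide.upper()) / len(peptide)

# Hypothetical short peptide mixing cationic (K) and hydrophobic residues
h = mean_hydropathy("GIGKFLK")
```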
1402.3771 | Mike Steel Prof. | Mike Steel | Tracing evolutionary links between species | 18 pages, 6 figures (Invited review paper (draft version) for AMM) | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The idea that all life on earth traces back to a common beginning dates back
at least to Charles Darwin's {\em Origin of Species}. Ever since, biologists
have tried to piece together parts of this `tree of life' based on what we can
observe today: fossils, and the evolutionary signal that is present in the
genomes and phenotypes of different organisms. Mathematics has played a key
role in helping transform genetic data into phylogenetic (evolutionary) trees
and networks. Here, I will explain some of the central concepts and basic
results in phylogenetics, which benefit from several branches of mathematics,
including combinatorics, probability and algebra.
| [
{
"created": "Sun, 16 Feb 2014 08:24:07 GMT",
"version": "v1"
}
] | 2014-02-18 | [
[
"Steel",
"Mike",
""
]
] | The idea that all life on earth traces back to a common beginning dates back at least to Charles Darwin's {\em Origin of Species}. Ever since, biologists have tried to piece together parts of this `tree of life' based on what we can observe today: fossils, and the evolutionary signal that is present in the genomes and phenotypes of different organisms. Mathematics has played a key role in helping transform genetic data into phylogenetic (evolutionary) trees and networks. Here, I will explain some of the central concepts and basic results in phylogenetics, which benefit from several branches of mathematics, including combinatorics, probability and algebra. |
2212.05202 | Manvi Jain Ms | Manvi Jain, Karsheet Negi, Pooja S. Sahni, Mannu Brahmi, Neha Singh,
Dayal Pyari Srivastava, Jyoti Kumar | Neural Underpinnings of Decoupled Ethical Behavior in Adolescents as an
Interaction of Peer and Personal Values | 8 Pages, 6 Figures | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | In the present study, we are trying to understand how peer unethical behavior
stimulates the decoupling of emotions in adolescents. We have simulated an
interactive game-based environment in order to stimulate participants to make
decisions that are found to be correlated with their virtual partner decisions.
The responses given by participants were also recorded as neural signals using
an EEG to study neurophysiological correlates of different decision-making
behavioral patterns. There was an active correlation between personality values
and decision-making. Preliminary analysis was focused on studying the
differences in lower brain frequencies (0.1-4Hz) when the participants
developed frustration, in contrast to when they experienced gratitude. The
study presents three case studies in which delta frequencies increased in cases
when frustration was experienced and decreased when gratitude was experienced.
The study focused on understanding the neural underpinnings of corresponding
modified behavior in adolescents. The findings highlight an increase in delta
frequencies when apparent frustration was developed in adolescents due to their
peer unethical behavior. The delta frequencies lowered when participants were
tested for ethical behavior. The results concluded that based on personality
value types, adolescents tend to develop frustration toward perceived unethical
behavior and carry it over to other unrelated peers. This study is highly
explorative in nature, with preliminary analysis using only three case studies,
having a small sample size. However, the novelty of this study brings about new
dimensions to social cognition and personality studies.
| [
{
"created": "Sat, 10 Dec 2022 04:54:21 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Jan 2023 05:52:20 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Feb 2023 05:10:02 GMT",
"version": "v3"
},
{
"created": "Fri, 24 Feb 2023 05:30:56 GMT",
"version": "v4"
}
] | 2023-02-27 | [
[
"Jain",
"Manvi",
""
],
[
"Negi",
"Karsheet",
""
],
[
"Sahni",
"Pooja S.",
""
],
[
"Brahmi",
"Mannu",
""
],
[
"Singh",
"Neha",
""
],
[
"Srivastava",
"Dayal Pyari",
""
],
[
"Kumar",
"Jyoti",
""
]
] | In the present study, we are trying to understand how peer unethical behavior stimulates the decoupling of emotions in adolescents. We have simulated an interactive game-based environment in order to stimulate participants to make decisions that are found to be correlated with their virtual partner decisions. The responses given by participants were also recorded as neural signals using an EEG to study neurophysiological correlates of different decision-making behavioral patterns. There was an active correlation between personality values and decision-making. Preliminary analysis was focused on studying the differences in lower brain frequencies (0.1-4Hz) when the participants developed frustration, in contrast to when they experienced gratitude. The study presents three case studies in which delta frequencies increased in cases when frustration was experienced and decreased when gratitude was experienced. The study focused on understanding the neural underpinnings of corresponding modified behavior in adolescents. The findings highlight an increase in delta frequencies when apparent frustration was developed in adolescents due to their peer unethical behavior. The delta frequencies lowered when participants were tested for ethical behavior. The results concluded that based on personality value types, adolescents tend to develop frustration toward perceived unethical behavior and carry it over to other unrelated peers. This study is highly explorative in nature, with preliminary analysis using only three case studies, having a small sample size. However, the novelty of this study brings about new dimensions to social cognition and personality studies. |
2002.12138 | Noemie Globus | Noemie Globus, Roger D. Blandford | The Chiral Puzzle of Life | 18 pages, 6 figures, accepted for publication in The Astrophysical
Journal Letters. arXiv admin note: text overlap with arXiv:1911.02525 | null | 10.3847/2041-8213/ab8dc6 | null | q-bio.OT astro-ph.HE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological molecules chose one of two structurally, chiral systems which are
related by reflection in a mirror. It is proposed that this choice was made,
causally, by magnetically polarized and physically chiral cosmic-rays, which
are known to have a large role in mutagenesis. It is shown that the cosmic rays
can impose a small, but persistent, chiral bias in the rate at which they
induce structural changes in simple, chiral monomers that are the building
blocks of biopolymers. A much larger effect should be present with helical
biopolymers, in particular, those that may have been the progenitors of RNA and
DNA. It is shown that the interaction can be both electrostatic, just involving
the molecular electric field, and electromagnetic, also involving a magnetic
field. It is argued that this bias can lead to the emergence of a single,
chiral life form over an evolutionary timescale. If this mechanism dominates,
then the handedness of living systems should be universal. Experiments are
proposed to assess the efficacy of this process.
| [
{
"created": "Sun, 23 Feb 2020 20:02:56 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2020 18:13:41 GMT",
"version": "v2"
}
] | 2022-07-11 | [
[
"Globus",
"Noemie",
""
],
[
"Blandford",
"Roger D.",
""
]
] | Biological molecules chose one of two structurally, chiral systems which are related by reflection in a mirror. It is proposed that this choice was made, causally, by magnetically polarized and physically chiral cosmic-rays, which are known to have a large role in mutagenesis. It is shown that the cosmic rays can impose a small, but persistent, chiral bias in the rate at which they induce structural changes in simple, chiral monomers that are the building blocks of biopolymers. A much larger effect should be present with helical biopolymers, in particular, those that may have been the progenitors of RNA and DNA. It is shown that the interaction can be both electrostatic, just involving the molecular electric field, and electromagnetic, also involving a magnetic field. It is argued that this bias can lead to the emergence of a single, chiral life form over an evolutionary timescale. If this mechanism dominates, then the handedness of living systems should be universal. Experiments are proposed to assess the efficacy of this process. |
1903.09227 | Saul Kato | Greg Bubnis and Steven Ban and Matthew D. DiFranco and Saul Kato | A probabilistic atlas for cell identification | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a general framework for a collaborative machine learning system to
assist bioscience researchers with the task of labeling specific cell
identities from microscopic still or video imaging. The distinguishing features
of this approach versus prior approaches include: (1) use of a statistical
model of cell features that is iteratively improved, (2) generation of
probabilistic guesses at cell ID rather than single best-guesses for each cell,
(3) tracking of joint probabilities of features within and across cells, and
(4) ability to exploit multi-modal features, such as cell position, morphology,
reporter intensities, and activity. We provide an example implementation of
such a system applicable to labeling fluorescently tagged \textit{C. elegans}
neurons. As a proof of concept, we use a generative spring-mass model to
simulate sequences of cell imaging datasets with variable cell positions and
fluorescence intensities. Training on synthetic data, we find that atlases that
track inter-cell positional correlations give higher labeling accuracies than
those that treat cell positions independently. Tracking an additional feature
type, fluorescence intensity, boosts accuracy relative to a position-only
atlas, suggesting that multiple cell features could be leveraged to improve
automated label predictions.
| [
{
"created": "Thu, 21 Mar 2019 20:26:09 GMT",
"version": "v1"
}
] | 2019-03-25 | [
[
"Bubnis",
"Greg",
""
],
[
"Ban",
"Steven",
""
],
[
"DiFranco",
"Matthew D.",
""
],
[
"Kato",
"Saul",
""
]
] | We propose a general framework for a collaborative machine learning system to assist bioscience researchers with the task of labeling specific cell identities from microscopic still or video imaging. The distinguishing features of this approach versus prior approaches include: (1) use of a statistical model of cell features that is iteratively improved, (2) generation of probabilistic guesses at cell ID rather than single best-guesses for each cell, (3) tracking of joint probabilities of features within and across cells, and (4) ability to exploit multi-modal features, such as cell position, morphology, reporter intensities, and activity. We provide an example implementation of such a system applicable to labeling fluorescently tagged \textit{C. elegans} neurons. As a proof of concept, we use a generative spring-mass model to simulate sequences of cell imaging datasets with variable cell positions and fluorescence intensities. Training on synthetic data, we find that atlases that track inter-cell positional correlations give higher labeling accuracies than those that treat cell positions independently. Tracking an additional feature type, fluorescence intensity, boosts accuracy relative to a position-only atlas, suggesting that multiple cell features could be leveraged to improve automated label predictions. |
1607.01029 | Kieran Fox | Kieran C.R. Fox, Jessica R. Andrews-Hanna, Kalina Christoff | The neurobiology of self-generated thought from cells to systems:
Integrating evidence from lesion studies, human intracranial
electrophysiology, neurochemistry, and neuroendocrinology | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigation of the neural basis of self-generated thought is moving beyond
a simple identification with default network activation toward a more
comprehensive view recognizing the role of the frontoparietal control network
and other areas. A major task ahead is to unravel the functional roles and
temporal dynamics of the widely distributed brain regions recruited during
self-generated thought. We argue that various other neuroscientific methods -
including lesion studies, human intracranial electrophysiology, and
manipulation of neurochemistry - have much to contribute to this project. These
diverse data have yet to be synthesized with the growing understanding of
self-generated thought gained from neuroimaging, however. Here, we highlight
several areas of ongoing inquiry and illustrate how evidence from other
methodologies corroborates, complements, and clarifies findings from functional
neuroimaging. Each methodology has particular strengths: functional
neuroimaging reveals much about the variety of brain areas and networks
reliably recruited. Lesion studies point to regions critical to generating and
consciously experiencing self-generated thought. Human intracranial
electrophysiology illuminates how and where in the brain thought is generated
and where this activity subsequently spreads. Finally, measurement and
manipulation of neurotransmitter and hormone levels can clarify what kind of
neurochemical milieu drives or facilitates self-generated cognition.
Integrating evidence from multiple complementary modalities will be a critical
step on the way to improving our understanding of the neurobiology of
functional and dysfunctional forms of self-generated thought.
| [
{
"created": "Mon, 4 Jul 2016 20:03:29 GMT",
"version": "v1"
}
] | 2016-07-06 | [
[
"Fox",
"Kieran C. R.",
""
],
[
"Andrews-Hanna",
"Jessica R.",
""
],
[
"Christoff",
"Kalina",
""
]
] | Investigation of the neural basis of self-generated thought is moving beyond a simple identification with default network activation toward a more comprehensive view recognizing the role of the frontoparietal control network and other areas. A major task ahead is to unravel the functional roles and temporal dynamics of the widely distributed brain regions recruited during self-generated thought. We argue that various other neuroscientific methods - including lesion studies, human intracranial electrophysiology, and manipulation of neurochemistry - have much to contribute to this project. These diverse data have yet to be synthesized with the growing understanding of self-generated thought gained from neuroimaging, however. Here, we highlight several areas of ongoing inquiry and illustrate how evidence from other methodologies corroborates, complements, and clarifies findings from functional neuroimaging. Each methodology has particular strengths: functional neuroimaging reveals much about the variety of brain areas and networks reliably recruited. Lesion studies point to regions critical to generating and consciously experiencing self-generated thought. Human intracranial electrophysiology illuminates how and where in the brain thought is generated and where this activity subsequently spreads. Finally, measurement and manipulation of neurotransmitter and hormone levels can clarify what kind of neurochemical milieu drives or facilitates self-generated cognition. Integrating evidence from multiple complementary modalities will be a critical step on the way to improving our understanding of the neurobiology of functional and dysfunctional forms of self-generated thought. |
2009.05732 | Sebastian Contreras | Sebastian Contreras, Jonas Dehning, Matthias Loidolt, F. Paul
Spitzner, Jorge H.Urrea-Quintero, Sebastian B. Mohr, Michael Wilczek,
Johannes Zierenberg, Michael Wibral, Viola Priesemann | The challenges of containing SARS-CoV-2 via test-trace-and-isolate | null | Nat. Commun 12 (2021) 378 | 10.1038/s41467-020-20699-8 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Without a cure, vaccine, or proven long-term immunity against SARS-CoV-2,
test-trace-and-isolate (TTI) strategies present a promising tool to contain its
spread. For any TTI strategy, however, mitigation is challenged by pre- and
asymptomatic transmission, TTI-avoiders, and undetected spreaders, who strongly
contribute to hidden infection chains. Here, we studied a semi-analytical model
and identified two tipping points between controlled and uncontrolled spread:
(1) the behavior-driven reproduction number of the hidden chains becomes too
large to be compensated by the TTI capabilities, and (2) the number of new
infections exceeds the tracing capacity. Both trigger a self-accelerating
spread. We investigated how these tipping points depend on challenges like
limited cooperation, missing contacts, and imperfect isolation. Our model
results suggest that TTI alone is insufficient to contain an otherwise
unhindered spread of SARS-CoV-2, implying that complementary measures like
social distancing and improved hygiene remain necessary.
| [
{
"created": "Sat, 12 Sep 2020 05:53:48 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Nov 2020 16:03:34 GMT",
"version": "v2"
}
] | 2021-01-26 | [
[
"Contreras",
"Sebastian",
""
],
[
"Dehning",
"Jonas",
""
],
[
"Loidolt",
"Matthias",
""
],
[
"Spitzner",
"F. Paul",
""
],
[
"Urrea-Quintero",
"Jorge H.",
""
],
[
"Mohr",
"Sebastian B.",
""
],
[
"Wilczek",
"Mich... | Without a cure, vaccine, or proven long-term immunity against SARS-CoV-2, test-trace-and-isolate (TTI) strategies present a promising tool to contain its spread. For any TTI strategy, however, mitigation is challenged by pre- and asymptomatic transmission, TTI-avoiders, and undetected spreaders, who strongly contribute to hidden infection chains. Here, we studied a semi-analytical model and identified two tipping points between controlled and uncontrolled spread: (1) the behavior-driven reproduction number of the hidden chains becomes too large to be compensated by the TTI capabilities, and (2) the number of new infections exceeds the tracing capacity. Both trigger a self-accelerating spread. We investigated how these tipping points depend on challenges like limited cooperation, missing contacts, and imperfect isolation. Our model results suggest that TTI alone is insufficient to contain an otherwise unhindered spread of SARS-CoV-2, implying that complementary measures like social distancing and improved hygiene remain necessary. |
2106.16174 | Pingjun Chen | Pingjun Chen, Muhammad Aminu, Siba El Hussein, Joseph D. Khoury, and
Jia Wu | Hierarchical Phenotyping and Graph Modeling of Spatial Architecture in
Lymphoid Neoplasms | Accepted by MICCAI2021 | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | The cells and their spatial patterns in the tumor microenvironment (TME) play
a key role in tumor evolution, and yet the latter remains an understudied topic
in computational pathology. This study, to the best of our knowledge, is among
the first to hybridize local and global graph methods to profile orchestration
and interaction of cellular components. To address the challenge in
hematolymphoid cancers, where the cell classes in TME may be unclear, we first
implemented cell-level unsupervised learning and identified two new cell
subtypes. Local cell graphs or supercells were built for each image by
considering the individual cell's geospatial location and classes. Then, we
applied supercell level clustering and identified two new cell communities. In
the end, we built global graphs to abstract spatial interaction patterns and
extract features for disease diagnosis. We evaluate the proposed algorithm on
H&E slides of 60 hematolymphoid neoplasms and further compared it with three
cell level graph-based algorithms, including the global cell graph, cluster
cell graph, and FLocK. The proposed algorithm achieved a mean diagnosis
accuracy of 0.703 with the repeated 5-fold cross-validation scheme. In
conclusion, our algorithm shows superior performance over the existing methods
and can be potentially applied to other cancer types.
| [
{
"created": "Wed, 30 Jun 2021 16:09:32 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Sep 2021 02:23:31 GMT",
"version": "v2"
}
] | 2021-09-21 | [
[
"Chen",
"Pingjun",
""
],
[
"Aminu",
"Muhammad",
""
],
[
"Hussein",
"Siba El",
""
],
[
"Khoury",
"Joseph D.",
""
],
[
"Wu",
"Jia",
""
]
] | The cells and their spatial patterns in the tumor microenvironment (TME) play a key role in tumor evolution, and yet the latter remains an understudied topic in computational pathology. This study, to the best of our knowledge, is among the first to hybridize local and global graph methods to profile orchestration and interaction of cellular components. To address the challenge in hematolymphoid cancers, where the cell classes in TME may be unclear, we first implemented cell-level unsupervised learning and identified two new cell subtypes. Local cell graphs or supercells were built for each image by considering the individual cell's geospatial location and classes. Then, we applied supercell level clustering and identified two new cell communities. In the end, we built global graphs to abstract spatial interaction patterns and extract features for disease diagnosis. We evaluate the proposed algorithm on H&E slides of 60 hematolymphoid neoplasms and further compared it with three cell level graph-based algorithms, including the global cell graph, cluster cell graph, and FLocK. The proposed algorithm achieved a mean diagnosis accuracy of 0.703 with the repeated 5-fold cross-validation scheme. In conclusion, our algorithm shows superior performance over the existing methods and can be potentially applied to other cancer types. |
1507.08973 | Thomas Miconi | Thomas Miconi | Training recurrent neural networks with sparse, delayed rewards for
flexible decision tasks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks in the chaotic regime exhibit complex dynamics
reminiscent of high-level cortical activity during behavioral tasks. However,
existing training methods for such networks are either biologically
implausible, or require a real-time continuous error signal to guide the
learning process. This is in contrast with most behavioral tasks, which only
provide time-sparse, delayed rewards. Here we show that a biologically
plausible reward-modulated Hebbian learning algorithm, previously used in
feedforward models of birdsong learning, can train recurrent networks based
solely on delayed, phasic reward signals at the end of each trial. The method
requires no dedicated feedback or readout networks: the whole network
connectivity is subject to learning, and the network output is read from one
arbitrarily chosen network cell. We use this method to successfully train a
network on a delayed nonmatch to sample task (which requires memory, flexible
associations, and non-linear mixed selectivities). Using decoding techniques,
we show that the resulting networks exhibit dynamic coding of task-relevant
information, with neural encodings of various task features fluctuating widely
over the course of a trial. Furthermore, network activity moves from a
stimulus-specific representation to a response-specific representation during
response time, in accordance with neural recordings in behaving animals for
similar tasks. We conclude that recurrent neural networks, trained with
reward-modulated Hebbian learning, offer a plausible model of cortical dynamics
during learning and performance of flexible association.
| [
{
"created": "Fri, 31 Jul 2015 18:43:28 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Dec 2015 00:48:54 GMT",
"version": "v2"
}
] | 2015-12-09 | [
[
"Miconi",
"Thomas",
""
]
] | Recurrent neural networks in the chaotic regime exhibit complex dynamics reminiscent of high-level cortical activity during behavioral tasks. However, existing training methods for such networks are either biologically implausible, or require a real-time continuous error signal to guide the learning process. This is in contrast with most behavioral tasks, which only provide time-sparse, delayed rewards. Here we show that a biologically plausible reward-modulated Hebbian learning algorithm, previously used in feedforward models of birdsong learning, can train recurrent networks based solely on delayed, phasic reward signals at the end of each trial. The method requires no dedicated feedback or readout networks: the whole network connectivity is subject to learning, and the network output is read from one arbitrarily chosen network cell. We use this method to successfully train a network on a delayed nonmatch to sample task (which requires memory, flexible associations, and non-linear mixed selectivities). Using decoding techniques, we show that the resulting networks exhibit dynamic coding of task-relevant information, with neural encodings of various task features fluctuating widely over the course of a trial. Furthermore, network activity moves from a stimulus-specific representation to a response-specific representation during response time, in accordance with neural recordings in behaving animals for similar tasks. We conclude that recurrent neural networks, trained with reward-modulated Hebbian learning, offer a plausible model of cortical dynamics during learning and performance of flexible association. |
1911.12257 | Claire Harris | Claire L. Harris, Neil Brummitt, Christina A. Cobbold and Richard
Reeve | Dynamic virtual ecosystems as a tool for detecting large-scale responses
of biodiversity to environmental and land-use change | null | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecosystems are governed by dynamic processes such as competition for
resources, reproduction and dispersal. These shape their biodiversity and how
the system responds to change. Current approaches to modelling ecosystems,
especially plants, focus on either describing fine-scale processes for
individual species or broad-scale patterns for limited groups of plant
functional types.
Digitisation of herbarium and other plant records has unlocked a wealth of
information that can be used to drive models of plant communities and make
predictions for their future under different scenarios of climate change. The
advent of increased computational capacity and fast, high level programming
languages allows for simulation of such landscapes at unprecedented scales.
Here, we demonstrate a tool for Ecosystem Simulation through Integrated
Species Trait-Environment Modelling (EcoSISTEM), which models plant species
across multiple ecosystem sizes, from patches and small islands to regions and
entire continents. These simulated ecosystems support the ability to generate
many different types of habitat, as well as reproducing different disturbance
scenarios such as climate change, habitat loss and invasion. EcoSISTEM also
reproduces examples of real-world species distributions by integrating plant
occurrence records and global climate reconstructions to simulate plant species
throughout the continent of Africa for the past century.
EcoSISTEM allows us to flexibly explore the dynamics of tens of thousands of
species interacting across a continent. The code parallelises efficiently
across multiple nodes on high performance computing platforms, and has been
scaled up to run on over 1000 cores. It allows us to study the impact of
changes to climate, resources and habitat and investigate real-life mechanisms
surrounding climate change and biodiversity loss.
| [
{
"created": "Wed, 27 Nov 2019 16:24:16 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Aug 2022 14:48:24 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Mar 2023 21:41:54 GMT",
"version": "v3"
}
] | 2023-03-15 | [
[
"Harris",
"Claire L.",
""
],
[
"Brummitt",
"Neil",
""
],
[
"Cobbold",
"Christina A.",
""
],
[
"Reeve",
"Richard",
""
]
] | Ecosystems are governed by dynamic processes such as competition for resources, reproduction and dispersal. These shape their biodiversity and how the system responds to change. Current approaches to modelling ecosystems, especially plants, focus on either describing fine-scale processes for individual species or broad-scale patterns for limited groups of plant functional types. Digitisation of herbarium and other plant records has unlocked a wealth of information that can be used to drive models of plant communities and make predictions for their future under different scenarios of climate change. The advent of increased computational capacity and fast, high level programming languages allows for simulation of such landscapes at unprecedented scales. Here, we demonstrate a tool for Ecosystem Simulation through Integrated Species Trait-Environment Modelling (EcoSISTEM), which models plant species across multiple ecosystem sizes, from patches and small islands to regions and entire continents. These simulated ecosystems support the ability to generate many different types of habitat, as well as reproducing different disturbance scenarios such as climate change, habitat loss and invasion. EcoSISTEM also reproduces examples of real-world species distributions by integrating plant occurrence records and global climate reconstructions to simulate plant species throughout the continent of Africa for the past century. EcoSISTEM allows us to flexibly explore the dynamics of tens of thousands of species interacting across a continent. The code parallelises efficiently across multiple nodes on high performance computing platforms, and has been scaled up to run on over 1000 cores. It allows us to study the impact of changes to climate, resources and habitat and investigate real-life mechanisms surrounding climate change and biodiversity loss. |
1412.4920 | Tim Palmer | T.N. Palmer, M. O'Shea | Neuronal noise as a physical resource for human cognition | null | null | null | null | q-bio.NC cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new class of energy-efficient digital microprocessor is being developed
which is susceptible to thermal noise and consequently operates in
probabilistic rather than conventional deterministic mode. Hybrid computing
systems which combine probabilistic and deterministic processors can provide
robust and efficient tools for computational problems that hitherto would be
intractable by conventional deterministic algorithm. These developments suggest
a revised perspective on the consequences of ion-channel noise in slender
axons, often regarded as a hindrance to neuronal computations. It is proposed
that the human brain is such an energy-efficient hybrid computational system
whose remarkable characteristics emerge from constructive synergies between
probabilistic and deterministic modes of operation. In particular, the capacity
for intuition and creative problem solving appears to arise naturally from such
a hybrid system. Bearing in mind that physical thermal noise is both pure and
available at no cost, our proposal has implications for attempts to emulate the
energy-efficient human brain on conventional energy-intensive deterministic
supercomputers.
| [
{
"created": "Tue, 16 Dec 2014 08:42:05 GMT",
"version": "v1"
}
] | 2014-12-17 | [
[
"Palmer",
"T. N.",
""
],
[
"O'Shea",
"M.",
""
]
] | A new class of energy-efficient digital microprocessor is being developed which is susceptible to thermal noise and consequently operates in probabilistic rather than conventional deterministic mode. Hybrid computing systems which combine probabilistic and deterministic processors can provide robust and efficient tools for computational problems that hitherto would be intractable by conventional deterministic algorithm. These developments suggest a revised perspective on the consequences of ion-channel noise in slender axons, often regarded as a hindrance to neuronal computations. It is proposed that the human brain is such an energy-efficient hybrid computational system whose remarkable characteristics emerge from constructive synergies between probabilistic and deterministic modes of operation. In particular, the capacity for intuition and creative problem solving appears to arise naturally from such a hybrid system. Bearing in mind that physical thermal noise is both pure and available at no cost, our proposal has implications for attempts to emulate the energy-efficient human brain on conventional energy-intensive deterministic supercomputers. |
1404.2445 | Ido Kanter | Roni Vardi, Hagar Marmari and Ido Kanter | Error correction and fast detectors implemented by ultra-fast neuronal
plasticity | 7 pages, 4 figures, 1 table, to appear in Physical Review E | Phys. Rev. E 89, 042712 (2014) | 10.1103/PhysRevE.89.042712 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We experimentally show that the neuron functions as a precise
time-integrator, where the accumulated changes in neuronal response latencies,
under complex and random stimulation patterns, are solely a function of a
global quantity, the average time-lag between stimulations. In contrast,
momentary leaps in the neuronal response latency follow trends of consecutive
stimulations, indicating ultra-fast neuronal plasticity. On a circuit level,
this ultra-fast neuronal plasticity phenomenon implements error-correction
mechanisms and fast detectors for misplaced stimulations. Additionally, at
moderate/high stimulation rates this phenomenon destabilizes/stabilizes a
periodic neuronal activity disrupted by misplaced stimulations.
| [
{
"created": "Wed, 9 Apr 2014 11:44:27 GMT",
"version": "v1"
}
] | 2014-04-23 | [
[
"Vardi",
"Roni",
""
],
[
"Marmari",
"Hagar",
""
],
[
"Kanter",
"Ido",
""
]
] | We experimentally show that the neuron functions as a precise time-integrator, where the accumulated changes in neuronal response latencies, under complex and random stimulation patterns, are solely a function of a global quantity, the average time-lag between stimulations. In contrast, momentary leaps in the neuronal response latency follow trends of consecutive stimulations, indicating ultra-fast neuronal plasticity. On a circuit level, this ultra-fast neuronal plasticity phenomenon implements error-correction mechanisms and fast detectors for misplaced stimulations. Additionally, at moderate/high stimulation rates this phenomenon destabilizes/stabilizes a periodic neuronal activity disrupted by misplaced stimulations. |
q-bio/0310028 | Kasper Astrup Eriksen | Sergei Maslov, Kim Sneppen and Kasper Astrup Eriksen | Upstream Plasticity and Downstream Robustness in Evolution of Molecular
Networks | 10 pages, 4 figures. Submitted to BMC evolutionary biology | null | null | null | q-bio.MN q-bio.PE | null | Evolving biomolecular networks have to combine the stability against
perturbations with flexibility allowing their constituents to assume new roles
in the cell. Gene duplication followed by functional divergence of associated
proteins is a major force shaping molecular networks in living organisms.
Recent availability of system-wide data for yeast S. cerevisiae allows us to
assess the effects of gene duplication on robustness and plasticity of
molecular networks. We demonstrate that the upstream transcriptional regulation
of duplicated genes diverges fast, losing on average 4% of their common
transcription factors for every 1% divergence of their amino acid sequences. In
contrast, the set of physical interaction partners of their protein products
changes much more slowly. The relative stability of downstream functions of
duplicated genes is further corroborated by their ability to substitute for
each other in gene knockout experiments. We believe that the combination of the
upstream plasticity and the downstream robustness is a general feature
determining the evolvability of molecular networks.
| [
{
"created": "Wed, 22 Oct 2003 13:11:47 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Maslov",
"Sergei",
""
],
[
"Sneppen",
"Kim",
""
],
[
"Eriksen",
"Kasper Astrup",
""
]
] | Evolving biomolecular networks have to combine the stability against perturbations with flexibility allowing their constituents to assume new roles in the cell. Gene duplication followed by functional divergence of associated proteins is a major force shaping molecular networks in living organisms. Recent availability of system-wide data for yeast S. cerevisiae allows us to assess the effects of gene duplication on robustness and plasticity of molecular networks. We demonstrate that the upstream transcriptional regulation of duplicated genes diverges fast, losing on average 4% of their common transcription factors for every 1% divergence of their amino acid sequences. In contrast, the set of physical interaction partners of their protein products changes much more slowly. The relative stability of downstream functions of duplicated genes is further corroborated by their ability to substitute for each other in gene knockout experiments. We believe that the combination of the upstream plasticity and the downstream robustness is a general feature determining the evolvability of molecular networks. |
1903.11597 | Alexandra T. P. Carvalho | Beatriz C. Almeida, Pedro Figueiredo and Alexandra T. P. Carvalho
(CNC, Center for Neuroscience and Cell Biology, Institute for
Interdisciplinary Research (IIIUC), University of Coimbra, Coimbra
(Portugal)) | PCL enzymatic hydrolysis: a mechanistic study | null | ACS Omega, 2019 | 10.1021/acsomega.9b00345 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accumulation of plastic waste is a major environmental problem. Enzymes,
particularly esterases, play an important role in the biodegradation of
polyesters. These enzymes are usually only active on aliphatic polyesters, but
a few have shown catalytic activity for semi-aromatic polyesters. Due to the
importance of these processes, an atomic level characterization of how common
polyesters are degraded by esterases is necessary. Here, we present a
Molecular dynamics (MD) and Quantum Mechanics/Molecular Mechanics (QM/MM) MD
study of the hydrolysis of a model of polycaprolactone (PCL), one of the most
widely used biomaterials, by the thermophilic esterase from the archaeon
Archaeoglobus fulgidus (AfEST). This enzyme is particularly interesting because
it can withstand temperatures well above the glass transition of many
polyesters. Our insights about the reaction mechanism are important for the
design of customized enzymes able to degrade different synthetic polyesters.
| [
{
"created": "Wed, 27 Mar 2019 10:08:18 GMT",
"version": "v1"
}
] | 2019-04-01 | [
[
"Almeida",
"Beatriz C.",
"",
"CNC, Center for Neuroscience and Cell Biology, Institute for\n Interdisciplinary Research"
],
[
"Figueiredo",
"Pedro",
"",
"CNC, Center for Neuroscience and Cell Biology, Institute for\n Interdisciplinary Research"
],
[
"Carvalho",
"A... | Accumulation of plastic waste is a major environmental problem. Enzymes, particularly esterases, play an important role in the biodegradation of polyesters. These enzymes are usually only active on aliphatic polyesters, but a few have shown catalytic activity for semi-aromatic polyesters. Due to the importance of these processes, an atomic level characterization of how common polyesters are degraded by esterases is necessary. Here, we present a Molecular dynamics (MD) and Quantum Mechanics/Molecular Mechanics (QM/MM) MD study of the hydrolysis of a model of polycaprolactone (PCL), one of the most widely used biomaterials, by the thermophilic esterase from the archaeon Archaeoglobus fulgidus (AfEST). This enzyme is particularly interesting because it can withstand temperatures well above the glass transition of many polyesters. Our insights about the reaction mechanism are important for the design of customized enzymes able to degrade different synthetic polyesters. |
1412.1285 | Steven Frank | Steven A. Frank | The inductive theory of natural selection: summary and synthesis | Version 2: Changed title. Noted that condensed and simplified version
of this manuscript will be published as book chapter with original title "The
inductive theory of natural selection." See footnote on title page of pdf | null | null | null | q-bio.PE cs.NE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theory of natural selection has two forms. Deductive theory describes how
populations change over time. One starts with an initial population and some
rules for change. From those assumptions, one calculates the future state of
the population. Deductive theory predicts how populations adapt to
environmental challenge. Inductive theory describes the causes of change in
populations. One starts with a given amount of change. One then assigns
different parts of the total change to particular causes. Inductive theory
analyzes alternative causal models for how populations have adapted to
environmental challenge. This chapter emphasizes the inductive analysis of
cause.
| [
{
"created": "Wed, 3 Dec 2014 11:49:52 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Nov 2016 23:21:25 GMT",
"version": "v2"
}
] | 2016-11-15 | [
[
"Frank",
"Steven A.",
""
]
] | The theory of natural selection has two forms. Deductive theory describes how populations change over time. One starts with an initial population and some rules for change. From those assumptions, one calculates the future state of the population. Deductive theory predicts how populations adapt to environmental challenge. Inductive theory describes the causes of change in populations. One starts with a given amount of change. One then assigns different parts of the total change to particular causes. Inductive theory analyzes alternative causal models for how populations have adapted to environmental challenge. This chapter emphasizes the inductive analysis of cause. |
1110.3943 | Mareike Fischer | Mareike Fischer, Steffen Klaere, Minh Anh Thi Nguyen, Arndt von
Haeseler | On the group theoretical background of assigning stepwise mutations onto
phylogenies | null | null | null | null | q-bio.PE math.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent paper, Klaere et al. modeled the impact of substitutions on
arbitrary branches of a phylogenetic tree on an alignment site by the so-called
One Step Mutation (OSM) matrix. By utilizing the concept of the OSM matrix for
the four-state nucleotide alphabet, Nguyen et al. presented an efficient
procedure to compute the minimal number of substitutions needed to translate
one alignment site into another. The present paper delivers a proof for this
computation. Moreover, we provide several mathematical insights into the
generalization of the OSM matrix to multistate alphabets. The construction of
the OSM matrix is only possible if the matrices representing the substitution
types acting on the character states and the identity matrix form a commutative
group with respect to matrix multiplication. We illustrate a means to establish
such a group for the twenty-state amino acid alphabet and critically discuss
its biological usefulness.
| [
{
"created": "Tue, 18 Oct 2011 11:47:42 GMT",
"version": "v1"
}
] | 2011-10-19 | [
[
"Fischer",
"Mareike",
""
],
[
"Klaere",
"Steffen",
""
],
[
"Nguyen",
"Minh Anh Thi",
""
],
[
"von Haeseler",
"Arndt",
""
]
] | In a recent paper, Klaere et al. modeled the impact of substitutions on arbitrary branches of a phylogenetic tree on an alignment site by the so-called One Step Mutation (OSM) matrix. By utilizing the concept of the OSM matrix for the four-state nucleotide alphabet, Nguyen et al. presented an efficient procedure to compute the minimal number of substitutions needed to translate one alignment site into another. The present paper delivers a proof for this computation. Moreover, we provide several mathematical insights into the generalization of the OSM matrix to multistate alphabets. The construction of the OSM matrix is only possible if the matrices representing the substitution types acting on the character states and the identity matrix form a commutative group with respect to matrix multiplication. We illustrate a means to establish such a group for the twenty-state amino acid alphabet and critically discuss its biological usefulness. |
2407.19073 | Nikolai Schapin | Nikolai Schapin, Carles Navarro, Albert Bou, Gianni De Fabritiis | On Machine Learning Approaches for Protein-Ligand Binding Affinity
Prediction | 20 pages, 14 figures, 1 table | null | null | null | q-bio.BM cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binding affinity optimization is crucial in early-stage drug discovery. While
numerous machine learning methods exist for predicting ligand potency, their
comparative efficacy remains unclear. This study evaluates the performance of
classical tree-based models and advanced neural networks in protein-ligand
binding affinity prediction. Our comprehensive benchmarking encompasses 2D
models utilizing ligand-only RDKit embeddings and Large Language Model (LLM)
ligand representations, as well as 3D neural networks incorporating bound
protein-ligand conformations. We assess these models across multiple standard
datasets, examining various predictive scenarios including classification,
ranking, regression, and active learning. Results indicate that simpler models
can surpass more complex ones in specific tasks, while 3D models leveraging
structural information become increasingly competitive with larger training
datasets containing compounds with labelled affinity data against multiple
targets. Pre-trained 3D models, by incorporating protein pocket environments,
demonstrate significant advantages in data-scarce scenarios for specific
binding pockets. Additionally, LLM pretraining on 2D ligand data enhances
complex model performance, providing versatile embeddings that outperform
traditional RDKit features in computational efficiency. Finally, we show that
combining 2D and 3D model strengths improves active learning outcomes beyond
current state-of-the-art approaches. These findings offer valuable insights for
optimizing machine learning strategies in drug discovery pipelines.
| [
{
"created": "Mon, 15 Jul 2024 13:06:00 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Schapin",
"Nikolai",
""
],
[
"Navarro",
"Carles",
""
],
[
"Bou",
"Albert",
""
],
[
"De Fabritiis",
"Gianni",
""
]
] | Binding affinity optimization is crucial in early-stage drug discovery. While numerous machine learning methods exist for predicting ligand potency, their comparative efficacy remains unclear. This study evaluates the performance of classical tree-based models and advanced neural networks in protein-ligand binding affinity prediction. Our comprehensive benchmarking encompasses 2D models utilizing ligand-only RDKit embeddings and Large Language Model (LLM) ligand representations, as well as 3D neural networks incorporating bound protein-ligand conformations. We assess these models across multiple standard datasets, examining various predictive scenarios including classification, ranking, regression, and active learning. Results indicate that simpler models can surpass more complex ones in specific tasks, while 3D models leveraging structural information become increasingly competitive with larger training datasets containing compounds with labelled affinity data against multiple targets. Pre-trained 3D models, by incorporating protein pocket environments, demonstrate significant advantages in data-scarce scenarios for specific binding pockets. Additionally, LLM pretraining on 2D ligand data enhances complex model performance, providing versatile embeddings that outperform traditional RDKit features in computational efficiency. Finally, we show that combining 2D and 3D model strengths improves active learning outcomes beyond current state-of-the-art approaches. These findings offer valuable insights for optimizing machine learning strategies in drug discovery pipelines. |
q-bio/0309003 | Eduardo D. Sontag | Eduardo D. Sontag | Adaptation and regulation with signal detection implies internal model | See http://www.math.rutgers.edu/~sontag for related work; to appear
in Systems and Control Letters | null | null | null | q-bio.QM q-bio.MN | null | This note provides a theorem showing, under suitable technical assumptions,
that if a system S adapts to a class of external signals U, in the sense of
regulation against disturbances or tracking signals in U, then S must necessarily
contain a subsystem which is capable of generating all the signals in U. It is
not assumed that regulation is robust, nor is there a prior requirement for the
system to be partitioned into separate plant and controller components.
Instead, one assumes that a ``signal detection'' property holds. The result was
motivated by questions of adaptation in bacterial chemotaxis, but the general
mathematical principle is of wide applicability.
| [
{
"created": "Tue, 16 Sep 2003 14:46:46 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Sontag",
"Eduardo D.",
""
]
] | This note provides a theorem showing, under suitable technical assumptions, that if a system S adapts to a class of external signals U, in the sense of regulation against disturbances or tracking signals in U, then S must necessarily contain a subsystem which is capable of generating all the signals in U. It is not assumed that regulation is robust, nor is there a prior requirement for the system to be partitioned into separate plant and controller components. Instead, one assumes that a ``signal detection'' property holds. The result was motivated by questions of adaptation in bacterial chemotaxis, but the general mathematical principle is of wide applicability. |
2206.11228 | Chong Guo | Chong Guo, Michael J. Lee, Guillaume Leclerc, Joel Dapello, Yug Rao,
Aleksander Madry, James J. DiCarlo | Adversarially trained neural representations may already be as robust as
corresponding biological neural representations | 10 pages, 6 figures, ICML2022 | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual systems of primates are the gold standard of robust perception. There
is thus a general belief that mimicking the neural representations that
underlie those systems will yield artificial visual systems that are
adversarially robust. In this work, we develop a method for performing
adversarial visual attacks directly on primate brain activity. We then leverage
this method to demonstrate that the above-mentioned belief might not be well
founded. Specifically, we report that the biological neurons that make up
visual systems of primates exhibit susceptibility to adversarial perturbations
that is comparable in magnitude to existing (robustly trained) artificial
neural networks.
| [
{
"created": "Sun, 19 Jun 2022 04:15:29 GMT",
"version": "v1"
}
] | 2022-06-23 | [
[
"Guo",
"Chong",
""
],
[
"Lee",
"Michael J.",
""
],
[
"Leclerc",
"Guillaume",
""
],
[
"Dapello",
"Joel",
""
],
[
"Rao",
"Yug",
""
],
[
"Madry",
"Aleksander",
""
],
[
"DiCarlo",
"James J.",
""
]
] | Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method for performing adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above-mentioned belief might not be well founded. Specifically, we report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks. |
q-bio/0510035 | Jeremy Sumner | J G Sumner, P D Jarvis | Using the tangle: a consistent construction of phylogenetic distance
matrices for quartets | 18 Pages. Submitted to Mathematical Biosciences | null | null | null | q-bio.PE | null | Distance based algorithms are a common technique in the construction of
phylogenetic trees from taxonomic sequence data. The first step in the
implementation of these algorithms is the calculation of a pairwise distance
matrix to give a measure of the evolutionary change between any pair of the
extant taxa. A standard technique is to use the log det formula to construct
pairwise distances from aligned sequence data. We review a distance measure
valid for the most general models, and show how the log det formula can be used
as an estimator thereof. We then show that the foundation upon which the log
det formula is constructed can be generalized to produce a previously unknown
estimator which improves the consistency of the distance matrices constructed
from the log det formula. This distance estimator provides a consistent
technique for constructing quartets from phylogenetic sequence data under the
assumption of the most general Markov model of sequence evolution.
| [
{
"created": "Tue, 18 Oct 2005 03:19:32 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Mar 2006 06:16:15 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Sumner",
"J G",
""
],
[
"Jarvis",
"P D",
""
]
] | Distance based algorithms are a common technique in the construction of phylogenetic trees from taxonomic sequence data. The first step in the implementation of these algorithms is the calculation of a pairwise distance matrix to give a measure of the evolutionary change between any pair of the extant taxa. A standard technique is to use the log det formula to construct pairwise distances from aligned sequence data. We review a distance measure valid for the most general models, and show how the log det formula can be used as an estimator thereof. We then show that the foundation upon which the log det formula is constructed can be generalized to produce a previously unknown estimator which improves the consistency of the distance matrices constructed from the log det formula. This distance estimator provides a consistent technique for constructing quartets from phylogenetic sequence data under the assumption of the most general Markov model of sequence evolution. |
2401.05343 | Anass B. El-Yaagoubi | Anass B. El-Yaagoubi and Shuhao Jiao and Moo K. Chung and Hernando
Ombao | Spectral Topological Data Analysis of Brain Signals | 28 pages, 23 figures | null | null | null | q-bio.NC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topological data analysis (TDA) has become a powerful approach over the last
twenty years, mainly due to its ability to capture the shape and the geometry
inherent in the data. Persistence homology, which is a particular tool in TDA,
has been demonstrated to be successful in analyzing functional brain
connectivity. One limitation of standard approaches is that they use
arbitrarily chosen threshold values for analyzing connectivity matrices. To
overcome this weakness, TDA provides a filtration of the weighted brain network
across a range of threshold values. However, current analyses of the
topological structure of functional brain connectivity primarily rely on overly
simplistic connectivity measures, such as the Pearson correlation. These
measures do not provide information about the specific oscillators that drive
dependence within the brain network. Here, we develop a frequency-specific
approach that utilizes coherence, a measure of dependence in the spectral
domain, to evaluate the functional connectivity of the brain. Our approach, the
spectral TDA (STDA), has the ability to capture more nuanced and detailed
information about the underlying brain networks. The proposed STDA method leads
to a novel topological summary, the spectral landscape, which is a
2D-generalization of the persistence landscape. Using the novel spectral
landscape, we analyze the EEG brain connectivity of patients with attention
deficit hyperactivity disorder (ADHD) and shed light on the frequency-specific
differences in the topology of brain connectivity between the controls and ADHD
patients.
| [
{
"created": "Fri, 1 Dec 2023 13:04:44 GMT",
"version": "v1"
}
] | 2024-01-12 | [
[
"El-Yaagoubi",
"Anass B.",
""
],
[
"Jiao",
"Shuhao",
""
],
[
"Chung",
"Moo K.",
""
],
[
"Ombao",
"Hernando",
""
]
] | Topological data analysis (TDA) has become a powerful approach over the last twenty years, mainly due to its ability to capture the shape and the geometry inherent in the data. Persistence homology, which is a particular tool in TDA, has been demonstrated to be successful in analyzing functional brain connectivity. One limitation of standard approaches is that they use arbitrarily chosen threshold values for analyzing connectivity matrices. To overcome this weakness, TDA provides a filtration of the weighted brain network across a range of threshold values. However, current analyses of the topological structure of functional brain connectivity primarily rely on overly simplistic connectivity measures, such as the Pearson correlation. These measures do not provide information about the specific oscillators that drive dependence within the brain network. Here, we develop a frequency-specific approach that utilizes coherence, a measure of dependence in the spectral domain, to evaluate the functional connectivity of the brain. Our approach, the spectral TDA (STDA), has the ability to capture more nuanced and detailed information about the underlying brain networks. The proposed STDA method leads to a novel topological summary, the spectral landscape, which is a 2D-generalization of the persistence landscape. Using the novel spectral landscape, we analyze the EEG brain connectivity of patients with attention deficit hyperactivity disorder (ADHD) and shed light on the frequency-specific differences in the topology of brain connectivity between the controls and ADHD patients. |
q-bio/0411010 | Fei Liu | Fei Liu, Bi-hui Zhu, and Zhong-can Ou-Yang | Stretching single RNAs: exact numerical and stochastic simulation
methods | 12 pages, 4 figures, and 1 table | null | null | null | q-bio.BM q-bio.QM | null | Exact numerical methods and stochastic simulation methods are developed to
study the force stretching single RNA issue on the secondary structure level in
equilibrium. By computing the force-extension curves on the constant force and
the constant extension ensembles, we find the two independent methods agree
with each other quite well. To show the precision of our methods in predicting
unfolding experiments, the unfolding forces of different RNA molecules under
different experimental conditions are calculated. We find that the ionic
corrections on the RNA free energies alone might not account for the apparent
differences between the theoretical calculations and the experimental data; an
ionic correction to the persistent length of single-stranded RNA should be
necessary.
| [
{
"created": "Tue, 2 Nov 2004 09:13:45 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Liu",
"Fei",
""
],
[
"Zhu",
"Bi-hui",
""
],
[
"Ou-Yang",
"Zhong-can",
""
]
] | Exact numerical methods and stochastic simulation methods are developed to study the force stretching single RNA issue on the secondary structure level in equilibrium. By computing the force-extension curves on the constant force and the constant extension ensembles, we find the two independent methods agree with each other quite well. To show the precision of our methods in predicting unfolding experiments, the unfolding forces of different RNA molecules under different experimental conditions are calculated. We find that the ionic corrections on the RNA free energies alone might not account for the apparent differences between the theoretical calculations and the experimental data; an ionic correction to the persistent length of single-stranded RNA should be necessary. |
2004.00940 | Bernd Blasius | Bernd Blasius | Power-law distribution in the number of confirmed COVID-19 cases | 10 pages, 8 figures; rewrite, emphasizing that the distribution is a
truncated power law | Chaos 30, 093123 (2020) | 10.1063/5.0013031 | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 is an emerging respiratory infectious disease caused by the
coronavirus SARS-CoV-2. It was first reported in early December 2019 in
Wuhan, China and within three months spread as a pandemic around the whole
globe. Here, we study macro-epidemiological patterns along the time course of
the pandemic. We compute the distribution of confirmed COVID-19 cases and
deaths for countries worldwide and for counties in the US, and show that both
distributions follow a truncated power-law over five orders of magnitude. We
are able to explain the origin of this scaling behavior as a dual-scale
process: the large-scale spread of the virus between countries and the
small-scale accumulation of case numbers within each country. Assuming
exponential growth on both scales, the critical exponent of the power-law is
determined by the ratio of large-scale to small-scale growth rates. We confirm
this theory in numerical simulations in a simple meta-population model,
describing the epidemic spread in a network of interconnected countries. Our
theory gives a mechanistic explanation why most COVID-19 cases occurred within
a few epicenters, at least in the initial phase of the outbreak. Assessing how
well a simple dual-scale model predicts the early spread of epidemics, despite
the huge contrasts between countries, could help identify critical temporal and
spatial scales of response in which to mitigate future epidemic threats.
| [
{
"created": "Thu, 2 Apr 2020 11:22:33 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Apr 2020 06:33:29 GMT",
"version": "v2"
}
] | 2020-09-23 | [
[
"Blasius",
"Bernd",
""
]
] | COVID-19 is an emerging respiratory infectious disease caused by the coronavirus SARS-CoV-2. It was first reported in early December 2019 in Wuhan, China and within three months spread as a pandemic around the whole globe. Here, we study macro-epidemiological patterns along the time course of the pandemic. We compute the distribution of confirmed COVID-19 cases and deaths for countries worldwide and for counties in the US, and show that both distributions follow a truncated power-law over five orders of magnitude. We are able to explain the origin of this scaling behavior as a dual-scale process: the large-scale spread of the virus between countries and the small-scale accumulation of case numbers within each country. Assuming exponential growth on both scales, the critical exponent of the power-law is determined by the ratio of large-scale to small-scale growth rates. We confirm this theory in numerical simulations in a simple meta-population model, describing the epidemic spread in a network of interconnected countries. Our theory gives a mechanistic explanation why most COVID-19 cases occurred within a few epicenters, at least in the initial phase of the outbreak. Assessing how well a simple dual-scale model predicts the early spread of epidemics, despite the huge contrasts between countries, could help identify critical temporal and spatial scales of response in which to mitigate future epidemic threats. |
1907.04318 | Ahmed BaniMustafa Dr. | Ahmed BaniMustafa and Nigel Hardy | Computer-Aided Data Mining: Automating a Novel Knowledge Discovery and
Data Mining Process Model for Metabolomics | arXiv admin note: text overlap with arXiv:1907.03755 | null | null | null | q-bio.QM cs.DB cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents MeKDDaM-SAGA, computer-aided automation software for
implementing a novel knowledge discovery and data mining process model that was
designed for performing justifiable, traceable and reproducible metabolomics
data analysis. The process model focuses on achieving metabolomics analytical
objectives and on considering the nature of its involved data. MeKDDaM-SAGA was
successfully used for guiding the process model execution in a number of
metabolomics applications. It satisfies the requirements of the proposed
process model design and execution. The software realises the process model
layout, structure and flow and it enables its execution externally using
various data mining and machine learning tools or internally using a number of
embedded facilities that were built for performing a number of automated
activities such as data preprocessing, data exploration, data acclimatization,
modelling, evaluation and visualization. MeKDDaM-SAGA was developed using
object-oriented software engineering methodology and was constructed in Java.
It consists of 241 design classes that were designed to implement 27 use-cases.
The software uses an XML database to guarantee portability and uses a GUI
interface to ensure its user-friendliness. It implements an internal embedded
version control system that is used to realise and manage the process flow,
feedback and iterations and to enable undoing and redoing the execution of the
process phases, activities, and the internal tasks within its phases.
| [
{
"created": "Tue, 9 Jul 2019 01:14:53 GMT",
"version": "v1"
}
] | 2019-07-11 | [
[
"BaniMustafa",
"Ahmed",
""
],
[
"Hardy",
"Nigel",
""
]
] | This work presents MeKDDaM-SAGA, computer-aided automation software for implementing a novel knowledge discovery and data mining process model that was designed for performing justifiable, traceable and reproducible metabolomics data analysis. The process model focuses on achieving metabolomics analytical objectives and on considering the nature of its involved data. MeKDDaM-SAGA was successfully used for guiding the process model execution in a number of metabolomics applications. It satisfies the requirements of the proposed process model design and execution. The software realises the process model layout, structure and flow and it enables its execution externally using various data mining and machine learning tools or internally using a number of embedded facilities that were built for performing a number of automated activities such as data preprocessing, data exploration, data acclimatization, modelling, evaluation and visualization. MeKDDaM-SAGA was developed using object-oriented software engineering methodology and was constructed in Java. It consists of 241 design classes that were designed to implement 27 use-cases. The software uses an XML database to guarantee portability and uses a GUI interface to ensure its user-friendliness. It implements an internal embedded version control system that is used to realise and manage the process flow, feedback and iterations and to enable undoing and redoing the execution of the process phases, activities, and the internal tasks within its phases. |
1111.7243 | Elana Fertig | Matthew R. Francis and Elana J. Fertig | Quantifying the dynamics of coupled networks of switches and oscillators | 25 pages, 6 figures | null | 10.1371/journal.pone.0029497 | null | q-bio.QM q-bio.MN | http://creativecommons.org/licenses/by/3.0/ | Complex network dynamics have been analyzed with models of systems of coupled
switches or systems of coupled oscillators. However, many complex systems are
composed of components with diverse dynamics whose interactions drive the
system's evolution. We, therefore, introduce a new modeling framework that
describes the dynamics of networks composed of both oscillators and switches.
Both oscillator synchronization and switch stability are preserved in these
heterogeneous, coupled networks. Furthermore, this model recapitulates the
qualitative dynamics for the yeast cell cycle consistent with the hypothesized
dynamics resulting from decomposition of the regulatory network into dynamic
motifs. Introducing feedback into the cell-cycle network induces qualitative
dynamics analogous to limitless replicative potential that is a hallmark of
cancer. As a result, the proposed model of switch and oscillator coupling
provides the ability to incorporate mechanisms that underlie the synchronized
stimulus response ubiquitous in biochemical systems.
| [
{
"created": "Wed, 30 Nov 2011 17:33:02 GMT",
"version": "v1"
}
] | 2012-01-09 | [
[
"Francis",
"Matthew R.",
""
],
[
"Fertig",
"Elana J.",
""
]
] | Complex network dynamics have been analyzed with models of systems of coupled switches or systems of coupled oscillators. However, many complex systems are composed of components with diverse dynamics whose interactions drive the system's evolution. We, therefore, introduce a new modeling framework that describes the dynamics of networks composed of both oscillators and switches. Both oscillator synchronization and switch stability are preserved in these heterogeneous, coupled networks. Furthermore, this model recapitulates the qualitative dynamics for the yeast cell cycle consistent with the hypothesized dynamics resulting from decomposition of the regulatory network into dynamic motifs. Introducing feedback into the cell-cycle network induces qualitative dynamics analogous to limitless replicative potential that is a hallmark of cancer. As a result, the proposed model of switch and oscillator coupling provides the ability to incorporate mechanisms that underlie the synchronized stimulus response ubiquitous in biochemical systems. |
2210.02120 | Jacob Theilgaard Lassen | Jacob Theilgaard Lassen, Mikkel Fly Kragh, Jens Rimestad, Martin
Nyg{\aa}rd Johansen, J{\o}rgen Berntsen | Development and validation of deep learning based embryo selection
across multiple days of transfer | null | null | null | null | q-bio.QM cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work describes the development and validation of a fully automated deep
learning model, iDAScore v2.0, for the evaluation of embryos incubated for 2,
3, and 5 or more days. The model is trained and evaluated on an extensive and
diverse dataset including 181,428 embryos from 22 IVF clinics across the world.
For discriminating transferred embryos with known outcome (KID), we show AUCs
ranging from 0.621 to 0.708 depending on the day of transfer. Predictive
performance increased over time and showed a strong correlation with
morphokinetic parameters. The model has equivalent performance to KIDScore D3
on day 3 embryos while significantly surpassing the performance of KIDScore D5
v3 on day 5+ embryos. This model provides an analysis of time-lapse sequences
without the need for user input, and provides a reliable method for ranking
embryos for likelihood to implant, at both cleavage and blastocyst stages. This
greatly improves embryo grading consistency and saves time compared to
traditional embryo evaluation methods.
| [
{
"created": "Wed, 5 Oct 2022 09:44:13 GMT",
"version": "v1"
}
] | 2022-10-06 | [
[
"Lassen",
"Jacob Theilgaard",
""
],
[
"Kragh",
"Mikkel Fly",
""
],
[
"Rimestad",
"Jens",
""
],
[
"Johansen",
"Martin Nygård",
""
],
[
"Berntsen",
"Jørgen",
""
]
] | This work describes the development and validation of a fully automated deep learning model, iDAScore v2.0, for the evaluation of embryos incubated for 2, 3, and 5 or more days. The model is trained and evaluated on an extensive and diverse dataset including 181,428 embryos from 22 IVF clinics across the world. For discriminating transferred embryos with known outcome (KID), we show AUCs ranging from 0.621 to 0.708 depending on the day of transfer. Predictive performance increased over time and showed a strong correlation with morphokinetic parameters. The model has equivalent performance to KIDScore D3 on day 3 embryos while significantly surpassing the performance of KIDScore D5 v3 on day 5+ embryos. This model provides an analysis of time-lapse sequences without the need for user input, and provides a reliable method for ranking embryos for likelihood to implant, at both cleavage and blastocyst stages. This greatly improves embryo grading consistency and saves time compared to traditional embryo evaluation methods. |
1304.5486 | Joshua Schraiber | Joshua G. Schraiber, Yulia Mostovoy, Tiffany Y. Hsu, Rachel B. Brem | Inferring evolutionary histories of pathway regulation from
transcriptional profiling data | 30 pages, 12 figures, 2 tables, contact authors for supplementary
tables | PLoS Computational Biology 9, 2013, e1003255 | 10.1371/journal.pcbi.1003255 | null | q-bio.PE q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | One of the outstanding challenges in comparative genomics is to interpret the
evolutionary importance of regulatory variation between species. Rigorous
molecular evolution-based methods to infer evidence for natural selection from
expression data are at a premium in the field, and to date, phylogenetic
approaches have not been well-suited to address the question in the small sets
of taxa profiled in standard surveys of gene expression. We have developed a
strategy to infer evolutionary histories from expression profiles by analyzing
suites of genes of common function. In a manner conceptually similar to
molecular evolution models in which the evolutionary rates of DNA sequence at
multiple loci follow a gamma distribution, we modeled expression of the genes
of an \emph{a priori}-defined pathway with rates drawn from an inverse gamma
distribution. We then developed a fitting strategy to infer the parameters of
this distribution from expression measurements, and to identify gene groups
whose expression patterns were consistent with evolutionary constraint or rapid
evolution in particular species. Simulations confirmed the power and accuracy
of our inference method. As an experimental testbed for our approach, we
generated and analyzed transcriptional profiles of four \emph{Saccharomyces}
yeasts. The results revealed pathways with signatures of constrained and
accelerated regulatory evolution in individual yeasts and across the phylogeny,
highlighting the prevalence of pathway-level expression change during the
divergence of yeast species. We anticipate that our pathway-based phylogenetic
approach will be of broad utility in the search to understand the evolutionary
relevance of regulatory change.
| [
{
"created": "Fri, 19 Apr 2013 17:35:23 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2013 22:04:06 GMT",
"version": "v2"
}
] | 2013-10-16 | [
[
"Schraiber",
"Joshua G.",
""
],
[
"Mostovoy",
"Yulia",
""
],
[
"Hsu",
"Tiffany Y.",
""
],
[
"Brem",
"Rachel B.",
""
]
] | One of the outstanding challenges in comparative genomics is to interpret the evolutionary importance of regulatory variation between species. Rigorous molecular evolution-based methods to infer evidence for natural selection from expression data are at a premium in the field, and to date, phylogenetic approaches have not been well-suited to address the question in the small sets of taxa profiled in standard surveys of gene expression. We have developed a strategy to infer evolutionary histories from expression profiles by analyzing suites of genes of common function. In a manner conceptually similar to molecular evolution models in which the evolutionary rates of DNA sequence at multiple loci follow a gamma distribution, we modeled expression of the genes of an \emph{a priori}-defined pathway with rates drawn from an inverse gamma distribution. We then developed a fitting strategy to infer the parameters of this distribution from expression measurements, and to identify gene groups whose expression patterns were consistent with evolutionary constraint or rapid evolution in particular species. Simulations confirmed the power and accuracy of our inference method. As an experimental testbed for our approach, we generated and analyzed transcriptional profiles of four \emph{Saccharomyces} yeasts. The results revealed pathways with signatures of constrained and accelerated regulatory evolution in individual yeasts and across the phylogeny, highlighting the prevalence of pathway-level expression change during the divergence of yeast species. We anticipate that our pathway-based phylogenetic approach will be of broad utility in the search to understand the evolutionary relevance of regulatory change. |
2103.02436 | Tsvi Tlusty | Jean-Pierre Eckmann and Tsvi Tlusty | Dimensional Reduction in Complex Living Systems: Where, Why, and How | null | Bioessays 2021 e2100062 | 10.1002/bies.202100062 | null | q-bio.OT physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unprecedented prowess of measurement techniques provides a detailed,
multi-scale look into the depths of living systems. Understanding these
avalanches of high-dimensional data -- by distilling underlying principles and
mechanisms -- necessitates dimensional reduction. We propose that living
systems achieve exquisite dimensional reduction, originating from their
capacity to learn, through evolution and phenotypic plasticity, the relevant
aspects of a non-random, smooth physical reality. We explain how geometric
insights by mathematicians allow one to identify these genuine hallmarks of
life and distinguish them from universal properties of generic data sets. We
illustrate these principles in a concrete example of protein evolution,
suggesting a simple general recipe that can be applied to understand other
biological systems.
| [
{
"created": "Tue, 2 Mar 2021 10:35:12 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jun 2021 09:45:14 GMT",
"version": "v2"
}
] | 2021-08-16 | [
[
"Eckmann",
"Jean-Pierre",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | The unprecedented prowess of measurement techniques provides a detailed, multi-scale look into the depths of living systems. Understanding these avalanches of high-dimensional data -- by distilling underlying principles and mechanisms -- necessitates dimensional reduction. We propose that living systems achieve exquisite dimensional reduction, originating from their capacity to learn, through evolution and phenotypic plasticity, the relevant aspects of a non-random, smooth physical reality. We explain how geometric insights by mathematicians allow one to identify these genuine hallmarks of life and distinguish them from universal properties of generic data sets. We illustrate these principles in a concrete example of protein evolution, suggesting a simple general recipe that can be applied to understand other biological systems. |
2112.00208 | Toshiki Oguma | Toshiki Oguma, Hisako Takigawa-Imamura, Tomoyasu Shinoda, Shuntaro
Ogura, Akiyoshi Uemura, Takaki Miyata, Philip K. Maini and Takashi Miura | Analyzing the effect of cell rearrangement on Delta-Notch pattern
formation | 35 pages, 5 figures and Supplementary material (12 pages, 6 figures) | null | null | null | q-bio.CB nlin.PS | http://creativecommons.org/licenses/by/4.0/ | The Delta-Notch system plays a vital role in a number of areas in biology and
typically forms a salt and pepper pattern in which cells strongly expressing
Delta and cells strongly expressing Notch are alternately aligned via lateral
inhibition. Although the spatial arrangement of the cells is important to the
Delta-Notch pattern, the effect of cell rearrangement is not often considered.
In this study, we provide a framework to analytically evaluate the effect of
cell mixing and proliferation on Delta-Notch pattern formation in one spatial
dimension. We model cell rearrangement events by a Poisson process and analyze
the model while preserving the discrete properties of the spatial structure. We
find that the homogeneous expression pattern is stabilized if the frequency of
cell rearrangement events is sufficiently large. We analytically obtain the
critical frequencies of the cell rearrangement events where the decrease of the
pattern amplitude as a result of cell rearrangement is balanced by the increase
in amplitude due to the Delta-Notch interaction dynamics. Our theoretical
results are qualitatively consistent with experimental results, supporting the
notion that the heterogeneity of expression patterns is inversely correlated
with cell rearrangement \textit{in vivo}. Our framework, while applied here to
the specific case of the Delta-Notch system, is applicable more widely to other
pattern formation mechanisms.
| [
{
"created": "Wed, 1 Dec 2021 01:15:20 GMT",
"version": "v1"
}
] | 2021-12-02 | [
[
"Oguma",
"Toshiki",
""
],
[
"Takigawa-Imamura",
"Hisako",
""
],
[
"Shinoda",
"Tomoyasu",
""
],
[
"Ogura",
"Shuntaro",
""
],
[
"Uemura",
"Akiyoshi",
""
],
[
"Miyata",
"Takaki",
""
],
[
"Maini",
"Philip K.",
... | The Delta-Notch system plays a vital role in a number of areas in biology and typically forms a salt and pepper pattern in which cells strongly expressing Delta and cells strongly expressing Notch are alternately aligned via lateral inhibition. Although the spatial arrangement of the cells is important to the Delta-Notch pattern, the effect of cell rearrangement is not often considered. In this study, we provide a framework to analytically evaluate the effect of cell mixing and proliferation on Delta-Notch pattern formation in one spatial dimension. We model cell rearrangement events by a Poisson process and analyze the model while preserving the discrete properties of the spatial structure. We find that the homogeneous expression pattern is stabilized if the frequency of cell rearrangement events is sufficiently large. We analytically obtain the critical frequencies of the cell rearrangement events where the decrease of the pattern amplitude as a result of cell rearrangement is balanced by the increase in amplitude due to the Delta-Notch interaction dynamics. Our theoretical results are qualitatively consistent with experimental results, supporting the notion that the heterogeneity of expression patterns is inversely correlated with cell rearrangement \textit{in vivo}. Our framework, while applied here to the specific case of the Delta-Notch system, is applicable more widely to other pattern formation mechanisms. |
0907.4819 | Christopher Frenz | Christopher M. Frenz, Philippe P. Lefebvre | Presence of pKa Perturbations Among Homeodomain Residues Facilitates DNA
Binding | Frenz, C.M. & Lefebvre, P.P. (2009) Presence of pKa Perturbations
Among Homeodomain Residues Facilitates DNA Binding, Proceedings of the 2009
International Conference on Bioinformatics and Computational Biology (BIOCOMP
'09), 198-203 | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Homeodomain containing proteins are a broad class of DNA binding proteins
that are believed to primarily function as transcription factors.
Electrostatic interactions have been demonstrated to be critical for the
binding of the homeodomain to DNA. An examination of the electrostatic state of
homeodomain residues involved in DNA phosphate binding has demonstrated the
conserved presence of upward-shifted pKa values among the basic residues
lysine and arginine. It is believed that these pKa perturbations work to
facilitate binding to DNA since they ensure that the basic residues always
retain a positive charge.
| [
{
"created": "Tue, 28 Jul 2009 02:11:59 GMT",
"version": "v1"
}
] | 2009-07-29 | [
[
"Frenz",
"Christopher M.",
""
],
[
"Lefebvre",
"Philippe P.",
""
]
] | Homeodomain containing proteins are a broad class of DNA binding proteins that are believed to primarily function as transcription factors. Electrostatic interactions have been demonstrated to be critical for the binding of the homeodomain to DNA. An examination of the electrostatic state of homeodomain residues involved in DNA phosphate binding has demonstrated the conserved presence of upward-shifted pKa values among the basic residues lysine and arginine. It is believed that these pKa perturbations work to facilitate binding to DNA since they ensure that the basic residues always retain a positive charge. |
2307.13708 | Jesse Engreitz | IGVF Consortium | The Impact of Genomic Variation on Function (IGVF) Consortium | Draft Marker Paper for the Impact of Genomic Variation on Function
(IGVF) Consortium (https://www.igvf.org). Detailed author list (members of
the IGVF Consortium) is included in the manuscript | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Our genomes influence nearly every aspect of human biology from molecular and
cellular functions to phenotypes in health and disease. Human genetics studies
have now associated hundreds of thousands of differences in our DNA sequence
("genomic variation") with disease risk and other phenotypes, many of which
could reveal novel mechanisms of human biology and uncover the basis of genetic
predispositions to diseases, thereby guiding the development of new diagnostics
and therapeutics. Yet, understanding how genomic variation alters genome
function to influence phenotype has proven challenging. To unlock these
insights, we need a systematic and comprehensive catalog of genome function and
the molecular and cellular effects of genomic variants. Toward this goal, the
Impact of Genomic Variation on Function (IGVF) Consortium will combine
approaches in single-cell mapping, genomic perturbations, and predictive
modeling to investigate the relationships among genomic variation, genome
function, and phenotypes. Through systematic comparisons and benchmarking of
experimental and computational methods, we aim to create maps across hundreds
of cell types and states describing how coding variants alter protein activity,
how noncoding variants change the regulation of gene expression, and how both
coding and noncoding variants may connect through gene regulatory and protein
interaction networks. These experimental data, computational predictions, and
accompanying standards and pipelines will be integrated into an open resource
that will catalyze community efforts to explore genome function and the impact
of genetic variation on human biology and disease across populations.
| [
{
"created": "Mon, 24 Jul 2023 20:51:25 GMT",
"version": "v1"
}
] | 2023-07-27 | [
[
"Consortium",
"IGVF",
""
]
] | Our genomes influence nearly every aspect of human biology from molecular and cellular functions to phenotypes in health and disease. Human genetics studies have now associated hundreds of thousands of differences in our DNA sequence ("genomic variation") with disease risk and other phenotypes, many of which could reveal novel mechanisms of human biology and uncover the basis of genetic predispositions to diseases, thereby guiding the development of new diagnostics and therapeutics. Yet, understanding how genomic variation alters genome function to influence phenotype has proven challenging. To unlock these insights, we need a systematic and comprehensive catalog of genome function and the molecular and cellular effects of genomic variants. Toward this goal, the Impact of Genomic Variation on Function (IGVF) Consortium will combine approaches in single-cell mapping, genomic perturbations, and predictive modeling to investigate the relationships among genomic variation, genome function, and phenotypes. Through systematic comparisons and benchmarking of experimental and computational methods, we aim to create maps across hundreds of cell types and states describing how coding variants alter protein activity, how noncoding variants change the regulation of gene expression, and how both coding and noncoding variants may connect through gene regulatory and protein interaction networks. These experimental data, computational predictions, and accompanying standards and pipelines will be integrated into an open resource that will catalyze community efforts to explore genome function and the impact of genetic variation on human biology and disease across populations. |
1405.6232 | Rashid Williams-Garcia | Rashid V. Williams-Garcia, Mark Moore, John M. Beggs, Gerardo Ortiz | Quasi-Critical Brain Dynamics on a Non-Equilibrium Widom Line | Submitted to Physical Review Letters on March 29, 2014. Transferred
to Physical Review E on August 4, 2014 | null | 10.1103/PhysRevE.90.062714 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Is the brain really operating at a critical point? We study the
non-equilibrium properties of a neural network which models the dynamics of the
neocortex and argue for optimal quasi-critical dynamics on the Widom line where
the correlation length is maximal. We simulate the network and introduce an
analytical mean-field approximation, characterize the non-equilibrium phase
transition, and present a non-equilibrium phase diagram, which shows that in
addition to an ordered and disordered phase, the system exhibits a
quasiperiodic phase corresponding to synchronous activity in simulations which
may be related to the pathological synchronization associated with epilepsy.
| [
{
"created": "Fri, 23 May 2014 21:05:18 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Dec 2014 18:09:41 GMT",
"version": "v2"
}
] | 2015-06-19 | [
[
"Williams-Garcia",
"Rashid V.",
""
],
[
"Moore",
"Mark",
""
],
[
"Beggs",
"John M.",
""
],
[
"Ortiz",
"Gerardo",
""
]
] | Is the brain really operating at a critical point? We study the non-equilibrium properties of a neural network which models the dynamics of the neocortex and argue for optimal quasi-critical dynamics on the Widom line where the correlation length is maximal. We simulate the network and introduce an analytical mean-field approximation, characterize the non-equilibrium phase transition, and present a non-equilibrium phase diagram, which shows that in addition to an ordered and disordered phase, the system exhibits a quasiperiodic phase corresponding to synchronous activity in simulations which may be related to the pathological synchronization associated with epilepsy. |
1501.03173 | Ralf M Haefner | Ralf M. Haefner | A note on choice and detect probabilities in the presence of choice bias | 6 pages, 1 figure | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently we have presented the analytical relationship between choice
probabilities, noise correlations and read-out weights in the classical
feedforward decision-making framework (Haefner et al. 2013). The derivation
assumed that behavioral reports are distributed evenly between the two possible
choices. This assumption is often violated in empirical data - especially when
computing so-called grand CPs combining data across stimulus conditions. Here,
we extend our analytical results to situations when subjects show clear biases
towards one choice over the other, e.g. in non-zero signal conditions.
Importantly, this also extends our results from discrimination tasks to
detection tasks and detect probabilities for which much empirical data is
available. We find that CPs and DPs depend monotonically on the fraction, p, of
choices assigned to the more likely option: CPs and DPs are smallest for p
equal to 0.5 and increase as p increases, i.e. as the data deviates from the
ideal, zero-signal, unbiased scenario. While this deviation is small, our
results suggest a) an empirical test for the feedforward framework and b) a way
in which to correct choice probability and detect probability measurements
before combining different stimulus conditions to increase signal/noise.
| [
{
"created": "Tue, 13 Jan 2015 21:13:12 GMT",
"version": "v1"
}
] | 2015-01-15 | [
[
"Haefner",
"Ralf M.",
""
]
] | Recently we have presented the analytical relationship between choice probabilities, noise correlations and read-out weights in the classical feedforward decision-making framework (Haefner et al. 2013). The derivation assumed that behavioral reports are distributed evenly between the two possible choices. This assumption is often violated in empirical data - especially when computing so-called grand CPs combining data across stimulus conditions. Here, we extend our analytical results to situations when subjects show clear biases towards one choice over the other, e.g. in non-zero signal conditions. Importantly, this also extends our results from discrimination tasks to detection tasks and detect probabilities for which much empirical data is available. We find that CPs and DPs depend monotonically on the fraction, p, of choices assigned to the more likely option: CPs and DPs are smallest for p equal to 0.5 and increase as p increases, i.e. as the data deviates from the ideal, zero-signal, unbiased scenario. While this deviation is small, our results suggest a) an empirical test for the feedforward framework and b) a way in which to correct choice probability and detect probability measurements before combining different stimulus conditions to increase signal/noise. |
2401.01742 | Gopinath Sadhu | Gopinath Sadhu and D C Dalal | A mathematical study of the interaction between oxygen and lactate in an
in-vivo and in-vitro tumor | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Micro-environmental acidity is a common feature of the tumor. One of the
causes behind tumor acidity is lactate production by hypoxic cells of the
tumor. Hypoxia is a direct result of the establishment of oxygen gradients. It
is commonly observed in tumors both in in-vitro experimental setups and in
in-vivo situations. Here, we propose a mathematical model to analyse the
production of lactate by hypoxic cells and its use as an alternative fuel by
normoxic cells in tumor tissue under in-vitro and in-vivo conditions. In this
article, we study the effects of unequal oxygen concentrations at the tumor
boundaries on lactate status in the tumor. The effects of the presence of a
necrotic core in the tumor on the lactate concentration profile are also
examined. The results agree well with experimental data and align with the
theoretical findings of previous studies. The analytical results show that
lactate levels are elevated in an in-vivo tumor compared to an in-vitro tumor,
and that the effects of the necrotic core on lactate levels appear at the
onset of necrotic core formation. Knowledge of the lactate status in a
patient's tumor may be helpful in choosing the appropriate medicines for
cancer treatment.
| [
{
"created": "Wed, 3 Jan 2024 13:38:23 GMT",
"version": "v1"
}
] | 2024-01-04 | [
[
"Sadhu",
"Gopinath",
""
],
[
"Dalal",
"D C",
""
]
] | Micro-environmental acidity is a common feature of the tumor. One of the causes behind tumor acidity is lactate production by hypoxic cells of the tumor. Hypoxia is a direct result of the establishment of oxygen gradients. It is commonly observed in tumors both in in-vitro experimental setups and in in-vivo situations. Here, we propose a mathematical model to analyse the production of lactate by hypoxic cells and its use as an alternative fuel by normoxic cells in tumor tissue under in-vitro and in-vivo conditions. In this article, we study the effects of unequal oxygen concentrations at the tumor boundaries on lactate status in the tumor. The effects of the presence of a necrotic core in the tumor on the lactate concentration profile are also examined. The results agree well with experimental data and align with the theoretical findings of previous studies. The analytical results show that lactate levels are elevated in an in-vivo tumor compared to an in-vitro tumor, and that the effects of the necrotic core on lactate levels appear at the onset of necrotic core formation. Knowledge of the lactate status in a patient's tumor may be helpful in choosing the appropriate medicines for cancer treatment. |
1806.04793 | Fabian Tschopp | Fabian David Tschopp, Michael B. Reiser, Srinivas C. Turaga | A Connectome Based Hexagonal Lattice Convolutional Network Model of the
Drosophila Visual System | Work in progress. Final paper with results from an updated model with
new connectome data will be coming soon | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What can we learn from a connectome? We constructed a simplified model of the
first two stages of the fly visual system, the lamina and medulla. The
resulting hexagonal lattice convolutional network was trained using
backpropagation through time to perform object tracking in natural scene
videos. Networks initialized with weights from connectome reconstructions
automatically discovered well-known orientation and direction selectivity
properties in T4 neurons and their inputs, while networks initialized at random
did not. Our work is the first demonstration that knowledge of the connectome
can enable in silico predictions of the functional properties of individual
neurons in a circuit, leading to an understanding of circuit function from
structure alone.
| [
{
"created": "Tue, 12 Jun 2018 22:57:14 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Jun 2018 10:36:40 GMT",
"version": "v2"
}
] | 2018-06-26 | [
[
"Tschopp",
"Fabian David",
""
],
[
"Reiser",
"Michael B.",
""
],
[
"Turaga",
"Srinivas C.",
""
]
] | What can we learn from a connectome? We constructed a simplified model of the first two stages of the fly visual system, the lamina and medulla. The resulting hexagonal lattice convolutional network was trained using backpropagation through time to perform object tracking in natural scene videos. Networks initialized with weights from connectome reconstructions automatically discovered well-known orientation and direction selectivity properties in T4 neurons and their inputs, while networks initialized at random did not. Our work is the first demonstration that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit, leading to an understanding of circuit function from structure alone. |
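The hexagonal lattice underlying this network can be illustrated with a small helper. The axial-coordinate convention and the function names below are my own choices for illustration, not taken from the paper:

```python
# Neighbours of a cell in a hexagonal lattice, using axial coordinates (q, r).
# Every interior cell of a hex grid has exactly six neighbours, matching the
# six cartridges surrounding each ommatidium in the fly lamina.

AXIAL_OFFSETS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

def hex_neighbors(q, r):
    """Return the axial coordinates of the six neighbours of cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_OFFSETS]

def hex_distance(a, b):
    """Grid distance (number of steps) between two axial coordinates."""
    aq, ar = a
    bq, br = b
    return (abs(aq - bq) + abs(aq + ar - bq - br) + abs(ar - br)) // 2

# A 'hexagonal convolution' kernel then carries one weight per cell within a
# given radius, e.g. 7 weights for radius 1 (centre plus 6 neighbours).
```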
2108.13415 | Mohamed Zagour | Mohamed Zagour | A nonlinear cross-diffusion epidemic with time-dependent SIRD system:
Multiscale derivation and computational analysis | null | null | null | null | q-bio.PE cs.NA math.AP math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A nonlinear cross-diffusion epidemic model with a time-dependent
Susceptible-Infected-Recovered-Died system is proposed in this paper. This
system is derived from a kinetic theory model by a multiscale approach, which
leads to an equivalent system coupling the microscopic and macroscopic
equations. Subsequently, a numerical scheme with the asymptotic-preserving
property is developed and validated by various numerical tests. Finally, the
numerical results of the proposed system are discussed in two-dimensional
space using the finite volume method.
| [
{
"created": "Sat, 28 Aug 2021 11:53:09 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Zagour",
"Mohamed",
""
]
] | A nonlinear cross-diffusion epidemic model with a time-dependent Susceptible-Infected-Recovered-Died system is proposed in this paper. This system is derived from a kinetic theory model by a multiscale approach, which leads to an equivalent system coupling the microscopic and macroscopic equations. Subsequently, a numerical scheme with the asymptotic-preserving property is developed and validated by various numerical tests. Finally, the numerical results of the proposed system are discussed in two-dimensional space using the finite volume method. |
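Dropping the spatial cross-diffusion terms, the macroscopic time-dependent SIRD backbone of such a system can be sketched with a forward-Euler integrator. The parameter values and names below are illustrative assumptions, not taken from the paper:

```python
def sird_step(s, i, r, d, beta, gamma, mu, dt):
    """One forward-Euler step of the classical SIRD compartment model.
    beta: transmission rate, gamma: recovery rate, mu: death rate."""
    new_inf = beta * s * i          # flux S -> I
    rec = gamma * i                 # flux I -> R
    die = mu * i                    # flux I -> D
    return (s - dt * new_inf,
            i + dt * (new_inf - rec - die),
            r + dt * rec,
            d + dt * die)

def simulate(days=160, dt=0.1, beta=0.4, gamma=0.1, mu=0.01):
    """Integrate the SIRD system from a 1% infected initial condition."""
    s, i, r, d = 0.99, 0.01, 0.0, 0.0
    for _ in range(int(days / dt)):
        s, i, r, d = sird_step(s, i, r, d, beta, gamma, mu, dt)
    return s, i, r, d
```

Since the four compartment fluxes cancel pairwise, the total population s + i + r + d is conserved exactly, a useful sanity check for any discretization.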
2211.03193 | Lakshmi Ghantasala | Lakshmi A. Ghantasala, Risi Jaiswal, Supriyo Datta | An Efficient MCMC Approach to Energy Function Optimization in Protein
Structure Prediction | 10 pages, 4 figures | null | null | null | q-bio.BM q-bio.QM stat.CO | http://creativecommons.org/licenses/by/4.0/ | Protein structure prediction is a critical problem linked to drug design,
mutation detection, and protein synthesis, among other applications. To this
end, evolutionary data has been used to build contact maps which are
traditionally minimized as energy functions via gradient descent based schemes
like the L-BFGS algorithm. In this paper we present what we call the
Alternating Metropolis-Hastings (AMH) algorithm, which (a) significantly
improves the performance of traditional MCMC methods, (b) is inherently
parallelizable allowing significant hardware acceleration using GPU, and (c)
can be integrated with the L-BFGS algorithm to improve its performance. The
algorithm shows an improvement in energy of found structures of 8.17% to 61.04%
(average 38.9%) over traditional MH and 0.53% to 17.75% (average 8.9%) over
traditional MH with intermittent noisy restarts, tested across 9 proteins from
recent CASP competitions. We go on to map the Alternating MH algorithm to a
GPGPU which improves sampling rate by 277x and improves simulation time to a
low energy protein prediction by 7.5x to 26.5x over CPU. We show that our
approach can be incorporated into state-of-the-art protein prediction pipelines
by applying it to both trRosetta2's energy function and the distogram component
of Alphafold1's energy function. Finally, we note that specially designed
probabilistic computers (or p-computers) can provide even better performance
than GPUs for MCMC algorithms like the one discussed here.
| [
{
"created": "Sun, 6 Nov 2022 18:19:36 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Ghantasala",
"Lakshmi A.",
""
],
[
"Jaiswal",
"Risi",
""
],
[
"Datta",
"Supriyo",
""
]
] | Protein structure prediction is a critical problem linked to drug design, mutation detection, and protein synthesis, among other applications. To this end, evolutionary data has been used to build contact maps which are traditionally minimized as energy functions via gradient descent based schemes like the L-BFGS algorithm. In this paper we present what we call the Alternating Metropolis-Hastings (AMH) algorithm, which (a) significantly improves the performance of traditional MCMC methods, (b) is inherently parallelizable allowing significant hardware acceleration using GPU, and (c) can be integrated with the L-BFGS algorithm to improve its performance. The algorithm shows an improvement in energy of found structures of 8.17% to 61.04% (average 38.9%) over traditional MH and 0.53% to 17.75% (average 8.9%) over traditional MH with intermittent noisy restarts, tested across 9 proteins from recent CASP competitions. We go on to map the Alternating MH algorithm to a GPGPU which improves sampling rate by 277x and improves simulation time to a low energy protein prediction by 7.5x to 26.5x over CPU. We show that our approach can be incorporated into state-of-the-art protein prediction pipelines by applying it to both trRosetta2's energy function and the distogram component of Alphafold1's energy function. Finally, we note that specially designed probabilistic computers (or p-computers) can provide even better performance than GPUs for MCMC algorithms like the one discussed here. |
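The alternating and GPU-parallel aspects of the AMH algorithm are beyond a short sketch, but the plain Metropolis-Hastings energy minimisation it builds on can be illustrated on a toy 1-D landscape. All names, the toy energy function, and parameter values below are assumptions for illustration:

```python
import math
import random

def metropolis_hastings(energy, x0, steps=20000, step_size=0.5, temp=1.0, seed=0):
    """Minimal Metropolis-Hastings sampler over a 1-D energy landscape.
    Tracks the lowest-energy state visited, as in energy-function minimisation."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step_size)
        e_cand = energy(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_cand <= e or rng.random() < math.exp((e - e_cand) / temp):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy tilted double-well energy; the global minimum lies near x = -1.
double_well = lambda x: (x * x - 1.0) ** 2 + 0.1 * x
```

With the modest barrier of this toy landscape the chain hops between wells freely; protein energy functions are far rougher, which is what motivates the parallel-chain and restart machinery described in the abstract.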
1312.3350 | Michel Yamagishi | Michel Eduardo Beleza Yamagishi and Roberto Hirochi Herai | Expanding the Grammar of Biology | 9 pages, 1 figure | null | 10.1007/978-3-319-62689-5 | null | q-bio.GN | http://creativecommons.org/licenses/publicdomain/ | We metaphorically call "Grammar of Biology" a small field of genomic
research, whose main objective is to search for intrinsic DNA sequence
properties. Erwin Chargaff inaugurated it back in the 1950s, but since then
little progress has been made. It remained almost neglected until the early
1990s, when Vinayakumar V. Prabhu made a major contribution by determining the
Symmetry Principle. Remarkably, different sciences have contributed to its
development;
for instance, Chargaff used his Chemistry background to discover the so-called
Chargaff's rules; taking advantage of several publicly available genomic
sequences, and through Computational and Statistical analyses, Prabhu
identified the Symmetry Principle and, recently, using a Mathematical approach,
we have discovered four new Generalized Chargaff's rules. Our work has expanded
the "Grammar of Biology", and created the conceptual and theoretical framework
necessary for further developments.
| [
{
"created": "Wed, 11 Dec 2013 21:20:51 GMT",
"version": "v1"
}
] | 2019-06-13 | [
[
"Yamagishi",
"Michel Eduardo Beleza",
""
],
[
"Herai",
"Roberto Hirochi",
""
]
] | We metaphorically call "Grammar of Biology" a small field of genomic research, whose main objective is to search for intrinsic DNA sequence properties. Erwin Chargaff inaugurated it back in the 1950s, but since then little progress has been made. It remained almost neglected until the early 1990s, when Vinayakumar V. Prabhu made a major contribution by determining the Symmetry Principle. Remarkably, different sciences have contributed to its development; for instance, Chargaff used his Chemistry background to discover the so-called Chargaff's rules; taking advantage of several publicly available genomic sequences, and through Computational and Statistical analyses, Prabhu identified the Symmetry Principle and, recently, using a Mathematical approach, we have discovered four new Generalized Chargaff's rules. Our work has expanded the "Grammar of Biology", and created the conceptual and theoretical framework necessary for further developments. |
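Chargaff's second parity rule, the single-strand counterpart of the rules discussed above, is straightforward to check computationally. This is an illustrative sketch; the toy sequence is fabricated for demonstration:

```python
def parity_gaps(seq):
    """Chargaff's second parity rule: within a single strand, the counts of A
    and T (and of G and C) are approximately equal. Returns the two relative
    gaps |#A - #T| / n and |#G - #C| / n, which should be close to zero for
    long genomic sequences."""
    seq = seq.upper()
    n = len(seq)
    a, t, g, c = (seq.count(b) for b in "ATGC")
    return abs(a - t) / n, abs(g - c) / n

# A toy 'genome' built from a repeated reverse-complement-palindromic motif
# obeys the rule exactly; real chromosomes obey it only approximately.
toy = "ATGCGCAT" * 1000
```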
0801.4190 | Sebastian Roch | Constantinos Daskalakis, Elchanan Mossel, Sebastien Roch | Phylogenies without Branch Bounds: Contracting the Short, Pruning the
Deep | null | null | null | null | q-bio.PE cs.CE cs.DS math.PR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new phylogenetic reconstruction algorithm which, unlike most
previous rigorous inference techniques, does not rely on assumptions regarding
the branch lengths or the depth of the tree. The algorithm returns a forest
which is guaranteed to contain all edges that are: 1) sufficiently long and 2)
sufficiently close to the leaves. How much of the true tree is recovered
depends on the sequence length provided. The algorithm is distance-based and
runs in polynomial time.
| [
{
"created": "Mon, 28 Jan 2008 05:10:22 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2009 01:48:27 GMT",
"version": "v2"
}
] | 2011-09-30 | [
[
"Daskalakis",
"Constantinos",
""
],
[
"Mossel",
"Elchanan",
""
],
[
"Roch",
"Sebastien",
""
]
] | We introduce a new phylogenetic reconstruction algorithm which, unlike most previous rigorous inference techniques, does not rely on assumptions regarding the branch lengths or the depth of the tree. The algorithm returns a forest which is guaranteed to contain all edges that are: 1) sufficiently long and 2) sufficiently close to the leaves. How much of the true tree is recovered depends on the sequence length provided. The algorithm is distance-based and runs in polynomial time. |
2407.07530 | Thomas Klein | Jannis Ahlert, Thomas Klein, Felix Wichmann and Robert Geirhos | How Aligned are Different Alignment Metrics? | submitted to the ICLR 2024 Workshop on Representational Alignment
(Re-Align) | null | null | null | q-bio.NC cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, various methods and benchmarks have been proposed to
empirically evaluate the alignment of artificial neural networks to human
neural and behavioral data. But how aligned are different alignment metrics? To
answer this question, we analyze visual data from Brain-Score (Schrimpf et al.,
2018), including metrics from the model-vs-human toolbox (Geirhos et al.,
2021), together with human feature alignment (Linsley et al., 2018; Fel et al.,
2022) and human similarity judgements (Muttenthaler et al., 2022). We find that
pairwise correlations between neural scores and behavioral scores are quite low
and sometimes even negative. For instance, the average correlation between
those 80 models on Brain-Score that were fully evaluated on all 69 alignment
metrics we considered is only 0.198. Assuming that all of the employed metrics
are sound, this implies that alignment with human perception may best be
thought of as a multidimensional concept, with different methods measuring
fundamentally different aspects. Our results underline the importance of
integrative benchmarking, but also raise questions about how to correctly
combine and aggregate individual metrics. Aggregating by taking the arithmetic
average, as done in Brain-Score, leads to the overall performance currently
being dominated by behavior (95.25% explained variance) while the neural
predictivity plays a less important role (only 33.33% explained variance). As a
first step towards making sure that different alignment metrics all contribute
fairly towards an integrative benchmark score, we therefore conclude by
comparing three different aggregation options.
| [
{
"created": "Wed, 10 Jul 2024 10:36:11 GMT",
"version": "v1"
}
] | 2024-07-11 | [
[
"Ahlert",
"Jannis",
""
],
[
"Klein",
"Thomas",
""
],
[
"Wichmann",
"Felix",
""
],
[
"Geirhos",
"Robert",
""
]
] | In recent years, various methods and benchmarks have been proposed to empirically evaluate the alignment of artificial neural networks to human neural and behavioral data. But how aligned are different alignment metrics? To answer this question, we analyze visual data from Brain-Score (Schrimpf et al., 2018), including metrics from the model-vs-human toolbox (Geirhos et al., 2021), together with human feature alignment (Linsley et al., 2018; Fel et al., 2022) and human similarity judgements (Muttenthaler et al., 2022). We find that pairwise correlations between neural scores and behavioral scores are quite low and sometimes even negative. For instance, the average correlation between those 80 models on Brain-Score that were fully evaluated on all 69 alignment metrics we considered is only 0.198. Assuming that all of the employed metrics are sound, this implies that alignment with human perception may best be thought of as a multidimensional concept, with different methods measuring fundamentally different aspects. Our results underline the importance of integrative benchmarking, but also raise questions about how to correctly combine and aggregate individual metrics. Aggregating by taking the arithmetic average, as done in Brain-Score, leads to the overall performance currently being dominated by behavior (95.25% explained variance) while the neural predictivity plays a less important role (only 33.33% explained variance). As a first step towards making sure that different alignment metrics all contribute fairly towards an integrative benchmark score, we therefore conclude by comparing three different aggregation options. |
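The two ingredients of the analysis above, pairwise Pearson correlation between metric scores and arithmetic-average aggregation, can be sketched as follows. The score vectors are hypothetical, not Brain-Score data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two score vectors (one entry per model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def aggregate(scores_per_metric):
    """Arithmetic-average aggregation over metrics, one score per model."""
    n_models = len(scores_per_metric[0])
    return [sum(m[i] for m in scores_per_metric) / len(scores_per_metric)
            for i in range(n_models)]

# Two hypothetical metrics over four models: uncorrelated score vectors
# illustrate how a single average can hide disagreement between metrics.
neural = [0.2, 0.8, 0.5, 0.5]
behav  = [0.5, 0.5, 0.2, 0.8]
```

Here the two metrics rank the models in unrelated ways, yet the averaged scores look unremarkable, which is the aggregation pitfall the abstract raises.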
2103.00481 | Jesus Malo | Qiang Li and Alex Gomez-Villa and Marcelo Bertalmio and Jesus Malo | Contrast Sensitivity Functions in Autoencoders | Accepted in the Journal of Vision | Journal of Vision 2022;22(6):8 | 10.1167/jov.22.6.8. | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Three decades ago, Atick et al. suggested that human frequency sensitivity
may emerge from the enhancement required for a more efficient analysis of
retinal images. Here we reassess the relevance of low-level vision tasks in the
explanation of the Contrast Sensitivity Functions (CSFs) in light of (1) the
current trend of using artificial neural networks for studying vision, and (2)
the current knowledge of retinal image representations.
As a first contribution, we show that a very popular type of convolutional
neural networks (CNNs), called autoencoders, may develop human-like CSFs in the
spatio-temporal and chromatic dimensions when trained to perform some basic
low-level vision tasks (like retinal noise and optical blur removal), but not
others (like chromatic adaptation or pure reconstruction after simple
bottlenecks). As an illustrative example, the best CNN (in the considered set
of simple architectures for enhancement of the retinal signal) reproduces the
CSFs with an RMSE error of 11\% of the maximum sensitivity.
As a second contribution, we provide experimental evidence of the fact that,
for some functional goals (at low abstraction level), deeper CNNs that are
better in reaching the quantitative goal are actually worse in replicating
human-like phenomena (such as the CSFs). This low-level result (for the
explored networks) is not necessarily in contradiction with other works that
report advantages of deeper nets in modeling higher-level vision goals.
However, in line with a growing body of literature, our results suggest
another word of caution about CNNs in vision science since the use of
simplified units or unrealistic architectures in goal optimization may be a
limitation for the modeling and understanding of human vision.
| [
{
"created": "Sun, 28 Feb 2021 12:19:02 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Mar 2021 14:23:41 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Aug 2021 13:49:27 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Aug 2021 13:49:35 GMT",
"version": "v4"
},
{
"c... | 2022-05-24 | [
[
"Li",
"Qiang",
""
],
[
"Gomez-Villa",
"Alex",
""
],
[
"Bertalmio",
"Marcelo",
""
],
[
"Malo",
"Jesus",
""
]
] | Three decades ago, Atick et al. suggested that human frequency sensitivity may emerge from the enhancement required for a more efficient analysis of retinal images. Here we reassess the relevance of low-level vision tasks in the explanation of the Contrast Sensitivity Functions (CSFs) in light of (1) the current trend of using artificial neural networks for studying vision, and (2) the current knowledge of retinal image representations. As a first contribution, we show that a very popular type of convolutional neural networks (CNNs), called autoencoders, may develop human-like CSFs in the spatio-temporal and chromatic dimensions when trained to perform some basic low-level vision tasks (like retinal noise and optical blur removal), but not others (like chromatic adaptation or pure reconstruction after simple bottlenecks). As an illustrative example, the best CNN (in the considered set of simple architectures for enhancement of the retinal signal) reproduces the CSFs with an RMSE error of 11\% of the maximum sensitivity. As a second contribution, we provide experimental evidence of the fact that, for some functional goals (at low abstraction level), deeper CNNs that are better in reaching the quantitative goal are actually worse in replicating human-like phenomena (such as the CSFs). This low-level result (for the explored networks) is not necessarily in contradiction with other works that report advantages of deeper nets in modeling higher-level vision goals. However, in line with a growing body of literature, our results suggest another word of caution about CNNs in vision science since the use of simplified units or unrealistic architectures in goal optimization may be a limitation for the modeling and understanding of human vision. |
2402.04275 | Zhenping Xie | Zhenping Xie | Motion Mapping Cognition: A Nondecomposable Primary Process in Human
Vision | 7 pages, 3 figures | null | null | null | q-bio.NC cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human intelligence seems so mysterious that we have not yet successfully
understood its foundation. Here, I present a basic cognitive process, motion
mapping cognition (MMC), which should be a nondecomposable primary function in
human vision. I point out that the MMC process can be used to explain most
human visual functions at a fundamental level, but cannot be effectively
modelled by traditional visual processing approaches such as image
segmentation, object recognition, and object tracking. Furthermore, I state
that MMC may be viewed as an extension of Chen's theory of topological
perception in human vision, and seems to be unsolvable with existing
intelligent algorithm techniques. Finally, following the requirements of the
MMC problem, an interesting computational model, the quantized topological
matching principle, can be derived by developing ideas from optimal transport
theory. These results may provide strong inspiration for developing more
robust and interpretable machine vision models.
| [
{
"created": "Fri, 2 Feb 2024 10:11:25 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Xie",
"Zhenping",
""
]
] | Human intelligence seems so mysterious that we have not yet successfully understood its foundation. Here, I present a basic cognitive process, motion mapping cognition (MMC), which should be a nondecomposable primary function in human vision. I point out that the MMC process can be used to explain most human visual functions at a fundamental level, but cannot be effectively modelled by traditional visual processing approaches such as image segmentation, object recognition, and object tracking. Furthermore, I state that MMC may be viewed as an extension of Chen's theory of topological perception in human vision, and seems to be unsolvable with existing intelligent algorithm techniques. Finally, following the requirements of the MMC problem, an interesting computational model, the quantized topological matching principle, can be derived by developing ideas from optimal transport theory. These results may provide strong inspiration for developing more robust and interpretable machine vision models. |
1903.11670 | Francesc Rossell\'o | Tom\'as M. Coronado, Francesc Rossell\'o | The minimum value of the Colless index | This manuscript has been subsumed by another manuscript, which can be
found on arXiv: arXiv:1907.05064 | null | null | null | q-bio.PE cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Colless index is one of the oldest and most widely used balance indices
for rooted bifurcating trees. Despite its popularity, its minimum value on the
space $\mathcal{T}_n$ of rooted bifurcating trees with $n$ leaves is only known
when $n$ is a power of 2. In this paper we fill this gap in the literature, by
providing a formula that computes, for each $n$, the minimum Colless index on
$\mathcal{T}_n$, and characterizing those trees where this minimum value is
reached.
| [
{
"created": "Wed, 27 Mar 2019 19:47:38 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Apr 2019 17:02:40 GMT",
"version": "v2"
},
{
"created": "Tue, 7 May 2019 10:41:49 GMT",
"version": "v3"
},
{
"created": "Tue, 23 Jul 2019 15:17:40 GMT",
"version": "v4"
}
] | 2019-07-24 | [
[
"Coronado",
"Tomás M.",
""
],
[
"Rosselló",
"Francesc",
""
]
] | The Colless index is one of the oldest and most widely used balance indices for rooted bifurcating trees. Despite its popularity, its minimum value on the space $\mathcal{T}_n$ of rooted bifurcating trees with $n$ leaves is only known when $n$ is a power of 2. In this paper we fill this gap in the literature, by providing a formula that computes, for each $n$, the minimum Colless index on $\mathcal{T}_n$, and characterizing those trees where this minimum value is reached. |
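The Colless index itself can be computed in a few lines; the sketch below encodes rooted bifurcating trees as nested tuples (an encoding chosen for illustration; the paper's minimum-value formula is not reproduced here):

```python
def leaf_count(tree):
    """A tree is either a leaf (any non-tuple label) or a pair (left, right)."""
    if not isinstance(tree, tuple):
        return 1
    return leaf_count(tree[0]) + leaf_count(tree[1])

def colless(tree):
    """Colless index: sum over internal nodes of
    |#leaves(left subtree) - #leaves(right subtree)|."""
    if not isinstance(tree, tuple):
        return 0
    left, right = tree
    return (abs(leaf_count(left) - leaf_count(right))
            + colless(left) + colless(right))

balanced    = (("a", "b"), ("c", "d"))   # most balanced tree on 4 leaves
caterpillar = ((("a", "b"), "c"), "d")   # most unbalanced tree on 4 leaves
```

For n = 4 (a power of 2) the minimum value 0 is attained by the fully balanced tree; the paper's contribution is the minimum for arbitrary n, where no perfectly balanced tree exists.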
q-bio/0611007 | Frederick Matsen IV | Mike Steel and Frederick A. Matsen | The Bayesian `star paradox' persists for long finite sequences | 1 figure | null | null | null | q-bio.PE | null | The `star paradox' in phylogenetics is the tendency for a particular resolved
tree to be sometimes strongly supported even when the data is generated by an
unresolved (`star') tree. There have been contrary claims as to whether this
phenomenon persists when very long sequences are considered. This note settles
one aspect of this debate by proving mathematically that there is always a
chance that a resolved tree could be strongly supported, even as the length of
the sequences becomes very large.
| [
{
"created": "Fri, 3 Nov 2006 03:17:32 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Steel",
"Mike",
""
],
[
"Matsen",
"Frederick A.",
""
]
] | The `star paradox' in phylogenetics is the tendency for a particular resolved tree to be sometimes strongly supported even when the data is generated by an unresolved (`star') tree. There have been contrary claims as to whether this phenomenon persists when very long sequences are considered. This note settles one aspect of this debate by proving mathematically that there is always a chance that a resolved tree could be strongly supported, even as the length of the sequences becomes very large. |
0710.0717 | Thierry Rabilloud | Sylvie Luche (BBSI), C\'ecile Lelong (BBSI), H\'el\`ene Diemer (IPHC),
Alain Van Dorsselaer (IPHC), Thierry Rabilloud (BBSI) | Ultrafast coelectrophoretic fluorescent staining of proteins with
carbocyanines | null | Proteomics 7, 18 (2007) 3234-44 | 10.1002/pmic.200700365 | null | q-bio.GN | null | Protein detection on SDS gels or on 2-D gels must combine several features,
such as sensitivity, homogeneity from one protein to another, speed, low cost,
and user-friendliness. For some applications, it is also interesting to have a
nonfixing stain, so that proteins can be mobilized from the gel for further use
(electroelution, blotting). We show here that coelectrophoretic staining by
fluorophores of the oxacarbocyanine family, and especially
diheptyloxacarbocyanine, offers several positive features. The sensitivity is
intermediate between that of colloidal CBB and that of fluorescent
ruthenium complexes. Detection is achieved within 1 h after the end of the
electrophoretic process and does not use any fixing or toxic agent. The
fluorescent SDS-carbocyanine-protein complexes can be detected either with a
laser scanner with an excitation wavelength of 488 nm or with a UV table
operating at 302 nm. Excellent sequence coverage in subsequent MS analysis of
proteolytic peptides is also achieved with this detection method.
| [
{
"created": "Wed, 3 Oct 2007 07:05:24 GMT",
"version": "v1"
}
] | 2007-10-04 | [
[
"Luche",
"Sylvie",
"",
"BBSI"
],
[
"Lelong",
"Cécile",
"",
"BBSI"
],
[
"Diemer",
"Hélène",
"",
"IPHC"
],
[
"Van Dorsselaer",
"Alain",
"",
"IPHC"
],
[
"Rabilloud",
"Thierry",
"",
"BBSI"
]
] | Protein detection on SDS gels or on 2-D gels must combine several features, such as sensitivity, homogeneity from one protein to another, speed, low cost, and user-friendliness. For some applications, it is also interesting to have a nonfixing stain, so that proteins can be mobilized from the gel for further use (electroelution, blotting). We show here that coelectrophoretic staining by fluorophores of the oxacarbocyanine family, and especially diheptyloxacarbocyanine, offers several positive features. The sensitivity is intermediate between that of colloidal CBB and that of fluorescent ruthenium complexes. Detection is achieved within 1 h after the end of the electrophoretic process and does not use any fixing or toxic agent. The fluorescent SDS-carbocyanine-protein complexes can be detected either with a laser scanner with an excitation wavelength of 488 nm or with a UV table operating at 302 nm. Excellent sequence coverage in subsequent MS analysis of proteolytic peptides is also achieved with this detection method. |
2203.16261 | Mohammed Alser | Mohammed Alser, Sharon Waymost, Ram Ayyala, Brendan Lawlor, Richard J.
Abdill, Neha Rajkumar, Nathan LaPierre, Jaqueline Brito, Andre M.
Ribeiro-dos-Santos, Can Firtina, Nour Almadhoun, Varuni Sarwal, Eleazar
Eskin, Qiyang Hu, Derek Strong, Byoung-Do (BD) Kim, Malak S. Abedalthagafi,
Onur Mutlu, Serghei Mangul | Packaging, containerization, and virtualization of computational omics
methods: Advances, challenges, and opportunities | null | null | null | null | q-bio.GN cs.DC cs.SE stat.AP | http://creativecommons.org/licenses/by/4.0/ | Omics software tools have reshaped the landscape of modern biology and become
an essential component of biomedical research. The increasing dependence of
biomedical scientists on these powerful tools creates a need for easier
installation and greater usability. Packaging, virtualization, and
containerization are different approaches to satisfy this need by wrapping
omics tools in additional software that makes the omics tools easier to install
and use. Here, we systematically review practices across prominent packaging,
virtualization, and containerization platforms. We outline the challenges,
advantages, and limitations of each approach and some of the most widely used
platforms from the perspectives of users, software developers, and system
administrators. We also propose principles to make packaging, virtualization,
and containerization of omics software more sustainable and robust to increase
the reproducibility of biomedical and life science research.
| [
{
"created": "Wed, 30 Mar 2022 12:44:45 GMT",
"version": "v1"
}
] | 2022-03-31 | [
[
"Alser",
"Mohammed",
"",
"BD"
],
[
"Waymost",
"Sharon",
"",
"BD"
],
[
"Ayyala",
"Ram",
"",
"BD"
],
[
"Lawlor",
"Brendan",
"",
"BD"
],
[
"Abdill",
"Richard J.",
"",
"BD"
],
[
"Rajkumar",
"Neha",
"",
... | Omics software tools have reshaped the landscape of modern biology and become an essential component of biomedical research. The increasing dependence of biomedical scientists on these powerful tools creates a need for easier installation and greater usability. Packaging, virtualization, and containerization are different approaches to satisfy this need by wrapping omics tools in additional software that makes the omics tools easier to install and use. Here, we systematically review practices across prominent packaging, virtualization, and containerization platforms. We outline the challenges, advantages, and limitations of each approach and some of the most widely used platforms from the perspectives of users, software developers, and system administrators. We also propose principles to make packaging, virtualization, and containerization of omics software more sustainable and robust to increase the reproducibility of biomedical and life science research. |
1708.02423 | Jonathan Schiefer | Jonathan Schiefer, Alexander Niederb\"uhl, Volker Pernice, Carolin
Lennartz, Pierre LeVan, J\"urgen Henning and Stefan Rotter | From Correlation to Causation: Estimation of Effective Connectivity from
Continuous Brain Signals based on Zero-Lag Covariance | 18 pages, 10 figures | PLoS Comput Biol 14(3): e1006056 (2018) | 10.1371/journal.pcbi.1006056 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowing brain connectivity is of great importance both in basic research and
for clinical applications. We are proposing a method to infer directed
connectivity from zero-lag covariances of neuronal activity recorded at
multiple sites. This allows us to identify causal relations that are reflected
in neuronal population activity. To derive our strategy, we assume a generic
linear model of interacting continuous variables, the components of which
represent the activity of local neuronal populations. The suggested method for
inferring connectivity from recorded signals exploits the fact that the
covariance matrix derived from the observed activity contains information about
the existence, the direction and the sign of connections. Assuming a sparsely
coupled network, we disambiguate the underlying causal structure via
$L^1$-minimization. In general, this method is suited to infer effective
connectivity from resting state data of various types. We show that our method
is applicable over a broad range of structural parameters regarding network
size and connection probability of the network. We also explored parameters
affecting its activity dynamics, like the eigenvalue spectrum. Also, based on
the simulation of suitable Ornstein-Uhlenbeck processes to model BOLD dynamics,
we show that with our method it is possible to estimate directed connectivity
from zero-lag covariances derived from such signals. In this study, we consider
measurement noise and unobserved nodes as additional confounding factors.
Furthermore, we investigate the amount of data required for a reliable
estimate. Additionally, we apply the proposed method to an fMRI dataset. The
resulting network exhibits a tendency for close-by areas being connected as
well as inter-hemispheric connections between corresponding areas. Also, we
found that a large fraction of identified connections were inhibitory.
| [
{
"created": "Tue, 8 Aug 2017 09:33:35 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2017 09:10:32 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Jan 2018 09:51:56 GMT",
"version": "v3"
},
{
"created": "Mon, 9 Apr 2018 09:02:57 GMT",
"version": "v4"
}
] | 2018-04-10 | [
[
"Schiefer",
"Jonathan",
""
],
[
"Niederbühl",
"Alexander",
""
],
[
"Pernice",
"Volker",
""
],
[
"Lennartz",
"Carolin",
""
],
[
"LeVan",
"Pierre",
""
],
[
"Henning",
"Jürgen",
""
],
[
"Rotter",
"Stefan",
""
... | Knowing brain connectivity is of great importance both in basic research and in clinical applications. We propose a method to infer directed connectivity from zero-lag covariances of neuronal activity recorded at multiple sites. This allows us to identify causal relations that are reflected in neuronal population activity. To derive our strategy, we assume a generic linear model of interacting continuous variables, the components of which represent the activity of local neuronal populations. The suggested method for inferring connectivity from recorded signals exploits the fact that the covariance matrix derived from the observed activity contains information about the existence, the direction and the sign of connections. Assuming a sparsely coupled network, we disambiguate the underlying causal structure via $L^1$-minimization. In general, this method is suited to infer effective connectivity from resting state data of various types. We show that our method is applicable over a broad range of structural parameters regarding network size and connection probability of the network. We also explored parameters affecting its activity dynamics, like the eigenvalue spectrum. Also, based on the simulation of suitable Ornstein-Uhlenbeck processes to model BOLD dynamics, we show that with our method it is possible to estimate directed connectivity from zero-lag covariances derived from such signals. In this study, we consider measurement noise and unobserved nodes as additional confounding factors. Furthermore, we investigate the amount of data required for a reliable estimate. Additionally, we apply the proposed method to an fMRI dataset. The resulting network exhibits a tendency for close-by areas being connected as well as inter-hemispheric connections between corresponding areas. Also, we found that a large fraction of identified connections were inhibitory. |
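A minimal illustration of the idea that the zero-lag covariance carries a footprint of connections: simulate a two-node linear (Ornstein-Uhlenbeck) system in which node 0 drives node 1 and inspect the equal-time covariances. This is not the paper's $L^1$-minimization pipeline; all parameter values and names are illustrative:

```python
import random

def simulate_ou(w, steps=200000, dt=0.01, leak=1.0, sigma=1.0, seed=1):
    """Euler-Maruyama simulation of a two-node linear (OU) system where
    node 0 drives node 1 with weight w:  dx = (-leak*x + W x) dt + noise.
    Returns the sample zero-lag covariances (c00, c01, c11)."""
    rng = random.Random(seed)
    x0 = x1 = 0.0
    s00 = s01 = s11 = 0.0
    for _ in range(steps):
        n0 = rng.gauss(0.0, 1.0)
        n1 = rng.gauss(0.0, 1.0)
        x0 += dt * (-leak * x0) + sigma * (dt ** 0.5) * n0
        x1 += dt * (-leak * x1 + w * x0) + sigma * (dt ** 0.5) * n1
        s00 += x0 * x0
        s01 += x0 * x1
        s11 += x1 * x1
    return s00 / steps, s01 / steps, s11 / steps
```

A positive coupling produces a clearly positive cross-covariance and extra variance in the driven node, whereas an uncoupled pair shows neither; recovering the *direction* of the connection from the symmetric covariance matrix additionally requires the sparsity assumption described in the abstract.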
2004.05060 | Rosalyn Moran | Rosalyn J. Moran, Erik D. Fagerholm, Maell Cullen, Jean Daunizeau,
Mark P. Richardson, Steven Williams, Federico Turkheimer, Rob Leech, Karl J.
Friston | Estimating required 'lockdown' cycles before immunity to SARS-CoV-2:
Model-based analyses of susceptible population sizes, 'S0', in seven European
countries including the UK and Ireland | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We used Bayesian model inversion to estimate epidemic parameters from the
reported case and death rates from seven countries using data from late January
2020 to April 5th 2020. Two distinct generative model types were employed:
first a continuous time dynamical-systems implementation of a
Susceptible-Exposed-Infectious-Recovered (SEIR) model and second: a partially
observable Markov Decision Process (MDP) or hidden Markov model (HMM)
implementation of an SEIR model. Both models parameterise the size of the
initial susceptible population (S0), as well as epidemic parameters. Parameter
estimation (data fitting) was performed using a standard Bayesian scheme
(variational Laplace) designed to allow for latent unobservable states and
uncertainty in model parameters.
Both models recapitulated the dynamics of transmissions and disease as given
by case and death rates. The peaks of the current waves were predicted to be in
the past for four countries (Italy, Spain, Germany and Switzerland) and to
emerge in 0.5-2 weeks in Ireland and 1-3 weeks in the UK. For France one model
estimated the peak within the past week and the other in the future in two
weeks. Crucially, Maximum a posteriori (MAP) estimates of S0 for each country
indicated effective population sizes of below 20% (of total population size),
under both the continuous time and HMM models. With a Bayesian weighted average
across all seven countries and both models, we estimated that 6.4% of the total
population would be immune. From the two models the maximum percentage of the
effective population was estimated at 19.6% of the total population for the UK,
16.7% for Ireland, 11.4% for Italy, 12.8% for Spain, 18.8% for France, 4.7% for
Germany and 12.9% for Switzerland.
Our results indicate that after the current wave, a large proportion of the
total population will remain without immunity.
| [
{
"created": "Thu, 9 Apr 2020 16:59:04 GMT",
"version": "v1"
}
] | 2020-04-13 | [
[
"Moran",
"Rosalyn J.",
""
],
[
"Fagerholm",
"Erik D.",
""
],
[
"Cullen",
"Maell",
""
],
[
"Daunizeau",
"Jean",
""
],
[
"Richardson",
"Mark P.",
""
],
[
"Williams",
"Steven",
""
],
[
"Turkheimer",
"Federico",
... | We used Bayesian model inversion to estimate epidemic parameters from the reported case and death rates from seven countries using data from late January 2020 to April 5th 2020. Two distinct generative model types were employed: first a continuous time dynamical-systems implementation of a Susceptible-Exposed-Infectious-Recovered (SEIR) model and second: a partially observable Markov Decision Process (MDP) or hidden Markov model (HMM) implementation of an SEIR model. Both models parameterise the size of the initial susceptible population (S0), as well as epidemic parameters. Parameter estimation (data fitting) was performed using a standard Bayesian scheme (variational Laplace) designed to allow for latent unobservable states and uncertainty in model parameters. Both models recapitulated the dynamics of transmissions and disease as given by case and death rates. The peaks of the current waves were predicted to be in the past for four countries (Italy, Spain, Germany and Switzerland) and to emerge in 0.5-2 weeks in Ireland and 1-3 weeks in the UK. For France one model estimated the peak within the past week and the other in the future in two weeks. Crucially, Maximum a posteriori (MAP) estimates of S0 for each country indicated effective population sizes of below 20% (of total population size), under both the continuous time and HMM models. With a Bayesian weighted average across all seven countries and both models, we estimated that 6.4% of the total population would be immune. From the two models the maximum percentage of the effective population was estimated at 19.6% of the total population for the UK, 16.7% for Ireland, 11.4% for Italy, 12.8% for Spain, 18.8% for France, 4.7% for Germany and 12.9% for Switzerland. Our results indicate that after the current wave, a large proportion of the total population will remain without immunity. |
2312.17216 | Rainer Engelken | Rainer Engelken | SparseProp: Efficient Event-Based Simulation and Training of Sparse
Recurrent Spiking Neural Networks | 10 pages, 4 figures, accepted at NeurIPS | null | null | null | q-bio.NC cs.AI cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) are biologically-inspired models that are
capable of processing information in streams of action potentials. However,
simulating and training SNNs is computationally expensive due to the need to
solve large systems of coupled differential equations. In this paper, we
introduce SparseProp, a novel event-based algorithm for simulating and training
sparse SNNs. Our algorithm reduces the computational cost of both the forward
and backward pass operations from O(N) to O(log(N)) per network spike, thereby
enabling numerically exact simulations of large spiking networks and their
efficient training using backpropagation through time. By leveraging the
sparsity of the network, SparseProp eliminates the need to iterate through all
neurons at each spike, employing efficient state updates instead. We
demonstrate the efficacy of SparseProp across several classical
integrate-and-fire neuron models, including a simulation of a sparse SNN with
one million LIF neurons. This results in a speed-up exceeding four orders of
magnitude relative to previous event-based implementations. Our work provides
an efficient and exact solution for training large-scale spiking neural
networks and opens up new possibilities for building more sophisticated
brain-inspired models.
| [
{
"created": "Thu, 28 Dec 2023 18:48:10 GMT",
"version": "v1"
}
] | 2023-12-29 | [
[
"Engelken",
"Rainer",
""
]
] | Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials. However, simulating and training SNNs is computationally expensive due to the need to solve large systems of coupled differential equations. In this paper, we introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs. Our algorithm reduces the computational cost of both the forward and backward pass operations from O(N) to O(log(N)) per network spike, thereby enabling numerically exact simulations of large spiking networks and their efficient training using backpropagation through time. By leveraging the sparsity of the network, SparseProp eliminates the need to iterate through all neurons at each spike, employing efficient state updates instead. We demonstrate the efficacy of SparseProp across several classical integrate-and-fire neuron models, including a simulation of a sparse SNN with one million LIF neurons. This results in a speed-up exceeding four orders of magnitude relative to previous event-based implementations. Our work provides an efficient and exact solution for training large-scale spiking neural networks and opens up new possibilities for building more sophisticated brain-inspired models. |
2408.02166 | Swagatam Mukhopadhyay | Swagatam Mukhopadhyay | Efficient Approximate Methods for Design of Experiments for Copolymer
Engineering | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | We develop a set of algorithms to solve a broad class of Design of Experiment
(DoE) problems efficiently. Specifically, we consider problems in which one
must choose a subset of polymers to test in experiments such that the learning
of the polymeric design rules is optimal. This subset must be selected from a
larger set of polymers permissible under arbitrary experimental design
constraints. We demonstrate the performance of our algorithms by solving
several pragmatic nucleic acid therapeutics engineering scenarios, where
limitations in synthesis of chemically diverse nucleic acids or feasibility of
measurements in experimental setups appear as constraints. Our approach focuses
on identifying optimal experimental designs from a given set of experiments,
which is in contrast to traditional, generative DoE methods like BIBD. Finally,
we discuss how these algorithms are broadly applicable to well-established
optimal DoE criteria like D-optimality.
| [
{
"created": "Sun, 4 Aug 2024 23:31:07 GMT",
"version": "v1"
}
] | 2024-08-06 | [
[
"Mukhopadhyay",
"Swagatam",
""
]
] | We develop a set of algorithms to solve a broad class of Design of Experiment (DoE) problems efficiently. Specifically, we consider problems in which one must choose a subset of polymers to test in experiments such that the learning of the polymeric design rules is optimal. This subset must be selected from a larger set of polymers permissible under arbitrary experimental design constraints. We demonstrate the performance of our algorithms by solving several pragmatic nucleic acid therapeutics engineering scenarios, where limitations in synthesis of chemically diverse nucleic acids or feasibility of measurements in experimental setups appear as constraints. Our approach focuses on identifying optimal experimental designs from a given set of experiments, which is in contrast to traditional, generative DoE methods like BIBD. Finally, we discuss how these algorithms are broadly applicable to well-established optimal DoE criteria like D-optimality. |
2006.01666 | Isabel Gonzalo Fonrodona | Isabel Gonzalo-Fonrodona and Miguel A. Porras | Nervous Excitability dynamics in a multisensory syndrome and its
similitude with a normal state. Scaling Laws | 16 pages and 16 figures. A version of this article was published in
the reference indicated. arXiv admin note: text overlap with arXiv:0808.1135 | In: Horizons in Neuroscience Research, Volume 13, Chap. 10, A.
Costa and E.Villalba (Eds), 2014 Nova Sci. Publish., Inc., pp 161-189 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of increased number of works published on multisensory and
cross-modal effects, we review a cortical multisensory syndrome (called central
syndrome) associated with a unilateral parieto-occipital lesion in a rather
unspecific (or multisensory) zone of the cortex.
The patients with this syndrome suffered from bilateral and symmetric
multisensory disorders dependent on the extent of nervous mass lost and the
intensity of the stimulus. They also presented cross-modal effects. A key point
is the similitude of this syndrome with a normal state, since this syndrome
would be the result of a scale reduction in brain excitability. The first
qualities lost when the nervous excitation diminishes are the most complex
ones, following allometric laws proper of a dynamic system.
The inverted perception (visual, tactile, auditive) in this syndrome is
compared to other cases of visual inversion reported in the literature. We
focus on the capability of improving perception by intensifying the stimulus or
by means of another type of stimulus (cross-modal), muscular effort being one
of the most efficient and least known means. This capability is greater when
nervous excitability deficit (lesion) is greater and when the primary stimulus
is weaker. Thus, in a normal subject, this capability is much weaker although
perceptible for functions with high excitability demand. We also review the
proposed scheme of functional cortical gradients whereby the specificity of the
cortex is distributed with a continuous variation leading to a brain dynamics
model accounting for multisensory or cross-modal interactions. Perception data
(including cross-modal effects) in this syndrome are fitted using Stevens'
power law which we relate to the allometric scaling power laws dependent on the
active neural mass, which seem to be the laws governing many biological neural
networks.
| [
{
"created": "Mon, 1 Jun 2020 13:55:42 GMT",
"version": "v1"
}
] | 2020-06-03 | [
[
"Gonzalo-Fonrodona",
"Isabel",
""
],
[
"Porras",
"Miguel A.",
""
]
] | In the context of increased number of works published on multisensory and cross-modal effects, we review a cortical multisensory syndrome (called central syndrome) associated with a unilateral parieto-occipital lesion in a rather unspecific (or multisensory) zone of the cortex. The patients with this syndrome suffered from bilateral and symmetric multisensory disorders dependent on the extent of nervous mass lost and the intensity of the stimulus. They also presented cross-modal effects. A key point is the similitude of this syndrome with a normal state, since this syndrome would be the result of a scale reduction in brain excitability. The first qualities lost when the nervous excitation diminishes are the most complex ones, following allometric laws proper of a dynamic system. The inverted perception (visual, tactile, auditive) in this syndrome is compared to other cases of visual inversion reported in the literature. We focus on the capability of improving perception by intensifying the stimulus or by means of another type of stimulus (cross-modal), muscular effort being one of the most efficient and least known means. This capability is greater when nervous excitability deficit (lesion) is greater and when the primary stimulus is weaker. Thus, in a normal subject, this capability is much weaker although perceptible for functions with high excitability demand. We also review the proposed scheme of functional cortical gradients whereby the specificity of the cortex is distributed with a continuous variation leading to a brain dynamics model accounting for multisensory or cross-modal interactions. Perception data (including cross-modal effects) in this syndrome are fitted using Stevens' power law which we relate to the allometric scaling power laws dependent on the active neural mass, which seem to be the laws governing many biological neural networks. |
2307.04815 | Raul De Palma Aristides | R. P. Aristides and A. J. Pons and H. A. Cerdeira and C. Masoller and
G. Tirabass | Parameter and coupling estimation in small groups of Izhikevich neurons | null | Chaos, vol. 33, n. 4, 2023 | 10.1063/5.0144499 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Nowadays, experimental techniques allow scientists to have access to large
amounts of data. In order to obtain reliable information from the complex
systems which produce these data, appropriate analysis tools are needed. The
Kalman filter is a frequently used technique to infer, assuming a model of
the system, the parameters of the model from uncertain observations. A
well-known implementation of the Kalman filter, the Unscented Kalman filter
(UKF), was recently shown to be able to infer the connectivity of a set of
coupled chaotic oscillators. In this work, we test whether the UKF can also
reconstruct the connectivity of small groups of coupled neurons when their
links are either electrical or chemical synapses. In particular, we
consider Izhikevich neurons, and aim to infer which neurons influence each
other, considering simulated spike trains as the experimental observations
used by the UKF. First, we verify that the UKF can recover the parameters of
a single neuron, even when the parameters vary in time. Second, we analyze
small neural ensembles and demonstrate that the UKF allows inferring the
connectivity between the neurons, even for heterogeneous, directed, and
temporally evolving networks. Our results show that time-dependent parameter
and coupling estimation is possible in this nonlinearly coupled system.
| [
{
"created": "Fri, 16 Jun 2023 10:27:50 GMT",
"version": "v1"
}
] | 2023-07-12 | [
[
"Aristides",
"R. P.",
""
],
[
"Pons",
"A. J.",
""
],
[
"Cerdeira",
"H. A.",
""
],
[
"Masoller",
"C.",
""
],
[
"Tirabass",
"G.",
""
]
] | Nowadays, experimental techniques allow scientists to have access to large amounts of data. In order to obtain reliable information from the complex systems which produce these data, appropriate analysis tools are needed. The Kalman filter is a frequently used technique to infer, assuming a model of the system, the parameters of the model from uncertain observations. A well-known implementation of the Kalman filter, the Unscented Kalman filter (UKF), was recently shown to be able to infer the connectivity of a set of coupled chaotic oscillators. In this work, we test whether the UKF can also reconstruct the connectivity of small groups of coupled neurons when their links are either electrical or chemical synapses. In particular, we consider Izhikevich neurons, and aim to infer which neurons influence each other, considering simulated spike trains as the experimental observations used by the UKF. First, we verify that the UKF can recover the parameters of a single neuron, even when the parameters vary in time. Second, we analyze small neural ensembles and demonstrate that the UKF allows inferring the connectivity between the neurons, even for heterogeneous, directed, and temporally evolving networks. Our results show that time-dependent parameter and coupling estimation is possible in this nonlinearly coupled system.
1508.06226 | Edgardo Bonzi | E. V. Bonzi, G. B. Grad, A. M. Maggi, M. R. Mu\~n\'oz | Study of the characteristic parameters of the normal voices of
Argentinian speakers | 5 pages, 6 figures | Papers in Physics 6, 060002 (2014) | 10.4279/PIP.060002 | null | q-bio.NC cs.SD | http://creativecommons.org/licenses/by/3.0/ | The voice laboratory permits to study the human voices using a method that is
objective and noninvasive. In this work, we have studied the parameters of the
human voice such as pitch, formant, jitter, shimmer and harmonic-noise ratio of
a group of young people. This statistical information of parameters is obtained
from Argentinian speakers.
| [
{
"created": "Thu, 18 Dec 2014 17:11:56 GMT",
"version": "v1"
}
] | 2015-08-26 | [
[
"Bonzi",
"E. V.",
""
],
[
"Grad",
"G. B.",
""
],
[
"Maggi",
"A. M.",
""
],
[
"Muñóz",
"M. R.",
""
]
] | The voice laboratory permits to study the human voices using a method that is objective and noninvasive. In this work, we have studied the parameters of the human voice such as pitch, formant, jitter, shimmer and harmonic-noise ratio of a group of young people. This statistical information of parameters is obtained from Argentinian speakers. |
1708.06158 | M\"ursel Karadas | M\"ursel Karadas, Adam M. Wojciechowski, Alexander Huck, Nils Ole
Dalby, Ulrik Lund Andersen, Axel Thielscher | Opto-magnetic imaging of neural network activity in brain slices at high
resolution using color centers in diamond | 28 pages, 8 figures; SI 11 pages, 4 figures | null | null | null | q-bio.NC physics.med-ph quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a novel approach for wide-field imaging of the neural network
dynamics of brain slices that uses highly sensitive magnetometry based on
nitrogen-vacancy (NV) centers in diamond. In-vitro recordings in brain slices
are a proven method for the characterization of electrical neural activity and
have strongly contributed to our understanding of the mechanisms that govern
neural information processing. However, traditional recordings can only acquire
signals from a few positions simultaneously, which severely limits their
ability to characterize the dynamics of the underlying neural networks. We
suggest to radically extend the scope of this method using the wide-field
imaging of the neural magnetic fields across the slice by means of NV
magnetometry. Employing comprehensive computational simulations and theoretical
analyses, we characterize the spatiotemporal characteristics of the neural
magnetic fields and derive the required key performance parameters of an
imaging setup based on NV magnetometry. In particular, we determine how the
technical parameters determine the achievable spatial resolution for an optimal
reconstruction of the neural currents from the measured field distributions.
Finally, we compare the imaging of neural slice activity with that of a single
planar pyramidal cell. Our results suggest that imaging of neural slice
activity will be possible with the upcoming generation of NV magnetic field
sensors, while imaging of the activity of a single planar cell remains more
challenging.
| [
{
"created": "Mon, 21 Aug 2017 11:36:54 GMT",
"version": "v1"
}
] | 2017-08-22 | [
[
"Karadas",
"Mürsel",
""
],
[
"Wojciechowski",
"Adam M.",
""
],
[
"Huck",
"Alexander",
""
],
[
"Dalby",
"Nils Ole",
""
],
[
"Andersen",
"Ulrik Lund",
""
],
[
"Thielscher",
"Axel",
""
]
] | We suggest a novel approach for wide-field imaging of the neural network dynamics of brain slices that uses highly sensitive magnetometry based on nitrogen-vacancy (NV) centers in diamond. In-vitro recordings in brain slices are a proven method for the characterization of electrical neural activity and have strongly contributed to our understanding of the mechanisms that govern neural information processing. However, traditional recordings can only acquire signals from a few positions simultaneously, which severely limits their ability to characterize the dynamics of the underlying neural networks. We suggest to radically extend the scope of this method using the wide-field imaging of the neural magnetic fields across the slice by means of NV magnetometry. Employing comprehensive computational simulations and theoretical analyses, we characterize the spatiotemporal characteristics of the neural magnetic fields and derive the required key performance parameters of an imaging setup based on NV magnetometry. In particular, we determine how the technical parameters determine the achievable spatial resolution for an optimal reconstruction of the neural currents from the measured field distributions. Finally, we compare the imaging of neural slice activity with that of a single planar pyramidal cell. Our results suggest that imaging of neural slice activity will be possible with the upcoming generation of NV magnetic field sensors, while imaging of the activity of a single planar cell remains more challenging.
1312.3647 | Mihalis Kavousanakis | I Aviziotis and M Kavousanakis and I Bitsanis and A Boudouvis | Coarse-grained analysis of stochastically simulated cell populations
with a positive feedback genetic network architecture | null | null | null | null | q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the different computational approaches modelling the dynamics of
isogenic cell populations, discrete stochastic models can describe with
sufficient accuracy the evolution of small size populations. However, for a
systematic and efficient study of their long-time behaviour over a wide range
of parameter values, the performance of solely direct temporal simulations
requires significantly high computational time. In addition, when the dynamics
of the cell populations exhibit non-trivial bistable behaviour, such an
analysis becomes a prohibitive task, since a large ensemble of initial states
need to be tested for the quest of possibly co-existing steady state solutions.
In this work, we study cell populations which carry the {\it lac} operon
network exhibiting solution multiplicity over a wide range of extracellular
conditions (inducer concentration). By adopting ideas from the so-called
``equation-free'' methodology, we perform systems-level analysis, which
includes numerical tasks such as the computation of {\it coarse} steady state
solutions, {\it coarse} bifurcation analysis, as well as {\it coarse} stability
analysis. Dynamically stable and unstable macroscopic (population level) steady
state solutions are computed by means of bifurcation analysis utilising short
bursts of fine-scale simulations, and the range of bistability is determined
for different sizes of cell populations. The results are compared with the
deterministic cell population balance (CPB) model, which is valid for large
populations, and we demonstrate the increased effect of stochasticity in small
size populations with asymmetric partitioning mechanisms.
| [
{
"created": "Thu, 12 Dec 2013 21:05:46 GMT",
"version": "v1"
}
] | 2013-12-16 | [
[
"Aviziotis",
"I",
""
],
[
"Kavousanakis",
"M",
""
],
[
"Bitsanis",
"I",
""
],
[
"Boudouvis",
"A",
""
]
] | Among the different computational approaches modelling the dynamics of isogenic cell populations, discrete stochastic models can describe with sufficient accuracy the evolution of small size populations. However, for a systematic and efficient study of their long-time behaviour over a wide range of parameter values, the performance of solely direct temporal simulations requires significantly high computational time. In addition, when the dynamics of the cell populations exhibit non-trivial bistable behaviour, such an analysis becomes a prohibitive task, since a large ensemble of initial states need to be tested for the quest of possibly co-existing steady state solutions. In this work, we study cell populations which carry the {\it lac} operon network exhibiting solution multiplicity over a wide range of extracellular conditions (inducer concentration). By adopting ideas from the so-called ``equation-free'' methodology, we perform systems-level analysis, which includes numerical tasks such as the computation of {\it coarse} steady state solutions, {\it coarse} bifurcation analysis, as well as {\it coarse} stability analysis. Dynamically stable and unstable macroscopic (population level) steady state solutions are computed by means of bifurcation analysis utilising short bursts of fine-scale simulations, and the range of bistability is determined for different sizes of cell populations. The results are compared with the deterministic cell population balance (CPB) model, which is valid for large populations, and we demonstrate the increased effect of stochasticity in small size populations with asymmetric partitioning mechanisms. |
1701.02562 | Alejandro F Villaverde | Alejandro F. Villaverde and Julio R. Banga | Dynamical compensation in biological systems as a particular case of
structural non-identifiability | null | null | null | null | q-bio.QM cs.SY math.DS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamical compensation (DC) has been recently defined as the ability of a
biological system to keep its output dynamics unchanged in the face of varying
parameters. This concept is purported to describe a design principle that
provides robustness to physiological circuits. Here we note the similitude
between DC and Structural Identifiability (SI), and we argue that the former
can be explained in terms of (lack of) the latter. We propose to exploit this
fact by using currently existing tools for SI analysis to perform DC analysis.
We demonstrate the feasibility of this approach with four physiological
circuits, for which we confirm the correspondence between DC and lack of SI. We
also warn that care should be taken when using an unidentifiable model to
extract biological insight, since lack of SI can be the result of an
inappropriate choice of model structure and therefore not necessarily a sign of
biological robustness.
| [
{
"created": "Tue, 10 Jan 2017 12:56:24 GMT",
"version": "v1"
}
] | 2017-01-11 | [
[
"Villaverde",
"Alejandro F.",
""
],
[
"Banga",
"Julio R.",
""
]
] | Dynamical compensation (DC) has been recently defined as the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. This concept is purported to describe a design principle that provides robustness to physiological circuits. Here we note the similitude between DC and Structural Identifiability (SI), and we argue that the former can be explained in terms of (lack of) the latter. We propose to exploit this fact by using currently existing tools for SI analysis to perform DC analysis. We demonstrate the feasibility of this approach with four physiological circuits, for which we confirm the correspondence between DC and lack of SI. We also warn that care should be taken when using an unidentifiable model to extract biological insight, since lack of SI can be the result of an inappropriate choice of model structure and therefore not necessarily a sign of biological robustness.
1909.05778 | Saeed Aljaberi | Saeed Aljaberi, Timothy O'Leary, Fulvio Forni | Qualitative behavior and robustness of dendritic trafficking | 6 pages, 58th IEEE Conference on Decision and Control | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper studies homeostatic ion channel trafficking in neurons. We derive a
nonlinear closed-loop model that captures active transport with degradation,
channel insertion, average membrane potential activity, and integral control.
We study the model via dominance theory and differential dissipativity to show
when steady regulation gives way to pathological oscillations. We provide
quantitative results on the robustness of the closed loop behavior to static
and dynamic uncertainties, which allows us to understand how cell growth
interacts with ion channel regulation.
| [
{
"created": "Thu, 12 Sep 2019 16:19:27 GMT",
"version": "v1"
}
] | 2019-09-13 | [
[
"Aljaberi",
"Saeed",
""
],
[
"O'Leary",
"Timothy",
""
],
[
"Forni",
"Fulvio",
""
]
] | The paper studies homeostatic ion channel trafficking in neurons. We derive a nonlinear closed-loop model that captures active transport with degradation, channel insertion, average membrane potential activity, and integral control. We study the model via dominance theory and differential dissipativity to show when steady regulation gives way to pathological oscillations. We provide quantitative results on the robustness of the closed loop behavior to static and dynamic uncertainties, which allows us to understand how cell growth interacts with ion channel regulation. |
2205.02844 | Hilbert Lam Mr | Hilbert Lam Yuen In, Robbe Pincket | Transcripts per million ratio: applying distribution-aware normalisation
over the popular TPM method | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Current popular methods in literature of RNA sequencing normalisation do not
account for gene length when compared across samples, whilst adjusting for
count biases in the data. This creates a gap in the normalisation as bigger
genes in RNA sequencing accumulate more reads due to shotgun sequencing
methods. As a result, the proportions of these reads inter-sample are not
properly accounted for in current normalisation methods. Alternatively, methods
which account for gene length do not account for the pan-sample biases in the
data by accounting for a central read average. Thus, in order to fill in the
gap in the literature, we propose a novel method of Transcripts Per Million
Ratio and its relatives in RNA-sequencing differential expression normalisation
that can be used in different conditions, which takes into account the gene
length as well as relative expression in normalisation.
| [
{
"created": "Thu, 5 May 2022 06:43:10 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Aug 2022 22:43:53 GMT",
"version": "v2"
}
] | 2022-09-02 | [
[
"In",
"Hilbert Lam Yuen",
""
],
[
"Pincket",
"Robbe",
""
]
] | Current popular methods in literature of RNA sequencing normalisation do not account for gene length when compared across samples, whilst adjusting for count biases in the data. This creates a gap in the normalisation as bigger genes in RNA sequencing accumulate more reads due to shotgun sequencing methods. As a result, the proportions of these reads inter-sample are not properly accounted for in current normalisation methods. Alternatively, methods which account for gene length do not account for the pan-sample biases in the data by accounting for a central read average. Thus, in order to fill in the gap in the literature, we propose a novel method of Transcripts Per Million Ratio and its relatives in RNA-sequencing differential expression normalisation that can be used in different conditions, which takes into account the gene length as well as relative expression in normalisation. |
2101.11614 | Donghyun Kim | Donghyun Kim | Predicting Participation in Cancer Screening Programs with Machine
Learning | null | null | null | null | q-bio.OT cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present machine learning models based on random forest
classifiers, support vector machines, gradient boosted decision trees, and
artificial neural networks to predict participation in cancer screening
programs in South Korea. The top performing model was based on gradient boosted
decision trees and achieved an area under the receiver operating characteristic
curve (AUC-ROC) of 0.8706 and average precision of 0.8776. The results of this
study are encouraging and suggest that with further research, these models can
be directly applied to Korea's healthcare system, thus increasing participation
in Korea's National Cancer Screening Program.
| [
{
"created": "Wed, 27 Jan 2021 11:05:46 GMT",
"version": "v1"
}
] | 2021-01-29 | [
[
"Kim",
"Donghyun",
""
]
] | In this paper, we present machine learning models based on random forest classifiers, support vector machines, gradient boosted decision trees, and artificial neural networks to predict participation in cancer screening programs in South Korea. The top performing model was based on gradient boosted decision trees and achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.8706 and average precision of 0.8776. The results of this study are encouraging and suggest that with further research, these models can be directly applied to Korea's healthcare system, thus increasing participation in Korea's National Cancer Screening Program. |
1309.7474 | Ozlem Tastan Bishop | Rowan Hatherley, Crystal-Leigh Clitheroe, Ngonidzashe Faya and \"Ozlem
Tastan Bishop | Plasmodium falciparum Hop: Detailed analysis on complex formation with
Hsp70 and Hsp90 | 54 pages, 4 Figures, 5 Supplementary Data items | Biochemical and Biophysical Research Communications 2015 vol 456
pp 440-445 | 10.1016/j.bbrc.2014.11.103 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The heat shock organizing protein (Hop) is important in modulating the
activity and co-interaction of two chaperones: heat shock protein 70 and 90
(Hsp70 and Hsp90). Recent research suggested that Plasmodium falciparum Hop
(PfHop), PfHsp70 and PfHsp90 form a complex in the trophozoite infective stage.
However, there has been little computational research on the malarial Hop
protein in complex with other malarial Hsps. Using in silico characterization
of the protein, this work showed that individual domains of Hop are evolving at
different rates within the protein. Differences between human Hop (HsHop) and
PfHop were identified by motif analysis. Homology modeling of PfHop and HsHop
in complex with their own cytosolic Hsp90 and Hsp70 C-terminal peptide partners
indicated excellent conservation of the Hop concave TPR sites bound to the
C-terminal motifs of partner proteins. Further, we analyzed additional binding
sites between Hop and Hsp90, and showed, for the first time, that they are
distinctly less conserved between human and malaria parasite. These sites are
located on the convex surface of Hop TPR2, and involved in interactions with
the Hsp90 middle domain. Since the convex sites are less conserved than the
concave sites, it makes their potential for malarial inhibitor design extremely
attractive (as opposed to the concave sites which have been the focus of
previous efforts).
| [
{
"created": "Sat, 28 Sep 2013 16:35:32 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Dec 2014 14:03:45 GMT",
"version": "v2"
}
] | 2014-12-24 | [
[
"Hatherley",
"Rowan",
""
],
[
"Clitheroe",
"Crystal-Leigh",
""
],
[
"Faya",
"Ngonidzashe",
""
],
[
"Bishop",
"Özlem Tastan",
""
]
] | The heat shock organizing protein (Hop) is important in modulating the activity and co-interaction of two chaperones: heat shock protein 70 and 90 (Hsp70 and Hsp90). Recent research suggested that Plasmodium falciparum Hop (PfHop), PfHsp70 and PfHsp90 form a complex in the trophozoite infective stage. However, there has been little computational research on the malarial Hop protein in complex with other malarial Hsps. Using in silico characterization of the protein, this work showed that individual domains of Hop are evolving at different rates within the protein. Differences between human Hop (HsHop) and PfHop were identified by motif analysis. Homology modeling of PfHop and HsHop in complex with their own cytosolic Hsp90 and Hsp70 C-terminal peptide partners indicated excellent conservation of the Hop concave TPR sites bound to the C-terminal motifs of partner proteins. Further, we analyzed additional binding sites between Hop and Hsp90, and showed, for the first time, that they are distinctly less conserved between human and malaria parasite. These sites are located on the convex surface of Hop TPR2, and involved in interactions with the Hsp90 middle domain. Since the convex sites are less conserved than the concave sites, it makes their potential for malarial inhibitor design extremely attractive (as opposed to the concave sites which have been the focus of previous efforts). |
q-bio/0602003 | Lars Grant | Damon A. Clark and Lars C. Grant | The Bacterial Chemotactic Response Reflects a Compromise Between
Transient and Steady State Behavior | 19 pages, 5 figures | Proc. Natl. Acad. Sci. U.S.A., 102, (26): 9150-9155 (2005) | 10.1073/pnas.0407659102 | null | q-bio.CB cond-mat.stat-mech q-bio.PE | null | Swimming bacteria detect chemical gradients by performing temporal
comparisons of recent measurements of chemical concentration. These comparisons
are described quantitatively by the chemotactic response function, which we
expect to optimize chemotactic behavioral performance. We identify two
independent chemotactic performance criteria: in the short run, a favorable
response function should move bacteria up chemoattractant gradients, while in
the long run, bacteria should aggregate at peaks of chemoattractant
concentration. Surprisingly, these two criteria conflict, so that when one
performance criterion is most favorable, the other is unfavorable. Since both
types of behavior are biologically relevant, we include both behaviors in a
composite optimization that yields a response function that closely resembles
experimental measurements. Our work suggests that the bacterial chemotactic
response function can be derived from simple behavioral considerations, and
sheds light on how the response function contributes to chemotactic
performance.
| [
{
"created": "Fri, 3 Feb 2006 00:26:35 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Clark",
"Damon A.",
""
],
[
"Grant",
"Lars C.",
""
]
] | Swimming bacteria detect chemical gradients by performing temporal comparisons of recent measurements of chemical concentration. These comparisons are described quantitatively by the chemotactic response function, which we expect to optimize chemotactic behavioral performance. We identify two independent chemotactic performance criteria: in the short run, a favorable response function should move bacteria up chemoattractant gradients, while in the long run, bacteria should aggregate at peaks of chemoattractant concentration. Surprisingly, these two criteria conflict, so that when one performance criterion is most favorable, the other is unfavorable. Since both types of behavior are biologically relevant, we include both behaviors in a composite optimization that yields a response function that closely resembles experimental measurements. Our work suggests that the bacterial chemotactic response function can be derived from simple behavioral considerations, and sheds light on how the response function contributes to chemotactic performance. |
1302.1148 | Richard A Neher | Richard A. Neher | Genetic draft, selective interference, and population genetics of rapid
adaptation | supplementary illustrations and scripts are available at
http://webdav.tuebingen.mpg.de/interference/ | Annual Review of Ecology, Evolution, and Systematics Vol. 44:
195-215, 2013 | 10.1146/annurev-ecolsys-110512-135920 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To learn about the past from a sample of genomic sequences, one needs to
understand how evolutionary processes shape genetic diversity. Most population
genetic inference is based on frameworks assuming adaptive evolution is rare.
But if positive selection operates on many loci simultaneously, as has recently
been suggested for many species including animals such as flies, a different
approach is necessary. In this review, I discuss recent progress in
characterizing and understanding evolution in rapidly adapting populations
where random associations of mutations with genetic backgrounds of different
fitness, i.e., genetic draft, dominate over genetic drift. As a result, neutral
genetic diversity depends weakly on population size, but strongly on the rate
of adaptation or more generally the variance in fitness. Coalescent processes
with multiple mergers, rather than Kingman's coalescent, are appropriate
genealogical models for rapidly adapting populations with important
implications for population genetic inference.
| [
{
"created": "Tue, 5 Feb 2013 18:39:07 GMT",
"version": "v1"
}
] | 2014-03-25 | [
[
"Neher",
"Richard A.",
""
]
] | To learn about the past from a sample of genomic sequences, one needs to understand how evolutionary processes shape genetic diversity. Most population genetic inference is based on frameworks assuming adaptive evolution is rare. But if positive selection operates on many loci simultaneously, as has recently been suggested for many species including animals such as flies, a different approach is necessary. In this review, I discuss recent progress in characterizing and understanding evolution in rapidly adapting populations where random associations of mutations with genetic backgrounds of different fitness, i.e., genetic draft, dominate over genetic drift. As a result, neutral genetic diversity depends weakly on population size, but strongly on the rate of adaptation or more generally the variance in fitness. Coalescent processes with multiple mergers, rather than Kingman's coalescent, are appropriate genealogical models for rapidly adapting populations with important implications for population genetic inference. |
1306.2123 | Charalambos Neophytou | Charalambos Neophytou, Hans-Gerhard Michiels | Upper Rhine Valley: A migration crossroads of middle European oaks | 33 pages, accepted author's manuscript | Forest Ecology and Management 304 (2013): pp. 89-98 | 10.1016/j.foreco.2013.04.020 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The indigenous oak species (Quercus spp.) of the Upper Rhine Valley have
migrated to their current distribution range in the area after the transition
to the Holocene interglacial. Since post-glacial recolonization, they have been
subjected to ecological changes and human impact. By using chloroplast
microsatellite markers (cpSSRs), we provide detailed phylogeographic
information and we address the contribution of natural and human-related
factors to the current pattern of chloroplast DNA (cpDNA) variation. 626
individual trees from 86 oak stands including all three indigenous oak species
of the region were sampled. In order to verify the refugial origin, reference
samples from refugial areas and DNA samples from previous studies with known
cpDNA haplotypes (chlorotypes) were used. Chlorotypes belonging to three
different maternal lineages, corresponding to the three main glacial refugia,
were found in the area. These were spatially structured and highly introgressed
among species, reflecting past hybridization which involved all three
indigenous oak species. Site condition heterogeneity was found among groups of
populations which differed in terms of cpDNA variation. This suggests that
different biogeographic subregions within the Upper Rhine Valley were colonized
during separate post-glacial migration waves. Genetic variation was higher in
Quercus robur than in Quercus petraea, which is probably due to more efficient
seed dispersal and the more pronounced pioneer character of the former species.
Finally, stands of Q. robur established in the last 70 years were significantly
more diverse, which can be explained by the improved transportation ability of
seeds and seedlings for artificial regeneration of stands during this period.
| [
{
"created": "Mon, 10 Jun 2013 07:43:44 GMT",
"version": "v1"
}
] | 2013-06-11 | [
[
"Neophytou",
"Charalambos",
""
],
[
"Michiels",
"Hans-Gerhard",
""
]
] | The indigenous oak species (Quercus spp.) of the Upper Rhine Valley have migrated to their current distribution range in the area after the transition to the Holocene interglacial. Since post-glacial recolonization, they have been subjected to ecological changes and human impact. By using chloroplast microsatellite markers (cpSSRs), we provide detailed phylogeographic information and we address the contribution of natural and human-related factors to the current pattern of chloroplast DNA (cpDNA) variation. 626 individual trees from 86 oak stands including all three indigenous oak species of the region were sampled. In order to verify the refugial origin, reference samples from refugial areas and DNA samples from previous studies with known cpDNA haplotypes (chlorotypes) were used. Chlorotypes belonging to three different maternal lineages, corresponding to the three main glacial refugia, were found in the area. These were spatially structured and highly introgressed among species, reflecting past hybridization which involved all three indigenous oak species. Site condition heterogeneity was found among groups of populations which differed in terms of cpDNA variation. This suggests that different biogeographic subregions within the Upper Rhine Valley were colonized during separate post-glacial migration waves. Genetic variation was higher in Quercus robur than in Quercus petraea, which is probably due to more efficient seed dispersal and the more pronounced pioneer character of the former species. Finally, stands of Q. robur established in the last 70 years were significantly more diverse, which can be explained by the improved transportation ability of seeds and seedlings for artificial regeneration of stands during this period. |
1411.2548 | Yong Kong | Yong Kong | Length distribution of sequencing by synthesis: fixed flow cycle model | 27 pages, 5 figures | Journal of mathematical biology 67 (2), 389-410, 2013 | 10.1007/s00285-012-0556-3 | null | q-bio.GN cs.DM math.CO stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequencing by synthesis is the underlying technology for many next-generation
DNA sequencing platforms. We developed a new model, the fixed flow cycle model,
to derive the distributions of sequence length for a given number of flow
cycles under the general conditions where the nucleotide incorporation is
probabilistic and may be incomplete, as in some single-molecule sequencing
technologies. Unlike the previous model, the new model yields the probability
distribution for the sequence length. Explicit closed form formulas are derived
for the mean and variance of the distribution.
| [
{
"created": "Thu, 6 Nov 2014 03:04:16 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Kong",
"Yong",
""
]
] | Sequencing by synthesis is the underlying technology for many next-generation DNA sequencing platforms. We developed a new model, the fixed flow cycle model, to derive the distributions of sequence length for a given number of flow cycles under the general conditions where the nucleotide incorporation is probabilistic and may be incomplete, as in some single-molecule sequencing technologies. Unlike the previous model, the new model yields the probability distribution for the sequence length. Explicit closed form formulas are derived for the mean and variance of the distribution. |
1705.11024 | J\'ozsef Z. Farkas | J\'ozsef Z. Farkas | Net reproduction functions for nonlinear structured population models | To appear in Mathematical Modelling of Natural Phenomena | Mathematical Modelling of Natural Phenomena 13 (2018) | 10.1051/mmnp/2018036 | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this note is to present a general approach to define the net
reproduction function for a large class of nonlinear physiologically structured
population models. In particular, we are going to show that this can be
achieved in a natural way by reformulating a nonlinear problem as a family of
linear ones; each of the linear problems describing the evolution of the
population in a different, but constant environment. The reformulation of a
nonlinear population model as a family of linear ones is a new approach, and
provides an elegant way to study qualitative questions, for example the
existence of positive steady states. To define the net reproduction number for
any fixed (constant) environment, i.e. for the linear models, we use a fairly
recent spectral theoretic result, which characterizes the connection between
the spectral bound of an unbounded operator and the spectral radius of a
corresponding bounded operator. For nonlinear models, varying the environment
naturally leads to a net reproduction function.
| [
{
"created": "Wed, 31 May 2017 10:31:54 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2018 09:06:01 GMT",
"version": "v2"
}
] | 2019-03-06 | [
[
"Farkas",
"József Z.",
""
]
] | The goal of this note is to present a general approach to define the net reproduction function for a large class of nonlinear physiologically structured population models. In particular, we are going to show that this can be achieved in a natural way by reformulating a nonlinear problem as a family of linear ones; each of the linear problems describing the evolution of the population in a different, but constant environment. The reformulation of a nonlinear population model as a family of linear ones is a new approach, and provides an elegant way to study qualitative questions, for example the existence of positive steady states. To define the net reproduction number for any fixed (constant) environment, i.e. for the linear models, we use a fairly recent spectral theoretic result, which characterizes the connection between the spectral bound of an unbounded operator and the spectral radius of a corresponding bounded operator. For nonlinear models, varying the environment naturally leads to a net reproduction function. |
2207.01930 | Martina Conte | Martina Conte and Nadia Loy | A non-local kinetic model for cell migration: a study of the interplay
between contact guidance and steric hindrance | null | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | We propose a non-local model for contact guidance and steric hindrance
depending on a single external cue, namely the extracellular matrix, that
affects in a twofold way the polarization and speed of motion of the cells. We
start from a microscopic description of the stochastic processes underlying the
cell re-orientation mechanism related to the change of cell speed and
direction. Then, we formally derive the corresponding kinetic model that
implements exactly the prescribed microscopic dynamics and, from it, it is
possible to deduce the macroscopic limit in the appropriate regime. Moreover,
we test our model in several scenarios. In particular, we numerically
investigate the minimal microscopic mechanisms that are necessary to reproduce
cell dynamics by comparing the outcomes of our model with some experimental
results related to breast cancer cell migration. This allows us to validate the
proposed modeling approach and, also, to highlight its capability of predicting
the qualitative cell behaviors in diverse heterogeneous microenvironments.
| [
{
"created": "Tue, 5 Jul 2022 10:06:57 GMT",
"version": "v1"
}
] | 2022-07-06 | [
[
"Conte",
"Martina",
""
],
[
"Loy",
"Nadia",
""
]
] | We propose a non-local model for contact guidance and steric hindrance depending on a single external cue, namely the extracellular matrix, that affects in a twofold way the polarization and speed of motion of the cells. We start from a microscopic description of the stochastic processes underlying the cell re-orientation mechanism related to the change of cell speed and direction. Then, we formally derive the corresponding kinetic model that implements exactly the prescribed microscopic dynamics and, from it, it is possible to deduce the macroscopic limit in the appropriate regime. Moreover, we test our model in several scenarios. In particular, we numerically investigate the minimal microscopic mechanisms that are necessary to reproduce cell dynamics by comparing the outcomes of our model with some experimental results related to breast cancer cell migration. This allows us to validate the proposed modeling approach and, also, to highlight its capability of predicting the qualitative cell behaviors in diverse heterogeneous microenvironments. |
1606.02859 | Ovidiu Radulescu | Sergei Vakulenko, Ivan Morozov and Ovidiu Radulescu | Maximal switchability of centralized networks | null | null | 10.1088/0951-7715/29/8/2327 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider continuous time Hopfield-like recurrent networks as dynamical
models for gene regulation and neural networks. We are interested in networks
that contain n high-degree nodes preferably connected to a large number of Ns
weakly connected satellites, a property that we call n/Ns-centrality. If the
hub dynamics is slow, we obtain that the large time network dynamics is
completely defined by the hub dynamics. Moreover, such networks are maximally
flexible and switchable, in the sense that they can switch from a globally
attractive rest state to any structurally stable dynamics when the response
time of a special controller hub is changed. In particular, we show that a
decrease of the controller hub response time can lead to a sharp variation in
the network attractor structure: we can obtain a set of new local attractors,
whose number can increase exponentially with N, the total number of nodes of
the network. These new attractors can be periodic or even chaotic. We provide an
algorithm, which allows us to design networks with the desired switching
properties, or to learn them from time series, by adjusting the interactions
between hubs and satellites. Such switchable networks could be used as models
for context dependent adaptation in functional genetics or as models for
cognitive functions in neuroscience.
| [
{
"created": "Thu, 9 Jun 2016 08:19:27 GMT",
"version": "v1"
}
] | 2016-08-03 | [
[
"Vakulenko",
"Sergei",
""
],
[
"Morozov",
"Ivan",
""
],
[
"Radulescu",
"Ovidiu",
""
]
] | We consider continuous time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of Ns weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, we obtain that the large time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm, which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context dependent adaptation in functional genetics or as models for cognitive functions in neuroscience. |
1901.09984 | Alexis Nangue | Alexis Nangue | Global Stability analysis of a cellular model of Hepatitis C Virus
infection under treatment | 17 Pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the global analysis of an HCV model under therapy.
We prove that the solutions with positive initial values are global, positive,
bounded and do not display periodic orbits. In addition, we show that the model is
globally asymptotically stable, by using appropriate Lyapunov functions.
| [
{
"created": "Mon, 28 Jan 2019 20:17:43 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2019 20:56:59 GMT",
"version": "v2"
}
] | 2019-05-09 | [
[
"Nangue",
"Alexis",
""
]
] | In this paper, we present the global analysis of an HCV model under therapy. We prove that the solutions with positive initial values are global, positive, bounded and do not display periodic orbits. In addition, we show that the model is globally asymptotically stable, by using appropriate Lyapunov functions. |
2304.01350 | Bradly Alicea | Bradly Alicea | Super-performance: sampling, planning, and ecological information | 15 pages, 4 figures, 1 table; Proceedings of BICA 2023 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The connection between active perception and the limits of performance
provides a path to understanding naturalistic behavior. We can take a
comparative cognitive modeling perspective to understand the limits of this
performance and the existence of superperformance. We will discuss two
categories that are hypothesized to originate in terms of coevolutionary
relationships and evolutionary tradeoffs: supersamplers and superplanners.
Supersamplers take snapshots of their sensory world at a very high sampling
rate. Examples include flies (vision) and frogs (audition) with ecological
specializations. Superplanners internally store information to evaluate and act
upon multiple features of spatiotemporal environments. Slow lorises and turtles
provide examples of superplanning capabilities. The Gibsonian Information (GI)
paradigm is used to evaluate sensory sampling and planning with respect to
direct perception and its role in capturing environmental information content.
By contrast, superplanners utilize internal models of the environment to
compensate for normal rates of sensory sampling, and this relationship often
exists as a sampling/planning tradeoff. Supersamplers and superplanners can
exist in adversarial relationships, or longer-term as coevolutionary
relationships. Moreover, the tradeoff between sampling and planning capacity
can break down, providing relativistic regimes. We can apply the principles of
superperformance to human augmentation technologies.
| [
{
"created": "Wed, 22 Mar 2023 14:33:57 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jul 2023 18:18:37 GMT",
"version": "v2"
}
] | 2023-07-04 | [
[
"Alicea",
"Bradly",
""
]
] | The connection between active perception and the limits of performance provides a path to understanding naturalistic behavior. We can take a comparative cognitive modeling perspective to understand the limits of this performance and the existence of superperformance. We will discuss two categories that are hypothesized to originate in terms of coevolutionary relationships and evolutionary tradeoffs: supersamplers and superplanners. Supersamplers take snapshots of their sensory world at a very high sampling rate. Examples include flies (vision) and frogs (audition) with ecological specializations. Superplanners internally store information to evaluate and act upon multiple features of spatiotemporal environments. Slow lorises and turtles provide examples of superplanning capabilities. The Gibsonian Information (GI) paradigm is used to evaluate sensory sampling and planning with respect to direct perception and its role in capturing environmental information content. By contrast, superplanners utilize internal models of the environment to compensate for normal rates of sensory sampling, and this relationship often exists as a sampling/planning tradeoff. Supersamplers and superplanners can exist in adversarial relationships, or longer-term as coevolutionary relationships. Moreover, the tradeoff between sampling and planning capacity can break down, providing relativistic regimes. We can apply the principles of superperformance to human augmentation technologies. |
1307.7342 | Tomasz Rutkowski | Hiromu Mori, Shoji Makino, and Tomasz M. Rutkowski | Multi-command Chest Tactile Brain Computer Interface for Small Vehicle
Robot Navigation | accepted as a full paper for The 2013 International Conference on
Brain and Health Informatics; to appear in Lecture Notes in Computer Science
(LNCS), Springer Verlag Berlin Heidelberg, 2013; http://link.springer.com/ | null | null | null | q-bio.NC cs.HC cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presented study explores the extent to which tactile stimuli delivered to
five chest positions of a healthy user can serve as a platform for a brain
computer interface (BCI) that could be used in an interactive application such
as robotic vehicle operation. The five chest locations are used to evoke
tactile brain potential responses, thus defining a tactile brain computer
interface (tBCI). Experimental results with five subjects performing online
tBCI provide a validation of the chest location tBCI paradigm, while the
feasibility of the concept is illuminated through information-transfer rates.
Additionally an offline classification improvement with a linear SVM classifier
is presented through the case study.
| [
{
"created": "Sun, 28 Jul 2013 08:24:20 GMT",
"version": "v1"
}
] | 2013-07-30 | [
[
"Mori",
"Hiromu",
""
],
[
"Makino",
"Shoji",
""
],
[
"Rutkowski",
"Tomasz M.",
""
]
] | The presented study explores the extent to which tactile stimuli delivered to five chest positions of a healthy user can serve as a platform for a brain computer interface (BCI) that could be used in an interactive application such as robotic vehicle operation. The five chest locations are used to evoke tactile brain potential responses, thus defining a tactile brain computer interface (tBCI). Experimental results with five subjects performing online tBCI provide a validation of the chest location tBCI paradigm, while the feasibility of the concept is illuminated through information-transfer rates. Additionally an offline classification improvement with a linear SVM classifier is presented through the case study. |
2212.01495 | Wenhao Lin | Jiahao Li, Zhourun Wu, Wenhao Lin, Jiawei Luo, Jun Zhang, Qingcai Chen
and Junjie Chen | iEnhancer-ELM: improve enhancer identification by extracting
position-related multiscale contextual information based on enhancer language
models | 8 pages, 5 figures. It is a new accepted version | null | 10.1093/bioadv/vbad043 | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Enhancers are important cis-regulatory elements that regulate a
wide range of biological functions and enhance the transcription of target
genes. Although many feature extraction methods have been proposed to improve
the performance of enhancer identification, they cannot learn position-related
multiscale contextual information from raw DNA sequences.
Results: In this article, we propose a novel enhancer identification method
(iEnhancer-ELM) based on BERT-like enhancer language models. iEnhancer-ELM
tokenizes DNA sequences with multi-scale k-mers and extracts contextual
information of different scale k-mers related to their positions via a
multi-head attention mechanism. We first evaluate the performance of different
scale k-mers, then ensemble them to improve the performance of enhancer
identification. The experimental results on two popular benchmark datasets show
that our model outperforms state-of-the-art methods. We further illustrate the
interpretability of iEnhancer-ELM. For a case study, we discover 30 enhancer
motifs via a 3-mer-based model, where 12 of the motifs are verified by STREME
and JASPAR, demonstrating that our model has the potential to unveil the
biological mechanism of enhancers.
Availability and implementation: The models and associated code are available
at https://github.com/chen-bioinfo/iEnhancer-ELM
Contact: junjiechen@hit.edu.cn
Supplementary information: Supplementary data are available at Bioinformatics
Advances online.
| [
{
"created": "Sat, 3 Dec 2022 00:50:51 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jul 2023 13:48:33 GMT",
"version": "v2"
}
] | 2023-07-18 | [
[
"Li",
"Jiahao",
""
],
[
"Wu",
"Zhourun",
""
],
[
"Lin",
"Wenhao",
""
],
[
"Luo",
"Jiawei",
""
],
[
"Zhang",
"Jun",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Chen",
"Junjie",
""
]
] | Motivation: Enhancers are important cis-regulatory elements that regulate a wide range of biological functions and enhance the transcription of target genes. Although many feature extraction methods have been proposed to improve the performance of enhancer identification, they cannot learn position-related multiscale contextual information from raw DNA sequences. Results: In this article, we propose a novel enhancer identification method (iEnhancer-ELM) based on BERT-like enhancer language models. iEnhancer-ELM tokenizes DNA sequences with multi-scale k-mers and extracts contextual information of different scale k-mers related to their positions via a multi-head attention mechanism. We first evaluate the performance of different scale k-mers, then ensemble them to improve the performance of enhancer identification. The experimental results on two popular benchmark datasets show that our model outperforms state-of-the-art methods. We further illustrate the interpretability of iEnhancer-ELM. For a case study, we discover 30 enhancer motifs via a 3-mer-based model, where 12 of the motifs are verified by STREME and JASPAR, demonstrating that our model has the potential to unveil the biological mechanism of enhancers. Availability and implementation: The models and associated code are available at https://github.com/chen-bioinfo/iEnhancer-ELM Contact: junjiechen@hit.edu.cn Supplementary information: Supplementary data are available at Bioinformatics Advances online. |
2103.15356 | Rami Pugatch | Anjan Roy, Dotan Goberman, Rami Pugatch | Transcription-translation machinery -- an autocatalytic network coupling
all cellular cycles and generating a plethora of growth laws | 12 pages, 8 figures. Version 3, few more typos corrected, text
further streamlined, corrected punctuations and minor mistakes in figure
captions and main text, reduce white margin in figure 4 | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently discovered simple quantitative relations, known as bacterial growth
laws, hint at the existence of simple underlying principles at the heart of
bacterial growth. In this work, we provide a unifying picture of how these
known relations, as well as new relations that we derive, stem from a
universal autocatalytic network common to all bacteria, facilitating balanced
exponential growth of individual cells. We show that the core of the cellular
autocatalytic network is the transcription -- translation machinery -- in
itself an autocatalytic network comprising several coupled autocatalytic
cycles, including the ribosome, RNA polymerase, and tRNA charging cycles. We
derive two types of growth laws per autocatalytic cycle, one relating growth
rate to the relative fraction of the catalyst and its catalysis rate, and the
other relating growth rate to all the time scales in the cycle. The structure
of the autocatalytic network generates numerous regimes in state space,
determined by the limiting components, while the number of growth laws can be
much smaller. We also derive a growth law that accounts for the RNA polymerase
autocatalytic cycle, which we use to explain how growth rate depends on the
inducible expression of the rpoB and rpoC genes, which code for the RpoB and C
protein subunits of RNA polymerase, and how the concentration of rifampicin,
which targets RNA polymerase, affects growth rate without changing the
RNA-to-protein ratio. We derive growth laws for tRNA synthesis and charging,
and predict how growth rate depends on temperature, perturbation to ribosome
assembly, and membrane synthesis.
| [
{
"created": "Mon, 29 Mar 2021 06:18:49 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 09:46:32 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Apr 2021 11:59:53 GMT",
"version": "v3"
}
] | 2021-04-27 | [
[
"Roy",
"Anjan",
""
],
[
"Goberman",
"Dotan",
""
],
[
"Pugatch",
"Rami",
""
]
] | Recently discovered simple quantitative relations, known as bacterial growth laws, hint at the existence of simple underlying principles at the heart of bacterial growth. In this work, we provide a unifying picture of how these known relations, as well as new relations that we derive, stem from a universal autocatalytic network common to all bacteria, facilitating balanced exponential growth of individual cells. We show that the core of the cellular autocatalytic network is the transcription -- translation machinery -- in itself an autocatalytic network comprising several coupled autocatalytic cycles, including the ribosome, RNA polymerase, and tRNA charging cycles. We derive two types of growth laws per autocatalytic cycle, one relating growth rate to the relative fraction of the catalyst and its catalysis rate, and the other relating growth rate to all the time scales in the cycle. The structure of the autocatalytic network generates numerous regimes in state space, determined by the limiting components, while the number of growth laws can be much smaller. We also derive a growth law that accounts for the RNA polymerase autocatalytic cycle, which we use to explain how growth rate depends on the inducible expression of the rpoB and rpoC genes, which code for the RpoB and C protein subunits of RNA polymerase, and how the concentration of rifampicin, which targets RNA polymerase, affects growth rate without changing the RNA-to-protein ratio. We derive growth laws for tRNA synthesis and charging, and predict how growth rate depends on temperature, perturbation to ribosome assembly, and membrane synthesis. |
1803.08362 | Jahan Schad | Jahan N. Schad | Consciousness: From the Perspective of the Dynamical Systems Theory | 16 Pages, 1 Figure | J Neurol Stroke. 2019; 9 (3): 133–138 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beings, animate or inanimate, are dynamical systems which continuously
interact with the (external and/or internal) environment through the physical
or physiologic interfaces of their Kantian (representational) realities. And
the nature of their interactions is determined by the inner workings of their
systems. It is from this perspective that this work attempts to address some of
the long-held philosophical questions, major one among them consciousness-- in
the context of the physicality of the dynamic systems.
| [
{
"created": "Sun, 4 Mar 2018 17:10:03 GMT",
"version": "v1"
}
] | 2019-09-03 | [
[
"Schad",
"Jahan N.",
""
]
] | Beings, animate or inanimate, are dynamical systems which continuously interact with the (external and/or internal) environment through the physical or physiologic interfaces of their Kantian (representational) realities. And the nature of their interactions is determined by the inner workings of their systems. It is from this perspective that this work attempts to address some of the long-held philosophical questions, major one among them consciousness-- in the context of the physicality of the dynamic systems. |
1706.01787 | Pan-Jun Kim | Jaeyun Sung, Seunghyeon Kim, Josephine Jill T. Cabatbat, Sungho Jang,
Yong-Su Jin, Gyoo Yeol Jung, Nicholas Chia, Pan-Jun Kim | Global metabolic interaction network of the human gut microbiota for
context-specific community-scale analysis | Supplementary material is available at the journal website | Nat. Commun. 8, 15393 (2017) | 10.1038/ncomms15393 | null | q-bio.MN physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A system-level framework of complex microbe-microbe and host-microbe chemical
cross-talk would help elucidate the role of our gut microbiota in health and
disease. Here we report a literature-curated interspecies network of the human
gut microbiota, called NJS16. This is an extensive data resource composed of
~570 microbial species and 3 human cell types metabolically interacting through
>4,400 small-molecule transport and macromolecule degradation events. Based on
the contents of our network, we develop a mathematical approach to elucidate
representative microbial and metabolic features of the gut microbial community
in a given population, such as a disease cohort. Applying this strategy to
microbiome data from type 2 diabetes patients reveals a context-specific
infrastructure of the gut microbial ecosystem, core microbial entities with
large metabolic influence, and frequently-produced metabolic compounds that
might indicate relevant community metabolic processes. Our network presents a
foundation towards integrative investigations of community-scale microbial
activities within the human gut.
| [
{
"created": "Tue, 6 Jun 2017 14:32:25 GMT",
"version": "v1"
}
] | 2017-06-07 | [
[
"Sung",
"Jaeyun",
""
],
[
"Kim",
"Seunghyeon",
""
],
[
"Cabatbat",
"Josephine Jill T.",
""
],
[
"Jang",
"Sungho",
""
],
[
"Jin",
"Yong-Su",
""
],
[
"Jung",
"Gyoo Yeol",
""
],
[
"Chia",
"Nicholas",
""
],
... | A system-level framework of complex microbe-microbe and host-microbe chemical cross-talk would help elucidate the role of our gut microbiota in health and disease. Here we report a literature-curated interspecies network of the human gut microbiota, called NJS16. This is an extensive data resource composed of ~570 microbial species and 3 human cell types metabolically interacting through >4,400 small-molecule transport and macromolecule degradation events. Based on the contents of our network, we develop a mathematical approach to elucidate representative microbial and metabolic features of the gut microbial community in a given population, such as a disease cohort. Applying this strategy to microbiome data from type 2 diabetes patients reveals a context-specific infrastructure of the gut microbial ecosystem, core microbial entities with large metabolic influence, and frequently-produced metabolic compounds that might indicate relevant community metabolic processes. Our network presents a foundation towards integrative investigations of community-scale microbial activities within the human gut. |
1501.03782 | John Herrick | Bianca Sclavi and John Herrick | Ecological patterns of genome size variation and the origin of species
in salamanders | 21 Pages, 5 figures, 1 supplementary figure | null | null | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Salamanders (urodela) have among the largest vertebrate genomes, ranging in
size from 10 to over 80 pg. The urodela are divided into ten extant families
each with a characteristic range in genome size. Although changes in genome
size often occur randomly and in the absence of selection pressure, non-random
patterns of genome size variation are evident among specific vertebrate
lineages. Here we report that genome size in salamander families varies
inversely with species richness and other ecological factors: clades that began
radiating earlier (older crown age) tend to have smaller genomes, higher levels
of diversity and larger geographical ranges. These observations support the
hypothesis that urodele families with larger genomes either have a lower
propensity to diversify or are more vulnerable to extinction than families with
smaller genomes.
| [
{
"created": "Thu, 15 Jan 2015 19:21:50 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Mar 2015 17:02:11 GMT",
"version": "v2"
}
] | 2015-03-10 | [
[
"Sclavi",
"Bianca",
""
],
[
"Herrick",
"John",
""
]
] | Salamanders (urodela) have among the largest vertebrate genomes, ranging in size from 10 to over 80 pg. The urodela are divided into ten extant families each with a characteristic range in genome size. Although changes in genome size often occur randomly and in the absence of selection pressure, non-random patterns of genome size variation are evident among specific vertebrate lineages. Here we report that genome size in salamander families varies inversely with species richness and other ecological factors: clades that began radiating earlier (older crown age) tend to have smaller genomes, higher levels of diversity and larger geographical ranges. These observations support the hypothesis that urodele families with larger genomes either have a lower propensity to diversify or are more vulnerable to extinction than families with smaller genomes. |
2003.10376 | Andrew Stier | Andrew J. Stier, Marc G. Berman, Luis M. A. Bettencourt | COVID-19 attack rate increases with city size | null | npj Urban Sustain 1, 31 (2021) | 10.1038/s42949-021-00030-0 | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The current outbreak of novel coronavirus disease 2019 (COVID-19) poses an
unprecedented global health and economic threat to interconnected human
societies. Until a vaccine is developed, strategies for controlling the
outbreak rely on aggressive social distancing. These measures largely
disconnect the social network fabric of human societies, especially in urban
areas. Here, we estimate the growth rates and reproductive numbers of COVID-19
in US cities from March 14th through March 19th to reveal a power-law scaling
relationship to city population size. This means that COVID-19 is spreading
faster on average in larger cities with the additional implication that, in an
uncontrolled outbreak, larger fractions of the population are expected to
become infected in more populous urban areas. We discuss the implications of
these observations for controlling the COVID-19 outbreak, emphasizing the need
to implement more aggressive distancing policies in larger cities while also
preserving socioeconomic activity.
| [
{
"created": "Mon, 23 Mar 2020 16:40:58 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Mar 2020 16:45:15 GMT",
"version": "v2"
}
] | 2022-08-24 | [
[
"Stier",
"Andrew J.",
""
],
[
"Berman",
"Marc G.",
""
],
[
"Bettencourt",
"Luis M. A.",
""
]
] | The current outbreak of novel coronavirus disease 2019 (COVID-19) poses an unprecedented global health and economic threat to interconnected human societies. Until a vaccine is developed, strategies for controlling the outbreak rely on aggressive social distancing. These measures largely disconnect the social network fabric of human societies, especially in urban areas. Here, we estimate the growth rates and reproductive numbers of COVID-19 in US cities from March 14th through March 19th to reveal a power-law scaling relationship to city population size. This means that COVID-19 is spreading faster on average in larger cities with the additional implication that, in an uncontrolled outbreak, larger fractions of the population are expected to become infected in more populous urban areas. We discuss the implications of these observations for controlling the COVID-19 outbreak, emphasizing the need to implement more aggressive distancing policies in larger cities while also preserving socioeconomic activity. |
0902.3906 | Simone Pigolotti | Simone Pigolotti, Massimo Cencini | Speciation-rate dependence in species-area relationships | 17 pages, 5 figures | J. Theo. Biol. 260, 83-89 (2009) | 10.1016/j.jtbi.2009.05.023 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The general tendency for species number (S) to increase with sampled area (A)
constitutes one of the most robust empirical laws of ecology, quantified by
species-area relationships (SAR). In many ecosystems, SAR curves display a
power-law dependence, $S\propto A^z$. The exponent $z$ is always less than one
but shows significant variation in different ecosystems. We study the multitype
voter model as one of the simplest models able to reproduce SAR similar to
those observed in real ecosystems in terms of basic ecological processes such
as birth, dispersal and speciation. Within the model, the species-area exponent
$z$ depends on the dimensionless speciation rate $\nu$, even though the
detailed dependence is still matter of controversy. We present extensive
numerical simulations in a broad range of speciation rates from $\nu =10^{-3}$
down to $\nu = 10^{-11}$, where the model reproduces values of the exponent
observed in nature. In particular, we show that the inverse of the species-area
exponent linearly depends on the logarithm of $\nu$. Further, we compare the
model outcomes with field data collected from previous studies, for which we
separate the effect of the speciation rate from that of the different species
lifespans. We find a good linear relationship between inverse exponents and
logarithm of species lifespans. However, the slope sets bounds on the
speciation rates that can hardly be justified on evolutionary basis, suggesting
that additional effects should be taken into account to consistently interpret
the observed exponents.
| [
{
"created": "Mon, 23 Feb 2009 13:34:04 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Aug 2009 12:00:55 GMT",
"version": "v2"
}
] | 2009-08-04 | [
[
"Pigolotti",
"Simone",
""
],
[
"Cencini",
"Massimo",
""
]
] | The general tendency for species number (S) to increase with sampled area (A) constitutes one of the most robust empirical laws of ecology, quantified by species-area relationships (SAR). In many ecosystems, SAR curves display a power-law dependence, $S\propto A^z$. The exponent $z$ is always less than one but shows significant variation in different ecosystems. We study the multitype voter model as one of the simplest models able to reproduce SAR similar to those observed in real ecosystems in terms of basic ecological processes such as birth, dispersal and speciation. Within the model, the species-area exponent $z$ depends on the dimensionless speciation rate $\nu$, even though the detailed dependence is still matter of controversy. We present extensive numerical simulations in a broad range of speciation rates from $\nu =10^{-3}$ down to $\nu = 10^{-11}$, where the model reproduces values of the exponent observed in nature. In particular, we show that the inverse of the species-area exponent linearly depends on the logarithm of $\nu$. Further, we compare the model outcomes with field data collected from previous studies, for which we separate the effect of the speciation rate from that of the different species lifespans. We find a good linear relationship between inverse exponents and logarithm of species lifespans. However, the slope sets bounds on the speciation rates that can hardly be justified on evolutionary basis, suggesting that additional effects should be taken into account to consistently interpret the observed exponents. |
1807.09728 | Antonio Carlos Borges Santos Da Costa | Antonio Carlos Costa, Tosif Ahamed and Greg J. Stephens | Adaptive, locally-linear models of complex dynamics | 25 pages, 16 figures | Proceedings of the National Academy of Sciences Jan 2019, 116 (5)
1501-1510 | 10.1073/pnas.1813476116 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The dynamics of complex systems generally include high-dimensional,
non-stationary and non-linear behavior, all of which pose fundamental
challenges to quantitative understanding. To address these difficulties we
detail a new approach based on local linear models within windows determined
adaptively from the data. While the dynamics within each window are simple,
consisting of exponential decay, growth and oscillations, the collection of
local parameters across all windows provides a principled characterization of
the full time series. To explore the resulting model space, we develop a novel
likelihood-based hierarchical clustering and we examine the eigenvalues of the
linear dynamics. We demonstrate our analysis with the Lorenz system undergoing
stable spiral dynamics and in the standard chaotic regime. Applied to the
posture dynamics of the nematode $C. elegans$ our approach identifies
fine-grained behavioral states and model dynamics which fluctuate close to an
instability boundary, and we detail a bifurcation in a transition from forward
to backward crawling. Finally, we analyze whole-brain imaging in $C. elegans$
and show that the stability of global brain states changes with oxygen
concentration.
| [
{
"created": "Wed, 25 Jul 2018 17:20:03 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Sep 2020 12:41:13 GMT",
"version": "v2"
}
] | 2020-09-11 | [
[
"Costa",
"Antonio Carlos",
""
],
[
"Ahamed",
"Tosif",
""
],
[
"Stephens",
"Greg J.",
""
]
] | The dynamics of complex systems generally include high-dimensional, non-stationary and non-linear behavior, all of which pose fundamental challenges to quantitative understanding. To address these difficulties we detail a new approach based on local linear models within windows determined adaptively from the data. While the dynamics within each window are simple, consisting of exponential decay, growth and oscillations, the collection of local parameters across all windows provides a principled characterization of the full time series. To explore the resulting model space, we develop a novel likelihood-based hierarchical clustering and we examine the eigenvalues of the linear dynamics. We demonstrate our analysis with the Lorenz system undergoing stable spiral dynamics and in the standard chaotic regime. Applied to the posture dynamics of the nematode $C. elegans$ our approach identifies fine-grained behavioral states and model dynamics which fluctuate close to an instability boundary, and we detail a bifurcation in a transition from forward to backward crawling. Finally, we analyze whole-brain imaging in $C. elegans$ and show that the stability of global brain states changes with oxygen concentration. |
2403.08439 | Ching-En Chiu | Ching-En Chiu, Arieh Levy Pinto, Rasheda A Chowdhury, Kim Christensen,
Marta Varela | Characterisation of Anti-Arrhythmic Drug Effects on Cardiac
Electrophysiology using Physics-Informed Neural Networks | Accepted for publication in the 21st IEEE International Symposium on
Biomedical Imaging 2024 | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The ability to accurately infer cardiac electrophysiological (EP) properties
is key to improving arrhythmia diagnosis and treatment. In this work, we
developed a physics-informed neural networks (PINNs) framework to predict how
different myocardial EP parameters are modulated by anti-arrhythmic drugs.
Using $\textit{in vitro}$ optical mapping images and the 3-channel Fenton-Karma
model, we estimated the changes in ionic channel conductance caused by these
drugs.
Our framework successfully characterised the action of drugs HMR1556,
nifedipine and lidocaine - respectively, blockade of $I_{K}$, $I_{Ca}$, and
$I_{Na}$ currents - by estimating that they decreased the respective channel
conductance by $31.8\pm2.7\%$ $(p=8.2 \times 10^{-5})$, $80.9\pm21.6\%$
$(p=0.02)$, and $8.6\pm0.5\%$ $ (p=0.03)$, leaving the conductance of other
channels unchanged. For carbenoxolone, whose main action is the blockade of
intercellular gap junctions, PINNs also successfully predicted no significant
changes $(p>0.09)$ in all ionic conductances.
Our results are an important step towards the deployment of PINNs for model
parameter estimation from experimental data, bringing this framework closer to
clinical or laboratory image analysis and for the personalisation of
mathematical models.
| [
{
"created": "Wed, 13 Mar 2024 11:46:38 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Chiu",
"Ching-En",
""
],
[
"Pinto",
"Arieh Levy",
""
],
[
"Chowdhury",
"Rasheda A",
""
],
[
"Christensen",
"Kim",
""
],
[
"Varela",
"Marta",
""
]
] | The ability to accurately infer cardiac electrophysiological (EP) properties is key to improving arrhythmia diagnosis and treatment. In this work, we developed a physics-informed neural networks (PINNs) framework to predict how different myocardial EP parameters are modulated by anti-arrhythmic drugs. Using $\textit{in vitro}$ optical mapping images and the 3-channel Fenton-Karma model, we estimated the changes in ionic channel conductance caused by these drugs. Our framework successfully characterised the action of drugs HMR1556, nifedipine and lidocaine - respectively, blockade of $I_{K}$, $I_{Ca}$, and $I_{Na}$ currents - by estimating that they decreased the respective channel conductance by $31.8\pm2.7\%$ $(p=8.2 \times 10^{-5})$, $80.9\pm21.6\%$ $(p=0.02)$, and $8.6\pm0.5\%$ $ (p=0.03)$, leaving the conductance of other channels unchanged. For carbenoxolone, whose main action is the blockade of intercellular gap junctions, PINNs also successfully predicted no significant changes $(p>0.09)$ in all ionic conductances. Our results are an important step towards the deployment of PINNs for model parameter estimation from experimental data, bringing this framework closer to clinical or laboratory image analysis and for the personalisation of mathematical models. |
1212.4450 | Mike Steel Prof. | Mike Steel, Wim Hordijk and Joshua Smith | Minimal autocatalytic networks | 28 pages, 6 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-sustaining autocatalytic chemical networks represent a necessary, though
not sufficient condition for the emergence of early living systems. These
networks have been formalised and investigated within the framework of RAF
theory, which has led to a number of insights and results concerning the
likelihood of such networks forming. In this paper, we extend this analysis by
focussing on how {\em small} autocatalytic networks are likely to be when they
first emerge. First we show that simulations are unlikely to settle this
question, by establishing that the problem of finding a smallest RAF within a
catalytic reaction system is NP-hard. However, irreducible RAFs (irrRAFs) can
be constructed in polynomial time, and we show it is possible to determine in
polynomial time whether a bounded size set of these irrRAFs contain the
smallest RAFs within a system. Moreover, we derive rigorous bounds on the sizes
of small RAFs and use simulations to sample irrRAFs under the binary polymer
model. We then apply mathematical arguments to prove a new result suggested by
those simulations: at the transition catalysis level at which RAFs first form
in this model, small RAFs are unlikely to be present. We also investigate
further the relationship between RAFs and another formal approach to
self-sustaining and closed chemical networks, namely chemical organisation
theory (COT).
| [
{
"created": "Tue, 18 Dec 2012 18:08:15 GMT",
"version": "v1"
}
] | 2012-12-19 | [
[
"Steel",
"Mike",
""
],
[
"Hordijk",
"Wim",
""
],
[
"Smith",
"Joshua",
""
]
] | Self-sustaining autocatalytic chemical networks represent a necessary, though not sufficient condition for the emergence of early living systems. These networks have been formalised and investigated within the framework of RAF theory, which has led to a number of insights and results concerning the likelihood of such networks forming. In this paper, we extend this analysis by focussing on how {\em small} autocatalytic networks are likely to be when they first emerge. First we show that simulations are unlikely to settle this question, by establishing that the problem of finding a smallest RAF within a catalytic reaction system is NP-hard. However, irreducible RAFs (irrRAFs) can be constructed in polynomial time, and we show it is possible to determine in polynomial time whether a bounded size set of these irrRAFs contain the smallest RAFs within a system. Moreover, we derive rigorous bounds on the sizes of small RAFs and use simulations to sample irrRAFs under the binary polymer model. We then apply mathematical arguments to prove a new result suggested by those simulations: at the transition catalysis level at which RAFs first form in this model, small RAFs are unlikely to be present. We also investigate further the relationship between RAFs and another formal approach to self-sustaining and closed chemical networks, namely chemical organisation theory (COT). |
1512.03970 | Sebastian Schreiber | Sebastian J. Schreiber | Coexistence in the face of uncertainty | To appear as a refereed chapter in "Recent Progress and Modern
Challenges in Applied Mathematics, Modeling and Computational Science" in the
Fields Institute Communication Series edited by Roderick Melnik, Roman
Makarov, and Jacques Belair | (2017) In: Melnik R., Makarov R., Belair J. (eds) Recent Progress
and Modern Challenges in Applied Mathematics, Modeling and Computational
Science. Fields Institute Communications, vol 79. Springer, New York, NY | 10.1007/978-1-4939-6969-2_12 | null | q-bio.PE math.DS math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past century, nonlinear difference and differential equations have
been used to understand conditions for species coexistence. However, these
models fail to account for random fluctuations due to demographic and
environmental stochasticity which are experienced by all populations. I review
some recent mathematical results about persistence and coexistence for models
accounting for each of these forms of stochasticity. Demographic stochasticity
stems from populations and communities consisting of a finite number of
interacting individuals, and often are represented by Markovian models with a
countable number of states. For closed populations in a bounded world,
extinction occurs in finite time but may be preceded by long-term transients.
Quasi-stationary distributions (QSDs) of these Markov models characterize this
meta-stable behavior. For sufficiently large "habitat sizes", QSDs are shown to
concentrate on the positive attractors of deterministic models. Moreover, the
probability of extinction decreases exponentially with habitat size.
Alternatively, environmental stochasticity stems from fluctuations in
environmental conditions which influence survival, growth, and reproduction.
Stochastic difference equations can be used to model the effects of
environmental stochasticity on population and community dynamics. For these
models, stochastic persistence corresponds to empirical measures placing
arbitrarily little weight on arbitrarily low population densities. Sufficient
and necessary conditions for stochastic persistence are reviewed. These
conditions involve weighted combinations of Lyapunov exponents corresponding to
"average" per-capita growth rates of rare species. The results are illustrated
with Bay checkerspot butterflies, coupled sink populations, the storage effect,
and stochastic rock-paper-scissor communities. Open problems and conjectures
are presented.
| [
{
"created": "Sat, 12 Dec 2015 23:01:49 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jun 2016 05:00:55 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Sep 2016 23:32:26 GMT",
"version": "v3"
}
] | 2019-02-12 | [
[
"Schreiber",
"Sebastian J.",
""
]
] | Over the past century, nonlinear difference and differential equations have been used to understand conditions for species coexistence. However, these models fail to account for random fluctuations due to demographic and environmental stochasticity which are experienced by all populations. I review some recent mathematical results about persistence and coexistence for models accounting for each of these forms of stochasticity. Demographic stochasticity stems from populations and communities consisting of a finite number of interacting individuals, and often are represented by Markovian models with a countable number of states. For closed populations in a bounded world, extinction occurs in finite time but may be preceded by long-term transients. Quasi-stationary distributions (QSDs) of these Markov models characterize this meta-stable behavior. For sufficiently large "habitat sizes", QSDs are shown to concentrate on the positive attractors of deterministic models. Moreover, the probability of extinction decreases exponentially with habitat size. Alternatively, environmental stochasticity stems from fluctuations in environmental conditions which influence survival, growth, and reproduction. Stochastic difference equations can be used to model the effects of environmental stochasticity on population and community dynamics. For these models, stochastic persistence corresponds to empirical measures placing arbitrarily little weight on arbitrarily low population densities. Sufficient and necessary conditions for stochastic persistence are reviewed. These conditions involve weighted combinations of Lyapunov exponents corresponding to "average" per-capita growth rates of rare species. The results are illustrated with Bay checkerspot butterflies, coupled sink populations, the storage effect, and stochastic rock-paper-scissor communities. Open problems and conjectures are presented. |
1507.01804 | Takaya Saito | Takaya Saito, P{\aa}l S{\ae}trom | MicroRNAs -- targeting and target prediction | A review paper on miRNA targets, 7 pages | N Biotechnol. 2010 Jul 31;27(3):243-9 | 10.1016/j.nbt.2010.02.016 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs (miRNAs) are a class of small noncoding RNAs that can regulate many
genes by base pairing to sites in mRNAs. The functionality of miRNAs overlaps
that of short interfering RNAs (siRNAs), and many features of miRNA targeting
have been revealed experimentally by studying miRNA-mimicking siRNAs. This
review outlines the features associated with animal miRNA targeting and
describes currently available prediction tools.
| [
{
"created": "Tue, 7 Jul 2015 13:42:24 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2021 13:12:06 GMT",
"version": "v2"
}
] | 2021-07-08 | [
[
"Saito",
"Takaya",
""
],
[
"Sætrom",
"Pål",
""
]
] | MicroRNAs (miRNAs) are a class of small noncoding RNAs that can regulate many genes by base pairing to sites in mRNAs. The functionality of miRNAs overlaps that of short interfering RNAs (siRNAs), and many features of miRNA targeting have been revealed experimentally by studying miRNA-mimicking siRNAs. This review outlines the features associated with animal miRNA targeting and describes currently available prediction tools. |
1410.7288 | David Richards | David M. Richards and Robert G. Endres | The mechanism of phagocytosis: two stages of engulfment | 21 pages, 7 figures | Biophysical Journal 107:1542-1553 (2014) | 10.1016/j.bpj.2014.07.070 | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite being of vital importance to the immune system, the mechanism by
which cells engulf relatively large solid particles during phagocytosis is
still poorly understood. From movies of neutrophil phagocytosis of polystyrene
beads, we measure the fractional engulfment as a function of time and
demonstrate that phagocytosis occurs in two distinct stages. During the first
stage, engulfment is relatively slow and progressively slows down as
phagocytosis proceeds. However, at approximately half-engulfment, the rate of
engulfment increases dramatically, with complete engulfment attained soon
afterwards. By studying simple mathematical models of phagocytosis, we suggest
that the first stage is due to a passive mechanism, determined by receptor
diffusion and capture, whereas the second stage is more actively controlled,
perhaps with receptors being driven towards the site of engulfment. We then
consider a more advanced model that includes signaling and captures both stages
of engulfment. This model predicts that there is an optimum ligand density for
quick engulfment. Further, we show how this model explains why non-spherical
particles engulf quickest when presented tip-first. Our findings suggest that
active regulation may be a later evolutionary innovation, allowing fast and
robust engulfment even for large particles.
| [
{
"created": "Mon, 27 Oct 2014 16:06:31 GMT",
"version": "v1"
}
] | 2014-10-28 | [
[
"Richards",
"David M.",
""
],
[
"Endres",
"Robert G.",
""
]
] | Despite being of vital importance to the immune system, the mechanism by which cells engulf relatively large solid particles during phagocytosis is still poorly understood. From movies of neutrophil phagocytosis of polystyrene beads, we measure the fractional engulfment as a function of time and demonstrate that phagocytosis occurs in two distinct stages. During the first stage, engulfment is relatively slow and progressively slows down as phagocytosis proceeds. However, at approximately half-engulfment, the rate of engulfment increases dramatically, with complete engulfment attained soon afterwards. By studying simple mathematical models of phagocytosis, we suggest that the first stage is due to a passive mechanism, determined by receptor diffusion and capture, whereas the second stage is more actively controlled, perhaps with receptors being driven towards the site of engulfment. We then consider a more advanced model that includes signaling and captures both stages of engulfment. This model predicts that there is an optimum ligand density for quick engulfment. Further, we show how this model explains why non-spherical particles engulf quickest when presented tip-first. Our findings suggest that active regulation may be a later evolutionary innovation, allowing fast and robust engulfment even for large particles. |
1009.2034 | Tobias Reichenbach | Tobias Reichenbach and A. J. Hudspeth | Dual contribution to amplification in the mammalian inner ear | 4 pages, 4 figures | Phys. Rev. Lett. 105, 118102 (2010) | 10.1103/PhysRevLett.105.118102 | null | q-bio.TO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inner ear achieves a wide dynamic range of responsiveness by mechanically
amplifying weak sounds. The enormous mechanical gain reported for the mammalian
cochlea, which exceeds a factor of 4,000, poses a challenge for theory. Here we
show how such a large gain can result from an interaction between amplification
by low-gain hair bundles and a pressure wave: hair bundles can amplify both
their displacement per locally applied pressure and the pressure wave itself. A
recently proposed ratchet mechanism, in which hair-bundle forces do not feed
back on the pressure wave, delineates the two effects. Our analytical
calculations with a WKB approximation agree with numerical solutions.
| [
{
"created": "Fri, 10 Sep 2010 15:33:32 GMT",
"version": "v1"
}
] | 2010-09-13 | [
[
"Reichenbach",
"Tobias",
""
],
[
"Hudspeth",
"A. J.",
""
]
] | The inner ear achieves a wide dynamic range of responsiveness by mechanically amplifying weak sounds. The enormous mechanical gain reported for the mammalian cochlea, which exceeds a factor of 4,000, poses a challenge for theory. Here we show how such a large gain can result from an interaction between amplification by low-gain hair bundles and a pressure wave: hair bundles can amplify both their displacement per locally applied pressure and the pressure wave itself. A recently proposed ratchet mechanism, in which hair-bundle forces do not feed back on the pressure wave, delineates the two effects. Our analytical calculations with a WKB approximation agree with numerical solutions. |
1211.0928 | Inigo Martincorena | Inigo Martincorena and Nicholas M. Luscombe | Response to Horizontal gene transfer may explain variation in theta_s | Minor formatting change and correction of a typo | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a short article submitted to ArXiv [1], Maddamsetti et al. argue that the
variation in the neutral mutation rate among genes in Escherichia coli that we
recently reported [2] might be explained by horizontal gene transfer (HGT). To
support their argument they present a reanalysis of synonymous diversity in 10
E.coli strains together with an analysis of a collection of 1,069 synonymous
mutations found in repair-deficient strains in a long-term in vitro evolution
experiment. Here we respond to this communication. Briefly, we explain that HGT
was carefully accounted for in our study by multiple independent phylogenetic
and population genetic approaches, and we show that there is no new evidence of
HGT affecting our results. We also argue that caution must be exercised when
comparing mutations from repair deficient strains to data from wild-type
strains, as these conditions are dominated by different mutational processes.
Finally, we reanalyse Maddamsetti's collection of mutations from a long-term in
vitro experiment and we report preliminary evidence of non-random variation of
the mutation rate in these repair deficient strains.
| [
{
"created": "Mon, 5 Nov 2012 17:02:41 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Nov 2012 08:04:11 GMT",
"version": "v2"
}
] | 2012-11-08 | [
[
"Martincorena",
"Inigo",
""
],
[
"Luscombe",
"Nicholas M.",
""
]
] | In a short article submitted to ArXiv [1], Maddamsetti et al. argue that the variation in the neutral mutation rate among genes in Escherichia coli that we recently reported [2] might be explained by horizontal gene transfer (HGT). To support their argument they present a reanalysis of synonymous diversity in 10 E.coli strains together with an analysis of a collection of 1,069 synonymous mutations found in repair-deficient strains in a long-term in vitro evolution experiment. Here we respond to this communication. Briefly, we explain that HGT was carefully accounted for in our study by multiple independent phylogenetic and population genetic approaches, and we show that there is no new evidence of HGT affecting our results. We also argue that caution must be exercised when comparing mutations from repair deficient strains to data from wild-type strains, as these conditions are dominated by different mutational processes. Finally, we reanalyse Maddamsetti's collection of mutations from a long-term in vitro experiment and we report preliminary evidence of non-random variation of the mutation rate in these repair deficient strains. |
1601.06191 | Ciprian Palaghianu | Ciprian Palaghianu | Analiza regenerarii padurii: perspective statistice si informatice | 415 pages, in Romanian | null | 10.13140/RG.2.1.5114.1203 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The regeneration of forest resources is one of the main objectives of modern
forest management. The present book is based on my personal PhD thesis and aims
to identify detailed features of the regeneration process and the
characteristics of sapling interaction. The research involved evaluation of
regeneration structure, dimensional differentiation of saplings, spatial
pattern analysis - sapling aggregation and species association, and sapling
competition using distance-dependent methods. A new distance-dependent
dimensional differentiation index (IDIV) and a new geometrical criterion for
selecting neighbouring competitor saplings were created and applied.
Appropriate methods and techniques were identified for investigating
regeneration and computer technology was used intensively. Nine stand-alone
software programs were produced in order to achieve the objectives of the
thesis.
| [
{
"created": "Thu, 21 Jan 2016 11:00:06 GMT",
"version": "v1"
}
] | 2016-01-26 | [
[
"Palaghianu",
"Ciprian",
""
]
] | The regeneration of forest resources is one of the main objectives of modern forest management. The present book is based on my personal PhD thesis and aims to identify detailed features of the regeneration process and the characteristics of sapling interaction. The research involved evaluation of regeneration structure, dimensional differentiation of saplings, spatial pattern analysis - sapling aggregation and species association, and sapling competition using distance-dependent methods. A new distance-dependent dimensional differentiation index (IDIV) and a new geometrical criterion for selecting neighbouring competitor saplings were created and applied. Appropriate methods and techniques were identified for investigating regeneration and computer technology was used intensively. Nine stand-alone software programs were produced in order to achieve the objectives of the thesis. |
1112.3867 | Christoph Adami | Christoph Adami | The use of information theory in evolutionary biology | 25 pages, 7 figures. To appear in "The Year in Evolutionary Biology",
of the Annals of the NY Academy of Sciences | Annals NY Acad. Sciences 1256 (2012) 49-65 | 10.1111/j.1749-6632.2011.06422.x | null | q-bio.PE cs.IT math.IT q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information is a key concept in evolutionary biology. Information is stored
in biological organism's genomes, and used to generate the organism as well as
to maintain and control it. Information is also "that which evolves". When a
population adapts to a local environment, information about this environment is
fixed in a representative genome. However, when an environment changes,
information can be lost. At the same time, information is processed by animal
brains to survive in complex environments, and the capacity for information
processing also evolves. Here I review applications of information theory to
the evolution of proteins as well as to the evolution of information processing
in simulated agents that adapt to perform a complex task.
| [
{
"created": "Fri, 16 Dec 2011 15:59:35 GMT",
"version": "v1"
}
] | 2012-07-25 | [
[
"Adami",
"Christoph",
""
]
] | Information is a key concept in evolutionary biology. Information is stored in biological organism's genomes, and used to generate the organism as well as to maintain and control it. Information is also "that which evolves". When a population adapts to a local environment, information about this environment is fixed in a representative genome. However, when an environment changes, information can be lost. At the same time, information is processed by animal brains to survive in complex environments, and the capacity for information processing also evolves. Here I review applications of information theory to the evolution of proteins as well as to the evolution of information processing in simulated agents that adapt to perform a complex task. |
2108.07388 | Michael Stumpf | Robyn P. Araujo, Sean T. Vittadello, Michael P.H. Stumpf | Bayesian and Algebraic Strategies to Design in Synthetic Biology | 12 pages,8 figures | null | null | null | q-bio.MN q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Innovation in synthetic biology often still depends on large-scale
experimental trial-and-error, domain expertise, and ingenuity. The application
of rational design engineering methods promise to make this more efficient,
faster, cheaper and safer. But this requires mathematical models of cellular
systems. And for these models we then have to determine if they can meet our
intended target behaviour. Here we develop two complementary approaches that
allow us to determine whether a given molecular circuit, represented by a
mathematical model, is capable of fulfilling our design objectives. We discuss
algebraic methods that are capable of identifying general principles
guaranteeing desired behaviour; and we provide an overview over Bayesian design
approaches that allow us to choose from a set of models, that model which has
the highest probability of fulfilling our design objectives. We discuss their
uses in the context of biochemical adaptation, and then consider how robustness
can and should affect our design approach.
| [
{
"created": "Tue, 17 Aug 2021 00:43:00 GMT",
"version": "v1"
}
] | 2021-08-18 | [
[
"Araujo",
"Robyn P.",
""
],
[
"Vittadello",
"Sean T.",
""
],
[
"Stumpf",
"Michael P. H.",
""
]
] | Innovation in synthetic biology often still depends on large-scale experimental trial-and-error, domain expertise, and ingenuity. The application of rational design engineering methods promise to make this more efficient, faster, cheaper and safer. But this requires mathematical models of cellular systems. And for these models we then have to determine if they can meet our intended target behaviour. Here we develop two complementary approaches that allow us to determine whether a given molecular circuit, represented by a mathematical model, is capable of fulfilling our design objectives. We discuss algebraic methods that are capable of identifying general principles guaranteeing desired behaviour; and we provide an overview over Bayesian design approaches that allow us to choose from a set of models, that model which has the highest probability of fulfilling our design objectives. We discuss their uses in the context of biochemical adaptation, and then consider how robustness can and should affect our design approach. |
0710.5886 | Henri Laurie | Henri Laurie and Edith Perrier | A multifractal model for spatial variation in species richness | 17 pages, 6 figures | null | null | null | q-bio.QM q-bio.PE | null | Models for species-area relationships up to now have focused on the mean
richness as a function of area. We present MFp1p2, a self-similar multifractal.
It explicitly models both trend and variation in richness as a function of
area, and is a generalisation of the model of scaling of mean species richness
due to Harte et al (1999). The construction is based on a cascade of bisections
of a rectangle. The two parameters of the model are p1, the proportion of
species that occur in the richer half, and p2, the proportion of species that
occur in the poorer half. Equivalent parameterisations are a = (p1 + p2)/2 and
b = p1/p2. These parameters are interpreted as follows: a gives the scaling of
mean density, b gives the scaling of spatial variability. Several properties of
MFp1p2 are derived, a generalisation is noted and some applications are
suggested.
| [
{
"created": "Wed, 31 Oct 2007 15:41:21 GMT",
"version": "v1"
}
] | 2007-11-01 | [
[
"Laurie",
"Henri",
""
],
[
"Perrier",
"Edith",
""
]
] | Models for species-area relationships up to now have focused on the mean richness as a function of area. We present MFp1p2, a self-similar multifractal. It explicitly models both trend and variation in richness as a function of area, and is a generalisation of the model of scaling of mean species richness due to Harte et al (1999). The construction is based on a cascade of bisections of a rectangle. The two parameters of the model are p1, the proportion of species that occur in the richer half, and p2, the proportion of species that occur in the poorer half. Equivalent parameterisations are a = (p1 + p2)/2 and b = p1/p2. These parameters are interpreted as follows: a gives the scaling of mean density, b gives the scaling of spatial variability. Several properties of MFp1p2 are derived, a generalisation is noted and some applications are suggested. |
1103.1200 | William Sherwood | Ed Reznik, Daniel Segre, William Erik Sherwood | The Quasi-Steady State Assumption in an Enzymatically Open System | 28 pages, 12 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quasi-steady state assumption (QSSA) forms the basis for rigorous
mathematical justification of the Michaelis-Menten formalism commonly used in
modeling a broad range of intracellular phenomena. A critical supposition of
QSSA-based analyses is that the underlying biochemical reaction is
enzymatically "closed," so that free enzyme is neither added to nor removed
from the reaction over the relevant time period. Yet there are multiple
circumstances in living cells under which this assumption may not hold, e.g.
during translation of genetic elements or metabolic regulatory events. Here we
consider a modified version of the most basic enzyme-catalyzed reaction which
incorporates enzyme input and removal. We extend the QSSA to this enzymatically
"open" system, computing inner approximations to its dynamics, and we compare
the behavior of the full open system, our approximations, and the closed system
under broad range of kinetic parameters. We also derive conditions under which
our new approximations are provably valid; numerical simulations demonstrate
that our approximations remain quite accurate even when these conditions are
not satisfied. Finally, we investigate the possibility of damped oscillatory
behavior in the enzymatically open reaction.
| [
{
"created": "Mon, 7 Mar 2011 06:04:02 GMT",
"version": "v1"
}
] | 2011-03-08 | [
[
"Reznik",
"Ed",
""
],
[
"Segre",
"Daniel",
""
],
[
"Sherwood",
"William Erik",
""
]
] | The quasi-steady state assumption (QSSA) forms the basis for rigorous mathematical justification of the Michaelis-Menten formalism commonly used in modeling a broad range of intracellular phenomena. A critical supposition of QSSA-based analyses is that the underlying biochemical reaction is enzymatically "closed," so that free enzyme is neither added to nor removed from the reaction over the relevant time period. Yet there are multiple circumstances in living cells under which this assumption may not hold, e.g. during translation of genetic elements or metabolic regulatory events. Here we consider a modified version of the most basic enzyme-catalyzed reaction which incorporates enzyme input and removal. We extend the QSSA to this enzymatically "open" system, computing inner approximations to its dynamics, and we compare the behavior of the full open system, our approximations, and the closed system under broad range of kinetic parameters. We also derive conditions under which our new approximations are provably valid; numerical simulations demonstrate that our approximations remain quite accurate even when these conditions are not satisfied. Finally, we investigate the possibility of damped oscillatory behavior in the enzymatically open reaction. |