id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.05880 | Jingwen Zhang | Enze Xu, Jingwen Zhang, Jiadi Li, Qianqian Song, Defu Yang, Guorong
Wu, Minghan Chen | Pathology Steered Stratification Network for Subtype Identification in
Alzheimer's Disease | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alzheimer's disease (AD) is a heterogeneous, multifactorial neurodegenerative
disorder characterized by beta-amyloid, pathologic tau, and neurodegeneration.
There are no effective treatments for Alzheimer's disease at a late stage,
calling for early intervention. However, existing statistical inference
approaches of AD subtype identification ignore the pathological domain
knowledge, which could lead to ill-posed results that are sometimes
inconsistent with the essential neurological principles. Integrating systems
biology modeling with machine learning, we propose a novel pathology steered
stratification network (PSSN) that incorporates established domain knowledge in
AD pathology through a reaction-diffusion model, where we consider non-linear
interactions between major biomarkers and diffusion along brain structural
network. Trained on longitudinal multimodal neuroimaging data, the biological
model predicts long-term trajectories that capture individual progression
pattern, filling in the gaps between sparse imaging data available. A deep
predictive neural network is then built to exploit spatiotemporal dynamics,
link neurological examinations with clinical profiles, and generate subtype
assignment probability on an individual basis. We further identify an
evolutionary disease graph to quantify subtype transition probabilities through
extensive simulations. Our stratification achieves superior performance in both
inter-cluster heterogeneity and intra-cluster homogeneity of various clinical
scores. Applying our approach to enriched samples of aging populations, we
identify six subtypes spanning AD spectrum, where each subtype exhibits a
distinctive biomarker pattern that is consistent with its clinical outcome.
PSSN provides insights into pre-symptomatic diagnosis and practical guidance on
clinical treatments, which may be further generalized to other
neurodegenerative diseases.
| [
{
"created": "Wed, 12 Oct 2022 02:52:00 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Aug 2023 14:59:21 GMT",
"version": "v2"
}
] | 2023-08-28 | [
[
"Xu",
"Enze",
""
],
[
"Zhang",
"Jingwen",
""
],
[
"Li",
"Jiadi",
""
],
[
"Song",
"Qianqian",
""
],
[
"Yang",
"Defu",
""
],
[
"Wu",
"Guorong",
""
],
[
"Chen",
"Minghan",
""
]
] | Alzheimer's disease (AD) is a heterogeneous, multifactorial neurodegenerative disorder characterized by beta-amyloid, pathologic tau, and neurodegeneration. There are no effective treatments for Alzheimer's disease at a late stage, urging for early intervention. However, existing statistical inference approaches of AD subtype identification ignore the pathological domain knowledge, which could lead to ill-posed results that are sometimes inconsistent with the essential neurological principles. Integrating systems biology modeling with machine learning, we propose a novel pathology steered stratification network (PSSN) that incorporates established domain knowledge in AD pathology through a reaction-diffusion model, where we consider non-linear interactions between major biomarkers and diffusion along brain structural network. Trained on longitudinal multimodal neuroimaging data, the biological model predicts long-term trajectories that capture individual progression pattern, filling in the gaps between sparse imaging data available. A deep predictive neural network is then built to exploit spatiotemporal dynamics, link neurological examinations with clinical profiles, and generate subtype assignment probability on an individual basis. We further identify an evolutionary disease graph to quantify subtype transition probabilities through extensive simulations. Our stratification achieves superior performance in both inter-cluster heterogeneity and intra-cluster homogeneity of various clinical scores. Applying our approach to enriched samples of aging populations, we identify six subtypes spanning AD spectrum, where each subtype exhibits a distinctive biomarker pattern that is consistent with its clinical outcome. PSSN provides insights into pre-symptomatic diagnosis and practical guidance on clinical treatments, which may be further generalized to other neurodegenerative diseases. |
0808.1371 | Douglas Kell | Douglas B. Kell | Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor
to the Aetiology of Vascular and Other Progressive Inflammatory and
Degenerative Diseases | 159 pages, including 9 Figs and 2184 references | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The production of peroxide and superoxide is an inevitable consequence of
aerobic metabolism, and while these particular "reactive oxygen species" (ROSs)
can exhibit a number of biological effects, they are not of themselves
excessively reactive and thus they are not especially damaging at physiological
concentrations. However, their reactions with poorly liganded iron species can
lead to the catalytic production of the very reactive and dangerous hydroxyl
radical, which is exceptionally damaging, and a major cause of chronic
inflammation. We review the considerable and wide-ranging evidence for the
involvement of this combination of (su)peroxide and poorly liganded iron in a
large number of physiological and indeed pathological processes and
inflammatory disorders, especially those involving the progressive degradation
of cellular and organismal performance. These diseases share a great many
similarities and thus might be considered to have a common cause (i.e.
iron-catalysed free radical and especially hydroxyl radical generation). The
studies reviewed include those focused on a series of cardiovascular, metabolic
and neurological diseases, where iron can be found at the sites of plaques and
lesions, as well as studies showing the significance of iron to aging and
longevity. The effective chelation of iron by natural or synthetic ligands is
thus of major physiological (and potentially therapeutic) importance. As
systems properties, we need to recognise that physiological observables have
multiple molecular causes, and studying them in isolation leads to inconsistent
patterns of apparent causality when it is the simultaneous combination of
multiple factors that is responsible. This explains, for instance, the
decidedly mixed effects of antioxidants that have been observed, etc...
| [
{
"created": "Sat, 9 Aug 2008 19:31:29 GMT",
"version": "v1"
}
] | 2008-08-14 | [
[
"Kell",
"Douglas B.",
""
]
] | The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus they are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging, and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. As systems properties, we need to recognise that physiological observables have multiple molecular causes, and studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed, etc... |
0902.2885 | Yasser Roudi | Yasser Roudi, Joanna Tyrcha, John Hertz | The Ising Model for Neural Data: Model Quality and Approximate Methods
for Extracting Functional Connectivity | 12 pages, 10 figures | Phys. Rev. E 79, 051915, 2009 | 10.1103/PhysRevE.79.051915 | null | q-bio.QM cond-mat.dis-nn q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study pairwise Ising models for describing the statistics of multi-neuron
spike trains, using data from a simulated cortical network. We explore
efficient ways of finding the optimal couplings in these models and examine
their statistical properties. To do this, we extract the optimal couplings for
subsets of size up to 200 neurons, essentially exactly, using Boltzmann
learning. We then study the quality of several approximate methods for finding
the couplings by comparing their results with those found from Boltzmann
learning. Two of these methods, inversion of the TAP equations and an
approximation proposed by Sessak and Monasson, are remarkably accurate. Using
these approximations for larger subsets of neurons, we find that extracting
couplings using data from a subset smaller than the full network tends
systematically to overestimate their magnitude. This effect is described
qualitatively by infinite-range spin glass theory for the normal phase. We also
show that a globally-correlated input to the neurons in the network leads to a
small increase in the average coupling. However, the pair-to-pair variation of
the couplings is much larger than this and reflects intrinsic properties of the
network. Finally, we study the quality of these models by comparing their
entropies with that of the data. We find that they perform well for small
subsets of the neurons in the network, but the fit quality starts to
deteriorate as the subset size grows, signalling the need to include higher
order correlations to describe the statistics of large networks.
| [
{
"created": "Tue, 17 Feb 2009 10:43:30 GMT",
"version": "v1"
}
] | 2009-05-21 | [
[
"Roudi",
"Yasser",
""
],
[
"Tyrcha",
"Joanna",
""
],
[
"Hertz",
"John",
""
]
] | We study pairwise Ising models for describing the statistics of multi-neuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods- inversion of the TAP equations and an approximation proposed by Sessak and Monasson- are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin glass theory for the normal phase. We also show that a globally-correlated input to the neurons in the network lead to a small increase in the average coupling. However, the pair-to-pair variation of the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signalling the need to include higher order correlations to describe the statistics of large networks. |
1606.08788 | Joan Saldana | Winfried Just, Joan Saldana, Ying Xin | Oscillations in epidemic models with spread of awareness | 32 pages, 8 figures, Journal of Mathematical Biology (Published
online: 28 July 2017) | null | 10.1007/s00285-017-1166-x | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study ODE models of epidemic spreading with a preventive behavioral
response that is triggered by awareness of the infection. Previous studies of
such models have mostly focused on the impact of the response on the initial
growth of an outbreak and the existence and location of endemic equilibria.
Here we study the question whether this type of response is sufficient to
prevent future flare-ups from low endemic levels if awareness is assumed to
decay over time. In the ODE context, such flare-ups would translate into
sustained oscillations with significant amplitudes.
Our results show that such oscillations are ruled out in
Susceptible-Aware-Infectious-Susceptible models with a single compartment of
aware hosts, but can occur if we consider two distinct compartments of aware
hosts who differ in their willingness to alert other susceptible hosts.
| [
{
"created": "Tue, 28 Jun 2016 17:03:09 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Aug 2017 12:45:52 GMT",
"version": "v2"
}
] | 2017-08-08 | [
[
"Just",
"Winfried",
""
],
[
"Saldana",
"Joan",
""
],
[
"Xin",
"Ying",
""
]
] | We study ODE models of epidemic spreading with a preventive behavioral response that is triggered by awareness of the infection. Previous studies of such models have mostly focused on the impact of the response on the initial growth of an outbreak and the existence and location of endemic equilibria. Here we study the question whether this type of response is sufficient to prevent future flare-ups from low endemic levels if awareness is assumed to decay over time. In the ODE context, such flare-ups would translate into sustained oscillations with significant amplitudes. Our results show that such oscillations are ruled out in Susceptible-Aware-Infectious-Susceptible models with a single compartment of aware hosts, but can occur if we consider two distinct compartments of aware hosts who differ in their willingness to alert other susceptible hosts. |
2312.11436 | Nikhil Parthasarathy | Nikhil Parthasarathy, Olivier J. H\'enaff, Eero P. Simoncelli | Layerwise complexity-matched learning yields an improved model of
cortical area V2 | 31 pages, 13 figures | Transactions on Machine Learning Research, Jun 2024 | null | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human ability to recognize complex visual patterns arises through
transformations performed by successive areas in the ventral visual cortex.
Deep neural networks trained end-to-end for object recognition approach human
capabilities, and offer the best descriptions to date of neural responses in
the late stages of the hierarchy. But these networks provide a poor account of
the early stages, compared to traditional hand-engineered models, or models
optimized for coding efficiency or prediction. Moreover, the gradient
backpropagation used in end-to-end learning is generally considered to be
biologically implausible. Here, we overcome both of these limitations by
developing a bottom-up self-supervised training methodology that operates
independently on successive layers. Specifically, we maximize feature
similarity between pairs of locally-deformed natural image patches, while
decorrelating features across patches sampled from other images. Crucially, the
deformation amplitudes are adjusted proportionally to receptive field sizes in
each layer, thus matching the task complexity to the capacity at each stage of
processing. In comparison with architecture-matched versions of previous
models, we demonstrate that our layerwise complexity-matched learning (LCL)
formulation produces a two-stage model (LCL-V2) that is better aligned with
selectivity properties and neural activity in primate area V2. We demonstrate
that the complexity-matched learning paradigm is responsible for much of the
emergence of the improved biological alignment. Finally, when the two-stage
model is used as a fixed front-end for a deep network trained to perform object
recognition, the resultant model (LCL-V2Net) is significantly better than
standard end-to-end self-supervised, supervised, and adversarially-trained
models in terms of generalization to out-of-distribution tasks and alignment
with human behavior.
| [
{
"created": "Mon, 18 Dec 2023 18:37:02 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Mar 2024 16:31:58 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Jul 2024 23:41:24 GMT",
"version": "v3"
}
] | 2024-07-22 | [
[
"Parthasarathy",
"Nikhil",
""
],
[
"Hénaff",
"Olivier J.",
""
],
[
"Simoncelli",
"Eero P.",
""
]
] | Human ability to recognize complex visual patterns arises through transformations performed by successive areas in the ventral visual cortex. Deep neural networks trained end-to-end for object recognition approach human capabilities, and offer the best descriptions to date of neural responses in the late stages of the hierarchy. But these networks provide a poor account of the early stages, compared to traditional hand-engineered models, or models optimized for coding efficiency or prediction. Moreover, the gradient backpropagation used in end-to-end learning is generally considered to be biologically implausible. Here, we overcome both of these limitations by developing a bottom-up self-supervised training methodology that operates independently on successive layers. Specifically, we maximize feature similarity between pairs of locally-deformed natural image patches, while decorrelating features across patches sampled from other images. Crucially, the deformation amplitudes are adjusted proportionally to receptive field sizes in each layer, thus matching the task complexity to the capacity at each stage of processing. In comparison with architecture-matched versions of previous models, we demonstrate that our layerwise complexity-matched learning (LCL) formulation produces a two-stage model (LCL-V2) that is better aligned with selectivity properties and neural activity in primate area V2. We demonstrate that the complexity-matched learning paradigm is responsible for much of the emergence of the improved biological alignment. Finally, when the two-stage model is used as a fixed front-end for a deep network trained to perform object recognition, the resultant model (LCL-V2Net) is significantly better than standard end-to-end self-supervised, supervised, and adversarially-trained models in terms of generalization to out-of-distribution tasks and alignment with human behavior. |
1505.02033 | Meng Xu | Meng Xu | Taylor's power law: before and after 50 years of scientific scrutiny | 13 pages, 1 figure | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Taylor's power law is one of the most widely known empirical patterns in
ecology discovered in the 20th century. It states that the variance of species
population density scales as a power-law function of the mean population
density. Taylor's power law was named after the British ecologist Lionel Roy
Taylor. During the past half-century, Taylor's power law was confirmed for
thousands of biological species and even for non-biological quantities.
Numerous theories and models have been proposed to explain the mechanisms of
Taylor's power law. However, an understanding of the historical origin of this
ubiquitous scaling pattern is lacking. This work reviews two research aspects
that are fundamental to the discovery of Taylor's power law and provides an
outlook of its future studies.
| [
{
"created": "Thu, 7 May 2015 18:03:54 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Apr 2016 19:50:35 GMT",
"version": "v2"
}
] | 2016-04-19 | [
[
"Xu",
"Meng",
""
]
] | Taylor's power law is one of the most widely known empirical patterns in ecology discovered in the 20th century. It states that the variance of species population density scales as a power-law function of the mean population density. Taylor's power law was named after the British ecologist Lionel Roy Taylor. During the past half-century, Taylor's power law was confirmed for thousands of biological species and even for non-biological quantities. Numerous theories and models have been proposed to explain the mechanisms of Taylor's power law. However, an understanding of the historical origin of this ubiquitous scaling pattern is lacking. This work reviews two research aspects that are fundamental to the discovery of Taylor's power law and provides an outlook of its future studies. |
1807.04691 | Jennifer Stiso | Jennifer Stiso and Danielle Bassett | Spatial Embedding Imposes Constraints on the Network Architectures of
Neural Systems | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | A fundamental understanding of the network architecture of the brain is
necessary for the further development of theories explicating circuit function.
Perhaps as a derivative of its initial application to abstract informational
systems, network science provides many methods and summary statistics that
address the network's topological characteristics with little or no thought to
its physical instantiation. Recent progress has capitalized on quantitative
tools from network science to parsimoniously describe and predict neural
activity and connectivity across multiple spatial and temporal scales. Yet, for
embedded systems, physical laws can directly constrain processes of network
growth, development, and function, and an appreciation of those physical laws
is therefore critical to an understanding of the system. Recent evidence
demonstrates that the constraints imposed by the physical shape of the brain,
and by the mechanical forces at play in its development, have marked effects on
the observed network topology and function. Here, we review the rules imposed
by space on the development of neural networks and show that these rules give
rise to a specific set of complex topologies. We present evidence that these
fundamental wiring rules affect the repertoire of neural dynamics that can
emerge from the system, and thereby inform our understanding of network
dysfunction in disease. We also discuss several computational tools,
mathematical models, and algorithms that have proven useful in delineating the
effects of spatial embedding on a given networked system and are important
considerations for addressing future problems in network neuroscience. Finally,
we outline several open questions regarding the network architectures that
support circuit function, the answers to which will require a thorough and
honest appraisal of the role of physical space in brain network anatomy and
physiology.
| [
{
"created": "Thu, 12 Jul 2018 15:57:20 GMT",
"version": "v1"
}
] | 2018-07-13 | [
[
"Stiso",
"Jennifer",
""
],
[
"Bassett",
"Danielle",
""
]
] | A fundamental understanding of the network architecture of the brain is necessary for the further development of theories explicating circuit function. Perhaps as a derivative of its initial application to abstract informational systems, network science provides many methods and summary statistics that address the network's topological characteristics with little or no thought to its physical instantiation. Recent progress has capitalized on quantitative tools from network science to parsimoniously describe and predict neural activity and connectivity across multiple spatial and temporal scales. Yet, for embedded systems, physical laws can directly constrain processes of network growth, development, and function, and an appreciation of those physical laws is therefore critical to an understanding of the system. Recent evidence demonstrates that the constraints imposed by the physical shape of the brain, and by the mechanical forces at play in its development, have marked effects on the observed network topology and function. Here, we review the rules imposed by space on the development of neural networks and show that these rules give rise to a specific set of complex topologies. We present evidence that these fundamental wiring rules affect the repertoire of neural dynamics that can emerge from the system, and thereby inform our understanding of network dysfunction in disease. We also discuss several computational tools, mathematical models, and algorithms that have proven useful in delineating the effects of spatial embedding on a given networked system and are important considerations for addressing future problems in network neuroscience. Finally, we outline several open questions regarding the network architectures that support circuit function, the answers to which will require a thorough and honest appraisal of the role of physical space in brain network anatomy and physiology. |
1402.6000 | Peter Freese | Peter D. Freese, Kirill S. Korolev, Jose I. Jimenez, and Irene A. Chen | Genetic drift suppresses bacterial conjugation in spatially structured
populations | null | Biophysical Journal, Volume 106, Issue 4, 18 February 2014, Pages
944-954 | 10.1016/j.bpj.2014.01.012 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conjugation is the primary mechanism of horizontal gene transfer that spreads
antibiotic resistance among bacteria. Although conjugation normally occurs in
surface-associated growth (e.g., biofilms), it has been traditionally studied
in well-mixed liquid cultures lacking spatial structure, which is known to
affect many evolutionary and ecological processes. Here we visualize spatial
patterns of gene transfer mediated by F plasmid conjugation in a colony of
Escherichia coli growing on solid agar, and we develop a quantitative
understanding by spatial extension of traditional mass-action models. We found
that spatial structure suppresses conjugation in surface-associated growth
because strong genetic drift leads to spatial isolation of donor and recipient
cells, restricting conjugation to rare boundaries between donor and recipient
strains. These results suggest that ecological strategies, such as enforcement
of spatial structure and enhancement of genetic drift, could complement
molecular strategies in slowing the spread of antibiotic resistance genes.
| [
{
"created": "Mon, 24 Feb 2014 22:09:50 GMT",
"version": "v1"
}
] | 2014-02-26 | [
[
"Freese",
"Peter D.",
""
],
[
"Korolev",
"Kirill S.",
""
],
[
"Jimenez",
"Jose I.",
""
],
[
"Chen",
"Irene A.",
""
]
] | Conjugation is the primary mechanism of horizontal gene transfer that spreads antibiotic resistance among bacteria. Although conjugation normally occurs in surface-associated growth (e.g., biofilms), it has been traditionally studied in well-mixed liquid cultures lacking spatial structure, which is known to affect many evolutionary and ecological processes. Here we visualize spatial patterns of gene transfer mediated by F plasmid conjugation in a colony of Escherichia coli growing on solid agar, and we develop a quantitative understanding by spatial extension of traditional mass-action models. We found that spatial structure suppresses conjugation in surface-associated growth because strong genetic drift leads to spatial isolation of donor and recipient cells, restricting conjugation to rare boundaries between donor and recipient strains. These results suggest that ecological strategies, such as enforcement of spatial structure and enhancement of genetic drift, could complement molecular strategies in slowing the spread of antibiotic resistance genes. |
2212.06269 | Fatih Gulec | Fatih Gulec and Andrew W. Eckford | Stochastic Modeling of Biofilm Formation with Bacterial Quorum Sensing | Submitted to ICC 2023 | null | 10.1109/ICC45041.2023.10278566 | null | q-bio.QM eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacteria generally live in complicated structures called biofilms, consisting
of communicating bacterial colonies and extracellular polymeric substance
(EPS). Since biofilms are related to detrimental effects such as infection or
antibiotic resistance in different settings, it is essential to model their
formation. In this paper, a stochastic model is proposed for biofilm formation,
using bacterial quorum sensing (QS). In this model, the biological processes in
the biofilm formation are modeled as a chemical reaction network which includes
bacterial reproduction, productions of autoinducer and EPS, and their
diffusion. The modified explicit tau-leap simulation algorithm is adapted based
on the two-state QS mechanism. Our approach is validated by using the
experimental results of $\textit{Pseudomonas putida}$ IsoF bacteria for
autoinducer and bacteria concentration. It is also shown that the percentage of
EPS in the biofilm increases significantly after the state change in QS, while
it decreases before QS is activated. The presented work shows how the biofilm
growth can be modeled realistically by using the QS mechanism in stochastic
simulations of chemical reactions.
| [
{
"created": "Mon, 12 Dec 2022 22:12:50 GMT",
"version": "v1"
}
] | 2023-11-07 | [
[
"Gulec",
"Fatih",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | Bacteria generally live in complicated structures called biofilms, consisting of communicating bacterial colonies and extracellular polymeric substance (EPS). Since biofilms are related to detrimental effects such as infection or antibiotic resistance in different settings, it is essential to model their formation. In this paper, a stochastic model is proposed for biofilm formation, using bacterial quorum sensing (QS). In this model, the biological processes in the biofilm formation are modeled as a chemical reaction network which includes bacterial reproduction, productions of autoinducer and EPS, and their diffusion. The modified explicit tau-leap simulation algorithm is adapted based on the two-state QS mechanism. Our approach is validated by using the experimental results of $\textit{Pseudomonas putida}$ IsoF bacteria for autoinducer and bacteria concentration. It is also shown that the percentage of EPS in the biofilm increases significantly after the state change in QS, while it decreases before QS is activated. The presented work shows how the biofilm growth can be modeled realistically by using the QS mechanism in stochastic simulations of chemical reactions. |
1808.01893 | Cinzia Di Giusto | Elisabetta De Maria (C&A), Cinzia Di Giusto (C&A), Laetitia Laversa
(C&A) | Spiking Neural Networks modelled as Timed Automata with parameter
learning | null | null | null | null | q-bio.NC cs.FL q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a novel approach to automatically infer parameters
of spiking neural networks. Neurons are modelled as timed automata waiting for
inputs on a number of different channels (synapses), for a given amount of time
(the accumulation period). When this period is over, the current potential
value is computed considering current and past inputs. If this potential
overcomes a given threshold, the automaton emits a broadcast signal over its
output channel, otherwise it restarts another accumulation period. After each
emission, the automaton remains inactive for a fixed refractory period. Spiking
neural networks are formalised as sets of automata, one for each neuron,
running in parallel and sharing channels according to the network structure.
Such a model is formally validated against some crucial properties defined via
proper temporal logic formulae. The model is then exploited to find an
assignment for the synaptical weights of neural networks such that they can
reproduce a given behaviour. The core of this approach consists in identifying
some correcting actions adjusting synaptical weights and back-propagating them
until the expected behaviour is displayed. A concrete case study is discussed.
| [
{
"created": "Wed, 1 Aug 2018 08:19:31 GMT",
"version": "v1"
}
] | 2018-08-07 | [
[
"De Maria",
"Elisabetta",
"",
"C&A"
],
[
"Di Giusto",
"Cinzia",
"",
"C&A"
],
[
"Laversa",
"Laetitia",
"",
"C&A"
]
] | In this paper we present a novel approach to automatically infer parameters of spiking neural networks. Neurons are modelled as timed automata waiting for inputs on a number of different channels (synapses), for a given amount of time (the accumulation period). When this period is over, the current potential value is computed considering current and past inputs. If this potential overcomes a given threshold, the automaton emits a broadcast signal over its output channel, otherwise it restarts another accumulation period. After each emission, the automaton remains inactive for a fixed refractory period. Spiking neural networks are formalised as sets of automata, one for each neuron, running in parallel and sharing channels according to the network structure. Such a model is formally validated against some crucial properties defined via proper temporal logic formulae. The model is then exploited to find an assignment for the synaptical weights of neural networks such that they can reproduce a given behaviour. The core of this approach consists in identifying some correcting actions adjusting synaptical weights and back-propagating them until the expected behaviour is displayed. A concrete case study is discussed. |
1510.02469 | Diego Ferreiro | Pablo Turjanski, R. Gonzalo Parra, Roc\'io Espada, Ver\'onica Becher,
Diego U. Ferreiro | Protein Repeats from First Principles | 15 pages, 5 figures and supporting information | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some natural proteins display recurrent structural patterns. Despite being
highly similar at the tertiary structure level, repetitions within a single
repeat protein can be extremely variable at the sequence level. We propose a
mathematical definition of a repeat and investigate the occurrences of these in
different protein families. We found that long stretches of perfect repetitions
are infrequent in individual natural proteins, even for those which are known
to fold into structures of recurrent structural motifs. We found that natural
repeat proteins are indeed repetitive in their families, exhibiting abundant
stretches of 6 amino acids or longer that are perfect repetitions in the
reference family. We provide a systematic quantification for this
repetitiveness, and show that this form of repetitiveness is not exclusive of
repeat proteins, but also occurs in globular domains. A by-product of this work
is a fast classifier of proteins into families, which yields a likelihood
value for a given protein belonging to a given family.
| [
{
"created": "Thu, 8 Oct 2015 14:43:34 GMT",
"version": "v1"
}
] | 2015-10-12 | [
[
"Turjanski",
"Pablo",
""
],
[
"Parra",
"R. Gonzalo",
""
],
[
"Espada",
"Rocío",
""
],
[
"Becher",
"Verónica",
""
],
[
"Ferreiro",
"Diego U.",
""
]
] | Some natural proteins display recurrent structural patterns. Despite being highly similar at the tertiary structure level, repetitions within a single repeat protein can be extremely variable at the sequence level. We propose a mathematical definition of a repeat and investigate the occurrences of these in different protein families. We found that long stretches of perfect repetitions are infrequent in individual natural proteins, even for those which are known to fold into structures of recurrent structural motifs. We found that natural repeat proteins are indeed repetitive in their families, exhibiting abundant stretches of 6 amino acids or longer that are perfect repetitions in the reference family. We provide a systematic quantification for this repetitiveness, and show that this form of repetitiveness is not exclusive of repeat proteins, but also occurs in globular domains. A by-product of this work is a fast classifier of proteins into families, which yields a likelihood value for a given protein belonging to a given family. |
2007.02338 | Sumanta Ray | Sumanta Ray, Snehalika Lall, Anirban Mukhopadhyay, Sanghamitra
Bandyopadhyay, and Alexander Sch\"onhuth | Predicting potential drug targets and repurposable drugs for COVID-19
via a deep generative model for graphs | 19 pages, 5 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronavirus Disease 2019 (COVID-19) has been creating a worldwide pandemic
situation. Repurposing drugs, already shown to be free of harmful side effects,
for the treatment of COVID-19 patients is an important option in launching
novel therapeutic strategies. Therefore, reliable molecule interaction data are
a crucial basis, where drug-/protein-protein interaction networks establish
invaluable, year-long carefully curated data resources. However, these
resources have not yet been systematically exploited using high-performance
artificial intelligence approaches. Here, we combine three networks, two of
which are year-long curated, and one of which, on SARS-CoV-2-human host-virus
protein interactions, was published only most recently (30th of April 2020),
raising a novel network that puts drugs, human and virus proteins into mutual
context. We apply Variational Graph AutoEncoders (VGAEs), which represent the
most advanced deep-learning-based methodology for the analysis of data that are
subject to network constraints. Reliable simulations confirm that we operate at
utmost accuracy in terms of predicting missing links. We then predict hitherto
unknown links between drugs and human proteins against which virus proteins
preferably bind. The corresponding therapeutic agents present splendid starting
points for exploring novel host-directed therapy (HDT) options.
| [
{
"created": "Sun, 5 Jul 2020 13:45:14 GMT",
"version": "v1"
}
] | 2020-07-07 | [
[
"Ray",
"Sumanta",
""
],
[
"Lall",
"Snehalika",
""
],
[
"Mukhopadhyay",
"Anirban",
""
],
[
"Bandyopadhyay",
"Sanghamitra",
""
],
[
"Schönhuth",
"Alexander",
""
]
] | Coronavirus Disease 2019 (COVID-19) has been creating a worldwide pandemic situation. Repurposing drugs, already shown to be free of harmful side effects, for the treatment of COVID-19 patients is an important option in launching novel therapeutic strategies. Therefore, reliable molecule interaction data are a crucial basis, where drug-/protein-protein interaction networks establish invaluable, year-long carefully curated data resources. However, these resources have not yet been systematically exploited using high-performance artificial intelligence approaches. Here, we combine three networks, two of which are year-long curated, and one of which, on SARS-CoV-2-human host-virus protein interactions, was published only most recently (30th of April 2020), raising a novel network that puts drugs, human and virus proteins into mutual context. We apply Variational Graph AutoEncoders (VGAEs), which represent the most advanced deep-learning-based methodology for the analysis of data that are subject to network constraints. Reliable simulations confirm that we operate at utmost accuracy in terms of predicting missing links. We then predict hitherto unknown links between drugs and human proteins against which virus proteins preferably bind. The corresponding therapeutic agents present splendid starting points for exploring novel host-directed therapy (HDT) options. |
2102.07743 | Rachael Heuer | Rachael M. Heuer (1), John D. Stieglitz (2), Christina Pasparakis (1),
Ian C. Enochs (3), Daniel D. Benetti (2), and Martin Grosell (1) ((1)
University of Miami Rosenstiel School of Marine and Atmospheric Science,
Department of Marine Biology and Ecology, (2) University of Miami Rosenstiel
School of Marine and Atmospheric Science, Department of Marine Ecosystems and
Society, (3) NOAA, Atlantic Oceanographic and Meteorological Laboratory,
Ocean Chemistry and Ecosystem Division) | The effects of temperature acclimation on swimming performance in the
pelagic Mahi-mahi (Coryphaena hippurus) | 24 pages, 3 figures main text, 6 figures supplemental text, published
in Frontiers in Marine Science
https://www.frontiersin.org/articles/10.3389/fmars.2021.654276/full | Front. Mar. Sci. 8 (2021) 654276 | 10.3389/fmars.2021.654276 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Mahi-mahi (Coryphaena hippurus) are a highly migratory pelagic fish, but
little is known about what environmental factors drive their broad
distribution. This study examined how temperature influences aerobic scope and
swimming performance in mahi. Mahi were acclimated to four temperatures
spanning their natural range (20, 24, 28, and 32{\deg}C; 5-27 days) and
critical swimming speed (Ucrit), metabolic rates, aerobic scope, and optimal
swim speed were measured. Aerobic scope and Ucrit were highest in
28{\deg}C-acclimated fish. 20{\deg}C-acclimated mahi experienced significantly
decreased aerobic scope and Ucrit relative to 28{\deg}C-acclimated fish (57 and
28% declines, respectively). 32{\deg}C-acclimated mahi experienced increased
mortality and a significant 23% decline in Ucrit, and a trend for a 26% decline
in factorial aerobic scope relative to 28{\deg}C-acclimated fish. Absolute
aerobic scope showed a similar pattern to factorial aerobic scope. Our results
are generally in agreement with previously observed distribution patterns for
wild fish. Although thermal performance can vary across life stages, the
highest tested swim performance and aerobic scope found in the present study
(28{\deg}C) align with recently observed habitat utilization patterns for
wild mahi and could be relevant for climate change predictions.
| [
{
"created": "Mon, 15 Feb 2021 18:39:08 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Sep 2021 17:43:33 GMT",
"version": "v2"
}
] | 2021-10-01 | [
[
"Heuer",
"Rachael M.",
""
],
[
"Stieglitz",
"John D.",
""
],
[
"Pasparakis",
"Christina",
""
],
[
"Enochs",
"Ian C.",
""
],
[
"Benetti",
"Daniel D.",
""
],
[
"Grosell",
"Martin",
""
]
] | Mahi-mahi (Coryphaena hippurus) are a highly migratory pelagic fish, but little is known about what environmental factors drive their broad distribution. This study examined how temperature influences aerobic scope and swimming performance in mahi. Mahi were acclimated to four temperatures spanning their natural range (20, 24, 28, and 32{\deg}C; 5-27 days) and critical swimming speed (Ucrit), metabolic rates, aerobic scope, and optimal swim speed were measured. Aerobic scope and Ucrit were highest in 28{\deg}C-acclimated fish. 20{\deg}C-acclimated mahi experienced significantly decreased aerobic scope and Ucrit relative to 28{\deg}C-acclimated fish (57 and 28% declines, respectively). 32{\deg}C-acclimated mahi experienced increased mortality and a significant 23% decline in Ucrit, and a trend for a 26% decline in factorial aerobic scope relative to 28{\deg}C-acclimated fish. Absolute aerobic scope showed a similar pattern to factorial aerobic scope. Our results are generally in agreement with previously observed distribution patterns for wild fish. Although thermal performance can vary across life stages, the highest tested swim performance and aerobic scope found in the present study (28{\deg}C) align with recently observed habitat utilization patterns for wild mahi and could be relevant for climate change predictions. |
1702.01825 | Niru Maheswaranathan | Lane T. McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli,
Stephen A. Baccus | Deep Learning Models of the Retinal Response to Natural Scenes | L.T.M. and N.M. contributed equally to this work. Presented at NIPS
2016 | Advances in Neural Information Processing Systems 29 (2016)
1361-1369 | null | null | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central challenge in neuroscience is to understand neural computations and
circuit mechanisms that underlie the encoding of ethologically relevant,
natural stimuli. In multilayered neural circuits, nonlinear processes such as
synaptic transmission and spiking dynamics present a significant obstacle to
the creation of accurate computational models of responses to natural stimuli.
Here we demonstrate that deep convolutional neural networks (CNNs) capture
retinal responses to natural scenes nearly to within the variability of a
cell's response, and are markedly more accurate than linear-nonlinear (LN)
models and Generalized Linear Models (GLMs). Moreover, we find two additional
surprising properties of CNNs: they are less susceptible to overfitting than
their LN counterparts when trained on small amounts of data, and generalize
better when tested on stimuli drawn from a different distribution (e.g. between
natural scenes and white noise). Examination of trained CNNs reveals several
properties. First, a richer set of feature maps is necessary for predicting the
responses to natural scenes compared to white noise. Second, temporally precise
responses to slowly varying inputs originate from feedforward inhibition,
similar to known retinal mechanisms. Third, the injection of latent noise
sources in intermediate layers enables our model to capture the sub-Poisson
spiking variability observed in retinal ganglion cells. Fourth, augmenting our
CNNs with recurrent lateral connections enables them to capture contrast
adaptation as an emergent property of accurately describing retinal responses
to natural scenes. These methods can be readily generalized to other sensory
modalities and stimulus ensembles. Overall, this work demonstrates that CNNs
not only accurately capture sensory circuit responses to natural scenes, but
also yield information about the circuit's internal structure and function.
| [
{
"created": "Mon, 6 Feb 2017 23:48:19 GMT",
"version": "v1"
}
] | 2017-02-09 | [
[
"McIntosh",
"Lane T.",
""
],
[
"Maheswaranathan",
"Niru",
""
],
[
"Nayebi",
"Aran",
""
],
[
"Ganguli",
"Surya",
""
],
[
"Baccus",
"Stephen A.",
""
]
] | A central challenge in neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). Examination of trained CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also yield information about the circuit's internal structure and function. |
2205.13488 | Florian Hartig | Florian Hartig, Fr\'ed\'eric Barraquand | The evidence contained in the P-value is context dependent | This is a letter in response to Muff, S., Nilsen, E. B., O'Hara, R.
B., & Nater, C. R. (2022). Rewriting results sections in the language of
evidence. Trends in Ecology & Evolution | Trends in Ecology & Evolution, 2022 | 10.1016/j.tree.2022.02.011 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent opinion article, Muff et al. recapitulate well-known objections
to the Neyman-Pearson Null-Hypothesis Significance Testing (NHST) framework and
call for reforming our practices in statistical reporting. We agree with them
on several important points: the significance threshold P<0.05 is only a
convention, chosen as a compromise between type I and II error rates;
transforming the p-value into a dichotomous statement leads to a loss of
information; and p-values should be interpreted together with other statistical
indicators, in particular effect sizes and their uncertainty. In our view, a
lot of progress in reporting results can already be achieved by keeping these
three points in mind. We were surprised and worried, however, by Muff et al.'s
suggestion to interpret the p-value as a "gradual notion of evidence". Muff et
al. recommend, for example, that a P-value > 0.1 should be reported as "little
or no evidence" and a P-value of 0.001 as "strong evidence" in favor of the
alternative hypothesis H1.
| [
{
"created": "Thu, 26 May 2022 16:56:07 GMT",
"version": "v1"
}
] | 2022-05-30 | [
[
"Hartig",
"Florian",
""
],
[
"Barraquand",
"Frédéric",
""
]
] | In a recent opinion article, Muff et al. recapitulate well-known objections to the Neyman-Pearson Null-Hypothesis Significance Testing (NHST) framework and call for reforming our practices in statistical reporting. We agree with them on several important points: the significance threshold P<0.05 is only a convention, chosen as a compromise between type I and II error rates; transforming the p-value into a dichotomous statement leads to a loss of information; and p-values should be interpreted together with other statistical indicators, in particular effect sizes and their uncertainty. In our view, a lot of progress in reporting results can already be achieved by keeping these three points in mind. We were surprised and worried, however, by Muff et al.'s suggestion to interpret the p-value as a "gradual notion of evidence". Muff et al. recommend, for example, that a P-value > 0.1 should be reported as "little or no evidence" and a P-value of 0.001 as "strong evidence" in favor of the alternative hypothesis H1. |
2310.08735 | Trevor GrandPre | Jim Wu, David J. Schwab, Trevor GrandPre | Noise driven phase transitions in eco-evolutionary systems | null | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In complex ecosystems such as microbial communities, there is constant
ecological and evolutionary feedback between the residing species and the
environment occurring on concurrent timescales. Species respond and adapt to
their surroundings by modifying their phenotypic traits, which in turn alters
their environment and the resources available. To study this interplay between
ecological and evolutionary mechanisms, we develop a consumer-resource model
that incorporates phenotypic mutations. In the absence of noise, we find that
phase transitions require finely-tuned interaction kernels. Additionally, we
quantify the effects of noise on frequency dependent selection by defining a
time-integrated mutation current, which accounts for the rate at which
mutations and speciation occur. We find three distinct phases: homogeneous,
patterned, and patterned traveling waves. The last phase represents one way in
which co-evolution of species can happen in a fluctuating environment. Our
results highlight the principal roles that noise and non-reciprocal
interactions between resources and consumers play in phase transitions within
eco-evolutionary systems.
| [
{
"created": "Thu, 12 Oct 2023 21:47:24 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Oct 2023 10:27:01 GMT",
"version": "v2"
}
] | 2023-10-17 | [
[
"Wu",
"Jim",
""
],
[
"Schwab",
"David J.",
""
],
[
"GrandPre",
"Trevor",
""
]
] | In complex ecosystems such as microbial communities, there is constant ecological and evolutionary feedback between the residing species and the environment occurring on concurrent timescales. Species respond and adapt to their surroundings by modifying their phenotypic traits, which in turn alters their environment and the resources available. To study this interplay between ecological and evolutionary mechanisms, we develop a consumer-resource model that incorporates phenotypic mutations. In the absence of noise, we find that phase transitions require finely-tuned interaction kernels. Additionally, we quantify the effects of noise on frequency dependent selection by defining a time-integrated mutation current, which accounts for the rate at which mutations and speciation occur. We find three distinct phases: homogeneous, patterned, and patterned traveling waves. The last phase represents one way in which co-evolution of species can happen in a fluctuating environment. Our results highlight the principal roles that noise and non-reciprocal interactions between resources and consumers play in phase transitions within eco-evolutionary systems. |
2404.18960 | Cuong Nguyen | Steven Shave, Richard Kasprowicz, Abdullah M. Athar, Denise Vlachou,
Neil O. Carragher, Cuong Q. Nguyen | Leak Proof CMap; a framework for training and evaluation of cell line
agnostic L1000 similarity methods | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | The Connectivity Map (CMap) is a large publicly available database of
cellular transcriptomic responses to chemical and genetic perturbations built
using a standardized acquisition protocol known as the L1000 technique.
Databases such as CMap provide an exciting opportunity to enrich drug discovery
efforts, providing a 'known' phenotypic landscape to explore and enabling the
development of state of the art techniques for enhanced information extraction
and better informed decisions. Whilst multiple methods for measuring phenotypic
similarity and interrogating profiles have been developed, the field is
severely lacking standardized benchmarks using appropriate data splitting for
training and unbiased evaluation of machine learning methods. To address this,
we have developed 'Leak Proof CMap' and exemplified its application to a set of
common transcriptomic and generic phenotypic similarity methods along with an
exemplar triplet loss-based method. Benchmarking in three critical performance
areas (compactness, distinctness, and uniqueness) is conducted using carefully
crafted data splits ensuring no similar cell lines or treatments with shared or
closely matching responses or mechanisms of action are present in training,
validation, or test sets. This enables testing of models with unseen samples
akin to exploring treatments with novel modes of action in novel patient
derived cell lines. With a carefully crafted benchmark and data splitting
regime in place, the tooling now exists to create performant phenotypic
similarity methods for use in personalized medicine (novel cell lines) and to
better augment high throughput phenotypic screening technologies with the L1000
transcriptomic technology.
| [
{
"created": "Mon, 29 Apr 2024 04:11:39 GMT",
"version": "v1"
}
] | 2024-05-01 | [
[
"Shave",
"Steven",
""
],
[
"Kasprowicz",
"Richard",
""
],
[
"Athar",
"Abdullah M.",
""
],
[
"Vlachou",
"Denise",
""
],
[
"Carragher",
"Neil O.",
""
],
[
"Nguyen",
"Cuong Q.",
""
]
] | The Connectivity Map (CMap) is a large publicly available database of cellular transcriptomic responses to chemical and genetic perturbations built using a standardized acquisition protocol known as the L1000 technique. Databases such as CMap provide an exciting opportunity to enrich drug discovery efforts, providing a 'known' phenotypic landscape to explore and enabling the development of state of the art techniques for enhanced information extraction and better informed decisions. Whilst multiple methods for measuring phenotypic similarity and interrogating profiles have been developed, the field is severely lacking standardized benchmarks using appropriate data splitting for training and unbiased evaluation of machine learning methods. To address this, we have developed 'Leak Proof CMap' and exemplified its application to a set of common transcriptomic and generic phenotypic similarity methods along with an exemplar triplet loss-based method. Benchmarking in three critical performance areas (compactness, distinctness, and uniqueness) is conducted using carefully crafted data splits ensuring no similar cell lines or treatments with shared or closely matching responses or mechanisms of action are present in training, validation, or test sets. This enables testing of models with unseen samples akin to exploring treatments with novel modes of action in novel patient derived cell lines. With a carefully crafted benchmark and data splitting regime in place, the tooling now exists to create performant phenotypic similarity methods for use in personalized medicine (novel cell lines) and to better augment high throughput phenotypic screening technologies with the L1000 transcriptomic technology. |
1509.00490 | Burak Erman | Burak Erman | Three Universal Distribution Functions for Native Proteins with Harmonic
Interactions | 9 pages, 5 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We used statistical thermodynamics of conformational fluctuations and the
elements of algebraic graph theory together with data from 2000 protein crystal
structures, and showed that folded native proteins with harmonic interactions
exhibit distribution functions each of which appear to be universal across all
proteins. The three universal distributions are: (i) the eigenvalue spectrum of
the protein graph Laplacian, (ii) the B-factor distribution of residues, and
(iii) the vibrational entropy difference per residue between the unfolded and
the folded states. The three distributions, which look independent of each
other at first sight, are strongly associated with the Rouse chain model of a
polymer as the unfolded protein. We treat the folded protein as the strongly
perturbed state of the Rouse chain. We explain the underlying factors
controlling the three distributions and discuss the differences from those of
randomly folded structures.
| [
{
"created": "Tue, 1 Sep 2015 20:19:04 GMT",
"version": "v1"
}
] | 2015-09-03 | [
[
"Erman",
"Burak",
""
]
] | We used statistical thermodynamics of conformational fluctuations and the elements of algebraic graph theory together with data from 2000 protein crystal structures, and showed that folded native proteins with harmonic interactions exhibit distribution functions each of which appear to be universal across all proteins. The three universal distributions are: (i) the eigenvalue spectrum of the protein graph Laplacian, (ii) the B-factor distribution of residues, and (iii) the vibrational entropy difference per residue between the unfolded and the folded states. The three distributions, which look independent of each other at first sight, are strongly associated with the Rouse chain model of a polymer as the unfolded protein. We treat the folded protein as the strongly perturbed state of the Rouse chain. We explain the underlying factors controlling the three distributions and discuss the differences from those of randomly folded structures. |
1910.01786 | Zhen Zeng Dr. | Zhen Zeng, Yuefeng Lu, Judong Shen, Wei Zheng, Peter Shaw, Mary Beth
Dorr | A Random Interaction Forest for Prioritizing Predictive Biomarkers | 15 pages, 2 figures, 2 tables | null | null | null | q-bio.QM cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precision medicine has recently become a focus of medical research, as its
implementation brings value to all stakeholders in the healthcare system.
Various statistical methodologies have been developed tackling problems in
different aspects of this field, e.g., assessing treatment heterogeneity,
identifying patient subgroups, or building treatment decision models. However,
there is a lack of new tools devoted to selecting and prioritizing predictive
biomarkers. We propose a novel tree-based ensemble method, random interaction
forest (RIF), to generate predictive importance scores and prioritize candidate
biomarkers for constructing refined treatment decision models. RIF was
evaluated by comparing with the conventional random forest and univariable
regression methods and showed favorable properties under various simulation
scenarios. We applied the proposed RIF method to a biomarker dataset from two
phase III clinical trials of bezlotoxumab on $\textit{Clostridium difficile}$
infection recurrence and obtained biologically meaningful results.
| [
{
"created": "Fri, 4 Oct 2019 02:28:41 GMT",
"version": "v1"
}
] | 2019-10-07 | [
[
"Zeng",
"Zhen",
""
],
[
"Lu",
"Yuefeng",
""
],
[
"Shen",
"Judong",
""
],
[
"Zheng",
"Wei",
""
],
[
"Shaw",
"Peter",
""
],
[
"Dorr",
"Mary Beth",
""
]
] | Precision medicine has recently become a focus of medical research, as its implementation brings value to all stakeholders in the healthcare system. Various statistical methodologies have been developed tackling problems in different aspects of this field, e.g., assessing treatment heterogeneity, identifying patient subgroups, or building treatment decision models. However, there is a lack of new tools devoted to selecting and prioritizing predictive biomarkers. We propose a novel tree-based ensemble method, random interaction forest (RIF), to generate predictive importance scores and prioritize candidate biomarkers for constructing refined treatment decision models. RIF was evaluated by comparing with the conventional random forest and univariable regression methods and showed favorable properties under various simulation scenarios. We applied the proposed RIF method to a biomarker dataset from two phase III clinical trials of bezlotoxumab on $\textit{Clostridium difficile}$ infection recurrence and obtained biologically meaningful results. |
1302.3997 | Yuri Shestopaloff | Yuri K. Shestopaloff | Predicting growth and finding biomass production using the general
growth mechanism | 15 pages, 3 tables, 4 figures | Yu. K. Shestopaloff, Biophysical Reviews and Letters, 2012, Vol.
7, No. 3-4, p. 177-195 | 10.1142/S1793048012500075 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First, we briefly describe the general growth mechanism, which governs the
growth of living organisms, and its mathematical representation, the growth
equation. Using the growth equation, we compute the growth curve for S.
cerevisiae and show that it corresponds to available experimental data. Then,
we propose a new method for finding the amount of synthesized biomass without
complicated stoichiometric computations and apply this method to the evaluation
of biomass production by S. cerevisiae. We found that the obtained results are
very close to values obtained by methods of metabolic flux analysis. Since methods of
metabolic flux analysis require finding produced biomass, which is one of the
most important parameters affecting stoichiometric models, a priori knowledge
of produced biomass can significantly improve methods of metabolic flux
analysis in many aspects, which we also discuss. In addition, based on the
general growth mechanism, we considered the evolutionary development of S.
cerevisiae and found that it is a more ancient organism than S. pombe and is
apparently its direct predecessor.
| [
{
"created": "Sat, 16 Feb 2013 20:36:26 GMT",
"version": "v1"
}
] | 2013-02-19 | [
[
"Shestopaloff",
"Yuri K.",
""
]
] | First, we briefly describe the general growth mechanism, which governs the growth of living organisms, and its mathematical representation, the growth equation. Using the growth equation, we compute the growth curve for S. cerevisiae and show that it corresponds to available experimental data. Then, we propose a new method for finding the amount of synthesized biomass without complicated stoichiometric computations and apply this method to the evaluation of biomass production by S. cerevisiae. We found that the obtained results are very close to values obtained by methods of metabolic flux analysis. Since methods of metabolic flux analysis require finding produced biomass, which is one of the most important parameters affecting stoichiometric models, a priori knowledge of produced biomass can significantly improve methods of metabolic flux analysis in many aspects, which we also discuss. In addition, based on the general growth mechanism, we considered the evolutionary development of S. cerevisiae and found that it is a more ancient organism than S. pombe and is apparently its direct predecessor. |
2009.07395 | Gabriel Maher | Gabriel Maher, Casey Fleeter, Daniele Schiavazzi, Alison Marsden | Geometric Uncertainty in Patient-Specific Cardiovascular Modeling with
Convolutional Dropout Networks | null | null | 10.1016/j.cma.2021.114038 | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach to generate samples from the conditional
distribution of patient-specific cardiovascular models given a clinically
acquired image volume. A convolutional neural network architecture with dropout
layers is first trained for vessel lumen segmentation using a regression
approach, to enable Bayesian estimation of vessel lumen surfaces. This network
is then integrated into a path-planning patient-specific modeling pipeline to
generate families of cardiovascular models. We demonstrate our approach by
quantifying the effect of geometric uncertainty on the hemodynamics for three
patient-specific anatomies, an aorto-iliac bifurcation, an abdominal aortic
aneurysm and a sub-model of the left coronary arteries. A key innovation
introduced in the proposed approach is the ability to learn geometric
uncertainty directly from training data. The results show how geometric
uncertainty produces coefficients of variation comparable to or larger than
other sources of uncertainty for wall shear stress and velocity magnitude, but
has limited impact on pressure. Specifically, this is true for anatomies
characterized by small vessel sizes, and for local vessel lesions seen
infrequently during network training.
| [
{
"created": "Wed, 16 Sep 2020 00:13:12 GMT",
"version": "v1"
}
] | 2021-09-01 | [
[
"Maher",
"Gabriel",
""
],
[
"Fleeter",
"Casey",
""
],
[
"Schiavazzi",
"Daniele",
""
],
[
"Marsden",
"Alison",
""
]
] | We propose a novel approach to generate samples from the conditional distribution of patient-specific cardiovascular models given a clinically acquired image volume. A convolutional neural network architecture with dropout layers is first trained for vessel lumen segmentation using a regression approach, to enable Bayesian estimation of vessel lumen surfaces. This network is then integrated into a path-planning patient-specific modeling pipeline to generate families of cardiovascular models. We demonstrate our approach by quantifying the effect of geometric uncertainty on the hemodynamics for three patient-specific anatomies, an aorto-iliac bifurcation, an abdominal aortic aneurysm and a sub-model of the left coronary arteries. A key innovation introduced in the proposed approach is the ability to learn geometric uncertainty directly from training data. The results show how geometric uncertainty produces coefficients of variation comparable to or larger than other sources of uncertainty for wall shear stress and velocity magnitude, but has limited impact on pressure. Specifically, this is true for anatomies characterized by small vessel sizes, and for local vessel lesions seen infrequently during network training. |
1008.0576 | Wiet de Ronde | Wiet de Ronde and Filipe Tostevin and Pieter Rein ten Wolde | Multiplexing Biochemical Signals | 4 pages, 4 figures | Phys. Rev. Lett. 107, 048101 (2011) | 10.1103/PhysRevLett.107.048101 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we show that living cells can multiplex biochemical signals,
i.e. transmit multiple signals through the same signaling pathway
simultaneously, and yet respond to them very specifically. We demonstrate how
two binary input signals can be encoded in the concentration of a common
signaling protein, which is then decoded such that each of the two output
signals provides reliable information about one corresponding input. Under
biologically relevant conditions the network can reach the maximum amount of
information that can be transmitted, which is 2 bits.
| [
{
"created": "Tue, 3 Aug 2010 15:31:06 GMT",
"version": "v1"
}
] | 2012-06-01 | [
[
"de Ronde",
"Wiet",
""
],
[
"Tostevin",
"Filipe",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] | In this paper we show that living cells can multiplex biochemical signals, i.e. transmit multiple signals through the same signaling pathway simultaneously, and yet respond to them very specifically. We demonstrate how two binary input signals can be encoded in the concentration of a common signaling protein, which is then decoded such that each of the two output signals provides reliable information about one corresponding input. Under biologically relevant conditions the network can reach the maximum amount of information that can be transmitted, which is 2 bits. |
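The 2-bit capacity quoted in the abstract above can be checked with a toy, noiseless version of multiplexing (a hedged illustration of the idea, not the authors' biochemical network): two independent binary inputs are encoded as four distinct concentration levels of one shared signaling protein, and each downstream readout decodes one input, so each output carries exactly 1 bit and the pathway transmits 2 bits in total.

```python
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# The four equally likely combinations of the two binary inputs (s1, s2).
samples = list(itertools.product([0, 1], repeat=2))

# Encode both inputs in one shared "concentration": four distinct levels.
conc = [2 * s1 + s2 for s1, s2 in samples]

# Each downstream readout decodes one input from the shared level.
out1 = [int(c >= 2) for c in conc]  # recovers s1 (high vs. low pair)
out2 = [c % 2 for c in conc]        # recovers s2 (parity of the level)

i1 = mutual_information(list(zip([s[0] for s in samples], out1)))
i2 = mutual_information(list(zip([s[1] for s in samples], out2)))
# i1 = i2 = 1 bit, so the multiplexed channel carries 2 bits in total.
```

In the noiseless limit each readout attains 1 bit, matching the paper's stated maximum of 2 bits; the paper's contribution is showing this remains reachable under biologically relevant noise.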
1603.07369 | Justin Kinney | Muir J. Morrison and Justin B. Kinney | Modeling multi-particle complexes in stochastic chemical systems | 5 pages, 2 figures | null | null | null | q-bio.QM cond-mat.stat-mech physics.bio-ph physics.chem-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large complexes of classical particles play central roles in biology, in
polymer physics, and in other disciplines. However, physics currently lacks
mathematical methods for describing such complexes in terms of component
particles, interaction energies, and assembly rules. Here we describe a Fock
space structure that addresses this need, as well as diagrammatic methods that
facilitate the use of this formalism. These methods can dramatically simplify
the equations governing both equilibrium and non-equilibrium stochastic
chemical systems. A mathematical relationship between the set of all complexes
and a list of rules for complex assembly is also identified.
| [
{
"created": "Wed, 23 Mar 2016 21:25:32 GMT",
"version": "v1"
}
] | 2016-03-25 | [
[
"Morrison",
"Muir J.",
""
],
[
"Kinney",
"Justin B.",
""
]
] | Large complexes of classical particles play central roles in biology, in polymer physics, and in other disciplines. However, physics currently lacks mathematical methods for describing such complexes in terms of component particles, interaction energies, and assembly rules. Here we describe a Fock space structure that addresses this need, as well as diagrammatic methods that facilitate the use of this formalism. These methods can dramatically simplify the equations governing both equilibrium and non-equilibrium stochastic chemical systems. A mathematical relationship between the set of all complexes and a list of rules for complex assembly is also identified. |
1101.3887 | Giovanni Feverati | Fabio Musso, Giovanni Feverati | Mutation-selection dynamics and error threshold in an evolutionary model
for Turing Machines | 26 pages | BioSystems 107 (2012) 18-33 | 10.1016/j.biosystems.2011.08.003 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the mutation-selection dynamics for an evolutionary
computation model based on Turing Machines that we introduced in a previous
article.
The use of Turing Machines allows for very simple mechanisms of code growth
and code activation/inactivation through point mutations. To any value of the
point mutation probability corresponds a maximum amount of active code that can
be maintained by selection and the Turing machines that reach it are said to be
at the error threshold. Simulations with our model show that the Turing
machine population evolves towards the error threshold.
Mathematical descriptions of the model point out that this behaviour is due
more to the mutation-selection dynamics than to the intrinsic nature of the
Turing machines. This indicates that this result is much more general than the
model considered here and could play a role also in biological evolution.
| [
{
"created": "Thu, 20 Jan 2011 12:57:00 GMT",
"version": "v1"
}
] | 2015-03-17 | [
[
"Musso",
"Fabio",
""
],
[
"Feverati",
"Giovanni",
""
]
] | We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing Machines that we introduced in a previous article. The use of Turing Machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves towards the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that this result is much more general than the model considered here and could play a role also in biological evolution. |
1510.05296 | Serguei Saavedra | Serguei Saavedra, Rudolf P. Rohr, Miguel A. Fortuna, Nuria Selva,
Jordi Bascompte | Seasonal species interactions minimize the impact of species turnover on
the likelihood of community persistence | This manuscript is IN PRESS in Ecology. The previous version had a
mistake in one reference; this has been corrected | null | null | null | q-bio.PE physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many of the observed species interactions embedded in ecological communities
are not permanent, but are characterized by temporal changes that are observed
along with abiotic and biotic variations. While work has been done describing
and quantifying these changes, little is known about their consequences for
species coexistence. Here, we investigate the extent to which changes in
species composition impact the likelihood of persistence of the predator-prey
community in the highly seasonal Bialowieza Primeval Forest (NE Poland), and
the extent to which seasonal changes of species interactions (predator diet)
modulate the expected impact. This likelihood is estimated extending recent
developments on the study of structural stability in ecological communities. We
find that the observed species turnover causes the likelihood of community
persistence to vary strongly between summer and winter. Importantly, we demonstrate
that the observed seasonal interaction changes minimize the variation in the
likelihood of persistence associated with species turnover across the year. We
find that these community dynamics can be explained as the coupling of
individual species to their environment by minimizing both the variation in
persistence conditions and the interaction changes between seasons. Our results
provide a homeostatic explanation for seasonal species interactions, and
suggest that monitoring the association of interaction changes with the level
of variation in community dynamics can provide a good indicator of the response
of species to environmental pressures.
| [
{
"created": "Sun, 18 Oct 2015 19:38:07 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jan 2016 17:26:24 GMT",
"version": "v2"
}
] | 2016-01-21 | [
[
"Saavedra",
"Serguei",
""
],
[
"Rohr",
"Rudolf P.",
""
],
[
"Fortuna",
"Miguel A.",
""
],
[
"Selva",
"Nuria",
""
],
[
"Bascompte",
"Jordi",
""
]
] | Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes in species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Bialowieza Primeval Forest (NE Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover causes the likelihood of community persistence to vary strongly between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions, and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures. |
1809.05619 | Phil Nelson | Kevin Y. Chen, Daniel M. Zuckerman and Philip C. Nelson | Stochastic Simulation to Visualize Gene Expression and Error Correction
in Living Cells | null | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic simulation can make the molecular processes of cellular control
more vivid than the traditional differential-equation approach by generating
typical system histories instead of just statistical measures such as the mean
and variance of a population. Simple simulations are now easy for students to
construct from scratch, that is, without recourse to black-box packages. In
some cases, their results can also be compared directly to single-molecule
experimental data. After introducing the stochastic simulation algorithm, this
article gives two case studies, involving gene expression and error correction,
respectively. Code samples and the resulting animations are given in the online
supplements.
| [
{
"created": "Sat, 15 Sep 2018 00:24:28 GMT",
"version": "v1"
}
] | 2018-09-18 | [
[
"Chen",
"Kevin Y.",
""
],
[
"Zuckerman",
"Daniel M.",
""
],
[
"Nelson",
"Philip C.",
""
]
] | Stochastic simulation can make the molecular processes of cellular control more vivid than the traditional differential-equation approach by generating typical system histories instead of just statistical measures such as the mean and variance of a population. Simple simulations are now easy for students to construct from scratch, that is, without recourse to black-box packages. In some cases, their results can also be compared directly to single-molecule experimental data. After introducing the stochastic simulation algorithm, this article gives two case studies, involving gene expression and error correction, respectively. Code samples and the resulting animations are given in the online supplements. |
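The stochastic simulation algorithm the article introduces (Gillespie's direct method) can be sketched for the simplest gene-expression model: constitutive mRNA production at rate k and first-order degradation at rate gamma per molecule. This is a generic textbook sketch with illustrative parameter values, not the article's own code.

```python
import random

def gillespie_birth_death(k=10.0, gamma=1.0, t_max=50.0, seed=0):
    """Gillespie direct-method simulation of constitutive mRNA production
    (propensity k) and first-order degradation (propensity gamma * m).
    Returns the event times and the mRNA copy number after each event."""
    rng = random.Random(seed)
    t, m = 0.0, 0
    times, counts = [t], [m]
    while t < t_max:
        a1, a2 = k, gamma * m          # propensities: production, degradation
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponential waiting time to next event
        if rng.random() < a1 / a0:     # pick the event proportionally to its propensity
            m += 1                     # production
        else:
            m -= 1                     # degradation
        times.append(t)
        counts.append(m)
    return times, counts
```

With these illustrative rates the stationary mean copy number is k/gamma = 10, which a long run fluctuates around; plotting `counts` against `times` gives exactly the kind of "typical system history" the abstract contrasts with mean-and-variance summaries.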
2011.09512 | Rodrigo Cofre | Rodrigo Cofr\'e and Cesar Maldonado and Bruno Cessac | Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics | null | null | 10.3390/e22111330 | null | q-bio.NC math-ph math.MP nlin.CD | http://creativecommons.org/licenses/by/4.0/ | The Thermodynamic Formalism provides a rigorous mathematical framework to
study quantitative and qualitative aspects of dynamical systems. At its core
there is a variational principle corresponding, in its simplest form, to the
Maximum Entropy principle. It is used as a statistical inference procedure to
represent, by specific probability measures (Gibbs measures), the collective
behaviour of complex systems. This framework has found applications in
different domains of science. In particular, it has been fruitful and
influential in neurosciences. In this article, we review how the Thermodynamic
Formalism can be exploited in the field of theoretical neuroscience, as a
conceptual and operational tool, to link the dynamics of interacting neurons
and the statistics of action potentials from either experimental data or
mathematical models. We comment on perspectives and open problems in
theoretical neuroscience that could be addressed within this formalism.
| [
{
"created": "Wed, 18 Nov 2020 19:36:58 GMT",
"version": "v1"
}
] | 2020-12-30 | [
[
"Cofré",
"Rodrigo",
""
],
[
"Maldonado",
"Cesar",
""
],
[
"Cessac",
"Bruno",
""
]
] | The Thermodynamic Formalism provides a rigorous mathematical framework to study quantitative and qualitative aspects of dynamical systems. At its core there is a variational principle corresponding, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism. |
1410.2590 | Oleg Kanakov | Oleg Kanakov, Roman Kotelnikov, Ahmed Alsaedi, Lev Tsimring, Ramon
Huerta, Alexey Zaikin, Mikhail Ivanchenko | Multi-input distributed classifiers for synthetic genetic circuits | null | null | 10.1371/journal.pone.0125144 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For practical construction of complex synthetic genetic networks able to
perform elaborate functions it is important to have a pool of relatively simple
"bio-bricks" with different functionality which can be compounded together. To
complement engineering of very different existing synthetic genetic devices
such as switches, oscillators or logical gates, we propose and develop here a
design of a synthetic multiple-input distributed classifier with learning
ability. The proposed classifier is able to separate multi-input data that
are inseparable for single-input classifiers. Additionally, the data classes
can potentially occupy a region of any shape in the space of inputs. We study
two approaches to classification, including hard and soft classification and
confirm the schemes of genetic networks by analytical and numerical results.
| [
{
"created": "Wed, 8 Oct 2014 18:44:30 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Kanakov",
"Oleg",
""
],
[
"Kotelnikov",
"Roman",
""
],
[
"Alsaedi",
"Ahmed",
""
],
[
"Tsimring",
"Lev",
""
],
[
"Huerta",
"Ramon",
""
],
[
"Zaikin",
"Alexey",
""
],
[
"Ivanchenko",
"Mikhail",
""
]
] | For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple "bio-bricks" with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of a synthetic multiple-input distributed classifier with learning ability. The proposed classifier is able to separate multi-input data that are inseparable for single-input classifiers. Additionally, the data classes can potentially occupy a region of any shape in the space of inputs. We study two approaches to classification, including hard and soft classification and confirm the schemes of genetic networks by analytical and numerical results. |
1705.05441 | Jose Munoz | Payman Mosaffa, Antonio Rodr\'iguez-Ferran, Jos\'e J. Mu\~noz | Hybrid cell-centred/vertex model for multicellular systems with
equilibrium-preserving remodelling | 33 pages, 10 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a hybrid vertex/cell-centred model for mechanically simulating
planar cellular monolayers undergoing cell reorganisation. Cell centres are
represented by a triangular nodal network, while the cell boundaries are formed
by an associated vertex network. The two networks are coupled through a
kinematic constraint which we allow to relax progressively. Special attention
is paid to the change of cell-cell connectivity due to cell reorganisation or
remodelling events. We handle these situations by using a variable resting
length and applying an Equilibrium-Preserving Mapping (EPM) on the new
connectivity, which computes a new set of resting lengths that preserve nodal
and vertex equilibrium. We illustrate the properties of the model by simulating
monolayers subjected to imposed extension and during a wound healing process.
The evolution of forces and the EPM are analysed during the remodelling events.
As a by-product, the proposed technique makes it possible to recover fully
vertex or fully cell-centred models in a seamless manner by modifying a
numerical parameter of the model.
| [
{
"created": "Wed, 26 Apr 2017 22:04:55 GMT",
"version": "v1"
}
] | 2017-05-17 | [
[
"Mosaffa",
"Payman",
""
],
[
"Rodríguez-Ferran",
"Antonio",
""
],
[
"Muñoz",
"José J.",
""
]
] | We present a hybrid vertex/cell-centred model for mechanically simulating planar cellular monolayers undergoing cell reorganisation. Cell centres are represented by a triangular nodal network, while the cell boundaries are formed by an associated vertex network. The two networks are coupled through a kinematic constraint which we allow to relax progressively. Special attention is paid to the change of cell-cell connectivity due to cell reorganisation or remodelling events. We handle these situations by using a variable resting length and applying an Equilibrium-Preserving Mapping (EPM) on the new connectivity, which computes a new set of resting lengths that preserve nodal and vertex equilibrium. We illustrate the properties of the model by simulating monolayers subjected to imposed extension and during a wound healing process. The evolution of forces and the EPM are analysed during the remodelling events. As a by-product, the proposed technique makes it possible to recover fully vertex or fully cell-centred models in a seamless manner by modifying a numerical parameter of the model. |
2205.03466 | John Rhodes | Elizabeth S. Allman, Hector Ba\~nos, Jonathan D. Mitchell, John A.
Rhodes | The Tree of Blobs of a Species Network: Identifiability under the
Coalescent | 18 pages, 8 figures | null | null | null | q-bio.PE math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Inference of species networks from genomic data under the Network
Multispecies Coalescent Model is currently severely limited by heavy
computational demands. It also remains unclear how complicated networks can be
for consistent inference to be possible. As a step toward inferring a general
species network, this work considers its tree of blobs, in which non-cut edges
are contracted to nodes, so only tree-like relationships between the taxa are
shown. An identifiability theorem, that most features of the unrooted tree of
blobs can be determined from the distribution of gene quartet topologies, is
established. This depends upon an analysis of gene quartet concordance factors
under the model, together with a new combinatorial inference rule. The
arguments for this theoretical result suggest a practical algorithm for tree of
blobs inference, to be fully developed in a subsequent work.
| [
{
"created": "Fri, 6 May 2022 20:08:13 GMT",
"version": "v1"
}
] | 2022-05-10 | [
[
"Allman",
"Elizabeth S.",
""
],
[
"Baños",
"Hector",
""
],
[
"Mitchell",
"Jonathan D.",
""
],
[
"Rhodes",
"John A.",
""
]
] | Inference of species networks from genomic data under the Network Multispecies Coalescent Model is currently severely limited by heavy computational demands. It also remains unclear how complicated networks can be for consistent inference to be possible. As a step toward inferring a general species network, this work considers its tree of blobs, in which non-cut edges are contracted to nodes, so only tree-like relationships between the taxa are shown. An identifiability theorem, that most features of the unrooted tree of blobs can be determined from the distribution of gene quartet topologies, is established. This depends upon an analysis of gene quartet concordance factors under the model, together with a new combinatorial inference rule. The arguments for this theoretical result suggest a practical algorithm for tree of blobs inference, to be fully developed in a subsequent work. |
2101.12275 | Sergei Maslov | Alexei V. Tkachenko, Sergei Maslov, Tong Wang, Ahmed Elbanna, George
N. Wong, and Nigel Goldenfeld | Stochastic social behavior coupled to COVID-19 dynamics leads to waves,
plateaus and an endemic state | null | null | null | null | q-bio.PE physics.med-ph physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | It is well recognized that population heterogeneity plays an important role
in the spread of epidemics. While individual variations in social activity are
often assumed to be persistent, i.e. constant in time, here we discuss the
consequences of dynamic heterogeneity. By integrating the stochastic dynamics
of social activity into traditional epidemiological models we demonstrate the
emergence of a new long timescale governing the epidemic in broad agreement
with empirical data. Our model captures multiple features of real-life
epidemics such as COVID-19, including prolonged plateaus and multiple waves,
which are transiently suppressed due to the dynamic nature of social activity.
The existence of the long timescale due to the interplay between epidemic and
social dynamics provides a unifying picture of how a fast-paced epidemic
typically will transition to the endemic state.
| [
{
"created": "Thu, 28 Jan 2021 21:00:28 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Feb 2021 15:12:14 GMT",
"version": "v2"
}
] | 2021-02-22 | [
[
"Tkachenko",
"Alexei V.",
""
],
[
"Maslov",
"Sergei",
""
],
[
"Wang",
"Tong",
""
],
[
"Elbanna",
"Ahmed",
""
],
[
"Wong",
"George N.",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | It is well recognized that population heterogeneity plays an important role in the spread of epidemics. While individual variations in social activity are often assumed to be persistent, i.e. constant in time, here we discuss the consequences of dynamic heterogeneity. By integrating the stochastic dynamics of social activity into traditional epidemiological models we demonstrate the emergence of a new long timescale governing the epidemic in broad agreement with empirical data. Our model captures multiple features of real-life epidemics such as COVID-19, including prolonged plateaus and multiple waves, which are transiently suppressed due to the dynamic nature of social activity. The existence of the long timescale due to the interplay between epidemic and social dynamics provides a unifying picture of how a fast-paced epidemic typically will transition to the endemic state. |
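The idea of coupling an epidemic to fluctuating social activity can be illustrated with a minimal sketch: a deterministic SIR model whose transmission rate carries a mean-reverting (Ornstein-Uhlenbeck) fluctuation standing in for stochastic social activity. All parameter values and the specific coupling below are illustrative assumptions, not the paper's equations.

```python
import math
import random

def sir_with_stochastic_activity(beta0=0.3, gamma=0.1, tau=30.0, sigma=0.1,
                                 i0=1e-4, days=300, dt=0.1, seed=1):
    """Euler-discretised SIR model whose transmission rate beta0 * exp(a)
    fluctuates via a mean-reverting (OU) activity deviation a, a crude
    stand-in for stochastic social activity. Returns (S, I, R) per step."""
    rng = random.Random(seed)
    s, i, r, a = 1.0 - i0, i0, 0.0, 0.0   # a = log-deviation of activity
    traj = []
    for _ in range(int(days / dt)):
        beta = beta0 * math.exp(a)
        new_inf = beta * s * i * dt        # new infections this step
        new_rec = gamma * i * dt           # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        # OU update: relax toward 0 on timescale tau, with noise sigma.
        a += -a / tau * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        traj.append((s, i, r))
    return traj
```

In this toy version the activity fluctuations modulate the effective reproduction number over the OU timescale tau, which is the kind of mechanism the paper argues produces plateaus and delayed waves; the paper's actual model resolves activity at the individual level.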
0907.3942 | Megan Owen | Megan Owen and J. Scott Provan | A Fast Algorithm for Computing Geodesic Distances in Tree Space | 20 pages, 5 figures. Added new section on including common edges and
leaf edge-lengths in the algorithm, clarified starting point for algorithm,
added references, other minor improvements. To appear in IEEE/ACM
Transactions on Computational Biology and Bioinformatics | null | null | null | q-bio.PE cs.CG cs.DM math.MG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comparing and computing distances between phylogenetic trees are important
biological problems, especially for models where edge lengths play an important
role. The geodesic distance measure between two phylogenetic trees with edge
lengths is the length of the shortest path between them in the continuous tree
space introduced by Billera, Holmes, and Vogtmann. This tree space provides a
powerful tool for studying and comparing phylogenetic trees, both in exhibiting
a natural distance measure and in providing a Euclidean-like structure for
solving optimization problems on trees. An important open problem is to find a
polynomial time algorithm for finding geodesics in tree space. This paper gives
such an algorithm, which starts with a simple initial path and moves through a
series of successively shorter paths until the geodesic is attained.
| [
{
"created": "Thu, 23 Jul 2009 15:39:16 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Nov 2009 01:14:54 GMT",
"version": "v2"
}
] | 2009-11-05 | [
[
"Owen",
"Megan",
""
],
[
"Provan",
"J. Scott",
""
]
] | Comparing and computing distances between phylogenetic trees are important biological problems, especially for models where edge lengths play an important role. The geodesic distance measure between two phylogenetic trees with edge lengths is the length of the shortest path between them in the continuous tree space introduced by Billera, Holmes, and Vogtmann. This tree space provides a powerful tool for studying and comparing phylogenetic trees, both in exhibiting a natural distance measure and in providing a Euclidean-like structure for solving optimization problems on trees. An important open problem is to find a polynomial time algorithm for finding geodesics in tree space. This paper gives such an algorithm, which starts with a simple initial path and moves through a series of successively shorter paths until the geodesic is attained. |
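The "simple initial path" mentioned in the abstract above can be made concrete with the classical cone path, an upper bound on the BHV geodesic distance: contract the edges unique to one tree down to the shared-edge subspace, then expand the other tree's unique edges. The dict-of-splits representation and function name below are my own illustrative choices, not the Owen-Provan implementation.

```python
import math

def cone_path_distance(tree_a, tree_b):
    """Length of the cone path between two phylogenetic trees in BHV tree
    space, a simple upper bound on the geodesic distance. Each tree is a
    dict mapping a split identifier to its (positive) edge length; splits
    present in both trees contribute a straight Euclidean leg, while the
    remaining edges are contracted/expanded through the cone point."""
    shared = set(tree_a) & set(tree_b)
    # Shared edges: their lengths change along a Euclidean segment.
    shared_sq = sum((tree_a[s] - tree_b[s]) ** 2 for s in shared)
    # Edges unique to each tree: shrink tree A's to zero, then grow tree B's.
    norm_a = math.sqrt(sum(v ** 2 for s, v in tree_a.items() if s not in shared))
    norm_b = math.sqrt(sum(v ** 2 for s, v in tree_b.items() if s not in shared))
    return math.sqrt(shared_sq + (norm_a + norm_b) ** 2)
```

For trees with no splits in common this reduces to ||A|| + ||B||; the Owen-Provan algorithm then shortens such an initial path iteratively until the true geodesic is reached.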
1109.4232 | Antoine Delmotte Mr | Antoine Delmotte, Edward W Tate, Sophia N Yaliraki, Mauricio Barahona | Protein multi-scale organization through graph partitioning and
robustness analysis: Application to the myosin-myosin light chain interaction | 13 pages, 7 Postscript figures | Physical biology, 8(5) (2011) 055010 | 10.1088/1478-3975/8/5/055010 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recognized importance of the multi-scale spatio-temporal
organization of proteins, most computational tools can only access a limited
spectrum of time and spatial scales, thereby ignoring the effects on protein
behavior of the intricate coupling between the different scales. Starting from
a physico-chemical atomistic network of interactions that encodes the structure
of the protein, we introduce a methodology based on multi-scale graph
partitioning that can uncover partitions and levels of organization of proteins
that span the whole range of scales, revealing biological features occurring at
different levels of organization and tracking their effect across scales.
Additionally, we introduce a measure of robustness to quantify the relevance of
the partitions through the generation of biochemically-motivated surrogate
random graph models. We apply the method to four distinct conformations of
myosin tail interacting protein, a protein from the molecular motor of the
malaria parasite, and study properties that have been experimentally addressed
such as the closing mechanism, the presence of conserved clusters, and the
identification through computational mutational analysis of key residues for
binding.
| [
{
"created": "Tue, 20 Sep 2011 08:28:45 GMT",
"version": "v1"
}
] | 2011-09-21 | [
[
"Delmotte",
"Antoine",
""
],
[
"Tate",
"Edward W",
""
],
[
"Yaliraki",
"Sophia N",
""
],
[
"Barahona",
"Mauricio",
""
]
] | Despite the recognized importance of the multi-scale spatio-temporal organization of proteins, most computational tools can only access a limited spectrum of time and spatial scales, thereby ignoring the effects on protein behavior of the intricate coupling between the different scales. Starting from a physico-chemical atomistic network of interactions that encodes the structure of the protein, we introduce a methodology based on multi-scale graph partitioning that can uncover partitions and levels of organization of proteins that span the whole range of scales, revealing biological features occurring at different levels of organization and tracking their effect across scales. Additionally, we introduce a measure of robustness to quantify the relevance of the partitions through the generation of biochemically-motivated surrogate random graph models. We apply the method to four distinct conformations of myosin tail interacting protein, a protein from the molecular motor of the malaria parasite, and study properties that have been experimentally addressed such as the closing mechanism, the presence of conserved clusters, and the identification through computational mutational analysis of key residues for binding. |
1510.00537 | Tom Shearer | Tom Shearer | A new strain energy function for the hyperelastic modelling of ligaments
and tendons based on fascicle microstructure | null | Journal of Biomechanics 48 (2015) 290-297 | 10.1016/j.jbiomech.2014.11.031 | null | q-bio.TO cond-mat.soft | http://creativecommons.org/licenses/by/4.0/ | A new strain energy function for the hyperelastic modelling of ligaments and
tendons based on the geometrical arrangement of their fibrils is derived. The
distribution of the crimp angles of the fibrils is used to determine the
stress-strain response of a single fascicle, and this stress-strain response is
used to determine the form of the strain energy function, the parameters of
which can all potentially be directly measured via experiments - unlike those
of commonly used strain energy functions such as the Holzapfel-Gasser-Ogden
(HGO) model, whose parameters are phenomenological. We compare the new model
with the HGO model and show that the new model gives a better match to existing
stress-strain data for human patellar tendon than the HGO model, with the
average relative error in matching this data when using the new model being
0.053 (compared with 0.57 when using the HGO model), and the average absolute
error when using the new model being 0.12 MPa (compared with 0.31 MPa when
using the HGO model).
| [
{
"created": "Fri, 2 Oct 2015 09:20:52 GMT",
"version": "v1"
}
] | 2015-10-05 | [
[
"Shearer",
"Tom",
""
]
] | A new strain energy function for the hyperelastic modelling of ligaments and tendons based on the geometrical arrangement of their fibrils is derived. The distribution of the crimp angles of the fibrils is used to determine the stress-strain response of a single fascicle, and this stress-strain response is used to determine the form of the strain energy function, the parameters of which can all potentially be directly measured via experiments - unlike those of commonly used strain energy functions such as the Holzapfel-Gasser-Ogden (HGO) model, whose parameters are phenomenological. We compare the new model with the HGO model and show that the new model gives a better match to existing stress-strain data for human patellar tendon than the HGO model, with the average relative error in matching this data when using the new model being 0.053 (compared with 0.57 when using the HGO model), and the average absolute error when using the new model being 0.12 MPa (compared with 0.31 MPa when using the HGO model). |
1810.01484 | Govind Kaigala | Nadya Ostromohov, Deborah Huber, Moran Bercovici and Govind V. Kaigala | Real-Time Monitoring of Fluorescence in situ Hybridization (FISH)
Kinetics | 15 pages, 4 figures | published 2018 | 10.1021/acs.analchem.8b02630 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method for real-time monitoring and kinetic analysis of
fluorescence in situ hybridization (FISH). We implement the method using a
vertical microfluidic probe containing a microstructure designed for rapid
switching between a probe solution and a non-fluorescent imaging buffer. The
FISH signal is monitored in real time during the imaging buffer wash, during
which signal associated with unbound probes is removed. We provide a
theoretical description of the method as well as a demonstration of its
applicability using a model system of centromeric probes (Cen17). We
demonstrate the applicability of the method for the characterization of FISH
kinetics under conditions of varying probe concentration, destabilizing agent
(formamide) content, volume exclusion agent (dextran sulfate) content, and
ionic strength. We show that our method can be used to investigate the effect
of each of these variables and provide insight into processes affecting in situ
hybridization, facilitating the design of new assays.
| [
{
"created": "Tue, 2 Oct 2018 20:05:08 GMT",
"version": "v1"
}
] | 2018-10-04 | [
[
"Ostromohov",
"Nadya",
""
],
[
"Huber",
"Deborah",
""
],
[
"Bercovici",
"Moran",
""
],
[
"Kaigala",
"Govind V.",
""
]
] | We present a novel method for real-time monitoring and kinetic analysis of fluorescence in situ hybridization (FISH). We implement the method using a vertical microfluidic probe containing a microstructure designed for rapid switching between a probe solution and a non-fluorescent imaging buffer. The FISH signal is monitored in real time during the imaging buffer wash, during which signal associated with unbound probes is removed. We provide a theoretical description of the method as well as a demonstration of its applicability using a model system of centromeric probes (Cen17). We demonstrate the applicability of the method for the characterization of FISH kinetics under conditions of varying probe concentration, destabilizing agent (formamide) content, volume exclusion agent (dextran sulfate) content, and ionic strength. We show that our method can be used to investigate the effect of each of these variables and provide insight into processes affecting in situ hybridization, facilitating the design of new assays. |
2311.07962 | Ha-Na Jo | Ha-Na Jo, Young-Seok Kweon, Gi-Hwan Shin, Heon-Gyu Kwak, Seong-Whan
Lee | Relationship Between Mood, Sleepiness, and EEG Functional Connectivity
by 40 Hz Monaural Beats | null | null | null | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by/4.0/ | The monaural beat is known to modulate brain and personal states.
However, which changes in brain waves are related to changes in state remain
unclear. Therefore, we aimed to investigate the effects of monaural beats and
find the relationship between them. Ten participants took part in five separate
random sessions, which included a baseline session and four sessions with
monaural beat stimulation: one audible session and three inaudible sessions.
Electroencephalograms (EEG) were recorded, and participants completed pre- and
post-stimulation questionnaires assessing mood and sleepiness. The audible
session led to increased arousal and more positive mood compared with the other
conditions. In the neurophysiological analysis, statistically significant
differences in frontal-central, central-central, and central-parietal
connectivity were observed only in the audible session. Furthermore, a
significant correlation was identified between sleepiness and EEG power in the
temporal and occipital regions. These results suggest a more detailed
relationship between stimulation and changes in personal state. These findings
have implications for applications in areas such as cognitive enhancement,
mood regulation, and sleep management.
| [
{
"created": "Tue, 14 Nov 2023 07:28:37 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Nov 2023 07:54:21 GMT",
"version": "v2"
}
] | 2023-11-21 | [
[
"Jo",
"Ha-Na",
""
],
[
"Kweon",
"Young-Seok",
""
],
[
"Shin",
"Gi-Hwan",
""
],
[
"Kwak",
"Heon-Gyu",
""
],
[
"Lee",
"Seong-Whan",
""
]
] | The monaural beat is known to modulate brain and personal states. However, which changes in brain waves are related to changes in state remain unclear. Therefore, we aimed to investigate the effects of monaural beats and find the relationship between them. Ten participants took part in five separate random sessions, which included a baseline session and four sessions with monaural beat stimulation: one audible session and three inaudible sessions. Electroencephalograms (EEG) were recorded, and participants completed pre- and post-stimulation questionnaires assessing mood and sleepiness. The audible session led to increased arousal and more positive mood compared with the other conditions. In the neurophysiological analysis, statistically significant differences in frontal-central, central-central, and central-parietal connectivity were observed only in the audible session. Furthermore, a significant correlation was identified between sleepiness and EEG power in the temporal and occipital regions. These results suggest a more detailed relationship between stimulation and changes in personal state. These findings have implications for applications in areas such as cognitive enhancement, mood regulation, and sleep management. |
2210.05677 | Chang Su | Matthew Brendel, Chang Su, Zilong Bai, Hao Zhang, Olivier Elemento,
Fei Wang | Application of Deep Learning on Single-Cell RNA-sequencing Data
Analysis: A Review | null | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Single-cell RNA-sequencing (scRNA-seq) has become a routinely used technique
to quantify the gene expression profile of thousands of single cells
simultaneously. Analysis of scRNA-seq data plays an important role in the study
of cell states and phenotypes, and has helped elucidate biological processes,
such as those occurring during development of complex organisms and improved
our understanding of disease states, such as cancer, diabetes, and COVID, among
others. Deep learning, a recent advance of artificial intelligence that has
been used to address many problems involving large datasets, has also emerged
as a promising tool for scRNA-seq data analysis, as it has a capacity to
extract informative, compact features from noisy, heterogeneous, and
high-dimensional scRNA-seq data to improve downstream analysis. The present
review aims at surveying recently developed deep learning techniques in
scRNA-seq data analysis, identifying key steps within the scRNA-seq data
analysis pipeline that have been advanced by deep learning, and explaining the
benefits of deep learning over more conventional analysis tools. Finally, we
summarize the challenges faced by current deep learning approaches in
scRNA-seq data analysis and discuss potential directions for improvements in
deep learning algorithms for scRNA-seq data analysis.
| [
{
"created": "Tue, 11 Oct 2022 17:07:22 GMT",
"version": "v1"
}
] | 2022-10-13 | [
[
"Brendel",
"Matthew",
""
],
[
"Su",
"Chang",
""
],
[
"Bai",
"Zilong",
""
],
[
"Zhang",
"Hao",
""
],
[
"Elemento",
"Olivier",
""
],
[
"Wang",
"Fei",
""
]
] | Single-cell RNA-sequencing (scRNA-seq) has become a routinely used technique to quantify the gene expression profile of thousands of single cells simultaneously. Analysis of scRNA-seq data plays an important role in the study of cell states and phenotypes, and has helped elucidate biological processes, such as those occurring during development of complex organisms and improved our understanding of disease states, such as cancer, diabetes, and COVID, among others. Deep learning, a recent advance of artificial intelligence that has been used to address many problems involving large datasets, has also emerged as a promising tool for scRNA-seq data analysis, as it has a capacity to extract informative, compact features from noisy, heterogeneous, and high-dimensional scRNA-seq data to improve downstream analysis. The present review aims at surveying recently developed deep learning techniques in scRNA-seq data analysis, identifying key steps within the scRNA-seq data analysis pipeline that have been advanced by deep learning, and explaining the benefits of deep learning over more conventional analysis tools. Finally, we summarize the challenges in current deep learning approaches faced within scRNA-seq data and discuss potential directions for improvements in deep algorithms for scRNA-seq data analysis. |
0706.0118 | Diana Fusco | D. Fusco, B. Bassetti, P. Jona, M. Cosentino Lagomarsino | DIA-MCIS. An Importance Sampling Network Randomizer for Network Motif
Discovery and Other Topological Observables in Transcription Networks | 6 pages and 1 figure, included supplementary mathematical notes | null | null | null | q-bio.QM | null | Transcription networks and other directed networks can be characterized by
topological observables such as subgraph occurrence (network motifs). To
perform such analyses, it is necessary to be able to generate suitable
randomized network ensembles. Typically, one considers null networks with the
same degree sequences as the original ones. The commonly used algorithms
sometimes have long convergence times and sampling problems. We present here an
alternative, based on a variant of the importance sampling Monte Carlo
developed by Chen et al. [1].
| [
{
"created": "Fri, 1 Jun 2007 10:25:53 GMT",
"version": "v1"
}
] | 2007-06-04 | [
[
"Fusco",
"D.",
""
],
[
"Bassetti",
"B.",
""
],
[
"Jona",
"P.",
""
],
[
"Lagomarsino",
"M. Cosentino",
""
]
] | Transcription networks and other directed networks can be characterized by topological observables such as subgraph occurrence (network motifs). To perform such analyses, it is necessary to be able to generate suitable randomized network ensembles. Typically, one considers null networks with the same degree sequences as the original ones. The commonly used algorithms sometimes have long convergence times and sampling problems. We present here an alternative, based on a variant of the importance sampling Monte Carlo developed by Chen et al. [1]. |
1312.1104 | Isabelle Rivals | Brigitte Quenet (ESA), Christian Straus (ER 10 UPMC), Marie-No\"elle
Fiamma (ER 10 UPMC), Isabelle Rivals (ESA), Thomas Similowski (ER 10 UPMC),
Ginette Horcholle-Bossavit (ESA) | New insights in gill/buccal rhythm spiking activity and CO2 sensitivity
in pre- and post-metamorphic tadpoles (Pelophylax ridibundus) | null | Respiratory Physiology & Neurobiology 191 (2014) 26-37 | 10.1016/j.resp.2013.10.013 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Central CO2 chemosensitivity is crucial for all air-breathing vertebrates and
raises the question of its role in ventilatory rhythmogenesis. In this study,
neurograms of ventilatory motor outputs recorded in facial nerve of
premetamorphic and postmetamorphic tadpole isolated brainstems, under normo-
and hypercapnia, are investigated using Continuous Wavelet Transform spectral
analysis for buccal activity and computation of number and amplitude of spikes
during buccal and lung activities. Buccal bursts exhibit fast oscillations
(20-30 Hz) that are prominent in premetamorphic tadpoles: they result from the
presence in periodic time windows of high amplitude spikes. Hypercapnia
systematically decreases the frequency of buccal rhythm in both pre- and
postmetamorphic tadpoles, by a lengthening of the interburst duration. In
postmetamorphic tadpoles, hypercapnia reduces buccal burst amplitude and
unmasks small fast oscillations. Our results suggest a common effect of the
hypercapnia on the buccal part of the Central Pattern Generator in all tadpoles
and a possible effect at the level of the motoneuron recruitment in
postmetamorphic tadpoles.
| [
{
"created": "Wed, 4 Dec 2013 10:57:48 GMT",
"version": "v1"
}
] | 2013-12-10 | [
[
"Quenet",
"Brigitte",
"",
"ESA"
],
[
"Straus",
"Christian",
"",
"ER 10 UPMC"
],
[
"Fiamma",
"Marie-Noëlle",
"",
"ER 10 UPMC"
],
[
"Rivals",
"Isabelle",
"",
"ESA"
],
[
"Similowski",
"Thomas",
"",
"ER 10 UPMC"
],
[
... | Central CO2 chemosensitivity is crucial for all air-breathing vertebrates and raises the question of its role in ventilatory rhythmogenesis. In this study, neurograms of ventilatory motor outputs recorded in facial nerve of premetamorphic and postmetamorphic tadpole isolated brainstems, under normo- and hypercapnia, are investigated using Continuous Wavelet Transform spectral analysis for buccal activity and computation of number and amplitude of spikes during buccal and lung activities. Buccal bursts exhibit fast oscillations (20-30 Hz) that are prominent in premetamorphic tadpoles: they result from the presence in periodic time windows of high amplitude spikes. Hypercapnia systematically decreases the frequency of buccal rhythm in both pre- and postmetamorphic tadpoles, by a lengthening of the interburst duration. In postmetamorphic tadpoles, hypercapnia reduces buccal burst amplitude and unmasks small fast oscillations. Our results suggest a common effect of the hypercapnia on the buccal part of the Central Pattern Generator in all tadpoles and a possible effect at the level of the motoneuron recruitment in postmetamorphic tadpoles. |
2309.01122 | Bruna Moreira Da Silva | Bruna Moreira da Silva (1), David B. Ascher (2), Nicholas Geard (1),
Douglas E. V. Pires (1) ((1) The University of Melbourne, (2) The University
of Queensland) | AI driven B-cell Immunotherapy Design | null | null | null | null | q-bio.QM cs.CE cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Antibodies, a prominent class of approved biologics, play a crucial role in
detecting foreign antigens. The effectiveness of antigen neutralisation and
elimination hinges upon the strength, sensitivity, and specificity of the
paratope-epitope interaction, which demands resource-intensive experimental
techniques for characterisation. In recent years, artificial intelligence and
machine learning methods have made significant strides, revolutionising the
prediction of protein structures and their complexes. The past decade has also
witnessed the evolution of computational approaches aiming to support
immunotherapy design. This review focuses on the progress of machine
learning-based tools and their frameworks in the domain of B-cell immunotherapy
design, encompassing linear and conformational epitope prediction, paratope
prediction, and antibody design. We mapped the most commonly used data sources,
evaluation metrics, and method availability and thoroughly assessed their
significance and limitations, discussing the main challenges ahead.
| [
{
"created": "Sun, 3 Sep 2023 09:14:10 GMT",
"version": "v1"
}
] | 2023-09-06 | [
[
"da Silva",
"Bruna Moreira",
""
],
[
"Ascher",
"David B.",
""
],
[
"Geard",
"Nicholas",
""
],
[
"Pires",
"Douglas E. V.",
""
]
] | Antibodies, a prominent class of approved biologics, play a crucial role in detecting foreign antigens. The effectiveness of antigen neutralisation and elimination hinges upon the strength, sensitivity, and specificity of the paratope-epitope interaction, which demands resource-intensive experimental techniques for characterisation. In recent years, artificial intelligence and machine learning methods have made significant strides, revolutionising the prediction of protein structures and their complexes. The past decade has also witnessed the evolution of computational approaches aiming to support immunotherapy design. This review focuses on the progress of machine learning-based tools and their frameworks in the domain of B-cell immunotherapy design, encompassing linear and conformational epitope prediction, paratope prediction, and antibody design. We mapped the most commonly used data sources, evaluation metrics, and method availability and thoroughly assessed their significance and limitations, discussing the main challenges ahead. |
1803.03461 | Michael Sadovsky | Michael Sadovsky, Tatiana Guseva, Vladislav Birukov, Tatiana Shpagina,
Victoria Fedotovskaya | Some preliminary results on relation between triplet composition and
tissue source in larch total transcriptome | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We studied the structuredness of the transcriptome ensemble of Siberian larch.
The clusters in 64-dimensional space were identified with the $K$-means
technique, where the objects to be clustered are the different fragments of the
genome. A tetrahedron-like structure in the distribution of these fragments was
found. Chargaff's discrepancy measure was determined for each class, as well as
between the classes. It reveals a relative similarity of the classes. The
results have been compared to those obtained for the specific transcriptome of
each tissue. Also, a surrogate transcriptome has been developed comprising the
contigs assembled for specific tissues; the latter has been compared with the
real total transcriptome, and a significant difference has been observed.
| [
{
"created": "Fri, 9 Mar 2018 10:49:06 GMT",
"version": "v1"
}
] | 2018-03-12 | [
[
"Sadovsky",
"Michael",
""
],
[
"Guseva",
"Tatiana",
""
],
[
"Birukov",
"Vladislav",
""
],
[
"Shpagina",
"Tatiana",
""
],
[
"Fedotovskaya",
"Victoria",
""
]
] | We studied the structuredness of the transcriptome ensemble of Siberian larch. The clusters in 64-dimensional space were identified with the $K$-means technique, where the objects to be clustered are the different fragments of the genome. A tetrahedron-like structure in the distribution of these fragments was found. Chargaff's discrepancy measure was determined for each class, as well as between the classes. It reveals a relative similarity of the classes. The results have been compared to those obtained for the specific transcriptome of each tissue. Also, a surrogate transcriptome has been developed comprising the contigs assembled for specific tissues; the latter has been compared with the real total transcriptome, and a significant difference has been observed. |
2204.02169 | Beren Millidge Mr | Alexander Tschantz, Beren Millidge, Anil K Seth, Christopher L Buckley | Hybrid Predictive Coding: Inferring, Fast and Slow | 05/04/22 initial upload. 06/04/22 added acknowledgements section | null | null | null | q-bio.NC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive coding is an influential model of cortical neural activity. It
proposes that perceptual beliefs are furnished by sequentially minimising
"prediction errors" - the differences between predicted and observed data.
Implicit in this proposal is the idea that perception requires multiple cycles
of neural activity. This is at odds with evidence that several aspects of
visual perception - including complex forms of object recognition - arise from
an initial "feedforward sweep" that occurs on fast timescales which preclude
substantial recurrent activity. Here, we propose that the feedforward sweep can
be understood as performing amortized inference and recurrent processing can be
understood as performing iterative inference. We propose a hybrid predictive
coding network that combines both iterative and amortized inference in a
principled manner by describing both in terms of a dual optimization of a
single objective function. We show that the resulting scheme can be implemented
in a biologically plausible neural architecture that approximates Bayesian
inference utilising local Hebbian update rules. We demonstrate that our hybrid
predictive coding model combines the benefits of both amortized and iterative
inference -- obtaining rapid and computationally cheap perceptual inference for
familiar data while maintaining the context-sensitivity, precision, and sample
efficiency of iterative inference schemes. Moreover, we show how our model is
inherently sensitive to its uncertainty and adaptively balances iterative and
amortized inference to obtain accurate beliefs using minimum computational
expense. Hybrid predictive coding offers a new perspective on the functional
relevance of the feedforward and recurrent activity observed during visual
perception and offers novel insights into distinct aspects of visual
phenomenology.
| [
{
"created": "Tue, 5 Apr 2022 12:52:45 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2022 16:09:28 GMT",
"version": "v2"
}
] | 2022-04-07 | [
[
"Tschantz",
"Alexander",
""
],
[
"Millidge",
"Beren",
""
],
[
"Seth",
"Anil K",
""
],
[
"Buckley",
"Christopher L",
""
]
] | Predictive coding is an influential model of cortical neural activity. It proposes that perceptual beliefs are furnished by sequentially minimising "prediction errors" - the differences between predicted and observed data. Implicit in this proposal is the idea that perception requires multiple cycles of neural activity. This is at odds with evidence that several aspects of visual perception - including complex forms of object recognition - arise from an initial "feedforward sweep" that occurs on fast timescales which preclude substantial recurrent activity. Here, we propose that the feedforward sweep can be understood as performing amortized inference and recurrent processing can be understood as performing iterative inference. We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner by describing both in terms of a dual optimization of a single objective function. We show that the resulting scheme can be implemented in a biologically plausible neural architecture that approximates Bayesian inference utilising local Hebbian update rules. We demonstrate that our hybrid predictive coding model combines the benefits of both amortized and iterative inference -- obtaining rapid and computationally cheap perceptual inference for familiar data while maintaining the context-sensitivity, precision, and sample efficiency of iterative inference schemes. Moreover, we show how our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense. Hybrid predictive coding offers a new perspective on the functional relevance of the feedforward and recurrent activity observed during visual perception and offers novel insights into distinct aspects of visual phenomenology. |
1812.11857 | Brian Carlson | Amanda L. Colunga, Karam G. Kim, N. Payton Woodall, Todd F. Dardas,
John H. Gennari, Mette S. Olufsen and Brian E. Carlson | Deep phenotyping of cardiac function in heart transplant patients using
cardiovascular systems models | 53 Pages (including supplement), 9 figures in manuscript, 9 figures
in supplement | null | null | null | q-bio.TO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heart transplant patients are followed with periodic right heart
catheterizations (RHCs) to identify post-transplant complications and guide
treatment. Post-transplant positive outcomes are associated with a steady
reduction of right ventricular and pulmonary arterial pressures, toward normal
levels of right-side pressure (about 20mmHg) measured by RHC. This study shows
more information about patient progression is obtained by combining standard
RHC measures with mechanistic computational cardiovascular systems models. The
aims of this study are to understand how cardiovascular system models can be
used to represent a patient's cardiovascular state, and to use these models to
track post-transplant recovery and outcome. To obtain reliable parameter
estimates comparable within and across datasets, we use sensitivity analysis,
parameter subset selection, and optimization to determine patient-specific
mechanistic parameters that can be reliably extracted from the RHC data. Patient-specific
models are identified for ten patients from their first post-transplant RHC and
longitudinal analysis is done for five patients. Results of sensitivity
analysis and subset selection show we can reliably estimate seven
non-measurable quantities including ventricular diastolic relaxation, systemic
resistance, pulmonary venous elastance, pulmonary resistance, pulmonary
arterial elastance, pulmonary valve resistance and systemic arterial elastance.
Changes in parameters and predicted cardiovascular function post-transplant are
used to evaluate cardiovascular state during recovery in five patients. Of
these five patients, only one patient showed inconsistent trends during
recovery in ventricular pressure-volume relationships and power output. At a
four-year recovery time point this patient exhibited biventricular failure
along with graft dysfunction while the remaining four exhibited no
cardiovascular complications.
| [
{
"created": "Wed, 26 Dec 2018 19:03:52 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Dec 2019 16:00:32 GMT",
"version": "v2"
}
] | 2019-12-11 | [
[
"Colunga",
"Amanda L.",
""
],
[
"Kim",
"Karam G.",
""
],
[
"Woodall",
"N. Payton",
""
],
[
"Dardas",
"Todd F.",
""
],
[
"Gennari",
"John H.",
""
],
[
"Olufsen",
"Mette S.",
""
],
[
"Carlson",
"Brian E.",
""... | Heart transplant patients are followed with periodic right heart catheterizations (RHCs) to identify post-transplant complications and guide treatment. Post-transplant positive outcomes are associated with a steady reduction of right ventricular and pulmonary arterial pressures, toward normal levels of right-side pressure (about 20mmHg) measured by RHC. This study shows more information about patient progression is obtained by combining standard RHC measures with mechanistic computational cardiovascular systems models. The aims of this study are to understand how cardiovascular system models can be used to represent a patient's cardiovascular state, and to use these models to track post-transplant recovery and outcome. To obtain reliable parameter estimates comparable within and across datasets, we use sensitivity analysis, parameter subset selection, and optimization to determine patient-specific mechanistic parameters that can be reliably extracted from the RHC data. Patient-specific models are identified for ten patients from their first post-transplant RHC and longitudinal analysis is done for five patients. Results of sensitivity analysis and subset selection show we can reliably estimate seven non-measurable quantities including ventricular diastolic relaxation, systemic resistance, pulmonary venous elastance, pulmonary resistance, pulmonary arterial elastance, pulmonary valve resistance and systemic arterial elastance. Changes in parameters and predicted cardiovascular function post-transplant are used to evaluate cardiovascular state during recovery in five patients. Of these five patients, only one patient showed inconsistent trends during recovery in ventricular pressure-volume relationships and power output. At a four-year recovery time point this patient exhibited biventricular failure along with graft dysfunction while the remaining four exhibited no cardiovascular complications. |
1405.5251 | Korbinian Strimmer | Steve Hoffmann, Peter F. Stadler and Korbinian Strimmer | A Simple Data-Adaptive Probabilistic Variant Calling Model | 19 pages, 6 figures | Algorithms for Molecular Biology 2015, Vol. 10, Article 10 | 10.1186/s13015-015-0037-5 | null | q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Several sources of noise obfuscate the identification of single
nucleotide variation (SNV) in next generation sequencing data. For instance,
errors may be introduced during library construction and sequencing steps. In
addition, the reference genome and the algorithms used for the alignment of the
reads are further critical factors determining the efficacy of variant calling
methods. It is crucial to account for these factors in individual sequencing
experiments.
Results: We introduce a simple data-adaptive model for variant calling. This
model automatically adjusts to specific factors such as alignment errors. To
achieve this, several characteristics are sampled from sites with low mismatch
rates, and these are used to estimate empirical log-likelihoods. These
likelihoods are then combined into a score that typically gives rise to a mixture
distribution. From these we determine a decision threshold to separate
potentially variant sites from the noisy background.
Conclusions: In simulations we show that our simple proposed model is
competitive with frequently used, much more complex SNV calling algorithms in
terms of sensitivity and specificity. It performs specifically well in cases
with low allele frequencies. The application to next-generation sequencing data
reveals stark differences of the score distributions indicating a strong
influence of data specific sources of noise. The proposed model is specifically
designed to adjust to these differences.
| [
{
"created": "Tue, 20 May 2014 22:09:17 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Nov 2014 18:20:54 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Jan 2015 15:44:05 GMT",
"version": "v3"
}
] | 2015-03-05 | [
[
"Hoffmann",
"Steve",
""
],
[
"Stadler",
"Peter F.",
""
],
[
"Strimmer",
"Korbinian",
""
]
] | Background: Several sources of noise obfuscate the identification of single nucleotide variation (SNV) in next generation sequencing data. For instance, errors may be introduced during library construction and sequencing steps. In addition, the reference genome and the algorithms used for the alignment of the reads are further critical factors determining the efficacy of variant calling methods. It is crucial to account for these factors in individual sequencing experiments. Results: We introduce a simple data-adaptive model for variant calling. This model automatically adjusts to specific factors such as alignment errors. To achieve this, several characteristics are sampled from sites with low mismatch rates, and these are used to estimate empirical log-likelihoods. These likelihoods are then combined into a score that typically gives rise to a mixture distribution. From these we determine a decision threshold to separate potentially variant sites from the noisy background. Conclusions: In simulations we show that our simple proposed model is competitive with frequently used, much more complex SNV calling algorithms in terms of sensitivity and specificity. It performs specifically well in cases with low allele frequencies. The application to next-generation sequencing data reveals stark differences of the score distributions indicating a strong influence of data specific sources of noise. The proposed model is specifically designed to adjust to these differences. |
2310.03058 | Aniket Banerjee | Aniket Banerjee, Urvashi Verma and Rana D. Parshad | Some Remarks on a Non-Local Variable Carrying Capacity Model for Aphid
Population Dynamics | 21 pages, 8 figures | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | Aphids are damaging insect pests on many crops. Their density can rapidly
build up on a host plant to several thousand over one growing season.
Occasionally, a competition-driven decline in population early in the season,
followed by a build-up later, is observed in the field. Such dynamics cannot be
captured via standard models, such as introduced in Kindlmann and Dixon, 2010.
In Kindlmann et al., 2007, a logistic non-local population model with variable
carrying capacity is proposed to capture these alternative dynamics. The
proposed model has a rich dynamical structure and can predict multiple
population peaks, as observed in the field. We show that additionally, this
model also possesses solutions that can blow-up/explode in finite time. The
blow-up is seen to occur for both large and small initial conditions and can
occur both early and late in the season. We propose a model extension that,
under certain parametric restrictions, has global time-bounded solutions. We
discuss the ecological applications of our findings.
| [
{
"created": "Wed, 4 Oct 2023 15:20:36 GMT",
"version": "v1"
}
] | 2023-10-06 | [
[
"Banerjee",
"Aniket",
""
],
[
"Verma",
"Urvashi",
""
],
[
"Parshad",
"Rana D.",
""
]
] | Aphids are damaging insect pests on many crops. Their density can rapidly build up on a host plant to several thousand over one growing season. Occasionally, a competition-driven decline in population early in the season, followed by a build-up later, is observed in the field. Such dynamics cannot be captured via standard models, such as introduced in Kindlmann and Dixon, 2010. In Kindlmann et al., 2007, a logistic non-local population model with variable carrying capacity is proposed to capture these alternative dynamics. The proposed model has a rich dynamical structure and can predict multiple population peaks, as observed in the field. We show that additionally, this model also possesses solutions that can blow-up/explode in finite time. The blow-up is seen to occur for both large and small initial conditions and can occur both early and late in the season. We propose a model extension that, under certain parametric restrictions, has global time-bounded solutions. We discuss the ecological applications of our findings. |
2201.02249 | Cornelia Pokalyuk | Vianney Brouard and Cornelia Pokalyuk | Invasion of cooperative parasites in moderately structured host
populations | null | null | null | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Certain defense mechanisms of phages against the immune system of their
bacterial host rely on cooperation of phages. Motivated by this example we
analyse invasion probabilities of cooperative parasites in host populations
that are moderately structured. More precisely we assume that hosts are
arranged on the vertices of a configuration model and that offspring of
parasites move to nearest neighbour sites to infect new hosts. We consider
parasites that generate many offspring at reproduction, but do this (usually)
only when infecting a host simultaneously. In this regime we identify and
analyse the spatial scale of the population structure at which invasion of
parasites turns from being an unlikely to a highly probable event.
| [
{
"created": "Thu, 6 Jan 2022 21:04:45 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jul 2022 08:39:55 GMT",
"version": "v2"
}
] | 2022-07-22 | [
[
"Brouard",
"Vianney",
""
],
[
"Pokalyuk",
"Cornelia",
""
]
] | Certain defense mechanisms of phages against the immune system of their bacterial host rely on cooperation of phages. Motivated by this example we analyse invasion probabilities of cooperative parasites in host populations that are moderately structured. More precisely we assume that hosts are arranged on the vertices of a configuration model and that offspring of parasites move to nearest neighbour sites to infect new hosts. We consider parasites that generate many offspring at reproduction, but do this (usually) only when infecting a host simultaneously. In this regime we identify and analyse the spatial scale of the population structure at which invasion of parasites turns from being an unlikely to a highly probable event. |
2404.12865 | David Sankoff | Siyu Chen and David Sankoff | A minimal model of boosting and waning in a recurrent seasonal epidemic | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We propose a model of the immunity to a cyclical epidemic disease taking
account not only of seasonal boosts during the infectious season, but also of
residual immunity remaining from one season to the next. The focus is on the
exponential waning process over successive cycles, imposed on the temporal
distribution of infections or exposures over a season. This distribution,
interacting with the waning function, is all that is necessary to reproduce, in
mathematically closed form, the mechanical cycle of boosting and waning
immunity characteristic of recurrent seasonal infectious disease. Distinct from
epidemiological models predicting numbers of individuals moving between
infectivity compartments, our result enables us to directly estimate parameters
of waning and the infectivity distribution. We can naturally iterate the
cyclical process to simulate immunity trajectories over many years and thus to
quantify the strong relationship between residual immunity and the time elapsed
between annual infectivity peaks.
| [
{
"created": "Fri, 19 Apr 2024 13:05:08 GMT",
"version": "v1"
}
] | 2024-04-22 | [
[
"Chen",
"Siyu",
""
],
[
"Sankoff",
"David",
""
]
] | We propose a model of the immunity to a cyclical epidemic disease taking account not only of seasonal boosts during the infectious season, but also of residual immunity remaining from one season to the next. The focus is on the exponential waning process over successive cycles, imposed on the temporal distribution of infections or exposures over a season. This distribution, interacting with the waning function, is all that is necessary to reproduce, in mathematically closed form, the mechanical cycle of boosting and waning immunity characteristic of recurrent seasonal infectious disease. Distinct from epidemiological models predicting numbers of individuals moving between infectivity compartments, our result enables us to directly estimate parameters of waning and the infectivity distribution. We can naturally iterate the cyclical process to simulate immunity trajectories over many years and thus to quantify the strong relationship between residual immunity and the time elapsed between annual infectivity peaks. |
1608.06665 | Walid Gomaa | Mohamed Khamis, Walid Gomaa, Basem Galal | Deep learning is competing random forest in computational docking | 29 pages, 7 figures | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational docking is the core process of computer-aided drug design; it
aims at predicting the best orientation and conformation of a small drug
molecule when bound to a target large protein receptor. The docking quality is
typically measured by a scoring function: a mathematical predictive model that
produces a score representing the binding free energy and hence the stability
of the resulting complex molecule. We analyze the performance of both learning
techniques (deep learning and random forest) on the scoring power, the ranking
power, docking power, and
screening power using the PDBbind 2013 database. For the scoring and ranking
powers, the proposed learning scoring functions depend on a wide range of
features (energy terms, pharmacophore, intermolecular) that entirely
characterize the protein-ligand complexes. For the docking and screening
powers, the proposed learning scoring functions depend on the intermolecular
features of the RF-Score to utilize a larger number of training complexes. For
the scoring power, the DL\_RF scoring function achieves Pearson's correlation
coefficient between the predicted and experimentally measured binding
affinities of 0.799 versus 0.758 of the RF scoring function. For the ranking
power, the DL scoring function ranks the ligands bound to fixed target protein
with accuracy 54% for the high-level ranking and with accuracy 78% for the
low-level ranking while the RF scoring function achieves (46% and 62%)
respectively. For the docking power, the DL\_RF scoring function has a success
rate when the three best-scored ligand binding poses are considered within 2
\AA\ root-mean-square-deviation from the native pose of 36.0% versus 30.2% of
the RF scoring function. For the screening power, the DL scoring function has
an average enrichment factor and success rate at the top 1% level of (2.69 and
6.45%) respectively versus (1.61 and 4.84%) respectively of the RF scoring
function.
| [
{
"created": "Tue, 23 Aug 2016 22:52:22 GMT",
"version": "v1"
}
] | 2016-08-25 | [
[
"Khamis",
"Mohamed",
""
],
[
"Gomaa",
"Walid",
""
],
[
"Galal",
"Basem",
""
]
] | Computational docking is the core process of computer-aided drug design; it aims at predicting the best orientation and conformation of a small drug molecule when bound to a target large protein receptor. The docking quality is typically measured by a scoring function: a mathematical predictive model that produces a score representing the binding free energy and hence the stability of the resulting complex molecule. We analyze the performance of both learning techniques on the scoring power, the ranking power, docking power, and screening power using the PDBbind 2013 database. For the scoring and ranking powers, the proposed learning scoring functions depend on a wide range of features (energy terms, pharmacophore, intermolecular) that entirely characterize the protein-ligand complexes. For the docking and screening powers, the proposed learning scoring functions depend on the intermolecular features of the RF-Score to utilize a larger number of training complexes. For the scoring power, the DL\_RF scoring function achieves Pearson's correlation coefficient between the predicted and experimentally measured binding affinities of 0.799 versus 0.758 of the RF scoring function. For the ranking power, the DL scoring function ranks the ligands bound to fixed target protein with accuracy 54% for the high-level ranking and with accuracy 78% for the low-level ranking while the RF scoring function achieves (46% and 62%) respectively. For the docking power, the DL\_RF scoring function has a success rate when the three best-scored ligand binding poses are considered within 2 \AA\ root-mean-square-deviation from the native pose of 36.0% versus 30.2% of the RF scoring function. For the screening power, the DL scoring function has an average enrichment factor and success rate at the top 1% level of (2.69 and 6.45%) respectively versus (1.61 and 4.84%) respectively of the RF scoring function. |
2102.13172 | Loretta del Mercato | Marta Cavo, Donatella Delle Cave, Eliana D'Amone, Giuseppe Gigli, Enza
Lonardo, Loretta L. del Mercato | A synergic approach to enhance long term culture and manipulation of
MiaPaCa-2 pancreatic cancer spheroids | 11 pages, 7 figures. Scientific Reports, 2020 | null | 10.1038/s41598-020-66908-8 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumour spheroids have the potential to be used as preclinical
chemosensitivity assays. However, the production of three dimensional (3D)
tumour spheroids remains challenging as not all tumour cell lines form
spheroids with regular morphologies and spheroid transfer often induces
disaggregation. In the field of pancreatic cancer, the MiaPaCa-2 cell line is
an interesting model for research but it is known for its difficulty to form
stable spheroids; also, when formed, spheroids from this cell line are weak and
arduous to manage and to harvest for further analyses such as multiple staining
and imaging. In this work, we compared different methods (i.e. hanging drop,
round bottom wells and Matrigel embedding, each of them with or without
methylcellulose in the media) to evaluate which one allowed to better overpass
these limitations. Morphometric analysis indicated that hanging drop in
presence of methylcellulose leaded to well-organized spheroids; interestingly,
quantitative PCR (qPCR) analysis reflected the morphometric characterization,
indicating that same spheroids expressed the highest values of CD44, VIMENTIN,
TGF beta1 and Ki67. In addition, we investigated the generation of MiaPaCa-2
spheroids when cultured on substrates of different hydrophobicity, in order to
minimize the area in contact with the culture media and to further improve
spheroid formation.
| [
{
"created": "Thu, 25 Feb 2021 20:59:26 GMT",
"version": "v1"
}
] | 2021-03-01 | [
[
"Cavo",
"Marta",
""
],
[
"Cave",
"Donatella Delle",
""
],
[
"D'Amone",
"Eliana",
""
],
[
"Gigli",
"Giuseppe",
""
],
[
"Lonardo",
"Enza",
""
],
[
"del Mercato",
"Loretta L.",
""
]
] | Tumour spheroids have the potential to be used as preclinical chemosensitivity assays. However, the production of three dimensional (3D) tumour spheroids remains challenging as not all tumour cell lines form spheroids with regular morphologies and spheroid transfer often induces disaggregation. In the field of pancreatic cancer, the MiaPaCa-2 cell line is an interesting model for research but it is known for its difficulty to form stable spheroids; also, when formed, spheroids from this cell line are weak and arduous to manage and to harvest for further analyses such as multiple staining and imaging. In this work, we compared different methods (i.e. hanging drop, round bottom wells and Matrigel embedding, each of them with or without methylcellulose in the media) to evaluate which one allowed us to better overcome these limitations. Morphometric analysis indicated that the hanging drop in the presence of methylcellulose led to well-organized spheroids; interestingly, quantitative PCR (qPCR) analysis reflected the morphometric characterization, indicating that the same spheroids expressed the highest values of CD44, VIMENTIN, TGF beta1 and Ki67. In addition, we investigated the generation of MiaPaCa-2 spheroids when cultured on substrates of different hydrophobicity, in order to minimize the area in contact with the culture media and to further improve spheroid formation. |
1705.10496 | Christian Schmidt | Christian Schmidt and Eleanor Dunn and Madeleine Lowery and Ursula van
Rienen | Uncertainty Quantification of Oscillation Suppression during DBS in a
Coupled Finite Element and Network Model | 10 pages | null | 10.1109/TNSRE.2016.2608925 | null | q-bio.NC cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of the cortico-basal ganglia network and volume conductor models of
the brain can provide insight into the mechanisms of action of deep brain
stimulation (DBS). In this study, the coupling of a network model, under
parkinsonian conditions, to the extracellular field distribution obtained from
a three dimensional finite element model of a rodent's brain during DBS is
presented. This coupled model is used to investigate the influence of
uncertainty in the electrical properties of brain tissue and encapsulation
tissue, formed around the electrode after implantation, on the suppression of
oscillatory neural activity during DBS. The resulting uncertainty in this
effect of DBS on the network activity is quantified using a computationally
efficient and non-intrusive stochastic approach based on the generalized
Polynomial Chaos. The results suggest that variations in the electrical
properties of brain tissue may have a substantial influence on the level of
suppression of oscillatory activity during DBS. Applying a global sensitivity
analysis on the suppression of the simulated oscillatory activity showed that
the influence of uncertainty in the electrical properties of the encapsulation
tissue had only a minor influence, in agreement with previous experimental and
computational studies investigating the mechanisms of current-controlled DBS in
the literature.
| [
{
"created": "Tue, 30 May 2017 07:59:49 GMT",
"version": "v1"
}
] | 2017-05-31 | [
[
"Schmidt",
"Christian",
""
],
[
"Dunn",
"Eleanor",
""
],
[
"Lowery",
"Madeleine",
""
],
[
"van Rienen",
"Ursula",
""
]
] | Models of the cortico-basal ganglia network and volume conductor models of the brain can provide insight into the mechanisms of action of deep brain stimulation (DBS). In this study, the coupling of a network model, under parkinsonian conditions, to the extracellular field distribution obtained from a three dimensional finite element model of a rodent's brain during DBS is presented. This coupled model is used to investigate the influence of uncertainty in the electrical properties of brain tissue and encapsulation tissue, formed around the electrode after implantation, on the suppression of oscillatory neural activity during DBS. The resulting uncertainty in this effect of DBS on the network activity is quantified using a computationally efficient and non-intrusive stochastic approach based on the generalized Polynomial Chaos. The results suggest that variations in the electrical properties of brain tissue may have a substantial influence on the level of suppression of oscillatory activity during DBS. Applying a global sensitivity analysis on the suppression of the simulated oscillatory activity showed that the influence of uncertainty in the electrical properties of the encapsulation tissue had only a minor influence, in agreement with previous experimental and computational studies investigating the mechanisms of current-controlled DBS in the literature. |
2009.00143 | Ivan Ezeigbo | Ivan Ezeigbo | When Costly Punishment Becomes Evolutionarily Beneficial | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Cooperation in evolutionary biology means paying a cost, c, to enjoy
benefits, b. A defector is one who does not pay any cost but enjoys the
benefits of cooperators. Human societies, especially, have evolved a strategy
to discourage defection, punishment. Costly punishment is a type of punishment
where an agent in a biological network or some cooperative scheme pays a cost
to ensure another agent incurs some cost. A previous study shows how parameters
like diversity in neighbors across games, density of connectivity, and costly
punishment influence the evolution of cooperation in non-regular networks. In
this study, evolution in regular networks due to the influence of costly
punishment is also considered, specifically spatial lattices. This study
compares observations between non-regular and regular networks as these
parameters change and brings a clearer understanding of interactions that occur
in these networks. The models, results and analysis bring a new understanding
to game theory and punishment. The results show that costly punishment never
arises as a sole evolutionary strategy. However, in evolutionary games where
costly punishers could evolve more favorable strategies, the initial presence
of costly punishers would bring about high average payoffs in all types of
regular networks and heterogeneous networks. In regular networks, every node
has the same degree, k. Although punishment is conventionally thought to be
anti-progressive, in the presence of diversity in neighbors, it magnifies the
payoff of a group for all heterogeneous networks. In regular networks however,
diversity in neighbors is not required for costly punishment to boost average
payoff. This suggests an answer to the question on why costly punishment has
been favored by natural selection, which is particularly obvious in human
populations.
| [
{
"created": "Mon, 31 Aug 2020 23:19:15 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Ezeigbo",
"Ivan",
""
]
] | Cooperation in evolutionary biology means paying a cost, c, to enjoy benefits, b. A defector is one who does not pay any cost but enjoys the benefits of cooperators. Human societies, especially, have evolved a strategy to discourage defection, punishment. Costly punishment is a type of punishment where an agent in a biological network or some cooperative scheme pays a cost to ensure another agent incurs some cost. A previous study shows how parameters like diversity in neighbors across games, density of connectivity, and costly punishment influence the evolution of cooperation in non-regular networks. In this study, evolution in regular networks due to the influence of costly punishment is also considered, specifically spatial lattices. This study compares observations between non-regular and regular networks as these parameters change and brings a clearer understanding of interactions that occur in these networks. The models, results and analysis bring a new understanding to game theory and punishment. The results show that costly punishment never arises as a sole evolutionary strategy. However, in evolutionary games where costly punishers could evolve more favorable strategies, the initial presence of costly punishers would bring about high average payoffs in all types of regular networks and heterogeneous networks. In regular networks, every node has the same degree, k. Although punishment is conventionally thought to be anti-progressive, in the presence of diversity in neighbors, it magnifies the payoff of a group for all heterogeneous networks. In regular networks however, diversity in neighbors is not required for costly punishment to boost average payoff. This suggests an answer to the question on why costly punishment has been favored by natural selection, which is particularly obvious in human populations. |
2407.02737 | Ljubomir Buturovic | Ljubomir Buturovic, Michael Mayhew, Roland Luethy, Kirindi Choi, Uros
Midic, Nandita Damaraju, Yehudit Hasin-Brumshtein, Amitesh Pratap, Rhys M.
Adams, Joao Fonseca, Ambika Srinath, Paul Fleming, Claudia Pereira, Oliver
Liesenfeld, Purvesh Khatri, Timothy Sweeney | Development of Machine Learning Classifiers for Blood-based Diagnosis
and Prognosis of Suspected Acute Infections and Sepsis | 16 pages, 6 figures | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | We applied machine learning to the unmet medical need of rapid and accurate
diagnosis and prognosis of acute infections and sepsis in emergency
departments. Our solution consists of a Myrna (TM) Instrument and embedded
TriVerity (TM) classifiers. The instrument measures abundances of 29 messenger
RNAs in a patient's blood, subsequently used as features for machine learning.
The classifiers convert the input features to an intuitive test report
comprising the separate likelihoods of (1) a bacterial infection, (2) a viral
infection, and (3) severity (need for Intensive Care Unit-level care). In
internal validation, the system achieved AUROC = 0.83 on the three-class
disease diagnosis (bacterial, viral, or non-infected) and AUROC = 0.77 on
binary prognosis of disease severity. The Myrna, TriVerity system was granted
breakthrough device designation by the United States Food and Drug
Administration (FDA). This engineering manuscript teaches the standard and
novel machine learning methods used to translate an academic research concept
to a clinical product aimed at improving patient care, and discusses lessons
learned.
| [
{
"created": "Wed, 3 Jul 2024 01:20:26 GMT",
"version": "v1"
}
] | 2024-07-04 | [
[
"Buturovic",
"Ljubomir",
""
],
[
"Mayhew",
"Michael",
""
],
[
"Luethy",
"Roland",
""
],
[
"Choi",
"Kirindi",
""
],
[
"Midic",
"Uros",
""
],
[
"Damaraju",
"Nandita",
""
],
[
"Hasin-Brumshtein",
"Yehudit",
""... | We applied machine learning to the unmet medical need of rapid and accurate diagnosis and prognosis of acute infections and sepsis in emergency departments. Our solution consists of a Myrna (TM) Instrument and embedded TriVerity (TM) classifiers. The instrument measures abundances of 29 messenger RNAs in patient's blood, subsequently used as features for machine learning. The classifiers convert the input features to an intuitive test report comprising the separate likelihoods of (1) a bacterial infection (2) a viral infection, and (3) severity (need for Intensive Care Unit-level care). In internal validation, the system achieved AUROC = 0.83 on the three-class disease diagnosis (bacterial, viral, or non-infected) and AUROC = 0.77 on binary prognosis of disease severity. The Myrna, TriVerity system was granted breakthrough device designation by the United States Food and Drug Administration (FDA). This engineering manuscript teaches the standard and novel machine learning methods used to translate an academic research concept to a clinical product aimed at improving patient care, and discusses lessons learned. |
1709.01923 | Somaye Hashemifar | Somaye Hashemifar | Computational prediction and analysis of protein-protein interaction
networks | PhD thesis, Toyota Technological Institute at Chicago (2017) | null | null | null | q-bio.MN cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological networks provide insight into the complex organization of
biological processes in a cell at the system level. They are an effective tool
for understanding the comprehensive map of functional interactions, finding the
functional modules and pathways. Reconstruction and comparative analysis of
these networks provide useful information to identify functional modules,
prioritize disease-causing genes, and identify drug targets. The talk will
consist of two parts. I will discuss several methods for
protein-protein interaction network alignment and compare their performance
to that of other existing methods. Further, I briefly talk about reconstruction of
protein-protein interaction networks by using deep learning.
| [
{
"created": "Wed, 6 Sep 2017 15:44:01 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2017 03:17:06 GMT",
"version": "v2"
}
] | 2017-09-14 | [
[
"Hashemifar",
"Somaye",
""
]
] | Biological networks provide insight into the complex organization of biological processes in a cell at the system level. They are an effective tool for understanding the comprehensive map of functional interactions, finding the functional modules and pathways. Reconstruction and comparative analysis of these networks provide useful information to identify functional modules, prioritize disease-causing genes, and identify drug targets. The talk will consist of two parts. I will discuss several methods for protein-protein interaction network alignment and compare their performance to that of other existing methods. Further, I briefly talk about reconstruction of protein-protein interaction networks by using deep learning. |
1610.00588 | Markos Antonopoulos | Markos Antonopoulos and Georgios Stamatakos | A mathematical, in silico implemented, modular model for tumor growth in
a spatially inhomogeneous, time-varying chemical environment (1st unrevised
edition) | 35 pages, 11 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the last decades, medical observations and multiscale data concerning
tumor growth are mounting. At the same time, contemporary imaging techniques,
well established in clinical practice, provide a variety of information on
real-time, in-vivo tumor growth. Mathematical and in-silico modeling has been
widely recruited to provide means for further understanding of pertinent
biological phenomena. However, despite the vast amounts of new evidence
compiled by medical doctors, there are still many aspects of tumor growth that
remain largely unknown. There is still a large variety of mechanisms to be
better understood and therefore, many hypotheses to be tested. To approach this
problem, starting from mathematical elaborations, we have developed a model of
the early phases of tumor growth consisting of several algorithmic modules,
each one corresponding to a particular biological mechanism. The modularity of
the model allows keeping track of the assumptions made in each step and
facilitates re-adjustment, in case new hypotheses need to be considered.
Simulations showed good qualitative agreement with biological observations, and
revealed a non-trivial interplay between oxygen requirements of cancer
cells and their maximum mitosis rates. The proposed model has, at least in
principle, the potential to exploit data from contemporary imaging techniques
and is eligible for utilizing multicore computation.
| [
{
"created": "Fri, 30 Sep 2016 09:57:04 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Dec 2017 09:08:24 GMT",
"version": "v2"
}
] | 2017-12-11 | [
[
"Antonopoulos",
"Markos",
""
],
[
"Stamatakos",
"Georgios",
""
]
] | During the last decades, medical observations and multiscale data concerning tumor growth are mounting. At the same time, contemporary imaging techniques, well established in clinical practice, provide a variety of information on real-time, in-vivo tumor growth. Mathematical and in-silico modeling has been widely recruited to provide means for further understanding of pertinent biological phenomena. However, despite the vast amounts of new evidence compiled by medical doctors, there are still many aspects of tumor growth that remain largely unknown. There is still a large variety of mechanisms to be better understood and therefore, many hypotheses to be tested. To approach this problem, starting from mathematical elaborations, we have developed a model of the early phases of tumor growth consisting of several algorithmic modules, each one corresponding to a particular biological mechanism. The modularity of the model allows keeping track of the assumptions made in each step and facilitates re-adjustment, in case new hypotheses need to be considered. Simulations showed good qualitative agreement with biological observations, and revealed a non-trivial interplay between oxygen requirements of cancer cells and their maximum mitosis rates. The proposed model has, at least in principle, the potential to exploit data from contemporary imaging techniques and is eligible for utilizing multicore computation. |
1503.07965 | M. V. Sangaranarayanan | K. Silpaja Chandrasekar and M.V. Sangaranarayanan | A topological perspective into the sequence and conformational space of
proteins | 35 pages; 12 Figures and 6 Tables | null | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The precise sequence of aminoacids plays a central role in the tertiary
structure of proteins and their functional properties. The Hydrophobic-Polar
lattice models have provided valuable insights regarding the energy landscape.
We demonstrate here the isomorphism between the protein sequences and
designable structures for two and three dimensional lattice proteins of very
long aminoacid chains using exact enumerations and intuitive considerations. We
emphasize that the topological arrangement of the aminoacid residues alone is
adequate to deduce the designable and non-designable sequences without explicit
recourse to energetics and degeneracies. The results indicate the computational
feasibility of realistic lattice models for proteins in two and three
dimensions and imply that the fundamental principle underlying the designing of
structures is the connectivity of the hydrophobic and polar residues.
| [
{
"created": "Fri, 27 Mar 2015 05:32:03 GMT",
"version": "v1"
}
] | 2015-03-30 | [
[
"Chandrasekar",
"K. Silpaja",
""
],
[
"Sangaranarayanan",
"M. V.",
""
]
] | The precise sequence of aminoacids plays a central role in the tertiary structure of proteins and their functional properties. The Hydrophobic-Polar lattice models have provided valuable insights regarding the energy landscape. We demonstrate here the isomorphism between the protein sequences and designable structures for two and three dimensional lattice proteins of very long aminoacid chains using exact enumerations and intuitive considerations. We emphasize that the topological arrangement of the aminoacid residues alone is adequate to deduce the designable and non-designable sequences without explicit recourse to energetics and degeneracies. The results indicate the computational feasibility of realistic lattice models for proteins in two and three dimensions and imply that the fundamental principle underlying the designing of structures is the connectivity of the hydrophobic and polar residues. |
2402.07252 | Pierre Barrat-Charlaix | Pierre Barrat-Charlaix and Richard A. Neher | Eco-evolutionary dynamics of adapting pathogens and host immunity | Main text: 28 pages, 4 figures. Appendix: 18 pages, 4 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | As pathogens spread in a population of hosts, immunity is built up and the
pool of susceptible individuals is depleted. This generates selective pressure,
to which many human RNA viruses, such as influenza virus or SARS-CoV-2, respond
with rapid antigenic evolution and frequent emergence of immune evasive
variants. However, the host's immune systems adapt and older immune responses
wane, such that escape variants only enjoy a growth advantage for a limited
time. If variant growth dynamics and reshaping of host-immunity operate on
comparable time scales, viral adaptation is determined by eco-evolutionary
interactions that are not captured by models of rapid evolution in a fixed
environment. Here, we use a Susceptible/Infected model to describe the
interaction between an evolving viral population in a dynamic but
immunologically diverse host population. We show that depending on strain
cross-immunity, heterogeneity of the host population, and durability of immune
responses, escape variants initially grow exponentially, but lose their growth
advantage before reaching high frequencies. Their subsequent dynamics follows
an anomalous random walk determined by future escape variants and results in
variant trajectories that are unpredictable. This model can explain the
apparent contradiction between the clearly adaptive nature of antigenic
evolution and the quasi-neutral dynamics of high frequency variants observed
for influenza viruses.
| [
{
"created": "Sun, 11 Feb 2024 17:40:49 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 09:37:25 GMT",
"version": "v2"
}
] | 2024-03-26 | [
[
"Barrat-Charlaix",
"Pierre",
""
],
[
"Neher",
"Richard A.",
""
]
] | As pathogens spread in a population of hosts, immunity is built up and the pool of susceptible individuals is depleted. This generates selective pressure, to which many human RNA viruses, such as influenza virus or SARS-CoV-2, respond with rapid antigenic evolution and frequent emergence of immune evasive variants. However, the host's immune systems adapt and older immune responses wane, such that escape variants only enjoy a growth advantage for a limited time. If variant growth dynamics and reshaping of host-immunity operate on comparable time scales, viral adaptation is determined by eco-evolutionary interactions that are not captured by models of rapid evolution in a fixed environment. Here, we use a Susceptible/Infected model to describe the interaction between an evolving viral population in a dynamic but immunologically diverse host population. We show that depending on strain cross-immunity, heterogeneity of the host population, and durability of immune responses, escape variants initially grow exponentially, but lose their growth advantage before reaching high frequencies. Their subsequent dynamics follows an anomalous random walk determined by future escape variants and results in variant trajectories that are unpredictable. This model can explain the apparent contradiction between the clearly adaptive nature of antigenic evolution and the quasi-neutral dynamics of high frequency variants observed for influenza viruses. |
2002.02156 | Peter Czuppon | Peter Czuppon and Arne Traulsen | Understanding evolutionary and ecological dynamics using a continuum
limit | updated version | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This manuscript contains nothing new, but synthesizes known results: For the
theoretical population geneticist with a probabilistic background, we provide a
summary of some key results on stochastic differential equations. For the
evolutionary game theorist, we give a new perspective on the derivations of
results obtained when using discrete birth-death processes. For the theoretical
biologist familiar with deterministic modeling, we outline how to derive and
work with stochastic versions of classical ecological and evolutionary
processes.
| [
{
"created": "Thu, 6 Feb 2020 08:51:30 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Aug 2020 14:37:20 GMT",
"version": "v2"
}
] | 2020-08-25 | [
[
"Czuppon",
"Peter",
""
],
[
"Traulsen",
"Arne",
""
]
] | This manuscript contains nothing new, but synthesizes known results: For the theoretical population geneticist with a probabilistic background, we provide a summary of some key results on stochastic differential equations. For the evolutionary game theorist, we give a new perspective on the derivations of results obtained when using discrete birth-death processes. For the theoretical biologist familiar with deterministic modeling, we outline how to derive and work with stochastic versions of classical ecological and evolutionary processes. |
1903.10355 | Siddhartha Chakrabarty | Sonjoy Pan, Soumyendu Raha and Siddhartha P. Chakrabarty | A quantitative study on the role of TKI combined with
Wnt/$\beta$-catenin signaling and IFN-$\alpha$ in the treatment of CML
through deterministic and stochastic approaches | null | null | 10.1016/j.chaos.2020.109627 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose deterministic and stochastic models for studying the
pharmacokinetics of chronic myeloid leukemia (CML), upon administration of
IFN-$\alpha$ (the traditional treatment for CML), TKI (the current frontline
medication for CML) and Wnt/$\beta$-catenin signaling (the state-of-the-art
therapeutic breakthrough for CML). To the best of our knowledge, no
mathematical model incorporating all three of these therapeutic protocols is
available in the literature. Further, this work introduces a stochastic approach in
the study of CML dynamics. The key contributions of this work are: (1)
Determination of the patient condition, contingent upon the patient specific
model parameters, which leads to prediction of the appropriate patient specific
therapeutic dosage. (2) Addressing the question of how the dual therapy of TKI
and Wnt/$\beta$-catenin signaling or triple combination of all three, offers
potentially improved therapeutic responses, particularly in terms of reduced
side effects of TKI or IFN-$\alpha$. (3) Prediction of the likelihood of CML
extinction/remission based on the level of CML stem cells at detection.
| [
{
"created": "Fri, 22 Mar 2019 10:35:05 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2019 10:29:26 GMT",
"version": "v2"
}
] | 2020-02-19 | [
[
"Pan",
"Sonjoy",
""
],
[
"Raha",
"Soumyendu",
""
],
[
"Chakrabarty",
"Siddhartha P.",
""
]
] | We propose deterministic and stochastic models for studying the pharmacokinetics of chronic myeloid leukemia (CML), upon administration of IFN-$\alpha$ (the traditional treatment for CML), TKI (the current frontline medication for CML) and Wnt/$\beta$-catenin signaling (the state-of-the-art therapeutic breakthrough for CML). To the best of our knowledge, no mathematical model incorporating all three of these therapeutic protocols is available in the literature. Further, this work introduces a stochastic approach in the study of CML dynamics. The key contributions of this work are: (1) Determination of the patient condition, contingent upon the patient specific model parameters, which leads to prediction of the appropriate patient specific therapeutic dosage. (2) Addressing the question of how the dual therapy of TKI and Wnt/$\beta$-catenin signaling or triple combination of all three, offers potentially improved therapeutic responses, particularly in terms of reduced side effects of TKI or IFN-$\alpha$. (3) Prediction of the likelihood of CML extinction/remission based on the level of CML stem cells at detection. |
q-bio/0506028 | B. J. Powell | M. Linh Tran, B. J. Powell and Paul Meredith | Chemical and Structural Disorder in Eumelanins - A Possible Explanation
for Broad Band Absorbance | 28 pages, 8 figures, accepted to Biophysical Journal | null | null | null | q-bio.BM physics.bio-ph physics.chem-ph | null | We report the results of an experimental and theoretical study of the
electronic and structural properties of a key eumelanin precursor -
5,6-dihydroxyindole-2-carboxylic acid (DHICA) and its dimeric forms. We have
used optical spectroscopy to follow the oxidative polymerization of DHICA to
eumelanin, and observe red shifting and broadening of the absorption spectrum
as the reaction proceeds. First principles density functional theory
calculations indicate that DHICA oligomers (possible reaction products of
oxidative polymerization) have red shifted HOMO-LUMO gaps with respect to the
monomer. Furthermore, different bonding configurations (leading to oligomers
with different structures) produce a range of gaps. These experimental and
theoretical results lend support to the chemical disorder model where the broad
band monotonic absorption characteristic of all melanins is a consequence of
the superposition of a large number of inhomogeneously broadened Gaussian
transitions associated with each of the components of a melanin ensemble. These
results suggest that the traditional model of eumelanin as an amorphous organic
semiconductor is not required to explain its optical properties, and should be
thoroughly re-examined. These results have significant implications for our
understanding of the physics, chemistry and biological function of these
important biological macromolecules. Indeed, one may speculate that the robust
functionality of melanins in vitro is a direct consequence of its
heterogeneity, i.e. chemical disorder is a "low cost" natural resource in these
systems.
| [
{
"created": "Mon, 20 Jun 2005 04:25:36 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2005 05:50:10 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Tran",
"M. Linh",
""
],
[
"Powell",
"B. J.",
""
],
[
"Meredith",
"Paul",
""
]
] | We report the results of an experimental and theoretical study of the electronic and structural properties of a key eumelanin precursor - 5,6-dihydroxyindole-2-carboxylic acid (DHICA) and its dimeric forms. We have used optical spectroscopy to follow the oxidative polymerization of DHICA to eumelanin, and observe red shifting and broadening of the absorption spectrum as the reaction proceeds. First principles density functional theory calculations indicate that DHICA oligomers (possible reaction products of oxidative polymerization) have red shifted HOMO-LUMO gaps with respect to the monomer. Furthermore, different bonding configurations (leading to oligomers with different structures) produce a range of gaps. These experimental and theoretical results lend support to the chemical disorder model where the broad band monotonic absorption characteristic of all melanins is a consequence of the superposition of a large number of inhomogeneously broadened Gaussian transitions associated with each of the components of a melanin ensemble. These results suggest that the traditional model of eumelanin as an amorphous organic semiconductor is not required to explain its optical properties, and should be thoroughly re-examined. These results have significant implications for our understanding of the physics, chemistry and biological function of these important biological macromolecules. Indeed, one may speculate that the robust functionality of melanins in vitro is a direct consequence of its heterogeneity, i.e. chemical disorder is a "low cost" natural resource in these systems. |
2010.03420 | Charles Saah | Ferdinand Kartriku, Dr. Robert Sowah and Charles Saah | Deep Neural Network: An Efficient and Optimized Machine Learning
Paradigm for Reducing Genome Sequencing Error | for associated mpeg file, see https://ijettjournal.org/ | null | null | null | q-bio.GN cs.CV | http://creativecommons.org/licenses/by/4.0/ | Genomic data is used in many fields, but it has become known that most of the
platforms used in the sequencing process produce significant errors. This means
that the analysis and inferences generated from these data may have some errors
that need to be corrected. On the two main types of genome errors -
substitution and indels - our work is focused on correcting indels. A deep
learning approach was used to correct the errors in sequencing the chosen
dataset.
| [
{
"created": "Tue, 6 Oct 2020 08:16:35 GMT",
"version": "v1"
}
] | 2020-10-08 | [
[
"Kartriku",
"Ferdinand",
""
],
[
"Sowah",
"Dr. Robert",
""
],
[
"Saah",
"Charles",
""
]
] | Genomic data is used in many fields, but it has become known that most of the platforms used in the sequencing process produce significant errors. This means that the analysis and inferences generated from these data may have some errors that need to be corrected. On the two main types of genome errors - substitution and indels - our work is focused on correcting indels. A deep learning approach was used to correct the errors in sequencing the chosen dataset. |
1511.07242 | Ihor Lubashevsky | Ihor Lubashevsky and Marie Watanabe | Statistical Properties of Gray Color Categorization: Asymptotics of
Psychometric Function | Presented at the 47th ISCIE International Symposium on Stochastic
Systems Theory and Its Applications, December 5-8, 2015, Honolulu, Hawaii,
USA | null | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The results of our experiments on categorical perception of different shades
of gray are reported. A special color generator was created for conducting the
experiments on categorizing a random sequence of colors into two classes,
light-gray and dark-gray. The collected data are analyzed based on constructing
(i) the asymptotics of the corresponding psychometric functions and (ii) the
mean decision time in categorizing a given shade of gray depending on the shade
brightness (shade number). Conclusions about plausible mechanisms governing
categorical perception, at least for the analyzed system, are drawn.
| [
{
"created": "Sun, 15 Nov 2015 00:16:18 GMT",
"version": "v1"
}
] | 2015-11-24 | [
[
"Lubashevsky",
"Ihor",
""
],
[
"Watanabe",
"Marie",
""
]
] | The results of our experiments on categorical perception of different shades of gray are reported. A special color generator was created for conducting the experiments on categorizing a random sequence of colors into two classes, light-gray and dark-gray. The collected data are analyzed based on constructing (i) the asymptotics of the corresponding psychometric functions and (ii) the mean decision time in categorizing a given shade of gray depending on the shade brightness (shade number). Conclusions about plausible mechanisms governing categorical perception, at least for the analyzed system, are drawn. |
1703.07844 | Daniele Ramazzotti | Bo Wang and Daniele Ramazzotti and Luca De Sano and Junjie Zhu and
Emma Pierson and Serafim Batzoglou | SIMLR: A Tool for Large-Scale Genomic Analyses by Multi-Kernel Learning | null | null | 10.1002/pmic.201700232 | null | q-bio.GN cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We here present SIMLR (Single-cell Interpretation via Multi-kernel LeaRning),
an open-source tool that implements a novel framework to learn a
sample-to-sample similarity measure from expression data observed for
heterogeneous samples. SIMLR can be effectively used to perform tasks such as
dimension reduction, clustering, and visualization of heterogeneous populations
of samples. SIMLR was benchmarked against state-of-the-art methods for these
three tasks on several public datasets, showing it to be scalable and capable
of greatly improving clustering performance, as well as providing valuable
insights by making the data more interpretable via a better visualization.
Availability and Implementation
SIMLR is available on GitHub in both R and MATLAB implementations.
Furthermore, it is also available as an R package on http://bioconductor.org.
| [
{
"created": "Tue, 21 Mar 2017 06:48:59 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jan 2018 20:12:44 GMT",
"version": "v2"
}
] | 2018-01-22 | [
[
"Wang",
"Bo",
""
],
[
"Ramazzotti",
"Daniele",
""
],
[
"De Sano",
"Luca",
""
],
[
"Zhu",
"Junjie",
""
],
[
"Pierson",
"Emma",
""
],
[
"Batzoglou",
"Serafim",
""
]
] | We here present SIMLR (Single-cell Interpretation via Multi-kernel LeaRning), an open-source tool that implements a novel framework to learn a sample-to-sample similarity measure from expression data observed for heterogeneous samples. SIMLR can be effectively used to perform tasks such as dimension reduction, clustering, and visualization of heterogeneous populations of samples. SIMLR was benchmarked against state-of-the-art methods for these three tasks on several public datasets, showing it to be scalable and capable of greatly improving clustering performance, as well as providing valuable insights by making the data more interpretable via a better visualization. Availability and Implementation SIMLR is available on GitHub in both R and MATLAB implementations. Furthermore, it is also available as an R package on http://bioconductor.org. |
1110.4605 | Govardhan Reddy | Samuel S. Cho, Govardhan Reddy, John E. Straub, and D. Thirumalai | Entropic Stabilization of Proteins by TMAO | 27 pages, 10 figures | null | null | null | q-bio.BM cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand the mechanism of trimethylamine N-oxide (TMAO) induced
stabilization of folded protein states, we systematically investigated the
action of TMAO on several model dipeptides (Leucine, L2, Serine, S2, Glutamine,
Q2, Lysine, K2, and Glycine, G2) in order to elucidate the effect of
residue-specific TMAO interactions on small fragments of solvent-exposed
conformations of the denatured states of proteins. We find that TMAO
preferentially hydrogen bonds with the exposed dipeptide backbone, but
generally not with nonpolar or polar side chains. However, interactions with
the positively charged Lys are substantially greater than with the backbone.
The dipeptide G2, a useful model of the pure amide backbone, interacts with TMAO
by forming a hydrogen bond between the amide nitrogen and the oxygen in TMAO.
In contrast, TMAO is depleted from the protein backbone in the hexapeptide G6,
which shows that the length of the polypeptide chain is relevant in aqueous
TMAO solutions. These simulations lead to the hypothesis that TMAO-induced
stabilization of proteins and peptides is a consequence of depletion of the
solute from the protein surface provided intramolecular interactions are more
favorable than those between TMAO and the backbone. To test our hypothesis we
performed additional simulations of the action of TMAO on an intrinsically
disordered A{\beta}16-22 (KLVFFAE) monomer. In the absence of TMAO
A{\beta}16-22 is a disordered random coil. However, in aqueous TMAO solution
A{\beta}16-22 monomer samples compact conformations. A transition from random
coil to {\alpha}-helical secondary structure is observed at high TMAO
concentrations. Our work highlights the potential similarities between the
action of TMAO on long polypeptide chains and entropic stabilization of
proteins in a crowded environment due to excluded volume interactions. In this
sense TMAO is a nano-crowding particle.
| [
{
"created": "Thu, 20 Oct 2011 18:53:17 GMT",
"version": "v1"
}
] | 2011-10-21 | [
[
"Cho",
"Samuel S.",
""
],
[
"Reddy",
"Govardhan",
""
],
[
"Straub",
"John E.",
""
],
[
"Thirumalai",
"D.",
""
]
] | To understand the mechanism of trimethylamine N-oxide (TMAO) induced stabilization of folded protein states, we systematically investigated the action of TMAO on several model dipeptides (Leucine, L2, Serine, S2, Glutamine, Q2, Lysine, K2, and Glycine, G2) in order to elucidate the effect of residue-specific TMAO interactions on small fragments of solvent-exposed conformations of the denatured states of proteins. We find that TMAO preferentially hydrogen bonds with the exposed dipeptide backbone, but generally not with nonpolar or polar side chains. However, interactions with the positively charged Lys are substantially greater than with the backbone. The dipeptide G2, a useful model of the pure amide backbone, interacts with TMAO by forming a hydrogen bond between the amide nitrogen and the oxygen in TMAO. In contrast, TMAO is depleted from the protein backbone in the hexapeptide G6, which shows that the length of the polypeptide chain is relevant in aqueous TMAO solutions. These simulations lead to the hypothesis that TMAO-induced stabilization of proteins and peptides is a consequence of depletion of the solute from the protein surface provided intramolecular interactions are more favorable than those between TMAO and the backbone. To test our hypothesis we performed additional simulations of the action of TMAO on an intrinsically disordered A{\beta}16-22 (KLVFFAE) monomer. In the absence of TMAO A{\beta}16-22 is a disordered random coil. However, in aqueous TMAO solution A{\beta}16-22 monomer samples compact conformations. A transition from random coil to {\alpha}-helical secondary structure is observed at high TMAO concentrations. Our work highlights the potential similarities between the action of TMAO on long polypeptide chains and entropic stabilization of proteins in a crowded environment due to excluded volume interactions. In this sense TMAO is a nano-crowding particle. |
2201.01920 | Hue Sun Chan | Yi-Hsuan Lin, Jonas Wess\'en, Tanmoy Pal, Suman Das, and Hue Sun Chan | Numerical Techniques for Applications of Analytical Theories to
Sequence-Dependent Phase Separations of Intrinsically Disordered Proteins | 46 pages, 10 figures, 105 references, with hyperlinks to relevant
computer codes and related information; Figure 8 in version 2 corrected;
accepted for publication in "Methods in Molecular Biology" volume
"Phase-Separated Biomolecular Condensates" edited by H.-X. Zhou, J.-H.
Spille, and P. Banerjee (expected October 2022) | In: Phase-Separated Biomolecular Condensates, Methods and
Protocols; edited by H.-X. Zhou, J.-H. Spille and P.R. Banerjee, Methods in
Molecular Biology (Springer-Nature), Volume 2563, Chapter 3, pages 51-94
(2022) | 10.1007/978-1-0716-2663-4_3 | null | q-bio.BM cond-mat.soft | http://creativecommons.org/licenses/by/4.0/ | Biomolecular condensates, physically underpinned to a significant extent by
liquid-liquid phase separation (LLPS), are now widely recognized by numerous
experimental studies to be of fundamental biological, biomedical, and
biophysical importance. In the face of experimental discoveries, analytical
formulations emerged as a powerful yet tractable tool in recent theoretical
investigations of the role of LLPS in the assembly and dissociation of these
condensates. The pertinent LLPS often involves, though not exclusively,
intrinsically disordered proteins engaging in multivalent interactions that are
governed by their amino acid sequences. For researchers interested in applying
these theoretical methods, here we provide a practical guide to a set of
computational techniques devised for extracting sequence-dependent LLPS
properties from analytical formulations. The numerical procedures covered
include those for the determination of spinodal and binodal phase boundaries
from a general free energy function with examples based on the random phase
approximation in polymer theory, construction of tie lines for
multiple-component LLPS, and field-theoretic simulation of multiple-chain
heteropolymeric systems using complex Langevin dynamics. Since a more accurate
physical picture often requires comparing analytical theory against
explicit-chain model predictions, a commonly utilized methodology for
coarse-grained molecular dynamics simulations of sequence-specific LLPS is also
briefly outlined.
| [
{
"created": "Thu, 6 Jan 2022 04:56:50 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Aug 2022 02:04:52 GMT",
"version": "v2"
},
{
"created": "Wed, 31 Aug 2022 00:22:16 GMT",
"version": "v3"
}
] | 2022-10-18 | [
[
"Lin",
"Yi-Hsuan",
""
],
[
"Wessén",
"Jonas",
""
],
[
"Pal",
"Tanmoy",
""
],
[
"Das",
"Suman",
""
],
[
"Chan",
"Hue Sun",
""
]
] | Biomolecular condensates, physically underpinned to a significant extent by liquid-liquid phase separation (LLPS), are now widely recognized by numerous experimental studies to be of fundamental biological, biomedical, and biophysical importance. In the face of experimental discoveries, analytical formulations emerged as a powerful yet tractable tool in recent theoretical investigations of the role of LLPS in the assembly and dissociation of these condensates. The pertinent LLPS often involves, though not exclusively, intrinsically disordered proteins engaging in multivalent interactions that are governed by their amino acid sequences. For researchers interested in applying these theoretical methods, here we provide a practical guide to a set of computational techniques devised for extracting sequence-dependent LLPS properties from analytical formulations. The numerical procedures covered include those for the determination of spinodal and binodal phase boundaries from a general free energy function with examples based on the random phase approximation in polymer theory, construction of tie lines for multiple-component LLPS, and field-theoretic simulation of multiple-chain heteropolymeric systems using complex Langevin dynamics. Since a more accurate physical picture often requires comparing analytical theory against explicit-chain model predictions, a commonly utilized methodology for coarse-grained molecular dynamics simulations of sequence-specific LLPS is also briefly outlined. |
q-bio/0505018 | Jeffrey Endelman | Jeffrey B. Endelman, Jesse D. Bloom, Christopher R. Otey, Marco
Landwehr and Frances H. Arnold | Inferring interactions from combinatorial protein libraries | 21 pages, 2 figures | null | null | null | q-bio.BM | null | Proteins created by combinatorial methods in vitro are an important source of
information for understanding sequence-structure-function relationships.
Alignments of folded proteins from combinatorial libraries can be analyzed
using methods developed for naturally occurring proteins, but this neglects the
information contained in the unfolded sequences of the library. We introduce
two algorithms, logistic regression and excess information analysis, that use
both the folded and unfolded sequences and compare them against contingency
table and statistical coupling analysis, which only use the former. The test
set for this benchmark study is a library of fictitious proteins that fold
according to a hypothetical energy model. Of the four methods studied, only
logistic regression is able to correctly recapitulate the energy model from the
sequence alignment. The other algorithms predict spurious interactions between
alignment positions with strong but individual influences on protein stability.
When present in the same protein, stabilizing amino acids tend to lower the
energy below the threshold needed for folding. As a result, their frequencies
in the alignment can be correlated even if the positions do not interact. We
believe any algorithm that neglects the nonlinear relationship between folding
and energy is susceptible to this error.
| [
{
"created": "Mon, 9 May 2005 23:33:04 GMT",
"version": "v1"
},
{
"created": "Thu, 12 May 2005 21:09:00 GMT",
"version": "v2"
},
{
"created": "Fri, 20 May 2005 18:43:37 GMT",
"version": "v3"
},
{
"created": "Mon, 6 Jun 2005 21:52:56 GMT",
"version": "v4"
},
{
"cre... | 2007-05-23 | [
[
"Endelman",
"Jeffrey B.",
""
],
[
"Bloom",
"Jesse D.",
""
],
[
"Otey",
"Christopher R.",
""
],
[
"Landwehr",
"Marco",
""
],
[
"Arnold",
"Frances H.",
""
]
] | Proteins created by combinatorial methods in vitro are an important source of information for understanding sequence-structure-function relationships. Alignments of folded proteins from combinatorial libraries can be analyzed using methods developed for naturally occurring proteins, but this neglects the information contained in the unfolded sequences of the library. We introduce two algorithms, logistic regression and excess information analysis, that use both the folded and unfolded sequences and compare them against contingency table and statistical coupling analysis, which only use the former. The test set for this benchmark study is a library of fictitious proteins that fold according to a hypothetical energy model. Of the four methods studied, only logistic regression is able to correctly recapitulate the energy model from the sequence alignment. The other algorithms predict spurious interactions between alignment positions with strong but individual influences on protein stability. When present in the same protein, stabilizing amino acids tend to lower the energy below the threshold needed for folding. As a result, their frequencies in the alignment can be correlated even if the positions do not interact. We believe any algorithm that neglects the nonlinear relationship between folding and energy is susceptible to this error. |
1010.4477 | Gabor Szederkenyi | Gabor Szederkenyi and Katalin M. Hangos | Finding complex balanced and detailed balanced realizations of chemical
reaction networks | submitted to J. Math. Chem | Journal of Mathematical Chemistry, Volume 49, Number 6, Pages
1163-1179, 2011 | 10.1007/s10910-011-9804-9 | PCRG-03-2010 | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reversibility, weak reversibility and deficiency, detailed and complex
balancing are generally not "encoded" in the kinetic differential equations but
they are realization properties that may imply local or even global asymptotic
stability of the underlying reaction kinetic system when further conditions are
also fulfilled. In this paper, efficient numerical procedures are given for
finding complex balanced or detailed balanced realizations of mass action type
chemical reaction networks or kinetic dynamical systems in the framework of
linear programming. The procedures are illustrated on numerical examples.
| [
{
"created": "Thu, 21 Oct 2010 14:09:32 GMT",
"version": "v1"
}
] | 2011-05-11 | [
[
"Szederkenyi",
"Gabor",
""
],
[
"Hangos",
"Katalin M.",
""
]
] | Reversibility, weak reversibility and deficiency, detailed and complex balancing are generally not "encoded" in the kinetic differential equations but they are realization properties that may imply local or even global asymptotic stability of the underlying reaction kinetic system when further conditions are also fulfilled. In this paper, efficient numerical procedures are given for finding complex balanced or detailed balanced realizations of mass action type chemical reaction networks or kinetic dynamical systems in the framework of linear programming. The procedures are illustrated on numerical examples. |
0711.4208 | Noel Malod-Dognin | Rumen Andonov (IRISA), Nicola Yanev, No\"el Malod-Dognin (IRISA) | Towards Structural Classification of Proteins based on Contact Map
Overlap | null | null | null | null | q-bio.QM | null | A multitude of measures have been proposed to quantify the similarity between
protein 3-D structure. Among these measures, contact map overlap (CMO)
maximization has received sustained attention during the past decade because it offers
a fine estimate of the natural homology relation between proteins. Despite
this large involvement of the bioinformatics and computer science community,
the performance of known algorithms remains modest. Due to the complexity of
the problem, they stall on relatively small instances and are not
applicable to large-scale comparison. This paper offers a clear improvement
over past methods in this respect. We present a new integer programming model
for CMO and propose an exact B&B (branch-and-bound) algorithm with bounds
computed by solving a Lagrangian relaxation. The efficiency of the approach is
demonstrated on a
popular small benchmark (Skolnick set, 40 domains). On this set our algorithm
significantly outperforms the best existing exact algorithms, and yet provides
lower and upper bounds of better quality. Some hard CMO instances have been
solved for the first time and within reasonable time limits. From the values of
the running time and the relative gap (relative difference between upper and
lower bounds), we obtained the right classification for this test. These
encouraging results led us to design a harder benchmark to better assess the
classification capability of our approach. We constructed a large scale set of
300 protein domains (a subset of the ASTRAL database) that we have called Proteus
300. Using the relative gap of any of the 44850 couples as a similarity
measure, we obtained a classification in very good agreement with SCOP. Our
algorithm thus provides a powerful classification tool for large structure
databases.
| [
{
"created": "Tue, 27 Nov 2007 15:36:32 GMT",
"version": "v1"
}
] | 2009-04-20 | [
[
"Andonov",
"Rumen",
"",
"IRISA"
],
[
"Yanev",
"Nicola",
"",
"IRISA"
],
[
"Malod-Dognin",
"Noël",
"",
"IRISA"
]
] | A multitude of measures have been proposed to quantify the similarity between protein 3-D structure. Among these measures, contact map overlap (CMO) maximization has received sustained attention during the past decade because it offers a fine estimate of the natural homology relation between proteins. Despite this large involvement of the bioinformatics and computer science community, the performance of known algorithms remains modest. Due to the complexity of the problem, they stall on relatively small instances and are not applicable to large-scale comparison. This paper offers a clear improvement over past methods in this respect. We present a new integer programming model for CMO and propose an exact B&B (branch-and-bound) algorithm with bounds computed by solving a Lagrangian relaxation. The efficiency of the approach is demonstrated on a popular small benchmark (Skolnick set, 40 domains). On this set our algorithm significantly outperforms the best existing exact algorithms, and yet provides lower and upper bounds of better quality. Some hard CMO instances have been solved for the first time and within reasonable time limits. From the values of the running time and the relative gap (relative difference between upper and lower bounds), we obtained the right classification for this test. These encouraging results led us to design a harder benchmark to better assess the classification capability of our approach. We constructed a large scale set of 300 protein domains (a subset of the ASTRAL database) that we have called Proteus 300. Using the relative gap of any of the 44850 couples as a similarity measure, we obtained a classification in very good agreement with SCOP. Our algorithm thus provides a powerful classification tool for large structure databases. |
1802.06164 | Benjamin Walker | Benjamin Walker, Katherine Newhall | Inferring Information Flow in Spike-train Data Sets using a
Trial-Shuffle Method | 24 pages, 6 figures | null | 10.1371/journal.pone.0206977 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding information processing in the brain requires the ability to
determine the functional connectivity between the different regions of the
brain. We present a method using transfer entropy to extract this flow of
information between brain regions from spike-train data commonly obtained in
neurological experiments. Transfer entropy is a statistical measure based in
information theory that attempts to quantify the information flow from one
process to another, and has been applied to find connectivity in simulated
spike-train data. Due to statistical error in the estimator, inferring
functional connectivity requires a method for determining significance in the
transfer entropy values. We discuss the issues with numerical estimation of
transfer entropy and resulting challenges in determining significance before
presenting the trial-shuffle method as a viable option. The trial-shuffle
method, for spike-train data that is split into multiple trials, determines
significant transfer entropy values independently for each individual pair of
neurons by comparing to a created baseline distribution using a rigorous
statistical test. This is in contrast to either globally comparing all neuron
transfer entropy values or comparing pairwise values to a single baseline
value.
In establishing the viability of this method by comparison to several
alternative approaches in the literature, we find evidence that preserving the
inter-spike-interval timing is important.
We then use the trial-shuffle method to investigate information flow within a
model network as we vary model parameters. This includes investigating the
global flow of information within a connectivity network divided into two
well-connected subnetworks, going beyond local transfer of information between
pairs of neurons.
| [
{
"created": "Sat, 17 Feb 2018 00:32:12 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Oct 2018 02:07:39 GMT",
"version": "v2"
}
] | 2019-03-06 | [
[
"Walker",
"Benjamin",
""
],
[
"Newhall",
"Katherine",
""
]
] | Understanding information processing in the brain requires the ability to determine the functional connectivity between the different regions of the brain. We present a method using transfer entropy to extract this flow of information between brain regions from spike-train data commonly obtained in neurological experiments. Transfer entropy is a statistical measure based in information theory that attempts to quantify the information flow from one process to another, and has been applied to find connectivity in simulated spike-train data. Due to statistical error in the estimator, inferring functional connectivity requires a method for determining significance in the transfer entropy values. We discuss the issues with numerical estimation of transfer entropy and resulting challenges in determining significance before presenting the trial-shuffle method as a viable option. The trial-shuffle method, for spike-train data that is split into multiple trials, determines significant transfer entropy values independently for each individual pair of neurons by comparing to a created baseline distribution using a rigorous statistical test. This is in contrast to either globally comparing all neuron transfer entropy values or comparing pairwise values to a single baseline value. In establishing the viability of this method by comparison to several alternative approaches in the literature, we find evidence that preserving the inter-spike-interval timing is important. We then use the trial-shuffle method to investigate information flow within a model network as we vary model parameters. This includes investigating the global flow of information within a connectivity network divided into two well-connected subnetworks, going beyond local transfer of information between pairs of neurons. |
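As a minimal sketch of the quantity being estimated (not the paper's estimator or its trial-shuffle significance test), a plug-in transfer entropy with history length 1 can be computed from joint counts over binary spike sequences; the variable names and the lagged-copy example are illustrative:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) in bits for binary sequences,
    with history length 1: I(y_{t+1} ; x_t | y_t)."""
    triples = Counter()
    for t in range(len(y) - 1):
        triples[(y[t + 1], y[t], x[t])] += 1
    n = sum(triples.values())
    # Marginal counts needed for the two conditional probabilities.
    yy = Counter()
    yx = Counter()
    ym = Counter()
    for (y1, y0, x0), c in triples.items():
        yy[(y1, y0)] += c
        yx[(y0, x0)] += c
        ym[y0] += c
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p = c / n                      # joint p(y_{t+1}, y_t, x_t)
        te += p * math.log2((c / yx[(y0, x0)]) / (yy[(y1, y0)] / ym[y0]))
    return te

random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                       # y copies x with a one-step lag
print(round(transfer_entropy(x, y), 2), round(transfer_entropy(y, x), 2))
```

For this lagged-copy pair TE(X → Y) approaches 1 bit while TE(Y → X) stays near zero, matching the intuition that information flows from driver to driven; the small positive bias of the plug-in estimator on the null direction is exactly why a significance procedure such as the trial-shuffle method is needed.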
1412.6430 | Chad M. Topaz | Chad M. Topaz, Lori Ziegelmeier, Tom Halverson | Topological Data Analysis of Biological Aggregation Models | 25 pages, 12 figures; second version contains typo corrections, minor
textual additions, and a brief discussion of computational complexity; third
version fixes one typo and adds small paragraph about topological stability | null | 10.1371/journal.pone.0126383 | null | q-bio.QM math.AT nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply tools from topological data analysis to two mathematical models
inspired by biological aggregations such as bird flocks, fish schools, and
insect swarms. Our data consists of numerical simulation output from the models
of Vicsek and D'Orsogna. These models are dynamical systems describing the
movement of agents who interact via alignment, attraction, and/or repulsion.
Each simulation time frame is a point cloud in position-velocity space. We
analyze the topological structure of these point clouds, interpreting the
persistent homology by calculating the first few Betti numbers. These Betti
numbers count connected components, topological circles, and trapped volumes
present in the data. To interpret our results, we introduce a visualization
that displays Betti numbers over simulation time and topological persistence
scale. We compare our topological results to order parameters typically used to
quantify the global behavior of aggregations, such as polarization and angular
momentum. The topological calculations reveal events and structure not captured
by the order parameters.
| [
{
"created": "Fri, 19 Dec 2014 16:44:03 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Feb 2015 19:29:09 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Mar 2015 16:54:37 GMT",
"version": "v3"
}
] | 2018-11-21 | [
[
"Topaz",
"Chad M.",
""
],
[
"Ziegelmeier",
"Lori",
""
],
[
"Halverson",
"Tom",
""
]
] | We apply tools from topological data analysis to two mathematical models inspired by biological aggregations such as bird flocks, fish schools, and insect swarms. Our data consists of numerical simulation output from the models of Vicsek and D'Orsogna. These models are dynamical systems describing the movement of agents who interact via alignment, attraction, and/or repulsion. Each simulation time frame is a point cloud in position-velocity space. We analyze the topological structure of these point clouds, interpreting the persistent homology by calculating the first few Betti numbers. These Betti numbers count connected components, topological circles, and trapped volumes present in the data. To interpret our results, we introduce a visualization that displays Betti numbers over simulation time and topological persistence scale. We compare our topological results to order parameters typically used to quantify the global behavior of aggregations, such as polarization and angular momentum. The topological calculations reveal events and structure not captured by the order parameters. |
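One ingredient of the persistence computation described above, the zeroth Betti number (connected components) of a point cloud at a fixed scale, can be sketched with a union-find over an epsilon-neighborhood graph; this toy is not the authors' pipeline:

```python
import math

def betti0(points, eps):
    """Number of connected components of the graph joining points
    closer than eps (the zeroth Betti number at that scale)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            if math.dist(p, points[j]) < eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two tight "flocks" far apart: one component per flock at a small scale,
# a single component once the scale bridges the gap.
flock_a = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
flock_b = [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
cloud = flock_a + flock_b
print(betti0(cloud, 0.5), betti0(cloud, 20.0))  # -> 2 1
```

Sweeping eps over a range and recording how such counts change is precisely what the persistence-scale axis of the paper's visualization tracks.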
2206.04683 | Domenico Gatti | Gregory Schwing, Luigi L. Palese, Ariel Fern\'andez, Loren Schwiebert,
Domenico L. Gatti | Molecular dynamics without molecules: searching the conformational space
of proteins with generative neural networks | 12 pages, 9 figures, 3 tables | null | null | null | q-bio.QM cs.AI q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | All-atom and coarse-grained molecular dynamics are two widely used
computational tools to study the conformational states of proteins. Yet, these
two simulation methods suffer from the fact that without access to
supercomputing resources, the time and length scales at which these states
become detectable are difficult to achieve. One alternative to such methods is
based on encoding the atomistic trajectory of molecular dynamics as a shorthand
version devoid of physical particles, and then learning to propagate the
encoded trajectory through the use of artificial intelligence. Here we show
that a simple textual representation of the frames of molecular dynamics
trajectories as vectors of Ramachandran basin classes retains most of the
structural information of the full atomistic representation of a protein in
each frame, and can be used to generate equivalent atom-less trajectories
suitable to train different types of generative neural networks. In turn, the
trained generative models can be used to extend indefinitely the atom-less
dynamics or to sample the conformational space of proteins from their
representation in the model's latent space. We intuitively define this
methodology as molecular dynamics without molecules, and show that it makes it
possible to cover physically relevant states of proteins that are difficult to
access with traditional molecular dynamics.
| [
{
"created": "Thu, 9 Jun 2022 02:06:43 GMT",
"version": "v1"
}
] | 2022-06-13 | [
[
"Schwing",
"Gregory",
""
],
[
"Palese",
"Luigi L.",
""
],
[
"Fernández",
"Ariel",
""
],
[
"Schwiebert",
"Loren",
""
],
[
"Gatti",
"Domenico L.",
""
]
] | All-atom and coarse-grained molecular dynamics are two widely used computational tools to study the conformational states of proteins. Yet, these two simulation methods suffer from the fact that without access to supercomputing resources, the time and length scales at which these states become detectable are difficult to achieve. One alternative to such methods is based on encoding the atomistic trajectory of molecular dynamics as a shorthand version devoid of physical particles, and then learning to propagate the encoded trajectory through the use of artificial intelligence. Here we show that a simple textual representation of the frames of molecular dynamics trajectories as vectors of Ramachandran basin classes retains most of the structural information of the full atomistic representation of a protein in each frame, and can be used to generate equivalent atom-less trajectories suitable to train different types of generative neural networks. In turn, the trained generative models can be used to extend indefinitely the atom-less dynamics or to sample the conformational space of proteins from their representation in the model's latent space. We intuitively define this methodology as molecular dynamics without molecules, and show that it makes it possible to cover physically relevant states of proteins that are difficult to access with traditional molecular dynamics. |
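The encoding step described above — turning each trajectory frame into a string of Ramachandran basin classes — can be sketched as below; the basin boundaries and labels are rough illustrative choices, not the authors' actual classification:

```python
def basin(phi, psi):
    """Map backbone dihedrals (degrees) to a coarse Ramachandran basin.
    Boundaries are illustrative approximations only."""
    if phi < 0 and -120 <= psi <= 60:
        return "A"          # right-handed alpha region
    if phi < 0:
        return "B"          # beta / polyproline-II region
    if -60 <= psi <= 90:
        return "L"          # left-handed alpha region (phi >= 0)
    return "O"              # everything else

def encode_frame(dihedrals):
    """One trajectory frame -> atom-less string of basin classes,
    one character per residue."""
    return "".join(basin(phi, psi) for phi, psi in dihedrals)

# A helix-like stretch followed by an extended stretch (toy values):
frame = [(-60, -45), (-63, -42), (-120, 130), (-139, 135), (57, 47)]
print(encode_frame(frame))  # -> "AABBL"
```

A sequence of such strings over simulation time is the kind of "textual trajectory" that a generative sequence model can then be trained to extend.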
1804.08363 | Johan-Owen De Craene | Dimitri Bertazzi (GMGM), Johan-Owen De Craene (GMGM), Sylvie Friant
(GMGM) | Myotubularin MTM1 Involved in Centronuclear Myopathy and its Roles in
Human and Yeast Cells | null | Journal of Molecular and Genetic Medicine, OMICS International,
2015, 08 (02) | 10.4172/1747-0862.1000116 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutations in the MTM1 gene, encoding the phosphoinositide phosphatase
myotubularin, are responsible for the X-linked centronuclear myopathy (XLCNM)
or X-linked myotubular myopathy (XLMTM). The MTM1 gene was first identified in
1996 and its function as a PtdIns3P and PtdIns(,5)P2 phosphatase was discovered
in 2000. In recent years, very important progress has been made to set up good
models to study MTM1 and the XLCNM disease such as knockout or knockin mice,
the Labrador Retriever dog, the zebrafish and the yeast Saccharomyces
cerevisiae. These helped to better understand the cellular function of MTM1 and
of its four conserved domains: PH-GRAM (Pleckstrin
Homology-Glucosyltransferase, Rab-like GTPase Activator and Myotubularin), RID
(Rac1-Induced recruitment Domain), PTP/DSP (Protein Tyrosine
Phosphatase/Dual-Specificity Phosphatase) and SID (SET-protein Interaction
Domain). This review presents the cellular function of human myotubularin MTM1
and its yeast homolog Ymr1, and the role of MTM1 in the
centronuclear myopathy (CNM) disease.
| [
{
"created": "Mon, 23 Apr 2018 12:22:57 GMT",
"version": "v1"
}
] | 2018-04-24 | [
[
"Bertazzi",
"Dimitri",
"",
"GMGM"
],
[
"De Craene",
"Johan-Owen",
"",
"GMGM"
],
[
"Friant",
"Sylvie",
"",
"GMGM"
]
] | Mutations in the MTM1 gene, encoding the phosphoinositide phosphatase myotubularin, are responsible for the X-linked centronuclear myopathy (XLCNM) or X-linked myotubular myopathy (XLMTM). The MTM1 gene was first identified in 1996 and its function as a PtdIns3P and PtdIns(3,5)P2 phosphatase was discovered in 2000. In recent years, very important progress has been made to set up good models to study MTM1 and the XLCNM disease such as knockout or knockin mice, the Labrador Retriever dog, the zebrafish and the yeast Saccharomyces cerevisiae. These helped to better understand the cellular function of MTM1 and of its four conserved domains: PH-GRAM (Pleckstrin Homology-Glucosyltransferase, Rab-like GTPase Activator and Myotubularin), RID (Rac1-Induced recruitment Domain), PTP/DSP (Protein Tyrosine Phosphatase/Dual-Specificity Phosphatase) and SID (SET-protein Interaction Domain). This review presents the cellular function of human myotubularin MTM1 and its yeast homolog Ymr1, and the role of MTM1 in the centronuclear myopathy (CNM) disease. |
1907.01827 | Hideyoshi Yanagisawa | Kazutaka Ueda, Yuki Sakai, Hideyoshi Yanagisawa | Quantitative evaluation of sense of discrepancy to operation response
using event-related potential | Submitted to iDECON2019 | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aimed to develop a method to evaluate the sense of discrepancy to
the operation response quantitatively. We examined the applicability of the
event-related potential (P300), which is considered to reflect attention to
stimulation, to evaluate the sense of discrepancy to the product response to
the user's action. In the experiment using subjective evaluation and P300 to
investigate the sense of discrepancy due to the lack of operation response
(sound and vibration) to the shutter operation of the mirrorless single-lens
camera, it was confirmed that P300 amplitude corresponds to the degree of the
subjective sense of discrepancy. Our results showed that the P300 amplitude
could evaluate the sense of discrepancy to the operation response.
| [
{
"created": "Wed, 3 Jul 2019 10:10:02 GMT",
"version": "v1"
}
] | 2019-07-04 | [
[
"Ueda",
"Kazutaka",
""
],
[
"Sakai",
"Yuki",
""
],
[
"Yanagisawa",
"Hideyoshi",
""
]
] | This study aimed to develop a method to evaluate the sense of discrepancy to the operation response quantitatively. We examined the applicability of the event-related potential (P300), which is considered to reflect attention to stimulation, to evaluate the sense of discrepancy to the product response to the user's action. In the experiment using subjective evaluation and P300 to investigate the sense of discrepancy due to the lack of operation response (sound and vibration) to the shutter operation of the mirrorless single-lens camera, it was confirmed that the P300 amplitude corresponds to the degree of the subjective sense of discrepancy. Our results showed that the P300 amplitude could evaluate the sense of discrepancy to the operation response. |
1902.07268 | Jamila Andoh | Jamila Andoh, Christopher Milde, Martin Diers, Robin Bekrater-Bodmann,
Joerg Trojan, Xaver Fuchs, Susanne Becker, Simon Desch, Herta Flor | Assessment of cortical reorganization and preserved function in phantom
limb pain: a methodological perspective | 21 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phantom limb pain (PLP) has been associated with both the reorganization of
the somatotopic map in primary somatosensory cortex (S1) and preserved S1
function. Here we assessed the nature of the information (sensory, motor) that
reaches S1 and methodological differences in the assessment of cortical
representations that might explain these findings. We used functional magnetic
resonance imaging during a virtual hand motor task, analogous to the classical
mirror box task, in amputees with and without PLP and in matched controls. We
assessed the relationship between task-related activation maxima and PLP
intensity in S1 versus motor (M1) maps. We also measured cortical distances
between the location of activation maxima using region of interest (ROI),
defined from individual or group analyses. Amputees showed significantly
increased activity in M1 and S1 compared to controls, which was not
significantly related to PLP intensity. The location of activation maxima
differed between groups depending on M1 and S1 maps. Neural activity was
positively related to PLP only in amputees with pain and only if a
group-defined ROI was chosen. These findings show that the analysis of changes
in S1 versus M1 yields different results, possibly indicating different
pain-related processes.
| [
{
"created": "Tue, 19 Feb 2019 20:17:22 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2019 14:08:10 GMT",
"version": "v2"
}
] | 2019-03-06 | [
[
"Andoh",
"Jamila",
""
],
[
"Milde",
"Christopher",
""
],
[
"Diers",
"Martin",
""
],
[
"Bekrater-Bodmann",
"Robin",
""
],
[
"Trojan",
"Joerg",
""
],
[
"Fuchs",
"Xaver",
""
],
[
"Becker",
"Susanne",
""
],
... | Phantom limb pain (PLP) has been associated with both the reorganization of the somatotopic map in primary somatosensory cortex (S1) and preserved S1 function. Here we assessed the nature of the information (sensory, motor) that reaches S1 and methodological differences in the assessment of cortical representations that might explain these findings. We used functional magnetic resonance imaging during a virtual hand motor task, analogous to the classical mirror box task, in amputees with and without PLP and in matched controls. We assessed the relationship between task-related activation maxima and PLP intensity in S1 versus motor (M1) maps. We also measured cortical distances between the location of activation maxima using region of interest (ROI), defined from individual or group analyses. Amputees showed significantly increased activity in M1 and S1 compared to controls, which was not significantly related to PLP intensity. The location of activation maxima differed between groups depending on M1 and S1 maps. Neural activity was positively related to PLP only in amputees with pain and only if a group-defined ROI was chosen. These findings show that the analysis of changes in S1 versus M1 yields different results, possibly indicating different pain-related processes. |
1507.03730 | Robert Miller | Robert Miller, Stefan Scherbaum, Daniel W. Heck, Thomas Goschke,
Soeren Enge | On the relation between the (censored) shifted Wald and the Wiener
distribution as measurement models for choice response times | 37 pages, 4 figures, 3 tables | null | 10.1177/0146621617710465 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring processes or constructs from performance data is a major hallmark
of cognitive psychometrics. Particularly, diffusion modeling of response times
(RTs) from correct and erroneous responses using the Wiener distribution has
become a popular measurement tool because it provides a set of psychologically
interpretable parameters. However, an important precondition to identify all of
these parameters is a sufficient number of RTs from erroneous responses. In the
present article, we show by simulation that the parameters of the Wiener
distribution can be recovered from tasks yielding very high or even perfect
response accuracies using the shifted Wald distribution. Specifically, we argue
that error RTs can be modeled as correct RTs that have undergone censoring by
using techniques from parametric survival analysis. We illustrate our reasoning
by fitting the Wiener and (censored) shifted Wald distribution to RTs from six
participants who completed a Go/No-go task. In accordance with our simulations,
diffusion modeling using the Wiener and the shifted Wald distribution yielded
identical parameter estimates when the number of erroneous responses was
predicted to be low. Moreover, the modeling of error RTs as censored correct
RTs substantially improved the recovery of these diffusion parameters when
premature trial timeout was introduced to increase the number of omission
errors. Thus, the censored shifted Wald distribution provides a suitable means
for diffusion modeling in situations when the Wiener distribution cannot be
fitted without parametric constraints.
| [
{
"created": "Tue, 14 Jul 2015 06:21:57 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Feb 2017 15:19:01 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Aug 2017 20:36:31 GMT",
"version": "v3"
}
] | 2017-08-30 | [
[
"Miller",
"Robert",
""
],
[
"Scherbaum",
"Stefan",
""
],
[
"Heck",
"Daniel W.",
""
],
[
"Goschke",
"Thomas",
""
],
[
"Enge",
"Soeren",
""
]
] | Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints. |
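For reference, the shifted Wald density in the drift parametrization common in RT modeling (threshold alpha, drift gamma, shift theta) can be written down and sanity-checked numerically; this is the generic textbook form of the first-passage-time density, not code from the paper:

```python
import math

def shifted_wald_pdf(t, alpha, gamma, theta):
    """Shifted Wald (inverse Gaussian) density for an RT t: first-passage
    time of a unit-variance diffusion with drift gamma to threshold alpha,
    shifted by a non-decision time theta; defined for t > theta."""
    s = t - theta
    if s <= 0:
        return 0.0
    return (alpha / math.sqrt(2 * math.pi * s ** 3)
            * math.exp(-(alpha - gamma * s) ** 2 / (2 * s)))

# Numerical sanity check by midpoint integration: the density integrates
# to ~1 and its mean is theta + alpha/gamma (here 0.2 + 1.5/2.0 = 0.95 s).
alpha, gamma, theta = 1.5, 2.0, 0.2
dt = 2e-4
grid = [theta + (k + 0.5) * dt for k in range(int(15 / dt))]
mass = sum(shifted_wald_pdf(t, alpha, gamma, theta) for t in grid) * dt
mean = sum(t * shifted_wald_pdf(t, alpha, gamma, theta) for t in grid) * dt
print(round(mass, 3), round(mean, 3))
```

Censoring, as discussed in the abstract, amounts to truncating this density at the trial timeout and treating slower responses as omissions rather than as draws from a separate error distribution.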
2209.14297 | Isaiah Kiprono Mutai | Isaiah K. Mutai, Kristof Van Laerhoven, Nancy W. Karuri, Robert K.
Tewo | Using Multivariate Linear Regression for Biochemical Oxygen Demand
Prediction in Waste Water | null | null | null | null | q-bio.OT cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | There exist opportunities for Multivariate Linear Regression (MLR) in the
prediction of Biochemical Oxygen Demand (BOD) in waste water, using the diverse
water quality parameters as the input variables. The goal of this work is to
examine the capability of MLR in prediction of BOD in waste water through four
input variables: Dissolved Oxygen (DO), Nitrogen, Fecal Coliform and Total
Coliform. The four input variables have higher correlation strength to BOD out
of the seven parameters examined for the strength of correlation. Machine
Learning (ML) was done with both 80% and 90% of the data as the training set
and 20% and 10% as the test set respectively. MLR performance was evaluated
through the coefficient of correlation (r), Root Mean Square Error (RMSE) and
the percentage accuracy in prediction of BOD. The performance indices for the
input variables of Dissolved Oxygen, Nitrogen, Fecal Coliform and Total
Coliform in prediction of BOD are: RMSE=6.77mg/L, r=0.60 and accuracy 70.3% for
training dataset of 80% and RMSE=6.74mg/L, r=0.60 and accuracy of 87.5% for
training set of 90% of the dataset. It was found that increasing the percentage
of the training set above 80% of the dataset improved the accuracy of the model
but did not have a significant impact on its prediction capacity. The results
showed that the MLR model could be successfully employed in the
estimation of BOD in waste water using appropriately selected input parameters.
| [
{
"created": "Thu, 8 Sep 2022 14:41:02 GMT",
"version": "v1"
}
] | 2022-09-30 | [
[
"Mutai",
"Isaiah K.",
""
],
[
"Van Laerhoven",
"Kristof",
""
],
[
"Karuri",
"Nancy W.",
""
],
[
"Tewo",
"Robert K.",
""
]
] | There exist opportunities for Multivariate Linear Regression (MLR) in the prediction of Biochemical Oxygen Demand (BOD) in waste water, using the diverse water quality parameters as the input variables. The goal of this work is to examine the capability of MLR in prediction of BOD in waste water through four input variables: Dissolved Oxygen (DO), Nitrogen, Fecal Coliform and Total Coliform. The four input variables have higher correlation strength to BOD out of the seven parameters examined for the strength of correlation. Machine Learning (ML) was done with both 80% and 90% of the data as the training set and 20% and 10% as the test set respectively. MLR performance was evaluated through the coefficient of correlation (r), Root Mean Square Error (RMSE) and the percentage accuracy in prediction of BOD. The performance indices for the input variables of Dissolved Oxygen, Nitrogen, Fecal Coliform and Total Coliform in prediction of BOD are: RMSE=6.77mg/L, r=0.60 and accuracy 70.3% for training dataset of 80% and RMSE=6.74mg/L, r=0.60 and accuracy of 87.5% for training set of 90% of the dataset. It was found that increasing the percentage of the training set above 80% of the dataset improved the accuracy of the model but did not have a significant impact on its prediction capacity. The results showed that the MLR model could be successfully employed in the estimation of BOD in waste water using appropriately selected input parameters. |
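The MLR fit itself — coefficients from the normal equations, with an 80/20 train/test split and RMSE on the held-out portion — can be sketched in pure Python; the data below are synthetic with two illustrative predictors, not the study's water-quality measurements:

```python
import random

def fit_mlr(X, y):
    """Least-squares coefficients (intercept first) via the normal
    equations X'X b = X'y, solved by Gaussian elimination with pivoting."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                           # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):                 # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

random.seed(2)
# Synthetic "BOD" depending linearly on two predictors plus noise.
X = [(random.uniform(0, 10), random.uniform(0, 5)) for _ in range(200)]
y = [3.0 - 0.8 * do + 1.2 * n + random.gauss(0, 0.1) for do, n in X]
coef = fit_mlr(X[:160], y[:160])                 # 80/20 train/test split
rmse = (sum((y[i] - (coef[0] + coef[1] * X[i][0] + coef[2] * X[i][1])) ** 2
            for i in range(160, 200)) / 40) ** 0.5
print([round(c, 2) for c in coef], round(rmse, 2))
```

The recovered coefficients match the generating weights and the held-out RMSE matches the noise level, which is the kind of evaluation (RMSE, r, accuracy on a test split) the study performs.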
q-bio/0310007 | Anna Chernova | Anna A. Chernova, Judith P. Armitage, Helen L. Packer, Philip K. Maini | Response kinetics of tethered bacteria to stepwise changes in nutrient
concentration | 11 pages, 7 figures, in press in BioSystems | null | 10.1016/S0303-2647(03)00109-6 | null | q-bio.QM | null | We examined the changes in swimming behaviour of the bacterium Rhodobacter
sphaeroides in response to stepwise changes in a nutrient (propionate),
following the prestimulus motion, the initial response and the adaptation to
the sustained concentration of the chemical. This was carried out by tethering
motile cells by their flagella to glass slides and following the rotational
behaviour of their cell bodies in response to the nutrient change. Computerised
motion analysis was used to analyse the behaviour. Distributions of run and
stop times were obtained from rotation data for tethered cells. Exponential and
Weibull fits for these distributions, and variability in individual responses
are discussed. In terms of parameters derived from the run and stop time
distributions, we compare the responses to stepwise changes in the nutrient
concentration and the long-term behaviour of 84 cells under twelve propionate
concentration levels from 1 nM to 25 mM. We discuss traditional assumptions for
the random walk approximation to bacterial swimming and compare them with the
observed R. sphaeroides motile behaviour.
| [
{
"created": "Thu, 9 Oct 2003 12:26:48 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Chernova",
"Anna A.",
""
],
[
"Armitage",
"Judith P.",
""
],
[
"Packer",
"Helen L.",
""
],
[
"Maini",
"Philip K.",
""
]
] | We examined the changes in swimming behaviour of the bacterium Rhodobacter sphaeroides in response to stepwise changes in a nutrient (propionate), following the prestimulus motion, the initial response and the adaptation to the sustained concentration of the chemical. This was carried out by tethering motile cells by their flagella to glass slides and following the rotational behaviour of their cell bodies in response to the nutrient change. Computerised motion analysis was used to analyse the behaviour. Distributions of run and stop times were obtained from rotation data for tethered cells. Exponential and Weibull fits for these distributions, and variability in individual responses are discussed. In terms of parameters derived from the run and stop time distributions, we compare the responses to stepwise changes in the nutrient concentration and the long-term behaviour of 84 cells under twelve propionate concentration levels from 1 nM to 25 mM. We discuss traditional assumptions for the random walk approximation to bacterial swimming and compare them with the observed R. sphaeroides motile behaviour. |
1804.03655 | Manish Sreenivasa | Manish Sreenivasa and Daniel Gonzalez-Alvarado | High-resolution computer meshes of the lower body bones of an adult
human female derived from CT images | null | null | 10.5281/zenodo.889060 | null | q-bio.TO cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background Computer-based geometrical meshes of bones are important for
applications in computational biomechanics as well as clinical software. There
is however a lack of freely available detailed bone meshes, especially related
to the human female morphology.
Methods & Results We provide high resolution bone meshes of the lower body,
derived from CT images of a 59 year old female cadaver that were sourced from
the Visible Human Data Set, Visible Human Project (NIH, USA). Important bone
landmarks and joint rotation axes are identified from the extracted meshes. A
script-based framework is developed to provide a graphical user interface that
can visualize, resample and modify the meshes to fit different subject scales.
Conclusion This open-data resource fills a gap in available data and is
provided for free usage in research and other applications. The associated
scripts allow users to easily transform the meshes to different laboratory and
software setups. This resource may be accessed through the following web link:
https://github.com/manishsreenivasa/BMFToolkit
This document is the author's version of this article.
| [
{
"created": "Tue, 10 Apr 2018 08:52:18 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jul 2018 05:07:37 GMT",
"version": "v2"
}
] | 2018-07-16 | [
[
"Sreenivasa",
"Manish",
""
],
[
"Gonzalez-Alvarado",
"Daniel",
""
]
] | Background Computer-based geometrical meshes of bones are important for applications in computational biomechanics as well as clinical software. There is however a lack of freely available detailed bone meshes, especially related to the human female morphology. Methods & Results We provide high resolution bone meshes of the lower body, derived from CT images of a 59 year old female cadaver that were sourced from the Visible Human Data Set, Visible Human Project (NIH, USA). Important bone landmarks and joint rotation axes are identified from the extracted meshes. A script-based framework is developed to provide a graphical user interface that can visualize, resample and modify the meshes to fit different subject scales. Conclusion This open-data resource fills a gap in available data and is provided for free usage in research and other applications. The associated scripts allow users to easily transform the meshes to different laboratory and software setups. This resource may be accessed through the following web link: https://github.com/manishsreenivasa/BMFToolkit This document is the author's version of this article. |
2003.13755 | Connor Coley | Connor W. Coley, Natalie S. Eyke, Klavs F. Jensen | Autonomous discovery in the chemical sciences part II: Outlook | Revised version available at 10.1002/anie.201909989 | null | 10.1002/anie.201909989 | null | q-bio.QM cs.AI cs.RO stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This two-part review examines how automation has contributed to different
aspects of discovery in the chemical sciences. In this second part, we reflect
on a selection of exemplary studies. It is increasingly important to articulate
what the role of automation and computation has been in the scientific process
and how that has or has not accelerated discovery. One can argue that even the
best automated systems have yet to ``discover'' despite being incredibly useful
as laboratory assistants. We must carefully consider how they have been and can
be applied to future problems of chemical discovery in order to effectively
design and interact with future autonomous platforms.
The majority of this article defines a large set of open research directions,
including improving our ability to work with complex data, build empirical
models, automate both physical and computational experiments for validation,
select experiments, and evaluate whether we are making progress toward the
ultimate goal of autonomous discovery. Addressing these practical and
methodological challenges will greatly advance the extent to which autonomous
systems can make meaningful discoveries.
| [
{
"created": "Mon, 30 Mar 2020 19:11:35 GMT",
"version": "v1"
}
] | 2020-04-01 | [
[
"Coley",
"Connor W.",
""
],
[
"Eyke",
"Natalie S.",
""
],
[
"Jensen",
"Klavs F.",
""
]
] | This two-part review examines how automation has contributed to different aspects of discovery in the chemical sciences. In this second part, we reflect on a selection of exemplary studies. It is increasingly important to articulate what the role of automation and computation has been in the scientific process and how that has or has not accelerated discovery. One can argue that even the best automated systems have yet to ``discover'' despite being incredibly useful as laboratory assistants. We must carefully consider how they have been and can be applied to future problems of chemical discovery in order to effectively design and interact with future autonomous platforms. The majority of this article defines a large set of open research directions, including improving our ability to work with complex data, build empirical models, automate both physical and computational experiments for validation, select experiments, and evaluate whether we are making progress toward the ultimate goal of autonomous discovery. Addressing these practical and methodological challenges will greatly advance the extent to which autonomous systems can make meaningful discoveries. |
1312.1911 | Luca Ciandrini | Luca Ciandrini, M. Carmen Romano and A. Parmeggiani | Stepping and crowding of molecular motors: statistical kinetics from an
exclusion process perspective | 9 pages, 6 figures, 2 supplementary figures | null | 10.1016/j.bpj.2014.07.012 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motor enzymes are remarkable molecular machines that use the energy derived
from the hydrolysis of a nucleoside triphosphate to generate mechanical
movement, achieved through different steps that constitute their kinetic cycle.
These macromolecules, nowadays investigated with advanced experimental
techniques to unveil their molecular mechanisms and the properties of their
kinetic cycles, are implicated in many biological processes, ranging from
biopolymerisation (e.g. RNA polymerases and ribosomes) to intracellular
transport (motor proteins such as kinesins or dyneins). Although the kinetics
of individual motors is well studied on both theoretical and experimental
grounds, the repercussions of their stepping cycle on the collective dynamics
still remain unclear. Advances in this direction will improve our
comprehension of transport processes in the natural intracellular medium, where
processive motor enzymes might operate in crowded conditions. In this work, we
therefore extend the current statistical kinetic analysis to study collective
transport phenomena of motors in terms of lattice gas models belonging to the
exclusion process class. Via numerical simulations, we show how to interpret
and use the randomness calculated from single particle trajectories in crowded
conditions. Importantly, we also show that time fluctuations and non-Poissonian
behavior are intrinsically related to spatial correlations and the emergence of
large, but finite, clusters of co-moving motors. The properties unveiled by our
analysis have important biological implications on the collective transport
characteristics of processive motor enzymes in crowded conditions.
| [
{
"created": "Fri, 6 Dec 2013 16:31:38 GMT",
"version": "v1"
},
{
"created": "Mon, 26 May 2014 17:59:25 GMT",
"version": "v2"
}
] | 2014-09-05 | [
[
"Ciandrini",
"Luca",
""
],
[
"Romano",
"M. Carmen",
""
],
[
"Parmeggiani",
"A.",
""
]
] | Motor enzymes are remarkable molecular machines that use the energy derived from the hydrolysis of a nucleoside triphosphate to generate mechanical movement, achieved through different steps that constitute their kinetic cycle. These macromolecules, nowadays investigated with advanced experimental techniques to unveil their molecular mechanisms and the properties of their kinetic cycles, are implicated in many biological processes, ranging from biopolymerisation (e.g. RNA polymerases and ribosomes) to intracellular transport (motor proteins such as kinesins or dyneins). Although the kinetics of individual motors is well studied on both theoretical and experimental grounds, the repercussions of their stepping cycle on the collective dynamics still remain unclear. Advances in this direction will improve our comprehension of transport processes in the natural intracellular medium, where processive motor enzymes might operate in crowded conditions. In this work, we therefore extend the current statistical kinetic analysis to study collective transport phenomena of motors in terms of lattice gas models belonging to the exclusion process class. Via numerical simulations, we show how to interpret and use the randomness calculated from single particle trajectories in crowded conditions. Importantly, we also show that time fluctuations and non-Poissonian behavior are intrinsically related to spatial correlations and the emergence of large, but finite, clusters of co-moving motors. The properties unveiled by our analysis have important biological implications on the collective transport characteristics of processive motor enzymes in crowded conditions. |
1708.07352 | Yulin Wang | Yulin Wang, Na Lu, Hongyu Miao | Structural Identifiability of Cyclic Graphical Models of Biological
Networks with Latent Variables | null | null | null | null | q-bio.MN stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient structural identifiability analysis algorithm is developed in
this study for a broad range of network structures. The proposed method adopts
Wright's path coefficient method to generate identifiability equations in
forms of symbolic polynomials, and then converts these symbolic equations to
binary matrices (called identifiability matrix). Several matrix operations are
introduced for identifiability matrix reduction with system equivalency
maintained. Based on the reduced identifiability matrices, the structural
identifiability of each parameter is determined. A number of benchmark models
are used to verify the validity of the proposed approach. Finally, the network
module for influenza A virus replication is employed as a real example to
illustrate the application of the proposed approach in practice. The proposed
approach can deal with cyclic networks with latent variables. The key advantage
is that it intentionally avoids symbolic computation and is thus highly
efficient. Also, this method is capable of determining the identifiability of
each single parameter and is thus of higher resolution in comparison with many
existing approaches. Overall, this study provides a basis for systematic
examination and refinement of graphical models of biological networks from the
identifiability point of view, and it has a significant potential to be
extended to more complex network structures or high-dimensional systems.
| [
{
"created": "Thu, 24 Aug 2017 10:55:55 GMT",
"version": "v1"
}
] | 2017-08-25 | [
[
"Wang",
"Yulin",
""
],
[
"Lu",
"Na",
""
],
[
"Miao",
"Hongyu",
""
]
] | An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in forms of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrix). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems. |
2302.03449 | Florinda Capone Prof. | Florinda Capone and Roberta De Luca and Isabella Torcicollo | Turing patterns in a Leslie-Gower predator prey model | null | null | null | null | q-bio.PE math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reaction-diffusion Leslie-Gower predator-prey model, incorporating the fear
effect and prey refuge, with Beddington-DeAngelis functional response, is
introduced. A qualitative analysis of the solutions of the model and the
stability analysis of the coexistence equilibrium are performed. Sufficient
conditions guaranteeing the occurrence of Turing instability have been
determined either in the case of self-diffusion or in the case of
cross-diffusion. Different types of Turing patterns, representing a spatial
redistribution of population in the environment, emerge for different values of
the model parameters.
| [
{
"created": "Tue, 7 Feb 2023 13:06:15 GMT",
"version": "v1"
}
] | 2023-02-08 | [
[
"Capone",
"Florinda",
""
],
[
"De Luca",
"Roberta",
""
],
[
"Torcicollo",
"Isabella",
""
]
] | A reaction-diffusion Leslie-Gower predator-prey model, incorporating the fear effect and prey refuge, with Beddington-DeAngelis functional response, is introduced. A qualitative analysis of the solutions of the model and the stability analysis of the coexistence equilibrium are performed. Sufficient conditions guaranteeing the occurrence of Turing instability have been determined either in the case of self-diffusion or in the case of cross-diffusion. Different types of Turing patterns, representing a spatial redistribution of population in the environment, emerge for different values of the model parameters. |
0712.2654 | Janusz Szwabi\'nski | A. P\c{e}kalski (1), J. Szwabi\'nski (1 and 2), I. Bena (2) and M.
Droz (2) ((1) Institute of Theoretical Physics, University of Wroc{\l}aw,
Wroc{\l}aw, Poland (2) D\'epartement de Physique Th\'eorique, Universit\'e de
Gen\`eve, Gen\`eve, Switzerland) | Extinction risk and structure of a food web model | 9 pages, 15 figures | null | 10.1103/PhysRevE.77.031917 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | We investigate in detail the model of a trophic web proposed by Amaral and
Meyer [Phys. Rev. Lett. 82, 652 (1999)]. We focused on small-size systems that
are relevant for real biological food webs and for which the fluctuations are
playing an important role. We show, using Monte Carlo simulations, that such
webs can be non-viable, leading to extinction of all species in small and/or
weakly coupled systems. Estimations of the extinction times and survival
chances are also given. We show that before the extinction the fraction of
highly-connected species ("omnivores") is increasing. Viable food webs exhibit
a pyramidal structure, where the density of occupied niches is higher at lower
trophic levels, and moreover the occupations of adjacent levels are closely
correlated. We also demonstrate that the distribution of the lengths of food
chains has an exponential character and changes weakly with the parameters of
the model. On the contrary, the distribution of avalanche sizes of the extinct
species depends strongly on the connectedness of the web. For rather loosely
connected systems we recover the power-law type of behavior with the same
exponent as found in earlier studies, while for densely-connected webs the
distribution is not of a power-law type.
| [
{
"created": "Mon, 17 Dec 2007 09:01:49 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Pȩkalski",
"A.",
"",
"1 and 2"
],
[
"Szwabiński",
"J.",
"",
"1 and 2"
],
[
"Bena",
"I.",
""
],
[
"Droz",
"M.",
""
]
] | We investigate in detail the model of a trophic web proposed by Amaral and Meyer [Phys. Rev. Lett. 82, 652 (1999)]. We focused on small-size systems that are relevant for real biological food webs and for which the fluctuations are playing an important role. We show, using Monte Carlo simulations, that such webs can be non-viable, leading to extinction of all species in small and/or weakly coupled systems. Estimations of the extinction times and survival chances are also given. We show that before the extinction the fraction of highly-connected species ("omnivores") is increasing. Viable food webs exhibit a pyramidal structure, where the density of occupied niches is higher at lower trophic levels, and moreover the occupations of adjacent levels are closely correlated. We also demonstrate that the distribution of the lengths of food chains has an exponential character and changes weakly with the parameters of the model. On the contrary, the distribution of avalanche sizes of the extinct species depends strongly on the connectedness of the web. For rather loosely connected systems we recover the power-law type of behavior with the same exponent as found in earlier studies, while for densely-connected webs the distribution is not of a power-law type. |
1403.1417 | Eva Balsa-Canto | Oana-Teodora Chis, Julio R. Banga and Eva Balsa-Canto | Sloppy models can be identifiable | null | null | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic models of biochemical networks typically consist of sets of
non-linear ordinary differential equations involving states (concentrations or
amounts of the components of the network) and parameters describing the
reaction kinetics. Unfortunately, in most cases the parameters are completely
unknown or only rough estimates of their values are available. Therefore, their
values must be estimated from time-series experimental data.
In recent years, it has been suggested that dynamic systems biology models
are universally sloppy so their parameters cannot be uniquely estimated. In
this work, we re-examine this concept, establishing links with the notions of
identifiability and experimental design. Further, considering a set of
examples, we address the following fundamental questions: i) is sloppiness
inherent to model structure?; ii) is sloppiness influenced by experimental data
or noise?; iii) does sloppiness mean that parameters cannot be identified?, and
iv) can sloppiness be modified by experimental design?
Our results indicate that sloppiness is not equivalent to lack of structural
or practical identifiability (although they can be related), so sloppy models
can be identifiable. Therefore, drawing conclusions about the possibility of
estimating unique parameter values by sloppiness analysis can be misleading.
Checking structural and practical identifiability analyses is a better approach
to assess the uniqueness and confidence in parameter estimation.
| [
{
"created": "Thu, 6 Mar 2014 11:45:59 GMT",
"version": "v1"
}
] | 2014-03-07 | [
[
"Chis",
"Oana-Teodora",
""
],
[
"Banga",
"Julio R.",
""
],
[
"Balsa-Canto",
"Eva",
""
]
] | Dynamic models of biochemical networks typically consist of sets of non-linear ordinary differential equations involving states (concentrations or amounts of the components of the network) and parameters describing the reaction kinetics. Unfortunately, in most cases the parameters are completely unknown or only rough estimates of their values are available. Therefore, their values must be estimated from time-series experimental data. In recent years, it has been suggested that dynamic systems biology models are universally sloppy so their parameters cannot be uniquely estimated. In this work, we re-examine this concept, establishing links with the notions of identifiability and experimental design. Further, considering a set of examples, we address the following fundamental questions: i) is sloppiness inherent to model structure?; ii) is sloppiness influenced by experimental data or noise?; iii) does sloppiness mean that parameters cannot be identified?, and iv) can sloppiness be modified by experimental design? Our results indicate that sloppiness is not equivalent to lack of structural or practical identifiability (although they can be related), so sloppy models can be identifiable. Therefore, drawing conclusions about the possibility of estimating unique parameter values by sloppiness analysis can be misleading. Checking structural and practical identifiability analyses is a better approach to assess the uniqueness and confidence in parameter estimation. |
2402.04499 | Mike Steel Prof. | Fran\c{c}ois Bienvenu and Mike Steel | 0-1 laws for pattern occurrences in phylogenetic trees and networks | 14 pages 2 figures | Bull. Math. Biol. 86, 94 (2024) | 10.1007/s11538-024-01316-x | null | q-bio.PE math.CO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In a recent paper, the question of determining the fraction of binary trees
that contain a fixed pattern known as the snowflake was posed. We show that
this fraction goes to 1, providing two very different proofs: a purely
combinatorial one that is quantitative and specific to this problem; and a
proof using branching process techniques that is less explicit, but also much
more general, as it applies to any fixed patterns and can be extended to other
trees and networks. In particular, it follows immediately from our second proof
that the fraction of $d$-ary trees (resp. level-$k$ networks) that contain a
fixed $d$-ary tree (resp. level-$k$ network) tends to $1$ as the number of
leaves grows.
| [
{
"created": "Wed, 7 Feb 2024 01:06:42 GMT",
"version": "v1"
},
{
"created": "Sat, 25 May 2024 20:54:02 GMT",
"version": "v2"
}
] | 2024-07-30 | [
[
"Bienvenu",
"François",
""
],
[
"Steel",
"Mike",
""
]
] | In a recent paper, the question of determining the fraction of binary trees that contain a fixed pattern known as the snowflake was posed. We show that this fraction goes to 1, providing two very different proofs: a purely combinatorial one that is quantitative and specific to this problem; and a proof using branching process techniques that is less explicit, but also much more general, as it applies to any fixed patterns and can be extended to other trees and networks. In particular, it follows immediately from our second proof that the fraction of $d$-ary trees (resp. level-$k$ networks) that contain a fixed $d$-ary tree (resp. level-$k$ network) tends to $1$ as the number of leaves grows. |
2403.08044 | Tiash Rana Mukherjee | Tiash Rana Mukherjee, Oshin Tyagi, Jingkun Wang, John Kang, Ranjana
Mehta | Neural, Muscular, and Perceptual responses with shoulder exoskeleton use
over Days | Poster Abstract, Submitted to Neuroergonomics Conference and NYC
Neuromodulation Conferences, July 28 to 31, 2022 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Passive shoulder exoskeletons have been widely introduced in the industry to
aid upper extremity movements during repetitive overhead work. As an ergonomic
intervention, it is important to understand how users adapt to these devices
over time and if these induce external stress while working. The study
evaluated the use of an exoskeleton over a period of 3 days by assessing the
neural, physiological, and perceptual responses of twenty-four participants by
comparing a physical task against the same task with an additional cognitive
workload. Over days, adaptation to the task was identified irrespective of
task and group. Electromyography (EMG) analysis of shoulder and back muscles
reveals lower muscle activity in the exoskeleton group irrespective of task.
Functional connectivity analysis using functional near infrared spectroscopy
(fNIRS) reveals that exoskeletons benefit users by reducing task demands in the
motor planning and execution regions. Sex-based differences were also
identified in these neuromuscular assessments.
| [
{
"created": "Tue, 12 Mar 2024 19:36:43 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Mukherjee",
"Tiash Rana",
""
],
[
"Tyagi",
"Oshin",
""
],
[
"Wang",
"Jingkun",
""
],
[
"Kang",
"John",
""
],
[
"Mehta",
"Ranjana",
""
]
] | Passive shoulder exoskeletons have been widely introduced in the industry to aid upper extremity movements during repetitive overhead work. As an ergonomic intervention, it is important to understand how users adapt to these devices over time and if these induce external stress while working. The study evaluated the use of an exoskeleton over a period of 3 days by assessing the neural, physiological, and perceptual responses of twenty-four participants by comparing a physical task against the same task with an additional cognitive workload. Over days, adaptation to the task was identified irrespective of task and group. Electromyography (EMG) analysis of shoulder and back muscles reveals lower muscle activity in the exoskeleton group irrespective of task. Functional connectivity analysis using functional near infrared spectroscopy (fNIRS) reveals that exoskeletons benefit users by reducing task demands in the motor planning and execution regions. Sex-based differences were also identified in these neuromuscular assessments. |
1003.1600 | Haiyan Wang | Haiyan Wang and Carlos Castillo-Chavez | Spreading speeds and traveling waves for non-cooperative
integro-difference systems | A number of changes have been made to improve the presentation, in
particular, in Section 5 | null | null | null | q-bio.PE math.CA math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of macroscopic descriptions for the joint dynamics and
behavior of large heterogeneous ensembles subject to ecological forces like
dispersal remains a central challenge for mathematicians and biological
scientists alike. Over the past century, specific attention has been directed
to the role played by dispersal in shaping plant communities, or on the
dynamics of marine open-ocean and intertidal systems, or on biological
invasions, or on the spread of disease, to name a few. Mathematicians and
theoreticians, starting with the efforts of researchers that include Aronson,
Fisher, Kolmogorov, Levin, Okubo, Skellam, Slobodkin, Weinberger and many
others, set the foundation of a fertile area of research at the interface of
ecology, mathematics, population biology and evolutionary biology.
Integrodifference systems, the subject of this manuscript, arise naturally in
the study of the spatial dispersal of organisms whose local population dynamics
are prescribed by models with discrete generations. The brunt of the
mathematical research has focused on the study of the existence of traveling wave
solutions and characterizations of the spreading speed, particularly in the
context of cooperative systems. In this paper, we characterize the spreading
speed for a large class of non-cooperative systems, all formulated in terms of
integrodifference equations, by the convergence of initial data to wave
solutions. In this setting, the spreading speed is characterized as the slowest
speed of a family of non-constant traveling wave solutions. Our results are
applied to a specific non-cooperative competition system in detail.
| [
{
"created": "Mon, 8 Mar 2010 11:21:30 GMT",
"version": "v1"
},
{
"created": "Thu, 5 May 2011 18:31:58 GMT",
"version": "v2"
}
] | 2011-05-06 | [
[
"Wang",
"Haiyan",
""
],
[
"Castillo-Chavez",
"Carlos",
""
]
] | The development of macroscopic descriptions for the joint dynamics and behavior of large heterogeneous ensembles subject to ecological forces like dispersal remains a central challenge for mathematicians and biological scientists alike. Over the past century, specific attention has been directed to the role played by dispersal in shaping plant communities, or on the dynamics of marine open-ocean and intertidal systems, or on biological invasions, or on the spread of disease, to name a few. Mathematicians and theoreticians, starting with the efforts of researchers that include Aronson, Fisher, Kolmogorov, Levin, Okubo, Skellam, Slobodkin, Weinberger and many others, set the foundation of a fertile area of research at the interface of ecology, mathematics, population biology and evolutionary biology. Integrodifference systems, the subject of this manuscript, arise naturally in the study of the spatial dispersal of organisms whose local population dynamics are prescribed by models with discrete generations. The brunt of the mathematical research has focused on the study of the existence of traveling wave solutions and characterizations of the spreading speed, particularly in the context of cooperative systems. In this paper, we characterize the spreading speed for a large class of non-cooperative systems, all formulated in terms of integrodifference equations, by the convergence of initial data to wave solutions. In this setting, the spreading speed is characterized as the slowest speed of a family of non-constant traveling wave solutions. Our results are applied to a specific non-cooperative competition system in detail. |
2007.06762 | Armita Nourmohammad | Zachary Montague, Huibin Lv, Jakub Otwinowski, William S. DeWitt,
Giulio Isacchini, Garrick K. Yip, Wilson W. Ng, Owen Tak-Yin Tsang, Meng
Yuan, Hejun Liu, Ian A. Wilson, J. S. Malik Peiris, Nicholas C. Wu, Armita
Nourmohammad, Chris Ka Pun Mok | Dynamics of B-cell repertoires and emergence of cross-reactive responses
in COVID-19 patients with different disease severity | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 patients show varying severity of the disease ranging from
asymptomatic to requiring intensive care. Although a number of SARS-CoV-2
specific monoclonal antibodies have been identified, we still lack an
understanding of the overall landscape of B-cell receptor (BCR) repertoires in
COVID-19 patients. Here, we used high-throughput sequencing of bulk and plasma
B-cells collected over multiple time points during infection to characterize
signatures of B-cell response to SARS-CoV-2 in 19 patients. Using principled
statistical approaches, we determined differential features of BCRs associated
with different disease severity. We identified 38 significantly expanded clonal
lineages shared among patients as candidates for specific responses to
SARS-CoV-2. Using single-cell sequencing, we verified reactivity of BCRs shared
among individuals to SARS-CoV-2 epitopes. Moreover, we identified natural
emergence of a BCR with cross-reactivity to SARS-CoV-1 and SARS-CoV-2 in a
number of patients. Our results provide important insights for development of
rational therapies and vaccines against COVID-19.
| [
{
"created": "Tue, 14 Jul 2020 01:41:42 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Apr 2021 01:02:41 GMT",
"version": "v2"
}
] | 2021-04-07 | [
[
"Montague",
"Zachary",
""
],
[
"Lv",
"Huibin",
""
],
[
"Otwinowski",
"Jakub",
""
],
[
"DeWitt",
"William S.",
""
],
[
"Isacchini",
"Giulio",
""
],
[
"Yip",
"Garrick K.",
""
],
[
"Ng",
"Wilson W.",
""
],
... | COVID-19 patients show varying severity of the disease ranging from asymptomatic to requiring intensive care. Although a number of SARS-CoV-2 specific monoclonal antibodies have been identified, we still lack an understanding of the overall landscape of B-cell receptor (BCR) repertoires in COVID-19 patients. Here, we used high-throughput sequencing of bulk and plasma B-cells collected over multiple time points during infection to characterize signatures of B-cell response to SARS-CoV-2 in 19 patients. Using principled statistical approaches, we determined differential features of BCRs associated with different disease severity. We identified 38 significantly expanded clonal lineages shared among patients as candidates for specific responses to SARS-CoV-2. Using single-cell sequencing, we verified reactivity of BCRs shared among individuals to SARS-CoV-2 epitopes. Moreover, we identified natural emergence of a BCR with cross-reactivity to SARS-CoV-1 and SARS-CoV-2 in a number of patients. Our results provide important insights for development of rational therapies and vaccines against COVID-19. |
1802.00858 | Fernando Vericat | Alejandro M. Mes\'on, C. Manuel Carlevaro and Fernando Vericat | Hierarchical evolutive systems, fuzzy categories and the living single
cell | 39 pages, 9 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, the theory of hierarchical evolutive systems of Ehresmann
and Vandremeersch [Bull. Math. Bio. 49, 13-50 (1987)] is improved by
considering the categories of the theory as fuzzy sets whose elements are the
composite objects formed by the arrows and corresponding vertices of their
embedded graphs. This way each category can be represented as a point in the
state space [0,1]**N. The introduction of a diffeomorphism that acts in this
context as a functor between categories, allows one to define a measure-preserving
dynamical system. In particular, we apply this formalism to describe a living
single cell. We propose for its state at a given time a hierarchical category
with three levels (molecular, coarse-grained and cellular levels) related by
adequate colimits. Each level involves the main functional and structural
modules in which the cell can be partitioned. The time evolution of the cell is
driven by a transformation which is an N-dimensional generalization of the
Ricker map, whose parameters we propose to determine by requiring that, as a
hallmark of its behavior, the living cell evolve at the edge of chaos. From
the dynamical point of view this property manifests in the fact that the
largest Lyapunov exponent is equal to zero. Since in a rather complete model of
the living cell the huge number of involved parameters can make the
calculations a hard task, we also propose a toy model, with fewer parameters to
be determined, which emphasizes the cellular fission.
| [
{
"created": "Wed, 31 Jan 2018 17:29:48 GMT",
"version": "v1"
}
] | 2018-02-06 | [
[
"Mesón",
"Alejandro M.",
""
],
[
"Carlevaro",
"C. Manuel",
""
],
[
"Vericat",
"Fernando",
""
]
] | In this article, the theory of hierarchical evolutive systems of Ehresmann and Vandremeersch [Bull. Math. Bio. 49, 13-50 (1987)] is improved by considering the categories of the theory as fuzzy sets whose elements are the composite objects formed by the arrows and corresponding vertices of their embedded graphs. This way each category can be represented as a point in the state space [0,1]**N. The introduction of a diffeomorphism that acts in this context as a functor between categories, allows one to define a measure-preserving dynamical system. In particular, we apply this formalism to describe a living single cell. We propose for its state at a given time a hierarchical category with three levels (molecular, coarse-grained and cellular levels) related by adequate colimits. Each level involves the main functional and structural modules in which the cell can be partitioned. The time evolution of the cell is driven by a transformation which is an N-dimensional generalization of the Ricker map, whose parameters we propose to determine by requiring that, as a hallmark of its behavior, the living cell evolve at the edge of chaos. From the dynamical point of view this property manifests in the fact that the largest Lyapunov exponent is equal to zero. Since in a rather complete model of the living cell the huge number of involved parameters can make the calculations a hard task, we also propose a toy model, with fewer parameters to be determined, which emphasizes the cellular fission. |
2401.01004 | Lang Van Tran Dr. | Do Hoang Tu, Tran Van Lang, Pham Cong Xuyen, Le Mau Long | Predicting the activity of chemical compounds based on machine learning
approaches | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Exploring methods and techniques of machine learning (ML) to address specific
challenges in various fields is essential. In this work, we tackle a problem in
the domain of Cheminformatics; that is, providing a suitable solution to aid in
predicting the activity of a chemical compound to the best extent possible. To
address the problem at hand, this study conducts experiments on 100 different
combinations of existing techniques. These solutions are then selected based on
a set of criteria that includes the G-means, F1-score, and AUC metrics. The
results have been tested on a dataset of about 10,000 chemical compounds from
PubChem that have been classified according to their activity.
| [
{
"created": "Sun, 10 Sep 2023 17:20:45 GMT",
"version": "v1"
}
] | 2024-01-03 | [
[
"Tu",
"Do Hoang",
""
],
[
"Van Lang",
"Tran",
""
],
[
"Xuyen",
"Pham Cong",
""
],
[
"Long",
"Le Mau",
""
]
] | Exploring methods and techniques of machine learning (ML) to address specific challenges in various fields is essential. In this work, we tackle a problem in the domain of Cheminformatics; that is, providing a suitable solution to aid in predicting the activity of a chemical compound to the best extent possible. To address the problem at hand, this study conducts experiments on 100 different combinations of existing techniques. These solutions are then selected based on a set of criteria that includes the G-means, F1-score, and AUC metrics. The results have been tested on a dataset of about 10,000 chemical compounds from PubChem that have been classified according to their activity |
1610.05348 | Brian Chu | Brian K. Chu, Margaret J. Tse, Royce R. Sato, and Elizabeth L. Read | Markov State Models of Gene Regulatory Networks | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene regulatory networks with dynamics characterized by multiple stable
states underlie cell fate-decisions. Quantitative models that can link
molecular-level knowledge of gene regulation to a global understanding of
network dynamics have the potential to guide cell-reprogramming strategies.
Networks are often modeled by the stochastic Chemical Master Equation, but
methods for systematic identification of key properties of the global dynamics
are currently lacking. We present a method for analyzing global dynamics of
gene networks using the Markov State Model (MSM) framework, which utilizes a
separation-of-timescales based clustering method to obtain a coarse-grained
state transition graph that approximates global gene network dynamics. The
method identifies the number, phenotypes, and lifetimes of long-lived network
states. Application of transition path theory to the constructed MSM decomposes
global dynamics into a set of dominant transition paths and associated relative
probabilities for stochastic state-switching. In this proof-of-concept study,
we found that the MSM provides a general framework for analyzing and
visualizing stochastic multistability and state-transitions in gene networks.
Our results suggest that the MSM framework, adopted from the field of atomistic
Molecular Dynamics, can be a useful tool for quantitative Systems Biology at
the network scale.
| [
{
"created": "Mon, 17 Oct 2016 20:35:55 GMT",
"version": "v1"
}
] | 2016-10-19 | [
[
"Chu",
"Brian K.",
""
],
[
"Tse",
"Margaret J.",
""
],
[
"Sato",
"Royce R.",
""
],
[
"Read",
"Elizabeth L.",
""
]
] | Gene regulatory networks with dynamics characterized by multiple stable states underlie cell fate-decisions. Quantitative models that can link molecular-level knowledge of gene regulation to a global understanding of network dynamics have the potential to guide cell-reprogramming strategies. Networks are often modeled by the stochastic Chemical Master Equation, but methods for systematic identification of key properties of the global dynamics are currently lacking. We present a method for analyzing global dynamics of gene networks using the Markov State Model (MSM) framework, which utilizes a separation-of-timescales based clustering method to obtain a coarse-grained state transition graph that approximates global gene network dynamics. The method identifies the number, phenotypes, and lifetimes of long-lived network states. Application of transition path theory to the constructed MSM decomposes global dynamics into a set of dominant transition paths and associated relative probabilities for stochastic state-switching. In this proof-of-concept study, we found that the MSM provides a general framework for analyzing and visualizing stochastic multistability and state-transitions in gene networks. Our results suggest that the MSM framework, adopted from the field of atomistic Molecular Dynamics, can be a useful tool for quantitative Systems Biology at the network scale. |
2012.09435 | Daniel Jacob | Daniel Jacob (BFP), Romain David (MISTEA, ERINHA-AISBL), Sophie Aubin
(DV-IST), Yves Gibon (BFP) | Making experimental data tables in the life sciences more FAIR: a
pragmatic approach | null | GigaScience, BioMed Central, 2020, 9 (12) | 10.15454/1.5572412770331912E12 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Making data compliant with the FAIR Data principles (Findable, Accessible,
Interoperable, Reusable) is still a challenge for many researchers, who are not
sure which criteria should be met first and how. Illustrated from experimental
data tables associated with a Design of Experiments, we propose an approach
that can serve as a model for research data management that allows
researchers to disseminate their data by satisfying the main FAIR criteria
without insurmountable efforts. More importantly, this approach aims to
facilitate the FAIRification process by providing researchers with tools to
improve their data management practices.
| [
{
"created": "Thu, 17 Dec 2020 08:11:12 GMT",
"version": "v1"
}
] | 2020-12-18 | [
[
"Jacob",
"Daniel",
"",
"BFP"
],
[
"David",
"Romain",
"",
"MISTEA, ERINHA-AISBL"
],
[
"Aubin",
"Sophie",
"",
"DV-IST"
],
[
"Gibon",
"Yves",
"",
"BFP"
]
] | Making data compliant with the FAIR Data principles (Findable, Accessible, Interoperable, Reusable) is still a challenge for many researchers, who are not sure which criteria should be met first and how. Illustrated from experimental data tables associated with a Design of Experiments, we propose an approach that can serve as a model for research data management that allows researchers to disseminate their data by satisfying the main FAIR criteria without insurmountable efforts. More importantly, this approach aims to facilitate the FAIRification process by providing researchers with tools to improve their data management practices. |
2203.08820 | Cheng Ge | Cheng Ge, Yi Lu, Jia Qu, Liangxu Xie, Feng Wang, Hong Zhang, Ren Kong
and Shan Chang | DePS: An improved deep learning model for de novo peptide sequencing | 10 pages, 7 figures | null | null | null | q-bio.QM cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | De novo peptide sequencing from mass spectrometry data is an important method
for protein identification. Recently, various deep learning approaches were
applied for de novo peptide sequencing and DeepNovoV2 is one of the
representative models. In this study, we proposed an enhanced model, DePS, which
can improve the accuracy of de novo peptide sequencing even with missing signal
peaks or a large number of noisy peaks in tandem mass spectrometry data. It is
shown that, for the same test set of DeepNovoV2, the DePS model achieved
excellent results of 74.22%, 74.21% and 41.68% for amino acid recall, amino
acid precision and peptide recall respectively. Furthermore, the results
suggested that DePS outperforms DeepNovoV2 on the cross species dataset.
| [
{
"created": "Wed, 16 Mar 2022 16:45:48 GMT",
"version": "v1"
}
] | 2022-03-18 | [
[
"Ge",
"Cheng",
""
],
[
"Lu",
"Yi",
""
],
[
"Qu",
"Jia",
""
],
[
"Xie",
"Liangxu",
""
],
[
"Wang",
"Feng",
""
],
[
"Zhang",
"Hong",
""
],
[
"Kong",
"Ren",
""
],
[
"Chang",
"Shan",
""
]
] | De novo peptide sequencing from mass spectrometry data is an important method for protein identification. Recently, various deep learning approaches were applied for de novo peptide sequencing and DeepNovoV2 is one of the representative models. In this study, we proposed an enhanced model, DePS, which can improve the accuracy of de novo peptide sequencing even with missing signal peaks or a large number of noisy peaks in tandem mass spectrometry data. It is shown that, for the same test set of DeepNovoV2, the DePS model achieved excellent results of 74.22%, 74.21% and 41.68% for amino acid recall, amino acid precision and peptide recall respectively. Furthermore, the results suggested that DePS outperforms DeepNovoV2 on the cross species dataset. |
1609.06069 | Siddhartha Sen | Siddhartha Sen | Measuring Consciousness | 23 Pages, Latex file | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A measurement based formula for consciousness, C, as a function of time t, is
constructed. The formula depends on identifying a natural relevant
self-generated, time-dependent dynamical process inherent in any entity. For
human beings the relevant dynamical process identified is the ensemble of brain
waves, observed in EEG measurements, that are represented in the model by their
measured time dependent correlation functions. These correlation functions
define the accessible dynamical state of the brain at any moment of time. From
them a time dependent probability function, P(t), is extracted by using a
mathematical identity. According to information theory, -P(t) log P(t), is a
measure of the information contained in the brain waves. Consciousness, C, is
defined by this information theory formula: it is not localized, does not
depend on specific hardwire details of the brain, but reflects the information
content present in brain waves. Justifications, based on observational
evidence, are given for the formula and it is shown that C reflects the degree
of "awareness" that a person has at a given moment of time. The model explains
the observed time delay between when a brain wave is seen to initiate an action
and when there is awareness that the action has been initiated, in terms of the
way brain waves process information. Some testable consequences of the model,
including the role dreaming sleep plays in long term memory storage, are
discussed. It is also shown that non-living entities have C=0.
| [
{
"created": "Tue, 20 Sep 2016 09:38:28 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2018 11:07:08 GMT",
"version": "v2"
}
] | 2018-02-28 | [
[
"Sen",
"Siddhartha",
""
]
] | A measurement based formula for consciousness, C, as a function of time t, is constructed. The formula depends on identifying a natural relevant self-generated, time-dependent dynamical process inherent in any entity. For human beings the relevant dynamical process identified is the ensemble of brain waves, observed in EEG measurements, that are represented in the model by their measured time dependent correlation functions. These correlation functions define the accessible dynamical state of the brain at any moment of time. From them a time dependent probability function, P(t), is extracted by using a mathematical identity. According to information theory, -P(t) log P(t), is a measure of the information contained in the brain waves. Consciousness, C, is defined by this information theory formula: it is not localized, does not depend on specific hardwire details of the brain, but reflects the information content present in brain waves. Justifications, based on observational evidence, are given for the formula and it is shown that C reflects the degree of "awareness" that a person has at a given moment of time. The model explains the observed time delay between when a brain wave is seen to initiate an action and when there is awareness that the action has been initiated, in terms of the way brain waves processes information. Some testable consequences of the model, including the role dreaming sleep plays in long term memory storage, are discussed. It is also shown that non living entities have C=0. |
2102.01807 | Cole Hurwitz | Cole Hurwitz, Nina Kudryashova, Arno Onken, Matthias H. Hennig | Building population models for large-scale neural recordings:
opportunities and pitfalls | null | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Modern recording technologies now enable simultaneous recording from large
numbers of neurons. This has driven the development of new statistical models
for analyzing and interpreting neural population activity. Here we provide a
broad overview of recent developments in this area. We compare and contrast
different approaches, highlight strengths and limitations, and discuss
biological and mechanistic insights that these methods provide.
| [
{
"created": "Wed, 3 Feb 2021 00:06:49 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 04:47:18 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Jun 2021 17:03:02 GMT",
"version": "v3"
},
{
"created": "Sat, 10 Jul 2021 16:05:31 GMT",
"version": "v4"
}
] | 2021-07-13 | [
[
"Hurwitz",
"Cole",
""
],
[
"Kudryashova",
"Nina",
""
],
[
"Onken",
"Arno",
""
],
[
"Hennig",
"Matthias H.",
""
]
] | Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide. |
1908.04057 | Thierry Mora | Thierry Mora, Ilya Nemenman | Physical limit to concentration sensing in a changing environment | null | Phys. Rev. Lett. 123, 198101 (2019) | 10.1103/PhysRevLett.123.198101 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells adapt to changing environments by sensing ligand concentrations using
specific receptors. The accuracy of sensing is ultimately limited by the finite
number of ligand molecules bound by receptors. Previously derived physical
limits to sensing accuracy have assumed that the concentration was constant and
ignored its temporal fluctuations. We formulate the problem of concentration
sensing in a strongly fluctuating environment as a non-linear field-theoretic
problem, for which we find an excellent approximate Gaussian solution. We
derive a new physical bound on the relative error in concentration $c$ which
scales as $\delta c/c \sim (Dac\tau)^{-1/4}$ with ligand diffusivity $D$,
receptor cross-section $a$, and characteristic fluctuation time scale $\tau$,
in stark contrast with the usual Berg and Purcell bound $\delta c/c \sim
(DacT)^{-1/2}$ for a perfect receptor sensing concentration during time $T$. We
show how the bound can be achieved by a simple biochemical network downstream
of the receptor that adapts the kinetics of signaling as a function of the square
root of the sensed concentration.
| [
{
"created": "Mon, 12 Aug 2019 09:04:49 GMT",
"version": "v1"
}
] | 2019-11-13 | [
[
"Mora",
"Thierry",
""
],
[
"Nemenman",
"Ilya",
""
]
] | Cells adapt to changing environments by sensing ligand concentrations using specific receptors. The accuracy of sensing is ultimately limited by the finite number of ligand molecules bound by receptors. Previously derived physical limits to sensing accuracy have assumed that the concentration was constant and ignored its temporal fluctuations. We formulate the problem of concentration sensing in a strongly fluctuating environment as a non-linear field-theoretic problem, for which we find an excellent approximate Gaussian solution. We derive a new physical bound on the relative error in concentration $c$ which scales as $\delta c/c \sim (Dac\tau)^{-1/4}$ with ligand diffusivity $D$, receptor cross-section $a$, and characteristic fluctuation time scale $\tau$, in stark contrast with the usual Berg and Purcell bound $\delta c/c \sim (DacT)^{-1/2}$ for a perfect receptor sensing concentration during time $T$. We show how the bound can be achieved by a simple biochemical network downstream the receptor that adapts the kinetics of signaling as a function of the square root of the sensed concentration. |
1610.03915 | Michael Deem | Yidan Pan and Michael W. Deem | Prediction of Influenza B Vaccine Effectiveness from Sequence Data | 28 pages, 4 figures, 2 tables | Protein Engineering, Design & Selection 29 (2016) 309-315 | 10.1016/j.vaccine.2016.07.015 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influenza is a contagious respiratory illness that causes significant human
morbidity and mortality, affecting 5-15% of the population in a typical
epidemic season. Human influenza epidemics are caused by types A and B, with
roughly 25% of human cases due to influenza B. Influenza B is a single-stranded
RNA virus with a high mutation rate, and both prior immune history and
vaccination put significant pressure on the virus to evolve. Due to the high
rate of viral evolution, the influenza B vaccine component of the annual
influenza vaccine is updated, roughly every other year in recent years. To
predict when an update to the vaccine is needed, an estimate of expected
vaccine effectiveness against a range of viral strains is required. We here
introduce a method to measure antigenic distance between the influenza B
vaccine and circulating viral strains. The measure correlates well with
effectiveness of the influenza B component of the annual vaccine in humans
between 1979 and 2014. We discuss how this measure of antigenic distance may be
used in the context of annual influenza vaccine design and prediction of
vaccine effectiveness.
| [
{
"created": "Thu, 13 Oct 2016 02:24:23 GMT",
"version": "v1"
}
] | 2016-10-14 | [
[
"Pan",
"Yidan",
""
],
[
"Deem",
"Michael W.",
""
]
] | Influenza is a contagious respiratory illness that causes significant human morbidity and mortality, affecting 5-15% of the population in a typical epidemic season. Human influenza epidemics are caused by types A and B, with roughly 25% of human cases due to influenza B. Influenza B is a single-stranded RNA virus with a high mutation rate, and both prior immune history and vaccination put significant pressure on the virus to evolve. Due to the high rate of viral evolution, the influenza B vaccine component of the annual influenza vaccine is updated, roughly every other year in recent years. To predict when an update to the vaccine is needed, an estimate of expected vaccine effectiveness against a range of viral strains is required. We here introduce a method to measure antigenic distance between the influenza B vaccine and circulating viral strains. The measure correlates well with effectiveness of the influenza B component of the annual vaccine in humans between 1979 and 2014. We discuss how this measure of antigenic distance may be used in the context of annual influenza vaccine design and prediction of vaccine effectiveness. |
2211.02546 | Rebekah Rogers | James E. Titus-McQuillan, Adalena V. Nanni, Lauren M. McIntyre,
Rebekah L. Rogers | Transcriptome Complexities Across Eukaryotes | 33 pages main text; 6 main figures; 25 pages of supplement; 1
supplementary table; 24 Supp Figures; 58 pages total | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genomic complexity is a growing field of evolution, with case studies for
comparative evolutionary analyses in model and emerging non-model systems.
Understanding complexity and the functional components of the genome is an
untapped wealth of knowledge ripe for exploration. With the "remarkable lack of
correspondence" between genome size and complexity, there needs to be a way to
quantify complexity across organisms. In this study we use a set of complexity
metrics that allow for evaluation of changes in complexity using TranD. We
ascertain if complexity is increasing or decreasing across transcriptomes and
at what structural level, as complexity is varied. We define three metrics --
TpG, EpT, and EpG in this study to quantify the complexity of the transcriptome
that encapsulate the dynamics of alternative splicing. Here we compare
complexity metrics across 1) whole genome annotations, 2) a filtered subset of
orthologs, and 3) novel genes to elucidate the impacts of ortholog and novel
genes in transcriptome analysis. We also derive a metric from Hong et al.,
2006, Effective Exon Number (EEN), to compare the distribution of exon sizes
within transcripts against random expectations of uniform exon placement. EEN
accounts for differences in exon size, which is important because novel genes
differences in complexity for orthologs and whole transcriptome analyses are
biased towards low complexity genes with few exons and few alternative
transcripts. With our metric analyses, we are able to implement changes in
complexity across diverse lineages with greater precision and accuracy than
previous cross-species comparisons under ortholog conditioning. These analyses
represent a step forward toward whole transcriptome analysis in the emerging
field of non-model evolutionary genomics, with key insights for evolutionary
inference of complexity changes on deep timescales across the tree of life. We
suggest a means to quantify biases generated in ortholog calling and correct
complexity analysis for lineage-specific effects. With these metrics, we
directly assay the quantitative properties of newly formed lineage-specific
genes as they lower complexity in transcriptomes.
| [
{
"created": "Fri, 4 Nov 2022 16:13:36 GMT",
"version": "v1"
}
] | 2022-11-07 | [
[
"Titus-McQuillan",
"James E.",
""
],
[
"Nanni",
"Adalena V.",
""
],
[
"McIntyre",
"Lauren M.",
""
],
[
"Rogers",
"Rebekah L.",
""
]
] | Genomic complexity is a growing field of evolution, with case studies for comparative evolutionary analyses in model and emerging non-model systems. Understanding complexity and the functional components of the genome is an untapped wealth of knowledge ripe for exploration. With the "remarkable lack of correspondence" between genome size and complexity, there needs to be a way to quantify complexity across organisms. In this study we use a set of complexity metrics that allow for evaluation of changes in complexity using TranD. We ascertain if complexity is increasing or decreasing across transcriptomes and at what structural level, as complexity is varied. We define three metrics -- TpG, EpT, and EpG in this study to quantify the complexity of the transcriptome that encapsulate the dynamics of alternative splicing. Here we compare complexity metrics across 1) whole genome annotations, 2) a filtered subset of orthologs, and 3) novel genes to elucidate the impacts of ortholog and novel genes in transcriptome analysis. We also derive a metric from Hong et al., 2006, Effective Exon Number (EEN), to compare the distribution of exon sizes within transcripts against random expectations of uniform exon placement. EEN accounts for differences in exon size, which is important because novel genes differences in complexity for orthologs and whole transcriptome analyses are biased towards low complexity genes with few exons and few alternative transcripts. With our metric analyses, we are able to implement changes in complexity across diverse lineages with greater precision and accuracy than previous cross-species comparisons under ortholog conditioning. These analyses represent a step forward toward whole transcriptome analysis in the emerging field of non-model evolutionary genomics, with key insights for evolutionary inference of complexity changes on deep timescales across the tree of life. 
We suggest a means to quantify biases generated in ortholog calling and correct complexity analysis for lineage-specific effects. With these metrics, we directly assay the quantitative properties of newly formed lineage-specific genes as they lower complexity in transcriptomes. |
0809.3110 | Siddhartha Gadgil | Siddhartha Gadgil | Watson-Crick pairing, the Heisenberg group and Milnor invariants | 18 pages; to appear in Journal of Mathematical Biology | null | null | null | q-bio.BM math.GR math.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the secondary structure of RNA determined by Watson-Crick pairing
without pseudo-knots using Milnor invariants of links. We focus on the first
non-trivial invariant, which we call the Heisenberg invariant. The Heisenberg
invariant, which is an integer, can be interpreted in terms of the Heisenberg
group as well as in terms of lattice paths.
We show that the Heisenberg invariant gives a lower bound on the number of
unpaired bases in an RNA secondary structure. We also show that the Heisenberg
invariant can predict \emph{allosteric structures} for RNA. Namely, if the
Heisenberg invariant is large, then there are widely separated local maxima
(i.e., allosteric structures) for the number of Watson-Crick pairs found.
| [
{
"created": "Thu, 18 Sep 2008 09:11:54 GMT",
"version": "v1"
}
] | 2008-09-19 | [
[
"Gadgil",
"Siddhartha",
""
]
] | We study the secondary structure of RNA determined by Watson-Crick pairing without pseudo-knots using Milnor invariants of links. We focus on the first non-trivial invariant, which we call the Heisenberg invariant. The Heisenberg invariant, which is an integer, can be interpreted in terms of the Heisenberg group as well as in terms of lattice paths. We show that the Heisenberg invariant gives a lower bound on the number of unpaired bases in an RNA secondary structure. We also show that the Heisenberg invariant can predict \emph{allosteric structures} for RNA. Namely, if the Heisenberg invariant is large, then there are widely separated local maxima (i.e., allosteric structures) for the number of Watson-Crick pairs found. |
2308.10033 | Hamidreza Bolhasani | Zahra Mokhtari, Elham Amjadi, Hamidreza Bolhasani, Zahra Faghih,
AmirReza Dehghanian, Marzieh Rezaei | CRC-ICM: Colorectal Cancer Immune Cell Markers Pattern Dataset | null | null | null | null | q-bio.TO cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Colorectal Cancer (CRC) is the second most common cause of cancer death in
the world, and can be identified by the location of the primary tumor in the
large intestine: right and left colon, and rectum. Based on the location, CRC
shows differences in chromosomal and molecular characteristics, microbiomes,
incidence, pathogenesis, and outcome. It has been shown that tumors on the left
and right sides also have different immune landscapes, so the prognosis may
differ based on the primary tumor location. It is widely accepted that
immune components of the tumor microenvironment (TME) play a critical role in
tumor development. Among the critical regulatory molecules in the TME are
immune checkpoints, which, as the gatekeepers of immune responses, regulate the
functions of infiltrated immune cells. Inhibitory immune checkpoints such as
PD-1, Tim3, and LAG3, the main mechanisms of immune suppression in the TME, are
overexpressed and result in further development of the tumor. The images of
this dataset have been taken from colon tissues of patients with CRC, stained
with specific antibodies for CD3, CD8, CD45RO, PD-1, LAG3 and Tim3. The name of
this dataset is CRC-ICM and contains 1756 images related to 136 patients. The
initial version of CRC-ICM is published on the Elsevier Mendeley dataset portal,
and the latest version is accessible via: https://databiox.com
| [
{
"created": "Sat, 19 Aug 2023 14:39:28 GMT",
"version": "v1"
}
] | 2023-08-22 | [
[
"Mokhtari",
"Zahra",
""
],
[
"Amjadi",
"Elham",
""
],
[
"Bolhasani",
"Hamidreza",
""
],
[
"Faghih",
"Zahra",
""
],
[
"Dehghanian",
"AmirReza",
""
],
[
"Rezaei",
"Marzieh",
""
]
] | Colorectal Cancer (CRC) is the second most common cause of cancer death in the world, and can be identified by the location of the primary tumor in the large intestine: right and left colon, and rectum. Based on the location, CRC shows differences in chromosomal and molecular characteristics, microbiomes, incidence, pathogenesis, and outcome. It has been shown that tumors on the left and right sides also have different immune landscapes, so the prognosis may differ based on the primary tumor location. It is widely accepted that immune components of the tumor microenvironment (TME) play a critical role in tumor development. Among the critical regulatory molecules in the TME are immune checkpoints, which, as the gatekeepers of immune responses, regulate the functions of infiltrated immune cells. Inhibitory immune checkpoints such as PD-1, Tim3, and LAG3, the main mechanisms of immune suppression in the TME, are overexpressed and result in further development of the tumor. The images of this dataset have been taken from colon tissues of patients with CRC, stained with specific antibodies for CD3, CD8, CD45RO, PD-1, LAG3 and Tim3. The name of this dataset is CRC-ICM and contains 1756 images related to 136 patients. The initial version of CRC-ICM is published on the Elsevier Mendeley dataset portal, and the latest version is accessible via: https://databiox.com |
2404.01698 | Gaud DERVILLY | Thomas J Mcgrath (ONIRIS, LABERCA, INRAE), Julien Saint-Vanne (ONIRIS,
LABERCA, INRAE), S\'ebastien Hutinet (ONIRIS, LABERCA, INRAE), Walter Vetter,
Giulia Poma (UA), Yukiko Fujii (UA), Robin E Dodson, Boris Johnson-Restrepo,
Dudsadee Muenhor, Bruno Le Bizec (ONIRIS, LABERCA, INRAE), Gaud Dervilly
(ONIRIS, LABERCA, INRAE), Adrian Covaci (UA), Ronan Cariou (ONIRIS, LABERCA,
INRAE) | Detection of bromochloro alkanes in indoor dust using a novel CP-Seeker
data integration tool | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bromochloro alkanes (BCAs) have been manufactured for use as flame retardants
for decades and preliminary environmental risk screening suggests they are
likely to behave similarly to polychlorinated alkanes (PCAs), subclasses of
which are restricted as Stockholm Convention Persistent Organic Pollutants
(POPs). BCAs have rarely been studied in the environment, though some evidence
suggests they may migrate from treated consumer materials into indoor dust,
resulting in human exposure via inadvertent ingestion. In this study, BCA-C14
mixture standards were synthesized and used to validate an analytical method.
This method relies on chloride-enhanced liquid chromatography-electrospray
ionization-Orbitrap-high resolution mass spectrometry (LC-ESI-Orbitrap-HRMS)
and a novel CP-Seeker integration software package for homologue detection and
integration. Dust sample preparation via ultrasonic extraction, acidified
silica clean-up and fractionation on neutral silica cartridges was found to be
suitable for BCAs, with absolute recovery of individual homologues averaging 66
to 78% and coefficients of variation $\le$10% in replicated spiking experiments
(n=3). In addition, a total of 59 indoor dust samples from six countries
including Australia (n=10), Belgium (n=10), Colombia (n=10), Japan (n=10),
Thailand (n=10) and the United States of America (n=9) were analysed for BCAs.
BCAs were detected in seven samples from the USA, with carbon chain lengths of
C8, C10, C12, C14, C16, C18, C24 to C28, C30 and C31 observed overall, though
not detected in samples from any other countries. Bromination of detected
homologues in the indoor dust samples ranged from Br1-4 as well as Br7, while
chlorine numbers ranged from Cl2-11. BCA-C18 were the most frequently detected,
observed in each of the USA samples, while the most prevalent halogenation
degrees were homologues of Br2 and Cl4-5. Broad estimations of BCA
concentrations in the dust samples indicated that levels may approach those of
other flame retardants in at least some instances. These findings suggest that
development of quantification strategies and further investigation of
environmental occurrence and health implications are needed.
| [
{
"created": "Tue, 2 Apr 2024 06:59:45 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Mcgrath",
"Thomas J",
"",
"ONIRIS, LABERCA, INRAE"
],
[
"Saint-Vanne",
"Julien",
"",
"ONIRIS,\n LABERCA, INRAE"
],
[
"Hutinet",
"Sébastien",
"",
"ONIRIS, LABERCA, INRAE"
],
[
"Vetter",
"Walter",
"",
"UA"
],
[
"Poma",
"Giulia... | Bromochloro alkanes (BCAs) have been manufactured for use as flame retardants for decades and preliminary environmental risk screening suggests they are likely to behave similarly to polychlorinated alkanes (PCAs), subclasses of which are restricted as Stockholm Convention Persistent Organic Pollutants (POPs). BCAs have rarely been studied in the environment, though some evidence suggests they may migrate from treated-consumer materials into indoor dust, resulting in human exposure via inadvertent ingestion. In this study, BCA-C14 mixture standards were synthesized and used to validate an analytical method. This method relies on chloride-enhanced liquid chromatography-electrospray ionization-Orbitrap-high resolution mass spectrometry (LC-ESI-Orbitrap-HRMS) and a novel CP-Seeker integration software package for homologue detection and integration. Dust sample preparation via ultrasonic extraction, acidified silica clean-up and fractionation on neutral silica cartridges was found to be suitable for BCAs, with absolute recovery of individual homologues averaging 66 to 78% and coefficients of variation $\le$10% in replicated spiking experiments (n=3). In addition, a total of 59 indoor dust samples from six countries including Australia (n=10), Belgium (n=10), Colombia (n=10), Japan (n=10), Thailand (n=10) and the United States of America (n=9) were analysed for BCAs. BCAs were detected in seven samples from the USA, with carbon chain lengths of C8, C10, C12, C14, C16, C18, C24 to C28, C30 and C31 observed overall, though not detected in samples from any other countries. Bromination of detected homologues in the indoor dust samples ranged from Br1-4 as well as Br7, while chlorine numbers ranged from Cl2-11. BCA-C18 were the most frequently detected, observed in each of the USA samples, while the most prevalent halogenation degrees were homologues of Br2 and Cl4-5. 
Broad estimations of BCA concentrations in the dust samples indicated that levels may approach those of other flame retardants in at least some instances. These findings suggest that development of quantification strategies and further investigation of environmental occurrence and health implications are needed. |