id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1501.00022 | Samuela Pasquali | Tristan Cragnolini, Philippe Derreumaux, Samuela Pasquali | Ab initio RNA folding | 28 pages, 18 figures | null | 10.1088/0953-8984/27/23/233102 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA molecules are essential cellular machines performing a wide variety of
functions for which a specific three-dimensional structure is required. Over
the last several years, experimental determination of RNA structures through
X-ray crystallography and NMR seems to have reached a plateau in the number of
structures resolved each year, but as more and more RNA sequences are being
discovered, need for structure prediction tools to complement experimental data
is strong. Theoretical approaches to RNA folding have been developed since the
late nineties when the first algorithms for secondary structure prediction
appeared. Over the last 10 years a number of prediction methods for 3D
structures have been developed, first based on bioinformatics and data-mining,
and more recently based on a coarse-grained physical representation of the
systems. In this review we are going to present the challenges of RNA structure
prediction and the main ideas behind bioinformatic approaches and physics-based
approaches. We will focus on the description of the more recent physics-based
phenomenological models and on how they are built to include the specificity of
the interactions of RNA bases, whose role is critical in folding. Through
examples from different models, we will point out the strengths of
physics-based approaches, which are able not only to predict equilibrium
structures, but also to investigate dynamical and thermodynamical behavior, and
the open challenges to include more key interactions ruling RNA folding.
| [
{
"created": "Tue, 30 Dec 2014 21:26:00 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Cragnolini",
"Tristan",
""
],
[
"Derreumaux",
"Philippe",
""
],
[
"Pasquali",
"Samuela",
""
]
] | RNA molecules are essential cellular machines performing a wide variety of functions for which a specific three-dimensional structure is required. Over the last several years, experimental determination of RNA structures through X-ray crystallography and NMR seems to have reached a plateau in the number of structures resolved each year, but as more and more RNA sequences are being discovered, the need for structure prediction tools to complement experimental data is strong. Theoretical approaches to RNA folding have been developed since the late nineties, when the first algorithms for secondary structure prediction appeared. Over the last 10 years a number of prediction methods for 3D structures have been developed, first based on bioinformatics and data-mining, and more recently based on a coarse-grained physical representation of the systems. In this review we present the challenges of RNA structure prediction and the main ideas behind bioinformatic and physics-based approaches. We focus on the description of the more recent physics-based phenomenological models and on how they are built to include the specificity of the interactions of RNA bases, whose role is critical in folding. Through examples from different models, we point out the strengths of physics-based approaches, which are able not only to predict equilibrium structures but also to investigate dynamical and thermodynamic behavior, as well as the open challenges of including more of the key interactions ruling RNA folding. |
2309.10128 | Yihan Wu | Yihan Wu, Tao Chang, Peng Xu, Yangsong Zhang | Markov Chain-Guided Graph Construction and Sampling Depth Optimization
for EEG-Based Mental Disorder Detection | 5 figures, 4 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have received considerable attention since its
introduction. It has been widely applied in various fields due to its ability
to represent graph structured data. However, the application of GNNs is
constrained by two main issues. Firstly, the "over-smoothing" problem restricts
the use of deeper network structures. Secondly, GNNs' applicability is greatly
limited when nodes and edges are not clearly defined and expressed, as is the
case with EEG data.In this study, we proposed an innovative approach that
harnesses the distinctive properties of the graph structure's Markov Chain to
optimize the sampling depth of deep graph convolution networks. We introduced a
tailored method for constructing graph structures specifically designed for
analyzing EEG data, alongside the development of a vertex-level GNN
classification model for precise detection of mental disorders. In order to
verify the method's performance, we conduct experiments on two disease datasets
using a subject-independent experiment scenario. For the Schizophrenia (SZ)
data, our method achieves an average accuracy of 100% using only the first 300
seconds of data from each subject. Similarly, for Major Depressive Disorder
(MDD) data, the method yields average accuracies of over 99%. These experiments
demonstrate the method's ability to effectively distinguish between healthy
control (HC) subjects and patients with mental disorders. We believe this
method shows great promise for clinical diagnosis.
| [
{
"created": "Mon, 18 Sep 2023 20:07:32 GMT",
"version": "v1"
}
] | 2023-09-20 | [
[
"Wu",
"Yihan",
""
],
[
"Chang",
"Tao",
""
],
[
"Xu",
"Peng",
""
],
[
"Zhang",
"Yangsong",
""
]
] | Graph Neural Networks (GNNs) have received considerable attention since their introduction. They have been widely applied in various fields due to their ability to represent graph-structured data. However, the application of GNNs is constrained by two main issues. Firstly, the "over-smoothing" problem restricts the use of deeper network structures. Secondly, GNNs' applicability is greatly limited when nodes and edges are not clearly defined and expressed, as is the case with EEG data. In this study, we propose an innovative approach that harnesses the distinctive properties of the graph structure's Markov chain to optimize the sampling depth of deep graph convolution networks. We introduce a tailored method for constructing graph structures specifically designed for analyzing EEG data, alongside a vertex-level GNN classification model for precise detection of mental disorders. To verify the method's performance, we conduct experiments on two disease datasets using a subject-independent experimental scenario. For the Schizophrenia (SZ) data, our method achieves an average accuracy of 100% using only the first 300 seconds of data from each subject. Similarly, for Major Depressive Disorder (MDD) data, the method yields average accuracies of over 99%. These experiments demonstrate the method's ability to effectively distinguish between healthy control (HC) subjects and patients with mental disorders. We believe this method shows great promise for clinical diagnosis. |
2404.10369 | Aneta Koseska | Daniel Koch, Akhilesh Nandan, Gayathri Ramesan, Aneta Koseska | Biological computations: limitations of attractor-based formalisms and
the need for transients | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Living systems, from single cells to higher vertebrates, receive a continuous
stream of non-stationary inputs that they sense, e.g., via cell surface
receptors or sensory organs. Integrating these time-varying, multi-sensory, and
often noisy information with memory using complex molecular or neuronal
networks, they generate a variety of responses beyond simple stimulus-response
association, including avoidance behavior, life-long-learning or social
interactions. In a broad sense, these processes can be understood as a type of
biological computation. Taking as a basis generic features of biological
computations, such as real-time responsiveness or robustness and flexibility of
the computation, we highlight the limitations of the current attractor-based
framework for understanding computations in biological systems. We argue that
frameworks based on transient dynamics away from attractors are better suited
for the description of computations performed by neuronal and signaling
networks. In particular, we discuss how quasi-stable transient dynamics from
ghost states that emerge at criticality have a promising potential for
developing an integrated framework of computations, that can help us understand
how living system actively process information and learn from their
continuously changing environment.
| [
{
"created": "Tue, 16 Apr 2024 08:07:46 GMT",
"version": "v1"
}
] | 2024-04-17 | [
[
"Koch",
"Daniel",
""
],
[
"Nandan",
"Akhilesh",
""
],
[
"Ramesan",
"Gayathri",
""
],
[
"Koseska",
"Aneta",
""
]
] | Living systems, from single cells to higher vertebrates, receive a continuous stream of non-stationary inputs that they sense, e.g., via cell surface receptors or sensory organs. Integrating this time-varying, multi-sensory, and often noisy information with memory using complex molecular or neuronal networks, they generate a variety of responses beyond simple stimulus-response association, including avoidance behavior, life-long learning, or social interactions. In a broad sense, these processes can be understood as a type of biological computation. Taking as a basis generic features of biological computations, such as real-time responsiveness or robustness and flexibility of the computation, we highlight the limitations of the current attractor-based framework for understanding computations in biological systems. We argue that frameworks based on transient dynamics away from attractors are better suited for the description of computations performed by neuronal and signaling networks. In particular, we discuss how quasi-stable transient dynamics from ghost states that emerge at criticality have promising potential for developing an integrated framework of computations that can help us understand how living systems actively process information and learn from their continuously changing environment. |
1105.0184 | Bob Eisenberg | Bob Eisenberg | Life's Solutions are Not Ideal | null | null | null | null | q-bio.BM cond-mat.soft cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Life occurs in ionic solutions, not pure water. The ionic mixtures of these
solutions are very different from water and have dramatic effects on the cells
and molecules of biological systems, yet theories and simulations cannot
calculate their properties. I suggest the reason is that existing theories stem
from the classical theory of ideal or simple gases in which (to a first
approximation) atoms do not interact. Even the law of mass action describes
reactants as if they were ideal. I propose that theories of ionic solutions
should start with the theory of complex fluids because that theory is designed
to deal with interactions from the beginning. The variational theory of complex
fluids is particularly well suited to describe mixtures like the solutions in
and outside biological cells. When a component or force is added to a solution,
the theory derives - by mathematics alone - a set of partial differential
equations that captures the resulting interactions self-consistently. Such a
theory has been implemented and shown to be computable in biologically relevant
systems but it has not yet been thoroughly tested in equilibrium or flow.
| [
{
"created": "Sun, 1 May 2011 16:49:25 GMT",
"version": "v1"
}
] | 2011-05-03 | [
[
"Eisenberg",
"Bob",
""
]
] | Life occurs in ionic solutions, not pure water. The ionic mixtures of these solutions are very different from water and have dramatic effects on the cells and molecules of biological systems, yet theories and simulations cannot calculate their properties. I suggest the reason is that existing theories stem from the classical theory of ideal or simple gases in which (to a first approximation) atoms do not interact. Even the law of mass action describes reactants as if they were ideal. I propose that theories of ionic solutions should start with the theory of complex fluids because that theory is designed to deal with interactions from the beginning. The variational theory of complex fluids is particularly well suited to describe mixtures like the solutions in and outside biological cells. When a component or force is added to a solution, the theory derives - by mathematics alone - a set of partial differential equations that captures the resulting interactions self-consistently. Such a theory has been implemented and shown to be computable in biologically relevant systems but it has not yet been thoroughly tested in equilibrium or flow. |
2209.12635 | Mai Ha Vu | Mai Ha Vu, Philippe A. Robert, Rahmad Akbar, Bartlomiej Swiatczak,
Geir Kjetil Sandve, Dag Trygve Truslew Haug, Victor Greiff | ImmunoLingo: Linguistics-based formalization of the antibody language | 19 pages, 3 figures | Nat Comput Sci (2024) | 10.1038/s43588-024-00642-3 | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Apparent parallels between natural language and biological sequence have led
to a recent surge in the application of deep language models (LMs) to the
analysis of antibody and other biological sequences. However, a lack of a
rigorous linguistic formalization of biological sequence languages, which would
define basic components, such as lexicon (i.e., the discrete units of the
language) and grammar (i.e., the rules that link sequence well-formedness,
structure, and meaning) has led to largely domain-unspecific applications of
LMs, which do not take into account the underlying structure of the biological
sequences studied. A linguistic formalization, on the other hand, establishes
linguistically-informed and thus domain-adapted components for LM applications.
It would facilitate a better understanding of how differences and similarities
between natural language and biological sequences influence the quality of LMs,
which is crucial for the design of interpretable models with extractable
sequence-functions relationship rules, such as the ones underlying the antibody
specificity prediction problem. Deciphering the rules of antibody specificity
is crucial to accelerating rational and in silico biotherapeutic drug design.
Here, we formalize the properties of the antibody language and thereby
establish not only a foundation for the application of linguistic tools in
adaptive immune receptor analysis but also for the systematic immunolinguistic
studies of immune receptor specificity in general.
| [
{
"created": "Mon, 26 Sep 2022 12:33:14 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2022 10:31:28 GMT",
"version": "v2"
}
] | 2024-08-06 | [
[
"Vu",
"Mai Ha",
""
],
[
"Robert",
"Philippe A.",
""
],
[
"Akbar",
"Rahmad",
""
],
[
"Swiatczak",
"Bartlomiej",
""
],
[
"Sandve",
"Geir Kjetil",
""
],
[
"Haug",
"Dag Trygve Truslew",
""
],
[
"Greiff",
"Victor",
""
]
] | Apparent parallels between natural language and biological sequences have led to a recent surge in the application of deep language models (LMs) to the analysis of antibody and other biological sequences. However, the lack of a rigorous linguistic formalization of biological sequence languages, which would define basic components such as lexicon (i.e., the discrete units of the language) and grammar (i.e., the rules that link sequence well-formedness, structure, and meaning), has led to largely domain-unspecific applications of LMs, which do not take into account the underlying structure of the biological sequences studied. A linguistic formalization, on the other hand, establishes linguistically informed and thus domain-adapted components for LM applications. It would facilitate a better understanding of how differences and similarities between natural language and biological sequences influence the quality of LMs, which is crucial for the design of interpretable models with extractable sequence-function relationship rules, such as the ones underlying the antibody specificity prediction problem. Deciphering the rules of antibody specificity is crucial to accelerating rational and in silico biotherapeutic drug design. Here, we formalize the properties of the antibody language and thereby establish not only a foundation for the application of linguistic tools in adaptive immune receptor analysis but also for the systematic immunolinguistic studies of immune receptor specificity in general. |
2405.06657 | Gianluca Palermo | Emanuele Triuzzi, Riccardo Mengoni, Domenico Bonanni, Daniele
Ottaviani, Andrea Beccari, Gianluca Palermo | Molecular Docking via Weighted Subgraph Isomorphism on Quantum Annealers | null | null | null | null | q-bio.BM cs.CE cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular docking is an essential step in the drug discovery process
involving the detection of three-dimensional poses of a ligand inside the
active site of the protein. In this paper, we address the Molecular Docking
search phase by formulating the problem in QUBO terms, suitable for an
annealing approach. We propose a problem formulation as a weighted subgraph
isomorphism between the ligand graph and the grid of the target protein pocket.
In particular, we applied a graph representation to the ligand embedding all
the geometrical properties of the molecule including its flexibility, and we
created a weighted spatial grid to the 3D space region inside the pocket.
Results and performance obtained with quantum annealers are compared with
classical simulated annealing solvers.
| [
{
"created": "Fri, 19 Apr 2024 12:50:04 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Triuzzi",
"Emanuele",
""
],
[
"Mengoni",
"Riccardo",
""
],
[
"Bonanni",
"Domenico",
""
],
[
"Ottaviani",
"Daniele",
""
],
[
"Beccari",
"Andrea",
""
],
[
"Palermo",
"Gianluca",
""
]
] | Molecular docking is an essential step in the drug discovery process involving the detection of three-dimensional poses of a ligand inside the active site of the protein. In this paper, we address the Molecular Docking search phase by formulating the problem in QUBO terms, suitable for an annealing approach. We propose a problem formulation as a weighted subgraph isomorphism between the ligand graph and the grid of the target protein pocket. In particular, we applied a graph representation to the ligand that embeds all the geometrical properties of the molecule, including its flexibility, and we created a weighted spatial grid for the 3D space region inside the pocket. Results and performance obtained with quantum annealers are compared with classical simulated annealing solvers. |
0705.2816 | Ginestra Bianconi | Ginestra Bianconi and Riccardo Zecchina | Viable flux distribution in metabolic networks | (10 pages, 1 figure) | null | null | null | q-bio.MN | null | The metabolic networks are very well characterized for a large set of
organisms, a unique case in within the large-scale biological networks. For
this reason they provide a a very interesting framework for the construction of
analytically tractable statistical mechanics models.
In this paper we introduce a solvable model for the distribution of fluxes in
the metabolic network. We show that the effect of the topology on the
distribution of fluxes is to allow for large fluctuations of their values, a
fact that should have implications on the robustness of the system.
| [
{
"created": "Mon, 21 May 2007 10:53:37 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bianconi",
"Ginestra",
""
],
[
"Zecchina",
"Riccardo",
""
]
] | Metabolic networks are very well characterized for a large set of organisms, a unique case among large-scale biological networks. For this reason they provide a very interesting framework for the construction of analytically tractable statistical mechanics models. In this paper we introduce a solvable model for the distribution of fluxes in the metabolic network. We show that the effect of the topology on the distribution of fluxes is to allow for large fluctuations of their values, a fact that should have implications for the robustness of the system. |
1809.06216 | Diego Alvarez-Estevez | Diego Alvarez-Estevez, Isaac Fern\'andez-Varela | Large-scale validation of an automatic EEG arousal detection algorithm
using different heterogeneous databases | 13 pages, 1 figure, 7 tables; typos corrected; format improved | null | 10.1016/j.sleep.2019.01.025 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | $\textbf{Objective}$: To assess the validity of an automatic EEG arousal
detection algorithm using large patient samples and different heterogeneous
databases
$\textbf{Methods}$: Automatic scorings were confronted with results from
human expert scorers on a total of 2768 full-night PSG recordings obtained from
two different databases. Of them, 472 recordings were obtained during clinical
routine at our sleep center and were subdivided into two subgroups of 220
(HMC-S) and 252 (HMC-M) recordings each, attending to the procedure followed by
the clinical expert during the visual review (semi-automatic or purely manual,
respectively). In addition, 2296 recordings from the public SHHS-2 database
were evaluated against the respective manual expert scorings.
$\textbf{Results}$: Event-by-event epoch-based validation resulted in an
overall Cohen kappa agreement K = 0.600 (HMC-S), 0.559 (HMC-M), and 0.573
(SHHS-2). Estimated inter-scorer variability on the datasets was, respectively,
K = 0.594, 0.561 and 0.543. Analyses of the corresponding Arousal Index scores
showed associated automatic-human repeatability indices ranging in 0.693-0.771
(HMC-S), 0.646-0.791 (HMC-M), and 0.759-0.791 (SHHS-2).
$\textbf{Conclusions}$: Large-scale validation of our automatic EEG arousal
detector on different databases has shown robust performance and good
generalization results comparable to the expected levels of human agreement.
Special emphasis has been put on allowing reproducibility of the results and
implementation of our method has been made accessible online as open source
code
| [
{
"created": "Wed, 12 Sep 2018 09:55:42 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Nov 2018 08:26:24 GMT",
"version": "v2"
}
] | 2019-01-31 | [
[
"Alvarez-Estevez",
"Diego",
""
],
[
"Fernández-Varela",
"Isaac",
""
]
] | $\textbf{Objective}$: To assess the validity of an automatic EEG arousal detection algorithm using large patient samples and different heterogeneous databases. $\textbf{Methods}$: Automatic scorings were compared with results from human expert scorers on a total of 2768 full-night PSG recordings obtained from two different databases. Of them, 472 recordings were obtained during clinical routine at our sleep center and were subdivided into two subgroups of 220 (HMC-S) and 252 (HMC-M) recordings each, according to the procedure followed by the clinical expert during the visual review (semi-automatic or purely manual, respectively). In addition, 2296 recordings from the public SHHS-2 database were evaluated against the respective manual expert scorings. $\textbf{Results}$: Event-by-event epoch-based validation resulted in an overall Cohen kappa agreement of K = 0.600 (HMC-S), 0.559 (HMC-M), and 0.573 (SHHS-2). Estimated inter-scorer variability on the datasets was, respectively, K = 0.594, 0.561, and 0.543. Analyses of the corresponding Arousal Index scores showed associated automatic-human repeatability indices ranging from 0.693 to 0.771 (HMC-S), 0.646 to 0.791 (HMC-M), and 0.759 to 0.791 (SHHS-2). $\textbf{Conclusions}$: Large-scale validation of our automatic EEG arousal detector on different databases has shown robust performance and good generalization results, comparable to the expected levels of human agreement. Special emphasis has been put on allowing reproducibility of the results, and the implementation of our method has been made accessible online as open source code. |
1412.1929 | Sebastian B\"ocker | Kai D\"uhrkop and Sebastian B\"ocker | Fragmentation trees reloaded | different dataset | null | null | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolites, small molecules that are involved in cellular reactions, provide
a direct functional signature of cellular state. Untargeted metabolomics
experiments usually relies on tandem mass spectrometry to identify the
thousands of compounds in a biological sample. Today, the vast majority of
metabolites remain unknown. Fragmentation trees have become a powerful tool for
the interpretation of tandem mass spectrometry data of small molecules. These
trees are found by combinatorial optimization, and aim at explaining the
experimental data via fragmentation cascades. To obtain biochemically
meaningful results requires an elaborate optimization function. We present a
new scoring for computing fragmentation trees, transforming the combinatorial
optimization into a maximum a posteriori estimator. We demonstrate the
superiority of the new scoring for two tasks: Both for the de novo
identification of molecular formulas of unknown compounds, and for searching a
database for structurally similar compounds, our methods performs significantly
better than the previous scoring, as well as other methods for this task. Our
method can expedite the workflow for untargeted metabolomics, allowing
researchers to investigate unknowns using automated computational methods.
| [
{
"created": "Fri, 5 Dec 2014 09:20:07 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Dec 2014 15:04:23 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Jan 2015 14:35:36 GMT",
"version": "v3"
}
] | 2015-01-29 | [
[
"Dührkop",
"Kai",
""
],
[
"Böcker",
"Sebastian",
""
]
] | Metabolites, small molecules that are involved in cellular reactions, provide a direct functional signature of cellular state. Untargeted metabolomics experiments usually rely on tandem mass spectrometry to identify the thousands of compounds in a biological sample. Today, the vast majority of metabolites remain unknown. Fragmentation trees have become a powerful tool for the interpretation of tandem mass spectrometry data of small molecules. These trees are found by combinatorial optimization, and aim at explaining the experimental data via fragmentation cascades. Obtaining biochemically meaningful results requires an elaborate optimization function. We present a new scoring for computing fragmentation trees, transforming the combinatorial optimization into a maximum a posteriori estimator. We demonstrate the superiority of the new scoring for two tasks: both for the de novo identification of molecular formulas of unknown compounds and for searching a database for structurally similar compounds, our method performs significantly better than the previous scoring, as well as other methods for this task. Our method can expedite the workflow for untargeted metabolomics, allowing researchers to investigate unknowns using automated computational methods. |
2003.14160 | Zolt\'an V\'arallyay | D\'avid T\'atrai, Zolt\'an V\'arallyay | COVID-19 epidemic outcome predictions based on logistic fitting and
estimation of its reliability | 15 pages, 6 figure, 1 long table | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the first outbreak of the COVID-19 epidemic at the end of 2019, data
has been made available on the number of infections, deaths and recoveries for
all countries of the World, and that data can be used for statistical analysis.
The primary interest of this paper is how well the logistic equation can
predict the outcome of COVID-19 epidemic in any regions of the World assuming
that the methodology of the testing process, namely the data collection method
and social behavior is not changing over the course of time. Besides the social
relevance, this study has two scientific purposes: we investigate if a simple
saturation model can describe the trend of the COVID-19 epidemic and if so, we
would like to determine, from which point during the epidemic the fitting
parameters provide reliable predictions. We also give estimations for the
outcome of this epidemic in several countries based on the logistic model and
the data available on 27 March, 2020. Based on the saturated cases in China, we
have managed to find some criteria to judge the reliability of the predictions.
| [
{
"created": "Tue, 31 Mar 2020 12:56:10 GMT",
"version": "v1"
}
] | 2020-04-01 | [
[
"Tátrai",
"Dávid",
""
],
[
"Várallyay",
"Zoltán",
""
]
] | Since the first outbreak of the COVID-19 epidemic at the end of 2019, data has been made available on the number of infections, deaths, and recoveries for all countries of the world, and that data can be used for statistical analysis. The primary interest of this paper is how well the logistic equation can predict the outcome of the COVID-19 epidemic in any region of the world, assuming that the methodology of the testing process, namely the data collection method, and social behavior do not change over the course of time. Besides the social relevance, this study has two scientific purposes: we investigate whether a simple saturation model can describe the trend of the COVID-19 epidemic and, if so, we would like to determine from which point during the epidemic the fitting parameters provide reliable predictions. We also give estimations for the outcome of this epidemic in several countries based on the logistic model and the data available on 27 March 2020. Based on the saturated cases in China, we have managed to find some criteria to judge the reliability of the predictions. |
1601.07534 | Henning U. Voss | Henning U. Voss | The leaky integrator with recurrent inhibition as a predictor | 1 figure included in text. published as a note | Neural Computation 28, 1498-1502 (2016) | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown that the leaky integrator, the basis for many neuronal models,
possesses a negative group delay when a time-delayed recurrent inhibition is
added to it. By means of this negative group delay, the leaky integrator
becomes a predictor for some frequency components of the input signal. The
prediction properties are derived analytically and an application to a local
field potential is provided.
| [
{
"created": "Wed, 27 Jan 2016 20:20:57 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Feb 2016 18:41:28 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jul 2016 20:23:11 GMT",
"version": "v3"
}
] | 2016-07-29 | [
[
"Voss",
"Henning U.",
""
]
] | It is shown that the leaky integrator, the basis for many neuronal models, possesses a negative group delay when a time-delayed recurrent inhibition is added to it. By means of this negative group delay, the leaky integrator becomes a predictor for some frequency components of the input signal. The prediction properties are derived analytically and an application to a local field potential is provided. |
2304.13230 | Jian Ma | Muyu Yang and Jian Ma | UNADON: Transformer-based model to predict genome-wide chromosome
spatial position | Published in ISMB 2023 | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The spatial positioning of chromosomes relative to functional nuclear bodies
is intertwined with genome functions such as transcription. However, the
sequence patterns and epigenomic features that collectively influence chromatin
spatial positioning in a genome-wide manner are not well understood. Here, we
develop a new transformer-based deep learning model called UNADON, which
predicts the genome-wide cytological distance to a specific type of nuclear
body, as measured by TSA-seq, using both sequence features and epigenomic
signals. Evaluations of UNADON in four cell lines (K562, H1, HFFc6, HCT116)
show high accuracy in predicting chromatin spatial positioning to nuclear
bodies when trained on a single cell line. UNADON also performed well in an
unseen cell type. Importantly, we reveal potential sequence and epigenomic
factors that affect large-scale chromatin compartmentalization to nuclear
bodies. Together, UNADON provides new insights into the principles linking
sequence features and large-scale chromatin spatial localization, which has
important implications for understanding nuclear structure and function.
| [
{
"created": "Wed, 26 Apr 2023 01:30:50 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Jul 2023 05:29:14 GMT",
"version": "v2"
}
] | 2023-07-04 | [
[
"Yang",
"Muyu",
""
],
[
"Ma",
"Jian",
""
]
] | The spatial positioning of chromosomes relative to functional nuclear bodies is intertwined with genome functions such as transcription. However, the sequence patterns and epigenomic features that collectively influence chromatin spatial positioning in a genome-wide manner are not well understood. Here, we develop a new transformer-based deep learning model called UNADON, which predicts the genome-wide cytological distance to a specific type of nuclear body, as measured by TSA-seq, using both sequence features and epigenomic signals. Evaluations of UNADON in four cell lines (K562, H1, HFFc6, HCT116) show high accuracy in predicting chromatin spatial positioning to nuclear bodies when trained on a single cell line. UNADON also performed well in an unseen cell type. Importantly, we reveal potential sequence and epigenomic factors that affect large-scale chromatin compartmentalization to nuclear bodies. Together, UNADON provides new insights into the principles linking sequence features and large-scale chromatin spatial localization, which has important implications for understanding nuclear structure and function. |
1909.04358 | Friedrich Schuessler | Friedrich Schuessler, Alexis Dubreuil, Francesca Mastrogiuseppe,
Srdjan Ostojic, Omri Barak | Dynamics of random recurrent networks with correlated low-rank structure | 18 pages, 7 figures | Phys. Rev. Research 2, 013111 (2020) | 10.1103/PhysRevResearch.2.013111 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A given neural network in the brain is involved in many different tasks. This
implies that, when considering a specific task, the network's connectivity
contains a component which is related to the task and another component which
can be considered random. Understanding the interplay between the structured
and random components, and their effect on network dynamics and functionality
is an important open question. Recent studies addressed the co-existence of
random and structured connectivity, but considered the two parts to be
uncorrelated. This constraint limits the dynamics and leaves the random
connectivity non-functional. Algorithms that train networks to perform specific
tasks typically generate correlations between structure and random
connectivity. Here we study nonlinear networks with correlated structured and
random components, assuming the structure to have a low rank. We develop an
analytic framework to establish the precise effect of the correlations on the
eigenvalue spectrum of the joint connectivity. We find that the spectrum
consists of a bulk and multiple outliers, whose location is predicted by our
theory. Using mean-field theory, we show that these outliers directly determine
both the fixed points of the system and their stability. Taken together, our
analysis elucidates how correlations allow structured and random connectivity
to synergistically extend the range of computations available to networks.
| [
{
"created": "Tue, 10 Sep 2019 09:01:33 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Oct 2019 12:51:14 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Dec 2019 23:57:33 GMT",
"version": "v3"
}
] | 2021-03-17 | [
[
"Schuessler",
"Friedrich",
""
],
[
"Dubreuil",
"Alexis",
""
],
[
"Mastrogiuseppe",
"Francesca",
""
],
[
"Ostojic",
"Srdjan",
""
],
[
"Barak",
"Omri",
""
]
] | A given neural network in the brain is involved in many different tasks. This implies that, when considering a specific task, the network's connectivity contains a component which is related to the task and another component which can be considered random. Understanding the interplay between the structured and random components, and their effect on network dynamics and functionality is an important open question. Recent studies addressed the co-existence of random and structured connectivity, but considered the two parts to be uncorrelated. This constraint limits the dynamics and leaves the random connectivity non-functional. Algorithms that train networks to perform specific tasks typically generate correlations between structure and random connectivity. Here we study nonlinear networks with correlated structured and random components, assuming the structure to have a low rank. We develop an analytic framework to establish the precise effect of the correlations on the eigenvalue spectrum of the joint connectivity. We find that the spectrum consists of a bulk and multiple outliers, whose location is predicted by our theory. Using mean-field theory, we show that these outliers directly determine both the fixed points of the system and their stability. Taken together, our analysis elucidates how correlations allow structured and random connectivity to synergistically extend the range of computations available to networks. |
2105.04730 | Matthew Simpson | Maud El-Hachem, Scott W McCue, Matthew J Simpson | Travelling wave analysis of cellular invasion into surrounding tissues | 30 pages, 8 figures | null | 10.1016/j.physd.2021.133026 | null | q-bio.TO nlin.PS | http://creativecommons.org/licenses/by/4.0/ | Single-species reaction-diffusion equations, such as the Fisher-KPP and
Porous-Fisher equations, support travelling wave solutions that are often
interpreted as simple mathematical models of biological invasion. Such
travelling wave solutions are thought to play a role in various applications
including development, wound healing and malignant invasion. One criticism of
these single-species equations is that they do not explicitly describe
interactions between the invading population and the surrounding environment.
In this work we study a reaction-diffusion equation that describes malignant
invasion which has been used to interpret experimental measurements describing
the invasion of malignant melanoma cells into surrounding human skin tissues.
This model explicitly describes how the population of cancer cells degrades the
surrounding tissues, thereby creating free space into which the cancer cells
migrate and proliferate to form an invasion wave of malignant tissue that is
coupled to a retreating wave of skin tissue. We analyse travelling wave
solutions of this model using a combination of numerical simulation, phase
plane analysis and perturbation techniques. Our analysis shows that the
travelling wave solutions involve a range of very interesting properties that
resemble certain well-established features of both the Fisher-KPP and
Porous-Fisher equations, as well as a range of novel properties that can be
thought of as extensions of these well-studied single-species equations. Of
particular interest is that travelling wave solutions of the invasion model are
very well approximated by trajectories in the Fisher-KPP phase plane that are
normally disregarded. This observation establishes a previously unnoticed link
between coupled multi-species reaction diffusion models of invasion and a
different class of models of invasion that involve moving boundary problems.
| [
{
"created": "Tue, 11 May 2021 00:57:25 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jul 2021 09:32:48 GMT",
"version": "v2"
}
] | 2021-10-04 | [
[
"El-Hachem",
"Maud",
""
],
[
"McCue",
"Scott W",
""
],
[
"Simpson",
"Matthew J",
""
]
] | Single-species reaction-diffusion equations, such as the Fisher-KPP and Porous-Fisher equations, support travelling wave solutions that are often interpreted as simple mathematical models of biological invasion. Such travelling wave solutions are thought to play a role in various applications including development, wound healing and malignant invasion. One criticism of these single-species equations is that they do not explicitly describe interactions between the invading population and the surrounding environment. In this work we study a reaction-diffusion equation that describes malignant invasion which has been used to interpret experimental measurements describing the invasion of malignant melanoma cells into surrounding human skin tissues. This model explicitly describes how the population of cancer cells degrades the surrounding tissues, thereby creating free space into which the cancer cells migrate and proliferate to form an invasion wave of malignant tissue that is coupled to a retreating wave of skin tissue. We analyse travelling wave solutions of this model using a combination of numerical simulation, phase plane analysis and perturbation techniques. Our analysis shows that the travelling wave solutions involve a range of very interesting properties that resemble certain well-established features of both the Fisher-KPP and Porous-Fisher equations, as well as a range of novel properties that can be thought of as extensions of these well-studied single-species equations. Of particular interest is that travelling wave solutions of the invasion model are very well approximated by trajectories in the Fisher-KPP phase plane that are normally disregarded. This observation establishes a previously unnoticed link between coupled multi-species reaction diffusion models of invasion and a different class of models of invasion that involve moving boundary problems. |
q-bio/0404032 | David R. Bickel | David R. Bickel | On "Strong control, conservative point estimation and simultaneous
conservative consistency of false discovery rates": Does a large number of
tests obviate confidence intervals of the FDR? | null | null | null | null | q-bio.GN q-bio.CB | null | A previously proved theorem gives sufficient conditions for an estimator of
the false discovery rate (FDR) to conservatively converge to the FDR with
probability 1 as the number of hypothesis tests increases, even for small
sample sizes. It does not follow that several thousand tests ensure that the
estimator has moderate variance under those conditions. In fact, they can hold
even if the test statistics have long-range correlations, which yield
unacceptably wide confidence intervals, as observed in genomic data when there
are 8 or 16 individuals (microarrays) per group. Thus, informative FDR
estimation will include some measure of its reliability.
| [
{
"created": "Fri, 23 Apr 2004 16:50:23 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bickel",
"David R.",
""
]
] | A previously proved theorem gives sufficient conditions for an estimator of the false discovery rate (FDR) to conservatively converge to the FDR with probability 1 as the number of hypothesis tests increases, even for small sample sizes. It does not follow that several thousand tests ensure that the estimator has moderate variance under those conditions. In fact, they can hold even if the test statistics have long-range correlations, which yield unacceptably wide confidence intervals, as observed in genomic data when there are 8 or 16 individuals (microarrays) per group. Thus, informative FDR estimation will include some measure of its reliability. |
2211.08558 | Swarnendu Banerjee | Arnab Chattopadhyay, Swarnendu Banerjee, Amit Samadder, Sabyasachi
Bhattacharya | Environmental toxicity influences disease spread in consumer population | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of infectious disease has long been of interest to ecologists.
The initiation of an epidemic and the long-term disease dynamics are largely
influenced by the nature of the underlying consumer (host)-resource dynamics.
Ecological traits of such systems may often be modulated by toxins released in
the environment due to ongoing anthropogenic activities. This, in addition to
toxin-mediated alteration of epidemiological traits, has a significant impact
on disease progression in ecosystems, a topic that remains comparatively
understudied. To address this, we consider a mathematical model of disease
transmission in a consumer population where multiple traits are affected by
environmental toxins. Long-term dynamics show that the level of environmental
toxin determines disease persistence, and increasing toxin may even eradicate
the disease in certain circumstances. Furthermore, our results demonstrate
bistability between different ecosystem states and the possibility of an abrupt
transition from disease-free coexistence to disease-induced extinction of
consumers. Overall, the results from this study will help us gain fundamental
insights into disease propagation in natural ecosystems in the face of present
anthropogenic changes.
| [
{
"created": "Tue, 15 Nov 2022 22:56:19 GMT",
"version": "v1"
}
] | 2022-11-17 | [
[
"Chattopadhyay",
"Arnab",
""
],
[
"Banerjee",
"Swarnendu",
""
],
[
"Samadder",
"Amit",
""
],
[
"Bhattacharya",
"Sabyasachi",
""
]
] | The study of infectious disease has long been of interest to ecologists. The initiation of an epidemic and the long-term disease dynamics are largely influenced by the nature of the underlying consumer (host)-resource dynamics. Ecological traits of such systems may often be modulated by toxins released in the environment due to ongoing anthropogenic activities. This, in addition to toxin-mediated alteration of epidemiological traits, has a significant impact on disease progression in ecosystems, a topic that remains comparatively understudied. To address this, we consider a mathematical model of disease transmission in a consumer population where multiple traits are affected by environmental toxins. Long-term dynamics show that the level of environmental toxin determines disease persistence, and increasing toxin may even eradicate the disease in certain circumstances. Furthermore, our results demonstrate bistability between different ecosystem states and the possibility of an abrupt transition from disease-free coexistence to disease-induced extinction of consumers. Overall, the results from this study will help us gain fundamental insights into disease propagation in natural ecosystems in the face of present anthropogenic changes. |
q-bio/0405023 | Dietrich Stauffer | Debashish Chowdhury and Dietrich Stauffer | Evolving eco-system: a network of networks | 7 pages including 2 figures | null | 10.1016/j.physa.2004.08.051 | null | q-bio.PE | null | Ecology and evolution are inseparable. Motivated by some recent experiments,
we have developed models of evolutionary ecology from the perspective of
dynamic networks. In these models, in addition to the intra-node dynamics,
which corresponds to an individual-based population dynamics of species, the
entire network itself changes slowly with time to capture evolutionary
processes. After a brief summary of our recent published works on these network
models of eco-systems, we extend the most recent version of the model
incorporating predators that wander into neighbouring spatial patches for food.
| [
{
"created": "Thu, 27 May 2004 10:10:32 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Chowdhury",
"Debashish",
""
],
[
"Stauffer",
"Dietrich",
""
]
] | Ecology and evolution are inseparable. Motivated by some recent experiments, we have developed models of evolutionary ecology from the perspective of dynamic networks. In these models, in addition to the intra-node dynamics, which corresponds to an individual-based population dynamics of species, the entire network itself changes slowly with time to capture evolutionary processes. After a brief summary of our recent published works on these network models of eco-systems, we extend the most recent version of the model incorporating predators that wander into neighbouring spatial patches for food. |
2006.09429 | Karl Friston | Karl J. Friston, Thomas Parr, Peter Zeidman, Adeel Razi, Guillaume
Flandin, Jean Daunizeau, Oliver J. Hulme, Alexander J. Billig, Vladimir
Litvak, Cathy J. Price, Rosalyn J. Moran, Anthony Costello, Deenan Pillay and
Christian Lambert | Effective immunity and second waves: a dynamic causal modelling study | 20 pages, 8 figures, 3 tables (technical report) | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | This technical report addresses a pressing issue in the trajectory of the
coronavirus outbreak; namely, the rate at which effective immunity is lost
following the first wave of the pandemic. This is a crucial epidemiological
parameter that speaks to both the consequences of relaxing lockdown and the
propensity for a second wave of infections. Using a dynamic causal model of
reported cases and deaths from multiple countries, we evaluated the evidence
for models of progressively longer periods of immunity. The results speak to an
effective population immunity of about three months that, under the model,
defers any second wave for approximately six months in most countries. This may
have implications for the window of opportunity for tracking and tracing, as
well as for developing vaccination programmes, and other therapeutic
interventions.
| [
{
"created": "Tue, 16 Jun 2020 18:22:24 GMT",
"version": "v1"
}
] | 2020-06-18 | [
[
"Friston",
"Karl J.",
""
],
[
"Parr",
"Thomas",
""
],
[
"Zeidman",
"Peter",
""
],
[
"Razi",
"Adeel",
""
],
[
"Flandin",
"Guillaume",
""
],
[
"Daunizeau",
"Jean",
""
],
[
"Hulme",
"Oliver J.",
""
],
[
"Billig",
"Alexander J.",
""
],
[
"Litvak",
"Vladimir",
""
],
[
"Price",
"Cathy J.",
""
],
[
"Moran",
"Rosalyn J.",
""
],
[
"Costello",
"Anthony",
""
],
[
"Pillay",
"Deenan",
""
],
[
"Lambert",
"Christian",
""
]
] | This technical report addresses a pressing issue in the trajectory of the coronavirus outbreak; namely, the rate at which effective immunity is lost following the first wave of the pandemic. This is a crucial epidemiological parameter that speaks to both the consequences of relaxing lockdown and the propensity for a second wave of infections. Using a dynamic causal model of reported cases and deaths from multiple countries, we evaluated the evidence for models of progressively longer periods of immunity. The results speak to an effective population immunity of about three months that, under the model, defers any second wave for approximately six months in most countries. This may have implications for the window of opportunity for tracking and tracing, as well as for developing vaccination programmes, and other therapeutic interventions. |
2301.10772 | Zhijian Yang | Zhijian Yang, Junhao Wen, Ahmed Abdulkadir, Yuhan Cui, Guray Erus,
Elizabeth Mamourian, Randa Melhem, Dhivya Srinivasan, Sindhuja T.
Govindarajan, Jiong Chen, Mohamad Habes, Colin L. Masters, Paul Maruff,
Jurgen Fripp, Luigi Ferrucci, Marilyn S. Albert, Sterling C. Johnson, John C.
Morris, Pamela LaMontagne, Daniel S. Marcus, Tammie L. S. Benzinger, David A.
Wolk, Li Shen, Jingxuan Bao, Susan M. Resnick, Haochang Shou, Ilya M.
Nasrallah, Christos Davatzikos | Gene-SGAN: a method for discovering disease subtypes with imaging and
genetic signatures via multi-view weakly-supervised deep clustering | null | null | null | null | q-bio.QM cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disease heterogeneity has been a critical challenge for precision diagnosis
and treatment, especially in neurologic and neuropsychiatric diseases. Many
diseases can display multiple distinct brain phenotypes across individuals,
potentially reflecting disease subtypes that can be captured using MRI and
machine learning methods. However, biological interpretability and treatment
relevance are limited if the derived subtypes are not associated with genetic
drivers or susceptibility factors. Herein, we describe Gene-SGAN - a
multi-view, weakly-supervised deep clustering method - which dissects disease
heterogeneity by jointly considering phenotypic and genetic data, thereby
conferring genetic correlations to the disease subtypes and associated
endophenotypic signatures. We first validate the generalizability,
interpretability, and robustness of Gene-SGAN in semi-synthetic experiments. We
then demonstrate its application to real multi-site datasets from 28,858
individuals, deriving subtypes of Alzheimer's disease and brain endophenotypes
associated with hypertension, from MRI and SNP data. Derived brain phenotypes
displayed significant differences in neuroanatomical patterns, genetic
determinants, biological and clinical biomarkers, indicating potentially
distinct underlying neuropathologic processes, genetic drivers, and
susceptibility factors. Overall, Gene-SGAN is broadly applicable to disease
subtyping and endophenotype discovery, and is herein tested on disease-related,
genetically-driven neuroimaging phenotypes.
| [
{
"created": "Wed, 25 Jan 2023 10:08:30 GMT",
"version": "v1"
}
] | 2023-01-27 | [
[
"Yang",
"Zhijian",
""
],
[
"Wen",
"Junhao",
""
],
[
"Abdulkadir",
"Ahmed",
""
],
[
"Cui",
"Yuhan",
""
],
[
"Erus",
"Guray",
""
],
[
"Mamourian",
"Elizabeth",
""
],
[
"Melhem",
"Randa",
""
],
[
"Srinivasan",
"Dhivya",
""
],
[
"Govindarajan",
"Sindhuja T.",
""
],
[
"Chen",
"Jiong",
""
],
[
"Habes",
"Mohamad",
""
],
[
"Masters",
"Colin L.",
""
],
[
"Maruff",
"Paul",
""
],
[
"Fripp",
"Jurgen",
""
],
[
"Ferrucci",
"Luigi",
""
],
[
"Albert",
"Marilyn S.",
""
],
[
"Johnson",
"Sterling C.",
""
],
[
"Morris",
"John C.",
""
],
[
"LaMontagne",
"Pamela",
""
],
[
"Marcus",
"Daniel S.",
""
],
[
"Benzinger",
"Tammie L. S.",
""
],
[
"Wolk",
"David A.",
""
],
[
"Shen",
"Li",
""
],
[
"Bao",
"Jingxuan",
""
],
[
"Resnick",
"Susan M.",
""
],
[
"Shou",
"Haochang",
""
],
[
"Nasrallah",
"Ilya M.",
""
],
[
"Davatzikos",
"Christos",
""
]
] | Disease heterogeneity has been a critical challenge for precision diagnosis and treatment, especially in neurologic and neuropsychiatric diseases. Many diseases can display multiple distinct brain phenotypes across individuals, potentially reflecting disease subtypes that can be captured using MRI and machine learning methods. However, biological interpretability and treatment relevance are limited if the derived subtypes are not associated with genetic drivers or susceptibility factors. Herein, we describe Gene-SGAN - a multi-view, weakly-supervised deep clustering method - which dissects disease heterogeneity by jointly considering phenotypic and genetic data, thereby conferring genetic correlations to the disease subtypes and associated endophenotypic signatures. We first validate the generalizability, interpretability, and robustness of Gene-SGAN in semi-synthetic experiments. We then demonstrate its application to real multi-site datasets from 28,858 individuals, deriving subtypes of Alzheimer's disease and brain endophenotypes associated with hypertension, from MRI and SNP data. Derived brain phenotypes displayed significant differences in neuroanatomical patterns, genetic determinants, biological and clinical biomarkers, indicating potentially distinct underlying neuropathologic processes, genetic drivers, and susceptibility factors. Overall, Gene-SGAN is broadly applicable to disease subtyping and endophenotype discovery, and is herein tested on disease-related, genetically-driven neuroimaging phenotypes. |
1807.06398 | Mahmoud Hassan | J. Rizkallah, P. Benquet, A. Kabbara, O. Dufor, F. Wendling, M. Hassan | Dynamic reshaping of functional brain networks during visual object
recognition | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging evidence shows that the modular organization of the human brain
allows for better and more efficient cognitive performance. Many of these
cognitive functions are very fast and occur on a subsecond time scale, such as visual
object recognition. Here, we investigate brain network modularity while
controlling stimuli meaningfulness and measuring participant reaction time. We
particularly raised two questions: i) does the dynamic brain network modularity
change during the recognition of meaningful and meaningless visual images? And
ii) is there a correlation between network modularity and the reaction time of
participants? To tackle these issues, we collected dense electroencephalography
(EEG, 256 channels) data from 20 healthy human subjects performing a cognitive
task consisting of naming meaningful (tools, animals) and meaningless
(scrambled) images. Functional brain networks in both categories were estimated
at subsecond time scale using the EEG source connectivity method. By using
multislice modularity algorithms, we tracked the reconfiguration of functional
networks during the recognition of both meaningful and meaningless images.
Results showed a difference in the module characteristics of both conditions in
terms of integration (interactions between modules) and occurrence (probability
on average of any two brain regions to fall in the same module during the
task). Integration and occurrence were greater for meaningless than for
meaningful images. Our findings also revealed that the occurrence within the
right frontal and left occipito-temporal regions can help to predict the
ability of the brain to rapidly recognize and name visual stimuli. We speculate
that these observations are applicable not only to other fast cognitive
functions but also to detect fast disconnections that can occur in some brain
disorders.
| [
{
"created": "Tue, 17 Jul 2018 13:06:46 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Jul 2018 11:19:53 GMT",
"version": "v2"
}
] | 2018-08-01 | [
[
"Rizkallah",
"J.",
""
],
[
"Benquet",
"P.",
""
],
[
"Kabbara",
"A.",
""
],
[
"Dufor",
"O.",
""
],
[
"Wendling",
"F.",
""
],
[
"Hassan",
"M.",
""
]
] | Emerging evidence shows that the modular organization of the human brain allows for better and more efficient cognitive performance. Many of these cognitive functions are very fast and occur on a subsecond time scale, such as visual object recognition. Here, we investigate brain network modularity while controlling stimuli meaningfulness and measuring participant reaction time. We particularly raised two questions: i) does the dynamic brain network modularity change during the recognition of meaningful and meaningless visual images? And ii) is there a correlation between network modularity and the reaction time of participants? To tackle these issues, we collected dense electroencephalography (EEG, 256 channels) data from 20 healthy human subjects performing a cognitive task consisting of naming meaningful (tools, animals) and meaningless (scrambled) images. Functional brain networks in both categories were estimated at subsecond time scale using the EEG source connectivity method. By using multislice modularity algorithms, we tracked the reconfiguration of functional networks during the recognition of both meaningful and meaningless images. Results showed a difference in the module characteristics of both conditions in terms of integration (interactions between modules) and occurrence (probability on average of any two brain regions to fall in the same module during the task). Integration and occurrence were greater for meaningless than for meaningful images. Our findings also revealed that the occurrence within the right frontal and left occipito-temporal regions can help to predict the ability of the brain to rapidly recognize and name visual stimuli. We speculate that these observations are applicable not only to other fast cognitive functions but also to detect fast disconnections that can occur in some brain disorders. |
2207.04353 | Wayne Hayes | Patrick Wang and Henry Ye and Wayne B Hayes | BLANT: Basic Local Alignment of Network Topology, Part 1: Seeding local
alignments with unambiguous 8-node graphlets | 13 pages, 12 Figures, 2 Tables | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | BLAST is a standard tool in bioinformatics for creating local sequence
alignments using a "seed-and-extend" approach. Here we introduce an analogous
seed-and-extend algorithm that produces local network alignments: BLANT, for
Basic Local Alignment of Network Topology. This paper introduces BLANT-seed:
given an input graph, BLANT-seed uses network topology alone to create a
limited, high-specificity index of k-node induced subgraphs called k-graphlets
(analogous to BLAST's k-mers). The index is constructed so that, if
significant common network topology exists between two graphs, their indexes
are likely to overlap. BLANT-seed then queries the indexes of two networks to
generate a list of common k-graphlets which, when paired, form a seed pair. Our
companion paper (submitted elsewhere) describes BLANT-extend, which "grows"
these seeds to larger local alignments, again using only topological
information.
| [
{
"created": "Sun, 10 Jul 2022 00:44:33 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Wang",
"Patrick",
""
],
[
"Ye",
"Henry",
""
],
[
"Hayes",
"Wayne B",
""
]
] | BLAST is a standard tool in bioinformatics for creating local sequence alignments using a "seed-and-extend" approach. Here we introduce an analogous seed-and-extend algorithm that produces local network alignments: BLANT, for Basic Local Alignment of Network Topology. This paper introduces BLANT-seed: given an input graph, BLANT-seed uses network topology alone to create a limited, high-specificity index of k-node induced subgraphs called k-graphlets (analogous to BLAST's k-mers). The index is constructed so that, if significant common network topology exists between two graphs, their indexes are likely to overlap. BLANT-seed then queries the indexes of two networks to generate a list of common k-graphlets which, when paired, form a seed pair. Our companion paper (submitted elsewhere) describes BLANT-extend, which "grows" these seeds to larger local alignments, again using only topological information. |
2002.09034 | Narciso L\'opez-L\'opez | Narciso L\'opez-L\'opez, Andrea V\'azquez, Cyril Poupon,
Jean-Fran\c{c}ois Mangin, Pamela Guevara | Cortical surface parcellation based on intra-subject white matter fiber
clustering | This research has received funding from the European Union's Horizon
2020 research and innovation programme under the Marie Sklodowska-Curie
Actions H2020-MSCA-RISE-2015 BIRDS GA No. 690941, CONICYT PFCHA/ DOCTORADO
NACIONAL/2016-21160342, CONICYT FONDECYT 1190701, CONICYT PIA/Anillo de
Investigaci\'on en Ciencia y Tecnolog\'ia ACT172121 and CONICYT Basal Center
FB0008 | null | 10.1109/CHILECON47746.2019.8988066 | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a hybrid method that performs the complete parcellation of the
cerebral cortex of an individual, based on the connectivity information of the
white matter fibers from a whole-brain tractography dataset. The method
consists of five steps: first, intra-subject clustering is performed on the
brain tractography. The fibers that make up each cluster are then intersected
with the cortical mesh and then filtered to discard outliers. In addition, the
method resolves the overlapping between the different intersection regions
(sub-parcels) throughout the cortex efficiently. Finally, a post-processing is
done to achieve more uniform sub-parcels. The output is the complete labeling
of cortical mesh vertices, representing the different cortex sub-parcels, with
strong connections to other sub-parcels. We evaluated our method with measures
of brain connectivity such as functional segregation (clustering coefficient),
functional integration (characteristic path length) and small-world. Results in
five subjects from ARCHI database show a good individual cortical parcellation
for each one, composed of about 200 subparcels per hemisphere and complying
with these connectivity measures.
| [
{
"created": "Sun, 16 Feb 2020 19:14:39 GMT",
"version": "v1"
}
] | 2020-02-24 | [
[
"López-López",
"Narciso",
""
],
[
"Vázquez",
"Andrea",
""
],
[
"Poupon",
"Cyril",
""
],
[
"Mangin",
"Jean-François",
""
],
[
"Guevara",
"Pamela",
""
]
] | We present a hybrid method that performs the complete parcellation of the cerebral cortex of an individual, based on the connectivity information of the white matter fibers from a whole-brain tractography dataset. The method consists of five steps, first intra-subject clustering is performed on the brain tractography. The fibers that make up each cluster are then intersected with the cortical mesh and then filtered to discard outliers. In addition, the method resolves the overlapping between the different intersection regions (sub-parcels) throughout the cortex efficiently. Finally, a post-processing is done to achieve more uniform sub-parcels. The output is the complete labeling of cortical mesh vertices, representing the different cortex sub-parcels, with strong connections to other sub-parcels. We evaluated our method with measures of brain connectivity such as functional segregation (clustering coefficient), functional integration (characteristic path length) and small-world. Results in five subjects from ARCHI database show a good individual cortical parcellation for each one, composed of about 200 subparcels per hemisphere and complying with these connectivity measures. |
2406.19659 | Shan Xu | Shan Xu, Xinran Feng, Yuannan Li, and Jia Liu | Object Space is Embodied | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The perceived similarity between objects has often been attributed to their
physical and conceptual features, such as appearance and animacy, and the
theoretical framework of object space is accordingly conceived. Here, we extend
this framework by proposing that object space may also be defined by embodied
features, specifically action possibilities that objects afford to an agent
(i.e., affordance) and their spatial relation with the agent (i.e.,
situatedness). To test this proposal, we quantified the embodied features with
a set of action atoms. We found that embodied features explained the subjective
similarity among familiar objects along with the objects' visual features. This
observation was further replicated with novel objects. Our study demonstrates
that embodied features, which place objects within an ecological context, are
essential in constructing object space in the human visual system, emphasizing
the importance of incorporating embodiment as a fundamental dimension in our
understanding of the visual world.
| [
{
"created": "Fri, 28 Jun 2024 05:07:36 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Aug 2024 16:28:25 GMT",
"version": "v2"
}
] | 2024-08-06 | [
[
"Xu",
"Shan",
""
],
[
"Feng",
"Xinran",
""
],
[
"Li",
"Yuannan",
""
],
[
"Liu",
"Jia",
""
]
] | The perceived similarity between objects has often been attributed to their physical and conceptual features, such as appearance and animacy, and the theoretical framework of object space is accordingly conceived. Here, we extend this framework by proposing that object space may also be defined by embodied features, specifically action possibilities that objects afford to an agent (i.e., affordance) and their spatial relation with the agent (i.e., situatedness). To test this proposal, we quantified the embodied features with a set of action atoms. We found that embodied features explained the subjective similarity among familiar objects along with the objects' visual features. This observation was further replicated with novel objects. Our study demonstrates that embodied features, which place objects within an ecological context, are essential in constructing object space in the human visual system, emphasizing the importance of incorporating embodiment as a fundamental dimension in our understanding of the visual world. |
1310.6077 | Meredith Trotter | Meredith V. Trotter, Daniel B. Weissman, Grant I. Peterson, Kayla M.
Peck, Joanna Masel | Cryptic Genetic Variation Can Make Irreducible Complexity a Common Mode
of Adaptation | null | null | 10.1111/evo.12517 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The existence of complex (multiple-step) genetic adaptations that are
"irreducible" (i.e., all partial combinations are less fit than the original
genotype) is one of the longest standing problems in evolutionary biology. In
standard genetics parlance, these adaptations require the crossing of a wide
adaptive valley of deleterious intermediate stages. Here we demonstrate, using
a simple model, that evolution can cross wide valleys to produce "irreducibly
complex" adaptations by making use of previously cryptic mutations. When
revealed by an evolutionary capacitor, previously cryptic mutants have higher
initial frequencies than do new mutations, bringing them closer to a
valley-crossing saddle in allele frequency space. Moreover, simple
combinatorics imply an enormous number of candidate combinations exist within
available cryptic genetic variation. We model the dynamics of crossing of a
wide adaptive valley after a capacitance event using both numerical simulations
and analytical approximations. Although individual valley crossing events
become less likely as valleys widen, by taking the combinatorics of genotype
space into account, we see that revealing cryptic variation can cause the
frequent evolution of complex adaptations. This finding also effectively
dismantles "irreducible complexity" as an argument against evolution by
providing a general mechanism for crossing wide adaptive valleys.
| [
{
"created": "Tue, 22 Oct 2013 23:34:57 GMT",
"version": "v1"
}
] | 2014-10-17 | [
[
"Trotter",
"Meredith V.",
""
],
[
"Weissman",
"Daniel B.",
""
],
[
"Peterson",
"Grant I.",
""
],
[
"Peck",
"Kayla M.",
""
],
[
"Masel",
"Joanna",
""
]
] | The existence of complex (multiple-step) genetic adaptations that are "irreducible" (i.e., all partial combinations are less fit than the original genotype) is one of the longest standing problems in evolutionary biology. In standard genetics parlance, these adaptations require the crossing of a wide adaptive valley of deleterious intermediate stages. Here we demonstrate, using a simple model, that evolution can cross wide valleys to produce "irreducibly complex" adaptations by making use of previously cryptic mutations. When revealed by an evolutionary capacitor, previously cryptic mutants have higher initial frequencies than do new mutations, bringing them closer to a valley-crossing saddle in allele frequency space. Moreover, simple combinatorics imply an enormous number of candidate combinations exist within available cryptic genetic variation. We model the dynamics of crossing of a wide adaptive valley after a capacitance event using both numerical simulations and analytical approximations. Although individual valley crossing events become less likely as valleys widen, by taking the combinatorics of genotype space into account, we see that revealing cryptic variation can cause the frequent evolution of complex adaptations. This finding also effectively dismantles "irreducible complexity" as an argument against evolution by providing a general mechanism for crossing wide adaptive valleys. |
0912.2336 | Rafael Dias Vilela | Rafael D. Vilela and Benjamin Lindner | A comparative study of different integrate-and-fire neurons: spontaneous
activity, dynamical response, and stimulus-induced correlation | 12 pages | Phys. Rev. E 80, 031909 (2009) | 10.1103/PhysRevE.80.031909 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic integrate-and-fire (IF) neuron models have found widespread
applications in computational neuroscience. Here we present results on the
white-noise-driven perfect, leaky, and quadratic IF models, focusing on the
spectral statistics (power spectra, cross spectra, and coherence functions) in
different dynamical regimes (noise-induced and tonic firing regimes with low or
moderate noise). We make the models comparable by tuning parameters such that
the mean value and the coefficient of variation of the interspike interval
match for all of them. We find that, under these conditions, the power spectrum
under white-noise stimulation is often very similar while the response
characteristics, described by the cross spectrum between a fraction of the
input noise and the output spike train, can differ drastically. We also
investigate how the spike trains of two neurons of the same kind (e.g. two
leaky IF neurons) correlate if they share a common noise input. We show that,
depending on the dynamical regime, either two quadratic IF models or two leaky
IFs are more strongly correlated. Our results suggest that, when choosing among
simple IF models for network simulations, the details of the model have a
strong effect on correlation and regularity of the output.
| [
{
"created": "Fri, 11 Dec 2009 20:44:11 GMT",
"version": "v1"
}
] | 2015-05-14 | [
[
"Vilela",
"Rafael D.",
""
],
[
"Lindner",
"Benjamin",
""
]
] | Stochastic integrate-and-fire (IF) neuron models have found widespread applications in computational neuroscience. Here we present results on the white-noise-driven perfect, leaky, and quadratic IF models, focusing on the spectral statistics (power spectra, cross spectra, and coherence functions) in different dynamical regimes (noise-induced and tonic firing regimes with low or moderate noise). We make the models comparable by tuning parameters such that the mean value and the coefficient of variation of the interspike interval match for all of them. We find that, under these conditions, the power spectrum under white-noise stimulation is often very similar while the response characteristics, described by the cross spectrum between a fraction of the input noise and the output spike train, can differ drastically. We also investigate how the spike trains of two neurons of the same kind (e.g. two leaky IF neurons) correlate if they share a common noise input. We show that, depending on the dynamical regime, either two quadratic IF models or two leaky IFs are more strongly correlated. Our results suggest that, when choosing among simple IF models for network simulations, the details of the model have a strong effect on correlation and regularity of the output. |
0805.3841 | Eugene Shakhnovich | Muyoung Heo, Konstantin B. Zeldovich, Eugene I. Shakhnovich | Diversity against adversity: How adaptive immunity evolves potent
antibodies | null | null | null | null | q-bio.CB q-bio.BM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How does immune system evolve functional proteins - potent antibodies - in
such a short time? We address this question using a microscopic, protein-level,
sequence-based model of humoral immune response with explicitly defined
interactions between Immunoglobulins, host and pathogen proteins. Potent
Immunoglobulins are discovered in this model via clonal selection and affinity
maturation. Possible outcomes of an infection (extinction of cells, survival
with complete elimination of viruses, or persistent infection) crucially depend
on mutation rates of viral and Immunoglobulin proteins. The model predicts that
there is an optimal Somatic Hypermutation (SHM) rate close to experimentally
observed 10-3 per nucleotide per replication. Further, we developed an
analytical theory which explains the physical reason for an optimal SHM program
as a compromise between deleterious effects of random mutations on nascent
maturing Immunoglobulins (adversity) and the need to generate diverse pool of
mutated antibodies from which highly potent ones can be drawn (diversity). The
theory explains such effects as dependence of B cell fate on affinity for an
incoming antigen, ceiling in affinity of mature antibodies, Germinal Center
sizes and maturation times. The theory reveals the molecular factors which
determine the efficiency of affinity maturation, providing insight into
variability of immune response to cytopathic (direct response by germline
antibodies) and poorly cytopathic viruses (crucial role of SHM in response).
These results demonstrate the feasibility and promise of microscopic
sequence-based models of immune system, where population dynamics of evolving
Immunoglobulins is explicitly tied to their molecular properties.
| [
{
"created": "Sun, 25 May 2008 17:26:58 GMT",
"version": "v1"
}
] | 2008-05-27 | [
[
"Heo",
"Muyoung",
""
],
[
"Zeldovich",
"Konstantin B.",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | How does immune system evolve functional proteins - potent antibodies - in such a short time? We address this question using a microscopic, protein-level, sequence-based model of humoral immune response with explicitly defined interactions between Immunoglobulins, host and pathogen proteins. Potent Immunoglobulins are discovered in this model via clonal selection and affinity maturation. Possible outcomes of an infection (extinction of cells, survival with complete elimination of viruses, or persistent infection) crucially depend on mutation rates of viral and Immunoglobulin proteins. The model predicts that there is an optimal Somatic Hypermutation (SHM) rate close to experimentally observed 10-3 per nucleotide per replication. Further, we developed an analytical theory which explains the physical reason for an optimal SHM program as a compromise between deleterious effects of random mutations on nascent maturing Immunoglobulins (adversity) and the need to generate diverse pool of mutated antibodies from which highly potent ones can be drawn (diversity). The theory explains such effects as dependence of B cell fate on affinity for an incoming antigen, ceiling in affinity of mature antibodies, Germinal Center sizes and maturation times. The theory reveals the molecular factors which determine the efficiency of affinity maturation, providing insight into variability of immune response to cytopathic (direct response by germline antibodies) and poorly cytopathic viruses (crucial role of SHM in response). These results demonstrate the feasibility and promise of microscopic sequence-based models of immune system, where population dynamics of evolving Immunoglobulins is explicitly tied to their molecular properties. |
0802.1520 | Ophir Flomenbom | O. Flomenbom, and R. J. Silbey | Toolbox for analyzing finite two-state trajectories | null | Phys. Rev. E 78, 066105 (2008) | 10.1103/PhysRevE.78.066105 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many experiments, the aim is to deduce an underlying multi-substate on-off
kinetic scheme (KS) from the statistical properties of a two-state trajectory.
However, the mapping of a KS into a two-state trajectory leads to the loss of
information about the KS, and so, in many cases, more than one KS can be
associated with the data. We recently showed that the optimal way to solve this
problem is to use canonical forms of reduced dimensions (RD). RD forms are
on-off networks with connections only between substates of different states,
where the connections can have non-exponential waiting time probability density
functions (WT-PDFs). In theory, only a single RD form can be associated with
the data. To utilize RD forms in the analysis of the data, a RD form should be
associated with the data. Here, we give a toolbox for building a RD form from a
finite two-state trajectory. The methods in the toolbox are based on known
statistical methods in data analysis, combined with statistical methods and
numerical algorithms designed specifically for the current problem. Our toolbox
is self-contained - it builds a mechanism based only on the information it
extracts from the data, and its implementation on the data is fast (analyzing a
10^6 cycle trajectory from a thirty-parameter mechanism takes a couple of hours
on a PC with a 2.66 GHz processor). The toolbox is automated and is freely
available for academic research upon electronic request.
| [
{
"created": "Mon, 11 Feb 2008 20:07:26 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Oct 2008 23:31:58 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Dec 2008 03:07:42 GMT",
"version": "v3"
}
] | 2010-08-16 | [
[
"Flomenbom",
"O.",
""
],
[
"Silbey",
"R. J.",
""
]
] | In many experiments, the aim is to deduce an underlying multi-substate on-off kinetic scheme (KS) from the statistical properties of a two-state trajectory. However, the mapping of a KS into a two-state trajectory leads to the loss of information about the KS, and so, in many cases, more than one KS can be associated with the data. We recently showed that the optimal way to solve this problem is to use canonical forms of reduced dimensions (RD). RD forms are on-off networks with connections only between substates of different states, where the connections can have non-exponential waiting time probability density functions (WT-PDFs). In theory, only a single RD form can be associated with the data. To utilize RD forms in the analysis of the data, a RD form should be associated with the data. Here, we give a toolbox for building a RD form from a finite two-state trajectory. The methods in the toolbox are based on known statistical methods in data analysis, combined with statistical methods and numerical algorithms designed specifically for the current problem. Our toolbox is self-contained - it builds a mechanism based only on the information it extracts from the data, and its implementation on the data is fast (analyzing a 10^6 cycle trajectory from a thirty-parameter mechanism takes a couple of hours on a PC with a 2.66 GHz processor). The toolbox is automated and is freely available for academic research upon electronic request. |
2006.03034 | Saptarshi Chatterjee Mr. | Saptarshi Chatterjee and Apurba Sarkar and Mintu Karmakar and
Swarnajit Chatterjee and Raja Paul | SEIRD model to study the asymptomatic growth during COVID-19 pandemic in
India | null | Indian J Phys (2020) | 10.1007/s12648-020-01928-8 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the current perception, symptomatic, presymptomatic, and
asymptomatic infectious persons can infect the healthy population susceptible
to the SARS-Cov-2. More importantly, various reports indicate that the number
of asymptomatic cases can be several-fold higher than the reported symptomatic
cases. In this article, we take the reported cases in India and various states
within the country till September 1, as the specimen to understand the
progression of the COVID-19. Employing a modified SEIRD model, we predict the
spread of COVID-19 by the symptomatic as well as asymptomatic infectious
population. Considering reported infection primarily due to symptomatic we
compare the model predicted results with the available data to estimate the
dynamics of the asymptomatically infected population. Our data indicate that in
the absence of the asymptomatic infectious population, the number of
symptomatic cases would have been much less. Therefore, the current progress of
the symptomatic infection can be reduced by quarantining the asymptomatically
infectious population via extensive or random testing. This study is motivated
strictly towards academic pursuit; this theoretical investigation is not meant
for influencing policy decisions or public health practices.
| [
{
"created": "Thu, 4 Jun 2020 17:41:50 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Sep 2020 13:07:23 GMT",
"version": "v2"
}
] | 2020-11-24 | [
[
"Chatterjee",
"Saptarshi",
""
],
[
"Sarkar",
"Apurba",
""
],
[
"Karmakar",
"Mintu",
""
],
[
"Chatterjee",
"Swarnajit",
""
],
[
"Paul",
"Raja",
""
]
] | According to the current perception, symptomatic, presymptomatic, and asymptomatic infectious persons can infect the healthy population susceptible to the SARS-Cov-2. More importantly, various reports indicate that the number of asymptomatic cases can be several-fold higher than the reported symptomatic cases. In this article, we take the reported cases in India and various states within the country till September 1, as the specimen to understand the progression of the COVID-19. Employing a modified SEIRD model, we predict the spread of COVID-19 by the symptomatic as well as asymptomatic infectious population. Considering reported infection primarily due to symptomatic we compare the model predicted results with the available data to estimate the dynamics of the asymptomatically infected population. Our data indicate that in the absence of the asymptomatic infectious population, the number of symptomatic cases would have been much less. Therefore, the current progress of the symptomatic infection can be reduced by quarantining the asymptomatically infectious population via extensive or random testing. This study is motivated strictly towards academic pursuit; this theoretical investigation is not meant for influencing policy decisions or public health practices. |
1308.2885 | Boris Brimkov | Boris Brimkov and Valentin E. Brimkov | Geometric approach to string analysis: deviation from linearity and its
use for biosequence classification | 16 pages, 2 figures (the first with 2 subfigures and the second with
8 subfigures) | null | null | null | q-bio.QM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tools that effectively analyze and compare sequences are of great importance
in various areas of applied computational research, especially in the framework
of molecular biology. In the present paper, we introduce simple geometric
criteria based on the notion of string linearity and use them to compare DNA
sequences of various organisms, as well as to distinguish them from random
sequences. Our experiments reveal a significant difference between biosequences
and random sequences - the former having much higher deviation from linearity
than the latter - as well as a general trend of increasing deviation from
linearity between primitive and biologically complex organisms.
| [
{
"created": "Tue, 13 Aug 2013 14:58:12 GMT",
"version": "v1"
}
] | 2013-08-14 | [
[
"Brimkov",
"Boris",
""
],
[
"Brimkov",
"Valentin E.",
""
]
] | Tools that effectively analyze and compare sequences are of great importance in various areas of applied computational research, especially in the framework of molecular biology. In the present paper, we introduce simple geometric criteria based on the notion of string linearity and use them to compare DNA sequences of various organisms, as well as to distinguish them from random sequences. Our experiments reveal a significant difference between biosequences and random sequences - the former having much higher deviation from linearity than the latter - as well as a general trend of increasing deviation from linearity between primitive and biologically complex organisms. |
q-bio/0310013 | Daniela Russo Dr | Daniela Russo, Greg Hura, Teresa Head-Gordon | Hydration Water Dynamics and Instigation of Protein Structural
Relaxation | 2 pages, 2 figures, Communication | null | null | null | q-bio.BM | null | The molecular mechanism of the solvent motion that is required to instigate
the protein structural relaxation above a critical hydration level or
transition temperature has yet to be determined. In this work we use
quasi-elastic neutron scattering (QENS) and molecular dynamics simulation to
investigate hydration water dynamics near a greatly simplified protein surface.
We consider the hydration water dynamics near the completely deuterated
N-acetyl-leucine-methylamide (NALMA) solute, a hydrophobic amino acid side
chain attached to a polar blocked polypeptide backbone, as a function of
concentration between 0.5M-2.0M, under ambient conditions. In this
Communication, we focus our results of hydration dynamics near a model protein
surface on the issue of how enzymatic activity is restored once a critical
hydration level is reached, and provide a hypothesis for the molecular
mechanism of the solvent motion that is required to trigger protein structural
relaxation when above the hydration transition.
| [
{
"created": "Sat, 11 Oct 2003 00:55:24 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Russo",
"Daniela",
""
],
[
"Hura",
"Greg",
""
],
[
"Head-Gordon",
"Teresa",
""
]
] | The molecular mechanism of the solvent motion that is required to instigate the protein structural relaxation above a critical hydration level or transition temperature has yet to be determined. In this work we use quasi-elastic neutron scattering (QENS) and molecular dynamics simulation to investigate hydration water dynamics near a greatly simplified protein surface. We consider the hydration water dynamics near the completely deuterated N-acetyl-leucine-methylamide (NALMA) solute, a hydrophobic amino acid side chain attached to a polar blocked polypeptide backbone, as a function of concentration between 0.5M-2.0M, under ambient conditions. In this Communication, we focus our results of hydration dynamics near a model protein surface on the issue of how enzymatic activity is restored once a critical hydration level is reached, and provide a hypothesis for the molecular mechanism of the solvent motion that is required to trigger protein structural relaxation when above the hydration transition. |
2402.09209 | Lucas Hedstr\"om | Lucas Hedstr\"om, Ralf Metzler, Ludvig Lizana | A general mechanism for enhancer-insulator pairing reveals heterogeneous
dynamics in long-distant 3D gene regulation | null | null | null | null | q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Cells regulate fates and complex body plans using spatiotemporal signaling
cascades that alter gene expression. Enhancers, short DNA sequences (50-150
base pairs), help coordinate these cascades by attracting regulatory proteins
to enhance the transcription of distal genes by binding to promoters. In
humans, there are hundreds of thousands of enhancers dispersed across the
genome, which poses a challenging coordination task to prevent unintended gene
activation. To mitigate this problem, the genome contains additional DNA
elements, insulators, that block enhancer-promoter interactions. However, there
is an open problem with how the insulation works, especially as
enhancer-insulator pairs may be separated by millions of base pairs. Based on
recent empirical data from Hi-C experiments, this paper proposes a new
mechanism that challenges the common paradigm that rests on specific
insulator-insulator interactions. Instead, this paper introduces a stochastic
looping model where enhancers bind weakly to surrounding chromatin. After
calibrating the model to experimental data, we use simulations to study the
broad distribution of hitting times between an enhancer and a promoter when
there are blocking insulators. In some cases, there is a large difference
between average and most probable hitting times, making it difficult to assign
a typical time scale, hinting at highly defocused regulation times. We also map
our computational model onto a resetting problem that allows us to derive
several analytical results. Besides offering new insights into
enhancer-insulator interactions, our paper advances the understanding of gene
regulatory networks and causal connections between genome folding and gene
activation.
| [
{
"created": "Wed, 14 Feb 2024 14:43:59 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Hedström",
"Lucas",
""
],
[
"Metzler",
"Ralf",
""
],
[
"Lizana",
"Ludvig",
""
]
] | Cells regulate fates and complex body plans using spatiotemporal signaling cascades that alter gene expression. Enhancers, short DNA sequences (50-150 base pairs), help coordinate these cascades by attracting regulatory proteins to enhance the transcription of distal genes by binding to promoters. In humans, there are hundreds of thousands of enhancers dispersed across the genome, which poses a challenging coordination task to prevent unintended gene activation. To mitigate this problem, the genome contains additional DNA elements, insulators, that block enhancer-promoter interactions. However, there is an open problem with how the insulation works, especially as enhancer-insulator pairs may be separated by millions of base pairs. Based on recent empirical data from Hi-C experiments, this paper proposes a new mechanism that challenges the common paradigm that rests on specific insulator-insulator interactions. Instead, this paper introduces a stochastic looping model where enhancers bind weakly to surrounding chromatin. After calibrating the model to experimental data, we use simulations to study the broad distribution of hitting times between an enhancer and a promoter when there are blocking insulators. In some cases, there is a large difference between average and most probable hitting times, making it difficult to assign a typical time scale, hinting at highly defocused regulation times. We also map our computational model onto a resetting problem that allows us to derive several analytical results. Besides offering new insights into enhancer-insulator interactions, our paper advances the understanding of gene regulatory networks and causal connections between genome folding and gene activation. |
0804.3605 | Hilary Carteret | Hilary A. Carteret, Kelly John Rose and Stuart A. Kauffman | Maximum Power Efficiency and Criticality in Random Boolean Networks | 4 pages RevTeX, 1 figure in .eps format. Comments welcome, v2: minor
clarifications added, conclusions unchanged. v3: paper rewritten to clarify
it; conclusions unchanged | Phys. Rev. Lett., vol.101 (2008) 218702 | 10.1103/PhysRevLett.101.218702 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Boolean networks are models of disordered causal systems that can
occur in cells and the biosphere. These are open thermodynamic systems
exhibiting a flow of energy that is dissipated at a finite rate. Life does work
to acquire more energy, then uses the available energy it has gained to perform
more work. It is plausible that natural selection has optimized many biological
systems for power efficiency: useful power generated per unit fuel. In this
letter we begin to investigate these questions for random Boolean networks
using Landauer's erasure principle, which defines a minimum entropy cost for
bit erasure. We show that critical Boolean networks maximize available power
efficiency, which requires that the system have a finite displacement from
equilibrium. Our initial results may extend to more realistic models for cells
and ecosystems.
| [
{
"created": "Wed, 23 Apr 2008 16:59:23 GMT",
"version": "v1"
},
{
"created": "Thu, 1 May 2008 01:54:38 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Jul 2008 19:40:33 GMT",
"version": "v3"
},
{
"created": "Thu, 31 Jul 2008 19:32:12 GMT",
"version": "v4"
},
{
"created": "Fri, 3 Oct 2008 19:48:32 GMT",
"version": "v5"
}
] | 2010-09-16 | [
[
"Carteret",
"Hilary A.",
""
],
[
"Rose",
"Kelly John",
""
],
[
"Kauffman",
"Stuart A.",
""
]
] | Random Boolean networks are models of disordered causal systems that can occur in cells and the biosphere. These are open thermodynamic systems exhibiting a flow of energy that is dissipated at a finite rate. Life does work to acquire more energy, then uses the available energy it has gained to perform more work. It is plausible that natural selection has optimized many biological systems for power efficiency: useful power generated per unit fuel. In this letter we begin to investigate these questions for random Boolean networks using Landauer's erasure principle, which defines a minimum entropy cost for bit erasure. We show that critical Boolean networks maximize available power efficiency, which requires that the system have a finite displacement from equilibrium. Our initial results may extend to more realistic models for cells and ecosystems. |
1604.04145 | Paul Jenkins | Robert C. Griffiths, Paul A. Jenkins, and Sabin Lessard | A coalescent dual process for a Wright-Fisher diffusion with
recombination and its application to haplotype partitioning | This version corrects typographical errors in equations (25), (26),
(27), (B.3), (B.4). 39 pages, 3 figures | Theoretical Population Biology, 112: 126-138 (2016) | 10.1016/j.tpb.2016.08.007 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Duality plays an important role in population genetics. It can relate results
from forwards-in-time models of allele frequency evolution with those of
backwards-in-time genealogical models; a well known example is the duality
between the Wright-Fisher diffusion for genetic drift and its genealogical
counterpart, the coalescent. There have been a number of articles extending
this relationship to include other evolutionary processes such as mutation and
selection, but little has been explored for models also incorporating crossover
recombination. Here, we derive from first principles a new genealogical process
which is dual to a Wright-Fisher diffusion model of drift, mutation, and
recombination. Our approach is based on expressing a putative duality
relationship between two models via their infinitesimal generators, and then
seeking an appropriate test function to ensure the validity of the duality
equation. This approach is quite general, and we use it to find dualities for
several important variants, including both a discrete L-locus model of a gene
and a continuous model in which mutation and recombination events are scattered
along the gene according to continuous distributions. As an application of our
results, we derive a series expansion for the transition function of the
diffusion. Finally, we study in further detail the case in which mutation is
absent. Then the dual process describes the dispersal of ancestral genetic
material across the ancestors of a sample. The stationary distribution of this
process is of particular interest; we show how duality relates this
distribution to haplotype fixation probabilities. We develop an efficient
method for computing such probabilities in multilocus models.
| [
{
"created": "Thu, 14 Apr 2016 13:17:06 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Aug 2016 08:44:50 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jun 2019 14:27:26 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Aug 2019 09:05:10 GMT",
"version": "v4"
}
] | 2019-08-09 | [
[
"Griffiths",
"Robert C.",
""
],
[
"Jenkins",
"Paul A.",
""
],
[
"Lessard",
"Sabin",
""
]
] | Duality plays an important role in population genetics. It can relate results from forwards-in-time models of allele frequency evolution with those of backwards-in-time genealogical models; a well known example is the duality between the Wright-Fisher diffusion for genetic drift and its genealogical counterpart, the coalescent. There have been a number of articles extending this relationship to include other evolutionary processes such as mutation and selection, but little has been explored for models also incorporating crossover recombination. Here, we derive from first principles a new genealogical process which is dual to a Wright-Fisher diffusion model of drift, mutation, and recombination. Our approach is based on expressing a putative duality relationship between two models via their infinitesimal generators, and then seeking an appropriate test function to ensure the validity of the duality equation. This approach is quite general, and we use it to find dualities for several important variants, including both a discrete L-locus model of a gene and a continuous model in which mutation and recombination events are scattered along the gene according to continuous distributions. As an application of our results, we derive a series expansion for the transition function of the diffusion. Finally, we study in further detail the case in which mutation is absent. Then the dual process describes the dispersal of ancestral genetic material across the ancestors of a sample. The stationary distribution of this process is of particular interest; we show how duality relates this distribution to haplotype fixation probabilities. We develop an efficient method for computing such probabilities in multilocus models. |
1804.09844 | Yevgenia Kozorovitskiy | Manish Kumar, Sandeep Kishore, Jordan Nasenbeny, David McLean,
Yevgenia Kozorovitskiy | Integrated one- and two-photon scanned oblique plane illumination (SOPi)
microscopy for rapid volumetric imaging | 15 pages, 7 figures | 26 (10), 13027-13041 (2018) Optics Express | 10.1364/OE.26.013027 | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Versatile, sterically accessible imaging systems capable of in vivo rapid
volumetric functional and structural imaging deep in the brain continue to be a
limiting factor in neuroscience research. Towards overcoming this obstacle, we
present integrated one- and two-photon scanned oblique plane illumination
(SOPi) microscopy which uses a single front-facing microscope objective to
provide light-sheet scanning based rapid volumetric imaging capability at
subcellular resolution. Our planar scan-mirror based optimized light-sheet
architecture allows for non-distorted scanning of volume samples, simplifying
accurate reconstruction of the imaged volume. Integration of both one-photon
(1P) and two-photon (2P) light-sheet microscopy in the same system allows for
easy selection between rapid volumetric imaging and higher resolution imaging
in scattering media. Using SOPi, we demonstrate deep, large volume imaging
capability inside scattering mouse brain sections and rapid imaging speeds up
to 10 volumes per second in zebrafish larvae expressing genetically encoded
fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make
it adaptable for numerous imaging applications and broadly compatible with
orthogonal techniques for actuating or interrogating neuronal structure and
activity.
| [
{
"created": "Thu, 26 Apr 2018 00:46:29 GMT",
"version": "v1"
}
] | 2018-05-10 | [
[
"Kumar",
"Manish",
""
],
[
"Kishore",
"Sandeep",
""
],
[
"Nasenbeny",
"Jordan",
""
],
[
"McLean",
"David",
""
],
[
"Kozorovitskiy",
"Yevgenia",
""
]
] | Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access make it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity. |
1506.02083 | Min Xu | Min Xu | Automatic tracking of protein vesicles | Author's master thesis (University of Southern California, May 2009).
Adviser: Sergey Lototsky. ISBN: 9781109140439 | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advance of fluorescence imaging technologies, cell
biologists have recently become able to record the movement of protein
vesicles within a living cell. Automatic tracking of the movements of these
vesicles becomes key for qualitative analysis of the dynamics of these
vesicles. In this thesis, we formulate this tracking problem as a video object
tracking problem, and design a dynamic programming method for tracking a
single object. Our experiments on simulation data show that the method can
identify a track with high accuracy, and that it is robust to the choice of
tracking parameters and the presence of high levels of noise. We then extend
this method to tracking multiple objects using a track elimination strategy.
In multiple-object tracking, the above approach often fails to correctly
identify a track when two tracks cross. We solve this problem by incorporating
the Kalman filter into the dynamic programming framework. Our experiments on
simulated data show that the tracking accuracy is significantly improved.
| [
{
"created": "Fri, 5 Jun 2015 22:59:47 GMT",
"version": "v1"
}
] | 2015-06-09 | [
[
"Xu",
"Min",
""
]
] | With the advance of fluorescence imaging technologies, cell biologists have recently become able to record the movement of protein vesicles within a living cell. Automatic tracking of the movements of these vesicles becomes key for qualitative analysis of the dynamics of these vesicles. In this thesis, we formulate this tracking problem as a video object tracking problem, and design a dynamic programming method for tracking a single object. Our experiments on simulation data show that the method can identify a track with high accuracy, and that it is robust to the choice of tracking parameters and the presence of high levels of noise. We then extend this method to tracking multiple objects using a track elimination strategy. In multiple-object tracking, the above approach often fails to correctly identify a track when two tracks cross. We solve this problem by incorporating the Kalman filter into the dynamic programming framework. Our experiments on simulated data show that the tracking accuracy is significantly improved. |
1210.2563 | Davide Valenti | G. Denaro, D. Valenti, A. La Cognata, B. Spagnolo, A. Bonanno, G.
Basilone, S. Mazzola, S. Zgozi, S. Aronica, C. Brunet | Spatio-temporal behaviour of the deep chlorophyll maximum in
Mediterranean Sea: Development of a stochastic model for picophytoplankton
dynamics | To be published in Ecological Complexity | null | null | null | q-bio.PE physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, by using a stochastic reaction-diffusion-taxis model, we
analyze the picophytoplankton dynamics in the basin of the Mediterranean Sea,
characterized by poorly mixed waters. The model includes intraspecific
competition of picophytoplankton for light and nutrients. The multiplicative
noise sources present in the model account for random fluctuations of
environmental variables. Phytoplankton distributions obtained from the model
show a good agreement with experimental data sampled in two different sites of
the Sicily Channel. The results could be extended to analyze data collected in
different sites of the Mediterranean Sea and to devise predictive models for
phytoplankton dynamics in oligotrophic waters.
| [
{
"created": "Tue, 9 Oct 2012 11:14:23 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Oct 2012 17:15:24 GMT",
"version": "v2"
}
] | 2012-10-22 | [
[
"Denaro",
"G.",
""
],
[
"Valenti",
"D.",
""
],
[
"La Cognata",
"A.",
""
],
[
"Spagnolo",
"B.",
""
],
[
"Bonanno",
"A.",
""
],
[
"Basilone",
"G.",
""
],
[
"Mazzola",
"S.",
""
],
[
"Zgozi",
"S.",
""
],
[
"Aronica",
"S.",
""
],
[
"Brunet",
"C.",
""
]
] | In this paper, by using a stochastic reaction-diffusion-taxis model, we analyze the picophytoplankton dynamics in the basin of the Mediterranean Sea, characterized by poorly mixed waters. The model includes intraspecific competition of picophytoplankton for light and nutrients. The multiplicative noise sources present in the model account for random fluctuations of environmental variables. Phytoplankton distributions obtained from the model show a good agreement with experimental data sampled in two different sites of the Sicily Channel. The results could be extended to analyze data collected in different sites of the Mediterranean Sea and to devise predictive models for phytoplankton dynamics in oligotrophic waters. |
1402.1959 | Chun-Chung Chen | Hao Song, Chun-Chung Chen, Jyh-Jang Sun, Pik-Yin Lai, and C. K. Chan | Reconstruction of network structures from repeating spike patterns in
simulated bursting dynamics | 9 pages, 9 figures | Phys. Rev. E 90, 012703 (2014) | 10.1103/PhysRevE.90.012703 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Repeating patterns of spike sequences from a neuronal network have been
proposed to be useful in the reconstruction of the network topology.
Reverberations in a physiologically realistic model with various physical
connection topologies (from random to scale-free) have been simulated to study
the effectiveness of the pattern-matching method in the reconstruction of
network topology from network dynamics. Simulation results show that functional
networks reconstructed from repeating spike patterns can be quite different
from the original physical networks; even global properties, such as the degree
distribution, cannot always be recovered. However, the pattern-matching method
can be effective in identifying hubs in the network. Since the form of
reverberations is quite different for networks with and without hubs, the form
of reverberations together with the reconstruction by repeating spike patterns
might provide a reliable method to detect hubs in neuronal cultures.
| [
{
"created": "Sun, 9 Feb 2014 15:51:00 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Jul 2014 16:11:37 GMT",
"version": "v2"
}
] | 2014-07-15 | [
[
"Song",
"Hao",
""
],
[
"Chen",
"Chun-Chung",
""
],
[
"Sun",
"Jyh-Jang",
""
],
[
"Lai",
"Pik-Yin",
""
],
[
"Chan",
"C. K.",
""
]
] | Repeating patterns of spike sequences from a neuronal network have been proposed to be useful in the reconstruction of the network topology. Reverberations in a physiologically realistic model with various physical connection topologies (from random to scale-free) have been simulated to study the effectiveness of the pattern-matching method in the reconstruction of network topology from network dynamics. Simulation results show that functional networks reconstructed from repeating spike patterns can be quite different from the original physical networks; even global properties, such as the degree distribution, cannot always be recovered. However, the pattern-matching method can be effective in identifying hubs in the network. Since the form of reverberations is quite different for networks with and without hubs, the form of reverberations together with the reconstruction by repeating spike patterns might provide a reliable method to detect hubs in neuronal cultures. |
2008.13521 | Simon Childs | S. J. Childs | Could Deficiencies in South African Data Be the Explanation for Its
Early SARS-CoV-2 Peak? | 13 pages, 2 figures and 7 tables | null | null | null | q-bio.PE physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SARS-CoV-2 pandemic peaked very early in comparison to the thresholds
predicted by an analysis of prior lockdown regimes. The most convenient
explanation is that some external factor changed the value of the basic
reproduction number, $r_{\rm 0}$; and there certainly are arguments for this.
Other factors could, nonetheless, have played a role. This research attempts to
reconcile the observed peak with the thresholds predicted by lockdown regimes
similar to the one in force at the time. It contemplates the effect of two
different hypothetical errors in the data: the first is that the true level of
infection has been underestimated by a multiplicative factor, while the second
is that of an imperceptible, pre-existing, immune fraction of the population.
While it is shown that it certainly is possible to manufacture the perception
of an early peak as extreme as the one observed, solely by way of these two
phenomena, the values need to be fairly high. The phenomena would not, by any
measure, be insignificant. It also remains an inescapable fact that the early
peak in infections coincided with a fairly profound change in $r_{\rm 0}$, in
all the contemplated scenarios of data-deficiency.
| [
{
"created": "Mon, 24 Aug 2020 19:46:10 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Sep 2020 08:26:34 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Feb 2021 19:00:21 GMT",
"version": "v3"
}
] | 2021-02-23 | [
[
"Childs",
"S. J.",
""
]
] | The SARS-CoV-2 pandemic peaked very early in comparison to the thresholds predicted by an analysis of prior lockdown regimes. The most convenient explanation is that some external factor changed the value of the basic reproduction number, $r_{\rm 0}$; and there certainly are arguments for this. Other factors could, nonetheless, have played a role. This research attempts to reconcile the observed peak with the thresholds predicted by lockdown regimes similar to the one in force at the time. It contemplates the effect of two different hypothetical errors in the data: the first is that the true level of infection has been underestimated by a multiplicative factor, while the second is that of an imperceptible, pre-existing, immune fraction of the population. While it is shown that it certainly is possible to manufacture the perception of an early peak as extreme as the one observed, solely by way of these two phenomena, the values need to be fairly high. The phenomena would not, by any measure, be insignificant. It also remains an inescapable fact that the early peak in infections coincided with a fairly profound change in $r_{\rm 0}$, in all the contemplated scenarios of data-deficiency. |
2111.14283 | Tian Cai | Tian Cai, Li Xie, Muge Chen, Yang Liu, Di He, Shuo Zhang, Cameron
Mura, Philip E. Bourne and Lei Xie | Exploration of Dark Chemical Genomics Space via Portal Learning: Applied
to Targeting the Undruggable Genome and COVID-19 Anti-Infective
Polypharmacology | 18 pages, 6 figures | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Advances in biomedicine are largely fueled by exploring uncharted territories
of human biology. Machine learning can both enable and accelerate discovery,
but faces a fundamental hurdle when applied to unseen data with distributions
that differ from previously observed ones -- a common dilemma in scientific
inquiry. We have developed a new deep learning framework, called
Portal Learning, to explore dark chemical and biological space.
Three key, novel components of our approach include: (i) end-to-end, step-wise
transfer learning, in recognition of biology's sequence-structure-function
paradigm, (ii) out-of-cluster meta-learning, and (iii) stress model selection.
Portal Learning provides a practical solution to the out-of-distribution (OOD)
problem in statistical machine learning. Here, we have implemented Portal
Learning to predict chemical-protein interactions on a genome-wide scale.
Systematic studies demonstrate that Portal Learning can effectively assign
ligands to unexplored gene families (unknown functions), versus existing
state-of-the-art methods, thereby allowing us to target previously
"undruggable" proteins and design novel polypharmacological agents for
disrupting interactions between SARS-CoV-2 and human proteins. Portal Learning
is general-purpose and can be further applied to other areas of scientific
inquiry.
| [
{
"created": "Tue, 23 Nov 2021 19:23:59 GMT",
"version": "v1"
}
] | 2021-11-30 | [
[
"Cai",
"Tian",
""
],
[
"Xie",
"Li",
""
],
[
"Chen",
"Muge",
""
],
[
"Liu",
"Yang",
""
],
[
"He",
"Di",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Mura",
"Cameron",
""
],
[
"Bourne",
"Philip E.",
""
],
[
"Xie",
"Lei",
""
]
] | Advances in biomedicine are largely fueled by exploring uncharted territories of human biology. Machine learning can both enable and accelerate discovery, but faces a fundamental hurdle when applied to unseen data with distributions that differ from previously observed ones -- a common dilemma in scientific inquiry. We have developed a new deep learning framework, called Portal Learning, to explore dark chemical and biological space. Three key, novel components of our approach include: (i) end-to-end, step-wise transfer learning, in recognition of biology's sequence-structure-function paradigm, (ii) out-of-cluster meta-learning, and (iii) stress model selection. Portal Learning provides a practical solution to the out-of-distribution (OOD) problem in statistical machine learning. Here, we have implemented Portal Learning to predict chemical-protein interactions on a genome-wide scale. Systematic studies demonstrate that Portal Learning can effectively assign ligands to unexplored gene families (unknown functions), versus existing state-of-the-art methods, thereby allowing us to target previously "undruggable" proteins and design novel polypharmacological agents for disrupting interactions between SARS-CoV-2 and human proteins. Portal Learning is general-purpose and can be further applied to other areas of scientific inquiry. |
1705.00708 | Alexandra Koulouri | Alexandra Koulouri | Overcoming the ill-posedness through discretization in vector
tomography: Reconstruction of irrotational vector fields | Technical report | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector tomography methods intend to reconstruct and visualize vector fields
in restricted domains by measuring line integrals of projections of these
vector fields. Here, we deal with the reconstruction of irrotational vector
functions from boundary measurements. As with the majority of inverse problems,
vector field recovery is ill-posed in the continuous domain and therefore
further assumptions, measurements and constraints should be imposed for the
full vector field estimation. The reconstruction idea in the discrete domain
relies on solving a numerical system of linear equations which derives from the
approximation of the line integrals along lines which trace the bounded domain.
This work presents an extensive description of vector field recovery, the
fundamental assumptions, and the ill-conditioning of this inverse problem. More
importantly, we show that this inverse problem is regularized via the domain
discretization, i.e. we show that the recovery of an irrotational vector field
within a discrete grid employing a finite set of longitudinal line integrals,
leads to a consistent linear system which has bounded solution errors. We
elaborate on the estimation of the solution's error and we prove that this
relative error is finite and therefore a stable vector field reconstruction is
ensured. Such theoretical aspects are critical for future implementations of
vector tomography in practical applications like the inverse bioelectric field
problem. We validate our theoretical results by performing simulations that
reconstruct smooth irrotational fields based solely on a finite number of
boundary measurements and without the need of any additional or prior
information (e.g. transversal line integrals or source free assumption).
| [
{
"created": "Thu, 27 Apr 2017 20:00:36 GMT",
"version": "v1"
}
] | 2017-05-03 | [
[
"Koulouri",
"Alexandra",
""
]
] | Vector tomography methods intend to reconstruct and visualize vector fields in restricted domains by measuring line integrals of projections of these vector fields. Here, we deal with the reconstruction of irrotational vector functions from boundary measurements. As with the majority of inverse problems, vector field recovery is ill-posed in the continuous domain and therefore further assumptions, measurements and constraints should be imposed for the full vector field estimation. The reconstruction idea in the discrete domain relies on solving a numerical system of linear equations which derives from the approximation of the line integrals along lines which trace the bounded domain. This work presents an extensive description of vector field recovery, the fundamental assumptions, and the ill-conditioning of this inverse problem. More importantly, we show that this inverse problem is regularized via the domain discretization, i.e. we show that the recovery of an irrotational vector field within a discrete grid employing a finite set of longitudinal line integrals leads to a consistent linear system which has bounded solution errors. We elaborate on the estimation of the solution's error and we prove that this relative error is finite and therefore a stable vector field reconstruction is ensured. Such theoretical aspects are critical for future implementations of vector tomography in practical applications like the inverse bioelectric field problem. We validate our theoretical results by performing simulations that reconstruct smooth irrotational fields based solely on a finite number of boundary measurements and without the need of any additional or prior information (e.g. transversal line integrals or source free assumption). |
2205.02699 | Andrew Mugler | Soutick Saha, Hye-ran Moon, Bumsoo Han, Andrew Mugler | Detection of signaling mechanisms from cellular responses to multiple
cues | 16 pages, 11 figures | null | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell signaling networks are complex and often incompletely characterized,
making it difficult to obtain a comprehensive picture of the mechanisms they
encode. Mathematical modeling of these networks provides important clues, but
the models themselves are often complex, and it is not always clear how to
extract falsifiable predictions. Here we take an inverse approach, using
experimental data at the cell level to deduce the minimal signaling network.
We focus on cells' response to multiple cues, specifically on the surprising
case in which the response is antagonistic: the response to multiple cues is
weaker than the response to the individual cues. We systematically build
candidate signaling networks one node at a time, using the ubiquitous
ingredients of (i) up- or down-regulation, (ii) molecular conversion, or (iii)
reversible binding. In each case, our method reveals a minimal, interpretable
signaling mechanism that explains the antagonistic response. Our work provides
a systematic way to deduce molecular mechanisms from cell-level data.
| [
{
"created": "Thu, 5 May 2022 15:10:58 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 19:19:09 GMT",
"version": "v2"
}
] | 2022-11-07 | [
[
"Saha",
"Soutick",
""
],
[
"Moon",
"Hye-ran",
""
],
[
"Han",
"Bumsoo",
""
],
[
"Mugler",
"Andrew",
""
]
] | Cell signaling networks are complex and often incompletely characterized, making it difficult to obtain a comprehensive picture of the mechanisms they encode. Mathematical modeling of these networks provides important clues, but the models themselves are often complex, and it is not always clear how to extract falsifiable predictions. Here we take an inverse approach, using experimental data at the cell level to deduce the minimal signaling network. We focus on cells' response to multiple cues, specifically on the surprising case in which the response is antagonistic: the response to multiple cues is weaker than the response to the individual cues. We systematically build candidate signaling networks one node at a time, using the ubiquitous ingredients of (i) up- or down-regulation, (ii) molecular conversion, or (iii) reversible binding. In each case, our method reveals a minimal, interpretable signaling mechanism that explains the antagonistic response. Our work provides a systematic way to deduce molecular mechanisms from cell-level data. |
1212.1874 | Areejit Samal | Shalini Singh, Areejit Samal, Varun Giri, Sandeep Krishna, Nandula
Raghuram and Sanjay Jain | Flux-based classification of reactions reveals a functional bow-tie
organization of complex metabolic networks | 11 pages, 6 figures, 1 table | null | 10.1103/PhysRevE.87.052708 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unraveling the structure of complex biological networks and relating it to
their functional role is an important task in systems biology. Here we attempt
to characterize the functional organization of the large-scale metabolic
networks of three microorganisms. We apply flux balance analysis to study the
optimal growth states of these organisms in different environments. By
investigating the differential usage of reactions across flux patterns for
different environments, we observe a striking bimodal distribution in the
activity of reactions. Motivated by this, we propose a simple algorithm to
decompose the metabolic network into three sub-networks. It turns out that our
reaction classifier which is blind to the biochemical role of pathways leads to
three functionally relevant sub-networks that correspond to input, output and
intermediate parts of the metabolic network with distinct structural
characteristics. Our decomposition method unveils a functional bow-tie
organization of metabolic networks that is different from the bow-tie structure
determined by graph-theoretic methods that do not incorporate functionality.
| [
{
"created": "Sun, 9 Dec 2012 10:22:21 GMT",
"version": "v1"
}
] | 2015-06-12 | [
[
"Singh",
"Shalini",
""
],
[
"Samal",
"Areejit",
""
],
[
"Giri",
"Varun",
""
],
[
"Krishna",
"Sandeep",
""
],
[
"Raghuram",
"Nandula",
""
],
[
"Jain",
"Sanjay",
""
]
] | Unraveling the structure of complex biological networks and relating it to their functional role is an important task in systems biology. Here we attempt to characterize the functional organization of the large-scale metabolic networks of three microorganisms. We apply flux balance analysis to study the optimal growth states of these organisms in different environments. By investigating the differential usage of reactions across flux patterns for different environments, we observe a striking bimodal distribution in the activity of reactions. Motivated by this, we propose a simple algorithm to decompose the metabolic network into three sub-networks. It turns out that our reaction classifier which is blind to the biochemical role of pathways leads to three functionally relevant sub-networks that correspond to input, output and intermediate parts of the metabolic network with distinct structural characteristics. Our decomposition method unveils a functional bow-tie organization of metabolic networks that is different from the bow-tie structure determined by graph-theoretic methods that do not incorporate functionality. |
2110.06462 | Yihan Wu | Yihan Wu, Min Xia, Li Nie, Yangsong Zhang, Andong Fan | Simultaneously exploring multi-scale and asymmetric EEG features for
emotion recognition | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, emotion recognition based on electroencephalography (EEG)
has received growing interest in the brain-computer interaction (BCI) field.
Neuroscience research indicates that the left and right brain hemispheres
demonstrate activity differences under different emotional activities, which
could be an important principle for designing deep learning (DL) models for
emotion recognition. Besides, owing to the nonstationarity of EEG signals,
using convolution kernels of a single size may not sufficiently extract the
abundant features for EEG classification tasks. Based on these two angles, we
proposed a model termed Multi-Scales Bi-hemispheric Asymmetric Model (MSBAM)
based on convolutional neural network (CNN) structure. Evaluated on the public
DEAP and DREAMER datasets, MSBAM achieved over 99% accuracy for the two-class
classification of low-level and high-level states in each of four emotional
dimensions, i.e., arousal, valence, dominance and liking, respectively. This
study further demonstrated the promising potential to design the DL model from
the multi-scale characteristics of the EEG data and the neural mechanisms of
the emotion cognition.
| [
{
"created": "Wed, 13 Oct 2021 02:56:37 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Apr 2022 14:10:53 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Jul 2022 03:05:01 GMT",
"version": "v3"
}
] | 2022-07-12 | [
[
"Wu",
"Yihan",
""
],
[
"Xia",
"Min",
""
],
[
"Nie",
"Li",
""
],
[
"Zhang",
"Yangsong",
""
],
[
"Fan",
"Andong",
""
]
] | In recent years, emotion recognition based on electroencephalography (EEG) has received growing interest in the brain-computer interaction (BCI) field. Neuroscience research indicates that the left and right brain hemispheres demonstrate activity differences under different emotional activities, which could be an important principle for designing deep learning (DL) models for emotion recognition. In addition, owing to the nonstationarity of EEG signals, using convolution kernels of a single size may not sufficiently extract the abundant features for EEG classification tasks. Based on these two considerations, we propose a model termed Multi-Scales Bi-hemispheric Asymmetric Model (MSBAM), based on a convolutional neural network (CNN) structure. Evaluated on the public DEAP and DREAMER datasets, MSBAM achieved over 99% accuracy for the two-class classification of low-level and high-level states in each of four emotional dimensions, i.e., arousal, valence, dominance and liking. This study further demonstrates the promising potential of designing DL models based on the multi-scale characteristics of EEG data and the neural mechanisms of emotion cognition. |
2408.07618 | Thomas Williams | Thomas Williams, James M. McCaw, James M. Osborne | Accounting for the geometry of the lung in respiratory viral infections | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Increasingly, mathematical models of viral infections have come to recognise
the important role of spatial structure in infection dynamics. Almost
invariably, spatial models of viral infections make use of a wide, flat
computational domain which is assumed to be representative of the entire
affected tissue. Implicit in this assumption is that either the tissue being
modelled is largely wide and homogeneous, or that the topology of the tissue
has little influence on the dynamics of the system. This assumption fails to
take into account the distinctive geometry of the lung. The lung is
characterised by a tubular, highly branching structure, and moreover is
spatially heterogeneous: deeper regions of the lung are composed of far
narrower airways and are associated with more severe infection. Here, we extend
a typical multicellular model of viral dynamics to account for two essential
features of the geometry of the lung: the tubular structure of airways, and the
branching process between airway generations. We show that, with this more
realistic tissue geometry, the dynamics of infection are substantially changed
compared to the standard approach, and that the resulting model is equipped to
tackle important biological phenomena that are not well-addressed with existing
models, including viral lineage dynamics in the lung, and heterogeneity in
immune responses to infection in different regions of the respiratory tree.
| [
{
"created": "Wed, 7 Aug 2024 00:45:59 GMT",
"version": "v1"
}
] | 2024-08-15 | [
[
"Williams",
"Thomas",
""
],
[
"McCaw",
"James M.",
""
],
[
"Osborne",
"James M.",
""
]
] | Increasingly, mathematical models of viral infections have come to recognise the important role of spatial structure in infection dynamics. Almost invariably, spatial models of viral infections make use of a wide, flat computational domain which is assumed to be representative of the entire affected tissue. Implicit in this assumption is that either the tissue being modelled is largely wide and homogeneous, or that the topology of the tissue has little influence on the dynamics of the system. This assumption fails to take into account the distinctive geometry of the lung. The lung is characterised by a tubular, highly branching structure, and moreover is spatially heterogeneous: deeper regions of the lung are composed of far narrower airways and are associated with more severe infection. Here, we extend a typical multicellular model of viral dynamics to account for two essential features of the geometry of the lung: the tubular structure of airways, and the branching process between airway generations. We show that, with this more realistic tissue geometry, the dynamics of infection are substantially changed compared to the standard approach, and that the resulting model is equipped to tackle important biological phenomena that are not well-addressed with existing models, including viral lineage dynamics in the lung, and heterogeneity in immune responses to infection in different regions of the respiratory tree. |
2303.08530 | Marius Emar Yamakou | Marius E. Yamakou and Christian Kuehn | Combined effects of STDP and homeostatic structural plasticity on
coherence resonance | 15 pages, 5 figures, 86 references | null | 10.1103/PhysRevE.107.044302 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient processing and transfer of information in neurons have been linked
to noise-induced resonance phenomena such as coherence resonance (CR), and
adaptive rules in neural networks have been mostly linked to two prevalent
mechanisms: spike-timing-dependent plasticity (STDP) and homeostatic structural
plasticity (HSP). Thus, this paper investigates CR in small-world and random
adaptive networks of Hodgkin-Huxley neurons driven by STDP and HSP. Our
numerical study indicates that the degree of CR strongly depends, and in
different ways, on the adjusting rate parameter $P$, which controls STDP, on
the characteristic rewiring frequency parameter $F$, which controls HSP, and on
the parameters of the network topology. In particular, we found two robust
behaviors: (i) Decreasing $P$ (which enhances the weakening effect of STDP on
synaptic weights) and decreasing $F$ (which slows down the swapping rate of
synapses between neurons) always leads to higher degrees of CR in small-world
and random networks, provided that the synaptic time delay parameter $\tau_c$
has some appropriate values. (ii) Increasing the synaptic time delay $\tau_c$
induces multiple CR (MCR) -- the occurrence of multiple peaks in the degree of
coherence as $\tau_c$ changes -- in small-world and random networks, with MCR
becoming more pronounced at smaller values of $P$ and $F$. Our results imply
that STDP and HSP can jointly play an essential role in enhancing the time
precision of firing necessary for optimal information processing and transfer
in neural systems and could thus have applications in designing networks of
noisy artificial neural circuits engineered to use CR to optimize information
processing and transfer.
| [
{
"created": "Wed, 15 Mar 2023 11:23:44 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2023 17:05:10 GMT",
"version": "v2"
}
] | 2023-04-26 | [
[
"Yamakou",
"Marius E.",
""
],
[
"Kuehn",
"Christian",
""
]
] | Efficient processing and transfer of information in neurons have been linked to noise-induced resonance phenomena such as coherence resonance (CR), and adaptive rules in neural networks have been mostly linked to two prevalent mechanisms: spike-timing-dependent plasticity (STDP) and homeostatic structural plasticity (HSP). Thus, this paper investigates CR in small-world and random adaptive networks of Hodgkin-Huxley neurons driven by STDP and HSP. Our numerical study indicates that the degree of CR strongly depends, and in different ways, on the adjusting rate parameter $P$, which controls STDP, on the characteristic rewiring frequency parameter $F$, which controls HSP, and on the parameters of the network topology. In particular, we found two robust behaviors: (i) Decreasing $P$ (which enhances the weakening effect of STDP on synaptic weights) and decreasing $F$ (which slows down the swapping rate of synapses between neurons) always leads to higher degrees of CR in small-world and random networks, provided that the synaptic time delay parameter $\tau_c$ has some appropriate values. (ii) Increasing the synaptic time delay $\tau_c$ induces multiple CR (MCR) -- the occurrence of multiple peaks in the degree of coherence as $\tau_c$ changes -- in small-world and random networks, with MCR becoming more pronounced at smaller values of $P$ and $F$. Our results imply that STDP and HSP can jointly play an essential role in enhancing the time precision of firing necessary for optimal information processing and transfer in neural systems and could thus have applications in designing networks of noisy artificial neural circuits engineered to use CR to optimize information processing and transfer. |
2006.12909 | Domenic Germano | Domenic P.J. Germano and James M. Osborne | A mathematical model of cell fate selection on a dynamic tissue | null | null | null | null | q-bio.TO q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multicellular tissues are the building blocks of many biological systems and
organs. These tissues are not static, but dynamically change over time. Even if
the overall structure remains the same there is a turnover of cells within the
tissue. This dynamic homeostasis is maintained by numerous governing mechanisms
which are finely tuned in such a way that the tissue remains in a homeostatic
state, even across large timescales. Some of these governing mechanisms include
cell motion, and cell fate selection through intercellular signalling.
However, it is not yet clear how to link these two processes, or how they may
affect one another across the tissue. In this paper, we present a
multicellular, multiscale model, which brings together the two phenomena of
cell motility and intercellular signalling, to describe cell fate selection
on a dynamic tissue. We find that the affinity for cellular signalling to occur
greatly influences a cell's ability to differentiate. We also find that our
results support claims that cell differentiation is a finely tuned process
within dynamic tissues at homeostasis, with excessive cell turnover rates
leading to unhealthy (undifferentiated and unpatterned) tissues.
| [
{
"created": "Tue, 23 Jun 2020 11:30:35 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Feb 2021 04:34:44 GMT",
"version": "v2"
}
] | 2021-02-12 | [
[
"Germano",
"Domenic P. J.",
""
],
[
"Osborne",
"James M.",
""
]
] | Multicellular tissues are the building blocks of many biological systems and organs. These tissues are not static, but dynamically change over time. Even if the overall structure remains the same there is a turnover of cells within the tissue. This dynamic homeostasis is maintained by numerous governing mechanisms which are finely tuned in such a way that the tissue remains in a homeostatic state, even across large timescales. Some of these governing mechanisms include cell motion, and cell fate selection through intercellular signalling. However, it is not yet clear how to link these two processes, or how they may affect one another across the tissue. In this paper, we present a multicellular, multiscale model, which brings together the two phenomena of cell motility and intercellular signalling, to describe cell fate selection on a dynamic tissue. We find that the affinity for cellular signalling to occur greatly influences a cell's ability to differentiate. We also find that our results support claims that cell differentiation is a finely tuned process within dynamic tissues at homeostasis, with excessive cell turnover rates leading to unhealthy (undifferentiated and unpatterned) tissues. |
1904.05834 | Jean-Jacques De Groote Dr. | D M L Barbato, J M A De Andrade, and J J De Groote | Identification of similarity amongst eucalyptus planted areas based on
leaf-cutting ant nest sizes | 17 pages 4 figures; Added a reference | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Techniques for leaf-cutting ant control have been investigated in literature
due to the importance of the damage they cause to agriculture. The
effectiveness of different forms of control is explored in researches aimed at
identifying the balance between pest control efficiency and environmental
damage caused by the treatments applied. Plantations with large territorial
extensions, which can be contiguous or not, are usually subdivided into local
administration that collects data to determine the frequencies of areas size
occupied by ant nests. The purpose of this work is to build a relationship of
similarities among different geographical regions using the frequency data of
nests size occurrence by applying Information Bottleneck (IB) method and
Principal Component Analysis (PCA). IB allows simultaneous clustering of each
region with the ant nest size distribution, while PCA is used to reduce the
variable dimensionalities into a three-dimensional representation of the
results. The approach was applied to data of leaf-cutting ants Atta spp.
(Hymenoptera: Formicidae) in cultivated Eucalyptus spp. forests in Sao Paulo,
Brazil. The results suggest that the information acquired by the method can help
coordinate pest management, such as the allocation of baits, material and
personnel.
| [
{
"created": "Tue, 9 Apr 2019 20:42:08 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2019 18:38:55 GMT",
"version": "v2"
}
] | 2019-04-18 | [
[
"Barbato",
"D M L",
""
],
[
"De Andrade",
"J M A",
""
],
[
"De Groote",
"J J",
""
]
] | Techniques for leaf-cutting ant control have been investigated in the literature due to the importance of the damage these ants cause to agriculture. The effectiveness of different forms of control is explored in research aimed at identifying the balance between pest control efficiency and the environmental damage caused by the treatments applied. Plantations with large territorial extensions, which can be contiguous or not, are usually subdivided into local administrative units that collect data to determine the frequencies of area sizes occupied by ant nests. The purpose of this work is to build a relationship of similarities among different geographical regions using the frequency data of nest-size occurrence by applying the Information Bottleneck (IB) method and Principal Component Analysis (PCA). IB allows simultaneous clustering of each region with the ant nest size distribution, while PCA is used to reduce the variable dimensionalities into a three-dimensional representation of the results. The approach was applied to data of leaf-cutting ants Atta spp. (Hymenoptera: Formicidae) in cultivated Eucalyptus spp. forests in Sao Paulo, Brazil. The results suggest that the information acquired by the method can help coordinate pest management, such as the allocation of baits, material and personnel. |
1410.1029 | Laurence Aitchison | Laurence Aitchison, Jannes Jegminat, Jorge Aurelio Menendez,
Jean-Pascal Pfister, Alex Pouget and Peter E. Latham | Synaptic plasticity as Bayesian inference | Published in Nature Neuroscience:
https://www.nature.com/articles/s41593-021-00809-5 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning, especially rapid learning, is critical for survival. However,
learning is hard: a large number of synaptic weights must be set based on
noisy, often ambiguous, sensory information. In such a high-noise regime,
keeping track of probability distributions over weights is the optimal
strategy. Here we hypothesize that synapses take that strategy; in essence,
when they estimate weights, they include error bars. They then use that
uncertainty to adjust their learning rates, with more uncertain weights having
higher learning rates. We also make a second, independent, hypothesis: synapses
communicate their uncertainty by linking it to variability in PSP size, with
more uncertainty leading to more variability. These two hypotheses cast
synaptic plasticity as a problem of Bayesian inference, and thus provide a
normative view of learning. They generalize known learning rules, offer an
explanation for the large variability in the size of post-synaptic potentials,
and make falsifiable experimental predictions.
| [
{
"created": "Sat, 4 Oct 2014 09:13:22 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Oct 2014 08:03:22 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Apr 2017 10:51:39 GMT",
"version": "v3"
},
{
"created": "Fri, 19 Mar 2021 11:44:11 GMT",
"version": "v4"
}
] | 2021-03-22 | [
[
"Aitchison",
"Laurence",
""
],
[
"Jegminat",
"Jannes",
""
],
[
"Menendez",
"Jorge Aurelio",
""
],
[
"Pfister",
"Jean-Pascal",
""
],
[
"Pouget",
"Alex",
""
],
[
"Latham",
"Peter E.",
""
]
] | Learning, especially rapid learning, is critical for survival. However, learning is hard: a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in PSP size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of post-synaptic potentials, and make falsifiable experimental predictions. |
2004.00746 | Miguel Ramos-Pascual | Miguel Ramos Pascual | Coronavirus SARS-CoV-2: Analysis of subgenomic mRNA transcription,
3CLpro and PL2pro protease cleavage sites and protein synthesis | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronaviruses have recently caused severe world-wide outbreaks: SARS (Severe
Acute Respiratory Syndrome) in 2002 and MERS (Middle-East Respiratory Syndrome)
in 2012. At the end of 2019, a new coronavirus outbreak appeared in the Wuhan
(China) seafood market as the first focus of infection, becoming a pandemic in
2020 and spreading mainly into Europe and Asia. Although the virus family is
well-known, this specific virus presents considerable differences, such as
higher transmission rates, posing a challenge for diagnostic methods,
treatments and vaccines. Coronavirus(C++).pro is a C++ application which
simulates the Coronavirus replication cycle. This software has identified the
virus type in a short time and provided FASTA files of virus proteins, a list
of mRNA sequences and secondary structures. Furthermore, the software has
identified a list of structural, non-structural and accessory proteins in the
2019-nCoV virus genome more similar to SARS than to MERS, as well as several
fusion proteins characteristic of this virus type. These results are useful as
a first step towards developing diagnostic methods, new vaccines or antiviral
drugs, which could block virus replication at any stage: fusion inhibitors,
RdRp inhibitors and PL2pro/3CLpro protease
inhibitors.
| [
{
"created": "Thu, 2 Apr 2020 00:07:19 GMT",
"version": "v1"
}
] | 2020-04-03 | [
[
"Pascual",
"Miguel Ramos",
""
]
] | Coronaviruses have recently caused severe world-wide outbreaks: SARS (Severe Acute Respiratory Syndrome) in 2002 and MERS (Middle-East Respiratory Syndrome) in 2012. At the end of 2019, a new coronavirus outbreak appeared in the Wuhan (China) seafood market as the first focus of infection, becoming a pandemic in 2020 and spreading mainly into Europe and Asia. Although the virus family is well-known, this specific virus presents considerable differences, such as higher transmission rates, posing a challenge for diagnostic methods, treatments and vaccines. Coronavirus(C++).pro is a C++ application which simulates the Coronavirus replication cycle. This software has identified the virus type in a short time and provided FASTA files of virus proteins, a list of mRNA sequences and secondary structures. Furthermore, the software has identified a list of structural, non-structural and accessory proteins in the 2019-nCoV virus genome more similar to SARS than to MERS, as well as several fusion proteins characteristic of this virus type. These results are useful as a first step towards developing diagnostic methods, new vaccines or antiviral drugs, which could block virus replication at any stage: fusion inhibitors, RdRp inhibitors and PL2pro/3CLpro protease inhibitors. |
2103.00256 | Birgitta Dresp-Langley | Birgitta Dresp-Langley, John M. Wandeto | Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network
Map | null | Symmetry. 2021; 13(2):299 | 10.3390/sym13020299 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry in biological and physical systems is a product of self organization
driven by evolutionary processes, or mechanical systems under constraints.
Symmetry-based feature extraction or representation by neural networks may
unravel the most informative contents in large image databases. Despite
significant achievements of artificial intelligence in recognition and
classification of regular patterns, the problem of uncertainty remains a major
challenge in ambiguous data. In this study, we present an artificial neural
network that detects symmetry uncertainty states in human observers. To this
end, we exploit a neural network metric in the output of a biologically
inspired Self Organizing Map, the Quantization Error (SOM QE). Shape pairs with
perfect geometric mirror symmetry but a non-homogeneous appearance, caused by
local variations in hue, saturation, or lightness within or across the shapes
in a given pair, produce, as shown here, longer choice RT for yes responses
relative to symmetry. These data are consistently mirrored by the variations in
the SOM QE from unsupervised neural network analysis of the same stimulus
images. The neural network metric is thus capable of detecting and scaling
human symmetry uncertainty in response to patterns. Such capacity is tightly
linked to the metric's proven selectivity to local contrast and color variations
in large and highly complex image data.
| [
{
"created": "Sat, 27 Feb 2021 15:55:01 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Dresp-Langley",
"Birgitta",
""
],
[
"Wandeto",
"John M.",
""
]
] | Symmetry in biological and physical systems is a product of self organization driven by evolutionary processes, or mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit a neural network metric in the output of a biologically inspired Self Organizing Map, the Quantization Error (SOM QE). Shape pairs with perfect geometric mirror symmetry but a non-homogeneous appearance, caused by local variations in hue, saturation, or lightness within or across the shapes in a given pair, produce, as shown here, longer choice RT for yes responses relative to symmetry. These data are consistently mirrored by the variations in the SOM QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. Such capacity is tightly linked to the metric's proven selectivity to local contrast and color variations in large and highly complex image data. |
1701.02588 | Nitish George | Nitish Manu George | Synthesis of Methotrexate loaded Cerium fluoride nanoparticles with pH
sensitive extended release coupled with Hyaluronic acid receptor with
plausible theranostic capabilities for preclinical safety studies | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A key challenge in drug delivery systems is the real time monitoring of
delivered drug and subsequent response. Recent advancement in nanotechnology
has enabled the design and preclinical implementation of novel drug delivery
systems (DDS) with theranostic abilities. Herein, fluorescent cerium fluoride
(CeF3) nanoparticles (nps) were synthesized and their surface modified with a
coat of polyethylenimine (PEI). Thereafter, Methotrexate was conjugated upon it
through glutaraldehyde crosslinking for a pH-sensitive release. This was
followed by the addition of a Hyaluronic acid (HA) receptor via
1-Ethyl-3-(3-dimethylaminopropyl)-carbodiimide and N-hydroxysuccinimide
(EDC-NHS) chemistry to achieve a possible active drug targeting system. The
obtained drug delivery nano-agent retains and exhibits unique photo-luminescent
properties attributed to the nps while exhibiting potential theranostic
capabilities.
| [
{
"created": "Fri, 30 Dec 2016 07:19:06 GMT",
"version": "v1"
}
] | 2017-01-11 | [
[
"George",
"Nitish Manu",
""
]
] | A key challenge in drug delivery systems is the real time monitoring of delivered drug and subsequent response. Recent advancement in nanotechnology has enabled the design and preclinical implementation of novel drug delivery systems (DDS) with theranostic abilities. Herein, fluorescent cerium fluoride (CeF3) nanoparticles (nps) were synthesized and their surface modified with a coat of polyethylenimine (PEI). Thereafter, Methotrexate was conjugated upon it through glutaraldehyde crosslinking for a pH-sensitive release. This was followed by the addition of a Hyaluronic acid (HA) receptor via 1-Ethyl-3-(3-dimethylaminopropyl)-carbodiimide and N-hydroxysuccinimide (EDC-NHS) chemistry to achieve a possible active drug targeting system. The obtained drug delivery nano-agent retains and exhibits unique photo-luminescent properties attributed to the nps while exhibiting potential theranostic capabilities. |
2004.13119 | Pilar Cossio Dr. | Rodrigo Ochoa, Miguel A. Soler, Alessandro Laio and Pilar Cossio | PARCE: Protocol for Amino acid Refinement through Computational
Evolution | null | null | 10.1016/j.cpc.2020.107716 | null | q-bio.BM physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The in silico design of peptides and proteins as binders is useful for
diagnosis and therapeutics due to their low adverse effects and high
specificity. To select the most promising candidates, a key step is to
understand their interactions with protein targets. In this work, we present
PARCE, an open source Protocol for Amino acid Refinement through Computational
Evolution that implements an advanced and promising method for the design of
peptides and proteins. The protocol performs a random mutation in the binder
sequence, then samples the bound conformations using molecular dynamics
simulations, and evaluates the protein-protein interactions using multiple
scoring functions. Finally, it accepts or rejects the mutation by applying a
consensus criterion based on the binding scores. The procedure is iterated
with the aim of efficiently exploring novel sequences with potentially better
affinities toward their targets. We also provide a tutorial for running and reproducing the
methodology.
| [
{
"created": "Mon, 27 Apr 2020 19:35:54 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Ochoa",
"Rodrigo",
""
],
[
"Soler",
"Miguel A.",
""
],
[
"Laio",
"Alessandro",
""
],
[
"Cossio",
"Pilar",
""
]
] | The in silico design of peptides and proteins as binders is useful for diagnosis and therapeutics due to their low adverse effects and high specificity. To select the most promising candidates, a key step is to understand their interactions with protein targets. In this work, we present PARCE, an open source Protocol for Amino acid Refinement through Computational Evolution that implements an advanced and promising method for the design of peptides and proteins. The protocol performs a random mutation in the binder sequence, then samples the bound conformations using molecular dynamics simulations, and evaluates the protein-protein interactions using multiple scoring functions. Finally, it accepts or rejects the mutation by applying a consensus criterion based on the binding scores. The procedure is iterated with the aim of efficiently exploring novel sequences with potentially better affinities toward their targets. We also provide a tutorial for running and reproducing the methodology. |
0904.0685 | Thierry Rabilloud | Thierry Rabilloud (BBSI) | Detergents and chaotropes for protein solubilization before
two-dimensional electrophoresis | null | Methods in molecular biology (Clifton, N.J.) 528 (2009) 259-67 | 10.1007/978-1-60327-310-7_18 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Because of the outstanding ability of two-dimensional electrophoresis to
separate complex mixtures of intact proteins, it would be advantageous to apply
it to all types of proteins, including hydrophobic and membrane proteins.
Unfortunately, poor solubility hampers the analysis of these molecules. As
these problems arise mainly in the extraction and isoelectric focusing steps,
the solution is to improve protein solubility under the conditions prevailing
during isoelectric focusing. This chapter describes the use of chaotropes and
novel detergents to enhance protein solubility during sample extraction and
isoelectric focussing, and discusses the contribution of these compounds to
improving proteomic analysis of membrane proteins.
| [
{
"created": "Sat, 4 Apr 2009 05:34:22 GMT",
"version": "v1"
}
] | 2009-04-07 | [
[
"Rabilloud",
"Thierry",
"",
"BBSI"
]
] | Because of the outstanding ability of two-dimensional electrophoresis to separate complex mixtures of intact proteins, it would be advantageous to apply it to all types of proteins, including hydrophobic and membrane proteins. Unfortunately, poor solubility hampers the analysis of these molecules. As these problems arise mainly in the extraction and isoelectric focusing steps, the solution is to improve protein solubility under the conditions prevailing during isoelectric focusing. This chapter describes the use of chaotropes and novel detergents to enhance protein solubility during sample extraction and isoelectric focussing, and discusses the contribution of these compounds to improving proteomic analysis of membrane proteins. |
1708.01888 | Ildefons Magrans de Abril | Ildefons Magrans de Abril, Junichiro Yoshimoto and Kenji Doya | Connectivity Inference from Neural Recording Data: Challenges,
Mathematical Bases and Research Directions | 52 pages, 2 figures, 3 tables, survey paper under review in Neural
Networks Journal - Elsevier | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents a review of computational methods for connectivity
inference from neural activity data derived from multi-electrode recordings or
fluorescence imaging. We first identify biophysical and technical challenges in
connectivity inference along the data processing pipeline. We then review
connectivity inference methods based on two major mathematical foundations,
namely, descriptive model-free approaches and generative model-based
approaches. We investigate representative studies in both categories and
clarify which challenges have been addressed by which method. We further
identify critical open issues and possible research directions.
| [
{
"created": "Sun, 6 Aug 2017 13:19:57 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2017 04:16:27 GMT",
"version": "v2"
}
] | 2017-12-18 | [
[
"de Abril",
"Ildefons Magrans",
""
],
[
"Yoshimoto",
"Junichiro",
""
],
[
"Doya",
"Kenji",
""
]
] | This article presents a review of computational methods for connectivity inference from neural activity data derived from multi-electrode recordings or fluorescence imaging. We first identify biophysical and technical challenges in connectivity inference along the data processing pipeline. We then review connectivity inference methods based on two major mathematical foundations, namely, descriptive model-free approaches and generative model-based approaches. We investigate representative studies in both categories and clarify which challenges have been addressed by which method. We further identify critical open issues and possible research directions. |
2402.02129 | Ali Benkherouf | Ali Y. Benkherouf | Nature's Brewery to Bedtime: The Role of Hops in GABAA Receptor
Modulation and Sleep Promotion | 189 pages, 41 figures, dissertation | Doctoral Dissertation University of Turku. ISBN: 978-951-29-9577-6
(2023) | null | Annales-D-1768 | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Insomnia often requires pharmacological interventions, with benzodiazepines
and Z-drugs enhancing GABA's inhibitory effects by stabilizing GABAA receptor
chloride ion channels. Prolonged use, however, raises dependency and cognitive
concerns. Humulus lupulus (hops) is gaining attention as a natural relaxant and
sleep aid, potentially modulating GABAA receptors differently. This study
explores hops' neuroactive phytochemicals and their therapeutic mechanisms. The
alpha-acid humulone and hop prenylflavonoids affect GABA-induced displacement
of [3H]EBOB in the GABAA receptor, showing flumazenil-insensitive and
subtype-selective effects. Molecular docking identifies binding sites, with
humulone's activity confirmed electrophysiologically and in mouse studies,
impacting sleep onset and duration. These findings suggest hops as positive
modulators of GABAA receptors, offering insights for sleep aid optimization.
| [
{
"created": "Sat, 3 Feb 2024 12:06:30 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Benkherouf",
"Ali Y.",
""
]
] | Insomnia often requires pharmacological interventions, with benzodiazepines and Z-drugs enhancing GABA's inhibitory effects by stabilizing GABAA receptor chloride ion channels. Prolonged use, however, raises dependency and cognitive concerns. Humulus lupulus (hops) is gaining attention as a natural relaxant and sleep aid, potentially modulating GABAA receptors differently. This study explores hops' neuroactive phytochemicals and their therapeutic mechanisms. The alpha-acid humulone and hop prenylflavonoids affect GABA-induced displacement of [3H]EBOB in the GABAA receptor, showing flumazenil-insensitive and subtype-selective effects. Molecular docking identifies binding sites, with humulone's activity confirmed electrophysiologically and in mouse studies, impacting sleep onset and duration. These findings suggest hops as positive modulators of GABAA receptors, offering insights for sleep aid optimization. |
2401.07568 | Hans Colonius | Hans Colonius, Adele Diederich | Measuring multisensory integration in reaction time: the relative
entropy approach | 9 pages, 1 figure | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A classic definition of multisensory integration (MI) has been proposed as
``the presence of a (statistically) significant change in the response to a
cross-modal stimulus complex compared to unimodal stimuli''. However, this
general definition did not result in a broad consensus on how to quantify the
amount of MI in the context of reaction time (RT). In this brief note, we argue
that numeric measures of reaction times that only involve mean or median RTs do
not uncover the information required to fully assess the effect of multisensory
integration. We suggest instead novel measures that include the entire RT
distribution functions. The central role is played by relative entropy (aka
Kullback-Leibler divergence), a statistical concept in information theory,
statistics, and machine learning to measure the (non-symmetric) distance
between probability distributions. We provide a number of theoretical examples,
but empirical applications and statistical testing are postponed to later
study.
| [
{
"created": "Mon, 15 Jan 2024 10:03:17 GMT",
"version": "v1"
}
] | 2024-01-17 | [
[
"Colonius",
"Hans",
""
],
[
"Diederich",
"Adele",
""
]
] | A classic definition of multisensory integration (MI) has been proposed as ``the presence of a (statistically) significant change in the response to a cross-modal stimulus complex compared to unimodal stimuli''. However, this general definition did not result in a broad consensus on how to quantify the amount of MI in the context of reaction time (RT). In this brief note, we argue that numeric measures of reaction times that only involve mean or median RTs do not uncover the information required to fully assess the effect of multisensory integration. We suggest instead novel measures that include the entire RT distribution functions. The central role is played by relative entropy (aka Kullback-Leibler divergence), a statistical concept in information theory, statistics, and machine learning to measure the (non-symmetric) distance between probability distributions. We provide a number of theoretical examples, but empirical applications and statistical testing are postponed to later study. |
0712.4216 | Jens Christian Claussen | Jens Christian Claussen (University Kiel, Germany) | Offdiagonal complexity: A computationally quick network complexity
measure. Application to protein networks and cell division | 9 pages, extends Physica A 375, 365-373 (2007)
http://dx.doi.org/10.1016/j.physa.2006.08.067 by FullOdC and application to
an evolving spatial network | Mathematical Modeling of Biological Systems II. Ed. A.Deutsch et
al., Birkhaeuser Boston 291-299 (2007) | null | null | q-bio.QM | null | Many complex biological, social, and economical networks show topologies
drastically differing from random graphs. But, what is a complex network, i.e.\
how can one quantify the complexity of a graph? Here the Offdiagonal Complexity
(OdC), a new, and computationally cheap, measure of complexity is defined,
based on the node-node link cross-distribution, whose nondiagonal elements
characterize the graph structure beyond link distribution, cluster coefficient
and average path length. The OdC approach is applied to the {\sl Helicobacter
pylori} protein interaction network and randomly rewired surrogates thereof. In
addition, OdC is used to characterize the spatial complexity of cell
aggregates. We investigate the earliest embryo development states of
Caenorhabditis elegans. The development states of the premorphogenetic phase
are represented by symmetric binary-valued cell connection matrices with
dimension growing from 4 to 385. These matrices can be interpreted as adjacency
matrix of an undirected graph, or network. The OdC approach allows to describe
quantitatively the complexity of the cell aggregate geometry.
| [
{
"created": "Thu, 27 Dec 2007 11:54:12 GMT",
"version": "v1"
}
] | 2007-12-28 | [
[
"Claussen",
"Jens Christian",
"",
"University Kiel, Germany"
]
] | Many complex biological, social, and economical networks show topologies drastically differing from random graphs. But, what is a complex network, i.e.\ how can one quantify the complexity of a graph? Here the Offdiagonal Complexity (OdC), a new, and computationally cheap, measure of complexity is defined, based on the node-node link cross-distribution, whose nondiagonal elements characterize the graph structure beyond link distribution, cluster coefficient and average path length. The OdC approach is applied to the {\sl Helicobacter pylori} protein interaction network and randomly rewired surrogates thereof. In addition, OdC is used to characterize the spatial complexity of cell aggregates. We investigate the earliest embryo development states of Caenorhabditis elegans. The development states of the premorphogenetic phase are represented by symmetric binary-valued cell connection matrices with dimension growing from 4 to 385. These matrices can be interpreted as adjacency matrix of an undirected graph, or network. The OdC approach allows to describe quantitatively the complexity of the cell aggregate geometry. |
2311.04709 | John Nardini | John T. Nardini | Forecasting and predicting stochastic agent-based model data with
biologically-informed neural networks | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Collective migration is an important component of many biological processes,
including wound healing, tumorigenesis, and embryo development. Spatial
agent-based models (ABMs) are often used to model collective migration, but it
is challenging to thoroughly predict these models' behavior throughout
parameter space due to their random and computationally intensive nature.
Modelers often coarse-grain ABM rules into mean-field differential equation
(DE) models. While these DE models are fast to simulate, they suffer from poor
(or even ill-posed) ABM predictions in some regions of parameter space. In this
work, we describe how biologically-informed neural networks (BINNs) can be
trained to learn interpretable BINN-guided DE models capable of accurately
predicting ABM behavior. In particular, we show that BINN-guided partial DE
(PDE) simulations can 1.) forecast future spatial ABM data not seen during
model training, and 2.) predict ABM data at previously-unexplored parameter
values. This latter task is achieved by combining BINN-guided PDE simulations
with multivariate interpolation. We demonstrate our approach using three case
study ABMs of collective migration that imitate cell biology experiments and
find that BINN-guided PDEs accurately forecast and predict ABM data with a
one-compartment PDE when the mean-field PDE is ill-posed or requires two
compartments. This work suggests that BINN-guided PDEs allow modelers to
efficiently explore parameter space, which may enable data-driven tasks for
ABMs, such as estimating parameters from experimental data. All code and data
from our study is available at
https://github.com/johnnardini/Forecasting_predicting_ABMs.
| [
{
"created": "Wed, 8 Nov 2023 14:36:51 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Dec 2023 18:21:32 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Aug 2024 15:40:34 GMT",
"version": "v3"
}
] | 2024-08-14 | [
[
"Nardini",
"John T.",
""
]
] | Collective migration is an important component of many biological processes, including wound healing, tumorigenesis, and embryo development. Spatial agent-based models (ABMs) are often used to model collective migration, but it is challenging to thoroughly predict these models' behavior throughout parameter space due to their random and computationally intensive nature. Modelers often coarse-grain ABM rules into mean-field differential equation (DE) models. While these DE models are fast to simulate, they suffer from poor (or even ill-posed) ABM predictions in some regions of parameter space. In this work, we describe how biologically-informed neural networks (BINNs) can be trained to learn interpretable BINN-guided DE models capable of accurately predicting ABM behavior. In particular, we show that BINN-guided partial DE (PDE) simulations can 1.) forecast future spatial ABM data not seen during model training, and 2.) predict ABM data at previously-unexplored parameter values. This latter task is achieved by combining BINN-guided PDE simulations with multivariate interpolation. We demonstrate our approach using three case study ABMs of collective migration that imitate cell biology experiments and find that BINN-guided PDEs accurately forecast and predict ABM data with a one-compartment PDE when the mean-field PDE is ill-posed or requires two compartments. This work suggests that BINN-guided PDEs allow modelers to efficiently explore parameter space, which may enable data-driven tasks for ABMs, such as estimating parameters from experimental data. All code and data from our study is available at https://github.com/johnnardini/Forecasting_predicting_ABMs. |
1501.00717 | Amir Toor | Amir A. Toor, Abdullah A. Toor, Masoud H. Manjili | On The Organization Of Human T Cell Receptor Loci | 27 Pages, 6 Figures, 2 Supplementary Figures | null | 10.1098/rsif.2015.0911 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human T cell repertoire is generated by the rearrangement of variable
(V), diversity (D) and joining (J) segments on the T cell receptor (TCR) loci.
To determine whether the structural ordering of these gene segments on the TCR
loci contributes to the observed clonal frequencies, the TCR loci were examined
for self-similarity and periodicity in terms of gene segment organization.
Logarithmic transformation of numeric sequence order demonstrated that the V
and J gene segments for both T cell receptor alpha (TRA) and beta (TRB) loci
were arranged in a self-similar manner when the spacing between adjacent
segments was considered as a function of the size of the neighboring gene
segment. The ratio of genomic distance between either the J (in TRA) or D (in
TRB) segments and successive V segments on these loci declined logarithmically.
Accounting for the gene segments occurring on helical DNA molecules, in a
logarithmic distribution, sine and cosine functions of the log transformed
angular coordinates of the start and stop nucleotides of successive TCR gene
segments showed an ordered progression across the locus, supporting a
log-periodic organization. T cell clonal frequencies, based on V and J segment
usage, from three normal stem cell donors plotted against the respective
segment locations on TRB locus demonstrated a periodic variation. We
hypothesize that this quasi-periodic variation in T cell clonal repertoire may
be influenced by the location of the gene segments on the logarithmically
scaled TCR loci. Interactions between the two strands of DNA in the double
helix may influence the probability of gene segment usage by means of either
constructive or destructive interference resulting from the superposition of
the two helices, impacting probability of DNA recombination.
| [
{
"created": "Sun, 4 Jan 2015 20:42:17 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"Toor",
"Amir A.",
""
],
[
"Toor",
"Abdullah A.",
""
],
[
"Manjili",
"Masoud H.",
""
]
] | The human T cell repertoire is generated by the rearrangement of variable (V), diversity (D) and joining (J) segments on the T cell receptor (TCR) loci. To determine whether the structural ordering of these gene segments on the TCR loci contributes to the observed clonal frequencies, the TCR loci were examined for self-similarity and periodicity in terms of gene segment organization. Logarithmic transformation of numeric sequence order demonstrated that the V and J gene segments for both T cell receptor alpha (TRA) and beta (TRB) loci were arranged in a self-similar manner when the spacing between adjacent segments was considered as a function of the size of the neighboring gene segment. The ratio of genomic distance between either the J (in TRA) or D (in TRB) segments and successive V segments on these loci declined logarithmically. Accounting for the gene segments occurring on helical DNA molecules, in a logarithmic distribution, sine and cosine functions of the log transformed angular coordinates of the start and stop nucleotides of successive TCR gene segments showed an ordered progression across the locus, supporting a log-periodic organization. T cell clonal frequencies, based on V and J segment usage, from three normal stem cell donors plotted against the respective segment locations on TRB locus demonstrated a periodic variation. We hypothesize that this quasi-periodic variation in T cell clonal repertoire may be influenced by the location of the gene segments on the logarithmically scaled TCR loci. Interactions between the two strands of DNA in the double helix may influence the probability of gene segment usage by means of either constructive or destructive interference resulting from the superposition of the two helices, impacting probability of DNA recombination. |
2312.07547 | Tommaso Salvatori | Karl J. Friston, Tommaso Salvatori, Takuya Isomura, Alexander
Tschantz, Alex Kiefer, Tim Verbelen, Magnus Koudahl, Aswin Paul, Thomas Parr,
Adeel Razi, Brett Kagan, Christopher L. Buckley, and Maxwell J. D. Ramstead | Active Inference and Intentional Behaviour | 33 pages, 9 figures | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in theoretical biology suggest that basal cognition and
sentient behaviour are emergent properties of in vitro cell cultures and
neuronal networks, respectively. Such neuronal networks spontaneously learn
structured behaviours in the absence of reward or reinforcement. In this paper,
we characterise this kind of self-organisation through the lens of the free
energy principle, i.e., as self-evidencing. We do this by first discussing the
definitions of reactive and sentient behaviour in the setting of active
inference, which describes the behaviour of agents that model the consequences
of their actions. We then introduce a formal account of intentional behaviour,
that describes agents as driven by a preferred endpoint or goal in latent
state-spaces. We then investigate these forms of (reactive, sentient, and
intentional) behaviour using simulations. First, we simulate the aforementioned
in vitro experiments, in which neuronal cultures spontaneously learn to play
Pong, by implementing nested, free energy minimising processes. The simulations
are then used to deconstruct the ensuing predictive behaviour, leading to the
distinction between merely reactive, sentient, and intentional behaviour, with
the latter formalised in terms of inductive planning. This distinction is
further studied using simple machine learning benchmarks (navigation in a grid
world and the Tower of Hanoi problem), that show how quickly and efficiently
adaptive behaviour emerges under an inductive form of active inference.
| [
{
"created": "Wed, 6 Dec 2023 09:38:35 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Dec 2023 17:15:36 GMT",
"version": "v2"
}
] | 2023-12-19 | [
[
"Friston",
"Karl J.",
""
],
[
"Salvatori",
"Tommaso",
""
],
[
"Isomura",
"Takuya",
""
],
[
"Tschantz",
"Alexander",
""
],
[
"Kiefer",
"Alex",
""
],
[
"Verbelen",
"Tim",
""
],
[
"Koudahl",
"Magnus",
""
],
[
"Paul",
"Aswin",
""
],
[
"Parr",
"Thomas",
""
],
[
"Razi",
"Adeel",
""
],
[
"Kagan",
"Brett",
""
],
[
"Buckley",
"Christopher L.",
""
],
[
"Ramstead",
"Maxwell J. D.",
""
]
] | Recent advances in theoretical biology suggest that basal cognition and sentient behaviour are emergent properties of in vitro cell cultures and neuronal networks, respectively. Such neuronal networks spontaneously learn structured behaviours in the absence of reward or reinforcement. In this paper, we characterise this kind of self-organisation through the lens of the free energy principle, i.e., as self-evidencing. We do this by first discussing the definitions of reactive and sentient behaviour in the setting of active inference, which describes the behaviour of agents that model the consequences of their actions. We then introduce a formal account of intentional behaviour, that describes agents as driven by a preferred endpoint or goal in latent state-spaces. We then investigate these forms of (reactive, sentient, and intentional) behaviour using simulations. First, we simulate the aforementioned in vitro experiments, in which neuronal cultures spontaneously learn to play Pong, by implementing nested, free energy minimising processes. The simulations are then used to deconstruct the ensuing predictive behaviour, leading to the distinction between merely reactive, sentient, and intentional behaviour, with the latter formalised in terms of inductive planning. This distinction is further studied using simple machine learning benchmarks (navigation in a grid world and the Tower of Hanoi problem), that show how quickly and efficiently adaptive behaviour emerges under an inductive form of active inference. |
q-bio/0309012 | Nick Monk | Alun Thomas, Rob Cannings, Nicholas A.M. Monk, Chris Cannings | On the structure of protein-protein interaction networks | 6 pages, 6 figures. To appear in Biochem. Soc. Trans | null | null | null | q-bio.MN q-bio.QM | null | We present a simple model for the underlying structure of protein-protein
pairwise interaction graphs that is based on the way in which proteins attach
to each other in experiments such as yeast two-hybrid assays. We show that data
on the interactions of human proteins lend support to this model. The frequency
of the number of connections per protein under this model does not follow a
power law, in contrast to the reported behaviour of data from large scale yeast
two-hybrid screens of yeast protein-protein interactions. Sampling sub-graphs
from the underlying graphs generated with our model, in a way analogous to the
sampling performed in large scale yeast two-hybrid searches, gives degree
distributions that differ subtly from the power law and that fit the observed
data better than the power law itself. Our results show that the observation of
approximate power law behaviour in a sampled sub-graph does not imply that the
underlying graph follows a power law.
| [
{
"created": "Tue, 23 Sep 2003 15:38:37 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Thomas",
"Alun",
""
],
[
"Cannings",
"Rob",
""
],
[
"Monk",
"Nicholas A. M.",
""
],
[
"Cannings",
"Chris",
""
]
] | We present a simple model for the underlying structure of protein-protein pairwise interaction graphs that is based on the way in which proteins attach to each other in experiments such as yeast two-hybrid assays. We show that data on the interactions of human proteins lend support to this model. The frequency of the number of connections per protein under this model does not follow a power law, in contrast to the reported behaviour of data from large scale yeast two-hybrid screens of yeast protein-protein interactions. Sampling sub-graphs from the underlying graphs generated with our model, in a way analogous to the sampling performed in large scale yeast two-hybrid searches, gives degree distributions that differ subtly from the power law and that fit the observed data better than the power law itself. Our results show that the observation of approximate power law behaviour in a sampled sub-graph does not imply that the underlying graph follows a power law. |
1712.04127 | Simone Linz | Janosch D\"ocker and Simone Linz | On the existence of a cherry-picking sequence | Accepted for publication in Theoretical Computer Science | Theoretical Computer Science, 714:36-50, 2018 | 10.1016/j.tcs.2017.12.005 | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the minimum number of reticulation events that is required to
simultaneously embed a collection P of rooted binary phylogenetic trees into a
so-called temporal network has been characterized in terms of cherry-picking
sequences. Such a sequence is a particular ordering on the leaves of the trees
in P. However, it is well-known that not all collections of phylogenetic trees
have a cherry-picking sequence. In this paper, we show that the problem of
deciding whether or not P has a cherry-picking sequence is NP-complete for when
P contains at least eight rooted binary phylogenetic trees. Moreover, we use
automata theory to show that the problem can be solved in polynomial time if
the number of trees in P and the number of cherries in each such tree are
bounded by a constant.
| [
{
"created": "Tue, 12 Dec 2017 04:42:44 GMT",
"version": "v1"
}
] | 2021-04-13 | [
[
"Döcker",
"Janosch",
""
],
[
"Linz",
"Simone",
""
]
] | Recently, the minimum number of reticulation events that is required to simultaneously embed a collection P of rooted binary phylogenetic trees into a so-called temporal network has been characterized in terms of cherry-picking sequences. Such a sequence is a particular ordering on the leaves of the trees in P. However, it is well-known that not all collections of phylogenetic trees have a cherry-picking sequence. In this paper, we show that the problem of deciding whether or not P has a cherry-picking sequence is NP-complete when P contains at least eight rooted binary phylogenetic trees. Moreover, we use automata theory to show that the problem can be solved in polynomial time if the number of trees in P and the number of cherries in each such tree are bounded by a constant. |
1606.07029 | Sven Peter | Sven Peter, Daniel Durstewitz, Ferran Diego, Fred A. Hamprecht | Sparse convolutional coding for neuronal ensemble identification | 12 pages, 6 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell ensembles, originally proposed by Donald Hebb in 1949, are subsets of
synchronously firing neurons and proposed to explain basic firing behavior in
the brain. Despite having been studied for many years no conclusive evidence
has been presented yet for their existence and involvement in information
processing such that their identification is still a topic of modern research,
especially since simultaneous recordings of large neuronal populations have
become possible in the past three decades. These large recordings pose a
challenge for methods allowing to identify individual neurons forming cell
ensembles and their time course of activity inside the vast amounts of spikes
recorded. Related work so far focused on the identification of purely
simultaneously firing neurons using techniques such as Principal Component Analysis.
In this paper we propose a new algorithm based on sparse convolution coding
which is also able to find ensembles with temporal structure. Application of
our algorithm to synthetically generated datasets shows that it outperforms
previous work and is able to accurately identify temporal cell ensembles even
when those contain overlapping neurons or when strong background noise is
present.
| [
{
"created": "Wed, 22 Jun 2016 18:06:52 GMT",
"version": "v1"
}
] | 2016-06-23 | [
[
"Peter",
"Sven",
""
],
[
"Durstewitz",
"Daniel",
""
],
[
"Diego",
"Ferran",
""
],
[
"Hamprecht",
"Fred A.",
""
]
] | Cell ensembles, originally proposed by Donald Hebb in 1949, are subsets of synchronously firing neurons and proposed to explain basic firing behavior in the brain. Despite having been studied for many years no conclusive evidence has been presented yet for their existence and involvement in information processing such that their identification is still a topic of modern research, especially since simultaneous recordings of large neuronal populations have become possible in the past three decades. These large recordings pose a challenge for methods allowing to identify individual neurons forming cell ensembles and their time course of activity inside the vast amounts of spikes recorded. Related work so far focused on the identification of purely simultaneously firing neurons using techniques such as Principal Component Analysis. In this paper we propose a new algorithm based on sparse convolution coding which is also able to find ensembles with temporal structure. Application of our algorithm to synthetically generated datasets shows that it outperforms previous work and is able to accurately identify temporal cell ensembles even when those contain overlapping neurons or when strong background noise is present. |
2011.06222 | Liu Hong | Liu Hong, Xizhou Liu, Thomas C. T. Michaels, Tuomas P. J. Knowles | Hamiltonian Dynamics of Saturated Elongation in Amyloid Fiber Formation | 15 pages, 4 figures | null | null | null | q-bio.QM physics.bio-ph q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elongation is a fundamental process in amyloid fiber growth, which is normally
characterized by a linear relationship between the fiber elongation rate and
the monomer concentration. However, in high concentration regions, a sub-linear
dependence was often observed, which could be explained by a universal
saturation mechanism. In this paper, we modeled the saturated elongation
process through a Michaelis-Menten like mechanism, which is constituted by two
sub-steps -- unspecific association and dissociation of a monomer with the
fibril end, and subsequent conformational change of the associated monomer to
fit itself to the fibrillar structure. Typical saturation concentrations were
found to be $7-70\mu M$ for A$\beta$40, $\alpha$-synuclein, etc.
Furthermore, by using a novel Hamiltonian formulation, analytical solutions
valid for both weak and strong saturated conditions were constructed and
applied to the fibrillation kinetics of $\alpha$-synuclein and silk fibroin.
| [
{
"created": "Thu, 12 Nov 2020 06:21:54 GMT",
"version": "v1"
}
] | 2020-11-13 | [
[
"Hong",
"Liu",
""
],
[
"Liu",
"Xizhou",
""
],
[
"Michaels",
"Thomas C. T.",
""
],
[
"Knowles",
"Tuomas P. J.",
""
]
] | Elongation is a fundamental process in amyloid fiber growth, which is normally characterized by a linear relationship between the fiber elongation rate and the monomer concentration. However, in high concentration regions, a sub-linear dependence was often observed, which could be explained by a universal saturation mechanism. In this paper, we modeled the saturated elongation process through a Michaelis-Menten like mechanism, which is constituted by two sub-steps -- unspecific association and dissociation of a monomer with the fibril end, and subsequent conformational change of the associated monomer to fit itself to the fibrillar structure. Typical saturation concentrations were found to be $7-70\mu M$ for A$\beta$40, $\alpha$-synuclein, etc. Furthermore, by using a novel Hamiltonian formulation, analytical solutions valid for both weak and strong saturated conditions were constructed and applied to the fibrillation kinetics of $\alpha$-synuclein and silk fibroin. |
2401.00746 | Zhichao Zhu | Zhichao Zhu, Yang Qi, Wenlian Lu, Jianfeng Feng | Learn to integrate parts for whole through correlated neural variability | 18 pages, 5 figures | null | null | null | q-bio.NC cs.NE physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Sensory perception originates from the responses of sensory neurons, which
react to a collection of sensory signals linked to various physical attributes
of a singular perceptual object. Unraveling how the brain extracts perceptual
information from these neuronal responses is a pivotal challenge in both
computational neuroscience and machine learning. Here we introduce a
statistical mechanical theory, where perceptual information is first encoded in
the correlated variability of sensory neurons and then reformatted into the
firing rates of downstream neurons. Applying this theory, we illustrate the
encoding of motion direction using neural covariance and demonstrate
high-fidelity direction recovery by spiking neural networks. Networks trained
under this theory also show enhanced performance in classifying natural images,
achieving higher accuracy and faster inference speed. Our results challenge the
traditional view of neural covariance as a secondary factor in neural coding,
highlighting its potential influence on brain function.
| [
{
"created": "Mon, 1 Jan 2024 13:05:29 GMT",
"version": "v1"
}
] | 2024-01-02 | [
[
"Zhu",
"Zhichao",
""
],
[
"Qi",
"Yang",
""
],
[
"Lu",
"Wenlian",
""
],
[
"Feng",
"Jianfeng",
""
]
] | Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to various physical attributes of a singular perceptual object. Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning. Here we introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons. Applying this theory, we illustrate the encoding of motion direction using neural covariance and demonstrate high-fidelity direction recovery by spiking neural networks. Networks trained under this theory also show enhanced performance in classifying natural images, achieving higher accuracy and faster inference speed. Our results challenge the traditional view of neural covariance as a secondary factor in neural coding, highlighting its potential influence on brain function. |
2305.06769 | Jiahao Ma | Jingze Liu and Jiahao Ma | Comparative Analysis of Machine Learning Algorithms for Predicting
On-Target and Off-Target Effects of CRISPR-Cas13d for gene editing | code: https://www.kaggle.com/code/markblack370/cas13-pycaret/notebook | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | CRISPR-Cas13 is a system that utilizes single stranded RNAs for RNA editing.
Prediction of on-target and off-target effects for the CRISPR-Cas13d dependency
enables us to design specific single guide RNAs (sgRNAs) that help locate the
desired RNA target positions. In this study, we compared the performance of
multiple machine learning algorithms in predicting these effects using a
reported dataset. Our results show that Catboost is the most accurate model
with high sensitivity. This finding represents a significant advancement in our
understanding of how to choose modeling methods to deal with RNA sequence
features effectively. Furthermore, our approach can potentially be applied to
other CRISPR systems and genetic engineering techniques. Overall, this work has
important implications for developing safer and more effective gene therapies
and biotechnological applications.
| [
{
"created": "Thu, 11 May 2023 12:50:13 GMT",
"version": "v1"
}
] | 2023-05-12 | [
[
"Liu",
"Jingze",
""
],
[
"Ma",
"Jiahao",
""
]
] | CRISPR-Cas13 is a system that utilizes single stranded RNAs for RNA editing. Prediction of on-target and off-target effects for the CRISPR-Cas13d dependency enables us to design specific single guide RNAs (sgRNAs) that help locate the desired RNA target positions. In this study, we compared the performance of multiple machine learning algorithms in predicting these effects using a reported dataset. Our results show that Catboost is the most accurate model with high sensitivity. This finding represents a significant advancement in our understanding of how to choose modeling methods to deal with RNA sequence features effectively. Furthermore, our approach can potentially be applied to other CRISPR systems and genetic engineering techniques. Overall, this work has important implications for developing safer and more effective gene therapies and biotechnological applications. |
1710.02484 | Daniel Str\"ombom | Daniel Str\"ombom, Tasnia Hassan, W. Hunter Greis, Alice Antia | Asynchrony promotes polarized collective motion in attraction based
models | 8 pages, 4 figures | D Strombom, T Hassan*, WH Greis* & A Antia*. 2019. Asynchrony
induces polarized collective motion in attraction based models. Royal Society
Open Science 6:190381 | 10.1098/rsos.190381 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animal groups frequently move in a highly organized manner, as represented by
flocks of birds and schools of fish. Despite being an everyday occurrence, we
do not yet fully understand how this works. What type of social interactions
between animals gives rise to the overall flock structure and behavior we
observe? This question is often investigated using self-propelled particle
models where particles represent the individual animals. These models differ in
the social interactions used, individual particle properties, and various
technical assumptions. One particular technical assumption relates to whether
all particles update their headings and positions at exactly the same time
(synchronous update) or not (asynchronous update). Here we investigate the
causal effects of this assumption in a specific model and find that it has a
dramatic impact. In particular, polarized groups do not form when synchronous
update is used, but are always produced with asynchronous updates. We also show
that full asynchrony is not required for polarized groups to form and quantify
time to polarized group formation. Since many important models in the
literature have been implemented with synchronous update only, we speculate
that our understanding of these models, or rather the social interactions on
which they are based, may be incomplete. Perhaps a range of previously
unobserved dynamic phenomena will emerge if other potentially more realistic
update schemes are chosen.
| [
{
"created": "Fri, 6 Oct 2017 16:30:14 GMT",
"version": "v1"
}
] | 2021-11-23 | [
[
"Strömbom",
"Daniel",
""
],
[
"Hassan",
"Tasnia",
""
],
[
"Greis",
"W. Hunter",
""
],
[
"Antia",
"Alice",
""
]
] | Animal groups frequently move in a highly organized manner, as represented by flocks of birds and schools of fish. Despite being an everyday occurrence, we do not yet fully understand how this works. What type of social interactions between animals gives rise to the overall flock structure and behavior we observe? This question is often investigated using self-propelled particle models where particles represent the individual animals. These models differ in the social interactions used, individual particle properties, and various technical assumptions. One particular technical assumption relates to whether all particles update their headings and positions at exactly the same time (synchronous update) or not (asynchronous update). Here we investigate the causal effects of this assumption in a specific model and find that it has a dramatic impact. In particular, polarized groups do not form when synchronous update is used, but are always produced with asynchronous updates. We also show that full asynchrony is not required for polarized groups to form and quantify time to polarized group formation. Since many important models in the literature have been implemented with synchronous update only, we speculate that our understanding of these models, or rather the social interactions on which they are based, may be incomplete. Perhaps a range of previously unobserved dynamic phenomena will emerge if other potentially more realistic update schemes are chosen. |
2203.02729 | R. Mansilla | Mar\'ia T. P\'erez-Maldonado, Juli\'an Bravo-Castillero, Ricardo
Mansilla, Rogelio O. Caballero-P\'erez | Discrete Gompertz and Generalized Logistic Models for early monitoring
of the COVID-19 pandemic in Cuba | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | For the last few years there has been a resurgence in the use of
phenomenological growth models for predicting the early dynamics of infectious
diseases. These models assume that time is a continuous variable whereas in the
present contribution, the discrete versions of Gompertz and Generalized
Logistic models are used for early monitoring and short-term forecasting of the
spread of an epidemic in a region. The time-continuous models are represented
mathematically by first-order differential equations while their discrete
versions are represented by first-order difference equations that involve
parameters that should be estimated prior to forecasting. The methodology for
estimating such parameters is described in detail. Real data of COVID-19
infection in Cuba is used to illustrate this methodology. The proposed
methodology was implemented for the first thirty-five days, being able to
predict with very good precision the data reported for the following twenty
days. The codes implemented to study the Gompertz model in differences are
included in an appendix with each step of the methodology identified.
| [
{
"created": "Sat, 5 Mar 2022 12:54:29 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jan 2023 22:28:36 GMT",
"version": "v2"
}
] | 2023-01-24 | [
[
"Pérez-Maldonado",
"María T.",
""
],
[
"Bravo-Castillero",
"Julián",
""
],
[
"Mansilla",
"Ricardo",
""
],
[
"Caballero-Pérez",
"Rogelio O.",
""
]
] | For the last few years there has been a resurgence in the use of phenomenological growth models for predicting the early dynamics of infectious diseases. These models assume that time is a continuous variable whereas in the present contribution, the discrete versions of Gompertz and Generalized Logistic models are used for early monitoring and short-term forecasting of the spread of an epidemic in a region. The time-continuous models are represented mathematically by first-order differential equations while their discrete versions are represented by first-order difference equations that involve parameters that should be estimated prior to forecasting. The methodology for estimating such parameters is described in detail. Real data of COVID-19 infection in Cuba is used to illustrate this methodology. The proposed methodology was implemented for the first thirty-five days, being able to predict with very good precision the data reported for the following twenty days. The codes implemented to study the Gompertz model in differences are included in an appendix with each step of the methodology identified. |
2007.07571 | Joelle Despeyroux | Elisabetta de Maria, Joelle Despeyroux (CRISAM), Amy Felty (uOttawa),
Pietro Li\`o, Carlos Olarte (UFRN), Abdorrahim Bahrami (uOttawa) | Computational Logic for Biomedicine and Neurosciences | null | null | null | null | q-bio.QM cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We advocate here the use of computational logic for systems biology, as a
\emph{unified and safe} framework well suited for both modeling the dynamic
behaviour of biological systems, expressing properties of them, and verifying
these properties. The potential candidate logics should have a traditional
proof theoretic pedigree (including either induction, or a sequent calculus
presentation enjoying cut-elimination and focusing), and should come with
certified proof tools. Beyond providing a reliable framework, this allows the
correct encodings of our biological systems. For systems biology in general
and biomedicine in particular, we have so far, for the modeling part, three
candidate logics: all based on linear logic. The studied properties and their
proofs are formalized in a very expressive (non linear) inductive logic: the
Calculus of Inductive Constructions (CIC). The examples we have considered so
far are relatively simple ones; however, all coming with formal semi-automatic
proofs in the Coq system, which implements CIC. In neuroscience, we are
directly using CIC and Coq, to model neurons and some simple neuronal circuits
and prove some of their dynamic properties. In biomedicine, the study of
multi omic pathway interactions, together with clinical and electronic health
record data should help in drug discovery and disease diagnosis. Future work
includes using more automatic provers. This should enable us to specify and
study more realistic examples, and in the long term to provide a system for
disease diagnosis and therapy prognosis.
| [
{
"created": "Wed, 15 Jul 2020 09:37:09 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Oct 2020 11:58:49 GMT",
"version": "v2"
}
] | 2020-10-07 | [
[
"de Maria",
"Elisabetta",
"",
"CRISAM"
],
[
"Despeyroux",
"Joelle",
"",
"CRISAM"
],
[
"Felty",
"Amy",
"",
"uOttawa"
],
[
"Liò",
"Pietro",
"",
"UFRN"
],
[
"Olarte",
"Carlos",
"",
"UFRN"
],
[
"Bahrami",
"Abdorrahim",
"",
"uOttawa"
]
] | We advocate here the use of computational logic for systems biology, as a \emph{unified and safe} framework well suited for both modeling the dynamic behaviour of biological systems, expressing properties of them, and verifying these properties. The potential candidate logics should have a traditional proof theoretic pedigree (including either induction, or a sequent calculus presentation enjoying cut-elimination and focusing), and should come with certified proof tools. Beyond providing a reliable framework, this allows the correct encodings of our biological systems. For systems biology in general and biomedicine in particular, we have so far, for the modeling part, three candidate logics: all based on linear logic. The studied properties and their proofs are formalized in a very expressive (non linear) inductive logic: the Calculus of Inductive Constructions (CIC). The examples we have considered so far are relatively simple ones; however, all coming with formal semi-automatic proofs in the Coq system, which implements CIC. In neuroscience, we are directly using CIC and Coq, to model neurons and some simple neuronal circuits and prove some of their dynamic properties. In biomedicine, the study of multi omic pathway interactions, together with clinical and electronic health record data should help in drug discovery and disease diagnosis. Future work includes using more automatic provers. This should enable us to specify and study more realistic examples, and in the long term to provide a system for disease diagnosis and therapy prognosis. |
0801.2566 | Chad M. Topaz | A.J. Leverentz, C.M. Topaz, A.J. Bernoff | Asymptotic dynamics of attractive-repulsive swarms | 23 pages, 10 figures; revised version updates the analysis in sec.
2.1 and 2.2, and contains enhanced discussion of the admissible class of
social interaction forces | null | 10.1137/090749037 | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We classify and predict the asymptotic dynamics of a class of swarming
models. The model consists of a conservation equation in one dimension
describing the movement of a population density field. The velocity is found by
convolving the density with a kernel describing attractive-repulsive social
interactions. The kernel's first moment and its limiting behavior at the origin
determine whether the population asymptotically spreads, contracts, or reaches
steady-state. For the spreading case, the dynamics approach those of the porous
medium equation. The widening, compactly-supported population has edges that
behave like traveling waves whose speed, density and slope we calculate. For
the contracting case, the dynamics of the cumulative density approach those of
Burgers' equation. We derive an analytical upper bound for the finite blow-up
time after which the solution forms one or more $\delta$-functions.
| [
{
"created": "Wed, 16 Jan 2008 20:34:12 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Aug 2008 15:39:56 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Leverentz",
"A. J.",
""
],
[
"Topaz",
"C. M.",
""
],
[
"Bernoff",
"A. J.",
""
]
] | We classify and predict the asymptotic dynamics of a class of swarming models. The model consists of a conservation equation in one dimension describing the movement of a population density field. The velocity is found by convolving the density with a kernel describing attractive-repulsive social interactions. The kernel's first moment and its limiting behavior at the origin determine whether the population asymptotically spreads, contracts, or reaches steady-state. For the spreading case, the dynamics approach those of the porous medium equation. The widening, compactly-supported population has edges that behave like traveling waves whose speed, density and slope we calculate. For the contracting case, the dynamics of the cumulative density approach those of Burgers' equation. We derive an analytical upper bound for the finite blow-up time after which the solution forms one or more $\delta$-functions. |
0804.3279 | Gergely J Sz\"oll\H{o}si | Gergely J. Szollosi, Imre Derenyi | The Effect of Recombination on the Neutral Evolution of Genetic
Robustness | Accepted for publication in Math. Biosci. as part of the proceedings
of BIOCOMP 2007 | null | 10.1016/j.mbs.2008.03.010 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional population genetics considers the evolution of a limited number
of genotypes corresponding to phenotypes with different fitness. As model
phenotypes, in particular RNA secondary structure, have become computationally
tractable, however, it has become apparent that the context dependent effect of
mutations and the many-to-one nature inherent in these genotype-phenotype maps
can have fundamental evolutionary consequences. It has previously been
demonstrated that populations of genotypes evolving on the neutral networks
corresponding to all genotypes with the same secondary structure only through
neutral mutations can evolve mutational robustness [Nimwegen {\it et al.}
Neutral evolution of mutational robustness, 1999 PNAS], by concentrating the
population on regions of high neutrality. Introducing recombination we
demonstrate, through numerically calculating the stationary distribution of an
infinite population on ensembles of random neutral networks that mutational
robustness is significantly enhanced and further that the magnitude of this
enhancement is sensitive to details of the neutral network topology. Through
the simulation of finite populations of genotypes evolving on random neutral
networks and a scaled down microRNA neutral network, we show that even in
finite populations recombination will still act to focus the population on
regions of locally high neutrality.
| [
{
"created": "Mon, 21 Apr 2008 11:55:43 GMT",
"version": "v1"
}
] | 2008-04-22 | [
[
"Szollosi",
"Gergely J.",
""
],
[
"Derenyi",
"Imre",
""
]
] | Conventional population genetics considers the evolution of a limited number of genotypes corresponding to phenotypes with different fitness. As model phenotypes, in particular RNA secondary structure, have become computationally tractable, however, it has become apparent that the context dependent effect of mutations and the many-to-one nature inherent in these genotype-phenotype maps can have fundamental evolutionary consequences. It has previously been demonstrated that populations of genotypes evolving on the neutral networks corresponding to all genotypes with the same secondary structure only through neutral mutations can evolve mutational robustness [Nimwegen {\it et al.} Neutral evolution of mutational robustness, 1999 PNAS], by concentrating the population on regions of high neutrality. Introducing recombination we demonstrate, through numerically calculating the stationary distribution of an infinite population on ensembles of random neutral networks that mutational robustness is significantly enhanced and further that the magnitude of this enhancement is sensitive to details of the neutral network topology. Through the simulation of finite populations of genotypes evolving on random neutral networks and a scaled down microRNA neutral network, we show that even in finite populations recombination will still act to focus the population on regions of locally high neutrality. |
1305.6485 | Adam Auton | Adam Auton, Ying Rui Li, Jeffrey Kidd, Kyle Oliveira, Julie Nadel, J.
Kim Holloway, Jessica J. Hayward, Paula E. Cohen, John M. Greally, Jun Wang,
Carlos D. Bustamante, Adam R. Boyko | Genetic recombination is targeted towards gene promoter regions in dogs | Updated version, with significant revisions | null | 10.1371/journal.pgen.1003984 | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of the H3K4 trimethylase, PRDM9, as the gene responsible
for recombination hotspot localization has provided considerable insight into
the mechanisms by which recombination is initiated in mammals. However,
uniquely amongst mammals, canids appear to lack a functional version of PRDM9
and may therefore provide a model for understanding recombination that occurs
in the absence of PRDM9, and thus how PRDM9 functions to shape the
recombination landscape. We have constructed a fine-scale genetic map from
patterns of linkage disequilibrium assessed using high-throughput sequence data
from 51 free-ranging dogs, Canis lupus familiaris. While broad-scale properties
of recombination appear similar to other mammalian species, our fine-scale
estimates indicate that canine highly elevated recombination rates are observed
in the vicinity of CpG rich regions including gene promoter regions, but show
little association with H3K4 trimethylation marks identified in spermatocytes.
By comparison to genomic data from the Andean fox, Lycalopex culpaeus, we show
that biased gene conversion is a plausible mechanism by which the high CpG
content of the dog genome could have occurred.
| [
{
"created": "Tue, 28 May 2013 13:32:55 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2013 15:09:20 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Aug 2013 23:10:51 GMT",
"version": "v3"
}
] | 2013-12-16 | [
[
"Auton",
"Adam",
""
],
[
"Li",
"Ying Rui",
""
],
[
"Kidd",
"Jeffrey",
""
],
[
"Oliveira",
"Kyle",
""
],
[
"Nadel",
"Julie",
""
],
[
"Holloway",
"J. Kim",
""
],
[
"Hayward",
"Jessica J.",
""
],
[
"Cohen",
"Paula E.",
""
],
[
"Greally",
"John M.",
""
],
[
"Wang",
"Jun",
""
],
[
"Bustamante",
"Carlos D.",
""
],
[
"Boyko",
"Adam R.",
""
]
] | The identification of the H3K4 trimethylase, PRDM9, as the gene responsible for recombination hotspot localization has provided considerable insight into the mechanisms by which recombination is initiated in mammals. However, uniquely amongst mammals, canids appear to lack a functional version of PRDM9 and may therefore provide a model for understanding recombination that occurs in the absence of PRDM9, and thus how PRDM9 functions to shape the recombination landscape. We have constructed a fine-scale genetic map from patterns of linkage disequilibrium assessed using high-throughput sequence data from 51 free-ranging dogs, Canis lupus familiaris. While broad-scale properties of recombination appear similar to other mammalian species, our fine-scale estimates indicate that canine highly elevated recombination rates are observed in the vicinity of CpG rich regions including gene promoter regions, but show little association with H3K4 trimethylation marks identified in spermatocytes. By comparison to genomic data from the Andean fox, Lycalopex culpaeus, we show that biased gene conversion is a plausible mechanism by which the high CpG content of the dog genome could have occurred. |
2005.03455 | Gianluca Martelloni | Gabriele Martelloni and Gianluca Martelloni | Modelling the downhill of the Sars-Cov-2 in Italy and a universal
forecast of the epidemic in the world | 13 pages, 8 figures. arXiv admin note: substantial text overlap with
arXiv:2004.022 | null | 10.1016/j.chaos.2020.110064 | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a previous article [1] we have described the temporal evolution of the
Sars-Cov-2 in Italy in the time window February 24-April 1. As we can see in
[1] a generalized logistic equation captures both the peaks of the total
infected and the deaths. In this article our goal is to study the missing peak,
i.e. the currently infected one (or total currently positive). After the April
7 the large increase in the number of swabs meant that the logistical behavior
of the infected curve no longer worked. So we decided to generalize the model,
introducing new parameters. Moreover, we adopt a similar approach used in [1]
(for the estimation of deaths) in order to evaluate the recoveries. In this
way, introducing a simple conservation law, we define a model with 4
populations: total infected, currently positives, recoveries and deaths.
Therefore, we propose an alternative method to a classical SIRD model for the
evaluation of the Sars-Cov-2 epidemic. However, the method is general and thus
applicable to other diseases. Finally we study the behavior of the ratio
infected over swabs for Italy, Germany and USA, and we show as studying this
parameter we recover the generalized Logistic model used in [1] for these three
countries. We think that this trend could be useful for a future epidemic of
this coronavirus.
| [
{
"created": "Thu, 7 May 2020 13:26:56 GMT",
"version": "v1"
},
{
"created": "Wed, 13 May 2020 13:29:07 GMT",
"version": "v2"
}
] | 2020-08-26 | [
[
"Martelloni",
"Gabriele",
""
],
[
"Martelloni",
"Gianluca",
""
]
] | In a previous article [1] we have described the temporal evolution of the Sars-Cov-2 in Italy in the time window February 24-April 1. As we can see in [1] a generalized logistic equation captures both the peaks of the total infected and the deaths. In this article our goal is to study the missing peak, i.e. the currently infected one (or total currently positive). After the April 7 the large increase in the number of swabs meant that the logistical behavior of the infected curve no longer worked. So we decided to generalize the model, introducing new parameters. Moreover, we adopt a similar approach used in [1] (for the estimation of deaths) in order to evaluate the recoveries. In this way, introducing a simple conservation law, we define a model with 4 populations: total infected, currently positives, recoveries and deaths. Therefore, we propose an alternative method to a classical SIRD model for the evaluation of the Sars-Cov-2 epidemic. However, the method is general and thus applicable to other diseases. Finally we study the behavior of the ratio infected over swabs for Italy, Germany and USA, and we show as studying this parameter we recover the generalized Logistic model used in [1] for these three countries. We think that this trend could be useful for a future epidemic of this coronavirus. |
q-bio/0506037 | Francesco Romeo | A. Noviello, F. Romeo and R. De Luca | Time Evolution of Non-Lethal Infectious Diseases: A Semi-Continuous
Approach | 21 pages | Eur. Phys. J. B 50, 505-511 (2006) | 10.1140/epjb/e2006-00163-4 | null | q-bio.PE q-bio.QM | null | A model describing the dynamics related to the spreading of non-lethal
infectious diseases in a fixed-size population is proposed. The model consists
of a non-linear delay-differential equation describing the time evolution of
the increment in the number of infectious individuals and depends upon a
limited number of parameters. Predictions are in good qualitative agreement
with data on influenza.
| [
{
"created": "Sat, 25 Jun 2005 19:32:09 GMT",
"version": "v1"
}
] | 2010-10-05 | [
[
"Noviello",
"A.",
""
],
[
"Romeo",
"F.",
""
],
[
"De Luca",
"R.",
""
]
] | A model describing the dynamics related to the spreading of non-lethal infectious diseases in a fixed-size population is proposed. The model consists of a non-linear delay-differential equation describing the time evolution of the increment in the number of infectious individuals and depends upon a limited number of parameters. Predictions are in good qualitative agreement with data on influenza. |
2009.00776 | Fabio Giardina | F. Giardina and L. Mahadevan | Models of benthic bipedalism | null | null | null | null | q-bio.QM cs.RO nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Walking is a common bipedal and quadrupedal gait and is often associated with
terrestrial and aquatic organisms. Inspired by recent evidence of the neural
underpinnings of primitive aquatic walking in the little skate Leucoraja
erinacea, we introduce a theoretical model of aquatic walking that reveals
robust and efficient gaits with modest requirements for body morphology and
control. The model predicts undulatory behavior of the system body with a
regular foot placement pattern which is also observed in the animal, and
additionally predicts the existence of gait bistability between two states, one
with a large energetic cost for locomotion and another associated with almost
no energetic cost. We show that these can be discovered using a simple
reinforcement learning scheme. To test these theoretical frameworks, we built a
bipedal robot and show that its behaviors are similar to those of our minimal
model: its gait is also periodic and exhibits bistability, with a low
efficiency gait separated from a high efficiency gait by a "jump" transition.
Overall, our study highlights the physical constraints on the evolution of
walking and provides a guide for the design of efficient biomimetic robots.
| [
{
"created": "Wed, 2 Sep 2020 01:45:54 GMT",
"version": "v1"
}
] | 2020-09-03 | [
[
"Giardina",
"F.",
""
],
[
"Mahadevan",
"L.",
""
]
] | Walking is a common bipedal and quadrupedal gait and is often associated with terrestrial and aquatic organisms. Inspired by recent evidence of the neural underpinnings of primitive aquatic walking in the little skate Leucoraja erinacea, we introduce a theoretical model of aquatic walking that reveals robust and efficient gaits with modest requirements for body morphology and control. The model predicts undulatory behavior of the system body with a regular foot placement pattern which is also observed in the animal, and additionally predicts the existence of gait bistability between two states, one with a large energetic cost for locomotion and another associated with almost no energetic cost. We show that these can be discovered using a simple reinforcement learning scheme. To test these theoretical frameworks, we built a bipedal robot and show that its behaviors are similar to those of our minimal model: its gait is also periodic and exhibits bistability, with a low efficiency gait separated from a high efficiency gait by a "jump" transition. Overall, our study highlights the physical constraints on the evolution of walking and provides a guide for the design of efficient biomimetic robots. |
2211.04817 | Chanati Jantrachotechatchawan PhD | Nayada Pandee, Prasert Auewarakul, Chanati Jantrachotechatchawan | DNA Methylation in hypoxia in Mycobacterium tuberculosis | 20 pages, 9 tables | null | null | null | q-bio.GN q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | Tuberculosis is one of the most lethal contagious diseases caused by
Mycobacterium tuberculosis (MTB), in many cases, the infected did not show any
symptoms, because the bacilli entered the dormant stage in granulomas. The
dormant stage of MTB is also associated with higher resistance to drugs and the
immune system. Among multiple epigenetic regulations critical to MTB stress
responses, DNA methylation is necessary for the survival of MTB in hypoxic
conditions, which is a common stress event during granuloma formation. This
review gathers previous findings and demonstrates a meta-analysis by collecting
hypoxia gene expression data from several articles and performing association
analysis between those genes and methylation site profiles across whole genomes
of representative strains of lineages 2 and 4. While more data is required for
more conclusive support, our results suggest that methylation sites in the
possible promoter regions may induce differential gene regulation in hypoxia.
| [
{
"created": "Wed, 9 Nov 2022 11:36:08 GMT",
"version": "v1"
}
] | 2022-11-10 | [
[
"Pandee",
"Nayada",
""
],
[
"Auewarakul",
"Prasert",
""
],
[
"Jantrachotechatchawan",
"Chanati",
""
]
] | Tuberculosis is one of the most lethal contagious diseases caused by Mycobacterium tuberculosis (MTB), in many cases, the infected did not show any symptoms, because the bacilli entered the dormant stage in granulomas. The dormant stage of MTB is also associated with higher resistance to drugs and the immune system. Among multiple epigenetic regulations critical to MTB stress responses, DNA methylation is necessary for the survival of MTB in hypoxic conditions, which is a common stress event during granuloma formation. This review gathers previous findings and demonstrates a meta-analysis by collecting hypoxia gene expression data from several articles and performing association analysis between those genes and methylation site profiles across whole genomes of representative strains of lineages 2 and 4. While more data is required for more conclusive support, our results suggest that methylation sites in the possible promoter regions may induce differential gene regulation in hypoxia. |
1305.2378 | Krzysztof Bartoszek | Krzysztof Bartoszek | Quantifying the effects of anagenetic and cladogenetic evolution | null | Mathematical Biosciences 2014, 254, 42-57 | null | null | q-bio.PE math.PR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ongoing debate in evolutionary biology is whether phenotypic change occurs
predominantly around the time of speciation or whether it instead accumulates
gradually over time. In this work I propose a general framework incorporating
both types of change, quantify the effects of speciational change via the
correlation between species and attribute the proportion of change to each
type. I discuss results of parameter estimation of Hominoid body size in this
light. I derive mathematical formulae related to this problem, the probability
generating functions of the number of speciation events along a randomly drawn
lineage and from the most recent common ancestor of two randomly chosen tip
species for a conditioned Yule tree. Additionally I obtain in closed form the
variance of the distance from the root to the most recent common ancestor of
two randomly chosen tip species.
| [
{
"created": "Fri, 10 May 2013 15:58:46 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2013 14:53:49 GMT",
"version": "v2"
},
{
"created": "Mon, 26 May 2014 17:23:06 GMT",
"version": "v3"
}
] | 2014-07-02 | [
[
"Bartoszek",
"Krzysztof",
""
]
] | An ongoing debate in evolutionary biology is whether phenotypic change occurs predominantly around the time of speciation or whether it instead accumulates gradually over time. In this work I propose a general framework incorporating both types of change, quantify the effects of speciational change via the correlation between species and attribute the proportion of change to each type. I discuss results of parameter estimation of Hominoid body size in this light. I derive mathematical formulae related to this problem, the probability generating functions of the number of speciation events along a randomly drawn lineage and from the most recent common ancestor of two randomly chosen tip species for a conditioned Yule tree. Additionally I obtain in closed form the variance of the distance from the root to the most recent common ancestor of two randomly chosen tip species. |
1408.6501 | Max Souza | Fabio A. C. C. Chalub and Max O. Souza | Fixation in large populations: a continuous view of a discrete problem | null | Journal of Mathematical Biology Volume 72, Issue 1-2 , pp 283-330
(2016) | 10.1007/s00285-015-0889-9 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study fixation in large, but finite, populations with two types, and
dynamics governed by birth-death processes. By considering a restricted class
of such processes, we derive a continuous approximation for the probability of
fixation that is valid beyond the weak-selection (WS) limit. From the
continuous approximations, we then obtain asymptotic approximations for
evolutionary dynamics with at most one equilibrium, in the selection-driven
regime, that does not preclude a weak-selection regime. As an application, we
study the fixation pattern when the infinite population limit has an interior
Evolutionary Stable Strategy (ESS): (i) we show that the fixation pattern for
the Hawk and Dove game satisfies what we term the one-half law: if the
Evolutionary Stable Strategy (ESS) is outside a small interval around
$\sfrac{1}{2}$, the fixation is of dominance type; (ii) we also show that,
outside of the weak-selection regime, the long-term dynamics of large
populations can have very little resemblance to the infinite population case;
in addition, we also present results for the case of two equilibria. Finally,
we present continuous restatements valid for large populations of two classical
concepts naturally defined in the discrete case: (i) the definition of an
$\textsc{ESS}_N$ strategy; (ii) the definition of a risk-dominant strategy. We
then present two applications of these restatements: (i) we obtain an
asymptotic definition valid in the quasi-neutral regime that recovers both the
one-third law under linear fitness and the generalised one-third law for
$d$-player games; (ii) we extend the ideas behind the (generalised) one-third
law outside the quasi-neutral regime and, as a generalisation, we introduce the
concept of critical-frequency; (iii) we recover the classification of
risk-dominant strategies for $d$-player games.
| [
{
"created": "Wed, 27 Aug 2014 19:32:24 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Mar 2015 21:11:43 GMT",
"version": "v2"
}
] | 2016-02-02 | [
[
"Chalub",
"Fabio A. C. C.",
""
],
[
"Souza",
"Max O.",
""
]
] | We study fixation in large, but finite, populations with two types, and dynamics governed by birth-death processes. By considering a restricted class of such processes, we derive a continuous approximation for the probability of fixation that is valid beyond the weak-selection (WS) limit. From the continuous approximations, we then obtain asymptotic approximations for evolutionary dynamics with at most one equilibrium, in the selection-driven regime, that does not preclude a weak-selection regime. As an application, we study the fixation pattern when the infinite population limit has an interior Evolutionary Stable Strategy (ESS): (i) we show that the fixation pattern for the Hawk and Dove game satisfies what we term the one-half law: if the Evolutionary Stable Strategy (ESS) is outside a small interval around $\sfrac{1}{2}$, the fixation is of dominance type; (ii) we also show that, outside of the weak-selection regime, the long-term dynamics of large populations can have very little resemblance to the infinite population case; in addition, we also present results for the case of two equilibria. Finally, we present continuous restatements valid for large populations of two classical concepts naturally defined in the discrete case: (i) the definition of an $\textsc{ESS}_N$ strategy; (ii) the definition of a risk-dominant strategy. We then present two applications of these restatements: (i) we obtain an asymptotic definition valid in the quasi-neutral regime that recovers both the one-third law under linear fitness and the generalised one-third law for $d$-player games; (ii) we extend the ideas behind the (generalised) one-third law outside the quasi-neutral regime and, as a generalisation, we introduce the concept of critical-frequency; (iii) we recover the classification of risk-dominant strategies for $d$-player games. |
1412.2155 | Susan Khor | Susan Khor | Protein residue networks from a local search perspective | v5 has 74 pages, a correction in section 2.1, an expansion of
Appendix B and the addition of Appendix H. Only materials and results related
to sections 3.1 to 3.6 have been published in the journal article which has
the same title as this manuscript. Materials and results from section 3.7 and
section 4 are each expanded in other manuscripts, Journal of Complex Networks
(2015) | J Complex Netw (2016) 4 (2): 245-278 | 10.1093/comnet/cnv014 | null | q-bio.MN cs.CE cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We examined protein residue networks (PRNs) from a local search perspective
to understand why PRNs are highly clustered when having short paths is
important for protein functionality. We found that by adopting a local search
perspective, this conflict between form and function is resolved as increased
clustering actually helps to reduce path length in PRNs. Further, the paths
found via our EDS local search algorithm are more congruent with the
characteristics of intra-protein communication. EDS identifies a subset of PRN
edges called short-cuts that are distinct, have high usage, impact EDS path
length, diversity and stretch, and are dominated by short-range contacts. The
short-cuts form a network (SCN) that increases in size and transitivity as a
protein folds. The structure of a SCN supports its function and formation, and
the function of a SCN influences its formation. Several significant differences
in terms of SCN structure, function and formation are found between successful
and unsuccessful MD trajectories. By connecting the static and the dynamic
aspects of PRNs, the protein folding process becomes a problem of graph
formation with the purpose of forming suitable pathways within proteins.
| [
{
"created": "Thu, 4 Dec 2014 20:20:14 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Dec 2014 16:37:14 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Apr 2015 17:43:47 GMT",
"version": "v3"
},
{
"created": "Mon, 1 Jun 2015 19:00:12 GMT",
"version": "v4"
},
{
"created": "Mon, 30 Nov 2015 00:20:33 GMT",
"version": "v5"
}
] | 2017-06-20 | [
[
"Khor",
"Susan",
""
]
] | We examined protein residue networks (PRNs) from a local search perspective to understand why PRNs are highly clustered when having short paths is important for protein functionality. We found that by adopting a local search perspective, this conflict between form and function is resolved as increased clustering actually helps to reduce path length in PRNs. Further, the paths found via our EDS local search algorithm are more congruent with the characteristics of intra-protein communication. EDS identifies a subset of PRN edges called short-cuts that are distinct, have high usage, impact EDS path length, diversity and stretch, and are dominated by short-range contacts. The short-cuts form a network (SCN) that increases in size and transitivity as a protein folds. The structure of a SCN supports its function and formation, and the function of a SCN influences its formation. Several significant differences in terms of SCN structure, function and formation are found between successful and unsuccessful MD trajectories. By connecting the static and the dynamic aspects of PRNs, the protein folding process becomes a problem of graph formation with the purpose of forming suitable pathways within proteins. |
2405.20747 | Julio R. Banga | Julio R. Banga and Sebastian Sager | Generalized Inverse Optimal Control and its Application in Biology | null | null | null | null | q-bio.QM math.OC | http://creativecommons.org/licenses/by/4.0/ | Living organisms exhibit remarkable adaptations across all scales, from
molecules to ecosystems. We believe that many of these adaptations correspond
to optimal solutions driven by evolution, training, and underlying physical and
chemical laws and constraints. While some argue against such optimality
principles due to their potential ambiguity, we propose generalized inverse
optimal control to infer them directly from data. This novel approach
incorporates multi-criteria optimality, nestedness of objective functions on
different scales, the presence of active constraints, the possibility of
switches of optimality principles during the observed time horizon,
maximization of robustness, and minimization of time as important special
cases, as well as uncertainties involved with the mathematical modeling of
biological systems. This data-driven approach ensures that optimality
principles are not merely theoretical constructs but are firmly rooted in
experimental observations. Furthermore, the inferred principles can be used in
forward optimal control to predict and manipulate biological systems, with
possible applications in bio-medicine, biotechnology, and agriculture. As
discussed and illustrated, the well-posed problem formulation and the inference
are challenging and require a substantial interdisciplinary effort in the
development of theory and robust numerical methods.
| [
{
"created": "Fri, 31 May 2024 10:23:47 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Banga",
"Julio R.",
""
],
[
"Sager",
"Sebastian",
""
]
] | Living organisms exhibit remarkable adaptations across all scales, from molecules to ecosystems. We believe that many of these adaptations correspond to optimal solutions driven by evolution, training, and underlying physical and chemical laws and constraints. While some argue against such optimality principles due to their potential ambiguity, we propose generalized inverse optimal control to infer them directly from data. This novel approach incorporates multi-criteria optimality, nestedness of objective functions on different scales, the presence of active constraints, the possibility of switches of optimality principles during the observed time horizon, maximization of robustness, and minimization of time as important special cases, as well as uncertainties involved with the mathematical modeling of biological systems. This data-driven approach ensures that optimality principles are not merely theoretical constructs but are firmly rooted in experimental observations. Furthermore, the inferred principles can be used in forward optimal control to predict and manipulate biological systems, with possible applications in bio-medicine, biotechnology, and agriculture. As discussed and illustrated, the well-posed problem formulation and the inference are challenging and require a substantial interdisciplinary effort in the development of theory and robust numerical methods. |
1004.5587 | Valerie Hower | Steven N. Evans, Valerie Hower and Lior Pachter | Coverage statistics for sequence census methods | 10 pages, 4 figures | null | null | null | q-bio.GN math.PR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: We study the statistical properties of fragment coverage in
genome sequencing experiments. In an extension of the classic Lander-Waterman
model, we consider the effect of the length distribution of fragments. We also
introduce the notion of the shape of a coverage function, which can be used to
detect aberrations in coverage. The probability theory underlying these
problems is essential for constructing models of current high-throughput
sequencing experiments, where both sample preparation protocols and sequencing
technology particulars can affect fragment length distributions.
Results: We show that regardless of fragment length distribution and under
the mild assumption that fragment start sites are Poisson distributed, the
fragments produced in a sequencing experiment can be viewed as resulting from a
two-dimensional spatial Poisson process. We then study the jump skeleton of
the coverage function, and show that the induced trees are Galton-Watson trees
whose parameters can be computed.
Conclusions: Our results extend standard analyses of shotgun sequencing that
focus on coverage statistics at individual sites, and provide a null model for
detecting deviations from random coverage in high-throughput sequence census
based experiments. By focusing on fragments, we are also led to a new approach
for visualizing sequencing data that should be of independent interest.
| [
{
"created": "Fri, 30 Apr 2010 18:36:40 GMT",
"version": "v1"
}
] | 2010-05-03 | [
[
"Evans",
"Steven N.",
""
],
[
"Hower",
"Valerie",
""
],
[
"Pachter",
"Lior",
""
]
] | Background: We study the statistical properties of fragment coverage in genome sequencing experiments. In an extension of the classic Lander-Waterman model, we consider the effect of the length distribution of fragments. We also introduce the notion of the shape of a coverage function, which can be used to detect aberrations in coverage. The probability theory underlying these problems is essential for constructing models of current high-throughput sequencing experiments, where both sample preparation protocols and sequencing technology particulars can affect fragment length distributions. Results: We show that regardless of fragment length distribution and under the mild assumption that fragment start sites are Poisson distributed, the fragments produced in a sequencing experiment can be viewed as resulting from a two-dimensional spatial Poisson process. We then study the jump skeleton of the coverage function, and show that the induced trees are Galton-Watson trees whose parameters can be computed. Conclusions: Our results extend standard analyses of shotgun sequencing that focus on coverage statistics at individual sites, and provide a null model for detecting deviations from random coverage in high-throughput sequence census based experiments. By focusing on fragments, we are also led to a new approach for visualizing sequencing data that should be of independent interest. |
1512.00423 | Vicente M. Reyes Ph.D. | Vicente M. Reyes | Implementation of the Tangent Sphere and Cutting Plane Methods in the
Quantitative Determination of Ligand Binding Site Burial Depths in Proteins
Using FORTRAN 77/90 Language | 21 pages, 6466 words total (17 pages/5881 words text, 4 pages/585
words figures+tables+legends), 2 figures, 2 tables | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ligand burial depth is an indicator of protein flexibility, as the extent of
receptor conformational change required to bind a ligand in general varies
directly with its depth of burial. In a companion paper (Reyes, V.M. 2015a), we
report on the Tangent Sphere (TS) and Cutting Plane (CP) methods --
complementary methods to quantify, independent of protein size, the degree of
ligand burial in a protein receptor. In this report, we present results that
demonstrate the effectiveness of a set of FORTRAN 77 and 90 source codes used
in the implementation of the two related procedures, as well as the precise
implementation of the procedures. Particularly, we show here that application
of the TS and CP methods on a theoretical model protein in the form of a
spherical grid of points accurately portrays the behavior of the TS and CP
indices, the predictive parameters obtained from the two methods. We also show
that results of the implementation of the TS and CP methods on six protein
receptors (Laskowski et al. 1996) are in agreement with their findings regarding
cavity sizes in these proteins. The six FORTRAN programs we present here are:
find_molec_centr.f, tangent_sphere.f, find_CP_coeffs.f, CPM_Neg_Side.f,
CPM_Pos_Side.f and CPM_Zero_Side.f. The first program calculates the x-, y- and
z-coordinates of the molecular geometric centroid of the protein (global
centroid, GC), the center of the TS. Its radius is the distance between the GC
and the local centroid (LC), the centroid of the bound ligand or a portion of
its binding site. The second program finds the number of protein atoms inside,
outside and on the TS. The third determines the four coefficients A, B, C and D
of the equation of the CP, Ax + By + Cz + D = 0. The CP is tangent to the TS at
GC. The fourth, fifth and sixth programs determine the number of protein atoms
lying on the negative side, positive side, and on the CP.
| [
{
"created": "Mon, 30 Nov 2015 07:44:47 GMT",
"version": "v1"
}
] | 2015-12-02 | [
[
"Reyes",
"Vicente M.",
""
]
] | Ligand burial depth is an indicator of protein flexibility, as the extent of receptor conformational change required to bind a ligand in general varies directly with its depth of burial. In a companion paper (Reyes, V.M. 2015a), we report on the Tangent Sphere (TS) and Cutting Plane (CP) methods -- complementary methods to quantify, independent of protein size, the degree of ligand burial in a protein receptor. In this report, we present results that demonstrate the effectiveness of a set of FORTRAN 77 and 90 source codes used in the implementation of the two related procedures, as well as the precise implementation of the procedures. Particularly, we show here that application of the TS and CP methods on a theoretical model protein in the form of a spherical grid of points accurately portrays the behavior of the TS and CP indices, the predictive parameters obtained from the two methods. We also show that results of the implementation of the TS and CP methods on six protein receptors (Laskowski et al. 1996) are in agreement with their findings regarding cavity sizes in these proteins. The six FORTRAN programs we present here are: find_molec_centr.f, tangent_sphere.f, find_CP_coeffs.f, CPM_Neg_Side.f, CPM_Pos_Side.f and CPM_Zero_Side.f. The first program calculates the x-, y- and z-coordinates of the molecular geometric centroid of the protein (global centroid, GC), the center of the TS. Its radius is the distance between the GC and the local centroid (LC), the centroid of the bound ligand or a portion of its binding site. The second program finds the number of protein atoms inside, outside and on the TS. The third determines the four coefficients A, B, C and D of the equation of the CP, Ax + By + Cz + D = 0. The CP is tangent to the TS at GC. The fourth, fifth and sixth programs determine the number of protein atoms lying on the negative side, positive side, and on the CP. |
0801.3056 | Luciano da Fontoura Costa | Luciano da Fontoura Costa | Transient and Equilibrium Synchronization in Complex Neuronal Networks | 25 pages, 26 figures. A working manuscript: comments and suggestions
welcomed | null | null | null | q-bio.NC cond-mat.dis-nn physics.bio-ph | null | Transient and equilibrium synchronizations in complex neuronal networks as a
consequence of dynamics induced by having sources placed at specific neurons
are investigated. The basic integrate-and-fire neuron is adopted, and the
dynamics is estimated computationally so as to obtain the activation at each
node along each instant of time. In the transient case, the dynamics is
implemented so as to conserve the total activation entering the system. In our
equilibrium investigations, the internally stored activation is limited to the
value of the respective threshold. The synchronization of the activation of the
network is then quantified in terms of its normalized entropy. The equilibrium
investigations involve the application of a number of complementary
characterization methods, including spectra and Principal Component Analysis,
as well as of an equivalent model capable of reproducing both the transient and
equilibrium dynamics. The potential of such concepts and measurements is
explored with respect to several theoretical models, as well as for the
neuronal network of \emph{C. elegans}. A series of interesting results are
obtained and discussed, including the fact that all models led to a transient
period of synchronization, whose specific features depend on the topological
structures of the networks. The investigations of the equilibrium dynamics
revealed a series of remarkable insights, including the relationship between
spiking oscillations and the hierarchical structure of the networks and the
identification of twin correlation patterns between node degree and total
activation, implying that hubs of connectivity are also hubs of
integrate-and-fire activation.
| [
{
"created": "Sun, 20 Jan 2008 02:04:19 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Feb 2008 14:08:43 GMT",
"version": "v2"
}
] | 2008-02-18 | [
[
"Costa",
"Luciano da Fontoura",
""
]
] | Transient and equilibrium synchronizations in complex neuronal networks as a consequence of dynamics induced by having sources placed at specific neurons are investigated. The basic integrate-and-fire neuron is adopted, and the dynamics is estimated computationally so as to obtain the activation at each node along each instant of time. In the transient case, the dynamics is implemented so as to conserve the total activation entering the system. In our equilibrium investigations, the internally stored activation is limited to the value of the respective threshold. The synchronization of the activation of the network is then quantified in terms of its normalized entropy. The equilibrium investigations involve the application of a number of complementary characterization methods, including spectra and Principal Component Analysis, as well as of an equivalent model capable of reproducing both the transient and equilibrium dynamics. The potential of such concepts and measurements is explored with respect to several theoretical models, as well as for the neuronal network of \emph{C. elegans}. A series of interesting results are obtained and discussed, including the fact that all models led to a transient period of synchronization, whose specific features depend on the topological structures of the networks. The investigations of the equilibrium dynamics revealed a series of remarkable insights, including the relationship between spiking oscillations and the hierarchical structure of the networks and the identification of twin correlation patterns between node degree and total activation, implying that hubs of connectivity are also hubs of integrate-and-fire activation. |
2401.13960 | Haochen Fu | Haochen Fu, Chenyi Fei, Qi Ouyang, Yuhai Tu | Temperature Compensation through Kinetic Regulation in Biochemical
Oscillators | 19 pages, 11 figures (main text + supplementary information) | null | null | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Nearly all circadian clocks maintain a period that is insensitive to
temperature changes, a phenomenon known as temperature compensation (TC). Yet,
it is unclear whether there is any common feature among different systems that
exhibit TC. From a general timescale invariance, we show that TC relies on
existence of certain period-lengthening reactions wherein the period of the
system increases strongly with the rates in these reactions. By studying
several generic oscillator models, we show that this counter-intuitive
dependence is nonetheless a common feature of oscillators in the nonlinear
(far-from-onset) regime where the oscillation can be separated into fast and
slow phases. The increase of the period with the period-lengthening reaction
rates occurs when the amplitude of the slow phase in the oscillation increases
with these rates while the progression-speed in the slow phase is controlled by
other rates of the system. The positive dependence of the period on the
period-lengthening rates balances its inverse dependence on other kinetic rates
in the system, which gives rise to robust TC in a wide range of parameters. We
demonstrate the existence of such period-lengthening reactions and their
relevance for TC in all four model systems we considered. Theoretical results
for a model of the Kai system are supported by experimental data. A study of
the energy dissipation also shows that better TC performance requires higher
energy consumption. Our study unveils a general mechanism by which a
biochemical oscillator achieves TC by operating at regimes far from the onset
where period-lengthening reactions exist.
| [
{
"created": "Thu, 25 Jan 2024 05:40:16 GMT",
"version": "v1"
}
] | 2024-01-26 | [
[
"Fu",
"Haochen",
""
],
[
"Fei",
"Chenyi",
""
],
[
"Ouyang",
"Qi",
""
],
[
"Tu",
"Yuhai",
""
]
] | Nearly all circadian clocks maintain a period that is insensitive to temperature changes, a phenomenon known as temperature compensation (TC). Yet, it is unclear whether there is any common feature among different systems that exhibit TC. From a general timescale invariance, we show that TC relies on the existence of certain period-lengthening reactions wherein the period of the system increases strongly with the rates in these reactions. By studying several generic oscillator models, we show that this counter-intuitive dependence is nonetheless a common feature of oscillators in the nonlinear (far-from-onset) regime where the oscillation can be separated into fast and slow phases. The increase of the period with the period-lengthening reaction rates occurs when the amplitude of the slow phase in the oscillation increases with these rates while the progression-speed in the slow phase is controlled by other rates of the system. The positive dependence of the period on the period-lengthening rates balances its inverse dependence on other kinetic rates in the system, which gives rise to robust TC in a wide range of parameters. We demonstrate the existence of such period-lengthening reactions and their relevance for TC in all four model systems we considered. Theoretical results for a model of the Kai system are supported by experimental data. A study of the energy dissipation also shows that better TC performance requires higher energy consumption. Our study unveils a general mechanism by which a biochemical oscillator achieves TC by operating at regimes far from the onset where period-lengthening reactions exist. |
1407.6595 | Aliakbar Jafarpour | Aliakbar Jafarpour | On X-ray scattering model for single particles, Part I: The legacy of
protein crystallography | Some of the reviews and discussions were moved to new appendices | null | null | null | q-bio.BM cond-mat.other | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging coherent X-ray scattering patterns of single particles have shown
dominant morphological signatures in agreement with predictions of the
scattering model used for conventional protein crystallography. The key
question is if and to what extent these scattering patterns contain volumetric
information, and what model can retrieve it. The scattering model of protein
crystallography is valid for very small crystals or those like crystalized
biomolecules with small coherent subunits. But in the general case, it fails to
model the integrated intensities of diffraction spots, and cannot even find the
size of the crystal. The more rigorous and less employed alternative is a
purely-classical crystal-specific model, which bypasses the fundamental notion
of bulk and hence the non-classical X-ray scattering from bulk. This
contribution is Part 1 out of two reports, in which we seek to clarify the
assumptions of some different regimes and models of X-ray scattering and their
implications for single particle imaging. In this part, first basic concepts
and existing models are briefly reviewed. Then the predictions of the
conventional and the rigorous models for emerging scattering patterns of
protein nanocrystals (intermediate case between conventional crystals and
single particles) are contrasted, and the terminology conflict regarding
"Diffraction Theory" is addressed. With a clearer picture of crystal
scattering, Part 2 will focus on additional concepts, limitations, correction
schemes, and alternative models relevant to single particles. Aside from such
optical details, protein crystallography is an advanced tool of analytical
chemistry and not a self-contained optical imaging technique (despite
significant instrumental role of optical data). As such, its final results can
be neither confirmed nor rejected on mere optical grounds; i.e., no
jurisdiction for optics.
| [
{
"created": "Fri, 4 Jul 2014 13:30:59 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jul 2014 09:31:30 GMT",
"version": "v2"
}
] | 2014-07-28 | [
[
"Jafarpour",
"Aliakbar",
""
]
] | Emerging coherent X-ray scattering patterns of single particles have shown dominant morphological signatures in agreement with predictions of the scattering model used for conventional protein crystallography. The key question is if and to what extent these scattering patterns contain volumetric information, and what model can retrieve it. The scattering model of protein crystallography is valid for very small crystals or those like crystalized biomolecules with small coherent subunits. But in the general case, it fails to model the integrated intensities of diffraction spots, and cannot even find the size of the crystal. The more rigorous and less employed alternative is a purely-classical crystal-specific model, which bypasses the fundamental notion of bulk and hence the non-classical X-ray scattering from bulk. This contribution is Part 1 out of two reports, in which we seek to clarify the assumptions of some different regimes and models of X-ray scattering and their implications for single particle imaging. In this part, first basic concepts and existing models are briefly reviewed. Then the predictions of the conventional and the rigorous models for emerging scattering patterns of protein nanocrystals (intermediate case between conventional crystals and single particles) are contrasted, and the terminology conflict regarding "Diffraction Theory" is addressed. With a clearer picture of crystal scattering, Part 2 will focus on additional concepts, limitations, correction schemes, and alternative models relevant to single particles. Aside from such optical details, protein crystallography is an advanced tool of analytical chemistry and not a self-contained optical imaging technique (despite significant instrumental role of optical data). As such, its final results can be neither confirmed nor rejected on mere optical grounds; i.e., no jurisdiction for optics. |
1403.1127 | Elod Mehes | Elod Mehes and Tamas Vicsek | Collective motion of cells: from experiments to models | 24 pages, 25 figures, 13 reference video links | null | null | null | q-bio.CB cond-mat.soft cond-mat.stat-mech physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Swarming or collective motion of living entities is one of the most common
and spectacular manifestations of living systems having been extensively
studied in recent years. A number of general principles have been established.
The interactions at the level of cells are quite different from those among
individual animals therefore the study of collective motion of cells is likely
to reveal some specific important features which are overviewed in this paper.
In addition to presenting the most appealing results from the quickly growing
related literature we also deliver a critical discussion of the emerging
picture and summarize our present understanding of collective motion at the
cellular level. Collective motion of cells plays an essential role in a number
of experimental and real-life situations. In most cases the coordinated motion
is a helpful aspect of the given phenomenon and results in making a related
process more efficient (e.g., embryogenesis or wound healing), while in the
case of tumor cell invasion it appears to speed up the progression of the
disease. In these mechanisms cells both have to be motile and adhere to one
another, the adherence feature being the most specific to this sort of
collective behavior. One of the central aims of this review is both presenting
the related experimental observations and treating them in the light of a few
basic computational models so as to make an interpretation of the phenomena at
a quantitative level as well.
| [
{
"created": "Wed, 5 Mar 2014 13:45:26 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jun 2014 07:23:43 GMT",
"version": "v2"
}
] | 2014-06-06 | [
[
"Mehes",
"Elod",
""
],
[
"Vicsek",
"Tamas",
""
]
] | Swarming or collective motion of living entities is one of the most common and spectacular manifestations of living systems having been extensively studied in recent years. A number of general principles have been established. The interactions at the level of cells are quite different from those among individual animals therefore the study of collective motion of cells is likely to reveal some specific important features which are overviewed in this paper. In addition to presenting the most appealing results from the quickly growing related literature we also deliver a critical discussion of the emerging picture and summarize our present understanding of collective motion at the cellular level. Collective motion of cells plays an essential role in a number of experimental and real-life situations. In most cases the coordinated motion is a helpful aspect of the given phenomenon and results in making a related process more efficient (e.g., embryogenesis or wound healing), while in the case of tumor cell invasion it appears to speed up the progression of the disease. In these mechanisms cells both have to be motile and adhere to one another, the adherence feature being the most specific to this sort of collective behavior. One of the central aims of this review is both presenting the related experimental observations and treating them in the light of a few basic computational models so as to make an interpretation of the phenomena at a quantitative level as well. |
2205.11274 | Junjie Tang | Junjie Tang, Changhu Wang, Feiyi Xiao and Ruibin Xi | Single-cell gene regulatory network analysis for mixed cell populations
with applications to COVID-19 single cell data | 95 pages,28 figures | null | null | null | q-bio.MN stat.ME | http://creativecommons.org/licenses/by/4.0/ | Gene regulatory network (GRN) refers to the complex network formed by
regulatory interactions between genes in living cells. In this paper, we
consider inferring GRNs in single cells based on single cell RNA sequencing
(scRNA-seq) data. In scRNA-seq, single cells are often profiled from mixed
populations and their cell identities are unknown. A common practice for single
cell GRN analysis is to first cluster the cells and infer GRNs for every
cluster separately. However, this two-step procedure ignores uncertainty in the
clustering step and thus could lead to inaccurate estimation of the networks.
To address this problem, we propose to model scRNA-seq by the mixture
multivariate Poisson log-normal (MPLN) distribution. The precision matrices of
the MPLN are the GRNs of different cell types and can be jointly estimated by
maximizing MPLN's lasso-penalized log-likelihood. We show that the MPLN model
is identifiable and the resulting penalized log-likelihood estimator is
consistent. To avoid the intractable optimization of the MPLN's log-likelihood,
we develop an algorithm called VMPLN based on the variational inference method.
Comprehensive simulation and real scRNA-seq data analyses reveal that VMPLN
performs better than the state-of-the-art single cell GRN methods.
| [
{
"created": "Mon, 23 May 2022 12:46:00 GMT",
"version": "v1"
}
] | 2022-05-24 | [
[
"Tang",
"Junjie",
""
],
[
"Wang",
"Changhu",
""
],
[
"Xiao",
"Feiyi",
""
],
[
"Xi",
"Ruibin",
""
]
] | Gene regulatory network (GRN) refers to the complex network formed by regulatory interactions between genes in living cells. In this paper, we consider inferring GRNs in single cells based on single cell RNA sequencing (scRNA-seq) data. In scRNA-seq, single cells are often profiled from mixed populations and their cell identities are unknown. A common practice for single cell GRN analysis is to first cluster the cells and infer GRNs for every cluster separately. However, this two-step procedure ignores uncertainty in the clustering step and thus could lead to inaccurate estimation of the networks. To address this problem, we propose to model scRNA-seq by the mixture multivariate Poisson log-normal (MPLN) distribution. The precision matrices of the MPLN are the GRNs of different cell types and can be jointly estimated by maximizing MPLN's lasso-penalized log-likelihood. We show that the MPLN model is identifiable and the resulting penalized log-likelihood estimator is consistent. To avoid the intractable optimization of the MPLN's log-likelihood, we develop an algorithm called VMPLN based on the variational inference method. Comprehensive simulation and real scRNA-seq data analyses reveal that VMPLN performs better than the state-of-the-art single cell GRN methods. |
2202.04202 | Paul Bertin | Paul Bertin, Jarrid Rector-Brooks, Deepak Sharma, Thomas Gaudelet,
Andrew Anighoro, Torsten Gross, Francisco Martinez-Pena, Eileen L. Tang,
Suraj M S, Cristian Regep, Jeremy Hayter, Maksym Korablyov, Nicholas
Valiante, Almer van der Sloot, Mike Tyers, Charles Roberts, Michael M.
Bronstein, Luke L. Lairson, Jake P. Taylor-King, and Yoshua Bengio | RECOVER: sequential model optimization platform for combination drug
repurposing identifies novel synergistic compounds in vitro | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | For large libraries of small molecules, exhaustive combinatorial chemical
screens become infeasible to perform when considering a range of disease
models, assay conditions, and dose ranges. Deep learning models have achieved
state of the art results in silico for the prediction of synergy scores.
However, databases of drug combinations are biased towards synergistic agents
and these results do not necessarily generalise out of distribution. We employ
a sequential model optimization search utilising a deep learning model to
quickly discover synergistic drug combinations active against a cancer cell
line, requiring substantially less screening than an exhaustive evaluation. Our
small scale wet lab experiments only account for evaluation of ~5% of the total
search space. After only 3 rounds of ML-guided in vitro experimentation
(including a calibration round), we find that the set of drug pairs queried is
enriched for highly synergistic combinations; two additional rounds of
ML-guided experiments were performed to ensure reproducibility of trends.
Remarkably, we rediscover drug combinations later confirmed to be under study
within clinical trials. Moreover, we find that drug embeddings generated using
only structural information begin to reflect mechanisms of action. Prior in
silico benchmarking suggests we can enrich search queries by a factor of ~5-10x
for highly synergistic drug combinations by using sequential rounds of
evaluation when compared to random selection, or by a factor of >3x when using
a pretrained model selecting all drug combinations at a single time point.
| [
{
"created": "Mon, 7 Feb 2022 02:54:29 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Sep 2022 21:19:05 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Mar 2023 21:58:51 GMT",
"version": "v3"
}
] | 2023-03-06 | [
[
"Bertin",
"Paul",
""
],
[
"Rector-Brooks",
"Jarrid",
""
],
[
"Sharma",
"Deepak",
""
],
[
"Gaudelet",
"Thomas",
""
],
[
"Anighoro",
"Andrew",
""
],
[
"Gross",
"Torsten",
""
],
[
"Martinez-Pena",
"Francisco",
""
],
[
"Tang",
"Eileen L.",
""
],
[
"S",
"Suraj M",
""
],
[
"Regep",
"Cristian",
""
],
[
"Hayter",
"Jeremy",
""
],
[
"Korablyov",
"Maksym",
""
],
[
"Valiante",
"Nicholas",
""
],
[
"van der Sloot",
"Almer",
""
],
[
"Tyers",
"Mike",
""
],
[
"Roberts",
"Charles",
""
],
[
"Bronstein",
"Michael M.",
""
],
[
"Lairson",
"Luke L.",
""
],
[
"Taylor-King",
"Jake P.",
""
],
[
"Bengio",
"Yoshua",
""
]
] | For large libraries of small molecules, exhaustive combinatorial chemical screens become infeasible to perform when considering a range of disease models, assay conditions, and dose ranges. Deep learning models have achieved state of the art results in silico for the prediction of synergy scores. However, databases of drug combinations are biased towards synergistic agents and these results do not necessarily generalise out of distribution. We employ a sequential model optimization search utilising a deep learning model to quickly discover synergistic drug combinations active against a cancer cell line, requiring substantially less screening than an exhaustive evaluation. Our small scale wet lab experiments only account for evaluation of ~5% of the total search space. After only 3 rounds of ML-guided in vitro experimentation (including a calibration round), we find that the set of drug pairs queried is enriched for highly synergistic combinations; two additional rounds of ML-guided experiments were performed to ensure reproducibility of trends. Remarkably, we rediscover drug combinations later confirmed to be under study within clinical trials. Moreover, we find that drug embeddings generated using only structural information begin to reflect mechanisms of action. Prior in silico benchmarking suggests we can enrich search queries by a factor of ~5-10x for highly synergistic drug combinations by using sequential rounds of evaluation when compared to random selection, or by a factor of >3x when using a pretrained model selecting all drug combinations at a single time point. |
2209.13297 | Jeremi K. Ochab | Jeremi K. Ochab, Marcin W\k{a}torek, Anna Ceglarek, Magdalena
F\k{a}frowicz, Koryna Lewandowska, Tadeusz Marek, Barbara Sikora-Wachowicz,
Pawe{\l} O\'swi\k{e}cimka | Task-dependent fractal patterns of information processing in working
memory | Accepted to Scientific Reports on 27 Sept. 2022 | Sci Rep 12, 17866 (2022) | 10.1038/s41598-022-21375-1 | null | q-bio.NC cond-mat.dis-nn physics.soc-ph | http://creativecommons.org/licenses/by-sa/4.0/ | We applied detrended fluctuation analysis, power spectral density, and
eigenanalysis of detrended cross-correlations to investigate fMRI data
representing a diurnal variation of working memory in four visual tasks: two
verbal and two nonverbal. We show that the degree of fractal scaling is
regionally dependent on engagement in cognitive tasks. A particularly apparent
difference was found between memorisation in verbal and nonverbal tasks.
Furthermore, the detrended cross-correlations between brain areas were
predominantly indicative of differences between resting state and other tasks,
between memorisation and retrieval, and between verbal and nonverbal tasks. The
fractal and spectral analyses presented in our study are consistent with
previous research related to visuospatial and verbal information processing,
working memory (encoding and retrieval), and executive functions, but they were
found to be more sensitive than Pearson correlations and showed the potential
to obtain other subtler results. We conclude that regionally dependent
cognitive task engagement can be distinguished based on the fractal
characteristics of BOLD signals and their detrended cross-correlation
structure.
| [
{
"created": "Tue, 27 Sep 2022 10:47:21 GMT",
"version": "v1"
}
] | 2022-10-27 | [
[
"Ochab",
"Jeremi K.",
""
],
[
"Wątorek",
"Marcin",
""
],
[
"Ceglarek",
"Anna",
""
],
[
"Fąfrowicz",
"Magdalena",
""
],
[
"Lewandowska",
"Koryna",
""
],
[
"Marek",
"Tadeusz",
""
],
[
"Sikora-Wachowicz",
"Barbara",
""
],
[
"Oświęcimka",
"Paweł",
""
]
] | We applied detrended fluctuation analysis, power spectral density, and eigenanalysis of detrended cross-correlations to investigate fMRI data representing a diurnal variation of working memory in four visual tasks: two verbal and two nonverbal. We show that the degree of fractal scaling is regionally dependent on engagement in cognitive tasks. A particularly apparent difference was found between memorisation in verbal and nonverbal tasks. Furthermore, the detrended cross-correlations between brain areas were predominantly indicative of differences between resting state and other tasks, between memorisation and retrieval, and between verbal and nonverbal tasks. The fractal and spectral analyses presented in our study are consistent with previous research related to visuospatial and verbal information processing, working memory (encoding and retrieval), and executive functions, but they were found to be more sensitive than Pearson correlations and showed the potential to obtain other subtler results. We conclude that regionally dependent cognitive task engagement can be distinguished based on the fractal characteristics of BOLD signals and their detrended cross-correlation structure. |
2009.00434 | Tiberiu Harko | Tiberiu Harko, Man Kwong Mak | Series solution of the Susceptible-Infected-Recovered (SIR) epidemic
model with vital dynamics via the Adomian and Laplace-Adomian Decomposition
Methods | 12 pages, 3 figures. arXiv admin note: text overlap with
arXiv:2006.07170 | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Susceptible-Infected-Recovered (SIR) epidemic model as well as its
generalizations are extensively used for the study of the spread of infectious
diseases, and for the understanding of the dynamical evolution of epidemics.
From SIR type models only the model without vital dynamics has an exact
analytic solution, which can be obtained in an exact parametric form. The SIR
model with vital dynamics, the simplest extension of the basic SIR model, does
not admit a closed form representation of the solution. However, in order to
perform the comparison with the epidemiological data accurate representations
of the time evolution of the SIR model with vital dynamics would be very
useful. In the present paper, we obtain first the basic evolution equation of
the SIR model with vital dynamics, which is given by a strongly nonlinear
second order differential equation. Then we obtain a series representation of
the solution of the model, by using the Adomian and Laplace-Adomian
Decomposition Methods to solve the dynamical evolution equation of the model.
The solutions are expressed in the form of infinite series. The series
representations of the time evolution of the SIR model with vital dynamics are
compared with the exact numerical solutions of the model, and we find that, at
least for a specific range of parameters, there is a good agreement between the
Adomian and Laplace-Adomian semianalytical solutions, containing only a small
number of terms, and the numerical results.
| [
{
"created": "Fri, 28 Aug 2020 18:19:55 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Harko",
"Tiberiu",
""
],
[
"Mak",
"Man Kwong",
""
]
] | The Susceptible-Infected-Recovered (SIR) epidemic model as well as its generalizations are extensively used for the study of the spread of infectious diseases, and for the understanding of the dynamical evolution of epidemics. From SIR type models only the model without vital dynamics has an exact analytic solution, which can be obtained in an exact parametric form. The SIR model with vital dynamics, the simplest extension of the basic SIR model, does not admit a closed form representation of the solution. However, in order to perform the comparison with the epidemiological data accurate representations of the time evolution of the SIR model with vital dynamics would be very useful. In the present paper, we obtain first the basic evolution equation of the SIR model with vital dynamics, which is given by a strongly nonlinear second order differential equation. Then we obtain a series representation of the solution of the model, by using the Adomian and Laplace-Adomian Decomposition Methods to solve the dynamical evolution equation of the model. The solutions are expressed in the form of infinite series. The series representations of the time evolution of the SIR model with vital dynamics are compared with the exact numerical solutions of the model, and we find that, at least for a specific range of parameters, there is a good agreement between the Adomian and Laplace-Adomian semianalytical solutions, containing only a small number of terms, and the numerical results. |
2309.02788 | Malgorzata O'Reilly | Albert Ch. Soewongsono, Barbara R. Holland, Malgorzata M. O'Reilly | Stochastic niche-based models for the evolution of species | The Eleventh International Conference on Matrix-Analytic Methods in
Stochastic Models (MAM11), 2022, Seoul, Republic of Korea | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There have been many studies to examine whether one trait is correlated with
another trait across a group of present-day species (for example, do species
with larger brains tend to have longer gestation times?). Since the
introduction of the phylogenetic comparative method, some authors have argued
that it is
necessary to have a biologically realistic model to generate evolutionary trees
that incorporates information about the ecological niche occupied by species.
Price presented a simple model along these lines in 1997. He defined a
two-dimensional niche space formed by two continuous-valued traits, in which
new niches arise with trait values drawn from a bivariate normal distribution.
When a new niche arises, it is occupied by a descendant species of whichever
current species is closest in ecological niche space. In sequence, more species
are then evolved from already-existing species to which they are ecologically
closest.
Here we explore ways of extending Price's adaptive radiation model. One
extension is to increase the dimensionality of the niche space by considering
more than two continuous traits. A second extension is to allow both extinction
of species (which may leave unoccupied niches) and removal of niches (which
causes species occupying them to go extinct). To model this problem, we
consider a continuous-time stochastic process which implicitly defines a
phylogeny. To explore if trees generated under such a model (or under different
parametrizations of the model) are realistic we can compute a variety of
summary statistics that can be compared to those of empirically observed
phylogenies. For example, there are existing statistics that aim to measure:
tree balance, the relative rate of diversification, and phylogenetic signal of
traits.
| [
{
"created": "Wed, 6 Sep 2023 07:08:04 GMT",
"version": "v1"
}
] | 2023-09-07 | [
[
"Soewongsono",
"Albert Ch.",
""
],
[
"Holland",
"Barbara R.",
""
],
[
"O'Reilly",
"Malgorzata M.",
""
]
] | There have been many studies to examine whether one trait is correlated with another trait across a group of present-day species (for example, do species with larger brains tend to have longer gestation times?). Since the introduction of the phylogenetic comparative method, some authors have argued that it is necessary to have a biologically realistic model to generate evolutionary trees that incorporates information about the ecological niche occupied by species. Price presented a simple model along these lines in 1997. He defined a two-dimensional niche space formed by two continuous-valued traits, in which new niches arise with trait values drawn from a bivariate normal distribution. When a new niche arises, it is occupied by a descendant species of whichever current species is closest in ecological niche space. In sequence, more species are then evolved from already-existing species to which they are ecologically closest. Here we explore ways of extending Price's adaptive radiation model. One extension is to increase the dimensionality of the niche space by considering more than two continuous traits. A second extension is to allow both extinction of species (which may leave unoccupied niches) and removal of niches (which causes species occupying them to go extinct). To model this problem, we consider a continuous-time stochastic process which implicitly defines a phylogeny. To explore if trees generated under such a model (or under different parametrizations of the model) are realistic we can compute a variety of summary statistics that can be compared to those of empirically observed phylogenies. For example, there are existing statistics that aim to measure: tree balance, the relative rate of diversification, and phylogenetic signal of traits. |
2107.08259 | Ali Gharouni | Ali Gharouni, F.M. Abdelmalek, David J. D. Earn, Jonathan Dushoff,
Benjamin M. Bolker | Testing and Isolation Efficacy: Insights from a Simple Epidemic Model | null | null | null | null | q-bio.PE math.DS physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Testing individuals for pathogens can affect the spread of epidemics.
Understanding how individual-level processes of sampling and reporting test
results can affect community- or population-level spread is a dynamical
modeling question. The effect of testing processes on epidemic dynamics depends
on factors underlying implementation, particularly testing intensity and on
whom testing is focused. Here, we use a simple model to explore how the
individual-level effects of testing might directly impact population-level
spread. Our model development was motivated by the COVID-19 epidemic, but has
generic epidemiological and testing structures. To the classic SIR framework we
have added a per capita testing intensity, and compartment-specific testing
weights, which can be adjusted to reflect different testing emphases --
surveillance, diagnosis, or control. We derive an analytic expression for the
relative reduction in the basic reproductive number due to testing,
test-reporting and related isolation behaviours. Intensive testing and fast
test reporting are expected to be beneficial at the community level because
they can provide a rapid assessment of the situation, identify hot spots, and
may enable rapid contact-tracing. Direct effects of fast testing at the
individual level are less clear, and may depend on how individuals' behaviour
is affected by testing information. Our simple model shows that under some
circumstances both increased testing intensity and faster test reporting can
reduce the effectiveness of control, and allows us to explore the conditions
under which this occurs. Conversely, we find that focusing testing on infected
individuals always acts to increase effectiveness of control.
| [
{
"created": "Sat, 17 Jul 2021 15:35:14 GMT",
"version": "v1"
}
] | 2021-07-20 | [
[
"Gharouni",
"Ali",
""
],
[
"Abdelmalek",
"F. M.",
""
],
[
"Earn",
"David J. D.",
""
],
[
"Dushoff",
"Jonathan",
""
],
[
"Bolker",
"Benjamin M.",
""
]
] | Testing individuals for pathogens can affect the spread of epidemics. Understanding how individual-level processes of sampling and reporting test results can affect community- or population-level spread is a dynamical modeling question. The effect of testing processes on epidemic dynamics depends on factors underlying implementation, particularly testing intensity and on whom testing is focused. Here, we use a simple model to explore how the individual-level effects of testing might directly impact population-level spread. Our model development was motivated by the COVID-19 epidemic, but has generic epidemiological and testing structures. To the classic SIR framework we have added a per capita testing intensity, and compartment-specific testing weights, which can be adjusted to reflect different testing emphases -- surveillance, diagnosis, or control. We derive an analytic expression for the relative reduction in the basic reproductive number due to testing, test-reporting and related isolation behaviours. Intensive testing and fast test reporting are expected to be beneficial at the community level because they can provide a rapid assessment of the situation, identify hot spots, and may enable rapid contact-tracing. Direct effects of fast testing at the individual level are less clear, and may depend on how individuals' behaviour is affected by testing information. Our simple model shows that under some circumstances both increased testing intensity and faster test reporting can reduce the effectiveness of control, and allows us to explore the conditions under which this occurs. Conversely, we find that focusing testing on infected individuals always acts to increase effectiveness of control. |
1710.05292 | Farnaz Zamani Esfahlani | Farnaz Zamani Esfahlani and Hiroki Sayama | A Percolation-based Thresholding Method with Applications in Functional
Connectivity Analysis | 12 pages, 6 figures; to appear in the Proceedings of CompleNet 2018,
in press | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent advances in developing more effective thresholding methods
to convert weighted networks to unweighted counterparts, there are still
several limitations that need to be addressed. One such limitation is the
inability of most existing thresholding methods to take into account the
topological properties of the original weighted networks during the
binarization process, which could ultimately result in unweighted networks that
have drastically different topological properties than the original weighted
networks. In this study, we propose a new thresholding method based on the
percolation theory to address this limitation. The performance of the proposed
method was validated and compared to the existing thresholding methods using
simulated and real-world functional connectivity networks in the brain.
Comparison of macroscopic and microscopic properties of the resulting
unweighted networks to the original weighted networks suggests that the
proposed
thresholding method can successfully maintain the topological properties of the
original weighted networks.
| [
{
"created": "Sun, 15 Oct 2017 07:15:39 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Dec 2017 05:34:42 GMT",
"version": "v2"
}
] | 2017-12-04 | [
[
"Esfahlani",
"Farnaz Zamani",
""
],
[
"Sayama",
"Hiroki",
""
]
] | Despite the recent advances in developing more effective thresholding methods to convert weighted networks to unweighted counterparts, there are still several limitations that need to be addressed. One such limitation is the inability of most existing thresholding methods to take into account the topological properties of the original weighted networks during the binarization process, which could ultimately result in unweighted networks that have drastically different topological properties than the original weighted networks. In this study, we propose a new thresholding method based on the percolation theory to address this limitation. The performance of the proposed method was validated and compared to the existing thresholding methods using simulated and real-world functional connectivity networks in the brain. Comparison of macroscopic and microscopic properties of the resulting unweighted networks to the original weighted networks suggests that the proposed thresholding method can successfully maintain the topological properties of the original weighted networks. |
2002.09937 | Piero Procacci | Marina Macchiagodena, Marco Pagliai, Piero Procacci | Inhibition of the Main Protease 3CL-pro of the Coronavirus Disease 19
via Structure-Based Ligand Design and Molecular Modeling | main paper: 14 pages, 5 figures, 1 Table Supporting Information: 18
pages, 1 table, 7 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have applied a computational strategy, based on the synergy of virtual
screening, docking and molecular dynamics techniques, aimed at identifying
possible lead compounds for the non-covalent inhibition of the main protease
3CL-pro of the SARS-Cov2 Coronavirus. Based on the recently resolved 6LU7 PDB
structure, ligands were generated using a multimodal structure-based design and
then optimally docked to the 6LU7 monomer. Docking calculations show that
ligand-binding is strikingly similar in SARS-CoV and SARS-CoV2 main proteases,
irrespective of the protonation state of the catalytic CYS-HIS dyad. The most
potent docked ligands are found to share a common binding pattern with aromatic
moieties connected by rotatable bonds in a pseudo-linear arrangement. Molecular
dynamics calculations fully confirm the stability in the 3CL-pro binding pocket
of the most potent binder identified by docking, namely a
chlorophenyl-pyridyl-carboxamide derivative.
| [
{
"created": "Sun, 23 Feb 2020 16:47:00 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Mar 2020 22:13:58 GMT",
"version": "v2"
}
] | 2020-03-04 | [
[
"Macchiagodena",
"Marina",
""
],
[
"Pagliai",
"Marco",
""
],
[
"Procacci",
"Piero",
""
]
] | We have applied a computational strategy, based on the synergy of virtual screening, docking and molecular dynamics techniques, aimed at identifying possible lead compounds for the non-covalent inhibition of the main protease 3CL-pro of the SARS-Cov2 Coronavirus. Based on the recently resolved 6LU7 PDB structure, ligands were generated using a multimodal structure-based design and then optimally docked to the 6LU7 monomer. Docking calculations show that ligand-binding is strikingly similar in SARS-CoV and SARS-CoV2 main proteases, irrespective of the protonation state of the catalytic CYS-HIS dyad. The most potent docked ligands are found to share a common binding pattern with aromatic moieties connected by rotatable bonds in a pseudo-linear arrangement. Molecular dynamics calculations fully confirm the stability in the 3CL-pro binding pocket of the most potent binder identified by docking, namely a chlorophenyl-pyridyl-carboxamide derivative. |
1908.09647 | Anindya Ghose Choudhury | Sudip Garai, A Ghose-Choudhury and Partha Guha | On a geometric description of time dependent singular Lagrangians with
applications to biological systems | This is an updated and expanded version of an earlier draft
arXiv:1908.09647v1[q-bio.PE] with more focus on geometric aspects | null | null | null | q-bio.PE nlin.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider certain analytical features of a stochastic model that can
explain among other things competition among species and simultaneous predation
on the competing species from a geometric perspective which allows for a
systematic description of models admitting singular Lagrangians. The model
equations are shown to admit a Jacobi Last Multiplier which in turn allows for
the construction of a Lagrangian. The Lagrangian is of singular nature so that
construction of the Hamiltonian via a Legendre transformation is not possible.
A Hamiltonian description of the model therefore requires the introduction of
Dirac brackets. Explicit results are presented for the "Kill the winner" model
and its reductions.
| [
{
"created": "Tue, 20 Aug 2019 08:00:05 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 06:29:09 GMT",
"version": "v2"
}
] | 2021-01-28 | [
[
"Garai",
"Sudip",
""
],
[
"Ghose-Choudhury",
"A",
""
],
[
"Guha",
"Partha",
""
]
] | We consider certain analytical features of a stochastic model that can explain among other things competition among species and simultaneous predation on the competing species from a geometric perspective which allows for a systematic description of models admitting singular Lagrangians. The model equations are shown to admit a Jacobi Last Multiplier which in turn allows for the construction of a Lagrangian. The Lagrangian is of singular nature so that construction of the Hamiltonian via a Legendre transformation is not possible. A Hamiltonian description of the model therefore requires the introduction of Dirac brackets. Explicit results are presented for the "Kill the winner" model and its reductions. |
q-bio/0702024 | Julian Felix | Silvia Solis Ortiz, Rafael G. Campos, Julian Felix and Octavio Obregon | Coincident Frequencies and Relative Phases Among Female-Brain Signals
and Progesterone-Estrogen levels | Research results currently in publication | null | null | null | q-bio.QM | null | Fourier transform has become a basic tool for analyzing biological signals
1,2,3. Mostly a fast Fourier transform is computed for a finite sequence of
data sample 4. This is the standard way apparatuses and modern computerized
technology provide information, according with their frequency range, of the
well known brain signals Delta, Theta, Alpha 1, Alpha 2, Beta 1 and Beta 2
furnishing experts with electroencephalographic (EEG) profile of clinical use
obtained from these short periods 5,6.
For long periods, an analogous novel procedure is established as follows:
Assigning certain numerical value, i.e., the absolute power, to each brain
signal at certain sampling times, generates data that can be interpolated and
extrapolated through a long period, yielding an absolute power function of time
for each signal 7. A further Fourier transform is then performed 8,9, to analyze
these new functions, finding typical frequencies and their corresponding
periods for each one of these signals and, also, relative phases for coincident
periods between two or more signals. Our procedure of analysis presented here
can be applied, in principle, to any biological signal of interest.
| [
{
"created": "Fri, 9 Feb 2007 21:45:16 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Mar 2008 22:59:51 GMT",
"version": "v2"
}
] | 2008-04-01 | [
[
"Ortiz",
"Silvia Solis",
""
],
[
"Campos",
"Rafael G.",
""
],
[
"Felix",
"Julian",
""
],
[
"Obregon",
"Octavio",
""
]
] | Fourier transform has become a basic tool for analyzing biological signals 1,2,3. Mostly a fast Fourier transform is computed for a finite sequence of data sample 4. This is the standard way apparatuses and modern computerized technology provide information, according with their frequency range, of the well known brain signals Delta, Theta, Alpha 1, Alpha 2, Beta 1 and Beta 2 furnishing experts with electroencephalographic (EEG) profile of clinical use obtained from these short periods 5,6. For long periods, an analogous novel procedure is established as follows: Assigning certain numerical value, i.e., the absolute power, to each brain signal at certain sampling times, generates data that can be interpolated and extrapolated through a long period, yielding an absolute power function of time for each signal 7. A further Fourier transform is then performed 8,9, to analyze these new functions, finding typical frequencies and their corresponding periods for each one of these signals and, also, relative phases for coincident periods between two or more signals. Our procedure of analysis presented here can be applied, in principle, to any biological signal of interest. |
1111.0360 | Brian Ginn | Brian R. Ginn | The Effect of Protein Length on the Ploidy Level and Environmental
Tolerance of Organisms | 44 pages, 6 figures, 1 table; added appendix on truncation selection,
heterosis, and alternation of generations; added appendix on chemical
affinity; added citations; reworded portions of main text to be more precise | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper summarizes previous work linking protein aggregation to the
heterozygosity of organisms. It also cites the literature showing a correlation
between species' morphological complexity and the lengths of their proteins.
These two findings are combined to form a theory that may potentially explain
the ploidy levels of organisms. Organisms can employ heterozygosity to inhibit
protein aggregation. Hence, complex organisms tend to be diploid because they
tend to synthesize long, aggregation-prone proteins. On the other hand, simple
organisms tend to be haploid because they synthesize short proteins that are
less prone to aggregation. The theory may also explain ecological trends
associated with organisms of different ploidy level. Two mathematical models
are also developed that may explain: 1) how protein aggregation results in
truncation selection that maintains numerous polymorphisms in natural
populations, and 2) the relationship between protein turnover, metabolic
efficiency, and heterosis.
| [
{
"created": "Wed, 2 Nov 2011 01:52:31 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Aug 2013 07:04:40 GMT",
"version": "v2"
}
] | 2013-08-13 | [
[
"Ginn",
"Brian R.",
""
]
] | This paper summarizes previous work linking protein aggregation to the heterozygosity of organisms. It also cites the literature showing a correlation between species' morphological complexity and the lengths of their proteins. These two findings are combined to form a theory that may potentially explain the ploidy levels of organisms. Organisms can employ heterozygosity to inhibit protein aggregation. Hence, complex organisms tend to be diploid because they tend to synthesize long, aggregation-prone proteins. On the other hand, simple organisms tend to be haploid because they synthesize short proteins that are less prone to aggregation. The theory may also explain ecological trends associated with organisms of different ploidy level. Two mathematical models are also developed that may explain: 1) how protein aggregation results in truncation selection that maintains numerous polymorphisms in natural populations, and 2) the relationship between protein turnover, metabolic efficiency, and heterosis. |
1206.0766 | Adilson Enio Motter | Joo Sang Lee, Takashi Nishikawa, Adilson E. Motter | Why Optimal States Recruit Fewer Reactions in Metabolic Networks | Contribution to the special issue in honor of John Guckenheimer on
the occasion of his 65th birthday | Discret. Contin. Dyn. Syst. A. 32(8), 2937 (2012) | 10.3934/dcds.2012.32.2937 | null | q-bio.MN cond-mat.dis-nn nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The metabolic network of a living cell involves several hundreds or thousands
of interconnected biochemical reactions. Previous research has shown that under
realistic conditions only a fraction of these reactions is concurrently active
in any given cell. This is partially determined by nutrient availability, but
is also strongly dependent on the metabolic function and network structure.
Here, we establish rigorous bounds showing that the fraction of active
reactions is smaller (rather than larger) in metabolic networks evolved or
engineered to optimize a specific metabolic task, and we show that this is
largely determined by the presence of thermodynamically irreversible reactions
in the network. We also show that the inactivation of a certain number of
reactions determined by irreversibility can generate a cascade of secondary
reaction inactivations that propagates through the network. The mathematical
results are complemented with numerical simulations of the metabolic networks
of the bacterium Escherichia coli and of human cells, which show,
counterintuitively, that even the maximization of the total reaction flux in
the network leads to a reduced number of active reactions.
| [
{
"created": "Mon, 4 Jun 2012 21:04:55 GMT",
"version": "v1"
}
] | 2012-06-06 | [
[
"Lee",
"Joo Sang",
""
],
[
"Nishikawa",
"Takashi",
""
],
[
"Motter",
"Adilson E.",
""
]
] | The metabolic network of a living cell involves several hundreds or thousands of interconnected biochemical reactions. Previous research has shown that under realistic conditions only a fraction of these reactions is concurrently active in any given cell. This is partially determined by nutrient availability, but is also strongly dependent on the metabolic function and network structure. Here, we establish rigorous bounds showing that the fraction of active reactions is smaller (rather than larger) in metabolic networks evolved or engineered to optimize a specific metabolic task, and we show that this is largely determined by the presence of thermodynamically irreversible reactions in the network. We also show that the inactivation of a certain number of reactions determined by irreversibility can generate a cascade of secondary reaction inactivations that propagates through the network. The mathematical results are complemented with numerical simulations of the metabolic networks of the bacterium Escherichia coli and of human cells, which show, counterintuitively, that even the maximization of the total reaction flux in the network leads to a reduced number of active reactions. |
1510.02323 | Arianna Bianchi | Arianna Bianchi, Konstantinos Syrigos, Georgios Lolas | Tumor-induced neoneurogenesis and perineural tumor growth: a
mathematical approach | 37 pages, 9 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Primary tumors infrequently lead to demise of cancer patients; instead,
mortality and a significant degree of morbidity result from the growth of
secondary tumors in distant organs (metastasis). It is well-known that
malignant tumors induce the formation of a lymphatic and a blood vascular
network around themselves. A similar but far less studied process occurs in
relation to the nervous system and is referred to as \emph{neoneurogenesis}; in
fact, recent studies have demonstrated that tumors initiate their own
innervation. However, the relationship between tumor progression and the
nervous system is still poorly understood. This process is most likely
regulated by a multitude of factors in the tumor-nerve microenvironment and it
is therefore important to study the interactions between the nervous system and
tumor cells through mathematical/computational modelling: this may reveal the
most significant factors of the plethora of interacting elements regulating
neoneurogenesis. The present work is a first attempt to model the
neurobiological aspect of cancer development through a (simple) system of
differential equations. The model confirms the experimental observations that a
tumor is able to promote nerve formation/elongation around itself, and that
high levels of nerve growth factor (NGF) and axon guidance molecules (AGMs) are
recorded in the presence of a tumor. Our results also reflect the observation
that high stress levels (represented by higher norepinephrine release by
sympathetic nerves) contribute to tumor development and spread, indicating a
mutually beneficial relationship between tumor cells and neurons. The model
predictions suggest novel therapeutic strategies, aimed at blocking the stress
effects on tumor growth and dissemination.
| [
{
"created": "Mon, 13 Jul 2015 08:57:24 GMT",
"version": "v1"
}
] | 2015-10-09 | [
[
"Bianchi",
"Arianna",
""
],
[
"Syrigos",
"Konstantinos",
""
],
[
"Lolas",
"Georgios",
""
]
] | Primary tumors infrequently lead to demise of cancer patients; instead, mortality and a significant degree of morbidity result from the growth of secondary tumors in distant organs (metastasis). It is well-known that malignant tumors induce the formation of a lymphatic and a blood vascular network around themselves. A similar but far less studied process occurs in relation to the nervous system and is referred to as \emph{neoneurogenesis}; in fact, recent studies have demonstrated that tumors initiate their own innervation. However, the relationship between tumor progression and the nervous system is still poorly understood. This process is most likely regulated by a multitude of factors in the tumor-nerve microenvironment and it is therefore important to study the interactions between the nervous system and tumor cells through mathematical/computational modelling: this may reveal the most significant factors of the plethora of interacting elements regulating neoneurogenesis. The present work is a first attempt to model the neurobiological aspect of cancer development through a (simple) system of differential equations. The model confirms the experimental observations that a tumor is able to promote nerve formation/elongation around itself, and that high levels of nerve growth factor (NGF) and axon guidance molecules (AGMs) are recorded in the presence of a tumor. Our results also reflect the observation that high stress levels (represented by higher norepinephrine release by sympathetic nerves) contribute to tumor development and spread, indicating a mutually beneficial relationship between tumor cells and neurons. The model predictions suggest novel therapeutic strategies, aimed at blocking the stress effects on tumor growth and dissemination. |
1408.3240 | Olivier Rivoire | Mathieu Hemery and Olivier Rivoire | Evolution of sparsity and modularity in a model of protein allostery | null | null | 10.1103/PhysRevE.91.042704 | null | q-bio.PE cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sequence of a protein is not only constrained by its physical and
biochemical properties under current selection, but also by features of its
past evolutionary history. Understanding the extent and the form that these
evolutionary constraints may take is important to interpret the information in
protein sequences. To study this problem, we introduce a simple but physical
model of protein evolution where selection targets allostery, the functional
coupling of distal sites on protein surfaces. This model shows how the
geometrical organization of couplings between amino acids within a protein
structure can depend crucially on its evolutionary history. In particular, two
scenarios are found to generate a spatial concentration of functional
constraints: high mutation rates and fluctuating selective pressures. This
second scenario offers a plausible explanation for the high tolerance of
natural proteins to mutations and for the spatial organization of their least
tolerant amino acids, as revealed by sequence analyses and mutagenesis
experiments. It also implies a faculty to adapt to new selective pressures that
is consistent with observations. Besides, the model illustrates how several
independent functional modules may emerge within a same protein structure,
depending on the nature of past environmental fluctuations. Our model thus
relates the evolutionary history and evolutionary potential of proteins to the
geometry of their functional constraints, with implications for decoding and
engineering protein sequences.
| [
{
"created": "Thu, 14 Aug 2014 10:17:14 GMT",
"version": "v1"
}
] | 2015-06-22 | [
[
"Hemery",
"Mathieu",
""
],
[
"Rivoire",
"Olivier",
""
]
] | The sequence of a protein is not only constrained by its physical and biochemical properties under current selection, but also by features of its past evolutionary history. Understanding the extent and the form that these evolutionary constraints may take is important to interpret the information in protein sequences. To study this problem, we introduce a simple but physical model of protein evolution where selection targets allostery, the functional coupling of distal sites on protein surfaces. This model shows how the geometrical organization of couplings between amino acids within a protein structure can depend crucially on its evolutionary history. In particular, two scenarios are found to generate a spatial concentration of functional constraints: high mutation rates and fluctuating selective pressures. This second scenario offers a plausible explanation for the high tolerance of natural proteins to mutations and for the spatial organization of their least tolerant amino acids, as revealed by sequence analyses and mutagenesis experiments. It also implies a faculty to adapt to new selective pressures that is consistent with observations. Besides, the model illustrates how several independent functional modules may emerge within a same protein structure, depending on the nature of past environmental fluctuations. Our model thus relates the evolutionary history and evolutionary potential of proteins to the geometry of their functional constraints, with implications for decoding and engineering protein sequences. |