Dataset schema (field: type, observed range):

id: string, length 9–13
submitter: string, length 4–48
authors: string, length 4–9.62k
title: string, length 4–343
comments: string, length 2–480
journal-ref: string, length 9–309
doi: string, length 12–138
report-no: string, 277 distinct values
categories: string, length 8–87
license: string, 9 distinct values
orig_abstract: string, length 27–3.76k
versions: list, length 1–15
update_date: string, length 10–10
authors_parsed: list, length 1–147
abstract: string, length 24–3.75k
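Since the schema above is all structure and no usage, here is a minimal sketch of loading and inspecting one record with the Hugging Face `datasets` library. The dataset identifier is a placeholder (the actual Hub path is not stated in this dump), but the field names and nested layouts match the rows shown below.

```python
# Minimal sketch: load the dump and poke at one record.
# "user/arxiv-qbio-metadata" is a hypothetical identifier; substitute the
# real Hub path, or point load_dataset at a local JSON/parquet file instead.
from datasets import load_dataset

ds = load_dataset("user/arxiv-qbio-metadata", split="train")

record = ds[0]
print(record["id"], "-", record["title"])
print("categories:", record["categories"].split())  # space-separated arXiv categories

# "versions" is a list of {"created", "version"} dicts,
# "authors_parsed" a list of [last, first, suffix] triples.
for v in record["versions"]:
    print(v["version"], "created", v["created"])
for last, first, suffix in record["authors_parsed"]:
    print(f"{first} {last} {suffix}".strip())
```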
2402.06101
Samiha Rouf
Samiha Rouf, Casey Moore, Debabrata Saha, Dan Nguyen, MaryLena Bleile, Robert Timmerman, Hao Peng, Steve Jiang
PULSAR Effect: Revealing Potential Synergies in Combined Radiation Therapy and Immunotherapy via Differential Equations
null
null
null
null
q-bio.QM math.DS
http://creativecommons.org/licenses/by/4.0/
PULSAR (personalized ultrafractionated stereotactic adaptive radiotherapy) is a form of radiotherapy in which a patient is given large doses, or pulses, of radiation a couple of weeks apart rather than small daily doses. The tumor response is then monitored to determine when the subsequent pulse should be given. Pre-clinical trials have shown better tumor response in mice that received immunotherapy along with pulses spaced 10 days apart. However, this was not the case when the pulses were 1 day apart. A synergistic effect between immunotherapy and PULSAR is therefore observed only when the pulses are spaced a certain number of days apart. In our study, we aimed to develop a mathematical model that captures this synergistic effect through a time-dependent weight function that accounts for the spacing between pulses. By determining feasible parameters and applying reasonable conditions, we use our model to simulate murine trials with varying sequencing of pulses. We demonstrate that our model is simple to implement and generates tumor volume data consistent with the pre-clinical trial data. Our model has the potential to aid in the development of clinical trials of PULSAR therapy.
[ { "created": "Thu, 8 Feb 2024 23:33:28 GMT", "version": "v1" } ]
2024-02-12
[ [ "Rouf", "Samiha", "" ], [ "Moore", "Casey", "" ], [ "Saha", "Debabrata", "" ], [ "Nguyen", "Dan", "" ], [ "Bleile", "MaryLena", "" ], [ "Timmerman", "Robert", "" ], [ "Peng", "Hao", "" ], [ "Jiang", "Steve", "" ] ]
PULSAR (personalized ultrafractionated stereotactic adaptive radiotherapy) is a form of radiotherapy in which a patient is given large doses, or pulses, of radiation a couple of weeks apart rather than small daily doses. The tumor response is then monitored to determine when the subsequent pulse should be given. Pre-clinical trials have shown better tumor response in mice that received immunotherapy along with pulses spaced 10 days apart. However, this was not the case when the pulses were 1 day apart. A synergistic effect between immunotherapy and PULSAR is therefore observed only when the pulses are spaced a certain number of days apart. In our study, we aimed to develop a mathematical model that captures this synergistic effect through a time-dependent weight function that accounts for the spacing between pulses. By determining feasible parameters and applying reasonable conditions, we use our model to simulate murine trials with varying sequencing of pulses. We demonstrate that our model is simple to implement and generates tumor volume data consistent with the pre-clinical trial data. Our model has the potential to aid in the development of clinical trials of PULSAR therapy.
q-bio/0603028
Eugene Shakhnovich
Eric J. Deeds and Eugene I. Shakhnovich
A Structure-Centric View of Protein Evolution, Design and Adaptation
null
null
null
null
q-bio.BM q-bio.PE
null
Proteins, by virtue of their central role in most biological processes, represent one of the key subjects of the study of molecular evolution. Inherent to the indispensability of proteins for living cells is the fact that a given protein can adopt a specific three-dimensional shape that is specified solely by the protein's sequence of amino acids. Over the past several decades, structural biologists have demonstrated that the array of structures that proteins may adopt is quite astounding, and this has led to a strong interest in understanding how protein structures change and evolve over time. In this review we consider a large body of recent work that attempts to illuminate this structure-centric picture of protein evolution. Much of this work has focused on the question of how completely new protein structures (i.e. new folds or topologies) are discovered by protein sequences as they evolve. Pursuant to this question of structural innovation has been a desire to describe and understand the observation that certain types of protein structures are far more abundant than others, and what this uneven distribution of proteins implies about the process through which new shapes are discovered. We consider a number of theoretical models that have been successful at explaining this heterogeneity in protein populations and discuss the increasing amount of evidence that indicates that the process of structural evolution involves the divergence of protein sequences and structures from one another.
[ { "created": "Wed, 22 Mar 2006 17:18:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Deeds", "Eric J.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
Proteins, by virtue of their central role in most biological processes, represent one of the key subjects of the study of molecular evolution. Inherent to the indispensability of proteins for living cells is the fact that a given protein can adopt a specific three-dimensional shape that is specified solely by the protein's sequence of amino acids. Over the past several decades, structural biologists have demonstrated that the array of structures that proteins may adopt is quite astounding, and this has led to a strong interest in understanding how protein structures change and evolve over time. In this review we consider a large body of recent work that attempts to illuminate this structure-centric picture of protein evolution. Much of this work has focused on the question of how completely new protein structures (i.e. new folds or topologies) are discovered by protein sequences as they evolve. Pursuant to this question of structural innovation has been a desire to describe and understand the observation that certain types of protein structures are far more abundant than others, and what this uneven distribution of proteins implies about the process through which new shapes are discovered. We consider a number of theoretical models that have been successful at explaining this heterogeneity in protein populations and discuss the increasing amount of evidence that indicates that the process of structural evolution involves the divergence of protein sequences and structures from one another.
1111.0723
Jeremy Sumner
Jeremy Sumner, Peter Jarvis, Jesus Fernandez-Sanchez, Bodie Kaine, Michael Woodhams, and Barbara Holland
Is the general time-reversible model bad for molecular phylogenetics?
10 pages, 2 figures
null
10.1093/sysbio/sys042
null
q-bio.PE math.ST q-bio.QM stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The general time-reversible model (GTR) is presently the most popular model used in phylogenetic studies. However, GTR has an undesirable mathematical property that is potentially of significant concern. It is the purpose of this article to give examples that demonstrate why this deficit may pose a problem for phylogenetic analysis and interpretation.
[ { "created": "Thu, 3 Nov 2011 04:03:41 GMT", "version": "v1" } ]
2012-04-24
[ [ "Sumner", "Jeremy", "" ], [ "Jarvis", "Peter", "" ], [ "Fernandez-Sanchez", "Jesus", "" ], [ "Kaine", "Bodie", "" ], [ "Woodhams", "Michael", "" ], [ "Holland", "Barbara", "" ] ]
The general time-reversible model (GTR) is presently the most popular model used in phylogenetic studies. However, GTR has an undesirable mathematical property that is potentially of significant concern. It is the purpose of this article to give examples that demonstrate why this deficit may pose a problem for phylogenetic analysis and interpretation.
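As background for the abstract above (which names GTR but not its construction), the sketch below assembles a small GTR rate matrix and checks the detailed-balance property that makes it time-reversible. The exchangeabilities and base frequencies are illustrative values, and the specific deficit the paper demonstrates is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative GTR ingredients (not from the paper): stationary base
# frequencies pi and symmetric exchangeabilities s.
pi = np.array([0.3, 0.2, 0.2, 0.3])           # A, C, G, T
s = np.array([[0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 2.0],
              [2.0, 1.0, 0.0, 1.0],
              [1.0, 2.0, 1.0, 0.0]])

Q = s * pi                                     # off-diagonal rates q_ij = s_ij * pi_j
np.fill_diagonal(Q, -Q.sum(axis=1))            # rows sum to zero

# Time-reversibility is detailed balance: pi_i * q_ij == pi_j * q_ji.
flux = pi[:, None] * Q
assert np.allclose(flux, flux.T)

# Substitution probabilities along a branch of length t = 0.1.
P = expm(0.1 * Q)
print(P.sum(axis=1))                           # each row is a distribution (all ones)
```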
q-bio/0601043
Changbong Hyeon
Changbong Hyeon and D. Thirumalai
Forced-unfolding and force-quench refolding of RNA hairpins
42 pages, 15 figures, Biophys. J. (in press)
Biophys. J. (2006) 80 3410-3427
10.1529/biophysj.105.078030
null
q-bio.BM cond-mat.soft
null
Using a coarse-grained model, we have explored the forced unfolding of an RNA hairpin as a function of $f_S$ and the loading rate ($r_f$). The simulations and theoretical analysis have been done both without and with the handles, which are explicitly modeled by semiflexible polymer chains. The mechanisms and time scales for denaturation by temperature jump and by mechanical unfolding are vastly different. The directed perturbation of the native state by $f_S$ results in a sequential unfolding of the hairpin starting from its ends, whereas thermal denaturation occurs stochastically. From the dependence of the unfolding rates on $r_f$ and $f_S$ we show that the position of the unfolding transition state (TS) is not constant but moves dramatically as either $r_f$ or $f_S$ is changed. The TS movements are interpreted by adopting the Hammond postulate for forced unfolding. Forced-unfolding simulations of RNA, with handles attached to the two ends, show that the value of the unfolding force increases (especially at high pulling speeds) as the length of the handles increases. The pathways for refolding of RNA from a stretched initial conformation, upon quenching $f_S$ to the quench force $f_Q$, are highly heterogeneous. The refolding times, upon force quench, are at least an order of magnitude greater than those obtained by temperature quench. The long $f_Q$-dependent refolding times starting from fully stretched states are analyzed using a model that accounts for the microscopic steps in the rate-limiting step, which involves the trans to gauche transitions of the dihedral angles in the GAAA tetraloop. The simulations with an explicit molecular model for the handles show that the dynamics of force-quench refolding is strongly dependent on the interplay of the handles' contour length and persistence length with the RNA persistence length.
[ { "created": "Thu, 26 Jan 2006 04:58:08 GMT", "version": "v1" }, { "created": "Tue, 31 Jan 2006 23:12:14 GMT", "version": "v2" } ]
2009-11-13
[ [ "Hyeon", "Changbong", "" ], [ "Thirumalai", "D.", "" ] ]
Using a coarse-grained model, we have explored the forced unfolding of an RNA hairpin as a function of $f_S$ and the loading rate ($r_f$). The simulations and theoretical analysis have been done both without and with the handles, which are explicitly modeled by semiflexible polymer chains. The mechanisms and time scales for denaturation by temperature jump and by mechanical unfolding are vastly different. The directed perturbation of the native state by $f_S$ results in a sequential unfolding of the hairpin starting from its ends, whereas thermal denaturation occurs stochastically. From the dependence of the unfolding rates on $r_f$ and $f_S$ we show that the position of the unfolding transition state (TS) is not constant but moves dramatically as either $r_f$ or $f_S$ is changed. The TS movements are interpreted by adopting the Hammond postulate for forced unfolding. Forced-unfolding simulations of RNA, with handles attached to the two ends, show that the value of the unfolding force increases (especially at high pulling speeds) as the length of the handles increases. The pathways for refolding of RNA from a stretched initial conformation, upon quenching $f_S$ to the quench force $f_Q$, are highly heterogeneous. The refolding times, upon force quench, are at least an order of magnitude greater than those obtained by temperature quench. The long $f_Q$-dependent refolding times starting from fully stretched states are analyzed using a model that accounts for the microscopic steps in the rate-limiting step, which involves the trans to gauche transitions of the dihedral angles in the GAAA tetraloop. The simulations with an explicit molecular model for the handles show that the dynamics of force-quench refolding is strongly dependent on the interplay of the handles' contour length and persistence length with the RNA persistence length.
2207.08731
Xiao Gan
Xiao Gan, Zixin Shu, Xinyan Wang, Dengying Yan, Jun Li, Shany ofaim, Réka Albert, Xiaodong Li, Baoyan Liu, Xuezhong Zhou, and Albert-László Barabási
Network medicine framework reveals generic herb-symptom effectiveness of Traditional Chinese Medicine
25 pages, 4 figures plus 1 table
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Traditional Chinese medicine (TCM) relies on natural medical products to treat symptoms and diseases. While clinical data have demonstrated the effectiveness of selected TCM-based treatments, the mechanistic root of how TCM herbs treat diseases remains largely unknown. More importantly, current approaches focus on single herbs or prescriptions, missing the high-level general principles of TCM. To uncover the mechanistic nature of TCM on a system level, in this work we establish a generic network medicine framework for TCM from the human protein interactome. Applying our framework reveals a network pattern between symptoms (diseases) and herbs in TCM. We first observe that genes associated with a symptom are not distributed randomly in the interactome, but cluster into localized modules; furthermore, a short network distance between two symptom modules is indicative of the symptoms' co-occurrence and similarity. Next, we show that the network proximity of a herb's targets to a symptom module is predictive of the herb's effectiveness in treating the symptom. We validate our framework with real-world hospital patient data by showing that (1) shorter network distance between symptoms of inpatients correlates with higher relative risk (co-occurrence), and (2) herb-symptom network proximity is indicative of patients' symptom recovery rate after herbal treatment. Finally, we identified novel herb-symptom pairs in which the herb's effectiveness in treating the symptom is predicted by the network and confirmed in hospital data, but was previously unknown to the TCM community. These predictions highlight our framework's potential for creating herb-discovery or repurposing opportunities. In conclusion, network medicine offers a powerful novel platform to understand the mechanisms of traditional medicine and to predict novel herbal treatments for disease.
[ { "created": "Mon, 18 Jul 2022 16:25:19 GMT", "version": "v1" } ]
2023-09-28
[ [ "Gan", "Xiao", "" ], [ "Shu", "Zixin", "" ], [ "Wang", "Xinyan", "" ], [ "Yan", "Dengying", "" ], [ "Li", "Jun", "" ], [ "ofaim", "Shany", "" ], [ "Albert", "Réka", "" ], [ "Li", "Xiaodong", "" ], [ "Liu", "Baoyan", "" ], [ "Zhou", "Xuezhong", "" ], [ "Barabási", "Albert-László", "" ] ]
Traditional Chinese medicine (TCM) relies on natural medical products to treat symptoms and diseases. While clinical data have demonstrated the effectiveness of selected TCM-based treatments, the mechanistic root of how TCM herbs treat diseases remains largely unknown. More importantly, current approaches focus on single herbs or prescriptions, missing the high-level general principles of TCM. To uncover the mechanistic nature of TCM on a system level, in this work we establish a generic network medicine framework for TCM from the human protein interactome. Applying our framework reveals a network pattern between symptoms (diseases) and herbs in TCM. We first observe that genes associated with a symptom are not distributed randomly in the interactome, but cluster into localized modules; furthermore, a short network distance between two symptom modules is indicative of the symptoms' co-occurrence and similarity. Next, we show that the network proximity of a herb's targets to a symptom module is predictive of the herb's effectiveness in treating the symptom. We validate our framework with real-world hospital patient data by showing that (1) shorter network distance between symptoms of inpatients correlates with higher relative risk (co-occurrence), and (2) herb-symptom network proximity is indicative of patients' symptom recovery rate after herbal treatment. Finally, we identified novel herb-symptom pairs in which the herb's effectiveness in treating the symptom is predicted by the network and confirmed in hospital data, but was previously unknown to the TCM community. These predictions highlight our framework's potential for creating herb-discovery or repurposing opportunities. In conclusion, network medicine offers a powerful novel platform to understand the mechanisms of traditional medicine and to predict novel herbal treatments for disease.
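The "network proximity" at the heart of the abstract above has a standard closest-distance form in the network-medicine literature: average, over a herb's targets, the shortest-path distance to the nearest symptom gene. The sketch below computes that quantity on a toy graph; the interactome, the gene sets, and any statistical normalization the paper applies are not reproduced here.

```python
import networkx as nx

# Toy stand-ins: a small graph instead of the human interactome, and
# arbitrary node sets instead of real herb targets / symptom genes.
G = nx.karate_club_graph()
symptom_genes = {0, 1, 2}
herb_targets = {8, 30, 33}

def closest_proximity(G, targets, module):
    """Average over targets of the distance to the nearest module gene."""
    dists = []
    for t in targets:
        lengths = nx.single_source_shortest_path_length(G, t)
        dists.append(min(lengths[g] for g in module if g in lengths))
    return sum(dists) / len(dists)

print("herb-symptom proximity:", closest_proximity(G, herb_targets, symptom_genes))
```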
2404.05391
Gianni Valerio Vinci
Gianni Valerio Vinci and Maurizio Mattia
Escape time in bistable neuronal populations driven by colored synaptic noise
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local networks of neurons are nonlinear systems driven by synaptic currents elicited by their own spiking activity and by input received from other brain areas. Synaptic currents are well approximated by correlated Gaussian noise. Moreover, the population dynamics of neuronal networks is often found to be multistable, allowing the noise source to induce state transitions. State changes in neuronal systems underlie the way information is encoded and transformed. The characterization of the escape time from metastable states is then a cornerstone for understanding how information is processed in the brain. The effects of correlated inputs forcing bistable systems have been studied for over half a century; nonetheless, most results are perturbative or valid only when a separation of time scales is present. Here, we present a novel and exact result that holds when the correlation time of the noise source is identical to that of the neural population, hence solving the mean escape time problem in a very general setting.
[ { "created": "Mon, 8 Apr 2024 10:50:47 GMT", "version": "v1" } ]
2024-04-09
[ [ "Vinci", "Gianni Valerio", "" ], [ "Mattia", "Maurizio", "" ] ]
Local networks of neurons are nonlinear systems driven by synaptic currents elicited by their own spiking activity and by input received from other brain areas. Synaptic currents are well approximated by correlated Gaussian noise. Moreover, the population dynamics of neuronal networks is often found to be multistable, allowing the noise source to induce state transitions. State changes in neuronal systems underlie the way information is encoded and transformed. The characterization of the escape time from metastable states is then a cornerstone for understanding how information is processed in the brain. The effects of correlated inputs forcing bistable systems have been studied for over half a century; nonetheless, most results are perturbative or valid only when a separation of time scales is present. Here, we present a novel and exact result that holds when the correlation time of the noise source is identical to that of the neural population, hence solving the mean escape time problem in a very general setting.
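To make the setting concrete, the sketch below simulates escape from one well of a bistable system driven by Ornstein-Uhlenbeck (colored) noise and estimates the mean escape time numerically. The double-well potential, noise amplitude, and correlation time are arbitrary stand-ins, not the neural-population dynamics treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, sigma = 1e-3, 0.5, 0.6    # step, noise correlation time, noise amplitude

def drift(x):
    return x - x**3                 # -V'(x) for the double well V(x) = x^4/4 - x^2/2

def escape_time(x0=-1.0, barrier=0.0, t_max=200.0):
    """Time for x to first cross the barrier, starting in the left well."""
    x, eta, t = x0, 0.0, 0.0
    while x < barrier and t < t_max:
        # Euler-Maruyama update of the OU noise (stationary variance sigma**2)
        eta += -eta / tau * dt + sigma * np.sqrt(2 * dt / tau) * rng.normal()
        x += (drift(x) + eta) * dt
        t += dt
    return t

times = [escape_time() for _ in range(100)]
print("estimated mean escape time:", np.mean(times))
```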
1511.01848
Davide Marenduzzo
C. A. Brackley, J. Johnson, S. Kelly, P. R. Cook, D. Marenduzzo
Binding of bivalent transcription factors to active and inactive regions folds human chromosomes into loops, rosettes and domains
main text 23 pages (including 5 figures); also contains Supplementary Material at the end. Supplementary Movies available on request
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biophysicists are modeling conformations of interphase chromosomes, often basing the strengths of interactions between segments distant on the genetic map on contact frequencies determined experimentally. Here, instead, we develop a fitting-free, minimal model: bivalent red and green "transcription factors" bind to cognate sites in runs of beads ("chromatin") to form molecular bridges stabilizing loops. In the absence of additional explicit forces, molecular dynamics simulations reveal that bound "factors" spontaneously cluster -- red with red, green with green, but rarely red with green -- to give structures reminiscent of transcription factories. Binding of just two transcription factors (or proteins) to active and inactive regions of human chromosomes yields rosettes, topological domains, and contact maps much like those seen experimentally. This emergent "bridging-induced attraction" proves to be a robust, simple, and generic force able to organize interphase chromosomes at all scales.
[ { "created": "Thu, 5 Nov 2015 18:35:25 GMT", "version": "v1" } ]
2015-11-06
[ [ "Brackley", "C. A.", "" ], [ "Johnson", "J.", "" ], [ "Kelly", "S.", "" ], [ "Cook", "P. R.", "" ], [ "Marenduzzo", "D.", "" ] ]
Biophysicists are modeling conformations of interphase chromosomes, often basing the strengths of interactions between segments distant on the genetic map on contact frequencies determined experimentally. Here, instead, we develop a fitting-free, minimal model: bivalent red and green "transcription factors" bind to cognate sites in runs of beads ("chromatin") to form molecular bridges stabilizing loops. In the absence of additional explicit forces, molecular dynamics simulations reveal that bound "factors" spontaneously cluster -- red with red, green with green, but rarely red with green -- to give structures reminiscent of transcription factories. Binding of just two transcription factors (or proteins) to active and inactive regions of human chromosomes yields rosettes, topological domains, and contact maps much like those seen experimentally. This emergent "bridging-induced attraction" proves to be a robust, simple, and generic force able to organize interphase chromosomes at all scales.
2407.04486
Tianshu Feng
Tianshu Feng, Rohan Gnanaolivu, Abolfazl Safikhani, Yuanhang Liu, Jun Jiang, Nicholas Chia, Alexander Partin, Priyanka Vasanthakumari, Yitan Zhu, Chen Wang
Variational and Explanatory Neural Networks for Encoding Cancer Profiles and Predicting Drug Responses
null
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by/4.0/
Human cancers present a significant public health challenge and require the discovery of novel drugs through translational research. Transcriptomics profiling data that describe molecular activities in tumors and cancer cell lines are widely utilized for predicting anti-cancer drug responses. However, existing AI models face challenges due to noise in transcriptomics data and a lack of biological interpretability. To overcome these limitations, we introduce VETE (Variational and Explanatory Transcriptomics Encoder), a novel neural network framework that incorporates a variational component to mitigate noise effects and integrates traceable gene ontology into the neural network architecture for encoding cancer transcriptomics data. Key innovations include a local interpretability-guided method for identifying ontology paths, a visualization tool to elucidate biological mechanisms of drug responses, and the application of centralized large-scale hyperparameter optimization. VETE demonstrated robust accuracy in cancer cell line classification and drug response prediction. Additionally, it provides traceable biological explanations for both tasks and offers insights into the mechanisms underlying its predictions. VETE bridges the gap between AI-driven predictions and biologically meaningful insights in cancer research, which represents a promising advancement in the field.
[ { "created": "Fri, 5 Jul 2024 13:13:02 GMT", "version": "v1" } ]
2024-07-08
[ [ "Feng", "Tianshu", "" ], [ "Gnanaolivu", "Rohan", "" ], [ "Safikhani", "Abolfazl", "" ], [ "Liu", "Yuanhang", "" ], [ "Jiang", "Jun", "" ], [ "Chia", "Nicholas", "" ], [ "Partin", "Alexander", "" ], [ "Vasanthakumari", "Priyanka", "" ], [ "Zhu", "Yitan", "" ], [ "Wang", "Chen", "" ] ]
Human cancers present a significant public health challenge and require the discovery of novel drugs through translational research. Transcriptomics profiling data that describe molecular activities in tumors and cancer cell lines are widely utilized for predicting anti-cancer drug responses. However, existing AI models face challenges due to noise in transcriptomics data and a lack of biological interpretability. To overcome these limitations, we introduce VETE (Variational and Explanatory Transcriptomics Encoder), a novel neural network framework that incorporates a variational component to mitigate noise effects and integrates traceable gene ontology into the neural network architecture for encoding cancer transcriptomics data. Key innovations include a local interpretability-guided method for identifying ontology paths, a visualization tool to elucidate biological mechanisms of drug responses, and the application of centralized large-scale hyperparameter optimization. VETE demonstrated robust accuracy in cancer cell line classification and drug response prediction. Additionally, it provides traceable biological explanations for both tasks and offers insights into the mechanisms underlying its predictions. VETE bridges the gap between AI-driven predictions and biologically meaningful insights in cancer research, which represents a promising advancement in the field.
1512.02324
Jacqueline S. Glasenapp
J. S. Glasenapp, B. R. Frieden, C. D. Cruz
Shannon Mutual Information Applied to Genetic Systems
20 pages, 2 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Shannon information has, in the past, been applied to quantify the genetic diversity of many natural populations. Here, we apply the Shannon concept to consecutive generations of alleles as they evolve over time. We suppose a genetic system analogous to the discrete noisy channel of Shannon, where the signal emitted by the input (mother population) is a number of alleles that will form the next generation (offspring). The alleles received at a given generation are conditional upon the previous generation. Knowledge of this conditional probability law allows us to track the evolution of the allele entropies and mutual information values from one generation to the next. We apply these laws to numerical computer simulations and to real data (Stryphnodendron adstringens). We find that, due to the genetic sampling process, in the absence of new mutations the mutual information increases between generations toward a maximum value. Lastly, after sufficient generations the system has a level of mutual information equal to the entropy (diversity) that it had at the beginning of the process (mother population). This implies no increase in genetic diversity in the absence of new mutations. Now, obviously, mutations are essential to the evolution of species. In addition, we observe that when a population shows at least a low level of genetic diversity, the highest values of mutual information between generations occur when the system is neither too orderly nor too disorderly. We also find that mutual information is a valid measure of allele fixation.
[ { "created": "Tue, 8 Dec 2015 04:42:25 GMT", "version": "v1" }, { "created": "Wed, 16 Dec 2015 19:46:25 GMT", "version": "v2" } ]
2015-12-17
[ [ "Glasenapp", "J. S.", "" ], [ "Frieden", "B. R.", "" ], [ "Cruz", "C. D.", "" ] ]
Shannon information has, in the past, been applied to quantify the genetic diversity of many natural populations. Here, we apply the Shannon concept to consecutive generations of alleles as they evolve over time. We suppose a genetic system analogous to the discrete noisy channel of Shannon, where the signal emitted by the input (mother population) is a number of alleles that will form the next generation (offspring). The alleles received at a given generation are conditional upon the previous generation. Knowledge of this conditional probability law allows us to track the evolution of the allele entropies and mutual information values from one generation to the next. We apply these laws to numerical computer simulations and to real data (Stryphnodendron adstringens). We find that, due to the genetic sampling process, in the absence of new mutations the mutual information increases between generations toward a maximum value. Lastly, after sufficient generations the system has a level of mutual information equal to the entropy (diversity) that it had at the beginning of the process (mother population). This implies no increase in genetic diversity in the absence of new mutations. Now, obviously, mutations are essential to the evolution of species. In addition, we observe that when a population shows at least a low level of genetic diversity, the highest values of mutual information between generations occur when the system is neither too orderly nor too disorderly. We also find that mutual information is a valid measure of allele fixation.
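The channel analogy above translates directly into a few lines of code: given parent allele frequencies and a conditional (channel) matrix for the offspring generation, mutual information follows from I = H(parent) + H(offspring) - H(joint). The frequencies and transition matrix below are illustrative, not the paper's simulations or its S. adstringens data.

```python
import numpy as np

p_parent = np.array([0.5, 0.3, 0.2])        # illustrative allele frequencies
channel = np.array([[0.90, 0.05, 0.05],     # P(offspring allele | parent allele)
                    [0.05, 0.90, 0.05],
                    [0.05, 0.05, 0.90]])

joint = p_parent[:, None] * channel          # P(parent = i, offspring = j)
p_offspring = joint.sum(axis=0)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

mi = entropy(p_parent) + entropy(p_offspring) - entropy(joint.ravel())
print(f"I(parent; offspring) = {mi:.3f} bits")
```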
0712.3396
Philippe Laurençot
Philippe Laurençot and Christoph Walker
On an age and spatially structured population model for Proteus Mirabilis swarm-colony development
null
null
null
null
q-bio.PE math.AP
null
Proteus mirabilis are bacteria that make strikingly regular spatial-temporal patterns on agar surfaces. In this paper we investigate a mathematical model that has been shown to display these structures when solved numerically. The model consists of an ordinary differential equation coupled with a partial differential equation involving a first-order hyperbolic aging term together with nonlinear degenerate diffusion. The system is shown to admit global weak solutions.
[ { "created": "Thu, 20 Dec 2007 12:46:10 GMT", "version": "v1" } ]
2008-01-09
[ [ "Laurençot", "Philippe", "" ], [ "Walker", "Christoph", "" ] ]
Proteus mirabilis are bacteria that make strikingly regular spatial-temporal patterns on agar surfaces. In this paper we investigate a mathematical model that has been shown to display these structures when solved numerically. The model consists of an ordinary differential equation coupled with a partial differential equation involving a first-order hyperbolic aging term together with nonlinear degenerate diffusion. The system is shown to admit global weak solutions.
1311.1861
Gustavo Guerberoff
Gustavo Guerberoff, Fernando Alvarez-Valin
A stochastic microscopic model for the dynamics of antigenic variation
7 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel model that describes the within-host evolutionary dynamics of parasites undergoing antigenic variation. The approach uses a multi-type branching process with two types of entities defined according to their relationship with the immune system: clans of resistant parasitic cells (i.e. groups of cells sharing the same antigen not yet recognized by the immune system) that may become sensitive, and individual sensitive cells that can acquire a new resistance thus giving rise to the emergence of a new clan. The simplicity of the model allows analytical treatment to determine the subcritical and supercritical regimes in the space of parameters. By incorporating a density-dependent mechanism the model is able to capture additional relevant features observed in experimental data, such as the characteristic parasitemia waves. In summary, our approach provides a new general framework to address the dynamics of antigenic variation which can be easily adapted to cope with broader and more complex situations.
[ { "created": "Fri, 8 Nov 2013 01:20:51 GMT", "version": "v1" }, { "created": "Fri, 15 Nov 2013 16:26:51 GMT", "version": "v2" } ]
2013-11-18
[ [ "Guerberoff", "Gustavo", "" ], [ "Alvarez-Valin", "Fernando", "" ] ]
We present a novel model that describes the within-host evolutionary dynamics of parasites undergoing antigenic variation. The approach uses a multi-type branching process with two types of entities defined according to their relationship with the immune system: clans of resistant parasitic cells (i.e. groups of cells sharing the same antigen not yet recognized by the immune system) that may become sensitive, and individual sensitive cells that can acquire a new resistance thus giving rise to the emergence of a new clan. The simplicity of the model allows analytical treatment to determine the subcritical and supercritical regimes in the space of parameters. By incorporating a density-dependent mechanism the model is able to capture additional relevant features observed in experimental data, such as the characteristic parasitemia waves. In summary, our approach provides a new general framework to address the dynamics of antigenic variation which can be easily adapted to cope with broader and more complex situations.
1401.8076
Tatiana T. Marquez-Lago
Eder Zavala and Tatiana T. Marquez-Lago
The Long and Viscous Road: Uncovering Nuclear Diffusion Barriers in Closed Mitosis
21 pages, 6 figures and supplementary material (including 8 additional figures and a Table)
PLoS Comput Biol 10(7): e1003725, 2014
10.1371/journal.pcbi.1003725
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During Saccharomyces cerevisiae closed mitosis, parental identity is sustained by the asymmetric segregation of ageing factors. Such asymmetry has been hypothesized to occur via diffusion barriers, constraining protein lateral exchange in cellular membranes. Diffusion barriers have been extensively studied in the plasma membrane, but their identity and organization within the nucleus remain unknown. Here, we propose how sphingolipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between nuclear lobes. Our spatial stochastic model is based on several lines of experimental evidence and predicts that, while a sphingolipid domain and a protein ring could constitute the barrier during early anaphase, a sphingolipid domain spanning the bridge between lobes during late anaphase would be entirely sufficient. Additionally, we explore the structural organization of plausible diffusion barriers. Our work shows how nuclear diffusion barriers in closed mitosis may be emergent properties of simple nanoscale biophysical interactions.
[ { "created": "Fri, 31 Jan 2014 07:28:03 GMT", "version": "v1" }, { "created": "Mon, 28 Jul 2014 04:03:45 GMT", "version": "v2" } ]
2014-07-29
[ [ "Zavala", "Eder", "" ], [ "Marquez-Lago", "Tatiana T.", "" ] ]
During Saccharomyces cerevisiae closed mitosis, parental identity is sustained by the asymmetric segregation of ageing factors. Such asymmetry has been hypothesized to occur via diffusion barriers, constraining protein lateral exchange in cellular membranes. Diffusion barriers have been extensively studied in the plasma membrane, but their identity and organization within the nucleus remain unknown. Here, we propose how sphingolipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between nuclear lobes. Our spatial stochastic model is based on several lines of experimental evidence and predicts that, while a sphingolipid domain and a protein ring could constitute the barrier during early anaphase, a sphingolipid domain spanning the bridge between lobes during late anaphase would be entirely sufficient. Additionally, we explore the structural organization of plausible diffusion barriers. Our work shows how nuclear diffusion barriers in closed mitosis may be emergent properties of simple nanoscale biophysical interactions.
1810.00398
Kapil Ahuja
Aditya A. Shastri, Kapil Ahuja, Milind B. Ratnaparkhe, Aditya Shah, Aishwary Gagrani, and Anant Lal
Vector Quantized Spectral Clustering applied to Soybean Whole Genome Sequences
10 Pages, 3 Tables, 2 Figures
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a Vector Quantized Spectral Clustering (VQSC) algorithm that combines Spectral Clustering (SC) with Vector Quantization (VQ) sampling for grouping Soybean genomes. The inspiration here is to use SC for its accuracy and VQ to make the algorithm computationally cheap (the complexity of SC is cubic in terms of the input size). Although the combination of SC and VQ is not new, the novelty of our work is in developing the crucial similarity matrix in SC as well as the use of k-medoids in VQ, both adapted for the Soybean genome data. We compare our approach with commonly used techniques like UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and NJ (Neighbour Joining). Experimental results show that our approach outperforms both these techniques significantly in terms of cluster quality (up to 25% better cluster quality) and time complexity (an order of magnitude faster).
[ { "created": "Sun, 30 Sep 2018 15:13:33 GMT", "version": "v1" } ]
2018-10-02
[ [ "Shastri", "Aditya A.", "" ], [ "Ahuja", "Kapil", "" ], [ "Ratnaparkhe", "Milind B.", "" ], [ "Shah", "Aditya", "" ], [ "Gagrani", "Aishwary", "" ], [ "Lal", "Anant", "" ] ]
We develop a Vector Quantized Spectral Clustering (VQSC) algorithm that combines Spectral Clustering (SC) with Vector Quantization (VQ) sampling for grouping Soybean genomes. The inspiration here is to use SC for its accuracy and VQ to make the algorithm computationally cheap (the complexity of SC is cubic in terms of the input size). Although the combination of SC and VQ is not new, the novelty of our work is in developing the crucial similarity matrix in SC as well as the use of k-medoids in VQ, both adapted for the Soybean genome data. We compare our approach with commonly used techniques like UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and NJ (Neighbour Joining). Experimental results show that our approach outperforms both these techniques significantly in terms of cluster quality (up to 25% better cluster quality) and time complexity (an order of magnitude faster).
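A rough sketch of the VQ-then-SC pipeline described above: compress the data to k medoids, run the cubic-cost spectral step only on those medoids, then let every point inherit its medoid's cluster. The data, the similarity (sklearn's default RBF affinity), and the tiny k-medoids loop are placeholders rather than the genome-specific constructions of the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))          # stand-in for genome feature vectors

def naive_kmedoids(X, k, iters=10):
    """Tiny alternating k-medoids; adequate for a sketch."""
    idx = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = np.flatnonzero(assign == j)
            if members.size == 0:
                continue
            pair = np.linalg.norm(
                X[members][:, None, :] - X[members][None, :, :], axis=2)
            idx[j] = members[pair.sum(axis=1).argmin()]
    d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
    return idx, d.argmin(axis=1)

medoid_idx, assign = naive_kmedoids(X, k=50)
sc = SpectralClustering(n_clusters=5, affinity="rbf", random_state=0)
medoid_labels = sc.fit_predict(X[medoid_idx])   # cubic step on 50 points, not 500
labels = medoid_labels[assign]                  # each point inherits its medoid's label
print(np.bincount(labels))
```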
1802.08747
Kanika Bansal
Kanika Bansal, John D. Medaglia, Danielle S. Bassett, Jean M. Vettel, Sarah F. Muldoon
Data-driven brain network models predict individual variability in behavior
26 pages, 6 figures, 3 tables
null
10.1371/journal.pcbi.1006487
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relationship between brain structure and function has been probed using a variety of approaches, but how the underlying structural connectivity of the human brain drives behavior is far from understood. To investigate the effect of anatomical brain organization on human task performance, we use a data-driven computational modeling approach and explore the functional effects of naturally occurring structural differences in brain networks. We construct personalized brain network models by combining anatomical connectivity estimated from diffusion spectrum imaging of individual subjects with a nonlinear model of brain dynamics. By performing computational experiments in which we measure the excitability of the global brain network and spread of synchronization following a targeted computational stimulation, we quantify how individual variation in the underlying connectivity impacts both local and global brain dynamics. We further relate the computational results to individual variability in the subjects' performance of three language-demanding tasks both before and after transcranial magnetic stimulation to the left inferior frontal gyrus. Our results show that task performance correlates with either local or global measures of functional activity, depending on the complexity of the task. By emphasizing differences in the underlying structural connectivity, our model serves as a powerful tool to predict individual differences in task performance, to dissociate the effect of targeted stimulation in tasks that differ in cognitive complexity, and to pave the way for the development of personalized therapeutics.
[ { "created": "Fri, 23 Feb 2018 21:57:52 GMT", "version": "v1" } ]
2018-11-15
[ [ "Bansal", "Kanika", "" ], [ "Medaglia", "John D.", "" ], [ "Bassett", "Danielle S.", "" ], [ "Vettel", "Jean M.", "" ], [ "Muldoon", "Sarah F.", "" ] ]
The relationship between brain structure and function has been probed using a variety of approaches, but how the underlying structural connectivity of the human brain drives behavior is far from understood. To investigate the effect of anatomical brain organization on human task performance, we use a data-driven computational modeling approach and explore the functional effects of naturally occurring structural differences in brain networks. We construct personalized brain network models by combining anatomical connectivity estimated from diffusion spectrum imaging of individual subjects with a nonlinear model of brain dynamics. By performing computational experiments in which we measure the excitability of the global brain network and spread of synchronization following a targeted computational stimulation, we quantify how individual variation in the underlying connectivity impacts both local and global brain dynamics. We further relate the computational results to individual variability in the subjects' performance of three language-demanding tasks both before and after transcranial magnetic stimulation to the left inferior frontal gyrus. Our results show that task performance correlates with either local or global measures of functional activity, depending on the complexity of the task. By emphasizing differences in the underlying structural connectivity, our model serves as a powerful tool to predict individual differences in task performance, to dissociate the effect of targeted stimulation in tasks that differ in cognitive complexity, and to pave the way for the development of personalized therapeutics.
2311.07628
Julien ROMAIN
Julien Romain (URCA), Ahlem Arfaoui (URCA), William Bertucci (URCA)
Effect of Wearing a New Prophylactic Orthosis on Postural Balance
null
Journal of Orthopedic Research and Therapy, 2023, 8 (4)
10.29011/2575-8241.001289
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: The purpose of this study is to evaluate the effect of an innovative prophylactic knee orthosis on postural balance. This prophylactic knee orthosis is designed with a compression that is oriented in a chosen direction. The purpose of this compression is to improve stability in dynamic situations. Orthoses are used to provide functional improvements for knee problems. However, more scientific validation is needed for this type of product. Methods: 20 sportsmen from team sports performed a functional test: the Y-Balance Test. This reliable and reproducible test allows evaluation of the postural balance of the lower limb. The subjects were tested under 3 conditions: prophylactic orthosis with innovative compression, control orthosis (with no compression), and without orthosis. The average of three trials was collected for each direction and condition. Results: The prophylactic orthosis had a better standardized score in the anterior direction (p<0.05) and a better composite score (p<0.05) than the control orthosis (no compression). However, there were no differences in the normalized score in the other directions. There were no significant differences between the prophylactic orthosis and no orthosis. Conclusion: Wearing the prophylactic orthosis improves postural balance compared to an orthosis with no compression, but there is no difference in postural balance between the prophylactic orthosis and no orthosis.
[ { "created": "Mon, 13 Nov 2023 08:48:26 GMT", "version": "v1" } ]
2023-11-15
[ [ "Romain", "Julien", "", "URCA" ], [ "Arfaoui", "Ahlem", "", "URCA" ], [ "Bertucci", "William", "", "URCA" ] ]
Purpose: The purpose of this study is to evaluate the effect of an innovative prophylactic knee orthosis on postural balance. This prophylactic knee orthosis is designed with a compression that is oriented in a chosen direction. The purpose of this compression is to improve stability in dynamic situations. Orthoses are used to provide functional improvements for knee problems. However, more scientific validation is needed for this type of product. Methods: 20 sportsmen from team sports performed a functional test: the Y-Balance Test. This reliable and reproducible test allows evaluation of the postural balance of the lower limb. The subjects were tested under 3 conditions: prophylactic orthosis with innovative compression, control orthosis (with no compression), and without orthosis. The average of three trials was collected for each direction and condition. Results: The prophylactic orthosis had a better standardized score in the anterior direction (p<0.05) and a better composite score (p<0.05) than the control orthosis (no compression). However, there were no differences in the normalized score in the other directions. There were no significant differences between the prophylactic orthosis and no orthosis. Conclusion: Wearing the prophylactic orthosis improves postural balance compared to an orthosis with no compression, but there is no difference in postural balance between the prophylactic orthosis and no orthosis.
1309.5118
Jared Decker
Jared E. Decker, Stephanie D. McKay, Megan M. Rolf, JaeWoo Kim, Antonio Molina Alcalá, Tad S. Sonstegard, Olivier Hanotte, Anders Götherström, Christopher M. Seabury, Lisa Praharani, Masroor Ellahi Babar, Luciana Correia de Almeida Regitano, Mehmet Ali Yildiz, Michael P. Heaton, Wansheng Lui, Chu-Zhao Lei, James M. Reecy, Muhammad Saif-Ur-Rehman, Robert D. Schnabel, and Jeremy F. Taylor
Worldwide Patterns of Ancestry, Divergence, and Admixture in Domesticated Cattle
38 pages, 15 figures. Various changes made to respond to peer reviews. Mostly, arguments were clarified and additional f-statistics were added
PLoS Genet 10 (2014) e1004254
10.1371/journal.pgen.1004254
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The domestication and development of cattle has considerably impacted human societies, but the histories of cattle breeds have been poorly understood, especially for African, Asian, and American breeds. Using genotypes from 43,043 autosomal single nucleotide polymorphism markers scored in 1,543 animals, we evaluate the population structure of 134 domesticated bovid breeds. Regardless of the analytical method or sample subset, the three major groups of Asian indicine, Eurasian taurine, and African taurine were consistently observed. Patterns of geographic dispersal resulting from co-migration with humans and exportation are recognizable in phylogenetic networks. All analytical methods reveal patterns of hybridization which occurred after divergence. Using 19 breeds, we map the cline of indicine introgression into Africa. We infer that African taurine possess a large portion of wild African auroch ancestry, causing their divergence from Eurasian taurine. We detect exportation patterns in Asia and identify a cline of Eurasian taurine/indicine hybridization in Asia. We also identify the influence of species other than Bos taurus in the formation of Asian breeds. We detect the pronounced influence of Shorthorn cattle in the formation of European breeds. Iberian and Italian cattle possess introgression from African taurine. American Criollo cattle are shown to be of Iberian, and not African, descent. Indicine introgression into American cattle occurred in the Americas, and not Europe. We argue that cattle migration, movement and trading followed by admixture have been important forces in shaping modern bovine genomic variation.
[ { "created": "Thu, 19 Sep 2013 23:09:45 GMT", "version": "v1" }, { "created": "Fri, 3 Jan 2014 03:32:10 GMT", "version": "v2" } ]
2014-04-01
[ [ "Decker", "Jared E.", "" ], [ "McKay", "Stephanie D.", "" ], [ "Rolf", "Megan M.", "" ], [ "Kim", "JaeWoo", "" ], [ "Alcalá", "Antonio Molina", "" ], [ "Sonstegard", "Tad S.", "" ], [ "Hanotte", "Olivier", "" ], [ "Götherström", "Anders", "" ], [ "Seabury", "Christopher M.", "" ], [ "Praharani", "Lisa", "" ], [ "Babar", "Masroor Ellahi", "" ], [ "Regitano", "Luciana Correia de Almeida", "" ], [ "Yildiz", "Mehmet Ali", "" ], [ "Heaton", "Michael P.", "" ], [ "Lui", "Wansheng", "" ], [ "Lei", "Chu-Zhao", "" ], [ "Reecy", "James M.", "" ], [ "Saif-Ur-Rehman", "Muhammad", "" ], [ "Schnabel", "Robert D.", "" ], [ "Taylor", "Jeremy F.", "" ] ]
The domestication and development of cattle has considerably impacted human societies, but the histories of cattle breeds have been poorly understood, especially for African, Asian, and American breeds. Using genotypes from 43,043 autosomal single nucleotide polymorphism markers scored in 1,543 animals, we evaluate the population structure of 134 domesticated bovid breeds. Regardless of the analytical method or sample subset, the three major groups of Asian indicine, Eurasian taurine, and African taurine were consistently observed. Patterns of geographic dispersal resulting from co-migration with humans and exportation are recognizable in phylogenetic networks. All analytical methods reveal patterns of hybridization which occurred after divergence. Using 19 breeds, we map the cline of indicine introgression into Africa. We infer that African taurine possess a large portion of wild African auroch ancestry, causing their divergence from Eurasian taurine. We detect exportation patterns in Asia and identify a cline of Eurasian taurine/indicine hybridization in Asia. We also identify the influence of species other than Bos taurus in the formation of Asian breeds. We detect the pronounced influence of Shorthorn cattle in the formation of European breeds. Iberian and Italian cattle possess introgression from African taurine. American Criollo cattle are shown to be of Iberian, and not African, descent. Indicine introgression into American cattle occurred in the Americas, and not Europe. We argue that cattle migration, movement and trading followed by admixture have been important forces in shaping modern bovine genomic variation.
2003.09477
Stefan Tappe
Stefan Tappe
A simple mathematical model for the evolution of the corona virus
6 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of this note is to present a simple mathematical model with two parameters for the number of deaths due to the corona (COVID-19) virus. The model only requires basic knowledge in differential calculus, and can also be understood by pupils attending secondary school. The model can easily be implemented on a computer, and we will illustrate it on the basis of case studies for different countries.
[ { "created": "Fri, 20 Mar 2020 19:40:42 GMT", "version": "v1" } ]
2020-03-24
[ [ "Tappe", "Stefan", "" ] ]
The goal of this note is to present a simple mathematical model with two parameters for the number of deaths due to the corona (COVID-19) virus. The model only requires basic knowledge in differential calculus, and can also be understood by pupils attending secondary school. The model can easily be implemented on a computer, and we will illustrate it on the basis of case studies for different countries.
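The abstract does not reproduce the model's equations, so the sketch below shows merely the flavor of a two-parameter death-count model that needs only basic calculus: logistic growth with a rate a and a plateau b, integrated by forward Euler. Both the choice of equation and the parameter values are assumptions for illustration.

```python
# Hypothetical stand-in model (not necessarily the note's equations):
# N'(t) = a * N * (1 - N / b), i.e. logistic growth toward a plateau b.
a, b = 0.25, 10_000.0       # illustrative growth rate and final toll
dt, T = 0.1, 120.0          # Euler step (days) and horizon
N = 10.0                    # initial cumulative count

t = 0.0
while t < T:
    N += a * N * (1 - N / b) * dt
    t += dt
print(f"cumulative count after {T:.0f} days: {N:.0f}")
```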
1911.12115
Timothy Kinyanjui
Timothy Kinyanjui and Thomas House
Generalised Linear Models for Dependent Binary Outcomes with Applications to Household Stratified Pandemic Influenza Data
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much traditional statistical modelling assumes that the outcome variables of interest are independent of each other when conditioned on the explanatory variables. This assumption is strongly violated in the case of infectious diseases, particularly in close-contact settings such as households, where each individual's probability of infection is strongly influenced by whether other household members experience infection. On the other hand, general multi-type transmission models of household epidemics quickly become unidentifiable from data as the number of types increases. This has led to a situation where it has not been possible to draw consistent conclusions from household studies of infectious diseases, for example in the event of an influenza pandemic. Here, we present a generalised linear modelling framework for binary outcomes in sub-units that can (i) capture the effects of non-independence arising from a transmission process and (ii) adjust estimates of disease risk and severity for differences in study population characteristics. This model allows for computationally fast estimation, uncertainty quantification, covariate choice and model selection. In application to real pandemic influenza household data, we show that it is formally favoured over existing modelling approaches.
[ { "created": "Wed, 27 Nov 2019 12:48:19 GMT", "version": "v1" } ]
2019-11-28
[ [ "Kinyanjui", "Timothy", "" ], [ "House", "Thomas", "" ] ]
Much traditional statistical modelling assumes that the outcome variables of interest are independent of each other when conditioned on the explanatory variables. This assumption is strongly violated in the case of infectious diseases, particularly in close-contact settings such as households, where each individual's probability of infection is strongly influenced by whether other household members experience infection. On the other hand, general multi-type transmission models of household epidemics quickly become unidentifiable from data as the number of types increases. This has led to a situation where it has not been possible to draw consistent conclusions from household studies of infectious diseases, for example in the event of an influenza pandemic. Here, we present a generalised linear modelling framework for binary outcomes in sub-units that can (i) capture the effects of non-independence arising from a transmission process and (ii) adjust estimates of disease risk and severity for differences in study population characteristics. This model allows for computationally fast estimation, uncertainty quantification, covariate choice and model selection. In application to real pandemic influenza household data, we show that it is formally favoured over existing modelling approaches.
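One way to see how a transmission effect can live inside a GLM, in the spirit of the abstract above: put the number of infected housemates into the linear predictor of a binomial GLM for each susceptible member's outcome. The simulated data, the covariate, and the coefficients below are placeholders, and this is a generic illustration rather than the authors' exact framework.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_households, size = 300, 4
age = rng.integers(0, 2, size=(n_households, size))        # toy covariate
index_case = rng.random((n_households, size)) < 0.2        # seed infections

rows = []
for h in range(n_households):
    n_inf = index_case[h].sum()                            # infected housemates
    for i in range(size):
        if index_case[h, i]:
            continue                                       # model susceptibles only
        logit = -2.0 + 0.8 * n_inf + 0.5 * age[h, i]       # assumed "true" effects
        y = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        rows.append((float(y), float(n_inf), float(age[h, i])))

data = np.array(rows)
y, X = data[:, 0], sm.add_constant(data[:, 1:])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)    # roughly recovers (-2.0, 0.8, 0.5)
```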
2109.14659
Sanjukta Krishnagopal
Sanjukta Krishnagopal, Keith Lohse, Robynne Braun
Stroke recovery phenotyping through network trajectory approaches and graph neural networks
20 pages, 5 figures
null
null
null
q-bio.QM cs.LG cs.SI physics.data-an
http://creativecommons.org/licenses/by/4.0/
Stroke is a leading cause of neurological injury characterized by impairments in multiple neurological domains including cognition, language, sensory and motor functions. Clinical recovery in these domains is tracked using a wide range of measures that may be continuous, ordinal, interval or categorical in nature, which presents challenges for standard multivariate regression approaches. This has hindered stroke researchers' ability to achieve an integrated picture of the complex time-evolving interactions amongst symptoms. Here we use tools from network science and machine learning that are particularly well-suited to extracting underlying patterns in such data, and may assist in prediction of recovery patterns. To demonstrate the utility of this approach, we analyzed data from the NINDS tPA trial using the Trajectory Profile Clustering (TPC) method to identify distinct stroke recovery patterns for 11 different neurological domains at 5 discrete time points. Our analysis identified 3 distinct stroke trajectory profiles that align with clinically relevant stroke syndromes, characterized both by distinct clusters of symptoms, as well as differing degrees of symptom severity. We then validated our approach using graph neural networks to determine how well our model performed predictively for stratifying patients into these trajectory profiles at early vs. later time points post-stroke. We demonstrate that trajectory profile clustering is an effective method for identifying clinically relevant recovery subtypes in multidimensional longitudinal datasets, and for early prediction of symptom progression subtypes in individual patients. This paper is the first work introducing network trajectory approaches for stroke recovery phenotyping, and is aimed at enhancing the translation of such novel computational approaches for practical clinical application.
[ { "created": "Wed, 29 Sep 2021 18:46:08 GMT", "version": "v1" } ]
2021-10-01
[ [ "Krishnagopal", "Sanjukta", "" ], [ "Lohse", "Keith", "" ], [ "Braun", "Robynne", "" ] ]
Stroke is a leading cause of neurological injury characterized by impairments in multiple neurological domains including cognition, language, sensory and motor functions. Clinical recovery in these domains is tracked using a wide range of measures that may be continuous, ordinal, interval or categorical in nature, which presents challenges for standard multivariate regression approaches. This has hindered stroke researchers' ability to achieve an integrated picture of the complex time-evolving interactions amongst symptoms. Here we use tools from network science and machine learning that are particularly well-suited to extracting underlying patterns in such data, and may assist in prediction of recovery patterns. To demonstrate the utility of this approach, we analyzed data from the NINDS tPA trial using the Trajectory Profile Clustering (TPC) method to identify distinct stroke recovery patterns for 11 different neurological domains at 5 discrete time points. Our analysis identified 3 distinct stroke trajectory profiles that align with clinically relevant stroke syndromes, characterized both by distinct clusters of symptoms, as well as differing degrees of symptom severity. We then validated our approach using graph neural networks to determine how well our model performed predictively for stratifying patients into these trajectory profiles at early vs. later time points post-stroke. We demonstrate that trajectory profile clustering is an effective method for identifying clinically relevant recovery subtypes in multidimensional longitudinal datasets, and for early prediction of symptom progression subtypes in individual patients. This paper is the first work introducing network trajectory approaches for stroke recovery phenotyping, and is aimed at enhancing the translation of such novel computational approaches for practical clinical application.
q-bio/0411035
Michael Deem
Taison Tan, Daan Frenkel, Vishal Gupta, and Michael W. Deem
Length, Protein-Protein Interactions, and Complexity
13 pages, 5 figures, 2 tables, to appear in Physica A
null
10.1016/j.physa.2004.11.021
null
q-bio.MN
null
The evolutionary reason for the increase in gene length from archaea to prokaryotes to eukaryotes observed in large-scale genome sequencing efforts has been unclear. We propose here that the increasing complexity of protein-protein interactions has driven the selection of longer proteins, as longer proteins are better able to distinguish among a larger number of distinct interactions due to their greater average surface area. Annotated protein sequences available from the SWISS-PROT database were analyzed for thirteen eukaryotic, eight bacterial, and two archaeal species. The number of subcellular locations with which each protein is associated is used as a measure of the number of interactions in which the protein participates. Two databases of yeast protein-protein interactions were used as another measure of the number of interactions in which each \emph{S. cerevisiae} protein participates. Protein length is shown to correlate both with the number of subcellular locations with which a protein is associated and with the number of interactions measured by yeast two-hybrid experiments. Protein length is also shown to correlate with the probability that the protein is encoded by an essential gene. Interestingly, average protein length and number of subcellular locations are not significantly different between all human proteins and the protein targets of known, marketed drugs. Increased protein length appears to be a significant mechanism by which the increasing complexity of protein-protein interaction networks is accommodated within the natural evolution of species. Consideration of protein length may be a valuable tool in drug design, one that predicts different strategies for inhibiting interactions in aberrant and normal pathways.
[ { "created": "Tue, 16 Nov 2004 10:17:35 GMT", "version": "v1" } ]
2009-11-10
[ [ "Tan", "Taison", "" ], [ "Frenkel", "Daan", "" ], [ "Gupta", "Vishal", "" ], [ "Deem", "Michael W.", "" ] ]
The evolutionary reason for the increase in gene length from archaea to prokaryotes to eukaryotes observed in large-scale genome sequencing efforts has been unclear. We propose here that the increasing complexity of protein-protein interactions has driven the selection of longer proteins, as longer proteins are better able to distinguish among a larger number of distinct interactions due to their greater average surface area. Annotated protein sequences available from the SWISS-PROT database were analyzed for thirteen eukaryotic, eight bacterial, and two archaeal species. The number of subcellular locations with which each protein is associated is used as a measure of the number of interactions in which the protein participates. Two databases of yeast protein-protein interactions were used as another measure of the number of interactions in which each \emph{S. cerevisiae} protein participates. Protein length is shown to correlate both with the number of subcellular locations with which a protein is associated and with the number of interactions measured by yeast two-hybrid experiments. Protein length is also shown to correlate with the probability that the protein is encoded by an essential gene. Interestingly, average protein length and number of subcellular locations are not significantly different between all human proteins and the protein targets of known, marketed drugs. Increased protein length appears to be a significant mechanism by which the increasing complexity of protein-protein interaction networks is accommodated within the natural evolution of species. Consideration of protein length may be a valuable tool in drug design, one that predicts different strategies for inhibiting interactions in aberrant and normal pathways.
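The central analysis is a correlation between protein length and a proxy for interaction number. A minimal sketch of such a test is below; the arrays are hypothetical placeholders, not SWISS-PROT data.

```python
# Sketch of the central correlation test: protein length vs. number of
# annotated subcellular locations. Values below are made up for illustration.
from scipy.stats import spearmanr

lengths = [120, 310, 450, 95, 780, 560, 230, 640]   # amino acids (hypothetical)
n_locations = [1, 2, 3, 1, 4, 3, 2, 4]              # locations per protein (hypothetical)

rho, p = spearmanr(lengths, n_locations)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```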
q-bio/0610010
Roberts Paeglis
Roberts Paeglis, Ivars Lacis, Anda Podniece, Nikolajs Sjakste
The Hilbert transform of horizontal gaze position during natural image classification by saccades
Preliminary results reported at ECVP 2006, 12 pages, 6 figures
null
null
null
q-bio.NC
null
Eye movements are a behavioral response that can be involved in tasks as complicated as natural image classification. This report confirms that pro- and anti-saccades can be used by a volunteer to designate target (animal) or non-target images centered 16 degrees off the fixation point. With more than 86% correct responses, the 11 participants responded to targets in 470 milliseconds on average, with responses starting as quickly as 245 milliseconds. Furthermore, tracking gaze position is considered a powerful method in studies of recognition, since saccade response times, ocular dynamics, and the events around the response time can all be calculated from data sampled 240 times per second. The Hilbert transform is applied to obtain the analytic signal of the horizontal gaze position. Its amplitude and phase are used to describe differences between saccades that may reflect the recognition process.
[ { "created": "Wed, 4 Oct 2006 10:23:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Paeglis", "Roberts", "" ], [ "Lacis", "Ivars", "" ], [ "Podniece", "Anda", "" ], [ "Sjakste", "Nikolajs", "" ] ]
Eye movements are a behavioral response that can be involved in tasks as complicated as natural image classification. This report confirms that pro- and anti-saccades can be used by a volunteer to designate target (animal) or non-target images centered 16 degrees off the fixation point. With more than 86% correct responses, the 11 participants responded to targets in 470 milliseconds on average, with responses starting as quickly as 245 milliseconds. Furthermore, tracking gaze position is considered a powerful method in studies of recognition, since saccade response times, ocular dynamics, and the events around the response time can all be calculated from data sampled 240 times per second. The Hilbert transform is applied to obtain the analytic signal of the horizontal gaze position. Its amplitude and phase are used to describe differences between saccades that may reflect the recognition process.
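The analytic-signal step described above is a standard computation. The sketch below applies SciPy's Hilbert transform to a synthetic 240 Hz gaze trace; the saccade shape and timing are assumptions, not the study's recordings.

```python
# Sketch of the analytic signal of a horizontal gaze trace sampled at 240 Hz.
# The smooth 16-degree "saccade" below is synthetic.
import numpy as np
from scipy.signal import hilbert

fs = 240.0                                  # eye-tracker sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
gaze = 16.0 / (1.0 + np.exp(-(t - 0.4) * 60.0))   # sigmoidal position change at 0.4 s

analytic = hilbert(gaze - gaze.mean())      # remove offset before transforming
amplitude = np.abs(analytic)                # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(analytic))       # instantaneous phase

print(amplitude.max(), phase[-1] - phase[0])
```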
2003.09949
Kirill Polovnikov
K. Polovnikov, A. Gorsky, S. Nechaev, S. V. Razin, S. Ulianov
Non-backtracking walks reveal compartments in sparse chromatin interaction networks
null
null
null
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chromatin communities stabilized by protein machinery play an essential role in gene regulation and refine the global polymeric folding of the chromatin fiber. However, treatment of these communities within classical network theory (the stochastic block model, SBM) does not take into account the intrinsic linear connectivity of chromatin loci. Here we propose the "polymer" block model, paving the way for community detection in polymer networks. On the basis of this new model we modify the non-backtracking flow operator and suggest the first protocol for annotation of compartmental domains in sparse single-cell Hi-C matrices. In particular, we prove that our approach corresponds to the maximum entropy principle. Benchmark analyses demonstrate that the spectrum of the polymer non-backtracking operator resolves the true compartmental structure up to the theoretical detectability threshold, while all commonly used operators fail above it. We test various operators on real data and conclude that the sizes of the non-backtracking single-cell domains are closest to the sizes of compartments derived from population data. Moreover, the domains found clearly segregate in gene density and correlate with the population compartmental mask, corroborating the biological significance of our annotation of chromatin compartmental domains in single-cell Hi-C matrices.
[ { "created": "Sun, 22 Mar 2020 17:08:34 GMT", "version": "v1" }, { "created": "Sat, 20 Jun 2020 21:28:44 GMT", "version": "v2" } ]
2020-06-23
[ [ "Polovnikov", "K.", "" ], [ "Gorsky", "A.", "" ], [ "Nechaev", "S.", "" ], [ "Razin", "S. V.", "" ], [ "Ulianov", "S.", "" ] ]
Chromatin communities stabilized by protein machinery play an essential role in gene regulation and refine the global polymeric folding of the chromatin fiber. However, treatment of these communities within classical network theory (the stochastic block model, SBM) does not take into account the intrinsic linear connectivity of chromatin loci. Here we propose the "polymer" block model, paving the way for community detection in polymer networks. On the basis of this new model we modify the non-backtracking flow operator and suggest the first protocol for annotation of compartmental domains in sparse single-cell Hi-C matrices. In particular, we prove that our approach corresponds to the maximum entropy principle. Benchmark analyses demonstrate that the spectrum of the polymer non-backtracking operator resolves the true compartmental structure up to the theoretical detectability threshold, while all commonly used operators fail above it. We test various operators on real data and conclude that the sizes of the non-backtracking single-cell domains are closest to the sizes of compartments derived from population data. Moreover, the domains found clearly segregate in gene density and correlate with the population compartmental mask, corroborating the biological significance of our annotation of chromatin compartmental domains in single-cell Hi-C matrices.
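For readers unfamiliar with the operator involved, the sketch below builds the generic non-backtracking (Hashimoto) matrix of a small graph and inspects its leading eigenvalues; this is the standard operator used in spectral community detection, not the authors' polymer-modified flow operator.

```python
# Generic non-backtracking (Hashimoto) matrix of a graph. Entries B[(u,v),(v,w)]
# equal 1 whenever the directed walk u->v can continue to w without reversing.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
edges = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(edges)}

B = np.zeros((len(edges), len(edges)))
for (u, v) in edges:
    for w in G.neighbors(v):
        if w != u:                      # forbid immediate backtracking
            B[index[(u, v)], index[(v, w)]] = 1.0

vals = np.linalg.eigvals(B)
print(sorted(vals.real, reverse=True)[:4])  # leading eigenvalues carry community structure
```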
1902.10228
Rohit Kate
Rohit J. Kate, Noah Pearce, Debesh Mazumdar, Vani Nilakantan
Continual Prediction from EHR Data for Inpatient Acute Kidney Injury
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acute kidney injury (AKI) commonly occurs in hospitalized patients and can lead to serious medical complications. In order to optimally predict AKI before it develops at any time during a hospital stay, we present a novel framework in which AKI is continually and automatically predicted from EHR data over the entire hospital stay, instead of at only one particular time. The continual model predicts AKI every time one of a patient's AKI-relevant variables changes in the EHR. The model is thus not only independent of a particular prediction time, but can also leverage the latest values of all AKI-relevant patient variables when making predictions. Using data from 44,691 hospital stays longer than 24 hours, we evaluated our continual prediction model and compared it with traditional one-time prediction models. Excluding hospital stays in which AKI occurred within 24 hours of admission, the one-time model predicting at 24 hours from admission obtained an area under the ROC curve (AUC) of 0.653, while the continual prediction model obtained an AUC of 0.724. The one-time model predicting at 24 hours obviously cannot predict AKI incidences that occur within 24 hours of admission; when these are included in the evaluation, its AUC drops to 0.57. In comparison, the continual prediction model had an AUC of 0.709. The continual prediction model also did better than all other one-time prediction models predicting at other fixed times. By taking into account the latest values of AKI-relevant patient variables, and by not being limited to a particular prediction time, the continual prediction model outperformed one-time prediction models in predicting AKI.
[ { "created": "Tue, 26 Feb 2019 21:16:24 GMT", "version": "v1" } ]
2019-02-28
[ [ "Kate", "Rohit J.", "" ], [ "Pearce", "Noah", "" ], [ "Mazumdar", "Debesh", "" ], [ "Nilakantan", "Vani", "" ] ]
Acute kidney injury (AKI) commonly occurs in hospitalized patients and can lead to serious medical complications. In order to optimally predict AKI before it develops at any time during a hospital stay, we present a novel framework in which AKI is continually and automatically predicted from EHR data over the entire hospital stay, instead of at only one particular time. The continual model predicts AKI every time one of a patient's AKI-relevant variables changes in the EHR. The model is thus not only independent of a particular prediction time, but can also leverage the latest values of all AKI-relevant patient variables when making predictions. Using data from 44,691 hospital stays longer than 24 hours, we evaluated our continual prediction model and compared it with traditional one-time prediction models. Excluding hospital stays in which AKI occurred within 24 hours of admission, the one-time model predicting at 24 hours from admission obtained an area under the ROC curve (AUC) of 0.653, while the continual prediction model obtained an AUC of 0.724. The one-time model predicting at 24 hours obviously cannot predict AKI incidences that occur within 24 hours of admission; when these are included in the evaluation, its AUC drops to 0.57. In comparison, the continual prediction model had an AUC of 0.709. The continual prediction model also did better than all other one-time prediction models predicting at other fixed times. By taking into account the latest values of AKI-relevant patient variables, and by not being limited to a particular prediction time, the continual prediction model outperformed one-time prediction models in predicting AKI.
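The event-driven re-scoring idea can be sketched in a few lines. The risk model below is a dummy stand-in (a fixed logistic score over hypothetical variables), not the authors' trained model.

```python
# Sketch of continual prediction: re-score a patient every time an AKI-relevant
# variable changes in the EHR. Weights and variable names are hypothetical.
import math

WEIGHTS = {"creatinine": 1.2, "urine_output": -0.8, "sbp": -0.01}  # assumed

def risk_score(state):
    z = sum(WEIGHTS[k] * v for k, v in state.items())
    return 1.0 / (1.0 + math.exp(-z))            # logistic link

def continual_predict(events, state):
    """events: time-ordered (timestamp, variable, value) tuples for one stay."""
    risks = []
    for ts, var, value in events:
        state[var] = value                        # keep the latest value of each variable
        risks.append((ts, risk_score(state)))     # one new prediction per change
    return risks

stay = [(1, "creatinine", 1.1), (5, "urine_output", 0.4), (9, "creatinine", 2.3)]
print(continual_predict(stay, {"creatinine": 1.0, "urine_output": 1.0, "sbp": 120.0}))
```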
1911.05943
Shashwat Shukla
Shashwat Shukla, Hideaki Shimazaki, Udayan Ganguly
Structured Mean-field Variational Inference and Learning in Winner-take-all Spiking Neural Networks
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Bayesian view of the brain hypothesizes that the brain constructs a generative model of the world and uses it to make inferences via Bayes' rule. Although many types of approximate inference schemes have been proposed for hierarchical Bayesian models of the brain, the question of how these distinct inference procedures can be realized by hierarchical networks of spiking neurons remains largely unresolved. Based on a previously proposed multi-compartment neuron model in which dendrites perform logarithmic compression, and on stochastic spiking winner-take-all (WTA) circuits in which the firing probability of each neuron is normalized by the activities of other neurons, here we construct Spiking Neural Networks that perform \emph{structured} mean-field variational inference and learning on hierarchical directed probabilistic graphical models with discrete random variables. In these models, we do away with the symmetric synaptic weights previously assumed for \emph{unstructured} mean-field variational inference by learning the feedback and feedforward weights separately. The resulting online learning rules take the form of an error-modulated local Spike-Timing-Dependent Plasticity rule. Importantly, we consider two types of WTA circuits, in which either only one neuron is allowed to fire at a time (hard WTA) or neurons can fire independently (soft WTA), which makes neurons in these circuits operate in regimes of temporal and rate coding, respectively. We show how the hard WTA circuits can be used to perform Gibbs sampling, whereas the soft WTA circuits can be used to implement a message-passing algorithm that computes the marginals approximately. Notably, a simple change in the amount of lateral inhibition realizes switching between the hard and soft WTA spiking regimes. Hence the proposed network provides a unified view of the two previously disparate modes of inference and coding by spiking neurons.
[ { "created": "Thu, 14 Nov 2019 05:31:11 GMT", "version": "v1" } ]
2019-11-15
[ [ "Shukla", "Shashwat", "" ], [ "Shimazaki", "Hideaki", "" ], [ "Ganguly", "Udayan", "" ] ]
The Bayesian view of the brain hypothesizes that the brain constructs a generative model of the world and uses it to make inferences via Bayes' rule. Although many types of approximate inference schemes have been proposed for hierarchical Bayesian models of the brain, the question of how these distinct inference procedures can be realized by hierarchical networks of spiking neurons remains largely unresolved. Based on a previously proposed multi-compartment neuron model in which dendrites perform logarithmic compression, and on stochastic spiking winner-take-all (WTA) circuits in which the firing probability of each neuron is normalized by the activities of other neurons, here we construct Spiking Neural Networks that perform \emph{structured} mean-field variational inference and learning on hierarchical directed probabilistic graphical models with discrete random variables. In these models, we do away with the symmetric synaptic weights previously assumed for \emph{unstructured} mean-field variational inference by learning the feedback and feedforward weights separately. The resulting online learning rules take the form of an error-modulated local Spike-Timing-Dependent Plasticity rule. Importantly, we consider two types of WTA circuits, in which either only one neuron is allowed to fire at a time (hard WTA) or neurons can fire independently (soft WTA), which makes neurons in these circuits operate in regimes of temporal and rate coding, respectively. We show how the hard WTA circuits can be used to perform Gibbs sampling, whereas the soft WTA circuits can be used to implement a message-passing algorithm that computes the marginals approximately. Notably, a simple change in the amount of lateral inhibition realizes switching between the hard and soft WTA spiking regimes. Hence the proposed network provides a unified view of the two previously disparate modes of inference and coding by spiking neurons.
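The hard-vs-soft WTA distinction can be illustrated with a toy sampler. Modeling lateral inhibition as a softmax inverse temperature is an assumption of this sketch, not the paper's circuit model.

```python
# Sketch of hard vs. soft WTA: firing probabilities are normalized across the
# circuit, and a single "inhibition" parameter (assumed here to act like a
# softmax inverse temperature) switches between the two regimes.
import numpy as np

rng = np.random.default_rng(1)

def wta_sample(membrane_potentials, inhibition):
    """Sample which neuron fires. Large `inhibition` -> hard WTA (one winner)."""
    u = np.asarray(membrane_potentials, dtype=float)
    p = np.exp(inhibition * u)
    p /= p.sum()                       # normalization by the other neurons' activity
    return rng.choice(len(u), p=p)

u = [0.2, 1.0, 0.4]
soft = [wta_sample(u, inhibition=1.0) for _ in range(1000)]   # rate-coding regime
hard = [wta_sample(u, inhibition=20.0) for _ in range(1000)]  # winner-take-all regime
print(np.bincount(soft, minlength=3) / 1000, np.bincount(hard, minlength=3) / 1000)
```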
1710.03436
Susan Cheng
Joseph Antonelli, Brian Claggett, Mir Henglin, Jeramie D. Watrous, Kim A. Lehmann, Pavel Hushcha, Olga Demler, Samia Mora, Teemu Niiranen, Alexandre C. Pereira, Mohit Jain, Susan Cheng
Statistical Methods and Workflow for Analyzing Human Metabolomics Data
null
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput metabolomics investigations, when conducted in large human cohorts, represent a potentially powerful tool for elucidating the biochemical diversity and mechanisms underlying human health and disease. Large-scale metabolomics data, generated using targeted or nontargeted platforms, are increasingly more common. Appropriate statistical analysis of these complex high-dimensional data is critical for extracting meaningful results from such large-scale human metabolomics studies. Herein, we consider the main statistical analytical approaches that have been employed in human metabolomics studies. Based on the lessons learned and collective experience to date in the field, we propose a step-by-step framework for pursuing statistical analyses of human metabolomics data. We discuss the range of options and potential approaches that may be employed at each stage of data management, analysis, and interpretation, and offer guidance on analytical considerations that are important for implementing an analysis workflow. Certain pervasive analytical challenges facing human metabolomics warrant ongoing research. Addressing these challenges will allow for more standardization in the field and lead to analytical advances in metabolomics investigations with the potential to elucidate novel mechanisms underlying human health and disease.
[ { "created": "Tue, 10 Oct 2017 08:02:51 GMT", "version": "v1" }, { "created": "Tue, 20 Feb 2018 17:28:27 GMT", "version": "v2" } ]
2018-02-21
[ [ "Antonelli", "Joseph", "" ], [ "Claggett", "Brian", "" ], [ "Henglin", "Mir", "" ], [ "Watrous", "Jeramie D.", "" ], [ "Lehmann", "Kim A.", "" ], [ "Hushcha", "Pavel", "" ], [ "Demler", "Olga", "" ], [ "Mora", "Samia", "" ], [ "Niiranen", "Teemu", "" ], [ "Pereira", "Alexandre C.", "" ], [ "Jain", "Mohit", "" ], [ "Cheng", "Susan", "" ] ]
High-throughput metabolomics investigations, when conducted in large human cohorts, represent a potentially powerful tool for elucidating the biochemical diversity and mechanisms underlying human health and disease. Large-scale metabolomics data, generated using targeted or nontargeted platforms, are increasingly more common. Appropriate statistical analysis of these complex high-dimensional data is critical for extracting meaningful results from such large-scale human metabolomics studies. Herein, we consider the main statistical analytical approaches that have been employed in human metabolomics studies. Based on the lessons learned and collective experience to date in the field, we propose a step-by-step framework for pursuing statistical analyses of human metabolomics data. We discuss the range of options and potential approaches that may be employed at each stage of data management, analysis, and interpretation, and offer guidance on analytical considerations that are important for implementing an analysis workflow. Certain pervasive analytical challenges facing human metabolomics warrant ongoing research. Addressing these challenges will allow for more standardization in the field and lead to analytical advances in metabolomics investigations with the potential to elucidate novel mechanisms underlying human health and disease.
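As a concrete (and deliberately generic) instance of the per-metabolite association workflow surveyed above, the sketch below log-transforms and standardizes an abundance matrix, regresses each metabolite on a phenotype, and controls the false discovery rate. The data and dimensions are synthetic placeholders, not the paper's recommendations for any specific cohort.

```python
# Generic per-metabolite association workflow: log-transform, standardize,
# test each metabolite against a phenotype, then apply Benjamini-Hochberg FDR.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n, m = 300, 50                                  # samples, metabolites (synthetic)
X = rng.lognormal(size=(n, m))                  # raw abundances
phenotype = rng.normal(size=n)

Z = np.log(X)
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)        # per-metabolite standardization

pvals = np.array([stats.pearsonr(Z[:, j], phenotype)[1] for j in range(m)])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites pass FDR 5%")
```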
2005.12362
Benjamin Mauroy
Fr\'ed\'erique No\"el, Cyril Karamaoun, Jerome A. Dempsey and Benjamin Mauroy
The origin of the allometric scaling of lung ventilation in mammals
Version 6 of this preprint has been peer-reviewed and recommended by Peer Community In Mathematical and Computational Biology (https://doi.org/10.24072/pci.mcb.100005)
null
10.24072/pci.mcb.100005
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A model of the optimal control of ventilation has recently been developed for humans. This model highlights the importance of the localization of the transition between convective and diffusive transport of respiratory gas. This localization determines how ventilation should be controlled in order to minimize its energetic cost at any metabolic regime. We generalized this model to any mammal, based on the core morphometric characteristics shared by all mammalian lungs and on their allometric scaling from the literature. Since the main energetic costs of ventilation are related to convective transport, we prove that, for all mammals, the localization of the shift from convective to diffusive transport plays a critical role in keeping this cost low while fulfilling the lung's function. Our model predicts for the first time the localization of this transition that minimizes the energetic cost of ventilation, depending on mammal mass and metabolic regime. From this optimal localization, we are able to predict allometric scaling laws for both tidal volumes and breathing rates at any metabolic rate. We ran our model for the three common metabolic rates -- basal, field and maximal -- and showed that our predictions accurately reproduce the experimental data available in the literature. Our analysis supports the hypothesis that mammals' allometric scaling laws for tidal volumes and breathing rates at a given metabolic rate are driven by a few core geometrical characteristics shared by mammalian lungs and by the physical processes of respiratory gas transport.
[ { "created": "Mon, 4 May 2020 22:17:21 GMT", "version": "v1" }, { "created": "Wed, 27 May 2020 12:16:38 GMT", "version": "v2" }, { "created": "Wed, 26 Aug 2020 07:26:56 GMT", "version": "v3" }, { "created": "Thu, 4 Mar 2021 14:50:48 GMT", "version": "v4" }, { "created": "Tue, 29 Jun 2021 09:41:15 GMT", "version": "v5" }, { "created": "Fri, 3 Sep 2021 08:58:57 GMT", "version": "v6" } ]
2021-09-06
[ [ "Noël", "Frédérique", "" ], [ "Karamaoun", "Cyril", "" ], [ "Dempsey", "Jerome A.", "" ], [ "Mauroy", "Benjamin", "" ] ]
A model of the optimal control of ventilation has recently been developed for humans. This model highlights the importance of the localization of the transition between convective and diffusive transport of respiratory gas. This localization determines how ventilation should be controlled in order to minimize its energetic cost at any metabolic regime. We generalized this model to any mammal, based on the core morphometric characteristics shared by all mammalian lungs and on their allometric scaling from the literature. Since the main energetic costs of ventilation are related to convective transport, we prove that, for all mammals, the localization of the shift from convective to diffusive transport plays a critical role in keeping this cost low while fulfilling the lung's function. Our model predicts for the first time the localization of this transition that minimizes the energetic cost of ventilation, depending on mammal mass and metabolic regime. From this optimal localization, we are able to predict allometric scaling laws for both tidal volumes and breathing rates at any metabolic rate. We ran our model for the three common metabolic rates -- basal, field and maximal -- and showed that our predictions accurately reproduce the experimental data available in the literature. Our analysis supports the hypothesis that mammals' allometric scaling laws for tidal volumes and breathing rates at a given metabolic rate are driven by a few core geometrical characteristics shared by mammalian lungs and by the physical processes of respiratory gas transport.
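Allometric scaling laws of the form y = a * M^b, like those discussed above, are conventionally estimated by linear regression in log-log space. The masses and tidal volumes below are synthetic illustrations, not the paper's predictions.

```python
# Estimating an allometric exponent by log-log regression (illustrative data).
import numpy as np

mass = np.array([0.02, 0.3, 4.0, 70.0, 500.0])       # body mass, kg (hypothetical)
tidal_volume = 7.7 * mass**1.04                      # synthetic power-law data

b, log_a = np.polyfit(np.log(mass), np.log(tidal_volume), 1)
print(f"estimated exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")
```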
1605.09106
Tomoyuki Obuchi
Tomoyuki Obuchi, Yoshiyuki Kabashima, and Kei Tokita
Multiple peaks of species abundance distributions induced by sparse interactions
6 pages, 5 figures
null
10.1103/PhysRevE.94.022312
null
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate replicator dynamics with "sparse" symmetric interactions, which represent specialist-specialist interactions in ecological communities. By considering a large self-interaction $u$, we conduct a perturbative expansion which shows that the nature of the interactions has a direct impact on the species abundance distribution. The central results are the coexistence of all species in a realistic range of the model parameters, and the finding that a certain discrete nature of the interactions induces multiple peaks in the species abundance distribution, offering a possible theoretical explanation for the multiple peaks observed in various field studies. To obtain more quantitative information, we also construct a non-perturbative theory which becomes exact on tree-like networks if all species coexist, providing exact critical values of $u$ below which extinct species emerge. Numerical simulations in a variety of situations are conducted, and they clarify the robustness of the presented mechanism of all-species coexistence and of the multiple peaks in the species abundance distributions.
[ { "created": "Mon, 30 May 2016 05:11:44 GMT", "version": "v1" } ]
2016-09-21
[ [ "Obuchi", "Tomoyuki", "" ], [ "Kabashima", "Yoshiyuki", "" ], [ "Tokita", "Kei", "" ] ]
We investigate replicator dynamics with "sparse" symmetric interactions, which represent specialist-specialist interactions in ecological communities. By considering a large self-interaction $u$, we conduct a perturbative expansion which shows that the nature of the interactions has a direct impact on the species abundance distribution. The central results are the coexistence of all species in a realistic range of the model parameters, and the finding that a certain discrete nature of the interactions induces multiple peaks in the species abundance distribution, offering a possible theoretical explanation for the multiple peaks observed in various field studies. To obtain more quantitative information, we also construct a non-perturbative theory which becomes exact on tree-like networks if all species coexist, providing exact critical values of $u$ below which extinct species emerge. Numerical simulations in a variety of situations are conducted, and they clarify the robustness of the presented mechanism of all-species coexistence and of the multiple peaks in the species abundance distributions.
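A minimal replicator-dynamics integration is shown below, with a symmetric random interaction matrix and a uniform self-interaction as a toy stand-in for the sparse interactions studied above; the matrix and parameter values are assumptions for illustration only.

```python
# Replicator equation dx_i/dt = x_i (f_i - <f>) with f = J x, integrated in SciPy.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
S, u = 20, 5.0
J = rng.normal(0, 1, (S, S))
J = (J + J.T) / 2                      # symmetric interactions
np.fill_diagonal(J, -u)                # strong self-limitation (self-interaction)

def replicator(t, x):
    f = J @ x                          # fitness of each species
    return x * (f - x @ f)             # x @ f is the mean fitness <f>

x0 = np.full(S, 1.0 / S)
sol = solve_ivp(replicator, (0, 50), x0)
print(sol.y[:, -1].round(3))           # final species abundances
```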
1607.08849
Carlos Sevcik
Carlos Sevcik
Caveat on the Boltzmann distribution function use in biology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sigmoid semilogarithmic functions with the shape of Boltzmann equations have become extremely popular for describing diverse biological situations. Part of this popularity is due to the easy availability of software which fits Boltzmann functions to data without requiring much knowledge of the fitting procedure or of the statistical properties of the parameters it produces. The purpose of this paper is to explore the plasticity of the Boltzmann function in fitting data, some aspects of the optimization procedure used to fit the function to data, and how to use this plastic function to differentiate treatment effects on data and to assess their statistical significance.
[ { "created": "Fri, 29 Jul 2016 15:24:59 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2017 17:55:07 GMT", "version": "v2" } ]
2017-03-31
[ [ "Sevcik", "Carlos", "" ] ]
Sigmoid semilogarithmic functions with the shape of Boltzmann equations have become extremely popular for describing diverse biological situations. Part of this popularity is due to the easy availability of software which fits Boltzmann functions to data without requiring much knowledge of the fitting procedure or of the statistical properties of the parameters it produces. The purpose of this paper is to explore the plasticity of the Boltzmann function in fitting data, some aspects of the optimization procedure used to fit the function to data, and how to use this plastic function to differentiate treatment effects on data and to assess their statistical significance.
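The kind of routine Boltzmann fit whose pitfalls this paper examines looks like the following sketch; the functional form is the standard four-parameter Boltzmann sigmoid, and the data points are synthetic.

```python
# Fitting y = bottom + (top - bottom) / (1 + exp((x50 - x)/slope)) with SciPy.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, bottom, top, x50, slope):
    return bottom + (top - bottom) / (1.0 + np.exp((x50 - x) / slope))

x = np.linspace(-80, 20, 25)                         # e.g. membrane potential, mV
y = boltzmann(x, 0.0, 1.0, -30.0, 8.0)
y += np.random.default_rng(4).normal(0, 0.03, x.size)

popt, pcov = curve_fit(boltzmann, x, y, p0=[0, 1, -25, 5])
perr = np.sqrt(np.diag(pcov))                        # standard errors of parameters
print(dict(zip(["bottom", "top", "x50", "slope"], popt.round(3))), perr.round(3))
```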
2206.12692
Giorgio Sonnino
Giorgio Sonnino, Philippe Peeters, Pasquale Nardone
Modelling the Spread of SARS-CoV2 and its variants. Comparison with Real Data. Relations that have to be Satisfied to Achieve the Total Regression of the SARS-CoV2 Infection
51 pages, 39 Figures. Review/Research Manuscript on modelling the dynamics of SARS-CoV 2 Infection. arXiv admin note: text overlap with arXiv:2101.05596, arXiv:2012.01869
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
This work provides an overview of deterministic and stochastic models that we have previously proposed to study the transmission dynamics of the Coronavirus Disease 2019 (COVID-19) in Europe and the USA. Briefly, we describe realistic deterministic and stochastic models for the evolution of the COVID-19 pandemic, subject to lockdown and quarantine measures, which take into account the time delay for recovery or death processes. Realistic dynamic equations for the entire process have been derived by adopting the so-called "kinetic-type reactions approach". The lockdown and quarantine measures are modelled by a kind of inhibitor reaction in which susceptible and infected individuals can be "trapped" into inactive states. The dynamics of the recovered people is obtained by counting only people who are traced back to hospitalised infected people. To model the role of the hospitals, we take inspiration from the Michaelis-Menten enzyme-substrate reaction model (the so-called "MM reaction"), where the "enzyme" is associated with the "available hospital beds", the "substrate" with the "infected people", and the "product" with the "recovered people", respectively. The statistical properties of the models, in particular the relevant correlation functions and the probability density functions, have been duly evaluated. We validate our theoretical predictions against a large series of experimental data for Italy, Germany, France, Belgium and the United States, and we also compare the data for Italy and Belgium with the theoretical predictions of the logistic model. We find that our predictions are in good agreement with the real data since the onset of COVID-19, in contrast to the logistic model, which applies only in the first days of the pandemic. In the final part of the work, we derive the theoretical relationships that must be satisfied to achieve the total regression of the SARS-CoV-2 infection.
[ { "created": "Sat, 25 Jun 2022 16:41:45 GMT", "version": "v1" }, { "created": "Thu, 7 Jul 2022 21:36:27 GMT", "version": "v2" } ]
2022-07-11
[ [ "Sonnino", "Giorgio", "" ], [ "Peeters", "Philippe", "" ], [ "Nardone", "Pasquale", "" ] ]
This work provides an overview of deterministic and stochastic models that we have previously proposed to study the transmission dynamics of the Coronavirus Disease 2019 (COVID-19) in Europe and the USA. Briefly, we describe realistic deterministic and stochastic models for the evolution of the COVID-19 pandemic, subject to lockdown and quarantine measures, which take into account the time delay for recovery or death processes. Realistic dynamic equations for the entire process have been derived by adopting the so-called "kinetic-type reactions approach". The lockdown and quarantine measures are modelled by a kind of inhibitor reaction in which susceptible and infected individuals can be "trapped" into inactive states. The dynamics of the recovered people is obtained by counting only people who are traced back to hospitalised infected people. To model the role of the hospitals, we take inspiration from the Michaelis-Menten enzyme-substrate reaction model (the so-called "MM reaction"), where the "enzyme" is associated with the "available hospital beds", the "substrate" with the "infected people", and the "product" with the "recovered people", respectively. The statistical properties of the models, in particular the relevant correlation functions and the probability density functions, have been duly evaluated. We validate our theoretical predictions against a large series of experimental data for Italy, Germany, France, Belgium and the United States, and we also compare the data for Italy and Belgium with the theoretical predictions of the logistic model. We find that our predictions are in good agreement with the real data since the onset of COVID-19, in contrast to the logistic model, which applies only in the first days of the pandemic. In the final part of the work, we derive the theoretical relationships that must be satisfied to achieve the total regression of the SARS-CoV-2 infection.
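The Michaelis-Menten hospital idea described above can be sketched as a saturating recovery term in a simple SIR-type system. Everything below (structure and parameter values) is an illustrative assumption, not the authors' fitted model.

```python
# Sketch of the MM idea: hospitalised infected people ("substrate") become
# recovered people ("product") at a rate limited by available beds ("enzyme").
import numpy as np
from scipy.integrate import solve_ivp

beta, Vmax, Km, mu = 0.25, 0.02, 0.05, 0.01   # infection, max recovery, saturation, death

def rhs(t, y):
    S, I, R = y
    recovery = Vmax * I / (Km + I)            # MM-type term: saturates when beds fill up
    dS = -beta * S * I
    dI = beta * S * I - recovery - mu * I
    dR = recovery
    return [dS, dI, dR]

sol = solve_ivp(rhs, (0, 300), [0.99, 0.01, 0.0], max_step=1.0)
print(f"peak infected fraction: {sol.y[1].max():.3f}")
```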
1610.09423
Neerja Garikipati
Neerja Garikipati
Computational genomic algorithms for miRNA-based diagnosis of lung cancer: the potential of machine learning
18 pages, 5 figures, 5 tables, associated code on Github: https://github.com/neerja-g/machine-learning-miRNA
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The advent of large-scale, high-throughput genomic screening has introduced a wide range of tests for diagnostic purposes. Prominent among them are tests using miRNA expression levels. Genomics and proteomics now provide expression levels of hundreds of miRNAs at a time. However, for actual diagnostic tools to become a reality, methods must simultaneously be developed to interpret the large amounts of miRNA expression data that can be generated from a single patient sample. Because these data are numeric, quantitative methods must be developed. Statistics such as p-values and log fold change give some insight, but the diagnostic effectiveness of each miRNA test must first be evaluated. Here, the author has developed a traditional, sensitivity- and specificity-based algorithm, as well as a modern machine learning algorithm, and evaluated their diagnostic potential for lung cancer against a publicly available database. The findings suggest that the machine learning algorithm achieves higher accuracy (97% for cancerous and 73% for normal samples), in addition to providing confidence intervals that could offer valuable diagnostic support. The machine learning algorithm also has significant potential for expansion to more complex diagnoses of lung cancer sub-types, to other cancers, and to diseases beyond cancer. Both algorithms are available in the Github repo: https://github.com/neerja-g/machine-learning-miRNA.
[ { "created": "Fri, 28 Oct 2016 23:03:30 GMT", "version": "v1" }, { "created": "Fri, 4 Nov 2016 23:18:10 GMT", "version": "v2" } ]
2016-11-08
[ [ "Garikipati", "Neerja", "" ] ]
The advent of large-scale, high-throughput genomic screening has introduced a wide range of tests for diagnostic purposes. Prominent among them are tests using miRNA expression levels. Genomics and proteomics now provide expression levels of hundreds of miRNAs at a time. However, for actual diagnostic tools to become a reality, methods must simultaneously be developed to interpret the large amounts of miRNA expression data that can be generated from a single patient sample. Because these data are numeric, quantitative methods must be developed. Statistics such as p-values and log fold change give some insight, but the diagnostic effectiveness of each miRNA test must first be evaluated. Here, the author has developed a traditional, sensitivity- and specificity-based algorithm, as well as a modern machine learning algorithm, and evaluated their diagnostic potential for lung cancer against a publicly available database. The findings suggest that the machine learning algorithm achieves higher accuracy (97% for cancerous and 73% for normal samples), in addition to providing confidence intervals that could offer valuable diagnostic support. The machine learning algorithm also has significant potential for expansion to more complex diagnoses of lung cancer sub-types, to other cancers, and to diseases beyond cancer. Both algorithms are available in the Github repo: https://github.com/neerja-g/machine-learning-miRNA.
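A machine-learning miRNA classifier of the general kind described above can be sketched as follows; the expression matrix and labels here are synthetic, and the random-forest choice is an assumption of this sketch, not necessarily the author's algorithm (which is available in the linked repository).

```python
# Sketch: cross-validated classifier on an miRNA expression matrix (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_samples, n_mirnas = 120, 200
X = rng.normal(size=(n_samples, n_mirnas))          # miRNA expression levels
# Labels driven by the first five miRNAs plus noise, so there is signal to learn.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```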
1711.10674
Yunjie Zhao
Yiren Jian, Chen Zeng, Yunjie Zhao
Direct Information Reweighted by Contact Templates: Improved RNA Contact Prediction by Combining Structural Features
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is acknowledged that co-evolutionary nucleotide-nucleotide interactions are essential for RNA structures and functions. Currently, direct coupling analysis (DCA) infers nucleotide contacts in a sequence from its homologous sequence alignment across different species. DCA and similar approaches that use sequence information alone usually yield low accuracy, especially when the available homologous sequences are limited. Here we present a new method that incorporates a Restricted Boltzmann Machine (RBM) to augment the information on sequence co-variations with structural patterns in contact inference. We thus name our method DIRECT, which stands for Direct Information REweighted by Contact Templates. Benchmark tests demonstrate that DIRECT produces a substantial improvement of 13% in accuracy, on average, for contact prediction in comparison to traditional DCA. These results suggest that DIRECT could be used to improve predictions of RNA tertiary structures and functions. The source code and dataset of DIRECT are available at http://zhao.phy.ccnu.edu.cn:8122/DIRECT/index.html.
[ { "created": "Wed, 29 Nov 2017 04:05:41 GMT", "version": "v1" } ]
2017-11-30
[ [ "Jian", "Yiren", "" ], [ "Zeng", "Chen", "" ], [ "Zhao", "Yunjie", "" ] ]
It is acknowledged that co-evolutionary nucleotide-nucleotide interactions are essential for RNA structures and functions. Currently, direct coupling analysis (DCA) infers nucleotide contacts in a sequence from its homologous sequence alignment across different species. DCA and similar approaches that use sequence information alone usually yield low accuracy, especially when the available homologous sequences are limited. Here we present a new method that incorporates a Restricted Boltzmann Machine (RBM) to augment the information on sequence co-variations with structural patterns in contact inference. We thus name our method DIRECT, which stands for Direct Information REweighted by Contact Templates. Benchmark tests demonstrate that DIRECT produces a substantial improvement of 13% in accuracy, on average, for contact prediction in comparison to traditional DCA. These results suggest that DIRECT could be used to improve predictions of RNA tertiary structures and functions. The source code and dataset of DIRECT are available at http://zhao.phy.ccnu.edu.cn:8122/DIRECT/index.html.
0803.2860
Toan T. Nguyen
Rui Zhang and Toan T. Nguyen
A model of HIV budding and self-assembly, role of cell membrane
null
null
10.1103/PhysRevE.78.051903
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Budding from the plasma membrane of the host cell is an indispensable step in the life cycle of the Human Immunodeficiency Virus (HIV), which belongs to a large family of enveloped RNA viruses, the retroviruses. Unlike regular enveloped viruses, retrovirus budding happens {\em concurrently} with the self-assembly of retrovirus protein subunits (Gags) into spherical virus capsids on the cell membrane. Motivated by this unique budding and assembly mechanism, we study the free energy profile of retrovirus budding, taking into account the Gag-Gag attraction energy and the membrane elastic energy. We find that if the Gag-Gag attraction is strong, budding always proceeds to completion. During the early stage of budding, the zenith angle of partially budded capsids, $\alpha$, increases with time as $\alpha \propto t^{1/3}$. However, when the Gag-Gag attraction is weak, a metastable state of partial budding appears. The zenith angle of these partially spherical capsids is given by $\alpha_0\simeq(\tau^2/\kappa\sigma)^{1/4}$ in a linear approximation, where $\kappa$ and $\sigma$ are the bending modulus and the surface tension of the membrane, and $\tau$ is a line tension of the capsid proportional to the strength of the Gag-Gag attraction. Numerically, we find $\alpha_0<0.3\pi$ without any approximations. Using experimental parameters, we show that HIV budding and assembly always proceed to completion under normal biological conditions. On the other hand, by changing the Gag-Gag interaction strength or the membrane rigidity, it is relatively easy to tune the system back and forth between complete budding and partial budding. Our model agrees reasonably well with experiments observing partial budding of retroviruses, including HIV.
[ { "created": "Wed, 19 Mar 2008 19:52:12 GMT", "version": "v1" } ]
2013-05-29
[ [ "Zhang", "Rui", "" ], [ "Nguyen", "Toan T.", "" ] ]
Budding from the plasma membrane of the host cell is an indispensable step in the life cycle of the Human Immunodeficiency Virus (HIV), which belongs to a large family of enveloped RNA viruses, the retroviruses. Unlike regular enveloped viruses, retrovirus budding happens {\em concurrently} with the self-assembly of retrovirus protein subunits (Gags) into spherical virus capsids on the cell membrane. Motivated by this unique budding and assembly mechanism, we study the free energy profile of retrovirus budding, taking into account the Gag-Gag attraction energy and the membrane elastic energy. We find that if the Gag-Gag attraction is strong, budding always proceeds to completion. During the early stage of budding, the zenith angle of partially budded capsids, $\alpha$, increases with time as $\alpha \propto t^{1/3}$. However, when the Gag-Gag attraction is weak, a metastable state of partial budding appears. The zenith angle of these partially spherical capsids is given by $\alpha_0\simeq(\tau^2/\kappa\sigma)^{1/4}$ in a linear approximation, where $\kappa$ and $\sigma$ are the bending modulus and the surface tension of the membrane, and $\tau$ is a line tension of the capsid proportional to the strength of the Gag-Gag attraction. Numerically, we find $\alpha_0<0.3\pi$ without any approximations. Using experimental parameters, we show that HIV budding and assembly always proceed to completion under normal biological conditions. On the other hand, by changing the Gag-Gag interaction strength or the membrane rigidity, it is relatively easy to tune the system back and forth between complete budding and partial budding. Our model agrees reasonably well with experiments observing partial budding of retroviruses, including HIV.
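The linear-approximation formula quoted above is easy to evaluate numerically; a dimensional check confirms that $\tau^2/\kappa\sigma$ is dimensionless. The parameter values below are order-of-magnitude placeholders, not the paper's fitted numbers.

```python
# Evaluating alpha_0 ~ (tau^2 / (kappa * sigma))^(1/4) with placeholder parameters.
import numpy as np

kT = 4.1e-21                     # thermal energy at room temperature, J
kappa = 20 * kT                  # membrane bending modulus, J (assumed)
sigma = 1e-5                     # membrane surface tension, J/m^2 (assumed)
tau = 1e-12                      # capsid line tension, J/m (assumed)

alpha0 = (tau**2 / (kappa * sigma))**0.25
print(f"alpha_0 = {alpha0:.2f} rad = {np.degrees(alpha0):.1f} deg "
      f"(partial budding requires alpha_0 < 0.3*pi = {0.3 * np.pi:.2f} rad)")
```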
1612.01602
Joseph Rusinko
Joseph Rusinko, Matthew McPartlon
Species tree estimation using Neighbor Joining
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent theoretical work has demonstrated that Neighbor Joining applied to concatenated DNA sequences is a statistically consistent method of species tree reconstruction. This brief note compares the accuracy of this approach to that of other popular statistically consistent species tree reconstruction algorithms, including ASTRAL-II, Neighbor Joining using average gene-tree internode distances (NJst), and SVD-Quartets+PAUP*, as well as concatenation using maximum likelihood (RAxML). We find that the faster Neighbor Joining, applied to concatenated sequences, is among the most effective of these methods for accurate species tree reconstruction.
[ { "created": "Tue, 6 Dec 2016 00:28:13 GMT", "version": "v1" } ]
2016-12-07
[ [ "Rusinko", "Joseph", "" ], [ "McPartlon", "Matthew", "" ] ]
Recent theoretical work has demonstrated that Neighbor Joining applied to concatenated DNA sequences is a statistically consistent method of species tree reconstruction. This brief note compares the accuracy of this approach to that of other popular statistically consistent species tree reconstruction algorithms, including ASTRAL-II, Neighbor Joining using average gene-tree internode distances (NJst), and SVD-Quartets+PAUP*, as well as concatenation using maximum likelihood (RAxML). We find that the faster Neighbor Joining, applied to concatenated sequences, is among the most effective of these methods for accurate species tree reconstruction.
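Neighbor Joining from a pairwise distance matrix, the building block of the approach compared above, is readily available in Biopython; the tiny distance matrix below is a made-up example standing in for distances computed from concatenated sequences.

```python
# Neighbor Joining on a pairwise distance matrix via Biopython.
import sys
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

names = ["taxonA", "taxonB", "taxonC", "taxonD"]
# Lower-triangular pairwise distances (e.g. p-distances of concatenated genes).
dm = DistanceMatrix(names, [[0],
                            [0.10, 0],
                            [0.25, 0.22, 0],
                            [0.40, 0.38, 0.30, 0]])

tree = DistanceTreeConstructor().nj(dm)   # NJ species tree
Phylo.write(tree, sys.stdout, "newick")
```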
2211.00245
Andrey L. Shilnikov
James Scully, Jassem Bourahmah, David Bloom, Andrey L Shilnikov
Pairing cellular and synaptic dynamics into building blocks of rhythmic neural circuits
null
null
null
null
q-bio.NC math.DS nlin.AO nlin.CD
http://creativecommons.org/licenses/by/4.0/
The purpose of this paper is threefold: to serve as an instructive resource and a reference catalog for biologically plausible modeling with i) conductance-based models, coupled with ii) strength-varying slow synapse models, culminating in iii) two canonical pairwise rhythm-generating networks. We document the properties of the basic network components: cell models and synaptic models, which are prerequisites for proper network assembly. Using the slow-fast decomposition we present a detailed analysis of the cellular dynamics, including a discussion of the most relevant bifurcations. Several approaches to modeling synaptic coupling are also discussed, and a new logistic model of slow synapses is introduced. Finally, we describe and examine two types of bicellular rhythm-generating networks: i) half-center oscillators and ii) excitatory-inhibitory pairs, and we elucidate a key principle: the network hysteresis underlying the stable onset of emergent slow bursting in these neural building blocks. These two-cell networks are a basis for more complicated neural circuits of rhythmogenesis and feature in our models of swim central pattern generators.
[ { "created": "Tue, 1 Nov 2022 03:20:09 GMT", "version": "v1" } ]
2022-11-02
[ [ "Scully", "James", "" ], [ "Bourahmah", "Jassem", "" ], [ "Bloom", "David", "" ], [ "Shilnikov", "Andrey L", "" ] ]
The purpose of this paper is threefold: to serve as an instructive resource and a reference catalog for biologically plausible modeling with i) conductance-based models, coupled with ii) strength-varying slow synapse models, culminating in iii) two canonical pairwise rhythm-generating networks. We document the properties of the basic network components: cell models and synaptic models, which are prerequisites for proper network assembly. Using the slow-fast decomposition we present a detailed analysis of the cellular dynamics, including a discussion of the most relevant bifurcations. Several approaches to modeling synaptic coupling are also discussed, and a new logistic model of slow synapses is introduced. Finally, we describe and examine two types of bicellular rhythm-generating networks: i) half-center oscillators and ii) excitatory-inhibitory pairs, and we elucidate a key principle: the network hysteresis underlying the stable onset of emergent slow bursting in these neural building blocks. These two-cell networks are a basis for more complicated neural circuits of rhythmogenesis and feature in our models of swim central pattern generators.
1809.03897
Divine Wanduku (Dr. )
Divine Wanduku
A comparative stochastic and deterministic study of a class of epidemic dynamic models for malaria: exploring the impacts of noise on eradication and persistence of disease
arXiv admin note: substantial text overlap with arXiv:1808.09842, arXiv:1809.03866
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
A comparative stochastic and deterministic study of a family of SEIRS epidemic dynamic models for malaria is presented. The family type is determined by the qualitative behavior of the nonlinear incidence rates of the disease. Furthermore, the malaria models exhibit three random delays: two of the delays represent the incubation periods of the disease inside the vector and human hosts, whereas the third delay is the period of effective natural immunity against the disease. The stochastic malaria models are improved by including random environmental fluctuations in the disease transmission and natural death rates of humans. Insights into the effects of the delays and the noises on the malaria dynamics are gained via comparative analyses of the family of stochastic and deterministic models, and via further critical examination of the significance of the intensities of the white noises in the system for (1) the existence and stability of the equilibria, and (2) the eradication and persistence of malaria in the human population. The basic reproduction numbers and other threshold values for malaria in the stochastic and deterministic settings are determined and compared for the cases of constant or random delays in the system. Numerical simulation results are presented.
[ { "created": "Mon, 10 Sep 2018 14:19:17 GMT", "version": "v1" } ]
2018-09-12
[ [ "Wanduku", "Divine", "" ] ]
A comparative stochastic and deterministic study of a family of SEIRS epidemic dynamic models for malaria is presented. The family type is determined by the qualitative behavior of the nonlinear incidence rates of the disease. Furthermore, the malaria models exhibit three random delays: two of the delays represent the incubation periods of the disease inside the vector and human hosts, whereas the third delay is the period of effective natural immunity against the disease. The stochastic malaria models are improved by including random environmental fluctuations in the disease transmission and natural death rates of humans. Insights into the effects of the delays and the noises on the malaria dynamics are gained via comparative analyses of the family of stochastic and deterministic models, and via further critical examination of the significance of the intensities of the white noises in the system for (1) the existence and stability of the equilibria, and (2) the eradication and persistence of malaria in the human population. The basic reproduction numbers and other threshold values for malaria in the stochastic and deterministic settings are determined and compared for the cases of constant or random delays in the system. Numerical simulation results are presented.
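The deterministic skeleton of an SEIRS model (without the random delays and white-noise terms of the full family studied above) is sketched below; the rate constants are illustrative assumptions.

```python
# Deterministic SEIRS skeleton: S -> E -> I -> R with waning immunity R -> S.
from scipy.integrate import solve_ivp

beta, kappa, gamma, omega, mu = 0.3, 1/10, 1/7, 1/180, 1/(70*365)  # per day (assumed)

def seirs(t, y):
    S, E, I, R = y
    N = S + E + I + R
    dS = mu * N - beta * S * I / N + omega * R - mu * S   # immunity wanes at rate omega
    dE = beta * S * I / N - (kappa + mu) * E
    dI = kappa * E - (gamma + mu) * I
    dR = gamma * I - (omega + mu) * R
    return [dS, dE, dI, dR]

sol = solve_ivp(seirs, (0, 2000), [0.99, 0.0, 0.01, 0.0], max_step=1.0)
print(f"endemic infectious fraction: {sol.y[2, -1]:.4f}")
```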
q-bio/0509035
David A. Kessler
Elisheva Cohen, David A. Kessler, Herbert Levine
Analytic approach to the evolutionary effects of genetic exchange
null
null
10.1103/PhysRevE.73.016113
null
q-bio.PE
null
We present an approximate analytic study of our previously introduced model of evolution including the effects of genetic exchange. This model is motivated by the process of bacterial transformation. We solve for the velocity, the rate of increase of fitness, as a function of the fixed population size, $N$. We find that the velocity increases with $\ln N$, eventually saturating at an $N$ that depends on the strength of the recombination process. The analytical treatment is seen to agree well with direct numerical simulations of our model equations.
[ { "created": "Mon, 26 Sep 2005 10:29:57 GMT", "version": "v1" } ]
2009-11-11
[ [ "Cohen", "Elisheva", "" ], [ "Kessler", "David A.", "" ], [ "Levine", "Herbert", "" ] ]
We present an approximate analytic study of our previously introduced model of evolution including the effects of genetic exchange. This model is motivated by the process of bacterial transformation. We solve for the velocity, the rate of increase of fitness, as a function of the fixed population size, $N$. We find that the velocity increases with $\ln N$, eventually saturating at an $N$ that depends on the strength of the recombination process. The analytical treatment is seen to agree well with direct numerical simulations of our model equations.
1207.6488
Krzysztof Bartoszek
Krzysztof Bartoszek and Serik Sagitov
Phylogenetic confidence intervals for the optimal trait value
null
Journal of Applied Probability 52(4):1115-1132, 2015
10.1239/jap/1450802756
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a stochastic evolutionary model for a phenotype developing amongst $n$ related species with unknown phylogeny. The unknown tree is modelled by a Yule process conditioned on $n$ contemporary nodes. The trait value is assumed to evolve along lineages as an Ornstein-Uhlenbeck process. As a result, the trait values of the $n$ species form a sample with dependent observations. We establish three limit theorems for the sample mean corresponding to three domains for the adaptation rate. In the case of fast adaptation, we show that for large $n$ the normalized sample mean is approximately normally distributed. Using these limit theorems, we develop novel confidence interval formulae for the optimal trait value.
[ { "created": "Fri, 27 Jul 2012 08:33:03 GMT", "version": "v1" }, { "created": "Fri, 21 Dec 2012 12:08:20 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2013 10:15:24 GMT", "version": "v3" }, { "created": "Mon, 19 May 2014 10:59:54 GMT", "version": "v4" }, { "created": "Fri, 7 Nov 2014 22:28:30 GMT", "version": "v5" } ]
2020-11-23
[ [ "Bartoszek", "Krzysztof", "" ], [ "Sagitov", "Serik", "" ] ]
We consider a stochastic evolutionary model for a phenotype developing amongst $n$ related species with unknown phylogeny. The unknown tree is modelled by a Yule process conditioned on $n$ contemporary nodes. The trait value is assumed to evolve along lineages as an Ornstein-Uhlenbeck process. As a result, the trait values of the $n$ species form a sample with dependent observations. We establish three limit theorems for the sample mean corresponding to three domains for the adaptation rate. In the case of fast adaptation, we show that for large $n$ the normalized sample mean is approximately normally distributed. Using these limit theorems, we develop novel confidence interval formulae for the optimal trait value.
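The trait model along a single lineage is the Ornstein-Uhlenbeck process $dX = \alpha(\theta - X)\,dt + \sigma\,dW$; an Euler-Maruyama simulation of one lineage is sketched below with illustrative parameters (the full model runs this process along every branch of the Yule tree).

```python
# Euler-Maruyama simulation of OU trait evolution along one lineage.
import numpy as np

alpha, theta, sigma = 1.0, 2.0, 0.5     # adaptation rate, optimum, noise level (assumed)
dt, T = 0.01, 10.0
rng = np.random.default_rng(6)

x = 0.0                                 # ancestral trait value
for _ in range(int(T / dt)):
    x += alpha * (theta - x) * dt + sigma * np.sqrt(dt) * rng.normal()
print(f"trait value after T={T}: {x:.3f} (stationary mean is theta={theta})")
```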
1504.00146
Ulrich Dobramysl
Ulrich Dobramysl (University of Oxford), Sten R\"udiger (Humboldt-Universit\"at zu Berlin), Radek Erban (University of Oxford)
Particle-based Multiscale Modeling of Calcium Puff Dynamics
20 pages, 9 figures, to appear in SIAM Multiscale Modeling and Simulation
Multiscale Model. Simul. 14, 997-1016 (2016)
10.1137/15M1015030
null
q-bio.SC physics.bio-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intracellular calcium is regulated in part by the release of Ca$^{2+}$ ions from the endoplasmic reticulum via inositol-4,5-triphosphate receptor (IP$_3$R) channels (among other pathways such as RyR and L-type calcium channels). The resulting dynamics are highly diverse, leading to local calcium "puffs" as well as to global waves propagating through cells, as observed in {\it Xenopus} oocytes, neurons, and other cell types. Local fluctuations in the number of calcium ions play a crucial role in the onset of these features. Previous modeling studies of calcium puff dynamics stemming from IP$_3$R channels have predominantly focused on stochastic channel models coupled to deterministic diffusion of ions, thereby neglecting local fluctuations of the ion number. Tracking individual ions is computationally difficult due to the scale separation in the Ca$^{2+}$ concentration between the open and closed channel states. In this paper, a spatial multiscale model for investigating the dynamics of puffs is presented. It couples Brownian motion (diffusion) of ions with a stochastic channel gating model. The model is used to analyze calcium puff statistics. Concentration time traces as well as channel state information are studied. We identify the regime in which puffs can be found and develop a mean-field theory to extract the boundary of this regime. Puffs are only possible when the time scale of channel inhibition is sufficiently large. Implications for the understanding of puff generation and termination are discussed.
[ { "created": "Wed, 1 Apr 2015 08:46:15 GMT", "version": "v1" }, { "created": "Fri, 23 Oct 2015 18:16:30 GMT", "version": "v2" }, { "created": "Wed, 13 Apr 2016 07:06:44 GMT", "version": "v3" } ]
2016-09-14
[ [ "Dobramysl", "Ulrich", "", "University of Oxford" ], [ "Rüdiger", "Sten", "", "Humboldt-Universität zu Berlin" ], [ "Erban", "Radek", "", "University of Oxford" ] ]
Intracellular calcium is regulated in part by the release of Ca$^{2+}$ ions from the endoplasmic reticulum via inositol-1,4,5-trisphosphate receptor (IP$_3$R) channels (among other possibilities such as RyR and L-type calcium channels). The resulting dynamics are highly diverse, leading to local calcium "puffs" as well as global waves propagating through cells, as observed in {\it Xenopus} oocytes, neurons, and other cell types. Local fluctuations in the number of calcium ions play a crucial role in the onset of these features. Previous modeling studies of calcium puff dynamics stemming from IP$_3$R channels have predominantly focused on stochastic channel models coupled to deterministic diffusion of ions, thereby neglecting local fluctuations of the ion number. Tracking of individual ions is computationally difficult due to the scale separation in the Ca$^{2+}$ concentration when channels are in the open or closed states. In this paper, a spatial multiscale model for investigating the dynamics of puffs is presented. It couples Brownian motion (diffusion) of ions with a stochastic channel gating model. The model is used to analyze calcium puff statistics. Concentration time traces as well as channel state information are studied. We identify the regime in which puffs can be found and develop a mean-field theory to extract the boundary of this regime. Puffs are only possible when the time scale of channel inhibition is sufficiently large. Implications for the understanding of puff generation and termination are discussed.
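As an illustration of the coupling described above (Brownian ions plus a stochastic, calcium-facilitated gate), here is a deliberately crude Python sketch; every rate and length scale is an assumed placeholder, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, R = 220.0, 1e-6, 0.05     # diffusion (um^2/s), step (s), sensing radius (um)
k_open0, k_close = 5.0, 50.0     # baseline gating rates (1/s), assumed values
per_open = 20                    # ions injected per step while the channel is open

ions = np.empty((0, 3))          # positions of free Ca2+ ions near the pore
is_open = False
trace = []
for _ in range(50000):
    # Brownian update of all ions, then absorb those that wander far away
    ions = ions + np.sqrt(2 * D * dt) * rng.standard_normal(ions.shape)
    ions = ions[np.linalg.norm(ions, axis=1) < 1.0]
    local = int((np.linalg.norm(ions, axis=1) < R).sum())   # ions near the pore
    if is_open:
        ions = np.vstack([ions, 1e-3 * rng.standard_normal((per_open, 3))])
        is_open = rng.random() > k_close * dt
    else:
        is_open = rng.random() < k_open0 * (1 + local) * dt  # Ca-facilitated opening
    trace.append(len(ions))
print(max(trace))   # peaks in the trace correspond to puff-like events
```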
2106.09007
Andrew Adamatzky
Andrew Adamatzky and Antoni Gandia
Fungi anaesthesia
null
null
null
null
q-bio.NC cs.ET physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrical activity of the fungus \emph{Pleurotus ostreatus} is characterised by slow (hours) irregular waves of baseline potential drift and fast (minutes) action potential-like spikes of the electrical potential. Exposure of the mycelium-colonised substrate to chloroform vapour leads to a severalfold decrease of the baseline potential waves and an increase of their duration. The chloroform vapour also causes either complete cessation of spiking activity or a substantial reduction of the spiking frequency. Removal of the chloroform vapour from the growth containers leads to a gradual restoration of the mycelium's electrical activity.
[ { "created": "Sun, 13 Jun 2021 19:21:53 GMT", "version": "v1" } ]
2021-06-17
[ [ "Adamatzky", "Andrew", "" ], [ "Gandia", "Antoni", "" ] ]
Electrical activity of the fungus \emph{Pleurotus ostreatus} is characterised by slow (hours) irregular waves of baseline potential drift and fast (minutes) action potential-like spikes of the electrical potential. Exposure of the mycelium-colonised substrate to chloroform vapour leads to a severalfold decrease of the baseline potential waves and an increase of their duration. The chloroform vapour also causes either complete cessation of spiking activity or a substantial reduction of the spiking frequency. Removal of the chloroform vapour from the growth containers leads to a gradual restoration of the mycelium's electrical activity.
1804.04485
Thierry Mora
Mikhail V. Pogorelyy, Anastasia A. Minervina, Maximilian Puelma Touzel, Anastasiia L. Sycheva, Ekaterina A. Komech, Elena I. Kovalenko, Galina G. Karganova, Evgeniy S. Egorov, Alexander Yu. Komkov, Dmitriy M. Chudakov, Ilgar Z. Mamedov, Thierry Mora, Aleksandra M. Walczak, Yuri B. Lebedev
Precise tracking of vaccine-responding T-cell clones reveals convergent and personalized response in identical twins
null
PNAS 2018; 115 (50) 12704-12709
10.1073/pnas.1809642115
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
T-cell receptor (TCR) repertoire data contain information about infections that could be used in disease diagnostics and vaccine development, but extracting that information remains a major challenge. Here we developed a statistical framework to detect TCR clone proliferation and contraction from longitudinal repertoire data. We applied this framework to data from three pairs of identical twins immunized with the yellow fever vaccine. We identified 500-1500 responding TCRs in each donor and validated them using three independent assays. While the responding TCRs were mostly private, albeit with higher overlap between twins, they could be well predicted using a classifier based on sequence similarity. Our method can also be applied to samples obtained post-infection, making it suitable for systematic discovery of new infection-specific TCRs in the clinic.
[ { "created": "Thu, 12 Apr 2018 13:09:11 GMT", "version": "v1" } ]
2019-01-24
[ [ "Pogorelyy", "Mikhail V.", "" ], [ "Minervina", "Anastasia A.", "" ], [ "Touzel", "Maximilian Puelma", "" ], [ "Sycheva", "Anastasiia L.", "" ], [ "Komech", "Ekaterina A.", "" ], [ "Kovalenko", "Elena I.", "" ], [ "Karganova", "Galina G.", "" ], [ "Egorov", "Evgeniy S.", "" ], [ "Komkov", "Alexander Yu.", "" ], [ "Chudakov", "Dmitriy M.", "" ], [ "Mamedov", "Ilgar Z.", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Lebedev", "Yuri B.", "" ] ]
T-cell receptor (TCR) repertoire data contain information about infections that could be used in disease diagnostics and vaccine development, but extracting that information remains a major challenge. Here we developed a statistical framework to detect TCR clone proliferation and contraction from longitudinal repertoire data. We applied this framework to data from three pairs of identical twins immunized with the yellow fever vaccine. We identified 500-1500 responding TCRs in each donor and validated them using three independent assays. While the responding TCRs were mostly private, albeit with higher overlap between twins, they could be well predicted using a classifier based on sequence similarity. Our method can also be applied to samples obtained post-infection, making it suitable for systematic discovery of new infection-specific TCRs in the clinic.
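One plausible way to implement the detection of proliferating clones from two repertoire snapshots is a per-clone contingency-table test with multiple-testing correction; this is only a generic sketch (requires scipy and statsmodels), not the authors' specific statistical framework:

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def expanded_clones(c0, c1, alpha=0.05):
    """Flag clones whose frequency rose between two repertoire samples.
    c0, c1: arrays of read counts per clone at the early and late timepoint."""
    n0, n1 = c0.sum(), c1.sum()
    pvals = []
    for a, b in zip(c0, c1):
        # one-sided test for enrichment at the later timepoint
        _, p = fisher_exact([[b, n1 - b], [a, n0 - a]], alternative="greater")
        pvals.append(p)
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return np.where(reject)[0]

c0 = np.array([5, 0, 40, 2])
c1 = np.array([30, 6, 40, 1])
print(expanded_clones(c0, c1))   # indices of significantly expanded clones
```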
2311.17964
Manal Helal
Manal Helal, Fanrong Kong, Sharon C-A Chen, Fei Zhou, Dominic E Dwyer, John Potter, Vitali Sintchenko
Linear normalised hash function for clustering gene sequences and identifying reference sequences from multiple sequence alignments
null
Microbial Informatics and Experimentation volume 2, Article number: 2 (2012) https://microbialinformaticsj.biomedcentral.com/counter/pdf/10.1186/2042-5783-2-2.pdf
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
The aim of this study was to develop a method that would identify the cluster centroids and the optimal number of clusters for a given sensitivity level and could work equally well for different sequence datasets. A novel method that combines the linear mapping hash function and multiple sequence alignment (MSA) was developed. This method takes advantage of the sequences already sorted by similarity in the MSA output, and identifies the optimal number of clusters, cluster cut-offs, and cluster centroids that can represent reference gene vouchers for the different species. The linear mapping hash function can map a distance matrix already ordered by similarity to indices, revealing gaps in the values around which the optimal cut-offs of the different clusters can be identified. The method was evaluated using sets of closely related (16S rRNA gene sequences of Nocardia species) and highly variable (VP1 genomic region of Enterovirus 71) sequences and outperformed existing unsupervised machine learning clustering methods and dimensionality reduction methods. This method does not require prior knowledge of the number of clusters or the distance between clusters, handles clusters of different sizes and shapes, and scales linearly with the dataset. The combination of MSA with the linear mapping hash function is a computationally efficient way of clustering gene sequences and can be a valuable tool for the assessment of similarity, clustering of different microbial genomes, identifying reference sequences, and studying the evolution of bacteria and viruses.
[ { "created": "Wed, 29 Nov 2023 11:51:05 GMT", "version": "v1" } ]
2023-12-01
[ [ "Helal", "Manal", "" ], [ "Kong", "Fanrong", "" ], [ "Chen", "Sharon C-A", "" ], [ "Zhou", "Fei", "" ], [ "Dwyer", "Dominic E", "" ], [ "Potter", "John", "" ], [ "Sintchenko", "Vitali", "" ] ]
The aim of this study was to develop a method that would identify the cluster centroids and the optimal number of clusters for a given sensitivity level and could work equally well for different sequence datasets. A novel method that combines the linear mapping hash function and multiple sequence alignment (MSA) was developed. This method takes advantage of the sequences already sorted by similarity in the MSA output, and identifies the optimal number of clusters, cluster cut-offs, and cluster centroids that can represent reference gene vouchers for the different species. The linear mapping hash function can map a distance matrix already ordered by similarity to indices, revealing gaps in the values around which the optimal cut-offs of the different clusters can be identified. The method was evaluated using sets of closely related (16S rRNA gene sequences of Nocardia species) and highly variable (VP1 genomic region of Enterovirus 71) sequences and outperformed existing unsupervised machine learning clustering methods and dimensionality reduction methods. This method does not require prior knowledge of the number of clusters or the distance between clusters, handles clusters of different sizes and shapes, and scales linearly with the dataset. The combination of MSA with the linear mapping hash function is a computationally efficient way of clustering gene sequences and can be a valuable tool for the assessment of similarity, clustering of different microbial genomes, identifying reference sequences, and studying the evolution of bacteria and viruses.
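A hypothetical Python rendering of the core idea: linearly map similarity-ordered distances to integer buckets and read cluster cut-offs off the gaps. Function and parameter names are invented for illustration:

```python
import numpy as np

def linear_hash_cutoffs(dist_sorted, n_buckets=100):
    """Map similarity-ordered distances to integer buckets; jumps of more
    than one bucket mark natural gaps, i.e. candidate cluster cut-offs."""
    d = np.asarray(dist_sorted, dtype=float)
    lo, hi = d.min(), d.max()
    idx = np.floor((d - lo) / (hi - lo) * (n_buckets - 1)).astype(int)
    return [i + 1 for i in range(len(d) - 1) if idx[i + 1] - idx[i] > 1]

# toy example: three groups separated by gaps in the distance profile
rng = np.random.default_rng(0)
d = np.sort(np.concatenate([rng.uniform(0.00, 0.02, 30),
                            rng.uniform(0.10, 0.12, 25),
                            rng.uniform(0.30, 0.33, 20)]))
print(linear_hash_cutoffs(d))   # expect two cut-offs -> three clusters
```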
q-bio/0311007
Dr. Paul J. Werbos
Paul J. Werbos
From the Termite Mound to the Stars: Meditations on Discussions With Ilya Prigogine
5p. Invited paper for special issue in honor of Prigogine, published in Russian and in English
Problems of Nonlinear Analysis in Engineering Systems, No.3, Vol. 9, 2003, an IFSA-ANS Journal
null
null
q-bio.PE q-bio.OT
null
This paper summarizes and re-evaluates Prigogine's evolution of thought, from a more classical view of thermodynamics with negative implications for the evolution and persistence of life, through to a far more general and open formulation of thermodynamics, which does not require the assumption of a Big Bang. The paper also proposes an encoding scheme for the statistics of Lorentzian systems into mixed Fermi-Bose density matrices, to complete the program given in quant-ph/0309087, which forcibly points towards symmetry in time at the microscopic level. The paper suggests future directions to reconcile microscopic time symmetry with macroscopic asymmetry and evolution, and concludes that the space-time continuum may be larger and more interesting than we are able yet to pin down.
[ { "created": "Fri, 7 Nov 2003 18:08:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Werbos", "Paul J.", "" ] ]
This paper summarizes and re-evaluates Prigogine's evolution of thought, from a more classical view of thermodynamics with negative implications for the evolution and persistence of life, through to a far more general and open formulation of thermodynamics, which does not require the assumption of a Big Bang. The paper also proposes an encoding scheme for the statistics of Lorentzian systems into mixed Fermi-Bose density matrices, to complete the program given in quant-ph/0309087, which forcibly points towards symmetry in time at the microscopic level. The paper suggests future directions to reconcile microscopic time symmetry with macroscopic asymmetry and evolution, and concludes that the space-time continuum may be larger and more interesting than we are able yet to pin down.
2404.11087
Amit Lavon
Amit Lavon, Smadar Shilo, Ayya Keshet and Eran Segal
FrackyFrac: A Standalone UniFrac Calculator
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
UniFrac is a family of distance metrics over microbial abundances that take taxonomic relatedness into account. Current tools and libraries for calculating UniFrac have specific requirements regarding the user's technical expertise, operating system, and pre-installed software, which might exclude potential users. FrackyFrac is a native command-line tool that can run on any platform and has no such requirements. It can also generate the phylogenetic trees required for the calculation. We show that FrackyFrac's performance is on par with existing implementations. FrackyFrac can make UniFrac accessible to researchers who may otherwise skip it due to the effort involved, and it can simplify analysis pipelines for those who already use it.
[ { "created": "Wed, 17 Apr 2024 06:00:01 GMT", "version": "v1" } ]
2024-04-18
[ [ "Lavon", "Amit", "" ], [ "Shilo", "Smadar", "" ], [ "Keshet", "Ayya", "" ], [ "Segal", "Eran", "" ] ]
UniFrac is a family of distance metrics over microbial abundances that take taxonomic relatedness into account. Current tools and libraries for calculating UniFrac have specific requirements regarding the user's technical expertise, operating system, and pre-installed software, which might exclude potential users. FrackyFrac is a native command-line tool that can run on any platform and has no such requirements. It can also generate the phylogenetic trees required for the calculation. We show that FrackyFrac's performance is on par with existing implementations. FrackyFrac can make UniFrac accessible to researchers who may otherwise skip it due to the effort involved, and it can simplify analysis pipelines for those who already use it.
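For reference, unweighted UniFrac itself reduces to a short tree traversal; a self-contained sketch (toy tree in parent-pointer form, nodes numbered so that children follow their parents) that assumes nothing about FrackyFrac's internals:

```python
import numpy as np

def unweighted_unifrac(parent, length, in_a, in_b):
    """Unweighted UniFrac on a rooted tree.
    parent[i]: index of node i's parent (root has parent -1)
    length[i]: branch length above node i
    in_a/in_b: boolean presence masks (True at leaves where a taxon occurs)."""
    a, b = in_a.copy(), in_b.copy()
    for i in range(len(parent) - 1, 0, -1):   # propagate presence to the root
        a[parent[i]] |= a[i]
        b[parent[i]] |= b[i]
    shared = unique = 0.0
    for i in range(1, len(parent)):           # skip root (no branch above it)
        if a[i] or b[i]:
            if a[i] and b[i]:
                shared += length[i]
            else:
                unique += length[i]
    return unique / (unique + shared)

# toy tree: root 0 with children 1, 2; leaves 3, 4 under 1 and 5, 6 under 2
parent = np.array([-1, 0, 0, 1, 1, 2, 2])
length = np.array([0.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.5])
a = np.array([False] * 7); a[[3, 4]] = True
b = np.array([False] * 7); b[[4, 5]] = True
print(unweighted_unifrac(parent, length, a, b))   # ~0.571
```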
2407.12058
Arash Shaban-Nejad
Amirehsan Ghasemi, Soheil Hashtarkhani, David L Schwartz, Arash Shaban-Nejad
Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review
22 Pages, 6 Figures, 13 Tables
Cancer Innovation 2024;3:e136
10.1002/cai2.136
null
q-bio.QM eess.IV
http://creativecommons.org/licenses/by/4.0/
With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most used model-agnostic XAI technique in breast cancer research, applied to explaining model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and useful for explaining any model's predictions. Additionally, it is relatively easy to implement and is well suited to performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
[ { "created": "Fri, 12 Jul 2024 16:53:30 GMT", "version": "v1" } ]
2024-07-18
[ [ "Ghasemi", "Amirehsan", "" ], [ "Hashtarkhani", "Soheil", "" ], [ "Schwartz", "David L", "" ], [ "Shaban-Nejad", "Arash", "" ] ]
With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most used model-agnostic XAI technique in breast cancer research, applied to explaining model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and useful for explaining any model's predictions. Additionally, it is relatively easy to implement and is well suited to performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
q-bio/0404041
Cornelis Storm
Arnold J. Storm, Cornelis Storm, Jianghua Chen, Henny Zandbergen, Jean-Francois Joanny, Cees Dekker
Fast DNA translocation through a solid-state nanopore
5 pages, 2 figures. Submitted to PRL
null
10.1021/nl048030d
null
q-bio.BM cond-mat.soft physics.bio-ph
null
We report translocation experiments on double-stranded DNA through a silicon oxide nanopore. Samples containing DNA fragments with seven different lengths between 2000 and 96000 basepairs have been electrophoretically driven through a 10 nm pore. We find a power-law scaling of the translocation time versus length, with an exponent of 1.26 $\pm$ 0.07. This behavior is qualitatively different from the linear behavior observed in similar experiments performed with protein pores. We address the observed nonlinear scaling in a theoretical model that describes experiments where hydrodynamic drag on the section of the polymer outside the pore is the dominant force counteracting the driving. We show that this is the case in our experiments and derive a power-law scaling with an exponent of 1.18, in excellent agreement with our data.
[ { "created": "Wed, 28 Apr 2004 09:59:25 GMT", "version": "v1" } ]
2015-06-26
[ [ "Storm", "Arnold J.", "" ], [ "Storm", "Cornelis", "" ], [ "Chen", "Jianghua", "" ], [ "Zandbergen", "Henny", "" ], [ "Joanny", "Jean-Francois", "" ], [ "Dekker", "Cees", "" ] ]
We report translocation experiments on double-stranded DNA through a silicon oxide nanopore. Samples containing DNA fragments with seven different lengths between 2000 and 96000 basepairs have been electrophoretically driven through a 10 nm pore. We find a power-law scaling of the translocation time versus length, with an exponent of 1.26 $\pm$ 0.07. This behavior is qualitatively different from the linear behavior observed in similar experiments performed with protein pores. We address the observed nonlinear scaling in a theoretical model that describes experiments where hydrodynamic drag on the section of the polymer outside the pore is the dominant force counteracting the driving. We show that this is the case in our experiments and derive a power-law scaling with an exponent of 1.18, in excellent agreement with our data.
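The reported exponent is the slope of a log-log fit of translocation time against fragment length; a short sketch with made-up illustrative times (not the paper's data):

```python
import numpy as np

lengths = np.array([2000, 5000, 10000, 20000, 48500, 73000, 96000])  # basepairs
times_us = np.array([120, 380, 900, 2100, 6400, 10500, 14800])       # invented values

# power law t ~ L^s becomes a straight line in log-log coordinates
slope, intercept = np.polyfit(np.log(lengths), np.log(times_us), 1)
print(f"scaling exponent ~ {slope:.2f}")   # the paper reports 1.26 +/- 0.07
```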
2004.05759
Weiyu Xu
Jirong Yi, Raghu Mudumbai, Weiyu Xu
Low-Cost and High-Throughput Testing of COVID-19 Viruses and Antibodies via Compressed Sensing: System Concepts and Computational Experiments
11 pages
null
null
null
q-bio.QM cs.IT eess.SP math.IT q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Coronavirus disease 2019 (COVID-19) is an ongoing pandemic infectious disease outbreak that has significantly harmed and threatened the health and lives of millions or even billions of people. COVID-19 has also significantly disrupted the social and economic activities of many countries. With no approved vaccine available at this moment, extensive testing of people for COVID-19 viruses is essential for disease diagnosis, virus spread confinement, contact tracing, and determining the right conditions for people to return to normal economic activities. Identifying people who have antibodies for COVID-19 can also help select persons who are suitable for undertaking certain essential activities or returning to the workforce. However, the throughput of current testing technologies for COVID-19 viruses and antibodies is often quite limited, and is not sufficient for dealing with the anticipated fast-oscillating waves of COVID-19 spread affecting a significant portion of the earth's population. In this paper, we propose to use compressed sensing (group testing can be seen as a special case of compressed sensing when it is applied to COVID-19 detection) to achieve high-throughput rapid testing of COVID-19 viruses and antibodies, which can potentially provide a tenfold or greater speedup compared with current testing technologies. The proposed compressed sensing system for high-throughput testing can utilize the expander-graph-based compressed sensing matrices developed by us \cite{Weiyuexpander2007}.
[ { "created": "Mon, 13 Apr 2020 03:55:39 GMT", "version": "v1" } ]
2020-04-14
[ [ "Yi", "Jirong", "" ], [ "Mudumbai", "Raghu", "" ], [ "Xu", "Weiyu", "" ] ]
Coronavirus disease 2019 (COVID-19) is an ongoing pandemic infectious disease outbreak that has significantly harmed and threatened the health and lives of millions or even billions of people. COVID-19 has also significantly disrupted the social and economic activities of many countries. With no approved vaccine available at this moment, extensive testing of people for COVID-19 viruses is essential for disease diagnosis, virus spread confinement, contact tracing, and determining the right conditions for people to return to normal economic activities. Identifying people who have antibodies for COVID-19 can also help select persons who are suitable for undertaking certain essential activities or returning to the workforce. However, the throughput of current testing technologies for COVID-19 viruses and antibodies is often quite limited, and is not sufficient for dealing with the anticipated fast-oscillating waves of COVID-19 spread affecting a significant portion of the earth's population. In this paper, we propose to use compressed sensing (group testing can be seen as a special case of compressed sensing when it is applied to COVID-19 detection) to achieve high-throughput rapid testing of COVID-19 viruses and antibodies, which can potentially provide a tenfold or greater speedup compared with current testing technologies. The proposed compressed sensing system for high-throughput testing can utilize the expander-graph-based compressed sensing matrices developed by us \cite{Weiyuexpander2007}.
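A toy version of pooled testing viewed as sparse recovery: random pools, noiseless OR measurements, and the simple COMP decoder that clears every sample appearing in a negative pool. Pool design and sizes are arbitrary choices for illustration, not the paper's expander-graph construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 200, 40, 3                  # samples, pools, infected (sparse)
A = rng.random((m, n)) < 0.1          # pooling design: pool x sample membership
x = np.zeros(n, dtype=bool)
x[rng.choice(n, k, replace=False)] = True
y = (A & x).any(axis=1)               # a pool is positive iff it holds a positive

# COMP decoding: anyone appearing in a negative pool is cleared
cleared = A[~y].any(axis=0)
candidates = ~cleared                 # guaranteed superset of the true positives
print(candidates.sum(), "candidate positives;",
      x[candidates].sum(), "true positives recovered")
```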
1703.08934
Shizuo Kaji
Adrien Faur\'e, Shizuo Kaji
A circuit-preserving mapping from multilevel to Boolean dynamics
null
null
null
null
q-bio.MN math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many discrete models of biological networks rely exclusively on Boolean variables and many tools and theorems are available for analysis of strictly Boolean models. However, multilevel variables are often required to account for threshold effects, in which knowledge of the Boolean case does not generalise straightforwardly. This motivated the development of conversion methods for multilevel to Boolean models. In particular, Van Ham's method has been shown to yield a one-to-one, neighbour- and regulation-preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks: most notably, it introduces vast regions of "non-admissible" states that have no counterpart in the multilevel, original model. This raises special difficulties for the analysis of interaction between variables and circuit functionality, which is believed to be central to the understanding of dynamic properties of logical models. Here, we propose a new multilevel to Boolean conversion method, with software implementation. Contrary to Van Ham's, our method does not yield a one-to-one transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus getting rid of the non-admissible regions, at the expense of (apparently) more complicated, "parallel" trajectories. One of the prominent features of our method is that it preserves dynamics and interaction of variables in a certain manner. As a demonstration of the usability of our method, we apply it to construct a new Boolean counter-example to the well-known conjecture that a local negative circuit is necessary to generate sustained oscillations. This result illustrates the general relevance of our method for the study of multilevel logical models.
[ { "created": "Mon, 27 Mar 2017 05:25:22 GMT", "version": "v1" }, { "created": "Mon, 8 May 2017 07:07:49 GMT", "version": "v2" } ]
2017-05-09
[ [ "Fauré", "Adrien", "" ], [ "Kaji", "Shizuo", "" ] ]
Many discrete models of biological networks rely exclusively on Boolean variables and many tools and theorems are available for analysis of strictly Boolean models. However, multilevel variables are often required to account for threshold effects, in which knowledge of the Boolean case does not generalise straightforwardly. This motivated the development of conversion methods for multilevel to Boolean models. In particular, Van Ham's method has been shown to yield a one-to-one, neighbour- and regulation-preserving dynamics, making it the de facto standard approach to the problem. However, Van Ham's method has several drawbacks: most notably, it introduces vast regions of "non-admissible" states that have no counterpart in the multilevel, original model. This raises special difficulties for the analysis of interaction between variables and circuit functionality, which is believed to be central to the understanding of dynamic properties of logical models. Here, we propose a new multilevel to Boolean conversion method, with software implementation. Contrary to Van Ham's, our method does not yield a one-to-one transposition of multilevel trajectories; however, it maps each and every Boolean state to a specific multilevel state, thus getting rid of the non-admissible regions, at the expense of (apparently) more complicated, "parallel" trajectories. One of the prominent features of our method is that it preserves dynamics and interaction of variables in a certain manner. As a demonstration of the usability of our method, we apply it to construct a new Boolean counter-example to the well-known conjecture that a local negative circuit is necessary to generate sustained oscillations. This result illustrates the general relevance of our method for the study of multilevel logical models.
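One possible "summing" encoding in the spirit of the paper (the authors' actual mapping may differ): each multilevel variable of maximum level m is represented by m Boolean variables, and any Boolean state decodes to a multilevel state by summation, so no non-admissible region arises:

```python
from itertools import product

def decode(bool_state, arity):
    """Map ANY Boolean state to a multilevel state by summing the Booleans
    allocated to each variable; every Boolean state is therefore admissible."""
    out, i = [], 0
    for m in arity:
        out.append(sum(bool_state[i:i + m]))
        i += m
    return tuple(out)

arity = (2, 1)        # one ternary variable (levels 0..2), one Boolean variable
for b in product((0, 1), repeat=sum(arity)):
    print(b, "->", decode(b, arity))   # the map is total but not injective
```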
1508.00686
Tadashi Miyamoto
Tadashi Miyamoto, Chikara Furusawa, Kunihiko Kaneko
Pluripotency, differentiation, and reprogramming: A gene expression dynamics model with epigenetic feedback regulation
null
null
10.1371/journal.pcbi.1004476
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterization of pluripotent states, in which cells can both self-renew and differentiate, and the irreversible loss of pluripotency are important research areas in developmental biology. In particular, an understanding of these processes is essential to the reprogramming of cells for biomedical applications, i.e., the experimental recovery of pluripotency in differentiated cells. Based on recent advances in dynamical-systems theory for gene expression, we propose a gene-regulatory-network model consisting of several pluripotent and differentiation genes. Our results show that cellular-state transition to differentiated cell types occurs as the number of cells increases, beginning with the pluripotent state and oscillatory expression of pluripotent genes. Cell-cell signaling mediates the differentiation process with robustness to noise, while epigenetic modifications affecting gene expression dynamics fix the cellular state. These modifications ensure that the cellular state is protected against external perturbation, but they also act as an epigenetic barrier to the recovery of pluripotency. We show that overexpression of several genes leads to the reprogramming of cells, consistent with the methods for establishing induced pluripotent stem cells. Our model, which involves the inter-relationship between gene expression dynamics and epigenetic modifications, improves our basic understanding of cell differentiation and reprogramming.
[ { "created": "Tue, 4 Aug 2015 07:23:43 GMT", "version": "v1" } ]
2015-09-02
[ [ "Miyamoto", "Tadashi", "" ], [ "Furusawa", "Chikara", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Characterization of pluripotent states, in which cells can both self-renew and differentiate, and the irreversible loss of pluripotency are important research areas in developmental biology. In particular, an understanding of these processes is essential to the reprogramming of cells for biomedical applications, i.e., the experimental recovery of pluripotency in differentiated cells. Based on recent advances in dynamical-systems theory for gene expression, we propose a gene-regulatory-network model consisting of several pluripotent and differentiation genes. Our results show that cellular-state transition to differentiated cell types occurs as the number of cells increases, beginning with the pluripotent state and oscillatory expression of pluripotent genes. Cell-cell signaling mediates the differentiation process with robustness to noise, while epigenetic modifications affecting gene expression dynamics fix the cellular state. These modifications ensure that the cellular state is protected against external perturbation, but they also act as an epigenetic barrier to the recovery of pluripotency. We show that overexpression of several genes leads to the reprogramming of cells, consistent with the methods for establishing induced pluripotent stem cells. Our model, which involves the inter-relationship between gene expression dynamics and epigenetic modifications, improves our basic understanding of cell differentiation and reprogramming.
2002.10948
Dmitry V. Dylov
Dmitrii Krylov, Remi Tachet, Romain Laroche, Michael Rosenblum, Dmitry V. Dylov
Reinforcement Learning Framework for Deep Brain Stimulation Study
7 pages + 1 references, 7 figures. arXiv admin note: text overlap with arXiv:1909.12154
IJCAI 2020, pp. 2847-2854
10.24963/ijcai.2020/394
null
q-bio.NC cs.AI cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Malfunctioning neurons in the brain sometimes operate synchronously, reportedly causing many neurological diseases, e.g. Parkinson's. Suppression and control of this collective synchronous activity are therefore of great importance for neuroscience, and can only rely on limited engineering trials due to the need to experiment with live human brains. We present the first Reinforcement Learning gym framework that emulates this collective behavior of neurons and allows us to find suppression parameters for the environment of synthetic degenerate models of neurons. We successfully suppress synchrony via RL for three pathological signaling regimes, characterize the framework's stability to noise, and further remove the unwanted oscillations by engaging multiple PPO agents.
[ { "created": "Sat, 22 Feb 2020 16:48:43 GMT", "version": "v1" } ]
2021-09-22
[ [ "Krylov", "Dmitrii", "" ], [ "Tachet", "Remi", "" ], [ "Laroche", "Romain", "" ], [ "Rosenblum", "Michael", "" ], [ "Dylov", "Dmitry V.", "" ] ]
Malfunctioning neurons in the brain sometimes operate synchronously, reportedly causing many neurological diseases, e.g. Parkinson's. Suppression and control of this collective synchronous activity are therefore of great importance for neuroscience, and can only rely on limited engineering trials due to the need to experiment with live human brains. We present the first Reinforcement Learning gym framework that emulates this collective behavior of neurons and allows us to find suppression parameters for the environment of synthetic degenerate models of neurons. We successfully suppress synchrony via RL for three pathological signaling regimes, characterize the framework's stability to noise, and further remove the unwanted oscillations by engaging multiple PPO agents.
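A gym-style toy environment in the same spirit (not the authors' framework): Kuramoto oscillators standing in for the neuron population, a scalar stimulation action, and a reward that penalises the order parameter:

```python
import numpy as np

class SynchronyEnv:
    """Gym-style toy environment: N Kuramoto oscillators; the action is a
    common stimulus and the reward penalises collective synchrony."""
    def __init__(self, n=100, coupling=1.0, dt=0.01, seed=0):
        self.n, self.K, self.dt = n, coupling, dt
        self.rng = np.random.default_rng(seed)
        self.omega = self.rng.normal(10.0, 1.0, n)   # natural frequencies

    def reset(self):
        self.theta = self.rng.uniform(0, 2 * np.pi, self.n)
        return self._obs()

    def _obs(self):
        z = np.exp(1j * self.theta).mean()           # Kuramoto order parameter
        return np.array([z.real, z.imag])

    def step(self, action):
        z = np.exp(1j * self.theta).mean()
        dtheta = (self.omega
                  + self.K * abs(z) * np.sin(np.angle(z) - self.theta)
                  + float(action) * np.sin(self.theta))   # stimulation term
        self.theta = (self.theta + self.dt * dtheta) % (2 * np.pi)
        obs = self._obs()
        reward = -np.hypot(*obs)       # low |z| means desynchronised
        return obs, reward, False, {}

env = SynchronyEnv()
obs = env.reset()
for _ in range(1000):
    obs, r, done, info = env.step(action=0.0)   # replace 0.0 with a policy
```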
0905.2714
Vladimir Privman
Dmitriy Melnikov, Guinevere Strack, Marcos Pita, Vladimir Privman, Evgeny Katz
Analog Noise Reduction in Enzymatic Logic Gates
25 pages in PDF
J. Phys. Chem. B 113, 10472-10479 (2009)
10.1021/jp904585x
null
q-bio.MN cond-mat.soft q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we demonstrate both experimentally and theoretically that the analog noise generation by a single enzymatic logic gate can be dramatically reduced to yield gate operation with virtually no input noise amplification. This is achieved by exploiting the enzyme's specificity when using a co-substrate that has a much lower affinity than the primary substrate. Under these conditions, we obtain a negligible increase in the noise output from the logic gate as compared to the input noise level. Experimental realizations of the AND logic gate with the enzyme horseradish peroxidase using hydrogen peroxide and two different co-substrates, 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS) and ferrocyanide, with vastly different rate constants confirmed our general theoretical conclusions.
[ { "created": "Sun, 17 May 2009 02:44:40 GMT", "version": "v1" } ]
2010-10-12
[ [ "Melnikov", "Dmitriy", "" ], [ "Strack", "Guinevere", "" ], [ "Pita", "Marcos", "" ], [ "Privman", "Vladimir", "" ], [ "Katz", "Evgeny", "" ] ]
In this work we demonstrate both experimentally and theoretically that the analog noise generation by a single enzymatic logic gate can be dramatically reduced to yield gate operation with virtually no input noise amplification. This is achieved by exploiting the enzyme's specificity when using a co-substrate that has a much lower affinity than the primary substrate. Under these conditions, we obtain a negligible increase in the noise output from the logic gate as compared to the input noise level. Experimental realizations of the AND logic gate with the enzyme horseradish peroxidase using hydrogen peroxide and two different co-substrates, 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS) and ferrocyanide, with vastly different rate constants confirmed our general theoretical conclusions.
1801.05219
Dane Corneil
Wulfram Gerstner, Marco Lehmann, Vasiliki Liakoni, Dane Corneil, and Johanni Brea
Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of neoHebbian Three-Factor Learning Rules
null
null
10.3389/fncir.2018.00053
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most elementary behaviors such as moving the arm to grasp an object or walking into the next room to explore a museum evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales. Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years. Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
[ { "created": "Tue, 16 Jan 2018 12:08:03 GMT", "version": "v1" } ]
2018-08-17
[ [ "Gerstner", "Wulfram", "" ], [ "Lehmann", "Marco", "" ], [ "Liakoni", "Vasiliki", "" ], [ "Corneil", "Dane", "" ], [ "Brea", "Johanni", "" ] ]
Most elementary behaviors such as moving the arm to grasp an object or walking into the next room to explore a museum evolve on the time scale of seconds; in contrast, neuronal action potentials occur on the time scale of a few milliseconds. Learning rules of the brain must therefore bridge the gap between these two different time scales. Modern theories of synaptic plasticity have postulated that the co-activation of pre- and postsynaptic neurons sets a flag at the synapse, called an eligibility trace, that leads to a weight change only if an additional factor is present while the flag is set. This third factor, signaling reward, punishment, surprise, or novelty, could be implemented by the phasic activity of neuromodulators or specific neuronal inputs signaling special events. While the theoretical framework has been developed over the last decades, experimental evidence in support of eligibility traces on the time scale of seconds has been collected only during the last few years. Here we review, in the context of three-factor rules of synaptic plasticity, four key experiments that support the role of synaptic eligibility traces in combination with a third factor as a biological implementation of neoHebbian three-factor learning rules.
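A minimal numerical sketch of a three-factor rule: a coincidence-set eligibility trace that decays with time constant tau_e, and a weight update gated by a later third factor (reward). All constants are illustrative:

```python
import numpy as np

tau_e, dt, eta = 1.0, 0.001, 0.1          # eligibility time constant (s), step, lr

def run(reward_time):
    w, e = 0.5, 0.0
    for step in range(int(10 / dt)):
        t = step * dt
        coincident = 5.0 <= t < 5.01       # brief pre/post pairing at t = 5 s
        e += dt * (-e / tau_e) + (1.0 if coincident else 0.0) * dt  # set the flag
        r = 1.0 if abs(t - reward_time) < dt / 2 else 0.0           # third factor
        w += eta * r * e                   # plasticity only when the factor arrives
    return w

print(run(5.5), run(9.0))   # late reward -> decayed trace -> smaller weight change
```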
1805.10116
Chiara Gastaldi
Chiara Gastaldi, Samuel P. Muscinelli and Wulfram Gerstner
Optimal stimulation protocol in a bistable synaptic consolidation model
23 pages, 6 figures
Front. Comput. Neurosci., 13 November 2019
10.3389/fncom.2019.00078
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consolidation of synaptic changes in response to neural activity is thought to be fundamental for memory maintenance over a timescale of hours. In experiments, synaptic consolidation can be induced by repeatedly stimulating presynaptic neurons. However, the effectiveness of such protocols depends crucially on the repetition frequency of the stimulations and the mechanisms that cause this complex dependence are unknown. Here we propose a simple mathematical model that allows us to systematically study the interaction between the stimulation protocol and synaptic consolidation. We show the existence of optimal stimulation protocols for our model and, similarly to LTP experiments, the repetition frequency of the stimulation plays a crucial role in achieving consolidation. Our results show that the complex dependence of LTP on the stimulation frequency emerges naturally from a model which satisfies only minimal bistability requirements.
[ { "created": "Fri, 25 May 2018 12:54:08 GMT", "version": "v1" } ]
2019-11-14
[ [ "Gastaldi", "Chiara", "" ], [ "Muscinelli", "Samuel P.", "" ], [ "Gerstner", "Wulfram", "" ] ]
Consolidation of synaptic changes in response to neural activity is thought to be fundamental for memory maintenance over a timescale of hours. In experiments, synaptic consolidation can be induced by repeatedly stimulating presynaptic neurons. However, the effectiveness of such protocols depends crucially on the repetition frequency of the stimulations and the mechanisms that cause this complex dependence are unknown. Here we propose a simple mathematical model that allows us to systematically study the interaction between the stimulation protocol and synaptic consolidation. We show the existence of optimal stimulation protocols for our model and, similarly to LTP experiments, the repetition frequency of the stimulation plays a crucial role in achieving consolidation. Our results show that the complex dependence of LTP on the stimulation frequency emerges naturally from a model which satisfies only minimal bistability requirements.
0910.0789
Gil Bub Mr
Gil Bub, Matthias Tecza, Michiel Helmes, Peter Lee, Peter Kohl
Pixel multiplexing for high-speed multi-resolution fluorescence imaging
3 pages, 2 figures, submitted to Nature Methods
null
null
null
q-bio.QM physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an imaging modality that works by transiently masking image subregions during a single exposure of a CCD frame. By offsetting subregion exposure times, temporal information is embedded within each stored frame, allowing simultaneous acquisition of a full high-spatial-resolution image and a high-speed image sequence without increasing bandwidth. The technique is demonstrated by imaging calcium transients in heart cells at 250 Hz with a 10 Hz megapixel camera.
[ { "created": "Mon, 5 Oct 2009 15:25:08 GMT", "version": "v1" } ]
2016-09-08
[ [ "Bub", "Gil", "" ], [ "Tecza", "Matthias", "" ], [ "Helmes", "Michiel", "" ], [ "Lee", "Peter", "" ], [ "Kohl", "Peter", "" ] ]
We introduce an imaging modality that works by transiently masking image subregions during a single exposure of a CCD frame. By offsetting subregion exposure times, temporal information is embedded within each stored frame, allowing simultaneous acquisition of a full high-spatial-resolution image and a high-speed image sequence without increasing bandwidth. The technique is demonstrated by imaging calcium transients in heart cells at 250 Hz with a 10 Hz megapixel camera.
1209.3591
Jacopo Grilli
Jacopo Grilli, Sandro Azaele, Jayanth R Banavar and Amos Maritan
Spatial aggregation and the species-area relationship across scales
16 pages, 5 Figures
Journal of Theoretical Biology. 313:87-97. 2012
10.1016/j.jtbi.2012.07.030
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a considerable effort to understand and quantify the spatial distribution of species across different ecosystems. Relative species abundance (RSA), beta diversity and the species-area relationship (SAR) are among the most used macroecological measures to characterize plant communities in forests. In this article we introduce a simple phenomenological model based on Poisson cluster processes which allows us to exactly link RSA and beta diversity to SAR. The framework is spatially explicit and accounts for the spatial aggregation of conspecific individuals. Under the simplifying assumption of neutral theory, we derive an analytical expression for the SAR which reproduces tri-phasic behavior as sample area increases from local to continental scales, explaining how the tri-phasic behavior can be understood in terms of simple geometric arguments. We also find an expression for the endemic area relationship (EAR) and for the scaling of the RSA.
[ { "created": "Mon, 17 Sep 2012 08:43:51 GMT", "version": "v1" } ]
2012-09-18
[ [ "Grilli", "Jacopo", "" ], [ "Azaele", "Sandro", "" ], [ "Banavar", "Jayanth R", "" ], [ "Maritan", "Amos", "" ] ]
There has been a considerable effort to understand and quantify the spatial distribution of species across different ecosystems. Relative species abundance (RSA), beta diversity and the species-area relationship (SAR) are among the most used macroecological measures to characterize plant communities in forests. In this article we introduce a simple phenomenological model based on Poisson cluster processes which allows us to exactly link RSA and beta diversity to SAR. The framework is spatially explicit and accounts for the spatial aggregation of conspecific individuals. Under the simplifying assumption of neutral theory, we derive an analytical expression for the SAR which reproduces tri-phasic behavior as sample area increases from local to continental scales, explaining how the tri-phasic behavior can be understood in terms of simple geometric arguments. We also find an expression for the endemic area relationship (EAR) and for the scaling of the RSA.
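A simulation sketch of the ingredients: each species as a Thomas (Poisson cluster) process, with the species-area curve read off nested quadrats. Densities and cluster parameters are arbitrary illustrative values, not the paper's analytical model:

```python
import numpy as np

rng = np.random.default_rng(3)
L, S = 100.0, 50                    # landscape side length, number of species

def thomas_species(rho_parents=0.002, mu=20, sigma=1.0):
    """Points of one species: Poisson parents, Gaussian-scattered offspring."""
    n_par = rng.poisson(rho_parents * L * L)
    parents = rng.uniform(0, L, (n_par, 2))
    pts = [p + sigma * rng.standard_normal((rng.poisson(mu), 2)) for p in parents]
    return np.vstack(pts) % L if pts else np.empty((0, 2))

species = [thomas_species() for _ in range(S)]
for side in [1, 2, 5, 10, 20, 50, 100]:   # nested square quadrats
    n_sp = sum(((pts[:, 0] < side) & (pts[:, 1] < side)).any() for pts in species)
    print(f"A = {side * side:6.0f}  S(A) = {n_sp}")
```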
1312.3115
Andrea Riebler
Andrea Riebler, Mirco Menigatti, Jenny Z. Song, Aaron L. Statham, Clare Stirzaker, Nadiya Mahmud, Charles A. Mein, Susan J. Clark, Mark D. Robinson
BayMeth: Improved DNA methylation quantification for affinity capture sequencing data using a flexible Bayesian approach
58 pages (main text contains 33 pages), 20 figures (10 figures for the main text, 6 supplementary figures, 4 figures in supplementary text)
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA methylation (DNAme) is a critical component of the epigenetic regulatory machinery and aberrations in DNAme patterns occur in many diseases, such as cancer. Mapping and understanding DNAme profiles offers considerable promise for reversing the aberrant states. There are several approaches to analyze DNAme, which vary widely in cost, resolution and coverage. Affinity capture and high-throughput sequencing of methylated DNA strike a good balance between the high cost of whole genome bisulphite sequencing (WGBS) and the low coverage of methylation arrays. However, existing methods cannot adequately differentiate between hypomethylation patterns and low capture efficiency, and do not offer flexibility to integrate copy number variation (CNV). Furthermore, no uncertainty estimates are provided, which may prove useful for combining data from multiple protocols or propagating into downstream analysis. We propose an empirical Bayes framework that uses a fully methylated (i.e., SssI-treated) control sample to transform observed read densities into regional methylation estimates. In our model, inefficient capture can be distinguished from low methylation levels by means of larger posterior variances. Furthermore, we can integrate CNV by introducing a multiplicative offset into our Poisson model framework. Notably, our model offers analytic expressions for the mean and variance of the methylation level and thus is fast to compute. Our algorithm outperforms existing approaches in terms of bias, mean-squared error and coverage probabilities as illustrated on multiple reference datasets. Although our method provides advantages even without the SssI control, considerable improvement is achieved by its incorporation. Our method can be applied to methylated DNA affinity enrichment assays (e.g., MBD-seq, MeDIP-seq) and a software implementation is available in the Bioconductor Repitools package.
[ { "created": "Wed, 11 Dec 2013 11:11:18 GMT", "version": "v1" } ]
2013-12-12
[ [ "Riebler", "Andrea", "" ], [ "Menigatti", "Mirco", "" ], [ "Song", "Jenny Z.", "" ], [ "Statham", "Aaron L.", "" ], [ "Stirzaker", "Clare", "" ], [ "Mahmud", "Nadiya", "" ], [ "Mein", "Charles A.", "" ], [ "Clark", "Susan J.", "" ], [ "Robinson", "Mark D.", "" ] ]
DNA methylation (DNAme) is a critical component of the epigenetic regulatory machinery and aberrations in DNAme patterns occur in many diseases, such as cancer. Mapping and understanding DNAme profiles offers considerable promise for reversing the aberrant states. There are several approaches to analyze DNAme, which vary widely in cost, resolution and coverage. Affinity capture and high-throughput sequencing of methylated DNA strike a good balance between the high cost of whole genome bisulphite sequencing (WGBS) and the low coverage of methylation arrays. However, existing methods cannot adequately differentiate between hypomethylation patterns and low capture efficiency, and do not offer flexibility to integrate copy number variation (CNV). Furthermore, no uncertainty estimates are provided, which may prove useful for combining data from multiple protocols or propagating into downstream analysis. We propose an empirical Bayes framework that uses a fully methylated (i.e., SssI-treated) control sample to transform observed read densities into regional methylation estimates. In our model, inefficient capture can be distinguished from low methylation levels by means of larger posterior variances. Furthermore, we can integrate CNV by introducing a multiplicative offset into our Poisson model framework. Notably, our model offers analytic expressions for the mean and variance of the methylation level and thus is fast to compute. Our algorithm outperforms existing approaches in terms of bias, mean-squared error and coverage probabilities as illustrated on multiple reference datasets. Although our method provides advantages even without the SssI control, considerable improvement is achieved by its incorporation. Our method can be applied to methylated DNA affinity enrichment assays (e.g., MBD-seq, MeDIP-seq) and a software implementation is available in the Bioconductor Repitools package.
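The analytic flavour of such a model can be conveyed by a conjugate Gamma-Poisson sketch, in which the fully methylated control supplies the capture-efficiency offset; this is a simplification with invented parameter names, not the BayMeth implementation:

```python
import numpy as np

def posterior_methylation(y, y_control, cnv_offset=1.0, a0=1.0, b0=1.0):
    """Conjugate Gamma-Poisson sketch: y ~ Poisson(mu * f), where f is a
    capture-efficiency proxy read off the fully methylated (SssI) control
    and mu is the regional methylation level with a Gamma(a0, b0) prior."""
    f = np.maximum(y_control, 1) * cnv_offset     # efficiency times CNV offset
    a_post = a0 + y
    b_post = b0 + f
    mean = a_post / b_post
    var = a_post / b_post**2      # low control coverage -> larger posterior variance
    return mean, var

y = np.array([0, 3, 12, 0])          # observed region counts
ctrl = np.array([1, 10, 15, 20])     # low ctrl: ambiguous; high ctrl: confident
print(posterior_methylation(y, ctrl))
```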
1508.05468
Henry Tuckwell
Henry C. Tuckwell, Ying Zhou and Nicholas J. Penington
Simplified models of pacemaker spiking in raphe and locus coeruleus neurons
21 pages, 11 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many central neurons, and in particular certain brainstem aminergic neurons, exhibit spontaneous and fairly regular spiking with frequencies on the order of a few Hz. A large number of ion channel types contribute to such spiking so that accurate modeling of spike generation leads to the requirement of solving very large systems of differential equations, ordinary in the first instance. Since analysis of spiking behavior when many synaptic inputs are active adds further to the number of components, it is useful to have simplified mathematical models of spiking in such neurons so that, for example, stochastic features of inputs and output spike trains can be incorporated. In this article we investigate two simple two-component models which mimic features of spiking in serotonergic neurons of the dorsal raphe nucleus and noradrenergic neurons of the locus coeruleus. The first model is of the Fitzhugh-Nagumo type and the second is a reduced Hodgkin-Huxley model. For each model, solutions are computed with two representative sets of parameters. Frequency versus input current curves are found and reveal Hodgkin type 2 behavior. For the first model a bifurcation and phase plane analysis supports these findings. The spike trajectories in the second model are very similar to those in DRN SE pacemaker activity, but there are more parameters than in the Fitzhugh-Nagumo type model. The article concludes with a brief review of previous modeling of these types of neurons and its relevance to studies of serotonergic involvement in spatial working memory and obsessive-compulsive disorder.
[ { "created": "Sat, 22 Aug 2015 04:39:03 GMT", "version": "v1" } ]
2015-08-25
[ [ "Tuckwell", "Henry C.", "" ], [ "Zhou", "Ying", "" ], [ "Penington", "Nicholas J.", "" ] ]
Many central neurons, and in particular certain brainstem aminergic neurons, exhibit spontaneous and fairly regular spiking with frequencies on the order of a few Hz. A large number of ion channel types contribute to such spiking so that accurate modeling of spike generation leads to the requirement of solving very large systems of differential equations, ordinary in the first instance. Since analysis of spiking behavior when many synaptic inputs are active adds further to the number of components, it is useful to have simplified mathematical models of spiking in such neurons so that, for example, stochastic features of inputs and output spike trains can be incorporated. In this article we investigate two simple two-component models which mimic features of spiking in serotonergic neurons of the dorsal raphe nucleus and noradrenergic neurons of the locus coeruleus. The first model is of the Fitzhugh-Nagumo type and the second is a reduced Hodgkin-Huxley model. For each model, solutions are computed with two representative sets of parameters. Frequency versus input current curves are found and reveal Hodgkin type 2 behavior. For the first model a bifurcation and phase plane analysis supports these findings. The spike trajectories in the second model are very similar to those in DRN SE pacemaker activity, but there are more parameters than in the Fitzhugh-Nagumo type model. The article concludes with a brief review of previous modeling of these types of neurons and its relevance to studies of serotonergic involvement in spatial working memory and obsessive-compulsive disorder.
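A short FitzHugh-Nagumo integration showing the type-2 frequency-current behaviour mentioned above (parameters are a common textbook choice, not the paper's; the millisecond time unit is an assumption):

```python
import numpy as np

def fhn_rate(I, a=0.7, b=0.8, tau=12.5, dt=0.01, T=500.0):
    """Spike rate of the FitzHugh-Nagumo model for constant drive I."""
    v, w, spikes, above = -1.2, -0.6, 0, False
    for _ in range(int(T / dt)):
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * (v + a - b * w) / tau
        if v > 1.0 and not above:       # upward threshold crossing = spike
            spikes += 1
        above = v > 1.0
    return spikes / (T / 1000.0)        # rate in Hz if time is in ms (assumed)

for I in [0.2, 0.3, 0.32, 0.34, 0.4, 0.6, 1.0]:
    # type 2: the rate jumps from zero to a finite value at onset
    print(f"I = {I:4.2f}  rate ~ {fhn_rate(I):6.1f} Hz")
```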
2301.12064
Madhur Mangalam
Madhur Mangalam, Taylor Wilson, Joel Sommerfeld, Aaron D Likens
Optimizing a Bayesian method for estimating the Hurst exponent in behavioral sciences
17 pages, 2 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Bayesian Hurst-Kolmogorov (HK) method estimates the Hurst exponent of a time series more accurately than the age-old detrended fluctuation analysis (DFA), especially when the time series is short. However, this advantage comes at the cost of computation time. The computation time increases exponentially with $N$, easily exceeding several hours for $N = 1024$, limiting the utility of the HK method in real-time paradigms, such as biofeedback and brain-computer interfaces. To address this issue, we have provided data on the estimation accuracy of $H$ for synthetic time series as a function of \textit{a priori} known values of $H$, the time series length, and the simulated sample size from the posterior distribution -- a critical step in the Bayesian estimation method. A simulated sample from the posterior distribution as small as $n = 25$ suffices to estimate $H$ with reasonable accuracy for a time series as short as $256$ measurements. Using a larger simulated sample from the posterior distribution -- i.e., $n > 50$ -- provides only marginal gain in accuracy, which might not be worth trading off with computational efficiency. We suggest balancing the simulated sample size from the posterior distribution of $H$ with the computational resources available to the user, preferring a minimum of $n = 50$ and opting for larger sample sizes based on time and resource constraints.
[ { "created": "Sat, 28 Jan 2023 02:47:08 GMT", "version": "v1" } ]
2023-01-31
[ [ "Mangalam", "Madhur", "" ], [ "Wilson", "Taylor", "" ], [ "Sommerfeld", "Joel", "" ], [ "Likens", "Aaron D", "" ] ]
The Bayesian Hurst-Kolmogorov (HK) method estimates the Hurst exponent of a time series more accurately than the age-old detrended fluctuation analysis (DFA), especially when the time series is short. However, this advantage comes at the cost of computation time. The computation time increases exponentially with $N$, easily exceeding several hours for $N = 1024$, limiting the utility of the HK method in real-time paradigms, such as biofeedback and brain-computer interfaces. To address this issue, we have provided data on the estimation accuracy of $H$ for synthetic time series as a function of \textit{a priori} known values of $H$, the time series length, and the simulated sample size from the posterior distribution -- a critical step in the Bayesian estimation method. A simulated sample from the posterior distribution as small as $n = 25$ suffices to estimate $H$ with reasonable accuracy for a time series as short as $256$ measurements. Using a larger simulated sample from the posterior distribution -- i.e., $n > 50$ -- provides only marginal gain in accuracy, which might not be worth trading off with computational efficiency. We suggest balancing the simulated sample size from the posterior distribution of $H$ with the computational resources available to the user, preferring a minimum of $n = 50$ and opting for larger sample sizes based on time and resource constraints.
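A grid-based sketch of the Bayesian estimation step for fractional Gaussian noise: exact Gaussian likelihood with the fGn autocovariance, a flat prior on H, and a simulated sample of size n drawn from the resulting posterior. This follows the general HK idea but is not the authors' code:

```python
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve

def fgn_acf(H, N):
    """Autocovariance of unit-variance fractional Gaussian noise."""
    k = np.arange(N)
    return 0.5 * (np.abs(k + 1)**(2 * H) - 2 * np.abs(k)**(2 * H)
                  + np.abs(k - 1)**(2 * H))

def hurst_posterior(x, grid=np.linspace(0.01, 0.99, 99), n_draws=50, rng=None):
    """Grid posterior over H (flat prior, standardized data), plus n_draws
    posterior samples -- the 'simulated sample' discussed in the abstract."""
    if rng is None:
        rng = np.random.default_rng()
    x = (x - x.mean()) / x.std()
    logp = []
    for H in grid:
        C = toeplitz(fgn_acf(H, len(x)))
        cf = cho_factor(C)
        quad = x @ cho_solve(cf, x)
        logdet = 2 * np.log(np.diag(cf[0])).sum()
        logp.append(-0.5 * (quad + logdet))
    logp = np.array(logp)
    p = np.exp(logp - logp.max()); p /= p.sum()
    draws = rng.choice(grid, size=n_draws, p=p)
    return draws.mean(), draws

x = np.random.default_rng(6).standard_normal(256)   # white noise: H ~ 0.5
h_hat, draws = hurst_posterior(x)
print(h_hat)
```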
1411.3638
Luca Ferreri
Luca Ferreri, Mario Giacobini, Paolo Bajardi, Luigi Bertolotti, Luca Bolzoni, Valentina Tagliapietra, Annapaola Rizzoli, Roberto Ros\`a
Pattern of tick aggregation on mice: larger than expected distribution tail enhances the spread of tick-borne pathogens
32 pages, 13 figures, appears in PLOS Computational Biology 2014
PLOS Computational Biology 10 (11): e1003931, 2014
10.1371/journal.pcbi.1003931
null
q-bio.PE physics.bio-ph physics.data-an stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spread of tick-borne pathogens represents an important threat to human and animal health in many parts of Eurasia. Here, we analysed a 9-year time series of Ixodes ricinus ticks feeding on Apodemus flavicollis mice (the main reservoir-competent host for tick-borne encephalitis, TBE) sampled in Trentino (Northern Italy). The tail of the distribution of the number of ticks per host was fitted by three theoretical distributions: Negative Binomial (NB), Poisson-LogNormal (PoiLN), and Power-Law (PL). The fit with theoretical distributions indicated that the tail of the tick infestation pattern on mice is better described by the PL distribution. Moreover, we found that the tail of the distribution significantly changes with seasonal variations in host abundance. In order to investigate the effect of different tails of the tick distribution on the invasion of a non-systemically transmitted pathogen, we simulated the transmission of a TBE-like virus between susceptible and infective ticks using a stochastic model. Model simulations indicated different outcomes of disease spreading when considering different distribution laws of ticks among hosts. Specifically, we found that the epidemic threshold and the prevalence equilibria obtained in epidemiological simulations with the PL distribution are a good approximation of those observed in simulations fed by the empirical distribution. Moreover, we also found that the epidemic threshold for disease invasion was lower when considering the seasonal variation of tick aggregation.
[ { "created": "Thu, 13 Nov 2014 17:59:33 GMT", "version": "v1" } ]
2017-07-11
[ [ "Ferreri", "Luca", "" ], [ "Giacobini", "Mario", "" ], [ "Bajardi", "Paolo", "" ], [ "Bertolotti", "Luigi", "" ], [ "Bolzoni", "Luca", "" ], [ "Tagliapietra", "Valentina", "" ], [ "Rizzoli", "Annapaola", "" ], [ "Rosà", "Roberto", "" ] ]
The spread of tick-borne pathogens represents an important threat to human and animal health in many parts of Eurasia. Here, we analysed a 9-year time series of Ixodes ricinus ticks feeding on Apodemus flavicollis mice (the main reservoir-competent host for tick-borne encephalitis, TBE) sampled in Trentino (Northern Italy). The tail of the distribution of the number of ticks per host was fitted by three theoretical distributions: Negative Binomial (NB), Poisson-LogNormal (PoiLN), and Power-Law (PL). The fit with theoretical distributions indicated that the tail of the tick infestation pattern on mice is better described by the PL distribution. Moreover, we found that the tail of the distribution significantly changes with seasonal variations in host abundance. In order to investigate the effect of different tails of the tick distribution on the invasion of a non-systemically transmitted pathogen, we simulated the transmission of a TBE-like virus between susceptible and infective ticks using a stochastic model. Model simulations indicated different outcomes of disease spreading when considering different distribution laws of ticks among hosts. Specifically, we found that the epidemic threshold and the prevalence equilibria obtained in epidemiological simulations with the PL distribution are a good approximation of those observed in simulations fed by the empirical distribution. Moreover, we also found that the epidemic threshold for disease invasion was lower when considering the seasonal variation of tick aggregation.
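A minimal sketch of the kind of tail comparison described above, run on synthetic counts. The Zipf generator, the choice of xmin, and the optimizer starting values are illustrative assumptions, not the paper's data or procedure; the power-law exponent uses the standard Clauset-style discrete approximation.

```python
# Sketch: compare a discrete power-law tail with a negative binomial fit
# on synthetic per-host tick counts (illustrative data, not the study's).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
counts = rng.zipf(a=2.5, size=2000)        # heavy-tailed stand-in data

xmin = 5                                   # illustrative tail cutoff
tail = counts[counts >= xmin]

# Discrete power-law exponent via the Clauset et al. approximation.
alpha = 1.0 + len(tail) / np.sum(np.log(tail / (xmin - 0.5)))

# Negative binomial MLE on the same tail via numerical optimization.
def nb_negloglik(params):
    r, p = params
    return -np.sum(stats.nbinom.logpmf(tail, r, p))

res = optimize.minimize(nb_negloglik, x0=[1.0, 0.1],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
print(f"power-law alpha ~ {alpha:.2f}, NB negative log-likelihood = {res.fun:.1f}")
```

Comparing (penalized) log-likelihoods of the candidate tails, as hinted at here, is one common way to decide which distribution describes the aggregation pattern best.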
1603.03355
Ching-Hao Wang
Ching-Hao Wang, Pankaj Mehta, and Michael Elbaum
A thermodynamic paradigm for solution demixing inspired by nuclear transport in living cells
6+9 pages, 4+5 figures, to appear in Phys. Rev. Lett
Phys. Rev. Lett. 118, 158101 (2017)
10.1103/PhysRevLett.118.158101
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living cells display a remarkable capacity to compartmentalize their functional biochemistry. A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve traversing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Beyond simply gating diffusion, the system of nuclear pores and associated transport receptors is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solvent exchange. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production.
[ { "created": "Thu, 10 Mar 2016 18:07:29 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2016 14:11:50 GMT", "version": "v2" }, { "created": "Fri, 3 Feb 2017 16:47:55 GMT", "version": "v3" } ]
2017-04-19
[ [ "Wang", "Ching-Hao", "" ], [ "Mehta", "Pankaj", "" ], [ "Elbaum", "Michael", "" ] ]
Living cells display a remarkable capacity to compartmentalize their functional biochemistry. A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve traversing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Beyond simply gating diffusion, the system of nuclear pores and associated transport receptors is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solvent exchange. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production.
1903.05141
Michael Margaliot
Eyal Bar-Shalom and Alexander Ovseevich and Michael Margaliot
Ribosome flow model with different site sizes
null
null
null
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce and analyze two general dynamical models for the unidirectional movement of particles along a circular chain and an open chain of sites. The models include a soft version of the simple exclusion principle: as the density in a site increases, the effective entry rate into this site decreases. This allows us to model and study the evolution of "traffic jams" of particles along the chain. A unique feature of these two new models is that each site along the chain can have a different size. Although the models are nonlinear, they are amenable to rigorous asymptotic analysis. In particular, we show that the dynamics always converges to a steady state, and that the steady-state densities along the chain and the steady-state output flow rate from the chain can be derived from the spectral properties of a suitable matrix, thus eliminating the need to numerically simulate the dynamics until convergence. This spectral representation also allows for powerful sensitivity analysis, i.e., understanding how a change in one of the parameters in the models affects the steady state. We show that the site sizes and the transition rates from site to site play different roles in the dynamics, and that for the purpose of maximizing the steady-state output (or production) rate the site sizes are more important than the transition rates. We also show that the problem of finding parameter values that maximize the production rate is tractable. We believe that the models introduced here can be applied to study various natural and artificial processes including ribosome flow during mRNA translation, the movement of molecular motors along filaments of the cytoskeleton, pedestrian and vehicular traffic, evacuation dynamics, and more.
[ { "created": "Tue, 12 Mar 2019 18:44:56 GMT", "version": "v1" } ]
2019-03-14
[ [ "Bar-Shalom", "Eyal", "" ], [ "Ovseevich", "Alexander", "" ], [ "Margaliot", "Michael", "" ] ]
We introduce and analyze two general dynamical models for the unidirectional movement of particles along a circular chain and an open chain of sites. The models include a soft version of the simple exclusion principle: as the density in a site increases, the effective entry rate into this site decreases. This allows us to model and study the evolution of "traffic jams" of particles along the chain. A unique feature of these two new models is that each site along the chain can have a different size. Although the models are nonlinear, they are amenable to rigorous asymptotic analysis. In particular, we show that the dynamics always converges to a steady state, and that the steady-state densities along the chain and the steady-state output flow rate from the chain can be derived from the spectral properties of a suitable matrix, thus eliminating the need to numerically simulate the dynamics until convergence. This spectral representation also allows for powerful sensitivity analysis, i.e., understanding how a change in one of the parameters in the models affects the steady state. We show that the site sizes and the transition rates from site to site play different roles in the dynamics, and that for the purpose of maximizing the steady-state output (or production) rate the site sizes are more important than the transition rates. We also show that the problem of finding parameter values that maximize the production rate is tractable. We believe that the models introduced here can be applied to study various natural and artificial processes including ribosome flow during mRNA translation, the movement of molecular motors along filaments of the cytoskeleton, pedestrian and vehicular traffic, evacuation dynamics, and more.
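A sketch of how such an open chain with site-dependent capacities can be simulated numerically. The soft-exclusion form below (entry rate into site i scaled by 1 - x_i/c_i) is one plausible encoding of "different site sizes", and the rates are illustrative; the paper's exact equations may differ.

```python
# Open ribosome-flow-type chain with per-site capacities c_i (assumed form).
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([1.0, 0.8, 1.2, 0.9, 1.1])   # entry, transition, and exit rates
c = np.array([1.0, 2.0, 1.0, 3.0])          # illustrative site capacities

def flow(t, x):
    inflow = np.empty_like(x)
    outflow = np.empty_like(x)
    inflow[0] = lam[0] * (1 - x[0] / c[0])               # entry from reservoir
    inflow[1:] = lam[1:-1] * x[:-1] * (1 - x[1:] / c[1:])  # soft exclusion
    outflow[:-1] = inflow[1:]
    outflow[-1] = lam[-1] * x[-1]                        # exit from last site
    return inflow - outflow

sol = solve_ivp(flow, (0, 200), np.zeros(4), max_step=0.5)
print("steady-state densities ~", np.round(sol.y[:, -1], 3))
print("output rate ~", round(float(lam[-1] * sol.y[-1, -1]), 3))
```

Running the integration long enough to converge mimics the steady state that, per the abstract, the spectral representation delivers without simulation.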
2212.03197
Alexander Ioannidis
Alexander G. Ioannidis (1), Javier Blanco-Portillo (2), Erika Hagelberg (3), Juan Esteban Rodr\'iguez-Rodr\'iguez (2), Keolu Fox (3), Adrian V. S. Hill (5 and 6), Carlos D. Bustamante (1), Marcus W. Feldman (2), Alexander J. Mentzer (5), Andr\'es Moreno-Estrada (7) ((1) Department of Biomedical Data Science, Stanford Medical School, Stanford, CA, USA, (2) Department of Biology, Stanford University, Stanford, CA, USA, (3) Department of Biosciences, University of Oslo, Oslo, Norway, (4) Department of Anthropology, University of California San Diego, La Jolla, CA, USA, (5) Wellcome Centre for Human Genetics, University of Oxford, Roosevelt Drive, Oxford, UK, (6) The Jenner Institute, Nuffield Department of Medicine, University of Oxford, Oxford, UK, (7) National Laboratory of Genomics for Biodiversity (LANGEBIO), CINVESTAV, Irapuato, Guanajuato, Mexico)
Ancestry-specific analyses of genome-wide data confirm the settlement sequence of Polynesia
6 pages, 1 figure
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
By demonstrating the role that historical population replacements and waves of admixture have played around the world, the genetics work of Reich and colleagues has provided a paradigm for understanding human history [Reich et al. 2009; Reich et al. 2012; Patterson et al. 2012]. Although we show in Ioannidis et al. [2021] that the peopling of Polynesia was a range expansion, and not, as suggested by Huang et al. [2022], yet another example of waves of admixture and large-scale gene flow between populations, we believe that our result in this recently settled oceanic expanse is the exception that proves the rule.
[ { "created": "Tue, 6 Dec 2022 18:16:46 GMT", "version": "v1" } ]
2022-12-07
[ [ "Ioannidis", "Alexander G.", "", "5 and 6" ], [ "Blanco-Portillo", "Javier", "", "5 and 6" ], [ "Hagelberg", "Erika", "", "5 and 6" ], [ "Rodríguez-Rodríguez", "Juan Esteban", "", "5 and 6" ], [ "Fox", "Keolu", "", "5 and 6" ], [ "Hill", "Adrian V. S.", "", "5 and 6" ], [ "Bustamante", "Carlos D.", "" ], [ "Feldman", "Marcus W.", "" ], [ "Mentzer", "Alexander J.", "" ], [ "Moreno-Estrada", "Andrés", "" ] ]
By demonstrating the role that historical population replacements and waves of admixture have played around the world, the genetics work of Reich and colleagues has provided a paradigm for understanding human history [Reich et al. 2009; Reich et al. 2012; Patterson et al. 2012]. Although we show in Ioannidis et al. [2021] that the peopling of Polynesia was a range expansion, and not, as suggested by Huang et al. [2022], yet another example of waves of admixture and large-scale gene flow between populations, we believe that our result in this recently settled oceanic expanse is the exception that proves the rule.
1706.00721
Yongguang Yang
Lixin Feng, Yongguang Yang
Origin and Quantitative Control of Sertoli Cells
total 15 pages with 136 references cited
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Sertoli cell is the "nurse" cell in the testis that regulates germ cell proliferation and differentiation. One Sertoli cell supports a certain number of germ cells during these processes; thus, it is a determinant of male reproductive capability. Sertoli cells originate from the primitive gonads during the embryonic stage, and their proliferation continues throughout the pre-pubertal stage. The proliferation and final density of Sertoli cells in the testis are regulated by hormones and local factors through autocrine, paracrine, and endocrine mechanisms. In this concise minireview, the most recent progress in the study of factors and signaling pathways that participate in regulating the proliferation and function of Sertoli cells is summarized.
[ { "created": "Fri, 2 Jun 2017 15:23:11 GMT", "version": "v1" } ]
2017-06-05
[ [ "Feng", "Lixin", "" ], [ "Yang", "Yongguang", "" ] ]
The Sertoli cell is the "nurse" cell in the testis that regulates germ cell proliferation and differentiation. One Sertoli cell supports a certain number of germ cells during these processes; thus, it is a determinant of male reproductive capability. Sertoli cells originate from the primitive gonads during the embryonic stage, and their proliferation continues throughout the pre-pubertal stage. The proliferation and final density of Sertoli cells in the testis are regulated by hormones and local factors through autocrine, paracrine, and endocrine mechanisms. In this concise minireview, the most recent progress in the study of factors and signaling pathways that participate in regulating the proliferation and function of Sertoli cells is summarized.
1611.04744
Alexander Nestor-Bergmann
Alexander Nestor-Bergmann, Georgina Goddard, Sarah Woolner, Oliver Jensen
Relating cell shape and mechanical stress in a spatially disordered epithelium using a vertex-based model
29 pages, 10 figures, revision
Mathematical Medicine and Biology: A Journal of the IMA, dqx008 (2017)
10.1093/imammb/dqx008
null
q-bio.CB physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a popular vertex-based model to describe a spatially disordered planar epithelial monolayer, we examine the relationship between cell shape and mechanical stress at the cell and tissue level. Deriving expressions for stress tensors starting from an energetic formulation of the model, we show that the principal axes of stress for an individual cell align with the principal axes of shape, and we determine the bulk effective tissue pressure when the monolayer is isotropic at the tissue level. Using simulations for a monolayer that is not under peripheral stress, we fit parameters of the model to experimental data for Xenopus embryonic tissue. The model predicts that mechanical interactions can generate mesoscopic patterns within the monolayer that exhibit long-range correlations in cell shape. The model also suggests that the orientation of mechanical and geometric cues for processes such as cell division are likely to be strongly correlated in real epithelia. Some limitations of the model in capturing geometric features of Xenopus epithelial cells are highlighted.
[ { "created": "Tue, 15 Nov 2016 08:54:08 GMT", "version": "v1" }, { "created": "Wed, 10 May 2017 15:49:56 GMT", "version": "v2" } ]
2017-08-14
[ [ "Nestor-Bergmann", "Alexander", "" ], [ "Goddard", "Georgina", "" ], [ "Woolner", "Sarah", "" ], [ "Jensen", "Oliver", "" ] ]
Using a popular vertex-based model to describe a spatially disordered planar epithelial monolayer, we examine the relationship between cell shape and mechanical stress at the cell and tissue level. Deriving expressions for stress tensors starting from an energetic formulation of the model, we show that the principal axes of stress for an individual cell align with the principal axes of shape, and we determine the bulk effective tissue pressure when the monolayer is isotropic at the tissue level. Using simulations for a monolayer that is not under peripheral stress, we fit parameters of the model to experimental data for Xenopus embryonic tissue. The model predicts that mechanical interactions can generate mesoscopic patterns within the monolayer that exhibit long-range correlations in cell shape. The model also suggests that the orientation of mechanical and geometric cues for processes such as cell division are likely to be strongly correlated in real epithelia. Some limitations of the model in capturing geometric features of Xenopus epithelial cells are highlighted.
1807.02288
Jesus F Bermejo-Martin
Jesus F Bermejo-Martin, Marta Mart\'in-Fernandez, Cristina L\'opez-Mestanza, Patricia Duque, Raquel Almansa
Shared features of endothelial dysfunction between sepsis and its preceding risk factors (aging and chronic disease)
Review article
null
10.3390/jcm7110400
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Acute vascular endothelial dysfunction is a central event in the pathogenesis of sepsis, increasing vascular permeability, promoting activation of the coagulation cascade and tissue edema, and compromising perfusion of vital organs. Aging and chronic diseases (hypertension, dyslipidaemia, diabetes mellitus, chronic kidney disease, cardiovascular disease, cerebrovascular disease, chronic pulmonary disease, liver disease or cancer) are recognized risk factors for sepsis. In this article we review the features of endothelial dysfunction shared by sepsis, aging and the chronic conditions preceding this disease. Clinical studies and review articles on endothelial dysfunction associated with sepsis, aging and chronic diseases published in PubMed were considered. The main features of endothelial dysfunction shared by sepsis, aging and chronic diseases were: 1. increased oxidative stress and systemic inflammation; 2. glycocalyx degradation and shedding; 3. disassembly of intercellular junctions, endothelial cell death, and blood-tissue barrier disruption; 4. enhanced leukocyte adhesion and extravasation; 5. induction of a pro-coagulant and anti-fibrinolytic state. In addition, chronic diseases impair the mechanisms of endothelial repair. In conclusion, sepsis, aging and chronic diseases induce similar features of endothelial dysfunction. The potential contribution of the pre-existing degree of endothelial dysfunction to sepsis pathogenesis deserves to be further investigated.
[ { "created": "Fri, 6 Jul 2018 07:24:40 GMT", "version": "v1" }, { "created": "Tue, 9 Oct 2018 07:08:51 GMT", "version": "v2" } ]
2018-10-31
[ [ "Bermejo-Martin", "Jesus F", "" ], [ "Martín-Fernandez", "Marta", "" ], [ "López-Mestanza", "Cristina", "" ], [ "Duque", "Patricia", "" ], [ "Almansa", "Raquel", "" ] ]
Acute vascular endothelial dysfunction is a central event in the pathogenesis of sepsis, increasing vascular permeability, promoting activation of the coagulation cascade and tissue edema, and compromising perfusion of vital organs. Aging and chronic diseases (hypertension, dyslipidaemia, diabetes mellitus, chronic kidney disease, cardiovascular disease, cerebrovascular disease, chronic pulmonary disease, liver disease or cancer) are recognized risk factors for sepsis. In this article we review the features of endothelial dysfunction shared by sepsis, aging and the chronic conditions preceding this disease. Clinical studies and review articles on endothelial dysfunction associated with sepsis, aging and chronic diseases published in PubMed were considered. The main features of endothelial dysfunction shared by sepsis, aging and chronic diseases were: 1. increased oxidative stress and systemic inflammation; 2. glycocalyx degradation and shedding; 3. disassembly of intercellular junctions, endothelial cell death, and blood-tissue barrier disruption; 4. enhanced leukocyte adhesion and extravasation; 5. induction of a pro-coagulant and anti-fibrinolytic state. In addition, chronic diseases impair the mechanisms of endothelial repair. In conclusion, sepsis, aging and chronic diseases induce similar features of endothelial dysfunction. The potential contribution of the pre-existing degree of endothelial dysfunction to sepsis pathogenesis deserves to be further investigated.
1911.13002
Shaoli Wang
Shaoli Wang, Xiyan Bai, Fei Xu
Bistability in a SIRS model with general nonmonotone and saturated incidence rate
8 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider an SIRS model with a general nonmonotone and saturated incidence rate and perform stability and bifurcation analysis. We show that the system undergoes a saddle-node bifurcation and displays bistable behavior. We obtain the critical thresholds that characterize the dynamical behaviors of the model. Surprisingly, we find that the system always admits a disease-free equilibrium $E_0$ which is always asymptotically stable. Numerical simulations are carried out to verify our results.
[ { "created": "Fri, 29 Nov 2019 08:53:24 GMT", "version": "v1" } ]
2019-12-02
[ [ "Wang", "Shaoli", "" ], [ "Bai", "Xiyan", "" ], [ "Xu", "Fei", "" ] ]
In this paper, we consider an SIRS model with a general nonmonotone and saturated incidence rate and perform stability and bifurcation analysis. We show that the system undergoes a saddle-node bifurcation and displays bistable behavior. We obtain the critical thresholds that characterize the dynamical behaviors of the model. Surprisingly, we find that the system always admits a disease-free equilibrium $E_0$ which is always asymptotically stable. Numerical simulations are carried out to verify our results.
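A minimal simulation sketch of an SIRS system with a nonmonotone saturated incidence of the form kIS/(1 + beta*I + alpha*I^2). This functional form is one common choice from this literature and the parameter values are illustrative, so bistability appears only in a suitable parameter regime; the code simply probes for it by comparing trajectories from small and large initial infection levels.

```python
# SIRS sketch with nonmonotone saturated incidence (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

k, alpha, beta = 0.8, 2.0, -1.0     # chosen so 1 + beta*I + alpha*I**2 > 0
mu, gamma, delta = 0.1, 0.3, 0.05   # birth/death, recovery, immunity loss

def sirs(t, y):
    S, I, R = y
    inc = k * I * S / (1 + beta * I + alpha * I**2)
    dS = mu * (1 - S) - inc + delta * R
    dI = inc - (mu + gamma) * I
    dR = gamma * I - (mu + delta) * R
    return [dS, dI, dR]

# Different initial infection levels may settle on different attractors
# in a bistable regime; in other regimes both decay to the disease-free state.
for I0 in (0.01, 0.4):
    sol = solve_ivp(sirs, (0, 2000), [1 - I0, I0, 0.0], max_step=1.0)
    print(f"I(0) = {I0}: I(end) ~ {sol.y[1, -1]:.4f}")
```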
1302.6977
Gregory Ryslik
Gregory Ryslik, Yuwei Cheng, Kei-Hoi Cheung, Yorgo Modis, Hongyu Zhao
Utilizing Protein Structure to Identify Non-Random Somatic Mutations
null
null
10.1186/1471-2105-14-190
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Human cancer is caused by the accumulation of somatic mutations in tumor suppressors and oncogenes within the genome. In the case of oncogenes, recent theory suggests that there are only a few key "driver" mutations responsible for tumorigenesis. As there have been significant pharmacological successes in developing drugs that treat cancers carrying these driver mutations, several methods that rely on mutational clustering have been developed to identify them. However, these methods consider proteins as a single strand without taking their spatial structure into account. We propose a new methodology that incorporates protein tertiary structure in order to increase our power when identifying mutational clustering. Results: We have developed a novel algorithm, iPAC: identification of Protein Amino acid Clustering, for the identification of non-random somatic mutations in proteins that takes the three-dimensional protein structure into account. By using the tertiary information, we are able to detect novel clusters in proteins that are known to exhibit mutation clustering as well as identify clusters in proteins without evidence of clustering based on existing methods. For example, by combining the data in the Protein Data Bank (PDB) and the Catalogue of Somatic Mutations in Cancer, our algorithm identifies new mutational clusters in well-known cancer proteins such as KRAS and PIK3CA. Further, by utilizing the tertiary structure, our algorithm also identifies clusters in EGFR, EIF2AK2, and other proteins that are not identified by current methodology.
[ { "created": "Wed, 27 Feb 2013 20:09:43 GMT", "version": "v1" } ]
2013-07-16
[ [ "Ryslik", "Gregory", "" ], [ "Cheng", "Yuwei", "" ], [ "Cheung", "Kei-Hoi", "" ], [ "Modis", "Yorgo", "" ], [ "Zhao", "Hongyu", "" ] ]
Motivation: Human cancer is caused by the accumulation of somatic mutations in tumor suppressors and oncogenes within the genome. In the case of oncogenes, recent theory suggests that there are only a few key "driver" mutations responsible for tumorigenesis. As there have been significant pharmacological successes in developing drugs that treat cancers carrying these driver mutations, several methods that rely on mutational clustering have been developed to identify them. However, these methods consider proteins as a single strand without taking their spatial structure into account. We propose a new methodology that incorporates protein tertiary structure in order to increase our power when identifying mutational clustering. Results: We have developed a novel algorithm, iPAC: identification of Protein Amino acid Clustering, for the identification of non-random somatic mutations in proteins that takes the three-dimensional protein structure into account. By using the tertiary information, we are able to detect novel clusters in proteins that are known to exhibit mutation clustering as well as identify clusters in proteins without evidence of clustering based on existing methods. For example, by combining the data in the Protein Data Bank (PDB) and the Catalogue of Somatic Mutations in Cancer, our algorithm identifies new mutational clusters in well-known cancer proteins such as KRAS and PIK3CA. Further, by utilizing the tertiary structure, our algorithm also identifies clusters in EGFR, EIF2AK2, and other proteins that are not identified by current methodology.
2204.06071
Qiang Li
Qiang Li
Noise Perturbation for Saliency Prediction with Psychophysical Synthetic Images
7 pages, 6 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Convolutional neural networks (CNNs) have achieved great success in natural image saliency prediction. The primary goal of this study is to investigate how well saliency prediction by CNNs and classic models holds up on psychophysical synthetic images under noise perturbation: is performance still as good as on natural images? At the same time, such images can be used to investigate the relationship between CNNs and human vision, mainly low-level visual functions; in other words, are CNNs exact replicas of human visual function? This study used CNNs alongside Fourier and spectral models inspired by low-level vision systems to investigate saliency prediction on psychophysical synthetic images rather than natural images. According to our findings, saliency prediction models inspired by Fourier and spectral theory outperformed current pre-trained deep neural networks on psychophysical images with noise perturbation. However, the psychophysical models were less stable under noise than the pre-trained deep neural networks. We suggest that investigating CNNs with psychophysical methods could benefit both visual neuroscience and artificial neural network studies.
[ { "created": "Tue, 12 Apr 2022 20:19:20 GMT", "version": "v1" }, { "created": "Sat, 14 May 2022 15:36:01 GMT", "version": "v2" }, { "created": "Fri, 20 May 2022 21:06:25 GMT", "version": "v3" }, { "created": "Sun, 21 Aug 2022 21:43:25 GMT", "version": "v4" }, { "created": "Thu, 25 Aug 2022 20:11:18 GMT", "version": "v5" }, { "created": "Mon, 6 Feb 2023 12:33:03 GMT", "version": "v6" }, { "created": "Thu, 28 Sep 2023 18:51:39 GMT", "version": "v7" } ]
2023-10-02
[ [ "Li", "Qiang", "" ] ]
Convolutional neural networks (CNNs) have achieved great success in natural image saliency prediction. The primary goal of this study is to investigate how well saliency prediction by CNNs and classic models holds up on psychophysical synthetic images under noise perturbation: is performance still as good as on natural images? At the same time, such images can be used to investigate the relationship between CNNs and human vision, mainly low-level visual functions; in other words, are CNNs exact replicas of human visual function? This study used CNNs alongside Fourier and spectral models inspired by low-level vision systems to investigate saliency prediction on psychophysical synthetic images rather than natural images. According to our findings, saliency prediction models inspired by Fourier and spectral theory outperformed current pre-trained deep neural networks on psychophysical images with noise perturbation. However, the psychophysical models were less stable under noise than the pre-trained deep neural networks. We suggest that investigating CNNs with psychophysical methods could benefit both visual neuroscience and artificial neural network studies.
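As one concrete example of the Fourier/spectral family of saliency models referred to above, here is a minimal spectral-residual map in the style of Hou and Zhang (2007). It is not the study's code; the filter sizes and the synthetic pop-out stimulus are illustrative.

```python
# Minimal spectral-residual saliency map on a synthetic pop-out image.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)   # remove smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)                 # post-smoothing

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
img[20:30, 20:30] += 3.0                                   # synthetic target patch
sal = spectral_residual_saliency(img)
print("peak saliency at", np.unravel_index(sal.argmax(), sal.shape))
```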
0910.4015
A. E. Sitnitsky
A.E. Sitnitsky
Model for solvent viscosity effect on enzymatic reactions
16 LaTex pages, 5 eps figures
Chem.Phys. 2010, v. 369, N1, pp.37-42
10.1016/j.chemphys.2010.02.005
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Why reaction rate constants for enzymatic reactions are typically inversely proportional to fractional powers of solvent viscosity has remained a puzzle for some thirty years. Available interpretations of the phenomenon invoke a modification of either (1) the conventional Kramers theory or (2) the Stokes law. We show that there is an alternative interpretation of the phenomenon in which neither of these modifications is in fact indispensable, and we reconcile (1) and (2) with the experimentally observed dependence. We assume that an enzyme solution in solvent, with or without cosolvent molecules, is an ensemble of samples with different values of the viscosity for the movement of the system along the reaction coordinate. This viscosity consists of a contribution with weight $q$ from cosolvent molecules and one with weight $1-q$ from the protein matrix and solvent molecules. We introduce heterogeneity into our system with the help of a distribution over the weight $q$. We verify the obtained solution of the integral equation for the unknown distribution function by direct substitution. All parameters of the model are related to experimentally observable values. The general formalism is exemplified by the analysis of literature experimental data for oxygen escape from hemerythrin.
[ { "created": "Wed, 21 Oct 2009 08:24:50 GMT", "version": "v1" } ]
2015-05-14
[ [ "Sitnitsky", "A. E.", "" ] ]
Why reaction rate constants for enzymatic reactions are typically inversely proportional to fractional powers of solvent viscosity has remained a puzzle for some thirty years. Available interpretations of the phenomenon invoke a modification of either (1) the conventional Kramers theory or (2) the Stokes law. We show that there is an alternative interpretation of the phenomenon in which neither of these modifications is in fact indispensable, and we reconcile (1) and (2) with the experimentally observed dependence. We assume that an enzyme solution in solvent, with or without cosolvent molecules, is an ensemble of samples with different values of the viscosity for the movement of the system along the reaction coordinate. This viscosity consists of a contribution with weight $q$ from cosolvent molecules and one with weight $1-q$ from the protein matrix and solvent molecules. We introduce heterogeneity into our system with the help of a distribution over the weight $q$. We verify the obtained solution of the integral equation for the unknown distribution function by direct substitution. All parameters of the model are related to experimentally observable values. The general formalism is exemplified by the analysis of literature experimental data for oxygen escape from hemerythrin.
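In our notation (the paper's symbols may differ), the averaging step described above can be written compactly as follows, where $\eta_{\mathrm{cs}}$ and $\eta_{\mathrm{pm}}$ denote the cosolvent and protein-matrix-plus-solvent contributions:

```latex
% Ensemble-averaged rate over the weight distribution f(q); notation is ours.
k_{\mathrm{eff}} \;=\; \int_0^1 f(q)\, k\bigl(\eta(q)\bigr)\, \mathrm{d}q,
\qquad
\eta(q) \;=\; q\,\eta_{\mathrm{cs}} + (1-q)\,\eta_{\mathrm{pm}},
\qquad
k(\eta) \;\propto\; \eta^{-1}.
```

Here $k(\eta) \propto \eta^{-1}$ is the unmodified high-friction Kramers rate; the claim is then that a suitable distribution $f(q)$ reproduces the observed $k_{\mathrm{eff}} \propto \eta^{-\beta}$ with fractional $0 < \beta < 1$, without modifying either Kramers' theory or the Stokes law.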
2308.15477
Mathieu Desroches
Dmitry Amakhin, Anton Chizhov, Guillaume Girier, Mathieu Desroches, Jan Sieber, Serafim Rodrigues
Observing hidden neuronal states in experiments
null
null
null
null
q-bio.NC math.DS
http://creativecommons.org/licenses/by/4.0/
We systematically construct experimental steady-state bifurcation diagrams for entorhinal cortex neurons. A slowly ramped voltage-clamp electrophysiology protocol serves as a closed-loop feedback-controlled experiment for the subsequent current-clamp open-loop protocol on the same cell. In this way, the voltage-clamp experiment determines the dynamically stable and unstable (hidden) steady states of the current-clamp experiment. The transitions between observable steady states and observable spiking states in the current-clamp experiment reveal the stability and bifurcations of the steady states, completing the steady-state bifurcation diagram.
[ { "created": "Tue, 29 Aug 2023 17:55:37 GMT", "version": "v1" } ]
2023-08-30
[ [ "Amakhin", "Dmitry", "" ], [ "Chizhov", "Anton", "" ], [ "Girier", "Guillaume", "" ], [ "Desroches", "Mathieu", "" ], [ "Sieber", "Jan", "" ], [ "Rodrigues", "Serafim", "" ] ]
We systematically construct experimental steady-state bifurcation diagrams for entorhinal cortex neurons. A slowly ramped voltage-clamp electrophysiology protocol serves as a closed-loop feedback-controlled experiment for the subsequent current-clamp open-loop protocol on the same cell. In this way, the voltage-clamp experiment determines the dynamically stable and unstable (hidden) steady states of the current-clamp experiment. The transitions between observable steady states and observable spiking states in the current-clamp experiment reveal the stability and bifurcations of the steady states, completing the steady-state bifurcation diagram.
2009.10706
Grzegorz Mrukwa
Grzegorz Mrukwa (1 and 2) and Joanna Polanska (1) ((1) Silesian University of Technology, (2) Netguru)
DiviK: Divisive intelligent K-Means for hands-free unsupervised clustering in big biological data
24 pages, 11 figures
BMC Bioinformatics 23, 538 (2022)
10.1186/s12859-022-05093-z
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Investigating molecular heterogeneity provides insights into tumor origin and metabolomics. The increasing amount of data gathered makes manual analyses infeasible; therefore, automated unsupervised learning approaches are utilized for discovering heterogeneity. However, automated unsupervised analyses require substantial experience with setting their hyperparameters and usually upfront knowledge of the number of expected substructures. Moreover, the large number of measured molecules requires an additional step of feature engineering to provide valuable results. In this work, we propose DiviK: a scalable stepwise algorithm with local data-driven feature space adaptation for the segmentation of high-dimensional datasets. A combination of three quality indices (Dice index, Rand index, and EXIMS score) is used to assess the quality of unsupervised analyses in 3D space. DiviK was validated on two separate high-throughput datasets acquired by Mass Spectrometry Imaging in 2D and 3D. DiviK could be one of the default choices to consider during the initial exploration of Mass Spectrometry Imaging data. It provides a trade-off between absolute heterogeneity detection and a focus on biologically plausible structures, and does not require specifying the number of expected structures before the analysis. With its unique local feature space adaptation, it is robust against dominating global patterns when focusing on detail. Finally, due to its simplicity, DiviK is easily generalizable to an even more flexible framework, useful for other '-omics' data, or tabular data in general (including medical images after appropriate embedding). A generic implementation is freely available under the Apache 2.0 license at https://github.com/gmrukwa/divik.
[ { "created": "Tue, 22 Sep 2020 17:50:12 GMT", "version": "v1" }, { "created": "Sat, 29 Jan 2022 23:46:37 GMT", "version": "v2" }, { "created": "Sat, 12 Mar 2022 12:07:23 GMT", "version": "v3" }, { "created": "Mon, 29 Aug 2022 21:33:50 GMT", "version": "v4" }, { "created": "Tue, 17 Jan 2023 20:23:34 GMT", "version": "v5" } ]
2023-01-19
[ [ "Mrukwa", "Grzegorz", "", "1 and 2" ], [ "Polanska", "Joanna", "" ] ]
Investigating molecular heterogeneity provides insights into tumor origin and metabolomics. The increasing amount of data gathered makes manual analyses infeasible; therefore, automated unsupervised learning approaches are utilized for discovering heterogeneity. However, automated unsupervised analyses require substantial experience with setting their hyperparameters and usually upfront knowledge of the number of expected substructures. Moreover, the large number of measured molecules requires an additional step of feature engineering to provide valuable results. In this work, we propose DiviK: a scalable stepwise algorithm with local data-driven feature space adaptation for the segmentation of high-dimensional datasets. A combination of three quality indices (Dice index, Rand index, and EXIMS score) is used to assess the quality of unsupervised analyses in 3D space. DiviK was validated on two separate high-throughput datasets acquired by Mass Spectrometry Imaging in 2D and 3D. DiviK could be one of the default choices to consider during the initial exploration of Mass Spectrometry Imaging data. It provides a trade-off between absolute heterogeneity detection and a focus on biologically plausible structures, and does not require specifying the number of expected structures before the analysis. With its unique local feature space adaptation, it is robust against dominating global patterns when focusing on detail. Finally, due to its simplicity, DiviK is easily generalizable to an even more flexible framework, useful for other '-omics' data, or tabular data in general (including medical images after appropriate embedding). A generic implementation is freely available under the Apache 2.0 license at https://github.com/gmrukwa/divik.
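A toy recursive (divisive) k-means with a crude per-split feature filter, illustrating the general stepwise idea only. This is not the DiviK API (see the linked repository for the actual implementation), and the variance-based feature filter, the stopping rules, and all thresholds are assumptions of this sketch.

```python
# Toy divisive k-means with local feature filtering; NOT the DiviK package.
import numpy as np
from sklearn.cluster import KMeans

def divisive_kmeans(X, depth=0, max_depth=3, min_size=20):
    if depth >= max_depth or len(X) < 2 * min_size:
        return np.zeros(len(X), dtype=int)
    # crude local feature-space adaptation: keep high-variance features only
    var = np.var(X, axis=0)
    keep = var > np.median(var)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[:, keep])
    out = np.empty(len(X), dtype=int)
    next_id = 0
    for c in (0, 1):
        idx = labels == c
        sub = divisive_kmeans(X[idx], depth + 1, max_depth, min_size)
        out[idx] = sub + next_id
        next_id += sub.max() + 1
    return out

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(100, 30)) for m in (0, 4, 8)])
print("clusters found:", len(np.unique(divisive_kmeans(X))))
```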
0707.3606
Tao Hu
Tao Hu, B. I. Shklovskii
How a protein searches for its specific site on DNA: the role of intersegment transfer
9 pages, 7 figures
Phys. Rev. E 76, 051909 (2007).
10.1103/PhysRevE.76.051909
null
q-bio.BM cond-mat.soft
null
Proteins are known to locate their specific targets on DNA up to two orders of magnitude faster than predicted by the Smoluchowski three-dimensional diffusion rate. One of the mechanisms proposed to resolve this discrepancy is termed "intersegment transfer". Many proteins have two DNA binding sites and can transfer from one DNA segment to another without dissociation to water. We calculate the target search rate for such proteins in a dense globular DNA, taking into account intersegment transfer working in conjunction with DNA motion and protein sliding along DNA. We show that intersegment transfer plays a very important role in cases where the protein spends most of its time adsorbed on DNA.
[ { "created": "Tue, 24 Jul 2007 16:52:08 GMT", "version": "v1" } ]
2011-11-10
[ [ "Hu", "Tao", "" ], [ "Shklovskii", "B. I.", "" ] ]
Proteins are known to locate their specific targets on DNA up to two orders of magnitude faster than predicted by the Smoluchowski three-dimensional diffusion rate. One of the mechanisms proposed to resolve this discrepancy is termed "intersegment transfer". Many proteins have two DNA binding sites and can transfer from one DNA segment to another without dissociation to water. We calculate the target search rate for such proteins in a dense globular DNA, taking into account intersegment transfer working in conjunction with DNA motion and protein sliding along DNA. We show that intersegment transfer plays a very important role in cases where the protein spends most of its time adsorbed on DNA.
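The effect of intersegment transfer can be illustrated with a toy lattice simulation: a protein slides along a ring of L sites and occasionally relocates to a uniformly random site, a crude stand-in for a transfer between distant DNA segments. All parameters are illustrative and no explicit 3D diffusion step is modeled.

```python
# Toy target search: 1D sliding with occasional intersegment transfer.
import numpy as np

def search_time(L=1000, p_transfer=0.0, rng=None, max_steps=2_000_000):
    rng = rng if rng is not None else np.random.default_rng()
    target = L // 2
    pos = int(rng.integers(L))
    for step in range(max_steps):
        if pos == target:
            return step
        if rng.random() < p_transfer:
            pos = int(rng.integers(L))            # jump to a distant segment
        else:
            pos = (pos + rng.choice((-1, 1))) % L  # 1D sliding
    return max_steps

rng = np.random.default_rng(3)
for p in (0.0, 1e-3, 1e-2):
    times = [search_time(p_transfer=p, rng=rng) for _ in range(10)]
    print(f"p_transfer = {p}: mean search time ~ {np.mean(times):.0f} steps")
```

Even this crude model shows the qualitative speed-up: moderate transfer rates cut the mean first-passage time well below that of pure sliding.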
2007.06296
Mattia Miotto
Mattia Miotto, Lorenzo Di Rienzo, Giorgio Gosti, Edoardo Milanetti, Giancarlo Ruocco
Does blood type affect the COVID-19 infection pattern?
6 figures, 4 tables
PLoS ONE (2021);16(5): e0251535
10.1371/journal.pone.0251535
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Among the many aspects that characterize the COVID-19 pandemic, two seem particularly challenging to understand: (i) the great geographical differences in the degree of virus contagiousness and lethality which were found in the different phases of the epidemic progression, and (ii) the potential role of the infected people's blood type in both the virus infectivity and the progression of the disease. A recent hypothesis could shed some light on both aspects. Specifically, it has been proposed that in the subject-to-subject transfer SARS-CoV-2 conserves on its capsid the erythrocytes' antigens of the source subject. Thus these conserved antigens can potentially cause an immune reaction in a receiving subject that has previously acquired specific antibodies for the source subject antigens. This hypothesis implies a blood type-dependent infection rate. The strong geographical dependence of the blood type distribution could be, therefore, one of the factors at the origin of the observed heterogeneity in the epidemics spread. Here, we present an epidemiological deterministic model where the infection rules based on blood types are taken into account and compare our model outcomes with the existing worldwide infection progression data. We found an overall good agreement, which strengthens the hypothesis that blood types do play a role in the COVID-19 infection.
[ { "created": "Mon, 13 Jul 2020 10:29:21 GMT", "version": "v1" }, { "created": "Sun, 25 Oct 2020 22:50:51 GMT", "version": "v2" } ]
2021-06-10
[ [ "Miotto", "Mattia", "" ], [ "Di Rienzo", "Lorenzo", "" ], [ "Gosti", "Giorgio", "" ], [ "Milanetti", "Edoardo", "" ], [ "Ruocco", "Giancarlo", "" ] ]
Among the many aspects that characterize the COVID-19 pandemic, two seem particularly challenging to understand: (i) the great geographical differences in the degree of virus contagiousness and lethality which were found in the different phases of the epidemic progression, and (ii) the potential role of the infected people's blood type in both the virus infectivity and the progression of the disease. A recent hypothesis could shed some light on both aspects. Specifically, it has been proposed that in the subject-to-subject transfer SARS-CoV-2 conserves on its capsid the erythrocytes' antigens of the source subject. Thus these conserved antigens can potentially cause an immune reaction in a receiving subject that has previously acquired specific antibodies for the source subject antigens. This hypothesis implies a blood type-dependent infection rate. The strong geographical dependence of the blood type distribution could be, therefore, one of the factors at the origin of the observed heterogeneity in the epidemics spread. Here, we present an epidemiological deterministic model where the infection rules based on blood types are taken into account and compare our model outcomes with the existing worldwide infection progression data. We found an overall good agreement, which strengthens the hypothesis that blood types do play a role in the COVID-19 infection.
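A deterministic sketch of the hypothesis above: transmission from an infected donor of type d to a susceptible recipient of type r is allowed only when r carries no antibodies against d's antigens (a transfusion-like compatibility rule). Group frequencies and rates are illustrative, and this is a plain SIR skeleton, simpler than the paper's model.

```python
# Blood-type-structured SIR sketch with a transfusion-like compatibility rule.
import numpy as np
from scipy.integrate import solve_ivp

types = ["O", "A", "B", "AB"]
freq = np.array([0.45, 0.40, 0.11, 0.04])          # illustrative frequencies
antigens = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}
# C[r, d] = 1 if donor d's antigens are a subset of recipient r's antigens
C = np.array([[antigens[d] <= antigens[r] for d in types] for r in types],
             dtype=float)

beta, gamma = 0.4, 0.1

def model(t, y):
    S, I = y[:4], y[4:]
    force = beta * (C @ I)         # type-specific force of infection
    return np.concatenate([-S * force, S * force - gamma * I])

I0 = 1e-4 * freq
sol = solve_ivp(model, (0, 300), np.concatenate([freq - I0, I0]), max_step=0.5)
attack = 1 - sol.y[:4, -1] / freq
for t_, a in zip(types, attack):
    print(f"type {t_}: attack rate ~ {a:.2f}")
```

Under this rule, type O donors (no antigens) can infect everyone, while AB recipients (no anti-A or anti-B antibodies) can be infected by anyone, which already produces type-dependent attack rates.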
2302.07961
Rohit Misra
Rohit Misra, Tapan K. Gandhi
Functional Connectivity Dynamics show Resting-State Instability and Rightward Parietal Dysfunction in ADHD
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Attention Deficit/Hyperactivity Disorder (ADHD) is one of the most common neurodevelopmental disorders in children and is characterised by inattention, impulsiveness and hyperactivity. While several studies have analysed the static functional connectivity in the resting-state functional MRI (rs-fMRI) of ADHD patients, detailed investigations are required to characterize the connectivity dynamics in the brain. In an attempt to establish a link between attention instability and the dynamic properties of Functional Connectivity (FC), we investigated the differences in temporal variability of FC between 40 children with ADHD and 40 Typically Developing (TD) children. Using a sliding-window method to segment the rs-fMRI scans in time, we employed seed-to-voxel correlation analysis for each window to obtain time-evolving seed connectivity maps for seeds placed in the posterior cingulate cortex (PCC) and the medial prefrontal cortex (mPFC). For each subject, the standard deviation of the voxel connectivity time series was used as a measure of the temporal variability of FC. Results showed that ADHD patients exhibited significantly higher variability in dFC than TD children in the cingulo-temporal, cingulo-parietal, fronto-temporal, and fronto-parietal networks ($p_{FWE} < 0.05$). Atypical temporal variability was observed in the left and right temporal gyri, the anterior cingulate cortex, and lateral regions of the right parietal cortex. The observations are consistent with visual attention issues, executive control deficit, and rightward parietal dysfunction reported in ADHD, respectively. These results help in understanding the disorder with a fresh perspective linking behavioural inattention with instability in FC in the brain.
[ { "created": "Wed, 15 Feb 2023 21:50:06 GMT", "version": "v1" } ]
2023-02-17
[ [ "Misra", "Rohit", "" ], [ "Gandhi", "Tapan K.", "" ] ]
Attention Deficit/Hyperactivity Disorder (ADHD) is one of the most common neurodevelopmental disorders in children and is characterised by inattention, impulsiveness and hyperactivity. While several studies have analysed the static functional connectivity in the resting-state functional MRI (rs-fMRI) of ADHD patients, detailed investigations are required to characterize the connectivity dynamics in the brain. In an attempt to establish a link between attention instability and the dynamic properties of Functional Connectivity (FC), we investigated the differences in temporal variability of FC between 40 children with ADHD and 40 Typically Developing (TD) children. Using a sliding-window method to segment the rs-fMRI scans in time, we employed seed-to-voxel correlation analysis for each window to obtain time-evolving seed connectivity maps for seeds placed in the posterior cingulate cortex (PCC) and the medial prefrontal cortex (mPFC). For each subject, the standard deviation of the voxel connectivity time series was used as a measure of the temporal variability of FC. Results showed that ADHD patients exhibited significantly higher variability in dFC than TD children in the cingulo-temporal, cingulo-parietal, fronto-temporal, and fronto-parietal networks ($p_{FWE} < 0.05$). Atypical temporal variability was observed in the left and right temporal gyri, the anterior cingulate cortex, and lateral regions of the right parietal cortex. The observations are consistent with visual attention issues, executive control deficit, and rightward parietal dysfunction reported in ADHD, respectively. These results help in understanding the disorder with a fresh perspective linking behavioural inattention with instability in FC in the brain.
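The sliding-window variability measure described above is straightforward to sketch on synthetic data. Window length, step, and the synthetic seed series below are illustrative choices, not the study's preprocessing pipeline.

```python
# Sliding-window seed-to-voxel connectivity variability on synthetic signals.
import numpy as np

rng = np.random.default_rng(0)
T, V = 200, 50                                  # time points, voxels
X = rng.standard_normal((T, V))
seed = X[:, 0] + 0.5 * rng.standard_normal(T)   # synthetic seed time series

win, step = 30, 5
corrs = []
for start in range(0, T - win + 1, step):
    sl = slice(start, start + win)
    s = seed[sl] - seed[sl].mean()
    Y = X[sl] - X[sl].mean(axis=0)
    r = (s @ Y) / (np.linalg.norm(s) * np.linalg.norm(Y, axis=0))
    corrs.append(r)                             # per-window Pearson r per voxel

corrs = np.array(corrs)                         # windows x voxels
variability = corrs.std(axis=0)                 # temporal variability of FC
print("mean FC variability:", round(float(variability.mean()), 3))
```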
2402.19045
Johannes Pausch
Francesco Puccioni, Johannes Pausch, Paul Piho, Philipp Thomas
Noise-induced survival resonances during fractional killing of cell populations
null
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Fractional killing in response to drugs is a hallmark of non-genetic cellular heterogeneity. Yet how individual lineages evade drug treatment, as observed in bacteria and cancer cells, is not quantitatively understood. We analyse a stochastic population model with age-dependent division and death rates and characterise the emergence of fractional killing as a stochastic phenomenon under constant and periodic drug environments. In constant environments, increasing cell cycle noise induces a phase transition from complete to fractional killing, while increasing death noise can induce the reverse transition. In periodic drug environments, we discover survival resonance phenomena that give rise to peaks in the survival probabilities at division or death times that are multiples of the environment duration, a feature not seen in unstructured populations.
[ { "created": "Thu, 29 Feb 2024 11:19:12 GMT", "version": "v1" } ]
2024-03-01
[ [ "Puccioni", "Francesco", "" ], [ "Pausch", "Johannes", "" ], [ "Piho", "Paul", "" ], [ "Thomas", "Philipp", "" ] ]
Fractional killing in response to drugs is a hallmark of non-genetic cellular heterogeneity. Yet how individual lineages evade drug treatment, as observed in bacteria and cancer cells, is not quantitatively understood. We analyse a stochastic population model with age-dependent division and death rates and characterise the emergence of fractional killing as a stochastic phenomenon under constant and periodic drug environments. In constant environments, increasing cell cycle noise induces a phase transition from complete to fractional killing, while increasing death noise can induce the reverse transition. In periodic drug environments, we discover survival resonance phenomena that give rise to peaks in the survival probabilities at division or death times that are multiples of the environment duration, a feature not seen in unstructured populations.
1205.3347
Eric Frichot
Eric Frichot (1), Sean Schoville (1), Guillaume Bouchard (2) and Olivier Fran\c{c}ois (1) ((1) UJF, CNRS, TIMC-IMAG, FRANCE, (2) Xerox Research Center Europe, France)
Testing for Associations between Loci and Environmental Gradients Using Latent Factor Mixed Models
29 pages with 8 pages of Supplementary Material (V2 revised presentation and results part)
Mol Biol Evol (2013) 30 (7): 1687-1699
10.1093/molbev/mst063
null
q-bio.PE stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation to local environments often occurs through natural selection acting on a large number of loci, each having a weak phenotypic effect. One way to detect these loci is to identify genetic polymorphisms that exhibit high correlation with environmental variables used as proxies for ecological pressures. Here, we propose new algorithms based on population genetics, ecological modeling, and statistical learning techniques to screen genomes for signatures of local adaptation. Implemented in the computer program "latent factor mixed model" (LFMM), these algorithms employ an approach in which population structure is introduced using unobserved variables. These fast and computationally efficient algorithms detect correlations between environmental and genetic variation while simultaneously inferring background levels of population structure. Comparing these new algorithms with related methods provides evidence that LFMM can efficiently estimate random effects due to population history and isolation-by-distance patterns when computing gene-environment correlations, and decrease the number of false-positive associations in genome scans. We then apply these models to plant and human genetic data, identifying several genes with functions related to development that exhibit strong correlations with climatic gradients.
[ { "created": "Tue, 15 May 2012 12:46:34 GMT", "version": "v1" }, { "created": "Wed, 12 Sep 2012 12:34:56 GMT", "version": "v2" }, { "created": "Thu, 26 Sep 2013 15:42:19 GMT", "version": "v3" } ]
2015-03-20
[ [ "Frichot", "Eric", "" ], [ "Schoville", "Sean", "" ], [ "Bouchard", "Guillaume", "" ], [ "François", "Olivier", "" ] ]
Adaptation to local environments often occurs through natural selection acting on a large number of loci, each having a weak phenotypic effect. One way to detect these loci is to identify genetic polymorphisms that exhibit high correlation with environmental variables used as proxies for ecological pressures. Here, we propose new algorithms based on population genetics, ecological modeling, and statistical learning techniques to screen genomes for signatures of local adaptation. Implemented in the computer program "latent factor mixed model" (LFMM), these algorithms employ an approach in which population structure is introduced using unobserved variables. These fast and computationally efficient algorithms detect correlations between environmental and genetic variation while simultaneously inferring background levels of population structure. Comparing these new algorithms with related methods provides evidence that LFMM can efficiently estimate random effects due to population history and isolation-by-distance patterns when computing gene-environment correlations, and decrease the number of false-positive associations in genome scans. We then apply these models to plant and human genetic data, identifying several genes with functions related to development that exhibit strong correlations with climatic gradients.
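A simplified sketch of the latent-factor idea: estimate hidden population structure from the genotype matrix and include it as covariates when regressing each locus on the environmental variable. This mirrors the spirit of the approach but is not the LFMM estimation algorithm itself; all names, dimensions, and the synthetic data are ours.

```python
# Latent-factor-adjusted genotype-environment scan on synthetic data;
# a simplified illustration, NOT the LFMM algorithm.
import numpy as np

rng = np.random.default_rng(1)
n, L, K = 200, 500, 3
U = rng.standard_normal((n, K))                            # hidden structure
G = (rng.random((n, L)) < 0.3).astype(float) \
    + U @ rng.standard_normal((K, L)) * 0.1
env = U[:, 0] + rng.standard_normal(n) * 0.5               # env tracks structure
G[:, 0] += 0.8 * env                                       # one true association

# estimate latent factors from the genotype matrix via SVD (PCA)
Gc = G - G.mean(axis=0)
factors = np.linalg.svd(Gc, full_matrices=False)[0][:, :K]

design = np.column_stack([np.ones(n), env, factors])
coef, *_ = np.linalg.lstsq(design, Gc, rcond=None)
effects = coef[1]                                          # env effect per locus
z = (effects - effects.mean()) / effects.std()
print("top locus by |z|:", int(np.argmax(np.abs(z))))      # expect locus 0
```

Including the factors in the design is what guards against the confounding between environment and population structure that the abstract emphasizes.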
1902.08877
Vladimir Minin
Mingwei Tang, Gytis Dudas, Trevor Bedford, Vladimir N. Minin
Fitting stochastic epidemic models to gene genealogies using linear noise approximation
43 pages, 6 figures in the main text
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylodynamics is a set of population genetics tools that aim at reconstructing the demographic history of a population based on molecular sequences of individuals sampled from the population of interest. One important task in phylodynamics is to estimate changes in (effective) population size. When applied to infectious disease sequences, such estimation of population size trajectories can provide information about changes in the number of infections. To model changes in the number of infected individuals, current phylodynamic methods use non-parametric approaches, parametric approaches, and stochastic modeling in conjunction with likelihood-free Bayesian methods. The first class of methods yields results that are hard to interpret epidemiologically. The second class of methods provides estimates of important epidemiological parameters, such as infection and removal/recovery rates, but ignores variation in the dynamics of infectious disease spread. The third class of methods is the most advantageous statistically, but relies on computationally intensive particle filtering techniques that limit its applications. We propose a Bayesian model that combines phylodynamic inference and stochastic epidemic models, and achieves computational tractability by using a linear noise approximation (LNA), a technique that allows us to approximate probability densities of stochastic epidemic model trajectories. The LNA opens the door to using modern Markov chain Monte Carlo tools to approximate the joint posterior distribution of the disease transmission parameters and of high-dimensional vectors describing unobserved changes in the stochastic epidemic model compartment sizes (e.g., numbers of infectious and susceptible individuals). We apply our estimation technique to Ebola genealogies estimated using viral genetic data from the 2014 epidemic in Sierra Leone and Liberia.
[ { "created": "Sun, 24 Feb 2019 02:24:16 GMT", "version": "v1" } ]
2019-02-26
[ [ "Tang", "Mingwei", "" ], [ "Dudas", "Gytis", "" ], [ "Bedford", "Trevor", "" ], [ "Minin", "Vladimir N.", "" ] ]
Phylodynamics is a set of population genetics tools that aim at reconstructing the demographic history of a population based on molecular sequences of individuals sampled from the population of interest. One important task in phylodynamics is to estimate changes in (effective) population size. When applied to infectious disease sequences, such estimation of population size trajectories can provide information about changes in the number of infections. To model changes in the number of infected individuals, current phylodynamic methods use non-parametric approaches, parametric approaches, and stochastic modeling in conjunction with likelihood-free Bayesian methods. The first class of methods yields results that are hard to interpret epidemiologically. The second class of methods provides estimates of important epidemiological parameters, such as infection and removal/recovery rates, but ignores variation in the dynamics of infectious disease spread. The third class of methods is the most advantageous statistically, but relies on computationally intensive particle filtering techniques that limit its applications. We propose a Bayesian model that combines phylodynamic inference and stochastic epidemic models, and achieves computational tractability by using a linear noise approximation (LNA), a technique that allows us to approximate probability densities of stochastic epidemic model trajectories. The LNA opens the door to using modern Markov chain Monte Carlo tools to approximate the joint posterior distribution of the disease transmission parameters and of high-dimensional vectors describing unobserved changes in the stochastic epidemic model compartment sizes (e.g., numbers of infectious and susceptible individuals). We apply our estimation technique to Ebola genealogies estimated using viral genetic data from the 2014 epidemic in Sierra Leone and Liberia.
1402.3875
Dr Rowena Ball
Rowena Ball, John Brindley
Hydrogen peroxide thermochemical oscillator as driver for primordial RNA replication
Submitted 14 Nov 2013 to J. Roy. Soc. Interface, accepted in final form 25 Feb 2014. An article on this paper appears on https://theconversation.com/au. A new recipe for primordial soup on the pre-biotic earth may help answer questions about the origin of life, and explain why new life does not emerge from non-living precursors on the modern earth
null
10.1098/rsif.2013.1052
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents and tests a previously unrecognised mechanism for driving a replicating molecular system on the prebiotic earth. It is proposed that cell-free RNA replication in the primordial soup may have been driven by self-sustained oscillatory thermochemical reactions. To test this hypothesis a well-characterised hydrogen peroxide oscillator was chosen as the driver and complementary RNA strands with known association and melting kinetics were used as the substrate. An open flow system model for the self-consistent, coupled evolution of the temperature and concentrations in a simple autocatalytic scheme is solved numerically, and it is shown that thermochemical cycling drives replication of the RNA strands. For the (justifiably realistic) values of parameters chosen for the simulated example system, the mean amount of replicant produced at steady state is 6.56 times the input amount, given a constant supply of substrate species. The spontaneous onset of sustained thermochemical oscillations via slowly drifting parameters is demonstrated, and a scheme is given for prebiotic production of complementary RNA strands on rock surfaces.
[ { "created": "Mon, 17 Feb 2014 02:59:42 GMT", "version": "v1" }, { "created": "Mon, 24 Feb 2014 23:18:19 GMT", "version": "v2" }, { "created": "Thu, 6 Mar 2014 01:41:45 GMT", "version": "v3" } ]
2014-03-07
[ [ "Ball", "Rowena", "" ], [ "Brindley", "John", "" ] ]
This paper presents and tests a previously unrecognised mechanism for driving a replicating molecular system on the prebiotic earth. It is proposed that cell-free RNA replication in the primordial soup may have been driven by self-sustained oscillatory thermochemical reactions. To test this hypothesis a well-characterised hydrogen peroxide oscillator was chosen as the driver and complementary RNA strands with known association and melting kinetics were used as the substrate. An open flow system model for the self-consistent, coupled evolution of the temperature and concentrations in a simple autocatalytic scheme is solved numerically, and it is shown that thermochemical cycling drives replication of the RNA strands. For the (justifiably realistic) values of parameters chosen for the simulated example system, the mean amount of replicant produced at steady state is 6.56 times the input amount, given a constant supply of substrate species. The spontaneous onset of sustained thermochemical oscillations via slowly drifting parameters is demonstrated, and a scheme is given for prebiotic production of complementary RNA strands on rock surfaces.
q-bio/0610012
Reka Albert
Song Li, Sarah M. Assmann, Reka Albert
Predicting essential components of signal transduction networks: a dynamic model of guard cell abscisic acid signaling
17 pages, 8 figures
PLoS Biology 4 (10), e312 (2006)
10.1371/journal.pbio.0040312
null
q-bio.MN q-bio.SC
null
Plants both lose water and take in carbon dioxide through microscopic stomatal pores, each of which is regulated by a surrounding pair of guard cells. During drought, the plant hormone abscisic acid (ABA) inhibits stomatal opening and promotes stomatal closure, thereby promoting water conservation. Here we synthesize experimental results into a consistent guard cell signal transduction network for ABA-induced stomatal closure, and develop a dynamic model of this process. Our model captures the regulation of more than forty identified network components, and accords well with previous experimental results at both the pathway and whole-cell physiological level. Our analysis reveals the novel predictions that the disruption of membrane depolarizability, anion efflux, actin cytoskeleton reorganization, cytosolic pH increase, the phosphatidic acid pathway, or of K+ efflux through slowly activating K+ channels at the plasma membrane leads to the strongest reduction in ABA responsiveness. Initial experimental analysis assessing ABA-induced stomatal closure in the presence of a cytosolic pH clamp imposed by the weak acid butyrate is consistent with this model prediction. Our method can be readily applied to other biological signaling networks to identify key regulatory components in systems where quantitative information is limited.
[ { "created": "Thu, 5 Oct 2006 02:05:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Li", "Song", "" ], [ "Assmann", "Sarah M.", "" ], [ "Albert", "Reka", "" ] ]
Plants both lose water and take in carbon dioxide through microscopic stomatal pores, each of which is regulated by a surrounding pair of guard cells. During drought, the plant hormone abscisic acid (ABA) inhibits stomatal opening and promotes stomatal closure, thereby promoting water conservation. Here we synthesize experimental results into a consistent guard cell signal transduction network for ABA-induced stomatal closure, and develop a dynamic model of this process. Our model captures the regulation of more than forty identified network components, and accords well with previous experimental results at both the pathway and whole-cell physiological level. Our analysis reveals the novel predictions that the disruption of membrane depolarizability, anion efflux, actin cytoskeleton reorganization, cytosolic pH increase, the phosphatidic acid pathway, or of K+ efflux through slowly activating K+ channels at the plasma membrane leads to the strongest reduction in ABA responsiveness. Initial experimental analysis assessing ABA-induced stomatal closure in the presence of a cytosolic pH clamp imposed by the weak acid butyrate is consistent with this model prediction. Our method can be readily applied to other biological signaling networks to identify key regulatory components in systems where quantitative information is limited.
1706.06085
Mareike Fischer
Lina Herbst and Mareike Fischer
On the accuracy of ancestral sequence reconstruction for ultrametric trees with parsimony
null
null
null
null
q-bio.PE math.CO math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine a mathematical question concerning the reconstruction accuracy of the Fitch algorithm for reconstructing the ancestral sequence of the most recent common ancestor given a phylogenetic tree and sequence data for all taxa under consideration. In particular, for the symmetric 4-state substitution model, also known as the Jukes-Cantor model, we affirmatively answer a conjecture of Li, Steel and Zhang which states that for any ultrametric phylogenetic tree and a symmetric model, the Fitch parsimony method using all terminal taxa is more accurate, or at least as accurate, for ancestral state reconstruction than using any particular terminal taxon or any particular pair of taxa. This conjecture had so far only been answered for two-state data by Fischer and Thatte. Here, we focus on answering the biologically more relevant case with four states, which corresponds to ancestral sequence reconstruction from DNA or RNA data.
[ { "created": "Mon, 19 Jun 2017 17:56:10 GMT", "version": "v1" } ]
2017-06-20
[ [ "Herbst", "Lina", "" ], [ "Fischer", "Mareike", "" ] ]
We examine a mathematical question concerning the reconstruction accuracy of the Fitch algorithm for reconstructing the ancestral sequence of the most recent common ancestor given a phylogenetic tree and sequence data for all taxa under consideration. In particular, for the symmetric 4-state substitution model, also known as the Jukes-Cantor model, we affirmatively answer a conjecture of Li, Steel and Zhang which states that for any ultrametric phylogenetic tree and a symmetric model, the Fitch parsimony method using all terminal taxa is more accurate, or at least as accurate, for ancestral state reconstruction than using any particular terminal taxon or any particular pair of taxa. This conjecture had so far only been answered for two-state data by Fischer and Thatte. Here, we focus on answering the biologically more relevant case with four states, which corresponds to ancestral sequence reconstruction from DNA or RNA data.
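Editorial note: for readers unfamiliar with Fitch parsimony, the sketch below implements its bottom-up pass, which already yields the root state set and the minimum number of changes, on a toy ultrametric tree. The tree and tip states are invented, and nothing here reproduces the paper's proofs.

# A minimal sketch of Fitch parsimony for ancestral state reconstruction at
# one site; the toy tree and tip states are invented for illustration.
def fitch(node, tip_states):
    """Post-order Fitch pass; returns (state set at node, parsimony cost)."""
    if isinstance(node, str):                 # leaf: node is a taxon name
        return {tip_states[node]}, 0
    left, right = node
    sl, cl = fitch(left, tip_states)
    sr, cr = fitch(right, tip_states)
    inter = sl & sr
    if inter:
        return inter, cl + cr                 # intersection: no extra change
    return sl | sr, cl + cr + 1               # union: one substitution

# Toy tree ((A,B),(C,D)) with observed nucleotides at a single site.
tree = (("A", "B"), ("C", "D"))
tips = {"A": "G", "B": "G", "C": "G", "D": "T"}
root_set, cost = fitch(tree, tips)
print(f"root state set: {root_set}, minimum changes: {cost}")
# -> root state set: {'G'}, minimum changes: 1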
1602.08115
Vicente M. Reyes Ph.D.
Srujana Cheguri and Vicente M. Reyes
Size-Independent Quantification of Ligand Binding Site Depth in Receptor Proteins
51 pages, 9 figures, 1 table
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have developed a web server that implements two complementary methods to quantify the depth of the ligand binding site (LBS) in protein-ligand complexes: the "secant plane" (SP) and "tangent sphere" (TS) methods. The protein molecular centroid (global centroid, GC) and the LBS centroid (local centroid, LC) are first determined. The SP is the plane passing through the LC and normal to the line passing through the LC and the GC. The "exterior side" of the SP is the side opposite the GC. The TS is the sphere with center at the GC and tangent to the SP at the LC. The percentage of protein atoms inside the TS (TS index) and the percentage on the exterior side of the SP (SP index) are complementary measures of LBS depth: the SP index is directly proportional to LBS depth, while the TS index is inversely proportional. We tested the two methods using a set of 67 well-characterized protein-ligand structures (Laskowski et al., 1996), as well as an artificial protein in the form of a grid of points in the overall shape of a sphere, in which an LBS of any depth can be specified. Results from both the SP and TS methods agree well with reported data (ibid.), and results from the artificial case confirm that both methods are suitable measures of LBS depth. The web server may be used in two modes. In the "ligand mode", the user inputs the protein PDB coordinates as well as those of the ligand. The "LBS mode" is the same as the former, except that the ligand coordinates are assumed to be unavailable; hence the user inputs what s/he believes to be the coordinates of the LBS amino acid residues. In both cases, the web server outputs the SP and TS indices. LBS depth is usually directly related to the amount of conformational change a protein undergoes upon ligand binding; the ability to quantify it could allow meaningful comparisons of protein flexibility and dynamics. The URL of our web server will be announced publicly in due course.
[ { "created": "Tue, 15 Dec 2015 04:00:37 GMT", "version": "v1" } ]
2016-02-29
[ [ "Cheguri", "Srujana", "" ], [ "Reyes", "Vicente M.", "" ] ]
We have developed a web server that implements two complementary methods to quantify the depth of the ligand binding site (LBS) in protein-ligand complexes: the "secant plane" (SP) and "tangent sphere" (TS) methods. The protein molecular centroid (global centroid, GC) and the LBS centroid (local centroid, LC) are first determined. The SP is the plane passing through the LC and normal to the line passing through the LC and the GC. The "exterior side" of the SP is the side opposite the GC. The TS is the sphere with center at the GC and tangent to the SP at the LC. The percentage of protein atoms inside the TS (TS index) and the percentage on the exterior side of the SP (SP index) are complementary measures of LBS depth: the SP index is directly proportional to LBS depth, while the TS index is inversely proportional. We tested the two methods using a set of 67 well-characterized protein-ligand structures (Laskowski et al., 1996), as well as an artificial protein in the form of a grid of points in the overall shape of a sphere, in which an LBS of any depth can be specified. Results from both the SP and TS methods agree well with reported data (ibid.), and results from the artificial case confirm that both methods are suitable measures of LBS depth. The web server may be used in two modes. In the "ligand mode", the user inputs the protein PDB coordinates as well as those of the ligand. The "LBS mode" is the same as the former, except that the ligand coordinates are assumed to be unavailable; hence the user inputs what s/he believes to be the coordinates of the LBS amino acid residues. In both cases, the web server outputs the SP and TS indices. LBS depth is usually directly related to the amount of conformational change a protein undergoes upon ligand binding; the ability to quantify it could allow meaningful comparisons of protein flexibility and dynamics. The URL of our web server will be announced publicly in due course.
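Editorial note: since the SP and TS indices are defined purely geometrically, they are easy to prototype from raw coordinates, as in the sketch below. The random point cloud stands in for real PDB atoms, and the function name is ours, not the web server's API.

# A minimal sketch of the secant-plane (SP) and tangent-sphere (TS) indices
# described above; the coordinates are random stand-ins for PDB atoms.
import numpy as np

def sp_ts_indices(protein_xyz, lbs_xyz):
    """Return (SP index, TS index) as percentages of protein atoms."""
    gc = protein_xyz.mean(axis=0)            # global centroid
    lc = lbs_xyz.mean(axis=0)                # binding-site (local) centroid
    n = lc - gc                              # normal of the secant plane
    # SP index: atoms on the side of the plane through LC opposite to GC.
    exterior = (protein_xyz - lc) @ n > 0
    # TS index: atoms inside the sphere centred at GC and tangent to the
    # plane at LC (radius = |LC - GC|).
    inside = np.linalg.norm(protein_xyz - gc, axis=1) < np.linalg.norm(n)
    return 100.0 * exterior.mean(), 100.0 * inside.mean()

rng = np.random.default_rng(0)
protein = rng.normal(size=(500, 3))          # fake protein atom coordinates
lbs = protein[:15] + 1.0                     # fake binding-site residues
sp, ts = sp_ts_indices(protein, lbs)
print(f"SP index: {sp:.1f}%  TS index: {ts:.1f}%")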
2010.09898
Laura Liao
Laura E. Liao, Jonathan Carruthers, Sophie J. Smither, CL4 Virology Team, Simon A. Weller, Diane Williamson, Thomas R. Laws, Isabel Garcia-Dorival, Julian Hiscox, Benjamin P. Holder, Catherine A. A. Beauchemin, Alan S. Perelson, Martin Lopez-Garcia, Grant Lythe, John Barr, Carmen Molina-Paris
Quantification of Ebola virus replication kinetics in vitro
16 pages, 3 figures, to be published in PLOS Computational Biology
PLOS Comput. Biol., 16(11):e1008375, November, 2020
10.1371/journal.pcbi.1008375
RIKEN-iTHEMS-Report-20
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Mathematical modelling has successfully been used to provide quantitative descriptions of many viral infections, but for the Ebola virus, which requires biosafety level 4 facilities for experimentation, modelling can play a crucial role. Ebola modelling efforts have primarily focused on in vivo virus kinetics, e.g., in animal models, to aid the development of antivirals and vaccines. But, thus far, these studies have not yielded a detailed specification of the infection cycle, which could provide a foundational description of the virus kinetics and thus a deeper understanding of their clinical manifestation. Here, we obtain a diverse experimental data set of the Ebola infection in vitro, and then make use of Bayesian inference methods to fully identify parameters in a mathematical model of the infection. Our results provide insights into the distribution of time an infected cell spends in the eclipse phase (the period between infection and the start of virus production), as well as the rate at which infectious virions lose infectivity. We suggest how these results can be used in future models to describe co-infection with defective interfering particles, which are an emerging alternative therapeutic.
[ { "created": "Mon, 19 Oct 2020 22:20:32 GMT", "version": "v1" } ]
2024-04-02
[ [ "Liao", "Laura E.", "" ], [ "Carruthers", "Jonathan", "" ], [ "Smither", "Sophie J.", "" ], [ "Team", "CL4 Virology", "" ], [ "Weller", "Simon A.", "" ], [ "Williamson", "Diane", "" ], [ "Laws", "Thomas R.", "" ], [ "Garcia-Dorival", "Isabel", "" ], [ "Hiscox", "Julian", "" ], [ "Holder", "Benjamin P.", "" ], [ "Beauchemin", "Catherine A. A.", "" ], [ "Perelson", "Alan S.", "" ], [ "Lopez-Garcia", "Martin", "" ], [ "Lythe", "Grant", "" ], [ "Barr", "John", "" ], [ "Molina-Paris", "Carmen", "" ] ]
Mathematical modelling has successfully been used to provide quantitative descriptions of many viral infections, but for the Ebola virus, which requires biosafety level 4 facilities for experimentation, modelling can play a crucial role. Ebola modelling efforts have primarily focused on in vivo virus kinetics, e.g., in animal models, to aid the development of antivirals and vaccines. But, thus far, these studies have not yielded a detailed specification of the infection cycle, which could provide a foundational description of the virus kinetics and thus a deeper understanding of their clinical manifestation. Here, we obtain a diverse experimental data set of the Ebola infection in vitro, and then make use of Bayesian inference methods to fully identify parameters in a mathematical model of the infection. Our results provide insights into the distribution of time an infected cell spends in the eclipse phase (the period between infection and the start of virus production), as well as the rate at which infectious virions lose infectivity. We suggest how these results can be used in future models to describe co-infection with defective interfering particles, which are an emerging alternative therapeutic.
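Editorial note: as a rough illustration of the kind of model being fitted above, the sketch below integrates a target-cell-limited infection model with an Erlang-distributed eclipse phase (a chain of eclipse compartments) and first-order loss of virion infectivity. This structure is a common choice in the viral kinetics literature, not necessarily the paper's exact model, and every parameter value is an illustrative assumption, not a fitted estimate.

# A minimal sketch of an in vitro infection model with an Erlang-distributed
# eclipse phase (nE sequential E compartments) and loss of virion infectivity.
import numpy as np
from scipy.integrate import solve_ivp

beta, k, delta, p, c = 1e-7, 1.0, 0.5, 10.0, 0.2   # assumed rates
nE = 5                                             # eclipse compartments

def rhs(t, y):
    T, V = y[0], y[-1]
    E = y[1:1 + nE]            # eclipse stages E1..En (mean duration 1/k)
    I = y[1 + nE]              # productively infected cells
    dT = -beta * T * V
    dE = np.empty(nE)
    dE[0] = beta * T * V - nE * k * E[0]
    dE[1:] = nE * k * (E[:-1] - E[1:])
    dI = nE * k * E[-1] - delta * I
    dV = p * I - c * V         # production and loss of infectious virus
    return np.concatenate([[dT], dE, [dI, dV]])

y0 = np.concatenate([[1e6], np.zeros(nE), [0.0, 100.0]])  # T, E1..En, I, V
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-8)
print(f"infectious virus at t=10: {sol.y[-1, -1]:.3g}")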
1203.3315
Ricardo Broglia
R.A. Broglia
A remarkable emergent property of spontaneous (amino acid content) symmetry breaking
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning how proteins fold will hardly have any impact on the way conventional -- active site centered -- drugs are designed. On the other hand, this knowledge is proving instrumental in defining a new paradigm for the identification of drugs against any target protein: folding inhibition. Targeting folding renders drugs less prone to elicit spontaneous genetic mutations which in many cases, notably in connection with viruses like the Human Immunodeficiency Virus (HIV), can block therapeutic action. From the progress made in recent years in understanding how a protein folds, and how its three-dimensional, biologically active, native structure can be read from the corresponding sequence, the idea of non-conventional (folding) inhibitors, and thus of leads to eventual drugs to fight disease, arguably without creating resistance, emerges as a distinct possibility.
[ { "created": "Thu, 15 Mar 2012 10:43:51 GMT", "version": "v1" } ]
2012-03-16
[ [ "Broglia", "R. A.", "" ] ]
Learning how proteins fold will hardly have any impact on the way conventional -- active site centered -- drugs are designed. On the other hand, this knowledge is proving instrumental in defining a new paradigm for the identification of drugs against any target protein: folding inhibition. Targeting folding renders drugs less prone to elicit spontaneous genetic mutations which in many cases, notably in connection with viruses like the Human Immunodeficiency Virus (HIV), can block therapeutic action. From the progress made in recent years in understanding how a protein folds, and how its three-dimensional, biologically active, native structure can be read from the corresponding sequence, the idea of non-conventional (folding) inhibitors, and thus of leads to eventual drugs to fight disease, arguably without creating resistance, emerges as a distinct possibility.
1701.08086
Dengming Ming
Dengming Ming, Min Han and Xiongbo An
Sequence-based prediction of function site and protein-ligand interaction by a functionally annotated domain profile database
29pages, 3 figures, 3 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying protein functional sites (PFSs) and protein-ligand interactions (PLIs) is critically important in understanding protein function and the biochemical reactions involved. As large amounts of unknown proteins quickly accumulate in this post-genome era, predicting PFSs and PLIs at the residue level has become an urgent task. Many knowledge-based methods have been well developed for the prediction of PFSs; however, accurate methods for PLI prediction are still lacking. In this study, we present a new method for the prediction of PLIs and PFSs based on the sequence of the query protein. The method hinges on a function- and interaction-annotated protein domain profile database, called fiDPD, which was built from the Structural Classification of Proteins (SCOP) database using a hidden Markov model program. The method was applied to 13 target proteins from the recent Critical Assessment of Structure Prediction (CASP10/11). Our calculations gave a Matthews correlation coefficient (MCC) value of 0.66 for the prediction of PFSs, and an 80% recall in the prediction of PLIs. Our method reveals that PLIs are conserved during the evolution of proteins, and they can be reliably predicted from fiDPD. fiDPD can be used as a complement to existing bioinformatics tools for protein function annotation.
[ { "created": "Fri, 27 Jan 2017 15:46:42 GMT", "version": "v1" } ]
2017-01-30
[ [ "Ming", "Dengming", "" ], [ "Han", "Min", "" ], [ "An", "Xiongbo", "" ] ]
Identifying protein functional sites (PFSs) and protein-ligand interactions (PLIs) is critically important in understanding protein function and the biochemical reactions involved. As large amounts of unknown proteins quickly accumulate in this post-genome era, predicting PFSs and PLIs at the residue level has become an urgent task. Many knowledge-based methods have been well developed for the prediction of PFSs; however, accurate methods for PLI prediction are still lacking. In this study, we present a new method for the prediction of PLIs and PFSs based on the sequence of the query protein. The method hinges on a function- and interaction-annotated protein domain profile database, called fiDPD, which was built from the Structural Classification of Proteins (SCOP) database using a hidden Markov model program. The method was applied to 13 target proteins from the recent Critical Assessment of Structure Prediction (CASP10/11). Our calculations gave a Matthews correlation coefficient (MCC) value of 0.66 for the prediction of PFSs, and an 80% recall in the prediction of PLIs. Our method reveals that PLIs are conserved during the evolution of proteins, and they can be reliably predicted from fiDPD. fiDPD can be used as a complement to existing bioinformatics tools for protein function annotation.
q-bio/0609035
Changbong Hyeon
Changbong Hyeon, Ruxandra I. Dima, and D. Thirumalai
Size, shape, and flexibility of RNA structures
28 pages, 8 figures, J. Chem. Phys. in press
J. Chem. Phys. vol 125, 194905 (2006)
10.1063/1.2364190
null
q-bio.BM physics.bio-ph
null
Determination of sizes and flexibilities of RNA molecules is important in understanding the nature of packing in folded structures and in elucidating interactions between RNA and DNA or proteins. Using the coordinates of the structures of RNA in the Protein Data Bank we find that the size of the folded RNA structures, measured using the radius of gyration, $R_G$, follows the Flory scaling law, namely, $R_G = 5.5 N^{1/3}$ \AA, where $N$ is the number of nucleotides. The shape of RNA molecules is characterized by the asphericity $\Delta$ and the shape $S$ parameters that are computed using the eigenvalues of the moment of inertia tensor. From the distribution of $\Delta$, we find that a large fraction of folded RNA structures are aspherical, and the distribution of $S$ values shows that RNA molecules are prolate ($S>0$). The flexibility of folded structures is characterized by the persistence length $l_p$. By fitting the distance distribution function $P(r)$ to the worm-like chain model, we extracted the persistence length $l_p$. We find that $l_p \approx 1.5 N^{0.33}$ \AA. The dependence of $l_p$ on $N$ implies that the average length of helices should increase as the size of RNA grows. We also analyze packing in the structures of ribosomes (30S, 50S, and 70S) in terms of $R_G$, $\Delta$, $S$, and $l_p$. The 70S and the 50S subunits are more spherical compared to most RNA molecules. The globularity in 50S is due to the presence of an unusually large number (compared to the 30S subunit) of small helices that are stitched together by bulges and loops. Comparison of the shapes of the intact 70S ribosome and the constituent particles suggests that folding of the individual molecules might occur prior to assembly.
[ { "created": "Sat, 23 Sep 2006 04:30:08 GMT", "version": "v1" } ]
2009-11-13
[ [ "Hyeon", "Changbong", "" ], [ "Dima", "Ruxandra I.", "" ], [ "Thirumalai", "D.", "" ] ]
Determination of sizes and flexibilities of RNA molecules is important in understanding the nature of packing in folded structures and in elucidating interactions between RNA and DNA or proteins. Using the coordinates of the structures of RNA in the Protein Data Bank we find that the size of the folded RNA structures, measured using the radius of gyration, $R_G$, follows the Flory scaling law, namely, $R_G = 5.5 N^{1/3}$ \AA, where $N$ is the number of nucleotides. The shape of RNA molecules is characterized by the asphericity $\Delta$ and the shape $S$ parameters that are computed using the eigenvalues of the moment of inertia tensor. From the distribution of $\Delta$, we find that a large fraction of folded RNA structures are aspherical, and the distribution of $S$ values shows that RNA molecules are prolate ($S>0$). The flexibility of folded structures is characterized by the persistence length $l_p$. By fitting the distance distribution function $P(r)$ to the worm-like chain model, we extracted the persistence length $l_p$. We find that $l_p \approx 1.5 N^{0.33}$ \AA. The dependence of $l_p$ on $N$ implies that the average length of helices should increase as the size of RNA grows. We also analyze packing in the structures of ribosomes (30S, 50S, and 70S) in terms of $R_G$, $\Delta$, $S$, and $l_p$. The 70S and the 50S subunits are more spherical compared to most RNA molecules. The globularity in 50S is due to the presence of an unusually large number (compared to the 30S subunit) of small helices that are stitched together by bulges and loops. Comparison of the shapes of the intact 70S ribosome and the constituent particles suggests that folding of the individual molecules might occur prior to assembly.
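Editorial note: the shape descriptors above are straightforward to compute from coordinates. The sketch below uses one common convention for the asphericity Delta and shape parameter S (deviatoric invariants of the gyration tensor), with a random elongated point cloud standing in for RNA atoms.

# A minimal sketch of radius of gyration, asphericity Delta, and shape
# parameter S from the gyration tensor eigenvalues (one common convention).
import numpy as np

def shape_parameters(xyz):
    r = xyz - xyz.mean(axis=0)
    T = r.T @ r / len(r)                      # gyration tensor (3x3)
    lam = np.linalg.eigvalsh(T)               # eigenvalues, ascending
    tr = lam.sum()
    rg = np.sqrt(tr)                          # radius of gyration
    dev = lam - tr / 3.0                      # deviatoric eigenvalues
    delta = 1.5 * (dev @ dev) / tr**2         # asphericity in [0, 1]
    S = 27.0 * dev.prod() / tr**3             # shape: S > 0 means prolate
    return rg, delta, S

rng = np.random.default_rng(1)
xyz = rng.normal(size=(300, 3)) * np.array([3.0, 1.0, 1.0])  # elongated blob
rg, delta, S = shape_parameters(xyz)
print(f"R_G = {rg:.2f}, Delta = {delta:.3f}, S = {S:.3f}")   # expect S > 0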
1303.2411
C. Titus Brown
Alexis Black Pyrkosz and Hans Cheng and C. Titus Brown
RNA-Seq Mapping Errors When Using Incomplete Reference Transcriptomes of Vertebrates
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/publicdomain/
Whole transcriptome sequencing is increasingly being used as a functional genomics tool to study non-model organisms. However, when the reference transcriptome used to calculate differential expression is incomplete, significant error in the inferred expression levels can result. In this study, we use simulated reads generated from real transcriptomes to determine the accuracy of read mapping, and measure the error resulting from using an incomplete transcriptome. We show that the two primary sources of counting error are 1) alternative splice variants that share reads and 2) missing transcripts from the reference. Alternative splice variants increase the false positive rate of mapping while incomplete reference transcriptomes decrease the true positive rate, leading to inaccurate transcript expression levels. Grouping transcripts by gene or read sharing (similar to mapping to a reference genome) significantly decreases false positives, but only by improving the reference transcriptome itself can the missing transcript problem be addressed. We also demonstrate that employing different mapping software does not yield substantial increases in accuracy on simulated data. Finally, we show that read lengths or insert sizes must increase past 1kb to resolve mapping ambiguity.
[ { "created": "Mon, 11 Mar 2013 02:32:37 GMT", "version": "v1" } ]
2013-03-12
[ [ "Pyrkosz", "Alexis Black", "" ], [ "Cheng", "Hans", "" ], [ "Brown", "C. Titus", "" ] ]
Whole transcriptome sequencing is increasingly being used as a functional genomics tool to study non-model organisms. However, when the reference transcriptome used to calculate differential expression is incomplete, significant error in the inferred expression levels can result. In this study, we use simulated reads generated from real transcriptomes to determine the accuracy of read mapping, and measure the error resulting from using an incomplete transcriptome. We show that the two primary sources of counting error are 1) alternative splice variants that share reads and 2) missing transcripts from the reference. Alternative splice variants increase the false positive rate of mapping while incomplete reference transcriptomes decrease the true positive rate, leading to inaccurate transcript expression levels. Grouping transcripts by gene or read sharing (similar to mapping to a reference genome) significantly decreases false positives, but only by improving the reference transcriptome itself can the missing transcript problem be addressed. We also demonstrate that employing different mapping software does not yield substantial increases in accuracy on simulated data. Finally, we show that read lengths or insert sizes must increase past 1kb to resolve mapping ambiguity.
2107.10414
Andrew Francis
Andrew Francis
"Normal" Phylogenetic Networks Emerge as the Leading Class
4 pages, 1 figure
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rich and varied ways that genetic material can be passed between species has motivated extensive research into the theory of phylogenetic networks. Features that align with biological processes, or with mathematical tractability, have been used to define classes and prove results, with the ultimate goal of developing the theoretical foundations for network reconstruction methods. We have now reached the point that a collection of recent results can be drawn together to make one class of network, the so-called normal networks, a clear leader. We may be at the cusp of practical inference with phylogenetic networks.
[ { "created": "Thu, 22 Jul 2021 00:42:47 GMT", "version": "v1" } ]
2021-07-23
[ [ "Francis", "Andrew", "" ] ]
The rich and varied ways that genetic material can be passed between species has motivated extensive research into the theory of phylogenetic networks. Features that align with biological processes, or with mathematical tractability, have been used to define classes and prove results, with the ultimate goal of developing the theoretical foundations for network reconstruction methods. We have now reached the point that a collection of recent results can be drawn together to make one class of network, the so-called normal networks, a clear leader. We may be at the cusp of practical inference with phylogenetic networks.
1608.05962
Dalibor Stys
Renata Rychtarikova, Tomas Nahlik, Kevin Shi, Daria Malakhova, Petr Machacek, Rebecca Smaha, Jan Urban, Dalibor Stys
Super-resolved 3-D imaging of live cells organelles from bright-field photon transmission micrographs
41 pages, 3 figures
Ultramicroscopy 179, 1-14, 2017
10.1016/j.ultramic.2017.03.018
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current biological and medical research is aimed at obtaining a detailed spatiotemporal map of a live cell's interior to describe and predict the cell's physiological state. We present here an algorithm for complete 3-D modelling of cellular structures from a z-stack of images obtained using label-free wide-field bright-field light-transmitted microscopy. The method visualizes 3-D objects with a volume equivalent to the area of a camera pixel multiplied by the z-height. The computation is based on finding pixels of unchanged intensities between two consecutive images of an object spread function. These pixels represent strongly light-diffracting, light-absorbing, or light-emitting objects. To accomplish this, variables derived from R\'{e}nyi entropy are used to suppress camera noise. Using this algorithm, the detection limit is set only by the technical specifications of the microscope setup--we achieve the detection of objects the size of one camera pixel. This method allows us to obtain 3-D reconstructions of cells from bright-field microscopy images that are comparable in quality to those from electron microscopy images.
[ { "created": "Sun, 21 Aug 2016 16:34:14 GMT", "version": "v1" } ]
2017-04-03
[ [ "Rychtarikova", "Renata", "" ], [ "Nahlik", "Tomas", "" ], [ "Shi", "Kevin", "" ], [ "Malakhova", "Daria", "" ], [ "Machacek", "Petr", "" ], [ "Smaha", "Rebecca", "" ], [ "Urban", "Jan", "" ], [ "Stys", "Dalibor", "" ] ]
Current biological and medical research is aimed at obtaining a detailed spatiotemporal map of a live cell's interior to describe and predict the cell's physiological state. We present here an algorithm for complete 3-D modelling of cellular structures from a z-stack of images obtained using label-free wide-field bright-field light-transmitted microscopy. The method visualizes 3-D objects with a volume equivalent to the area of a camera pixel multiplied by the z-height. The computation is based on finding pixels of unchanged intensities between two consecutive images of an object spread function. These pixels represent strongly light-diffracting, light-absorbing, or light-emitting objects. To accomplish this, variables derived from R\'{e}nyi entropy are used to suppress camera noise. Using this algorithm, the detection limit is set only by the technical specifications of the microscope setup--we achieve the detection of objects the size of one camera pixel. This method allows us to obtain 3-D reconstructions of cells from bright-field microscopy images that are comparable in quality to those from electron microscopy images.
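Editorial note: the core "unchanged pixel" criterion is simple to prototype, as below. The sketch flags pixels whose intensity is stable between consecutive z-slices of a synthetic stack; a plain absolute-difference threshold stands in for the Renyi-entropy-based noise suppression used in the paper.

# A minimal sketch of the unchanged-intensity criterion on a synthetic z-stack.
import numpy as np

def unchanged_pixels(zstack, tol=1.0):
    """Boolean masks of pixels stable between consecutive z-slices."""
    diffs = np.abs(np.diff(zstack.astype(float), axis=0))
    return diffs <= tol                       # shape: (n_slices - 1, H, W)

rng = np.random.default_rng(2)
stack = rng.normal(100.0, 5.0, size=(20, 64, 64))   # noisy background
stack[:, 30:34, 30:34] = 40.0                       # strongly absorbing object
masks = unchanged_pixels(stack, tol=1.0)
# Pixels stable across many consecutive slice pairs are candidate object voxels.
stable = masks.sum(axis=0)
print("most stable pixel persists across", stable.max(), "slice pairs")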
q-bio/0312041
Peng-Ye Wang
Peng-Ye Wang, Ping Xie, Hua-Wei Yin
Control of spiral waves and turbulent states in a cardiac model by travelling-wave perturbations
9 pages, 5 figures
Chinese Physics, 12, 674 (2003)
10.1088/1009-1963/12/6/319
null
q-bio.TO
null
We propose a travelling-wave perturbation method to control the spatiotemporal dynamics in a cardiac model. It is numerically demonstrated that the method can successfully suppress the wave instability (alternans in action potential duration) in the one-dimensional case and convert spiral waves and turbulent states to the normal travelling wave states in the two-dimensional case. An experimental scheme is suggested which may provide a new design for a cardiac defibrillator.
[ { "created": "Mon, 29 Dec 2003 06:40:53 GMT", "version": "v1" } ]
2009-11-10
[ [ "Wang", "Peng-Ye", "" ], [ "Xie", "Ping", "" ], [ "Yin", "Hua-Wei", "" ] ]
We propose a travelling-wave perturbation method to control the spatiotemporal dynamics in a cardiac model. It is numerically demonstrated that the method can successfully suppress the wave instability (alternans in action potential duration) in the one-dimensional case and convert spiral waves and turbulent states to the normal travelling wave states in the two-dimensional case. An experimental scheme is suggested which may provide a new design for a cardiac defibrillator.
2009.14135
Travis Thompson
Travis B. Thompson and Georg Meisl and Tuomas Knowles and Alain Goriely
The role of clearance mechanisms in the kinetics of toxic protein aggregates involved in neurodegenerative diseases
25 pages, 7 figures
null
10.1063/5.0031650
null
q-bio.BM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein aggregates in the brain play a central role in cognitive decline and structural damage associated with neurodegenerative diseases. For instance, in Alzheimer's disease the formation of amyloid-beta plaques and tau-protein neurofibrillary tangles follows from the accumulation of different proteins into large aggregates through specific mechanisms such as nucleation and elongation. These mechanisms have been studied in vitro, where total protein mass is conserved. However, in vivo, clearance mechanisms may play an important role in limiting the formation of aggregates. Here, we generalise classical models of protein aggregation to take into account both the production of monomers and the clearance of protein aggregates. Depending on the clearance model, we show that there may be a critical clearance value above which aggregation does not take place. Our result offers further evidence in support of the hypothesis that clearance mechanisms play a potentially crucial role in neurodegenerative disease initiation and progression, and as such are a possible therapeutic target.
[ { "created": "Tue, 29 Sep 2020 16:29:54 GMT", "version": "v1" } ]
2021-04-07
[ [ "Thompson", "Travis B.", "" ], [ "Meisl", "Georg", "" ], [ "Knowles", "Tuomas", "" ], [ "Goriely", "Alain", "" ] ]
Protein aggregates in the brain play a central role in cognitive decline and structural damage associated with neurodegenerative diseases. For instance, in Alzheimer's disease the formation of amyloid-beta plaques and tau-protein neurofibrillary tangles follows from the accumulation of different proteins into large aggregates through specific mechanisms such as nucleation and elongation. These mechanisms have been studied in vitro, where total protein mass is conserved. However, in vivo, clearance mechanisms may play an important role in limiting the formation of aggregates. Here, we generalise classical models of protein aggregation to take into account both the production of monomers and the clearance of protein aggregates. Depending on the clearance model, we show that there may be a critical clearance value above which aggregation does not take place. Our result offers further evidence in support of the hypothesis that clearance mechanisms play a potentially crucial role in neurodegenerative disease initiation and progression, and as such are a possible therapeutic target.
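Editorial note: to make the generalisation concrete, here is a sketch of standard nucleation-elongation moment equations (aggregate number P, aggregate mass M, free monomer m) extended with monomer production and first-order clearance of monomers and aggregates. The specific rate laws and values are illustrative assumptions, not the paper's fitted model.

# A minimal sketch of nucleation-elongation kinetics with production and
# clearance; sweeping gP/gM probes for a critical clearance value.
import numpy as np
from scipy.integrate import solve_ivp

kn, kp, nc = 1e-5, 1e2, 2                # nucleation rate, elongation rate, nucleus size
lam, gm, gP, gM = 1.0, 0.1, 0.05, 0.05   # production and clearance rates (assumed)

def rhs(t, y):
    m, P, M = y
    nucleation = kn * m**nc              # primary nucleation flux
    growth = 2.0 * kp * m * P            # elongation at both fibril ends
    dm = lam - gm * m - growth - nc * nucleation
    dP = nucleation - gP * P
    dM = growth + nc * nucleation - gM * M
    return [dm, dP, dM]

sol = solve_ivp(rhs, (0.0, 500.0), [10.0, 0.0, 0.0], rtol=1e-8)
print(f"aggregate mass at t=500: {sol.y[2, -1]:.3g}")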
2010.03323
Diego Oyarz\'un
Mona K Tonn and Philipp Thomas and Mauricio Barahona and Diego A Oyarz\'un
Computation of single-cell metabolite distributions using mixture models
5 Figures, 3 Tables
null
null
null
q-bio.MN q-bio.BM q-bio.QM q-bio.SC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Metabolic heterogeneity is widely recognised as the next challenge in our understanding of non-genetic variation. A growing body of evidence suggests that metabolic heterogeneity may result from the inherent stochasticity of intracellular events. However, metabolism has been traditionally viewed as a purely deterministic process, on the basis that highly abundant metabolites tend to filter out stochastic phenomena. Here we bridge this gap with a general method for prediction of metabolite distributions across single cells. By exploiting the separation of time scales between enzyme expression and enzyme kinetics, our method produces estimates for metabolite distributions without the lengthy stochastic simulations that would be typically required for large metabolic models. The metabolite distributions take the form of Gaussian mixture models that are directly computable from single-cell expression data and standard deterministic models for metabolic pathways. The proposed mixture models provide a systematic method to predict the impact of biochemical parameters on metabolite distributions. Our method lays the groundwork for identifying the molecular processes that shape metabolic heterogeneity and its functional implications in disease.
[ { "created": "Wed, 7 Oct 2020 10:48:53 GMT", "version": "v1" } ]
2020-10-08
[ [ "Tonn", "Mona K", "" ], [ "Thomas", "Philipp", "" ], [ "Barahona", "Mauricio", "" ], [ "Oyarzún", "Diego A", "" ] ]
Metabolic heterogeneity is widely recognised as the next challenge in our understanding of non-genetic variation. A growing body of evidence suggests that metabolic heterogeneity may result from the inherent stochasticity of intracellular events. However, metabolism has been traditionally viewed as a purely deterministic process, on the basis that highly abundant metabolites tend to filter out stochastic phenomena. Here we bridge this gap with a general method for prediction of metabolite distributions across single cells. By exploiting the separation of time scales between enzyme expression and enzyme kinetics, our method produces estimates for metabolite distributions without the lengthy stochastic simulations that would be typically required for large metabolic models. The metabolite distributions take the form of Gaussian mixture models that are directly computable from single-cell expression data and standard deterministic models for metabolic pathways. The proposed mixture models provide a systematic method to predict the impact of biochemical parameters on metabolite distributions. Our method lays the groundwork for identifying the molecular processes that shape metabolic heterogeneity and its functional implications in disease.
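Editorial note: a minimal version of the mixture construction can be sketched as follows: sample slowly varying enzyme levels across cells, map each to a deterministic steady-state metabolite mean, and superpose one Gaussian per cell. The one-step Michaelis-Menten pathway, the Poisson-like variance, and all parameter values below are illustrative assumptions, not the paper's model.

# A minimal sketch of a Gaussian mixture over cells conditioned on sampled
# enzyme levels; pathway and parameters are assumptions for illustration.
import numpy as np

kcat, Km, s0, d = 10.0, 50.0, 100.0, 0.5    # kinetic parameters (assumed)
rng = np.random.default_rng(3)
E = rng.gamma(shape=4.0, scale=5.0, size=2000)   # enzyme copies per cell

mu = kcat * E * s0 / (Km + s0) / d          # steady-state metabolite mean
var = mu                                     # Poisson-like noise (assumed)

def mixture_pdf(x):
    """Equally weighted Gaussian mixture over the sampled cells."""
    comps = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return comps.mean()

xs = np.linspace(0.0, mu.max() * 1.5, 5)
print([f"{mixture_pdf(x):.2e}" for x in xs])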
2012.11043
Francisco J. Cao Garcia
Miguel \'Angel Fern\'andez-Grande, Francisco Javier Cao-Garcia
Spatial Scales of Population Synchrony generally increases as fluctuations propagate in a Two Species Ecosystem
17 pages, 2 figures
null
null
null
q-bio.PE cond-mat.stat-mech
http://creativecommons.org/licenses/by-nc-nd/4.0/
The spatial scale of population synchrony gives the characteristic distance at which the population fluctuations are correlated. Therefore, it gives also the characteristic size of the regions of simultaneous population depletion, or even extinction. Single-species previous results imply that the spatial scale of population synchrony is equal or greater (due to dispersion) than the spatial scale of synchrony of environmental fluctuations. Theoretical results on multispecies ecosystems points that interspecies interactions modify the spatial scale of population synchrony. In particular, recent results on two species ecosystems, for two competitors and for predator-prey, point that the spatial scale of population synchrony generally increases as the fluctuations propagates through the food web, i.e., the species more directly affected by environmental fluctuations presents the smaller spatial scale of population synchrony. Here, we found that this behaviour is generally true for a two species ecosystem. The exception to this behaviour are the particular cases where the population fluctuations of one of the species does not damp by its own, but requires a strong transfer of the fluctuation to the other species to be damped. These analytical results illustrate the importance of applying an ecosystem rather than a single-species perspective when developing sustainable harvestings or assessing the extinction risk of endangered species.
[ { "created": "Sun, 20 Dec 2020 22:39:20 GMT", "version": "v1" } ]
2020-12-22
[ [ "Fernández-Grande", "Miguel Ángel", "" ], [ "Cao-Garcia", "Francisco Javier", "" ] ]
The spatial scale of population synchrony gives the characteristic distance at which the population fluctuations are correlated. Therefore, it gives also the characteristic size of the regions of simultaneous population depletion, or even extinction. Single-species previous results imply that the spatial scale of population synchrony is equal or greater (due to dispersion) than the spatial scale of synchrony of environmental fluctuations. Theoretical results on multispecies ecosystems points that interspecies interactions modify the spatial scale of population synchrony. In particular, recent results on two species ecosystems, for two competitors and for predator-prey, point that the spatial scale of population synchrony generally increases as the fluctuations propagates through the food web, i.e., the species more directly affected by environmental fluctuations presents the smaller spatial scale of population synchrony. Here, we found that this behaviour is generally true for a two species ecosystem. The exception to this behaviour are the particular cases where the population fluctuations of one of the species does not damp by its own, but requires a strong transfer of the fluctuation to the other species to be damped. These analytical results illustrate the importance of applying an ecosystem rather than a single-species perspective when developing sustainable harvestings or assessing the extinction risk of endangered species.
2104.01532
Thomas Athey
Thomas L. Athey, Jacopo Teneggi, Joshua T. Vogelstein, Daniel Tward, Ulrich Mueller, Michael I. Miller
Fitting Splines to Axonal Arbors Quantifies Relationship between Branch Order and Geometry
null
Front. Neuroinform. 15 (2021)
10.3389/fninf.2021.704627
null
q-bio.NC cs.MS math.DG
http://creativecommons.org/licenses/by/4.0/
Neuromorphology is crucial to identifying neuronal subtypes and understanding learning. It is also implicated in neurological disease. However, standard morphological analysis focuses on macroscopic features such as branching frequency and connectivity between regions, and often neglects the internal geometry of neurons. In this work, we treat neuron trace points as a sampling of differentiable curves and fit them with a set of branching B-splines. We designed our representation with the Frenet-Serret formulas from differential geometry in mind. The Frenet-Serret formulas completely characterize smooth curves, and involve two parameters, curvature and torsion. Our representation makes it possible to compute these parameters from neuron traces in closed form. These parameters are defined continuously along the curve, in contrast to other parameters like tortuosity which depend on start and end points. We applied our method to a dataset of cortical projection neurons traced in two mouse brains, and found that the parameters are distributed differently between primary, collateral, and terminal axon branches, thus quantifying geometric differences between different components of an axonal arbor. The results agreed in both brains, further validating our representation. The code used in this work can be readily applied to neuron traces in SWC format and is available in our open-source Python package brainlit: http://brainlit.neurodata.io/.
[ { "created": "Sun, 4 Apr 2021 03:38:42 GMT", "version": "v1" }, { "created": "Wed, 7 Apr 2021 15:04:09 GMT", "version": "v2" }, { "created": "Sat, 5 Jun 2021 15:19:43 GMT", "version": "v3" } ]
2022-03-10
[ [ "Athey", "Thomas L.", "" ], [ "Teneggi", "Jacopo", "" ], [ "Vogelstein", "Joshua T.", "" ], [ "Tward", "Daniel", "" ], [ "Mueller", "Ulrich", "" ], [ "Miller", "Michael I.", "" ] ]
Neuromorphology is crucial to identifying neuronal subtypes and understanding learning. It is also implicated in neurological disease. However, standard morphological analysis focuses on macroscopic features such as branching frequency and connectivity between regions, and often neglects the internal geometry of neurons. In this work, we treat neuron trace points as a sampling of differentiable curves and fit them with a set of branching B-splines. We designed our representation with the Frenet-Serret formulas from differential geometry in mind. The Frenet-Serret formulas completely characterize smooth curves, and involve two parameters, curvature and torsion. Our representation makes it possible to compute these parameters from neuron traces in closed form. These parameters are defined continuously along the curve, in contrast to other parameters like tortuosity which depend on start and end points. We applied our method to a dataset of cortical projection neurons traced in two mouse brains, and found that the parameters are distributed differently between primary, collateral, and terminal axon branches, thus quantifying geometric differences between different components of an axonal arbor. The results agreed in both brains, further validating our representation. The code used in this work can be readily applied to neuron traces in SWC format and is available in our open-source Python package brainlit: http://brainlit.neurodata.io/.
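Editorial note: the closed-form curvature and torsion computation is easy to reproduce with standard spline tools. The sketch below fits a spline to a toy helix (standing in for an unbranched trace segment) and evaluates the Frenet-Serret quantities from its first three derivatives; it uses SciPy directly rather than the brainlit package.

# A minimal sketch: spline fit plus Frenet-Serret curvature and torsion.
import numpy as np
from scipy.interpolate import splprep, splev

t = np.linspace(0, 4 * np.pi, 200)
pts = np.vstack([np.cos(t), np.sin(t), 0.5 * t])     # noise-free toy helix

tck, u = splprep(pts, s=0.0, k=5)                    # quintic, so der=3 is smooth
d1 = np.array(splev(u, tck, der=1))
d2 = np.array(splev(u, tck, der=2))
d3 = np.array(splev(u, tck, der=3))

cross = np.cross(d1.T, d2.T).T
speed = np.linalg.norm(d1, axis=0)
curvature = np.linalg.norm(cross, axis=0) / speed**3
torsion = np.einsum("ij,ij->j", cross, d3) / np.linalg.norm(cross, axis=0)**2

# For the helix x=cos t, y=sin t, z=0.5 t: kappa = 1/1.25 = 0.8, tau = 0.4.
print(f"kappa ~ {np.median(curvature):.3f}, tau ~ {np.median(torsion):.3f}")

Because the Frenet-Serret formulas are invariant under reparametrization, differentiating with respect to the spline parameter (rather than arc length) still yields the correct curvature and torsion.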
1110.5265
Eric Werner
Eric Werner
On Programs and Genomes
This is a slightly extended version of Part I of a position paper distributed on November 18, 2007 to the participants of our Balliol Seminar on the Conceptual Foundations of Systems Biology. It presented my ideas on the global control architecture of genomes. Denis Noble and I started the seminar in the Michaelmas term in the autumn of 2006 at Balliol College, University of Oxford
null
null
null
q-bio.OT cs.CE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We outline the global control architecture of genomes. A theory of genomic control information is presented. The concept of a developmental control network called a cene (for control gene) is introduced. We distinguish parts-genes from control genes or cenes. Cenes are interpreted and executed by the cell and, thereby, direct cell actions including communication, growth, division, differentiation and multi-cellular development. The cenome is the global developmental control network in the genome. The cenome is also a cene that consists of interlinked sub-cenes that guide the ontogeny of the organism. The complexity of organisms is linked to the complexity of the cenome. The relevance to ontogeny and evolution is mentioned. We introduce the concept of a universal cell and a universal genome.
[ { "created": "Mon, 24 Oct 2011 15:49:30 GMT", "version": "v1" } ]
2011-10-25
[ [ "Werner", "Eric", "" ] ]
We outline the global control architecture of genomes. A theory of genomic control information is presented. The concept of a developmental control network called a cene (for control gene) is introduced. We distinguish parts-genes from control genes or cenes. Cenes are interpreted and executed by the cell and, thereby, direct cell actions including communication, growth, division, differentiation and multi-cellular development. The cenome is the global developmental control network in the genome. The cenome is also a cene that consists of interlinked sub-cenes that guide the ontogeny of the organism. The complexity of organisms is linked to the complexity of the cenome. The relevance to ontogeny and evolution is mentioned. We introduce the concept of a universal cell and a universal genome.
2203.05174
Tony Sun
Tony Y. Sun, Shreyas Bhave, Jaan Altosaar, No\'emie Elhadad
Assessing Phenotype Definitions for Algorithmic Fairness
American Medical Informatics Association (AMIA) 2022 - Accepted paper and presentation; Conference on Health, Inference, and Learning (CHIL) 2022 - Invited non-archival presentation
null
null
null
q-bio.OT cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Disease identification is a core, routine activity in observational health research. Cohorts impact downstream analyses, such as how a condition is characterized, how patient risk is defined, and what treatments are studied. It is thus critical to ensure that selected cohorts are representative of all patients, independently of their demographics or social determinants of health. While there are multiple potential sources of bias when constructing phenotype definitions which may affect their fairness, it is not standard in the field of phenotyping to consider the impact of different definitions across subgroups of patients. In this paper, we propose a set of best practices to assess the fairness of phenotype definitions. We leverage established fairness metrics commonly used in predictive models and relate them to commonly used epidemiological cohort description metrics. We describe an empirical study for Crohn's disease and diabetes type 2, each with multiple phenotype definitions taken from the literature across two sets of patient subgroups (gender and race). We show that the different phenotype definitions exhibit widely varying and disparate performance according to the different fairness metrics and subgroups. We hope that the proposed best practices can help in constructing fair and inclusive phenotype definitions.
[ { "created": "Thu, 10 Mar 2022 06:10:20 GMT", "version": "v1" }, { "created": "Sat, 27 Aug 2022 23:23:46 GMT", "version": "v2" } ]
2022-08-30
[ [ "Sun", "Tony Y.", "" ], [ "Bhave", "Shreyas", "" ], [ "Altosaar", "Jaan", "" ], [ "Elhadad", "Noémie", "" ] ]
Disease identification is a core, routine activity in observational health research. Cohorts impact downstream analyses, such as how a condition is characterized, how patient risk is defined, and what treatments are studied. It is thus critical to ensure that selected cohorts are representative of all patients, independently of their demographics or social determinants of health. While there are multiple potential sources of bias when constructing phenotype definitions which may affect their fairness, it is not standard in the field of phenotyping to consider the impact of different definitions across subgroups of patients. In this paper, we propose a set of best practices to assess the fairness of phenotype definitions. We leverage established fairness metrics commonly used in predictive models and relate them to commonly used epidemiological cohort description metrics. We describe an empirical study for Crohn's disease and diabetes type 2, each with multiple phenotype definitions taken from the literature across two sets of patient subgroups (gender and race). We show that the different phenotype definitions exhibit widely varying and disparate performance according to the different fairness metrics and subgroups. We hope that the proposed best practices can help in constructing fair and inclusive phenotype definitions.
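Editorial note: in practice, the proposed assessment amounts to stratifying standard classification metrics by patient subgroup. The sketch below, with an invented toy table, computes three quantities whose subgroup gaps correspond to common fairness criteria; the column names and values are ours, not the paper's data.

# A minimal sketch of subgroup-stratified phenotype performance metrics.
import pandas as pd

df = pd.DataFrame({
    "subgroup":  ["F", "F", "F", "M", "M", "M"],
    "true_case": [1,   1,   0,   1,   1,   0  ],   # chart-review gold standard
    "flagged":   [1,   0,   0,   1,   1,   1  ],   # phenotype definition output
})

def subgroup_metrics(g):
    tp = ((g.true_case == 1) & (g.flagged == 1)).sum()
    fp = ((g.true_case == 0) & (g.flagged == 1)).sum()
    fn = ((g.true_case == 1) & (g.flagged == 0)).sum()
    return pd.Series({
        "sensitivity": tp / (tp + fn),       # equal-opportunity gap if unequal
        "ppv": tp / (tp + fp),               # predictive-parity gap if unequal
        "flag_rate": g.flagged.mean(),       # demographic-parity gap if unequal
    })

print(df.groupby("subgroup").apply(subgroup_metrics))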
2008.13561
Till Strohsal
Zhen Zhu, Enzo Weber, Till Strohsal, Duaa Serhan
Sustainable Border Control Policy in the COVID-19 Pandemic: A Math Modeling Study
10 pages, 3 figures and 1 table. A condensed and modified version. Improved writing, no major results changed
null
null
null
q-bio.PE econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imported COVID-19 cases, if unchecked, can jeopardize the effort of domestic containment. We aim to find out what sustainable border control options exist for different entities (e.g., countries, states) during the reopening phases, given their own choice of domestic control measures and new technologies such as contact tracing. We propose a SUIHR model, an extension of discrete-time SIR models, which focuses on the spread of the virus predominantly by asymptomatic and pre-symptomatic patients. Imported risk and (1-tier) contact tracing are both built into the model. Under plausible parameter assumptions, we seek sustainable border control policies, in combination with sufficient internal measures, that allow entities to confine the virus without the need to revert to more restrictive lifestyles or to rely on herd immunity. When the basic reproduction number of COVID-19 exceeds 2.5, even 100% effective contact tracing alone is not enough to contain the spread. For an entity that has completely eliminated the virus domestically and resumed "normal", very strict pre-departure screening combined with testing and isolation upon arrival and effective contact tracing can only delay another outbreak by 6 months. However, if the total net imported cases are non-increasing and the entity employs a confining domestic control policy, then the total new cases can be contained even without border control.
[ { "created": "Fri, 28 Aug 2020 17:07:53 GMT", "version": "v1" }, { "created": "Tue, 1 Sep 2020 15:07:58 GMT", "version": "v2" }, { "created": "Thu, 4 Feb 2021 11:29:02 GMT", "version": "v3" } ]
2021-02-05
[ [ "Zhu", "Zhen", "" ], [ "Weber", "Enzo", "" ], [ "Strohsal", "Till", "" ], [ "Serhan", "Duaa", "" ] ]
Imported COVID-19 cases, if unchecked, can jeopardize the effort of domestic containment. We aim to find out what sustainable border control options exist for different entities (e.g., countries, states) during the reopening phases, given their own choice of domestic control measures and new technologies such as contact tracing. We propose a SUIHR model, an extension of discrete-time SIR models, which focuses on the spread of the virus predominantly by asymptomatic and pre-symptomatic patients. Imported risk and (1-tier) contact tracing are both built into the model. Under plausible parameter assumptions, we seek sustainable border control policies, in combination with sufficient internal measures, that allow entities to confine the virus without the need to revert to more restrictive lifestyles or to rely on herd immunity. When the basic reproduction number of COVID-19 exceeds 2.5, even 100% effective contact tracing alone is not enough to contain the spread. For an entity that has completely eliminated the virus domestically and resumed "normal", very strict pre-departure screening combined with testing and isolation upon arrival and effective contact tracing can only delay another outbreak by 6 months. However, if the total net imported cases are non-increasing and the entity employs a confining domestic control policy, then the total new cases can be contained even without border control.
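Editorial note: to convey the flavour of such models without reproducing the SUIHR compartments (which the abstract does not spell out), here is a generic discrete-time SIR-type update with a constant stream of imported cases and a contact-tracing factor. This is a stand-in, not the SUIHR model itself, and all values are illustrative.

# A minimal sketch of a discrete-time SIR-type update with imports and tracing.
N, beta, gamma = 1e6, 0.35, 0.2      # population, transmission, removal rates
trace_eff = 0.6                      # fraction of transmission cut by tracing
imports = 2.0                        # net imported infectious cases per day

S, I, R = N - 10.0, 10.0, 0.0
for day in range(180):
    domestic = (1 - trace_eff) * beta * S * I / N
    new_rem = gamma * I
    S = S - domestic
    I = I + domestic + imports - new_rem
    R = R + new_rem
print(f"active infections after 180 days: {I:,.0f}")

With these values the effective reproduction number is (1 - 0.6) * 0.35 / 0.2 = 0.7, so domestic spread is subcritical and the infection level settles near a steady state sustained by imports, illustrating why non-increasing imports plus a confining domestic policy keep total cases bounded.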
1804.10521
Bradly Alicea
Bradly Alicea
An Integrative Introduction to Human Augmentation Science
22 pages, 5 figures
null
null
null
q-bio.NC cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Human Augmentation (HA) spans several technical fields and methodological approaches, including Experimental Psychology, Human-Computer Interaction, Psychophysiology, and Artificial Intelligence. Augmentation involves various strategies for optimizing and controlling cognitive states, which requires an understanding of biological plasticity, dynamic cognitive processes, and models of adaptive systems. As an instructive lesson, we will explore a few HA-related concepts and outstanding issues. Next, we focus on inducing and controlling HA using experimental methods by introducing three techniques for HA implementation: learning augmentation, augmentation using physical media, and extended phenotype modeling. To conclude, we will review integrative approaches to augmentation, which transcend specific functions.
[ { "created": "Thu, 26 Apr 2018 14:29:27 GMT", "version": "v1" } ]
2018-04-30
[ [ "Alicea", "Bradly", "" ] ]
Human Augmentation (HA) spans several technical fields and methodological approaches, including Experimental Psychology, Human-Computer Interaction, Psychophysiology, and Artificial Intelligence. Augmentation involves various strategies for optimizing and controlling cognitive states, which requires an understanding of biological plasticity, dynamic cognitive processes, and models of adaptive systems. As an instructive lesson, we will explore a few HA-related concepts and outstanding issues. Next, we focus on inducing and controlling HA using experimental methods by introducing three techniques for HA implementation: learning augmentation, augmentation using physical media, and extended phenotype modeling. To conclude, we will review integrative approaches to augmentation, which transcend specific functions.
2401.03968
Erpai Luo
Erpai Luo, Minsheng Hao, Lei Wei, Xuegong Zhang
scDiffusion: conditional generation of high-quality single-cell data using diffusion model
null
null
null
null
q-bio.QM cs.LG q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Single-cell RNA sequencing (scRNA-seq) data are important for studying the laws of life at the single-cell level. However, it is still challenging to obtain enough high-quality scRNA-seq data. To mitigate the limited availability of data, generative models have been proposed to computationally generate synthetic scRNA-seq data. Nevertheless, the data generated with current models are not yet very realistic, especially when data must be generated under controlled conditions. Meanwhile, diffusion models have shown their power in generating data at high fidelity, providing a new opportunity for scRNA-seq generation. In this study, we developed scDiffusion, a generative model that combines a diffusion model with a foundation model to generate high-quality scRNA-seq data under controlled conditions. We designed multiple classifiers to guide the diffusion process simultaneously, enabling scDiffusion to generate data under multiple condition combinations. We also proposed a new control strategy called Gradient Interpolation, which allows the model to generate continuous trajectories of cell development from a given cell state. Experiments showed that scDiffusion can generate single-cell gene expression data closely resembling real scRNA-seq data, and can conditionally produce data for specific cell types, including rare ones. Furthermore, the multiple-condition generation of scDiffusion can produce a cell type that was absent from the training data. Leveraging the Gradient Interpolation strategy, we generated a continuous developmental trajectory of mouse embryonic cells. These experiments demonstrate that scDiffusion is a powerful tool for augmenting real scRNA-seq data and can provide insights into cell fate research.
[ { "created": "Mon, 8 Jan 2024 15:44:39 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 04:45:14 GMT", "version": "v2" } ]
2024-03-06
[ [ "Luo", "Erpai", "" ], [ "Hao", "Minsheng", "" ], [ "Wei", "Lei", "" ], [ "Zhang", "Xuegong", "" ] ]
Single-cell RNA sequencing (scRNA-seq) data are important for studying the laws of life at the single-cell level. However, it is still challenging to obtain enough high-quality scRNA-seq data. To mitigate the limited availability of data, generative models have been proposed to computationally generate synthetic scRNA-seq data. Nevertheless, the data generated with current models are not yet very realistic, especially when data must be generated under controlled conditions. Meanwhile, diffusion models have shown their power in generating data at high fidelity, providing a new opportunity for scRNA-seq generation. In this study, we developed scDiffusion, a generative model that combines a diffusion model with a foundation model to generate high-quality scRNA-seq data under controlled conditions. We designed multiple classifiers to guide the diffusion process simultaneously, enabling scDiffusion to generate data under multiple condition combinations. We also proposed a new control strategy called Gradient Interpolation, which allows the model to generate continuous trajectories of cell development from a given cell state. Experiments showed that scDiffusion can generate single-cell gene expression data closely resembling real scRNA-seq data, and can conditionally produce data for specific cell types, including rare ones. Furthermore, the multiple-condition generation of scDiffusion can produce a cell type that was absent from the training data. Leveraging the Gradient Interpolation strategy, we generated a continuous developmental trajectory of mouse embryonic cells. These experiments demonstrate that scDiffusion is a powerful tool for augmenting real scRNA-seq data and can provide insights into cell fate research.
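As a rough illustration of the two ideas named above, classifier guidance and Gradient Interpolation, the PyTorch sketch below blends the log-probability gradients of a guidance classifier between a source and a target condition. The function names, the epsilon-parameterization, and the sign convention are assumptions; the paper's actual implementation may differ.

```python
import torch

def classifier_grad(clf, x, t, label, scale):
    """Gradient of scale * log p(label | x, t) w.r.t. x for one guidance classifier."""
    x = x.detach().requires_grad_(True)
    logp = torch.log_softmax(clf(x, t), dim=-1)[:, label].sum()
    return torch.autograd.grad(scale * logp, x)[0]

def interpolated_guided_eps(eps_model, clf, x, t, src, dst, alpha, scale):
    """One guided noise prediction, with guidance interpolated between two states.
    alpha in [0, 1] slides the condition from `src` toward `dst`; several
    classifiers could be summed here to combine multiple conditions."""
    eps = eps_model(x, t)                        # unconditional noise prediction
    g_src = classifier_grad(clf, x, t, src, scale)
    g_dst = classifier_grad(clf, x, t, dst, scale)
    g = (1.0 - alpha) * g_src + alpha * g_dst    # gradient interpolation (one reading)
    return eps - g                               # sign follows the eps-parameterization
```

Sweeping `alpha` from 0 to 1 across sampling runs is one plausible way to trace a continuous trajectory between two cell states, which is the effect the abstract attributes to Gradient Interpolation.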
2307.14790
Maitham Yousif
Maitham G. Yousif
Decoding Microbial Enigmas: Unleashing the Power of Artificial Intelligence in Analyzing Antibiotic-Resistant Pathogens and their Impact on Human Health
11 pages, 2 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-sa/4.0/
In this research, medical information from 1200 patients across various hospitals in Iraq was collected over a period of 3 years, from February 3, 2018, to March 5, 2021. The study encompassed several infections, including urinary tract infections, wound infections, tonsillitis, prostatitis, endometritis, endometrial lining infections, burn infections, pneumonia, and bloodstream infections in children. Multiple bacterial pathogens were identified, and their resistance to various antibiotics was recorded. The data analysis revealed significant patterns of antibiotic resistance among the identified bacterial pathogens. Resistance was observed to several commonly used antibiotics, highlighting the emerging challenge of antimicrobial resistance in Iraq. These findings underscore the importance of implementing effective antimicrobial stewardship programs and infection control measures in healthcare settings to mitigate the spread of antibiotic-resistant infections and ensure optimal patient outcomes. This study contributes valuable insights into the prevalence and patterns of antibiotic resistance in microbial infections, which can guide healthcare practitioners and policymakers in formulating targeted interventions to combat the growing threat of antimicrobial resistance in Iraq's healthcare landscape.
[ { "created": "Thu, 27 Jul 2023 11:42:48 GMT", "version": "v1" } ]
2023-07-28
[ [ "Yousif", "Maitham G.", "" ] ]
In this research, medical information from 1200 patients across various hospitals in Iraq was collected over a period of 3 years, from February 3, 2018, to March 5, 2021. The study encompassed several infections, including urinary tract infections, wound infections, tonsillitis, prostatitis, endometritis, endometrial lining infections, burn infections, pneumonia, and bloodstream infections in children. Multiple bacterial pathogens were identified, and their resistance to various antibiotics was recorded. The data analysis revealed significant patterns of antibiotic resistance among the identified bacterial pathogens. Resistance was observed to several commonly used antibiotics, highlighting the emerging challenge of antimicrobial resistance in Iraq. These findings underscore the importance of implementing effective antimicrobial stewardship programs and infection control measures in healthcare settings to mitigate the spread of antibiotic-resistant infections and ensure optimal patient outcomes. This study contributes valuable insights into the prevalence and patterns of antibiotic resistance in microbial infections, which can guide healthcare practitioners and policymakers in formulating targeted interventions to combat the growing threat of antimicrobial resistance in Iraq's healthcare landscape.
1104.5112
Ganna Rozhnova
Ganna Rozhnova, Ana Nunes
Modeling the long term dynamics of pre-vaccination pertussis
paper (31 pages, 11 figures, 1 table) and supplementary material (19 pages, 5 figures, 2 tables)
Journal of the Royal Society Interface 9, 2959-2970 (2012)
10.1098/rsif.2012.0432
null
q-bio.PE cond-mat.stat-mech physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of strongly immunizing childhood infections is still not well understood. Although reports of successful modeling of several incidence data records can be found in the literature, the key determinants of the observed temporal patterns have not been clearly identified. In particular, different models of immunity waning and degree of protection, applied to both disease- and vaccine-induced immunity, have been debated in the pertussis literature. Here we study the effect of disease-acquired immunity on the long-term patterns of pertussis prevalence. We compare five minimal models, all of which are stochastic, seasonally forced, well-mixed models of infection based on susceptible-infective-recovered dynamics in a closed population. These models reflect different assumptions about the immune response of naive hosts: total permanent immunity, immunity waning, immunity waning together with immunity boosting, reinfection of the recovered, and repeat infection after partial immunity waning. The power spectra of the output prevalence time series characterize the long-term dynamics of the models. For epidemiological parameters consistent with published data for pertussis, the power spectra show quantitative and even qualitative differences that can be used to test the models' assumptions against ensembles of pre-vaccination data records spanning several decades. We illustrate this strategy on two publicly available historical data sets.
[ { "created": "Wed, 27 Apr 2011 10:55:52 GMT", "version": "v1" }, { "created": "Sat, 25 Feb 2012 20:23:35 GMT", "version": "v2" }, { "created": "Thu, 17 May 2012 18:55:44 GMT", "version": "v3" }, { "created": "Wed, 18 Jul 2012 10:18:08 GMT", "version": "v4" } ]
2012-10-09
[ [ "Rozhnova", "Ganna", "" ], [ "Nunes", "Ana", "" ] ]
The dynamics of strongly immunizing childhood infections is still not well understood. Although reports of successful modeling of several incidence data records can be found in the literature, the key determinants of the observed temporal patterns have not been clearly identified. In particular, different models of immunity waning and degree of protection, applied to both disease- and vaccine-induced immunity, have been debated in the pertussis literature. Here we study the effect of disease-acquired immunity on the long-term patterns of pertussis prevalence. We compare five minimal models, all of which are stochastic, seasonally forced, well-mixed models of infection based on susceptible-infective-recovered dynamics in a closed population. These models reflect different assumptions about the immune response of naive hosts: total permanent immunity, immunity waning, immunity waning together with immunity boosting, reinfection of the recovered, and repeat infection after partial immunity waning. The power spectra of the output prevalence time series characterize the long-term dynamics of the models. For epidemiological parameters consistent with published data for pertussis, the power spectra show quantitative and even qualitative differences that can be used to test the models' assumptions against ensembles of pre-vaccination data records spanning several decades. We illustrate this strategy on two publicly available historical data sets.
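To make the methodology concrete, the sketch below simulates a stochastic, seasonally forced SIR model with demographic turnover and computes the power spectrum of the resulting prevalence series. The parameter values are generic pertussis-like choices for illustration, not the paper's fitted values, and the simple binomial-step scheme is one possible stochastic formulation, not necessarily the authors'.

```python
import numpy as np

def simulate_sir(beta0, eps, gamma, mu, N, I0, years, dt=1 / 365, seed=0):
    """Stochastic SIR with seasonal forcing beta(t) = beta0*(1 + eps*cos(2*pi*t))
    and birth/death rate mu (time unit: years). Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    S, I = N - I0, I0
    prev = np.empty(int(years / dt))
    for k in range(prev.size):
        beta = beta0 * (1.0 + eps * np.cos(2.0 * np.pi * k * dt))
        new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N * dt))
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma * dt))
        births = rng.poisson(mu * N * dt)          # susceptible recruitment
        dead_S = rng.binomial(S - new_inf, 1.0 - np.exp(-mu * dt))
        dead_I = rng.binomial(I - new_rec, 1.0 - np.exp(-mu * dt))
        S += births - new_inf - dead_S
        I += new_inf - new_rec - dead_I
        prev[k] = I / N
    return prev

# Pertussis-like choices: R0 ~ 16, infectious period ~ 22 days, 50-year lifespan.
prev = simulate_sir(beta0=260, eps=0.15, gamma=365 / 22, mu=1 / 50,
                    N=1_000_000, I0=500, years=60)
series = prev[prev.size // 3:]                     # drop the transient
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=1 / 365)    # cycles per year
```

Peaks in `power` at and away from one cycle per year are the kind of spectral signature the paper uses to discriminate between competing immunity assumptions.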
2405.06658
Yiqing Shen
Yiqing Shen, Outongyi Lv, Houying Zhu, Yu Guang Wang
ProteinEngine: Empower LLM with Domain Knowledge for Protein Engineering
null
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have garnered considerable attention for their proficiency in tackling intricate tasks, particularly leveraging their capacities for zero-shot and in-context learning. However, their utility has been predominantly restricted to general tasks due to an absence of domain-specific knowledge. This constraint becomes particularly pertinent in the realm of protein engineering, where specialized expertise is required for tasks such as protein function prediction, protein evolution analysis, and protein design, with a level of specialization that existing LLMs cannot furnish. In response to this challenge, we introduce \textsc{ProteinEngine}, a human-centered platform aimed at amplifying the capabilities of LLMs in protein engineering by seamlessly integrating a comprehensive range of relevant tools, packages, and software via API calls. Uniquely, \textsc{ProteinEngine} assigns three distinct roles to LLMs, facilitating efficient task delegation, specialized task resolution, and effective communication of results. This design fosters high extensibility and promotes the smooth incorporation of new algorithms, models, and features for future development. Extensive user studies, involving participants from both the AI and protein engineering communities across academia and industry, consistently validate the superiority of \textsc{ProteinEngine} in augmenting the reliability and precision of deep learning in protein engineering tasks. Consequently, our findings highlight the potential of \textsc{ProteinEngine} to bridge the disconnected tools for future research in the protein engineering domain.
[ { "created": "Sun, 21 Apr 2024 01:07:33 GMT", "version": "v1" } ]
2024-05-14
[ [ "Shen", "Yiqing", "" ], [ "Lv", "Outongyi", "" ], [ "Zhu", "Houying", "" ], [ "Wang", "Yu Guang", "" ] ]
Large language models (LLMs) have garnered considerable attention for their proficiency in tackling intricate tasks, particularly leveraging their capacities for zero-shot and in-context learning. However, their utility has been predominantly restricted to general tasks due to an absence of domain-specific knowledge. This constraint becomes particularly pertinent in the realm of protein engineering, where specialized expertise is required for tasks such as protein function prediction, protein evolution analysis, and protein design, with a level of specialization that existing LLMs cannot furnish. In response to this challenge, we introduce \textsc{ProteinEngine}, a human-centered platform aimed at amplifying the capabilities of LLMs in protein engineering by seamlessly integrating a comprehensive range of relevant tools, packages, and software via API calls. Uniquely, \textsc{ProteinEngine} assigns three distinct roles to LLMs, facilitating efficient task delegation, specialized task resolution, and effective communication of results. This design fosters high extensibility and promotes the smooth incorporation of new algorithms, models, and features for future development. Extensive user studies, involving participants from both the AI and protein engineering communities across academia and industry, consistently validate the superiority of \textsc{ProteinEngine} in augmenting the reliability and precision of deep learning in protein engineering tasks. Consequently, our findings highlight the potential of \textsc{ProteinEngine} to bridge the disconnected tools for future research in the protein engineering domain.
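The three-role division of labor described above is easy to picture with a small dispatcher. Everything in the sketch below — the role names, the canned `call_llm` stub, and the registered tool — is a hypothetical stand-in; none of it is ProteinEngine's actual API.

```python
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a protein-engineering tool under a name the delegator can pick."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("predict_function")
def predict_function(seq: str) -> str:
    return f"toy annotation for sequence starting {seq[:10]}"  # stand-in predictor

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a chat-completion call; swap in any real client."""
    canned = {
        "delegator": "predict_function",
        "resolver": prompt.rsplit(": ", 1)[-1],
        "reporter": "Summary: " + prompt.rsplit(": ", 1)[-1],
    }
    return canned[role]

def run(query: str) -> str:
    # Role 1: task delegation -- choose a registered tool for the request.
    tool_name = call_llm("delegator", f"Pick one of {sorted(TOOLS)} for: {query}")
    # Role 2: specialized task resolution -- extract the tool input and call it.
    args = call_llm("resolver", f"Extract the input for {tool_name} from: {query}")
    result = TOOLS[tool_name](args)
    # Role 3: communication of results -- present the raw output readably.
    return call_llm("reporter", f"Explain this to the user: {result}")

print(run("Annotate the function of: MKTAYIAKQR"))
```

The registry pattern is what gives this design its extensibility: adding a new algorithm means registering one more function, with no change to the three-role control flow.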