Dataset columns (name, type, and observed range of string/list lengths or number of distinct values):

id              stringlengths   9 to 13
submitter       stringlengths   4 to 48
authors         stringlengths   4 to 9.62k
title           stringlengths   4 to 343
comments        stringlengths   2 to 480
journal-ref     stringlengths   9 to 309
doi             stringlengths   12 to 138
report-no       stringclasses   277 values
categories      stringlengths   8 to 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 to 3.76k
versions        listlengths     1 to 15
update_date     stringlengths   10 to 10
authors_parsed  listlengths     1 to 147
abstract        stringlengths   24 to 3.75k
id: 1109.4465
submitter: Thorsten Pr\"ustel
authors: Thorsten Pr\"ustel and Martin Meier-Schellersheim
title: Exact Green's Function of the reversible diffusion-influenced reaction for an isolated pair in 2D
comments: 6 pages, 1 Figure
journal-ref: null
doi: 10.1063/1.4737662
report-no: null
categories: q-bio.QM
license: http://creativecommons.org/licenses/publicdomain/
orig_abstract: We derive an exact Green's function of the diffusion equation for a pair of spherical interacting particles in 2D subject to a back-reaction boundary condition.
versions: [ { "created": "Wed, 21 Sep 2011 03:05:09 GMT", "version": "v1" } ]
update_date: 2015-05-30
authors_parsed: [ [ "Prüstel", "Thorsten", "" ], [ "Meier-Schellersheim", "Martin", "" ] ]
abstract: We derive an exact Green's function of the diffusion equation for a pair of spherical interacting particles in 2D subject to a back-reaction boundary condition.
id: 1509.07028
submitter: Imane Boudellioua
authors: Imane Boudellioua, Rabie Saidi, Maria Martin, Robert Hoehndorf, and Victor Solovyev
title: Prediction of Metabolic Pathways Involvement in Prokaryotic UniProtKB Data by Association Rule Mining
comments: null
journal-ref: PLoS ONE. 11(7) 2016 (1-16)
doi: 10.1371/journal.pone.0158896
report-no: null
categories: q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The widening gap between known proteins and their functions has encouraged the development of methods to automatically infer annotations. Automatic functional annotation of proteins is expected to meet the conflicting requirements of maximizing annotation coverage, while minimizing erroneous functional assignments. This trade-off imposes a great challenge in designing intelligent systems to tackle the problem of automatic protein annotation. In this work, we present a system that utilizes rule mining techniques to predict metabolic pathways in prokaryotes. The resulting knowledge represents predictive models that assign pathway involvement to UniProtKB entries. We carried out an evaluation study of our system performance using cross-validation technique. We found that it achieved very promising results in pathway identification with an F1-measure of 0.982 and an AUC of 0.987. Our prediction models were then successfully applied to 6.2 million UniProtKB/TrEMBL reference proteome entries of prokaryotes. As a result, 663,724 entries were covered, where 436,510 of them lacked any previous pathway annotations.
versions: [ { "created": "Wed, 23 Sep 2015 15:22:16 GMT", "version": "v1" }, { "created": "Mon, 19 Sep 2016 14:28:44 GMT", "version": "v2" } ]
update_date: 2016-09-23
authors_parsed: [ [ "Boudellioua", "Imane", "" ], [ "Saidi", "Rabie", "" ], [ "Martin", "Maria", "" ], [ "Hoehndorf", "Robert", "" ], [ "Solovyev", "Victor", "" ] ]
abstract: The widening gap between known proteins and their functions has encouraged the development of methods to automatically infer annotations. Automatic functional annotation of proteins is expected to meet the conflicting requirements of maximizing annotation coverage, while minimizing erroneous functional assignments. This trade-off imposes a great challenge in designing intelligent systems to tackle the problem of automatic protein annotation. In this work, we present a system that utilizes rule mining techniques to predict metabolic pathways in prokaryotes. The resulting knowledge represents predictive models that assign pathway involvement to UniProtKB entries. We carried out an evaluation study of our system performance using a cross-validation technique. We found that it achieved very promising results in pathway identification with an F1-measure of 0.982 and an AUC of 0.987. Our prediction models were then successfully applied to 6.2 million UniProtKB/TrEMBL reference proteome entries of prokaryotes. As a result, 663,724 entries were covered, of which 436,510 lacked any previous pathway annotations.
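The support/confidence arithmetic underlying association rule mining, which the record above applies to pathway prediction, can be sketched in a few lines. This is an illustrative toy, not the authors' system; every protein feature and pathway name below is made up.

```python
# Toy "transactions": each protein entry is a set of features plus a pathway label.
# All names here are hypothetical, for illustration only.
entries = [
    {"domain:kinase", "ec:2.7", "pathway:glycolysis"},
    {"domain:kinase", "ec:2.7", "pathway:glycolysis"},
    {"domain:kinase", "ec:3.1", "pathway:lipid"},
    {"domain:barrel", "ec:2.7", "pathway:glycolysis"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Rule confidence: estimated P(consequent | antecedent)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Rule {kinase domain, EC 2.7} -> {glycolysis pathway}
rule_conf = confidence({"domain:kinase", "ec:2.7"}, {"pathway:glycolysis"}, entries)
print(rule_conf)  # both entries matching the antecedent carry the glycolysis label -> 1.0
```

A mined rule with high support and confidence then becomes a predictive model: any unannotated entry matching the antecedent is assigned the pathway in the consequent.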
id: q-bio/0601030
submitter: Mark Hertzberg
authors: P. W. Kuchel, B. E. Chapman, W. A. Bubb, P. E. Hansen, C. J. Durrant, M. P. Hertzberg
title: Magnetic Susceptibility: Solutions, Emulsions, and Cells
comments: 15 pages, 9 figures, v2: updated to resemble the published version
journal-ref: Concepts in Magnetic Resonance Part A, Vol. 18A(1) 56-71 (2003)
doi: 10.1002/cmr.a.10066
report-no: null
categories: q-bio.CB physics.chem-ph
license: null
orig_abstract: Differences in magnetic susceptibility between various compartments in heterogeneous samples can introduce unanticipated complications to NMR spectra. On the other hand, an understanding of these effects at the level of the underlying physical principles has led to the development of several experimental techniques that provide data on cellular function that are unique to NMR spectroscopy. To illustrate some key features of susceptibility effects we present, among a more general overview, results obtained with red blood cells and a recently described model system involving diethyl phthalate in water. This substance forms a relatively stable emulsion in water and yet it has a significant solubility of 5 mmol/L at room temperature; thus, the NMR spectrum has twice as many resonances as would be expected for a simple solution. What determines the relative intensities of the two families of peaks and can their frequencies be manipulated experimentally in a predictable way? The theory used to interpret the NMR spectra from the model system and cells was first developed in the context of electrostatics nearly a century ago, and yet some of its underlying assumptions now warrant closer scrutiny. While this insight is used in a practical way in this article, the accompanying article deals with the mathematics and physics behind this new analysis.
versions: [ { "created": "Fri, 20 Jan 2006 10:07:15 GMT", "version": "v1" }, { "created": "Sun, 24 Dec 2006 08:03:52 GMT", "version": "v2" } ]
update_date: 2009-10-05
authors_parsed: [ [ "Kuchel", "P. W.", "" ], [ "Chapman", "B. E.", "" ], [ "Bubb", "W. A.", "" ], [ "Hansen", "P. E.", "" ], [ "Durrant", "C. J.", "" ], [ "Hertzberg", "M. P.", "" ] ]
abstract: Differences in magnetic susceptibility between various compartments in heterogeneous samples can introduce unanticipated complications to NMR spectra. On the other hand, an understanding of these effects at the level of the underlying physical principles has led to the development of several experimental techniques that provide data on cellular function that are unique to NMR spectroscopy. To illustrate some key features of susceptibility effects we present, among a more general overview, results obtained with red blood cells and a recently described model system involving diethyl phthalate in water. This substance forms a relatively stable emulsion in water and yet it has a significant solubility of 5 mmol/L at room temperature; thus, the NMR spectrum has twice as many resonances as would be expected for a simple solution. What determines the relative intensities of the two families of peaks and can their frequencies be manipulated experimentally in a predictable way? The theory used to interpret the NMR spectra from the model system and cells was first developed in the context of electrostatics nearly a century ago, and yet some of its underlying assumptions now warrant closer scrutiny. While this insight is used in a practical way in this article, the accompanying article deals with the mathematics and physics behind this new analysis.
id: 2010.04378
submitter: Sabrina Streipert
authors: Sabrina H. Streipert and Gail S. K. Wolkowicz
title: An alternative delayed population growth difference equation model
comments: 11 pages, 3 figures, Appendix: 13 pages
journal-ref: null
doi: 10.1007/s00285-021-01652-9
report-no: null
categories: q-bio.PE math.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We propose an alternative delayed population growth difference equation model based on a modification of the Beverton-Holt recurrence, assuming a delay only in the growth contribution that takes into account that those individuals that die during the delay, do not contribute to growth. The model introduced differs from existing delay difference equations in population dynamics, such as the delayed logistic difference equation, which was formulated as a discretization of the Hutchinson model. The analysis of our delayed difference equation model identifies an important critical delay threshold. If the time delay exceeds this threshold, the model predicts that the population will go extinct for all non-negative initial conditions and if it is below this threshold, the population survives and its size converges to a positive globally asymptotically stable equilibrium that is decreasing in size as the delay increases. Firstly, we obtain the local stability results by exploiting the special structure of powers of the Jacobian matrix. Secondly, we show that local stability implies global stability using two different techniques. For one set of parameter values, a contraction mapping result is applied, while for the remaining set of parameter values, we show that the result follows by first proving that the recurrence structure is eventually monotonic in each of its arguments.
versions: [ { "created": "Fri, 9 Oct 2020 05:44:27 GMT", "version": "v1" } ]
update_date: 2022-10-24
authors_parsed: [ [ "Streipert", "Sabrina H.", "" ], [ "Wolkowicz", "Gail S. K.", "" ] ]
abstract: We propose an alternative delayed population growth difference equation model based on a modification of the Beverton-Holt recurrence, assuming a delay only in the growth contribution, so that individuals that die during the delay do not contribute to growth. The model introduced differs from existing delay difference equations in population dynamics, such as the delayed logistic difference equation, which was formulated as a discretization of the Hutchinson model. The analysis of our delayed difference equation model identifies an important critical delay threshold. If the time delay exceeds this threshold, the model predicts that the population will go extinct for all non-negative initial conditions; if it is below this threshold, the population survives and its size converges to a positive globally asymptotically stable equilibrium that decreases in size as the delay increases. Firstly, we obtain the local stability results by exploiting the special structure of powers of the Jacobian matrix. Secondly, we show that local stability implies global stability using two different techniques. For one set of parameter values, a contraction mapping result is applied, while for the remaining set of parameter values, we show that the result follows by first proving that the recurrence structure is eventually monotonic in each of its arguments.
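For context, the classical (undelayed) Beverton-Holt recurrence that this paper modifies is x_{t+1} = r x_t / (1 + x_t / M); for r > 1 its positive equilibrium x* = M(r - 1) is globally attracting. A quick numerical check of that baseline fact (the authors' delayed variant is not reproduced here):

```python
def beverton_holt(x, r=2.0, M=100.0):
    """One step of the classical Beverton-Holt map x -> r*x / (1 + x/M)."""
    return r * x / (1.0 + x / M)

x = 10.0
for _ in range(200):
    x = beverton_holt(x)
print(x)  # converges to the equilibrium M*(r - 1) = 100
```

Near the equilibrium the map's derivative is 1/r = 0.5, so the error halves each step, which is why 200 iterations suffice.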
id: 1902.06151
submitter: Sean Parsons
authors: Sean Parsons and Jan Huizinga
title: Robust and fast heart rate variability analysis of long and noisy electrocardiograms using neural networks and images
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Heart rate variability studies depend on the robust calculation of the tachogram, the heart rate times series, usually by the detection of R peaks in the electrocardiogram (ECG). ECGs however are subject to a number of sources of noise which are difficult to filter and therefore reduce the tachogram accuracy. We describe a pipeline for fast calculation of tachograms from noisy ECGs of several hours' length. The pipeline consists of three stages. A neural network (NN) trained to detect R peaks and distinguish these from noise; a measure to robustly detect false positives (FPs) and negatives (FNs) produced by the NN; a simple "alarm" algorithm for automatically removing FPs and interpolating FNs. In addition, we introduce the approach of encoding ECGs, tachograms and other cardiac time series in the form of raster images, which greatly speeds and eases their visual inspection and analysis.
versions: [ { "created": "Sat, 16 Feb 2019 20:26:38 GMT", "version": "v1" } ]
update_date: 2019-02-19
authors_parsed: [ [ "Parsons", "Sean", "" ], [ "Huizinga", "Jan", "" ] ]
abstract: Heart rate variability studies depend on the robust calculation of the tachogram, the heart rate time series, usually by the detection of R peaks in the electrocardiogram (ECG). ECGs, however, are subject to a number of sources of noise which are difficult to filter and which therefore reduce the tachogram accuracy. We describe a pipeline for fast calculation of tachograms from noisy ECGs of several hours' length. The pipeline consists of three stages: a neural network (NN) trained to detect R peaks and distinguish these from noise; a measure to robustly detect false positives (FPs) and negatives (FNs) produced by the NN; and a simple "alarm" algorithm for automatically removing FPs and interpolating FNs. In addition, we introduce the approach of encoding ECGs, tachograms and other cardiac time series in the form of raster images, which greatly speeds and eases their visual inspection and analysis.
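The tachogram computation that such a pipeline's later stages consume is simple arithmetic: successive differences of R-peak times give RR intervals, and 60/RR gives instantaneous heart rate. A self-contained sketch (the peak indices and sampling rate below are made-up toy values; the NN detector itself is not reproduced):

```python
fs = 250.0  # assumed sampling rate in Hz
r_peaks = [100, 300, 510, 715]  # sample indices of detected R peaks (toy values)

# RR intervals in seconds: differences of consecutive peak times.
rr_s = [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]
# Instantaneous heart rate in beats per minute.
hr_bpm = [60.0 / rr for rr in rr_s]

print(rr_s)       # roughly [0.8, 0.84, 0.82] seconds
print(hr_bpm[0])  # roughly 75 beats per minute
```

A grossly out-of-range RR interval (e.g. below 0.2 s or above 2 s) is the kind of signature an FP/FN detector can key on.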
id: 1110.2804
submitter: Srividya Iyer-Biswas
authors: Srividya Iyer-Biswas and C. Jayaprakash
title: Mixed Poisson distributions in exact solutions of stochastic auto-regulation models
comments: null
journal-ref: Phys. Rev. E 90, 052712 (2014)
doi: 10.1103/PhysRevE.90.052712
report-no: null
categories: q-bio.QM cond-mat.stat-mech physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of auto-activation and auto-inhibition. Using the Poisson Representation, a technique whose particular usefulness in the context of non-linear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter-space qualitatively different behaviors arise. These behaviors include power-law tailed distributions, bimodal distributions and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the auto-inhibition and auto-activation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
versions: [ { "created": "Wed, 12 Oct 2011 21:38:24 GMT", "version": "v1" }, { "created": "Sun, 5 Oct 2014 19:38:50 GMT", "version": "v2" } ]
update_date: 2014-11-19
authors_parsed: [ [ "Iyer-Biswas", "Srividya", "" ], [ "Jayaprakash", "C.", "" ] ]
abstract: In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of auto-activation and auto-inhibition. Using the Poisson Representation, a technique whose particular usefulness in the context of non-linear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter-space qualitatively different behaviors arise. These behaviors include power-law tailed distributions, bimodal distributions and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the auto-inhibition and auto-activation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
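The "mixed Poisson" structure named in the title has a standard form: the steady-state copy-number distribution is a Poisson distribution averaged over a mixing density $f$, which is exactly what the Poisson representation delivers. In generic notation (this $f$ is the generic mixing density, not the paper's specific solution):

```latex
P(n) \;=\; \int_0^{\infty} \frac{e^{-\lambda}\,\lambda^{n}}{n!}\, f(\lambda)\, d\lambda ,
\qquad \int_0^{\infty} f(\lambda)\, d\lambda = 1 .
```

A degenerate $f(\lambda)=\delta(\lambda-\lambda_0)$ recovers a pure Poisson distribution; a broad $f$ gives super-Poissonian noise, while sub-Poisson behavior requires the representation's extension beyond non-negative densities.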
id: 2201.00114
submitter: Hongsong Feng
authors: Hongsong Feng, Kaifu Gao, Dong Chen, Alfred J Robison, Edmund Ellsworth and Guo-Wei Wei
title: Machine learning analysis of cocaine addiction informed by DAT, SERT, and NET-based interactome networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Cocaine addiction is a psychosocial disorder induced by the chronic use of cocaine and causes a large of number deaths around the world. Despite many decades' effort, no drugs have been approved by the Food and Drug Administration (FDA) for the treatment of cocaine dependence. Cocaine dependence is neurological and involves many interacting proteins in the interactome. Among them, dopamine transporter (DAT), serotonin transporter (SERT), and norepinephrine transporter (NET) are three major targets. Each of these targets has a large protein-protein interaction (PPI) network which must be considered in the anti-cocaine addiction drug discovery. This work presents DAT, SERT, and NET interactome network-informed machine learning/deep learning (ML/DL) studies of cocaine addiction. We collect and analyze 61 protein targets out 460 proteins in the DAT, SERT, and NET PPI networks that have sufficient existing inhibitor datasets. Utilizing autoencoder and other ML algorithms, we build ML/DL models for these targets with 115,407 inhibitors to predict drug repurposing potentials and possible side effects. We further screen their absorption, distribution, metabolism, and excretion, and toxicity (ADMET) properties to search for nearly optimal leads for anti-cocaine addiction. Our approach sets up a systematic protocol for artificial intelligence (AI)-based anti-cocaine addiction lead discovery.
versions: [ { "created": "Sat, 1 Jan 2022 04:49:20 GMT", "version": "v1" } ]
update_date: 2022-01-04
authors_parsed: [ [ "Feng", "Hongsong", "" ], [ "Gao", "Kaifu", "" ], [ "Chen", "Dong", "" ], [ "Robison", "Alfred J", "" ], [ "Ellsworth", "Edmund", "" ], [ "Wei", "Guo-Wei", "" ] ]
abstract: Cocaine addiction is a psychosocial disorder induced by the chronic use of cocaine and causes a large number of deaths around the world. Despite many decades of effort, no drugs have been approved by the Food and Drug Administration (FDA) for the treatment of cocaine dependence. Cocaine dependence is neurological and involves many interacting proteins in the interactome. Among them, dopamine transporter (DAT), serotonin transporter (SERT), and norepinephrine transporter (NET) are three major targets. Each of these targets has a large protein-protein interaction (PPI) network which must be considered in anti-cocaine addiction drug discovery. This work presents DAT, SERT, and NET interactome network-informed machine learning/deep learning (ML/DL) studies of cocaine addiction. We collect and analyze 61 protein targets out of 460 proteins in the DAT, SERT, and NET PPI networks that have sufficient existing inhibitor datasets. Utilizing autoencoder and other ML algorithms, we build ML/DL models for these targets with 115,407 inhibitors to predict drug repurposing potentials and possible side effects. We further screen their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties to search for nearly optimal leads for anti-cocaine addiction. Our approach sets up a systematic protocol for artificial intelligence (AI)-based anti-cocaine addiction lead discovery.
id: 1811.11145
submitter: Iddo Friedberg
authors: Md-Nafiz Hamid and Iddo Friedberg
title: Reliable uncertainty estimate for antibiotic resistance classification with Stochastic Gradient Langevin Dynamics
comments: Machine Learning for Health (ML4H) Workshop at NeurIPS 2018 arXiv:1811.07216
journal-ref: null
doi: null
report-no: ML4H/2018/17
categories: q-bio.QM cs.LG q-bio.GN
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Antibiotic resistance monitoring is of paramount importance in the face of this on-going global epidemic. Deep learning models trained with traditional optimization algorithms (e.g. Adam, SGD) provide poor posterior estimates when tested against out-of-distribution (OoD) antibiotic resistant/non-resistant genes. In this paper, we introduce a deep learning model trained with Stochastic Gradient Langevin Dynamics (SGLD) to classify antibiotic resistant genes. The model provides better uncertainty estimates when tested against OoD data compared to traditional optimization methods such as Adam.
versions: [ { "created": "Tue, 27 Nov 2018 18:15:58 GMT", "version": "v1" } ]
update_date: 2018-11-28
authors_parsed: [ [ "Hamid", "Md-Nafiz", "" ], [ "Friedberg", "Iddo", "" ] ]
abstract: Antibiotic resistance monitoring is of paramount importance in the face of this on-going global epidemic. Deep learning models trained with traditional optimization algorithms (e.g. Adam, SGD) provide poor posterior estimates when tested against out-of-distribution (OoD) antibiotic resistant/non-resistant genes. In this paper, we introduce a deep learning model trained with Stochastic Gradient Langevin Dynamics (SGLD) to classify antibiotic resistant genes. The model provides better uncertainty estimates when tested against OoD data compared to traditional optimization methods such as Adam.
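The SGLD update this abstract relies on is standard (Welling and Teh): a half-step of gradient ascent on the log-posterior plus Gaussian noise whose variance matches the step size, so the iterates sample from the posterior rather than collapsing to a point estimate. A toy sketch sampling a 1-D standard-normal "posterior" (illustrative only; not the paper's gene classifier):

```python
import math
import random

random.seed(0)

def grad_log_post(theta):
    # Gradient of log N(0, 1): d/dtheta of (-theta**2 / 2) is -theta.
    return -theta

eps = 0.01  # step size; in full SGLD the gradient would also be minibatch-estimated
theta = 5.0
samples = []
for step in range(20000):
    noise = random.gauss(0.0, math.sqrt(eps))
    theta += 0.5 * eps * grad_log_post(theta) + noise  # SGLD update
    if step > 2000:  # discard burn-in
        samples.append(theta)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The spread of `samples` is the posterior uncertainty; for an OoD input, a well-calibrated sampler yields a wide predictive distribution where Adam-trained point estimates stay overconfident.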
id: q-bio/0605050
submitter: Emilio Hernandez-Garcia
authors: Alejandro F. Rozenfeld, Sophie Arnaud-Haond, Emilio Hernandez-Garcia, Victor M. Eguiluz, Manuel A. Matias, Ester Serrao and Carlos M. Duarte
title: Spectrum of genetic diversity and networks of clonal organisms
comments: Replaced with revised version
journal-ref: Journal of the Royal Society Interface, 4, 1093-1102 (2007)
doi: 10.1098/rsif.2007.0230
report-no: null
categories: q-bio.PE q-bio.QM
license: null
orig_abstract: Clonal organisms present a particular challenge in population genetics because, in addition to the possible existence of replicates of the same genotype in a given sample, some of the hypotheses and concepts underlying classical population genetics models are irreconcilable with clonality. The genetic structure and diversity of clonal populations was examined using a combination of new tools to analyze microsatellite data in the marine angiosperm Posidonia oceanica. These tools were based on examination of the frequency distribution of the genetic distance among ramets, termed the spectrum of genetic diversity (GDS), and of networks built on the basis of pairwise genetic distances among genets. The properties and topology of networks based on genetic distances showed a "small-world" topology, characterized by a high degree of connectivity among nodes, and a substantial amount of substructure, revealing organization in sub-families of closely related individuals. Keywords: genetic networks; small-world networks; genetic diversity; clonal organisms
versions: [ { "created": "Tue, 30 May 2006 17:28:17 GMT", "version": "v1" }, { "created": "Tue, 6 Feb 2007 18:19:17 GMT", "version": "v2" } ]
update_date: 2008-01-23
authors_parsed: [ [ "Rozenfeld", "Alejandro F.", "" ], [ "Arnaud-Haond", "Sophie", "" ], [ "Hernandez-Garcia", "Emilio", "" ], [ "Eguiluz", "Victor M.", "" ], [ "Matias", "Manuel A.", "" ], [ "Serrao", "Ester", "" ], [ "Duarte", "Carlos M.", "" ] ]
abstract: Clonal organisms present a particular challenge in population genetics because, in addition to the possible existence of replicates of the same genotype in a given sample, some of the hypotheses and concepts underlying classical population genetics models are irreconcilable with clonality. The genetic structure and diversity of clonal populations was examined using a combination of new tools to analyze microsatellite data in the marine angiosperm Posidonia oceanica. These tools were based on examination of the frequency distribution of the genetic distance among ramets, termed the spectrum of genetic diversity (GDS), and of networks built on the basis of pairwise genetic distances among genets. The properties and topology of networks based on genetic distances showed a "small-world" topology, characterized by a high degree of connectivity among nodes, and a substantial amount of substructure, revealing organization in sub-families of closely related individuals. Keywords: genetic networks; small-world networks; genetic diversity; clonal organisms
id: 0905.2174
submitter: Joel Miller
authors: Joel Miller, Bahman Davoudi, Rafael Meza, Anja Slim, Babak Pourbohloul
title: Epidemics with general generation interval distributions
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We study the spread of susceptible-infected-recovered (SIR) infectious diseases where an individual's infectiousness and probability of recovery depend on his/her "age" of infection. We focus first on early outbreak stages when stochastic effects dominate and show that epidemics tend to happen faster than deterministic calculations predict. If an outbreak is sufficiently large, stochastic effects are negligible and we modify the standard ordinary differential equation (ODE) model to accommodate age-of-infection effects. We avoid the use of partial differential equations which typically appear in related models. We introduce a "memoryless" ODE system which approximates the true solutions. Finally, we analyze the transition from the stochastic to the deterministic phase.
versions: [ { "created": "Wed, 13 May 2009 19:44:05 GMT", "version": "v1" }, { "created": "Wed, 13 May 2009 20:08:32 GMT", "version": "v2" } ]
update_date: 2009-05-14
authors_parsed: [ [ "Miller", "Joel", "" ], [ "Davoudi", "Bahman", "" ], [ "Meza", "Rafael", "" ], [ "Slim", "Anja", "" ], [ "Pourbohloul", "Babak", "" ] ]
abstract: We study the spread of susceptible-infected-recovered (SIR) infectious diseases where an individual's infectiousness and probability of recovery depend on his/her "age" of infection. We focus first on early outbreak stages when stochastic effects dominate and show that epidemics tend to happen faster than deterministic calculations predict. If an outbreak is sufficiently large, stochastic effects are negligible and we modify the standard ordinary differential equation (ODE) model to accommodate age-of-infection effects. We avoid the use of partial differential equations which typically appear in related models. We introduce a "memoryless" ODE system which approximates the true solutions. Finally, we analyze the transition from the stochastic to the deterministic phase.
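For reference, the standard memoryless SIR ODE system that such age-of-infection models generalize is S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I. A forward-Euler integration recovers the classical final-size behaviour: for R0 = beta/gamma = 2, the surviving susceptible fraction approaches the root of s = exp(-R0*(1 - s)), about 0.203. This is the baseline model only, not the authors' system:

```python
beta, gamma = 2.0, 1.0          # basic reproduction number R0 = beta/gamma = 2
s, i, r = 1.0 - 1e-4, 1e-4, 0.0  # fractions: susceptible, infected, recovered
dt = 0.001
for _ in range(int(100 / dt)):   # integrate out to t = 100
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    s, i, r = s + dt * ds, i + dt * di, r + dt * dr
print(round(s, 3))  # final susceptible fraction, near 0.203
```

In this memoryless model recovery times are exponentially distributed; the paper's point is precisely that real generation-interval distributions deviate from this and change outbreak timing.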
id: 1709.06134
submitter: Yu Zheng
authors: Zhe Wang, Yu Zheng, David C. Zhu, Jian Ren and Tongtong Li
title: Discrete Dynamic Causal Modeling and Its Relationship with Directed Information
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.IT math.IT stat.AP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper explores the discrete Dynamic Causal Modeling (DDCM) and its relationship with Directed Information (DI). We prove the conditional equivalence between DDCM and DI in characterizing the causal relationship between two brain regions. The theoretical results are demonstrated using fMRI data obtained under both resting state and stimulus based state. Our numerical analysis is consistent with that reported in previous study.
versions: [ { "created": "Mon, 18 Sep 2017 19:43:11 GMT", "version": "v1" } ]
update_date: 2017-09-20
authors_parsed: [ [ "Wang", "Zhe", "" ], [ "Zheng", "Yu", "" ], [ "Zhu", "David C.", "" ], [ "Ren", "Jian", "" ], [ "Li", "Tongtong", "" ] ]
abstract: This paper explores discrete Dynamic Causal Modeling (DDCM) and its relationship with Directed Information (DI). We prove the conditional equivalence between DDCM and DI in characterizing the causal relationship between two brain regions. The theoretical results are demonstrated using fMRI data obtained under both resting and stimulus-based states. Our numerical analysis is consistent with results reported in previous studies.
id: 2006.01968
submitter: T. M. Murali
authors: Jeffrey N. Law, Kyle Akers, Nure Tasnina, Catherine M. Della Santina, Shay Deutsch, Meghana Kshirsagar, Judith Klein-Seetharaman, Mark Crovella, Padmavathy Rajagopalan, Simon Kasif, and T. M. Murali
title: Interpretable Network Propagation with Application to Expanding the Repertoire of Human Proteins that Interact with SARS-CoV-2
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.MN q-bio.PE
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
orig_abstract: Background: Network propagation has been widely used for nearly 20 years to predict gene functions and phenotypes. Despite the popularity of this approach, little attention has been paid to the question of provenance tracing in this context, e.g., determining how much any experimental observation in the input contributes to the score of every prediction. Results: We design a network propagation framework with two novel components and apply it to predict human proteins that directly or indirectly interact with SARS-CoV-2 proteins. First, we trace the provenance of each prediction to its experimentally validated sources, which in our case are human proteins experimentally determined to interact with viral proteins. Second, we design a technique that helps to reduce the manual adjustment of parameters by users. We find that for every top-ranking prediction, the highest contribution to its score arises from a direct neighbor in a human protein-protein interaction network. We further analyze these results to develop functional insights on SARS-CoV-2 that expand on known biology such as the connection between endoplasmic reticulum stress, HSPA5, and anti-clotting agents. Conclusions: We examine how our provenance tracing method can be generalized to a broad class of network-based algorithms. We provide a useful resource for the SARS-CoV-2 community that implicates many previously undocumented proteins with putative functional relationships to viral infection. This resource includes potential drugs that can be opportunistically repositioned to target these proteins. We also discuss how our overall framework can be extended to other, newly-emerging viruses.
versions: [ { "created": "Tue, 2 Jun 2020 22:47:37 GMT", "version": "v1" }, { "created": "Mon, 22 Jun 2020 15:21:19 GMT", "version": "v2" }, { "created": "Fri, 19 Nov 2021 16:07:30 GMT", "version": "v3" } ]
update_date: 2021-11-22
authors_parsed: [ [ "Law", "Jeffrey N.", "" ], [ "Akers", "Kyle", "" ], [ "Tasnina", "Nure", "" ], [ "Della Santina", "Catherine M.", "" ], [ "Deutsch", "Shay", "" ], [ "Kshirsagar", "Meghana", "" ], [ "Klein-Seetharaman", "Judith", "" ], [ "Crovella", "Mark", "" ], [ "Rajagopalan", "Padmavathy", "" ], [ "Kasif", "Simon", "" ], [ "Murali", "T. M.", "" ] ]
abstract: Background: Network propagation has been widely used for nearly 20 years to predict gene functions and phenotypes. Despite the popularity of this approach, little attention has been paid to the question of provenance tracing in this context, e.g., determining how much any experimental observation in the input contributes to the score of every prediction. Results: We design a network propagation framework with two novel components and apply it to predict human proteins that directly or indirectly interact with SARS-CoV-2 proteins. First, we trace the provenance of each prediction to its experimentally validated sources, which in our case are human proteins experimentally determined to interact with viral proteins. Second, we design a technique that helps to reduce the manual adjustment of parameters by users. We find that for every top-ranking prediction, the highest contribution to its score arises from a direct neighbor in a human protein-protein interaction network. We further analyze these results to develop functional insights on SARS-CoV-2 that expand on known biology such as the connection between endoplasmic reticulum stress, HSPA5, and anti-clotting agents. Conclusions: We examine how our provenance tracing method can be generalized to a broad class of network-based algorithms. We provide a useful resource for the SARS-CoV-2 community that implicates many previously undocumented proteins with putative functional relationships to viral infection. This resource includes potential drugs that can be opportunistically repositioned to target these proteins. We also discuss how our overall framework can be extended to other, newly-emerging viruses.
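Network propagation in this sense is commonly instantiated as a random walk with restart: scores p satisfy p = alpha*W*p + (1 - alpha)*e, where W is a degree-normalized adjacency matrix and e concentrates restart mass on experimentally validated seed nodes. A power-iteration sketch on a toy four-node path graph (illustrative only; not the SARS-CoV-2 interactome, and not the authors' specific provenance-tracing framework):

```python
# Toy undirected graph: a path 0-1-2-3, with node 0 as the experimental seed.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
neighbors = [[] for _ in range(n)]
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

alpha = 0.5                       # walk probability; 1 - alpha is the restart probability
restart = [1.0, 0.0, 0.0, 0.0]    # restart vector e, all mass on the seed node
p = restart[:]
for _ in range(200):              # power iteration to the fixed point
    q = [(1 - alpha) * restart[j] for j in range(n)]
    for u in range(n):
        share = alpha * p[u] / len(neighbors[u])  # degree-normalized walk step
        for v in neighbors[u]:
            q[v] += share
    p = q
print(p)  # scores decay with distance from the seed
```

Provenance tracing in this setting amounts to asking, for each node's score, how much arrived from each seed's restart mass; with a single seed, as here, the decomposition is trivial, which is why the multi-seed case studied in the paper is the interesting one.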
id: q-bio/0403022
submitter: Andras Lorincz
authors: Andras Lorincz
title: Intelligent encoding and economical communication in the visual stream
comments: 6 pages, 2 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.AI cs.CC nlin.AO
license: null
orig_abstract: The theory of computational complexity is used to underpin a recent model of neocortical sensory processing. We argue that encoding into reconstruction networks is appealing for communicating agents using Hebbian learning and working on hard combinatorial problems, which are easy to verify. Computational definition of the concept of intelligence is provided. Simulations illustrate the idea.
versions: [ { "created": "Tue, 16 Mar 2004 14:57:29 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Lorincz", "Andras", "" ] ]
abstract: The theory of computational complexity is used to underpin a recent model of neocortical sensory processing. We argue that encoding into reconstruction networks is appealing for communicating agents using Hebbian learning and working on hard combinatorial problems, which are easy to verify. A computational definition of the concept of intelligence is provided. Simulations illustrate the idea.
q-bio/0503018
Christoph Best
Christoph Best, Ralf Zimmer, Joannis Apostolakis (Institute for Informatics, LMU, Munich, Germany)
Probabilistic methods for predicting protein functions in protein-protein interaction networks
11 pages, 3 figures. Paper presented at the German Conference on Bioinformatics, 2004, Oct 4-6, Bielefeld, Germany
in: R. Giegerich, J. Stoye (eds.), German Conference on Bioinformatics 2004, Lecture Notes in Informatics, Ges. f. Informatik, Bonn, Germany, 2004
null
null
q-bio.MN
null
We discuss probabilistic methods for predicting protein functions from protein-protein interaction networks. Previous work based on Markov Random Fields is extended and compared to a general machine-learning theoretic approach. Using actual protein interaction networks for yeast from the MIPS database and GO-SLIM function assignments, we compare the predictions of the different probabilistic methods and of a standard support vector machine. It turns out that, with the currently available networks, the simple methods based on counting frequencies perform as well as the more sophisticated approaches.
[ { "created": "Sat, 12 Mar 2005 23:17:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Best", "Christoph", "", "Institute for\n Informatics, LMU, Munich, Germany" ], [ "Zimmer", "Ralf", "", "Institute for\n Informatics, LMU, Munich, Germany" ], [ "Apostolakis", "Joannis", "", "Institute for\n Informatics, LMU, Munich, Germany" ] ]
We discuss probabilistic methods for predicting protein functions from protein-protein interaction networks. Previous work based on Markov Random Fields is extended and compared to a general machine-learning theoretic approach. Using actual protein interaction networks for yeast from the MIPS database and GO-SLIM function assignments, we compare the predictions of the different probabilistic methods and of a standard support vector machine. It turns out that, with the currently available networks, the simple methods based on counting frequencies perform as well as the more sophisticated approaches.
1908.05996
Madison Krieger
Madison S. Krieger, Sam Sinai and Martin A. Nowak
Turbulent coherent structures and early life below the Kolmogorov scale
null
null
10.1038/s41467-020-15780-1
null
q-bio.PE physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A great number of biological organisms live in aqueous environments. Major evolutionary transitions, including the emergence of life itself, likely occurred in such environments. While the chemical aspects of the role of water in biology are well-studied, the effects of water's physical characteristics on evolutionary events, such as the control of population structure via its rich transport properties, are less clear. Evolutionary transitions, such as the emergence of the first cells and of multicellularity, require cooperation among groups of individuals. However, evolution of cooperation faces challenges in unstructured "well-mixed" populations, as parasites quickly overwhelm cooperators. Models that assume population structure to promote cooperation envision such structure to arise from spatial "lattice" models (e.g. surface bound individuals) or compartmentalization models, often realized as protocells. Here we study the effect of turbulent motions in spatial models, and propose that coherent structures, i.e. flow patterns which trap fluid and arise naturally in turbulent flows, may serve many of the properties associated with compartments--colocalization, division, and merging--and thought to play a key role in the origins of life and other evolutionary transitions. These results suggest that group selection models may be applicable with fewer physical and chemical constraints than previously thought, and apply much more widely in aqueous environments.
[ { "created": "Fri, 16 Aug 2019 14:56:42 GMT", "version": "v1" }, { "created": "Mon, 4 May 2020 13:59:14 GMT", "version": "v2" } ]
2020-07-01
[ [ "Krieger", "Madison S.", "" ], [ "Sinai", "Sam", "" ], [ "Nowak", "Martin A.", "" ] ]
A great number of biological organisms live in aqueous environments. Major evolutionary transitions, including the emergence of life itself, likely occurred in such environments. While the chemical aspects of the role of water in biology are well-studied, the effects of water's physical characteristics on evolutionary events, such as the control of population structure via its rich transport properties, are less clear. Evolutionary transitions, such as the emergence of the first cells and of multicellularity, require cooperation among groups of individuals. However, evolution of cooperation faces challenges in unstructured "well-mixed" populations, as parasites quickly overwhelm cooperators. Models that assume population structure to promote cooperation envision such structure to arise from spatial "lattice" models (e.g. surface bound individuals) or compartmentalization models, often realized as protocells. Here we study the effect of turbulent motions in spatial models, and propose that coherent structures, i.e. flow patterns which trap fluid and arise naturally in turbulent flows, may serve many of the properties associated with compartments--colocalization, division, and merging--and thought to play a key role in the origins of life and other evolutionary transitions. These results suggest that group selection models may be applicable with fewer physical and chemical constraints than previously thought, and apply much more widely in aqueous environments.
1007.0374
Sergiy Perepelytsya
S.M. Perepelytsya, S.N. Volkov
Intensities of the Raman bands in the low-frequency spectra of DNA with light and heavy counterions
12 pages, 3 figures
S.M. Perepelytsya, S.N. Volkov. Intensities of the Raman bands in the low-frequency spectra of DNA with light and heavy counterions. Biophysical Bulletin (Kharkov) 23(2), 5-19 2009
null
null
q-bio.BM
http://creativecommons.org/licenses/publicdomain/
An approach for calculating the mode intensities of DNA conformational vibrations in the Raman spectra is developed. It is based on the valence-optic theory and the model for the description of conformational vibrations of DNA with counterions. The calculations for Na- and Cs-DNA low-frequency Raman spectra show that the vibrations of the DNA backbone chains near 15 cm-1 have the greatest intensity. In the Na-DNA spectrum, in the frequency range above 40 cm-1, the modes of H-bond stretching in base pairs have the greatest intensities, while the modes of ion-phosphate vibrations have the lowest intensity. In the Cs-DNA spectra in this frequency range the mode of ion-phosphate vibrations is prominent. Its intensity is much higher than the intensities of the Na-DNA modes in this spectral range. The other modes of Cs-DNA have much lower intensities than in the case of Na-DNA. The comparison of our calculations with the experimental data shows that the developed approach provides an understanding of the sensitivity of the DNA low-frequency Raman bands to the neutralization of the double helix by light and heavy counterions.
[ { "created": "Fri, 2 Jul 2010 14:38:44 GMT", "version": "v1" } ]
2010-07-05
[ [ "Perepelytsya", "S. M.", "" ], [ "Volkov", "S. N.", "" ] ]
An approach for calculating the mode intensities of DNA conformational vibrations in the Raman spectra is developed. It is based on the valence-optic theory and the model for the description of conformational vibrations of DNA with counterions. The calculations for Na- and Cs-DNA low-frequency Raman spectra show that the vibrations of the DNA backbone chains near 15 cm-1 have the greatest intensity. In the Na-DNA spectrum, in the frequency range above 40 cm-1, the modes of H-bond stretching in base pairs have the greatest intensities, while the modes of ion-phosphate vibrations have the lowest intensity. In the Cs-DNA spectra in this frequency range the mode of ion-phosphate vibrations is prominent. Its intensity is much higher than the intensities of the Na-DNA modes in this spectral range. The other modes of Cs-DNA have much lower intensities than in the case of Na-DNA. The comparison of our calculations with the experimental data shows that the developed approach provides an understanding of the sensitivity of the DNA low-frequency Raman bands to the neutralization of the double helix by light and heavy counterions.
2210.00098
Xiaowen Feng
Xiaowen Feng, Heng Li
Towards complete representation of bacterial contents in metagenomic samples
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Background: In the metagenome assembly of a microbiome community, we may think abundant species would be easier to assemble due to their deeper coverage. However, this conjecture is rarely tested. We often do not know how many abundant species we are missing and do not have an approach to recover these species. Results: Here we proposed k-mer based and 16S RNA based methods to measure the completeness of metagenome assembly. We showed that even with PacBio High-Fidelity (HiFi) reads, abundant species are often not assembled as high strain diversity may lead to fragmented contigs. We developed a novel algorithm to recover abundant metagenome-assembled genomes (MAGs) by identifying circular assembly subgraphs. Our algorithm is reference-free and complementary to standard metagenome binning. Evaluated on 14 real datasets, it rescued many abundant species that would be missing with existing methods. Conclusions: Our work stresses the importance of metagenome completeness which is often overlooked before. Our algorithm generates more circular MAGs and moves a step closer to the complete representation of microbiome communities.
[ { "created": "Fri, 30 Sep 2022 21:20:57 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2022 05:11:04 GMT", "version": "v2" } ]
2022-11-23
[ [ "Feng", "Xiaowen", "" ], [ "Li", "Heng", "" ] ]
Background: In the metagenome assembly of a microbiome community, we may think abundant species would be easier to assemble due to their deeper coverage. However, this conjecture is rarely tested. We often do not know how many abundant species we are missing and do not have an approach to recover these species. Results: Here we proposed k-mer based and 16S RNA based methods to measure the completeness of metagenome assembly. We showed that even with PacBio High-Fidelity (HiFi) reads, abundant species are often not assembled as high strain diversity may lead to fragmented contigs. We developed a novel algorithm to recover abundant metagenome-assembled genomes (MAGs) by identifying circular assembly subgraphs. Our algorithm is reference-free and complementary to standard metagenome binning. Evaluated on 14 real datasets, it rescued many abundant species that would be missing with existing methods. Conclusions: Our work stresses the importance of metagenome completeness which is often overlooked before. Our algorithm generates more circular MAGs and moves a step closer to the complete representation of microbiome communities.
2304.07137
David J Jordan
David J. Jordan, Eric A. Miska
Canalisation and plasticity on the developmental manifold of Caenorhabditis elegans
null
null
10.15252/msb.202311835
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
How do the same mechanisms that faithfully regenerate complex developmental programs in spite of environmental and genetic perturbations also permit responsiveness to environmental signals, adaptation, and genetic evolution? Using the nematode Caenorhabditis elegans as a model, we explore the phenotypic space of growth and development in various genetic and environmental contexts. Our data are growth curves and developmental parameters obtained by automated microscopy. Using these, we show that among the traits that make up the developmental space, correlations within a particular context are predictive of correlations among different contexts. Further, we find that the developmental variability of this animal can be captured on a relatively low-dimensional phenotypic manifold and that on this manifold, genetic and environmental contributions to plasticity can be deconvolved independently. Our perspective offers a new way of understanding the relationship between robustness and flexibility in complex systems, suggesting that projection and concentration of dimension can naturally align these forces as complementary rather than competing.
[ { "created": "Fri, 14 Apr 2023 14:03:06 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2023 08:35:23 GMT", "version": "v2" }, { "created": "Tue, 27 Jun 2023 10:03:51 GMT", "version": "v3" } ]
2023-10-20
[ [ "Jordan", "David J.", "" ], [ "Miska", "Eric A.", "" ] ]
How do the same mechanisms that faithfully regenerate complex developmental programs in spite of environmental and genetic perturbations also permit responsiveness to environmental signals, adaptation, and genetic evolution? Using the nematode Caenorhabditis elegans as a model, we explore the phenotypic space of growth and development in various genetic and environmental contexts. Our data are growth curves and developmental parameters obtained by automated microscopy. Using these, we show that among the traits that make up the developmental space, correlations within a particular context are predictive of correlations among different contexts. Further, we find that the developmental variability of this animal can be captured on a relatively low-dimensional phenotypic manifold and that on this manifold, genetic and environmental contributions to plasticity can be deconvolved independently. Our perspective offers a new way of understanding the relationship between robustness and flexibility in complex systems, suggesting that projection and concentration of dimension can naturally align these forces as complementary rather than competing.
0903.3887
David A. Kessler
Yosef E. Maruvka, Nadav M. Shnerb and David A. Kessler
Universal features of surname distribution in a subsample of a growing population
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the problem of family size statistics (the number of individuals carrying the same surname, or the same DNA sequence) in a given size subsample of an exponentially growing population. We approach the problem from two directions. In the first, we construct the family size distribution for the subsample from the stable distribution for the full population. This latter distribution is calculated for an arbitrary growth process in the limit of slow growth, and is seen to depend only on the average and variance of the number of children per individual, as well as the mutation rate. The distribution for the subsample is shifted left with respect to the original distribution, tending to eliminate the part of the original distribution reflecting the small families, and thus increasing the mean family size. From the subsample distribution, various bulk quantities such as the average family size and the percentage of singleton families are calculated. In the second approach, we study the past time development of these bulk quantities, deriving the statistics of the genealogical tree of the subsample. This approach reproduces that of the first when the current statistics of the subsample is considered. The surname distribution from the 2000 U.S. Census is examined in light of these findings, and found to misrepresent the population growth rate by a factor of 1000.
[ { "created": "Mon, 23 Mar 2009 15:59:17 GMT", "version": "v1" }, { "created": "Mon, 23 Mar 2009 21:10:06 GMT", "version": "v2" } ]
2009-03-24
[ [ "Maruvka", "Yosef E.", "" ], [ "Shnerb", "Nadav M.", "" ], [ "Kessler", "David A.", "" ] ]
We examine the problem of family size statistics (the number of individuals carrying the same surname, or the same DNA sequence) in a given size subsample of an exponentially growing population. We approach the problem from two directions. In the first, we construct the family size distribution for the subsample from the stable distribution for the full population. This latter distribution is calculated for an arbitrary growth process in the limit of slow growth, and is seen to depend only on the average and variance of the number of children per individual, as well as the mutation rate. The distribution for the subsample is shifted left with respect to the original distribution, tending to eliminate the part of the original distribution reflecting the small families, and thus increasing the mean family size. From the subsample distribution, various bulk quantities such as the average family size and the percentage of singleton families are calculated. In the second approach, we study the past time development of these bulk quantities, deriving the statistics of the genealogical tree of the subsample. This approach reproduces that of the first when the current statistics of the subsample is considered. The surname distribution from the 2000 U.S. Census is examined in light of these findings, and found to misrepresent the population growth rate by a factor of 1000.
1004.5537
Kavita Jain
Sarada Seetharaman and Kavita Jain
Evolutionary dynamics on strongly correlated fitness landscapes
null
Phys. Rev. E 82 , 031109 (2010)
10.1103/PhysRevE.82.031109
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the evolutionary dynamics of a maladapted population of self-replicating sequences on strongly correlated fitness landscapes. Each sequence is assumed to be composed of blocks of equal length and its fitness is given by a linear combination of four independent block fitnesses. A mutation affects the fitness contribution of a single block leaving the other blocks unchanged and hence inducing correlations between the parent and mutant fitness. On such strongly correlated fitness landscapes, we calculate the dynamical properties like the number of jumps in the most populated sequence and the temporal distribution of the last jump which is shown to exhibit an inverse square dependence as in evolution on uncorrelated fitness landscapes. We also obtain exact results for the distribution of records and extremes for correlated random variables.
[ { "created": "Fri, 30 Apr 2010 14:17:45 GMT", "version": "v1" }, { "created": "Wed, 8 Sep 2010 07:03:52 GMT", "version": "v2" } ]
2015-05-18
[ [ "Seetharaman", "Sarada", "" ], [ "Jain", "Kavita", "" ] ]
We study the evolutionary dynamics of a maladapted population of self-replicating sequences on strongly correlated fitness landscapes. Each sequence is assumed to be composed of blocks of equal length and its fitness is given by a linear combination of four independent block fitnesses. A mutation affects the fitness contribution of a single block leaving the other blocks unchanged and hence inducing correlations between the parent and mutant fitness. On such strongly correlated fitness landscapes, we calculate the dynamical properties like the number of jumps in the most populated sequence and the temporal distribution of the last jump which is shown to exhibit an inverse square dependence as in evolution on uncorrelated fitness landscapes. We also obtain exact results for the distribution of records and extremes for correlated random variables.
2108.08077
Kahini Wadhawan
Kahini Wadhawan and Payel Das and Barbara A. Han and Ilya R. Fischhoff and Adrian C. Castellanos and Arvind Varsani and Kush R. Varshney
Towards Interpreting Zoonotic Potential of Betacoronavirus Sequences With Attention
11 pages, 8 figures, 1 table, accepted at ICLR 2021 workshop Machine learning for preventing and combating pandemics
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Current methods for viral discovery target evolutionarily conserved proteins that accurately identify virus families but remain unable to distinguish the zoonotic potential of newly discovered viruses. Here, we apply an attention-enhanced long-short-term memory (LSTM) deep neural net classifier to a highly conserved viral protein target to predict zoonotic potential across betacoronaviruses. The classifier performs with a 94% accuracy. Analysis and visualization of attention at the sequence and structure-level features indicate possible association between important protein-protein interactions governing viral replication in zoonotic betacoronaviruses and zoonotic transmission.
[ { "created": "Wed, 18 Aug 2021 10:11:11 GMT", "version": "v1" } ]
2021-08-19
[ [ "Wadhawan", "Kahini", "" ], [ "Das", "Payel", "" ], [ "Han", "Barbara A.", "" ], [ "Fischhoff", "Ilya R.", "" ], [ "Castellanos", "Adrian C.", "" ], [ "Varsani", "Arvind", "" ], [ "Varshney", "Kush R.", "" ] ]
Current methods for viral discovery target evolutionarily conserved proteins that accurately identify virus families but remain unable to distinguish the zoonotic potential of newly discovered viruses. Here, we apply an attention-enhanced long-short-term memory (LSTM) deep neural net classifier to a highly conserved viral protein target to predict zoonotic potential across betacoronaviruses. The classifier performs with a 94% accuracy. Analysis and visualization of attention at the sequence and structure-level features indicate possible association between important protein-protein interactions governing viral replication in zoonotic betacoronaviruses and zoonotic transmission.
2209.06860
Jan Lebert
Jan Lebert, Meenakshi Mittal, Jan Christoph
Reconstruction of Three-dimensional Scroll Waves in Excitable Media from Two-Dimensional Observations using Deep Neural Networks
null
Phys. Rev. E 107, 014221 (2023)
10.1103/PhysRevE.107.014221
null
q-bio.TO cs.CV physics.med-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scroll wave chaos is thought to underlie life-threatening ventricular fibrillation. However, currently there is no direct way to measure action potential wave patterns transmurally throughout the thick ventricular heart muscle. Consequently, direct observations of three-dimensional electrical scroll waves remain elusive. Here, we study whether it is possible to reconstruct simulated scroll waves and scroll wave chaos using deep learning. We trained encoding-decoding convolutional neural networks to predict three-dimensional scroll wave dynamics inside bulk-shaped excitable media from two-dimensional observations of the wave dynamics on the bulk's surface. We tested whether observations from one or two opposing surfaces would be sufficient, and whether transparency or measurements of surface deformations enhance the reconstruction. Further, we evaluated the approach's robustness against noise and tested the feasibility of predicting the bulk's thickness. We distinguished isotropic and anisotropic, as well as opaque and transparent excitable media as models for cardiac tissue and the Belousov-Zhabotinsky chemical reaction, respectively. While we demonstrate that it is possible to reconstruct three-dimensional scroll wave dynamics, we also show that it is challenging to reconstruct complicated scroll wave chaos and that prediction outcomes depend on various factors such as transparency, anisotropy and ultimately the thickness of the medium compared to the size of the scroll waves. In particular, we found that anisotropy provides crucial information for neural networks to decode depth, which facilitates the reconstructions. In the future, deep neural networks could be used to visualize intramural action potential wave patterns from epi- or endocardial measurements.
[ { "created": "Fri, 9 Sep 2022 04:45:29 GMT", "version": "v1" }, { "created": "Wed, 23 Nov 2022 07:20:45 GMT", "version": "v2" } ]
2023-02-08
[ [ "Lebert", "Jan", "" ], [ "Mittal", "Meenakshi", "" ], [ "Christoph", "Jan", "" ] ]
Scroll wave chaos is thought to underlie life-threatening ventricular fibrillation. However, currently there is no direct way to measure action potential wave patterns transmurally throughout the thick ventricular heart muscle. Consequently, direct observations of three-dimensional electrical scroll waves remain elusive. Here, we study whether it is possible to reconstruct simulated scroll waves and scroll wave chaos using deep learning. We trained encoding-decoding convolutional neural networks to predict three-dimensional scroll wave dynamics inside bulk-shaped excitable media from two-dimensional observations of the wave dynamics on the bulk's surface. We tested whether observations from one or two opposing surfaces would be sufficient, and whether transparency or measurements of surface deformations enhance the reconstruction. Further, we evaluated the approach's robustness against noise and tested the feasibility of predicting the bulk's thickness. We distinguished isotropic and anisotropic, as well as opaque and transparent excitable media as models for cardiac tissue and the Belousov-Zhabotinsky chemical reaction, respectively. While we demonstrate that it is possible to reconstruct three-dimensional scroll wave dynamics, we also show that it is challenging to reconstruct complicated scroll wave chaos and that prediction outcomes depend on various factors such as transparency, anisotropy and ultimately the thickness of the medium compared to the size of the scroll waves. In particular, we found that anisotropy provides crucial information for neural networks to decode depth, which facilitates the reconstructions. In the future, deep neural networks could be used to visualize intramural action potential wave patterns from epi- or endocardial measurements.
1709.04654
Randall O'Reilly
Randall C. O'Reilly, Dean R. Wyatte, and John Rohrlich
Deep Predictive Learning: A Comprehensive Model of Three Visual Streams
64 pages, 24 figures, 291 references. Submitted for publication
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How does the neocortex learn and develop the foundations of all our high-level cognitive abilities? We present a comprehensive framework spanning biological, computational, and cognitive levels, with a clear theoretical continuity between levels, providing a coherent answer directly supported by extensive data at each level. Learning is based on making predictions about what the senses will report at 100 msec (alpha frequency) intervals, and adapting synaptic weights to improve prediction accuracy. The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep-layer 6 corticothalamic inputs from multiple brain areas and levels of abstraction. The sparse driving inputs from layer 5 intrinsic bursting neurons provide the target signal, and the temporal difference between it and the prediction reverberates throughout the cortex, driving synaptic changes that approximate error backpropagation, using only local activation signals in equations derived directly from a detailed biophysical model. In vision, predictive learning requires a carefully-organized developmental progression and anatomical organization of three pathways (What, Where, and What * Where), according to two central principles: top-down input from compact, high-level, abstract representations is essential for accurate prediction of low-level sensory inputs; and the collective, low-level prediction error must be progressively and opportunistically partitioned to enable extraction of separable factors that drive the learning of further high-level abstractions. Our model self-organized systematic invariant object representations of 100 different objects from simple movies, accounts for a wide range of data, and makes many testable predictions.
[ { "created": "Thu, 14 Sep 2017 08:02:37 GMT", "version": "v1" } ]
2017-09-15
[ [ "O'Reilly", "Randall C.", "" ], [ "Wyatte", "Dean R.", "" ], [ "Rohrlich", "John", "" ] ]
How does the neocortex learn and develop the foundations of all our high-level cognitive abilities? We present a comprehensive framework spanning biological, computational, and cognitive levels, with a clear theoretical continuity between levels, providing a coherent answer directly supported by extensive data at each level. Learning is based on making predictions about what the senses will report at 100 msec (alpha frequency) intervals, and adapting synaptic weights to improve prediction accuracy. The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep-layer 6 corticothalamic inputs from multiple brain areas and levels of abstraction. The sparse driving inputs from layer 5 intrinsic bursting neurons provide the target signal, and the temporal difference between it and the prediction reverberates throughout the cortex, driving synaptic changes that approximate error backpropagation, using only local activation signals in equations derived directly from a detailed biophysical model. In vision, predictive learning requires a carefully-organized developmental progression and anatomical organization of three pathways (What, Where, and What * Where), according to two central principles: top-down input from compact, high-level, abstract representations is essential for accurate prediction of low-level sensory inputs; and the collective, low-level prediction error must be progressively and opportunistically partitioned to enable extraction of separable factors that drive the learning of further high-level abstractions. Our model self-organized systematic invariant object representations of 100 different objects from simple movies, accounts for a wide range of data, and makes many testable predictions.
1102.0166
Mathieu Galtier
Mathieu N. Galtier and Olivier D. Faugeras and Paul C. Bressloff
Hebbian learning of recurrent connections: a geometrical perspective
null
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a distance matrix in a high-dimensional space, with the neurons labelled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the non-convolutional part of the connectivity which is minimized in the process. Finally, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps. We motivate this using an energy efficiency argument based on wire length minimization. Finally, we show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex. In addition, we establish that the non-convolutional (or long-range) connectivity is patchy, and is co-aligned in the case of orientation learning.
[ { "created": "Tue, 1 Feb 2011 14:32:38 GMT", "version": "v1" } ]
2011-02-02
[ [ "Galtier", "Mathieu N.", "" ], [ "Faugeras", "Olivier D.", "" ], [ "Bressloff", "Paul C.", "" ] ]
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a distance matrix in a high-dimensional space, with the neurons labelled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the non-convolutional part of the connectivity which is minimized in the process. Finally, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps. We motivate this using an energy efficiency argument based on wire length minimization. Finally, we show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex. In addition, we establish that the non-convolutional (or long-range) connectivity is patchy, and is co-aligned in the case of orientation learning.
q-bio/0502012
Mark Bates
Mark Bates, Timothy R. Blosser, Xiaowei Zhuang
Short-range spectroscopic ruler based on a single-molecule optical switch
Article contains 4 pages and 4 figures. Accepted for publication in Physical Review Letters. Supplementary material and movie file will be available through the PRL web site at the time of publication
null
10.1103/PhysRevLett.94.108101
null
q-bio.QM physics.bio-ph physics.chem-ph q-bio.BM
null
We demonstrate a novel all-optical switch consisting of two molecules: a primary fluorophore that can be switched between a fluorescent and a dark state by light of different wavelengths, and a secondary chromophore that facilitates switching. The interaction between the two molecules exhibits a distance dependence much steeper than that of Förster resonance energy transfer. This enables the switch to act as a ruler with the capability to probe distances difficult to access by other spectroscopic methods, thus presenting a new tool for the study of biomolecules at the single-molecule level.
[ { "created": "Mon, 14 Feb 2005 09:24:00 GMT", "version": "v1" } ]
2009-11-11
[ [ "Bates", "Mark", "" ], [ "Blosser", "Timothy R.", "" ], [ "Zhuang", "Xiaowei", "" ] ]
We demonstrate a novel all-optical switch consisting of two molecules: a primary fluorophore that can be switched between a fluorescent and a dark state by light of different wavelengths, and a secondary chromophore that facilitates switching. The interaction between the two molecules exhibits a distance dependence much steeper than that of Förster resonance energy transfer. This enables the switch to act as a ruler with the capability to probe distances difficult to access by other spectroscopic methods, thus presenting a new tool for the study of biomolecules at the single-molecule level.
1808.07021
Silvia Vitali
Silvia Vitali, Francesco Mainardi and Gastone Castellani
Emergence of Fractional Kinetics in Spiny Dendrites
8 pages
Fractal and Fractional MDPI 2018, 2, 6
10.3390/fractalfract2010006
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fractional extensions of the cable equation have been proposed in the literature to describe the transmembrane potential in spiny dendrites. The anomalous behavior has been related to the geometrical properties of the system, in particular the density of spines, by experiments, computer simulations, and comb-like models. The same PDE can be related to more than one stochastic process leading to anomalous diffusion behavior. The time-fractional diffusion equation can be associated with a continuous time random walk (CTRW) with power-law waiting time probability, or with a special case of the Erd\'elyi-Kober fractional diffusion described by the ggBm. In this work, we show that the time-fractional generalization of the cable equation arises naturally in the CTRW by considering a superposition of Markovian processes and in a {\it ggBm-like} construction of the random variable.
[ { "created": "Mon, 13 Aug 2018 14:13:27 GMT", "version": "v1" } ]
2018-08-22
[ [ "Vitali", "Silvia", "" ], [ "Mainardi", "Francesco", "" ], [ "Castellani", "Gastone", "" ] ]
Fractional extensions of the cable equation have been proposed in the literature to describe the transmembrane potential in spiny dendrites. The anomalous behavior has been related to the geometrical properties of the system, in particular the density of spines, by experiments, computer simulations, and comb-like models. The same PDE can be related to more than one stochastic process leading to anomalous diffusion behavior. The time-fractional diffusion equation can be associated with a continuous time random walk (CTRW) with power-law waiting time probability, or with a special case of the Erd\'elyi-Kober fractional diffusion described by the ggBm. In this work, we show that the time-fractional generalization of the cable equation arises naturally in the CTRW by considering a superposition of Markovian processes and in a {\it ggBm-like} construction of the random variable.
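The CTRW with power-law waiting times mentioned in the abstract can be illustrated with a minimal simulation. The exponent beta, the walker count, and the observation times below are illustrative choices, not values from the paper; the ensemble mean squared displacement should grow sublinearly, roughly as t**beta:

```python
import numpy as np

# Continuous time random walk (CTRW) with power-law waiting times,
# P(tau > t) ~ t^(-beta) with 0 < beta < 1. The exponent, walker count,
# and observation times are illustrative choices, not paper values.
rng = np.random.default_rng(0)
beta = 0.6
T_obs = np.array([1e2, 1e4])          # two observation times, two decades apart
n_walkers = 1000
positions = np.zeros((n_walkers, len(T_obs)))

for i in range(n_walkers):
    t, x, k = 0.0, 0.0, 0
    while k < len(T_obs):
        tau = rng.random() ** (-1.0 / beta)   # Pareto-tailed waiting time
        # the walker sits at x throughout [t, t + tau): record any
        # observation times falling inside this waiting period
        while k < len(T_obs) and t + tau > T_obs[k]:
            positions[i, k] = x
            k += 1
        t += tau
        x += 1.0 if rng.random() < 0.5 else -1.0   # unbiased unit jump

# Ensemble mean squared displacement; for infinite-mean waiting times
# it grows sublinearly, with a log-log slope close to beta.
msd = (positions ** 2).mean(axis=0)
slope = np.log(msd[1] / msd[0]) / np.log(T_obs[1] / T_obs[0])
print(round(slope, 2))
```

The fitted slope should sit well below 1 (the normal-diffusion value), consistent with subdiffusive spread.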
2209.09023
Zifeng Wang
Zifeng Wang, Chufan Gao, Lucas M. Glass, Jimeng Sun
Artificial Intelligence for In Silico Clinical Trials: A Review
null
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
A clinical trial is an essential step in drug development, which is often costly and time-consuming. In silico trials are clinical trials conducted digitally through simulation and modeling as an alternative to traditional clinical trials. AI-enabled in silico trials can increase the case group size by creating virtual cohorts as controls. In addition, it also enables automation and optimization of trial design and predicts the trial success rate. This article systematically reviews papers under three main topics: clinical simulation, individualized predictive modeling, and computer-aided trial design. We focus on how machine learning (ML) may be applied in these applications. In particular, we present the machine learning problem formulation and available data sources for each task. We end by discussing the challenges and opportunities of AI for in silico trials in real-world applications.
[ { "created": "Fri, 16 Sep 2022 14:59:31 GMT", "version": "v1" } ]
2022-09-20
[ [ "Wang", "Zifeng", "" ], [ "Gao", "Chufan", "" ], [ "Glass", "Lucas M.", "" ], [ "Sun", "Jimeng", "" ] ]
A clinical trial is an essential step in drug development, which is often costly and time-consuming. In silico trials are clinical trials conducted digitally through simulation and modeling as an alternative to traditional clinical trials. AI-enabled in silico trials can increase the case group size by creating virtual cohorts as controls. In addition, it also enables automation and optimization of trial design and predicts the trial success rate. This article systematically reviews papers under three main topics: clinical simulation, individualized predictive modeling, and computer-aided trial design. We focus on how machine learning (ML) may be applied in these applications. In particular, we present the machine learning problem formulation and available data sources for each task. We end by discussing the challenges and opportunities of AI for in silico trials in real-world applications.
2005.09431
Partha Mondal
Partha Pratim Mondal
Probabilistic Optically-Selective Single-molecule Imaging Based Localization Encoded (POSSIBLE) Microscopy for Ultra-superresolution Imaging
9 pages
null
10.1371/journal.pone.0242452
null
q-bio.QM eess.IV physics.app-ph physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To resolve molecular clusters it is crucial to access vital information (such as molecule density and cluster size) that is key to understanding disease progression and the underlying mechanism. Traditional single-molecule localization microscopy (SMLM) techniques use molecules of variable sizes (as determined by their localization precisions (LPs)) to reconstruct a super-resolution map. This results in an image with overlapping and superimposed PSFs (due to a wide size-spectrum of single molecules) that degrade image resolution. Ideally it should be possible to identify the brightest molecules (also termed fortunate molecules) to reconstruct an ultra-superresolution map, provided sufficient statistics are available from the recorded data. POSSIBLE microscopy explores this possibility by introducing a narrow probability size-distribution of single molecules (a narrow size-spectrum about a predefined mean size). The reconstruction begins by presetting the mean and variance of the narrow distribution function (a Gaussian function). Subsequently, the dataset is processed and single-molecule filtering is carried out by the Gaussian distribution function to filter out unfortunate molecules. The fortunate molecules thus retained are then mapped to reconstruct the ultra-superresolution map. In principle, the POSSIBLE microscopy technique is capable of infinite resolution (resolution of the order of the actual single-molecule size) provided enough fortunate molecules are experimentally detected. In short, bright molecules (with large emissivity) hold the key. Here, we demonstrate the POSSIBLE microscopy technique and reconstruct single-molecule images with average PSF sizes of 15 nm, 30 nm and 50 nm. Results show better-resolved Dendra2-HA clusters with large cluster-density in transfected NIH3T3 fibroblast cells as compared to traditional SMLM techniques.
[ { "created": "Sat, 16 May 2020 09:53:19 GMT", "version": "v1" } ]
2021-01-27
[ [ "Mondal", "Partha Pratim", "" ] ]
To resolve molecular clusters it is crucial to access vital information (such as molecule density and cluster size) that is key to understanding disease progression and the underlying mechanism. Traditional single-molecule localization microscopy (SMLM) techniques use molecules of variable sizes (as determined by their localization precisions (LPs)) to reconstruct a super-resolution map. This results in an image with overlapping and superimposed PSFs (due to a wide size-spectrum of single molecules) that degrade image resolution. Ideally it should be possible to identify the brightest molecules (also termed fortunate molecules) to reconstruct an ultra-superresolution map, provided sufficient statistics are available from the recorded data. POSSIBLE microscopy explores this possibility by introducing a narrow probability size-distribution of single molecules (a narrow size-spectrum about a predefined mean size). The reconstruction begins by presetting the mean and variance of the narrow distribution function (a Gaussian function). Subsequently, the dataset is processed and single-molecule filtering is carried out by the Gaussian distribution function to filter out unfortunate molecules. The fortunate molecules thus retained are then mapped to reconstruct the ultra-superresolution map. In principle, the POSSIBLE microscopy technique is capable of infinite resolution (resolution of the order of the actual single-molecule size) provided enough fortunate molecules are experimentally detected. In short, bright molecules (with large emissivity) hold the key. Here, we demonstrate the POSSIBLE microscopy technique and reconstruct single-molecule images with average PSF sizes of 15 nm, 30 nm and 50 nm. Results show better-resolved Dendra2-HA clusters with large cluster-density in transfected NIH3T3 fibroblast cells as compared to traditional SMLM techniques.
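The Gaussian filtering step described above — retaining "fortunate" molecules whose PSF size falls near a preset mean and discarding the rest — can be sketched as follows. The localization-precision values and the (mean, sigma) choices are hypothetical, not the paper's data or code:

```python
import numpy as np

# Sketch of Gaussian single-molecule filtering: keep each detected
# molecule with probability given by a narrow Gaussian weight centered
# on a preset mean PSF size, discarding "unfortunate" molecules far
# from it. All numbers below are hypothetical.
rng = np.random.default_rng(1)
sizes = rng.uniform(5.0, 60.0, size=10_000)   # localization precisions (nm)

mean, sigma = 15.0, 3.0                       # preset narrow target distribution
weight = np.exp(-0.5 * ((sizes - mean) / sigma) ** 2)
kept = sizes[rng.random(sizes.size) < weight] # retained "fortunate" molecules

print(kept.size, round(kept.mean(), 1))
```

The retained subset is much smaller than the raw detection list, with sizes tightly concentrated around the preset mean — the trade-off between resolution and statistics that the abstract describes.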
1610.04835
Debashish Chowdhury
Dipanwita Ghanti (IIT Kanpur, India), Raymond W. Friddle (Sandia National Lab, Livermore, USA), Debashish Chowdhury (IIT Kanpur, India)
Strength and stability of active ligand-receptor bonds: a microtubule attached to a wall by molecular motor tethers
Thoroughly revised model, data, figures, text, bibliography and authorship
Phys. Rev. E 98, 042415 (2018)
10.1103/PhysRevE.98.042415
null
q-bio.SC physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a stochastic kinetic model of a pre-formed attachment of a microtubule (MT) to a cell cortex, in which the MT is tethered to the cell by a group of active motor proteins. Such an attachment is a unique case of ligand-receptor bonds: the MT ligand changes its length (and thus its binding sites) over time through polymerization-depolymerization kinetics, while multiple motor receptors tend to walk actively along the MT length. These processes, combined with force-mediated unbinding of the motors, result in an elaborate behavior of the MT connection to the cell cortex. We present results for the strength and lifetime of the system under the well-established force-clamp and force-ramp protocols, in which external tension is applied to the MT. The simulation results reveal that the MT-cell attachment behaves as a catch-bond or slip-bond depending on system parameters. We provide analytical approximations of the lifetime and discuss implications of our results for in vitro experiments.
[ { "created": "Sun, 16 Oct 2016 09:37:47 GMT", "version": "v1" }, { "created": "Sun, 20 Aug 2017 07:09:44 GMT", "version": "v2" } ]
2018-10-31
[ [ "Ghanti", "Dipanwita", "", "IIT Kanpur, India" ], [ "Friddle", "Raymond W.", "", "Sandia\n National Lab, Livermore, USA" ], [ "Chowdhury", "Debashish", "", "IIT Kanpur, India" ] ]
We develop a stochastic kinetic model of a pre-formed attachment of a microtubule (MT) to a cell cortex, in which the MT is tethered to the cell by a group of active motor proteins. Such an attachment is a unique case of ligand-receptor bonds: the MT ligand changes its length (and thus its binding sites) over time through polymerization-depolymerization kinetics, while multiple motor receptors tend to walk actively along the MT length. These processes, combined with force-mediated unbinding of the motors, result in an elaborate behavior of the MT connection to the cell cortex. We present results for the strength and lifetime of the system under the well-established force-clamp and force-ramp protocols, in which external tension is applied to the MT. The simulation results reveal that the MT-cell attachment behaves as a catch-bond or slip-bond depending on system parameters. We provide analytical approximations of the lifetime and discuss implications of our results for in vitro experiments.
2005.03283
Alexandre Bonvin
Francesco Ambrosetti, Zuzana Jandova and Alexandre M.J.J. Bonvin
A protocol for information-driven antibody-antigen modelling with the HADDOCK2.4 webserver
22 pages, 4 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, the therapeutic use of antibodies has grown enormously, owing to their inherent properties and to technological advances in the methods used to study and characterize them. Effective design and engineering of antibodies for therapeutic purposes depend heavily on knowledge of the structural principles that regulate antibody-antigen interactions. Several experimental techniques such as X-ray crystallography, cryo-electron microscopy, NMR or mutagenesis analysis can be applied, but these are usually expensive and time-consuming. Therefore computational approaches like molecular docking may offer a valuable alternative for the characterisation of antibody-antigen complexes. Here we describe a protocol for the prediction of the 3D structure of antibody-antigen complexes using the integrative modelling platform HADDOCK. The protocol consists of: 1) the identification of the antibody residues belonging to the hypervariable loops, which are known to be crucial for binding and can be used to guide the docking; and 2) the detailed steps to perform docking with the HADDOCK 2.4 webserver, following different strategies depending on the availability of information about epitope residues.
[ { "created": "Thu, 7 May 2020 07:02:11 GMT", "version": "v1" } ]
2020-05-08
[ [ "Ambrosetti", "Francesco", "" ], [ "Jandova", "Zuzana", "" ], [ "Bonvin", "Alexandre M. J. J.", "" ] ]
In recent years, the therapeutic use of antibodies has grown enormously, owing to their inherent properties and to technological advances in the methods used to study and characterize them. Effective design and engineering of antibodies for therapeutic purposes depend heavily on knowledge of the structural principles that regulate antibody-antigen interactions. Several experimental techniques such as X-ray crystallography, cryo-electron microscopy, NMR or mutagenesis analysis can be applied, but these are usually expensive and time-consuming. Therefore computational approaches like molecular docking may offer a valuable alternative for the characterisation of antibody-antigen complexes. Here we describe a protocol for the prediction of the 3D structure of antibody-antigen complexes using the integrative modelling platform HADDOCK. The protocol consists of: 1) the identification of the antibody residues belonging to the hypervariable loops, which are known to be crucial for binding and can be used to guide the docking; and 2) the detailed steps to perform docking with the HADDOCK 2.4 webserver, following different strategies depending on the availability of information about epitope residues.
1003.1411
Alexei Ryabov B
Alexei B. Ryabov, Lars Rudolf and Bernd Blasius
Vertical distribution and composition of phytoplankton under the influence of an upper mixed layer
20 pages, 8 figures
J. Theor. Biol. 263 (2010) 120--133
10.1016/j.jtbi.2009.10.034
null
q-bio.PE math-ph math.MP nlin.AO nlin.PS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The vertical distribution of phytoplankton is of fundamental importance for the dynamics and structure of aquatic communities. Here, using an advection-reaction-diffusion model, we investigate the distribution and competition of phytoplankton species in a water column, in which inverse resource gradients of light and a nutrient can limit growth of the biomass. This problem poses a challenge for ecologists, as the location of a production layer is not fixed, but rather depends on many internal parameters and environmental factors. In particular, we study the influence of an upper mixed layer (UML) in this system and show that it leads to a variety of dynamic effects: (i) Our model predicts alternative density profiles with a maximum of biomass either within or below the UML, whereby the system may be bistable or the relaxation from an unstable state may require a long-lasting transition. (ii) Reduced mixing in the deep layer can induce oscillations of the biomass; we show that a UML can sustain these oscillations even if the diffusivity is less than the critical mixing for a sinking phytoplankton population. (iii) A UML can strongly modify the outcome of competition between different phytoplankton species, yielding bistability both in the spatial distribution and in the species composition. (iv) A light-limited species can obtain a competitive advantage if the diffusivity in the deep layers is reduced below a critical value. This yields a subtle competitive exclusion effect, where the oscillatory states in the deep layers are displaced by steady solutions in the UML. Finally, we present a novel graphical approach for deducing the competition outcome and for the analysis of the role of a UML in aquatic systems.
[ { "created": "Sat, 6 Mar 2010 18:15:38 GMT", "version": "v1" } ]
2010-03-09
[ [ "Ryabov", "Alexei B.", "" ], [ "Rudolf", "Lars", "" ], [ "Blasius", "Bernd", "" ] ]
The vertical distribution of phytoplankton is of fundamental importance for the dynamics and structure of aquatic communities. Here, using an advection-reaction-diffusion model, we investigate the distribution and competition of phytoplankton species in a water column, in which inverse resource gradients of light and a nutrient can limit growth of the biomass. This problem poses a challenge for ecologists, as the location of a production layer is not fixed, but rather depends on many internal parameters and environmental factors. In particular, we study the influence of an upper mixed layer (UML) in this system and show that it leads to a variety of dynamic effects: (i) Our model predicts alternative density profiles with a maximum of biomass either within or below the UML, whereby the system may be bistable or the relaxation from an unstable state may require a long-lasting transition. (ii) Reduced mixing in the deep layer can induce oscillations of the biomass; we show that a UML can sustain these oscillations even if the diffusivity is less than the critical mixing for a sinking phytoplankton population. (iii) A UML can strongly modify the outcome of competition between different phytoplankton species, yielding bistability both in the spatial distribution and in the species composition. (iv) A light-limited species can obtain a competitive advantage if the diffusivity in the deep layers is reduced below a critical value. This yields a subtle competitive exclusion effect, where the oscillatory states in the deep layers are displaced by steady solutions in the UML. Finally, we present a novel graphical approach for deducing the competition outcome and for the analysis of the role of a UML in aquatic systems.
1710.08807
Nadav M. Shnerb
Matan Danino and Nadav M. Shnerb
Fixation and absorption in a fluctuating environment
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental problem in the fields of population genetics, evolution, and community ecology is the fate of a single mutant, or invader, introduced in a finite population of wild types. For a fixed-size community of $N$ individuals, with Markovian, zero-sum dynamics driven by stochastic birth-death events, the mutant population eventually reaches either fixation or extinction. The classical analysis, provided by Kimura and his coworkers, is focused on the neutral case, where the dynamics is due only to demographic stochasticity (drift), and on \emph{time-independent} selective forces (deleterious/beneficial mutations). However, both theoretical arguments and empirical analyses suggest that in many cases the selective forces fluctuate in time (temporal environmental stochasticity). Here we consider a generic model for a system with demographic noise and fluctuating selection. Our system is characterized by the time-averaged (log-)fitness $s_0$ and zero-mean fitness fluctuations. These fluctuations, in turn, are parameterized by their amplitude $\gamma$ and their correlation time $\delta$. We provide asymptotic (large $N$) formulas for the chance of fixation, the mean time to fixation and the mean time to absorption. Our expressions interpolate correctly between the constant selection limit $\gamma \to 0$ and the time-averaged neutral case $s_0=0$.
[ { "created": "Tue, 24 Oct 2017 14:39:53 GMT", "version": "v1" } ]
2017-10-25
[ [ "Danino", "Matan", "" ], [ "Shnerb", "Nadav M.", "" ] ]
A fundamental problem in the fields of population genetics, evolution, and community ecology is the fate of a single mutant, or invader, introduced in a finite population of wild types. For a fixed-size community of $N$ individuals, with Markovian, zero-sum dynamics driven by stochastic birth-death events, the mutant population eventually reaches either fixation or extinction. The classical analysis, provided by Kimura and his coworkers, is focused on the neutral case, where the dynamics is due only to demographic stochasticity (drift), and on \emph{time-independent} selective forces (deleterious/beneficial mutations). However, both theoretical arguments and empirical analyses suggest that in many cases the selective forces fluctuate in time (temporal environmental stochasticity). Here we consider a generic model for a system with demographic noise and fluctuating selection. Our system is characterized by the time-averaged (log-)fitness $s_0$ and zero-mean fitness fluctuations. These fluctuations, in turn, are parameterized by their amplitude $\gamma$ and their correlation time $\delta$. We provide asymptotic (large $N$) formulas for the chance of fixation, the mean time to fixation and the mean time to absorption. Our expressions interpolate correctly between the constant selection limit $\gamma \to 0$ and the time-averaged neutral case $s_0=0$.
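The constant-selection limit toward which the abstract's expressions interpolate is the classical Kimura fixation probability; a minimal sketch of that standard baseline formula (not code from the paper):

```python
import math

def fixation_prob(N, s):
    """Kimura's diffusion-approximation fixation probability of a single
    mutant with constant selection coefficient s in a population of N."""
    if s == 0.0:
        return 1.0 / N   # pure drift (neutral) limit
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

N = 100
print(fixation_prob(N, 0.0))    # neutral case: exactly 1/N
print(fixation_prob(N, 0.05))   # a beneficial mutant fixes far more often
```

Taking s toward 0 recovers the neutral value 1/N continuously, which is the analogue of the interpolation property the abstract states for its fluctuating-selection formulas.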
2309.07950
Zixuan Cang
Tram Huynh, Zixuan Cang
Topological and geometric analysis of cell states in single-cell transcriptomic data
null
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell RNA sequencing (scRNA-seq) enables dissecting cellular heterogeneity in tissues, resulting in numerous biological discoveries. Various computational methods have been devised to delineate cell types by clustering scRNA-seq data, where the clusters are often annotated using prior knowledge of marker genes. In addition to identifying pure cell types, several methods have been developed to identify cells undergoing state transitions, which often rely on prior clustering results. Present computational approaches predominantly investigate the local and first-order structures of scRNA-seq data using graph representations, while scRNA-seq data frequently displays complex high-dimensional structures. Here, we present a tool, scGeom, for exploiting the multiscale and multidimensional structures in scRNA-seq data by inspecting the geometry via graph curvature and the topology via persistent homology of both cell networks and gene networks. We demonstrate the utility of these structural features for reflecting biological properties and functions in several applications, where we show that curvatures and topological signatures of cell and gene networks can help indicate transition cells and the developmental potency of cells. We additionally illustrate that the structural characteristics can improve the classification of cell types.
[ { "created": "Thu, 14 Sep 2023 17:58:58 GMT", "version": "v1" } ]
2023-09-18
[ [ "Huynh", "Tram", "" ], [ "Cang", "Zixuan", "" ] ]
Single-cell RNA sequencing (scRNA-seq) enables dissecting cellular heterogeneity in tissues, resulting in numerous biological discoveries. Various computational methods have been devised to delineate cell types by clustering scRNA-seq data, where the clusters are often annotated using prior knowledge of marker genes. In addition to identifying pure cell types, several methods have been developed to identify cells undergoing state transitions, which often rely on prior clustering results. Present computational approaches predominantly investigate the local and first-order structures of scRNA-seq data using graph representations, while scRNA-seq data frequently displays complex high-dimensional structures. Here, we present a tool, scGeom, for exploiting the multiscale and multidimensional structures in scRNA-seq data by inspecting the geometry via graph curvature and the topology via persistent homology of both cell networks and gene networks. We demonstrate the utility of these structural features for reflecting biological properties and functions in several applications, where we show that curvatures and topological signatures of cell and gene networks can help indicate transition cells and the developmental potency of cells. We additionally illustrate that the structural characteristics can improve the classification of cell types.
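One simple combinatorial notion of the graph curvature used in such analyses is the Forman-Ricci curvature of an edge; whether scGeom uses this exact definition is an assumption, and the toy graph below is purely illustrative:

```python
# Forman-Ricci curvature of an edge (u, v) in an unweighted graph:
#   F(u, v) = 4 - deg(u) - deg(v).
# A common combinatorial notion of graph curvature; whether scGeom uses
# this exact definition is an assumption, and the toy graph (a triangle
# with one pendant node) is purely illustrative.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]

deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

curv = {(u, v): 4 - deg[u] - deg[v] for u, v in edges}
print(curv)
```

Edges between high-degree nodes get more negative curvature, so curvature profiles distinguish hub-like from peripheral structure in a cell or gene network.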
1501.03451
Maria Vittoria Barbarossa
Maria Vittoria Barbarossa, Gergely R\"ost
Mathematical models for vaccination, waning immunity and immune system boosting: a general framework
18 pages, 1 figure. Keywords: Immuno-epidemiology, Waning immunity, Immune status, Boosting, Physiological structure, Reinfection, Delay equations, Vaccination. arXiv admin note: substantial text overlap with arXiv:1411.3195
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When the body gets infected by a pathogen or receives a vaccine dose, the immune system develops pathogen-specific immunity. Induced immunity decays in time, and years after recovery/vaccination the host might become susceptible again. Exposure to the pathogen in the environment boosts the immune system, thus prolonging the duration of the protection. Such an interplay of within-host and population-level dynamics poses significant challenges for rigorous mathematical modeling in immuno-epidemiology. The aim of this paper is twofold. First, we provide an overview of existing models for the waning of disease/vaccine-induced immunity and immune system boosting. Then a new modeling approach is proposed for SIRVS dynamics, monitoring the immune status of individuals and including both waning immunity and immune system boosting. We show that some previous models can be considered as special cases or approximations of our framework.
[ { "created": "Wed, 7 Jan 2015 21:32:31 GMT", "version": "v1" } ]
2015-01-15
[ [ "Barbarossa", "Maria Vittoria", "" ], [ "Röst", "Gergely", "" ] ]
When the body gets infected by a pathogen or receives a vaccine dose, the immune system develops pathogen-specific immunity. Induced immunity decays in time, and years after recovery/vaccination the host might become susceptible again. Exposure to the pathogen in the environment boosts the immune system, thus prolonging the duration of the protection. Such an interplay of within-host and population-level dynamics poses significant challenges for rigorous mathematical modeling in immuno-epidemiology. The aim of this paper is twofold. First, we provide an overview of existing models for the waning of disease/vaccine-induced immunity and immune system boosting. Then a new modeling approach is proposed for SIRVS dynamics, monitoring the immune status of individuals and including both waning immunity and immune system boosting. We show that some previous models can be considered as special cases or approximations of our framework.
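A minimal SIRS sketch — recovered hosts return to the susceptible class at a waning rate — illustrates the baseline dynamics that such immuno-epidemiological frameworks generalize. Parameter values are illustrative only; the paper's models additionally track individual immune status and boosting:

```python
# Minimal SIRS model with waning immunity: recovered hosts lose
# protection at rate omega and return to the susceptible class.
# Parameter values are illustrative only, not taken from the paper.
beta, gamma, omega = 0.3, 0.1, 0.01   # infection, recovery, waning rates
S, I, R = 0.99, 0.01, 0.0             # initial fractions (sum to 1)
dt, steps = 0.1, 5000                 # forward-Euler integration

for _ in range(steps):
    dS = -beta * S * I + omega * R
    dI = beta * S * I - gamma * I
    dR = gamma * I - omega * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

print(round(S + I + R, 6))   # total population fraction is conserved
```

Because the three rates sum to zero, the population fraction is conserved exactly; with omega > 0 the system settles into an endemic state rather than the disease dying out, which is the qualitative effect of waning immunity.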
1511.06920
David Angulo-Garcia
David Angulo-Garcia, Joshua D. Berke, Alessandro Torcini
Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons
22 pages, 9 figures
PLOS Computational Biology 12(2): e1004778 (2016)
10.1371/journal.pcbi.1004778
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices [Carrillo-Reid et al., J. Neurophysiology 99 (2008) 1435-1450]. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We found that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9 (2013) e1002954] that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
[ { "created": "Sat, 21 Nov 2015 20:23:44 GMT", "version": "v1" } ]
2016-06-29
[ [ "Angulo-Garcia", "David", "" ], [ "Berke", "Joshua D.", "" ], [ "Torcini", "Alessandro", "" ] ]
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices [Carrillo-Reid et al., J. Neurophysiology 99 (2008) 1435-1450]. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We found that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9 (2013) e1002954] that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
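A single leaky integrate-and-fire (LIF) unit of the kind composing the network above can be sketched as follows; all parameters are illustrative, and the paper's model adds sparse inhibitory coupling between many such units:

```python
# A single leaky integrate-and-fire (LIF) neuron under constant drive:
# the membrane potential relaxes toward the input and is reset to rest
# whenever it crosses threshold. All parameters are illustrative.
tau_m = 20.0                          # membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
dt, t_max, I_ext = 0.1, 200.0, 1.2    # suprathreshold drive (I_ext > v_thresh)

v, t, spikes = v_rest, 0.0, []
while t < t_max:
    v += dt / tau_m * (-(v - v_rest) + I_ext)   # leaky integration step
    if v >= v_thresh:
        spikes.append(t)
        v = v_reset                             # reset after a spike
    t += dt

print(len(spikes))   # tonic firing: a handful of spikes in 200 ms
```

With a suprathreshold constant drive, the unit fires tonically at a regular interval set by tau_m and I_ext; in the network model, inhibition from co-active neighbors sculpts this tonic firing into the anti-correlated assembly dynamics the abstract describes.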
2002.02721
Simon Kapitza Mr
Simon Kapitza, Pham Van Ha, Tom Kompas, Nick Golding, Natasha C. R. Cadenhead, Payal Bal, Brendan A. Wintle
Assessing biophysical and socio-economic impacts of climate change on avian biodiversity
4 figures, 30 pages double-spaced
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Climate change threatens biodiversity directly by influencing biophysical variables that drive species' geographic distributions and indirectly through socio-economic changes that influence land use patterns, driven by global consumption, production and climate. To date, no detailed analyses have been produced that assess the relative importance of, or interaction between, these direct and indirect climate change impacts on biodiversity at large scales. Here, we apply a new integrated modelling framework to quantify the relative influence of biophysical and socio-economically mediated impacts on avian species in Vietnam and Australia. We find that socio-economically mediated impacts on suitable ranges are largely outweighed by biophysical impacts, but global shifts of production are likely to result in adverse impacts on habitats worldwide. By translating economic futures and shocks into spatially explicit predictions of biodiversity change, we now have the power to analyse in a consistent way outcomes for nature and people of any change to policy, regulation, trading conditions or consumption trend at any scale from sub-national to global.
[ { "created": "Fri, 7 Feb 2020 11:35:16 GMT", "version": "v1" } ]
2020-02-10
[ [ "Kapitza", "Simon", "" ], [ "Van Ha", "Pham", "" ], [ "Kompas", "Tom", "" ], [ "Golding", "Nick", "" ], [ "Cadenhead", "Natasha C. R.", "" ], [ "Bal", "Payal", "" ], [ "Wintle", "Brendan A.", "" ] ]
Climate change threatens biodiversity directly by influencing biophysical variables that drive species' geographic distributions and indirectly through socio-economic changes that influence land use patterns, driven by global consumption, production and climate. To date, no detailed analyses have been produced that assess the relative importance of, or interaction between, these direct and indirect climate change impacts on biodiversity at large scales. Here, we apply a new integrated modelling framework to quantify the relative influence of biophysical and socio-economically mediated impacts on avian species in Vietnam and Australia. We find that socio-economically mediated impacts on suitable ranges are largely outweighed by biophysical impacts, but global shifts of production are likely to result in adverse impacts on habitats worldwide. By translating economic futures and shocks into spatially explicit predictions of biodiversity change, we now have the power to analyse in a consistent way outcomes for nature and people of any change to policy, regulation, trading conditions or consumption trend at any scale from sub-national to global.
q-bio/0703010
Peter Csermely
Tamas Korcsmaros, Mate S. Szalay, Csaba Bode, Istvan A. Kovacs and Peter Csermely
How to design multi-target drugs: Target search options in cellular networks
12 pages, 3 figures, 1 table
Expert Opinion on Drug Discovery (2007) 2:1-10
null
null
q-bio.MN q-bio.BM
null
Despite improved rational drug design and remarkable progress in genomic, proteomic and high-throughput screening methods, the number of novel, single-target drugs fell far behind expectations during the past decade. Multi-target drugs multiply the number of pharmacologically relevant target molecules by introducing a set of indirect, network-dependent effects. In parallel, the low-affinity binding of multi-target drugs eases the constraints of druggability, and significantly increases the size of the druggable proteome. These effects tremendously expand the number of potential drug targets, and will introduce novel classes of multi-target drugs with smaller side effects and toxicity. Here we review the recent progress in this field, compare possible network attack strategies, and propose several methods to find target-sets for multi-target drugs.
[ { "created": "Sun, 4 Mar 2007 20:57:33 GMT", "version": "v1" }, { "created": "Sun, 20 May 2007 12:18:22 GMT", "version": "v2" } ]
2007-05-23
[ [ "Korcsmaros", "Tamas", "" ], [ "Szalay", "Mate S.", "" ], [ "Bode", "Csaba", "" ], [ "Kovacs", "Istvan A.", "" ], [ "Csermely", "Peter", "" ] ]
Despite improved rational drug design and remarkable progress in genomic, proteomic and high-throughput screening methods, the number of novel, single-target drugs fell far behind expectations during the past decade. Multi-target drugs multiply the number of pharmacologically relevant target molecules by introducing a set of indirect, network-dependent effects. In parallel, the low-affinity binding of multi-target drugs eases the constraints of druggability, and significantly increases the size of the druggable proteome. These effects tremendously expand the number of potential drug targets, and will introduce novel classes of multi-target drugs with smaller side effects and toxicity. Here we review the recent progress in this field, compare possible network attack strategies, and propose several methods to find target-sets for multi-target drugs.
2310.00185
Harvey Huang
Harvey Huang (1), Gabriela Ojeda Valencia (2), Nicholas M. Gregg (3), Gamaleldin M. Osman (3 and 6), Morgan N. Montoya (2), Gregory A. Worrell (2 and 3), Kai J. Miller (2 and 4), Dora Hermes (2 and 3 and 5) ((1) Mayo Clinic Medical Scientist Training Program, (2) Mayo Clinic Department of Physiology and Biomedical Engineering, (3) Mayo Clinic Department of Neurology, (4) Mayo Clinic Department of Neurologic Surgery, (5) Mayo Clinic Department of Radiology, (6) McGovern Medical School Department of Pediatrics)
CARLA: Adjusted common average referencing for cortico-cortical evoked potential data
29 pages, 8 main figures, 3 supplemental figures. For associated code, see https://github.com/hharveygit/CARLA_JNM
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Human brain connectivity can be mapped by single pulse electrical stimulation during intracranial EEG measurements. The raw cortico-cortical evoked potentials (CCEP) are often contaminated by noise. Common average referencing (CAR) removes common noise and preserves response shapes but can introduce bias from responsive channels. We address this issue with an adjusted, adaptive CAR algorithm termed "CAR by Least Anticorrelation (CARLA)". CARLA was tested on simulated CCEP data and real CCEP data collected from four human participants. In CARLA, the channels are ordered by increasing mean cross-trial covariance, and iteratively added to the common average until anticorrelation between any single channel and all re-referenced channels reaches a minimum, as a measure of shared noise. We simulated CCEP data with true responses in 0 to 45 of 50 total channels. We quantified CARLA's error and found that it erroneously included 0 (median) truly responsive channels in the common average with less than or equal to 42 responsive channels, and erroneously excluded less than or equal to 2.5 (median) unresponsive channels at all responsiveness levels. On real CCEP data, signal quality was quantified with the mean R-squared between all pairs of channels, which represents inter-channel dependency and is low for well-referenced data. CARLA re-referencing produced significantly lower mean R-squared than standard CAR, CAR using a fixed bottom quartile of channels by covariance, and no re-referencing. CARLA minimizes bias in re-referenced CCEP data by adaptively selecting the optimal subset of non-responsive channels. It showed high specificity and sensitivity on simulated CCEP data and lowered inter-channel dependency compared to CAR on real CCEP data.
[ { "created": "Fri, 29 Sep 2023 23:17:12 GMT", "version": "v1" } ]
2023-10-03
[ [ "Huang", "Harvey", "", "3 and 6" ], [ "Valencia", "Gabriela Ojeda", "", "3 and 6" ], [ "Gregg", "Nicholas M.", "", "3 and 6" ], [ "Osman", "Gamaleldin M.", "", "3 and 6" ], [ "Montoya", "Morgan N.", "", "2\n and 3" ], [ "Worrell", "Gregory A.", "", "2\n and 3" ], [ "Miller", "Kai J.", "", "2 and 4" ], [ "Hermes", "Dora", "", "2 and 3 and 5" ] ]
Human brain connectivity can be mapped by single pulse electrical stimulation during intracranial EEG measurements. The raw cortico-cortical evoked potentials (CCEP) are often contaminated by noise. Common average referencing (CAR) removes common noise and preserves response shapes but can introduce bias from responsive channels. We address this issue with an adjusted, adaptive CAR algorithm termed "CAR by Least Anticorrelation (CARLA)". CARLA was tested on simulated CCEP data and real CCEP data collected from four human participants. In CARLA, the channels are ordered by increasing mean cross-trial covariance, and iteratively added to the common average until anticorrelation between any single channel and all re-referenced channels reaches a minimum, as a measure of shared noise. We simulated CCEP data with true responses in 0 to 45 of 50 total channels. We quantified CARLA's error and found that it erroneously included 0 (median) truly responsive channels in the common average with less than or equal to 42 responsive channels, and erroneously excluded less than or equal to 2.5 (median) unresponsive channels at all responsiveness levels. On real CCEP data, signal quality was quantified with the mean R-squared between all pairs of channels, which represents inter-channel dependency and is low for well-referenced data. CARLA re-referencing produced significantly lower mean R-squared than standard CAR, CAR using a fixed bottom quartile of channels by covariance, and no re-referencing. CARLA minimizes bias in re-referenced CCEP data by adaptively selecting the optimal subset of non-responsive channels. It showed high specificity and sensitivity on simulated CCEP data and lowered inter-channel dependency compared to CAR on real CCEP data.
2105.13226
Sadek Bouroubi
Nabil Boumedine, Sadek Bouroubi
Protein folding simulations in the hydrophobic-polar model using a hybrid cuckoo search algorithm
null
null
null
null
q-bio.BM math.OC
http://creativecommons.org/publicdomain/zero/1.0/
A protein is a linear chain containing a set of amino acids, which folds on itself to create a specific native structure, also called the minimum energy conformation. It is the native structure that determines the functionality of each protein. The protein folding problem (PFP) remains one of the more difficult problems in computational and chemical biology. The principal challenge of PFP is to predict the optimal conformation of a given protein by considering only its amino acid sequence. As the conformational space contains a very large number of conformations, even when addressing short sequences, different simplified models have been developed and applied to make the PFP less complex. In the last few years, many computational approaches have been proposed to solve the PFP. They are based on simplified lattice models such as the hydrophobic-polar model. In this paper, we present a new Hybrid Cuckoo Search Algorithm (HCSA) to solve the 3D-HP protein folding optimization problem. Our proposed algorithm consists of combining the Cuckoo Search Algorithm (CSA) with the Hill Climbing (HC) algorithm. Simulation results on different benchmark sequences are presented and compared to the state-of-the-art algorithms.
[ { "created": "Sat, 22 May 2021 20:31:20 GMT", "version": "v1" } ]
2021-05-28
[ [ "Boumedine", "Nabil", "" ], [ "Bouroubi", "Sadek", "" ] ]
A protein is a linear chain containing a set of amino acids, which folds on itself to create a specific native structure, also called the minimum energy conformation. It is the native structure that determines the functionality of each protein. The protein folding problem (PFP) remains one of the more difficult problems in computational and chemical biology. The principal challenge of PFP is to predict the optimal conformation of a given protein by considering only its amino acid sequence. As the conformational space contains a very large number of conformations, even when addressing short sequences, different simplified models have been developed and applied to make the PFP less complex. In the last few years, many computational approaches have been proposed to solve the PFP. They are based on simplified lattice models such as the hydrophobic-polar model. In this paper, we present a new Hybrid Cuckoo Search Algorithm (HCSA) to solve the 3D-HP protein folding optimization problem. Our proposed algorithm consists of combining the Cuckoo Search Algorithm (CSA) with the Hill Climbing (HC) algorithm. Simulation results on different benchmark sequences are presented and compared to the state-of-the-art algorithms.
1810.11652
H\'el\`ene Delano\"e-Ayari Ph.D.
M. Durande, S. Tlili, T. Homan, B. Guirao, F. Graner and H. Delano\"e-Ayari
Fast determination of coarse grained cell anisotropy and size in epithelial tissue images using Fourier transform
13 pages; 9 figures
Phys. Rev. E 99, 062401 (2019)
10.1103/PhysRevE.99.062401
null
q-bio.TO physics.med-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanical strain and stress play a major role in biological processes such as wound healing or morphogenesis. To assess this role quantitatively, fixed or live images of tissues are acquired at cellular precision in large fields of view. To exploit these data, large numbers of cells have to be analyzed to extract cell shape anisotropy and cell size. Most frequently, this is performed through detailed individual cell contour determination, using so-called segmentation computer programs, complemented if necessary by manual detection and error corrections. However, a coarse-grained and faster technique can be recommended in at least three situations. First, when detailed information on individual cell contours is not required, for instance in studies which require only coarse-grained average information on cell anisotropy. Second, as an exploratory step to determine whether full segmentation can be potentially useful. Third, when segmentation is too difficult, for instance due to poor image quality or too large a cell number. We developed a user-friendly, Fourier transform-based image analysis pipeline. It is fast (typically $10^4$ cells per minute with a current laptop computer) and suitable for time, space or ensemble averages. We validate it on one set of artificial images and on two sets of fully segmented images, one from a Drosophila pupa and the other from a chicken embryo; the pipeline results are robust. Perspectives include \textit{in vitro} tissues, non-biological cellular patterns such as foams, and $xyz$ stacks.
[ { "created": "Sat, 27 Oct 2018 14:34:42 GMT", "version": "v1" }, { "created": "Wed, 14 Nov 2018 20:47:31 GMT", "version": "v2" }, { "created": "Sun, 17 Mar 2019 17:49:16 GMT", "version": "v3" } ]
2019-06-12
[ [ "Durande", "M.", "" ], [ "Tlili", "S.", "" ], [ "Homan", "T.", "" ], [ "Guirao", "B.", "" ], [ "Graner", "F.", "" ], [ "Delanoë-Ayari", "H.", "" ] ]
Mechanical strain and stress play a major role in biological processes such as wound healing or morphogenesis. To assess this role quantitatively, fixed or live images of tissues are acquired at cellular precision in large fields of view. To exploit these data, large numbers of cells have to be analyzed to extract cell shape anisotropy and cell size. Most frequently, this is performed through detailed individual cell contour determination, using so-called segmentation computer programs, complemented if necessary by manual detection and error corrections. However, a coarse-grained and faster technique can be recommended in at least three situations. First, when detailed information on individual cell contours is not required, for instance in studies which require only coarse-grained average information on cell anisotropy. Second, as an exploratory step to determine whether full segmentation can be potentially useful. Third, when segmentation is too difficult, for instance due to poor image quality or too large a cell number. We developed a user-friendly, Fourier transform-based image analysis pipeline. It is fast (typically $10^4$ cells per minute with a current laptop computer) and suitable for time, space or ensemble averages. We validate it on one set of artificial images and on two sets of fully segmented images, one from a Drosophila pupa and the other from a chicken embryo; the pipeline results are robust. Perspectives include \textit{in vitro} tissues, non-biological cellular patterns such as foams, and $xyz$ stacks.
1910.01796
David Le
David Le, Minhaj Alam, Cham Yao, Jennifer I. Lim, R.V.P. Chan, Devrim Toslak, and Xincheng Yao
Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy
20 pages, 4 figures, 6 tables
null
10.1167/tvst.9.2.35
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy (DR). Methods: A deep learning convolutional neural network (CNN) architecture VGG16 was employed for this study. A transfer learning process was implemented to re-train the CNN for robust OCTA classification. In order to demonstrate the feasibility of using this method for artificial intelligence (AI) screening of DR in clinical environments, the re-trained CNN was incorporated into a custom developed GUI platform which can be readily operated by ophthalmic personnel. Results: With the last nine layers re-trained, the CNN architecture achieved the best performance for automated OCTA classification. The overall accuracy of the re-trained classifier for differentiating healthy, NoDR, and NPDR was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR and DR were 0.97, 0.98 and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusion: With a transfer learning process to adopt the early layers for simple feature analysis and to re-train the upper layers for fine feature analysis, the CNN architecture VGG16 can be used for robust OCTA classification of healthy, NoDR, and NPDR eyes. Translational Relevance: OCTA can capture microvascular changes in early DR. A transfer learning process enables robust implementation of a convolutional neural network (CNN) for automated OCTA classification of DR.
[ { "created": "Fri, 4 Oct 2019 04:12:01 GMT", "version": "v1" } ]
2021-12-16
[ [ "Le", "David", "" ], [ "Alam", "Minhaj", "" ], [ "Yao", "Cham", "" ], [ "Lim", "Jennifer I.", "" ], [ "Chan", "R. V. P.", "" ], [ "Toslak", "Devrim", "" ], [ "Yao", "Xincheng", "" ] ]
Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy (DR). Methods: A deep learning convolutional neural network (CNN) architecture VGG16 was employed for this study. A transfer learning process was implemented to re-train the CNN for robust OCTA classification. In order to demonstrate the feasibility of using this method for artificial intelligence (AI) screening of DR in clinical environments, the re-trained CNN was incorporated into a custom developed GUI platform which can be readily operated by ophthalmic personnel. Results: With the last nine layers re-trained, the CNN architecture achieved the best performance for automated OCTA classification. The overall accuracy of the re-trained classifier for differentiating healthy, NoDR, and NPDR was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR and DR were 0.97, 0.98 and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusion: With a transfer learning process to adopt the early layers for simple feature analysis and to re-train the upper layers for fine feature analysis, the CNN architecture VGG16 can be used for robust OCTA classification of healthy, NoDR, and NPDR eyes. Translational Relevance: OCTA can capture microvascular changes in early DR. A transfer learning process enables robust implementation of a convolutional neural network (CNN) for automated OCTA classification of DR.
1612.05530
Joseph Baron
Joseph W. Baron, Tobias Galla
Sojourn times and fixation dynamics in multi-player games with fluctuating environments
22 pages, 9 figures
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study evolutionary multi-player games in finite populations, subject to fluctuating environments. The population undergoes a birth-death process with absorbing states, and the environment follows a Markovian process, resulting in a fluctuating payoff matrix for the evolutionary game. Our focus is on the fixation or extinction of a single mutant in a population of wildtypes. We show that the nonlinear nature of fitnesses in multi-player games gives rise to an intricate interplay of selection, genetic drift and environmental fluctuations. This generates effects not seen in simpler two-player games. To analyse trajectories towards fixation we analytically calculate sojourn times for general birth-death processes in populations of two types of individuals and in fluctuating environments.
[ { "created": "Fri, 16 Dec 2016 16:11:17 GMT", "version": "v1" } ]
2016-12-19
[ [ "Baron", "Joseph W.", "" ], [ "Galla", "Tobias", "" ] ]
We study evolutionary multi-player games in finite populations, subject to fluctuating environments. The population undergoes a birth-death process with absorbing states, and the environment follows a Markovian process, resulting in a fluctuating payoff matrix for the evolutionary game. Our focus is on the fixation or extinction of a single mutant in a population of wildtypes. We show that the nonlinear nature of fitnesses in multi-player games gives rise to an intricate interplay of selection, genetic drift and environmental fluctuations. This generates effects not seen in simpler two-player games. To analyse trajectories towards fixation we analytically calculate sojourn times for general birth-death processes in populations of two types of individuals and in fluctuating environments.
1803.07193
Lior Lebovich
Lior Lebovich, Ran Darshan, Yoni Lavi, David Hansel and Yonatan Loewenstein
Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance. Care is taken to account for them by adding an ad-hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and therefore is virtually inevitable in any comparison or decision task.
[ { "created": "Mon, 19 Mar 2018 23:10:51 GMT", "version": "v1" } ]
2018-03-21
[ [ "Lebovich", "Lior", "" ], [ "Darshan", "Ran", "" ], [ "Lavi", "Yoni", "" ], [ "Hansel", "David", "" ], [ "Loewenstein", "Yonatan", "" ] ]
An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance. Care is taken to account for them by adding an ad-hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and therefore is virtually inevitable in any comparison or decision task.
1601.00705
Aldo Ledesma
Aldo Ledesma-Dur\'an, H\'ector Ju\'arez-Valencia, Iv\'an Santamar\'ia-Holek
Reaction Diffusion patterns in Pseudoplatystoma fishes
null
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies how patterns derived from a system of reaction-diffusion equations may vary significantly depending upon boundary and initial conditions, as well as in the spatial dependence of the coefficients involved. From an extensive numerical study of the BVAM model, we demonstrate that the geometric pattern of a reaction-diffusion system is not uniquely determined by the value of the parameters in the equation. From this result, we suggest that the variability of patterns among individuals of the same species may have its roots in this sensitivity. Furthermore, this study analyzes briefly how the inclusion of the advection and the space dependency in the parameters of the model influences the forms of a specific pattern. The results of this study are compared to the skin patterns that appear in Pseudoplatystoma fishes.
[ { "created": "Tue, 5 Jan 2016 00:00:47 GMT", "version": "v1" } ]
2016-01-06
[ [ "Ledesma-Durán", "Aldo", "" ], [ "Juárez-Valencia", "Héctor", "" ], [ "Santamaría-Holek", "Iván", "" ] ]
This paper studies how patterns derived from a system of reaction-diffusion equations may vary significantly depending upon boundary and initial conditions, as well as in the spatial dependence of the coefficients involved. From an extensive numerical study of the BVAM model, we demonstrate that the geometric pattern of a reaction-diffusion system is not uniquely determined by the value of the parameters in the equation. From this result, we suggest that the variability of patterns among individuals of the same species may have its roots in this sensitivity. Furthermore, this study analyzes briefly how the inclusion of the advection and the space dependency in the parameters of the model influences the forms of a specific pattern. The results of this study are compared to the skin patterns that appear in Pseudoplatystoma fishes.
1802.05569
Bijan Sarkar
Bijan Sarkar
Moran-evolution of cooperation: From well-mixed to heterogeneous complex networks
Results generated by computer simulation of stochastic modelling framework might be failing to attain a generality
Physica A, 2018
10.1016/j.physa.2018.01.022
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Configurational arrangement of network architecture and interaction character of individuals are the two most influential factors on the mechanisms underlying the evolutionary outcome of cooperation, which is explained by the well-established framework of evolutionary game theory. In the current study, not only qualitatively but also quantitatively, we measure Moran-evolution of cooperation to support an analytical agreement based on the consequences of the replicator equation in a finite population. The validity of the measurement has been double-checked in the well-mixed network by the Langevin stochastic differential equation and the Gillespie-algorithmic version of Moran-evolution, while in a structured network, the measurement of accuracy is verified by standard numerical simulation. Considering the Birth-Death and Death-Birth updating rules through diffusion of individuals, the investigation is carried out over a wide range of game environments that relate to the various social dilemmas, where we are able to draw a new rigorous mathematical track to tackle the heterogeneity of complex networks. The set of modified criteria reveals the exact fact about the emergence and maintenance of cooperation in the structured population. We find that in general, nature promotes the environment of coexistent traits.
[ { "created": "Wed, 14 Feb 2018 15:31:13 GMT", "version": "v1" } ]
2018-02-16
[ [ "Sarkar", "Bijan", "" ] ]
Configurational arrangement of network architecture and interaction character of individuals are the two most influential factors on the mechanisms underlying the evolutionary outcome of cooperation, which is explained by the well-established framework of evolutionary game theory. In the current study, not only qualitatively but also quantitatively, we measure Moran-evolution of cooperation to support an analytical agreement based on the consequences of the replicator equation in a finite population. The validity of the measurement has been double-checked in the well-mixed network by the Langevin stochastic differential equation and the Gillespie-algorithmic version of Moran-evolution, while in a structured network, the measurement of accuracy is verified by standard numerical simulation. Considering the Birth-Death and Death-Birth updating rules through diffusion of individuals, the investigation is carried out over a wide range of game environments that relate to the various social dilemmas, where we are able to draw a new rigorous mathematical track to tackle the heterogeneity of complex networks. The set of modified criteria reveals the exact fact about the emergence and maintenance of cooperation in the structured population. We find that in general, nature promotes the environment of coexistent traits.
2003.13473
Shi Huang
Hongyao Chen and Shi Huang
Modern alleles in archaic human Y chromosomes support origin of modern human paternal lineages in Asia rather than Africa
14 pages, 2 figures, 2 tables
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have shown that hybridization between modern and archaic humans was commonplace in the history of our species. After admixture, some individuals with admixed autosomes carried the modern Homo Sapiens uniparental DNAs, while the rest carried the archaic versions. Coevolution of admixed autosomes and uniparental DNAs is expected to cause some of the sites in modern uniparental DNAs to revert back to archaic alleles, while the opposite process would occur (from archaic to modern) in some of the sites in archaic uniparental DNAs. This type of coevolution is one of the elements that differentiate the two different models of the Y phylogenetic tree of modern humans, rooting it either in Africa or East Asia. The expected reversion to archaic alleles is assumed to occur and is easily traceable in the Asia model, but is absent in the Africa model due to its infinite site assumption, which also precludes the independent or convergent mutation to modern alleles in archaic uniparental DNAs since mutations are assumed to occur randomly across a neutral genome, and convergent evolution is assumed not to occur. Here, we examined newly published high coverage Y chromosome sequencing data of two Denisovan and two Neanderthal samples to determine whether they carry modern Homo Sapiens alleles in sites where they are not supposed to according to the Africa model. The results showed that a significant fraction of the sites that, according to the Asia model, should differentiate the original modern Y from the original archaic Y carried modern alleles in the archaic Y samples here. Some of these modern alleles were shared among all archaic humans while others could differentiate Denisovans from Neanderthals. The observation is best accounted for by coevolution of archaic Y and admixed modern autosomes, and hence supports the Asia model, since it takes such coevolution into account.
[ { "created": "Thu, 26 Mar 2020 23:06:13 GMT", "version": "v1" } ]
2020-03-31
[ [ "Chen", "Hongyao", "" ], [ "Huang", "Shi", "" ] ]
Recent studies have shown that hybridization between modern and archaic humans was commonplace in the history of our species. After admixture, some individuals with admixed autosomes carried the modern Homo sapiens uniparental DNAs, while the rest carried the archaic versions. Coevolution of admixed autosomes and uniparental DNAs is expected to cause some of the sites in modern uniparental DNAs to revert to archaic alleles, while the opposite process would occur (from archaic to modern) in some of the sites in archaic uniparental DNAs. This type of coevolution is one of the elements that differentiate the two different models of the Y phylogenetic tree of modern humans, rooting it either in Africa or East Asia. The expected reversion to archaic alleles is assumed to occur and is easily traceable in the Asia model, but is absent in the Africa model due to its infinite site assumption, which also precludes the independent or convergent mutation to modern alleles in archaic uniparental DNAs since mutations are assumed to occur randomly across a neutral genome, and convergent evolution is assumed not to occur. Here, we examined newly published high-coverage Y chromosome sequencing data of two Denisovan and two Neanderthal samples to determine whether they carry modern Homo sapiens alleles in sites where they are not supposed to according to the Africa model. The results showed that a significant fraction of the sites that, according to the Asia model, should differentiate the original modern Y from the original archaic Y carried modern alleles in the archaic Y samples here. Some of these modern alleles were shared among all archaic humans while others could differentiate Denisovans from Neanderthals. The observation is best accounted for by coevolution of archaic Y and admixed modern autosomes, and hence supports the Asia model, since it takes such coevolution into account.
1602.01848
Andrew Marantan
Andrew Marantan and Ariel Amir
Stochastic modeling of cell growth with symmetric or asymmetric division
22 pages, 11 figures
Phys. Rev. E 94, 012405 (2016)
10.1103/PhysRevE.94.012405
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a class of biologically-motivated stochastic processes in which a unicellular organism divides its resources (volume or damaged proteins, in particular) symmetrically or asymmetrically between its progeny. Assuming the final amount of the resource is controlled by a growth policy and subject to additive and multiplicative noise, we derive the "master equation" describing how the resource distribution evolves over subsequent generations and use it to study the properties of stable resource distributions. We find conditions under which a unique stable resource distribution exists and calculate its moments for the class of affine linear growth policies. Moreover, we apply an asymptotic analysis to elucidate the conditions under which the stable distribution (when it exists) has a power-law tail. Finally, we use the results of this asymptotic analysis along with the moment equations to draw a stability phase diagram for the system that reveals the counterintuitive result that asymmetry serves to increase stability while at the same time widening the stable distribution. We also briefly discuss how cells can divide damaged proteins asymmetrically between their progeny as a form of damage control. In the appendix, motivated by the asymmetric division of cell volume in Saccharomyces cerevisiae, we extend our results to the case wherein mother and daughter cells follow different growth policies.
[ { "created": "Thu, 4 Feb 2016 21:03:32 GMT", "version": "v1" } ]
2016-07-20
[ [ "Marantan", "Andrew", "" ], [ "Amir", "Ariel", "" ] ]
We consider a class of biologically-motivated stochastic processes in which a unicellular organism divides its resources (volume or damaged proteins, in particular) symmetrically or asymmetrically between its progeny. Assuming the final amount of the resource is controlled by a growth policy and subject to additive and multiplicative noise, we derive the "master equation" describing how the resource distribution evolves over subsequent generations and use it to study the properties of stable resource distributions. We find conditions under which a unique stable resource distribution exists and calculate its moments for the class of affine linear growth policies. Moreover, we apply an asymptotic analysis to elucidate the conditions under which the stable distribution (when it exists) has a power-law tail. Finally, we use the results of this asymptotic analysis along with the moment equations to draw a stability phase diagram for the system that reveals the counterintuitive result that asymmetry serves to increase stability while at the same time widening the stable distribution. We also briefly discuss how cells can divide damaged proteins asymmetrically between their progeny as a form of damage control. In the appendix, motivated by the asymmetric division of cell volume in Saccharomyces cerevisiae, we extend our results to the case wherein mother and daughter cells follow different growth policies.
0904.3996
Razvan Radulescu M.D.
Razvan Tudor Radulescu
Wave and quantum properties of peptide strings: defining a helix in spacetime
9 pages, 1 figure
null
null
null
q-bio.SC q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies have described the concept of peptide strings in qualitative terms and illustrated it by applying its corollaries in order to elucidate basic questions in oncology and rheumatology. The present investigation is the first to quantify these potential sub- and transcellular phenomena. Accordingly, the propagation of peptide strings is proposed here to occur by way of waves that in turn are subject to the energy equation established by Planck. As a result of these insights, widespread future applications can now be envisaged for peptide strings both in molecular medicine and quantum optics.
[ { "created": "Sat, 25 Apr 2009 18:06:29 GMT", "version": "v1" } ]
2009-04-28
[ [ "Radulescu", "Razvan Tudor", "" ] ]
Previous studies have described the concept of peptide strings in qualitative terms and illustrated it by applying its corollaries in order to elucidate basic questions in oncology and rheumatology. The present investigation is the first to quantify these potential sub- and transcellular phenomena. Accordingly, the propagation of peptide strings is proposed here to occur by way of waves that in turn are subject to the energy equation established by Planck. As a result of these insights, widespread future applications can now be envisaged for peptide strings both in molecular medicine and quantum optics.
q-bio/0609017
Nabila Jabrane-Ferrat
G. Drozina (UCSF), J. Kohoutek (UCSF), N. Jabrane-Ferrat (UCSF, IPBS), B. M. Peterlin (UCSF)
Expression of MHC II genes
null
Current topics in microbiology and immunology. 290 (2005) Pages 147-70, Accession Number 16480042
null
null
q-bio.GN
null
Innate and adaptive immunity are connected via antigen processing and presentation (APP), which results in the presentation of antigenic peptides to T cells in the complex with the major histocompatibility (MHC) determinants. MHC class II (MHC II) determinants present antigens to CD4+ T cells, which are the main regulators of the immune response. Their genes are transcribed from compact promoters that form first the MHC II enhanceosome, which contains DNA-bound activators and then the MHC II transcriptosome with the addition of the class II transactivator (CIITA). CIITA is the master regulator of MHC II transcription. It is expressed constitutively in dendritic cells (DC) and mature B cells and is inducible in most other cell types. Three isoforms of CIITA exist, depending on cell type and inducing signals. CIITA is regulated at the levels of transcription and post-translational modifications, which are still not very clear. Inappropriate immune responses are found in several diseases, including cancer and autoimmunity. Since CIITA regulates the expression of MHC II genes, it is involved directly in the regulation of the immune response. The knowledge of CIITA will facilitate the manipulation of the immune response and might contribute to the treatment of these diseases.
[ { "created": "Mon, 11 Sep 2006 15:41:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Drozina", "G.", "", "UCSF" ], [ "Kohoutek", "J.", "", "UCSF" ], [ "Jabrane-Ferrat", "N.", "", "UCSF, IPBS" ], [ "Peterlin", "B. M.", "", "UCSF" ] ]
Innate and adaptive immunity are connected via antigen processing and presentation (APP), which results in the presentation of antigenic peptides to T cells in the complex with the major histocompatibility (MHC) determinants. MHC class II (MHC II) determinants present antigens to CD4+ T cells, which are the main regulators of the immune response. Their genes are transcribed from compact promoters that form first the MHC II enhanceosome, which contains DNA-bound activators and then the MHC II transcriptosome with the addition of the class II transactivator (CIITA). CIITA is the master regulator of MHC II transcription. It is expressed constitutively in dendritic cells (DC) and mature B cells and is inducible in most other cell types. Three isoforms of CIITA exist, depending on cell type and inducing signals. CIITA is regulated at the levels of transcription and post-translational modifications, which are still not very clear. Inappropriate immune responses are found in several diseases, including cancer and autoimmunity. Since CIITA regulates the expression of MHC II genes, it is involved directly in the regulation of the immune response. The knowledge of CIITA will facilitate the manipulation of the immune response and might contribute to the treatment of these diseases.
1106.1752
Yasser Roudi
John Hertz, Yasser Roudi, Joanna Tyrcha
Ising Models for Inferring Network Structure From Spike Data
To appear in "Principles of Neural Coding", edited by Stefano Panzeri and Rodrigo Quian Quiroga
null
null
null
q-bio.QM cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Now that spike trains from many neurons can be recorded simultaneously, there is a need for methods to decode these data to learn about the networks that these neurons are part of. One approach to this problem is to adjust the parameters of a simple model network to make its spike trains resemble the data as much as possible. The connections in the model network can then give us an idea of how the real neurons that generated the data are connected and how they influence each other. In this chapter we describe how to do this for the simplest kind of model: an Ising network. We derive algorithms for finding the best model connection strengths for fitting a given data set, as well as faster approximate algorithms based on mean field theory. We test the performance of these algorithms on data from model networks and experiments.
[ { "created": "Thu, 9 Jun 2011 09:17:14 GMT", "version": "v1" } ]
2011-06-10
[ [ "Hertz", "John", "" ], [ "Roudi", "Yasser", "" ], [ "Tyrcha", "Joanna", "" ] ]
Now that spike trains from many neurons can be recorded simultaneously, there is a need for methods to decode these data to learn about the networks that these neurons are part of. One approach to this problem is to adjust the parameters of a simple model network to make its spike trains resemble the data as much as possible. The connections in the model network can then give us an idea of how the real neurons that generated the data are connected and how they influence each other. In this chapter we describe how to do this for the simplest kind of model: an Ising network. We derive algorithms for finding the best model connection strengths for fitting a given data set, as well as faster approximate algorithms based on mean field theory. We test the performance of these algorithms on data from model networks and experiments.
1304.0681
Sergio Barrachina Mir
H\'ector Mart\'inez (1), Joaqu\'in T\'arraga (2), Ignacio Medina (2), Sergio Barrachina (1), Maribel Castillo (1), Joaqu\'in Dopazo (2), Enrique S. Quintana-Ort\'i (1) ((1) Dpto. de Ingenier\'ia y Ciencia de los Computadores, Universidad Jaume I, Castell\'on, Spain, (2) Computational Genomics Institute, Centro de Investigaci\'on Pr\'incipe Felipe, Valencia, Spain)
Concurrent and Accurate RNA Sequencing on Multicore Platforms
null
null
null
UJI ICC 2013-03-01
q-bio.GN cs.DC q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper we introduce a novel parallel pipeline for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, named HPG-Aligner, leverages the speed of the Burrows-Wheeler Transform to map a large number of RNA fragments (reads) rapidly, as well as the accuracy of the Smith-Waterman algorithm, which is employed to deal with conflicting reads. The aligner is complemented with a careful strategy to detect splice junctions based on the division of RNA reads into short segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing useful information for the successful alignment of the complete reads. Experimental results on platforms with AMD and Intel multicore processors report the remarkable parallel performance of HPG-Aligner on short and long RNA reads, which outperforms a state-of-the-art aligner such as TopHat 2, built on top of Bowtie and Bowtie 2, in both execution time and sensitivity.
[ { "created": "Tue, 2 Apr 2013 16:34:36 GMT", "version": "v1" } ]
2013-04-03
[ [ "Martínez", "Héctor", "" ], [ "Tárraga", "Joaquín", "" ], [ "Medina", "Ignacio", "" ], [ "Barrachina", "Sergio", "" ], [ "Castillo", "Maribel", "" ], [ "Dopazo", "Joaquín", "" ], [ "Quintana-Ortí", "Enrique S.", "" ] ]
In this paper we introduce a novel parallel pipeline for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, named HPG-Aligner, leverages the speed of the Burrows-Wheeler Transform to map a large number of RNA fragments (reads) rapidly, as well as the accuracy of the Smith-Waterman algorithm, which is employed to deal with conflicting reads. The aligner is complemented with a careful strategy to detect splice junctions based on the division of RNA reads into short segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing useful information for the successful alignment of the complete reads. Experimental results on platforms with AMD and Intel multicore processors report the remarkable parallel performance of HPG-Aligner on short and long RNA reads, which outperforms a state-of-the-art aligner such as TopHat 2, built on top of Bowtie and Bowtie 2, in both execution time and sensitivity.
1301.0996
Dipanjan Roy
Dipanjan Roy, Yenni Tjandra, Konstantin Mergenthaler, Jeremy Petravicz, Caroline A. Runyan, Nathan R. Wilson, Mriganka Sur, Klaus Obermayer
Afferent specificity, feature specific connectivity influence orientation selectivity: A computational study in mouse primary visual cortex
39 pages, 12 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Primary visual cortex (V1) provides crucial insights into the selectivity and emergence of specific output features such as orientation tuning. The tuning and selectivity of cortical neurons in mouse visual cortex have not been unequivocally resolved so far. While many in-vivo experimental studies found inhibitory neurons of all subtypes to be broadly tuned for orientation, other studies report inhibitory neurons that are as sharply tuned as excitatory neurons. These diverging findings about the selectivity of excitatory and inhibitory cortical neurons prompted us to ask the following questions: (1) How different or similar is the cortical computation compared with that in previously described species that rely on an orientation map? (2) What is the network mechanism underlying the sharpening of orientation selectivity in the mouse primary visual cortex? Here, we investigate the above questions in a computational framework with a recurrent network composed of Hodgkin-Huxley (HH) point neurons. Our cortical network with random connectivity alone could not account for all the experimental observations, which led us to hypothesize (a) orientation-dependent connectivity and (b) feedforward afferent specificity to understand orientation selectivity of V1 neurons in mouse. Using the population orientation selectivity index (OSI) as a measure of neuronal selectivity to stimulus orientation, we test each hypothesis separately and in combination against experimental data. Based on our analysis of orientation selectivity (OS) data, we find a good fit of network parameters in a model based on afferent specificity and connectivity that scales with feature similarity. We conclude that this particular model class best supports data sets of orientation selectivity of excitatory and inhibitory neurons in layer 2/3 of primary visual cortex of mouse.
[ { "created": "Sun, 6 Jan 2013 11:52:42 GMT", "version": "v1" } ]
2013-01-08
[ [ "Roy", "Dipanjan", "" ], [ "Tjandra", "Yenni", "" ], [ "Mergenthaler", "Konstantin", "" ], [ "Petravicz", "Jeremy", "" ], [ "Runyan", "Caroline A.", "" ], [ "Wilson", "Nathan R.", "" ], [ "Sur", "Mriganka", "" ], [ "Obermayer", "Klaus", "" ] ]
Primary visual cortex (V1) provides crucial insights into the selectivity and emergence of specific output features such as orientation tuning. The tuning and selectivity of cortical neurons in mouse visual cortex have not been unequivocally resolved so far. While many in-vivo experimental studies found inhibitory neurons of all subtypes to be broadly tuned for orientation, other studies report inhibitory neurons that are as sharply tuned as excitatory neurons. These diverging findings about the selectivity of excitatory and inhibitory cortical neurons prompted us to ask the following questions: (1) How different or similar is the cortical computation compared with that in previously described species that rely on an orientation map? (2) What is the network mechanism underlying the sharpening of orientation selectivity in the mouse primary visual cortex? Here, we investigate the above questions in a computational framework with a recurrent network composed of Hodgkin-Huxley (HH) point neurons. Our cortical network with random connectivity alone could not account for all the experimental observations, which led us to hypothesize (a) orientation-dependent connectivity and (b) feedforward afferent specificity to understand orientation selectivity of V1 neurons in mouse. Using the population orientation selectivity index (OSI) as a measure of neuronal selectivity to stimulus orientation, we test each hypothesis separately and in combination against experimental data. Based on our analysis of orientation selectivity (OS) data, we find a good fit of network parameters in a model based on afferent specificity and connectivity that scales with feature similarity. We conclude that this particular model class best supports data sets of orientation selectivity of excitatory and inhibitory neurons in layer 2/3 of primary visual cortex of mouse.
q-bio/0604023
Gennady Margolin
Gennady Margolin, Ivan V. Gregoretti, Holly V. Goodson, Mark S. Alber
Analysis of a microscopic stochastic model of microtubule dynamic instability
14 pages, 7 figures
PHYSICAL REVIEW E 74, 041920 (2006)
10.1103/PhysRevE.74.041920
null
q-bio.SC q-bio.CB q-bio.QM
null
A novel theoretical model of dynamic instability of a system of linear (1D) microtubules (MTs) in a bounded domain is introduced for studying the role of a cell edge in vivo and analyzing the effect of competition for a limited amount of tubulin. The model differs from earlier models in that the evolution of MTs is based on the rates of single unit (e.g., a heterodimer per protofilament) transformations, in contrast to postulating effective rates/frequencies of larger-scale changes, extracted, e.g., from the length history plots of MTs. Spontaneous GTP hydrolysis with finite rate after polymerization is assumed, and theoretical estimates of an effective catastrophe frequency as well as other parameters characterizing MT length distributions and cap size are derived. We implement a simple cap model which does not include vectorial hydrolysis. We demonstrate that our theoretical predictions, such as steady state concentration of free tubulin, and parameters of MT length distributions, are in agreement with the numerical simulations. The present model establishes a quantitative link between microscopic parameters governing the dynamics of MTs and macroscopic characteristics of MTs in a closed system. Lastly, we use a computational Monte Carlo model to provide an explanation for non-exponential MT length distributions observed in experiments. In particular, we show that appearance of such non-exponential distributions in the experiments can occur because the true steady state has not been reached, and/or due to the presence of a cell edge.
[ { "created": "Mon, 17 Apr 2006 17:19:38 GMT", "version": "v1" } ]
2007-05-23
[ [ "Margolin", "Gennady", "" ], [ "Gregoretti", "Ivan V.", "" ], [ "Goodson", "Holly V.", "" ], [ "Alber", "Mark S.", "" ] ]
A novel theoretical model of dynamic instability of a system of linear (1D) microtubules (MTs) in a bounded domain is introduced for studying the role of a cell edge in vivo and analyzing the effect of competition for a limited amount of tubulin. The model differs from earlier models in that the evolution of MTs is based on the rates of single unit (e.g., a heterodimer per protofilament) transformations, in contrast to postulating effective rates/frequencies of larger-scale changes, extracted, e.g., from the length history plots of MTs. Spontaneous GTP hydrolysis with finite rate after polymerization is assumed, and theoretical estimates of an effective catastrophe frequency as well as other parameters characterizing MT length distributions and cap size are derived. We implement a simple cap model which does not include vectorial hydrolysis. We demonstrate that our theoretical predictions, such as steady state concentration of free tubulin, and parameters of MT length distributions, are in agreement with the numerical simulations. The present model establishes a quantitative link between microscopic parameters governing the dynamics of MTs and macroscopic characteristics of MTs in a closed system. Lastly, we use a computational Monte Carlo model to provide an explanation for non-exponential MT length distributions observed in experiments. In particular, we show that appearance of such non-exponential distributions in the experiments can occur because the true steady state has not been reached, and/or due to the presence of a cell edge.
1606.04876
Christos Papadimitriou
Christos H. Papadimitriou
On the optimality of grid cells
null
null
null
null
q-bio.NC cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grid cells, discovered more than a decade ago [5], are neurons in the brain of mammals that fire when the animal is located near certain specific points in its familiar terrain. Intriguingly, these points form, for a single cell, a two-dimensional triangular grid, not unlike our Figure 3. Grid cells are widely believed to be involved in path integration, that is, the maintenance of a location state through the summation of small displacements. We provide theoretical evidence for this assertion by showing that cells with grid-like tuning curves are indeed well adapted for the path integration task. In particular we prove that, in one dimension under Gaussian noise, the sensitivity of measuring small displacements is maximized by a population of neurons whose tuning curves are near-sinusoids -- that is to say, with peaks forming a one-dimensional grid. We also show that effective computation of the displacement is possible through a second population of cells whose sinusoid tuning curves are in phase difference from the first. In two dimensions, under additional assumptions it can be shown that measurement sensitivity is optimized by the product of two sinusoids, again yielding a grid-like pattern. We discuss the connection of our results to the triangular grid pattern observed in animals.
[ { "created": "Wed, 15 Jun 2016 17:36:44 GMT", "version": "v1" } ]
2016-06-16
[ [ "Papadimitriou", "Christos H.", "" ] ]
Grid cells, discovered more than a decade ago [5], are neurons in the brain of mammals that fire when the animal is located near certain specific points in its familiar terrain. Intriguingly, these points form, for a single cell, a two-dimensional triangular grid, not unlike our Figure 3. Grid cells are widely believed to be involved in path integration, that is, the maintenance of a location state through the summation of small displacements. We provide theoretical evidence for this assertion by showing that cells with grid-like tuning curves are indeed well adapted for the path integration task. In particular we prove that, in one dimension under Gaussian noise, the sensitivity of measuring small displacements is maximized by a population of neurons whose tuning curves are near-sinusoids -- that is to say, with peaks forming a one-dimensional grid. We also show that effective computation of the displacement is possible through a second population of cells whose sinusoid tuning curves are in phase difference from the first. In two dimensions, under additional assumptions it can be shown that measurement sensitivity is optimized by the product of two sinusoids, again yielding a grid-like pattern. We discuss the connection of our results to the triangular grid pattern observed in animals.
1709.00739
Michael Assaf
Vicenc Mendez, Michael Assaf, Werner Horsthemke and Daniel Campos
Stochastic foundations in nonlinear density-regulation growth
12 pages, 6 figures
Phys. Rev. E 96, 022147 (2017)
10.1103/PhysRevE.96.022147
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we construct individual-based models that give rise to the generalized logistic model at the mean-field deterministic level and that allow us to interpret the parameters of these models in terms of individual interactions. We also study the effect of internal fluctuations on the long-time dynamics for the different models that have been widely used in the literature, such as the theta-logistic and Savageau models. In particular, we determine the conditions for population extinction and calculate the mean time to extinction. If the population does not become extinct, we obtain analytical expressions for the population abundance distribution. Our theoretical results are based on WKB theory and the probability generating function formalism and are verified by numerical simulations.
[ { "created": "Sun, 3 Sep 2017 16:21:08 GMT", "version": "v1" } ]
2017-09-13
[ [ "Mendez", "Vicenc", "" ], [ "Assaf", "Michael", "" ], [ "Horsthemke", "Werner", "" ], [ "Campos", "Daniel", "" ] ]
In this work we construct individual-based models that give rise to the generalized logistic model at the mean-field deterministic level and that allow us to interpret the parameters of these models in terms of individual interactions. We also study the effect of internal fluctuations on the long-time dynamics for the different models that have been widely used in the literature, such as the theta-logistic and Savageau models. In particular, we determine the conditions for population extinction and calculate the mean time to extinction. If the population does not become extinct, we obtain analytical expressions for the population abundance distribution. Our theoretical results are based on WKB theory and the probability generating function formalism and are verified by numerical simulations.
2107.12124
Thierry Mora
Victor Chard\`es, Massimo Vergassola, Aleksandra M. Walczak, Thierry Mora
Affinity maturation for an optimal balance between long-term immune coverage and short-term resource constraints
null
null
10.1073/pnas.2113512119
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to target threatening pathogens, the adaptive immune system performs a continuous reorganization of its lymphocyte repertoire. Following an immune challenge, the B cell repertoire can evolve cells of increased specificity for the encountered strain. This process of affinity maturation generates a memory pool whose diversity and size remain difficult to predict. We assume that the immune system follows a strategy that maximizes the long-term immune coverage and minimizes the short-term metabolic costs associated with affinity maturation. This strategy is defined as an optimal decision process on a finite dimensional phenotypic space, where a pre-existing population of naive cells is sequentially challenged with a neutrally evolving strain. We unveil a trade-off between immune protection against future strains and the necessary reorganization of the repertoire. This plasticity of the repertoire drives the emergence of distinct regimes for the size and diversity of the memory pool, depending on the density of naive cells and on the mutation rate of the strain. The model predicts power-law distributions of clonotype sizes observed in data, and rationalizes antigenic imprinting as a strategy to minimize metabolic costs while keeping good immune protection against future strains.
[ { "created": "Mon, 26 Jul 2021 11:45:16 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 17:09:00 GMT", "version": "v2" } ]
2022-03-23
[ [ "Chardès", "Victor", "" ], [ "Vergassola", "Massimo", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ] ]
In order to target threatening pathogens, the adaptive immune system performs a continuous reorganization of its lymphocyte repertoire. Following an immune challenge, the B cell repertoire can evolve cells of increased specificity for the encountered strain. This process of affinity maturation generates a memory pool whose diversity and size remain difficult to predict. We assume that the immune system follows a strategy that maximizes the long-term immune coverage and minimizes the short-term metabolic costs associated with affinity maturation. This strategy is defined as an optimal decision process on a finite dimensional phenotypic space, where a pre-existing population of naive cells is sequentially challenged with a neutrally evolving strain. We unveil a trade-off between immune protection against future strains and the necessary reorganization of the repertoire. This plasticity of the repertoire drives the emergence of distinct regimes for the size and diversity of the memory pool, depending on the density of naive cells and on the mutation rate of the strain. The model predicts power-law distributions of clonotype sizes observed in data, and rationalizes antigenic imprinting as a strategy to minimize metabolic costs while keeping good immune protection against future strains.
1808.02766
Daniele Ramazzotti
Chris Sauer and Jinghui Dong and Leo Celi and Daniele Ramazzotti
Improved survival of cancer patients admitted to the ICU between 2002 and 2011 at a U.S. teaching hospital
null
null
null
null
q-bio.QM stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past decades, both critical care and cancer care have improved substantially. Due to increased cancer-specific survival, we hypothesized that both the number of cancer patients admitted to the ICU and overall survival have increased since the millennium change. MIMIC-III, a freely accessible critical care database of Beth Israel Deaconess Medical Center, Boston, USA was used to retrospectively study trends and outcomes of cancer patients admitted to the ICU between 2002 and 2011. Multiple logistic regression analysis was performed to adjust for confounders of 28-day and 1-year mortality. Out of 41,468 unique ICU admissions, 1,100 hemato-oncologic, 3,953 oncologic and 49 patients with both a hematological and solid malignancy were analyzed. Hematological patients had higher critical illness scores than non-cancer patients, while oncologic patients had similar APACHE-III and SOFA-scores compared to non-cancer patients. In the univariate analysis, cancer was strongly associated with mortality (OR= 2.74, 95%CI: 2.56, 2.94). Over the 10-year study period, 28-day mortality of cancer patients decreased by 30%. This trend persisted after adjustment for covariates, with cancer patients having significantly higher mortality (OR=2.63, 95%CI: 2.38, 2.88). Between 2002 and 2011, both the adjusted odds of 28-day mortality and the adjusted odds of 1-year mortality for cancer patients decreased by 6% (95%CI: 4%, 9%). Having cancer was the strongest single predictor of 1-year mortality in the multivariate model (OR=4.47, 95%CI: 4.11, 4.84).
[ { "created": "Mon, 6 Aug 2018 22:15:20 GMT", "version": "v1" } ]
2018-08-09
[ [ "Sauer", "Chris", "" ], [ "Dong", "Jinghui", "" ], [ "Celi", "Leo", "" ], [ "Ramazzotti", "Daniele", "" ] ]
Over the past decades, both critical care and cancer care have improved substantially. Due to increased cancer-specific survival, we hypothesized that both the number of cancer patients admitted to the ICU and overall survival have increased since the millennium change. MIMIC-III, a freely accessible critical care database of Beth Israel Deaconess Medical Center, Boston, USA was used to retrospectively study trends and outcomes of cancer patients admitted to the ICU between 2002 and 2011. Multiple logistic regression analysis was performed to adjust for confounders of 28-day and 1-year mortality. Out of 41,468 unique ICU admissions, 1,100 hemato-oncologic, 3,953 oncologic and 49 patients with both a hematological and solid malignancy were analyzed. Hematological patients had higher critical illness scores than non-cancer patients, while oncologic patients had similar APACHE-III and SOFA-scores compared to non-cancer patients. In the univariate analysis, cancer was strongly associated with mortality (OR=2.74, 95%CI: 2.56, 2.94). Over the 10-year study period, 28-day mortality of cancer patients decreased by 30%. This trend persisted after adjustment for covariates, with cancer patients having significantly higher mortality (OR=2.63, 95%CI: 2.38, 2.88). Between 2002 and 2011, both the adjusted odds of 28-day mortality and the adjusted odds of 1-year mortality for cancer patients decreased by 6% (95%CI: 4%, 9%). Having cancer was the strongest single predictor of 1-year mortality in the multivariate model (OR=4.47, 95%CI: 4.11, 4.84).
2108.05046
Thomas Klotz
Thomas Klotz, Leonardo Gizzi, Oliver R\"ohrle
Investigating the spatial resolution of EMG and MMG based on a systemic multi-scale model
Preprint, Submitted to Biomechanics and Modeling in Mechanobiology
null
null
null
q-bio.TO q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
While electromyography (EMG) and magnetomyography (MMG) are both methods to measure the electrical activity of skeletal muscles, no systematic comparison between both signals exists. Within this work, we propose a systemic in silico model for EMG and MMG and test the hypothesis that MMG surpasses EMG in terms of spatial selectivity. The results show that MMG provides a slightly better spatial selectivity than EMG when recorded directly on the muscle surface. However, there is a remarkable difference in spatial selectivity for non-invasive surface measurements. The spatial selectivity of the MMG components aligned with the muscle fibres and normal to the body surface outperforms the spatial selectivity of surface EMG. Particularly, for the MMG's normal-to-the-surface component the influence of subcutaneous fat is minimal. Further, for the first time, we analyse the contribution of different structural components, i.e., muscle fibres from different motor units and the extracellular space, to the measurable biomagnetic field. Notably, the simulations show that for the normal-to-the-surface MMG component, the contribution from volume currents in the extracellular space and in surrounding inactive tissues is negligible. Further, our model predicts a surprisingly high contribution of the passive muscle fibres to the observable magnetic field.
[ { "created": "Wed, 11 Aug 2021 06:21:34 GMT", "version": "v1" } ]
2021-08-12
[ [ "Klotz", "Thomas", "" ], [ "Gizzi", "Leonardo", "" ], [ "Röhrle", "Oliver", "" ] ]
While electromyography (EMG) and magnetomyography (MMG) are both methods to measure the electrical activity of skeletal muscles, no systematic comparison between both signals exists. Within this work, we propose a systemic in silico model for EMG and MMG and test the hypothesis that MMG surpasses EMG in terms of spatial selectivity. The results show that MMG provides a slightly better spatial selectivity than EMG when recorded directly on the muscle surface. However, there is a remarkable difference in spatial selectivity for non-invasive surface measurements. The spatial selectivity of the MMG components aligned with the muscle fibres and normal to the body surface outperforms the spatial selectivity of surface EMG. Particularly, for the MMG's normal-to-the-surface component the influence of subcutaneous fat is minimal. Further, for the first time, we analyse the contribution of different structural components, i.e., muscle fibres from different motor units and the extracellular space, to the measurable biomagnetic field. Notably, the simulations show that for the normal-to-the-surface MMG component, the contribution from volume currents in the extracellular space and in surrounding inactive tissues is negligible. Further, our model predicts a surprisingly high contribution of the passive muscle fibres to the observable magnetic field.
0912.3221
Biswa Sengupta
B. Sengupta and S. B. Laughlin and J. E. Niven
Comparison of Langevin and Markov channel noise models for neuronal signal generation
null
null
10.1103/PhysRevE.81.011918
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stochastic opening and closing of voltage-gated ion channels produces noise in neurons. The effect of this noise on neuronal performance has been modelled using either an approximate (Langevin) model, based on stochastic differential equations, or an exact model, based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons using single-compartment Hodgkin-Huxley models containing either both $Na^{+}$ and $K^{+}$ or only $K^{+}$ voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas and channel numbers. We find that, in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, overestimating information rates for both spiking and non-spiking membranes. Even with increasing numbers of channels the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
[ { "created": "Wed, 16 Dec 2009 20:18:36 GMT", "version": "v1" } ]
2015-05-14
[ [ "Sengupta", "B.", "" ], [ "Laughlin", "S. B.", "" ], [ "Niven", "J. E.", "" ] ]
The stochastic opening and closing of voltage-gated ion channels produces noise in neurons. The effect of this noise on neuronal performance has been modelled using either an approximate (Langevin) model, based on stochastic differential equations, or an exact model, based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons using single-compartment Hodgkin-Huxley models containing either both $Na^{+}$ and $K^{+}$ or only $K^{+}$ voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas and channel numbers. We find that, in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, overestimating information rates for both spiking and non-spiking membranes. Even with increasing numbers of channels the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
1307.0225
William Bialek
Stephanie E. Palmer, Olivier Marre, Michael J. Berry II, and William Bialek
Predictive information in a sensory population
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Guiding behavior requires the brain to make predictions about future sensory inputs. Here we show that efficient predictive computation starts at the earliest stages of the visual system. We estimate how much information groups of retinal ganglion cells carry about the future state of their visual inputs, and show that every cell we can observe participates in a group of cells for which this predictive information is close to the physical limit set by the statistical structure of the inputs themselves. Groups of cells in the retina also carry information about the future state of their own activity, and we show that this information can be compressed further and encoded by downstream predictor neurons, which then exhibit interesting feature selectivity. Efficient representation of predictive information is a candidate principle that can be applied at each stage of neural computation.
[ { "created": "Sun, 30 Jun 2013 17:36:28 GMT", "version": "v1" } ]
2013-07-02
[ [ "Palmer", "Stephanie E.", "" ], [ "Marre", "Olivier", "" ], [ "Berry", "Michael J.", "II" ], [ "Bialek", "William", "" ] ]
Guiding behavior requires the brain to make predictions about future sensory inputs. Here we show that efficient predictive computation starts at the earliest stages of the visual system. We estimate how much information groups of retinal ganglion cells carry about the future state of their visual inputs, and show that every cell we can observe participates in a group of cells for which this predictive information is close to the physical limit set by the statistical structure of the inputs themselves. Groups of cells in the retina also carry information about the future state of their own activity, and we show that this information can be compressed further and encoded by downstream predictor neurons, which then exhibit interesting feature selectivity. Efficient representation of predictive information is a candidate principle that can be applied at each stage of neural computation.
1403.1551
Geoffrey Hoffmann PhD
Earnest Leung, Geoffrey W. Hoffmann
MHC Restriction of V-V Interactions in Serum IgG
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to Jerne's idiotypic network hypothesis, the adaptive immune system is regulated by interactions between the variable regions of antibodies, B cells, and T cells [1]. The symmetrical immune network theory [2,3] is based on Jerne's hypothesis, and provides a basis for understanding many of the phenomena of adaptive immunity. The theory includes the postulate that the repertoire of serum IgG molecules is regulated by T cells, with the result that IgG molecules express V region determinants that mimic V region determinants present on suppressor T cells. In this paper we describe rapid binding between purified murine serum IgG of H-2b and H-2d mice and serum IgG from the same strain and from MHC-matched mice, but not between serum IgG preparations of mice with different MHC genes. We interpret this surprising finding in terms of a model in which IgG molecules are selected to have both anti-anti-(self MHC class II) and anti-anti-anti-(self MHC class II) specificity.
[ { "created": "Tue, 4 Mar 2014 19:43:52 GMT", "version": "v1" } ]
2014-03-07
[ [ "Leung", "Earnest", "" ], [ "Hoffmann", "Geoffrey W.", "" ] ]
According to Jerne's idiotypic network hypothesis, the adaptive immune system is regulated by interactions between the variable regions of antibodies, B cells, and T cells [1]. The symmetrical immune network theory [2,3] is based on Jerne's hypothesis, and provides a basis for understanding many of the phenomena of adaptive immunity. The theory includes the postulate that the repertoire of serum IgG molecules is regulated by T cells, with the result that IgG molecules express V region determinants that mimic V region determinants present on suppressor T cells. In this paper we describe rapid binding between purified murine serum IgG of H-2b and H-2d mice and serum IgG from the same strain and from MHC-matched mice, but not between serum IgG preparations of mice with different MHC genes. We interpret this surprising finding in terms of a model in which IgG molecules are selected to have both anti-anti-(self MHC class II) and anti-anti-anti-(self MHC class II) specificity.
2009.09950
Jorge P. Rodr\'iguez
Jorge P. Rodr\'iguez and V\'ictor M. Egu\'iluz
Coupling between COVID-19 and seasonal influenza leads to synchronization of their dynamics
null
Chaos 33, 021103 (2023)
10.1063/5.0137380
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactions between COVID-19 and other pathogens may change their dynamics. Specifically, this may hinder the modelling of empirical data when the symptoms of both infections are hard to distinguish. We introduce a model coupling the dynamics of COVID-19 and seasonal influenza, simulating cooperation, competition and asymmetric interactions. We find that the coupling synchronizes both infections, with a strong influence on the dynamics of influenza, reducing its duration to half.
[ { "created": "Thu, 17 Sep 2020 20:52:50 GMT", "version": "v1" } ]
2023-02-24
[ [ "Rodríguez", "Jorge P.", "" ], [ "Eguíluz", "Víctor M.", "" ] ]
Interactions between COVID-19 and other pathogens may change their dynamics. Specifically, this may hinder the modelling of empirical data when the symptoms of both infections are hard to distinguish. We introduce a model coupling the dynamics of COVID-19 and seasonal influenza, simulating cooperation, competition and asymmetric interactions. We find that the coupling synchronizes both infections, with a strong influence on the dynamics of influenza, reducing its duration to half.
1310.1653
Henry Lin
Henry Lin
Theoretical Bounds on Mate-Pair Information for Accurate Genome Assembly
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past two decades, a series of works have aimed at studying the problem of genome assembly: the process of reconstructing a genome from sequence reads. An early formulation of the genome assembly problem showed that genome reconstruction is NP-hard when framed as finding the shortest sequence that contains all observed reads. Although this original formulation is very simplistic and does not allow for mate-pair information, subsequent formulations have also proven to be NP-hard, and/or may not be guaranteed to return a correct assembly. In this paper, we provide an alternate perspective on the genome assembly problem by showing genome assembly is easy when provided with sufficient mate-pair information. Moreover, we quantify the number of mate-pair libraries necessary and sufficient for accurate genome assembly, in terms of the length of the longest repetitive region within a genome. In our analysis, we consider an idealized sequencing model where each mate-pair library generates pairs of error-free reads with a fixed and known insert size at each position in the genome. Even in this idealized model, we show that accurate genome reconstruction cannot be guaranteed in the worst case unless at least roughly R/2L mate-pair libraries are produced, where R is the length of the longest repetitive region in the genome and L is the length of each read. On the other hand, if (R/L)+1 mate-pair libraries are provided, then a simple algorithm can be used to find a correct genome assembly easily in polynomial time. Although (R/L)+1 mate-pair libraries may be too many to produce in practice, the previous bounds only hold in the worst case. In our last result, we show that if additional conditions hold on a genome, a correct assembly can be guaranteed with only O(log (R/L)) mate-pair libraries.
[ { "created": "Mon, 7 Oct 2013 01:45:40 GMT", "version": "v1" }, { "created": "Fri, 11 Oct 2013 16:14:47 GMT", "version": "v2" }, { "created": "Fri, 27 Dec 2013 16:33:55 GMT", "version": "v3" } ]
2013-12-30
[ [ "Lin", "Henry", "" ] ]
Over the past two decades, a series of works have aimed at studying the problem of genome assembly: the process of reconstructing a genome from sequence reads. An early formulation of the genome assembly problem showed that genome reconstruction is NP-hard when framed as finding the shortest sequence that contains all observed reads. Although this original formulation is very simplistic and does not allow for mate-pair information, subsequent formulations have also proven to be NP-hard, and/or may not be guaranteed to return a correct assembly. In this paper, we provide an alternate perspective on the genome assembly problem by showing genome assembly is easy when provided with sufficient mate-pair information. Moreover, we quantify the number of mate-pair libraries necessary and sufficient for accurate genome assembly, in terms of the length of the longest repetitive region within a genome. In our analysis, we consider an idealized sequencing model where each mate-pair library generates pairs of error-free reads with a fixed and known insert size at each position in the genome. Even in this idealized model, we show that accurate genome reconstruction cannot be guaranteed in the worst case unless at least roughly R/2L mate-pair libraries are produced, where R is the length of the longest repetitive region in the genome and L is the length of each read. On the other hand, if (R/L)+1 mate-pair libraries are provided, then a simple algorithm can be used to find a correct genome assembly easily in polynomial time. Although (R/L)+1 mate-pair libraries may be too many to produce in practice, the previous bounds only hold in the worst case. In our last result, we show that if additional conditions hold on a genome, a correct assembly can be guaranteed with only O(log (R/L)) mate-pair libraries.
1904.10393
Hector Zenil
Alberto Hern\'andez-Espinosa, H\'ector Zenil, Narsis A. Kiani, Jesper Tegn\'er
Estimations of Integrated Information Based on Algorithmic Complexity and Dynamic Querying
33 pages + Appendix = 44 pages
Entropy, 2019
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The concept of information has emerged as a language in its own right, bridging several disciplines that analyze natural phenomena and man-made systems. Integrated information has been introduced as a metric to quantify the amount of information generated by a system beyond the information generated by its elements. Yet, this intriguing notion comes with the price of being prohibitively expensive to calculate, since the calculations require an exponential number of sub-divisions of a system. Here we introduce a novel framework to connect algorithmic randomness and integrated information and a numerical method for estimating integrated information using a perturbation test rooted in algorithmic information dynamics. This method quantifies the change in program size of a system when subjected to a perturbation. The intuition behind this is that if an object is random, then random perturbations have little to no effect on the length of its shortest program, but when an object has the ability to move in both directions (towards or away from randomness), it will be shown to be better integrated, providing a measure of sophistication that tells structure apart from both randomness and simplicity. We show that an object with a high integrated information value is also more compressible, and is, therefore, more sensitive to perturbations. We find that such a perturbation test quantifying compression sensitivity provides a system with a means to extract explanations--causal accounts--of its own behaviour. Our technique can reduce the number of calculations to arrive at some bounds or estimations, as the algorithmic perturbation test guides an efficient search for estimating integrated information. Our work sets the stage for a systematic exploration of connections between algorithmic complexity and integrated information at the level of both theory and practice.
[ { "created": "Tue, 9 Apr 2019 18:05:05 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2019 19:29:49 GMT", "version": "v2" } ]
2019-06-10
[ [ "Hernández-Espinosa", "Alberto", "" ], [ "Zenil", "Héctor", "" ], [ "Kiani", "Narsis A.", "" ], [ "Tegnér", "Jesper", "" ] ]
The concept of information has emerged as a language in its own right, bridging several disciplines that analyze natural phenomena and man-made systems. Integrated information has been introduced as a metric to quantify the amount of information generated by a system beyond the information generated by its elements. Yet, this intriguing notion comes with the price of being prohibitively expensive to calculate, since the calculations require an exponential number of sub-divisions of a system. Here we introduce a novel framework to connect algorithmic randomness and integrated information and a numerical method for estimating integrated information using a perturbation test rooted in algorithmic information dynamics. This method quantifies the change in program size of a system when subjected to a perturbation. The intuition behind this is that if an object is random, then random perturbations have little to no effect on the length of its shortest program, but when an object has the ability to move in both directions (towards or away from randomness), it will be shown to be better integrated, providing a measure of sophistication that tells structure apart from both randomness and simplicity. We show that an object with a high integrated information value is also more compressible, and is, therefore, more sensitive to perturbations. We find that such a perturbation test quantifying compression sensitivity provides a system with a means to extract explanations--causal accounts--of its own behaviour. Our technique can reduce the number of calculations to arrive at some bounds or estimations, as the algorithmic perturbation test guides an efficient search for estimating integrated information. Our work sets the stage for a systematic exploration of connections between algorithmic complexity and integrated information at the level of both theory and practice.
1404.0594
Samuel Scarpino
Samuel V. Scarpino, Ross Gillette, David Crews
multiDimBio: An R Package for the Design, Analysis, and Visualization of Systems Biology Experiments
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past decade has witnessed a dramatic increase in the size and scope of biological and behavioral experiments. These experiments are providing an unprecedented level of detail and depth of data. However, this increase in data presents substantial statistical and graphical hurdles to overcome, namely how to distinguish signal from noise and how to visualize multidimensional results. Here we present a series of tools designed to support a research project from inception to publication. We provide implementation of dimension reduction techniques and visualizations that function well with the types of data often seen in animal behavior studies. This package is designed to be used with experimental data but can also be used for experimental design and sample justification. The goal for this project is to create a package that will evolve over time, thereby remaining relevant and reflective of current methods and techniques.
[ { "created": "Wed, 2 Apr 2014 15:51:56 GMT", "version": "v1" } ]
2014-04-03
[ [ "Scarpino", "Samuel V.", "" ], [ "Gillette", "Ross", "" ], [ "Crews", "David", "" ] ]
The past decade has witnessed a dramatic increase in the size and scope of biological and behavioral experiments. These experiments are providing an unprecedented level of detail and depth of data. However, this increase in data presents substantial statistical and graphical hurdles to overcome, namely how to distinguish signal from noise and how to visualize multidimensional results. Here we present a series of tools designed to support a research project from inception to publication. We provide implementation of dimension reduction techniques and visualizations that function well with the types of data often seen in animal behavior studies. This package is designed to be used with experimental data but can also be used for experimental design and sample justification. The goal for this project is to create a package that will evolve over time, thereby remaining relevant and reflective of current methods and techniques.
2103.09559
Mar\'ia Vallet-Regi
Juan L. Paris, Nuria Lafuente-Gomez, M. Victoria Cabanas, Jesus Roman, Juan Pena, Maria Vallet-Regi
Fabrication of a nanoparticle-containing 3D porous bone scaffold with proangiogenic and antibacterial properties
29 pages, 10 figures
Acta Biomaterialia 86, 441-449 (2019)
10.1016/j.actbio.2019.01.013
null
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
3D porous scaffolds based on agarose and nanocrystalline apatite, two structural components that act as a temporary mineralized extracellular matrix, were prepared by the GELPOR3D method. This shaping technology allows the introduction of thermally-labile molecules within the scaffolds during the fabrication procedure. An angiogenic protein, Vascular Endothelial Growth Factor, and an antibiotic, cephalexin, loaded in mesoporous silica nanoparticles, were included to design multifunctional scaffolds for bone reconstruction. The dual release of both molecules showed a pro-angiogenic behaviour in chicken embryos grown ex ovo, while, at the same time providing an antibiotic local concentration capable of inhibiting Staphylococcus aureus bacterial growth. In this sense, different release patterns, monitored by UV-spectroscopy, could be tailored as a function of the cephalexin loading strategy. The scaffold surface was characterized by a high hydrophilicity, as determined by contact angle measurements, that facilitated the adhesion and proliferation of preosteoblastic cells.
[ { "created": "Wed, 17 Mar 2021 10:44:35 GMT", "version": "v1" } ]
2021-03-18
[ [ "Paris", "Juan L.", "" ], [ "Lafuente-Gomez", "Nuria", "" ], [ "Cabanas", "M. Victoria", "" ], [ "Roman", "Jesus", "" ], [ "Pena", "Juan", "" ], [ "Vallet-Regi", "Maria", "" ] ]
3D porous scaffolds based on agarose and nanocrystalline apatite, two structural components that act as a temporary mineralized extracellular matrix, were prepared by the GELPOR3D method. This shaping technology allows the introduction of thermally-labile molecules within the scaffolds during the fabrication procedure. An angiogenic protein, Vascular Endothelial Growth Factor, and an antibiotic, cephalexin, loaded in mesoporous silica nanoparticles, were included to design multifunctional scaffolds for bone reconstruction. The dual release of both molecules showed a pro-angiogenic behaviour in chicken embryos grown ex ovo, while, at the same time providing an antibiotic local concentration capable of inhibiting Staphylococcus aureus bacterial growth. In this sense, different release patterns, monitored by UV-spectroscopy, could be tailored as a function of the cephalexin loading strategy. The scaffold surface was characterized by a high hydrophilicity, as determined by contact angle measurements, that facilitated the adhesion and proliferation of preosteoblastic cells.
1603.06410
Laila Kazimierski
Laila Daniela Kazimierski, Guillermo Abramson, Marcelo N\'estor Kuperman
The movement of a forager: Strategies for the efficient use of resources
null
null
10.1140/epjb/e2016-70241-1
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a simple model of a foraging animal that modifies the substrate on which it moves. This substrate provides its only resource, and the forager manages it by taking a limited portion at each visited site. The resource recovers its value after the visit following a relaxation law. We study different scenarios to analyze the efficiency of the managing strategy, which corresponds to controlling the bite size. We observe the non-trivial emergence of a home range, which is visited in a periodic way. Both the duration of the corresponding cycles and the transient until they emerge are affected by the bite size. Our results show that the most efficient use of the resource, measured as the balance between gathering and travelled distance, corresponds to foragers that take larger portions but without exhausting the resource. We also analyze the use of space by determining the number of attractors of the dynamics, and we observe that it depends on the bite size and the recovery time of the resource.
[ { "created": "Mon, 21 Mar 2016 12:39:29 GMT", "version": "v1" } ]
2016-11-23
[ [ "Kazimierski", "Laila Daniela", "" ], [ "Abramson", "Guillermo", "" ], [ "Kuperman", "Marcelo Néstor", "" ] ]
We study a simple model of a foraging animal that modifies the substrate on which it moves. This substrate provides its only resource, and the forager manages it by taking a limited portion at each visited site. The resource recovers its value after the visit following a relaxation law. We study different scenarios to analyze the efficiency of the managing strategy, which corresponds to controlling the bite size. We observe the non-trivial emergence of a home range, which is visited in a periodic way. Both the duration of the corresponding cycles and the transient until they emerge are affected by the bite size. Our results show that the most efficient use of the resource, measured as the balance between gathering and travelled distance, corresponds to foragers that take larger portions but without exhausting the resource. We also analyze the use of space by determining the number of attractors of the dynamics, and we observe that it depends on the bite size and the recovery time of the resource.
2112.07953
Thierry Mora
Cosimo Lupo, Natanael Spisak, Aleksandra M. Walczak, Thierry Mora
Learning the statistics and landscape of somatic mutation-induced insertions and deletions in antibodies
null
null
10.1371/journal.pcbi.1010167
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Affinity maturation is crucial for improving the binding affinity of antibodies to antigens. This process is mainly driven by point substitutions caused by somatic hypermutations of the immunoglobulin gene. It also includes deletions and insertions of genomic material known as indels. While the landscape of point substitutions has been extensively studied, a detailed statistical description of indels is still lacking. Here we present a probabilistic inference tool to learn the statistics of indels from repertoire sequencing data, which overcomes the pitfalls and biases of standard annotation methods. The model includes antibody-specific maturation ages to account for variable mutational loads in the repertoire. After validation on synthetic data, we applied our tool to a large dataset of human immunoglobulin heavy chains. The inferred model allows us to identify universal statistical features of indels in heavy chains. We report distinct insertion and deletion hotspots, and show that the distribution of lengths of indels follows a geometric distribution, which puts constraints on future mechanistic models of the hypermutation process.
[ { "created": "Wed, 15 Dec 2021 08:24:58 GMT", "version": "v1" }, { "created": "Mon, 4 Apr 2022 07:32:35 GMT", "version": "v2" } ]
2022-10-12
[ [ "Lupo", "Cosimo", "" ], [ "Spisak", "Natanael", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ] ]
Affinity maturation is crucial for improving the binding affinity of antibodies to antigens. This process is mainly driven by point substitutions caused by somatic hypermutations of the immunoglobulin gene. It also includes deletions and insertions of genomic material known as indels. While the landscape of point substitutions has been extensively studied, a detailed statistical description of indels is still lacking. Here we present a probabilistic inference tool to learn the statistics of indels from repertoire sequencing data, which overcomes the pitfalls and biases of standard annotation methods. The model includes antibody-specific maturation ages to account for variable mutational loads in the repertoire. After validation on synthetic data, we applied our tool to a large dataset of human immunoglobulin heavy chains. The inferred model allows us to identify universal statistical features of indels in heavy chains. We report distinct insertion and deletion hotspots, and show that the distribution of lengths of indels follows a geometric distribution, which puts constraints on future mechanistic models of the hypermutation process.
1610.05872
Sergey Stavisky
David Sussillo, Sergey D. Stavisky, Jonathan C. Kao, Stephen I. Ryu, Krishna V. Shenoy
Making brain-machine interfaces robust to future neural variability
D.S., S.D.S., and J.C.K. contributed equally to this work
Nature Communications. 7:13749 (2016)
10.1038/ncomms13749
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training datasets. When tested with a non-human primate preclinical BMI model, this decoder was robust under conditions that disabled a state-of-the-art Kalman filter based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable daily BMI use by reducing decoder retraining downtime.
[ { "created": "Wed, 19 Oct 2016 05:32:32 GMT", "version": "v1" } ]
2016-12-15
[ [ "Sussillo", "David", "" ], [ "Stavisky", "Sergey D.", "" ], [ "Kao", "Jonathan C.", "" ], [ "Ryu", "Stephen I.", "" ], [ "Shenoy", "Krishna V.", "" ] ]
A major hurdle to clinical translation of brain-machine interfaces (BMIs) is that current decoders, which are trained from a small quantity of recent data, become ineffective when neural recording conditions subsequently change. We tested whether a decoder could be made more robust to future neural variability by training it to handle a variety of recording conditions sampled from months of previously collected data as well as synthetic training data perturbations. We developed a new multiplicative recurrent neural network BMI decoder that successfully learned a large variety of neural-to-kinematic mappings and became more robust with larger training datasets. When tested with a non-human primate preclinical BMI model, this decoder was robust under conditions that disabled a state-of-the-art Kalman filter based decoder. These results validate a new BMI strategy in which accumulated data history is effectively harnessed, and may facilitate reliable daily BMI use by reducing decoder retraining downtime.
1310.5390
Ron Nielsen
Ron W Nielsen aka Jan Nurzynski
Malthusian stagnation or Malthusian regeneration?
18 pages, 3 figures, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empirical evidence questions fundamental concepts of the human population dynamics. One of the key conclusions of this study is that positive checks activate the efficient Malthusian regeneration mechanism, suggesting that the Epoch of Malthusian Stagnation, the first stage of growth claimed by the Demographic Transition Theory, did not exist.
[ { "created": "Mon, 21 Oct 2013 00:33:54 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2013 01:55:53 GMT", "version": "v2" } ]
2013-11-07
[ [ "Nurzynski", "Ron W Nielsen aka Jan", "" ] ]
Empirical evidence questions fundamental concepts of the human population dynamics. One of the key conclusions of this study is that positive checks activate the efficient Malthusian regeneration mechanism, suggesting that the Epoch of Malthusian Stagnation, the first stage of growth claimed by the Demographic Transition Theory, did not exist.
1510.07415
Derdei Bichara
Baltazar Espinoza, Victor Moreno, Derdei Bichara and Carlos Castillo-Chavez
Assessing the Efficiency of \textit{Cordon Sanitaire} as a Control Strategy of Ebola
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formulate a two-patch mathematical model for Ebola Virus Disease dynamics in order to evaluate the effectiveness of \textit{cordons sanitaires}, mandatory movement restrictions between communities while exploring their role on disease dynamics and final epidemic size. Simulations show that severe restrictions in movement between high and low risk areas of closely linked communities may have a deleterious impact on the overall levels of infection in the total population.
[ { "created": "Mon, 26 Oct 2015 09:24:55 GMT", "version": "v1" } ]
2015-10-27
[ [ "Espinoza", "Baltazar", "" ], [ "Moreno", "Victor", "" ], [ "Bichara", "Derdei", "" ], [ "Castillo-Chavez", "Carlos", "" ] ]
We formulate a two-patch mathematical model for Ebola Virus Disease dynamics in order to evaluate the effectiveness of \textit{cordons sanitaires}, mandatory movement restrictions between communities while exploring their role on disease dynamics and final epidemic size. Simulations show that severe restrictions in movement between high and low risk areas of closely linked communities may have a deleterious impact on the overall levels of infection in the total population.
2310.02378
Rodrigo Ramos
Rodrigo Henrique Ramos, Cynthia de Oliveira Lage Ferreira, Adenilso Simao
Human Protein Protein Interaction Networks: A Topological Comparison Review
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Protein-Protein Interaction Networks aim to model the interactome, providing a powerful tool for understanding the complex relationships governing cellular processes. These networks have numerous applications, including functional enrichment, discovering cancer driver genes, identifying drug targets, and more. Various databases make protein-protein networks available for many species, including Homo sapiens. This work topologically compares four Homo sapiens networks using a coarse-to-fine approach, comparing global characteristics, sub-network topology, specific nodes centrality, and interaction significance. Results show that the four human protein networks share many common protein-encoding genes and some global measures, but significantly differ in the interactions and neighbourhood. Small sub-networks from cancer pathways performed better than the whole networks, indicating an improved topological consistency in functional pathways. The centrality analysis shows that the same genes play different roles in different networks. We discuss how studies and analyses that rely on protein-protein networks for humans should consider their similarities and distinctions.
[ { "created": "Tue, 3 Oct 2023 19:00:48 GMT", "version": "v1" } ]
2023-10-05
[ [ "Ramos", "Rodrigo Henrique", "" ], [ "Ferreira", "Cynthia de Oliveira Lage", "" ], [ "Simao", "Adenilso", "" ] ]
Protein-Protein Interaction Networks aim to model the interactome, providing a powerful tool for understanding the complex relationships governing cellular processes. These networks have numerous applications, including functional enrichment, discovering cancer driver genes, identifying drug targets, and more. Various databases make protein-protein networks available for many species, including Homo sapiens. This work topologically compares four Homo sapiens networks using a coarse-to-fine approach, comparing global characteristics, sub-network topology, specific nodes centrality, and interaction significance. Results show that the four human protein networks share many common protein-encoding genes and some global measures, but significantly differ in the interactions and neighbourhood. Small sub-networks from cancer pathways performed better than the whole networks, indicating an improved topological consistency in functional pathways. The centrality analysis shows that the same genes play different roles in different networks. We discuss how studies and analyses that rely on protein-protein networks for humans should consider their similarities and distinctions.
0912.5120
Mark Little Mark Peter Little
Mark P Little, Anna Gola, Ioanna Tzoulaki, Wendy Vandoolaeghe
Variant assumptions made in deriving equilibrium solutions to Little et al (PLoS Comput Biol 2009 5(10) e1000539)
4 pages, 0 figures. This expands on the comments made on the PLoS Comput Biol webpage for the article of Little et al (PLoS Comput Biol 2009 5(10) e1000539). Acknowledgements have been added
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper of Little et al. (PloS Comput Biol 2009 5(10) e1000539) outlined a system of reaction-diffusion equations that were used to describe induction of atherosclerotic disease. These were solved by considering an equilibrium solution and small perturbations around this equilibrium. Here we consider slight variant sets of assumptions that could be used to derive equilibrium solutions. In general they do not imply any change in the numerical results relating to monocyte chemo-attractant protein-1 (MCP-1) presented in that paper.
[ { "created": "Tue, 29 Dec 2009 16:59:14 GMT", "version": "v1" } ]
2009-12-31
[ [ "Little", "Mark P", "" ], [ "Gola", "Anna", "" ], [ "Tzoulaki", "Ioanna", "" ], [ "Vandoolaeghe", "Wendy", "" ] ]
The paper of Little et al. (PloS Comput Biol 2009 5(10) e1000539) outlined a system of reaction-diffusion equations that were used to describe induction of atherosclerotic disease. These were solved by considering an equilibrium solution and small perturbations around this equilibrium. Here we consider slight variant sets of assumptions that could be used to derive equilibrium solutions. In general they do not imply any change in the numerical results relating to monocyte chemo-attractant protein-1 (MCP-1) presented in that paper.
1603.03815
Nan Xu
Nan Xu, R. Nathan Spreng, Peter C. Doerschuk
Initial validation for the estimation of resting-state fMRI effective connectivity by a generalization of the correlation approach
24 pages, 11 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest. However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest based on BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.
[ { "created": "Fri, 11 Mar 2016 22:54:31 GMT", "version": "v1" }, { "created": "Tue, 10 May 2016 20:24:37 GMT", "version": "v2" }, { "created": "Tue, 24 Jan 2017 00:43:37 GMT", "version": "v3" } ]
2017-01-25
[ [ "Xu", "Nan", "" ], [ "Spreng", "R. Nathan", "" ], [ "Doerschuk", "Peter C.", "" ] ]
Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest. However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the "common driver" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest based on BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain.
1908.09853
Maximilian Pichler
Maximilian Pichler, Virginie Boreux, Alexandra-Maria Klein, Matthias Schleuning, Florian Hartig
Machine learning algorithms to infer trait-matching and predict species interactions in ecological networks
48 pages, 5 figures
null
10.1111/2041-210X.13329
null
q-bio.PE cs.LG q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecologists have long suspected that species are more likely to interact if their traits match in a particular way. For example, a pollination interaction may be more likely if the proportions of a bee's tongue fit a plant's flower shape. Empirical estimates of the importance of trait-matching for determining species interactions, however, vary significantly among different types of ecological networks. Here, we show that ambiguity among empirical trait-matching studies may have arisen at least in parts from using overly simple statistical models. Using simulated and real data, we contrast conventional generalized linear models (GLM) with more flexible Machine Learning (ML) models (Random Forest, Boosted Regression Trees, Deep Neural Networks, Convolutional Neural Networks, Support Vector Machines, naive Bayes, and k-Nearest-Neighbor), testing their ability to predict species interactions based on traits, and infer trait combinations causally responsible for species interactions. We find that the best ML models can successfully predict species interactions in plant-pollinator networks, outperforming GLMs by a substantial margin. Our results also demonstrate that ML models can better identify the causally responsible trait-matching combinations than GLMs. In two case studies, the best ML models successfully predicted species interactions in a global plant-pollinator database and inferred ecologically plausible trait-matching rules for a plant-hummingbird network, without any prior assumptions. We conclude that flexible ML models offer many advantages over traditional regression models for understanding interaction networks. We anticipate that these results extrapolate to other ecological network types. More generally, our results highlight the potential of machine learning and artificial intelligence for inference in ecology, beyond standard tasks such as image or pattern recognition.
[ { "created": "Mon, 26 Aug 2019 18:00:09 GMT", "version": "v1" }, { "created": "Mon, 4 Nov 2019 18:58:36 GMT", "version": "v2" } ]
2019-11-05
[ [ "Pichler", "Maximilian", "" ], [ "Boreux", "Virginie", "" ], [ "Klein", "Alexandra-Maria", "" ], [ "Schleuning", "Matthias", "" ], [ "Hartig", "Florian", "" ] ]
Ecologists have long suspected that species are more likely to interact if their traits match in a particular way. For example, a pollination interaction may be more likely if the proportions of a bee's tongue fit a plant's flower shape. Empirical estimates of the importance of trait-matching for determining species interactions, however, vary significantly among different types of ecological networks. Here, we show that ambiguity among empirical trait-matching studies may have arisen at least in parts from using overly simple statistical models. Using simulated and real data, we contrast conventional generalized linear models (GLM) with more flexible Machine Learning (ML) models (Random Forest, Boosted Regression Trees, Deep Neural Networks, Convolutional Neural Networks, Support Vector Machines, naive Bayes, and k-Nearest-Neighbor), testing their ability to predict species interactions based on traits, and infer trait combinations causally responsible for species interactions. We find that the best ML models can successfully predict species interactions in plant-pollinator networks, outperforming GLMs by a substantial margin. Our results also demonstrate that ML models can better identify the causally responsible trait-matching combinations than GLMs. In two case studies, the best ML models successfully predicted species interactions in a global plant-pollinator database and inferred ecologically plausible trait-matching rules for a plant-hummingbird network, without any prior assumptions. We conclude that flexible ML models offer many advantages over traditional regression models for understanding interaction networks. We anticipate that these results extrapolate to other ecological network types. More generally, our results highlight the potential of machine learning and artificial intelligence for inference in ecology, beyond standard tasks such as image or pattern recognition.
2101.11658
Carmen Minuesa
Cristina Gutierrez and Carmen Minuesa
A two-sex branching process with oscillations: application to predator-prey systems
25 pages, 10 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A two-type two-sex branching process is introduced with the aim of describing the interaction of predator and prey populations with sexual reproduction and promiscuous mating. In each generation and in each species the total number of individuals which mate and produce offspring is controlled by a binomial distribution with size given by this number of individuals and probability of success depending on the density of preys per predator. The resulting model enables us to depict the typical cyclic behaviour of predator-prey systems under some mild assumptions on the shape of the function that characterises the probability of survival of the previous binomial distribution. We present some basic results about fixation and extinction of both species as well as conditions for the coexistence of both of them. We also analyse the suitability of the process to model real ecosystems comparing our model with a real dataset.
[ { "created": "Wed, 27 Jan 2021 19:35:44 GMT", "version": "v1" } ]
2021-01-29
[ [ "Gutierrez", "Cristina", "" ], [ "Minuesa", "Carmen", "" ] ]
A two-type two-sex branching process is introduced with the aim of describing the interaction of predator and prey populations with sexual reproduction and promiscuous mating. In each generation and in each species the total number of individuals which mate and produce offspring is controlled by a binomial distribution with size given by this number of individuals and probability of success depending on the density of preys per predator. The resulting model enables us to depict the typical cyclic behaviour of predator-prey systems under some mild assumptions on the shape of the function that characterises the probability of survival of the previous binomial distribution. We present some basic results about fixation and extinction of both species as well as conditions for the coexistence of both of them. We also analyse the suitability of the process to model real ecosystems comparing our model with a real dataset.
0911.5303
Bernat Corominas-Murtra BCM
Carlos Rodriguez-Caso, Bernat Corominas-Murtra, Ricard V. Sol\'e
On the basic computational structure of gene regulatory networks
This article is published at Molecular Biosystems, Please cite as: Carlos Rodriguez-Caso, Bernat Corominas-Murtra and Ricard V. Sole. Mol. BioSyst., 2009, 5 pp 1617--1719
Carlos Rodriguez-Caso, Bernat Corominas-Murtra and Ricard V. Sole. Mol. BioSyst., 2009, 5 pp 1617--1719
10.1039/b904960f
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks constitute the first layer of the cellular computation for cell adaptation and surveillance. In these webs, a set of causal relations is built up from thousands of interactions between transcription factors and their target genes. The large size of these webs and their entangled nature make difficult to achieve a global view of their internal organisation. Here, this problem has been addressed through a comparative study for {\em Escherichia coli}, {\em Bacillus subtilis} and {\em Saccharomyces cerevisiae} gene regulatory networks. We extract the minimal core of causal relations, uncovering the hierarchical and modular organisation from a novel dynamical/causal perspective. Our results reveal a marked top-down hierarchy containing several small dynamical modules for \textit{E. coli} and \textit{B. subtilis}. Conversely, the yeast network displays a single but large dynamical module in the middle of a bow-tie structure. We found that these dynamical modules capture the relevant wiring among both common and organism-specific biological functions such as transcription initiation, metabolic control, signal transduction, response to stress, sporulation and cell cycle. Functional and topological results suggest that two fundamentally different forms of logic organisation may have evolved in bacteria and yeast.
[ { "created": "Fri, 27 Nov 2009 16:18:11 GMT", "version": "v1" } ]
2009-11-30
[ [ "Rodriguez-Caso", "Carlos", "" ], [ "Corominas-Murtra", "Bernat", "" ], [ "Solé", "Ricard V.", "" ] ]
Gene regulatory networks constitute the first layer of the cellular computation for cell adaptation and surveillance. In these webs, a set of causal relations is built up from thousands of interactions between transcription factors and their target genes. The large size of these webs and their entangled nature make difficult to achieve a global view of their internal organisation. Here, this problem has been addressed through a comparative study for {\em Escherichia coli}, {\em Bacillus subtilis} and {\em Saccharomyces cerevisiae} gene regulatory networks. We extract the minimal core of causal relations, uncovering the hierarchical and modular organisation from a novel dynamical/causal perspective. Our results reveal a marked top-down hierarchy containing several small dynamical modules for \textit{E. coli} and \textit{B. subtilis}. Conversely, the yeast network displays a single but large dynamical module in the middle of a bow-tie structure. We found that these dynamical modules capture the relevant wiring among both common and organism-specific biological functions such as transcription initiation, metabolic control, signal transduction, response to stress, sporulation and cell cycle. Functional and topological results suggest that two fundamentally different forms of logic organisation may have evolved in bacteria and yeast.
2311.08611
Bo Deng
Bo Deng and Chayu Yang
Theory of Infectious Diseases with Testing and Testing-less Covid-19 Endemic
null
null
null
null
q-bio.PE math.DS nlin.CD q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
What is the long term dynamics of the Covid-19 pandemic? How will it end? Here we constructed an infectious disease model with testing and analyzed the existence and stability of its endemic states. For a large parameter set, including those relevant to the SARS-CoV-2 virus, we demonstrated the existence of one endemic equilibrium without testing and one endemic equilibrium with testing and proved their local and global stabilities for some cases. Our results suggest that the pandemic is to end with a testing-less endemic state through a novel and surprising mechanism called stochastic trapping.
[ { "created": "Wed, 15 Nov 2023 00:25:00 GMT", "version": "v1" } ]
2023-11-16
[ [ "Deng", "Bo", "" ], [ "Yang", "Chayu", "" ] ]
What is the long term dynamics of the Covid-19 pandemic? How will it end? Here we constructed an infectious disease model with testing and analyzed the existence and stability of its endemic states. For a large parameter set, including those relevant to the SARS-CoV-2 virus, we demonstrated the existence of one endemic equilibrium without testing and one endemic equilibrium with testing and proved their local and global stabilities for some cases. Our results suggest that the pandemic is to end with a testing-less endemic state through a novel and surprising mechanism called stochastic trapping.
1704.00301
Nadav M. Shnerb
Haim Weissmann, Rafi Kent, Yaron Michael and Nadav M. Shnerb
Empirical analysis of vegetation dynamics and the possibility of a catastrophic desertification transition
null
null
10.1371/journal.pone.0189058
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The process of desertification in the semi-arid climatic zone is considered by many as a catastrophic regime shift, since the positive feedback of vegetation density on growth rates yields a system that admits alternative steady states. Some support to this idea comes from the analysis of static patterns, where peaks of the vegetation density histogram were associated with these alternative states. Here we present a large-scale empirical study of vegetation dynamics, aimed at identifying and quantifying directly the effects of positive feedback. To do that, we have analyzed vegetation density across $~2.5 \times 10^6 \ \rm{km}^2$ of the African Sahel region, with spatial resolution of $30 \times 30$ meters, using three consecutive snapshots. The results are mixed. The local vegetation density (measured at a single pixel) moves towards the average of the corresponding rainfall line, indicating a purely negative feedback. On the other hand, the chance of spatial clusters (of many "green" pixels) to expand in the next census is growing with their size, suggesting some positive feedback. We show that these apparently contradicting results emerge naturally in a model with positive feedback and strong demographic stochasticity, a model that allows for a catastrophic shift only in a certain range of parameters. Static patterns, like the double peak in the histogram of vegetation density, are shown to vary between censuses, with no apparent correlation with the actual dynamical features.
[ { "created": "Sun, 2 Apr 2017 14:05:12 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2017 06:50:59 GMT", "version": "v2" } ]
2018-02-07
[ [ "Weissmann", "Haim", "" ], [ "Kent", "Rafi", "" ], [ "Michael", "Yaron", "" ], [ "Shnerb", "Nadav M.", "" ] ]
The process of desertification in the semi-arid climatic zone is considered by many as a catastrophic regime shift, since the positive feedback of vegetation density on growth rates yields a system that admits alternative steady states. Some support to this idea comes from the analysis of static patterns, where peaks of the vegetation density histogram were associated with these alternative states. Here we present a large-scale empirical study of vegetation dynamics, aimed at identifying and quantifying directly the effects of positive feedback. To do that, we have analyzed vegetation density across $~2.5 \times 10^6 \ \rm{km}^2$ of the African Sahel region, with spatial resolution of $30 \times 30$ meters, using three consecutive snapshots. The results are mixed. The local vegetation density (measured at a single pixel) moves towards the average of the corresponding rainfall line, indicating a purely negative feedback. On the other hand, the chance of spatial clusters (of many "green" pixels) to expand in the next census is growing with their size, suggesting some positive feedback. We show that these apparently contradicting results emerge naturally in a model with positive feedback and strong demographic stochasticity, a model that allows for a catastrophic shift only in a certain range of parameters. Static patterns, like the double peak in the histogram of vegetation density, are shown to vary between censuses, with no apparent correlation with the actual dynamical features.
1006.1209
Stefan Auer SA
Stefan Auer, Antonio Trovato, Michele Vendruscolo
A Condensation-Ordering Mechanism in Nanoparticle-Catalyzed Peptide Aggregation
null
S. Auer, A. Trovato, M. Vendruscolo (2010) A Condensation-Ordering Mechanism in Nanoparticle-Catalyzed Peptide Aggregation. PLoS Comput Biol 5(8):e1000458
10.1371/journal.pcbi.1000458
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nanoparticles introduced in living cells are capable of strongly promoting the aggregation of peptides and proteins. We use here molecular dynamics simulations to characterise in detail the process by which nanoparticle surfaces catalyse the self-assembly of peptides into fibrillar structures. The simulation of a system of hundreds of peptides over the millisecond timescale enables us to show that the mechanism of aggregation involves a first phase in which small structurally disordered oligomers assemble onto the nanoparticle and a second phase in which they evolve into highly ordered beta-sheets as their size increases.
[ { "created": "Mon, 7 Jun 2010 09:23:36 GMT", "version": "v1" } ]
2010-06-08
[ [ "Auer", "Stefan", "" ], [ "Trovato", "Antonio", "" ], [ "Vendruscolo", "Michele", "" ] ]
Nanoparticles introduced in living cells are capable of strongly promoting the aggregation of peptides and proteins. We use here molecular dynamics simulations to characterise in detail the process by which nanoparticle surfaces catalyse the self-assembly of peptides into fibrillar structures. The simulation of a system of hundreds of peptides over the millisecond timescale enables us to show that the mechanism of aggregation involves a first phase in which small structurally disordered oligomers assemble onto the nanoparticle and a second phase in which they evolve into highly ordered beta-sheets as their size increases.
q-bio/0702039
Atul Narang
Atul Narang
Effect of DNA looping on the induction kinetics of the lac operon
37 pages, J. Theoret. Biol
null
null
null
q-bio.MN q-bio.CB
null
The induction of the lac operon follows cooperative kinetics. The first mechanistic model of these kinetics is the de facto standard in the modeling literature (Yagil & Yagil, Biophys J, 11, 11-27, 1971). Yet, subsequent studies have shown that the model is based on incorrect assumptions. Specifically, the repressor is a tetramer with four (not two) inducer-binding sites, and the operon contains two auxiliary operators (in addition to the main operator). Furthermore, these structural features are crucial for the formation of DNA loops, the key determinants of lac repression and induction. Indeed, the repression is determined almost entirely (>95%) by the looped complexes (Oehler et al, EMBO J, 13, 3348, 1990), and the pronounced cooperativity of the induction curve hinges upon the existence of the looped complexes (Oehler et al, Nucleic Acids Res, 34, 606, 2006). Here, we formulate a model of lac induction taking due account of the tetrameric structure of the repressor and the existence of looped complexes. We show that: (1) The kinetics are significantly more cooperative than those predicted by the Yagil & Yagil model. (2) The model provides good fits to the repression data for cells containing tetrameric (or mutant dimeric) repressor, as well as the induction curves for 6 different strains of E. coli. It also implies that the ratios of certain looped and non-looped complexes are independent of inducer and repressor levels, a conclusion that can be rigorously tested by gel electrophoresis. (3) Repressor overexpression dramatically increases the cooperativity of the induction curve. This suggests that repressor overexpression can induce bistability in systems, such as growth of E. coli on lactose, that are otherwise monostable.
[ { "created": "Mon, 19 Feb 2007 18:30:15 GMT", "version": "v1" } ]
2007-05-23
[ [ "Narang", "Atul", "" ] ]
The induction of the lac operon follows cooperative kinetics. The first mechanistic model of these kinetics is the de facto standard in the modeling literature (Yagil & Yagil, Biophys J, 11, 11-27, 1971). Yet, subsequent studies have shown that the model is based on incorrect assumptions. Specifically, the repressor is a tetramer with four (not two) inducer-binding sites, and the operon contains two auxiliary operators (in addition to the main operator). Furthermore, these structural features are crucial for the formation of DNA loops, the key determinants of lac repression and induction. Indeed, the repression is determined almost entirely (>95%) by the looped complexes (Oehler et al, EMBO J, 13, 3348, 1990), and the pronounced cooperativity of the induction curve hinges upon the existence of the looped complexes (Oehler et al, Nucleic Acids Res, 34, 606, 2006). Here, we formulate a model of lac induction taking due account of the tetrameric structure of the repressor and the existence of looped complexes. We show that: (1) The kinetics are significantly more cooperative than those predicted by the Yagil & Yagil model. (2) The model provides good fits to the repression data for cells containing tetrameric (or mutant dimeric) repressor, as well as the induction curves for 6 different strains of E. coli. It also implies that the ratios of certain looped and non-looped complexes are independent of inducer and repressor levels, a conclusion that can be rigorously tested by gel electrophoresis. (3) Repressor overexpression dramatically increases the cooperativity of the induction curve. This suggests that repressor overexpression can induce bistability in systems, such as growth of E. coli on lactose, that are otherwise monostable.
1405.0576
Christoph Zechner
Christoph Zechner and Heinz Koeppl
Uncoupled Analysis of Stochastic Reaction Networks in Fluctuating Environments
7 pages, 4 figures, Appendix attached as SI.pdf, under submission
null
10.1371/journal.pcbi.1003942
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of stochastic reaction networks within cells are inevitably modulated by factors considered extrinsic to the network such as for instance the fluctuations in ribosome copy numbers for a gene regulatory network. While several recent studies demonstrate the importance of accounting for such extrinsic components, the resulting models are typically hard to analyze. In this work we develop a general mathematical framework that allows us to uncouple the network from its dynamic environment by incorporating only the environment's effect onto the network into a new model. More technically, we show how such fluctuating extrinsic components (e.g., chemical species) can be marginalized in order to obtain this decoupled model. We derive its corresponding process- and master equations and show how stochastic simulations can be performed. Using several case studies, we demonstrate the significance of the approach. For instance, we exemplarily formulate and solve a marginal master equation describing the protein translation and degradation in a fluctuating environment.
[ { "created": "Sat, 3 May 2014 12:23:26 GMT", "version": "v1" } ]
2015-06-19
[ [ "Zechner", "Christoph", "" ], [ "Koeppl", "Heinz", "" ] ]
The dynamics of stochastic reaction networks within cells are inevitably modulated by factors considered extrinsic to the network such as for instance the fluctuations in ribosome copy numbers for a gene regulatory network. While several recent studies demonstrate the importance of accounting for such extrinsic components, the resulting models are typically hard to analyze. In this work we develop a general mathematical framework that allows us to uncouple the network from its dynamic environment by incorporating only the environment's effect onto the network into a new model. More technically, we show how such fluctuating extrinsic components (e.g., chemical species) can be marginalized in order to obtain this decoupled model. We derive its corresponding process- and master equations and show how stochastic simulations can be performed. Using several case studies, we demonstrate the significance of the approach. For instance, we exemplarily formulate and solve a marginal master equation describing the protein translation and degradation in a fluctuating environment.
1604.02487
Russell Schwartz
Theodore Roman, Lu Xie, Russell Schwartz
Automated deconvolution of structured mixtures from bulk tumor genomic data
Paper accepted at RECOMB-CCB 2016
null
10.1371/journal.pcbi.1005815
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: As cancer researchers have come to appreciate the importance of intratumor heterogeneity, much attention has focused on the challenges of accurately profiling heterogeneity in individual patients. Experimental technologies for directly profiling genomes of single cells are rapidly improving, but they are still impractical for large-scale sampling. Bulk genomic assays remain the standard for population-scale studies, but conflate the influences of mixtures of genetically distinct tumor, stromal, and infiltrating immune cells. Many computational approaches have been developed to deconvolute these mixed samples and reconstruct the genomics of genetically homogeneous clonal subpopulations. All such methods, however, are limited to reconstructing only coarse approximations to a few major subpopulations. In prior work, we showed that one can improve deconvolution of genomic data by leveraging substructure in cellular mixtures through a strategy called simplicial complex inference. This strategy, however, is also limited by the difficulty of inferring mixture structure from sparse, noisy assays. Results: We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We show that these improvements lead to more accurate inference of cell populations and mixture proportions in simulated scenarios. We further demonstrate their effectiveness in identifying mixture substructure in real tumor CNV data. Availability: Source code is available at http://www.cs.cmu.edu/~russells/software/WSCUnmix.zip
[ { "created": "Fri, 8 Apr 2016 21:05:27 GMT", "version": "v1" } ]
2018-02-07
[ [ "Roman", "Theodore", "" ], [ "Xie", "Lu", "" ], [ "Schwartz", "Russell", "" ] ]
Motivation: As cancer researchers have come to appreciate the importance of intratumor heterogeneity, much attention has focused on the challenges of accurately profiling heterogeneity in individual patients. Experimental technologies for directly profiling genomes of single cells are rapidly improving, but they are still impractical for large-scale sampling. Bulk genomic assays remain the standard for population-scale studies, but conflate the influences of mixtures of genetically distinct tumor, stromal, and infiltrating immune cells. Many computational approaches have been developed to deconvolute these mixed samples and reconstruct the genomics of genetically homogeneous clonal subpopulations. All such methods, however, are limited to reconstructing only coarse approximations to a few major subpopulations. In prior work, we showed that one can improve deconvolution of genomic data by leveraging substructure in cellular mixtures through a strategy called simplicial complex inference. This strategy, however, is also limited by the difficulty of inferring mixture structure from sparse, noisy assays. Results: We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We show that these improvements lead to more accurate inference of cell populations and mixture proportions in simulated scenarios. We further demonstrate their effectiveness in identifying mixture substructure in real tumor CNV data. Availability: Source code is available at http://www.cs.cmu.edu/~russells/software/WSCUnmix.zip
1704.05942
Adam Gosztolai
Adam Gosztolai, J\"org Schumacher, Volker Behrends, Jacob G Bundy, Franziska Heydenreich, Mark H Bennett, Martin Buck, Mauricio Barahona
GlnK facilitates the dynamic regulation of bacterial nitrogen assimilation
null
null
10.1016/j.bpj.2017.04.012
null
q-bio.SC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ammonium assimilation in E. coli is regulated by two paralogous proteins (GlnB and GlnK), which orchestrate interactions with regulators of gene expression, transport proteins and metabolic pathways. Yet how they conjointly modulate the activity of glutamine synthetase (GS), the key enzyme for nitrogen assimilation, is poorly understood. We combine experiments and theory to study the dynamic roles of GlnB and GlnK during nitrogen starvation and upshift. We measure time-resolved in vivo concentrations of metabolites, total and post-translationally modified proteins, and develop a concise biochemical model of GlnB and GlnK that incorporates competition for active and allosteric sites, as well as functional sequestration of GlnK. The model predicts the responses of GS, GlnB and GlnK under time-varying external ammonium level in the wild type and two genetic knock-outs. Our results show that GlnK is tightly regulated under nitrogen-rich conditions, yet it is expressed during ammonium run-out and starvation. This suggests a role for GlnK as a buffer of nitrogen shock after starvation, and provides a further functional link between nitrogen and carbon metabolisms.
[ { "created": "Wed, 19 Apr 2017 21:56:43 GMT", "version": "v1" } ]
2017-06-28
[ [ "Gosztolai", "Adam", "" ], [ "Schumacher", "Jörg", "" ], [ "Behrends", "Volker", "" ], [ "Bundy", "Jacob G", "" ], [ "Heydenreich", "Franziska", "" ], [ "Bennett", "Mark H", "" ], [ "Buck", "Martin", "" ], [ "Barahona", "Mauricio", "" ] ]
Ammonium assimilation in E. coli is regulated by two paralogous proteins (GlnB and GlnK), which orchestrate interactions with regulators of gene expression, transport proteins and metabolic pathways. Yet how they conjointly modulate the activity of glutamine synthetase (GS), the key enzyme for nitrogen assimilation, is poorly understood. We combine experiments and theory to study the dynamic roles of GlnB and GlnK during nitrogen starvation and upshift. We measure time-resolved in vivo concentrations of metabolites, total and post-translationally modified proteins, and develop a concise biochemical model of GlnB and GlnK that incorporates competition for active and allosteric sites, as well as functional sequestration of GlnK. The model predicts the responses of GS, GlnB and GlnK under time-varying external ammonium level in the wild type and two genetic knock-outs. Our results show that GlnK is tightly regulated under nitrogen-rich conditions, yet it is expressed during ammonium run-out and starvation. This suggests a role for GlnK as a buffer of nitrogen shock after starvation, and provides a further functional link between nitrogen and carbon metabolisms.
1606.09095
Lars Rothkegel
Lars O. M. Rothkegel, Hans A. Trukenbrod, Heiko H. Sch\"utt, Felix A. Wichmann, Ralf Engbert
Influence of initial fixation position in scene viewing
34 pages with 10 figures submitted to Vision Research. Reviews Received on June 8th, 2016 (Minor Revision). Updated Version will be uploaded within the year 2016
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During scene perception our eyes generate complex sequences of fixations. Predictors of fixation locations are bottom-up factors like luminance contrast, top-down factors like viewing instruction, and systematic biases like the tendency to place fixations near the center of an image. However, comparatively little is known about the dynamics of scanpaths after experimental manipulation of specific fixation locations. Here we investigate the influence of initial fixation position on subsequent eye-movement behavior on an image. We presented 64 colored photographs to participants who started their scanpaths from one of two experimentally controlled positions in the right or left part of an image. Additionally, we computed the images' saliency maps and classified them as balanced images or images with high saliency values on either the left or right side of a picture. As a result of the starting point manipulation, we found long transients of mean fixation position and a tendency to overshoot to the image side opposite to the starting position. Possible mechanisms for the generation of this overshoot were investigated using numerical simulations of statistical and dynamical models. We conclude that inhibitory tagging is a viable mechanism for dynamical planning of scanpaths.
[ { "created": "Wed, 29 Jun 2016 13:42:56 GMT", "version": "v1" }, { "created": "Wed, 13 Jul 2016 10:52:50 GMT", "version": "v2" } ]
2016-07-14
[ [ "Rothkegel", "Lars O. M.", "" ], [ "Trukenbrod", "Hans A.", "" ], [ "Schütt", "Heiko H.", "" ], [ "Wichmann", "Felix A.", "" ], [ "Engbert", "Ralf", "" ] ]
During scene perception our eyes generate complex sequences of fixations. Predictors of fixation locations are bottom-up factors like luminance contrast, top-down factors like viewing instruction, and systematic biases like the tendency to place fixations near the center of an image. However, comparatively little is known about the dynamics of scanpaths after experimental manipulation of specific fixation locations. Here we investigate the influence of initial fixation position on subsequent eye-movement behavior on an image. We presented 64 colored photographs to participants who started their scanpaths from one of two experimentally controlled positions in the right or left part of an image. Additionally, we computed the images' saliency maps and classified them as balanced images or images with high saliency values on either the left or right side of a picture. As a result of the starting point manipulation, we found long transients of mean fixation position and a tendency to overshoot to the image side opposite to the starting position. Possible mechanisms for the generation of this overshoot were investigated using numerical simulations of statistical and dynamical models. We conclude that inhibitory tagging is a viable mechanism for dynamical planning of scanpaths.
1501.01282
Jack Peterson
Jack Peterson, Steve Presse, Kristin S. Peterson, Ken A. Dill
Simulated evolution of protein-protein interaction networks with realistic topology
22 pages, 18 figures, 3 tables
PLoS ONE 7 (2012) e39052
10.1371/journal.pone.0039052
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model the evolution of eukaryotic protein-protein interaction (PPI) networks. In our model, PPI networks evolve by two known biological mechanisms: (1) Gene duplication, which is followed by rapid diversification of duplicate interactions. (2) Neofunctionalization, in which a mutation leads to a new interaction with some other protein. Since many interactions are due to simple surface compatibility, we hypothesize there is an increased likelihood of interacting with other proteins in the target protein's neighborhood. We find good agreement of the model on 10 different network properties compared to high-confidence experimental PPI networks in yeast, fruit flies, and humans. Key findings are: (1) PPI networks evolve modular structures, with no need to invoke particular selection pressures. (2) Proteins in cells have on average about 6 degrees of separation, similar to some social networks, such as human-communication and actor networks. (3) Unlike social networks, which have a shrinking diameter (degree of maximum separation) over time, PPI networks are predicted to grow in diameter. (4) The model indicates that evolutionarily old proteins should have higher connectivities and be more centrally embedded in their networks. This suggests a way in which present-day proteomics data could provide insights into biological evolution.
[ { "created": "Tue, 6 Jan 2015 19:53:12 GMT", "version": "v1" } ]
2015-01-07
[ [ "Peterson", "Jack", "" ], [ "Presse", "Steve", "" ], [ "Peterson", "Kristin S.", "" ], [ "Dill", "Ken A.", "" ] ]
We model the evolution of eukaryotic protein-protein interaction (PPI) networks. In our model, PPI networks evolve by two known biological mechanisms: (1) Gene duplication, which is followed by rapid diversification of duplicate interactions. (2) Neofunctionalization, in which a mutation leads to a new interaction with some other protein. Since many interactions are due to simple surface compatibility, we hypothesize there is an increased likelihood of interacting with other proteins in the target protein's neighborhood. We find good agreement of the model on 10 different network properties compared to high-confidence experimental PPI networks in yeast, fruit flies, and humans. Key findings are: (1) PPI networks evolve modular structures, with no need to invoke particular selection pressures. (2) Proteins in cells have on average about 6 degrees of separation, similar to some social networks, such as human-communication and actor networks. (3) Unlike social networks, which have a shrinking diameter (degree of maximum separation) over time, PPI networks are predicted to grow in diameter. (4) The model indicates that evolutionarily old proteins should have higher connectivities and be more centrally embedded in their networks. This suggests a way in which present-day proteomics data could provide insights into biological evolution.
0807.0715
Bob Eisenberg
R. S. Eisenberg
Atomic Biology, Electrostatics, and Ionic Channels
This is a submission without substantive change of a chapter in the hard to find book, "New Developments and Theoretical Studies of Proteins" World Scientific Publishing Philadelphia, Edited by Ron Elber, published in 1996 ISBN-10:9810221967; ISBN-13:978-9810221966. RS Eisenberg is also known as Bob Eisenberg. Headers and Footers have been modified in versions 2 and 3
New Developments and Theoretical Studies of Proteins, Ed. Ron Elber, World Scientific Publishing, Philadelphia (1996) ISBN-10:9810221967; ISBN-13:978-9810221966
null
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I believe an atomic biology is needed to supplement present day molecular biology, if we are to design and understand proteins, as well as define, make, and use them. Topics in the paper are molecular biology and atomic biology. Electrodiffusion in the open channel. Electrodiffusion in mixed electrolytes. Models of permeation. State Models of Permeation are Inconsistent with the Electric Field. Making models in atomic biology. Molecular dynamics. Temporal Limitations; Spatial Limitations; Periodic boundary conditions. Hierarchy of models of the open channel. Stochastic Motion of the Channel. Langevin Dynamics. Simulations of the Reaction Path: the Permion. Chemical reactions. What was wrong? Back to the hierarchy: Occam's razor can slit your throat. Poisson-Nernst-Planck PNP Models Flux Ratios; Pumping by Field Coupling. Gating in channels of one conformation. Gating by Field Switching; Gating Current; Gating in Branched Channels; Blocking. Back to the hierarchy: Linking levels. Is there a theory? At what level will the adaptation be found? Simplicity, evolution, and natural function.
[ { "created": "Fri, 4 Jul 2008 10:35:06 GMT", "version": "v1" }, { "created": "Mon, 13 May 2013 17:46:24 GMT", "version": "v2" }, { "created": "Wed, 15 May 2013 17:04:59 GMT", "version": "v3" } ]
2013-05-16
[ [ "Eisenberg", "R. S.", "" ] ]
I believe an atomic biology is needed to supplement present day molecular biology, if we are to design and understand proteins, as well as define, make, and use them. Topics in the paper are molecular biology and atomic biology. Electrodiffusion in the open channel. Electrodiffusion in mixed electrolytes. Models of permeation. State Models of Permeation are Inconsistent with the Electric Field. Making models in atomic biology. Molecular dynamics. Temporal Limitations; Spatial Limitations; Periodic boundary conditions. Hierarchy of models of the open channel. Stochastic Motion of the Channel. Langevin Dynamics. Simulations of the Reaction Path: the Permion. Chemical reactions. What was wrong? Back to the hierarchy: Occam's razor can slit your throat. Poisson-Nernst-Planck PNP Models Flux Ratios; Pumping by Field Coupling. Gating in channels of one conformation. Gating by Field Switching; Gating Current; Gating in Branched Channels; Blocking. Back to the hierarchy: Linking levels. Is there a theory? At what level will the adaptation be found? Simplicity, evolution, and natural function.
q-bio/0412003
Bruce Ayati
Bruce P. Ayati
A Structured-Population Model of Proteus mirabilis Swarm-Colony Development
null
Journal of Mathematical Biology, 52(1), 2006, pp. 93-114
10.1007/s00285-005-0345-3
null
q-bio.CB q-bio.PE
null
In this paper we present continuous age- and space-structured models and numerical computations of Proteus mirabilis swarm-colony development. We base the mathematical representation of the cell-cycle dynamics of Proteus mirabilis on those developed by Esipov and Shapiro, which are the best understood aspects of the system, and we make minimum assumptions about less-understood mechanisms, such as precise forms of the spatial diffusion. The models in this paper have explicit age-structure and, when solved numerically, display both the temporal and spatial regularity seen in experiments, whereas the Esipov and Shapiro model, when solved accurately, shows only the temporal regularity. The composite hyperbolic-parabolic partial differential equations used to model Proteus mirabilis swarm-colony development are relevant to other biological systems where the spatial dynamics depend on local physiological structure. We use computational methods designed for such systems, with known convergence properties, to obtain the numerical results presented in this paper.
[ { "created": "Wed, 1 Dec 2004 20:20:13 GMT", "version": "v1" } ]
2023-02-14
[ [ "Ayati", "Bruce P.", "" ] ]
In this paper we present continuous age- and space-structured models and numerical computations of Proteus mirabilis swarm-colony development. We base the mathematical representation of the cell-cycle dynamics of Proteus mirabilis on those developed by Esipov and Shapiro, which are the best understood aspects of the system, and we make minimum assumptions about less-understood mechanisms, such as precise forms of the spatial diffusion. The models in this paper have explicit age-structure and, when solved numerically, display both the temporal and spatial regularity seen in experiments, whereas the Esipov and Shapiro model, when solved accurately, shows only the temporal regularity. The composite hyperbolic-parabolic partial differential equations used to model Proteus mirabilis swarm-colony development are relevant to other biological systems where the spatial dynamics depend on local physiological structure. We use computational methods designed for such systems, with known convergence properties, to obtain the numerical results presented in this paper.
2304.00885
R\'emi Fraysse
R\'emi Fraysse (1), R\'emi Choquet (1), Carlo Costantini (2), Roger Pradel (1) ((1) CEFE Univ Montpellier CNRS EPHE IRD Montpellier France, (2) MIVEGEC Univ Montpellier CNRS IRD Montpellier France)
Population size estimation with capture-recapture in presence of individual misidentification and low recapture
15 pages, 4 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
While non-invasive sampling is more and more commonly used in capture-recapture (CR) experiments, it carries a higher risk of misidentifications than direct observations. As a consequence, one must screen the data to retain only the reliable data before applying a classical CR model. This procedure is unacceptable when too few data would remain. Models able to deal with misidentifications have been proposed but are barely used. Three objectives are pursued in this paper. First, we present the Latent Multinomial Model of Link et al. (2010) where estimates of the model are obtained from a Monte Carlo Markov Chain (MCMC). Second, we show the impact of the use of an informative prior over the estimations when the capture rate is low. Finally, we extend the model to the multistate paradigm as an example of its flexibility. We showed that, without prior information, with capture rate at 0.2 or lower, parameters of the model are difficult to estimate i.e. either the MCMC does not converge or the estimates are biased. In that case, we show that adding an informative prior on the identification probability solves the identifiability problem of the model and allows for convergence. It also allows for good quality estimates of population size, although when the capture rate is 0.1 it underestimates it by about 10%. A similar approach on the multistate extension shows good quality estimates of the population size and transition probabilities with a capture rate of 0.3 or more.
[ { "created": "Mon, 3 Apr 2023 11:09:37 GMT", "version": "v1" } ]
2023-04-04
[ [ "Fraysse", "Rémi", "" ], [ "Choquet", "Rémi", "" ], [ "Costantini", "Carlo", "" ], [ "Pradel", "Roger", "" ] ]
While non-invasive sampling is more and more commonly used in capture-recapture (CR) experiments, it carries a higher risk of misidentifications than direct observations. As a consequence, one must screen the data to retain only the reliable data before applying a classical CR model. This procedure is unacceptable when too few data would remain. Models able to deal with misidentifications have been proposed but are barely used. Three objectives are pursued in this paper. First, we present the Latent Multinomial Model of Link et al. (2010) where estimates of the model are obtained from a Monte Carlo Markov Chain (MCMC). Second, we show the impact of the use of an informative prior over the estimations when the capture rate is low. Finally, we extend the model to the multistate paradigm as an example of its flexibility. We showed that, without prior information, with capture rate at 0.2 or lower, parameters of the model are difficult to estimate i.e. either the MCMC does not converge or the estimates are biased. In that case, we show that adding an informative prior on the identification probability solves the identifiability problem of the model and allows for convergence. It also allows for good quality estimates of population size, although when the capture rate is 0.1 it underestimates it by about 10%. A similar approach on the multistate extension shows good quality estimates of the population size and transition probabilities with a capture rate of 0.3 or more.
1506.03344
Nadav M. Shnerb
Haim Weissmann and Nadav M. Shnerb
Predicting catastrophic shifts
null
null
null
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Catastrophic transitions, where a system shifts abruptly between alternate steady states, are a generic feature of many nonlinear systems. Recently these regime shifts were suggested as the mechanism underlying many ecological catastrophes, such as desertification and coral reef collapses, which are considered a prominent threat to sustainability and to the well-being of millions. Still, the methods proposed so far for the prediction of an imminent transition are quite ineffective, and some empirical and theoretical studies suggest that actual transitions may occur smoothly, without an abrupt shift. Here we present a new diagnostic tool, based on monitoring the dynamics of clusters through time. Our technique discriminates between systems with local positive feedback, where the transition is abrupt, and systems with negative density dependence, where the transition is smooth. Analyzing the spatial dynamics of these two generic scenarios, we show that changes in the critical cluster size provide a reliable early warning indicator for both transitions. Our method may allow for the prediction, and thus hopefully the prevention of such transitions, avoiding their destructive outcomes.
[ { "created": "Wed, 10 Jun 2015 14:59:24 GMT", "version": "v1" } ]
2015-06-11
[ [ "Weissmann", "Haim", "" ], [ "Shnerb", "Nadav M.", "" ] ]
Catastrophic transitions, where a system shifts abruptly between alternate steady states, are a generic feature of many nonlinear systems. Recently these regime shifts were suggested as the mechanism underlying many ecological catastrophes, such as desertification and coral reef collapses, which are considered a prominent threat to sustainability and to the well-being of millions. Still, the methods proposed so far for the prediction of an imminent transition are quite ineffective, and some empirical and theoretical studies suggest that actual transitions may occur smoothly, without an abrupt shift. Here we present a new diagnostic tool, based on monitoring the dynamics of clusters through time. Our technique discriminates between systems with local positive feedback, where the transition is abrupt, and systems with negative density dependence, where the transition is smooth. Analyzing the spatial dynamics of these two generic scenarios, we show that changes in the critical cluster size provide a reliable early warning indicator for both transitions. Our method may allow for the prediction, and thus hopefully the prevention of such transitions, avoiding their destructive outcomes.
2110.05400
Timothy Bray
Timothy JP Bray, Alan Bainbridge, Margaret A Hall-Craggs, Hui Zhang
MAGORINO: Magnitude-only fat fraction and R2* estimation with Rician noise modelling
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Purpose: Magnitude-based fitting of chemical shift-encoded data enables proton density fat fraction (PDFF) and R2* estimation where complex-based methods fail or when phase data is inaccessible or unreliable, such as in multi-centre studies. However, traditional magnitude-based fitting algorithms suffer from Rician noise-related bias and fat-water swaps. To address these issues, we propose an algorithm for Magnitude-Only PDFF and R2* estimation with Rician Noise modelling (MAGORINO). Methods: Simulations of multi-echo gradient echo signal intensities are used to investigate the performance and behavior of MAGORINO over the space of clinically plausible PDFF, R2* and SNR values. Fitting performance is assessed in terms of parameter bias, precision and fitting error. To gain deeper insights into algorithm behavior, the paths on the likelihood functions are visualized and statistics describing correct optimization are generated. MAGORINO is compared against Gaussian noise-based magnitude fitting and complex fitting. Results: Simulations show that MAGORINO reduces bias in both PDFF and R2* measurements compared to Gaussian fitting, through two main mechanisms: (i) a greater chance of selecting the true (non-swapped) optimum, and (ii) a shift in the position of the optima such that the estimates are closer to ground truth solutions, as a result of the correct noise model. Conclusion: MAGORINO reduces fat-water swaps and Rician noise-related bias in PDFF and R2* estimation, thus addressing key limitations of traditional Gaussian noise-based magnitude-only fitting.
[ { "created": "Mon, 11 Oct 2021 16:39:33 GMT", "version": "v1" }, { "created": "Sat, 20 Nov 2021 16:43:13 GMT", "version": "v2" }, { "created": "Thu, 3 Mar 2022 12:36:24 GMT", "version": "v3" } ]
2022-03-04
[ [ "Bray", "Timothy JP", "" ], [ "Bainbridge", "Alan", "" ], [ "Hall-Craggs", "Margaret A", "" ], [ "Zhang", "Hui", "" ] ]
Purpose: Magnitude-based fitting of chemical shift-encoded data enables proton density fat fraction (PDFF) and R2* estimation where complex-based methods fail or when phase data is inaccessible or unreliable, such as in multi-centre studies. However, traditional magnitude-based fitting algorithms suffer from Rician noise-related bias and fat-water swaps. To address these issues, we propose an algorithm for Magnitude-Only PDFF and R2* estimation with Rician Noise modelling (MAGORINO). Methods: Simulations of multi-echo gradient echo signal intensities are used to investigate the performance and behavior of MAGORINO over the space of clinically plausible PDFF, R2* and SNR values. Fitting performance is assessed in terms of parameter bias, precision and fitting error. To gain deeper insights into algorithm behavior, the paths on the likelihood functions are visualized and statistics describing correct optimization are generated. MAGORINO is compared against Gaussian noise-based magnitude fitting and complex fitting. Results: Simulations show that MAGORINO reduces bias in both PDFF and R2* measurements compared to Gaussian fitting, through two main mechanisms: (i) a greater chance of selecting the true (non-swapped) optimum, and (ii) a shift in the position of the optima such that the estimates are closer to ground truth solutions, as a result of the correct noise model. Conclusion: MAGORINO reduces fat-water swaps and Rician noise-related bias in PDFF and R2* estimation, thus addressing key limitations of traditional Gaussian noise-based magnitude-only fitting.
1407.4436
Jack Heal
J. W. Heal, R. A. R\"omer, C. A. Blindauer and R. B. Freedman
Characterizing the folding core of the cyclophilin A - cyclosporin A complex I: hydrogen exchange data and rigidity analysis
15 pages
Biophys J. 108, 1739-1746 (2015)
10.1016/j.bpj.2015.02.017
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The determination of a 'folding core' can help to provide insight into the structure, flexibility, mobility and dynamics, and hence, ultimately, function of a protein - a central concern of structural biology. Changes in the folding core upon ligand binding are of particular interest because they may be relevant to drug-induced functional changes. Cyclophilin A is a multi-functional ligand-binding protein and a significant drug target. It acts principally as an enzyme during protein folding, but also as the primary binding partner for the immunosuppressant drug cyclosporin A (CsA). Here, we have used hydrogen-deuterium exchange (HDX) NMR spectroscopy to determine the folding core of the CypA-CsA complex. We also use the rapid computational tool of rigidity analysis, implemented in FIRST, to determine a theoretical folding core of the complex. In addition we generate a theoretical folding core for the unbound protein and compare this with previously published HDX data. The FIRST method gives a good prediction of the HDX folding core, but we find that it is not yet sufficiently sensitive to predict the effects of ligand binding on CypA.
[ { "created": "Wed, 16 Jul 2014 19:25:41 GMT", "version": "v1" } ]
2015-04-09
[ [ "Heal", "J. W.", "" ], [ "Römer", "R. A.", "" ], [ "Blindauer", "C. A.", "" ], [ "Freedman", "R. B.", "" ] ]
The determination of a 'folding core' can help to provide insight into the structure, flexibility, mobility and dynamics, and hence, ultimately, function of a protein - a central concern of structural biology. Changes in the folding core upon ligand binding are of particular interest because they may be relevant to drug-induced functional changes. Cyclophilin A is a multi-functional ligand-binding protein and a significant drug target. It acts principally as an enzyme during protein folding, but also as the primary binding partner for the immunosuppressant drug cyclosporin A (CsA). Here, we have used hydrogen-deuterium exchange (HDX) NMR spectroscopy to determine the folding core of the CypA-CsA complex. We also use the rapid computational tool of rigidity analysis, implemented in FIRST, to determine a theoretical folding core of the complex. In addition we generate a theoretical folding core for the unbound protein and compare this with previously published HDX data. The FIRST method gives a good prediction of the HDX folding core, but we find that it is not yet sufficiently sensitive to predict the effects of ligand binding on CypA.
q-bio/0508017
Veit Schw\"ammle
V. Schwammle, A. O. Sousa, S. M. de Oliveira
Monte Carlo simulations of parapatric speciation
submitted to Phys.Rev. E
null
10.1140/epjb/e2006-00251-5
null
q-bio.PE
null
Parapatric speciation is studied using an individual--based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.
[ { "created": "Mon, 15 Aug 2005 22:22:56 GMT", "version": "v1" } ]
2009-11-11
[ [ "Schwammle", "V.", "" ], [ "Sousa", "A. O.", "" ], [ "de Oliveira", "S. M.", "" ] ]
Parapatric speciation is studied using an individual--based model with sexual reproduction. We combine the theory of mutation accumulation for biological ageing with an environmental selection pressure that varies according to the individuals' geographical positions and phenotypic traits. Fluctuations and genetic diversity of large populations are crucial ingredients to model the features of evolutionary branching and are intrinsic properties of the model. Its implementation on a spatial lattice gives interesting insights into the population dynamics of speciation on a geographical landscape and the disruptive selection that leads to the divergence of phenotypes. Our results suggest that assortative mating is not an obligatory ingredient to obtain speciation in large populations at low gene flow.
2309.10629
Sissel Banke
Jakob L. Andersen, Sissel Banke, Rolf Fagerberg, Christoph Flamm, Daniel Merkle and Peter F. Stadler
On the Realisability of Chemical Pathways
Accepted in LNBI proceedings
null
null
null
q-bio.MN cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exploration of pathways and alternative pathways that have a specific function is of interest in numerous chemical contexts. A framework for specifying and searching for pathways has previously been developed, but a focus on which of the many pathway solutions are realisable, or can be made realisable, is missing. Realisable here means that there actually exists some sequencing of the reactions of the pathway that will execute the pathway. We present a method for analysing the realisability of pathways based on the reachability question in Petri nets. For realisable pathways, our method also provides a certificate encoding an order of the reactions which realises the pathway. We present two extended notions of realisability of pathways, one of which is related to the concept of network catalysts. We exemplify our findings on the pentose phosphate pathway. Lastly, we discuss the relevance of our concepts for elucidating the choices often implicitly made when depicting pathways.
[ { "created": "Tue, 19 Sep 2023 14:09:53 GMT", "version": "v1" } ]
2023-09-20
[ [ "Andersen", "Jakob L.", "" ], [ "Banke", "Sissel", "" ], [ "Fagerberg", "Rolf", "" ], [ "Flamm", "Christoph", "" ], [ "Merkle", "Daniel", "" ], [ "Stadler", "Peter F.", "" ] ]
The exploration of pathways and alternative pathways that have a specific function is of interest in numerous chemical contexts. A framework for specifying and searching for pathways has previously been developed, but a focus on which of the many pathway solutions are realisable, or can be made realisable, is missing. Realisable here means that there actually exists some sequencing of the reactions of the pathway that will execute the pathway. We present a method for analysing the realisability of pathways based on the reachability question in Petri nets. For realisable pathways, our method also provides a certificate encoding an order of the reactions which realises the pathway. We present two extended notions of realisability of pathways, one of which is related to the concept of network catalysts. We exemplify our findings on the pentose phosphate pathway. Lastly, we discuss the relevance of our concepts for elucidating the choices often implicitly made when depicting pathways.
2005.08107
Weston Viles
Weston D. Viles, Juliette C. Madan, Hongzhe Li, Jason C. Moore, Margaret R. Karagas, and Anne G. Hoen
Information content of high-order associations of the human gut microbiota network
21 pages, 3 figures
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human gastrointestinal tract is an environment that hosts an ecosystem of microorganisms essential to human health. Vital biological processes emerge from fundamental inter- and intra-species molecular interactions that influence the assembly and composition of the gut microbiota ecology. Here we quantify the complexity of the ecological relationships within the human infant gut microbiota ecosystem as a function of the information contained in the nonlinear associations of a sequence of increasingly-specified maximum entropy representations of the system. Our paradigm frames the ecological state, in terms of the presence or absence of individual microbial ecological units that are identified by amplicon sequence variants (ASV) in the gut microenvironment, as a function of both the ecological states of its neighboring units and, in a departure from standard graphical model representations, the associations among the units within its neighborhood. We characterize the order of the system based on the relative quantity of statistical information encoded by high-order statistical associations of the infant gut microbiota.
[ { "created": "Sat, 16 May 2020 21:15:15 GMT", "version": "v1" }, { "created": "Wed, 24 Feb 2021 02:05:45 GMT", "version": "v2" } ]
2021-02-25
[ [ "Viles", "Weston D.", "" ], [ "Madan", "Juliette C.", "" ], [ "Li", "Hongzhe", "" ], [ "Moore", "Jason C.", "" ], [ "Karagas", "Margaret R.", "" ], [ "Hoen", "Anne G.", "" ] ]
The human gastrointestinal tract is an environment that hosts an ecosystem of microorganisms essential to human health. Vital biological processes emerge from fundamental inter- and intra-species molecular interactions that influence the assembly and composition of the gut microbiota ecology. Here we quantify the complexity of the ecological relationships within the human infant gut microbiota ecosystem as a function of the information contained in the nonlinear associations of a sequence of increasingly-specified maximum entropy representations of the system. Our paradigm frames the ecological state, in terms of the presence or absence of individual microbial ecological units that are identified by amplicon sequence variants (ASV) in the gut microenvironment, as a function of both the ecological states of its neighboring units and, in a departure from standard graphical model representations, the associations among the units within its neighborhood. We characterize the order of the system based on the relative quantity of statistical information encoded by high-order statistical associations of the infant gut microbiota.
1409.1431
Emanuele Santovetti
Bianca Gustavino (1), Giovanni Carboni (2), Roberto Petrillo (2), Marco Rizzoni (1), Emanuele Santovetti (2) ((1) Department of Biology and (2) Department of Physics, University of Rome Tor Vergata, Rome (Italy))
Micronucleus induction by 915 MHz Radiofrequency Radiation in Vicia faba root tips
null
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mutagenic effect of radiofrequency electromagnetic field (RF-EMF) is evaluated by the micronucleus (MN) test in secondary roots of Vicia faba seedlings. Root exposures were carried out with 915 MHz continuous wave (CW) radiation for 72h, at power densities of 25, 38, 50 W/m$^2$. The specific absorption rate (SAR) corresponding to the experimental exposures was measured with a calorimetric method and falls in the range 0.3-1.8 W/kg. Results show a significant increase of MN frequency up to tenfold, correlated with the increasing power density values.
[ { "created": "Thu, 4 Sep 2014 13:07:30 GMT", "version": "v1" } ]
2014-09-05
[ [ "Gustavino", "Bianca", "" ], [ "Carboni", "Giovanni", "" ], [ "Petrillo", "Roberto", "" ], [ "Rizzoni", "Marco", "" ], [ "Santovetti", "Emanuele", "" ] ]
The mutagenic effect of radiofrequency electromagnetic field (RF-EMF) is evaluated by the micronucleus (MN) test in secondary roots of Vicia faba seedlings. Root exposures were carried out with 915 MHz continuous wave (CW) radiation for 72h, at power densities of 25, 38, 50 W/m$^2$. The specific absorption rate (SAR) corresponding to the experimental exposures was measured with a calorimetric method and falls in the range 0.3-1.8 W/kg. Results show a significant increase of MN frequency up to tenfold, correlated with the increasing power density values.
1611.08913
Nadav Amir
Nadav Amir, Israel Nelken, Naftali Tishby
A Simple Model of Attentional Blink
null
null
null
null
q-bio.NC cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The attentional blink (AB) effect is the reduced ability of subjects to report a second target stimulus (T2) among a rapidly presented series of non-target stimuli, when it appears within a time window of about 200-500 ms after a first target (T1). We present a simple dynamical systems model explaining the AB as resulting from the temporal response dynamics of a stochastic, linear system with threshold, whose output represents the amount of attentional resources allocated to the incoming sensory stimuli. The model postulates that the available attention capacity is limited by activity of the default mode network (DMN), a correlated set of brain regions related to task-irrelevant processing which is known to exhibit reduced activation following mental training such as mindfulness meditation. The model provides a parsimonious account relating key findings from the AB, DMN and meditation research literature, and suggests some new testable predictions.
[ { "created": "Sun, 27 Nov 2016 21:08:26 GMT", "version": "v1" }, { "created": "Wed, 27 Sep 2017 13:40:49 GMT", "version": "v2" } ]
2017-09-28
[ [ "Amir", "Nadav", "" ], [ "Nelken", "Israel", "" ], [ "Tishby", "Naftali", "" ] ]
The attentional blink (AB) effect is the reduced ability of subjects to report a second target stimulus (T2) among a rapidly presented series of non-target stimuli, when it appears within a time window of about 200-500 ms after a first target (T1). We present a simple dynamical systems model explaining the AB as resulting from the temporal response dynamics of a stochastic, linear system with threshold, whose output represents the amount of attentional resources allocated to the incoming sensory stimuli. The model postulates that the available attention capacity is limited by activity of the default mode network (DMN), a correlated set of brain regions related to task-irrelevant processing which is known to exhibit reduced activation following mental training such as mindfulness meditation. The model provides a parsimonious account relating key findings from the AB, DMN and meditation research literature, and suggests some new testable predictions.
1210.8376
Bartosz Szczesny
Bartosz Szczesny, Mauro Mobilia and Alastair M. Rucklidge
When does cyclic dominance lead to stable spiral waves?
6 pages, 5 figures, supplementary material and movies available at http://dx.doi.org/10.6084/m9.figshare.96949, accepted by the EPL (Europhysics Letters)
EPL (Europhysics Letters) Vol. 102, 28012 (2013)
10.1209/0295-5075/102/28012
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion.
[ { "created": "Wed, 31 Oct 2012 16:09:45 GMT", "version": "v1" }, { "created": "Fri, 12 Apr 2013 14:35:14 GMT", "version": "v2" } ]
2013-05-09
[ [ "Szczesny", "Bartosz", "" ], [ "Mobilia", "Mauro", "" ], [ "Rucklidge", "Alastair M.", "" ] ]
Species diversity in ecosystems is often accompanied by the self-organisation of the population into fascinating spatio-temporal patterns. Here, we consider a two-dimensional three-species population model and study the spiralling patterns arising from the combined effects of generic cyclic dominance, mutation, pair-exchange and hopping of the individuals. The dynamics is characterised by nonlinear mobility and a Hopf bifurcation around which the system's phase diagram is inferred from the underlying complex Ginzburg-Landau equation derived using a perturbative multiscale expansion. While the dynamics is generally characterised by spiralling patterns, we show that spiral waves are stable in only one of the four phases. Furthermore, we characterise a phase where nonlinearity leads to the annihilation of spirals and to the spatially uniform dominance of each species in turn. Away from the Hopf bifurcation, when the coexistence fixed point is unstable, the spiralling patterns are also affected by nonlinear diffusion.
0905.3350
Zeev Schuss
M. Shaked, G. Gibor, B. Attali, Z. Schuss
Stochastic resonance of ELF-EMF in voltage-gated channels: the case of the cardiac I_Ks potassium channel
13 pages, 8 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We applied a periodic magnetic field of frequency 16 Hz and amplitude 16 nT to a human I_Ks channel, expressed in a Xenopus oocyte and varied the membrane depolarization between -100 mV and +100 mV. We observed a maximal increase of about 9% in the potassium current at membrane depolarization between 0 mV and 8 mV (see Figure 3). A similar measurement of the potassium current in the KCNQ1 channel, expressed in an oocyte, gave a maximal increase of 16% at the same applied magnetic field and membrane depolarization between -14 mV and -7 mV (see Figure 4). We attribute this resonant behavior to stochastic resonance between the thermal activations of the configuration of interacting ions in the I_Ks channel over a low potential barrier inside the closed state of the channel and the periodic electromotive force induced across the membrane by the periodic magnetic field. The partial synchronization of the random jumps with the periodic force changes the relative times spent on either side of the barrier, thereby changing the open probability of the spontaneously gating open channel. This, in turn, changes the conductance of the channel at the particular depolarization level and frequency and is expressed in the Hodgkin-Huxley equations as a bump at the given voltage in the conductance-voltage relation. We integrate the modified Hodgkin-Huxley equations for the current into the Luo-Rudy model of a Guinea pig ventricular cardiac myocyte and obtain increased conductance during the plateau of the action potential in the cell. This shortens both the action potential and the cytosolic calcium concentration spike durations, lowers its amplitude, increases cytosolic sodium, and lowers cytosolic potassium concentrations. The shortening of the ventricular calcium signal shortens the QT period of the ECG.
[ { "created": "Wed, 20 May 2009 16:35:36 GMT", "version": "v1" } ]
2009-05-21
[ [ "Shaked", "M.", "" ], [ "Gibor", "G.", "" ], [ "Attali", "B.", "" ], [ "Schuss", "Z.", "" ] ]
We applied a periodic magnetic field of frequency 16 Hz and amplitude 16 nT to a human I_Ks channel, expressed in a Xenopus oocyte and varied the membrane depolarization between -100 mV and +100 mV. We observed a maximal increase of about 9% in the potassium current at membrane depolarization between 0 mV and 8 mV (see Figure 3). A similar measurement of the potassium current in the KCNQ1 channel, expressed in an oocyte, gave a maximal increase of 16% at the same applied magnetic field and membrane depolarization between -14 mV and -7 mV (see Figure 4). We attribute this resonant behavior to stochastic resonance between the thermal activations of the configuration of interacting ions in the I_Ks channel over a low potential barrier inside the closed state of the channel and the periodic electromotive force induced across the membrane by the periodic magnetic field. The partial synchronization of the random jumps with the periodic force changes the relative times spent on either side of the barrier, thereby changing the open probability of the spontaneously gating open channel. This, in turn, changes the conductance of the channel at the particular depolarization level and frequency and is expressed in the Hodgkin-Huxley equations as a bump at the given voltage in the conductance-voltage relation. We integrate the modified Hodgkin-Huxley equations for the current into the Luo-Rudy model of a Guinea pig ventricular cardiac myocyte and obtain increased conductance during the plateau of the action potential in the cell. This shortens both the action potential and the cytosolic calcium concentration spike durations, lowers its amplitude, increases cytosolic sodium, and lowers cytosolic potassium concentrations. The shortening of the ventricular calcium signal shortens the QT period of the ECG.
1908.03569
Isidro Cortes-Ciriano PhD
Isidro Cort\'es-Ciriano and Andreas Bender
Concepts and Applications of Conformal Prediction in Computational Drug Discovery
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the reliability of individual predictions is key to increasing the adoption of computational models and artificial intelligence in preclinical drug discovery, as well as to fostering their application to guide decision making in clinical settings. Among the large number of algorithms developed over the last decades to compute prediction errors, Conformal Prediction (CP) has gained increasing attention in the computational drug discovery community. A major reason for its recent popularity is the ease of interpretation of the computed prediction errors in both classification and regression tasks. For instance, at a confidence level of 90% the true value will be within the predicted confidence intervals in at least 90% of the cases. This so-called validity of conformal predictors is guaranteed by the robust mathematical foundation underlying CP. The versatility of CP relies on its minimal computational footprint, as it can be easily coupled to any machine learning algorithm at little computational cost. In this review, we summarize underlying concepts and practical applications of CP with a particular focus on virtual screening and activity modelling, and list open source implementations of relevant software. Finally, we describe the current limitations in the field, and provide a perspective on future opportunities for CP in preclinical and clinical drug discovery.
[ { "created": "Fri, 9 Aug 2019 11:17:05 GMT", "version": "v1" } ]
2019-08-13
[ [ "Cortés-Ciriano", "Isidro", "" ], [ "Bender", "Andreas", "" ] ]
Estimating the reliability of individual predictions is key to increasing the adoption of computational models and artificial intelligence in preclinical drug discovery, as well as to fostering their application to guide decision making in clinical settings. Among the large number of algorithms developed over the last decades to compute prediction errors, Conformal Prediction (CP) has gained increasing attention in the computational drug discovery community. A major reason for its recent popularity is the ease of interpretation of the computed prediction errors in both classification and regression tasks. For instance, at a confidence level of 90% the true value will be within the predicted confidence intervals in at least 90% of the cases. This so-called validity of conformal predictors is guaranteed by the robust mathematical foundation underlying CP. The versatility of CP relies on its minimal computational footprint, as it can be easily coupled to any machine learning algorithm at little computational cost. In this review, we summarize underlying concepts and practical applications of CP with a particular focus on virtual screening and activity modelling, and list open source implementations of relevant software. Finally, we describe the current limitations in the field, and provide a perspective on future opportunities for CP in preclinical and clinical drug discovery.