id: stringlengths, 9 to 13
submitter: stringlengths, 4 to 48
authors: stringlengths, 4 to 9.62k
title: stringlengths, 4 to 343
comments: stringlengths, 2 to 480
journal-ref: stringlengths, 9 to 309
doi: stringlengths, 12 to 138
report-no: stringclasses, 277 values
categories: stringlengths, 8 to 87
license: stringclasses, 9 values
orig_abstract: stringlengths, 27 to 3.76k
versions: listlengths, 1 to 15
update_date: stringlengths, 10 to 10
authors_parsed: listlengths, 1 to 147
abstract: stringlengths, 24 to 3.75k
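The JSON-valued fields in the records below (`versions`, `authors_parsed`) can be read back programmatically. A minimal sketch in Python, assuming those fields are stored as JSON text; the field names come from the schema above, everything else (the sample record, the helper function) is illustrative:

```python
import json

# Illustrative record using the first entry in this dump (arXiv:0903.2569).
# "versions" and "authors_parsed" are JSON strings in the raw data.
record = {
    "id": "0903.2569",
    "versions": json.loads(
        '[ { "created": "Mon, 16 Mar 2009 18:49:04 GMT", "version": "v1" },'
        ' { "created": "Tue, 17 Mar 2009 16:34:10 GMT", "version": "v2" } ]'
    ),
    "authors_parsed": json.loads(
        '[ [ "Caudill", "M. S.", "" ], [ "Brandt", "S. F.", "" ],'
        ' [ "Nussinov", "Z.", "" ], [ "Wessel", "R.", "" ] ]'
    ),
}

def format_authors(parsed):
    """Turn [last, first, suffix] triples into display-name strings."""
    return [" ".join(part for part in (first, last, suffix) if part)
            for last, first, suffix in parsed]

latest = record["versions"][-1]["version"]   # most recent version tag
names = format_authors(record["authors_parsed"])
print(latest)    # v2
print(names[0])  # M. S. Caudill
```

Note the `authors_parsed` triples are ordered last name first, so display names must be reassembled as first-then-last.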
0903.2569
Matthew Caudill
M. S. Caudill, S. F. Brandt, Z. Nussinov, R. Wessel
Equivalent Dynamics from Disparate Synaptic Weights in a Prevalent Visual Circuit
5 pages, 4 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural feedback-triads consisting of two feedback loops with a non-reciprocal lateral connection from one loop to the other are ubiquitous in the brain. We show analytically that the dynamics of this network topology are determined by two algebraic combinations of its five synaptic weights. Thus different weight settings can generate equivalent network dynamics. Exploration of network activity over the two-dimensional parameter space demonstrates the importance of the non-reciprocal lateral connection and reveals intricate behavior involving continuous transitions between qualitatively different activity states. In addition, we show that the response to periodic inputs is narrowly tuned around a center frequency determined by the two effective synaptic parameters.
[ { "created": "Mon, 16 Mar 2009 18:49:04 GMT", "version": "v1" }, { "created": "Tue, 17 Mar 2009 16:34:10 GMT", "version": "v2" } ]
2009-03-17
[ [ "Caudill", "M. S.", "" ], [ "Brandt", "S. F.", "" ], [ "Nussinov", "Z.", "" ], [ "Wessel", "R.", "" ] ]
Neural feedback-triads consisting of two feedback loops with a non-reciprocal lateral connection from one loop to the other are ubiquitous in the brain. We show analytically that the dynamics of this network topology are determined by two algebraic combinations of its five synaptic weights. Thus different weight settings can generate equivalent network dynamics. Exploration of network activity over the two-dimensional parameter space demonstrates the importance of the non-reciprocal lateral connection and reveals intricate behavior involving continuous transitions between qualitatively different activity states. In addition, we show that the response to periodic inputs is narrowly tuned around a center frequency determined by the two effective synaptic parameters.
q-bio/0409038
Margaret Dominy
M. Dominy (1), G. Rosen (2) ((1) Hagerty Library, (2) Physics Department, Drexel University)
A modular Fibonacci sequence in proteins
11 pages, 1 figure, 2 tables
null
null
null
q-bio.GN q-bio.BM
null
Protein-fragment seqlets typically feature about 10 amino acid residue positions that are fixed to within conservative substitutions but usually separated by a number of prescribed gaps with arbitrary residue content. By quantifying a general amino acid residue sequence in terms of the associated codon number sequence, we have found a precise modular Fibonacci sequence in a continuous gap-free 10-residue seqlet with either 3 or 4 conservative amino acid substitutions. This modular Fibonacci sequence is genuinely biophysical, for it occurs nine times in the SWISS-Prot/TrEMBL database of natural proteins.
[ { "created": "Thu, 30 Sep 2004 19:30:23 GMT", "version": "v1" } ]
2007-05-23
[ [ "Dominy", "M.", "" ], [ "Rosen", "G.", "" ] ]
Protein-fragment seqlets typically feature about 10 amino acid residue positions that are fixed to within conservative substitutions but usually separated by a number of prescribed gaps with arbitrary residue content. By quantifying a general amino acid residue sequence in terms of the associated codon number sequence, we have found a precise modular Fibonacci sequence in a continuous gap-free 10-residue seqlet with either 3 or 4 conservative amino acid substitutions. This modular Fibonacci sequence is genuinely biophysical, for it occurs nine times in the SWISS-Prot/TrEMBL database of natural proteins.
0901.3028
Claudius Gros
Claudius Gros
Cognitive computation with autonomously active neural networks: an emerging field
keynote review. Cognitive Computation (in press, 2009)
Cognitive Computation 1, 77-99 (2009)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain is autonomously active. To understand the functional role of this self-sustained neural activity, and its interplay with the sensory data input stream, is an important question in cognitive system research and we review here the present state of theoretical modelling. This review will start with a brief overview of the experimental efforts, together with a discussion of transient vs. self-sustained neural activity in the framework of reservoir computing. The main emphasis will be then on two paradigmal neural network architectures showing continuously ongoing transient-state dynamics: saddle point networks and networks of attractor relics. Self-active neural networks are confronted with two seemingly contrasting demands: a stable internal dynamical state and sensitivity to incoming stimuli. We show, that this dilemma can be solved by networks of attractor relics based on competitive neural dynamics, where the attractor relics compete on one side with each other for transient dominance, and on the other side with the dynamical influence of the input signals. Unsupervised and local Hebbian-style online learning then allows the system to build up correlations between the internal dynamical transient states and the sensory input stream. An emergent cognitive capability results from this set-up. The system performs online, and on its own, a non-linear independent component analysis of the sensory data stream, all the time being continuously and autonomously active. This process maps the independent components of the sensory input onto the attractor relics, which acquire in this way a semantic meaning.
[ { "created": "Tue, 20 Jan 2009 11:00:35 GMT", "version": "v1" } ]
2009-03-03
[ [ "Gros", "Claudius", "" ] ]
The human brain is autonomously active. Understanding the functional role of this self-sustained neural activity, and its interplay with the sensory data input stream, is an important question in cognitive systems research, and we review here the present state of theoretical modelling. This review starts with a brief overview of the experimental efforts, together with a discussion of transient vs. self-sustained neural activity in the framework of reservoir computing. The main emphasis is then on two paradigmatic neural network architectures showing continuously ongoing transient-state dynamics: saddle point networks and networks of attractor relics. Self-active neural networks face two seemingly contrasting demands: a stable internal dynamical state and sensitivity to incoming stimuli. We show that this dilemma can be solved by networks of attractor relics based on competitive neural dynamics, where the attractor relics compete on one side with each other for transient dominance, and on the other side with the dynamical influence of the input signals. Unsupervised and local Hebbian-style online learning then allows the system to build up correlations between the internal dynamical transient states and the sensory input stream. An emergent cognitive capability results from this set-up. The system performs online, and on its own, a non-linear independent component analysis of the sensory data stream, while remaining continuously and autonomously active. This process maps the independent components of the sensory input onto the attractor relics, which thereby acquire a semantic meaning.
1003.1802
Edouard Debonneuil
Edouard Debonneuil
A simple model of mortality trends aiming at universality: Lee Carter + Cohort
5 pages, 3 figures
null
null
null
q-bio.PE q-fin.ST
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Lee Carter modelling framework is widely used because of its simplicity and robustness despite its inability to model specific cohort effects. A large number of extensions have been proposed that model cohort effects but there is no consensus. It is difficult to simultaneously account for cohort effects and age-adjusted improvements and we here test a simple model that accounts for both: we simply add a non age-adjusted cohort component to the Lee Carter framework. This is a trade-off between accuracy and robustness but, for various countries present in the Human Mortality Database and compared to various models, the model accurately fits past mortality data and gives good backtesting projections.
[ { "created": "Tue, 9 Mar 2010 07:05:23 GMT", "version": "v1" } ]
2010-03-10
[ [ "Debonneuil", "Edouard", "" ] ]
The Lee Carter modelling framework is widely used because of its simplicity and robustness despite its inability to model specific cohort effects. A large number of extensions have been proposed to model cohort effects, but there is no consensus. It is difficult to account simultaneously for cohort effects and age-adjusted improvements, and here we test a simple model that accounts for both: we simply add a non-age-adjusted cohort component to the Lee Carter framework. This is a trade-off between accuracy and robustness but, for various countries present in the Human Mortality Database and compared with various models, the model accurately fits past mortality data and gives good backtesting projections.
1407.5218
Cameron Mura
Marcin Cieslik, Zygmunt Derewenda, Cameron Mura
Abstractions, Algorithms and Data Structures for Structural Bioinformatics in PyCogent
36 pages, 4 figures (including supplemental information)
Journal of Applied Crystallography (2011), 44(2), 424-428
10.1107/S0021889811004481
null
q-bio.BM cs.DS cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To facilitate flexible and efficient structural bioinformatics analyses, new functionality for three-dimensional structure processing and analysis has been introduced into PyCogent -- a popular feature-rich framework for sequence-based bioinformatics, but one which has lacked equally powerful tools for handling stuctural/coordinate-based data. Extensible Python modules have been developed, which provide object-oriented abstractions (based on a hierarchical representation of macromolecules), efficient data structures (e.g. kD-trees), fast implementations of common algorithms (e.g. surface-area calculations), read/write support for Protein Data Bank-related file formats and wrappers for external command-line applications (e.g. Stride). Integration of this code into PyCogent is symbiotic, allowing sequence-based work to benefit from structure-derived data and, reciprocally, enabling structural studies to leverage PyCogent's versatile tools for phylogenetic and evolutionary analyses.
[ { "created": "Sat, 19 Jul 2014 19:41:02 GMT", "version": "v1" } ]
2018-10-01
[ [ "Cieslik", "Marcin", "" ], [ "Derewenda", "Zygmunt", "" ], [ "Mura", "Cameron", "" ] ]
To facilitate flexible and efficient structural bioinformatics analyses, new functionality for three-dimensional structure processing and analysis has been introduced into PyCogent -- a popular feature-rich framework for sequence-based bioinformatics, but one which has lacked equally powerful tools for handling structural/coordinate-based data. Extensible Python modules have been developed, which provide object-oriented abstractions (based on a hierarchical representation of macromolecules), efficient data structures (e.g. kD-trees), fast implementations of common algorithms (e.g. surface-area calculations), read/write support for Protein Data Bank-related file formats and wrappers for external command-line applications (e.g. Stride). Integration of this code into PyCogent is symbiotic, allowing sequence-based work to benefit from structure-derived data and, reciprocally, enabling structural studies to leverage PyCogent's versatile tools for phylogenetic and evolutionary analyses.
2101.07257
Zhongqi Tian
Zhong-Qi Kyle Tian and Douglas Zhou
Library-based Fast Algorithm for Simulating the Hodgkin-Huxley Neuronal Networks
arXiv admin note: text overlap with arXiv:2101.06627
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a modified library-based method for simulating the Hodgkin-Huxley (HH) neuronal networks. By pre-computing a high resolution data library during the interval of an action potential (spike), we can avoid evolving the HH equations during the spike and can use a large time step to raise efficiency. The library method can stably achieve at most 10 times of speedup compared with the regular Runge-Kutta method while capturing most statistical properties of HH neurons like the distribution of spikes which data is widely used in the statistical analysis like transfer entropy and Granger causality. The idea of library method can be easily and successfully applied to other HH-type models like the most prominent \textquotedblleft regular spiking\textquotedblright , \textquotedblleft fast spiking\textquotedblright , \textquotedblleft intrinsically bursting\textquotedblright{} and \textquotedblleft low-threshold spike\textquotedblright{} types of HH models.
[ { "created": "Sun, 17 Jan 2021 10:25:09 GMT", "version": "v1" } ]
2021-01-20
[ [ "Tian", "Zhong-Qi Kyle", "" ], [ "Zhou", "Douglas", "" ] ]
We present a modified library-based method for simulating Hodgkin-Huxley (HH) neuronal networks. By pre-computing a high-resolution data library over the interval of an action potential (spike), we can avoid evolving the HH equations during the spike and can use a large time step to raise efficiency. The library method stably achieves up to a 10-fold speedup compared with the regular Runge-Kutta method while capturing most statistical properties of HH neurons, such as the spike distribution, whose data are widely used in statistical analyses like transfer entropy and Granger causality. The library method can easily and successfully be applied to other HH-type models, such as the prominent "regular spiking", "fast spiking", "intrinsically bursting" and "low-threshold spike" types of HH models.
2311.03151
Binbin Xu
Zaineb Ajra, Binbin Xu, G\'erard Dray, Jacky Montmain, St\'ephane Perrey
Using Shallow Neural Networks with Functional Connectivity from EEG signals for Early Diagnosis of Alzheimer's and Frontotemporal Dementia
null
Front Neurol. 2023; 14: 1270405
10.3389/fneur.2023.1270405
null
q-bio.NC eess.SP
http://creativecommons.org/licenses/by/4.0/
{Introduction: } Dementia is a neurological disorder associated with aging that can cause a loss of cognitive functions, impacting daily life. Alzheimer's disease (AD) is the most common cause of dementia, accounting for 50--70\% of cases, while frontotemporal dementia (FTD) affects social skills and personality. Electroencephalography (EEG) provides an effective tool to study the effects of AD on the brain. {Methods: } In this study, we propose to use shallow neural networks applied to two sets of features: spectral-temporal and functional connectivity using four methods. We compare three supervised machine learning techniques to the CNN models to classify EEG signals of AD / FTD and control cases. We also evaluate different measures of functional connectivity from common EEG frequency bands considering multiple thresholds. {Results and Discussion: } Results showed that the shallow CNN-based models achieved the highest accuracy of 94.54\% with AEC in test dataset when considering all connections, outperforming conventional methods and providing potentially an additional early dementia diagnosis tool. \url{https://doi.org/10.3389%2Ffneur.2023.1270405}
[ { "created": "Mon, 6 Nov 2023 14:46:30 GMT", "version": "v1" } ]
2023-11-07
[ [ "Ajra", "Zaineb", "" ], [ "Xu", "Binbin", "" ], [ "Dray", "Gérard", "" ], [ "Montmain", "Jacky", "" ], [ "Perrey", "Stéphane", "" ] ]
Introduction: Dementia is a neurological disorder associated with aging that can cause a loss of cognitive functions, impacting daily life. Alzheimer's disease (AD) is the most common cause of dementia, accounting for 50--70\% of cases, while frontotemporal dementia (FTD) affects social skills and personality. Electroencephalography (EEG) provides an effective tool to study the effects of AD on the brain. Methods: In this study, we propose to use shallow neural networks applied to two sets of features: spectral-temporal features and functional connectivity computed with four methods. We compare three supervised machine learning techniques with the CNN models to classify EEG signals of AD/FTD and control cases. We also evaluate different measures of functional connectivity from common EEG frequency bands, considering multiple thresholds. Results and Discussion: The shallow CNN-based models achieved the highest accuracy of 94.54\% with AEC on the test dataset when considering all connections, outperforming conventional methods and potentially providing an additional tool for early dementia diagnosis. \url{https://doi.org/10.3389%2Ffneur.2023.1270405}
2005.03214
Manuel Razo-Mejia
Manuel Razo-Mejia, Sarah Marzen, Griffin Chure, Rachel Taubman, Muir Morrison, and Rob Phillips
First-principles prediction of the information processing capacity of a simple genetic circuit
null
Phys. Rev. E 102, 022404 (2020)
10.1103/PhysRevE.102.022404
null
q-bio.MN q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Given the stochastic nature of gene expression, genetically identical cells exposed to the same environmental inputs will produce different outputs. This heterogeneity has been hypothesized to have consequences for how cells are able to survive in changing environments. Recent work has explored the use of information theory as a framework to understand the accuracy with which cells can ascertain the state of their surroundings. Yet the predictive power of these approaches is limited and has not been rigorously tested using precision measurements. To that end, we generate a minimal model for a simple genetic circuit in which all parameter values for the model come from independently published data sets. We then predict the information processing capacity of the genetic circuit for a suite of biophysical parameters such as protein copy number and protein-DNA affinity. We compare these parameter-free predictions with an experimental determination of protein expression distributions and the resulting information processing capacity of E. coli cells. We find that our minimal model captures the scaling of the cell-to-cell variability in the data and the inferred information processing capacity of our simple genetic circuit up to a systematic deviation.
[ { "created": "Thu, 7 May 2020 02:46:27 GMT", "version": "v1" } ]
2020-08-19
[ [ "Razo-Mejia", "Manuel", "" ], [ "Marzen", "Sarah", "" ], [ "Chure", "Griffin", "" ], [ "Taubman", "Rachel", "" ], [ "Morrison", "Muir", "" ], [ "Phillips", "Rob", "" ] ]
Given the stochastic nature of gene expression, genetically identical cells exposed to the same environmental inputs will produce different outputs. This heterogeneity has been hypothesized to have consequences for how cells are able to survive in changing environments. Recent work has explored the use of information theory as a framework to understand the accuracy with which cells can ascertain the state of their surroundings. Yet the predictive power of these approaches is limited and has not been rigorously tested using precision measurements. To that end, we generate a minimal model for a simple genetic circuit in which all parameter values for the model come from independently published data sets. We then predict the information processing capacity of the genetic circuit for a suite of biophysical parameters such as protein copy number and protein-DNA affinity. We compare these parameter-free predictions with an experimental determination of protein expression distributions and the resulting information processing capacity of E. coli cells. We find that our minimal model captures the scaling of the cell-to-cell variability in the data and the inferred information processing capacity of our simple genetic circuit up to a systematic deviation.
1903.11972
Sitabhra Sinha
Anand Pathak, Nivedita Chatterjee and Sitabhra Sinha
Developmental trajectory of Caenorhabditis elegans nervous system governs its structural organization
25 pages, 8 figures + 24 pages supplementary material
null
10.1371/journal.pcbi.1007602
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A central problem of neuroscience involves uncovering the principles governing the organization of nervous systems which ensure robustness in brain development. The nematode Caenorhabditis elegans provides us with a model organism for studying this question. In this paper, we focus on the invariant connection structure and spatial arrangement of the neurons comprising the somatic neuronal network of this organism to understand the key developmental constraints underlying its design. We observe that neurons with certain shared characteristics - such as, neural process lengths, birth time cohort, lineage and bilateral symmetry - exhibit a preference for connecting to each other. Recognizing the existence of such homophily helps in connecting the physical location and morphology of neurons with the topological organization of the network. Further, the functional identities of neurons appear to dictate the temporal hierarchy of their appearance during the course of development. Providing crucial insights into principles that may be common across many organisms, our study shows how the trajectory in the developmental landscape constrains the eventual spatial and network topological organization of a nervous system.
[ { "created": "Thu, 28 Mar 2019 13:48:29 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 11:34:30 GMT", "version": "v2" } ]
2020-07-01
[ [ "Pathak", "Anand", "" ], [ "Chatterjee", "Nivedita", "" ], [ "Sinha", "Sitabhra", "" ] ]
A central problem of neuroscience involves uncovering the principles governing the organization of nervous systems, which ensure robustness in brain development. The nematode Caenorhabditis elegans provides us with a model organism for studying this question. In this paper, we focus on the invariant connection structure and spatial arrangement of the neurons comprising the somatic neuronal network of this organism to understand the key developmental constraints underlying its design. We observe that neurons with certain shared characteristics - such as neural process lengths, birth time cohort, lineage and bilateral symmetry - exhibit a preference for connecting to each other. Recognizing the existence of such homophily helps in connecting the physical location and morphology of neurons with the topological organization of the network. Further, the functional identities of neurons appear to dictate the temporal hierarchy of their appearance during the course of development. Providing crucial insights into principles that may be common across many organisms, our study shows how the trajectory in the developmental landscape constrains the eventual spatial and network topological organization of a nervous system.
2006.10115
Matthew Beauregard
Eric Takyi and Joydeb Bhattacharyya and Rana D. Parshad and Matthew A. Beauregard
Mimicking the TYC strategy: Weak Allee effects, and a nonhyperbolic extinction boundary
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Trojan Y Chromosome strategy (TYC) is a genetic biocontrol strategy designed to alter the sex ratio of a target invasive population by reducing the number of females over time. Recently an alternative strategy is introduced, that mimics the TYC strategy by harvesting females whilst stocking males . We consider the FHMS strategy, with a weak Allee effect, and show that the extinction boundary need note be hyperbolic. To the best of our knowledge, this is the first example of a non-hyperbolic extinction boundary in mating models, structured by sex. Next, we consider the spatially explicit model and show that the weak Allee effect is both sufficient and necessary for Turing patterns to occur. We discuss the applicability of our results to large scale biocontrol, as well as compare and contrast our results to the case with a strong Allee effect.
[ { "created": "Wed, 17 Jun 2020 19:29:43 GMT", "version": "v1" }, { "created": "Mon, 22 Jun 2020 15:42:51 GMT", "version": "v2" } ]
2020-06-23
[ [ "Takyi", "Eric", "" ], [ "Bhattacharyya", "Joydeb", "" ], [ "Parshad", "Rana D.", "" ], [ "Beauregard", "Matthew A.", "" ] ]
The Trojan Y Chromosome strategy (TYC) is a genetic biocontrol strategy designed to alter the sex ratio of a target invasive population by reducing the number of females over time. Recently, an alternative strategy was introduced that mimics the TYC strategy by harvesting females whilst stocking males (the FHMS strategy). We consider the FHMS strategy, with a weak Allee effect, and show that the extinction boundary need not be hyperbolic. To the best of our knowledge, this is the first example of a non-hyperbolic extinction boundary in sex-structured mating models. Next, we consider the spatially explicit model and show that the weak Allee effect is both sufficient and necessary for Turing patterns to occur. We discuss the applicability of our results to large-scale biocontrol, and compare and contrast our results with the case of a strong Allee effect.
1904.05031
Mohammad Nami
Azadeh Mojdehfarahbakhsh, Saba Nouri, Abolfazl Alipour, Mohammad Nami
Top-down predictions influence binocular rivalry through beta band rhythms: a qEEG-based investigation
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive coding theory suggests that conscious perception results from the interaction between top-down and bottom-up signals in the brain. However, the electrophysiological signatures of top-down predictions are not clear yet. Here, we cued subjects to expect a certain perceptual state in a binocular rivalry task and quantitatively analyzed their EEG signals during the cue period and prior to binocular rivalry task. We found that when predictions can successfully influence the perception of the rivalrous stimuli, the power of beta band rhythms increases in primary visual cortices and beta band phase lag in a frontal-visual homologous channel pair was notably diminished. Building upon earlier works, our findings suggest that beta rhythm is potentially considered as a signature of top-down communication in the brain.
[ { "created": "Wed, 10 Apr 2019 07:36:43 GMT", "version": "v1" } ]
2019-04-11
[ [ "Mojdehfarahbakhsh", "Azadeh", "" ], [ "Nouri", "Saba", "" ], [ "Alipour", "Abolfazl", "" ], [ "Nami", "Mohammad", "" ] ]
Predictive coding theory suggests that conscious perception results from the interaction between top-down and bottom-up signals in the brain. However, the electrophysiological signatures of top-down predictions are not yet clear. Here, we cued subjects to expect a certain perceptual state in a binocular rivalry task and quantitatively analyzed their EEG signals during the cue period and prior to the binocular rivalry task. We found that when predictions successfully influence the perception of the rivalrous stimuli, the power of beta band rhythms increases in primary visual cortices and the beta band phase lag in a frontal-visual homologous channel pair is notably diminished. Building upon earlier works, our findings suggest that the beta rhythm is a potential signature of top-down communication in the brain.
1901.11110
Stefan Seelig
Stefan A. Seelig, Maximilian M. Rabe, Noa Malem-Shinitski, Sarah Risse, Sebastian Reich, Ralf Engbert
Bayesian parameter estimation for the SWIFT model of eye-movement control during reading
30 pages, 10 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of spatial fixation position and fixation duration during reading. First, we develop and test an approximate likelihood function of the model, which is a combination of a pseudo-marginal spatial likelihood and an approximate temporal likelihood function. Second, we use a Bayesian approach to parameter inference using an adapative Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximative Bayesian inference represents a considerable step forward for the area of eye-movement modeling, where modelling of individual data on the basis of process-based dynamic models has not been possible before.
[ { "created": "Wed, 30 Jan 2019 21:36:53 GMT", "version": "v1" }, { "created": "Fri, 1 Feb 2019 16:39:54 GMT", "version": "v2" }, { "created": "Tue, 22 Oct 2019 14:43:01 GMT", "version": "v3" } ]
2019-10-23
[ [ "Seelig", "Stefan A.", "" ], [ "Rabe", "Maximilian M.", "" ], [ "Malem-Shinitski", "Noa", "" ], [ "Risse", "Sarah", "" ], [ "Reich", "Sebastian", "" ], [ "Engbert", "Ralf", "" ] ]
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example of data assimilation for the SWIFT model, a dynamical model of the control of spatial fixation position and fixation duration during reading. First, we develop and test an approximate likelihood function of the model, which combines a pseudo-marginal spatial likelihood with an approximate temporal likelihood function. Second, we use a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for the area of eye-movement modelling, where modelling of individual data on the basis of process-based dynamic models has not been possible before.
1305.6559
Laurent Gatto
Laurent Gatto and Andy Christoforou
Using R and Bioconductor for proteomics data analysis
null
Biochim Biophys Acta 1844 (2014) 42-51
10.1016/j.bbapap.2013.04.032
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This review presents how R, the popular statistical environment and programming language, can be used in the frame of proteomics data analysis. A short introduction to R is given, with special emphasis on some of the features that make R and its add-on packages a premium software for sound and reproducible data analysis. The reader is also advised on how to find relevant R software for proteomics. Several use cases are then presented, illustrating data input/output, quality control, quantitative proteomics and data analysis. Detailed code and additional links to extensive documentation are available in the freely available companion package RforProteomics.
[ { "created": "Tue, 28 May 2013 17:10:30 GMT", "version": "v1" } ]
2014-01-14
[ [ "Gatto", "Laurent", "" ], [ "Christoforou", "Andy", "" ] ]
This review presents how R, the popular statistical environment and programming language, can be used in the context of proteomics data analysis. A short introduction to R is given, with special emphasis on some of the features that make R and its add-on packages premium software for sound and reproducible data analysis. The reader is also advised on how to find relevant R software for proteomics. Several use cases are then presented, illustrating data input/output, quality control, quantitative proteomics and data analysis. Detailed code and additional links to extensive documentation are available in the freely available companion package RforProteomics.
1402.2788
Alexey Mikaberidze
Alexey Mikaberidze, Bruce McDonald and Sebastian Bonhoeffer
How to develop smarter host mixtures to control plant disease?
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A looming challenge for agriculture is sustainable intensification of food production to feed the growing human population. Current chemical and genetic technologies used to manage plant diseases are highly vulnerable to pathogen evolution and are not sustainable. Pathogen evolution is facilitated by the genetic uniformity underlying modern agroecosystems, suggesting that one path to sustainable disease control lies through increasing genetic diversity at the field scale by using genetically diverse host mixtures. We investigate how host mixtures can improve disease control using a population dynamics model. We find that when a population of crop plants is exposed to host-specialized pathogen species or strains, the overall disease severity is smaller in the mixture of two host varieties than in each of the corresponding pure stands. The disease severity can be minimized over a range of mixing ratios. These findings may help in designing host mixtures that efficiently control diseases of crops. We then generalize the model to describe host mixtures with many components. We find that when pathogens exhibit host specialization, the overall disease severity decreases with the number of components in the mixture. As the degree of specialization increases, the decrease in disease severity becomes larger. Using these model outcomes, we propose ways to optimize the use of host mixtures to decrease disease in agroecosystems.
[ { "created": "Wed, 12 Feb 2014 11:09:38 GMT", "version": "v1" } ]
2014-02-13
[ [ "Mikaberidze", "Alexey", "" ], [ "McDonald", "Bruce", "" ], [ "Bonhoeffer", "Sebastian", "" ] ]
A looming challenge for agriculture is sustainable intensification of food production to feed the growing human population. Current chemical and genetic technologies used to manage plant diseases are highly vulnerable to pathogen evolution and are not sustainable. Pathogen evolution is facilitated by the genetic uniformity underlying modern agroecosystems, suggesting that one path to sustainable disease control lies through increasing genetic diversity at the field scale by using genetically diverse host mixtures. We investigate how host mixtures can improve disease control using a population dynamics model. We find that when a population of crop plants is exposed to host-specialized pathogen species or strains, the overall disease severity is smaller in the mixture of two host varieties than in each of the corresponding pure stands. The disease severity can be minimized over a range of mixing ratios. These findings may help in designing host mixtures that efficiently control diseases of crops. We then generalize the model to describe host mixtures with many components. We find that when pathogens exhibit host specialization, the overall disease severity decreases with the number of components in the mixture. As the degree of specialization increases, the decrease in disease severity becomes larger. Using these model outcomes, we propose ways to optimize the use of host mixtures to decrease disease in agroecosystems.
1810.09913
Xiujun Cheng
Xiujun Cheng, Hui Wang, Xiao Wang, Jinqiao Duan, Xiaofan Li
Most Probable Evolution Trajectories in a Genetic Regulatory System Excited by Stable L\'evy Noise
null
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the most probable trajectories of the concentration evolution for the transcription factor activator in a genetic regulation system, with non-Gaussian stable L\'evy noise in the synthesis reaction rate taken into account. We calculate the most probable trajectory by spatially maximizing the probability density of the system path, i.e., the solution of the associated nonlocal Fokker-Planck equation. We especially examine those most probable trajectories from the low concentration state to the high concentration state (i.e., the likely transcription regime) for certain parameters, in order to gain insights into the transcription processes and the tipping time for transcription to occur. This enables us: (i) to visualize the progress of concentration evolution (i.e., observe whether the system enters the transcription regime within a given time period); (ii) to predict or avoid certain transcriptions via selecting specific noise parameters in particular regions of the parameter space. Moreover, we have found some peculiar or counter-intuitive phenomena in this gene model system, including (a) a smaller noise intensity may trigger the transcription process, while a larger noise intensity cannot, under the same asymmetric L\'evy noise. This phenomenon does not occur in the case of symmetric L\'evy noise; (b) the symmetric L\'evy motion always induces transition to high concentration, but certain asymmetric L\'evy motions do not trigger the switch to transcription. These findings provide insights for further experimental research, in order to achieve or to avoid specific gene transcriptions, with possible relevance for medical advances.
[ { "created": "Tue, 9 Oct 2018 07:50:32 GMT", "version": "v1" }, { "created": "Mon, 28 Jan 2019 12:27:29 GMT", "version": "v2" } ]
2019-01-29
[ [ "Cheng", "Xiujun", "" ], [ "Wang", "Hui", "" ], [ "Wang", "Xiao", "" ], [ "Duan", "Jinqiao", "" ], [ "Li", "Xiaofan", "" ] ]
We study the most probable trajectories of the concentration evolution for the transcription factor activator in a genetic regulation system, with non-Gaussian stable L\'evy noise in the synthesis reaction rate taken into account. We calculate the most probable trajectory by spatially maximizing the probability density of the system path, i.e., the solution of the associated nonlocal Fokker-Planck equation. We especially examine those most probable trajectories from the low concentration state to the high concentration state (i.e., the likely transcription regime) for certain parameters, in order to gain insights into the transcription processes and the tipping time for transcription to occur. This enables us: (i) to visualize the progress of concentration evolution (i.e., observe whether the system enters the transcription regime within a given time period); (ii) to predict or avoid certain transcriptions via selecting specific noise parameters in particular regions of the parameter space. Moreover, we have found some peculiar or counter-intuitive phenomena in this gene model system, including (a) a smaller noise intensity may trigger the transcription process, while a larger noise intensity cannot, under the same asymmetric L\'evy noise. This phenomenon does not occur in the case of symmetric L\'evy noise; (b) the symmetric L\'evy motion always induces transition to high concentration, but certain asymmetric L\'evy motions do not trigger the switch to transcription. These findings provide insights for further experimental research, in order to achieve or to avoid specific gene transcriptions, with possible relevance for medical advances.
1910.14469
Homayoun Valafar
E. Timko, P. Shealy, M. Bryson, H. Valafar
Minimum Data Requirements and Supplemental Angle Constraints for Protein Structure Prediction with REDCRAFT
7 pages, BioComp 2008
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One algorithm to predict protein structure is the residual dipolar coupling based residue assembly and filter tool (REDCRAFT). This algorithm exploits an exponential reduction of the search space of all possible structures to find a structure that best fits a set of experimental residual dipolar couplings. However, the minimum amount of data required to successfully determine a protein's structure using REDCRAFT has not been previously investigated. Here we explore the effect of reducing the amount of data used to fold proteins. Our goal is to reduce experimental data collection times while retaining the accuracy levels previously achieved with larger amounts of data. We also investigate incorporating a priori secondary structure information into REDCRAFT to improve its structure prediction ability.
[ { "created": "Thu, 31 Oct 2019 13:54:41 GMT", "version": "v1" }, { "created": "Wed, 6 Nov 2019 13:53:39 GMT", "version": "v2" } ]
2019-11-07
[ [ "Timko", "E.", "" ], [ "Shealy", "P.", "" ], [ "Bryson", "M.", "" ], [ "Valafar", "H.", "" ] ]
One algorithm to predict protein structure is the residual dipolar coupling based residue assembly and filter tool (REDCRAFT). This algorithm exploits an exponential reduction of the search space of all possible structures to find a structure that best fits a set of experimental residual dipolar couplings. However, the minimum amount of data required to successfully determine a protein's structure using REDCRAFT has not been previously investigated. Here we explore the effect of reducing the amount of data used to fold proteins. Our goal is to reduce experimental data collection times while retaining the accuracy levels previously achieved with larger amounts of data. We also investigate incorporating a priori secondary structure information into REDCRAFT to improve its structure prediction ability.
q-bio/0604019
Jesus M. Cortes
J. J. Torres, J.M. Cortes, J. Marro, H.J. Kappen
Competition between synaptic depression and facilitation in attractor neural networks
14 pages, 7 figures
null
null
null
q-bio.NC
null
We study the effect of competition between short-term synaptic depression and facilitation on the dynamical properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance between depression, facilitation and noise, the network displays different behaviours, including associative memory and switching of the activity between different attractors. We conclude that synaptic facilitation enhances the attractor instability in a way that (i) intensifies the system's adaptability to external stimuli, which is in agreement with experiments, and (ii) favours the retrieval of information with less error during short-time intervals.
[ { "created": "Sun, 16 Apr 2006 19:55:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Torres", "J. J.", "" ], [ "Cortes", "J. M.", "" ], [ "Marro", "J.", "" ], [ "Kappen", "H. J.", "" ] ]
We study the effect of competition between short-term synaptic depression and facilitation on the dynamical properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance between depression, facilitation and noise, the network displays different behaviours, including associative memory and switching of the activity between different attractors. We conclude that synaptic facilitation enhances the attractor instability in a way that (i) intensifies the system's adaptability to external stimuli, which is in agreement with experiments, and (ii) favours the retrieval of information with less error during short-time intervals.
0707.2551
Ulrich S. Schwarz
A. Besser and U. S. Schwarz (University of Heidelberg)
Coupling biochemistry and mechanics in cell adhesion: a model for inhomogeneous stress fiber contraction
Revtex, 35 pages, 13 Postscript figures included, in press with New Journal of Physics, Special Issue on The Physics of the Cytoskeleton
New J. Phys., 9:425, 2007
10.1088/1367-2630/9/11/425
null
q-bio.SC q-bio.CB
null
Biochemistry and mechanics are closely coupled in cell adhesion. At sites of cell-matrix adhesion, mechanical force triggers signaling through the Rho-pathway, which leads to structural reinforcement and increased contractility in the actin cytoskeleton. The resulting force acts back to the sites of adhesion, resulting in a positive feedback loop for mature adhesion. Here we model this biochemical-mechanical feedback loop for the special case when the actin cytoskeleton is organized in stress fibers, which are contractile bundles of actin filaments. Activation of myosin II molecular motors through the Rho-pathway is described by a system of reaction-diffusion equations, which are coupled into a viscoelastic model for a contractile actin bundle. We find strong spatial gradients in the activation of contractility and in the corresponding deformation pattern of the stress fiber, in good agreement with experimental findings.
[ { "created": "Tue, 17 Jul 2007 15:58:57 GMT", "version": "v1" }, { "created": "Wed, 24 Oct 2007 17:01:16 GMT", "version": "v2" } ]
2010-02-24
[ [ "Besser", "A.", "", "University of Heidelberg" ], [ "Schwarz", "U. S.", "", "University of Heidelberg" ] ]
Biochemistry and mechanics are closely coupled in cell adhesion. At sites of cell-matrix adhesion, mechanical force triggers signaling through the Rho-pathway, which leads to structural reinforcement and increased contractility in the actin cytoskeleton. The resulting force acts back to the sites of adhesion, resulting in a positive feedback loop for mature adhesion. Here we model this biochemical-mechanical feedback loop for the special case when the actin cytoskeleton is organized in stress fibers, which are contractile bundles of actin filaments. Activation of myosin II molecular motors through the Rho-pathway is described by a system of reaction-diffusion equations, which are coupled into a viscoelastic model for a contractile actin bundle. We find strong spatial gradients in the activation of contractility and in the corresponding deformation pattern of the stress fiber, in good agreement with experimental findings.
2008.08162
Georgios D. Barmparis
G. D. Barmparis and G. P. Tsironis
Physics-informed machine learning for the COVID-19 pandemic: Adherence to social distancing and short-term predictions for eight countries
null
null
null
null
q-bio.PE cs.LG physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spread of COVID-19 during the initial phase of the first half of 2020 was curtailed to a larger or lesser extent through measures of social distancing imposed by most countries. In this work, we link directly, through machine learning techniques, infection data at a country level to a single number that signifies social distancing effectiveness. We assume that the standard SIR model gives a reasonable description of the dynamics of spreading, and thus the social distancing aspect can be modeled through time-dependent infection rates that are imposed externally. We use an exponential ansatz to analyze the SIR model, find an exact solution for the time-independent infection rate, and derive a simple first-order differential equation for the time-dependent infection rate as a function of the infected population. Using infected number data from the "first wave" of the infection from eight countries, and through physics-informed machine learning, we extract the degree of linear dependence in social distancing that led to the specific infections. We find that at the two extremes are Greece, with the highest decay slope on one side, and the US on the other, with a practically flat "decay". The hierarchy of slopes is compatible with the effectiveness of the pandemic containment in each country. Finally, we train our network with data after the end of the analyzed period, and we make week-long predictions for the current phase of the infection that appear to be very close to the actual infection values.
[ { "created": "Tue, 18 Aug 2020 21:26:30 GMT", "version": "v1" } ]
2020-08-20
[ [ "Barmparis", "G. D.", "" ], [ "Tsironis", "G. P.", "" ] ]
The spread of COVID-19 during the initial phase of the first half of 2020 was curtailed to a larger or lesser extent through measures of social distancing imposed by most countries. In this work, we link directly, through machine learning techniques, infection data at a country level to a single number that signifies social distancing effectiveness. We assume that the standard SIR model gives a reasonable description of the dynamics of spreading, and thus the social distancing aspect can be modeled through time-dependent infection rates that are imposed externally. We use an exponential ansatz to analyze the SIR model, find an exact solution for the time-independent infection rate, and derive a simple first-order differential equation for the time-dependent infection rate as a function of the infected population. Using infected number data from the "first wave" of the infection from eight countries, and through physics-informed machine learning, we extract the degree of linear dependence in social distancing that led to the specific infections. We find that at the two extremes are Greece, with the highest decay slope on one side, and the US on the other, with a practically flat "decay". The hierarchy of slopes is compatible with the effectiveness of the pandemic containment in each country. Finally, we train our network with data after the end of the analyzed period, and we make week-long predictions for the current phase of the infection that appear to be very close to the actual infection values.
0802.3234
Carson C. Chow
Carson C. Chow and Kevin D. Hall
The dynamics of human body weight change
null
null
10.1371/journal.pcbi.1000045
null
q-bio.TO
http://creativecommons.org/licenses/publicdomain/
An imbalance between energy intake and energy expenditure will lead to a change in body weight (mass) and body composition (fat and lean masses). A quantitative understanding of the processes involved, which currently remains lacking, will be useful in determining the etiology and treatment of obesity and other conditions resulting from prolonged energy imbalance. Here, we show that the long-term dynamics of human weight change can be captured by a mathematical model of the macronutrient flux balances and all previous models are special cases of this model. We show that the generic dynamical behavior of body composition for a clamped diet can be divided into two classes. In the first class, the body composition and mass are determined uniquely. In the second class, the body composition can exist at an infinite number of possible states. Surprisingly, perturbations of dietary energy intake or energy expenditure can give identical responses in both model classes and existing data are insufficient to distinguish between these two possibilities. However, this distinction is important for the efficacy of clinical interventions that alter body composition and mass.
[ { "created": "Thu, 21 Feb 2008 23:45:31 GMT", "version": "v1" } ]
2015-05-13
[ [ "Chow", "Carson C.", "" ], [ "Hall", "Kevin D.", "" ] ]
An imbalance between energy intake and energy expenditure will lead to a change in body weight (mass) and body composition (fat and lean masses). A quantitative understanding of the processes involved, which currently remains lacking, will be useful in determining the etiology and treatment of obesity and other conditions resulting from prolonged energy imbalance. Here, we show that the long-term dynamics of human weight change can be captured by a mathematical model of the macronutrient flux balances and all previous models are special cases of this model. We show that the generic dynamical behavior of body composition for a clamped diet can be divided into two classes. In the first class, the body composition and mass are determined uniquely. In the second class, the body composition can exist at an infinite number of possible states. Surprisingly, perturbations of dietary energy intake or energy expenditure can give identical responses in both model classes and existing data are insufficient to distinguish between these two possibilities. However, this distinction is important for the efficacy of clinical interventions that alter body composition and mass.
1310.2114
Piotr Plonski
Piotr Plonski and Jan P. Radomski
Neighbor Joining Plus - algorithm for phylogenetic tree reconstruction with proper nodes assignment
18 pages, 9 figures, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most major algorithms for phylogenetic tree reconstruction assume that sequences in the analyzed set either do not have any offspring, or that parent sequences can mutate into at most two descendants. The graph resulting from such assumptions therefore forms a binary tree, with all the nodes labeled as leaves. However, these constraints are unduly restrictive, as there are numerous data sets with multiple offspring of the same ancestors. Here we propose a solution to analyze and visualize such sets in a more intuitive manner. The method reconstructs the phylogenetic tree by assigning the sequences with offspring as internal nodes, and the sequences without offspring as leaf nodes. In the resulting tree there is no constraint on the number of adjacent nodes, which means that the solution tree need not be a binary graph. The subsequent derivation of evolutionary pathways and pair-wise mutations is then algorithmically straightforward, with edge lengths corresponding directly to the number of mutations. Other tree reconstruction algorithms can be extended in the proposed manner to also give unbiased topologies.
[ { "created": "Tue, 8 Oct 2013 12:38:36 GMT", "version": "v1" } ]
2013-10-09
[ [ "Plonski", "Piotr", "" ], [ "Radomski", "Jan P.", "" ] ]
Most major algorithms for phylogenetic tree reconstruction assume that sequences in the analyzed set either do not have any offspring, or that parent sequences can mutate into at most two descendants. The graph resulting from such assumptions therefore forms a binary tree, with all the nodes labeled as leaves. However, these constraints are unduly restrictive, as there are numerous data sets with multiple offspring of the same ancestors. Here we propose a solution to analyze and visualize such sets in a more intuitive manner. The method reconstructs the phylogenetic tree by assigning the sequences with offspring as internal nodes, and the sequences without offspring as leaf nodes. In the resulting tree there is no constraint on the number of adjacent nodes, which means that the solution tree need not be a binary graph. The subsequent derivation of evolutionary pathways and pair-wise mutations is then algorithmically straightforward, with edge lengths corresponding directly to the number of mutations. Other tree reconstruction algorithms can be extended in the proposed manner to also give unbiased topologies.
2009.02079
Pascal Klamser
Pascal P. Klamser and Pawel Romanczuk
Collective predator evasion: Putting the criticality hypothesis to the test
null
null
10.1371/journal.pcbi.1008832
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to the criticality hypothesis, collective biological systems should operate in a special parameter region, close to so-called critical points, where the collective behavior undergoes a qualitative change between different dynamical regimes. Critical systems exhibit unique properties, which may benefit collective information processing such as maximal responsiveness to external stimuli. Besides neuronal and gene-regulatory networks, recent empirical data suggests that also animal collectives may be examples of self-organized critical systems. However, open questions about self-organization mechanisms in animal groups remain: Evolutionary adaptation towards a group-level optimum (group-level selection), implicitly assumed in the "criticality hypothesis", appears in general not reasonable for fission-fusion groups composed of non-related individuals. Furthermore, previous theoretical work relies on non-spatial models, which ignore potentially important self-organization and spatial sorting effects. Using a generic, spatially-explicit model of schooling prey being attacked by a predator, we show first that schools operating at criticality perform best. However, this is not due to optimal response of the prey to the predator, as suggested by the "criticality hypothesis", but rather due to the spatial structure of the prey school at criticality. Secondly, by investigating individual-level evolution, we show that strong spatial self-sorting effects at the critical point lead to strong selection gradients, and make it an evolutionarily unstable state. Our results demonstrate the decisive role of spatio-temporal phenomena in collective behavior, and that individual-level selection is in general not a viable mechanism for self-tuning of unrelated animal groups towards criticality.
[ { "created": "Fri, 4 Sep 2020 09:13:30 GMT", "version": "v1" }, { "created": "Sat, 30 Jan 2021 00:24:52 GMT", "version": "v2" } ]
2021-06-09
[ [ "Klamser", "Pascal P.", "" ], [ "Romanczuk", "Pawel", "" ] ]
According to the criticality hypothesis, collective biological systems should operate in a special parameter region, close to so-called critical points, where the collective behavior undergoes a qualitative change between different dynamical regimes. Critical systems exhibit unique properties, which may benefit collective information processing such as maximal responsiveness to external stimuli. Besides neuronal and gene-regulatory networks, recent empirical data suggests that also animal collectives may be examples of self-organized critical systems. However, open questions about self-organization mechanisms in animal groups remain: Evolutionary adaptation towards a group-level optimum (group-level selection), implicitly assumed in the "criticality hypothesis", appears in general not reasonable for fission-fusion groups composed of non-related individuals. Furthermore, previous theoretical work relies on non-spatial models, which ignore potentially important self-organization and spatial sorting effects. Using a generic, spatially-explicit model of schooling prey being attacked by a predator, we show first that schools operating at criticality perform best. However, this is not due to optimal response of the prey to the predator, as suggested by the "criticality hypothesis", but rather due to the spatial structure of the prey school at criticality. Secondly, by investigating individual-level evolution, we show that strong spatial self-sorting effects at the critical point lead to strong selection gradients, and make it an evolutionarily unstable state. Our results demonstrate the decisive role of spatio-temporal phenomena in collective behavior, and that individual-level selection is in general not a viable mechanism for self-tuning of unrelated animal groups towards criticality.
1202.5329
Cencini Massimo Dr.
Davide Vergni, Sandro Iannaccone, Stefano Berti, Massimo Cencini
Invasions in heterogeneous habitats in the presence of advection
16 pages, 11 figures J. Theor. Biol. to appear
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate invasions from a biological reservoir to an initially empty, heterogeneous habitat in the presence of advection. The habitat consists of a periodic alternation of favorable and unfavorable patches. In the latter the population dies at fixed rate. In the former it grows either with the logistic or with an Allee effect type dynamics, where the population has to overcome a threshold to grow. We study the conditions for successful invasions and the speed of the invasion process, which is numerically and analytically investigated in several limits. Generically advection enhances the downstream invasion speed but decreases the population size of the invading species, and can even inhibit the invasion process. Remarkably, however, the rate of population increase, which quantifies the invasion efficiency, is maximized by an optimal advection velocity. In models with Allee effect, differently from the logistic case, above a critical unfavorable patch size the population localizes in a favorable patch, being unable to invade the habitat. However, we show that advection, when intense enough, may activate the invasion process.
[ { "created": "Thu, 23 Feb 2012 21:35:38 GMT", "version": "v1" } ]
2012-02-27
[ [ "Vergni", "Davide", "" ], [ "Iannaccone", "Sandro", "" ], [ "Berti", "Stefano", "" ], [ "Cencini", "Massimo", "" ] ]
We investigate invasions from a biological reservoir to an initially empty, heterogeneous habitat in the presence of advection. The habitat consists of a periodic alternation of favorable and unfavorable patches. In the latter the population dies at fixed rate. In the former it grows either with the logistic or with an Allee effect type dynamics, where the population has to overcome a threshold to grow. We study the conditions for successful invasions and the speed of the invasion process, which is numerically and analytically investigated in several limits. Generically advection enhances the downstream invasion speed but decreases the population size of the invading species, and can even inhibit the invasion process. Remarkably, however, the rate of population increase, which quantifies the invasion efficiency, is maximized by an optimal advection velocity. In models with Allee effect, differently from the logistic case, above a critical unfavorable patch size the population localizes in a favorable patch, being unable to invade the habitat. However, we show that advection, when intense enough, may activate the invasion process.
2010.03191
Da Zhou Dr.
Zhijie Wu, Yuman Wang, Kun Wang, Da Zhou
Stochastic stem cell models with mutation: A comparison of asymmetric and symmetric divisions
25 pages, 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to fulfill cell proliferation and differentiation through the cellular hierarchy, stem cells can undergo either asymmetric or symmetric divisions. Recent studies pay special attention to the effect of different modes of stem cell division on the lifetime risk of cancer, and report that symmetric division is more beneficial for delaying the onset of cancer. The fate uncertainty of symmetric division is considered to be the reason for the cancer-delaying effect. In this paper we compare asymmetric and symmetric divisions of stem cells by studying stochastic stem cell models with mutations. Specifically, by using rigorous mathematical analysis we find that both asymmetric and symmetric models show the same statistical average, but the symmetric model shows higher fluctuation than the asymmetric model. We further show that the difference between the two models would be more remarkable for lower mutation rates. Our work quantifies the uncertainty of cell division and highlights the significance of stochasticity for distinguishing between different modes of stem cell division.
[ { "created": "Wed, 7 Oct 2020 05:55:15 GMT", "version": "v1" } ]
2020-10-08
[ [ "Wu", "Zhijie", "" ], [ "Wang", "Yuman", "" ], [ "Wang", "Kun", "" ], [ "Zhou", "Da", "" ] ]
In order to fulfill cell proliferation and differentiation through the cellular hierarchy, stem cells can undergo either asymmetric or symmetric divisions. Recent studies pay special attention to the effect of different modes of stem cell division on the lifetime risk of cancer, and report that symmetric division is more beneficial for delaying the onset of cancer. The fate uncertainty of symmetric division is considered to be the reason for the cancer-delaying effect. In this paper we compare asymmetric and symmetric divisions of stem cells by studying stochastic stem cell models with mutations. Specifically, by using rigorous mathematical analysis we find that both asymmetric and symmetric models show the same statistical average, but the symmetric model shows higher fluctuation than the asymmetric model. We further show that the difference between the two models would be more remarkable for lower mutation rates. Our work quantifies the uncertainty of cell division and highlights the significance of stochasticity for distinguishing between different modes of stem cell division.
1709.01322
Davide Michieletto
Davide Michieletto, Michael Chiang, Davide Coli, Argyris Papantonis, Enzo Orlandini, Peter R. Cook and Davide Marenduzzo
Shaping Epigenetic Memory via Genomic Bookmarking
Published in Nucleic Acids Research; Supplementary Movies can be found at this url: https://www2.ph.ed.ac.uk/~dmichiel/ShapingEpigeneticMemory.html or https://www.youtube.com/watch?v=xY-DNP58yBU
Nucleic Acids Res. (2017)
10.1093/nar/gkx1200
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconciling the stability of epigenetic patterns with the rapid turnover of histone modifications and their adaptability to external stimuli is an outstanding challenge. Here, we propose a new biophysical mechanism that can establish and maintain robust yet plastic epigenetic domains via genomic bookmarking (GBM). We model chromatin as a recolourable polymer whose segments bear non-permanent histone marks (or colours) which can be modified by "writer" proteins. The three-dimensional chromatin organisation is mediated by protein bridges, or "readers", such as Polycomb Repressive Complexes and Transcription Factors. The coupling between readers and writers drives spreading of biochemical marks and sustains the memory of local chromatin states across replication and mitosis. In contrast, GBM-targeted perturbations destabilise the epigenetic patterns. Strikingly, we demonstrate that GBM alone can explain the full distribution of Polycomb marks in a whole Drosophila chromosome. We finally suggest that our model provides a starting point for an understanding of the biophysics of cellular differentiation and reprogramming.
[ { "created": "Tue, 5 Sep 2017 10:22:52 GMT", "version": "v1" }, { "created": "Tue, 28 Nov 2017 17:09:57 GMT", "version": "v2" } ]
2017-11-29
[ [ "Michieletto", "Davide", "" ], [ "Chiang", "Michael", "" ], [ "Coli", "Davide", "" ], [ "Papantonis", "Argyris", "" ], [ "Orlandini", "Enzo", "" ], [ "Cook", "Peter R.", "" ], [ "Marenduzzo", "Davide", "" ] ]
Reconciling the stability of epigenetic patterns with the rapid turnover of histone modifications and their adaptability to external stimuli is an outstanding challenge. Here, we propose a new biophysical mechanism that can establish and maintain robust yet plastic epigenetic domains via genomic bookmarking (GBM). We model chromatin as a recolourable polymer whose segments bear non-permanent histone marks (or colours) which can be modified by "writer" proteins. The three-dimensional chromatin organisation is mediated by protein bridges, or "readers", such as Polycomb Repressive Complexes and Transcription Factors. The coupling between readers and writers drives spreading of biochemical marks and sustains the memory of local chromatin states across replication and mitosis. In contrast, GBM-targeted perturbations destabilise the epigenetic patterns. Strikingly, we demonstrate that GBM alone can explain the full distribution of Polycomb marks in a whole Drosophila chromosome. We finally suggest that our model provides a starting point for an understanding of the biophysics of cellular differentiation and reprogramming.
2105.00924
Narayani Bhatia
Narayani Bhatia, Devang Mahesh, Jashandeep Singh, and Manan Suri
Bird-Area Water-Bodies Dataset (BAWD) and Predictive AI Model for Avian Botulism Outbreak (AVI-BoT)
null
null
null
null
q-bio.QM cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Avian botulism is a paralytic bacterial disease in birds often leading to high fatality. In-vitro diagnostic techniques such as Mouse Bioassay, ELISA, and PCR are usually non-preventive, post-mortem in nature, and require invasive sample collection from affected sites or dead birds. In this study, we build a first-ever multi-spectral, remote-sensing imagery based global Bird-Area Water-bodies Dataset (BAWD) (i.e. fused satellite images of warm-water lakes/marshy-lands or similar water-body sites that are important for avian fauna) backed by on-ground reporting evidence of outbreaks. BAWD consists of 16 topographically diverse global sites monitored over a time-span of 4 years (2016-2021). We propose a first-ever Artificial Intelligence (AI) based model to predict potential outbreak of Avian botulism called AVI-BoT (Aerosol Visible, Infra-red (NIR/SWIR) and Bands of Thermal). We also train and investigate a simpler (5-band) Causative-Factor model (based on prominent physiological factors reported in literature) to predict Avian botulism. AVI-BoT demonstrates a training accuracy of 0.96 and validation accuracy of 0.989 on BAWD, far superior in comparison to our model based on causative factors. We also perform an ablation study and perform a detailed feature-space analysis. We further analyze three test case study locations - Lower Klamath National Wildlife Refuge and Langvlei and Rondevlei lakes where an outbreak had occurred, and Pong Dam where an outbreak had not occurred and confirm predictions with on-ground reports. The proposed technique presents a scalable, low-cost, non-invasive methodology for continuous monitoring of bird-habitats against botulism outbreaks with the potential of saving valuable fauna lives.
[ { "created": "Mon, 3 May 2021 15:00:12 GMT", "version": "v1" }, { "created": "Thu, 17 Nov 2022 08:00:45 GMT", "version": "v2" } ]
2022-11-18
[ [ "Bhatia", "Narayani", "" ], [ "Mahesh", "Devang", "" ], [ "Singh", "Jashandeep", "" ], [ "Suri", "Manan", "" ] ]
Avian botulism is a paralytic bacterial disease in birds often leading to high fatality. In-vitro diagnostic techniques such as Mouse Bioassay, ELISA, and PCR are usually non-preventive, post-mortem in nature, and require invasive sample collection from affected sites or dead birds. In this study, we build a first-ever multi-spectral, remote-sensing imagery based global Bird-Area Water-bodies Dataset (BAWD) (i.e. fused satellite images of warm-water lakes/marshy-lands or similar water-body sites that are important for avian fauna) backed by on-ground reporting evidence of outbreaks. BAWD consists of 16 topographically diverse global sites monitored over a time-span of 4 years (2016-2021). We propose a first-ever Artificial Intelligence (AI) based model to predict potential outbreak of Avian botulism called AVI-BoT (Aerosol Visible, Infra-red (NIR/SWIR) and Bands of Thermal). We also train and investigate a simpler (5-band) Causative-Factor model (based on prominent physiological factors reported in literature) to predict Avian botulism. AVI-BoT demonstrates a training accuracy of 0.96 and validation accuracy of 0.989 on BAWD, far superior in comparison to our model based on causative factors. We also perform an ablation study and perform a detailed feature-space analysis. We further analyze three test case study locations - Lower Klamath National Wildlife Refuge and Langvlei and Rondevlei lakes where an outbreak had occurred, and Pong Dam where an outbreak had not occurred and confirm predictions with on-ground reports. The proposed technique presents a scalable, low-cost, non-invasive methodology for continuous monitoring of bird-habitats against botulism outbreaks with the potential of saving valuable fauna lives.
1211.1869
Vyacheslav Kalmykov L.
Lev V. Kalmykov, Vyacheslav L. Kalmykov
A mechanistic verification of the competitive exclusion principle
Because the known formulations of the competitive exclusion principle were violated in the first version of the manuscript, we have radically reformulated this principle. A new formulation of the principle is presented in the second version of the manuscript
Chaos, Solitons & Fractals 56 (2013) 124-131
10.1016/j.chaos.2013.07.006
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biodiversity conservation is becoming increasingly urgent. It is important to find mechanisms of competitive coexistence of species with different fitness in especially difficult circumstances - on one limiting resource, in an isolated stable uniform habitat, without any trade-offs and cooperative interactions. Here we show a mechanism of competitive coexistence based on a soliton-like behaviour of population waves. We have modelled it by the logical axiomatic deterministic individual-based cellular automata method. Our mechanistic models of population and ecosystem dynamics are of white-box type and so they provide direct insight into mechanisms under study. The mechanism provides indefinite coexistence of two, three and four competing species. This mechanism violates the known formulations of the competitive exclusion principle. As a consequence, we have proposed a fully mechanistic and most stringent formulation of the principle.
[ { "created": "Thu, 8 Nov 2012 14:48:20 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2013 19:04:52 GMT", "version": "v2" }, { "created": "Wed, 1 May 2013 00:05:03 GMT", "version": "v3" } ]
2015-03-13
[ [ "Kalmykov", "Lev V.", "" ], [ "Kalmykov", "Vyacheslav L.", "" ] ]
Biodiversity conservation is becoming increasingly urgent. It is important to find mechanisms of competitive coexistence of species with different fitness in especially difficult circumstances - on one limiting resource, in an isolated stable uniform habitat, without any trade-offs and cooperative interactions. Here we show a mechanism of competitive coexistence based on a soliton-like behaviour of population waves. We have modelled it by the logical axiomatic deterministic individual-based cellular automata method. Our mechanistic models of population and ecosystem dynamics are of white-box type and so they provide direct insight into mechanisms under study. The mechanism provides indefinite coexistence of two, three and four competing species. This mechanism violates the known formulations of the competitive exclusion principle. As a consequence, we have proposed a fully mechanistic and most stringent formulation of the principle.
2302.02018
Eddy Kwessi
Eddy Kwessi
Hierarchical Model with Allee Effect, Immigration, and Holling Type II Functional Response
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In this paper, we discuss a hierarchical model based on a Ricker competition model. The species considered are competing for resources and may be subject to an Allee effect due to mate limitation, anti-predator vigilance or aggression, cooperative predation or resource defense, or social thermoregulation. The species may be classified into a more dominant species and a less dominant or ``wimpy" species, or just as a predator and prey. The model under consideration also has components taking into account immigration in both species and a more structured Holling type II functional response. Local and global stability analyses are discussed and simulations are provided. We also consider demographic stochasticity on the species due to environmental fluctuations in the form of Wiener processes, and we show that there are conditions under which a global solution exists, stationary distributions exist, and strong persistence in mean of the species is possible. We also use the Wasserstein distance to show empirically that stochasticity can act as a bifurcation parameter.
[ { "created": "Fri, 3 Feb 2023 22:20:19 GMT", "version": "v1" } ]
2023-02-07
[ [ "Kwessi", "Eddy", "" ] ]
In this paper, we discuss a hierarchical model based on a Ricker competition model. The species considered are competing for resources and may be subject to an Allee effect due to mate limitation, anti-predator vigilance or aggression, cooperative predation or resource defense, or social thermoregulation. The species may be classified into a more dominant species and a less dominant or ``wimpy" species, or just as a predator and prey. The model under consideration also has components taking into account immigration in both species and a more structured Holling type II functional response. Local and global stability analyses are discussed and simulations are provided. We also consider demographic stochasticity on the species due to environmental fluctuations in the form of Wiener processes, and we show that there are conditions under which a global solution exists, stationary distributions exist, and strong persistence in mean of the species is possible. We also use the Wasserstein distance to show empirically that stochasticity can act as a bifurcation parameter.
1504.08312
Thomas Shultz
Caitlin J. Mouri and Thomas R. Shultz
Cooperative Intergroup Mating Can Overcome Ethnocentrism in Diverse Populations
18 pages, 8 Tables, 8 Figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ethnocentrism is a behavioral strategy seen on every scale of social interaction. Game-theory models demonstrate that evolution selects ethnocentrism because it boosts cooperation, which increases reproductive fitness. However, some believe that interethnic unions have the potential to foster universal cooperation and overcome in-group biases in humans. Here, we use agent-based computer simulations to test this hypothesis. Cooperative intergroup mating does lend an advantage to a universal cooperation strategy when the cost/benefit ratio of cooperation is low and local population diversity is high.
[ { "created": "Thu, 30 Apr 2015 17:28:58 GMT", "version": "v1" } ]
2015-05-01
[ [ "Mouri", "Caitlin J.", "" ], [ "Shultz", "Thomas R.", "" ] ]
Ethnocentrism is a behavioral strategy seen on every scale of social interaction. Game-theory models demonstrate that evolution selects ethnocentrism because it boosts cooperation, which increases reproductive fitness. However, some believe that interethnic unions have the potential to foster universal cooperation and overcome in-group biases in humans. Here, we use agent-based computer simulations to test this hypothesis. Cooperative intergroup mating does lend an advantage to a universal cooperation strategy when the cost/benefit ratio of cooperation is low and local population diversity is high.
1703.05692
Daniele Ramazzotti
Rocco Piazza and Daniele Ramazzotti and Roberta Spinelli and Alessandra Pirola and Luca De Sano and Pierangelo Ferrari and Vera Magistroni and Nicoletta Cordani and Nitesh Sharma and Carlo Gambacorti-Passerini
OncoScore: a novel, Internet-based tool to assess the oncogenic potential of genes
null
null
10.1038/srep46290
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The complicated, evolving landscape of cancer mutations poses a formidable challenge to identify cancer genes among the large lists of mutations typically generated in NGS experiments. The ability to prioritize these variants is therefore of paramount importance. To address this issue we developed OncoScore, a text-mining tool that ranks genes according to their association with cancer, based on available biomedical literature. Receiver operating characteristic curve and the area under the curve (AUC) metrics on manually curated datasets confirmed the excellent discriminating capability of OncoScore (OncoScore cut-off threshold = 21.09; AUC = 90.3%, 95% CI: 88.1-92.5%), indicating that OncoScore provides useful results in cases where an efficient prioritization of cancer-associated genes is needed.
[ { "created": "Thu, 9 Mar 2017 01:24:23 GMT", "version": "v1" }, { "created": "Fri, 7 Apr 2017 19:41:25 GMT", "version": "v2" } ]
2017-04-11
[ [ "Piazza", "Rocco", "" ], [ "Ramazzotti", "Daniele", "" ], [ "Spinelli", "Roberta", "" ], [ "Pirola", "Alessandra", "" ], [ "De Sano", "Luca", "" ], [ "Ferrari", "Pierangelo", "" ], [ "Magistroni", "Vera", "" ], [ "Cordani", "Nicoletta", "" ], [ "Sharma", "Nitesh", "" ], [ "Gambacorti-Passerini", "Carlo", "" ] ]
The complicated, evolving landscape of cancer mutations poses a formidable challenge to identify cancer genes among the large lists of mutations typically generated in NGS experiments. The ability to prioritize these variants is therefore of paramount importance. To address this issue we developed OncoScore, a text-mining tool that ranks genes according to their association with cancer, based on available biomedical literature. Receiver operating characteristic curve and the area under the curve (AUC) metrics on manually curated datasets confirmed the excellent discriminating capability of OncoScore (OncoScore cut-off threshold = 21.09; AUC = 90.3%, 95% CI: 88.1-92.5%), indicating that OncoScore provides useful results in cases where an efficient prioritization of cancer-associated genes is needed.
2310.17003
Aniket Banerjee
Urvashi Verma, Aniket Banerjee and Rana D. Parshad
T(w)o patch or not t(w)o patch: A novel additional food model
21 figures, 29 pages
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by-sa/4.0/
A number of top-down bio-control models have been proposed where the predators' efficacy is enhanced via the provision of additional food (AF). These can lead to infinite time \emph{blow-up} of the predator. Predator interference, predator competition, refuge, pest defense and cannibalism have been proposed to achieve pest extinction, while keeping the predator population bounded, but entail \emph{quadratic} or higher order damping terms. Thus an AF model with monotone pest dependent functional response, devoid of these mechanisms, cannot yield the pest extinction state. In the current manuscript we propose a novel patch driven bio-control model, with monotone pest dependent functional response. Here a predator is introduced into a ``patch" such as a prairie strip with AF, and then disperses into a neighboring ``patch" such as a crop field, to target a pest. Assuming only \emph{linear} dispersal in the predator, we show, (i) blow-up in the predator can be prevented in both the prairie strip and the crop field, (ii) the pest extinction state is attainable in the crop field, (iii) the AF model with patch structure can keep pest densities lower than the AF model without patch structure and/or the classical top-down bio-control model, (iv) the AF model with patch structure possesses rich dynamics including chaos. In particular, one observes ``patch specific chaos" - the pest occupying the crop-field can oscillate chaotically, while the pest in the prairie strip oscillates periodically. We discuss these results in light of bio-control strategies that would utilize prairie strips.
[ { "created": "Wed, 25 Oct 2023 21:00:30 GMT", "version": "v1" } ]
2023-10-27
[ [ "Verma", "Urvashi", "" ], [ "Banerjee", "Aniket", "" ], [ "Parshad", "Rana D.", "" ] ]
A number of top-down bio-control models have been proposed where the predators' efficacy is enhanced via the provision of additional food (AF). These can lead to infinite time \emph{blow-up} of the predator. Predator interference, predator competition, refuge, pest defense and cannibalism have been proposed to achieve pest extinction, while keeping the predator population bounded, but entail \emph{quadratic} or higher order damping terms. Thus an AF model with monotone pest dependent functional response, devoid of these mechanisms, cannot yield the pest extinction state. In the current manuscript we propose a novel patch driven bio-control model, with monotone pest dependent functional response. Here a predator is introduced into a ``patch" such as a prairie strip with AF, and then disperses into a neighboring ``patch" such as a crop field, to target a pest. Assuming only \emph{linear} dispersal in the predator, we show, (i) blow-up in the predator can be prevented in both the prairie strip and the crop field, (ii) the pest extinction state is attainable in the crop field, (iii) the AF model with patch structure can keep pest densities lower than the AF model without patch structure and/or the classical top-down bio-control model, (iv) the AF model with patch structure possesses rich dynamics including chaos. In particular, one observes ``patch specific chaos" - the pest occupying the crop-field can oscillate chaotically, while the pest in the prairie strip oscillates periodically. We discuss these results in light of bio-control strategies that would utilize prairie strips.
q-bio/0612030
Uygar Ozesmi
Uygar Ozesmi and Can Ozan Tan
Cognitive Maps of Complex Systems Show Hierarchical Structure and Scale-Free Properties
8 pages, 2 figures
null
null
null
q-bio.NC q-bio.OT
null
Many networks in natural and human-made systems exhibit scale-free properties and are small worlds. Now we show that people's understanding of complex systems in their cognitive maps also follows a scale-free topology (P_k = k^(-lambda), lambda in [1.24, 3.03]; r^2 <= 0.95). People focus on a few attributes, as indicated by a fat tail in the probability distribution of total degree. These few attributes are related with many other variables in the system. Many more attributes have very few connections. The scale-free properties in the cognitive maps of people arise despite the fact that their average distances are not different (Wilcoxon sign-rank test, W=78, p=0.75) than random networks of the same size and connection density. The scale-free property manifests itself in the higher hierarchical structure compared to random networks (Wilcoxon sign-rank test, W=12, p=0.03). People use relatively short explanations to describe systems. These findings may help us to better understand people's perceptions, especially when it comes to decision-making, conflict resolution, politics and management.
[ { "created": "Sat, 16 Dec 2006 14:20:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ozesmi", "Uygar", "" ], [ "Tan", "Can Ozan", "" ] ]
Many networks in natural and human-made systems exhibit scale-free properties and are small worlds. Now we show that people's understanding of complex systems in their cognitive maps also follows a scale-free topology (P_k = k^(-lambda), lambda in [1.24, 3.03]; r^2 <= 0.95). People focus on a few attributes, as indicated by a fat tail in the probability distribution of total degree. These few attributes are related with many other variables in the system. Many more attributes have very few connections. The scale-free properties in the cognitive maps of people arise despite the fact that their average distances are not different (Wilcoxon sign-rank test, W=78, p=0.75) than random networks of the same size and connection density. The scale-free property manifests itself in the higher hierarchical structure compared to random networks (Wilcoxon sign-rank test, W=12, p=0.03). People use relatively short explanations to describe systems. These findings may help us to better understand people's perceptions, especially when it comes to decision-making, conflict resolution, politics and management.
1507.03647
Vu Dinh
Vu Dinh and Frederick A. Matsen IV
The shape of the one-dimensional phylogenetic likelihood function
31 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By fixing all parameters in a phylogenetic likelihood model except for one branch length, one obtains a one-dimensional likelihood function. In this work, we introduce a mathematical framework to characterize the shapes of such one-dimensional phylogenetic likelihood functions. This framework is based on analyses of algebraic structures on the space of all frequency patterns with respect to a polynomial representation of the likelihood functions. Using this framework, we provide conditions under which the one-dimensional phylogenetic likelihood functions are guaranteed to have at most one stationary point, and this point is the maximum likelihood branch length. These conditions are satisfied by common simple models including all binary models, the Jukes-Cantor model and the Felsenstein 1981 model. We then prove that for the simplest model that does not satisfy our conditions, namely, the Kimura 2-parameter model, the one-dimensional likelihood functions may have multiple stationary points. As a proof of concept, we construct a non-degenerate example in which the phylogenetic likelihood function has two local maxima and a local minimum. To construct such examples, we derive a general method of constructing a tree and sequence data with a specified frequency pattern at the root. We then extend the result to prove that the space of all rescaled and translated one-dimensional phylogenetic likelihood functions under the Kimura 2-parameter model is dense in the space of all non-negative continuous functions on $[0, \infty)$ with finite limits. These results indicate that one-dimensional likelihood functions under advanced evolutionary models can be more complex than it is typically assumed by phylogenetic inference algorithms; however, these complexities can be effectively captured by the Kimura 2-parameter model.
[ { "created": "Mon, 13 Jul 2015 22:53:41 GMT", "version": "v1" }, { "created": "Thu, 21 Jul 2016 22:07:34 GMT", "version": "v2" } ]
2016-07-25
[ [ "Dinh", "Vu", "" ], [ "Matsen", "Frederick A.", "IV" ] ]
By fixing all parameters in a phylogenetic likelihood model except for one branch length, one obtains a one-dimensional likelihood function. In this work, we introduce a mathematical framework to characterize the shapes of such one-dimensional phylogenetic likelihood functions. This framework is based on analyses of algebraic structures on the space of all frequency patterns with respect to a polynomial representation of the likelihood functions. Using this framework, we provide conditions under which the one-dimensional phylogenetic likelihood functions are guaranteed to have at most one stationary point, and this point is the maximum likelihood branch length. These conditions are satisfied by common simple models including all binary models, the Jukes-Cantor model and the Felsenstein 1981 model. We then prove that for the simplest model that does not satisfy our conditions, namely, the Kimura 2-parameter model, the one-dimensional likelihood functions may have multiple stationary points. As a proof of concept, we construct a non-degenerate example in which the phylogenetic likelihood function has two local maxima and a local minimum. To construct such examples, we derive a general method of constructing a tree and sequence data with a specified frequency pattern at the root. We then extend the result to prove that the space of all rescaled and translated one-dimensional phylogenetic likelihood functions under the Kimura 2-parameter model is dense in the space of all non-negative continuous functions on $[0, \infty)$ with finite limits. These results indicate that one-dimensional likelihood functions under advanced evolutionary models can be more complex than it is typically assumed by phylogenetic inference algorithms; however, these complexities can be effectively captured by the Kimura 2-parameter model.
1012.5011
Amin Zia
Amin Zia, Alan M. Moses
Towards a theoretical understanding of false positives in DNA motif finding
Submitted to PLOS Computational Biology
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detection of false-positive motifs is one of the main causes of low performance in motif finding methods. It is generally assumed that false-positives are mostly due to algorithmic weakness of motif-finders. Here, however, we derive the theoretical dependence of false positives on dataset size and find that false positives can arise as a result of large dataset size, irrespective of the algorithm used. Interestingly, the false-positive strength depends more on the number of sequences in the dataset than it does on the sequence length. As expected, false-positives can be reduced by decreasing the sequence length or by adding more sequences to the dataset. The dependence on number of sequences, however, diminishes and reaches a plateau after which adding more sequences to the dataset does not reduce the false-positive rate significantly. Based on the theoretical results presented here, we provide a number of intuitive rules of thumb that may be used to enhance motif-finding results in practice.
[ { "created": "Wed, 22 Dec 2010 15:31:32 GMT", "version": "v1" } ]
2010-12-23
[ [ "Zia", "Amin", "" ], [ "Moses", "Alan M.", "" ] ]
Detection of false-positive motifs is one of the main causes of low performance in motif finding methods. It is generally assumed that false-positives are mostly due to algorithmic weakness of motif-finders. Here, however, we derive the theoretical dependence of false positives on dataset size and find that false positives can arise as a result of large dataset size, irrespective of the algorithm used. Interestingly, the false-positive strength depends more on the number of sequences in the dataset than it does on the sequence length. As expected, false-positives can be reduced by decreasing the sequence length or by adding more sequences to the dataset. The dependence on number of sequences, however, diminishes and reaches a plateau after which adding more sequences to the dataset does not reduce the false-positive rate significantly. Based on the theoretical results presented here, we provide a number of intuitive rules of thumb that may be used to enhance motif-finding results in practice.
1801.10597
Min Xu
Jialiang Guo, Bo Zhou, Xiangrui Zeng, Zachary Freyberg, Min Xu
Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography
8 pages
International Conference on Image Analysis and Recognition (ICIAR) 2018
null
null
q-bio.QM cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electron Cryo-Tomography (ECT) enables 3D visualization of macromolecule structure inside single cells. Macromolecule classification approaches based on convolutional neural networks (CNN) were developed to separate millions of macromolecules captured from ECT systematically. However, given the fast accumulation of ECT data, it will soon become necessary to use CNN models to efficiently and accurately separate substantially more macromolecules at the prediction stage, which requires additional computational costs. To speed up the prediction, we compress classification models into compact neural networks with little loss in accuracy for deployment. Specifically, we propose to perform model compression through knowledge distillation. Firstly, a complex teacher network is trained to generate soft labels with better classification feasibility, followed by training of customized student networks with simple architectures using the soft labels to compress model complexity. Our tests demonstrate that our compressed models significantly reduce the number of parameters and time cost while maintaining similar classification accuracy.
[ { "created": "Wed, 31 Jan 2018 18:39:41 GMT", "version": "v1" } ]
2018-03-28
[ [ "Guo", "Jialiang", "" ], [ "Zhou", "Bo", "" ], [ "Zeng", "Xiangrui", "" ], [ "Freyberg", "Zachary", "" ], [ "Xu", "Min", "" ] ]
Electron Cryo-Tomography (ECT) enables 3D visualization of macromolecule structure inside single cells. Macromolecule classification approaches based on convolutional neural networks (CNN) were developed to separate millions of macromolecules captured from ECT systematically. However, given the fast accumulation of ECT data, it will soon become necessary to use CNN models to efficiently and accurately separate substantially more macromolecules at the prediction stage, which requires additional computational costs. To speed up the prediction, we compress classification models into compact neural networks with little loss in accuracy for deployment. Specifically, we propose to perform model compression through knowledge distillation. Firstly, a complex teacher network is trained to generate soft labels with better classification feasibility, followed by training of customized student networks with simple architectures using the soft labels to compress model complexity. Our tests demonstrate that our compressed models significantly reduce the number of parameters and time cost while maintaining similar classification accuracy.
1505.03821
Cameron Browne
Cameron Browne, Hayriye Gulbudak and Glenn Webb
Modeling Contact Tracing in Outbreaks with Application to Ebola
null
Journal of Theoretical Biology, Volume 384 pg. 33-49, 2015
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contact tracing is an important control strategy for containing Ebola epidemics. From a theoretical perspective, explicitly incorporating contact tracing with disease dynamics presents challenges, and population level effects of contact tracing are difficult to determine. In this work, we formulate and analyze a mechanistic SEIR type outbreak model which considers the key features of contact tracing, and we characterize the impact of contact tracing on the effective reproduction number, $\mathcal R_e$, of Ebola. In particular, we determine how relevant epidemiological properties such as incubation period, infectious period and case reporting, along with varying monitoring protocols, affect the efficacy of contact tracing. In the special cases of either perfect monitoring of traced cases or perfect reporting of all cases, we derive simple formulae for the critical proportion of contacts that need to be traced in order to bring the effective reproduction number $\mathcal R_e$ below one. Also, in either case, we show that $\mathcal R_e$ can be expressed completely in terms of observable reported case/tracing quantities, namely $\mathcal R_e=k\dfrac{(1-q)}{q}+k_m$ where $k$ is the number of secondary traced infected contacts per primary untraced reported case, $k_m$ is the number of secondary traced infected contacts per primary traced reported case and $(1-q)/q$ is the odds that a reported case is not a traced contact. These formulae quantify contact tracing as both an intervention strategy that impacts disease spread and a probe into the current epidemic status at the population level. Data from the West Africa Ebola outbreak is utilized to form real-time estimates of $\mathcal R_e$, and inform our projections of the impact of contact tracing, and other control measures, on the epidemic trajectory.
[ { "created": "Thu, 14 May 2015 18:08:17 GMT", "version": "v1" }, { "created": "Sat, 7 Nov 2015 22:22:44 GMT", "version": "v2" } ]
2015-11-10
[ [ "Browne", "Cameron", "" ], [ "Gulbudak", "Hayriye", "" ], [ "Webb", "Glenn", "" ] ]
Contact tracing is an important control strategy for containing Ebola epidemics. From a theoretical perspective, explicitly incorporating contact tracing with disease dynamics presents challenges, and population level effects of contact tracing are difficult to determine. In this work, we formulate and analyze a mechanistic SEIR type outbreak model which considers the key features of contact tracing, and we characterize the impact of contact tracing on the effective reproduction number, $\mathcal R_e$, of Ebola. In particular, we determine how relevant epidemiological properties such as incubation period, infectious period and case reporting, along with varying monitoring protocols, affect the efficacy of contact tracing. In the special cases of either perfect monitoring of traced cases or perfect reporting of all cases, we derive simple formulae for the critical proportion of contacts that need to be traced in order to bring the effective reproduction number $\mathcal R_e$ below one. Also, in either case, we show that $\mathcal R_e$ can be expressed completely in terms of observable reported case/tracing quantities, namely $\mathcal R_e=k\dfrac{(1-q)}{q}+k_m$ where $k$ is the number of secondary traced infected contacts per primary untraced reported case, $k_m$ is the number of secondary traced infected contacts per primary traced reported case and $(1-q)/q$ is the odds that a reported case is not a traced contact. These formulae quantify contact tracing as both an intervention strategy that impacts disease spread and a probe into the current epidemic status at the population level. Data from the West Africa Ebola outbreak is utilized to form real-time estimates of $\mathcal R_e$, and inform our projections of the impact of contact tracing, and other control measures, on the epidemic trajectory.
2012.09122
Yannik Schaelte
Fabian Fr\"ohlich, Daniel Weindl, Yannik Sch\"alte, Dilan Pathirana, {\L}ukasz Paszkowski, Glenn Terje Lines, Paul Stapor, Jan Hasenauer
AMICI: High-Performance Sensitivity Analysis for Large Ordinary Differential Equation Models
null
null
10.1093/bioinformatics/btab227
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Ordinary differential equation models facilitate the understanding of cellular signal transduction and other biological processes. However, for large and comprehensive models, the computational cost of simulating or calibrating can be limiting. AMICI is a modular toolbox implemented in C++/Python/MATLAB that provides efficient simulation and sensitivity analysis routines tailored for scalable, gradient-based parameter estimation and uncertainty quantification. AMICI is published under the permissive BSD-3-Clause license with source code publicly available on https://github.com/AMICI-dev/AMICI. Citeable releases are archived on Zenodo.
[ { "created": "Wed, 16 Dec 2020 18:09:17 GMT", "version": "v1" } ]
2023-11-29
[ [ "Fröhlich", "Fabian", "" ], [ "Weindl", "Daniel", "" ], [ "Schälte", "Yannik", "" ], [ "Pathirana", "Dilan", "" ], [ "Paszkowski", "Łukasz", "" ], [ "Lines", "Glenn Terje", "" ], [ "Stapor", "Paul", "" ], [ "Hasenauer", "Jan", "" ] ]
Ordinary differential equation models facilitate the understanding of cellular signal transduction and other biological processes. However, for large and comprehensive models, the computational cost of simulating or calibrating can be limiting. AMICI is a modular toolbox implemented in C++/Python/MATLAB that provides efficient simulation and sensitivity analysis routines tailored for scalable, gradient-based parameter estimation and uncertainty quantification. AMICI is published under the permissive BSD-3-Clause license with source code publicly available on https://github.com/AMICI-dev/AMICI. Citeable releases are archived on Zenodo.
2105.05672
Mario V Balzan
Mario V Balzan
Assessing ecosystem services for evidence-based nature-based solutions
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The term nature-based solutions has often been used to refer to adequate green infrastructure, which is cost-effective and simultaneously provides environmental, social and economic benefits through the delivery of ecosystem services, and contributes to building resilience. This paper provides an overview of the recent work mapping and assessing ecosystem services in Malta and the implications for decision-making. Research has focused on the identification and mapping of ecosystems and ecosystem condition, the capacity to deliver key ecosystem services, and the actual use (flow) of these services by local communities leading to benefits to human well-being. The integration of results from these different assessments demonstrates several significant synergies between ecosystem services, indicating multifunctionality in the provision of ecosystem services leading to human well-being. This is considered a key criterion in the identification of green infrastructure in the Maltese Islands. A gradient in green infrastructure cover and ecosystem services capacity is observed between rural and urban areas, but ecosystem services flow per unit area was in some cases higher in urban environments. These results indicate a potential mismatch between ecosystem service demand and capacity, but also provide a scientific baseline for evidence-based policy that fosters the development of green infrastructure through nature-based innovation, promoting more specific and novel solutions for landscape and urban planning.
[ { "created": "Wed, 12 May 2021 14:13:04 GMT", "version": "v1" } ]
2021-05-13
[ [ "Balzan", "Mario V", "" ] ]
The term nature-based solutions has often been used to refer to adequate green infrastructure, which is cost-effective and simultaneously provides environmental, social and economic benefits through the delivery of ecosystem services, and contributes to building resilience. This paper provides an overview of the recent work mapping and assessing ecosystem services in Malta and the implications for decision-making. Research has focused on the identification and mapping of ecosystems and ecosystem condition, the capacity to deliver key ecosystem services, and the actual use (flow) of these services by local communities leading to benefits to human well-being. The integration of results from these different assessments demonstrates several significant synergies between ecosystem services, indicating multifunctionality in the provision of ecosystem services leading to human well-being. This is considered a key criterion in the identification of green infrastructure in the Maltese Islands. A gradient in green infrastructure cover and ecosystem services capacity is observed between rural and urban areas, but ecosystem services flow per unit area was in some cases higher in urban environments. These results indicate a potential mismatch between ecosystem service demand and capacity, but also provide a scientific baseline for evidence-based policy that fosters the development of green infrastructure through nature-based innovation, promoting more specific and novel solutions for landscape and urban planning.
1805.00117
Armando G. M. Neves
Evandro P. Souza, Eliza M. Ferreira and Armando G. M. Neves
Fixation probabilities for the Moran process in evolutionary games with two strategies: graph shapes and large population asymptotics
5 figures
null
10.1007/s00285-018-1300-4
null
q-bio.PE math.PR physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is based on the complete classification of evolutionary scenarios for the Moran process with two strategies given by Taylor et al. (B. Math. Biol. 66(6): 1621--1644, 2004). Their classification is based on whether each strategy is a Nash equilibrium and whether the fixation probability for a single individual of each strategy is larger or smaller than its value for neutral evolution. We improve on this analysis by showing that each evolutionary scenario is characterized by a definite graph shape for the fixation probability function. A second class of results deals with the behavior of the fixation probability when the population size tends to infinity. We develop asymptotic formulae that approximate the fixation probability in this limit and conclude that some of the evolutionary scenarios cannot exist when the population size is large.
[ { "created": "Mon, 30 Apr 2018 22:24:07 GMT", "version": "v1" } ]
2018-11-27
[ [ "Souza", "Evandro P.", "" ], [ "Ferreira", "Eliza M.", "" ], [ "Neves", "Armando G. M.", "" ] ]
This paper is based on the complete classification of evolutionary scenarios for the Moran process with two strategies given by Taylor et al. (B. Math. Biol. 66(6): 1621--1644, 2004). Their classification is based on whether each strategy is a Nash equilibrium and whether the fixation probability for a single individual of each strategy is larger or smaller than its value for neutral evolution. We improve on this analysis by showing that each evolutionary scenario is characterized by a definite graph shape for the fixation probability function. A second class of results deals with the behavior of the fixation probability when the population size tends to infinity. We develop asymptotic formulae that approximate the fixation probability in this limit and conclude that some of the evolutionary scenarios cannot exist when the population size is large.
q-bio/0605044
Eugene Shakhnovich
Konstantin B. Zeldovich, Boris E.Shakhnovich, Eugene I. Shakhnovich
Emergence of the protein universe in organismal evolution
Replaced with revised version; power-law distributions are shown here to emerge from microscopic model evolutionary dynamics
null
null
null
q-bio.PE q-bio.BM
null
In this work we propose a physical model of organismal evolution, where phenotype, organism life expectancy, is directly related to genotype i.e. the stability of its proteins which can be determined exactly in the model. Simulating the model on a computer, we consistently observe the Big Bang scenario whereby exponential population growth ensues as favorable sequence-structure combinations (precursors of stable proteins) are discovered. After that, random diversity of the structural space abruptly collapses into a small set of preferred structural motifs. We observe that protein folds remain stable and abundant in the population at time scales much greater than mutation or organism lifetime, and the distribution of the lifetimes of dominant folds in a population approximately follows a power law. The separation of evolutionary time scales between discovery of new folds and generation of new sequences gives rise to emergence of protein families and superfamilies whose sizes are power-law distributed, closely matching the same distributions for real proteins. The network of structural similarities of the universe of evolved proteins has the same scale-free like character as the actual protein domain universe graph (PDUG). Further, the model predicts that ancient protein domains represent a highly connected and clustered subset of all protein domains, in complete agreement with reality. Together, these results provide a microscopic first principles picture of how protein structures and gene families evolved in the course of evolution.
[ { "created": "Fri, 26 May 2006 23:36:42 GMT", "version": "v1" }, { "created": "Wed, 17 Jan 2007 22:27:31 GMT", "version": "v2" } ]
2007-05-23
[ [ "Zeldovich", "Konstantin B.", "" ], [ "Shakhnovich", "Boris E.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
In this work we propose a physical model of organismal evolution, where phenotype, organism life expectancy, is directly related to genotype i.e. the stability of its proteins which can be determined exactly in the model. Simulating the model on a computer, we consistently observe the Big Bang scenario whereby exponential population growth ensues as favorable sequence-structure combinations (precursors of stable proteins) are discovered. After that, random diversity of the structural space abruptly collapses into a small set of preferred structural motifs. We observe that protein folds remain stable and abundant in the population at time scales much greater than mutation or organism lifetime, and the distribution of the lifetimes of dominant folds in a population approximately follows a power law. The separation of evolutionary time scales between discovery of new folds and generation of new sequences gives rise to emergence of protein families and superfamilies whose sizes are power-law distributed, closely matching the same distributions for real proteins. The network of structural similarities of the universe of evolved proteins has the same scale-free like character as the actual protein domain universe graph (PDUG). Further, the model predicts that ancient protein domains represent a highly connected and clustered subset of all protein domains, in complete agreement with reality. Together, these results provide a microscopic first principles picture of how protein structures and gene families evolved in the course of evolution.
2003.01553
Alex Kasman
Alex Kasman and Brenton LeMesurier
Did Sequence Dependent Geometry Influence the Evolution of the Genetic Code?
To appear in Contemporary Mathematics
null
10.1090/conm/746/15001
null
q-bio.OT cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genetic code is the function from the set of codons to the set of amino acids by which a DNA sequence encodes proteins. Since the codons also influence the shape of the DNA molecule itself, the same sequence that encodes a protein also has a separate geometric interpretation. A question then arises: How well-duplexed are these two "codes"? In other words, in choosing a genetic sequence to encode a particular protein, how much freedom does one still have to vary the geometry (or vice versa). A recent paper by the first author addressed this question using two different methods. After reviewing those results, this paper addresses the same question with a third method: the use of Monte Carlo and Gaussian sampling methods to approximate a multi-integral representing the mutual information of a variety of possible genetic codes. Once again, it is found that the genetic code used in nuclear DNA has a slightly lower than average duplexing efficiency as compared with other hypothetical genetic codes. A concluding section discusses the significance of these surprising results.
[ { "created": "Sun, 1 Mar 2020 19:18:37 GMT", "version": "v1" } ]
2020-03-04
[ [ "Kasman", "Alex", "" ], [ "LeMesurier", "Brenton", "" ] ]
The genetic code is the function from the set of codons to the set of amino acids by which a DNA sequence encodes proteins. Since the codons also influence the shape of the DNA molecule itself, the same sequence that encodes a protein also has a separate geometric interpretation. A question then arises: How well-duplexed are these two "codes"? In other words, in choosing a genetic sequence to encode a particular protein, how much freedom does one still have to vary the geometry (or vice versa). A recent paper by the first author addressed this question using two different methods. After reviewing those results, this paper addresses the same question with a third method: the use of Monte Carlo and Gaussian sampling methods to approximate a multi-integral representing the mutual information of a variety of possible genetic codes. Once again, it is found that the genetic code used in nuclear DNA has a slightly lower than average duplexing efficiency as compared with other hypothetical genetic codes. A concluding section discusses the significance of these surprising results.
1404.5208
Norshuhaila Mohamed Sunar N.M.Sunar
N. M. Sunar, E.I. Stentiford, D.I. Stewart and L.A. Fletcher
Molecular techniques to characterize the invA genes of Salmonella spp. for pathogen inactivation study in composting
ORBIT 2010 International Conference of Organic Resources in the Carbon Economy. Crete, Greece
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relatively low concentration of pathogen indicators, such as Salmonella, in composting sometimes causes detection problems when using conventional techniques. The presence of viable but non-culturable (VBNC) organisms is also a potential problem with Salmonella detection when using conventional techniques. In this study, a molecular approach for organism recognition, the Polymerase Chain Reaction (PCR), was used to characterise the Salmonella spp. used as inoculums in composting. This study also provides a better understanding of how to select a suitable primer for Salmonella spp. The specificity of the probes and primers at the species level was verified by performing NCBI-BLAST2 (Basic Local Alignment Search Tool). Primers that target the invA gene of Salmonella were selected, which produce fragment lengths around 285 bp (shown in Figure 1). The Salmonella spp. solutions were tested and proved to contain the invA gene sequences through several molecular techniques before being used as inoculums in composting. Laboratory-scale batch composting reactors were used to examine the inactivation of Salmonella spp. The inoculum solution of Salmonella was prepared by culturing Salmonella enteritidis (ATCC13076) in tryptone broth for 24 hours before adding it directly to the compost material. The reduction rate of Salmonella spp. was enumerated by conventional methods according to the standard compost guidelines (Figure 2). The laboratory-scale study showed that after composting for 8 days the numbers of Salmonella spp. were below the limits of the UK compost standard (PAS 100), which requires the compost to be free of Salmonella spp.
[ { "created": "Mon, 21 Apr 2014 14:25:30 GMT", "version": "v1" } ]
2014-04-22
[ [ "Sunar", "N. M.", "" ], [ "Stentiford", "E. I.", "" ], [ "Stewart", "D. I.", "" ], [ "Fletcher", "L. A.", "" ] ]
The relatively low concentration of pathogen indicators, such as Salmonella, in composting sometimes causes detection problems when using conventional techniques. The presence of viable but non-culturable (VBNC) organisms is also a potential problem with Salmonella detection when using conventional techniques. In this study, a molecular approach for organism recognition, the Polymerase Chain Reaction (PCR), was used to characterise the Salmonella spp. used as inoculums in composting. This study also provides a better understanding of how to select a suitable primer for Salmonella spp. The specificity of the probes and primers at the species level was verified by performing NCBI-BLAST2 (Basic Local Alignment Search Tool). Primers that target the invA gene of Salmonella were selected, which produce fragment lengths around 285 bp (shown in Figure 1). The Salmonella spp. solutions were tested and proved to contain the invA gene sequences through several molecular techniques before being used as inoculums in composting. Laboratory-scale batch composting reactors were used to examine the inactivation of Salmonella spp. The inoculum solution of Salmonella was prepared by culturing Salmonella enteritidis (ATCC13076) in tryptone broth for 24 hours before adding it directly to the compost material. The reduction rate of Salmonella spp. was enumerated by conventional methods according to the standard compost guidelines (Figure 2). The laboratory-scale study showed that after composting for 8 days the numbers of Salmonella spp. were below the limits of the UK compost standard (PAS 100), which requires the compost to be free of Salmonella spp.
1508.04635
Gopal Sarma
Gopal P. Sarma, Travis W. Jacobs, Mark D. Watts, Vahid Ghayoomi, Richard C. Gerkin, and Stephen D. Larson
Unit Testing, Model Validation, and Biological Simulation
13 pages, 8 figures
F1000Research 2016, 5:1946
10.12688/f1000research.9315.1
null
q-bio.QM cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
[ { "created": "Wed, 19 Aug 2015 13:18:43 GMT", "version": "v1" }, { "created": "Sun, 5 Mar 2017 16:09:35 GMT", "version": "v2" } ]
2017-03-07
[ [ "Sarma", "Gopal P.", "" ], [ "Jacobs", "Travis W.", "" ], [ "Watts", "Mark D.", "" ], [ "Ghayoomi", "Vahid", "" ], [ "Gerkin", "Richard C.", "" ], [ "Larson", "Stephen D.", "" ] ]
The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
2104.04672
Chen Liu
Nanyan Zhu, Chen Liu, Xinyang Feng, Dipika Sikka, Sabrina Gjerswold-Selleck, Scott A. Small, Jia Guo
Deep Learning Identifies Neuroimaging Signatures of Alzheimer's Disease Using Structural and Synthesized Functional MRI Data
Published in IEEE ISBI 2021. Available at https://ieeexplore.ieee.org/document/9433808
null
10.1109/ISBI48211.2021.9433808
null
q-bio.QM cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Current neuroimaging techniques provide paths to investigate the structure and function of the brain in vivo and have made great advances in understanding Alzheimer's disease (AD). However, the group-level analyses prevalently used for investigation and understanding of the disease are not applicable for diagnosis of individuals. More recently, deep learning, which can efficiently analyze large-scale complex patterns in 3D brain images, has helped pave the way for computer-aided individual diagnosis by providing accurate and automated disease classification. Great progress has been made in classifying AD with deep learning models developed upon increasingly available structural MRI data. The lack of scale-matched functional neuroimaging data prevents such models from being further improved by observing functional changes in pathophysiology. Here we propose a potential solution by first learning a structural-to-functional transformation in brain MRI, and further synthesizing spatially matched functional images from large-scale structural scans. We evaluated our approach by building computational models to discriminate patients with AD from healthy normal subjects and demonstrated a performance boost after combining the structural and synthesized functional brain images into the same model. Furthermore, our regional analyses identified the temporal lobe to be the most predictive structural-region and the parieto-occipital lobe to be the most predictive functional-region of our model, which are both in concordance with previous group-level neuroimaging findings. Together, we demonstrate the potential of deep learning with large-scale structural and synthesized functional MRI to impact AD classification and to identify AD's neuroimaging signatures.
[ { "created": "Sat, 10 Apr 2021 03:16:33 GMT", "version": "v1" }, { "created": "Fri, 28 May 2021 15:56:28 GMT", "version": "v2" } ]
2021-05-31
[ [ "Zhu", "Nanyan", "" ], [ "Liu", "Chen", "" ], [ "Feng", "Xinyang", "" ], [ "Sikka", "Dipika", "" ], [ "Gjerswold-Selleck", "Sabrina", "" ], [ "Small", "Scott A.", "" ], [ "Guo", "Jia", "" ] ]
Current neuroimaging techniques provide paths to investigate the structure and function of the brain in vivo and have made great advances in understanding Alzheimer's disease (AD). However, the group-level analyses prevalently used for investigation and understanding of the disease are not applicable for diagnosis of individuals. More recently, deep learning, which can efficiently analyze large-scale complex patterns in 3D brain images, has helped pave the way for computer-aided individual diagnosis by providing accurate and automated disease classification. Great progress has been made in classifying AD with deep learning models developed upon increasingly available structural MRI data. The lack of scale-matched functional neuroimaging data prevents such models from being further improved by observing functional changes in pathophysiology. Here we propose a potential solution by first learning a structural-to-functional transformation in brain MRI, and further synthesizing spatially matched functional images from large-scale structural scans. We evaluated our approach by building computational models to discriminate patients with AD from healthy normal subjects and demonstrated a performance boost after combining the structural and synthesized functional brain images into the same model. Furthermore, our regional analyses identified the temporal lobe to be the most predictive structural-region and the parieto-occipital lobe to be the most predictive functional-region of our model, which are both in concordance with previous group-level neuroimaging findings. Together, we demonstrate the potential of deep learning with large-scale structural and synthesized functional MRI to impact AD classification and to identify AD's neuroimaging signatures.
2103.05635
Vahid Salari
Fereshteh Arab, Sareh Rostami, Mohammad Dehghani-Habibabadi, Vahid Salari, Mir-Shahram Safari
Optogenetically Induced Spatiotemporal Gamma Oscillations in Visual Cortex
11 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been hypothesized that Gamma cortical oscillations play important roles in numerous cognitive processes and may be involved in psychiatric conditions including anxiety, schizophrenia, and autism. Gamma rhythms are commonly observed in many brain regions during both waking and sleep states, yet their functions and mechanisms remain a matter of debate. Spatiotemporal Gamma oscillations can explain neuronal representation, computation, and the shaping of communication among cortical neurons, as well as neurological and neuropsychiatric disorders in the neocortex. In this study, the neural network dynamics and spatiotemporal behavior of the cerebral cortex are examined during Gamma brain activity. We have directly observed Gamma oscillations in visual processing as spatiotemporal waves induced by targeted optogenetic stimulation. We have experimentally demonstrated that constant optogenetic stimulation based on the ChR2 opsin under the control of the CaMKII{\alpha} promoter can induce sustained narrowband Gamma oscillations in the visual cortex of rats during their comatose states. The injections of the viral vector [LentiVirus CaMKII{\alpha} ChR2] were performed at two different depths, 200 and 500 \mu m. Finally, we computationally analyze our results via the Wilson-Cowan model.
[ { "created": "Mon, 8 Mar 2021 23:52:50 GMT", "version": "v1" } ]
2021-03-11
[ [ "Arab", "Fereshteh", "" ], [ "Rostami", "Sareh", "" ], [ "Dehghani-Habibabadi", "Mohammad", "" ], [ "Salari", "Vahid", "" ], [ "Safari", "Mir-Shahram", "" ] ]
It has been hypothesized that Gamma cortical oscillations play important roles in numerous cognitive processes and may be involved in psychiatric conditions including anxiety, schizophrenia, and autism. Gamma rhythms are commonly observed in many brain regions during both waking and sleep states, yet their functions and mechanisms remain a matter of debate. Spatiotemporal Gamma oscillations can explain neuronal representation, computation, and the shaping of communication among cortical neurons, as well as neurological and neuropsychiatric disorders in the neocortex. In this study, the neural network dynamics and spatiotemporal behavior of the cerebral cortex are examined during Gamma brain activity. We have directly observed Gamma oscillations in visual processing as spatiotemporal waves induced by targeted optogenetic stimulation. We have experimentally demonstrated that constant optogenetic stimulation based on the ChR2 opsin under the control of the CaMKII{\alpha} promoter can induce sustained narrowband Gamma oscillations in the visual cortex of rats during their comatose states. The injections of the viral vector [LentiVirus CaMKII{\alpha} ChR2] were performed at two different depths, 200 and 500 \mu m. Finally, we computationally analyze our results via the Wilson-Cowan model.
q-bio/0512035
Stephen Hicks
Stephen D. Hicks and C. L. Henley
An irreversible growth model for virus capsid assembly
19 pages, 19 figures; v2: two figures added, small changes to content; v3: minor content changes in response to referees
null
10.1103/PhysRevE.74.031912
null
q-bio.BM cond-mat.soft
null
We model the spontaneous assembly of a capsid (a virus's closed outer shell) from many copies of identical units, using entirely irreversible steps and only information local to the growing edge. Our model is formulated in terms of (i) an elastic Hamiltonian with stretching and bending stiffness and a spontaneous curvature, and (ii) a set of rate constants for addition of new units or bonds. An ensemble of highly irregular capsids is generated, unlike the well-known icosahedrally symmetric viruses, but (we argue) plausible as a way to model the irregular capsids of retroviruses such as HIV. We found that (i) the probability of successful capsid completion decays exponentially with capsid size; (ii) capsid size depends strongly on spontaneous curvature and weakly on the ratio of the bending and stretching elastic stiffnesses of the shell; (iii) the degree of localization of Gaussian curvature (a measure of facetedness) depends heavily on the ratio of elastic stiffnesses.
[ { "created": "Mon, 19 Dec 2005 19:05:48 GMT", "version": "v1" }, { "created": "Tue, 21 Mar 2006 03:39:24 GMT", "version": "v2" }, { "created": "Tue, 8 Aug 2006 06:06:37 GMT", "version": "v3" } ]
2009-11-11
[ [ "Hicks", "Stephen D.", "" ], [ "Henley", "C. L.", "" ] ]
We model the spontaneous assembly of a capsid (a virus's closed outer shell) from many copies of identical units, using entirely irreversible steps and only information local to the growing edge. Our model is formulated in terms of (i) an elastic Hamiltonian with stretching and bending stiffness and a spontaneous curvature, and (ii) a set of rate constants for addition of new units or bonds. An ensemble of highly irregular capsids is generated, unlike the well-known icosahedrally symmetric viruses, but (we argue) plausible as a way to model the irregular capsids of retroviruses such as HIV. We found that (i) the probability of successful capsid completion decays exponentially with capsid size; (ii) capsid size depends strongly on spontaneous curvature and weakly on the ratio of the bending and stretching elastic stiffnesses of the shell; (iii) the degree of localization of Gaussian curvature (a measure of facetedness) depends heavily on the ratio of elastic stiffnesses.
1010.3059
Leonid Perlovsky
Leonid I. Perlovsky
SCIENCE AND RELIGION: Scientific Understanding and Mathematical Modeling of Emotions of the Spiritually Sublime
18 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Science strives for a detailed understanding of reality even if this differentiation threatens individual synthesis, or the wholeness of psyche. Religion strives to maintain the wholeness of psyche, even if at the expense of a detailed understanding of the world and Self. This paper analyzes the cognitive forces driving us to achieve both. This analysis leads to an understanding of the emotions of the religiously sublime, which are the foundations of all religions. These seemingly mysterious feelings, which everyone feels, even if rarely, even if without noticing them consciously, even if without being able to name them properly, can today be explained scientifically. And possibly, we may soon be able to measure them in a psychological laboratory. The article briefly reviews new developments in brain imaging that have made new data available, and reviews developments and mathematical modeling in cognitive theory explaining these previously mysterious feelings. This new scientific analysis has overcome another long-standing challenge: reductionism. Although religious feelings can be scientifically discussed in terms of concrete neural mechanisms and mathematically modeled, they cannot be reduced to "just this or that" mechanical explanation.
[ { "created": "Fri, 15 Oct 2010 01:17:48 GMT", "version": "v1" } ]
2010-11-13
[ [ "Perlovsky", "Leonid I.", "" ] ]
Science strives for a detailed understanding of reality even if this differentiation threatens individual synthesis, or the wholeness of psyche. Religion strives to maintain the wholeness of psyche, even if at the expense of a detailed understanding of the world and Self. This paper analyzes the cognitive forces driving us to achieve both. This analysis leads to an understanding of the emotions of the religiously sublime, which are the foundations of all religions. These seemingly mysterious feelings, which everyone feels, even if rarely, even if without noticing them consciously, even if without being able to name them properly, can today be explained scientifically. And possibly, we may soon be able to measure them in a psychological laboratory. The article briefly reviews new developments in brain imaging that have made new data available, and reviews developments and mathematical modeling in cognitive theory explaining these previously mysterious feelings. This new scientific analysis has overcome another long-standing challenge: reductionism. Although religious feelings can be scientifically discussed in terms of concrete neural mechanisms and mathematically modeled, they cannot be reduced to "just this or that" mechanical explanation.
0810.0053
Oskar Hallatschek
Oskar Hallatschek, David R. Nelson
Life at the front of an expanding population
Update
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent microbial experiments suggest that enhanced genetic drift at the frontier of a two-dimensional range expansion can cause genetic sectoring patterns with fractal domain boundaries. Here, we propose and analyze a simple model of asexual biological evolution at expanding frontiers to explain these neutral patterns and predict the effect of natural selection. Our model attributes the observed gradual decrease in the number of sectors at the leading edge to an unbiased random walk of sector boundaries. Natural selection introduces a deterministic bias in the wandering of domain boundaries that renders beneficial mutations more likely to escape genetic drift and become established in a sector. We find that the opening angle of those sectors and the rate at which they become established depend sensitively on the selective advantage of the mutants. Deleterious mutations, on the other hand, are not able to establish a sector permanently. They can, however, temporarily "surf" on the population front, and thereby reach unusually high frequencies. As a consequence, expanding frontiers are susceptible to deleterious mutations as revealed by the high fraction of mutants at mutation-selection balance. Numerically, we also determine the condition at which the wild type is lost in favor of deleterious mutants (genetic meltdown) at a growing front. Our prediction for this error threshold differs qualitatively from existing well-mixed theories, and sets tight constraints on sustainable mutation rates for populations that undergo frequent range expansions.
[ { "created": "Wed, 1 Oct 2008 04:06:18 GMT", "version": "v1" }, { "created": "Fri, 12 Dec 2008 08:13:22 GMT", "version": "v2" } ]
2008-12-12
[ [ "Hallatschek", "Oskar", "" ], [ "Nelson", "David R.", "" ] ]
Recent microbial experiments suggest that enhanced genetic drift at the frontier of a two-dimensional range expansion can cause genetic sectoring patterns with fractal domain boundaries. Here, we propose and analyze a simple model of asexual biological evolution at expanding frontiers to explain these neutral patterns and predict the effect of natural selection. Our model attributes the observed gradual decrease in the number of sectors at the leading edge to an unbiased random walk of sector boundaries. Natural selection introduces a deterministic bias in the wandering of domain boundaries that renders beneficial mutations more likely to escape genetic drift and become established in a sector. We find that the opening angle of those sectors and the rate at which they become established depend sensitively on the selective advantage of the mutants. Deleterious mutations, on the other hand, are not able to establish a sector permanently. They can, however, temporarily "surf" on the population front, and thereby reach unusually high frequencies. As a consequence, expanding frontiers are susceptible to deleterious mutations as revealed by the high fraction of mutants at mutation-selection balance. Numerically, we also determine the condition at which the wild type is lost in favor of deleterious mutants (genetic meltdown) at a growing front. Our prediction for this error threshold differs qualitatively from existing well-mixed theories, and sets tight constraints on sustainable mutation rates for populations that undergo frequent range expansions.
1801.08189
Ricardo Martinez-Garcia
Ricardo Martinez-Garcia, Carey D. Nadell, Raimo Hartmann, Knut Drescher, Juan A. Bonachela
Cell adhesion and fluid flow jointly initiate genotype spatial distribution in biofilms
32 pages (4 figures; 8 SI Fig; SI Text)
null
10.1371/journal.pcbi.1006094
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biofilms are microbial collectives that occupy a diverse array of surfaces. The function and evolution of biofilms are strongly influenced by the spatial arrangement of different strains and species within them, but how spatiotemporal distributions of different genotypes in biofilm populations originate is still underexplored. Here, we study the origins of biofilm genetic structure by combining model development, numerical simulations, and microfluidic experiments using the human pathogen Vibrio cholerae. Using spatial correlation functions to quantify the differences between emergent cell lineage segregation patterns, we find that strong adhesion often, but not always, maximizes the size of clonal cell clusters on flat surfaces. Counterintuitively, our model predicts that, under some conditions, investing in adhesion can reduce rather than increase clonal group size. Our results emphasize that a complex interaction of fluid flow and cell adhesiveness can underlie emergent patterns of biofilm genetic structure. This structure, in turn, has an outsize influence on how biofilm-dwelling populations function and evolve.
[ { "created": "Wed, 24 Jan 2018 21:00:31 GMT", "version": "v1" } ]
2018-07-04
[ [ "Martinez-Garcia", "Ricardo", "" ], [ "Nadell", "Carey D.", "" ], [ "Hartmann", "Raimo", "" ], [ "Drescher", "Knut", "" ], [ "Bonachela", "Juan A.", "" ] ]
Biofilms are microbial collectives that occupy a diverse array of surfaces. The function and evolution of biofilms are strongly influenced by the spatial arrangement of different strains and species within them, but how spatiotemporal distributions of different genotypes in biofilm populations originate is still underexplored. Here, we study the origins of biofilm genetic structure by combining model development, numerical simulations, and microfluidic experiments using the human pathogen Vibrio cholerae. Using spatial correlation functions to quantify the differences between emergent cell lineage segregation patterns, we find that strong adhesion often, but not always, maximizes the size of clonal cell clusters on flat surfaces. Counterintuitively, our model predicts that, under some conditions, investing in adhesion can reduce rather than increase clonal group size. Our results emphasize that a complex interaction of fluid flow and cell adhesiveness can underlie emergent patterns of biofilm genetic structure. This structure, in turn, has an outsize influence on how biofilm-dwelling populations function and evolve.
1712.04000
Thierry Mora
Rhys M. Adams, Justin B. Kinney, Aleksandra M. Walczak, Thierry Mora
Physical epistatic landscape of antibody binding affinity
null
Cell Systems 8 86-93 (2019)
10.1016/j.cels.2018.12.004
null
q-bio.PE q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Affinity maturation produces antibodies that bind antigens with high specificity by accumulating mutations in the antibody sequence. Mapping out the antibody-antigen affinity landscape can give us insight into the accessible paths during this rapid evolutionary process. By developing a carefully controlled null model for noninteracting mutations, we characterized epistasis in affinity measurements of a large library of antibody variants obtained by Tite-Seq, a recently introduced Deep Mutational Scan method yielding physical values of the binding constant. We show that representing affinity as the binding free energy minimizes epistasis. Yet, we find that epistatically interacting sites contribute substantially to binding. In addition to negative epistasis, we report a large amount of beneficial epistasis, enlarging the space of high-affinity antibodies as well as their mutational accessibility. These properties suggest that the degeneracy of antibody sequences that can bind a given antigen is enhanced by epistasis - an important property for vaccine design.
[ { "created": "Mon, 11 Dec 2017 19:51:39 GMT", "version": "v1" } ]
2019-05-14
[ [ "Adams", "Rhys M.", "" ], [ "Kinney", "Justin B.", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Mora", "Thierry", "" ] ]
Affinity maturation produces antibodies that bind antigens with high specificity by accumulating mutations in the antibody sequence. Mapping out the antibody-antigen affinity landscape can give us insight into the accessible paths during this rapid evolutionary process. By developing a carefully controlled null model for noninteracting mutations, we characterized epistasis in affinity measurements of a large library of antibody variants obtained by Tite-Seq, a recently introduced Deep Mutational Scan method yielding physical values of the binding constant. We show that representing affinity as the binding free energy minimizes epistasis. Yet, we find that epistatically interacting sites contribute substantially to binding. In addition to negative epistasis, we report a large amount of beneficial epistasis, enlarging the space of high-affinity antibodies as well as their mutational accessibility. These properties suggest that the degeneracy of antibody sequences that can bind a given antigen is enhanced by epistasis - an important property for vaccine design.
2102.11437
Carina Curto
Daniela Egas Santander, Stefania Ebli, Alice Patania, Nicole Sanderson, Felicia Burtscher, Katherine Morrison, Carina Curto
Nerve theorems for fixed points of neural networks
25 pages, 17 figures, to appear in Research in Computational Topology volume 2 of the Springer AWM series
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Nonlinear network dynamics are notoriously difficult to understand. Here we study a class of recurrent neural networks called combinatorial threshold-linear networks (CTLNs) whose dynamics are determined by the structure of a directed graph. They are a special case of TLNs, a popular framework for modeling neural activity in computational neuroscience. In prior work, CTLNs were found to be surprisingly tractable mathematically. For small networks, the fixed points of the network dynamics can often be completely determined via a series of graph rules that can be applied directly to the underlying graph. For larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties. In this work, we propose a method of covering graphs of CTLNs with a set of smaller directional graphs that reflect the local flow of activity. While directional graphs may or may not have a feedforward architecture, their fixed point structure is indicative of feedforward dynamics. The combinatorial structure of the graph cover is captured by the nerve of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three nerve theorems that provide strong constraints on the fixed points of the underlying network from the structure of the nerve. We then illustrate the power of these theorems with some examples. Remarkably, we find that the nerve not only constrains the fixed points of CTLNs, but also gives insight into the transient and asymptotic dynamics. This is because the flow of activity in the network tends to follow the edges of the nerve.
[ { "created": "Tue, 23 Feb 2021 01:00:43 GMT", "version": "v1" }, { "created": "Wed, 28 Jul 2021 17:50:25 GMT", "version": "v2" }, { "created": "Wed, 15 Sep 2021 17:14:55 GMT", "version": "v3" } ]
2021-09-16
[ [ "Santander", "Daniela Egas", "" ], [ "Ebli", "Stefania", "" ], [ "Patania", "Alice", "" ], [ "Sanderson", "Nicole", "" ], [ "Burtscher", "Felicia", "" ], [ "Morrison", "Katherine", "" ], [ "Curto", "Carina", "" ] ]
Nonlinear network dynamics are notoriously difficult to understand. Here we study a class of recurrent neural networks called combinatorial threshold-linear networks (CTLNs) whose dynamics are determined by the structure of a directed graph. They are a special case of TLNs, a popular framework for modeling neural activity in computational neuroscience. In prior work, CTLNs were found to be surprisingly tractable mathematically. For small networks, the fixed points of the network dynamics can often be completely determined via a series of graph rules that can be applied directly to the underlying graph. For larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties. In this work, we propose a method of covering graphs of CTLNs with a set of smaller directional graphs that reflect the local flow of activity. While directional graphs may or may not have a feedforward architecture, their fixed point structure is indicative of feedforward dynamics. The combinatorial structure of the graph cover is captured by the nerve of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three nerve theorems that provide strong constraints on the fixed points of the underlying network from the structure of the nerve. We then illustrate the power of these theorems with some examples. Remarkably, we find that the nerve not only constrains the fixed points of CTLNs, but also gives insight into the transient and asymptotic dynamics. This is because the flow of activity in the network tends to follow the edges of the nerve.
q-bio/0601044
Guido Tiana
Ludovico Sutto, Guido Tiana and Ricardo A. Broglia
Hierarchy of events in protein folding: beyond the Go model
null
null
null
null
q-bio.BM
null
Simplified Go models, where only native contacts interact favorably, have proven useful to characterize some aspects of the folding of small proteins. The success of these models is limited by the fact that all residues interact in the same way, so that the folding features of a protein are determined only by the geometry of its native conformation. We present an extended version of a C-alpha based Go model where different residues interact with different energies. The model is used to calculate the thermodynamics of three small proteins (Protein G, SrcSH3 and CI2) and the effect of mutations on the wildtype sequence. The model allows one to investigate some of the most controversial areas in protein folding such as its earliest stages, a subject which has lately received particular attention. The picture which emerges for the three proteins under study is that of a hierarchical process, where local elementary structures (LES) (not necessarily coincident with elements of secondary structure) are formed at the early stages of the folding and drive the protein, through the transition state and the postcritical folding nucleus (FN), resulting from the docking of the LES, to the native conformation.
[ { "created": "Thu, 26 Jan 2006 15:03:24 GMT", "version": "v1" } ]
2007-05-23
[ [ "Sutto", "Ludovico", "" ], [ "Tiana", "Guido", "" ], [ "Broglia", "Ricardo A.", "" ] ]
Simplified Go models, where only native contacts interact favorably, have proven useful to characterize some aspects of the folding of small proteins. The success of these models is limited by the fact that all residues interact in the same way, so that the folding features of a protein are determined only by the geometry of its native conformation. We present an extended version of a C-alpha based Go model where different residues interact with different energies. The model is used to calculate the thermodynamics of three small proteins (Protein G, SrcSH3 and CI2) and the effect of mutations on the wildtype sequence. The model allows one to investigate some of the most controversial areas in protein folding such as its earliest stages, a subject which has lately received particular attention. The picture which emerges for the three proteins under study is that of a hierarchical process, where local elementary structures (LES) (not necessarily coincident with elements of secondary structure) are formed at the early stages of the folding and drive the protein, through the transition state and the postcritical folding nucleus (FN), resulting from the docking of the LES, to the native conformation.
1611.00834
Alexei Koulakov
Sergey A. Shuvaev, Batuhan Ba\c{s}erdem, Anthony Zador, Alexei A. Koulakov
Network cloning using DNA barcodes
5 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to measure or manipulate network connectivity is the main challenge in the field of connectomics. Recently, a set of approaches has been developed that takes advantage of next-generation DNA sequencing to scan connections between neurons into a set of DNA barcodes. Individual DNA sequences called markers represent single neurons, while pairs of markers, called barcodes, contain information about connections. Here we propose a strategy for 'copying' or 'cloning' connectivity contained in barcodes into a clean-slate (tabula rasa) network. We show that a one marker one cell (OMOC) rule, which forces all markers with the same sequence to condense into the same neuron, leads to fast and reliable formation of the desired connectivity in a new network. We show that the OMOC rule yields convergence in a number of steps given by a power-law function of the network size. We thus propose that copying network connectivity from one network to another is theoretically possible.
[ { "created": "Wed, 2 Nov 2016 22:55:10 GMT", "version": "v1" } ]
2016-11-04
[ [ "Shuvaev", "Sergey A.", "" ], [ "Başerdem", "Batuhan", "" ], [ "Zador", "Anthony", "" ], [ "Koulakov", "Alexei A.", "" ] ]
The ability to measure or manipulate network connectivity is the main challenge in the field of connectomics. Recently, a set of approaches has been developed that takes advantage of next-generation DNA sequencing to scan connections between neurons into a set of DNA barcodes. Individual DNA sequences called markers represent single neurons, while pairs of markers, called barcodes, contain information about connections. Here we propose a strategy for 'copying' or 'cloning' connectivity contained in barcodes into a clean-slate (tabula rasa) network. We show that a one marker one cell (OMOC) rule, which forces all markers with the same sequence to condense into the same neuron, leads to fast and reliable formation of the desired connectivity in a new network. We show that the OMOC rule yields convergence in a number of steps given by a power-law function of the network size. We thus propose that copying network connectivity from one network to another is theoretically possible.
2407.00350
Yuxuan Wu
Yuxuan Wu, Liufang Xu, and Jin Wang
Nonequilibrium dynamics and thermodynamics provide the underlying physical mechanism of the perceptual rivalry
26 pages, 10 figures
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Perceptual rivalry, where conflicting sensory information leads to alternating perceptions crucial for associated cognitive function, has long attracted researchers' attention. Despite the progress that has been made, recent studies have revealed limitations and inconsistencies in our understanding across various rivalry contexts. We develop a unified physical framework in which perception undergoes a consecutive phase-transition process encompassing different multi-state competitions. We reveal the underlying mechanisms of perceptual rivalry by identifying dominant switching paths among perceptual states and quantifying mean perceptual durations, switching frequencies, and proportions of different perceptions. We uncover the underlying nonequilibrium dynamics and thermodynamics by analyzing the average nonequilibrium flux and entropy production rate, while the associated time-series irreversibility reflects the underlying nonequilibrium mechanism of perceptual rivalry and links the thermodynamic results with neuro-electrophysiological experiments. Our framework provides a global and physical understanding of brain perception, which may go beyond cognitive science or psychology and embodies its connection with wider fields such as decision-making.
[ { "created": "Sat, 29 Jun 2024 07:48:29 GMT", "version": "v1" }, { "created": "Mon, 15 Jul 2024 10:43:06 GMT", "version": "v2" } ]
2024-07-16
[ [ "Wu", "Yuxuan", "" ], [ "Xu", "Liufang", "" ], [ "Wang", "Jin", "" ] ]
Perceptual rivalry, where conflicting sensory information leads to alternating perceptions crucial for associated cognitive function, has long attracted researchers' attention. Despite the progress that has been made, recent studies have revealed limitations and inconsistencies in our understanding across various rivalry contexts. We develop a unified physical framework in which perception undergoes a consecutive phase-transition process encompassing different multi-state competitions. We reveal the underlying mechanisms of perceptual rivalry by identifying dominant switching paths among perceptual states and quantifying mean perceptual durations, switching frequencies, and proportions of different perceptions. We uncover the underlying nonequilibrium dynamics and thermodynamics by analyzing the average nonequilibrium flux and entropy production rate, while the associated time-series irreversibility reflects the underlying nonequilibrium mechanism of perceptual rivalry and links the thermodynamic results with neuro-electrophysiological experiments. Our framework provides a global and physical understanding of brain perception, which may go beyond cognitive science or psychology and embodies its connection with wider fields such as decision-making.
2004.10490
Helder Nakaya
Ra\'ul Arias-Carrasco, Jeevan Giddaluru, Lucas E. Cardozo, Felipe Martins, Vinicius Maracaja-Coutinho, Helder I. Nakaya
OUTBREAK: A user-friendly georeferencing online tool for disease surveillance
null
null
null
null
q-bio.QM cs.CY physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
The current COVID-19 pandemic has already claimed more than 100,000 victims and will cause more deaths in the coming months. Tools that can track the number and locations of cases are critical for surveillance and can help in making policy decisions for controlling the outbreak. Current web-based surveillance dashboards run on proprietary platforms, which are often expensive and require specific computational knowledge. We present a new tool (OUTBREAK) for studying and visualizing epidemiological data. It permits even non-specialist users to input data conveniently and track outbreaks in real time. This tool has the potential to guide and help health authorities to intervene and minimize the effects of outbreaks. It is freely available at http://outbreak.sysbio.tools/.
[ { "created": "Wed, 22 Apr 2020 11:01:08 GMT", "version": "v1" } ]
2020-04-23
[ [ "Arias-Carrasco", "Raúl", "" ], [ "Giddaluru", "Jeevan", "" ], [ "Cardozo", "Lucas E.", "" ], [ "Martins", "Felipe", "" ], [ "Maracaja-Coutinho", "Vinicius", "" ], [ "Nakaya", "Helder I.", "" ] ]
The current COVID-19 pandemic has already claimed more than 100,000 victims and will cause more deaths in the coming months. Tools that can track the number and locations of cases are critical for surveillance and can help in making policy decisions for controlling the outbreak. Current web-based surveillance dashboards run on proprietary platforms, which are often expensive and require specific computational knowledge. We present a new tool (OUTBREAK) for studying and visualizing epidemiological data. It permits even non-specialist users to input data conveniently and track outbreaks in real time. This tool has the potential to guide and help health authorities to intervene and minimize the effects of outbreaks. It is freely available at http://outbreak.sysbio.tools/.
1703.08950
Damir Hasic
Damir Hasic, Eric Tannier
Gene tree species tree reconciliation with gene conversion
null
null
null
null
q-bio.QM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene tree/species tree reconciliation is a recent decisive advance in phylogenetic methods, accounting for the possible differences between gene histories and species histories. Reconciliation consists in explaining these differences by gene-scale events such as duplication, loss, and transfer, which translates mathematically into a mapping between gene tree nodes and species tree nodes or branches. Gene conversion is a very frequent biological event, which results in the replacement of a gene by a copy of another from the same species and in the same gene tree. Including this event in reconciliations has never been attempted, because it changes both the solutions and the methods used to construct reconciliations. Standard algorithms based on dynamic programming become ineffective. We propose here a novel mathematical framework including gene conversion as an evolutionary event in gene tree/species tree reconciliation. We describe a randomized algorithm that gives, in polynomial running time, a reconciliation minimizing the number of duplications, losses, and conversions. We show that the space of reconciliations includes an analog of the Last Common Ancestor reconciliation, but is not limited to it. Our algorithm outputs any optimal reconciliation with non-null probability. We argue that this study opens a wide research avenue on including gene conversion in reconciliation, which can be important for biology.
[ { "created": "Mon, 27 Mar 2017 06:50:30 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2017 14:05:13 GMT", "version": "v2" } ]
2017-10-05
[ [ "Hasic", "Damir", "" ], [ "Tannier", "Eric", "" ] ]
Gene tree/species tree reconciliation is a recent decisive advance in phylogenetic methods, accounting for the possible differences between gene histories and species histories. Reconciliation consists in explaining these differences by gene-scale events such as duplication, loss, and transfer, which translates mathematically into a mapping between gene tree nodes and species tree nodes or branches. Gene conversion is a very frequent biological event, which results in the replacement of a gene by a copy of another from the same species and in the same gene tree. Including this event in reconciliations has never been attempted, because it changes both the solutions and the methods used to construct reconciliations. Standard algorithms based on dynamic programming become ineffective. We propose here a novel mathematical framework including gene conversion as an evolutionary event in gene tree/species tree reconciliation. We describe a randomized algorithm that gives, in polynomial running time, a reconciliation minimizing the number of duplications, losses, and conversions. We show that the space of reconciliations includes an analog of the Last Common Ancestor reconciliation, but is not limited to it. Our algorithm outputs any optimal reconciliation with non-null probability. We argue that this study opens a wide research avenue on including gene conversion in reconciliation, which can be important for biology.
1811.02114
Ingoo Lee
Ingoo Lee, Jongsoo Keum, Hojung Nam
DeepConv-DTI: Prediction of drug-target interactions via deep learning with convolution on protein sequences
26 pages, 7 figures
null
10.1371/journal.pcbi.1007129
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of drug-target interactions (DTIs) plays a key role in drug discovery. The high cost and labor-intensive nature of in vitro and in vivo experiments have highlighted the importance of in silico DTI prediction approaches. In several computational models, conventional protein descriptors have been shown to be insufficiently informative to predict DTIs accurately. Thus, in this study, we employ a convolutional neural network (CNN) on raw protein sequences to capture local residue patterns participating in DTIs. With a CNN on protein sequences, our model performs better than previous protein descriptor-based models. In addition, our model performs better than the previous deep learning model for massive prediction of DTIs. By examining the pooled convolution results, we found that our model can detect the binding sites of proteins for DTIs. In conclusion, our prediction model, by detecting local residue patterns of target proteins, successfully enriches the protein features of a raw protein sequence, yielding better prediction results than previous approaches.
[ { "created": "Tue, 6 Nov 2018 01:39:02 GMT", "version": "v1" } ]
2019-09-11
[ [ "Lee", "Ingoo", "" ], [ "Keum", "Jongsoo", "" ], [ "Nam", "Hojung", "" ] ]
Identification of drug-target interactions (DTIs) plays a key role in drug discovery. The high cost and labor-intensive nature of in vitro and in vivo experiments have highlighted the importance of in silico DTI prediction approaches. In several computational models, conventional protein descriptors have been shown to be insufficiently informative to predict DTIs accurately. Thus, in this study, we employ a convolutional neural network (CNN) on raw protein sequences to capture local residue patterns participating in DTIs. With a CNN on protein sequences, our model performs better than previous protein descriptor-based models. In addition, our model performs better than the previous deep learning model for massive prediction of DTIs. By examining the pooled convolution results, we found that our model can detect the binding sites of proteins for DTIs. In conclusion, our prediction model, by detecting local residue patterns of target proteins, successfully enriches the protein features of a raw protein sequence, yielding better prediction results than previous approaches.
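The core idea of the record above — sliding convolution filters over a one-hot encoded protein sequence to detect local residue patterns, then max-pooling — can be sketched in plain NumPy. This is not the authors' DeepConv-DTI implementation; the filter bank, motif, and sequence here are illustrative assumptions.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def one_hot(seq):
    """Encode a protein sequence as a (len, 20) one-hot matrix."""
    idx = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    mat = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        mat[pos, idx[aa]] = 1.0
    return mat

def conv1d_maxpool(x, filters):
    """Valid 1-D convolution along the sequence axis, then global max-pool.

    x       : (L, 20) one-hot sequence
    filters : (n_filters, window, 20) filter bank
    returns : (n_filters,) pooled activations
    """
    n_f, w, _ = filters.shape
    L = x.shape[0]
    out = np.empty((n_f, L - w + 1))
    for f in range(n_f):
        for i in range(L - w + 1):
            out[f, i] = np.sum(x[i:i + w] * filters[f])
    return out.max(axis=1)  # global max-pooling keeps the strongest local match

# Toy usage: hand-craft one filter tuned to the dipeptide motif "KR";
# it fires (activation 2.0) on a sequence containing that motif.
rng = np.random.default_rng(0)
filters = rng.normal(0, 0.1, size=(4, 2, 20))
kr = np.zeros((2, 20))
kr[0, AMINO_ACIDS.index("K")] = 1.0
kr[1, AMINO_ACIDS.index("R")] = 1.0
filters[0] = kr
pooled = conv1d_maxpool(one_hot("MAKRGT"), filters)
```

Examining which sequence window maximizes each filter's activation is the same mechanism the paper uses to locate candidate binding sites.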
1610.05066
Eduardo Henrique Colombo
E.H. Colombo, C. Anteneodo
Population dynamics in an intermittent refuge
null
Phys. Rev. E 94, 042413 (2016)
10.1103/PhysRevE.94.042413
null
q-bio.PE cond-mat.stat-mech nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population dynamics is constrained by the environment, which needs to obey certain conditions to support population growth. We consider a standard model for the evolution of a single-species population density that includes reproduction, competition for resources, and spatial spreading, while being subject to an external harmful effect. The habitat is spatially heterogeneous, containing a refuge where the population can be protected. Temporal variability is introduced by the intermittent character of the refuge. This scenario applies to a wide range of situations, from a lab setting where bacteria can be protected by a blinking mask from ultraviolet radiation, to large-scale ecosystems, like a marine reserve with seasonal fishing prohibitions. Using analytical and numerical tools, we investigate the asymptotic behavior of the total population as a function of the size and characteristic time scales of the refuge. We obtain expressions for the minimal size required for population survival in the slow and fast time scale limits.
[ { "created": "Mon, 17 Oct 2016 12:11:55 GMT", "version": "v1" } ]
2016-10-18
[ [ "Colombo", "E. H.", "" ], [ "Anteneodo", "C.", "" ] ]
Population dynamics is constrained by the environment, which needs to obey certain conditions to support population growth. We consider a standard model for the evolution of a single-species population density that includes reproduction, competition for resources, and spatial spreading, while being subject to an external harmful effect. The habitat is spatially heterogeneous, containing a refuge where the population can be protected. Temporal variability is introduced by the intermittent character of the refuge. This scenario applies to a wide range of situations, from a lab setting where bacteria can be protected by a blinking mask from ultraviolet radiation, to large-scale ecosystems, like a marine reserve with seasonal fishing prohibitions. Using analytical and numerical tools, we investigate the asymptotic behavior of the total population as a function of the size and characteristic time scales of the refuge. We obtain expressions for the minimal size required for population survival in the slow and fast time scale limits.
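The class of models the record above describes can be exercised with a minimal explicit finite-difference sketch. The specific form assumed here — a Fisher-KPP-type equation u_t = D u_xx + a u - b u^2 - c(x,t) u, where the harmful term c is switched off inside a central refuge only while the refuge is "on" — is an illustrative stand-in, not the authors' exact parameterization.

```python
import numpy as np

def simulate(L=10.0, refuge=2.0, T=20.0, dx=0.1, dt=0.001,
             D=1.0, a=1.0, b=1.0, c=2.0, period=1.0, duty=0.5):
    """Explicit Euler for u_t = D u_xx + a u - b u^2 - c(x,t) u.

    The harmful term c acts everywhere except inside a central refuge of
    width `refuge`, and even there only while the refuge is 'on'
    (a fraction `duty` of each blinking period). Periodic boundaries.
    """
    x = np.arange(0, L, dx)
    inside = np.abs(x - L / 2) < refuge / 2
    u = np.full_like(x, 0.5)
    for n in range(int(T / dt)):
        on = (n * dt) % period < duty * period   # refuge currently protective?
        harm = np.where(inside & on, 0.0, c)     # no harm inside an active refuge
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u + dt * (D * lap + a * u - b * u**2 - harm * u)
        u = np.clip(u, 0.0, None)                # density stays non-negative
    return x, u

x, u = simulate()
```

Sweeping `refuge` and `period` in such a sketch is the numerical counterpart of the paper's question: how large and how steadily "on" must the refuge be for the total population to survive.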
1902.02321
Irina Moreira S
Pedro Matos-Filipe, Ant\'onio J. Preto, Panagiotis I. Koukos, Joana Mour\~ao, Alexandre M.J.J. Bonvin, Irina S. Moreira
MENSADB: A Thorough Structural Analysis of Membrane Protein Dimers
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Membrane Proteins (MPs) account for around 15-39% of the human proteome and assume a critical role in a vast set of cellular and physiological mechanisms, including molecular transport, nutrient uptake, toxin and waste product clearance, respiration, and signaling. While roughly 60% of all FDA-approved drugs target MPs, there is a shortage of structural and biochemical data on them, mainly owing to their localization in the lipid bilayer. We present here the MEmbrane protein dimer Novel Structure Analyser database (MENSAdb), a real-time web application exposing a broad array of fundamental features of MP surfaces and their interfacial regions. In particular, we present conservation, four distinctive Accessible Solvent Area (ASA) descriptors, average and environment-specific B-factors, intermolecular contacts at 2.5 and 4.0 angstrom distance cutoffs, salt bridges, hydrogen bonds, hydrophobic interactions, pi-pi interactions, T-stacking and cation-pi interactions. Additionally, users can closely inspect differences in values between three distinctive residue classes: i) non-surface, ii) surface and non-interfacial, and iii) interfacial. The database is freely available at www.moreiralab.com/resources/mensadb.
[ { "created": "Wed, 6 Feb 2019 18:35:47 GMT", "version": "v1" } ]
2019-02-07
[ [ "Matos-Filipe", "Pedro", "" ], [ "Preto", "António J.", "" ], [ "Koukos", "Panagiotis I.", "" ], [ "Mourão", "Joana", "" ], [ "Bonvin", "Alexandre M. J. J.", "" ], [ "Moreira", "Irina S.", "" ] ]
Membrane Proteins (MPs) account for around 15-39% of the human proteome and assume a critical role in a vast set of cellular and physiological mechanisms, including molecular transport, nutrient uptake, toxin and waste product clearance, respiration, and signaling. While roughly 60% of all FDA-approved drugs target MPs, there is a shortage of structural and biochemical data on them, mainly owing to their localization in the lipid bilayer. We present here the MEmbrane protein dimer Novel Structure Analyser database (MENSAdb), a real-time web application exposing a broad array of fundamental features of MP surfaces and their interfacial regions. In particular, we present conservation, four distinctive Accessible Solvent Area (ASA) descriptors, average and environment-specific B-factors, intermolecular contacts at 2.5 and 4.0 angstrom distance cutoffs, salt bridges, hydrogen bonds, hydrophobic interactions, pi-pi interactions, T-stacking and cation-pi interactions. Additionally, users can closely inspect differences in values between three distinctive residue classes: i) non-surface, ii) surface and non-interfacial, and iii) interfacial. The database is freely available at www.moreiralab.com/resources/mensadb.
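One of the simplest descriptors the record above lists — intermolecular contacts at fixed distance cutoffs — amounts to counting atom pairs across the dimer interface. A minimal sketch with toy coordinates follows; it is not MENSAdb's pipeline, just the underlying distance-cutoff computation.

```python
import numpy as np

def contacts(coords_a, coords_b, cutoff):
    """Count intermolecular atom pairs within `cutoff` angstroms.

    coords_a, coords_b : (n, 3) and (m, 3) atomic coordinates of the two
    monomers of a dimer. Broadcasting builds the full n x m distance matrix.
    """
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return int((dist <= cutoff).sum())

# Toy coordinates: two atoms of monomer A, two of monomer B, on one axis.
a = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
b = np.array([[2.0, 0.0, 0.0], [9.0, 0.0, 0.0]])
n_close = contacts(a, b, 2.5)   # only the pair at distance 2.0 qualifies
n_loose = contacts(a, b, 4.0)   # also counts the pairs at distances 3.0 and 4.0
```

Running both cutoffs over the same pair of monomers reproduces the "2.5 and 4.0 angstrom" two-tier contact count in miniature.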
0912.2905
Jie Xu
Jie Xu and Daniel Attinger
Drop on demand in a microfluidic chip
null
Xu, J. and D. Attinger, Drop on demand in a microfluidic chip. Journal of Micromechanics and Microengineering, 2008. 18: p. 065020
10.1088/0960-1317/18/6/065020
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we introduce the novel technique of in-chip drop on demand, which consists in dispensing picoliter to nanoliter drops on demand directly in the liquid-filled channels of a polymer microfluidic chip, at frequencies up to 2.5 kHz and with precise volume control. The technique involves a PDMS chip with one or several microliter-size chambers driven by piezoelectric actuators. Individual aqueous microdrops are dispensed from the chamber to a main transport channel filled with an immiscible fluid, in a process analogous to atmospheric drop on demand dispensing. In this article, the drop formation process is characterized with respect to critical dispense parameters such as the shape and duration of the driving pulse, and the size of both the fluid chamber and the nozzle. Several features of the in-chip drop on demand technique with direct relevance to lab on a chip applications are presented and discussed, such as the precise control of the dispensed volume, the ability to merge drops of different reagents and the ability to move a drop from the shooting area of one nozzle to another for multi-step reactions. The possibility to drive the microfluidic chip with inexpensive audio electronics instead of research-grade equipment is also examined and verified. Finally, we show that the same piezoelectric technique can be used to generate a single gas bubble on demand in a microfluidic chip.
[ { "created": "Tue, 15 Dec 2009 14:26:57 GMT", "version": "v1" } ]
2009-12-16
[ [ "Xu", "Jie", "" ], [ "Attinger", "Daniel", "" ] ]
In this work, we introduce the novel technique of in-chip drop on demand, which consists in dispensing picoliter to nanoliter drops on demand directly in the liquid-filled channels of a polymer microfluidic chip, at frequencies up to 2.5 kHz and with precise volume control. The technique involves a PDMS chip with one or several microliter-size chambers driven by piezoelectric actuators. Individual aqueous microdrops are dispensed from the chamber to a main transport channel filled with an immiscible fluid, in a process analogous to atmospheric drop on demand dispensing. In this article, the drop formation process is characterized with respect to critical dispense parameters such as the shape and duration of the driving pulse, and the size of both the fluid chamber and the nozzle. Several features of the in-chip drop on demand technique with direct relevance to lab on a chip applications are presented and discussed, such as the precise control of the dispensed volume, the ability to merge drops of different reagents and the ability to move a drop from the shooting area of one nozzle to another for multi-step reactions. The possibility to drive the microfluidic chip with inexpensive audio electronics instead of research-grade equipment is also examined and verified. Finally, we show that the same piezoelectric technique can be used to generate a single gas bubble on demand in a microfluidic chip.
2401.13180
Michael B\"orsch
Ivan Perez, Anke Krueger, Joerg Wrachtrup, Fedor Jelezko, Michael B\"orsch
Single NV in nanodiamond for quantum sensing of protein dynamics in an ABEL trap
14 pages, 5 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Enzymes are cellular protein machines using a variety of conformational changes to power fast biochemical catalysis. Our goal is to exploit the single-spin properties of the luminescent NV (nitrogen-vacancy) center in nanodiamonds to reveal the dynamics of an active enzyme complex at physiological conditions with the highest spatio-temporal resolution. Specifically attached to the membrane enzyme FoF1-ATP synthase, the NV sensor will report the adenosine triphosphate (ATP)-driven full rotation of Fo motor subunits in ten consecutive 36{\deg} steps. Conformational dynamics are monitored using either a double electron-electron resonance scheme or NV- magnetometry with optical readout or using NV- relaxometry with a superparamagnetic nanoparticle as the second marker attached to the same enzyme. First, we show how all photophysical parameters like individual size, charge, brightness, spectral range of fluorescence and fluorescence lifetime can be determined for the NV- center in a single nanodiamond held in aqueous solution by a confocal anti-Brownian electrokinetic trap (ABEL trap). Stable photon count rates of individual nanodiamonds and the absence of blinking allow for observation times of single nanodiamonds in solution exceeding hundreds of seconds. For the proposed quantum sensing of nanometer-sized distance changes within an active enzyme, we show that local magnetic field fluctuations can be detected all-optically by analyzing fluorescence lifetime changes of the NV- center in each nanodiamond in solution.
[ { "created": "Wed, 24 Jan 2024 01:56:28 GMT", "version": "v1" } ]
2024-01-25
[ [ "Perez", "Ivan", "" ], [ "Krueger", "Anke", "" ], [ "Wrachtrup", "Joerg", "" ], [ "Jelezko", "Fedor", "" ], [ "Börsch", "Michael", "" ] ]
Enzymes are cellular protein machines using a variety of conformational changes to power fast biochemical catalysis. Our goal is to exploit the single-spin properties of the luminescent NV (nitrogen-vacancy) center in nanodiamonds to reveal the dynamics of an active enzyme complex at physiological conditions with the highest spatio-temporal resolution. Specifically attached to the membrane enzyme FoF1-ATP synthase, the NV sensor will report the adenosine triphosphate (ATP)-driven full rotation of Fo motor subunits in ten consecutive 36{\deg} steps. Conformational dynamics are monitored using either a double electron-electron resonance scheme or NV- magnetometry with optical readout or using NV- relaxometry with a superparamagnetic nanoparticle as the second marker attached to the same enzyme. First, we show how all photophysical parameters like individual size, charge, brightness, spectral range of fluorescence and fluorescence lifetime can be determined for the NV- center in a single nanodiamond held in aqueous solution by a confocal anti-Brownian electrokinetic trap (ABEL trap). Stable photon count rates of individual nanodiamonds and the absence of blinking allow for observation times of single nanodiamonds in solution exceeding hundreds of seconds. For the proposed quantum sensing of nanometer-sized distance changes within an active enzyme, we show that local magnetic field fluctuations can be detected all-optically by analyzing fluorescence lifetime changes of the NV- center in each nanodiamond in solution.
1907.07524
Jorge Vila
Osvaldo A. Martin and Jorge A. Vila
The Marginal Stability of Proteins: How Jiggling and Wiggling of Atoms are Connected to Neutral Evolution
two pages, no figures; perspectives
null
null
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we propose that the upper bound on the marginal stability of proteins (7.4 kcal/mol) is a universal property that extends to macromolecular complexes and is not affected by molecular changes such as mutations and Post-Translational Modifications. Its existence is, essentially, a consequence of the Anfinsen thermodynamic hypothesis rather than a result of an evolutionary process. This result enables us to conjecture that neutral evolution should also be, with respect to protein stability, a universal phenomenon.
[ { "created": "Wed, 17 Jul 2019 13:49:06 GMT", "version": "v1" }, { "created": "Mon, 23 Sep 2019 20:19:32 GMT", "version": "v2" }, { "created": "Thu, 26 Sep 2019 10:31:47 GMT", "version": "v3" }, { "created": "Sun, 22 Dec 2019 13:22:19 GMT", "version": "v4" }, { "created": "Thu, 19 Mar 2020 14:08:08 GMT", "version": "v5" } ]
2020-03-20
[ [ "Martin", "Osvaldo A.", "" ], [ "Vila", "Jorge A.", "" ] ]
Here we propose that the upper bound on the marginal stability of proteins (7.4 kcal/mol) is a universal property that extends to macromolecular complexes and is not affected by molecular changes such as mutations and Post-Translational Modifications. Its existence is, essentially, a consequence of the Anfinsen thermodynamic hypothesis rather than a result of an evolutionary process. This result enables us to conjecture that neutral evolution should also be, with respect to protein stability, a universal phenomenon.
2407.08451
Nadav M. Shnerb
David A. Kessler and Nadav M. Shnerb
Sticky extinction states and the egalitarian transition
null
null
null
null
q-bio.PE nlin.AO
http://creativecommons.org/licenses/by/4.0/
Abundance fluctuations caused by environmental stochasticity are proportional to population size. Therefore, populations that reach low density tend to remain in this state and can increase only due to rare sequences of favorable years. This stickiness can affect the composition of a community, as many species become trapped in an almost permanent state of rarity. Here, we present a solution for the species abundance distribution in scenarios where environmental stochasticity is the dominant mechanism. We show that in the absence of processes that impose a lower bound on abundances (such as immigration), the community is dominated, at equilibrium, by a single species. In the presence of immigration, the community undergoes a transition from an egalitarian state, under weak stochasticity, to a very skewed abundance distribution when stochasticity is strong.
[ { "created": "Thu, 11 Jul 2024 12:45:21 GMT", "version": "v1" } ]
2024-07-12
[ [ "Kessler", "David A.", "" ], [ "Shnerb", "Nadav M.", "" ] ]
Abundance fluctuations caused by environmental stochasticity are proportional to population size. Therefore, populations that reach low density tend to remain in this state and can increase only due to rare sequences of favorable years. This stickiness can affect the composition of a community, as many species become trapped in an almost permanent state of rarity. Here, we present a solution for the species abundance distribution in scenarios where environmental stochasticity is the dominant mechanism. We show that in the absence of processes that impose a lower bound on abundances (such as immigration), the community is dominated, at equilibrium, by a single species. In the presence of immigration, the community undergoes a transition from an egalitarian state, under weak stochasticity, to a very skewed abundance distribution when stochasticity is strong.
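The mechanism described in the record above — abundance fluctuations proportional to population size, a "sticky" low-density state, and an immigration floor that prevents trapping — can be illustrated with a minimal Euler-Maruyama simulation. The logistic form and all parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def trajectory(steps=5000, x0=0.5, K=1.0, sigma=0.5, mu=0.0,
               m=0.0, dt=0.01, seed=1):
    """Logistic growth with environmental (multiplicative) noise and immigration m.

    dx = x * (r_t - x/K) dt + m dt, with a randomly fluctuating growth rate
    r_t. Because the noise term scales with x, fluctuations shrink as the
    population declines, making low-density states 'sticky'.
    """
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(steps)
    for t in range(steps):
        # Euler-Maruyama: the 1/sqrt(dt) scaling gives sqrt(dt)-sized kicks.
        r = mu + sigma * rng.standard_normal() / np.sqrt(dt)
        x = x + dt * (x * (r - x / K) + m)
        x = max(x, 0.0)          # abundance cannot go negative
        out[t] = x
    return out

no_imm = trajectory(m=0.0)       # no lower bound on abundance
with_imm = trajectory(m=0.01)    # small immigration floor
```

Comparing the long-run histograms of `no_imm` and `with_imm` across noise strengths is the simulation analog of the egalitarian-to-skewed transition the paper analyzes.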
1208.5476
Ralph Brinks
Ralph Brinks
On characteristics of an ordinary differential equation and a related inverse problem in epidemiology
8 pages, 1 figure
null
null
null
q-bio.QM q-bio.PE stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we examine the properties of a recently described ordinary differential equation that relates the age-specific prevalence of a chronic disease to the incidence and the mortality rates of diseased and healthy persons. The equation has been used to estimate the incidence from prevalence data, which is an inverse problem. The ill-posedness of this problem is also proven.
[ { "created": "Mon, 27 Aug 2012 19:54:32 GMT", "version": "v1" } ]
2012-08-28
[ [ "Brinks", "Ralph", "" ] ]
In this work we examine the properties of a recently described ordinary differential equation that relates the age-specific prevalence of a chronic disease to the incidence and the mortality rates of diseased and healthy persons. The equation has been used to estimate the incidence from prevalence data, which is an inverse problem. The ill-posedness of this problem is also proven.
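The forward/inverse pair in the record above can be sketched numerically. The ODE form assumed here, p'(a) = (1 - p)(i(a) - p (m1(a) - m0(a))) with i the incidence and m0, m1 the mortality rates of the healthy and diseased, follows the form commonly attributed to this illness-death relation; the rate functions are illustrative.

```python
import numpy as np

def prevalence(i, m0, m1, ages, p0=0.0):
    """Forward Euler for p'(a) = (1 - p) * (i(a) - p * (m1(a) - m0(a)))."""
    p = np.empty_like(ages)
    p[0] = p0
    for k in range(len(ages) - 1):
        da = ages[k + 1] - ages[k]
        a = ages[k]
        p[k + 1] = p[k] + da * (1 - p[k]) * (i(a) - p[k] * (m1(a) - m0(a)))
    return p

def incidence_from_prevalence(p, m0, m1, ages):
    """Invert the ODE pointwise: i(a) = p'/(1 - p) + p * (m1 - m0).

    Differentiating noisy prevalence data is what makes this inverse
    problem ill-posed in practice.
    """
    dp = np.gradient(p, ages)
    return dp / (1 - p) + p * (m1(ages) - m0(ages))

# Illustrative rate functions (assumptions, not fitted to any data).
ages = np.linspace(0, 80, 801)
i_true = lambda a: 0.01 * (a / 80)            # incidence rising with age
m0 = lambda a: 0.001 * np.exp(a / 40)         # mortality of the healthy
m1 = lambda a: 0.002 * np.exp(a / 40)         # mortality of the diseased
p = prevalence(i_true, m0, m1, ages)
i_rec = incidence_from_prevalence(p, m0, m1, ages)
```

On clean prevalence the round trip recovers the incidence closely; adding even small noise to `p` before inverting makes `i_rec` blow up, which is the ill-posedness the paper proves.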
1908.03306
Amitesh Jayaraman
Shengnan Lyu, Christian W\"ulker, Yuqing Pan, Amitesh S. Jayaraman, Jianhao Zheng, Yilin Cai and Gregory S. Chirikjian
Cross-Modal Fusion Between Data in SAXS and Cryo-EM for Biomolecular Structure Determination
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cryo-Electron Microscopy (cryo-EM) has become an extremely powerful method for resolving structural details of large biomolecular complexes. However, challenging problems in single-particle methods remain open because of (1) the low signal-to-noise ratio in EM; and (2) the potential anisotropy and lack of coverage of projection directions relative to the body-fixed coordinate system for some complexes. Whereas (1) is usually addressed by class averaging (and increasingly due to rapid advances in microscope and sensor technology), (2) is an artifact of the mechanics of interaction of biomolecular complexes and the vitrification process. In the absence of tilt series, (2) remains a problem, which is addressed here by supplementing EM data with Small-Angle X-Ray Scattering (SAXS). Whereas SAXS is of relatively low resolution and contains much lower information content than EM, we show that it is nevertheless possible to use SAXS to fill in blind spots in EM in difficult cases where the range of projection directions is limited.
[ { "created": "Fri, 9 Aug 2019 04:28:50 GMT", "version": "v1" } ]
2019-08-12
[ [ "Lyu", "Shengnan", "" ], [ "Wülker", "Christian", "" ], [ "Pan", "Yuqing", "" ], [ "Jayaraman", "Amitesh S.", "" ], [ "Zheng", "Jianhao", "" ], [ "Cai", "Yilin", "" ], [ "Chirikjian", "Gregory S.", "" ] ]
Cryo-Electron Microscopy (cryo-EM) has become an extremely powerful method for resolving structural details of large biomolecular complexes. However, challenging problems in single-particle methods remain open because of (1) the low signal-to-noise ratio in EM; and (2) the potential anisotropy and lack of coverage of projection directions relative to the body-fixed coordinate system for some complexes. Whereas (1) is usually addressed by class averaging (and increasingly due to rapid advances in microscope and sensor technology), (2) is an artifact of the mechanics of interaction of biomolecular complexes and the vitrification process. In the absence of tilt series, (2) remains a problem, which is addressed here by supplementing EM data with Small-Angle X-Ray Scattering (SAXS). Whereas SAXS is of relatively low resolution and contains much lower information content than EM, we show that it is nevertheless possible to use SAXS to fill in blind spots in EM in difficult cases where the range of projection directions is limited.
1005.5704
Steffen Rulands
S. Rulands, T. Reichenbach and E. Frey
Three-fold way to extinction in populations of cyclically competing species
null
S Rulands et al J. Stat. Mech. (2011) L01003
10.1088/1742-5468/2011/01/L01003
null
q-bio.PE cond-mat.stat-mech physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species extinction occurs regularly and unavoidably in ecological systems. The time scales for extinction can vary broadly and inform on the ecosystem's stability. We study the spatio-temporal extinction dynamics of a paradigmatic population model in which three species exhibit cyclic competition. The cyclic dynamics reflects the non-equilibrium nature of the species interactions. While previous work focuses on the coarsening process as a mechanism that drives the system to extinction, we find that, unexpectedly, the dynamics leading to extinction are much richer. We observe three different types of dynamics: in addition to coarsening, in the evolutionarily relevant limit of large times, oscillating traveling waves and heteroclinic orbits play a dominant role. The weight of the different processes depends on the degree of mixing and the system size. By analytical arguments and extensive numerical simulations we provide a full characterization of the scenarios leading to extinction in one of the most surprising models of ecology.
[ { "created": "Mon, 31 May 2010 16:35:25 GMT", "version": "v1" }, { "created": "Mon, 18 Oct 2010 09:32:39 GMT", "version": "v2" }, { "created": "Sun, 26 Dec 2010 10:14:37 GMT", "version": "v3" } ]
2011-03-02
[ [ "Rulands", "S.", "" ], [ "Reichenbach", "T.", "" ], [ "Frey", "E.", "" ] ]
Species extinction occurs regularly and unavoidably in ecological systems. The time scales for extinction can vary broadly and inform on the ecosystem's stability. We study the spatio-temporal extinction dynamics of a paradigmatic population model in which three species exhibit cyclic competition. The cyclic dynamics reflects the non-equilibrium nature of the species interactions. While previous work focuses on the coarsening process as a mechanism that drives the system to extinction, we find that, unexpectedly, the dynamics leading to extinction are much richer. We observe three different types of dynamics: in addition to coarsening, in the evolutionarily relevant limit of large times, oscillating traveling waves and heteroclinic orbits play a dominant role. The weight of the different processes depends on the degree of mixing and the system size. By analytical arguments and extensive numerical simulations we provide a full characterization of the scenarios leading to extinction in one of the most surprising models of ecology.
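Cyclic (rock-paper-scissors) competition of three species on a lattice, the setting of the record above, can be sketched with a minimal stochastic simulation. The invasion-only update rule and rates below are illustrative and simpler than the authors' model; they merely show the cyclic dominance structure from which coarsening domains and spiral waves emerge.

```python
import numpy as np

def rps_lattice(L=30, steps=40000, seed=2):
    """Cyclic competition on an L x L lattice with periodic boundaries.

    States 0, 1, 2 are the three species; at each step a random site
    invades a random neighbor it dominates cyclically
    (0 beats 1, 1 beats 2, 2 beats 0).
    """
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 3, size=(L, L))
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        di, dj = nbrs[rng.integers(0, 4)]
        ni, nj = (i + di) % L, (j + dj) % L
        # (s - t) % 3 == 2 encodes the cyclic dominance s beats t.
        if (grid[i, j] - grid[ni, nj]) % 3 == 2:
            grid[ni, nj] = grid[i, j]
    counts = np.bincount(grid.ravel(), minlength=3)
    return grid, counts

grid, counts = rps_lattice()
```

Tracking `counts` over time, for various lattice sizes and added mixing (long-range swaps), is the kind of numerical experiment behind the paper's classification of routes to extinction: once one species dies out, its prey sweeps the lattice and a second extinction follows.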
2110.02883
Johanna Senk
Johanna Senk, Birgit Kriener, Mikael Djurfeldt, Nicole Voges, Han-Jia Jiang, Lisa Sch\"uttler, Gabriele Gramelsberger, Markus Diesmann, Hans E. Plesser, Sacha J. van Albada
Connectivity Concepts in Neuronal Network Modeling
null
PLoS Comput Biol 18(9): e1010086 (2022)
10.1371/journal.pcbi.1010086
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent, and one reason is a lack of readily applicable standards and tools for model description. Our work aims both to advance complete and concise descriptions of network connectivity and to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
[ { "created": "Wed, 6 Oct 2021 16:09:42 GMT", "version": "v1" }, { "created": "Sat, 11 Jun 2022 17:26:25 GMT", "version": "v2" }, { "created": "Wed, 15 Jun 2022 08:43:01 GMT", "version": "v3" } ]
2022-09-16
[ [ "Senk", "Johanna", "" ], [ "Kriener", "Birgit", "" ], [ "Djurfeldt", "Mikael", "" ], [ "Voges", "Nicole", "" ], [ "Jiang", "Han-Jia", "" ], [ "Schüttler", "Lisa", "" ], [ "Gramelsberger", "Gabriele", "" ], [ "Diesmann", "Markus", "" ], [ "Plesser", "Hans E.", "" ], [ "van Albada", "Sacha J.", "" ] ]
Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent, and one reason is a lack of readily applicable standards and tools for model description. Our work aims both to advance complete and concise descriptions of network connectivity and to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
q-bio/0610044
Miloje Rakocevic M.
Miloje M. Rakocevic
Genetic Code as a Harmonic System
26 pages, 11 tables, 10 figures
null
null
null
q-bio.OT
null
In a certain way, this paper presents a continuation of a previous one which discussed the harmonic structure of the genetic code (Rakocevic, 2004). Several new harmonic structures presented in this paper, through their specific unity and coherence, together with those presented previously (Rakocevic, 2004), show that it makes sense to understand the genetic code as a set of several different harmonic structures. Here, harmonicity itself denotes a specific unity and coherence of the physico-chemical properties of amino acid molecules and of the number of atoms and/or nucleons in the molecules themselves (in the form of typical balances). A specific Gauss arithmetical algorithm occupies the central position among all these structures, and it corresponds to the patterns of the number of atoms within the side chains of amino acid molecules in the following sense: G+V = 11; P+I = 21; S+T+L+A+G = 31; D+E+M+C+P = 41; K+R+Q+N+V = 61; F+Y+W+H+I = 71; (L+M+Q+W) + (A+C+N+H) = 81; (S+D+K+F) + (T+E+R+Y) = 91; (F+L+M+S+P) = (T+A+Y+H+I) = (Q+N+K+D+V) = (E+C+W+R+G) = 51. Bearing in mind all these regularities, it makes sense to speak of the genetic code as a harmonic system. On the other hand, such an order provides new evidence supporting the hypothesis established in the previous paper (Rakocevic, 2004) that the genetic code has been complete from the very beginning and as such was the condition for the origin and evolution of life.
[ { "created": "Tue, 24 Oct 2006 13:23:29 GMT", "version": "v1" } ]
2007-05-23
[ [ "Rakocevic", "Miloje M.", "" ] ]
In a certain way, this paper presents a continuation of a previous one which discussed the harmonic structure of the genetic code (Rakocevic, 2004). Several new harmonic structures presented in this paper, through their specific unity and coherence, together with those presented previously (Rakocevic, 2004), show that it makes sense to understand the genetic code as a set of several different harmonic structures. Here, harmonicity itself denotes a specific unity and coherence of the physico-chemical properties of amino acid molecules and of the number of atoms and/or nucleons in the molecules themselves (in the form of typical balances). A specific Gauss arithmetical algorithm occupies the central position among all these structures, and it corresponds to the patterns of the number of atoms within the side chains of amino acid molecules in the following sense: G+V = 11; P+I = 21; S+T+L+A+G = 31; D+E+M+C+P = 41; K+R+Q+N+V = 61; F+Y+W+H+I = 71; (L+M+Q+W) + (A+C+N+H) = 81; (S+D+K+F) + (T+E+R+Y) = 91; (F+L+M+S+P) = (T+A+Y+H+I) = (Q+N+K+D+V) = (E+C+W+R+G) = 51. Bearing in mind all these regularities, it makes sense to speak of the genetic code as a harmonic system. On the other hand, such an order provides new evidence supporting the hypothesis established in the previous paper (Rakocevic, 2004) that the genetic code has been complete from the very beginning and as such was the condition for the origin and evolution of life.
1501.01567
Stanly Steinberg Prof
Flor A. Espinoza, Stanly L. Steinberg
A New Characterization of Fine Scale Diffusion on the Cell Membrane
18 pages, 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use a large single particle tracking data set to analyze the short time and small spatial scale motion of quantum dots labeling proteins in cell membranes. Our analysis focuses on the jumps, which are the changes in the position of the quantum dots between frames in a movie of their motion. Previously we have shown that the directions of the jumps are uniformly distributed and the jump lengths can be characterized by a double power law distribution. Here we show that the jumps over a small number of time steps can be described by scalings of a {\em single} double power law distribution. This provides additional strong evidence that the double power law accurately describes the fine scale motion, that it is a novel stable distribution for the motion, and that an earlier result, namely that the motion can be modeled as diffusion in a space of fractional dimension of roughly 3/2, is correct. The form of the power law distribution quantifies the excess of short jumps in the data and provides an accurate characterization of the fine scale diffusion; in fact, this distribution gives an accurate description of the jump lengths up to a few hundred nanometers. Our results complement the usual mean squared displacement analysis used to study diffusion at larger scales, where the proteins are more likely to interact strongly with larger membrane structures.
[ { "created": "Wed, 7 Jan 2015 17:24:26 GMT", "version": "v1" } ]
2015-01-08
[ [ "Espinoza", "Flor A.", "" ], [ "Steinberg", "Stanly L.", "" ] ]
We use a large single particle tracking data set to analyze the short time and small spatial scale motion of quantum dots labeling proteins in cell membranes. Our analysis focuses on the jumps, which are the changes in the position of the quantum dots between frames in a movie of their motion. Previously we have shown that the directions of the jumps are uniformly distributed and the jump lengths can be characterized by a double power law distribution. Here we show that the jumps over a small number of time steps can be described by scalings of a {\em single} double power law distribution. This provides additional strong evidence that the double power law accurately describes the fine scale motion, that it is a novel stable distribution for the motion, and that an earlier result, namely that the motion can be modeled as diffusion in a space of fractional dimension of roughly 3/2, is correct. The form of the power law distribution quantifies the excess of short jumps in the data and provides an accurate characterization of the fine scale diffusion; in fact, this distribution gives an accurate description of the jump lengths up to a few hundred nanometers. Our results complement the usual mean squared displacement analysis used to study diffusion at larger scales, where the proteins are more likely to interact strongly with larger membrane structures.
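The jump-based analysis described in the record above can be sketched as follows. This is an illustrative reconstruction on synthetic Brownian data, not the paper's code or its quantum-dot data set; the track, frame count, and step distribution are all hypothetical:

```python
import math
import random

def jump_lengths(track):
    """Lengths of the displacements ("jumps") between consecutive frames."""
    return [math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(track, track[1:])]

def msd(track, lag):
    """Mean squared displacement at a given frame lag."""
    disp2 = [(track[i + lag][0] - track[i][0]) ** 2 +
             (track[i + lag][1] - track[i][1]) ** 2
             for i in range(len(track) - lag)]
    return sum(disp2) / len(disp2)

# Synthetic 2D Brownian track standing in for a single quantum-dot trajectory.
random.seed(0)
track = [(0.0, 0.0)]
for _ in range(1000):
    x, y = track[-1]
    track.append((x + random.gauss(0, 1), y + random.gauss(0, 1)))
```

A histogram of `jump_lengths(track)` is what the double power law would be fitted to, while `msd` gives the larger-scale summary that the paper's fine-scale analysis complements.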
1409.3481
Sandip Datta
Sandip Datta and Brian Seed
Influence of Multiplicative Stochastic Variation on Translational Elongation Rates
14 pages, 10 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experiments have shown that stochastic effects exerted at the level of translation contribute a substantial portion of the variation in abundance of proteins expressed at moderate to high levels. This study analyzes translational noise arising from fluctuations in residue-specific elongation rates. The resulting variation has multiplicative components that lead individual protein abundances in a population to exhibit approximately log-normal behavior. The high variability inherent in the process leads to parameter variation that has the features of a type of noise in biological systems that has been characterized as extrinsic. Elongation rate variation offers an accounting for a major component of extrinsic noise, and the analysis provided here highlights a probability distribution that is a natural extension of the Poisson and has broad applicability to many types of multiplicative noise processes.
[ { "created": "Thu, 11 Sep 2014 15:42:30 GMT", "version": "v1" }, { "created": "Mon, 15 Sep 2014 00:20:07 GMT", "version": "v2" } ]
2014-09-16
[ [ "Datta", "Sandip", "" ], [ "Seed", "Brian", "" ] ]
Recent experiments have shown that stochastic effects exerted at the level of translation contribute a substantial portion of the variation in abundance of proteins expressed at moderate to high levels. This study analyzes translational noise arising from fluctuations in residue-specific elongation rates. The resulting variation has multiplicative components that lead individual protein abundances in a population to exhibit approximately log-normal behavior. The high variability inherent in the process leads to parameter variation that has the features of a type of noise in biological systems that has been characterized as extrinsic. Elongation rate variation offers an accounting for a major component of extrinsic noise, and the analysis provided here highlights a probability distribution that is a natural extension of the Poisson and has broad applicability to many types of multiplicative noise processes.
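The multiplicative-variation argument in the record above (many independent per-residue rate factors compounding into an approximately log-normal abundance) can be illustrated with a toy simulation; the factor distribution and step count here are hypothetical and not taken from the paper:

```python
import math
import random

random.seed(1)

def multiplicative_abundance(n_steps=50):
    """Abundance built from many independent multiplicative factors.
    By the central limit theorem applied to the log-factors, log(abundance)
    is approximately normal, i.e. the abundance is approximately log-normal."""
    a = 1.0
    for _ in range(n_steps):
        a *= random.uniform(0.8, 1.25)   # hypothetical per-step rate variation
    return a

logs = [math.log(multiplicative_abundance()) for _ in range(5000)]
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
```

A log-normal signature visible in the simulated population is that the mean abundance exceeds the median, reflecting the right-skew that additive (Poisson-like) noise models do not produce.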
1604.04654
Arianna Bianchi
Arianna Bianchi, Kevin J. Painter, Jonathan A. Sherratt
Spatio-temporal Models of Lymphangiogenesis in Wound Healing
29 pages, 9 Figures, 6 Tables (39 figure files in total)
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several studies suggest that one possible cause of impaired wound healing is failed or insufficient lymphangiogenesis, that is, the formation of new lymphatic capillaries. Although many mathematical models have been developed to describe the formation of blood capillaries (angiogenesis), very few have been proposed for the regeneration of the lymphatic network. Lymphangiogenesis is a markedly different process from angiogenesis, occurring at different times and in response to different chemical stimuli. Two main hypotheses have been proposed: 1) lymphatic capillaries sprout from existing interrupted ones at the edge of the wound in analogy to the blood angiogenesis case; 2) lymphatic endothelial cells first pool in the wound region following the lymph flow and then, once sufficiently populated, start to form a network. Here we present two PDE models describing lymphangiogenesis according to these two different hypotheses. Further, we include the effect of advection due to interstitial flow and lymph flow coming from open capillaries. The variables represent different cell densities and growth factor concentrations, and where possible the parameters are estimated from biological data. The models are then solved numerically and the results are compared with the available biological literature.
[ { "created": "Fri, 15 Apr 2016 22:07:45 GMT", "version": "v1" } ]
2016-04-19
[ [ "Bianchi", "Arianna", "" ], [ "Painter", "Kevin J.", "" ], [ "Sherratt", "Jonathan A.", "" ] ]
Several studies suggest that one possible cause of impaired wound healing is failed or insufficient lymphangiogenesis, that is, the formation of new lymphatic capillaries. Although many mathematical models have been developed to describe the formation of blood capillaries (angiogenesis), very few have been proposed for the regeneration of the lymphatic network. Lymphangiogenesis is a markedly different process from angiogenesis, occurring at different times and in response to different chemical stimuli. Two main hypotheses have been proposed: 1) lymphatic capillaries sprout from existing interrupted ones at the edge of the wound in analogy to the blood angiogenesis case; 2) lymphatic endothelial cells first pool in the wound region following the lymph flow and then, once sufficiently populated, start to form a network. Here we present two PDE models describing lymphangiogenesis according to these two different hypotheses. Further, we include the effect of advection due to interstitial flow and lymph flow coming from open capillaries. The variables represent different cell densities and growth factor concentrations, and where possible the parameters are estimated from biological data. The models are then solved numerically and the results are compared with the available biological literature.
q-bio/0509006
Franco Bagnoli
Franco Bagnoli, Pietro Lio', Luca Sguanci
Modeling viral coevolution: HIV multi-clonal persistence and competition dynamics
20 pages, 10 figures
Physica A 366, 333-346 (2006)
10.1016/j.physa.2005.10.055
null
q-bio.PE
null
The coexistence of different viral strains (quasispecies) within the same host is now observed for a growing number of viruses, most notably HIV, Marburg and Ebola, but the conditions for the formation and survival of new strains are not yet understood. We present a model of HIV quasispecies competition that describes viral quasispecies coexistence under different immune system conditions. Our model incorporates both T and B cell responses, and we show that the role of B cells is important and additive to that of T cells. Simulations of coinfection (simultaneous infection) and superinfection (delayed secondary infection) scenarios in the early stages (days) and in the late stages of the infection (years) are in agreement with emerging molecular biology findings. The immune response induces a competition among similar phenotypes, leading to differentiation (quasi-speciation), escape dynamics and complex oscillations of viral strain abundance. We found that the quasispecies dynamics after superinfection or coinfection has time scales of several months and becomes even slower when the immune system response is weak. Our model represents a general framework for studying the speed and distribution of HIV quasispecies during disease progression, vaccination and therapy.
[ { "created": "Wed, 7 Sep 2005 11:11:45 GMT", "version": "v1" } ]
2008-01-20
[ [ "Bagnoli", "Franco", "" ], [ "Lio'", "Pietro", "" ], [ "Sguanci", "Luca", "" ] ]
The coexistence of different viral strains (quasispecies) within the same host is now observed for a growing number of viruses, most notably HIV, Marburg and Ebola, but the conditions for the formation and survival of new strains are not yet understood. We present a model of HIV quasispecies competition that describes viral quasispecies coexistence under different immune system conditions. Our model incorporates both T and B cell responses, and we show that the role of B cells is important and additive to that of T cells. Simulations of coinfection (simultaneous infection) and superinfection (delayed secondary infection) scenarios in the early stages (days) and in the late stages of the infection (years) are in agreement with emerging molecular biology findings. The immune response induces a competition among similar phenotypes, leading to differentiation (quasi-speciation), escape dynamics and complex oscillations of viral strain abundance. We found that the quasispecies dynamics after superinfection or coinfection has time scales of several months and becomes even slower when the immune system response is weak. Our model represents a general framework for studying the speed and distribution of HIV quasispecies during disease progression, vaccination and therapy.
2310.03439
Andreas Tiffeau-Mayer
Andreas Tiffeau-Mayer
Unbiased estimation of sampling variance for Simpson's diversity index
updated manuscript with 11 pages, 9 figures
null
null
null
q-bio.PE cond-mat.stat-mech q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Quantification of measurement uncertainty is crucial for robust scientific inference, yet accurate estimates of this uncertainty remain elusive for ecological measures of diversity. Here, we address this longstanding challenge by deriving a closed-form unbiased estimator for the sampling variance of Simpson's diversity index. In numerical tests the estimator consistently outperforms existing approaches, particularly for applications in which species richness exceeds sample size. We apply the estimator to quantify biodiversity loss in marine ecosystems and to demonstrate ligand-dependent contributions of T cell receptor chains to specificity, illustrating its versatility across fields. The novel estimator provides researchers with a reliable method for comparing diversity between samples, essential for quantifying biodiversity trends and making informed conservation decisions.
[ { "created": "Thu, 5 Oct 2023 10:27:33 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2023 20:55:43 GMT", "version": "v2" }, { "created": "Mon, 26 Feb 2024 11:32:06 GMT", "version": "v3" } ]
2024-02-27
[ [ "Tiffeau-Mayer", "Andreas", "" ] ]
Quantification of measurement uncertainty is crucial for robust scientific inference, yet accurate estimates of this uncertainty remain elusive for ecological measures of diversity. Here, we address this longstanding challenge by deriving a closed-form unbiased estimator for the sampling variance of Simpson's diversity index. In numerical tests the estimator consistently outperforms existing approaches, particularly for applications in which species richness exceeds sample size. We apply the estimator to quantify biodiversity loss in marine ecosystems and to demonstrate ligand-dependent contributions of T cell receptor chains to specificity, illustrating its versatility across fields. The novel estimator provides researchers with a reliable method for comparing diversity between samples, essential for quantifying biodiversity trends and making informed conservation decisions.
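For context on the quantity treated in the record above, here is a minimal sketch of the standard unbiased estimator of Simpson's concentration (sum of squared species proportions) computed from sample counts; the paper's closed-form estimator for the sampling *variance* of the index is not reproduced here:

```python
from collections import Counter

def simpson_concentration(counts):
    """Unbiased estimator of Simpson's concentration sum_i p_i^2 from
    species counts: sum_i n_i (n_i - 1) / (N (N - 1))."""
    n = sum(counts)
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

def simpson_diversity(counts):
    """Simpson's diversity index D = 1 - concentration."""
    return 1.0 - simpson_concentration(counts)

sample = Counter("aabbbbcd")      # toy sample: a:2, b:4, c:1, d:1
counts = list(sample.values())
```

For this toy sample, N = 8 and sum n_i(n_i - 1) = 14, so the concentration estimate is 14/56 = 0.25 and the diversity estimate is 0.75.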
2010.16403
Olga Zolotareva
Olga Zolotareva (1), Reza Nasirigerdeh (1), Julian Matschinske (1), Reihaneh Torkzadehmahani (1), Tobias Frisch (2), Julian Sp\"ath (1), David B. Blumenthal (1), Amir Abbasinejad (1 and 4), Paolo Tieri (3 and 4), Nina K. Wenke (1), Markus List (1), Jan Baumbach (1 and 2) ((1) Chair of Experimental Bioinformatics, TUM School of Life Sciences, Technical University of Munich, Munich, Germany, (2) Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark, (3) CNR National Research Council, IAC Institute for Applied Computing, Rome, Italy, (4) Sapienza University of Rome, Rome, Italy)
Flimma: a federated and privacy-preserving tool for differential gene expression analysis
27 pages, 7 figures
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aggregating transcriptomics data across hospitals can increase sensitivity and robustness of differential expression analyses, yielding deeper clinical insights. As data exchange is often restricted by privacy legislation, meta-analyses are frequently employed to pool local results. However, if class labels are inhomogeneously distributed between cohorts, their accuracy may drop. Flimma (https://exbio.wzw.tum.de/flimma/) addresses this issue by implementing the state-of-the-art workflow limma voom in a privacy-preserving manner, i.e. patient data never leaves its source site. Flimma results are identical to those generated by limma voom on combined datasets even in imbalanced scenarios where meta-analysis approaches fail.
[ { "created": "Fri, 30 Oct 2020 17:50:37 GMT", "version": "v1" }, { "created": "Thu, 5 Nov 2020 17:12:23 GMT", "version": "v2" }, { "created": "Mon, 23 Nov 2020 17:42:07 GMT", "version": "v3" } ]
2020-11-24
[ [ "Zolotareva", "Olga", "", "1 and 4" ], [ "Nasirigerdeh", "Reza", "", "1 and 4" ], [ "Matschinske", "Julian", "", "1 and 4" ], [ "Torkzadehmahani", "Reihaneh", "", "1 and 4" ], [ "Frisch", "Tobias", "", "1 and 4" ], [ "Späth", "Julian", "", "1 and 4" ], [ "Blumenthal", "David B.", "", "1 and 4" ], [ "Abbasinejad", "Amir", "", "1 and 4" ], [ "Tieri", "Paolo", "", "3 nad 4" ], [ "Wenke", "Nina K.", "", "1 and 2" ], [ "List", "Markus", "", "1 and 2" ], [ "Baumbach", "Jan", "", "1 and 2" ] ]
Aggregating transcriptomics data across hospitals can increase sensitivity and robustness of differential expression analyses, yielding deeper clinical insights. As data exchange is often restricted by privacy legislation, meta-analyses are frequently employed to pool local results. However, if class labels are inhomogeneously distributed between cohorts, their accuracy may drop. Flimma (https://exbio.wzw.tum.de/flimma/) addresses this issue by implementing the state-of-the-art workflow limma voom in a privacy-preserving manner, i.e. patient data never leaves its source site. Flimma results are identical to those generated by limma voom on combined datasets even in imbalanced scenarios where meta-analysis approaches fail.
q-bio/0511051
Martin Jacobi N
Martin Nilsson Jacobi and Mats Nordahl
Quasispecies and recombination
null
null
null
null
q-bio.PE q-bio.GN
null
Recombination is introduced into Eigen's theory of quasispecies evolution. A comparison of numerical simulations of the rate equations in the non-recombining and recombining cases shows that recombination has a strong effect on the error threshold and, for a wide range of mutation rates, gives rise to two stable fixed points in the dynamics. This bi-stability results in the existence of two error thresholds. We prove that, under some assumptions on the fitness landscape but for general crossover probability, a fixed point localized about the sequence with superior fitness is globally stable for low mutation rates.
[ { "created": "Wed, 30 Nov 2005 15:13:36 GMT", "version": "v1" } ]
2007-05-23
[ [ "Jacobi", "Martin Nilsson", "" ], [ "Nordahl", "Mats", "" ] ]
Recombination is introduced into Eigen's theory of quasispecies evolution. A comparison of numerical simulations of the rate equations in the non-recombining and recombining cases shows that recombination has a strong effect on the error threshold and, for a wide range of mutation rates, gives rise to two stable fixed points in the dynamics. This bi-stability results in the existence of two error thresholds. We prove that, under some assumptions on the fitness landscape but for general crossover probability, a fixed point localized about the sequence with superior fitness is globally stable for low mutation rates.
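The recombination-free quasispecies rate equations that the record above extends can be sketched on a toy two-genotype landscape; the replication rates, copying fidelity, and step sizes below are hypothetical, and the paper's recombination terms are not reproduced:

```python
def quasispecies_step(x, f, Q, dt=0.01):
    """One Euler step of Eigen's quasispecies equations:
    dx_i/dt = sum_j Q[i][j] * f[j] * x[j] - phi * x[i],
    where phi = sum_j f[j] * x[j] keeps sum(x) normalised to 1."""
    phi = sum(fj * xj for fj, xj in zip(f, x))
    n = len(x)
    return [xi + dt * (sum(Q[i][j] * f[j] * x[j] for j in range(n)) - phi * xi)
            for i, xi in enumerate(x)]

# Toy landscape: a fit master sequence competing with a mutant class.
f = [2.0, 1.0]                 # replication rates (hypothetical)
q = 0.95                       # copying fidelity per replication (hypothetical)
Q = [[q, 1 - q], [1 - q, q]]   # mutation matrix; columns sum to 1
x = [0.5, 0.5]
for _ in range(10000):
    x = quasispecies_step(x, f, Q)
```

With this high fidelity the population settles on a quasispecies localised around the master sequence; lowering `q` past the error threshold delocalises it.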
1809.09620
Shujaat Khan Engr
Shujaat Khan, Imran Naseem, Roberto Togneri, and Mohammed Bennamoun
RAFP-Pred: Robust Prediction of Antifreeze Proteins using Localized Analysis of n-Peptide Compositions
7 pages, 2 figures
"RAFP-Pred: Robust Prediction of Antifreeze Proteins Using Localized Analysis of n-Peptide Compositions," in IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 15, no. 1, pp. 244-250, 1 Jan.-Feb. 2018
10.1109/TCBB.2016.2617337
null
q-bio.BM cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In extreme cold weather, living organisms produce Antifreeze Proteins (AFPs) to counter the otherwise lethal intracellular formation of ice. Structures and sequences of various AFPs exhibit a high degree of heterogeneity; consequently, the prediction of AFPs is considered to be a challenging task. In this research, we propose to handle this arduous manifold learning task using the notion of localized processing. In particular, an AFP sequence is segmented into two sub-segments, each of which is analyzed for amino acid and di-peptide compositions. We propose to use only the most significant features, selected using the concept of information gain (IG), followed by a random forest classification approach. The proposed RAFP-Pred achieved an excellent performance on a number of standard datasets. We report a high Youden's index (sensitivity+specificity-1) value of 0.75 on the standard independent test data set, outperforming AFP-PseAAC, AFP\_PSSM, AFP-Pred and iAFP by margins of 0.05, 0.06, 0.14 and 0.68, respectively. The verification rate on the UniProtKB dataset is found to be 83.19\%, which is substantially superior to the 57.18\% reported for the iAFP method.
[ { "created": "Tue, 25 Sep 2018 05:51:29 GMT", "version": "v1" } ]
2018-09-27
[ [ "Khan", "Shujaat", "" ], [ "Naseem", "Imran", "" ], [ "Togneri", "Roberto", "" ], [ "Bennamoun", "Mohammed", "" ] ]
In extreme cold weather, living organisms produce Antifreeze Proteins (AFPs) to counter the otherwise lethal intracellular formation of ice. Structures and sequences of various AFPs exhibit a high degree of heterogeneity; consequently, the prediction of AFPs is considered to be a challenging task. In this research, we propose to handle this arduous manifold learning task using the notion of localized processing. In particular, an AFP sequence is segmented into two sub-segments, each of which is analyzed for amino acid and di-peptide compositions. We propose to use only the most significant features, selected using the concept of information gain (IG), followed by a random forest classification approach. The proposed RAFP-Pred achieved an excellent performance on a number of standard datasets. We report a high Youden's index (sensitivity+specificity-1) value of 0.75 on the standard independent test data set, outperforming AFP-PseAAC, AFP\_PSSM, AFP-Pred and iAFP by margins of 0.05, 0.06, 0.14 and 0.68, respectively. The verification rate on the UniProtKB dataset is found to be 83.19\%, which is substantially superior to the 57.18\% reported for the iAFP method.
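The information-gain feature-selection step mentioned in the record above can be sketched as follows; the random forest stage is omitted, and the toy features and labels are hypothetical rather than AFP composition data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_x p(x) H(Y | X = x) for a discrete feature."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [y for xv, y in zip(feature, labels) if xv == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: feature A predicts the label perfectly, feature B is noise,
# so ranking by IG keeps A and discards B.
labels = [1, 1, 1, 0, 0, 0]
feat_a = [1, 1, 1, 0, 0, 0]
feat_b = [1, 0, 1, 0, 1, 0]
```

Ranking composition features by `information_gain` and keeping only the top-scoring ones is the kind of pre-filtering step the abstract describes before classification.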
q-bio/0703015
Zhihui Wang
Eun Bo Shim, Yoo Seok Kim, Thomas S. Deisboeck
Analyzing the Dynamic Relationship between Tumor Growth and Angiogenesis in a Two Dimensional Finite Element Model
37 pages, 11 figures, 4 tables
null
null
null
q-bio.TO
null
Employing a novel two-dimensional computational model, we have simulated the feedback between angiogenesis and tumor growth dynamics. Analyzing vessel formation and elongation towards the concentration gradient of the tumor-derived angiogenic basic fibroblast growth factor, bFGF, we assumed that prior to the blood vessels reaching the tumor surface, the resulting pattern of tumor growth is symmetric and circular with a common center point. However, after the vessels reach the tumor surface, we assumed that the growth rate of that particular cancer region is accelerated compared to the tumor surface section that lacks neo-vascularization. Therefore, the resulting asymmetric tumor growth pattern is biased towards the site of the nourishing vessels. The simulation results show over time an increase in vessel density, a decrease in vessel branching length, and an increase in fractality of the vascular branching architecture. Interestingly, over time the fractal dimension displayed a sigmoidal pattern with a reduced rate of increase at earlier and later tumor growth stages due to distinct characteristics in vessel length and density. The finding that, at later stages, higher vascular fractality resulted in a marked increase of tumor slice volume provides further in silico evidence for a functional impact of vascular patterns on cancer growth.
[ { "created": "Tue, 6 Mar 2007 15:46:00 GMT", "version": "v1" } ]
2007-05-23
[ [ "Shim", "Eun Bo", "" ], [ "Kim", "Yoo Seok", "" ], [ "Deisboeck", "Thomas S.", "" ] ]
Employing a novel two-dimensional computational model, we have simulated the feedback between angiogenesis and tumor growth dynamics. Analyzing vessel formation and elongation towards the concentration gradient of the tumor-derived angiogenic basic fibroblast growth factor, bFGF, we assumed that prior to the blood vessels reaching the tumor surface, the resulting pattern of tumor growth is symmetric and circular with a common center point. However, after the vessels reach the tumor surface, we assumed that the growth rate of that particular cancer region is accelerated compared to the tumor surface section that lacks neo-vascularization. Therefore, the resulting asymmetric tumor growth pattern is biased towards the site of the nourishing vessels. The simulation results show over time an increase in vessel density, a decrease in vessel branching length, and an increase in fractality of the vascular branching architecture. Interestingly, over time the fractal dimension displayed a sigmoidal pattern with a reduced rate of increase at earlier and later tumor growth stages due to distinct characteristics in vessel length and density. The finding that, at later stages, higher vascular fractality resulted in a marked increase of tumor slice volume provides further in silico evidence for a functional impact of vascular patterns on cancer growth.
2305.12347
Han Huang
Han Huang, Leilei Sun, Bowen Du, Weifeng Lv
Learning Joint 2D & 3D Diffusion Models for Complete Molecule Generation
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing new molecules is essential for drug discovery and materials science. Recently, deep generative models that aim to model molecule distribution have made promising progress in narrowing down the chemical research space and generating high-fidelity molecules. However, current generative models only focus on modeling either 2D bonding graphs or 3D geometries, which are two complementary descriptors for molecules. The lack of ability to jointly model both limits the improvement of generation quality and further downstream applications. In this paper, we propose a new joint 2D and 3D diffusion model (JODO) that generates complete molecules with atom types, formal charges, bond information, and 3D coordinates. To capture the correlation between molecular graphs and geometries in the diffusion process, we develop a Diffusion Graph Transformer to parameterize the data prediction model that recovers the original data from noisy data. The Diffusion Graph Transformer interacts node and edge representations based on our relational attention mechanism, while simultaneously propagating and updating scalar features and geometric vectors. Our model can also be extended for inverse molecular design targeting single or multiple quantum properties. In our comprehensive evaluation pipeline for unconditional joint generation, experimental results show that JODO remarkably outperforms the baselines on the QM9 and GEOM-Drugs datasets. Furthermore, our model excels in few-step fast sampling, as well as in inverse molecule design and molecular graph generation. Our code is available at https://github.com/GRAPH-0/JODO.
[ { "created": "Sun, 21 May 2023 04:49:53 GMT", "version": "v1" }, { "created": "Sun, 4 Jun 2023 10:09:36 GMT", "version": "v2" } ]
2023-06-06
[ [ "Huang", "Han", "" ], [ "Sun", "Leilei", "" ], [ "Du", "Bowen", "" ], [ "Lv", "Weifeng", "" ] ]
Designing new molecules is essential for drug discovery and materials science. Recently, deep generative models that aim to model molecule distribution have made promising progress in narrowing down the chemical research space and generating high-fidelity molecules. However, current generative models only focus on modeling either 2D bonding graphs or 3D geometries, which are two complementary descriptors for molecules. The lack of ability to jointly model both limits the improvement of generation quality and further downstream applications. In this paper, we propose a new joint 2D and 3D diffusion model (JODO) that generates complete molecules with atom types, formal charges, bond information, and 3D coordinates. To capture the correlation between molecular graphs and geometries in the diffusion process, we develop a Diffusion Graph Transformer to parameterize the data prediction model that recovers the original data from noisy data. The Diffusion Graph Transformer interacts node and edge representations based on our relational attention mechanism, while simultaneously propagating and updating scalar features and geometric vectors. Our model can also be extended for inverse molecular design targeting single or multiple quantum properties. In our comprehensive evaluation pipeline for unconditional joint generation, experimental results show that JODO remarkably outperforms the baselines on the QM9 and GEOM-Drugs datasets. Furthermore, our model excels in few-step fast sampling, as well as in inverse molecule design and molecular graph generation. Our code is available at https://github.com/GRAPH-0/JODO.
1505.03354
Lorenzo Pellis
Lorenzo Pellis, Thomas House, Matt J. Keeling
Exact and approximate moment closures for non-Markovian network epidemics
Main text (45 pages, 11 figures and 3 tables) + supplementary material (12 pages, 10 figures and 1 table). Accepted for publication in Journal of Theoretical Biology on 27th April 2015
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moment-closure techniques are commonly used to generate low-dimensional deterministic models to approximate the average dynamics of stochastic systems on networks. The quality of such closures is usually difficult to assess, and the relationship between model assumptions and closure accuracy is often difficult, if not impossible, to quantify. Here we carefully examine some commonly used moment closures, in particular a new one based on the concept of maximum entropy, for approximating the spread of epidemics on networks by reconstructing the probability distributions over triplets based on those over pairs. We consider various models (SI, SIR, SEIR and Reed-Frost-type) under Markovian and non-Markovian assumptions characterising the latent and infectious periods. We initially study two special networks, namely the open triplet and closed triangle, for which we can obtain analytical results. We then explore numerically the exactness of moment closures for a wide range of larger motifs, thus gaining understanding of the factors that introduce errors in the approximations, in particular the presence of a random duration of the infectious period and the presence of overlapping triangles in a network. We also derive a simpler and more intuitive proof than previously available concerning the known result that pair-based moment closure is exact for the Markovian SIR model on tree-like networks under pure initial conditions. We also extend this result to all infectious models, Markovian and non-Markovian, in which susceptibles escape infection independently from each infected neighbour and for which infectives cannot regain susceptible status, provided the network is tree-like and initial conditions are pure. This work represents a valuable step in deepening understanding of the assumptions behind moment closure approximations and in putting them on a more rigorous mathematical footing.
[ { "created": "Wed, 13 May 2015 12:29:35 GMT", "version": "v1" } ]
2015-05-14
[ [ "Pellis", "Lorenzo", "" ], [ "House", "Thomas", "" ], [ "Keeling", "Matt J.", "" ] ]
Moment-closure techniques are commonly used to generate low-dimensional deterministic models to approximate the average dynamics of stochastic systems on networks. The quality of such closures is usually difficult to assess, and the relationship between model assumptions and closure accuracy is often difficult, if not impossible, to quantify. Here we carefully examine some commonly used moment closures, in particular a new one based on the concept of maximum entropy, for approximating the spread of epidemics on networks by reconstructing the probability distributions over triplets based on those over pairs. We consider various models (SI, SIR, SEIR and Reed-Frost-type) under Markovian and non-Markovian assumptions characterising the latent and infectious periods. We initially study two special networks, namely the open triplet and closed triangle, for which we can obtain analytical results. We then explore numerically the exactness of moment closures for a wide range of larger motifs, thus gaining understanding of the factors that introduce errors in the approximations, in particular the presence of a random duration of the infectious period and the presence of overlapping triangles in a network. We also derive a simpler and more intuitive proof than previously available concerning the known result that pair-based moment closure is exact for the Markovian SIR model on tree-like networks under pure initial conditions. We also extend this result to all infectious models, Markovian and non-Markovian, in which susceptibles escape infection independently from each infected neighbour and for which infectives cannot regain susceptible status, provided the network is tree-like and initial conditions are pure. This work represents a valuable step in deepening understanding of the assumptions behind moment closure approximations and in putting them on a more rigorous mathematical footing.
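The standard pair-based closure discussed in the record above, [ABC] ~= [AB][BC]/[B], can be illustrated numerically on an open triplet L - C - R, where conditional independence of the end nodes given the centre makes the closure exact (mirroring the tree-like exactness results); the probabilities below are hypothetical:

```python
def pair_closure(p_lc, p_cr, p_c):
    """Pair-based moment closure for an open triplet L - C - R:
    P(L, C, R) ~= P(L, C) * P(C, R) / P(C)."""
    return p_lc * p_cr / p_c

# Hypothetical state probabilities with L and R conditionally
# independent given the centre node's state.
p_c_s = 0.6                 # P(centre = S)
p_l_i_given_s = 0.3         # P(left = I | centre = S)
p_r_i_given_s = 0.2         # P(right = I | centre = S)

# Exact triple probability under conditional independence,
# versus the value reconstructed from the two pair marginals.
exact = p_l_i_given_s * p_r_i_given_s * p_c_s
closed = pair_closure(p_l_i_given_s * p_c_s, p_c_s * p_r_i_given_s, p_c_s)
```

On a closed triangle the two end nodes are no longer conditionally independent given the centre, which is exactly where such closures start to accrue the errors the paper quantifies.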
2004.09237
Vincent Noireaux
Aset Khakimzhan, David Garenne, Benjamin I. Tickman, Jason Fontana, James Carothers, Vincent Noireaux
Proofreading mechanism of Class 2 CRISPR-Cas systems
8 figures
null
10.1088/1478-3975/ac091e
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CRISPR systems experience off-target effects that interfere with the ability to accurately perform genetic edits. While empirical models predict off-target effects in specific platforms, there is a gap for a wide-ranging mechanistic model of CRISPR systems based on evidence from the essential processes of target recognition. In this work, we present a first principles model supported by many experiments performed in vivo and in vitro. First, we observe and model the conformational changes and discrete stepped DNA unwinding events of SpCas9-CRISPR target recognition process in a cell-free transcription-translation system using truncated guide RNAs, confirming structural and FRET microscopy experiments. Second, we implement an energy landscape to describe single mismatch effects observed in vivo for spCas9 and Cas12a CRISPR systems. From the mismatch analysis results we uncover that CRISPR class 2 systems maintain their kinetic proofreading mechanism by utilizing intermittent energetic barriers and we show how to leverage the landscape to improve target specificity.
[ { "created": "Mon, 20 Apr 2020 12:34:44 GMT", "version": "v1" } ]
2021-09-30
[ [ "Khakimzhan", "Aset", "" ], [ "Garenne", "David", "" ], [ "Tickman", "Benjamin I.", "" ], [ "Fontana", "Jason", "" ], [ "Carothers", "James", "" ], [ "Noireaux", "Vincent", "" ] ]
CRISPR systems experience off-target effects that interfere with the ability to accurately perform genetic edits. While empirical models predict off-target effects in specific platforms, there is a gap for a wide-ranging mechanistic model of CRISPR systems based on evidence from the essential processes of target recognition. In this work, we present a first principles model supported by many experiments performed in vivo and in vitro. First, we observe and model the conformational changes and discrete stepped DNA unwinding events of SpCas9-CRISPR target recognition process in a cell-free transcription-translation system using truncated guide RNAs, confirming structural and FRET microscopy experiments. Second, we implement an energy landscape to describe single mismatch effects observed in vivo for spCas9 and Cas12a CRISPR systems. From the mismatch analysis results we uncover that CRISPR class 2 systems maintain their kinetic proofreading mechanism by utilizing intermittent energetic barriers and we show how to leverage the landscape to improve target specificity.
1806.06993
Joshua M. Deutsch
Stephen E Martin, Matthew E Brunner, and Joshua M Deutsch
Emergence of Metachronal Waves in Active Microtubule Arrays
5 pages, 5 figures, supplement: 8 pages 3 figures
Phys. Rev. Fluids 4, 103101 (2019)
10.1103/PhysRevFluids.4.103101
null
q-bio.SC cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The physical mechanism behind the spontaneous formation of metachronal waves in microtubule arrays in a low Reynolds number fluid has been of interest for the past several years, yet is still not well understood. We present a model implementing the hydrodynamic coupling hypothesis from first principles, and use this model to simulate kinesin-driven microtubule arrays and observe their emergent behavior. The results of simulations are compared to known experimental observations by Sanchez et al. By varying parameters, we determine regimes in which the metachronal wave phenomenon emerges, and categorize other types of possible microtubule motion outside these regimes.
[ { "created": "Tue, 19 Jun 2018 00:43:24 GMT", "version": "v1" } ]
2019-11-06
[ [ "Martin", "Stephen E", "" ], [ "Brunner", "Matthew E", "" ], [ "Deutsch", "Joshua M", "" ] ]
The physical mechanism behind the spontaneous formation of metachronal waves in microtubule arrays in a low Reynolds number fluid has been of interest for the past several years, yet is still not well understood. We present a model implementing the hydrodynamic coupling hypothesis from first principles, and use this model to simulate kinesin-driven microtubule arrays and observe their emergent behavior. The results of simulations are compared to known experimental observations by Sanchez et al. By varying parameters, we determine regimes in which the metachronal wave phenomenon emerges, and categorize other types of possible microtubule motion outside these regimes.
1103.1791
Chris Adami
Jeffrey Edlund, Nicolas Chaumont, Arend Hintze, Christof Koch, Giulio Tononi, and Christoph Adami
Integrated information increases with fitness in the evolution of animats
27 pages, 8 figures, one supplementary figure. Three supplementary video files available on request. Version commensurate with published text in PLoS Comput. Biol
PLoS Computational Biology 7 (2011) e1002236
10.1371/journal.pcbi.1002236
null
q-bio.PE cs.AI nlin.AO q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent ("animat") evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its "fit" to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing as well as integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data.
[ { "created": "Wed, 9 Mar 2011 14:33:21 GMT", "version": "v1" }, { "created": "Mon, 3 Oct 2011 16:59:02 GMT", "version": "v2" } ]
2011-11-08
[ [ "Edlund", "Jeffrey", "" ], [ "Chaumont", "Nicolas", "" ], [ "Hintze", "Arend", "" ], [ "Koch", "Christof", "" ], [ "Tononi", "Giulio", "" ], [ "Adami", "Christoph", "" ] ]
One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent ("animat") evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its "fit" to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing as well as integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data.
2111.10535
Katsumi Tateno
Hiroki Nakagawa, Katsumi Tateno, Kensuke Takada, Takashi Morie
A Neural Network Model of the Entorhinal Cortex and Hippocampus for Event-order Memory Processing
10 pages, 7 figures. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To solve a navigation task based on experiences, we need a mechanism to associate places with objects and to recall them along the course of action. In a reward-oriented task, if the route to a reward location is simulated in mind after experiencing it once, it might be possible that the reward is gained efficiently. One way to solve it is to incorporate a biologically plausible mechanism. In this study, we propose a neural network that stores a sequence of events associated with a reward. The proposed network recalls the reward location by tracing them in its mind in order. We simulated a virtual mouse that explores a figure-eight maze and recalls the route to the reward location. During the learning period, a sequence of events relating to firing along a passage was temporarily stored in the heteroassociative network, and the sequence of events is consolidated in the synaptic weight matrix when a reward is fed. For retrieval, the sequential activation of conjunctive cue-place cells toward the reward location is internally generated by an impetus input. In the figure-eight maze task, the location of the reward was estimated by mind travel, irrespective of whether the reward is in the counterclockwise or distant clockwise route. The mechanism of efficiently reaching the goal by mind travel in the brain based on experiences is beneficial for mobile service robots that perform autonomous navigation.
[ { "created": "Sat, 20 Nov 2021 07:41:53 GMT", "version": "v1" } ]
2021-11-23
[ [ "Nakagawa", "Hiroki", "" ], [ "Tateno", "Katsumi", "" ], [ "Takada", "Kensuke", "" ], [ "Morie", "Takashi", "" ] ]
To solve a navigation task based on experiences, we need a mechanism to associate places with objects and to recall them along the course of action. In a reward-oriented task, if the route to a reward location is simulated in mind after experiencing it once, it might be possible that the reward is gained efficiently. One way to solve it is to incorporate a biologically plausible mechanism. In this study, we propose a neural network that stores a sequence of events associated with a reward. The proposed network recalls the reward location by tracing them in its mind in order. We simulated a virtual mouse that explores a figure-eight maze and recalls the route to the reward location. During the learning period, a sequence of events relating to firing along a passage was temporarily stored in the heteroassociative network, and the sequence of events is consolidated in the synaptic weight matrix when a reward is fed. For retrieval, the sequential activation of conjunctive cue-place cells toward the reward location is internally generated by an impetus input. In the figure-eight maze task, the location of the reward was estimated by mind travel, irrespective of whether the reward is in the counterclockwise or distant clockwise route. The mechanism of efficiently reaching the goal by mind travel in the brain based on experiences is beneficial for mobile service robots that perform autonomous navigation.
q-bio/0612024
Ellen Baake
Ellen Baake, Inke Herms
Single-crossover dynamics: finite versus infinite populations
21 pages, 4 figures
Bull. Math. Biol. 70 (2008), 603-624
null
null
q-bio.PE math.PR
null
Populations evolving under the joint influence of recombination and resampling (traditionally known as genetic drift) are investigated. First, we summarise and adapt a deterministic approach, as valid for infinite populations, which assumes continuous time and single crossover events. The corresponding nonlinear system of differential equations permits a closed solution, both in terms of the type frequencies and via linkage disequilibria of all orders. To include stochastic effects, we then consider the corresponding finite-population model, the Moran model with single crossovers, and examine it both analytically and by means of simulations. Particular emphasis is on the connection with the deterministic solution. If there is only recombination and every pair of recombined offspring replaces their pair of parents (i.e., there is no resampling), then the {\em expected} type frequencies in the finite population, of arbitrary size, equal the type frequencies in the infinite population. If resampling is included, the stochastic process converges, in the infinite-population limit, to the deterministic dynamics, which turns out to be a good approximation already for populations of moderate size.
[ { "created": "Wed, 13 Dec 2006 10:53:19 GMT", "version": "v1" }, { "created": "Mon, 3 Mar 2008 11:04:38 GMT", "version": "v2" } ]
2009-02-20
[ [ "Baake", "Ellen", "" ], [ "Herms", "Inke", "" ] ]
Populations evolving under the joint influence of recombination and resampling (traditionally known as genetic drift) are investigated. First, we summarise and adapt a deterministic approach, as valid for infinite populations, which assumes continuous time and single crossover events. The corresponding nonlinear system of differential equations permits a closed solution, both in terms of the type frequencies and via linkage disequilibria of all orders. To include stochastic effects, we then consider the corresponding finite-population model, the Moran model with single crossovers, and examine it both analytically and by means of simulations. Particular emphasis is on the connection with the deterministic solution. If there is only recombination and every pair of recombined offspring replaces their pair of parents (i.e., there is no resampling), then the {\em expected} type frequencies in the finite population, of arbitrary size, equal the type frequencies in the infinite population. If resampling is included, the stochastic process converges, in the infinite-population limit, to the deterministic dynamics, which turns out to be a good approximation already for populations of moderate size.
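The linkage-disequilibrium perspective above has a well-known discrete-generation analogue: under recombination alone, disequilibrium between two loci decays geometrically. A toy illustration of that classical formula, shown only for orientation (the paper itself works in continuous time with single crossovers):

```python
def ld_decay(D0, r, t):
    """Linkage disequilibrium after t discrete generations of
    recombination at rate r in an infinite population:
    D(t) = D0 * (1 - r)**t (classical textbook result)."""
    return D0 * (1 - r) ** t

# Maximal initial disequilibrium D0 = 0.25 halves each generation
# when the loci are unlinked (r = 0.5).
after_one = ld_decay(0.25, 0.5, 1)   # -> 0.125
after_ten = ld_decay(0.25, 0.5, 10)  # 0.25 * 0.5**10
```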
1609.09170
Andrew Black
James N. Walker, Joshua V. Ross and Andrew J. Black
Inference of epidemiological parameters from household stratified data
null
null
10.1371/journal.pone.0185910
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters---governing within-household transmission, recovery, and between-household transmission---from data of the day upon which each individual became infectious and the household in which each infection occurred, as would be available from first few hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be an excellent approximation and remains computationally efficient as the amount of data is increased.
[ { "created": "Thu, 29 Sep 2016 01:52:10 GMT", "version": "v1" }, { "created": "Tue, 21 Mar 2017 03:43:57 GMT", "version": "v2" } ]
2018-02-07
[ [ "Walker", "James N.", "" ], [ "Ross", "Joshua V.", "" ], [ "Black", "Andrew J.", "" ] ]
We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters---governing within-household transmission, recovery, and between-household transmission---from data of the day upon which each individual became infectious and the household in which each infection occurred, as would be available from first few hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be an excellent approximation and remains computationally efficient as the amount of data is increased.
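The within-household Markovian SIR dynamics underlying the stochastic households model can be simulated with the standard Gillespie construction. A minimal single-household sketch (between-household transmission and the MCMC inference itself are omitted; parameter values are hypothetical):

```python
import random

def sir_household_final_size(n, beta, gamma, seed=None):
    """Final number recovered in a Markovian SIR epidemic within one
    household of size n, starting from a single infective.  Only the
    embedded jump chain is simulated, which suffices for the final
    size; waiting times and between-household transmission are omitted."""
    rng = random.Random(seed)
    S, I, R = n - 1, 1, 0
    while I > 0:
        inf_rate = beta * S * I    # within-household infection events
        rec_rate = gamma * I       # recovery events
        if rng.random() < inf_rate / (inf_rate + rec_rate):
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return R

final_size = sir_household_final_size(4, beta=1.0, gamma=0.5, seed=1)
```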
1703.03787
Jian-Jun Shu
Jian-Jun Shu
A new integrated symmetrical table for genetic codes
null
BioSystems, Vol. 151, pp. 21-26, 2017
10.1016/j.biosystems.2016.11.004
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Degeneracy is a salient feature of genetic codes, because there are more codons than amino acids. The conventional table for genetic codes suffers from an inability to illustrate the symmetrical nature among genetic base codes. In fact, because the conventional wisdom avoids the question, there is little agreement as to whether the symmetrical nature actually even exists. A better understanding of symmetry and an appreciation for its essential role in the genetic code formation can improve our understanding of nature's coding processes. Thus, it is worth formulating a new integrated symmetrical table for genetic codes, which is presented in this paper. It could be very useful to understand Nobel laureate Crick's wobble hypothesis: how one transfer ribonucleic acid can recognize two or more synonymous codons, which is an unsolved fundamental question in biological science.
[ { "created": "Fri, 10 Mar 2017 18:29:01 GMT", "version": "v1" } ]
2017-03-13
[ [ "Shu", "Jian-Jun", "" ] ]
Degeneracy is a salient feature of genetic codes, because there are more codons than amino acids. The conventional table for genetic codes suffers from an inability to illustrate the symmetrical nature among genetic base codes. In fact, because the conventional wisdom avoids the question, there is little agreement as to whether the symmetrical nature actually even exists. A better understanding of symmetry and an appreciation for its essential role in the genetic code formation can improve our understanding of nature's coding processes. Thus, it is worth formulating a new integrated symmetrical table for genetic codes, which is presented in this paper. It could be very useful to understand Nobel laureate Crick's wobble hypothesis: how one transfer ribonucleic acid can recognize two or more synonymous codons, which is an unsolved fundamental question in biological science.
1410.0350
William DeWitt
William DeWitt, Paul Lindau, Thomas Snyder, Marissa Vignali, Ryan Emerson, Harlan Robins
Replicate immunosequencing as a robust probe of B cell repertoire diversity
6 pages, 3 figures, 1 table
null
10.1371/journal.pone.0160853
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fundamental to quantitative characterization of the B cell receptor repertoire is clonal diversity - the number of distinct somatically recombined receptors present in the repertoire and their relative abundances, defining the search space available for immune response. This study synthesizes flow cytometry and immunosequencing to study memory and naive B cells from the peripheral blood of three adults. A combinatorial experimental design was employed, constituting a sample abundance probe robust to amplification stochasticity, a crucial quantitative advance over previous sequencing studies of diversity. These data are leveraged to interrogate repertoire diversity, motivating an extension of a canonical diversity model in ecology and corpus linguistics. Maximum likelihood diversity estimates are provided for memory and naive B cell repertoires. Both evince domination by rare clones and regimes of power law scaling in abundance. Memory clones have more disparate repertoire abundances than naive clones, and most naive clones undergo no proliferation prior to antigen recognition.
[ { "created": "Wed, 1 Oct 2014 19:59:11 GMT", "version": "v1" } ]
2017-08-30
[ [ "DeWitt", "William", "" ], [ "Lindau", "Paul", "" ], [ "Snyder", "Thomas", "" ], [ "Vignali", "Marissa", "" ], [ "Emerson", "Ryan", "" ], [ "Robins", "Harlan", "" ] ]
Fundamental to quantitative characterization of the B cell receptor repertoire is clonal diversity - the number of distinct somatically recombined receptors present in the repertoire and their relative abundances, defining the search space available for immune response. This study synthesizes flow cytometry and immunosequencing to study memory and naive B cells from the peripheral blood of three adults. A combinatorial experimental design was employed, constituting a sample abundance probe robust to amplification stochasticity, a crucial quantitative advance over previous sequencing studies of diversity. These data are leveraged to interrogate repertoire diversity, motivating an extension of a canonical diversity model in ecology and corpus linguistics. Maximum likelihood diversity estimates are provided for memory and naive B cell repertoires. Both evince domination by rare clones and regimes of power law scaling in abundance. Memory clones have more disparate repertoire abundances than naive clones, and most naive clones undergo no proliferation prior to antigen recognition.
1508.04241
Andrew Magyar
Andrew Magyar
Beta Distribution of Human MTL Neuron Sparsity: A Sparse and Skewed Code
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single unit recordings in the human medial temporal lobe (MTL) have revealed a population of cells with conceptually based, highly selective activity, indicating the presence of a sparse neural code. Building off previous work by the author and J.C. Collins, this paper develops a statistical model for analyzing this data, based on maximum likelihood analysis. The goal is to infer the underlying distribution of neural response probabilities across the population of MTL cells. The response probability, or neuronal sparsity, is defined as the total probability that the neuron produces an above-threshold firing rate during the presentation of a randomly selected stimulus. Applying the method, it is shown that a beta-distributed neuronal sparsity across the cells of the MTL is consistent with the data. The resulting fits reveal a sparse and highly skewed code, with a huge majority of neurons exhibiting extremely low response probabilities, and a smaller minority possessing considerably higher response probabilities. The distributions are closely approximated by a power law at low sparsity values. Strikingly similar skewed distributions have been found in the statistics of place cell activity in rats, suggesting similar underlying coding dynamics between the human MTL and the rat hippocampus.
[ { "created": "Tue, 18 Aug 2015 08:38:24 GMT", "version": "v1" } ]
2015-08-19
[ [ "Magyar", "Andrew", "" ] ]
Single unit recordings in the human medial temporal lobe (MTL) have revealed a population of cells with conceptually based, highly selective activity, indicating the presence of a sparse neural code. Building off previous work by the author and J.C. Collins, this paper develops a statistical model for analyzing this data, based on maximum likelihood analysis. The goal is to infer the underlying distribution of neural response probabilities across the population of MTL cells. The response probability, or neuronal sparsity, is defined as the total probability that the neuron produces an above-threshold firing rate during the presentation of a randomly selected stimulus. Applying the method, it is shown that a beta-distributed neuronal sparsity across the cells of the MTL is consistent with the data. The resulting fits reveal a sparse and highly skewed code, with a huge majority of neurons exhibiting extremely low response probabilities, and a smaller minority possessing considerably higher response probabilities. The distributions are closely approximated by a power law at low sparsity values. Strikingly similar skewed distributions have been found in the statistics of place cell activity in rats, suggesting similar underlying coding dynamics between the human MTL and the rat hippocampus.
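Maximum-likelihood fitting of a beta distribution to response probabilities can be sketched with a coarse grid search over the beta log-likelihood. The sparsity values below are hypothetical, and the authors' actual estimation procedure may differ:

```python
import math

def beta_loglik(a, b, data):
    """Log-likelihood of Beta(a, b) for values in (0, 1)."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (sum((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
                for x in data) - len(data) * log_B)

def fit_beta_grid(data, grid):
    """Maximum-likelihood Beta(a, b) parameters by coarse grid search."""
    return max(((a, b) for a in grid for b in grid),
               key=lambda ab: beta_loglik(ab[0], ab[1], data))

# Hypothetical sparsity values: mostly tiny, one larger (a skewed code).
sparsities = [0.001, 0.002, 0.003, 0.004, 0.01, 0.02, 0.15]
grid = [0.1 * k for k in range(1, 51)]      # a, b in {0.1, ..., 5.0}
a_hat, b_hat = fit_beta_grid(sparsities, grid)
# a_hat < 1 reflects the heavy concentration of mass near zero.
```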
2104.05497
Marco A. Formoso
Marco A. Formoso, Andr\'es Ortiz, Francisco J. Mart\'inez-Murcia, Nicol\'as Gallego-Molina, Juan L. Luque
Modelling Brain Connectivity Networks by Graph Embedding for Dyslexia Diagnosis
null
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by/4.0/
Several methods have been developed to extract information from electroencephalograms (EEG). One of them is Phase-Amplitude Coupling (PAC), a type of Cross-Frequency Coupling (CFC) method that measures the synchronization of phase and amplitude across the different EEG bands and electrodes. This provides information regarding brain areas that are synchronously activated and, eventually, a marker of functional connectivity between these areas. In this work, intra- and inter-electrode PAC is computed, obtaining the relationships among the different electrodes used in EEG. The connectivity information is then treated as a graph in which the nodes are the electrodes and the edge weights are the PAC values between them. These structures are embedded to create a feature vector that can be further used to classify multichannel EEG samples. The proposed method has been applied to classify EEG samples acquired using specific auditory stimuli in a task designed for dyslexia diagnosis from the EEGs of seven-year-old children. The proposed method provides AUC values up to 0.73 and allows selecting the most discriminant electrodes and EEG bands.
[ { "created": "Mon, 12 Apr 2021 14:27:29 GMT", "version": "v1" } ]
2021-04-13
[ [ "Formoso", "Marco A.", "" ], [ "Ortiz", "Andrés", "" ], [ "Martínez-Murcia", "Francisco J.", "" ], [ "Gallego-Molina", "Nicolás", "" ], [ "Luque", "Juan L.", "" ] ]
Several methods have been developed to extract information from electroencephalograms (EEG). One of them is Phase-Amplitude Coupling (PAC), a type of Cross-Frequency Coupling (CFC) method that measures the synchronization of phase and amplitude across the different EEG bands and electrodes. This provides information regarding brain areas that are synchronously activated and, eventually, a marker of functional connectivity between these areas. In this work, intra- and inter-electrode PAC is computed, obtaining the relationships among the different electrodes used in EEG. The connectivity information is then treated as a graph in which the nodes are the electrodes and the edge weights are the PAC values between them. These structures are embedded to create a feature vector that can be further used to classify multichannel EEG samples. The proposed method has been applied to classify EEG samples acquired using specific auditory stimuli in a task designed for dyslexia diagnosis from the EEGs of seven-year-old children. The proposed method provides AUC values up to 0.73 and allows selecting the most discriminant electrodes and EEG bands.
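One common PAC estimator (not necessarily the one used in this paper) is the mean-vector-length measure, which is near zero when high-frequency amplitude is independent of low-frequency phase and grows as the two become locked. A synthetic sketch with analytically constructed phase and amplitude series, avoiding the Hilbert transform:

```python
import math, cmath

def mean_vector_length(phases, amplitudes):
    """Mean-vector-length PAC estimate: |mean(A * exp(i*phi))| divided
    by the mean amplitude.  Near 0 when amplitude is independent of
    phase, larger when amplitude is locked to phase."""
    n = len(phases)
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes))
    return abs(z / n) / (sum(amplitudes) / n)

# Synthetic, analytically constructed example.
n = 1000
phases = [2 * math.pi * k / n for k in range(n)]
coupled = [1.0 + 0.9 * math.cos(p) for p in phases]  # amplitude tracks phase
flat = [1.0] * n                                     # amplitude ignores phase

pac_coupled = mean_vector_length(phases, coupled)    # ~ 0.45
pac_flat = mean_vector_length(phases, flat)          # ~ 0
```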
1011.4840
Alexander Bershadskii
A. Bershadskii
Cluster-scaling, chaotic order and coherence in DNA
extended. arXiv admin note: substantial text overlap with arXiv:1008.1359
Physics Letters A 375 (2011) 3021-3024
10.1016/j.physleta.2011.06.037
null
q-bio.BM nlin.CD q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different numerical mappings of the DNA sequences have been studied using a new cluster-scaling method and well-known spectral methods. It is shown, in particular, that the nucleotide sequences in DNA molecules have robust cluster-scaling properties. These properties are relevant to both types of nucleotide base-pair interactions: hydrogen bonds and stacking interactions. It is shown that taking into account the cluster-scaling properties can help to improve heterogeneous models of the DNA dynamics. It is also shown that a chaotic (deterministic) order, rather than a stochastic randomness, controls the energy minima positions of the stacking interactions in the DNA sequences on large scales. The chaotic order results in a large-scale chaotic coherence between the two complementary DNA-duplex sequences. A competition between this broad-band chaotic coherence and the resonance coherence produced by the genetic code has been briefly discussed. The Arabidopsis plant genome (which is a model plant for genome analysis) and two human genes, BRCA2 and NRXN1, have been considered as examples.
[ { "created": "Mon, 22 Nov 2010 15:13:19 GMT", "version": "v1" }, { "created": "Thu, 16 Dec 2010 16:14:20 GMT", "version": "v2" }, { "created": "Fri, 22 Apr 2011 13:43:37 GMT", "version": "v3" }, { "created": "Mon, 29 Jan 2018 14:00:14 GMT", "version": "v4" } ]
2018-01-31
[ [ "Bershadskii", "A.", "" ] ]
Different numerical mappings of the DNA sequences have been studied using a new cluster-scaling method and well-known spectral methods. It is shown, in particular, that the nucleotide sequences in DNA molecules have robust cluster-scaling properties. These properties are relevant to both types of nucleotide base-pair interactions: hydrogen bonds and stacking interactions. It is shown that taking into account the cluster-scaling properties can help to improve heterogeneous models of the DNA dynamics. It is also shown that a chaotic (deterministic) order, rather than a stochastic randomness, controls the energy minima positions of the stacking interactions in the DNA sequences on large scales. The chaotic order results in a large-scale chaotic coherence between the two complementary DNA-duplex sequences. A competition between this broad-band chaotic coherence and the resonance coherence produced by the genetic code has been briefly discussed. The Arabidopsis plant genome (which is a model plant for genome analysis) and two human genes, BRCA2 and NRXN1, have been considered as examples.
2104.12151
Marcus Aguiar de
Gabriella D. Franco, Flavia M. D. Marquitti, Lucas D. Fernandes, Dan Braha, Marcus A.M. de Aguiar
Shannon information criterion for low-high diversity transition in Moran and Voter models
13 pages, 8 figures
Phys. Rev. E 104, 024315 (2021)
10.1103/PhysRevE.104.024315
null
q-bio.PE nlin.AO nlin.PS
http://creativecommons.org/licenses/by/4.0/
Mutation and drift play opposite roles in genetics. While mutation creates diversity, drift can cause gene variants to disappear, especially when they are rare. In the absence of natural selection and migration, the balance between the drift and mutation in a well-mixed population defines its diversity. The Moran model captures the effects of these two evolutionary forces and has a counterpart in social dynamics, known as the Voter model with external opinion influencers. Two extreme outcomes of the Voter model dynamics are consensus and coexistence of opinions, which correspond to low and high diversity in the Moran model. Here we use a Shannon information-theoretic approach to characterize the smooth transition between the states of consensus and coexistence of opinions in the Voter model. Mapping the Moran into the Voter model we extend the results to the mutation-drift balance and characterize the transition between low and high diversity in finite populations. Describing the population as a network of connected individuals we show that the transition between the two regimes depends on the network topology of the population and on the possible asymmetries in the mutation rates.
[ { "created": "Sun, 25 Apr 2021 13:13:30 GMT", "version": "v1" }, { "created": "Thu, 25 Nov 2021 17:36:03 GMT", "version": "v2" } ]
2021-11-29
[ [ "Franco", "Gabriella D.", "" ], [ "Marquitti", "Flavia M. D.", "" ], [ "Fernandes", "Lucas D.", "" ], [ "Braha", "Dan", "" ], [ "de Aguiar", "Marcus A. M.", "" ] ]
Mutation and drift play opposite roles in genetics. While mutation creates diversity, drift can cause gene variants to disappear, especially when they are rare. In the absence of natural selection and migration, the balance between the drift and mutation in a well-mixed population defines its diversity. The Moran model captures the effects of these two evolutionary forces and has a counterpart in social dynamics, known as the Voter model with external opinion influencers. Two extreme outcomes of the Voter model dynamics are consensus and coexistence of opinions, which correspond to low and high diversity in the Moran model. Here we use a Shannon information-theoretic approach to characterize the smooth transition between the states of consensus and coexistence of opinions in the Voter model. Mapping the Moran into the Voter model we extend the results to the mutation-drift balance and characterize the transition between low and high diversity in finite populations. Describing the population as a network of connected individuals we show that the transition between the two regimes depends on the network topology of the population and on the possible asymmetries in the mutation rates.
1906.06445
Thomas Bolton
Thomas A.W. Bolton, Daniela Z\"oller, C\'esar Caballero-Gaudes, Valeria Kebets, Enrico Glerean, Dimitri Van De Ville
Agito ergo sum: correlates of spatiotemporal motion characteristics during fMRI
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The impact of in-scanner motion on functional magnetic resonance imaging (fMRI) data has a notorious reputation in the neuroimaging community. State-of-the-art guidelines advise to scrub out excessively corrupted frames as assessed by a composite framewise displacement (FD) score, to regress out models of nuisance variables, and to include average FD as a covariate in group-level analyses. Here, we studied individual motion time courses at time points typically retained in fMRI analyses. We observed that even in this set of putatively clean time points, motion exhibited a very clear spatiotemporal structure, so that we could distinguish subjects into four groups of movers with varying characteristics. Then, we showed that this spatiotemporal motion cartography tightly relates to a broad array of anthropometric, behavioral and clinical factors. Convergent results were obtained from two different analytical perspectives: univariate assessment of behavioral differences across mover subgroups unraveled defining markers, while subsequent multivariate analysis broadened the range of involved factors and clarified that multiple motion/behavior modes of covariance overlap in the data. Our results demonstrate that even the smaller episodes of motion typically retained in fMRI analyses carry structured, behaviorally relevant information. They call for further examinations of possible biases in current regression-based motion correction strategies.
[ { "created": "Sat, 15 Jun 2019 01:02:00 GMT", "version": "v1" } ]
2019-06-18
[ [ "Bolton", "Thomas A. W.", "" ], [ "Zöller", "Daniela", "" ], [ "Caballero-Gaudes", "César", "" ], [ "Kebets", "Valeria", "" ], [ "Glerean", "Enrico", "" ], [ "Van De Ville", "Dimitri", "" ] ]
The impact of in-scanner motion on functional magnetic resonance imaging (fMRI) data has a notorious reputation in the neuroimaging community. State-of-the-art guidelines advise to scrub out excessively corrupted frames as assessed by a composite framewise displacement (FD) score, to regress out models of nuisance variables, and to include average FD as a covariate in group-level analyses. Here, we studied individual motion time courses at time points typically retained in fMRI analyses. We observed that even in this set of putatively clean time points, motion exhibited a very clear spatiotemporal structure, so that we could distinguish subjects into four groups of movers with varying characteristics. Then, we showed that this spatiotemporal motion cartography tightly relates to a broad array of anthropometric, behavioral and clinical factors. Convergent results were obtained from two different analytical perspectives: univariate assessment of behavioral differences across mover subgroups unraveled defining markers, while subsequent multivariate analysis broadened the range of involved factors and clarified that multiple motion/behavior modes of covariance overlap in the data. Our results demonstrate that even the smaller episodes of motion typically retained in fMRI analyses carry structured, behaviorally relevant information. They call for further examinations of possible biases in current regression-based motion correction strategies.
1806.03765
Duc Nguyen
Duc Duy Nguyen, Guo-Wei Wei
DG-GL: Differential geometry based geometric learning of molecular datasets
28 pages, 14 figures
null
null
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Despite its great success in various physical modeling, differential geometry (DG) has rarely been devised as a versatile tool for analyzing large, diverse and complex molecular and biomolecular datasets due to the limited understanding of its potential power in dimensionality reduction and its ability to encode essential chemical and biological information in differentiable manifolds. Results: We put forward a differential geometry based geometric learning (DG-GL) hypothesis that the intrinsic physics of three-dimensional (3D) molecular structures lies on a family of low-dimensional manifolds embedded in a high-dimensional data space. We encode crucial chemical, physical and biological information into 2D element interactive manifolds, extracted from a high-dimensional structural data space via a multiscale discrete-to-continuum mapping using differentiable density estimators. Differential geometry apparatuses are utilized to construct element interactive curvatures (Gaussian curvature, mean curvature, minimum curvature and maximum curvature) in analytical forms for certain analytically differentiable density estimators. These low-dimensional differential geometry representations are paired with a robust machine learning algorithm to showcase their descriptive and predictive powers for large, diverse and complex molecular and biomolecular datasets. Extensive numerical experiments are carried out to demonstrate that the proposed DG-GL strategy outperforms other advanced methods in the predictions of drug discovery related protein-ligand binding affinity, drug toxicity, and molecular solvation free energy.
[ { "created": "Mon, 11 Jun 2018 02:06:48 GMT", "version": "v1" } ]
2018-06-12
[ [ "Nguyen", "Duc Duy", "" ], [ "Wei", "Guo-Wei", "" ] ]
Motivation: Despite its great success in various physical modeling, differential geometry (DG) has rarely been devised as a versatile tool for analyzing large, diverse and complex molecular and biomolecular datasets due to the limited understanding of its potential power in dimensionality reduction and its ability to encode essential chemical and biological information in differentiable manifolds. Results: We put forward a differential geometry based geometric learning (DG-GL) hypothesis that the intrinsic physics of three-dimensional (3D) molecular structures lies on a family of low-dimensional manifolds embedded in a high-dimensional data space. We encode crucial chemical, physical and biological information into 2D element interactive manifolds, extracted from a high-dimensional structural data space via a multiscale discrete-to-continuum mapping using differentiable density estimators. Differential geometry apparatuses are utilized to construct element interactive curvatures (Gaussian curvature, mean curvature, minimum curvature and maximum curvature) in analytical forms for certain analytically differentiable density estimators. These low-dimensional differential geometry representations are paired with a robust machine learning algorithm to showcase their descriptive and predictive powers for large, diverse and complex molecular and biomolecular datasets. Extensive numerical experiments are carried out to demonstrate that the proposed DG-GL strategy outperforms other advanced methods in the predictions of drug discovery related protein-ligand binding affinity, drug toxicity, and molecular solvation free energy.
2010.00116
Li Xiao
Li Xiao, Biao Cai, Gang Qu, Julia M. Stephen, Tony W. Wilson, Vince D. Calhoun, and Yu-Ping Wang
Distance Correlation Based Brain Functional Connectivity Estimation and Non-Convex Multi-Task Learning for Developmental fMRI Studies
null
null
null
null
q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resting-state functional magnetic resonance imaging (rs-fMRI)-derived functional connectivity patterns have been extensively utilized to delineate global functional organization of the human brain in health, development, and neuropsychiatric disorders. In this paper, we investigate how functional connectivity in males and females differs in an age prediction framework. We first estimate functional connectivity between regions-of-interest (ROIs) using distance correlation instead of Pearson's correlation. Distance correlation, as a multivariate statistical method, explores spatial relations of voxel-wise time courses within individual ROIs and measures both linear and nonlinear dependence, capturing more complex information of between-ROI interactions. Then, a novel non-convex multi-task learning (NC-MTL) model is proposed to study age-related gender differences in functional connectivity, where age prediction for each gender group is viewed as one task. Specifically, in the proposed NC-MTL model, we introduce a composite regularizer with a combination of non-convex $\ell_{2,1-2}$ and $\ell_{1-2}$ regularization terms for selecting both common and task-specific features. Finally, we validate the proposed NC-MTL model along with distance correlation based functional connectivity on rs-fMRI of the Philadelphia Neurodevelopmental Cohort for predicting ages of both genders. The experimental results demonstrate that the proposed NC-MTL model outperforms other competing MTL models in age prediction, as well as characterizing developmental gender differences in functional connectivity patterns.
[ { "created": "Wed, 30 Sep 2020 21:48:52 GMT", "version": "v1" } ]
2020-10-02
[ [ "Xiao", "Li", "" ], [ "Cai", "Biao", "" ], [ "Qu", "Gang", "" ], [ "Stephen", "Julia M.", "" ], [ "Wilson", "Tony W.", "" ], [ "Calhoun", "Vince D.", "" ], [ "Wang", "Yu-Ping", "" ] ]
Resting-state functional magnetic resonance imaging (rs-fMRI)-derived functional connectivity patterns have been extensively utilized to delineate global functional organization of the human brain in health, development, and neuropsychiatric disorders. In this paper, we investigate how functional connectivity in males and females differs in an age prediction framework. We first estimate functional connectivity between regions-of-interest (ROIs) using distance correlation instead of Pearson's correlation. Distance correlation, as a multivariate statistical method, explores spatial relations of voxel-wise time courses within individual ROIs and measures both linear and nonlinear dependence, capturing more complex information of between-ROI interactions. Then, a novel non-convex multi-task learning (NC-MTL) model is proposed to study age-related gender differences in functional connectivity, where age prediction for each gender group is viewed as one task. Specifically, in the proposed NC-MTL model, we introduce a composite regularizer with a combination of non-convex $\ell_{2,1-2}$ and $\ell_{1-2}$ regularization terms for selecting both common and task-specific features. Finally, we validate the proposed NC-MTL model along with distance correlation based functional connectivity on rs-fMRI of the Philadelphia Neurodevelopmental Cohort for predicting ages of both genders. The experimental results demonstrate that the proposed NC-MTL model outperforms other competing MTL models in age prediction, as well as characterizing developmental gender differences in functional connectivity patterns.
1606.07372
Noah Apthorpe
Noah J. Apthorpe, Alexander J. Riordan, Rob E. Aguilar, Jan Homann, Yi Gu, David W. Tank, H. Sebastian Seung
Automatic Neuron Detection in Calcium Imaging Data Using Convolutional Networks
9 pages, 5 figures, 2 ancillary files; minor changes for camera-ready version. appears in Advances in Neural Information Processing Systems 29 (NIPS 2016)
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium imaging is an important technique for monitoring the activity of thousands of neurons simultaneously. As calcium imaging datasets grow in size, automated detection of individual neurons is becoming important. Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. Accuracy is superior to the popular PCA/ICA method based on precision and recall relative to ground truth annotation by a human expert. These results suggest that convolutional networks are an efficient and flexible tool for the analysis of large-scale calcium imaging data.
[ { "created": "Thu, 23 Jun 2016 16:49:40 GMT", "version": "v1" }, { "created": "Wed, 21 Dec 2016 23:40:08 GMT", "version": "v2" } ]
2016-12-23
[ [ "Apthorpe", "Noah J.", "" ], [ "Riordan", "Alexander J.", "" ], [ "Aguilar", "Rob E.", "" ], [ "Homann", "Jan", "" ], [ "Gu", "Yi", "" ], [ "Tank", "David W.", "" ], [ "Seung", "H. Sebastian", "" ] ]
Calcium imaging is an important technique for monitoring the activity of thousands of neurons simultaneously. As calcium imaging datasets grow in size, automated detection of individual neurons is becoming important. Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. Accuracy is superior to the popular PCA/ICA method based on precision and recall relative to ground truth annotation by a human expert. These results suggest that convolutional networks are an efficient and flexible tool for the analysis of large-scale calcium imaging data.
2308.11486
Nicola Vassena
Nicola Vassena, Peter F. Stadler
Unstable Cores are the source of instability in chemical reaction networks
42 pages
Proc. R. Soc. A. 480. 20230694. (2024)
10.1098/rspa.2023.0694
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biochemical networks, complex dynamical features such as superlinear growth and oscillations are classically considered a consequence of autocatalysis. For the large class of parameter-rich kinetic models, which includes Generalized Mass Action kinetics and Michaelis-Menten kinetics, we show that certain submatrices of the stoichiometric matrix, so-called unstable cores, are sufficient for a reaction network to admit instability and potentially give rise to such complex dynamical behavior. The determinant of the submatrix distinguishes unstable-positive feedbacks, with a single real-positive eigenvalue, and unstable-negative feedbacks without real-positive eigenvalues. Autocatalytic cores turn out to be exactly the unstable-positive feedbacks that are Metzler matrices. Thus there are sources of dynamical instability in chemical networks that are unrelated to autocatalysis. We use such intuition to design non-autocatalytic biochemical networks with superlinear growth and oscillations.
[ { "created": "Tue, 22 Aug 2023 15:06:05 GMT", "version": "v1" }, { "created": "Wed, 20 Sep 2023 10:54:10 GMT", "version": "v2" }, { "created": "Tue, 5 Mar 2024 08:48:45 GMT", "version": "v3" } ]
2024-03-06
[ [ "Vassena", "Nicola", "" ], [ "Stadler", "Peter F.", "" ] ]
In biochemical networks, complex dynamical features such as superlinear growth and oscillations are classically considered a consequence of autocatalysis. For the large class of parameter-rich kinetic models, which includes Generalized Mass Action kinetics and Michaelis-Menten kinetics, we show that certain submatrices of the stoichiometric matrix, so-called unstable cores, are sufficient for a reaction network to admit instability and potentially give rise to such complex dynamical behavior. The determinant of the submatrix distinguishes unstable-positive feedbacks, with a single real-positive eigenvalue, and unstable-negative feedbacks without real-positive eigenvalues. Autocatalytic cores turn out to be exactly the unstable-positive feedbacks that are Metzler matrices. Thus there are sources of dynamical instability in chemical networks that are unrelated to autocatalysis. We use such intuition to design non-autocatalytic biochemical networks with superlinear growth and oscillations.
1406.5799
Alexander Franks
Alexander Franks, Florian Markowetz and Edoardo Airoldi
Estimating cellular pathways from an ensemble of heterogeneous data sources
null
null
null
null
q-bio.MN stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building better models of cellular pathways is one of the major challenges of systems biology and functional genomics. There is a need for methods to build on established expert knowledge and reconcile it with results of high-throughput studies. Moreover, the available data sources are heterogeneous and need to be combined in a way specific for the part of the pathway in which they are most informative. Here, we present a compartment specific strategy to integrate edge, node and path data for the refinement of a network hypothesis. Specifically, we use a local-move Gibbs sampler for refining pathway hypotheses from a compendium of heterogeneous data sources, including novel methodology for integrating protein attributes. We demonstrate the utility of this approach in a case study of the pheromone response MAPK pathway in the yeast S. cerevisiae.
[ { "created": "Mon, 23 Jun 2014 02:45:49 GMT", "version": "v1" } ]
2014-06-25
[ [ "Franks", "Alexander", "" ], [ "Markowetz", "Florian", "" ], [ "Airoldi", "Edoardo", "" ] ]
Building better models of cellular pathways is one of the major challenges of systems biology and functional genomics. There is a need for methods to build on established expert knowledge and reconcile it with results of high-throughput studies. Moreover, the available data sources are heterogeneous and need to be combined in a way specific for the part of the pathway in which they are most informative. Here, we present a compartment specific strategy to integrate edge, node and path data for the refinement of a network hypothesis. Specifically, we use a local-move Gibbs sampler for refining pathway hypotheses from a compendium of heterogeneous data sources, including novel methodology for integrating protein attributes. We demonstrate the utility of this approach in a case study of the pheromone response MAPK pathway in the yeast S. cerevisiae.
2309.06355
Christo Morison
Christo Morison, Dudley Stark and Weini Huang
Single-cell mutational burden distributions in birth-death processes
28 pages, 6 figures
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
Genetic mutations are footprints of tumour growth. While mutation data in bulk samples has been used to infer evolutionary parameters hard to measure in vivo, the advent of single-cell data has led to strong interest in the mutational burden distribution (MBD) among tumour cells. We introduce dynamical matrices and recurrence relations to integrate this single-cell MBD with known statistics, and derive new analytical expressions. Surprisingly, we find that the shape of the MBD is driven by cell lineage-level stochasticity rather than by the distribution of mutations in each cell division.
[ { "created": "Tue, 12 Sep 2023 16:18:23 GMT", "version": "v1" }, { "created": "Wed, 13 Sep 2023 15:23:41 GMT", "version": "v2" }, { "created": "Fri, 7 Jun 2024 15:16:17 GMT", "version": "v3" } ]
2024-06-10
[ [ "Morison", "Christo", "" ], [ "Stark", "Dudley", "" ], [ "Huang", "Weini", "" ] ]
Genetic mutations are footprints of tumour growth. While mutation data in bulk samples has been used to infer evolutionary parameters hard to measure in vivo, the advent of single-cell data has led to strong interest in the mutational burden distribution (MBD) among tumour cells. We introduce dynamical matrices and recurrence relations to integrate this single-cell MBD with known statistics, and derive new analytical expressions. Surprisingly, we find that the shape of the MBD is driven by cell lineage-level stochasticity rather than by the distribution of mutations in each cell division.
1609.02535
Mette Olufsen
Jacob Sturdy, Johnny T Ottesen, Mette S Olufsen
Modeling the differentiation of A- and C-type baroreceptor firing patterns
Keywords: Baroreflex model, mechanosensitivity, A- and C-type afferent baroreceptors, biophysical model, computational model
Journal of Computational Neuroscience, 2016
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The baroreceptor neurons serve as the primary transducers of blood pressure for the autonomic nervous system and are thus critical in enabling the body to respond effectively to changes in blood pressure. These neurons can be separated into two types (A and C) based on the myelination of their axons and their distinct firing patterns elicited in response to specific pressure stimuli. This study has developed a comprehensive model of the afferent baroreceptor discharge built on physiological knowledge of arterial wall mechanics, firing rate responses to controlled pressure stimuli, and ion channel dynamics within the baroreceptor neurons. With this model, we were able to predict firing rates observed in previously published experiments in both A- and C-type neurons. These results were obtained by adjusting model parameters determining the maximal ion-channel conductances. The observed variation in the model parameters are hypothesized to correspond to physiological differences between A- and C-type neurons. In agreement with published experimental observations, our simulations suggest that a twofold lower potassium conductance in C-type neurons is responsible for the observed sustained basal firing, whereas a tenfold higher mechanosensitive conductance is responsible for the greater firing rate observed in A-type neurons. A better understanding of the difference between the two neuron types can potentially be used to gain more insight into the underlying pathophysiology facilitating development of targeted interventions improving baroreflex function in diseased individuals, e.g. in patients with autonomic failure, a syndrome that is difficult to diagnose in terms of its pathophysiology.
[ { "created": "Wed, 7 Sep 2016 18:48:44 GMT", "version": "v1" } ]
2016-09-09
[ [ "Sturdy", "Jacob", "" ], [ "Ottesen", "Johnny T", "" ], [ "Olufsen", "Mette S", "" ] ]
The baroreceptor neurons serve as the primary transducers of blood pressure for the autonomic nervous system and are thus critical in enabling the body to respond effectively to changes in blood pressure. These neurons can be separated into two types (A and C) based on the myelination of their axons and their distinct firing patterns elicited in response to specific pressure stimuli. This study has developed a comprehensive model of the afferent baroreceptor discharge built on physiological knowledge of arterial wall mechanics, firing rate responses to controlled pressure stimuli, and ion channel dynamics within the baroreceptor neurons. With this model, we were able to predict firing rates observed in previously published experiments in both A- and C-type neurons. These results were obtained by adjusting model parameters determining the maximal ion-channel conductances. The observed variation in the model parameters are hypothesized to correspond to physiological differences between A- and C-type neurons. In agreement with published experimental observations, our simulations suggest that a twofold lower potassium conductance in C-type neurons is responsible for the observed sustained basal firing, whereas a tenfold higher mechanosensitive conductance is responsible for the greater firing rate observed in A-type neurons. A better understanding of the difference between the two neuron types can potentially be used to gain more insight into the underlying pathophysiology facilitating development of targeted interventions improving baroreflex function in diseased individuals, e.g. in patients with autonomic failure, a syndrome that is difficult to diagnose in terms of its pathophysiology.
2303.18017
Allison Andrews
Allison E. Andrews, Hugh Dickinson and James P. Hague
Rapid prediction of lab-grown tissue properties using deep learning
26 Pages, 11 Figures
null
null
null
q-bio.TO cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interactions between cells and the extracellular matrix are vital for the self-organisation of tissues. In this paper we present a proof-of-concept for using machine learning tools to predict the role of this mechanobiology in the self-organisation of cell-laden hydrogels grown in tethered moulds. We develop a process for the automated generation of mould designs with and without key symmetries. We create a large training set with $N=6500$ cases by running detailed biophysical simulations of cell-matrix interactions using the contractile network dipole orientation (CONDOR) model for the self-organisation of cellular hydrogels within these moulds. These are used to train an implementation of the \texttt{pix2pix} deep learning model, reserving $740$ cases that were unseen in the training of the neural network for testing and validation. Comparison between the predictions of the machine learning technique and the reserved predictions from the biophysical algorithm show that the machine learning algorithm makes excellent predictions. The machine learning algorithm is significantly faster than the biophysical method, opening the possibility of very high throughput rational design of moulds for pharmaceutical testing, regenerative medicine and fundamental studies of biology. Future extensions for scaffolds and 3D bioprinting will open additional applications.
[ { "created": "Fri, 31 Mar 2023 12:49:37 GMT", "version": "v1" } ]
2023-04-03
[ [ "Andrews", "Allison E.", "" ], [ "Dickinson", "Hugh", "" ], [ "Hague", "James P.", "" ] ]
The interactions between cells and the extracellular matrix are vital for the self-organisation of tissues. In this paper we present a proof-of-concept for using machine learning tools to predict the role of this mechanobiology in the self-organisation of cell-laden hydrogels grown in tethered moulds. We develop a process for the automated generation of mould designs with and without key symmetries. We create a large training set with $N=6500$ cases by running detailed biophysical simulations of cell-matrix interactions using the contractile network dipole orientation (CONDOR) model for the self-organisation of cellular hydrogels within these moulds. These are used to train an implementation of the \texttt{pix2pix} deep learning model, reserving $740$ cases that were unseen in the training of the neural network for testing and validation. Comparison between the predictions of the machine learning technique and the reserved predictions from the biophysical algorithm show that the machine learning algorithm makes excellent predictions. The machine learning algorithm is significantly faster than the biophysical method, opening the possibility of very high throughput rational design of moulds for pharmaceutical testing, regenerative medicine and fundamental studies of biology. Future extensions for scaffolds and 3D bioprinting will open additional applications.