id: stringlengths (9 to 13)
submitter: stringlengths (4 to 48)
authors: stringlengths (4 to 9.62k)
title: stringlengths (4 to 343)
comments: stringlengths (2 to 480)
journal-ref: stringlengths (9 to 309)
doi: stringlengths (12 to 138)
report-no: stringclasses (277 values)
categories: stringlengths (8 to 87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27 to 3.76k)
versions: listlengths (1 to 15)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 147)
abstract: stringlengths (24 to 3.75k)
1511.08825
Purbarun Dhar
Soumya Bhattacharya, Purbarun Dhar, Sarit K Das, Ranjan Ganguly, Thomas Webster and Suprabha Nayar
Colloidal graphite/graphene nanostructures using collagen showing enhanced thermal conductivity
null
International journal of nanomedicine 9, 1287, (2014)
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The time kinetics of the interaction of natural graphite (GR) with collagen (C) to form colloidal graphene (G) nanocomposites were studied at ambient conditions, and it was observed that just one day at ambient conditions is enough to form colloidal graphene directly from graphite using the protein collagen. Neither a controlled temperature and pressure ambiance nor sonication was needed, thereby rendering the process biomimetic. Detailed spectroscopy, X-ray diffraction, electron microscopy as well as fluorescence- and luminescence-assisted characterization of the colloidal dispersions on day one and day seven reveals graphene and collagen interaction and subsequent rearrangement to form an open structure. Detailed confocal microscopy, in the liquid state, reveals the initial attack at the zigzag edges of GR, the enhancement of autofluorescence and finally the opening up of graphitic stacks of GR to form near-transparent G. Atomic force microscopy studies prove the existence of both collagen and graphene and the disruption of periodicity at the atomic level. The thermal conductivity of the colloid shows a 17% enhancement for a G volume fraction of less than 0.00005. The time-variant increase in thermal conductivity provides qualitative evidence for the transient exfoliation of GR to G. The composite reveals interesting properties that could propel it as a future material for advanced bio-applications including therapeutics.
[ { "created": "Fri, 27 Nov 2015 21:39:53 GMT", "version": "v1" } ]
2015-12-01
[ [ "Bhattacharya", "Soumya", "" ], [ "Dhar", "Purbarun", "" ], [ "Das", "Sarit K", "" ], [ "Ganguly", "Ranjan", "" ], [ "Webster", "Thomas", "" ], [ "Nayar", "Suprabha", "" ] ]
The time kinetics of the interaction of natural graphite (GR) with collagen (C) to form colloidal graphene (G) nanocomposites were studied at ambient conditions, and it was observed that just one day at ambient conditions is enough to form colloidal graphene directly from graphite using the protein collagen. Neither a controlled temperature and pressure ambiance nor sonication was needed, thereby rendering the process biomimetic. Detailed spectroscopy, X-ray diffraction, electron microscopy as well as fluorescence- and luminescence-assisted characterization of the colloidal dispersions on day one and day seven reveals graphene and collagen interaction and subsequent rearrangement to form an open structure. Detailed confocal microscopy, in the liquid state, reveals the initial attack at the zigzag edges of GR, the enhancement of autofluorescence and finally the opening up of graphitic stacks of GR to form near-transparent G. Atomic force microscopy studies prove the existence of both collagen and graphene and the disruption of periodicity at the atomic level. The thermal conductivity of the colloid shows a 17% enhancement for a G volume fraction of less than 0.00005. The time-variant increase in thermal conductivity provides qualitative evidence for the transient exfoliation of GR to G. The composite reveals interesting properties that could propel it as a future material for advanced bio-applications including therapeutics.
1703.05917
Mike Steel Prof.
Liane Gabora and Mike Steel
Autocatalytic networks in cognition and the origin of culture
28 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been proposed that cultural evolution was made possible by a cognitive transition brought about by the onset of the capacity for self-triggered recall and rehearsal. Here we develop a novel idea that models of collectively autocatalytic networks, developed for understanding the origin and organization of life, may also help explain the origin of the kind of cognitive structure that makes cultural evolution possible. In our setting, mental representations (for example, memories, concepts, ideas) play the role of 'molecules', and 'reactions' involve the evoking of one representation by another through remindings, associations, and stimuli. In the 'episodic mind', representations are so coarse-grained (encode too few properties) that such reactions are catalyzed only by external stimuli. As cranial capacity increased, representations became more fine-grained (encoded more features), allowing them to act as catalysts, leading to streams of thought. At this point, the mind could combine representations and adapt them to specific needs and situations, and thereby contribute to cultural evolution. In this paper, we propose and study a simple and explicit cognitive model that gives rise naturally to autocatalytic networks, and thereby provides a possible mechanism for the transition from a pre-cultural episodic mind to a mimetic mind.
[ { "created": "Fri, 17 Mar 2017 08:00:15 GMT", "version": "v1" }, { "created": "Thu, 20 Jul 2017 06:24:57 GMT", "version": "v2" }, { "created": "Mon, 7 Aug 2017 21:59:34 GMT", "version": "v3" } ]
2017-08-09
[ [ "Gabora", "Liane", "" ], [ "Steel", "Mike", "" ] ]
It has been proposed that cultural evolution was made possible by a cognitive transition brought about by the onset of the capacity for self-triggered recall and rehearsal. Here we develop a novel idea that models of collectively autocatalytic networks, developed for understanding the origin and organization of life, may also help explain the origin of the kind of cognitive structure that makes cultural evolution possible. In our setting, mental representations (for example, memories, concepts, ideas) play the role of 'molecules', and 'reactions' involve the evoking of one representation by another through remindings, associations, and stimuli. In the 'episodic mind', representations are so coarse-grained (encode too few properties) that such reactions are catalyzed only by external stimuli. As cranial capacity increased, representations became more fine-grained (encoded more features), allowing them to act as catalysts, leading to streams of thought. At this point, the mind could combine representations and adapt them to specific needs and situations, and thereby contribute to cultural evolution. In this paper, we propose and study a simple and explicit cognitive model that gives rise naturally to autocatalytic networks, and thereby provides a possible mechanism for the transition from a pre-cultural episodic mind to a mimetic mind.
1701.07053
Marco Kienzle
Matt K. Broadhurst, Marco Kienzle, John Stewart
Natural and fishing mortalities affecting eastern sea garfish, Hyporhamphus australis, inferred from age-frequency data using hazard functions
null
null
10.1016/j.fishres.2017.10.016
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimates of age-specific natural (M) and fishing (F) mortalities among economically important stocks are required to determine sustainable yields and, ultimately, facilitate effective resource management. Here we used hazard functions to estimate mortality rates for eastern sea garfish, Hyporhamphus australis, a pelagic species that forms the basis of an Australian commercial lampara-net fishery. Data describing annual (2004 to 2015) age frequencies (0-1 to 5-6 years), yield, effort (boat-days), and average weights at age were used to fit various stochastic models to estimate mortality rates by maximum likelihood. The model best supported by the data implied: (i) the escape of fish aged 0-1 years increased from approximately 90 to 97% as a result of a mandated increase in stretched mesh opening from 25 to 28 mm; (ii) full selectivity among older age groups; (iii) a constant M of 0.52 ± 0.06 per year; and (iv) a decline in F between 2004 and 2015. Recruitment and biomass were estimated to vary, but increased during the sampled period. The results reiterate the utility of hazard functions to estimate and partition mortality rates, and support traditional input controls designed to reduce both accounted and unaccounted F.
[ { "created": "Tue, 24 Jan 2017 19:44:54 GMT", "version": "v1" }, { "created": "Thu, 26 Jan 2017 01:46:07 GMT", "version": "v2" } ]
2017-11-03
[ [ "Broadhurst", "Matt K.", "" ], [ "Kienzle", "Marco", "" ], [ "Stewart", "John", "" ] ]
Estimates of age-specific natural (M) and fishing (F) mortalities among economically important stocks are required to determine sustainable yields and, ultimately, facilitate effective resource management. Here we used hazard functions to estimate mortality rates for eastern sea garfish, Hyporhamphus australis, a pelagic species that forms the basis of an Australian commercial lampara-net fishery. Data describing annual (2004 to 2015) age frequencies (0-1 to 5-6 years), yield, effort (boat-days), and average weights at age were used to fit various stochastic models to estimate mortality rates by maximum likelihood. The model best supported by the data implied: (i) the escape of fish aged 0-1 years increased from approximately 90 to 97% as a result of a mandated increase in stretched mesh opening from 25 to 28 mm; (ii) full selectivity among older age groups; (iii) a constant M of 0.52 ± 0.06 per year; and (iv) a decline in F between 2004 and 2015. Recruitment and biomass were estimated to vary, but increased during the sampled period. The results reiterate the utility of hazard functions to estimate and partition mortality rates, and support traditional input controls designed to reduce both accounted and unaccounted F.
1803.08473
Torsten Held
Torsten Held, Daniel Klemmer, Michael L\"assig
Survival of the simplest: the cost of complexity in microbial evolution
*equal contribution
null
10.1038/s41467-019-10413-8
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of microbial and viral organisms often generates clonal interference, a mode of competition between genetic clades within a population. In this paper, we show that interference strongly constrains the genetic and phenotypic complexity of evolving systems. Our analysis uses biophysically grounded evolutionary models for an organism's quantitative molecular phenotypes, such as fold stability and enzymatic activity of genes. We find a generic mode of asexual evolution called phenotypic interference with strong implications for systems biology: it couples the stability and function of individual genes to the population's global speed of evolution. This mode occurs over a wide range of evolutionary parameters appropriate for microbial populations. It generates selection against genome complexity, because the fitness cost of mutations increases faster than linearly with the number of genes. Recombination can generate a distinct mode of sexual evolution that eliminates the superlinear cost. We show that positive selection can drive a transition from asexual to facultative sexual evolution, providing a specific, biophysically grounded scenario for the evolution of sex. In a broader context, our analysis suggests that the systems biology of microbial organisms is strongly intertwined with their mode of evolution.
[ { "created": "Thu, 22 Mar 2018 17:17:09 GMT", "version": "v1" } ]
2019-06-12
[ [ "Held", "Torsten", "" ], [ "Klemmer", "Daniel", "" ], [ "Lässig", "Michael", "" ] ]
The evolution of microbial and viral organisms often generates clonal interference, a mode of competition between genetic clades within a population. In this paper, we show that interference strongly constrains the genetic and phenotypic complexity of evolving systems. Our analysis uses biophysically grounded evolutionary models for an organism's quantitative molecular phenotypes, such as fold stability and enzymatic activity of genes. We find a generic mode of asexual evolution called phenotypic interference with strong implications for systems biology: it couples the stability and function of individual genes to the population's global speed of evolution. This mode occurs over a wide range of evolutionary parameters appropriate for microbial populations. It generates selection against genome complexity, because the fitness cost of mutations increases faster than linearly with the number of genes. Recombination can generate a distinct mode of sexual evolution that eliminates the superlinear cost. We show that positive selection can drive a transition from asexual to facultative sexual evolution, providing a specific, biophysically grounded scenario for the evolution of sex. In a broader context, our analysis suggests that the systems biology of microbial organisms is strongly intertwined with their mode of evolution.
1602.06591
Yong Fuga Li
Yong Fuga Li, Russ B. Altman
Systematic Target Function Annotation of Human Transcription Factors
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcription factors (TFs), the key players in transcriptional regulation, have attracted great experimental attention, yet the functions of most human TFs remain poorly understood. Recent capabilities in genome-wide protein binding profiling have stimulated systematic studies of the hierarchical organization of the human gene regulatory network and the DNA-binding specificity of TFs, shedding light on combinatorial gene regulation. We show here that these data also enable a systematic annotation of the biological functions and functional diversity of TFs. We compiled a human gene regulatory network for 384 TFs covering 146,096 TF-target gene relationships, extracted from over 850 ChIP-seq experiments as well as the literature. By integrating this network of TF-TF and TF-target gene relationships with 3,715 functional concepts from six sources of gene function annotations, we obtained over 9,000 confident functional annotations for 279 TFs. We observe extensive connectivity between transcription factors and Mendelian diseases, GWAS phenotypes, and pharmacogenetic pathways. Further, we show that transcription factors link apparently unrelated functions, even when the two functions do not share common genes. Finally, we analyze the pleiotropic functions of TFs and suggest that an increased number of upstream regulators contributes to the functional pleiotropy of TFs. Our computational approach is complementary to focused experimental studies on TF functions, and the resulting knowledge can guide experimental design for discovering the unknown roles of TFs in human disease and drug response.
[ { "created": "Sun, 21 Feb 2016 22:37:08 GMT", "version": "v1" } ]
2016-02-23
[ [ "Li", "Yong Fuga", "" ], [ "Altman", "Russ B.", "" ] ]
Transcription factors (TFs), the key players in transcriptional regulation, have attracted great experimental attention, yet the functions of most human TFs remain poorly understood. Recent capabilities in genome-wide protein binding profiling have stimulated systematic studies of the hierarchical organization of the human gene regulatory network and the DNA-binding specificity of TFs, shedding light on combinatorial gene regulation. We show here that these data also enable a systematic annotation of the biological functions and functional diversity of TFs. We compiled a human gene regulatory network for 384 TFs covering 146,096 TF-target gene relationships, extracted from over 850 ChIP-seq experiments as well as the literature. By integrating this network of TF-TF and TF-target gene relationships with 3,715 functional concepts from six sources of gene function annotations, we obtained over 9,000 confident functional annotations for 279 TFs. We observe extensive connectivity between transcription factors and Mendelian diseases, GWAS phenotypes, and pharmacogenetic pathways. Further, we show that transcription factors link apparently unrelated functions, even when the two functions do not share common genes. Finally, we analyze the pleiotropic functions of TFs and suggest that an increased number of upstream regulators contributes to the functional pleiotropy of TFs. Our computational approach is complementary to focused experimental studies on TF functions, and the resulting knowledge can guide experimental design for discovering the unknown roles of TFs in human disease and drug response.
0905.1083
Oskar Hallatschek
Oskar Hallatschek, K. S. Korolev
Fisher waves in the strong noise limit
4 pages, 2 figures, update
null
10.1103/PhysRevLett.103.108103
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the effects of strong number fluctuations on traveling waves in the Fisher-Kolmogorov reaction-diffusion system. Our findings are in stark contrast to the commonly used deterministic and weak-noise approximations. We compute the wave velocity in one and two spatial dimensions, for which we find a linear and a square-root dependence of the speed on the particle density. Instead of smooth sigmoidal wave profiles, we observe fronts composed of a few rugged kinks that diffuse, annihilate, and rarely branch; this dynamics leads to power-law tails in the distribution of the front sizes.
[ { "created": "Thu, 7 May 2009 17:50:08 GMT", "version": "v1" }, { "created": "Tue, 19 May 2009 19:11:08 GMT", "version": "v2" } ]
2013-05-29
[ [ "Hallatschek", "Oskar", "" ], [ "Korolev", "K. S.", "" ] ]
We investigate the effects of strong number fluctuations on traveling waves in the Fisher-Kolmogorov reaction-diffusion system. Our findings are in stark contrast to the commonly used deterministic and weak-noise approximations. We compute the wave velocity in one and two spatial dimensions, for which we find a linear and a square-root dependence of the speed on the particle density. Instead of smooth sigmoidal wave profiles, we observe fronts composed of a few rugged kinks that diffuse, annihilate, and rarely branch; this dynamics leads to power-law tails in the distribution of the front sizes.
0804.3643
Van Hoa Nguyen
Van Hoa Nguyen (IRISA), Dominique Lavenier (IRISA)
Fine-grained parallelization of similarity search between protein sequences
null
null
null
RR-6513
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This report presents the implementation of a protein sequence comparison algorithm specifically designed to speed up the time-consuming parts on parallel hardware such as SSE instructions, multicore architectures or graphics boards. Three programs have been developed: PLAST-P, TPLAST-N and PLAST-X. They provide results equivalent to the NCBI BLAST family programs (BLAST-P, TBLAST-N and BLAST-X) with a speed-up factor ranging from 5 to 10.
[ { "created": "Wed, 23 Apr 2008 09:38:45 GMT", "version": "v1" } ]
2008-12-18
[ [ "Nguyen", "Van Hoa", "", "IRISA" ], [ "Lavenier", "Dominique", "", "IRISA" ] ]
This report presents the implementation of a protein sequence comparison algorithm specifically designed to speed up the time-consuming parts on parallel hardware such as SSE instructions, multicore architectures or graphics boards. Three programs have been developed: PLAST-P, TPLAST-N and PLAST-X. They provide results equivalent to the NCBI BLAST family programs (BLAST-P, TBLAST-N and BLAST-X) with a speed-up factor ranging from 5 to 10.
1406.1219
Kevin Emmett
Kevin J. Emmett and Raul Rabadan
Characterizing Scales of Genetic Recombination and Antibiotic Resistance in Pathogenic Bacteria Using Topological Data Analysis
12 pages, 6 figures. To appear in AMT 2014 Special Session on Advanced Methods of Interactive Data Mining for Personalized Medicine
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pathogenic bacteria present a large disease burden on human health. Control of these pathogens is hampered by rampant lateral gene transfer, whereby pathogenic strains may acquire genes conferring resistance to common antibiotics. Here we introduce tools from topological data analysis to characterize the frequency and scale of lateral gene transfer in bacteria, focusing on a set of pathogens of significant public health relevance. As a case study, we examine the spread of antibiotic resistance in Staphylococcus aureus. Finally, we consider the possible role of the human microbiome as a reservoir for antibiotic resistance genes.
[ { "created": "Wed, 4 Jun 2014 21:39:11 GMT", "version": "v1" } ]
2014-06-06
[ [ "Emmett", "Kevin J.", "" ], [ "Rabadan", "Raul", "" ] ]
Pathogenic bacteria present a large disease burden on human health. Control of these pathogens is hampered by rampant lateral gene transfer, whereby pathogenic strains may acquire genes conferring resistance to common antibiotics. Here we introduce tools from topological data analysis to characterize the frequency and scale of lateral gene transfer in bacteria, focusing on a set of pathogens of significant public health relevance. As a case study, we examine the spread of antibiotic resistance in Staphylococcus aureus. Finally, we consider the possible role of the human microbiome as a reservoir for antibiotic resistance genes.
2402.09755
Linfeng Jiang
Linfeng Jiang and Yuan Zhu
Data Smoothing Filling Method based on ScRNA-Seq Data Zero-Value Identification
null
null
null
null
q-bio.GN cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Single-cell RNA sequencing (scRNA-seq) determines RNA expression at single-cell resolution. It provides a powerful tool for studying immunity, regulation, and other life activities of cells. However, due to the limitations of the sequencing technique, scRNA-seq data are sparse and contain missing gene values, i.e., zero values, called dropout. Therefore, it is necessary to impute missing values before analyzing scRNA-seq data. However, existing imputation methods often only focus on the identification of technical zeros or impute all zeros based on cell similarity. This study proposes a new method (SFAG) to reconstruct the gene expression relationship matrix by using graph regularization technology to preserve the high-dimensional manifold information of the data and to mine the relationship between genes and cells in the data, and then uses a method of averaging the clustering results to fill in the identified technical zeros. Experimental results show that SFAG can help improve downstream analysis and reconstruct cell trajectories.
[ { "created": "Thu, 15 Feb 2024 07:08:27 GMT", "version": "v1" } ]
2024-02-16
[ [ "Jiang", "Linfeng", "" ], [ "Zhu", "Yuan", "" ] ]
Single-cell RNA sequencing (scRNA-seq) determines RNA expression at single-cell resolution. It provides a powerful tool for studying immunity, regulation, and other life activities of cells. However, due to the limitations of the sequencing technique, scRNA-seq data are sparse and contain missing gene values, i.e., zero values, called dropout. Therefore, it is necessary to impute missing values before analyzing scRNA-seq data. However, existing imputation methods often only focus on the identification of technical zeros or impute all zeros based on cell similarity. This study proposes a new method (SFAG) to reconstruct the gene expression relationship matrix by using graph regularization technology to preserve the high-dimensional manifold information of the data and to mine the relationship between genes and cells in the data, and then uses a method of averaging the clustering results to fill in the identified technical zeros. Experimental results show that SFAG can help improve downstream analysis and reconstruct cell trajectories.
1611.08751
John Medaglia
John D. Medaglia, Weiyu Huang, Elisabeth A. Karuza, Sharon L. Thompson-Schill, Alejandro Ribeiro, Danielle S. Bassett
Functional Alignment with Anatomical Networks is Associated with Cognitive Flexibility
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive flexibility describes the human ability to switch between modes of mental function to achieve goals. Mental switching is accompanied by transient changes in brain activity, which must occur atop an anatomical architecture that bridges disparate cortical and subcortical regions by underlying white matter tracts. However, an integrated perspective regarding how white matter networks might constrain brain dynamics during cognitive processes requiring flexibility has remained elusive. To address this challenge, we applied emerging tools from graph signal processing to decompose BOLD signals based on diffusion imaging tractography in 28 individuals performing a perceptual task that probed cognitive flexibility. We found that the alignment between functional signals and the architecture of the underlying white matter network was associated with greater cognitive flexibility across subjects. Signals with behaviorally-relevant alignment were concentrated in the basal ganglia and anterior cingulate cortex, consistent with cortico-striatal mechanisms of cognitive flexibility. Importantly, these findings are not accessible to unimodal analyses of functional or anatomical neuroimaging alone. Instead, by taking a generalizable and concise reduction of multimodal neuroimaging data, we uncover an integrated structure-function driver of human behavior.
[ { "created": "Sat, 26 Nov 2016 22:24:29 GMT", "version": "v1" } ]
2016-11-29
[ [ "Medaglia", "John D.", "" ], [ "Huang", "Weiyu", "" ], [ "Karuza", "Elisabeth A.", "" ], [ "Thompson-Schill", "Sharon L.", "" ], [ "Ribeiro", "Alejandro", "" ], [ "Bassett", "Danielle S.", "" ] ]
Cognitive flexibility describes the human ability to switch between modes of mental function to achieve goals. Mental switching is accompanied by transient changes in brain activity, which must occur atop an anatomical architecture that bridges disparate cortical and subcortical regions by underlying white matter tracts. However, an integrated perspective regarding how white matter networks might constrain brain dynamics during cognitive processes requiring flexibility has remained elusive. To address this challenge, we applied emerging tools from graph signal processing to decompose BOLD signals based on diffusion imaging tractography in 28 individuals performing a perceptual task that probed cognitive flexibility. We found that the alignment between functional signals and the architecture of the underlying white matter network was associated with greater cognitive flexibility across subjects. Signals with behaviorally-relevant alignment were concentrated in the basal ganglia and anterior cingulate cortex, consistent with cortico-striatal mechanisms of cognitive flexibility. Importantly, these findings are not accessible to unimodal analyses of functional or anatomical neuroimaging alone. Instead, by taking a generalizable and concise reduction of multimodal neuroimaging data, we uncover an integrated structure-function driver of human behavior.
1001.3638
David Morrison
David A. Morrison
Bayesian posterior probabilities: revisited
7 pages, including 3 Figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Huelsenbeck and Rannala (2004, Systematic Biology 53, 904-913) presented a series of simulations in order to assess the extent to which the bayesian posterior probabilities associated with phylogenetic trees represent the standard frequentist statistical interpretation. They concluded that when the analysis model matches the generating model then the bayesian posterior probabilities are correct, but that the probabilities are much too large when the model is under-specified and slightly too small when the model is over-specified. Here, I take issue with the first conclusion, and instead contend that their simulation data show that the posterior probabilities are still slightly too large even when the models match. Furthermore, I suggest that the data show that the degree of this over-estimation increases as the sequence length increases, and that it might increase as model complexity increases. I also provide some comments on the authors' conclusions concerning whether bootstrap proportions over- or under-estimate the true probabilities.
[ { "created": "Wed, 20 Jan 2010 18:57:11 GMT", "version": "v1" } ]
2010-01-21
[ [ "Morrison", "David A.", "" ] ]
Huelsenbeck and Rannala (2004, Systematic Biology 53, 904-913) presented a series of simulations in order to assess the extent to which the bayesian posterior probabilities associated with phylogenetic trees represent the standard frequentist statistical interpretation. They concluded that when the analysis model matches the generating model then the bayesian posterior probabilities are correct, but that the probabilities are much too large when the model is under-specified and slightly too small when the model is over-specified. Here, I take issue with the first conclusion, and instead contend that their simulation data show that the posterior probabilities are still slightly too large even when the models match. Furthermore, I suggest that the data show that the degree of this over-estimation increases as the sequence length increases, and that it might increase as model complexity increases. I also provide some comments on the authors' conclusions concerning whether bootstrap proportions over- or under-estimate the true probabilities.
q-bio/0701022
Mark Humphries
M. D. Humphries
High level modeling of tonic dopamine mechanisms in striatal neurons
14 pages, 7 figures. Technical report ABRG 3, November, 2003
null
null
null
q-bio.NC
null
The extant versions of many basal ganglia models use a `gating' model of dopamine function which enhances input to D1 receptor units and attenuates input to D2 receptor units. There is evidence that this model is unsatisfactory because (a) there are not sufficient dopaminergic synapses to gate all input and (b) dopamine's main effect is likely to be on the ion-channels contributing to the neuron's membrane potential. Thus, an alternative output function-based model of dopamine's effect is proposed which accounts for the dopamine-mediated changes in ion-channel based currents. Simulation results show that the selection and switching properties of the intrinsic and extended models are retained with the new models. The parameter regimes under which this occurs leads us to predict that an L-type Ca2+ current is likely to be the major determinant of striatal neuron output if the basal ganglia is indeed an action selection mechanism. In addition, the results provide evidence that increasing dopamine can improve a neuron's signal-to-noise ratio.
[ { "created": "Mon, 15 Jan 2007 16:50:12 GMT", "version": "v1" } ]
2007-05-23
[ [ "Humphries", "M. D.", "" ] ]
The extant versions of many basal ganglia models use a `gating' model of dopamine function which enhances input to D1 receptor units and attenuates input to D2 receptor units. There is evidence that this model is unsatisfactory because (a) there are not sufficient dopaminergic synapses to gate all input and (b) dopamine's main effect is likely to be on the ion-channels contributing to the neuron's membrane potential. Thus, an alternative output function-based model of dopamine's effect is proposed which accounts for the dopamine-mediated changes in ion-channel based currents. Simulation results show that the selection and switching properties of the intrinsic and extended models are retained with the new models. The parameter regimes under which this occurs leads us to predict that an L-type Ca2+ current is likely to be the major determinant of striatal neuron output if the basal ganglia is indeed an action selection mechanism. In addition, the results provide evidence that increasing dopamine can improve a neuron's signal-to-noise ratio.
1603.06790
Giovanni Montana
A.W. Chung and M.D. Schirmer and M.L. Krishna and G. Ball and P. Aljabar and A.D. Edwards and G. Montana
Characterising brain network topologies: a dynamic analysis approach using heat kernels
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network theory provides a principled abstraction of the human brain: reducing a complex system into a simpler representation from which to investigate brain organisation. Recent advances in the neuroimaging field are towards representing brain connectivity as a dynamic process in order to gain a deeper understanding of how the brain is organised for information transport. In this paper we propose a network modelling approach based on the heat kernel to capture the process of heat diffusion in complex networks. By applying the heat kernel to structural brain networks, we define new features which quantify change in energy flow. Identifying suitable features which can classify networks between cohorts is useful towards understanding the effect of disease on brain architecture. We demonstrate the discriminative power of heat kernel features in both synthetic and clinical preterm data. By generating an extensive range of synthetic networks with varying density and randomisation, we investigate how heat flows in the networks in relation to changes in network topology. We demonstrate that our proposed features provide a metric of network efficiency and may be indicative of organisational principles commonly associated with, for example, small-world architecture. In addition, we show the potential of these features to characterise and classify between network topologies. We further demonstrate our methodology in a clinical setting by applying it to a large cohort of preterm babies scanned at term equivalent age from which diffusion networks were computed. We show that our heat kernel features are able to successfully predict motor function measured at two years of age (sensitivity, specificity, F-score, accuracy = 75.0, 82.5, 78.6, 82.3%, respectively).
[ { "created": "Tue, 22 Mar 2016 13:53:00 GMT", "version": "v1" } ]
2016-03-23
[ [ "Chung", "A. W.", "" ], [ "Schirmer", "M. D.", "" ], [ "Krishna", "M. L.", "" ], [ "Ball", "G.", "" ], [ "Aljabar", "P.", "" ], [ "Edwards", "A. D.", "" ], [ "Montana", "G.", "" ] ]
Network theory provides a principled abstraction of the human brain: reducing a complex system into a simpler representation from which to investigate brain organisation. Recent advances in the neuroimaging field are towards representing brain connectivity as a dynamic process in order to gain a deeper understanding of how the brain is organised for information transport. In this paper we propose a network modelling approach based on the heat kernel to capture the process of heat diffusion in complex networks. By applying the heat kernel to structural brain networks, we define new features which quantify change in energy flow. Identifying suitable features which can classify networks between cohorts is useful towards understanding the effect of disease on brain architecture. We demonstrate the discriminative power of heat kernel features in both synthetic and clinical preterm data. By generating an extensive range of synthetic networks with varying density and randomisation, we investigate how heat flows in the networks in relation to changes in network topology. We demonstrate that our proposed features provide a metric of network efficiency and may be indicative of organisational principles commonly associated with, for example, small-world architecture. In addition, we show the potential of these features to characterise and classify between network topologies. We further demonstrate our methodology in a clinical setting by applying it to a large cohort of preterm babies scanned at term equivalent age from which diffusion networks were computed. We show that our heat kernel features are able to successfully predict motor function measured at two years of age (sensitivity, specificity, F-score, accuracy = 75.0, 82.5, 78.6, 82.3%, respectively).
0910.2559
Konstantin Klemm
Martin Mann and Konstantin Klemm
Efficient exploration of discrete energy landscapes
7 pages, 5 figures
Physical Review E 83, 011113 (2011)
10.1103/PhysRevE.83.011113
null
q-bio.BM physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many physical and chemical processes, such as folding of biopolymers, are best described as dynamics on large combinatorial energy landscapes. A concise approximate description of dynamics is obtained by partitioning the micro-states of the landscape into macro-states. Since most landscapes of interest are not tractable analytically, the probabilities of transitions between macro-states need to be extracted numerically from the microscopic ones, typically by full enumeration of the state space. Here we propose to approximate transition probabilities by a Markov chain Monte-Carlo method. For landscapes of the number partitioning problem and an RNA switch molecule we show that the method allows for accurate probability estimates with significantly reduced computational cost.
[ { "created": "Wed, 14 Oct 2009 09:07:20 GMT", "version": "v1" }, { "created": "Tue, 20 Apr 2010 14:34:19 GMT", "version": "v2" }, { "created": "Tue, 18 Jan 2011 18:14:09 GMT", "version": "v3" } ]
2011-01-19
[ [ "Mann", "Martin", "" ], [ "Klemm", "Konstantin", "" ] ]
Many physical and chemical processes, such as folding of biopolymers, are best described as dynamics on large combinatorial energy landscapes. A concise approximate description of dynamics is obtained by partitioning the micro-states of the landscape into macro-states. Since most landscapes of interest are not tractable analytically, the probabilities of transitions between macro-states need to be extracted numerically from the microscopic ones, typically by full enumeration of the state space. Here we propose to approximate transition probabilities by a Markov chain Monte-Carlo method. For landscapes of the number partitioning problem and an RNA switch molecule we show that the method allows for accurate probability estimates with significantly reduced computational cost.
2402.18808
Masaru Kuwabara
Masaru Kuwabara, Ryota Kanai
Stimulation technology for brain and nerves, now and future
null
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In individuals afflicted with conditions such as paralysis, the implementation of Brain-Computer-Interface (BCI) has begun to significantly impact their quality of life. Furthermore, even in healthy individuals, the anticipated advantages of brain-to-brain communication and brain-to-computer interaction hold considerable promise for the future. This is attributed to the liberation from bodily constraints and the transcendence of existing limitations inherent in contemporary brain-to-brain communication methods. To actualize a comprehensive BCI, the establishment of bidirectional communication between the brain and the external environment is imperative. While neural input technology spans diverse disciplines and is currently advancing rapidly, a notable absence exists in the form of review papers summarizing the technology from the standpoint of the latest or potential input methods. The challenges encountered encompass the requisite for bidirectional communication to achieve a holistic BCI, as well as obstacles related to information volume, precision, and invasiveness. The review section comprehensively addresses both invasive and non-invasive techniques, incorporating nanotech/micro-device technology and the integration of Artificial Intelligence (AI) in brain stimulation.
[ { "created": "Thu, 29 Feb 2024 02:26:33 GMT", "version": "v1" } ]
2024-03-01
[ [ "Kuwabara", "Masaru", "" ], [ "Kanai", "Ryota", "" ] ]
In individuals afflicted with conditions such as paralysis, the implementation of Brain-Computer-Interface (BCI) has begun to significantly impact their quality of life. Furthermore, even in healthy individuals, the anticipated advantages of brain-to-brain communication and brain-to-computer interaction hold considerable promise for the future. This is attributed to the liberation from bodily constraints and the transcendence of existing limitations inherent in contemporary brain-to-brain communication methods. To actualize a comprehensive BCI, the establishment of bidirectional communication between the brain and the external environment is imperative. While neural input technology spans diverse disciplines and is currently advancing rapidly, a notable absence exists in the form of review papers summarizing the technology from the standpoint of the latest or potential input methods. The challenges encountered encompass the requisite for bidirectional communication to achieve a holistic BCI, as well as obstacles related to information volume, precision, and invasiveness. The review section comprehensively addresses both invasive and non-invasive techniques, incorporating nanotech/micro-device technology and the integration of Artificial Intelligence (AI) in brain stimulation.
1905.08074
Pasquale Ciarletta
Abramo Agosti, Stefano Marchesi, Giorgio Scita, Pasquale Ciarletta
The self-organised, non-equilibrium dynamics of spontaneous cancerous buds
null
null
null
null
q-bio.CB cond-mat.soft nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tissue self-organization into defined and well-controlled three-dimensional structures is essential during development for the generation of organs. A similar, but highly deranged process might also occur during the aberrant growth of cancers, which frequently display a loss of the orderly structures of the tissue of origin, but retain a multicellular organization in the form of spheroids, strands, and buds. The latter structures are often seen when tumor masses switch to an invasive behavior into surrounding tissues. However, the general physical principles governing the self-organized architectures of tumor cell populations remain by and large unclear. In this work, we perform in-vitro experiments to characterize the growth properties of glioblastoma budding emerging from monolayers. Using a theoretical model and numerical tools, we find that such a topological transition is a self-organised, non-equilibrium phenomenon driven by the trade-off of mechanical forces and physical interactions exerted at cell-cell and cell-substrate adhesions. Notably, the unstable disordered states of uncontrolled cellular proliferation macroscopically emerge as complex spatio-temporal patterns that evolve statistically correlated by a universal law.
[ { "created": "Mon, 20 May 2019 13:01:22 GMT", "version": "v1" } ]
2019-05-21
[ [ "Agosti", "Abramo", "" ], [ "Marchesi", "Stefano", "" ], [ "Scita", "Giorgio", "" ], [ "Ciarletta", "Pasquale", "" ] ]
Tissue self-organization into defined and well-controlled three-dimensional structures is essential during development for the generation of organs. A similar, but highly deranged process might also occur during the aberrant growth of cancers, which frequently display a loss of the orderly structures of the tissue of origin, but retain a multicellular organization in the form of spheroids, strands, and buds. The latter structures are often seen when tumor masses switch to an invasive behavior into surrounding tissues. However, the general physical principles governing the self-organized architectures of tumor cell populations remain by and large unclear. In this work, we perform in-vitro experiments to characterize the growth properties of glioblastoma budding emerging from monolayers. Using a theoretical model and numerical tools, we find that such a topological transition is a self-organised, non-equilibrium phenomenon driven by the trade-off of mechanical forces and physical interactions exerted at cell-cell and cell-substrate adhesions. Notably, the unstable disordered states of uncontrolled cellular proliferation macroscopically emerge as complex spatio-temporal patterns that evolve statistically correlated by a universal law.
1908.11237
Frits van Heijster
Frits H. A. van Heijster, Vincent Breukels and Arend Heerschap
Quantitative model for $^{13}$C tracing applied to citrate production and secretion of prostate epithelial tissue
35 pages, 13 figures, 2 tables
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Healthy human prostate epithelial cells have the unique ability to produce and secrete large amounts of citrate into the lumen of the prostate. Citrate is a Krebs cycle metabolite produced in the condensation reaction between acetyl-CoA and oxaloacetate in the mitochondria of the cell. With the application of $^{13}$C enriched substrates, such as $^{13}$C glucose or pyruvate, to prostate cells or tissues, it is possible to identify the contributions of different metabolic pathways to this production and secretion of citrate. In this work we present a quantitative model describing the mitochondrial production and the secretion of citrate by prostatic epithelial cells employing the $^{13}$C labeling pattern of secreted citrate as readout. We derive equations for the secretion fraction of citrate and the contribution of pyruvate dehydrogenase complex (PDC) versus the anaplerotic pyruvate carboxylase (PC) pathways in supplying the Krebs cycle with carbons from pyruvate for the production of citrate. These measures are independent of initial $^{13}$C-enrichment of the administered supplements and of $^{13}$C J-coupling patterns, making this method robust even when SNR is low. We propose use of these equations to distinguish between citrate metabolism in healthy and diseased prostate tissue, in particular upon malignant transformation.
[ { "created": "Thu, 29 Aug 2019 14:00:51 GMT", "version": "v1" }, { "created": "Sat, 10 Oct 2020 11:00:33 GMT", "version": "v2" }, { "created": "Fri, 4 Jun 2021 13:44:52 GMT", "version": "v3" }, { "created": "Wed, 9 Jun 2021 10:13:10 GMT", "version": "v4" } ]
2021-06-10
[ [ "van Heijster", "Frits H. A.", "" ], [ "Breukels", "Vincent", "" ], [ "Heerschap", "Arend", "" ] ]
Healthy human prostate epithelial cells have the unique ability to produce and secrete large amounts of citrate into the lumen of the prostate. Citrate is a Krebs cycle metabolite produced in the condensation reaction between acetyl-CoA and oxaloacetate in the mitochondria of the cell. With the application of $^{13}$C enriched substrates, such as $^{13}$C glucose or pyruvate, to prostate cells or tissues, it is possible to identify the contributions of different metabolic pathways to this production and secretion of citrate. In this work we present a quantitative model describing the mitochondrial production and the secretion of citrate by prostatic epithelial cells employing the $^{13}$C labeling pattern of secreted citrate as readout. We derive equations for the secretion fraction of citrate and the contribution of pyruvate dehydrogenase complex (PDC) versus the anaplerotic pyruvate carboxylase (PC) pathways in supplying the Krebs cycle with carbons from pyruvate for the production of citrate. These measures are independent of initial $^{13}$C-enrichment of the administered supplements and of $^{13}$C J-coupling patterns, making this method robust even when SNR is low. We propose use of these equations to distinguish between citrate metabolism in healthy and diseased prostate tissue, in particular upon malignant transformation.
1812.09263
Fabio Sanchez PhD
Fabio Sanchez, Luis Barboza, Paola Vasquez
Parameter estimates of the 2016-2017 Zika outbreak in Costa Rica: An Approximate Bayesian Computation (ABC) Approach
17 pages, 6 figures
null
10.3934/mbe.2019136
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Costa Rica, the first known cases of Zika were reported in 2016. We looked at the 2016-2017 Zika outbreak and explored the transmission dynamics using weekly reported data. A nonlinear differential equation single-outbreak model with sexual transmission, as well as host availability for vector-feeding was used to estimate key parameters, fit the data and compute the {\it basic reproductive number}, $\mathcal{R}_0$, distribution. Furthermore, a sensitivity and elasticity analysis was computed based on the $\mathcal{R}_0$ parameters.
[ { "created": "Fri, 21 Dec 2018 17:10:58 GMT", "version": "v1" } ]
2019-08-07
[ [ "Sanchez", "Fabio", "" ], [ "Barboza", "Luis", "" ], [ "Vasquez", "Paola", "" ] ]
In Costa Rica, the first known cases of Zika were reported in 2016. We looked at the 2016-2017 Zika outbreak and explored the transmission dynamics using weekly reported data. A nonlinear differential equation single-outbreak model with sexual transmission, as well as host availability for vector-feeding was used to estimate key parameters, fit the data and compute the {\it basic reproductive number}, $\mathcal{R}_0$, distribution. Furthermore, a sensitivity and elasticity analysis was computed based on the $\mathcal{R}_0$ parameters.
1504.03080
Andrew McPherson
Andrew McPherson, Andrew Roth, Gavin Ha, Sohrab P. Shah, Cedric Chauve, S. Cenk Sahinalp
Joint Inference of Genome Structure and Content in Heterogeneous Tumour Samples
Presented at RECOMB 2015
null
null
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a genomically unstable cancer, a single tumour biopsy will often contain a mixture of competing tumour clones. These tumour clones frequently differ with respect to their genomic content (copy number of each gene) and structure (order of genes on each chromosome). Modern bulk genome sequencing mixes the signals of tumour clones and contaminating normal cells, complicating inference of genomic content and structure. We propose a method to unmix tumour and contaminating normal signals and jointly predict genomic structure and content of each tumour clone. We use genome graphs to represent tumour clones, and model the likelihood of the observed reads given clones and mixing proportions. Our use of haplotype blocks allows us to accurately measure allele specific read counts, and infer allele specific copy number for each clone. The proposed method is a heuristic local search based on applying incremental, locally optimal modifications of the genome graphs. Using simulated data, we show that our method predicts copy counts and gene adjacencies with reasonable accuracy.
[ { "created": "Mon, 13 Apr 2015 07:17:36 GMT", "version": "v1" }, { "created": "Fri, 24 Apr 2015 22:51:37 GMT", "version": "v2" } ]
2015-04-28
[ [ "McPherson", "Andrew", "" ], [ "Roth", "Andrew", "" ], [ "Ha", "Gavin", "" ], [ "Shah", "Sohrab P.", "" ], [ "Chauve", "Cedric", "" ], [ "Sahinalp", "S. Cenk", "" ] ]
For a genomically unstable cancer, a single tumour biopsy will often contain a mixture of competing tumour clones. These tumour clones frequently differ with respect to their genomic content (copy number of each gene) and structure (order of genes on each chromosome). Modern bulk genome sequencing mixes the signals of tumour clones and contaminating normal cells, complicating inference of genomic content and structure. We propose a method to unmix tumour and contaminating normal signals and jointly predict genomic structure and content of each tumour clone. We use genome graphs to represent tumour clones, and model the likelihood of the observed reads given clones and mixing proportions. Our use of haplotype blocks allows us to accurately measure allele specific read counts, and infer allele specific copy number for each clone. The proposed method is a heuristic local search based on applying incremental, locally optimal modifications of the genome graphs. Using simulated data, we show that our method predicts copy counts and gene adjacencies with reasonable accuracy.
2010.06415
Wayne Hayes
Wayne B. Hayes
Exact $p$-values for global network alignments via combinatorial analysis of shared GO terms (Subtitle: REFANGO: Rigorous Evaluation of Functional Alignments of Networks using Gene Ontology)
22 pages, 3 figures, 4 tables
null
null
null
q-bio.MN cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network alignment aims to uncover topologically similar regions in the protein-protein interaction (PPI) networks of two or more species under the assumption that topologically similar regions tend to perform similar functions. Although there exists a plethora of both network alignment algorithms and measures of topological similarity, currently no gold standard exists for evaluating how well either is able to uncover functionally similar regions. Here we propose a formal, mathematically and statistically rigorous method for evaluating the statistical significance of shared GO terms in a global, 1-to-1 alignment between two PPI networks. We use combinatorics to precisely count the number of possible network alignments in which $k$ proteins share a particular GO term. When divided by the number of all possible network alignments, this provides an explicit, exact $p$-value for a network alignment with respect to a particular GO term. Just as with BLAST's p-values and bit-scores, this method is designed not to guide the formation of any particular alignment, but instead to provide an after-the-fact evaluation of a fixed, given alignment.
[ { "created": "Fri, 9 Oct 2020 23:02:13 GMT", "version": "v1" }, { "created": "Sat, 25 Sep 2021 19:04:49 GMT", "version": "v2" } ]
2021-09-28
[ [ "Hayes", "Wayne B.", "" ] ]
Network alignment aims to uncover topologically similar regions in the protein-protein interaction (PPI) networks of two or more species under the assumption that topologically similar regions tend to perform similar functions. Although there exists a plethora of both network alignment algorithms and measures of topological similarity, currently no gold standard exists for evaluating how well either is able to uncover functionally similar regions. Here we propose a formal, mathematically and statistically rigorous method for evaluating the statistical significance of shared GO terms in a global, 1-to-1 alignment between two PPI networks. We use combinatorics to precisely count the number of possible network alignments in which $k$ proteins share a particular GO term. When divided by the number of all possible network alignments, this provides an explicit, exact $p$-value for a network alignment with respect to a particular GO term. Just as with BLAST's p-values and bit-scores, this method is designed not to guide the formation of any particular alignment, but instead to provide an after-the-fact evaluation of a fixed, given alignment.
1903.08447
Mattia Bramini
Mattia Bramini, Martina Chiacchiaretta, Andrea Armirotti, Anna Rocchi, Deepali D. Kale, Cristina Martin Jimenez, Ester V\'azquez, Tiziano Bandiera, Stefano Ferroni, Fabrizia Cesca and Fabio Benfenati
An increase in membrane cholesterol by graphene oxide disrupts calcium homeostasis in primary astrocytes
This document is the unedited Author's version of a Submitted Work that was subsequently accepted for publication in Small after peer review. To access the final edited and published work see https://onlinelibrary.wiley.com/doi/10.1002/smll.201900147 40 pages, 6 main figures and 1 supplementary figure
Small 2019, 1900147
10.1002/smll.201900147
null
q-bio.NC physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
The use of graphene nanomaterials (GNMs) for biomedical applications targeted to the central nervous system is exponentially increasing, although precise information on their effects on brain cells is lacking. In this work, we addressed the molecular changes induced in cortical astrocytes by few-layer graphene (FLG) and graphene oxide (GO) flakes. Our results show that exposure to FLG/GO does not affect cell viability or proliferation. However, proteomic and lipidomic analyses unveiled alterations in several cellular processes, including intracellular Ca2+ ([Ca2+]i) homeostasis and cholesterol metabolism, which were particularly intense in cells exposed to GO. Indeed, GO exposure impaired spontaneous and evoked astrocyte [Ca2+]i signals and induced a marked increase in membrane cholesterol levels. Importantly, cholesterol depletion fully rescued [Ca2+]i dynamics in GO-treated cells, indicating a causal relationship between these GO-mediated effects. Our results indicate that exposure to GNMs alters intracellular signaling in astrocytes and may impact on astrocyte-neuron interactions.
[ { "created": "Wed, 20 Mar 2019 11:21:17 GMT", "version": "v1" } ]
2019-03-21
[ [ "Bramini", "Mattia", "" ], [ "Chiacchiaretta", "Martina", "" ], [ "Armirotti", "Andrea", "" ], [ "Rocchi", "Anna", "" ], [ "Kale", "Deepali D.", "" ], [ "Jimenez", "Cristina Martin", "" ], [ "Vázquez", "Ester", "" ], [ "Bandiera", "Tiziano", "" ], [ "Ferroni", "Stefano", "" ], [ "Cesca", "Fabrizia", "" ], [ "Benfenati", "Fabio", "" ] ]
The use of graphene nanomaterials (GNMs) for biomedical applications targeted to the central nervous system is exponentially increasing, although precise information on their effects on brain cells is lacking. In this work, we addressed the molecular changes induced in cortical astrocytes by few-layer graphene (FLG) and graphene oxide (GO) flakes. Our results show that exposure to FLG/GO does not affect cell viability or proliferation. However, proteomic and lipidomic analyses unveiled alterations in several cellular processes, including intracellular Ca2+ ([Ca2+]i) homeostasis and cholesterol metabolism, which were particularly intense in cells exposed to GO. Indeed, GO exposure impaired spontaneous and evoked astrocyte [Ca2+]i signals and induced a marked increase in membrane cholesterol levels. Importantly, cholesterol depletion fully rescued [Ca2+]i dynamics in GO-treated cells, indicating a causal relationship between these GO-mediated effects. Our results indicate that exposure to GNMs alters intracellular signaling in astrocytes and may impact on astrocyte-neuron interactions.
1105.5874
David McAvity
David McAvity, Tristen Bristow, Eric Bunker, Alex Dreyer
Perception without Self-Matching in Conditional Tag Based Cooperation
12 pages, 10 figures
Journal of Theoretical Biology 333 (2013)
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a model for the evolution of cooperation in a population where individuals may have one of a number of different heritable and distinguishable markers or tags. Individuals interact with each of their neighbours on a square lattice by either cooperating by donating some benefit at a cost to themselves or defecting by doing nothing. The decision to cooperate or defect is contingent on each individual's perception of its interacting partner's tag. Unlike in other tag-based models individuals do not compare their own tag to that of their interaction partner. That is, there is no {\em self-matching}. When perception is perfect the cooperation rate is substantially higher than in the usual spatial prisoner's dilemma game when the cost of cooperation is high. The enhancement in cooperation is positively correlated with the number of different tags. The more diverse a population is the more cooperative it becomes. When individuals start with an inability to perceive tags the population evolves to a state where individuals gain at least partial perception. With some reproduction mechanisms perfect perception evolves, but with others the ability to perceive tags is imperfect. We find that perception of tags evolves to lower levels when the cost of cooperation is higher.
[ { "created": "Mon, 30 May 2011 06:26:28 GMT", "version": "v1" }, { "created": "Tue, 31 May 2011 04:07:15 GMT", "version": "v2" }, { "created": "Sun, 9 Feb 2014 23:53:25 GMT", "version": "v3" } ]
2014-02-11
[ [ "McAvity", "David", "" ], [ "Bristow", "Tristen", "" ], [ "Bunker", "Eric", "" ], [ "Dreyer", "Alex", "" ] ]
We consider a model for the evolution of cooperation in a population where individuals may have one of a number of different heritable and distinguishable markers or tags. Individuals interact with each of their neighbours on a square lattice by either cooperating by donating some benefit at a cost to themselves or defecting by doing nothing. The decision to cooperate or defect is contingent on each individual's perception of its interacting partner's tag. Unlike in other tag-based models individuals do not compare their own tag to that of their interaction partner. That is, there is no {\em self-matching}. When perception is perfect the cooperation rate is substantially higher than in the usual spatial prisoner's dilemma game when the cost of cooperation is high. The enhancement in cooperation is positively correlated with the number of different tags. The more diverse a population is the more cooperative it becomes. When individuals start with an inability to perceive tags the population evolves to a state where individuals gain at least partial perception. With some reproduction mechanisms perfect perception evolves, but with others the ability to perceive tags is imperfect. We find that perception of tags evolves to lower levels when the cost of cooperation is higher.
2104.08537
Federico Bertoni
Federico Bertoni, Noemi Montobbio, Alessandro Sarti and Giovanna Citti
Emergence of Lie symmetries in functional architectures learned by CNNs
null
null
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images. Our architecture is built in such a way as to mimic the early stages of biological visual systems. In particular, it contains a pre-filtering step $\ell^0$ defined in analogy with the Lateral Geniculate Nucleus (LGN). Moreover, the first convolutional layer is equipped with lateral connections defined as a propagation driven by a learned connectivity kernel, in analogy with the horizontal connectivity of the primary visual cortex (V1). The layer $\ell^0$ shows a rotationally symmetric pattern well approximated by a Laplacian of Gaussian (LoG), which is a well-known model of the receptive profiles of LGN cells. The convolutional filters in the first layer can be approximated by Gabor functions, in agreement with well-established models for the profiles of simple cells in V1. We study the learned lateral connectivity kernel of this layer, showing the emergence of orientation selectivity w.r.t. the learned filters. We also examine the association fields induced by the learned kernel, and show qualitative and quantitative comparisons with known group-based models of V1 horizontal connectivity. These geometric properties arise spontaneously during the training of the CNN architecture, analogously to the emergence of symmetries in visual systems thanks to brain plasticity driven by external stimuli.
[ { "created": "Sat, 17 Apr 2021 13:23:26 GMT", "version": "v1" } ]
2021-04-20
[ [ "Bertoni", "Federico", "" ], [ "Montobbio", "Noemi", "" ], [ "Sarti", "Alessandro", "" ], [ "Citti", "Giovanna", "" ] ]
In this paper we study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images. Our architecture is built in such a way as to mimic the early stages of biological visual systems. In particular, it contains a pre-filtering step $\ell^0$ defined in analogy with the Lateral Geniculate Nucleus (LGN). Moreover, the first convolutional layer is equipped with lateral connections defined as a propagation driven by a learned connectivity kernel, in analogy with the horizontal connectivity of the primary visual cortex (V1). The layer $\ell^0$ shows a rotationally symmetric pattern well approximated by a Laplacian of Gaussian (LoG), which is a well-known model of the receptive profiles of LGN cells. The convolutional filters in the first layer can be approximated by Gabor functions, in agreement with well-established models for the profiles of simple cells in V1. We study the learned lateral connectivity kernel of this layer, showing the emergence of orientation selectivity w.r.t. the learned filters. We also examine the association fields induced by the learned kernel, and show qualitative and quantitative comparisons with known group-based models of V1 horizontal connectivity. These geometric properties arise spontaneously during the training of the CNN architecture, analogously to the emergence of symmetries in visual systems thanks to brain plasticity driven by external stimuli.
2208.05598
Rene Warren
Ren\'e L. Warren
PASS: De novo assembler for short peptide sequences
4 pages, 1 table
null
null
null
q-bio.GN q-bio.BM
http://creativecommons.org/licenses/by/4.0/
The ability to characterize proteins at sequence-level resolution is vital to biological research. Currently, the leading method for protein sequencing is by liquid chromatography mass spectrometry (LC-MS), whereby proteins are reduced to their constituent peptides by enzymatic digest and subsequently analyzed on an LC-MS instrument. The short peptide sequences that result from this analysis are used to characterize the original protein content of the sample. Here we present PASS, a de novo assembler for short peptide sequences that can be used to reconstruct large portions of protein targets, a step that can facilitate downstream sample characterization efforts. We show how, with adequate peptide sequence coverage and little-to-no additional sequence processing, PASS reconstructs protein sequences into relatively large (100 amino acid or longer) contigs having high (93.1 - 99.1%) sequence identity to reference antibody light and heavy chain proteins. Availability: PASS is released under the GNU General Public License Version 3 (GPLv3) and is publicly available from https://github.com/warrenlr/PASS
[ { "created": "Thu, 11 Aug 2022 00:35:08 GMT", "version": "v1" } ]
2022-08-12
[ [ "Warren", "René L.", "" ] ]
The ability to characterize proteins at sequence-level resolution is vital to biological research. Currently, the leading method for protein sequencing is by liquid chromatography mass spectrometry (LC-MS), whereby proteins are reduced to their constituent peptides by enzymatic digest and subsequently analyzed on an LC-MS instrument. The short peptide sequences that result from this analysis are used to characterize the original protein content of the sample. Here we present PASS, a de novo assembler for short peptide sequences that can be used to reconstruct large portions of protein targets, a step that can facilitate downstream sample characterization efforts. We show how, with adequate peptide sequence coverage and little-to-no additional sequence processing, PASS reconstructs protein sequences into relatively large (100 amino acid or longer) contigs having high (93.1 - 99.1%) sequence identity to reference antibody light and heavy chain proteins. Availability: PASS is released under the GNU General Public License Version 3 (GPLv3) and is publicly available from https://github.com/warrenlr/PASS
1308.1453
Hunter Fraser
Jessica Chang, Yiqi Zhou, Xiaoli Hu, Lucia Lam, Cameron Henry, Erin M. Green, Ryosuke Kita, Michael S. Kobor, and Hunter B. Fraser
The molecular mechanism of a cis-regulatory adaptation in yeast
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite recent advances in our ability to detect adaptive evolution involving the cis-regulation of gene expression, our knowledge of the molecular mechanisms underlying these adaptations has lagged far behind. Across all model organisms the causal mutations have been discovered for only a handful of gene expression adaptations, and even for these, mechanistic details (e.g. the trans-regulatory factors involved) have not been determined. We previously reported a polygenic gene expression adaptation involving down-regulation of the ergosterol biosynthesis pathway in the budding yeast Saccharomyces cerevisiae. Here we investigate the molecular mechanism of a cis-acting mutation affecting a member of this pathway, ERG28. We show that the causal mutation is a two-base deletion in the promoter of ERG28 that strongly reduces the binding of two transcription factors, Sok2 and Mot3, thus abolishing their regulation of ERG28. This down-regulation increases resistance to a widely used antifungal drug targeting ergosterol, similar to mutations disrupting this pathway in clinical yeast isolates. The identification of the causal genetic variant revealed that the selection likely occurred after the deletion was already present at high frequency in the population, rather than when it was a new mutation. These results provide a detailed view of the molecular mechanism of a cis-regulatory adaptation, and underscore the importance of this view to our understanding of evolution at the molecular level.
[ { "created": "Wed, 7 Aug 2013 00:42:01 GMT", "version": "v1" } ]
2013-08-08
[ [ "Chang", "Jessica", "" ], [ "Zhou", "Yiqi", "" ], [ "Hu", "Xiaoli", "" ], [ "Lam", "Lucia", "" ], [ "Henry", "Cameron", "" ], [ "Green", "Erin M.", "" ], [ "Kita", "Ryosuke", "" ], [ "Kobor", "Michael S.", "" ], [ "Fraser", "Hunter B.", "" ] ]
Despite recent advances in our ability to detect adaptive evolution involving the cis-regulation of gene expression, our knowledge of the molecular mechanisms underlying these adaptations has lagged far behind. Across all model organisms the causal mutations have been discovered for only a handful of gene expression adaptations, and even for these, mechanistic details (e.g. the trans-regulatory factors involved) have not been determined. We previously reported a polygenic gene expression adaptation involving down-regulation of the ergosterol biosynthesis pathway in the budding yeast Saccharomyces cerevisiae. Here we investigate the molecular mechanism of a cis-acting mutation affecting a member of this pathway, ERG28. We show that the causal mutation is a two-base deletion in the promoter of ERG28 that strongly reduces the binding of two transcription factors, Sok2 and Mot3, thus abolishing their regulation of ERG28. This down-regulation increases resistance to a widely used antifungal drug targeting ergosterol, similar to mutations disrupting this pathway in clinical yeast isolates. The identification of the causal genetic variant revealed that the selection likely occurred after the deletion was already present at high frequency in the population, rather than when it was a new mutation. These results provide a detailed view of the molecular mechanism of a cis-regulatory adaptation, and underscore the importance of this view to our understanding of evolution at the molecular level.
1011.5388
Steven Frank
Steven A. Frank
Measurement scale in maximum entropy models of species abundance
22 pages, 1 figure
Journal of Evolutionary Biology 24:485-496 (2011)
10.1111/j.1420-9101.2010.02209.x
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The consistency of the species abundance distribution across diverse communities has attracted widespread attention. In this paper, I argue that the consistency of pattern arises because diverse ecological mechanisms share a common symmetry with regard to measurement scale. By symmetry, I mean that different ecological processes preserve the same measure of information and lose all other information in the aggregation of various perturbations. I frame these explanations of symmetry, measurement, and aggregation in terms of a recently developed extension to the theory of maximum entropy. I show that the natural measurement scale for the species abundance distribution is log-linear: the information in observations at small population sizes scales logarithmically and, as population size increases, the scaling of information grades from logarithmic to linear. Such log-linear scaling leads naturally to a gamma distribution for species abundance, which matches well with the observed patterns. Much of the variation between samples can be explained by the magnitude at which the measurement scale grades from logarithmic to linear. This measurement approach can be applied to the similar problem of allelic diversity in population genetics and to a wide variety of other patterns in biology.
[ { "created": "Wed, 24 Nov 2010 15:02:48 GMT", "version": "v1" } ]
2011-02-28
[ [ "Frank", "Steven A.", "" ] ]
The consistency of the species abundance distribution across diverse communities has attracted widespread attention. In this paper, I argue that the consistency of pattern arises because diverse ecological mechanisms share a common symmetry with regard to measurement scale. By symmetry, I mean that different ecological processes preserve the same measure of information and lose all other information in the aggregation of various perturbations. I frame these explanations of symmetry, measurement, and aggregation in terms of a recently developed extension to the theory of maximum entropy. I show that the natural measurement scale for the species abundance distribution is log-linear: the information in observations at small population sizes scales logarithmically and, as population size increases, the scaling of information grades from logarithmic to linear. Such log-linear scaling leads naturally to a gamma distribution for species abundance, which matches well with the observed patterns. Much of the variation between samples can be explained by the magnitude at which the measurement scale grades from logarithmic to linear. This measurement approach can be applied to the similar problem of allelic diversity in population genetics and to a wide variety of other patterns in biology.
1210.8295
Luc Berthouze
Timothy J. Taylor, Caroline Hartley, P\'eter L. Simon, Istvan Z Kiss, Luc Berthouze
Identification of criticality in neuronal avalanches: I. A theoretical investigation of the non-driven case
33 pages, 10 figures
The Journal of Mathematical Neuroscience 2013, 3:5
10.1186/2190-8567-3-5
null
q-bio.NC math-ph math.MP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study a simple model of a purely excitatory neural network that, by construction, operates at a critical point. This model allows us to consider various markers of criticality and illustrate how they should perform in a finite-size system. By calculating the exact distribution of avalanche sizes we are able to show that, over a limited range of avalanche sizes which we precisely identify, the distribution has scale free properties but is not a power law. This suggests that it would be inappropriate to dismiss a system as not being critical purely based on an inability to rigorously fit a power law distribution as has been recently advocated. In assessing whether a system, especially a finite-size one, is critical it is thus important to consider other possible markers. We illustrate one of these by showing the divergence of susceptibility as the critical point of the system is approached. Finally, we provide evidence that power laws may underlie other observables of the system, that may be more amenable to robust experimental assessment.
[ { "created": "Wed, 31 Oct 2012 11:01:56 GMT", "version": "v1" } ]
2013-05-17
[ [ "Taylor", "Timothy J.", "" ], [ "Hartley", "Caroline", "" ], [ "Simon", "Péter L.", "" ], [ "Kiss", "Istvan Z", "" ], [ "Berthouze", "Luc", "" ] ]
In this paper we study a simple model of a purely excitatory neural network that, by construction, operates at a critical point. This model allows us to consider various markers of criticality and illustrate how they should perform in a finite-size system. By calculating the exact distribution of avalanche sizes we are able to show that, over a limited range of avalanche sizes which we precisely identify, the distribution has scale free properties but is not a power law. This suggests that it would be inappropriate to dismiss a system as not being critical purely based on an inability to rigorously fit a power law distribution as has been recently advocated. In assessing whether a system, especially a finite-size one, is critical it is thus important to consider other possible markers. We illustrate one of these by showing the divergence of susceptibility as the critical point of the system is approached. Finally, we provide evidence that power laws may underlie other observables of the system, that may be more amenable to robust experimental assessment.
1203.1076
Shinsuke Koyama
Shinsuke Koyama
On the Relation between Encoding and Decoding of Neuronal Spikes
21 pages, 1 figure
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response, and primarily focuses on understanding how neurons respond to a wide variety of stimuli, and on constructing models that accurately describe the stimulus-response relationship. Neural decoding, on the other hand, refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal response is stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here, we use these two perspectives to investigate the question of what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, i.e., the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by spike counts or the sample mean, does not always efficiently decode the rate codes, but can improve efficiency in reading certain rate codes, when correlations within a spike train are taken into account.
[ { "created": "Tue, 6 Mar 2012 00:25:23 GMT", "version": "v1" } ]
2012-03-07
[ [ "Koyama", "Shinsuke", "" ] ]
Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response, and primarily focuses on understanding how neurons respond to a wide variety of stimuli, and on constructing models that accurately describe the stimulus-response relationship. Neural decoding, on the other hand, refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal response is stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here, we use these two perspectives to investigate the question of what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, i.e., the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by spike counts or the sample mean, does not always efficiently decode the rate codes, but can improve efficiency in reading certain rate codes, when correlations within a spike train are taken into account.
1703.06496
Mona Arabzadeh
Mona Arabzadeh, Morteza Saheb Zamani, Mehdi Sedighi, Sayed-Amir Marashi
A Graph-Based Approach to Analyze Flux-Balanced Pathways in Metabolic Networks
23 pages, 7 figures, 2 tables
BioSystems,165,40--51,(2018)
10.1016/j.biosystems.2017.12.001
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An Elementary Flux Mode (EFM) is a pathway with minimum set of reactions that are functional in steady-state constrained space. Due to the high computational complexity of calculating EFMs, different approaches have been proposed to find these flux-balanced pathways. In this paper, an approach to find a subset of EFMs is proposed based on a graph data model. The given metabolic network is mapped to the graph model and decisions for reaction inclusion can be made based on metabolites and their associated reactions. This notion makes the approach more convenient to categorize the output pathways. Implications of the proposed method on metabolic networks are discussed.
[ { "created": "Sun, 19 Mar 2017 19:37:51 GMT", "version": "v1" }, { "created": "Sun, 4 Feb 2018 14:19:11 GMT", "version": "v2" } ]
2018-02-06
[ [ "Arabzadeh", "Mona", "" ], [ "Zamani", "Morteza Saheb", "" ], [ "Sedighi", "Mehdi", "" ], [ "Marashi", "Sayed-Amir", "" ] ]
An Elementary Flux Mode (EFM) is a pathway with minimum set of reactions that are functional in steady-state constrained space. Due to the high computational complexity of calculating EFMs, different approaches have been proposed to find these flux-balanced pathways. In this paper, an approach to find a subset of EFMs is proposed based on a graph data model. The given metabolic network is mapped to the graph model and decisions for reaction inclusion can be made based on metabolites and their associated reactions. This notion makes the approach more convenient to categorize the output pathways. Implications of the proposed method on metabolic networks are discussed.
1812.03715
Krzysztof Bartoszek
Krzysztof Bartoszek and Pietro Li\`o
Modelling trait dependent speciation with Approximate Bayesian Computation
null
Acta Physica Polonica B Proceedings Supplement, 12(1):25-47, 2019
10.5506/APhysPolBSupp.12.25
null
q-bio.PE cs.LG stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogeny is the field of modelling the temporal discrete dynamics of speciation. Complex models can nowadays be studied using the Approximate Bayesian Computation approach which avoids likelihood calculations. The field's progression is hampered by the lack of robust software to estimate the numerous parameters of the speciation process. In this work we present an R package, pcmabc, based on Approximate Bayesian Computation, that implements three novel phylogenetic algorithms for trait-dependent speciation modelling. Our phylogenetic comparative methodology takes into account both the simulated traits and phylogeny, attempting to estimate the parameters of the processes generating the phenotype and the trait. The user is not restricted to a predefined set of models and can specify a variety of evolutionary and branching models. We illustrate the software with a simulation-reestimation study focused around the branching Ornstein-Uhlenbeck process, where the branching rate depends non-linearly on the value of the driving Ornstein-Uhlenbeck process. Included in this work is a tutorial on how to use the software.
[ { "created": "Mon, 10 Dec 2018 10:17:12 GMT", "version": "v1" } ]
2020-11-23
[ [ "Bartoszek", "Krzysztof", "" ], [ "Liò", "Pietro", "" ] ]
Phylogeny is the field of modelling the temporal discrete dynamics of speciation. Complex models can nowadays be studied using the Approximate Bayesian Computation approach which avoids likelihood calculations. The field's progression is hampered by the lack of robust software to estimate the numerous parameters of the speciation process. In this work we present an R package, pcmabc, based on Approximate Bayesian Computation, that implements three novel phylogenetic algorithms for trait-dependent speciation modelling. Our phylogenetic comparative methodology takes into account both the simulated traits and phylogeny, attempting to estimate the parameters of the processes generating the phenotype and the trait. The user is not restricted to a predefined set of models and can specify a variety of evolutionary and branching models. We illustrate the software with a simulation-reestimation study focused around the branching Ornstein-Uhlenbeck process, where the branching rate depends non-linearly on the value of the driving Ornstein-Uhlenbeck process. Included in this work is a tutorial on how to use the software.
2302.14778
Nithya Ramakrishnan
Aamir Sahil Chandroth, Nithya Ramakrishnan, Sanjay Chandrasekharan
The self-organization of selfishness: Reinforcement Learning shows how selfish behavior can emerge from agent-environment interaction dynamics
9 pages, 16 figs, 1 table. Reinforcement Learning - Parametric Analysis, Social Behavior
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
When biological communities use signaling structures for complex coordination, 'free-riders' emerge. The free-riding agents do not contribute to the community resources (signals), but exploit them. Most models of such 'selfish' behavior consider free-riding as evolving through mutation and selection. Over generations, the mutation -- which is considered to create a stable trait -- spreads through the population. This can lead to a version of the 'Tragedy of the Commons', where the community's coordination resource gets fully depleted or deteriorated. In contrast to this evolutionary view, we present a reinforcement learning model, which shows that both signaling-based coordination and free-riding behavior can emerge within a generation, through learning based on energy minimisation. Further, we show that there can be two types of free-riding, and both of these are not stable traits, but dynamic 'coagulations' of agent-environment interactions. Our model thus shows how different kinds of selfish behavior can emerge through self-organization, and suggests that the idea of selfishness as a stable trait presumes a model based on mutations. We conclude with a discussion of some social and policy implications of our model.
[ { "created": "Tue, 28 Feb 2023 17:24:11 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 05:33:32 GMT", "version": "v2" } ]
2023-03-29
[ [ "Chandroth", "Aamir Sahil", "" ], [ "Ramakrishnan", "Nithya", "" ], [ "Chandrasekharan", "Sanjay", "" ] ]
When biological communities use signaling structures for complex coordination, 'free-riders' emerge. The free-riding agents do not contribute to the community resources (signals), but exploit them. Most models of such 'selfish' behavior consider free-riding as evolving through mutation and selection. Over generations, the mutation -- which is considered to create a stable trait -- spreads through the population. This can lead to a version of the 'Tragedy of the Commons', where the community's coordination resource gets fully depleted or deteriorated. In contrast to this evolutionary view, we present a reinforcement learning model, which shows that both signaling-based coordination and free-riding behavior can emerge within a generation, through learning based on energy minimisation. Further, we show that there can be two types of free-riding, and both of these are not stable traits, but dynamic 'coagulations' of agent-environment interactions. Our model thus shows how different kinds of selfish behavior can emerge through self-organization, and suggests that the idea of selfishness as a stable trait presumes a model based on mutations. We conclude with a discussion of some social and policy implications of our model.
2310.13021
Cedegao Zhang
Cedegao E. Zhang, Katherine M. Collins, Adrian Weller, Joshua B. Tenenbaum
AI for Mathematics: A Cognitive Science Perspective
null
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Mathematics is one of the most powerful conceptual systems developed and used by the human species. Dreams of automated mathematicians have a storied history in artificial intelligence (AI). Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems. In this work, we reflect on these goals from a \textit{cognitive science} perspective. We call attention to several classical and ongoing research directions from cognitive science, which we believe are valuable for AI practitioners to consider when seeking to build truly human (or superhuman)-level mathematical systems. We close with open discussions and questions that we believe necessitate a multi-disciplinary perspective -- cognitive scientists working in tandem with AI researchers and mathematicians -- as we move toward better mathematical AI systems which not only help us push the frontier of mathematics, but also offer glimpses into how we as humans are even capable of such great cognitive feats.
[ { "created": "Thu, 19 Oct 2023 02:00:31 GMT", "version": "v1" } ]
2023-10-23
[ [ "Zhang", "Cedegao E.", "" ], [ "Collins", "Katherine M.", "" ], [ "Weller", "Adrian", "" ], [ "Tenenbaum", "Joshua B.", "" ] ]
Mathematics is one of the most powerful conceptual systems developed and used by the human species. Dreams of automated mathematicians have a storied history in artificial intelligence (AI). Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems. In this work, we reflect on these goals from a \textit{cognitive science} perspective. We call attention to several classical and ongoing research directions from cognitive science, which we believe are valuable for AI practitioners to consider when seeking to build truly human (or superhuman)-level mathematical systems. We close with open discussions and questions that we believe necessitate a multi-disciplinary perspective -- cognitive scientists working in tandem with AI researchers and mathematicians -- as we move toward better mathematical AI systems which not only help us push the frontier of mathematics, but also offer glimpses into how we as humans are even capable of such great cognitive feats.
1608.08793
William Schafer
Barry Bentley, Robyn Branicky, Christopher L. Barnes, Edward T. Bullmore, Petra E. V\'ertes and William R. Schafer
The multilayer connectome of Caenorhabditis elegans
null
null
10.1371/journal.pcbi.1005283
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Connectomics has focused primarily on the mapping of synaptic links in the brain; yet it is well established that extrasynaptic volume transmission, especially via monoamines and neuropeptides, is also critical to brain function. Here we present a draft monoamine connectome, along with a partial neuropeptide connectome, for the nematode C. elegans, based on new and published expression data for biosynthetic genes and receptors. Thus, the neuronal connectome can be represented as a multiplex network, with synaptic, gap junction, and neuromodulatory layers representing alternative modes of interneuronal interaction and with distinct network structures. In particular, the monoamine network exhibits novel topological properties, with a highly disassortative star-like structure and a rich-club of interconnected broadcasting hubs. Despite the low degree of overlap between layers, we find highly significant modes of interaction, pinpointing network locations and multilink motifs where aminergic and neuropeptide signalling modulate synaptic activity. The multilayer connectome of C. elegans represents a clear exemplar of a biological multiplex network and provides a prototype for understanding how extrasynaptic signalling can be integrated into the wired circuitry in larger brains.
[ { "created": "Wed, 31 Aug 2016 10:02:09 GMT", "version": "v1" } ]
2017-02-08
[ [ "Bentley", "Barry", "" ], [ "Branicky", "Robyn", "" ], [ "Barnes", "Christopher L.", "" ], [ "Bullmore", "Edward T.", "" ], [ "Vértes", "Petra E.", "" ], [ "Schafer", "William R.", "" ] ]
Connectomics has focused primarily on the mapping of synaptic links in the brain; yet it is well established that extrasynaptic volume transmission, especially via monoamines and neuropeptides, is also critical to brain function. Here we present a draft monoamine connectome, along with a partial neuropeptide connectome, for the nematode C. elegans, based on new and published expression data for biosynthetic genes and receptors. Thus, the neuronal connectome can be represented as a multiplex network, with synaptic, gap junction, and neuromodulatory layers representing alternative modes of interneuronal interaction and with distinct network structures. In particular, the monoamine network exhibits novel topological properties, with a highly disassortative star-like structure and a rich-club of interconnected broadcasting hubs. Despite the low degree of overlap between layers, we find highly significant modes of interaction, pinpointing network locations and multilink motifs where aminergic and neuropeptide signalling modulate synaptic activity. The multilayer connectome of C. elegans represents a clear exemplar of a biological multiplex network and provides a prototype for understanding how extrasynaptic signalling can be integrated into the wired circuitry in larger brains.
1610.05057
Benjamin Kaehler
Benjamin D Kaehler
Full Reconstruction of Non-Stationary Strand-Symmetric Models on Rooted Phylogenies
null
null
null
null
q-bio.PE math.PR q-bio.GN stat.AP
http://creativecommons.org/licenses/by/4.0/
Understanding the evolutionary relationship among species is of fundamental importance to the biological sciences. The location of the root in any phylogenetic tree is critical as it gives an order to evolutionary events. None of the popular models of nucleotide evolution used in likelihood or Bayesian methods are able to infer the location of the root without exogenous information. It is known that the most general Markov models of nucleotide substitution can also not identify the location of the root or be fitted to multiple sequence alignments with less than three sequences. We prove that the location of the root and the full model can be identified and statistically consistently estimated for a non-stationary, strand-symmetric substitution model given a multiple sequence alignment with two or more sequences. We also generalise earlier work to provide a practical means of overcoming the computationally intractable problem of labelling hidden states in a phylogenetic model.
[ { "created": "Mon, 17 Oct 2016 11:43:53 GMT", "version": "v1" }, { "created": "Sun, 13 Nov 2016 00:20:24 GMT", "version": "v2" } ]
2016-11-15
[ [ "Kaehler", "Benjamin D", "" ] ]
Understanding the evolutionary relationship among species is of fundamental importance to the biological sciences. The location of the root in any phylogenetic tree is critical as it gives an order to evolutionary events. None of the popular models of nucleotide evolution used in likelihood or Bayesian methods are able to infer the location of the root without exogenous information. It is known that the most general Markov models of nucleotide substitution can also not identify the location of the root or be fitted to multiple sequence alignments with less than three sequences. We prove that the location of the root and the full model can be identified and statistically consistently estimated for a non-stationary, strand-symmetric substitution model given a multiple sequence alignment with two or more sequences. We also generalise earlier work to provide a practical means of overcoming the computationally intractable problem of labelling hidden states in a phylogenetic model.
1903.02617
Nida Obatake
Nida Obatake, Anne Shiu, Xiaoxian Tang, Angelica Torres
Oscillations and bistability in a model of ERK regulation
33 pages, 4 figures, 4 tables, 3 appendices
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work concerns the question of how two important dynamical properties, oscillations and bistability, emerge in an important biological signaling network. Specifically, we consider a model for dual-site phosphorylation and dephosphorylation of extracellular signal-regulated kinase (ERK). We prove that oscillations persist even as the model is greatly simplified (reactions are made irreversible and intermediates are removed). Bistability, however, is much less robust -- this property is lost when intermediates are removed or even when all reactions are made irreversible. Moreover, bistability is characterized by the presence of two reversible, catalytic reactions: as other reactions are made irreversible, bistability persists as long as one or both of the specified reactions is preserved. Finally, we investigate the maximum number of steady states, aided by a network's "mixed volume" (a concept from convex geometry). Taken together, our results shed light on the question of how oscillations and bistability emerge from a limiting network of the ERK network -- namely, the fully processive dual-site network -- which is known to be globally stable and therefore lack both oscillations and bistability. Our proofs are enabled by a Hopf bifurcation criterion due to Yang, analyses of Newton polytopes arising from Hurwitz determinants, and recent characterizations of multistationarity for networks having a steady-state parametrization.
[ { "created": "Wed, 6 Mar 2019 21:20:33 GMT", "version": "v1" } ]
2019-03-08
[ [ "Obatake", "Nida", "" ], [ "Shiu", "Anne", "" ], [ "Tang", "Xiaoxian", "" ], [ "Torres", "Angelica", "" ] ]
This work concerns the question of how two important dynamical properties, oscillations and bistability, emerge in an important biological signaling network. Specifically, we consider a model for dual-site phosphorylation and dephosphorylation of extracellular signal-regulated kinase (ERK). We prove that oscillations persist even as the model is greatly simplified (reactions are made irreversible and intermediates are removed). Bistability, however, is much less robust -- this property is lost when intermediates are removed or even when all reactions are made irreversible. Moreover, bistability is characterized by the presence of two reversible, catalytic reactions: as other reactions are made irreversible, bistability persists as long as one or both of the specified reactions is preserved. Finally, we investigate the maximum number of steady states, aided by a network's "mixed volume" (a concept from convex geometry). Taken together, our results shed light on the question of how oscillations and bistability emerge from a limiting network of the ERK network -- namely, the fully processive dual-site network -- which is known to be globally stable and therefore lack both oscillations and bistability. Our proofs are enabled by a Hopf bifurcation criterion due to Yang, analyses of Newton polytopes arising from Hurwitz determinants, and recent characterizations of multistationarity for networks having a steady-state parametrization.
q-bio/0505006
Andre X. C. N. Valente
Andre X. C. N. Valente and Michael E. Cusick
Yeast Protein Interactome Topology Provides Framework for Coordinated-Functionality
Final, revised version. 13 pages. Please see Nucleic Acids open access article for higher resolution figures
Nucleic Acids Research 34 (9) 2812-2819 (2006)
10.1093/nar/gkl325
null
q-bio.MN cond-mat.other cond-mat.stat-mech q-bio.OT
null
The architecture of the network of protein-protein physical interactions in Saccharomyces cerevisiae is exposed through the combination of two complementary theoretical network measures, betweenness centrality and `Q-modularity'. The yeast interactome is characterized by well-defined topological modules connected via a small number of inter-module protein interactions. Should such topological inter-module connections turn out to constitute a form of functional coordination between the modules, we speculate that this coordination is occurring typically in a pair-wise fashion, rather than by way of high-degree hub proteins responsible for coordinating multiple modules. The unique non-hub-centric hierarchical organization of the interactome is not reproduced by gene duplication-and-divergence stochastic growth models that disregard global selective pressures.
[ { "created": "Tue, 3 May 2005 14:52:05 GMT", "version": "v1" }, { "created": "Tue, 20 Sep 2005 19:23:33 GMT", "version": "v2" }, { "created": "Wed, 24 May 2006 15:26:55 GMT", "version": "v3" } ]
2009-09-29
[ [ "Valente", "Andre X. C. N.", "" ], [ "Cusick", "Michael E.", "" ] ]
The architecture of the network of protein-protein physical interactions in Saccharomyces cerevisiae is exposed through the combination of two complementary theoretical network measures, betweenness centrality and `Q-modularity'. The yeast interactome is characterized by well-defined topological modules connected via a small number of inter-module protein interactions. Should such topological inter-module connections turn out to constitute a form of functional coordination between the modules, we speculate that this coordination is occurring typically in a pair-wise fashion, rather than by way of high-degree hub proteins responsible for coordinating multiple modules. The unique non-hub-centric hierarchical organization of the interactome is not reproduced by gene duplication-and-divergence stochastic growth models that disregard global selective pressures.
q-bio/0512022
Georgy Karev
Artem S. Novozhilov, Faina S. Berezovskaya, Eugene V. Koonin, Georgy P. Karev
Mathematical modeling of anti-tumor virus therapy: Regimes with complete recovery within the framework of deterministic models
24 pages, 8 figures; presented to ARCC workshop "The Modeling of Cancer Progression and Immunotherapy"
null
null
null
q-bio.TO q-bio.PE
null
A complete parametric analysis of dynamic regimes of a conceptual model of anti-tumor virus therapy is presented. The role and limitations of mass-action kinetics are discussed. A functional response, which is a function of the ratio of uninfected to infected tumor cells, is proposed to describe the spread of the virus infection in the tumor. One of the main mathematical features of ratio-dependent models is that the origin is a complicated equilibrium point whose characteristics crucially determine the main properties of the model. It is shown that, in a certain area of parameter values, the trajectories of the model form a family of homoclinics to the origin (so-called elliptic sector). Biologically, this means that both infected and uninfected tumor cells can be eliminated with time, and complete recovery is possible as a result of the virus therapy within the framework of deterministic models.
[ { "created": "Fri, 9 Dec 2005 20:26:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Novozhilov", "Artem S.", "" ], [ "Berezovskaya", "Faina S.", "" ], [ "Koonin", "Eugene V.", "" ], [ "Karev", "Georgy P.", "" ] ]
A complete parametric analysis of dynamic regimes of a conceptual model of anti-tumor virus therapy is presented. The role and limitations of mass-action kinetics are discussed. A functional response, which is a function of the ratio of uninfected to infected tumor cells, is proposed to describe the spread of the virus infection in the tumor. One of the main mathematical features of ratio-dependent models is that the origin is a complicated equilibrium point whose characteristics crucially determine the main properties of the model. It is shown that, in a certain area of parameter values, the trajectories of the model form a family of homoclinics to the origin (so-called elliptic sector). Biologically, this means that both infected and uninfected tumor cells can be eliminated with time, and complete recovery is possible as a result of the virus therapy within the framework of deterministic models.
1806.10627
Maria Shubina
Maria Shubina
Exact traveling wave solutions of one-dimensional models of cancer invasion
19 pages, 6 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider continuous mathematical models of tumour growth and invasion based on the model introduced by Chaplain and Lolas \cite{Chaplain&Lolas2006}, for the case of one space dimension. The models consist of a system of three coupled nonlinear reaction-diffusion-taxis partial differential equations describing the interactions between cancer cells, the matrix degrading enzyme and the tissue. For these models, under certain conditions on the model parameters, we obtain exact analytical solutions in terms of traveling wave variables. These solutions are smooth positive definite functions, the profiles of some of which agree with those obtained from numerical computations \cite{Chaplain&Lolas2006} for not very large time intervals.
[ { "created": "Fri, 15 Jun 2018 21:04:24 GMT", "version": "v1" }, { "created": "Fri, 17 Jan 2020 18:34:54 GMT", "version": "v2" }, { "created": "Sun, 5 Feb 2023 16:52:37 GMT", "version": "v3" } ]
2023-02-07
[ [ "Shubina", "Maria", "" ] ]
In this paper we consider continuous mathematical models of tumour growth and invasion based on the model introduced by Chaplain and Lolas \cite{Chaplain&Lolas2006}, for the case of one space dimension. The models consist of a system of three coupled nonlinear reaction-diffusion-taxis partial differential equations describing the interactions between cancer cells, the matrix degrading enzyme and the tissue. For these models, under certain conditions on the model parameters, we obtain exact analytical solutions in terms of traveling wave variables. These solutions are smooth positive definite functions, the profiles of some of which agree with those obtained from numerical computations \cite{Chaplain&Lolas2006} for not very large time intervals.
1801.02375
Yani Zhao
Yani Zhao and Marek Cieplak
Proteins at air-water and oil-water interfaces in an all-atom model
13 pages, 10 figures
Physical Chemistry Chemical Physics, 19(36), 25197-25206 (2017)
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the behavior of five proteins at the air-water and oil-water interfaces by all-atom molecular dynamics. The proteins are found to get distorted when pinned to the interface. This behavior is consistent with the phenomenological way of introducing the interfaces in a coarse-grained model through a force that depends on the hydropathy indices of the residues. Proteins couple to the oil-water interface stronger than to the air-water one. They diffuse slower at the oil-water interface but do not depin from it, whereas depinning events are observed at the other interface. The reduction of the disulfide bonds slows the diffusion down.
[ { "created": "Mon, 8 Jan 2018 10:49:41 GMT", "version": "v1" } ]
2018-01-09
[ [ "Zhao", "Yani", "" ], [ "Cieplak", "Marek", "" ] ]
We study the behavior of five proteins at the air-water and oil-water interfaces by all-atom molecular dynamics. The proteins are found to get distorted when pinned to the interface. This behavior is consistent with the phenomenological way of introducing the interfaces in a coarse-grained model through a force that depends on the hydropathy indices of the residues. Proteins couple to the oil-water interface stronger than to the air-water one. They diffuse slower at the oil-water interface but do not depin from it, whereas depinning events are observed at the other interface. The reduction of the disulfide bonds slows the diffusion down.
1805.03771
Mark Kon
Yue Fan, Mark Kon and Charles DeLisi
Transcription Factor-DNA Binding Via Machine Learning Ensembles
33 pages
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ensemble methods in a machine learning (ML) framework combining predictions from five known motif/binding site exploration algorithms. For a given TF the ensemble starts with position weight matrices (PWM's) for the motif, collected from the component algorithms. Using dimension reduction, we identify significant PWM-based subspaces for analysis. Within each subspace a machine classifier is built for identifying the TF's gene (promoter) targets (Problem 1). These PWM-based subspaces form an ML-based sequence analysis tool. Problem 2 (finding binding motifs) is solved by agglomerating k-mer (string) feature PWM-based subspaces that stand out in identifying gene targets. We approach Problem 3 (binding sites) with a novel machine learning approach that uses promoter string features and ML importance scores in a classification algorithm locating binding sites across the genome. For target gene identification this method improves performance (measured by the F1 score) by about 10 percentage points over the (a) motif scanning method and (b) the coexpression-based association method. Top motif outperformed 5 component algorithms as well as two other common algorithms (BEST and DEME). For identifying individual binding sites on a benchmark cross species database (Tompa et al., 2005) we match the best performer without much human intervention. It also improved the performance on mammalian TFs. The ensemble can integrate orthogonal information from different weak learners (potentially using entirely different types of features) into a machine learner that can perform consistently better for more TFs. The TF gene target identification component (problem 1 above) is useful in constructing a transcriptional regulatory network from known TF-target associations. The ensemble is easily extendable to include more tools as well as future PWM-based information.
[ { "created": "Thu, 10 May 2018 01:13:44 GMT", "version": "v1" } ]
2018-05-11
[ [ "Fan", "Yue", "" ], [ "Kon", "Mark", "" ], [ "DeLisi", "Charles", "" ] ]
We present ensemble methods in a machine learning (ML) framework combining predictions from five known motif/binding site exploration algorithms. For a given TF the ensemble starts with position weight matrices (PWM's) for the motif, collected from the component algorithms. Using dimension reduction, we identify significant PWM-based subspaces for analysis. Within each subspace a machine classifier is built for identifying the TF's gene (promoter) targets (Problem 1). These PWM-based subspaces form an ML-based sequence analysis tool. Problem 2 (finding binding motifs) is solved by agglomerating k-mer (string) feature PWM-based subspaces that stand out in identifying gene targets. We approach Problem 3 (binding sites) with a novel machine learning approach that uses promoter string features and ML importance scores in a classification algorithm locating binding sites across the genome. For target gene identification this method improves performance (measured by the F1 score) by about 10 percentage points over the (a) motif scanning method and (b) the coexpression-based association method. Top motif outperformed 5 component algorithms as well as two other common algorithms (BEST and DEME). For identifying individual binding sites on a benchmark cross species database (Tompa et al., 2005) we match the best performer without much human intervention. It also improved the performance on mammalian TFs. The ensemble can integrate orthogonal information from different weak learners (potentially using entirely different types of features) into a machine learner that can perform consistently better for more TFs. The TF gene target identification component (problem 1 above) is useful in constructing a transcriptional regulatory network from known TF-target associations. The ensemble is easily extendable to include more tools as well as future PWM-based information.
1411.4334
Adri\`a Tauste Campo
Adri\`a Tauste Campo, Marina Martinez-Garcia, Ver\'onica N\'acher, Ranulfo Romo and Gustavo Deco
Task-driven intra- and interarea communications in primate cerebral cortex
Article and Supplementary Information
null
10.1073/pnas.1503937112
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural correlations during a cognitive task are central to study brain information processing and computation. However, they have been poorly analyzed due to the difficulty of recording simultaneous single neurons during task performance. In the present work, we quantified neural directional correlations using spike trains that were simultaneously recorded in sensory, premotor, and motor cortical areas of two monkeys during a somatosensory discrimination task. Upon modeling spike trains as binary time series, we used a nonparametric Bayesian method to estimate pairwise directional correlations between many pairs of neurons throughout different stages of the task, namely, perception, working memory, decision making, and motor report. We find that solving the task involves feedforward and feedback correlation paths linking sensory and motor areas during certain task intervals. Specifically, information is communicated by task-driven neural correlations that are significantly delayed across secondary somatosensory cortex, premotor, and motor areas when decision making takes place. Crucially, when sensory comparison is no longer requested for task performance, a major proportion of directional correlations consistently vanish across all cortical areas.
[ { "created": "Mon, 17 Nov 2014 00:12:30 GMT", "version": "v1" }, { "created": "Tue, 5 May 2015 19:41:31 GMT", "version": "v2" } ]
2016-02-17
[ [ "Campo", "Adrià Tauste", "" ], [ "Martinez-Garcia", "Marina", "" ], [ "Nácher", "Verónica", "" ], [ "Romo", "Ranulfo", "" ], [ "Deco", "Gustavo", "" ] ]
Neural correlations during a cognitive task are central to study brain information processing and computation. However, they have been poorly analyzed due to the difficulty of recording simultaneous single neurons during task performance. In the present work, we quantified neural directional correlations using spike trains that were simultaneously recorded in sensory, premotor, and motor cortical areas of two monkeys during a somatosensory discrimination task. Upon modeling spike trains as binary time series, we used a nonparametric Bayesian method to estimate pairwise directional correlations between many pairs of neurons throughout different stages of the task, namely, perception, working memory, decision making, and motor report. We find that solving the task involves feedforward and feedback correlation paths linking sensory and motor areas during certain task intervals. Specifically, information is communicated by task-driven neural correlations that are significantly delayed across secondary somatosensory cortex, premotor, and motor areas when decision making takes place. Crucially, when sensory comparison is no longer requested for task performance, a major proportion of directional correlations consistently vanish across all cortical areas.
1704.03002
Peter Schuck
Peter Schuck and Sumit K. Chaturvedi
Sedimentation of rapidly interacting multicomponent systems
18 pages
null
null
null
q-bio.QM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The biophysical analysis of dynamically formed multi-protein complexes in solution presents a formidable technical challenge. Sedimentation velocity (SV) analytical ultracentrifugation achieves strongly size-dependent hydrodynamic resolution of different size species, and can be combined with multi-component detection by exploiting different spectral properties or temporally modulated signals from photoswitchable proteins. Coexisting complexes arising from self- or hetero-associations that can be distinguished in SV allow measurement of their stoichiometry, affinity, and cooperativity. However, assemblies that are short-lived on the time-scale of sedimentation (t1/2 < 100 sec) will exhibit an as of yet unexplored pattern of sedimentation boundaries governed by coupled co-migration of the entire system. Here, we present a theory for multi-component sedimentation of rapidly interacting systems, which reveals simple underlying physical principles and offers a quantitative framework for analysis, thereby extending the dynamic range of SV for studying multi-component interactions.
[ { "created": "Mon, 10 Apr 2017 18:24:09 GMT", "version": "v1" } ]
2017-04-12
[ [ "Schuck", "Peter", "" ], [ "Chaturvedi", "Sumit K.", "" ] ]
The biophysical analysis of dynamically formed multi-protein complexes in solution presents a formidable technical challenge. Sedimentation velocity (SV) analytical ultracentrifugation achieves strongly size-dependent hydrodynamic resolution of different size species, and can be combined with multi-component detection by exploiting different spectral properties or temporally modulated signals from photoswitchable proteins. Coexisting complexes arising from self- or hetero-associations that can be distinguished in SV allow measurement of their stoichiometry, affinity, and cooperativity. However, assemblies that are short-lived on the time-scale of sedimentation (t1/2 < 100 sec) will exhibit an as of yet unexplored pattern of sedimentation boundaries governed by coupled co-migration of the entire system. Here, we present a theory for multi-component sedimentation of rapidly interacting systems, which reveals simple underlying physical principles and offers a quantitative framework for analysis, thereby extending the dynamic range of SV for studying multi-component interactions.
1701.05338
Katte Rao Toppaldoddi
Katte Rao Toppaldoddi
IRE1 alpha may be causing abnormal loss of p53 at post transcriptional level in chronic myeloid leukemia
4 pages, 3 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current treatment strategy for chronic myeloid leukemia (CML) mainly includes inhibition of tyrosine kinase activity, which has dramatically improved the prognosis of the disease but without cure. In addition some patients may become drug resistant. Thus there is still the need for other therapies to avoid resistance and if possible to cure the disease. Loss of p53 is known to play an important role in the disease progression of CML and causes drug resistance. Here I propose that in CML, inositol requiring enzyme 1 alpha (IRE1 alpha) may cause abnormal degradation of p53 mRNA resulting in inhibition of apoptosis in leukemic clonal cells, which has not been elucidated before. Hence, I propose that inhibition of endoribonuclease activity of IRE1 alpha with small molecule inhibitors may provide a novel strategy to enhance p53 function in CML leukemic clones to overcome the limitations of current treatment regimens.
[ { "created": "Thu, 19 Jan 2017 09:14:17 GMT", "version": "v1" } ]
2017-01-20
[ [ "Toppaldoddi", "Katte Rao", "" ] ]
Current treatment strategy for chronic myeloid leukemia (CML) mainly includes inhibition of tyrosine kinase activity, which has dramatically improved the prognosis of the disease but without cure. In addition some patients may become drug resistant. Thus there is still the need for other therapies to avoid resistance and if possible to cure the disease. Loss of p53 is known to play an important role in the disease progression of CML and causes drug resistance. Here I propose that in CML, inositol requiring enzyme 1 alpha (IRE1 alpha) may cause abnormal degradation of p53 mRNA resulting in inhibition of apoptosis in leukemic clonal cells, which has not been elucidated before. Hence, I propose that inhibition of endoribonuclease activity of IRE1 alpha with small molecule inhibitors may provide a novel strategy to enhance p53 function in CML leukemic clones to overcome the limitations of current treatment regimens.
1406.5461
Hal Smith
Daniel A. Korytowski and Hal L. Smith
How Nested Infection Networks in Host-Phage Communities Come To Be
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that a chemostat community of bacteria and bacteriophage in which bacteria compete for a single nutrient and for which the bipartite infection network is perfectly nested is permanent, a.k.a. uniformly persistent, provided that bacteria that are superior competitors for nutrient devote the least to defence against infection and the viruses that are most efficient at infecting hosts have the smallest host range. This confirms earlier work of Jover et al \cite{Jover} who raised the issue of whether nested infection networks are permanent. In addition, we provide sufficient conditions that a bacteria-phage community of arbitrary size with nested infection network can arise through a succession of permanent subcommunities each with a nested infection network by the successive addition of one new population.
[ { "created": "Fri, 20 Jun 2014 17:09:25 GMT", "version": "v1" } ]
2014-06-23
[ [ "Korytowski", "Daniel A.", "" ], [ "Smith", "Hal L.", "" ] ]
We show that a chemostat community of bacteria and bacteriophage in which bacteria compete for a single nutrient and for which the bipartite infection network is perfectly nested is permanent, a.k.a. uniformly persistent, provided that bacteria that are superior competitors for nutrient devote the least to defence against infection and the viruses that are most efficient at infecting hosts have the smallest host range. This confirms earlier work of Jover et al \cite{Jover} who raised the issue of whether nested infection networks are permanent. In addition, we provide sufficient conditions that a bacteria-phage community of arbitrary size with nested infection network can arise through a succession of permanent subcommunities each with a nested infection network by the successive addition of one new population.
1301.5058
Ramon Plaza
J. Francisco Leyva, Carlos Malaga, Ramon G. Plaza
The effects of nutrient chemotaxis on bacterial aggregation patterns with non-linear degenerate cross diffusion
null
null
null
null
q-bio.CB nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a reaction-diffusion-chemotaxis model for bacterial aggregation patterns on the surface of thin agar plates. It is based on the non-linear degenerate cross diffusion model proposed by Kawasaki et al. (J. of Theor. Biol. 188(2) 1997) and it includes a suitable nutrient chemotactic term compatible with such type of diffusion. High resolution numerical simulations using Graphic Processing Units (GPUs) of the new model are presented, showing that the chemotactic term enhances the velocity of propagation of the colony envelope for dense-branching morphologies. In addition, the chemotaxis seems to stabilize the formation of branches in the soft-agar, low-nutrient regime. An asymptotic estimation predicts the growth velocity of the colony envelope as a function of both the nutrient concentration and the chemotactic sensitivity. For fixed nutrient concentrations, the growth velocity is an increasing function of the chemotactic sensitivity.
[ { "created": "Tue, 22 Jan 2013 02:04:23 GMT", "version": "v1" } ]
2013-01-23
[ [ "Leyva", "J. Francisco", "" ], [ "Malaga", "Carlos", "" ], [ "Plaza", "Ramon G.", "" ] ]
This paper introduces a reaction-diffusion-chemotaxis model for bacterial aggregation patterns on the surface of thin agar plates. It is based on the non-linear degenerate cross diffusion model proposed by Kawasaki et al. (J. of Theor. Biol. 188(2) 1997) and it includes a suitable nutrient chemotactic term compatible with such type of diffusion. High resolution numerical simulations using Graphic Processing Units (GPUs) of the new model are presented, showing that the chemotactic term enhances the velocity of propagation of the colony envelope for dense-branching morphologies. In addition, the chemotaxis seems to stabilize the formation of branches in the soft-agar, low-nutrient regime. An asymptotic estimation predicts the growth velocity of the colony envelope as a function of both the nutrient concentration and the chemotactic sensitivity. For fixed nutrient concentrations, the growth velocity is an increasing function of the chemotactic sensitivity.
1903.01854
Sepideh Shamsizadeh
Sepideh Shamsizadeh, Sama Goliaei, Zahra Razaghi Moghadam
CAMIRADA: Cancer microRNA association discovery algorithm, a case study on breast cancer
null
null
null
null
q-bio.GN cs.CE
http://creativecommons.org/licenses/by/4.0/
In recent studies, non-coding RNAs known as microRNAs have been identified as biomarkers for early diagnosis and treatment of cancer, which can decrease cancer mortality. A microRNA may target hundreds or thousands of genes and a gene may regulate several microRNAs, so determining which microRNA is associated with which cancer is a big challenge. Many computational methods have been developed to detect microRNA associations with cancer, but more effort is needed to achieve higher accuracy. Increasing research has shown that the relationship between microRNAs and TFs plays a significant role in the diagnosis of cancer. Therefore, we developed a new computational framework (CAMIRADA) to identify cancer-related microRNAs based on the relationship between microRNAs and disease genes (DG) in the protein network, the functional relationships between microRNAs and Transcription Factors (TF) on the co-expression network, and the relationship between microRNAs and Differentially Expressed Genes (DEG) on the co-expression network. CAMIRADA was applied to assess breast cancer data from the HMDD and miR2Disease databases. In this study, the AUC for the top 65 microRNAs on the list was 0.95, more accurate than similar methods used to detect microRNAs associated with the cancer artery.
[ { "created": "Wed, 27 Feb 2019 19:55:03 GMT", "version": "v1" } ]
2019-03-06
[ [ "Shamsizadeh", "Sepideh", "" ], [ "Goliaei", "Sama", "" ], [ "Moghadam", "Zahra Razaghi", "" ] ]
In recent studies, non-coding RNAs known as microRNAs have been identified as biomarkers for early diagnosis and treatment of cancer, which can decrease cancer mortality. A microRNA may target hundreds or thousands of genes and a gene may regulate several microRNAs, so determining which microRNA is associated with which cancer is a big challenge. Many computational methods have been developed to detect microRNA associations with cancer, but more effort is needed to achieve higher accuracy. Increasing research has shown that the relationship between microRNAs and TFs plays a significant role in the diagnosis of cancer. Therefore, we developed a new computational framework (CAMIRADA) to identify cancer-related microRNAs based on the relationship between microRNAs and disease genes (DG) in the protein network, the functional relationships between microRNAs and Transcription Factors (TF) on the co-expression network, and the relationship between microRNAs and Differentially Expressed Genes (DEG) on the co-expression network. CAMIRADA was applied to assess breast cancer data from the HMDD and miR2Disease databases. In this study, the AUC for the top 65 microRNAs on the list was 0.95, more accurate than similar methods used to detect microRNAs associated with the cancer artery.
1305.6666
Feng Wang
Marawan Ahmed, Maiada M. Sadek, Khaled A. Abouzid and Feng Wang
In Silico Design, Extended Molecular Dynamic Simulations and Binding Energy Calculations for a New Series of Dually Acting Inhibitors against EGFR and HER2
37 pages, 5 tables and 6 figures
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Starting from the lead structure we identified in our previous works, we extend our understanding of its potential inhibitory effect against both EGFR and HER2 receptors. Herein, using extended molecular dynamics simulations and different scoring techniques, we provide plausible explanations for the observed inhibitory effect. We also compare the binding mechanism, in addition to the dynamics of binding, with two other approved inhibitors against EGFR (Lapatinib) and HER2 (SYR). Based on this information, we also design and screen in silico new potential inhibitors sharing the same scaffold as the lead structure. We chose the best-scoring inhibitor for additional in silico investigation against both the wild-type and T790M mutant strain of EGFR. It seems that a certain substitution pattern guarantees binding to the conserved water molecule commonly observed in kinase crystal structures. The new inhibitors also seem to form a stable interaction with the mutant strain as a direct consequence of their enhanced ability to form additional interactions with binding-site residues.
[ { "created": "Wed, 29 May 2013 00:33:49 GMT", "version": "v1" } ]
2013-05-30
[ [ "Ahmed", "Marawan", "" ], [ "Sadek", "Maiada M.", "" ], [ "Abouzid", "Khaled A.", "" ], [ "Wang", "Feng", "" ] ]
Starting from the lead structure we identified in our previous works, we extend our understanding of its potential inhibitory effect against both EGFR and HER2 receptors. Herein, using extended molecular dynamics simulations and different scoring techniques, we provide plausible explanations for the observed inhibitory effect. We also compare the binding mechanism, in addition to the dynamics of binding, with two other approved inhibitors against EGFR (Lapatinib) and HER2 (SYR). Based on this information, we also design and screen in silico new potential inhibitors sharing the same scaffold as the lead structure. We chose the best-scoring inhibitor for additional in silico investigation against both the wild-type and T790M mutant strain of EGFR. It seems that a certain substitution pattern guarantees binding to the conserved water molecule commonly observed in kinase crystal structures. The new inhibitors also seem to form a stable interaction with the mutant strain as a direct consequence of their enhanced ability to form additional interactions with binding-site residues.
q-bio/0411052
Iaroslav Ispolatov
I.Ispolatov, P.L.Krapivsky, A.Yuryev
Duplication-divergence model of protein interaction network
8 pages, 13 figures
Phys. Rev. E 71, 061911 (2005)
10.1103/PhysRevE.71.061911
null
q-bio.MN cond-mat.dis-nn q-bio.BM
null
We show that the protein-protein interaction networks can be surprisingly well described by a very simple evolution model of duplication and divergence. The model exhibits a remarkably rich behavior depending on a single parameter, the probability to retain a duplicated link during divergence. When this parameter is large, the network growth is not self-averaging and an average vertex degree increases algebraically. The lack of self-averaging results in a great diversity of networks grown out of the same initial condition. For small values of the link retention probability, the growth is self-averaging, the average degree increases very slowly or tends to a constant, and a degree distribution has a power-law tail.
[ { "created": "Tue, 30 Nov 2004 01:41:31 GMT", "version": "v1" } ]
2009-11-10
[ [ "Ispolatov", "I.", "" ], [ "Krapivsky", "P. L.", "" ], [ "Yuryev", "A.", "" ] ]
We show that the protein-protein interaction networks can be surprisingly well described by a very simple evolution model of duplication and divergence. The model exhibits a remarkably rich behavior depending on a single parameter, the probability to retain a duplicated link during divergence. When this parameter is large, the network growth is not self-averaging and an average vertex degree increases algebraically. The lack of self-averaging results in a great diversity of networks grown out of the same initial condition. For small values of the link retention probability, the growth is self-averaging, the average degree increases very slowly or tends to a constant, and a degree distribution has a power-law tail.
1308.6245
Wentian Li
Wentian Li, Jan Freudenberg, Young Ju Suh, Yaning Yang
Using Volcano Plots and Regularized-Chi Statistics in Genetic Association Studies
5 figures
Computational Biology and Chemistry, 48: 77-83 (2014)
10.1016/j.compbiolchem.2013.02.003
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Labor-intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds-ratio (OR) and p-value, may rank variants in a different order. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signals from variants with lower minor-allele frequencies than the chi-square test statistic. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes.
[ { "created": "Wed, 28 Aug 2013 18:40:58 GMT", "version": "v1" } ]
2017-03-03
[ [ "Li", "Wentian", "" ], [ "Freudenberg", "Jan", "" ], [ "Suh", "Young Ju", "" ], [ "Yang", "Yaning", "" ] ]
Labor-intensive experiments are typically required to identify the causal disease variants from a list of disease-associated variants in the genome. For designing such experiments, candidate variants are ranked by their strength of genetic association with the disease. However, the two commonly used measures of genetic association, the odds-ratio (OR) and p-value, may rank variants in a different order. To integrate these two measures into a single analysis, here we transfer the volcano plot methodology from gene expression analysis to genetic association studies. In its original setting, volcano plots are scatter plots of fold-change and t-test statistic (or -log of the p-value), with the latter being more sensitive to sample size. In genetic association studies, the OR and Pearson's chi-square statistic (or equivalently its square root, chi; or the standardized log(OR)) can be analogously used in a volcano plot, allowing for their visual inspection. Moreover, the geometric interpretation of these plots leads to an intuitive method for filtering results by a combination of both OR and chi-square statistic, which we term "regularized-chi". This method selects associated markers by a smooth curve in the volcano plot instead of the right-angled lines which correspond to independent cutoffs for OR and chi-square statistic. The regularized-chi incorporates relatively more signals from variants with lower minor-allele frequencies than the chi-square test statistic. As rare variants tend to have stronger functional effects, regularized-chi is better suited to the task of prioritization of candidate genes.
2305.04345
Colin Twomey
Colin R. Twomey, David H. Brainard, Joshua B. Plotkin
Historical constraints on the evolution of efficient color naming
13 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Color naming in natural languages is not arbitrary: it reflects efficient partitions of perceptual color space modulated by the relative needs to communicate about different colors. These psychophysical and communicative constraints help explain why languages around the world have remarkably similar, but not identical, mappings of colors to color terms. Languages converge on a small set of efficient representations. But languages also evolve, and the number of terms in a color vocabulary may change over time. Here we show that history, i.e. the existence of an antecedent color vocabulary, acts as a non-adaptive constraint that biases the choice of efficient solution as a language transitions from a vocabulary of size n to n+1 terms. Moreover, as vocabularies evolve to include more terms they explore a smaller fraction of all possible efficient vocabularies compared to equally-sized vocabularies constructed de novo. This path dependence on the cultural evolution of color naming presents an opportunity. Historical constraints can be used to reconstruct ancestral color vocabularies, allowing us to answer long-standing questions about the evolutionary sequences of color words, and enabling us to draw inferences from phylogenetic patterns of language change.
[ { "created": "Sun, 7 May 2023 17:48:55 GMT", "version": "v1" } ]
2023-05-09
[ [ "Twomey", "Colin R.", "" ], [ "Brainard", "David H.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Color naming in natural languages is not arbitrary: it reflects efficient partitions of perceptual color space modulated by the relative needs to communicate about different colors. These psychophysical and communicative constraints help explain why languages around the world have remarkably similar, but not identical, mappings of colors to color terms. Languages converge on a small set of efficient representations. But languages also evolve, and the number of terms in a color vocabulary may change over time. Here we show that history, i.e. the existence of an antecedent color vocabulary, acts as a non-adaptive constraint that biases the choice of efficient solution as a language transitions from a vocabulary of size n to n+1 terms. Moreover, as vocabularies evolve to include more terms they explore a smaller fraction of all possible efficient vocabularies compared to equally-sized vocabularies constructed de novo. This path dependence on the cultural evolution of color naming presents an opportunity. Historical constraints can be used to reconstruct ancestral color vocabularies, allowing us to answer long-standing questions about the evolutionary sequences of color words, and enabling us to draw inferences from phylogenetic patterns of language change.
1306.1215
Benjamin Good
Benjamin H. Good and Aleksandra M. Walczak and Richard A. Neher and Michael M. Desai
Interference limits resolution of selection pressures from linked neutral diversity
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pervasive natural selection can strongly influence observed patterns of genetic variation, but these effects remain poorly understood when multiple selected variants segregate in nearby regions of the genome. Classical population genetics fails to account for interference between linked mutations, which grows increasingly severe as the density of selected polymorphisms increases. Here, we describe a simple limit that emerges when interference is common, in which the fitness effects of individual mutations play a relatively minor role. Instead, molecular evolution is determined by the variance in fitness within the population, defined over an effectively asexual segment of the genome (a ``linkage block''). We exploit this insensitivity in a new ``coarse-grained'' coalescent framework, which approximates the effects of many weakly selected mutations with a smaller number of strongly selected mutations with the same variance in fitness. This approximation generates accurate and efficient predictions for the genetic diversity that cannot be summarized by a simple reduction in effective population size. However, these results suggest a fundamental limit on our ability to resolve individual selection pressures from contemporary sequence data alone, since a wide range of parameters yield nearly identical patterns of sequence variability.
[ { "created": "Wed, 5 Jun 2013 19:38:32 GMT", "version": "v1" } ]
2013-06-06
[ [ "Good", "Benjamin H.", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Neher", "Richard A.", "" ], [ "Desai", "Michael M.", "" ] ]
Pervasive natural selection can strongly influence observed patterns of genetic variation, but these effects remain poorly understood when multiple selected variants segregate in nearby regions of the genome. Classical population genetics fails to account for interference between linked mutations, which grows increasingly severe as the density of selected polymorphisms increases. Here, we describe a simple limit that emerges when interference is common, in which the fitness effects of individual mutations play a relatively minor role. Instead, molecular evolution is determined by the variance in fitness within the population, defined over an effectively asexual segment of the genome (a ``linkage block''). We exploit this insensitivity in a new ``coarse-grained'' coalescent framework, which approximates the effects of many weakly selected mutations with a smaller number of strongly selected mutations with the same variance in fitness. This approximation generates accurate and efficient predictions for the genetic diversity that cannot be summarized by a simple reduction in effective population size. However, these results suggest a fundamental limit on our ability to resolve individual selection pressures from contemporary sequence data alone, since a wide range of parameters yield nearly identical patterns of sequence variability.
1403.6384
Fabrizio De Vico Fallani
Daria La Rocca, Patrizio Campisi, Balazs Vegso, Peter Cserti, Gyorgy Kozmann, Fabio Babiloni, Fabrizio De Vico Fallani
Human brain distinctiveness based on EEG spectral coherence connectivity
Key words: EEG, Resting state, Biometrics, Spectral coherence, Match score fusion
null
10.1109/TBME.2014.2317881
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of EEG biometrics for the purpose of automatic people recognition has received increasing attention in recent years. Most current analyses rely on the extraction of features characterizing the activity of single brain regions, like power-spectrum estimates, thus neglecting possible temporal dependencies between the generated EEG signals. However, important physiological information can be extracted from the way different brain regions are functionally coupled. In this study, we propose a novel approach that fuses spectral coherence-based connectivity between different brain regions as a possibly viable biometric feature. The proposed approach is tested on a large dataset of subjects (N=108) during eyes-closed (EC) and eyes-open (EO) resting-state conditions. The obtained recognition performances show that using brain connectivity leads to higher distinctiveness with respect to power-spectrum measurements in both experimental conditions. Notably, 100% recognition accuracy is obtained in EC and EO when integrating functional connectivity between regions in the frontal lobe, while a lower 97.41% is obtained in EC (96.26% in EO) when fusing power-spectrum information from centro-parietal regions. Taken together, these results suggest that functional connectivity patterns represent effective features for improving EEG-based biometric systems.
[ { "created": "Sun, 23 Mar 2014 21:22:15 GMT", "version": "v1" } ]
2014-09-10
[ [ "La Rocca", "Daria", "" ], [ "Campisi", "Patrizio", "" ], [ "Vegso", "Balazs", "" ], [ "Cserti", "Peter", "" ], [ "Kozmann", "Gyorgy", "" ], [ "Babiloni", "Fabio", "" ], [ "Fallani", "Fabrizio De Vico", "" ] ]
The use of EEG biometrics for the purpose of automatic people recognition has received increasing attention in recent years. Most current analyses rely on the extraction of features characterizing the activity of single brain regions, like power-spectrum estimates, thus neglecting possible temporal dependencies between the generated EEG signals. However, important physiological information can be extracted from the way different brain regions are functionally coupled. In this study, we propose a novel approach that fuses spectral coherence-based connectivity between different brain regions as a possibly viable biometric feature. The proposed approach is tested on a large dataset of subjects (N=108) during eyes-closed (EC) and eyes-open (EO) resting-state conditions. The obtained recognition performances show that using brain connectivity leads to higher distinctiveness with respect to power-spectrum measurements in both experimental conditions. Notably, 100% recognition accuracy is obtained in EC and EO when integrating functional connectivity between regions in the frontal lobe, while a lower 97.41% is obtained in EC (96.26% in EO) when fusing power-spectrum information from centro-parietal regions. Taken together, these results suggest that functional connectivity patterns represent effective features for improving EEG-based biometric systems.
1802.06004
Haiming Tang
Haiming Tang, Christopher J Mungall, Huaiyu Mi, Paul D Thomas
GOTaxon: Representing the evolution of biological functions in the Gene Ontology
23 pages, 2 figures, 6 tables
null
null
null
q-bio.PE q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Gene Ontology aims to define the universe of functions known for gene products, at the molecular, cellular and organism levels. While the ontology is designed to cover all aspects of biology in a "species independent manner", the fact remains that many if not most biological functions are restricted in their taxonomic range. This is simply because functions evolve, i.e. like other biological characteristics they are gained and lost over evolutionary time. Here we introduce a general method of representing the evolutionary gain and loss of biological functions within the Gene Ontology. We then apply a variety of techniques, including manual curation, logical reasoning over the ontology structure, and previously published "taxon constraints" to assign evolutionary gain and loss events to the majority of terms in the GO. These gain and loss events now almost triple the number of terms with taxon constraints, and currently cover a total of 76% of GO terms, including 40% of molecular function terms, 78% of cellular component terms, and 89% of biological process terms. Database URL: GOTaxon is freely available at https://github.com/haimingt/GOTaxonConstraint
[ { "created": "Fri, 16 Feb 2018 16:11:29 GMT", "version": "v1" } ]
2018-02-19
[ [ "Tang", "Haiming", "" ], [ "Mungall", "Christopher J", "" ], [ "Mi", "Huaiyu", "" ], [ "Thomas", "Paul D", "" ] ]
The Gene Ontology aims to define the universe of functions known for gene products, at the molecular, cellular and organism levels. While the ontology is designed to cover all aspects of biology in a "species independent manner", the fact remains that many if not most biological functions are restricted in their taxonomic range. This is simply because functions evolve, i.e. like other biological characteristics they are gained and lost over evolutionary time. Here we introduce a general method of representing the evolutionary gain and loss of biological functions within the Gene Ontology. We then apply a variety of techniques, including manual curation, logical reasoning over the ontology structure, and previously published "taxon constraints" to assign evolutionary gain and loss events to the majority of terms in the GO. These gain and loss events now almost triple the number of terms with taxon constraints, and currently cover a total of 76% of GO terms, including 40% of molecular function terms, 78% of cellular component terms, and 89% of biological process terms. Database URL: GOTaxon is freely available at https://github.com/haimingt/GOTaxonConstraint
2005.13519
Henrik Hult
Henrik Hult and Martina Favero
Estimates of the proportion of SARS-CoV-2 infected individuals in Sweden
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a Bayesian SEIR model is studied to estimate the proportion of the population infected with SARS-CoV-2, the virus responsible for COVID-19. To capture heterogeneity in the population and the effect of interventions to reduce the rate of epidemic spread, the model uses a time-varying contact rate, whose logarithm has a Gaussian process prior. A Poisson point process is used to model the occurrence of deaths due to COVID-19, and the model is calibrated using data of daily death counts in combination with a snapshot of the proportion of individuals with an active infection, performed in Stockholm in late March. The methodology is applied to regions in Sweden. The results show that the estimated proportion of the population who has been infected is around 13.5% in Stockholm, by 2020-05-15, and ranges between 2.5% - 15.6% in the other investigated regions. In Stockholm, where the peak of daily death counts is likely behind us, parameter uncertainty does not heavily influence the expected daily number of deaths, nor the expected cumulative number of deaths. It does, however, impact the estimated cumulative number of infected individuals. In the other regions, where random sampling of the number of active infections is not available, parameter sharing is used to improve estimates, but the parameter uncertainty remains substantial.
[ { "created": "Mon, 25 May 2020 07:13:33 GMT", "version": "v1" } ]
2020-05-28
[ [ "Hult", "Henrik", "" ], [ "Favero", "Martina", "" ] ]
In this paper, a Bayesian SEIR model is studied to estimate the proportion of the population infected with SARS-CoV-2, the virus responsible for COVID-19. To capture heterogeneity in the population and the effect of interventions to reduce the rate of epidemic spread, the model uses a time-varying contact rate, whose logarithm has a Gaussian process prior. A Poisson point process is used to model the occurrence of deaths due to COVID-19, and the model is calibrated using data of daily death counts in combination with a snapshot of the proportion of individuals with an active infection, performed in Stockholm in late March. The methodology is applied to regions in Sweden. The results show that the estimated proportion of the population who has been infected is around 13.5% in Stockholm, by 2020-05-15, and ranges between 2.5% - 15.6% in the other investigated regions. In Stockholm, where the peak of daily death counts is likely behind us, parameter uncertainty does not heavily influence the expected daily number of deaths, nor the expected cumulative number of deaths. It does, however, impact the estimated cumulative number of infected individuals. In the other regions, where random sampling of the number of active infections is not available, parameter sharing is used to improve estimates, but the parameter uncertainty remains substantial.
1407.6525
Christian Albers
Christian Albers, Maren Westkott, Klaus Pawelzik
Learning of Precise Spike Times with Membrane Potential Dependent Synaptic Plasticity
24 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise.
[ { "created": "Thu, 24 Jul 2014 10:58:55 GMT", "version": "v1" }, { "created": "Mon, 23 Feb 2015 14:34:51 GMT", "version": "v2" } ]
2015-02-24
[ [ "Albers", "Christian", "" ], [ "Westkott", "Maren", "" ], [ "Pawelzik", "Klaus", "" ] ]
Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike-timing-dependent plasticity for inhibitory synapses, as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise.
0810.1544
Jonathan Newman
J.P. Newman and R.J. Butera
Mechanism, dynamics, and biological existence of multistability in a large class of bursting neurons
24 pages, 8 figures
J.P. Newman and R.J. Butera. Mechanism, dynamics, and biological existence of multistability in a large class of bursting neurons. Chaos 20, 2010
10.1063/1.341399
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multistability, the coexistence of multiple attractors in a dynamical system, is explored in bursting nerve cells. A modeling study is performed to show that a large class of bursting systems, as defined by a shared topology when represented as dynamical systems, is inherently suited to support multistability. We derive the bifurcation structure and parametric trends leading to multistability in these systems. Evidence for the existence of multirhythmic behavior in neurons of the aquatic mollusc Aplysia californica that is consistent with our proposed mechanism is presented. Although these experimental results are preliminary, they indicate that single neurons may be capable of dynamically storing information for longer time scales than typically attributed to nonsynaptic mechanisms.
[ { "created": "Wed, 8 Oct 2008 21:29:06 GMT", "version": "v1" }, { "created": "Sun, 19 Jul 2009 00:33:42 GMT", "version": "v2" }, { "created": "Mon, 28 Jun 2010 03:10:41 GMT", "version": "v3" } ]
2010-06-29
[ [ "Newman", "J. P.", "" ], [ "Butera", "R. J.", "" ] ]
Multistability, the coexistence of multiple attractors in a dynamical system, is explored in bursting nerve cells. A modeling study is performed to show that a large class of bursting systems, as defined by a shared topology when represented as dynamical systems, is inherently suited to support multistability. We derive the bifurcation structure and parametric trends leading to multistability in these systems. Evidence for the existence of multirhythmic behavior in neurons of the aquatic mollusc Aplysia californica that is consistent with our proposed mechanism is presented. Although these experimental results are preliminary, they indicate that single neurons may be capable of dynamically storing information for longer time scales than typically attributed to nonsynaptic mechanisms.
q-bio/0609024
Jean-Philippe Vert
Pierre Mah\'e (CB), Jean-Philippe Vert (CB)
Graph kernels based on tree patterns for molecules
null
null
null
null
q-bio.QM
null
Motivated by chemical applications, we revisit and extend a family of positive definite kernels for graphs based on the detection of common subtrees, initially proposed by Ramon et al. (2003). We propose new kernels with a parameter to control the complexity of the subtrees used as features to represent the graphs. This parameter allows one to interpolate smoothly between classical graph kernels based on the count of common walks, on the one hand, and kernels that emphasize the detection of large common subtrees, on the other hand. We also propose two modular extensions to this formulation. The first extension increases the number of subtrees that define the feature space, and the second one removes noisy features from the graph representations. We experimentally validate these new kernels on binary classification tasks consisting of discriminating toxic and non-toxic molecules with support vector machines.
[ { "created": "Fri, 15 Sep 2006 15:22:53 GMT", "version": "v1" } ]
2016-08-16
[ [ "Mahé", "Pierre", "", "CB" ], [ "Vert", "Jean-Philippe", "", "CB" ] ]
Motivated by chemical applications, we revisit and extend a family of positive definite kernels for graphs based on the detection of common subtrees, initially proposed by Ramon et al. (2003). We propose new kernels with a parameter to control the complexity of the subtrees used as features to represent the graphs. This parameter allows one to interpolate smoothly between classical graph kernels based on the count of common walks, on the one hand, and kernels that emphasize the detection of large common subtrees, on the other hand. We also propose two modular extensions to this formulation. The first extension increases the number of subtrees that define the feature space, and the second one removes noisy features from the graph representations. We experimentally validate these new kernels on binary classification tasks consisting of discriminating toxic and non-toxic molecules with support vector machines.
1705.06817
Varun Ojha
Varun Kumar Ojha, Konrad Jackowski, Ajith Abraham, V\'aclav Sn\'a\v{s}el
Dimensionality reduction, and function approximation of poly(lactic-co-glycolic acid) micro- and nanoparticle dissolution rate
null
International Journal of Nanomedicine 2015,10
10.2147/IJN.S71847
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prediction of poly(lactic-co-glycolic acid) (PLGA) micro- and nanoparticles' dissolution rates plays a significant role in the pharmaceutical and medical industries. The prediction of the PLGA dissolution rate is crucial for drug manufacturing. Therefore, a model that predicts the PLGA dissolution rate could be beneficial. PLGA dissolution is influenced by numerous factors (features), and counting the known features leads to a dataset with 300 features. This large number of features and the high redundancy within the dataset make the prediction task very difficult and inaccurate. In this study, dimensionality reduction techniques were applied in order to simplify the task and eliminate irrelevant and redundant features. A heterogeneous pool of several regression algorithms was independently tested and evaluated. In addition, several ensemble methods were tested in order to improve the accuracy of prediction. The empirical results revealed that the proposed evolutionary weighted ensemble method offered the lowest margin of error and significantly outperformed the individual algorithms and the other ensemble techniques.
[ { "created": "Tue, 16 May 2017 07:36:47 GMT", "version": "v1" } ]
2017-05-22
[ [ "Ojha", "Varun Kumar", "" ], [ "Jackowski", "Konrad", "" ], [ "Abraham", "Ajith", "" ], [ "Snášel", "Václav", "" ] ]
Prediction of poly(lactic-co-glycolic acid) (PLGA) micro- and nanoparticles' dissolution rates plays a significant role in the pharmaceutical and medical industries. The prediction of the PLGA dissolution rate is crucial for drug manufacturing. Therefore, a model that predicts the PLGA dissolution rate could be beneficial. PLGA dissolution is influenced by numerous factors (features), and counting the known features leads to a dataset with 300 features. This large number of features and the high redundancy within the dataset make the prediction task very difficult and inaccurate. In this study, dimensionality reduction techniques were applied in order to simplify the task and eliminate irrelevant and redundant features. A heterogeneous pool of several regression algorithms was independently tested and evaluated. In addition, several ensemble methods were tested in order to improve the accuracy of prediction. The empirical results revealed that the proposed evolutionary weighted ensemble method offered the lowest margin of error and significantly outperformed the individual algorithms and the other ensemble techniques.
2403.11774
Thomas Michelitsch
Teo Granger, Thomas M. Michelitsch, Michael Bestehorn, Alejandro P. Riascos, Bernard A. Collet
Stochastic compartment model with mortality and its application to epidemic spreading in complex networks
31 pages, 13 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study epidemic spreading in complex networks by a multiple random walker approach. Each walker performs an independent simple Markovian random walk on a complex undirected (ergodic) random graph, where we focus on Barab\'asi-Albert (BA), Erd\"os-R\'enyi (ER) and Watts-Strogatz (WS) types. Both walkers and nodes can be either susceptible (S) or infected and infectious (I), representing their states of health. Susceptible nodes may be infected by visits of infected walkers, and susceptible walkers may be infected by visiting infected nodes. No direct transmission of the disease among walkers (or among nodes) is possible. This model mimics a large class of diseases, such as Dengue and Malaria, with transmission of the disease via vectors (mosquitoes). Infected walkers may die during the time span of their infection, introducing an additional compartment D of dead walkers. Infected nodes never die and always recover from their infection after a random finite time. We derive stochastic evolution equations for the mean-field compartmental populations with mortality of walkers and delayed transitions among the compartments. From linear stability analysis, we derive the basic reproduction numbers $R_M$ and $R_0$ with and without mortality, respectively, and prove that $R_M < R_0$. For $R_M, R_0 > 1$ the healthy state is unstable, whereas for zero mortality a stable endemic equilibrium exists (independent of the initial conditions), which we obtain explicitly. We observe that the solutions of the random walk simulations in the considered networks agree well with the mean-field solutions for strongly connected graph topologies, but less well for weakly connected structures and for diseases with high mortality.
[ { "created": "Mon, 18 Mar 2024 13:31:58 GMT", "version": "v1" } ]
2024-03-19
[ [ "Granger", "Teo", "" ], [ "Michelitsch", "Thomas M.", "" ], [ "Bestehorn", "Michael", "" ], [ "Riascos", "Alejandro P.", "" ], [ "Collet", "Bernard A.", "" ] ]
We study epidemic spreading in complex networks by a multiple random walker approach. Each walker performs an independent simple Markovian random walk on a complex undirected (ergodic) random graph, where we focus on Barab\'asi-Albert (BA), Erd\"os-R\'enyi (ER) and Watts-Strogatz (WS) types. Both walkers and nodes can be either susceptible (S) or infected and infectious (I), representing their states of health. Susceptible nodes may be infected by visits of infected walkers, and susceptible walkers may be infected by visiting infected nodes. No direct transmission of the disease among walkers (or among nodes) is possible. This model mimics a large class of diseases, such as Dengue and Malaria, with transmission of the disease via vectors (mosquitoes). Infected walkers may die during the time span of their infection, introducing an additional compartment D of dead walkers. Infected nodes never die and always recover from their infection after a random finite time. We derive stochastic evolution equations for the mean-field compartmental populations with mortality of walkers and delayed transitions among the compartments. From linear stability analysis, we derive the basic reproduction numbers $R_M$ and $R_0$ with and without mortality, respectively, and prove that $R_M < R_0$. For $R_M, R_0 > 1$ the healthy state is unstable, whereas for zero mortality a stable endemic equilibrium exists (independent of the initial conditions), which we obtain explicitly. We observe that the solutions of the random walk simulations in the considered networks agree well with the mean-field solutions for strongly connected graph topologies, but less well for weakly connected structures and for diseases with high mortality.
2111.09118
Elaheh Hatami Majoumerd
Elaheh Hatamimajoumerd, Alireza Talebpour
The Neural Correlates of Image Texture in the Human Vision Using Magnetoencephalography
null
null
null
null
q-bio.NC cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Undoubtedly, the textural properties of an image are among the most important features in object recognition tasks, in both human and computer vision applications. Here, we investigated the neural signatures of four well-known statistical texture features, namely contrast, homogeneity, energy, and correlation, computed from the gray-level co-occurrence matrix (GLCM) of the images viewed by the participants during magnetoencephalography (MEG) data collection. To trace these features in the human visual system, we used multivariate pattern analysis (MVPA), trained a linear support vector machine (SVM) classifier on every timepoint of the MEG data representing brain activity, and compared the result with the textural descriptors of the images using the Spearman correlation. The results of this study demonstrate a hierarchical structure in the processing of these four texture descriptors in the human brain, in the order of contrast, homogeneity, energy, and correlation. Additionally, we found that energy, which carries a broad texture property of the images, shows a more sustained, statistically meaningful correlation with brain activity over the course of time.
[ { "created": "Tue, 16 Nov 2021 01:09:51 GMT", "version": "v1" } ]
2021-11-18
[ [ "Hatamimajoumerd", "Elaheh", "" ], [ "Talebpour", "Alireza", "" ] ]
Undoubtedly, the textural properties of an image are among the most important features in object recognition tasks, in both human and computer vision applications. Here, we investigated the neural signatures of four well-known statistical texture features, namely contrast, homogeneity, energy, and correlation, computed from the gray-level co-occurrence matrix (GLCM) of the images viewed by the participants during magnetoencephalography (MEG) data collection. To trace these features in the human visual system, we used multivariate pattern analysis (MVPA), trained a linear support vector machine (SVM) classifier on every timepoint of the MEG data representing brain activity, and compared the result with the textural descriptors of the images using the Spearman correlation. The results of this study demonstrate a hierarchical structure in the processing of these four texture descriptors in the human brain, in the order of contrast, homogeneity, energy, and correlation. Additionally, we found that energy, which carries a broad texture property of the images, shows a more sustained, statistically meaningful correlation with brain activity over the course of time.
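The four GLCM descriptors named in the abstract have standard closed forms (the Haralick definitions); the sketch below, which is an illustration rather than the authors' code, computes them from a normalized co-occurrence matrix for a single pixel offset:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability distribution."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast, homogeneity, energy, correlation in their standard forms."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "energy": (P ** 2).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j),
    }

# small 4-level test image, horizontal offset (dx=1, dy=0)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = glcm_features(glcm(img, levels=4))
```

For this image the 12 horizontal pixel pairs give contrast 7/12 and energy 24/144, which is a quick way to sanity-check an implementation.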
2102.13600
Benjamin F. Maier
Benjamin F. Maier, Angelique Burdinski, Annika H. Rose, Frank Schlosser, David Hinrichs, Cornelia Betsch, Lars Korn, Philipp Sprengholz, Michael Meyer-Hermann, Tanmay Mitra, Karl Lauterbach, Dirk Brockmann
Potential benefits of delaying the second mRNA COVID-19 vaccine dose
22 pages, 9 figures, 10 tables
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vaccination against COVID-19 with the recently approved mRNA vaccines BNT162b2 (BioNTech/Pfizer) and mRNA-1273 (Moderna) is currently underway in a large number of countries. However, high incidence rates and rapidly spreading SARS-CoV-2 variants are concerning. In combination with acute supply deficits in Europe in early 2021, the question arises of whether stretching the vaccine, for instance by delaying the second dose, can make a significant contribution to preventing deaths, despite associated risks such as lower vaccine efficacy, the potential emergence of escape mutants, enhancement, waning immunity, reduced social acceptance of off-label vaccination, and liability shifts. A quantitative epidemiological assessment of risks and benefits of non-standard vaccination protocols remains elusive. To clarify the situation and to provide a quantitative epidemiological foundation we develop a stochastic epidemiological model that integrates specific vaccine rollout protocols into a risk-group structured infectious disease dynamical model. Using the situation and conditions in Germany as a reference system, we show that delaying the second vaccine dose is expected to prevent deaths in the four to five digit range, should the incidence resurge. We show that this considerable public health benefit relies on the fact that both mRNA vaccines provide substantial protection against severe COVID-19 and death beginning 12 to 14 days after the first dose. The benefits of protocol change are attenuated should vaccine compliance decrease substantially. To quantify the impact of protocol change on vaccination adherence we performed a large-scale online survey. We find that, in Germany, changing vaccination protocols may lead to small reductions in vaccination intention. In sum, we therefore expect the benefits of a strategy change to remain substantial and stable.
[ { "created": "Fri, 26 Feb 2021 17:16:18 GMT", "version": "v1" } ]
2021-03-01
[ [ "Maier", "Benjamin F.", "" ], [ "Burdinski", "Angelique", "" ], [ "Rose", "Annika H.", "" ], [ "Schlosser", "Frank", "" ], [ "Hinrichs", "David", "" ], [ "Betsch", "Cornelia", "" ], [ "Korn", "Lars", "" ], [ "Sprengholz", "Philipp", "" ], [ "Meyer-Hermann", "Michael", "" ], [ "Mitra", "Tanmay", "" ], [ "Lauterbach", "Karl", "" ], [ "Brockmann", "Dirk", "" ] ]
Vaccination against COVID-19 with the recently approved mRNA vaccines BNT162b2 (BioNTech/Pfizer) and mRNA-1273 (Moderna) is currently underway in a large number of countries. However, high incidence rates and rapidly spreading SARS-CoV-2 variants are concerning. In combination with acute supply deficits in Europe in early 2021, the question arises of whether stretching the vaccine, for instance by delaying the second dose, can make a significant contribution to preventing deaths, despite associated risks such as lower vaccine efficacy, the potential emergence of escape mutants, enhancement, waning immunity, reduced social acceptance of off-label vaccination, and liability shifts. A quantitative epidemiological assessment of risks and benefits of non-standard vaccination protocols remains elusive. To clarify the situation and to provide a quantitative epidemiological foundation we develop a stochastic epidemiological model that integrates specific vaccine rollout protocols into a risk-group structured infectious disease dynamical model. Using the situation and conditions in Germany as a reference system, we show that delaying the second vaccine dose is expected to prevent deaths in the four to five digit range, should the incidence resurge. We show that this considerable public health benefit relies on the fact that both mRNA vaccines provide substantial protection against severe COVID-19 and death beginning 12 to 14 days after the first dose. The benefits of protocol change are attenuated should vaccine compliance decrease substantially. To quantify the impact of protocol change on vaccination adherence we performed a large-scale online survey. We find that, in Germany, changing vaccination protocols may lead to small reductions in vaccination intention. In sum, we therefore expect the benefits of a strategy change to remain substantial and stable.
2404.10806
Kim H. Parker Professor
Kim H. Parker, Alun D. Hughes
The theoretical basis of reservoir pressure in arteries
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The separation of measured arterial pressure into a reservoir pressure and an excess pressure was introduced nearly 20 years ago as a heuristic hypothesis. We demonstrate that a two-time asymptotic analysis of the 1-D conservation equations in each artery, coupled with the separation of the smaller arteries into inviscid and resistance arteries based on their resistance coefficients, results, for the first time, in a formal derivation of the reservoir pressure. The key to the two-time analysis is the existence of a fast time associated with the propagation of waves through the arteries and a slow time associated with the convective velocity of the blood. The ratio between these two time scales is given by the Mach number: the ratio of a characteristic convective velocity to a characteristic wave speed. If the Mach number is small, a formal asymptotic analysis can be carried out which is accurate to the order of the square of the Mach number. The slow-time conservation equations involve a resistance coefficient that models the effect of viscosity on the convective velocity. On the basis of this resistance coefficient, we separate the arteries into the larger inviscid arteries, where the coefficient is negligible, and the smaller resistance arteries, where it is not negligible. The slow-time pressure in the inviscid arteries is shown to be spatially uniform but varying in time. We define this pressure as the reservoir pressure. Dynamic analysis using mass conservation in the inviscid arteries shows that the reservoir pressure accounts for the storage of potential energy by the distension of the elastic inviscid arteries during early systole and its release during late systole and diastole. This analysis thus provides a formal derivation of the reservoir pressure and its physical meaning.
[ { "created": "Tue, 16 Apr 2024 13:25:44 GMT", "version": "v1" } ]
2024-04-18
[ [ "Parker", "Kim H.", "" ], [ "Hughes", "Alun D.", "" ] ]
The separation of measured arterial pressure into a reservoir pressure and an excess pressure was introduced nearly 20 years ago as a heuristic hypothesis. We demonstrate that a two-time asymptotic analysis of the 1-D conservation equations in each artery, coupled with the separation of the smaller arteries into inviscid and resistance arteries based on their resistance coefficients, results, for the first time, in a formal derivation of the reservoir pressure. The key to the two-time analysis is the existence of a fast time associated with the propagation of waves through the arteries and a slow time associated with the convective velocity of the blood. The ratio between these two time scales is given by the Mach number: the ratio of a characteristic convective velocity to a characteristic wave speed. If the Mach number is small, a formal asymptotic analysis can be carried out which is accurate to the order of the square of the Mach number. The slow-time conservation equations involve a resistance coefficient that models the effect of viscosity on the convective velocity. On the basis of this resistance coefficient, we separate the arteries into the larger inviscid arteries, where the coefficient is negligible, and the smaller resistance arteries, where it is not negligible. The slow-time pressure in the inviscid arteries is shown to be spatially uniform but varying in time. We define this pressure as the reservoir pressure. Dynamic analysis using mass conservation in the inviscid arteries shows that the reservoir pressure accounts for the storage of potential energy by the distension of the elastic inviscid arteries during early systole and its release during late systole and diastole. This analysis thus provides a formal derivation of the reservoir pressure and its physical meaning.
1304.5565
Sriganesh Srihari Dr
Sriganesh Srihari and Mark A. Ragan
Computing Pathways to Systems Biology: Key Contributions of Computational Methods in Pathway Identification
18 pages, 1 figure, survey article
null
null
null
q-bio.MN cs.CE
http://creativecommons.org/licenses/by/3.0/
Understanding large molecular networks consisting of entities such as genes, proteins or RNAs that interact in complex ways to drive the cellular machinery has been an active focus of systems biology. Computational approaches have played a key role in systems biology by complementing theoretical and experimental approaches. Here we roadmap some key contributions of computational methods developed over the last decade in the reconstruction of biological pathways. We position these contributions in a 'systems biology perspective' to reemphasize their roles in unraveling cellular mechanisms and to understand 'systems biology diseases' including cancer.
[ { "created": "Sat, 20 Apr 2013 00:17:03 GMT", "version": "v1" } ]
2013-04-23
[ [ "Srihari", "Sriganesh", "" ], [ "Ragan", "Mark A.", "" ] ]
Understanding large molecular networks consisting of entities such as genes, proteins or RNAs that interact in complex ways to drive the cellular machinery has been an active focus of systems biology. Computational approaches have played a key role in systems biology by complementing theoretical and experimental approaches. Here we roadmap some key contributions of computational methods developed over the last decade in the reconstruction of biological pathways. We position these contributions in a 'systems biology perspective' to reemphasize their roles in unraveling cellular mechanisms and to understand 'systems biology diseases' including cancer.
1604.08278
Naoto Hori
Naoto Hori, Natalia A. Denesyuk, D. Thirumalai
Salt Effects on the Thermodynamics of a Frameshifting RNA Pseudoknot under Tension
Final draft accepted in Journal of Molecular Biology, 16 pages including Supporting Information
null
10.1016/j.jmb.2016.06.002
null
q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Because of the potential link between -1 programmed ribosomal frameshifting and response of a pseudoknot (PK) RNA to force, a number of single molecule pulling experiments have been performed on PKs to decipher the mechanism of programmed ribosomal frameshifting. Motivated in part by these experiments, we performed simulations using a coarse-grained model of RNA to describe the response of a PK over a range of mechanical forces ($f$s) and monovalent salt concentrations ($C$s). The coarse-grained simulations quantitatively reproduce the multistep thermal melting observed in experiments, thus validating our model. The free energy changes obtained in simulations are in excellent agreement with experiments. By varying $f$ and $C$, we calculated the phase diagram that shows a sequence of structural transitions, populating distinct intermediate states. As $f$ and $C$ are changed, the stem-loop tertiary interactions rupture first, followed by unfolding of the $3^{\prime}$-end hairpin ($\textrm{I}\rightleftharpoons\textrm{F}$). Finally, the $5^{\prime}$-end hairpin unravels, producing an extended state ($\textrm{E}\rightleftharpoons\textrm{I}$). A theoretical analysis of the phase boundaries shows that the critical force for rupture scales as $\left(\log C_{\textrm{m}}\right)^{\alpha}$ with $\alpha=1\,(0.5)$ for $\textrm{E}\rightleftharpoons\textrm{I}$ ($\textrm{I}\rightleftharpoons\textrm{F}$) transition. This relation is used to obtain the preferential ion-RNA interaction coefficient, which can be quantitatively measured in single-molecule experiments, as done previously for DNA hairpins. A by-product of our work is the suggestion that the frameshift efficiency is likely determined by the stability of the $5^{\prime}$-end hairpin that the ribosome first encounters during translation.
[ { "created": "Thu, 28 Apr 2016 00:58:15 GMT", "version": "v1" }, { "created": "Tue, 7 Jun 2016 19:12:34 GMT", "version": "v2" }, { "created": "Tue, 21 Jun 2016 04:45:26 GMT", "version": "v3" } ]
2016-06-22
[ [ "Hori", "Naoto", "" ], [ "Denesyuk", "Natalia A.", "" ], [ "Thirumalai", "D.", "" ] ]
Because of the potential link between -1 programmed ribosomal frameshifting and response of a pseudoknot (PK) RNA to force, a number of single molecule pulling experiments have been performed on PKs to decipher the mechanism of programmed ribosomal frameshifting. Motivated in part by these experiments, we performed simulations using a coarse-grained model of RNA to describe the response of a PK over a range of mechanical forces ($f$s) and monovalent salt concentrations ($C$s). The coarse-grained simulations quantitatively reproduce the multistep thermal melting observed in experiments, thus validating our model. The free energy changes obtained in simulations are in excellent agreement with experiments. By varying $f$ and $C$, we calculated the phase diagram that shows a sequence of structural transitions, populating distinct intermediate states. As $f$ and $C$ are changed, the stem-loop tertiary interactions rupture first, followed by unfolding of the $3^{\prime}$-end hairpin ($\textrm{I}\rightleftharpoons\textrm{F}$). Finally, the $5^{\prime}$-end hairpin unravels, producing an extended state ($\textrm{E}\rightleftharpoons\textrm{I}$). A theoretical analysis of the phase boundaries shows that the critical force for rupture scales as $\left(\log C_{\textrm{m}}\right)^{\alpha}$ with $\alpha=1\,(0.5)$ for $\textrm{E}\rightleftharpoons\textrm{I}$ ($\textrm{I}\rightleftharpoons\textrm{F}$) transition. This relation is used to obtain the preferential ion-RNA interaction coefficient, which can be quantitatively measured in single-molecule experiments, as done previously for DNA hairpins. A by-product of our work is the suggestion that the frameshift efficiency is likely determined by the stability of the $5^{\prime}$-end hairpin that the ribosome first encounters during translation.
1603.05707
Chi Zhang
Chi Zhang
Molecular Clock Dating using MrBayes
19 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides an overview and a tutorial of molecular clock dating using MrBayes, a program for Bayesian inference of phylogeny. Two modern approaches, total-evidence dating and node dating, are demonstrated using a dataset of Hymenoptera with molecular sequences and morphological characters. The similarities and differences of the two methods are compared and discussed. In addition, a non-clock analysis is performed on the same dataset for comparison with the molecular clock dating analyses.
[ { "created": "Thu, 17 Mar 2016 22:06:54 GMT", "version": "v1" }, { "created": "Thu, 30 Nov 2017 12:16:58 GMT", "version": "v2" } ]
2017-12-01
[ [ "Zhang", "Chi", "" ] ]
This paper provides an overview and a tutorial of molecular clock dating using MrBayes, a program for Bayesian inference of phylogeny. Two modern approaches, total-evidence dating and node dating, are demonstrated using a dataset of Hymenoptera with molecular sequences and morphological characters. The similarities and differences of the two methods are compared and discussed. In addition, a non-clock analysis is performed on the same dataset for comparison with the molecular clock dating analyses.
2006.03286
Diego Oyarz\'un
Varshit Dusad, Denise Thiel, Mauricio Barahona, Hector C. Keun, Diego A. Oyarz\'un
Opportunities at the interface of network science and metabolic modelling
null
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Metabolism plays a central role in cell physiology because it provides the molecular machinery for growth. At the genome scale, metabolism is made up of thousands of reactions interacting with one another. Untangling this complexity is key to understanding how cells respond to genetic, environmental, or therapeutic perturbations. Here we discuss the roles of two complementary strategies for the analysis of genome-scale metabolic models: Flux Balance Analysis (FBA) and network science. While FBA estimates metabolic flux on the basis of an optimisation principle, network approaches reveal emergent properties of the global metabolic connectivity. We highlight how the integration of both approaches promises to deliver insights on the structure and function of metabolic systems, with wide-ranging implications in discovery science, precision medicine and industrial biotechnology.
[ { "created": "Fri, 5 Jun 2020 08:09:27 GMT", "version": "v1" }, { "created": "Tue, 3 Nov 2020 20:42:56 GMT", "version": "v2" }, { "created": "Thu, 17 Dec 2020 23:18:31 GMT", "version": "v3" } ]
2020-12-21
[ [ "Dusad", "Varshit", "" ], [ "Thiel", "Denise", "" ], [ "Barahona", "Mauricio", "" ], [ "Keun", "Hector C.", "" ], [ "Oyarzún", "Diego A.", "" ] ]
Metabolism plays a central role in cell physiology because it provides the molecular machinery for growth. At the genome scale, metabolism is made up of thousands of reactions interacting with one another. Untangling this complexity is key to understanding how cells respond to genetic, environmental, or therapeutic perturbations. Here we discuss the roles of two complementary strategies for the analysis of genome-scale metabolic models: Flux Balance Analysis (FBA) and network science. While FBA estimates metabolic flux on the basis of an optimisation principle, network approaches reveal emergent properties of the global metabolic connectivity. We highlight how the integration of both approaches promises to deliver insights on the structure and function of metabolic systems, with wide-ranging implications in discovery science, precision medicine and industrial biotechnology.
q-bio/0309014
Chi Ming Yang Dr.
Chi Ming Yang (Nankai University)
The naturally designed spherical symmetry in the genetic code
13 pages, 7 figures
null
null
null
q-bio.BM cond-mat.soft q-bio.PE
null
In the present work, 16 genetic code doublets and their cognate amino acids in the genetic code are fitted into a polyhedron model. Based on the structural regularity in nucleobases, and by using a series of common-sense topological approaches to rearranging the Hamiltonian-type graph of the codon map, it is identified that the degeneracy of codons and the internal relation of the 20 amino acids within the genetic code are in agreement with the spherical and polyhedral symmetry of a quasi-28-gon, i.e., icosikaioctagon. Hence, a quasi-central, quasi-polyhedral and rotational symmetry within the genetic code is described. Accordingly, the rotational symmetry of the numerical distribution of side-chain carbon atoms of the 20 amino acids and the side-chain skeleton atoms (carbon, nitrogen, oxygen and sulfur) of the 20 amino acids are presented in the framework of this quasi-28-gon model. Two evolutionary axes within the 20 standard amino acids are suggested.
[ { "created": "Thu, 25 Sep 2003 20:04:20 GMT", "version": "v1" } ]
2007-05-23
[ [ "Yang", "Chi Ming", "", "Nankai University" ] ]
In the present work, 16 genetic code doublets and their cognate amino acids in the genetic code are fitted into a polyhedron model. Based on the structural regularity in nucleobases, and by using a series of common-sense topological approaches to rearranging the Hamiltonian-type graph of the codon map, it is identified that the degeneracy of codons and the internal relation of the 20 amino acids within the genetic code are in agreement with the spherical and polyhedral symmetry of a quasi-28-gon, i.e., icosikaioctagon. Hence, a quasi-central, quasi-polyhedral and rotational symmetry within the genetic code is described. Accordingly, the rotational symmetry of the numerical distribution of side-chain carbon atoms of the 20 amino acids and the side-chain skeleton atoms (carbon, nitrogen, oxygen and sulfur) of the 20 amino acids are presented in the framework of this quasi-28-gon model. Two evolutionary axes within the 20 standard amino acids are suggested.
1202.5211
Paul Muller Jr
Paul A. Muller Jr. and Slava S. Epstein
In Silico Genome-Genome Hybridization Values Accurately and Precisely Predict Empirical DNA-DNA Hybridization Values for Classifying Prokaryotes
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For nearly 50 years microbiologists have been determining prokaryotic genome relatedness by means of nucleic acid reassociation kinetics. These methods, however, are technically challenging, difficult to reproduce, and - given the time and resources it takes to generate a single data point - not cost effective. In the post-genomic era, with the cost of sequencing whole prokaryotic genomes no longer a limiting factor, we reasoned that computationally predicting the output value of a traditional DNA-DNA hybridization experiment from pair-wise comparisons of whole-genome sequences would be of value. While other computational whole-genome classification methods exist, they predict values on widely different scales than DNA-DNA hybridization, introducing yet another metric into the polyphasic approach to defining microbial species. Our goal was to develop a BLAST-based in silico pipeline that would predict, with a high level of certainty, the value of the wet-lab-based DNA-DNA hybridization experiment. Here we report on one such method that produces estimates that are both accurate and precise with respect to the DNA-DNA hybridization values they are designed to emulate.
[ { "created": "Thu, 23 Feb 2012 15:36:54 GMT", "version": "v1" } ]
2012-02-24
[ [ "Muller", "Paul A.", "Jr." ], [ "Epstein", "Slava S.", "" ] ]
For nearly 50 years microbiologists have been determining prokaryotic genome relatedness by means of nucleic acid reassociation kinetics. These methods, however, are technically challenging, difficult to reproduce, and - given the time and resources it takes to generate a single data point - not cost effective. In the post-genomic era, with the cost of sequencing whole prokaryotic genomes no longer a limiting factor, we reasoned that computationally predicting the output value of a traditional DNA-DNA hybridization experiment from pair-wise comparisons of whole-genome sequences would be of value. While other computational whole-genome classification methods exist, they predict values on widely different scales than DNA-DNA hybridization, introducing yet another metric into the polyphasic approach to defining microbial species. Our goal was to develop a BLAST-based in silico pipeline that would predict, with a high level of certainty, the value of the wet-lab-based DNA-DNA hybridization experiment. Here we report on one such method that produces estimates that are both accurate and precise with respect to the DNA-DNA hybridization values they are designed to emulate.
1407.2976
Filippo Disanto
Filippo Disanto and Noah A. Rosenberg
On the number of ranked species trees producing anomalous ranked gene trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analysis of probability distributions conditional on species trees has demonstrated the existence of anomalous ranked gene trees (ARGTs), ranked gene trees that are more probable than the ranked gene tree that accords with the ranked species tree. Here, to improve the characterization of ARGTs, we study enumerative and probabilistic properties of two classes of ranked labeled species trees, focusing on the presence or avoidance of certain subtree patterns associated with the production of ARGTs. We provide exact enumerations and asymptotic estimates for cardinalities of these sets of trees, showing that as the number of species increases without bound, the fraction of all ranked labeled species trees that are ARGT-producing approaches 1. This result extends beyond earlier existence results to provide a probabilistic claim about the frequency of ARGTs.
[ { "created": "Thu, 10 Jul 2014 22:02:33 GMT", "version": "v1" } ]
2014-07-14
[ [ "Disanto", "Filippo", "" ], [ "Rosenberg", "Noah A.", "" ] ]
Analysis of probability distributions conditional on species trees has demonstrated the existence of anomalous ranked gene trees (ARGTs), ranked gene trees that are more probable than the ranked gene tree that accords with the ranked species tree. Here, to improve the characterization of ARGTs, we study enumerative and probabilistic properties of two classes of ranked labeled species trees, focusing on the presence or avoidance of certain subtree patterns associated with the production of ARGTs. We provide exact enumerations and asymptotic estimates for cardinalities of these sets of trees, showing that as the number of species increases without bound, the fraction of all ranked labeled species trees that are ARGT-producing approaches 1. This result extends beyond earlier existence results to provide a probabilistic claim about the frequency of ARGTs.
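The fractions of ARGT-producing trees discussed above are taken relative to the total number of ranked labeled binary species trees, which has the standard closed form n!(n-1)!/2^(n-1) (the count of labeled histories). A quick sketch of this denominator, for context rather than the paper's enumerations:

```python
from math import factorial

def ranked_labeled_trees(n):
    """Number of ranked labeled binary trees (labeled histories)
    on n leaves: n! * (n-1)! / 2**(n-1)."""
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)

# first few values: 1, 3, 18, 180 for n = 2..5
counts = [ranked_labeled_trees(n) for n in range(2, 6)]
```

The rapid (super-exponential) growth of this count is what makes asymptotic, rather than exhaustive, arguments necessary for statements about the fraction of ARGT-producing trees.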
1807.01195
Genki Ichinose
Genki Ichinose, Yoshiki Satotani, Hiroki Sayama, Takashi Nagatani
Reduced mobility of infected agents suppresses but lengthens disease in biased random walk
6 pages, 6 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various theoretical models have been proposed to understand the basic nature of epidemics. Recent studies focus on the effects of mobility on epidemic processes. However, an uncorrelated random walk is typically assumed as the type of movement. In daily life, the movement of people sometimes tends to be limited to a certain direction, which can be described by a biased random walk. Here, we developed an agent-based model of the susceptible-infected-recovered (SIR) epidemic process in a 2D continuous space where agents tend to move in a certain direction in addition to random movement. Moreover, we mainly focus on the effect of the reduced mobility of infected agents. Our model assumes that, when people are infected, their movement activity is greatly reduced because they are physically weakened by the disease. By conducting extensive simulations, we found that when the movement of infected people is limited, the final epidemic size becomes small. However, this crucially depended on the movement type of the agents. Furthermore, the reduced mobility of infected agents lengthened the duration of the epidemic because the infection progressed slowly.
[ { "created": "Tue, 3 Jul 2018 14:05:35 GMT", "version": "v1" } ]
2018-07-04
[ [ "Ichinose", "Genki", "" ], [ "Satotani", "Yoshiki", "" ], [ "Sayama", "Hiroki", "" ], [ "Nagatani", "Takashi", "" ] ]
Various theoretical models have been proposed to understand the basic nature of epidemics. Recent studies focus on the effects of mobility on epidemic processes. However, an uncorrelated random walk is typically assumed as the type of movement. In daily life, the movement of people sometimes tends to be limited to a certain direction, which can be described by a biased random walk. Here, we developed an agent-based model of the susceptible-infected-recovered (SIR) epidemic process in a 2D continuous space where agents tend to move in a certain direction in addition to random movement. Moreover, we mainly focus on the effect of the reduced mobility of infected agents. Our model assumes that, when people are infected, their movement activity is greatly reduced because they are physically weakened by the disease. By conducting extensive simulations, we found that when the movement of infected people is limited, the final epidemic size becomes small. However, this crucially depended on the movement type of the agents. Furthermore, the reduced mobility of infected agents lengthened the duration of the epidemic because the infection progressed slowly.
q-bio/0502015
Tomoshiro Ochiai
T. Ochiai, J.C. Nacher, T. Akutsu
A stochastic approach to multi-gene expression dynamics
17 pages, 2 figures, Latex, v2 includes minor modifications
null
10.1016/j.physleta.2005.02.066
null
q-bio.BM
null
In recent years, tens of thousands of gene expression profiles for cells of several organisms have been monitored. Gene expression is a complex transcriptional process where mRNA molecules are translated into proteins, which control most cell functions. In this process, the correlation among genes is crucial to determine the specific functions of genes. Here, we propose a novel multi-dimensional stochastic approach to deal with gene correlation phenomena. Interestingly, our stochastic framework suggests that the study of gene correlation requires only one theoretical assumption, the Markov property, and the experimental transition probability, which characterizes the gene correlation system. Finally, a gene expression experiment is proposed for future applications of the model.
[ { "created": "Mon, 14 Feb 2005 05:17:07 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2005 06:31:38 GMT", "version": "v2" } ]
2009-11-11
[ [ "Ochiai", "T.", "" ], [ "Nacher", "J. C.", "" ], [ "Akutsu", "T.", "" ] ]
In recent years, tens of thousands of gene expression profiles for cells of several organisms have been monitored. Gene expression is a complex transcriptional process where mRNA molecules are translated into proteins, which control most cell functions. In this process, the correlation among genes is crucial to determine the specific functions of genes. Here, we propose a novel multi-dimensional stochastic approach to deal with gene correlation phenomena. Interestingly, our stochastic framework suggests that the study of gene correlation requires only one theoretical assumption, the Markov property, and the experimental transition probability, which characterizes the gene correlation system. Finally, a gene expression experiment is proposed for future applications of the model.
1603.04773
Jaline Gerardin
Milen Nikolov, Caitlin A. Bever, Alexander Upfill-Brown, Busiku Hamainza, John M. Miller, Philip A. Eckhoff, Edward A. Wenger, Jaline Gerardin
Malaria elimination campaigns in the Lake Kariba region of Zambia: a spatial dynamical model
null
null
10.1371/journal.pcbi.1005192
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background As more regions approach malaria elimination, understanding how different interventions interact to reduce transmission becomes critical. The Lake Kariba area of Southern Province, Zambia, is part of a multi-country elimination effort and presents a particular challenge as it is an interconnected region of variable transmission intensities. Methods In 2012-13, six rounds of mass-screen-and-treat drug campaigns were carried out in the Lake Kariba region. A spatial dynamical model of malaria transmission in the Lake Kariba area, with transmission and climate modeled at the village scale, was calibrated to the 2012-13 prevalence survey data, with case management rates, insecticide-treated net usage, and drug campaign coverage informed by surveillance. The model was used to simulate the effect of various interventions implemented in 2014-22 on reducing regional transmission, achieving elimination by 2022, and maintaining elimination through 2028. Findings The model captured the spatio-temporal trends of decline and rebound in malaria prevalence in 2012-13 at the village scale. Simulations predicted that elimination required repeated mass drug administrations coupled with simultaneous increase in net usage. Drug campaigns targeted only at high-burden areas were as successful as campaigns covering the entire region. Interpretation Elimination in the Lake Kariba region is possible through coordinating mass drug campaigns with high-coverage vector control. Targeting regional hotspots is a viable alternative to global campaigns when human migration within an interconnected area is responsible for maintaining transmission in low-burden areas.
[ { "created": "Tue, 15 Mar 2016 17:37:23 GMT", "version": "v1" } ]
2017-02-08
[ [ "Nikolov", "Milen", "" ], [ "Bever", "Caitlin A.", "" ], [ "Upfill-Brown", "Alexander", "" ], [ "Hamainza", "Busiku", "" ], [ "Miller", "John M.", "" ], [ "Eckhoff", "Philip A.", "" ], [ "Wenger", "Edward A.", "" ], [ "Gerardin", "Jaline", "" ] ]
Background As more regions approach malaria elimination, understanding how different interventions interact to reduce transmission becomes critical. The Lake Kariba area of Southern Province, Zambia, is part of a multi-country elimination effort and presents a particular challenge as it is an interconnected region of variable transmission intensities. Methods In 2012-13, six rounds of mass-screen-and-treat drug campaigns were carried out in the Lake Kariba region. A spatial dynamical model of malaria transmission in the Lake Kariba area, with transmission and climate modeled at the village scale, was calibrated to the 2012-13 prevalence survey data, with case management rates, insecticide-treated net usage, and drug campaign coverage informed by surveillance. The model was used to simulate the effect of various interventions implemented in 2014-22 on reducing regional transmission, achieving elimination by 2022, and maintaining elimination through 2028. Findings The model captured the spatio-temporal trends of decline and rebound in malaria prevalence in 2012-13 at the village scale. Simulations predicted that elimination required repeated mass drug administrations coupled with simultaneous increase in net usage. Drug campaigns targeted only at high-burden areas were as successful as campaigns covering the entire region. Interpretation Elimination in the Lake Kariba region is possible through coordinating mass drug campaigns with high-coverage vector control. Targeting regional hotspots is a viable alternative to global campaigns when human migration within an interconnected area is responsible for maintaining transmission in low-burden areas.
2005.13256
Masaru Tanaka
Masaru Tanaka and Gyula Telegdy
Antidepressant-like Effects of Neuropeptide SF (NPSF)
4 pages, 3 figures
World Journal of Research and Review (WJRR) ISSN:2455-3956, Volume-4, Issue-5, May 2017 Pages 26-30
null
null
q-bio.BM q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
Neuropeptide SF (NPSF) is a member of the RFamide neuropeptides, which play diverse roles in the central nervous system. Little is known about the effects of NPSF on brain functions. The antidepressant-like effect of NPSF was studied in a modified mouse forced swim test (FST). NPSF showed antidepressant-like effects by decreasing the immobility time and increasing the climbing and swimming times. Furthermore, the involvement of adrenergic, serotonergic, cholinergic, or dopaminergic receptors in the antidepressant-like effect of NPSF was studied in the modified mouse FST. Mice were pretreated with a non-selective {\alpha}-adrenergic receptor antagonist, phenoxybenzamine; a {\beta}-adrenergic receptor antagonist, propranolol; a non-selective 5-HT2 serotonergic receptor antagonist, cyproheptadine; a non-selective muscarinic acetylcholine receptor antagonist, atropine; or a D2, D3, D4 dopamine receptor antagonist, haloperidol. The present results confirmed that the antidepressant-like effect of NPSF is mediated, at least in part, by an interaction of the {\alpha}-adrenergic, 5-HT2 serotonergic, muscarinic acetylcholine, and D2, D3, D4 dopamine receptors in a modified mouse FST.
[ { "created": "Wed, 27 May 2020 09:43:26 GMT", "version": "v1" } ]
2020-05-29
[ [ "Tanaka", "Masaru", "" ], [ "Telegdy", "Gyula", "" ] ]
Neuropeptide SF (NPSF) is a member of the RFamide neuropeptides, which play diverse roles in the central nervous system. Little is known about the effects of NPSF on brain functions. The antidepressant-like effect of NPSF was studied in a modified mouse forced swim test (FST). NPSF showed antidepressant-like effects by decreasing the immobility time and increasing the climbing and swimming times. Furthermore, the involvement of adrenergic, serotonergic, cholinergic, or dopaminergic receptors in the antidepressant-like effect of NPSF was studied in the modified mouse FST. Mice were pretreated with a non-selective {\alpha}-adrenergic receptor antagonist, phenoxybenzamine; a {\beta}-adrenergic receptor antagonist, propranolol; a non-selective 5-HT2 serotonergic receptor antagonist, cyproheptadine; a non-selective muscarinic acetylcholine receptor antagonist, atropine; or a D2, D3, D4 dopamine receptor antagonist, haloperidol. The present results confirmed that the antidepressant-like effect of NPSF is mediated, at least in part, by an interaction of the {\alpha}-adrenergic, 5-HT2 serotonergic, muscarinic acetylcholine, and D2, D3, D4 dopamine receptors in a modified mouse FST.
2311.04357
James Brunner
James D. Brunner and Aaron J. Robinson and Patrick S.G. Chain
Combining Compositional Data Sets Introduces Error in Covariance Network Reconstruction
18 pages, 10 figures, 1 table
null
null
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Microbial communities are diverse biological systems that include taxa from across multiple kingdoms of life. Notably, interactions between bacteria and fungi play a significant role in determining community structure. However, with standard network inference techniques, these statistical associations across kingdoms are more difficult to infer than intra-kingdom associations due to the nature of the data involved. We quantify the challenges of cross-kingdom network inference from both a theoretical and a practical viewpoint using synthetic and real-world microbiome data. We detail the theoretical issue presented by combining compositional data sets drawn from the same environment, e.g. 16S and ITS sequencing of a single set of samples, and survey common network inference techniques for their ability to handle this error. We then test these techniques for the accuracy and usefulness of their intra- and inter-kingdom associations by inferring networks from a set of simulated samples for which a ground-truth set of associations is known. We show that while two methods mitigate the error of cross-kingdom inference, there is little difference between techniques for key practical applications, including identification of strong correlations and identification of possible keystone taxa (i.e. hub nodes in the network). Furthermore, we identify a signature of the error caused by trans-kingdom network inference and demonstrate that it appears in networks constructed using real-world environmental microbiome data.
[ { "created": "Tue, 7 Nov 2023 21:38:31 GMT", "version": "v1" }, { "created": "Fri, 12 Apr 2024 17:04:24 GMT", "version": "v2" } ]
2024-04-15
[ [ "Brunner", "James D.", "" ], [ "Robinson", "Aaron J.", "" ], [ "Chain", "Patrick S. G.", "" ] ]
Microbial communities are diverse biological systems that include taxa from across multiple kingdoms of life. Notably, interactions between bacteria and fungi play a significant role in determining community structure. However, with standard network inference techniques, these statistical associations across kingdoms are more difficult to infer than intra-kingdom associations due to the nature of the data involved. We quantify the challenges of cross-kingdom network inference from both a theoretical and a practical viewpoint using synthetic and real-world microbiome data. We detail the theoretical issue presented by combining compositional data sets drawn from the same environment, e.g. 16S and ITS sequencing of a single set of samples, and survey common network inference techniques for their ability to handle this error. We then test these techniques for the accuracy and usefulness of their intra- and inter-kingdom associations by inferring networks from a set of simulated samples for which a ground-truth set of associations is known. We show that while two methods mitigate the error of cross-kingdom inference, there is little difference between techniques for key practical applications, including identification of strong correlations and identification of possible keystone taxa (i.e. hub nodes in the network). Furthermore, we identify a signature of the error caused by trans-kingdom network inference and demonstrate that it appears in networks constructed using real-world environmental microbiome data.
2010.10444
Masanao Ozawa
Masanao Ozawa and Andrei Khrennikov
Modeling combination of question order effect, response replicability effect, and QQ-equality with quantum instruments
39 pages, title changed, accepted for publication in J. Math. Psychol
Journal of Mathematical Psychology 100, 102491, 2021
10.1016/j.jmp.2020.102491
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
We continue to analyze basic constraints on human decision making from the viewpoint of quantum measurement theory (QMT). As has been found, the conventional QMT based on the projection postulate cannot account for the combination of the question order effect (QOE) and the response replicability effect (RRE). This was an alarming finding for quantum-like modeling of decision making. Recently, it was shown that this difficulty can be resolved by using the general QMT based on quantum instruments. In the present paper we analyze the problem of the combination of QOE, RRE, and the well-known QQ-equality (QQE). This equality was derived by Busemeyer and Wang, and it was shown (in a joint paper with Solloway and Shiffrin) that statistical data from many social opinion polls satisfy it. Here we construct quantum instruments satisfying QOE, RRE, and QQE. The general features of our approach are formalized with postulates that generalize the (Wang-Busemeyer) postulates for quantum-like modeling of decision making. Moreover, we show that our model closely reproduces the statistics of the well-known Clinton-Gore poll data with a prior belief state independent of the question order. This model successfully corrects for the order effect in the data to determine the "genuine" distribution of opinions in the poll. The paper also provides an accessible introduction to the theory of quantum instruments, the most general mathematical framework for quantum measurements.
[ { "created": "Sun, 11 Oct 2020 13:41:04 GMT", "version": "v1" }, { "created": "Fri, 4 Dec 2020 17:06:41 GMT", "version": "v2" } ]
2021-02-19
[ [ "Ozawa", "Masanao", "" ], [ "Khrennikov", "Andrei", "" ] ]
We continue to analyze basic constraints on human decision making from the viewpoint of quantum measurement theory (QMT). As has been found, the conventional QMT based on the projection postulate cannot account for the combination of the question order effect (QOE) and the response replicability effect (RRE). This was an alarming finding for quantum-like modeling of decision making. Recently, it was shown that this difficulty can be resolved by using the general QMT based on quantum instruments. In the present paper we analyze the problem of the combination of QOE, RRE, and the well-known QQ-equality (QQE). This equality was derived by Busemeyer and Wang, and it was shown (in a joint paper with Solloway and Shiffrin) that statistical data from many social opinion polls satisfy it. Here we construct quantum instruments satisfying QOE, RRE, and QQE. The general features of our approach are formalized with postulates that generalize the (Wang-Busemeyer) postulates for quantum-like modeling of decision making. Moreover, we show that our model closely reproduces the statistics of the well-known Clinton-Gore poll data with a prior belief state independent of the question order. This model successfully corrects for the order effect in the data to determine the "genuine" distribution of opinions in the poll. The paper also provides an accessible introduction to the theory of quantum instruments, the most general mathematical framework for quantum measurements.
2404.09706
Amadeus M. Gebauer
Amadeus M. Gebauer, Martin R. Pfaller, Jason M. Szafron, Wolfgang A. Wall
Adaptive integration of history variables in constrained mixture models for organ-scale growth and remodeling
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last decades, many computational models have been developed to predict soft tissue growth and remodeling (G&R). The constrained mixture theory describes fundamental mechanobiological processes in soft tissue G&R and has been widely adopted in cardiovascular models of G&R. However, even after two decades of work, large organ-scale models are rare, mainly due to high computational costs (model evaluation and memory consumption), especially in long-range simulations. We propose two strategies to adaptively integrate history variables in constrained mixture models to enable large organ-scale simulations of G&R. Both strategies exploit that the influence of deposited tissue on the current mixture decreases over time through degradation. One strategy is independent of external loading, allowing the estimation of the computational resources ahead of the simulation. The other adapts the history snapshots based on the local mechanobiological environment so that the additional integration errors can be controlled and kept negligibly small, even in G&R scenarios with severe perturbations. We analyze the adaptively integrated constrained mixture model on a tissue patch for a parameter study and show the performance under different G&R scenarios. To confirm that adaptive strategies enable large organ-scale examples, we show simulations of different hypertension conditions with a real-world example of a biventricular heart discretized with a finite element mesh. In our example, adaptive integrations sped up simulations by a factor of three and reduced memory requirements to one-sixth. The reduction of the computational costs gets even more pronounced for simulations over longer periods. Adaptive integration of the history variables allows studying more finely resolved models and longer G&R periods while computational costs are drastically reduced and largely constant in time.
[ { "created": "Mon, 15 Apr 2024 12:04:24 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2024 12:38:36 GMT", "version": "v2" } ]
2024-07-12
[ [ "Gebauer", "Amadeus M.", "" ], [ "Pfaller", "Martin R.", "" ], [ "Szafron", "Jason M.", "" ], [ "Wall", "Wolfgang A.", "" ] ]
In the last decades, many computational models have been developed to predict soft tissue growth and remodeling (G&R). The constrained mixture theory describes fundamental mechanobiological processes in soft tissue G&R and has been widely adopted in cardiovascular models of G&R. However, even after two decades of work, large organ-scale models are rare, mainly due to high computational costs (model evaluation and memory consumption), especially in long-range simulations. We propose two strategies to adaptively integrate history variables in constrained mixture models to enable large organ-scale simulations of G&R. Both strategies exploit that the influence of deposited tissue on the current mixture decreases over time through degradation. One strategy is independent of external loading, allowing the estimation of the computational resources ahead of the simulation. The other adapts the history snapshots based on the local mechanobiological environment so that the additional integration errors can be controlled and kept negligibly small, even in G&R scenarios with severe perturbations. We analyze the adaptively integrated constrained mixture model on a tissue patch for a parameter study and show the performance under different G&R scenarios. To confirm that adaptive strategies enable large organ-scale examples, we show simulations of different hypertension conditions with a real-world example of a biventricular heart discretized with a finite element mesh. In our example, adaptive integrations sped up simulations by a factor of three and reduced memory requirements to one-sixth. The reduction of the computational costs gets even more pronounced for simulations over longer periods. Adaptive integration of the history variables allows studying more finely resolved models and longer G&R periods while computational costs are drastically reduced and largely constant in time.
2112.00221
Luis F Seoane PhD
Lu\'is F Seoane
Evolutionary paths to lateralization of complex brain functions
21 pages, 10 figures, 5 appendixes
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
At large, most animal brains present two mirror-symmetric sides; but closer inspection reveals a range of asymmetries (in shape and function) that seem more salient in more cognitively complex species. Sustaining symmetric, redundant neural circuitry has associated metabolic costs, but it might aid in implementing computations within noisy environments or with faulty pieces. It has been suggested that the complexity of a computational task might play a role in breaking bilaterally symmetric circuits into fully lateralized ones; yet a rigorous, mathematically grounded theory of how this mechanism might work is missing. Here we provide such a mathematical framework, starting with the simplest assumptions, but extending our results to a comprehensive range of biologically and computationally relevant scenarios. We show mathematically that only fully lateralized or bilateral solutions are relevant within our framework (dismissing configurations in which circuits are only partially engaged). We provide maps that show when each of these configurations is preferred depending on costs, contributed fitness, circuit reliability, and task complexity. We discuss evolutionary paths leading from bilateral to lateralized configurations, as well as other possible outcomes. The implications of these results for the evolution, development, and rehabilitation of damaged or aged brains are discussed. Our work constitutes a limit case that should constrain and underlie similar mappings when other aspects (aside from task complexity and circuit reliability) are considered.
[ { "created": "Wed, 1 Dec 2021 01:52:08 GMT", "version": "v1" } ]
2021-12-02
[ [ "Seoane", "Luís F", "" ] ]
At large, most animal brains present two mirror-symmetric sides; but closer inspection reveals a range of asymmetries (in shape and function) that seem more salient in more cognitively complex species. Sustaining symmetric, redundant neural circuitry has associated metabolic costs, but it might aid in implementing computations within noisy environments or with faulty pieces. It has been suggested that the complexity of a computational task might play a role in breaking bilaterally symmetric circuits into fully lateralized ones; yet a rigorous, mathematically grounded theory of how this mechanism might work is missing. Here we provide such a mathematical framework, starting with the simplest assumptions, but extending our results to a comprehensive range of biologically and computationally relevant scenarios. We show mathematically that only fully lateralized or bilateral solutions are relevant within our framework (dismissing configurations in which circuits are only partially engaged). We provide maps that show when each of these configurations is preferred depending on costs, contributed fitness, circuit reliability, and task complexity. We discuss evolutionary paths leading from bilateral to lateralized configurations, as well as other possible outcomes. The implications of these results for the evolution, development, and rehabilitation of damaged or aged brains are discussed. Our work constitutes a limit case that should constrain and underlie similar mappings when other aspects (aside from task complexity and circuit reliability) are considered.
1206.1098
Alberto d'Onofrio
Giulio Caravagna, Giancarlo Mauri, Alberto d'Onofrio
The interplay of intrinsic and extrinsic bounded noises in genetic networks
null
null
10.1371/journal.pone.0051174
null
q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After long being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a genetic network. The influence of intrinsic and extrinsic noises on genetic networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: $(i)$ the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, $(ii)$ a model of an enzymatic futile cycle, and $(iii)$ a genetic toggle switch. In $(ii)$ and $(iii)$ we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting the possible functional role of bounded noises.
[ { "created": "Wed, 6 Jun 2012 00:35:33 GMT", "version": "v1" } ]
2015-06-05
[ [ "Caravagna", "Giulio", "" ], [ "Mauri", "Giancarlo", "" ], [ "d'Onofrio", "Alberto", "" ] ]
After long being considered a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in a genetic network. The influence of intrinsic and extrinsic noises on genetic networks has been intensively investigated in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: $(i)$ the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, $(ii)$ a model of an enzymatic futile cycle, and $(iii)$ a genetic toggle switch. In $(ii)$ and $(iii)$ we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, thus suggesting the possible functional role of bounded noises.
1408.1002
Andrew Teschendorff
Andrew Teschendorff and Peter Sollich and Reimer Kuehn
Signalling Entropy: a novel network-theoretical framework for systems analysis and interpretation of functional omic data
34 pages, 6 figures
Methods. 2014 Jun 1;67(3):282-93
10.1016/j.ymeth.2014.03.013
null
q-bio.MN q-bio.GN
http://creativecommons.org/licenses/by/3.0/
A key challenge in systems biology is the elucidation of the underlying principles, or fundamental laws, which determine the cellular phenotype. Understanding how these fundamental principles are altered in diseases like cancer is important for translating basic scientific knowledge into clinical advances. While significant progress is being made, with the identification of novel drug targets and treatments by means of systems biological methods, our fundamental systems-level understanding of why certain treatments succeed and others fail is still lacking. We here advocate a novel methodological framework for systems analysis and interpretation of molecular omic data, which is based on statistical mechanical principles. Specifically, we propose the notion of cellular signalling entropy (or uncertainty), as a novel means of analysing and interpreting omic data, and more fundamentally, as a means of elucidating systems-level principles underlying basic biology and disease. We describe the power of signalling entropy to discriminate cells according to differentiation potential and cancer status. We further argue the case for an empirical cellular entropy-robustness correlation theorem and demonstrate its existence in cancer cell line drug sensitivity data. Specifically, we find that high signalling entropy correlates with drug resistance and further describe how entropy could be used to identify the Achilles' heels of cancer cells. In summary, signalling entropy is a deep and powerful concept, based on rigorous statistical mechanical principles, which, with improved data quality and coverage, will allow a much deeper understanding of the systems biological principles underlying normal and disease physiology.
[ { "created": "Tue, 5 Aug 2014 15:25:02 GMT", "version": "v1" } ]
2014-08-06
[ [ "Teschendorff", "Andrew", "" ], [ "Sollich", "Peter", "" ], [ "Kuehn", "Reimer", "" ] ]
A key challenge in systems biology is the elucidation of the underlying principles, or fundamental laws, which determine the cellular phenotype. Understanding how these fundamental principles are altered in diseases like cancer is important for translating basic scientific knowledge into clinical advances. While significant progress is being made, with the identification of novel drug targets and treatments by means of systems biological methods, our fundamental systems-level understanding of why certain treatments succeed and others fail is still lacking. Here we advocate a novel methodological framework for systems analysis and interpretation of molecular omic data, based on statistical mechanical principles. Specifically, we propose the notion of cellular signalling entropy (or uncertainty) as a novel means of analysing and interpreting omic data, and more fundamentally, as a means of elucidating systems-level principles underlying basic biology and disease. We describe the power of signalling entropy to discriminate cells according to differentiation potential and cancer status. We further argue the case for an empirical cellular entropy-robustness correlation theorem and demonstrate its existence in cancer cell line drug sensitivity data. Specifically, we find that high signalling entropy correlates with drug resistance, and we further describe how entropy could be used to identify the Achilles' heels of cancer cells. In summary, signalling entropy is a deep and powerful concept, based on rigorous statistical mechanical principles, which, with improved data quality and coverage, will allow a much deeper understanding of the systems biological principles underlying normal and disease physiology.
1504.03345
Alvaro Corvalan
Romina Cardo and Alvaro Corval\'an
Regulaci\'on de especies competitivas bajo modelos impulsivos de pesca-siembra regidos por operadores maximales
in Spanish, Figures below the references
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider a strategy, based on the values of the left one-sided Hardy-Littlewood maximal operator, to regulate the impulsive seeding of young fish in the case where two or more competing species (e.g., perch and trout in lakes) interact. This strategy may seem natural from the point of view of those who propose the seeding, and it also allows controlling the resulting magnitudes of the competing species.
[ { "created": "Sun, 5 Apr 2015 05:38:25 GMT", "version": "v1" } ]
2015-04-15
[ [ "Cardo", "Romina", "" ], [ "Corvalán", "Alvaro", "" ] ]
In this paper we consider a strategy, based on the values of the left one-sided Hardy-Littlewood maximal operator, to regulate the impulsive seeding of young fish in the case where two or more competing species (e.g., perch and trout in lakes) interact. This strategy may seem natural from the point of view of those who propose the seeding, and it also allows controlling the resulting magnitudes of the competing species.
1206.3720
Daniel Klein
Anna Bershteyn, Daniel J. Klein, Edward Wenger, Philip A. Eckhoff
Description of the EMOD-HIV Model v0.7
16 pages, 5 figures
null
null
null
q-bio.QM q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The expansion of tools against HIV transmission has brought increased interest in epidemiological models that can predict the impact of these interventions. The EMOD-HIV model was recently compared to eleven other independently developed mathematical models of HIV transmission to determine the extent to which they agree about the potential impact of expanded use of antiretroviral therapy in South Africa. Here we describe in detail the modeling methodology used to produce the results in this comparison, which we term EMOD-HIV v0.7. We include a discussion of the structure and a full list of model parameters. We also discuss the architecture of the model, and its potential utility in comparing structural assumptions within a single modeling framework.
[ { "created": "Sun, 17 Jun 2012 03:36:48 GMT", "version": "v1" } ]
2012-06-19
[ [ "Bershteyn", "Anna", "" ], [ "Klein", "Daniel J.", "" ], [ "Wenger", "Edward", "" ], [ "Eckhoff", "Philip A.", "" ] ]
The expansion of tools against HIV transmission has brought increased interest in epidemiological models that can predict the impact of these interventions. The EMOD-HIV model was recently compared to eleven other independently developed mathematical models of HIV transmission to determine the extent to which they agree about the potential impact of expanded use of antiretroviral therapy in South Africa. Here we describe in detail the modeling methodology used to produce the results in this comparison, which we term EMOD-HIV v0.7. We include a discussion of the structure and a full list of model parameters. We also discuss the architecture of the model, and its potential utility in comparing structural assumptions within a single modeling framework.
1501.02941
Ralf Metzler
Aljaz Godec and Ralf Metzler
Signal focusing through active transport
5 pages. 3 figures, includes supplementary material (2 pages)
null
null
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biological cells and novel diagnostic devices, biochemical receptors need to be sensitive to extremely small concentration changes of signaling molecules. The accuracy of such molecular signaling is ultimately limited by the counting noise imposed by the thermal diffusion of molecules. Many macromolecules and organelles transiently bind to molecular motors and are then actively transported. We here show that a random albeit directed delivery of signaling molecules to within a typical diffusion distance of the receptor reduces the correlation time of the counting noise, effecting improved sensing precision. The conditions for this active focusing are indeed compatible with observations in living cells. Our results are relevant for a better understanding of molecular cellular signaling and the design of novel diagnostic devices.
[ { "created": "Tue, 13 Jan 2015 10:23:33 GMT", "version": "v1" } ]
2023-04-06
[ [ "Godec", "Aljaz", "" ], [ "Metzler", "Ralf", "" ] ]
In biological cells and novel diagnostic devices, biochemical receptors need to be sensitive to extremely small concentration changes of signaling molecules. The accuracy of such molecular signaling is ultimately limited by the counting noise imposed by the thermal diffusion of molecules. Many macromolecules and organelles transiently bind to molecular motors and are then actively transported. We here show that a random albeit directed delivery of signaling molecules to within a typical diffusion distance of the receptor reduces the correlation time of the counting noise, effecting improved sensing precision. The conditions for this active focusing are indeed compatible with observations in living cells. Our results are relevant for a better understanding of molecular cellular signaling and the design of novel diagnostic devices.
2408.00984
M. Sohel Rahman
Saleh Sakib Ahmed, Nahian Shabab, Md. Abul Hassan Samee and M. Sohel Rahman
GraphAge: Unleashing the power of Graph Neural Network to Decode Epigenetic Aging
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
DNA methylation is a crucial epigenetic marker used in various clocks to predict epigenetic age. However, many existing clocks fail to account for crucial information about CpG sites and their interrelationships, such as co-methylation patterns. We present a novel approach that represents methylation data as a graph, using methylation values and relevant information about CpG sites as nodes, and relationships like co-methylation, same gene, and same chromosome as edges. We then use a Graph Neural Network (GNN) to predict age. Thus our model, GraphAge, leverages both structural and positional information for prediction as well as for better interpretation. Although we had to train in a constrained compute setting, GraphAge still showed competitive performance, with a Mean Absolute Error (MAE) of 3.207 and a Mean Squared Error (MSE) of 25.277, slightly outperforming the current state of the art. Perhaps more importantly, we utilized a GNN explainer for interpretation purposes and were able to unearth interesting insights (e.g., key CpG sites, pathways, and their relationships through Methylation Regulated Networks in the context of aging), which were not possible to 'decode' without leveraging the unique capability of GraphAge to 'encode' various structural relationships. GraphAge has the potential to consume and utilize all relevant information (if available) about an individual that relates to the complex process of aging. In that sense, it is one of a kind and can be seen as the first benchmark for a multimodal model that can incorporate all this information in order to close the gap in our understanding of the true nature of aging.
[ { "created": "Fri, 2 Aug 2024 02:55:56 GMT", "version": "v1" } ]
2024-08-05
[ [ "Ahmed", "Saleh Sakib", "" ], [ "Shabab", "Nahian", "" ], [ "Samee", "Md. Abul Hassan", "" ], [ "Rahman", "M. Sohel", "" ] ]
DNA methylation is a crucial epigenetic marker used in various clocks to predict epigenetic age. However, many existing clocks fail to account for crucial information about CpG sites and their interrelationships, such as co-methylation patterns. We present a novel approach that represents methylation data as a graph, using methylation values and relevant information about CpG sites as nodes, and relationships like co-methylation, same gene, and same chromosome as edges. We then use a Graph Neural Network (GNN) to predict age. Thus our model, GraphAge, leverages both structural and positional information for prediction as well as for better interpretation. Although we had to train in a constrained compute setting, GraphAge still showed competitive performance, with a Mean Absolute Error (MAE) of 3.207 and a Mean Squared Error (MSE) of 25.277, slightly outperforming the current state of the art. Perhaps more importantly, we utilized a GNN explainer for interpretation purposes and were able to unearth interesting insights (e.g., key CpG sites, pathways, and their relationships through Methylation Regulated Networks in the context of aging), which were not possible to 'decode' without leveraging the unique capability of GraphAge to 'encode' various structural relationships. GraphAge has the potential to consume and utilize all relevant information (if available) about an individual that relates to the complex process of aging. In that sense, it is one of a kind and can be seen as the first benchmark for a multimodal model that can incorporate all this information in order to close the gap in our understanding of the true nature of aging.
2204.10504
Siddhartha Chakrabarty
Sonjoy Pan, Siddhartha P. Chakrabarty, Soumyendu Raha
Progression, Detection and Remission: Evolution of Chronic Myeloid Leukemia using a three-stage probabilistic model
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We present a three-stage probabilistic model for the progression of Chronic Myeloid Leukemia (CML), as manifested by the leukemic stem cells, progenitor cells and mature leukemic cells. This progression is captured through the processes of cell division and cell mutation, with probabilities of occurrence assigned to both. The key contributions of this study include the determination of the expected numbers of leukemic stem cells, progenitor cells and mature leukemic cells, as well as the total number of these cells (in terms of probabilities, and contingent on the initial cell count); the expected time to reach a threshold level of total and injurious leukemic cells, as well as the critical time at which the disease changes phase; the probability of extinction of CML; and the dynamics of CML evolution consequent to primary therapy. Finally, the results obtained are demonstrated with numerical illustrations.
[ { "created": "Fri, 22 Apr 2022 05:13:44 GMT", "version": "v1" } ]
2022-04-25
[ [ "Pan", "Sonjoy", "" ], [ "Chakrabarty", "Siddhartha P.", "" ], [ "Raha", "Soumyendu", "" ] ]
We present a three-stage probabilistic model for the progression of Chronic Myeloid Leukemia (CML), as manifested by the leukemic stem cells, progenitor cells and mature leukemic cells. This progression is captured through the processes of cell division and cell mutation, with probabilities of occurrence assigned to both. The key contributions of this study include the determination of the expected numbers of leukemic stem cells, progenitor cells and mature leukemic cells, as well as the total number of these cells (in terms of probabilities, and contingent on the initial cell count); the expected time to reach a threshold level of total and injurious leukemic cells, as well as the critical time at which the disease changes phase; the probability of extinction of CML; and the dynamics of CML evolution consequent to primary therapy. Finally, the results obtained are demonstrated with numerical illustrations.
q-bio/0612018
Nikolai Sinitsyn
N. A. Sinitsyn and Ilya Nemenman
The Berry phase and the pump flux in stochastic chemical kinetics
null
EPL, 77 (2007) 58001
10.1209/0295-5075/80/38001
LAUR #06-7207
q-bio.QM q-bio.MN
null
We study a classical two-state stochastic system in a sea of substrates and products (absorbing states), which can be interpreted as a single Michaelis-Menten catalyzing enzyme or as a channel on a cell surface. We introduce a novel general method and use it to derive the expression for the full counting statistics of transitions among the absorbing states. For the evolution of the system under a periodic perturbation of the kinetic rates, the latter contains a term with a purely geometrical (the Berry phase) interpretation. This term gives rise to a pump current between the absorbing states, which is due entirely to the stochastic nature of the system. We calculate the first two cumulants of this current, and we argue that it is observable experimentally.
[ { "created": "Mon, 11 Dec 2006 20:35:22 GMT", "version": "v1" }, { "created": "Thu, 15 Feb 2007 23:55:19 GMT", "version": "v2" } ]
2009-11-13
[ [ "Sinitsyn", "N. A.", "" ], [ "Nemenman", "Ilya", "" ] ]
We study a classical two-state stochastic system in a sea of substrates and products (absorbing states), which can be interpreted as a single Michaelis-Menten catalyzing enzyme or as a channel on a cell surface. We introduce a novel general method and use it to derive the expression for the full counting statistics of transitions among the absorbing states. For the evolution of the system under a periodic perturbation of the kinetic rates, the latter contains a term with a purely geometrical (the Berry phase) interpretation. This term gives rise to a pump current between the absorbing states, which is due entirely to the stochastic nature of the system. We calculate the first two cumulants of this current, and we argue that it is observable experimentally.
2204.01504
Bela M. Mulder
Panayiotis Foteinopoulos and Bela M. Mulder
Microtubule organization and cell geometry
38 pages, 13 figures
null
10.1103/PhysRevE.106.054408
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a systematic study of the influence of cell geometry on the orientational distribution of microtubules (MTs) nucleated from a single microtubule organizing center (MTOC). For simplicity we consider an elliptical cell geometry, a setting appropriate to a generic non-spherical animal cell. Within this context we introduce four models of increasing complexity, in each case introducing additional mechanisms that govern the interaction of the MTs with the cell boundary. In order, we consider the cases: MTs that can bind to the boundary with a fixed mean residence time (M0), force-producing MTs that can slide on the boundary towards the cell poles (MS), MTs that interact with a generic polarity factor that is transported and deposited at the boundary, and which in turn stabilizes the MTs at the boundary (MP), and a final model in which both sliding and stabilization by polarity factors is taken into account (MSP). In the baseline model (M0), the exponential length distribution of MTs causes most of the interactions at the cell boundary to occur along the shorter transverse direction in the cell, leading to transverse biaxial order. MT sliding (MS) is able to reorient the main axis of this biaxial order along the longitudinal axis. The polarization mechanism introduced in MP and MSP overrules the geometric bias towards bipolar order observed in M0 and MS, and allows the establishment of unipolar order either along the short- (MP) or the long cell axis (MSP). The behavior of the latter two models can be qualitatively reproduced by a very simple toy model with discrete MT orientations.
[ { "created": "Mon, 4 Apr 2022 14:05:50 GMT", "version": "v1" } ]
2022-11-30
[ [ "Foteinopoulos", "Panayiotis", "" ], [ "Mulder", "Bela M.", "" ] ]
We present a systematic study of the influence of cell geometry on the orientational distribution of microtubules (MTs) nucleated from a single microtubule organizing center (MTOC). For simplicity we consider an elliptical cell geometry, a setting appropriate to a generic non-spherical animal cell. Within this context we introduce four models of increasing complexity, in each case introducing additional mechanisms that govern the interaction of the MTs with the cell boundary. In order, we consider the cases: MTs that can bind to the boundary with a fixed mean residence time (M0), force-producing MTs that can slide on the boundary towards the cell poles (MS), MTs that interact with a generic polarity factor that is transported and deposited at the boundary, and which in turn stabilizes the MTs at the boundary (MP), and a final model in which both sliding and stabilization by polarity factors is taken into account (MSP). In the baseline model (M0), the exponential length distribution of MTs causes most of the interactions at the cell boundary to occur along the shorter transverse direction in the cell, leading to transverse biaxial order. MT sliding (MS) is able to reorient the main axis of this biaxial order along the longitudinal axis. The polarization mechanism introduced in MP and MSP overrules the geometric bias towards bipolar order observed in M0 and MS, and allows the establishment of unipolar order either along the short- (MP) or the long cell axis (MSP). The behavior of the latter two models can be qualitatively reproduced by a very simple toy model with discrete MT orientations.
1701.03476
Dhananjay Suresh
R Srikar, Dhananjay Suresh, Ajit Zambre, Kristen Taylor, Sarah Chapman, Matthew Leevy, Anandhi Upendran, Raghuraman Kannan
Targeted nanoconjugate co-delivering siRNA and tyrosine kinase inhibitor to KRAS mutant NSCLC dissociates GAB1-SHP2 post oncogene knockdown
14 pages, 9 figures, research article
Scientific Reports 6, Article number: 30245 (2016)
10.1038/srep30245
null
q-bio.SC
http://creativecommons.org/licenses/by/4.0/
A tri-block nanoparticle (TBN), comprising an enzymatically cleavable porous gelatin nanocore encapsulating gefitinib (a tyrosine kinase inhibitor (TKI)) and surface-functionalized with a cetuximab-siRNA conjugate, has been synthesized. Targeted delivery of siRNA to undruggable KRAS-mutated non-small cell lung cancer cells would sensitize the cells to TKI drugs and offers an efficient therapy for treating cancer; however, efficiently delivering siRNA and releasing it in the cytoplasm remains a major challenge. We have shown that TBN can efficiently deliver siRNA to the cytoplasm of KRAS mutant H23 Non-Small Cell Lung Cancer (NSCLC) cells for oncogene knockdown, subsequently sensitizing them to TKI. In the absence of TKI, the nanoparticle showed minimal toxicity, suggesting that the cells adopt a parallel GAB1-mediated survival pathway. In H23 cells, activated ERK results in phosphorylation of GAB1 on serine and threonine residues to form the GAB1-p85 PI3K complex. In the absence of TKI, knocking down the oncogene dephosphorylated ERK and negated the complex formation. This event led to tyrosine phosphorylation at the Tyr627 domain of GAB1, which regulated EGFR signaling by recruiting SHP2. In the presence of TKI, GAB1-SHP2 dissociation occurs, leading to cell death. The outcome of this study provides a promising platform for treating NSCLC patients harboring the KRAS mutation.
[ { "created": "Thu, 12 Jan 2017 19:29:54 GMT", "version": "v1" } ]
2017-01-16
[ [ "Srikar", "R", "" ], [ "Suresh", "Dhananjay", "" ], [ "Zambre", "Ajit", "" ], [ "Taylor", "Kristen", "" ], [ "Chapman", "Sarah", "" ], [ "Leevy", "Matthew", "" ], [ "Upendran", "Anandhi", "" ], [ "Kannan", "Raghuraman", "" ] ]
A tri-block nanoparticle (TBN), comprising an enzymatically cleavable porous gelatin nanocore encapsulating gefitinib (a tyrosine kinase inhibitor (TKI)) and surface-functionalized with a cetuximab-siRNA conjugate, has been synthesized. Targeted delivery of siRNA to undruggable KRAS-mutated non-small cell lung cancer cells would sensitize the cells to TKI drugs and offers an efficient therapy for treating cancer; however, efficiently delivering siRNA and releasing it in the cytoplasm remains a major challenge. We have shown that TBN can efficiently deliver siRNA to the cytoplasm of KRAS mutant H23 Non-Small Cell Lung Cancer (NSCLC) cells for oncogene knockdown, subsequently sensitizing them to TKI. In the absence of TKI, the nanoparticle showed minimal toxicity, suggesting that the cells adopt a parallel GAB1-mediated survival pathway. In H23 cells, activated ERK results in phosphorylation of GAB1 on serine and threonine residues to form the GAB1-p85 PI3K complex. In the absence of TKI, knocking down the oncogene dephosphorylated ERK and negated the complex formation. This event led to tyrosine phosphorylation at the Tyr627 domain of GAB1, which regulated EGFR signaling by recruiting SHP2. In the presence of TKI, GAB1-SHP2 dissociation occurs, leading to cell death. The outcome of this study provides a promising platform for treating NSCLC patients harboring the KRAS mutation.
2003.11595
Pablo Rodr\'iguez-S\'anchez
Pablo Rodr\'iguez-S\'anchez, Egbert H. van Nes, Marten Scheffer
Early warning signals for desynchronization in periodically forced systems
10 pages, 6 figures, 1 appendix
null
null
null
q-bio.QM nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conditions such as insomnia, cardiac arrhythmia and jet lag share a common feature: they are all related to the ability of biological systems to synchronize with the day-night cycle. When organisms lose resilience, this ability to synchronize can weaken until they eventually become desynchronized, in a state of malfunctioning or sickness. It would be useful to measure this loss of resilience before full desynchronization takes place. Several dynamical indicators of resilience (DIORs) have been proposed to account for the loss of resilience of a dynamical system. The performance of these indicators depends on the underlying mechanism of the critical transition, usually a saddle-node bifurcation. Before such a bifurcation, the recovery rate from perturbations of the system becomes slower, a mechanism known as critical slowing down. Here we show that, for a wide class of biological systems, desynchronization happens through another bifurcation, namely the saddle-node of cycles, for which critical slowing down cannot be directly detected. Such a bifurcation represents a system transitioning from a synchronized (phase-locked) to a desynchronized state, or vice versa. We show that after an appropriate transformation we can also detect this bifurcation using dynamical indicators of resilience. We test this method with data generated by models of sleep-wake cycles.
[ { "created": "Wed, 25 Mar 2020 19:31:34 GMT", "version": "v1" } ]
2020-03-27
[ [ "Rodríguez-Sánchez", "Pablo", "" ], [ "van Nes", "Egbert H.", "" ], [ "Scheffer", "Marten", "" ] ]
Conditions such as insomnia, cardiac arrhythmia and jet lag share a common feature: they are all related to the ability of biological systems to synchronize with the day-night cycle. When organisms lose resilience, this ability to synchronize can weaken until they eventually become desynchronized, in a state of malfunctioning or sickness. It would be useful to measure this loss of resilience before full desynchronization takes place. Several dynamical indicators of resilience (DIORs) have been proposed to account for the loss of resilience of a dynamical system. The performance of these indicators depends on the underlying mechanism of the critical transition, usually a saddle-node bifurcation. Before such a bifurcation, the recovery rate from perturbations of the system becomes slower, a mechanism known as critical slowing down. Here we show that, for a wide class of biological systems, desynchronization happens through another bifurcation, namely the saddle-node of cycles, for which critical slowing down cannot be directly detected. Such a bifurcation represents a system transitioning from a synchronized (phase-locked) to a desynchronized state, or vice versa. We show that after an appropriate transformation we can also detect this bifurcation using dynamical indicators of resilience. We test this method with data generated by models of sleep-wake cycles.
1507.08736
Stephen Odaibo
Stephen G. Odaibo
A Sinc Wavelet Describes the Receptive Fields of Neurons in the Motion Cortex
This work was presented in part at the 44th Annual Meeting of the Society for Neuroscience in Washington, DC
null
null
null
q-bio.NC cs.CV cs.IT math.IT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual perception results from a systematic transformation of the information flowing through the visual system. In the neuronal hierarchy, the response properties of single neurons are determined by neurons located one level below, and in turn, determine the responses of neurons located one level above. Therefore in modeling receptive fields, it is essential to ensure that the response properties of neurons in a given level can be generated by combining the response models of neurons in its input levels. However, existing response models of neurons in the motion cortex do not inherently yield the temporal frequency filtering gradient (TFFG) property that is known to emerge along the primary visual cortex (V1) to middle temporal (MT) motion processing stream. TFFG is the change from predominantly lowpass to predominantly bandpass temporal frequency filtering character along the V1 to MT pathway (Foster et al 1985; DeAngelis et al 1993; Hawken et al 1996). We devised a new model, the sinc wavelet model (Odaibo, 2014), which logically and efficiently generates the TFFG. The model replaces the Gabor function's sine wave carrier with a sinc (sin(x)/x) function, and has the same or fewer number of parameters as existing models. Because of its logical consistency with the emergent network property of TFFG, we conclude that the sinc wavelet is a better model for the receptive fields of motion cortex neurons. This model will provide new physiological insights into how the brain represents visual information.
[ { "created": "Fri, 31 Jul 2015 02:55:54 GMT", "version": "v1" } ]
2015-08-03
[ [ "Odaibo", "Stephen G.", "" ] ]
Visual perception results from a systematic transformation of the information flowing through the visual system. In the neuronal hierarchy, the response properties of single neurons are determined by neurons located one level below, and in turn, determine the responses of neurons located one level above. Therefore in modeling receptive fields, it is essential to ensure that the response properties of neurons in a given level can be generated by combining the response models of neurons in its input levels. However, existing response models of neurons in the motion cortex do not inherently yield the temporal frequency filtering gradient (TFFG) property that is known to emerge along the primary visual cortex (V1) to middle temporal (MT) motion processing stream. TFFG is the change from predominantly lowpass to predominantly bandpass temporal frequency filtering character along the V1 to MT pathway (Foster et al 1985; DeAngelis et al 1993; Hawken et al 1996). We devised a new model, the sinc wavelet model (Odaibo, 2014), which logically and efficiently generates the TFFG. The model replaces the Gabor function's sine wave carrier with a sinc (sin(x)/x) function, and has the same or fewer number of parameters as existing models. Because of its logical consistency with the emergent network property of TFFG, we conclude that the sinc wavelet is a better model for the receptive fields of motion cortex neurons. This model will provide new physiological insights into how the brain represents visual information.
q-bio/0404012
Silvia Scarpetta
Li Zhaoping, Alex Lewis, and Silvia Scarpetta
Mathematical Analysis and Simulations of the Neural Circuit for Locomotion in Lamprey
4 pages, accepted for publication in Physical Review Letters
null
10.1103/PhysRevLett.92.198106
null
q-bio.NC cond-mat.dis-nn
null
We analyze the dynamics of the neural circuit of the lamprey central pattern generator (CPG). This analysis provides insights into how neural interactions form oscillators and enable spontaneous oscillations in a network of damped oscillators, which were not apparent in previous simulations or abstract phase oscillator models. We also show how the different behaviour regimes (characterized by phase and amplitude relationships between oscillators) of forward/backward swimming, and turning, can be controlled using the neural connection strengths and external inputs.
[ { "created": "Thu, 8 Apr 2004 20:06:45 GMT", "version": "v1" } ]
2009-11-10
[ [ "Zhaoping", "Li", "" ], [ "Lewis", "Alex", "" ], [ "Scarpetta", "Silvia", "" ] ]
We analyze the dynamics of the neural circuit of the lamprey central pattern generator (CPG). This analysis provides insights into how neural interactions form oscillators and enable spontaneous oscillations in a network of damped oscillators, which were not apparent in previous simulations or abstract phase oscillator models. We also show how the different behaviour regimes (characterized by phase and amplitude relationships between oscillators) of forward/backward swimming, and turning, can be controlled using the neural connection strengths and external inputs.
1811.05923
Fabio Sanchez PhD
Fabio Sanchez and Juan Gabriel Calvo
The role of short-term immigration on disease dynamics: An SIR model with age-structure
16 pages, 27 figures
null
10.15517/RMTA.V26I1.36229
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We formulate an age-structured nonlinear partial differential equation model that features short-term immigration effects in a population. Individuals can immigrate into the population in any of the three stages of the model: susceptible, infected or recovered. Global stability of the immigration-free and infection-free equilibria is discussed. A generalized numerical framework is established and specific short-term immigration scenarios are explored.
[ { "created": "Wed, 14 Nov 2018 17:48:04 GMT", "version": "v1" }, { "created": "Sat, 15 Dec 2018 15:48:34 GMT", "version": "v2" } ]
2019-08-07
[ [ "Sanchez", "Fabio", "" ], [ "Calvo", "Juan Gabriel", "" ] ]
We formulate an age-structured nonlinear partial differential equation model that features short-term immigration effects in a population. Individuals can immigrate into the population in any of the three stages of the model: susceptible, infected or recovered. Global stability of the immigration-free and infection-free equilibria is discussed. A generalized numerical framework is established and specific short-term immigration scenarios are explored.
1301.5093
Mireille Regnier
Olga Berillo, Assel Issabekova, Mireille Regnier (INRIA Saclay - Ile de France, LIX), Anatoliy T. Ivashchenko
Characteristics of Intronic and Intergenic Human miRNAs and Features of their Interaction with mRNA
World Academy of Science, Engineering and Technology (2011)
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulatory relationships of 686 intronic miRNAs and 784 intergenic miRNAs with mRNAs of 51 intronic miRNA-coding genes were established. Interaction features of the studied miRNAs with the 5'UTR, CDS and 3'UTR of the mRNA of each gene were revealed. Functional regions of mRNA were shown to be significantly heterogeneous with respect to the number of miRNA binding sites and to the location density of these sites.
[ { "created": "Tue, 22 Jan 2013 07:32:40 GMT", "version": "v1" } ]
2013-01-23
[ [ "Berillo", "Olga", "", "INRIA Saclay - Ile\n de France, LIX" ], [ "Issabekova", "Assel", "", "INRIA Saclay - Ile\n de France, LIX" ], [ "Regnier", "Mireille", "", "INRIA Saclay - Ile\n de France, LIX" ], [ "Ivashchenko", "Anatoliy T.", "" ] ]
Regulatory relationships of 686 intronic miRNAs and 784 intergenic miRNAs with mRNAs of 51 intronic miRNA-coding genes were established. Interaction features of the studied miRNAs with the 5'UTR, CDS and 3'UTR of the mRNA of each gene were revealed. Functional regions of mRNA were shown to be significantly heterogeneous with respect to the number of miRNA binding sites and to the location density of these sites.
1501.02709
Pranjal Mahanta
P. Mahanta, A. Bhardwaj, K. Kumar, V.S. Reddy, S. Ramakumar
Modulation of N- to C-terminal interactions enhances protein stability
41 pages, 10 figures (including supplemental materials)
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although several factors have been attributed to thermostability, the stabilization strategies used by proteins are still enigmatic. Studies on recombinant xylanase, which has the ubiquitous ({\beta}/{\alpha})8 TIM (triosephosphate isomerase) barrel fold, showed that a single extreme N-terminus mutation (V1L) markedly enhanced thermostability by 5 {\deg}C without loss of catalytic activity, whereas another mutation at the same position, V1A, decreased stability by 2 {\deg}C. Based on computational analysis of their crystal structures, including residue interaction networks, we established a link between N- to C-terminal contacts and protein stability. We demonstrate that augmenting N- to C-terminal non-covalent interactions is associated with enhanced protein stability. We propose that the strategy of mutations at the termini could be exploited to modulate stability without compromising enzymatic activity or, more generally, protein function, in diverse folds where the N- and C-termini are in close proximity. Finally, we discuss the implications of our results for the development of therapeutics involving proteins and for designing effective protein engineering strategies.
[ { "created": "Mon, 12 Jan 2015 16:42:29 GMT", "version": "v1" } ]
2015-01-13
[ [ "Mahanta", "P.", "" ], [ "Bhardwaj", "A.", "" ], [ "Kumar", "K.", "" ], [ "Reddy", "V. S.", "" ], [ "Ramakumar", "S.", "" ] ]
Although several factors have been attributed to thermostability, the stabilization strategies used by proteins are still enigmatic. Studies on recombinant xylanase, which has the ubiquitous ({\beta}/{\alpha})8 TIM (triosephosphate isomerase) barrel fold, showed that a single extreme N-terminus mutation (V1L) markedly enhanced thermostability by 5 {\deg}C without loss of catalytic activity, whereas another mutation at the same position, V1A, decreased stability by 2 {\deg}C. Based on computational analysis of their crystal structures, including residue interaction networks, we established a link between N- to C-terminal contacts and protein stability. We demonstrate that augmenting N- to C-terminal non-covalent interactions is associated with enhanced protein stability. We propose that the strategy of mutations at the termini could be exploited to modulate stability without compromising enzymatic activity or, more generally, protein function, in diverse folds where the N- and C-termini are in close proximity. Finally, we discuss the implications of our results for the development of therapeutics involving proteins and for designing effective protein engineering strategies.
1805.05385
Haiming Tang
Haiming Tang and Angela Wilkins
Reconstruction of the deep history of "Parent-Daughter" relationships among vertebrate paralogs
A novel method to reconstruct the deep history of "parent-daughter" relationships among vertebrate paralogs, which combines the phylogenetic reconstruction of duplications at different evolutionary periods with synteny evidence collected from preserved homologous gene orders
null
null
null
q-bio.PE
http://creativecommons.org/publicdomain/zero/1.0/
Gene duplication is a major mechanism through which new genetic material is generated. Although numerous methods have been developed to differentiate orthologs and paralogs, very few differentiate the "Parent-Daughter" relationship among paralogous pairs. Following the terminology coined by Mira et al., we refer to the "Parent" copy as the paralogous copy that stays at the original genomic position of the "original copy" before the duplication event, while the "Daughter" copy occupies a new genomic locus. Here we present a novel method which combines the phylogenetic reconstruction of duplications at different evolutionary periods with synteny evidence collected from preserved homologous gene orders. We reconstructed for the first time a deep evolutionary history of "Parent-Daughter" relationships among genes that descended from two rounds of whole genome duplications (2R WGDs) in early vertebrates and were further duplicated in later common ancestors such as early Mammalia and early Primates. Our analysis reveals that the "Parent" copy has accumulated significantly fewer mutations than the "Daughter" copy since their divergence after the duplication event. More strikingly, we found that the "Parent" copy in a duplication event continues to be the "Parent" of the younger successive duplication events that lead to "grand-daughters".
[ { "created": "Mon, 14 May 2018 19:07:46 GMT", "version": "v1" }, { "created": "Tue, 22 May 2018 21:10:11 GMT", "version": "v2" } ]
2018-05-24
[ [ "Tang", "Haiming", "" ], [ "Wilkins", "Angela", "" ] ]
Gene duplication is a major mechanism through which new genetic material is generated. Although numerous methods have been developed to differentiate orthologs and paralogs, very few differentiate the "Parent-Daughter" relationship among paralogous pairs. Following the terminology coined by Mira et al., we refer to the "Parent" copy as the paralogous copy that stays at the original genomic position of the "original copy" before the duplication event, while the "Daughter" copy occupies a new genomic locus. Here we present a novel method which combines the phylogenetic reconstruction of duplications at different evolutionary periods with synteny evidence collected from preserved homologous gene orders. We reconstructed for the first time a deep evolutionary history of "Parent-Daughter" relationships among genes that descended from two rounds of whole genome duplications (2R WGDs) in early vertebrates and were further duplicated in later common ancestors such as early Mammalia and early Primates. Our analysis reveals that the "Parent" copy has accumulated significantly fewer mutations than the "Daughter" copy since their divergence after the duplication event. More strikingly, we found that the "Parent" copy in a duplication event continues to be the "Parent" of the younger successive duplication events that lead to "grand-daughters".
1310.6012
Christoph Adami
Randal S. Olson, David B. Knoester, and Christoph Adami
Evolution of swarming behavior is shaped by how predators attack
25 pages, 11 figures, 5 tables, including 2 Supplementary Figures. Version to appear in "Artificial Life"
null
null
null
q-bio.PE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animal grouping behaviors have been widely studied due to their implications for understanding social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. An important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior. In the past decade, researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator-prey models. The selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator, thus causing an endless cycle of movement toward the center of the group. Using an evolutionary model of a predator-prey system, we show that how predators attack is critical to the evolution of the selfish herd. Following this discovery, we show that density-dependent predation provides an abstraction of Hamilton's original formulation of ``domains of danger.'' Finally, we verify that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital evolutionary model, refines the assumptions of the selfish herd hypothesis, and generalizes the domain of danger concept to density-dependent predation.
[ { "created": "Tue, 22 Oct 2013 19:10:38 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2015 20:05:54 GMT", "version": "v2" } ]
2015-11-25
[ [ "Olson", "Randal S.", "" ], [ "Knoester", "David B.", "" ], [ "Adami", "Christoph", "" ] ]
Animal grouping behaviors have been widely studied due to their implications for understanding social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. An important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior. In the past decade, researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator-prey models. The selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator, thus causing an endless cycle of movement toward the center of the group. Using an evolutionary model of a predator-prey system, we show that how predators attack is critical to the evolution of the selfish herd. Following this discovery, we show that density-dependent predation provides an abstraction of Hamilton's original formulation of ``domains of danger.'' Finally, we verify that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital evolutionary model, refines the assumptions of the selfish herd hypothesis, and generalizes the domain of danger concept to density-dependent predation.
1710.02762
Gurdip Uppal
Gurdip Uppal, Dervis Can Vural
Shearing in flow environment promotes evolution of social behavior in microbial populations
20 pages, 7 figures
eLife 2018;7:e34862
10.7554/eLife.34862
null
q-bio.PE nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How producers of public goods persist in microbial communities is a major question in evolutionary biology. Cooperation is evolutionarily unstable, since cheating strains can reproduce more quickly and take over. Spatial structure has been shown to be a robust mechanism for the evolution of cooperation. Here we study how spatial assortment might emerge from native dynamics and show that fluid flow shear promotes cooperative behavior. Social structures arise naturally from our advection-diffusion-reaction model as self-reproducing Turing patterns. We computationally study the effects of fluid advection on these patterns as a mechanism to enable or enhance social behavior. Our central finding is that flow shear enables and promotes social behavior in microbes by increasing the group fragmentation rate and thereby limiting the spread of cheating strains. Regions of the flow domain with higher shear admit high cooperativity and large population density, whereas low shear regions are devoid of life due to opportunistic mutations.
[ { "created": "Sun, 8 Oct 2017 00:55:03 GMT", "version": "v1" }, { "created": "Thu, 24 May 2018 20:14:26 GMT", "version": "v2" } ]
2018-05-28
[ [ "Uppal", "Gurdip", "" ], [ "Vural", "Dervis Can", "" ] ]
How producers of public goods persist in microbial communities is a major question in evolutionary biology. Cooperation is evolutionarily unstable, since cheating strains can reproduce more quickly and take over. Spatial structure has been shown to be a robust mechanism for the evolution of cooperation. Here we study how spatial assortment might emerge from native dynamics and show that fluid flow shear promotes cooperative behavior. Social structures arise naturally from our advection-diffusion-reaction model as self-reproducing Turing patterns. We computationally study the effects of fluid advection on these patterns as a mechanism to enable or enhance social behavior. Our central finding is that flow shear enables and promotes social behavior in microbes by increasing the group fragmentation rate and thereby limiting the spread of cheating strains. Regions of the flow domain with higher shear admit high cooperativity and large population density, whereas low shear regions are devoid of life due to opportunistic mutations.
1805.04634
Min Xu
Kai Wen Wang, Xiangrui Zeng, Xiaodan Liang, Zhiguang Huo, Eric P. Xing, Min Xu
Image-derived generative modeling of pseudo-macromolecular structures - towards the statistical assessment of Electron CryoTomography template matching
null
British Machine Vision Conference (BMVC) 2018
null
null
q-bio.QM cs.CV stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular Electron CryoTomography (CECT) is a 3D imaging technique that captures information about the structure and spatial organization of macromolecular complexes within single cells, in near-native state and at sub-molecular resolution. Although template matching is often used to locate macromolecules in a CECT image, it is insufficient as it only measures the relative structural similarity. Therefore, it is preferable to assess the statistical credibility of the decision through hypothesis testing, requiring many templates derived from a diverse population of macromolecular structures. Due to the very limited number of known structures, we need a generative model to efficiently and reliably sample pseudo-structures from the complex distribution of macromolecular structures. To address this challenge, we propose a novel image-derived approach for performing hypothesis testing for template matching by constructing generative models using the generative adversarial network. Finally, we conducted hypothesis testing experiments for template matching on both simulated and experimental subtomograms, allowing us to conclude the identity of subtomograms with high statistical credibility and significantly reducing false positives.
[ { "created": "Sat, 12 May 2018 02:00:30 GMT", "version": "v1" } ]
2018-07-04
[ [ "Wang", "Kai Wen", "" ], [ "Zeng", "Xiangrui", "" ], [ "Liang", "Xiaodan", "" ], [ "Huo", "Zhiguang", "" ], [ "Xing", "Eric P.", "" ], [ "Xu", "Min", "" ] ]
Cellular Electron CryoTomography (CECT) is a 3D imaging technique that captures information about the structure and spatial organization of macromolecular complexes within single cells, in near-native state and at sub-molecular resolution. Although template matching is often used to locate macromolecules in a CECT image, it is insufficient as it only measures the relative structural similarity. Therefore, it is preferable to assess the statistical credibility of the decision through hypothesis testing, requiring many templates derived from a diverse population of macromolecular structures. Due to the very limited number of known structures, we need a generative model to efficiently and reliably sample pseudo-structures from the complex distribution of macromolecular structures. To address this challenge, we propose a novel image-derived approach for performing hypothesis testing for template matching by constructing generative models using the generative adversarial network. Finally, we conducted hypothesis testing experiments for template matching on both simulated and experimental subtomograms, allowing us to conclude the identity of subtomograms with high statistical credibility and significantly reducing false positives.
2004.13361
Manuel Morante
Manuel Morante
A lite parametric model for the Hemodynamic Response Function
6 pages, 3 figures
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When working with task-related fMRI data, one of the most crucial parts of the data analysis is obtaining a proper estimate of the BOLD response. This document presents a lite parametric model for the Hemodynamic Response Function (HRF). Among other advantages, the proposed model has fewer parameters than similar HRF alternatives, which reduces its optimization complexity and facilitates its potential applications.
[ { "created": "Tue, 28 Apr 2020 08:29:41 GMT", "version": "v1" } ]
2020-04-29
[ [ "Morante", "Manuel", "" ] ]
When working with task-related fMRI data, one of the most crucial parts of the data analysis is obtaining a proper estimate of the BOLD response. This document presents a lite parametric model for the Hemodynamic Response Function (HRF). Among other advantages, the proposed model has fewer parameters than similar HRF alternatives, which reduces its optimization complexity and facilitates its potential applications.
1205.3025
Marc-Oliver Gewaltig
Marc-Oliver Gewaltig and Robert Cannon
Current practice in software development for computational neuroscience and how to improve it
null
null
null
null
q-bio.NC cs.CY cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced, and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved and propose checklists to help developers, reviewers and scientists assess whether particular pieces of software are ready for use in research.
[ { "created": "Mon, 14 May 2012 13:48:51 GMT", "version": "v1" }, { "created": "Wed, 20 Nov 2013 08:02:32 GMT", "version": "v2" } ]
2013-11-21
[ [ "Gewaltig", "Marc-Oliver", "" ], [ "Cannon", "Robert", "" ] ]
Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced, and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved and propose checklists to help developers, reviewers and scientists assess whether particular pieces of software are ready for use in research.
1408.3114
Mattia Zanella
Daniela Morale, Mattia Zanella, Vincenzo Capasso, Willi Jaeger
Stochastic Modeling and Simulation of Ion Transport through Channels
null
null
null
null
q-bio.BM math-ph math.MP nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine, since they control many vital physiological functions. The aim of this work is, on one hand, to propose a fully stochastic and discrete model describing the main characteristics of a multiple-channel system. The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; in addition, we have considered the influence of exclusion forces. On the other hand, we discuss the nondimensionalization of the stochastic system using real physical parameters, all supported by numerical simulations. The specific features of both micro- and nanochannels have been taken into due consideration, with particular attention to the latter case, in order to show that it is necessary to consider a discrete and stochastic model for ion movement inside the channels.
[ { "created": "Wed, 13 Aug 2014 14:42:56 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2015 10:17:03 GMT", "version": "v2" }, { "created": "Thu, 7 Jan 2016 19:51:11 GMT", "version": "v3" }, { "created": "Fri, 8 Jan 2016 08:26:37 GMT", "version": "v4" } ]
2016-01-11
[ [ "Morale", "Daniela", "" ], [ "Zanella", "Mattia", "" ], [ "Capasso", "Vincenzo", "" ], [ "Jaeger", "Willi", "" ] ]
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine, since they control many vital physiological functions. The aim of this work is, on one hand, to propose a fully stochastic and discrete model describing the main characteristics of a multiple-channel system. The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; in addition, we have considered the influence of exclusion forces. On the other hand, we discuss the nondimensionalization of the stochastic system using real physical parameters, all supported by numerical simulations. The specific features of both micro- and nanochannels have been taken into due consideration, with particular attention to the latter case, in order to show that it is necessary to consider a discrete and stochastic model for ion movement inside the channels.