Dataset columns (type, length range or number of distinct values):
id: string, length 9 to 13
submitter: string, length 4 to 48
authors: string, length 4 to 9.62k
title: string, length 4 to 343
comments: string, length 2 to 480
journal-ref: string, length 9 to 309
doi: string, length 12 to 138
report-no: string, 277 distinct values
categories: string, length 8 to 87
license: string, 9 distinct values
orig_abstract: string, length 27 to 3.76k
versions: list, length 1 to 15
update_date: string, length 10 to 10
authors_parsed: list, length 1 to 147
abstract: string, length 24 to 3.75k
q-bio/0608002
Leor Weinberger
Leor S. Weinberger, John C. Burnett, Jared E. Toettcher, Adam P. Arkin, and David V. Schaffer
Stochastic Gene Expression in a Lentiviral Positive Feedback Loop: HIV-1 Tat Fluctuations Drive Phenotypic Diversity
Supplemental data available as q-bio.MN/0608003
Cell. 2005 Jul 29;122(2):169-82
null
null
q-bio.MN cond-mat.soft physics.bio-ph q-bio.CB
null
Stochastic gene expression has been implicated in a variety of cellular processes, including cell differentiation and disease. In this issue of Cell, Weinberger et al. (2005) take an integrated computational-experimental approach to study the Tat transactivation feedback loop in HIV-1 and show that fluctuations in a key regulator, Tat, can result in a phenotypic bifurcation. This phenomenon is observed in an isogenic population where individual cells display two distinct expression states corresponding to latent and productive infection by HIV-1. These findings demonstrate the importance of stochastic gene expression in molecular "decision-making."
[ { "created": "Tue, 1 Aug 2006 19:37:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Weinberger", "Leor S.", "" ], [ "Burnett", "John C.", "" ], [ "Toettcher", "Jared E.", "" ], [ "Arkin", "Adam P.", "" ], [ "Schaffer", "David V.", "" ] ]
Stochastic gene expression has been implicated in a variety of cellular processes, including cell differentiation and disease. In this issue of Cell, Weinberger et al. (2005) take an integrated computational-experimental approach to study the Tat transactivation feedback loop in HIV-1 and show that fluctuations in a key regulator, Tat, can result in a phenotypic bifurcation. This phenomenon is observed in an isogenic population where individual cells display two distinct expression states corresponding to latent and productive infection by HIV-1. These findings demonstrate the importance of stochastic gene expression in molecular "decision-making."
2011.12350
Changchuan Yin Dr.
Changchuan Yin, Stephen S.-T. Yau
Inverted repeats in coronavirus SARS-CoV-2 genome and implications in evolution
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coronavirus disease (COVID-19) pandemic, caused by the coronavirus SARS-CoV-2, has caused 60 millions of infections and 1.38 millions of fatalities. Genomic analysis of SARS-CoV-2 can provide insights on drug design and vaccine development for controlling the pandemic. Inverted repeats in a genome greatly impact the stability of the genome structure and regulate gene expression. Inverted repeats involve cellular evolution and genetic diversity, genome arrangements, and diseases. Here, we investigate the inverted repeats in the coronavirus SARS-CoV-2 genome. We found that SARS-CoV-2 genome has an abundance of inverted repeats. The inverted repeats are mainly located in the gene of the Spike protein. This result suggests the Spike protein gene undergoes recombination events, therefore, is essential for fast evolution. Comparison of the inverted repeat signatures in human and bat coronaviruses suggest that SARS-CoV-2 is mostly related SARS-related coronavirus, SARSr-CoV/RaTG13. The study also reveals that the recent SARS-related coronavirus, SARSr-CoV/RmYN02, has a high amount of inverted repeats in the spike protein gene. Besides, this study demonstrates that the inverted repeat distribution in a genome can be considered as the genomic signature. This study highlights the significance of inverted repeats in the evolution of SARS-CoV-2 and presents the inverted repeats as the genomic signature in genome analysis.
[ { "created": "Tue, 24 Nov 2020 20:11:40 GMT", "version": "v1" } ]
2020-11-26
[ [ "Yin", "Changchuan", "" ], [ "Yau", "Stephen S. -T.", "" ] ]
The coronavirus disease (COVID-19) pandemic, caused by the coronavirus SARS-CoV-2, has caused 60 million infections and 1.38 million fatalities. Genomic analysis of SARS-CoV-2 can provide insights for drug design and vaccine development to control the pandemic. Inverted repeats in a genome greatly affect the stability of the genome structure and regulate gene expression. Inverted repeats are involved in cellular evolution and genetic diversity, genome rearrangements, and disease. Here, we investigate the inverted repeats in the coronavirus SARS-CoV-2 genome. We found that the SARS-CoV-2 genome has an abundance of inverted repeats. The inverted repeats are mainly located in the gene encoding the Spike protein. This result suggests that the Spike protein gene undergoes recombination events and is therefore essential for fast evolution. Comparison of the inverted repeat signatures in human and bat coronaviruses suggests that SARS-CoV-2 is most closely related to the SARS-related coronavirus SARSr-CoV/RaTG13. The study also reveals that the recent SARS-related coronavirus SARSr-CoV/RmYN02 has a high number of inverted repeats in the Spike protein gene. In addition, this study demonstrates that the inverted repeat distribution in a genome can be considered a genomic signature. This study highlights the significance of inverted repeats in the evolution of SARS-CoV-2 and presents inverted repeats as a genomic signature for genome analysis.
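To make the central concept of this record concrete, here is a minimal, illustrative Python sketch of detecting inverted repeats in a DNA string by matching each window against downstream reverse complements. It is not the authors' method; the example sequence and window size k are arbitrary.

    def revcomp(s):
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        return "".join(comp.get(b, "N") for b in reversed(s))

    def inverted_repeats(seq, k=8):
        """Return (i, j) pairs where seq[i:i+k] reappears at j as its reverse complement."""
        hits = []
        for i in range(len(seq) - 2 * k + 1):
            target = revcomp(seq[i:i + k])
            j = seq.find(target, i + k)
            if j != -1:
                hits.append((i, j))
        return hits

    # Toy example: the first 8 bases recur as an inverted repeat at position 12.
    print(inverted_repeats("ATGCCGTANNNNTACGGCAT"))   # [(0, 12)]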
2003.13282
Tarun Jain
Tarun Jain, Bijendra Nath Jain
Accelerated infection testing at scale: a proposal for inference with single test on multiple patients
8 pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In pandemics or epidemics, public health authorities need to rapidly test a large number of individuals, both to determine the line of treatment as well as to know the spread of infection to plan containment, mitigation and future responses. However, the lack of adequate testing kits could be a bottleneck, especially in the case of unanticipated new diseases, such as COVID-19, where the testing technology, manufacturing capability, distribution, human skills and laboratories might be unavailable or in short supply. In addition, the cost of the standard PCR test is approximately USD 48, which is prohibitive for poorer patients and most governments. We address this bottleneck by proposing a test methodology that pools the sample from two (or more) patients in a single test. The key insight is that a single negative result from a pooled sample likely implies negative infection of all the individual patients. and It thereby rules out further tests for the patients. This protocol, therefore, requires significantly fewer tests. This may, however, result in somewhat increased false negatives. Our simulations show that combining samples from two patients with 7% underlying likelihood of infection implies that 36% fewer test kits are required, with 14% additional units of time for testing.
[ { "created": "Mon, 30 Mar 2020 09:06:08 GMT", "version": "v1" } ]
2020-03-31
[ [ "Jain", "Tarun", "" ], [ "Jain", "Bijendra Nath", "" ] ]
In pandemics or epidemics, public health authorities need to rapidly test a large number of individuals, both to determine the line of treatment and to track the spread of infection in order to plan containment, mitigation and future responses. However, the lack of adequate testing kits could be a bottleneck, especially in the case of unanticipated new diseases, such as COVID-19, where the testing technology, manufacturing capability, distribution, human skills and laboratories might be unavailable or in short supply. In addition, the cost of the standard PCR test is approximately USD 48, which is prohibitive for poorer patients and most governments. We address this bottleneck by proposing a test methodology that pools the samples from two (or more) patients in a single test. The key insight is that a single negative result from a pooled sample likely implies negative infection of all the pooled patients, thereby ruling out further tests for those patients. This protocol therefore requires significantly fewer tests. It may, however, result in somewhat increased false negatives. Our simulations show that combining samples from two patients with a 7% underlying likelihood of infection implies that 36% fewer test kits are required, at the cost of 14% additional units of time for testing.
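The headline 36% figure in this abstract can be reproduced with a few lines of arithmetic. The sketch below assumes a pool of two samples, an independent 7% infection probability per patient, and individual retesting only when the pooled test is positive; this is one plausible reading of the protocol, not necessarily the authors' exact simulation.

    p, pool = 0.07, 2

    # Probability the pooled sample tests negative (all pooled patients uninfected).
    p_pool_negative = (1 - p) ** pool

    # Expected tests per pool: one pooled test, plus individual retests when positive.
    expected_tests = 1 + pool * (1 - p_pool_negative)

    saving = 1 - expected_tests / pool          # versus one test per patient
    print(f"expected tests per {pool} patients: {expected_tests:.3f}")
    print(f"test kits saved: {saving:.0%}")     # ~36%, matching the abstract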
2003.09564
Nigel Goldenfeld
Sergei Maslov and Nigel Goldenfeld
Window of Opportunity for Mitigation to Prevent Overflow of ICU capacity in Chicago by COVID-19
null
null
null
null
q-bio.PE physics.med-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We estimate the growth in demand for ICU beds in Chicago during the emerging COVID-19 epidemic, using state-of-the-art computer simulations calibrated for the SARS-CoV-2 virus. The questions we address are these: (1) Will the ICU capacity in Chicago be exceeded, and if so by how much? (2) Can strong mitigation strategies, such as lockdown or shelter in place order, prevent the overflow of capacity? (3) When should such strategies be implemented? Our answers are as follows: (1) The ICU capacity may be exceeded by a large amount, probably by a factor of ten. (2) Strong mitigation can avert this emergency situation potentially, but even that will not work if implemented too late. (3) If the strong mitigation precedes April 1st, then the growth of COVID-19 can be controlled and the ICU capacity could be adequate. The earlier the strong mitigation is implemented, the greater the probability that it will be successful. After around April 1 2020, any strong mitigation will not avert the emergency situation. In Italy, the lockdown occurred too late and the number of deaths is still doubling every 2.3 days. It is difficult to be sure about the precise dates for this window of opportunity, due to the inherent uncertainties in computer simulation. But there is high confidence in the main conclusion that it exists and will soon be closed. Our conclusion is that, being fully cognizant of the societal trade-offs, there is a rapidly closing window of opportunity to avert a worst-case scenario in Chicago, but only with strong mitigation/lockdown implemented in the next week at the latest. If this window is missed, the epidemic will get worse and then strong mitigation/lockdown will be required after all, but it will be too late.
[ { "created": "Sat, 21 Mar 2020 02:47:39 GMT", "version": "v1" } ]
2020-03-24
[ [ "Maslov", "Sergei", "" ], [ "Goldenfeld", "Nigel", "" ] ]
We estimate the growth in demand for ICU beds in Chicago during the emerging COVID-19 epidemic, using state-of-the-art computer simulations calibrated for the SARS-CoV-2 virus. The questions we address are these: (1) Will the ICU capacity in Chicago be exceeded, and if so by how much? (2) Can strong mitigation strategies, such as lockdown or shelter in place order, prevent the overflow of capacity? (3) When should such strategies be implemented? Our answers are as follows: (1) The ICU capacity may be exceeded by a large amount, probably by a factor of ten. (2) Strong mitigation can avert this emergency situation potentially, but even that will not work if implemented too late. (3) If the strong mitigation precedes April 1st, then the growth of COVID-19 can be controlled and the ICU capacity could be adequate. The earlier the strong mitigation is implemented, the greater the probability that it will be successful. After around April 1 2020, any strong mitigation will not avert the emergency situation. In Italy, the lockdown occurred too late and the number of deaths is still doubling every 2.3 days. It is difficult to be sure about the precise dates for this window of opportunity, due to the inherent uncertainties in computer simulation. But there is high confidence in the main conclusion that it exists and will soon be closed. Our conclusion is that, being fully cognizant of the societal trade-offs, there is a rapidly closing window of opportunity to avert a worst-case scenario in Chicago, but only with strong mitigation/lockdown implemented in the next week at the latest. If this window is missed, the epidemic will get worse and then strong mitigation/lockdown will be required after all, but it will be too late.
1404.0674
Paulo Bandiera-Paiva
Paulo Bandiera-Paiva and Marcelo R.S. Briones
PGA: A Program for Genome Annotation by Comparative Analysis of Maximum Likelihood Phylogenies of Genes and Species
arXiv admin note: substantial text overlap with arXiv:1404.0630
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Phylogenetic Genome Annotator (PGA) is a computer program that enables real-time comparison of 'gene trees' versus 'species trees' obtained from predicted open reading frames of whole genome data. The gene phylogenies are inferred for each individual genome predicted proteins whereas the species phylogenies are inferred from rDNA data. The correlated protein domains, defined by PFAM, are then displayed side-by-side with a phylogeny of the corresponding species. The statistical support of gene clusters (branches) is given by the quartet puzzling method. This analysis readily discriminates paralogs from orthologs, enabling the identification of proteins originated by gene duplications and the prediction of possible functional divergence in groups of similar sequences.
[ { "created": "Wed, 2 Apr 2014 17:55:42 GMT", "version": "v1" } ]
2014-04-04
[ [ "Bandiera-Paiva", "Paulo", "" ], [ "Briones", "Marcelo R. S.", "" ] ]
The Phylogenetic Genome Annotator (PGA) is a computer program that enables real-time comparison of 'gene trees' versus 'species trees' obtained from predicted open reading frames of whole genome data. The gene phylogenies are inferred from the predicted proteins of each individual genome, whereas the species phylogenies are inferred from rDNA data. The correlated protein domains, defined by PFAM, are then displayed side-by-side with a phylogeny of the corresponding species. The statistical support of gene clusters (branches) is given by the quartet puzzling method. This analysis readily discriminates paralogs from orthologs, enabling the identification of proteins that originated by gene duplication and the prediction of possible functional divergence in groups of similar sequences.
1703.01999
Quico Spaen
Quico Spaen, Dorit S. Hochbaum, Roberto Asín-Achá
HNCcorr: A Novel Combinatorial Approach for Cell Identification in Calcium-Imaging Movies
null
null
10.1523/ENEURO.0304-18.2019
null
q-bio.QM math.OC q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium imaging has emerged as a workhorse method in neuroscience to investigate patterns of neuronal activity. Instrumentation to acquire calcium imaging movies has rapidly progressed and has become standard across labs. Still, algorithms to automatically detect and extract activity signals from calcium imaging movies are highly variable from~lab~to~lab and more advanced algorithms are continuously being developed. Here we present HNCcorr, a novel algorithm for cell identification in calcium imaging movies based on combinatorial optimization. The algorithm identifies cells by finding distinct groups of highly similar pixels in correlation space, where a pixel is represented by the vector of correlations to a set of other pixels. The HNCcorr algorithm achieves the best known results for the cell identification benchmark of Neurofinder, and guarantees an optimal solution to the underlying deterministic optimization model resulting in a transparent mapping from input data to outcome.
[ { "created": "Mon, 6 Mar 2017 17:42:25 GMT", "version": "v1" } ]
2019-06-03
[ [ "Spaen", "Quico", "" ], [ "Hochbaum", "Dorit S.", "" ], [ "Asín-Achá", "Roberto", "" ] ]
Calcium imaging has emerged as a workhorse method in neuroscience to investigate patterns of neuronal activity. Instrumentation to acquire calcium imaging movies has rapidly progressed and has become standard across labs. Still, algorithms to automatically detect and extract activity signals from calcium imaging movies are highly variable from lab to lab, and more advanced algorithms are continuously being developed. Here we present HNCcorr, a novel algorithm for cell identification in calcium imaging movies based on combinatorial optimization. The algorithm identifies cells by finding distinct groups of highly similar pixels in correlation space, where a pixel is represented by the vector of correlations to a set of other pixels. The HNCcorr algorithm achieves the best known results for the cell identification benchmark of Neurofinder, and guarantees an optimal solution to the underlying deterministic optimization model, resulting in a transparent mapping from input data to outcome.
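As a rough illustration of the "correlation space" idea described in this abstract, the sketch below embeds each pixel of a toy movie as its vector of correlations to a reference set of pixels and measures similarity in that space. The clustering/optimization step of HNCcorr itself is not reproduced, and all data here are random stand-ins.

    import numpy as np

    # Toy calcium-imaging movie: T frames of an H x W field of view (random stand-in data).
    rng = np.random.default_rng(0)
    T, H, W = 200, 16, 16
    movie = rng.normal(size=(T, H, W))

    traces = movie.reshape(T, H * W)                        # one fluorescence trace per pixel
    ref_idx = rng.choice(H * W, size=32, replace=False)     # reference pixel set

    # Correlation-space embedding: each pixel becomes the vector of its
    # correlations with the reference pixels.
    pixel_corr = np.corrcoef(traces.T)                      # (H*W, H*W) pixel-to-pixel correlations
    embedding = pixel_corr[:, ref_idx]                      # (H*W, 32) correlation-space features

    def similarity(a, b):
        """Similarity of two pixels in correlation space (cells appear as groups of similar pixels)."""
        return float(np.exp(-np.linalg.norm(embedding[a] - embedding[b]) ** 2))

    print(similarity(0, 1), similarity(0, 17))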
1403.5376
Suman Kumar Banik
Srijeeta Talukder, Shrabani Sen, Prantik Chakraborti, Ralf Metzler, Suman K Banik, Pinaki Chaudhury
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
10 pages, 3 figures, 4 tables
J. Chem. Phys. 140 (2014) 125101
10.1063/1.4869112
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters are estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the fourteen model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction $\epsilon_{hb}(\mathtt{AT})$ for an $\mathtt{AT}$ base pair and the ring factor $\xi$ turn out to be the most sensitive parameters. In addition, the stacking interaction $\epsilon_{st}(\mathtt{TA}-\mathtt{TA})$ for an $\mathtt{TA}-\mathtt{TA}$ nearest neighbor pair of base-pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
[ { "created": "Fri, 21 Mar 2014 06:19:03 GMT", "version": "v1" } ]
2014-03-27
[ [ "Talukder", "Srijeeta", "" ], [ "Sen", "Shrabani", "" ], [ "Chakraborti", "Prantik", "" ], [ "Metzler", "Ralf", "" ], [ "Banik", "Suman K", "" ], [ "Chaudhury", "Pinaki", "" ] ]
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the fourteen model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction $\epsilon_{hb}(\mathtt{AT})$ for an $\mathtt{AT}$ base pair and the ring factor $\xi$ turn out to be the most sensitive parameters. In addition, the stacking interaction $\epsilon_{st}(\mathtt{TA}-\mathtt{TA})$ for a $\mathtt{TA}-\mathtt{TA}$ nearest neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we establish that it is the nature of a stacking interaction, not the number of times it appears in a sequence, that has a deciding effect on the DNA breathing dynamics. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
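The sensitivity measure described here, a correlation between sampled parameter values and the resulting observable, can be sketched generically as below. The "model" is a toy stand-in, not the Poland-Scheraga breathing dynamics, and the parameter names are used only as labels.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(eps_hb, eps_st, xi):
        """Toy stand-in for the breathing-dynamics model: returns a scalar 'mean bubble size'."""
        return 5.0 * eps_hb + 0.5 * eps_st + 3.0 * xi + rng.normal(scale=0.1)

    # Sample parameters around nominal values and record the output for each draw.
    samples = rng.uniform(0.9, 1.1, size=(1000, 3))
    outputs = np.array([toy_model(*row) for row in samples])

    # Sensitivity of the output to each parameter = |correlation| with that parameter.
    for j, name in enumerate(["eps_hb", "eps_st", "xi"]):
        r = np.corrcoef(samples[:, j], outputs)[0, 1]
        print(f"{name}: |corr| = {abs(r):.2f}")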
2111.00108
Muhammad Ammar Malik
Muhammad Ammar Malik, Adriaan-Alexander Ludl, Tom Michoel
High-dimensional multi-trait GWAS by reverse prediction of genotypes
null
null
null
null
q-bio.GN cs.LG q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-trait genome-wide association studies (GWAS) use multi-variate statistical methods to identify associations between genetic variants and multiple correlated traits simultaneously, and have higher statistical power than independent univariate analyses of traits. Reverse regression, where genotypes of genetic variants are regressed on multiple traits simultaneously, has emerged as a promising approach to perform multi-trait GWAS in high-dimensional settings where the number of traits exceeds the number of samples. We analyzed different machine learning methods (ridge regression, naive Bayes/independent univariate, random forests and support vector machines) for reverse regression in multi-trait GWAS, using genotypes, gene expression data and ground-truth transcriptional regulatory networks from the DREAM5 SysGen Challenge and from a cross between two yeast strains to evaluate methods. We found that genotype prediction performance, in terms of root mean squared error (RMSE), allowed to distinguish between genomic regions with high and low transcriptional activity. Moreover, model feature coefficients correlated with the strength of association between variants and individual traits, and were predictive of true trans-eQTL target genes, with complementary findings across methods. Code to reproduce the analysis is available at https://github.com/michoel-lab/Reverse-Pred-GWAS
[ { "created": "Fri, 29 Oct 2021 22:34:35 GMT", "version": "v1" }, { "created": "Wed, 9 Feb 2022 14:45:03 GMT", "version": "v2" } ]
2022-02-10
[ [ "Malik", "Muhammad Ammar", "" ], [ "Ludl", "Adriaan-Alexander", "" ], [ "Michoel", "Tom", "" ] ]
Multi-trait genome-wide association studies (GWAS) use multivariate statistical methods to identify associations between genetic variants and multiple correlated traits simultaneously, and have higher statistical power than independent univariate analyses of traits. Reverse regression, where genotypes of genetic variants are regressed on multiple traits simultaneously, has emerged as a promising approach to perform multi-trait GWAS in high-dimensional settings where the number of traits exceeds the number of samples. We analyzed different machine learning methods (ridge regression, naive Bayes/independent univariate, random forests and support vector machines) for reverse regression in multi-trait GWAS, using genotypes, gene expression data and ground-truth transcriptional regulatory networks from the DREAM5 SysGen Challenge and from a cross between two yeast strains to evaluate methods. We found that genotype prediction performance, in terms of root mean squared error (RMSE), allowed us to distinguish between genomic regions with high and low transcriptional activity. Moreover, model feature coefficients correlated with the strength of association between variants and individual traits, and were predictive of true trans-eQTL target genes, with complementary findings across methods. Code to reproduce the analysis is available at https://github.com/michoel-lab/Reverse-Pred-GWAS
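A minimal sketch of reverse regression with ridge, in the spirit of this abstract: the genotype of one variant is treated as the response and all traits as predictors, with prediction RMSE and coefficient magnitudes as the outputs of interest. The data below are synthetic stand-ins, not the DREAM5 or yeast-cross data.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    # Synthetic data in the p >> n regime that motivates reverse regression.
    rng = np.random.default_rng(1)
    n, p = 100, 2000
    traits = rng.normal(size=(n, p))
    genotype = (traits[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(float)  # 0/1 variant

    X_tr, X_te, y_tr, y_te = train_test_split(traits, genotype, random_state=0)

    # Reverse regression: the genotype is the response and all traits are predictors.
    model = Ridge(alpha=10.0).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print("genotype-prediction RMSE:", round(rmse, 3))

    # Feature coefficients act as per-trait association strengths.
    print("top associated traits:", np.argsort(np.abs(model.coef_))[::-1][:5])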
2210.16098
Yiqiang Yi
Yiqiang Yi, Xu Wan, Kangfei Zhao, Le Ou-Yang, Peilin Zhao
Predicting Protein-Ligand Binding Affinity with Equivariant Line Graph Network
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binding affinity prediction of three-dimensional (3D) protein ligand complexes is critical for drug repositioning and virtual drug screening. Existing approaches transform a 3D protein-ligand complex to a two-dimensional (2D) graph, and then use graph neural networks (GNNs) to predict its binding affinity. However, the node and edge features of the 2D graph are extracted based on invariant local coordinate systems of the 3D complex. As a result, the method can not fully learn the global information of the complex, such as, the physical symmetry and the topological information of bonds. To address these issues, we propose a novel Equivariant Line Graph Network (ELGN) for affinity prediction of 3D protein ligand complexes. The proposed ELGN firstly adds a super node to the 3D complex, and then builds a line graph based on the 3D complex. After that, ELGN uses a new E(3)-equivariant network layer to pass the messages between nodes and edges based on the global coordinate system of the 3D complex. Experimental results on two real datasets demonstrate the effectiveness of ELGN over several state-of-the-art baselines.
[ { "created": "Thu, 27 Oct 2022 02:15:52 GMT", "version": "v1" } ]
2022-10-31
[ [ "Yi", "Yiqiang", "" ], [ "Wan", "Xu", "" ], [ "Zhao", "Kangfei", "" ], [ "Ou-Yang", "Le", "" ], [ "Zhao", "Peilin", "" ] ]
Binding affinity prediction of three-dimensional (3D) protein-ligand complexes is critical for drug repositioning and virtual drug screening. Existing approaches transform a 3D protein-ligand complex into a two-dimensional (2D) graph, and then use graph neural networks (GNNs) to predict its binding affinity. However, the node and edge features of the 2D graph are extracted based on invariant local coordinate systems of the 3D complex. As a result, these methods cannot fully learn the global information of the complex, such as the physical symmetry and the topological information of bonds. To address these issues, we propose a novel Equivariant Line Graph Network (ELGN) for affinity prediction of 3D protein-ligand complexes. The proposed ELGN first adds a super node to the 3D complex, and then builds a line graph based on the 3D complex. ELGN then uses a new E(3)-equivariant network layer to pass messages between nodes and edges based on the global coordinate system of the 3D complex. Experimental results on two real datasets demonstrate the effectiveness of ELGN over several state-of-the-art baselines.
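The graph construction described here (super node plus line graph) can be sketched with networkx; the E(3)-equivariant message-passing layers are not reproduced, and the toy atoms and edges below are invented for illustration.

    import networkx as nx

    # Toy protein-ligand complex graph: a few atoms as nodes, contacts/bonds as edges.
    g = nx.Graph()
    g.add_edges_from([("C1", "N1"), ("N1", "O1"), ("C1", "L1"), ("L1", "L2")])

    # Step 1: add a super node connected to every atom, giving each node a
    # global communication channel across the complex.
    g.add_edges_from(("SUPER", v) for v in list(g.nodes))

    # Step 2: build the line graph; its nodes are the edges of the complex, so
    # message passing on it updates edge (interaction) features directly.
    lg = nx.line_graph(g)
    print(lg.number_of_nodes(), "edge-nodes,", lg.number_of_edges(), "edge-adjacencies")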
1305.6043
Bjarki Eldon
Matthias Birkner, Jochen Blath, Bjarki Eldon
Statistical properties of the site-frequency spectrum associated with Lambda-coalescents
45 pages, 14 figures, 4 tables, Appendix, supporting information
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical properties of the site frequency spectrum associated with Lambda-coalescents are our objects of study. In particular, we derive recursions for the expected value, variance, and covariance of the spectrum, extending earlier results of Fu (1995) for the classical Kingman coalescent. Estimating coalescent parameters introduced by certain Lambda-coalescents for datasets too large for full likelihood methods is our focus. The recursions for the expected values we obtain can be used to find the parameter values which give the best fit to the observed frequency spectrum. The expected values are also used to approximate the probability a (derived) mutation arises on a branch subtending a given number of leaves (DNA sequences), allowing us to apply a pseudo-likelihood inference to estimate coalescence parameters associated with certain subclasses of Lambda coalescents. The properties of the pseudo-likelihood approach are investigated on simulated as well as real mtDNA datasets for the high fecundity Atlantic cod (\emph{Gadus morhua}). Our results for two subclasses of Lambda coalescents show that one can distinguish these subclasses from the Kingman coalescent, as well as between the Lambda-subclasses, even for moderate sample sizes.
[ { "created": "Sun, 26 May 2013 16:37:54 GMT", "version": "v1" }, { "created": "Sat, 24 Aug 2013 17:15:28 GMT", "version": "v2" } ]
2013-08-27
[ [ "Birkner", "Matthias", "" ], [ "Blath", "Jochen", "" ], [ "Eldon", "Bjarki", "" ] ]
Statistical properties of the site frequency spectrum associated with Lambda-coalescents are our objects of study. In particular, we derive recursions for the expected value, variance, and covariance of the spectrum, extending earlier results of Fu (1995) for the classical Kingman coalescent. Estimating coalescent parameters introduced by certain Lambda-coalescents for datasets too large for full likelihood methods is our focus. The recursions for the expected values we obtain can be used to find the parameter values which give the best fit to the observed frequency spectrum. The expected values are also used to approximate the probability a (derived) mutation arises on a branch subtending a given number of leaves (DNA sequences), allowing us to apply a pseudo-likelihood inference to estimate coalescence parameters associated with certain subclasses of Lambda coalescents. The properties of the pseudo-likelihood approach are investigated on simulated as well as real mtDNA datasets for the high fecundity Atlantic cod (\emph{Gadus morhua}). Our results for two subclasses of Lambda coalescents show that one can distinguish these subclasses from the Kingman coalescent, as well as between the Lambda-subclasses, even for moderate sample sizes.
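For orientation, the Kingman-coalescent baseline that these recursions generalize is Fu's (1995) result that the expected number of sites with i copies of the derived allele is E[xi_i] = theta / i. A few lines suffice to tabulate the expected spectrum this implies; the sample size and theta below are arbitrary.

    # Expected site-frequency spectrum under the Kingman coalescent: E[xi_i] = theta / i.
    theta, n = 2.0, 10
    sfs = [theta / i for i in range(1, n)]
    total = sum(sfs)
    for i, e in enumerate(sfs, start=1):
        print(f"i={i}: E[xi_i] = {e:.3f}   fraction of segregating sites = {e / total:.3f}")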
1707.00664
Alessio Franci
Alessio Franci, Guillaume Drion, Rodolphe Sepulchre
Robust and tunable bursting requires slow positive feedback
null
null
null
null
q-bio.NC math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We highlight that the robustness and tunability of a bursting model critically relies on currents that provide slow positive feedback to the membrane potential. Such currents have the ability of making the total conductance of the circuit negative in a time scale that is termed slow because intermediate between the fast time scale of the spike upstroke and the ultraslow time scale of even slower adaptation currents. We discuss how such currents can be assessed either in voltage-clamp experiments or in computational models. We show that, while frequent in the literature, mathematical and computational models of bursting that lack the slow negative conductance are fragile and rigid. Our results suggest that modeling the slow negative conductance of cellular models is important when studying the neuromodulation of rhythmic circuits at any broader scale.
[ { "created": "Mon, 3 Jul 2017 17:29:04 GMT", "version": "v1" }, { "created": "Fri, 26 Jan 2018 15:56:43 GMT", "version": "v2" } ]
2018-01-29
[ [ "Franci", "Alessio", "" ], [ "Drion", "Guillaume", "" ], [ "Sepulchre", "Rodolphe", "" ] ]
We highlight that the robustness and tunability of a bursting model critically relies on currents that provide slow positive feedback to the membrane potential. Such currents have the ability of making the total conductance of the circuit negative in a time scale that is termed slow because intermediate between the fast time scale of the spike upstroke and the ultraslow time scale of even slower adaptation currents. We discuss how such currents can be assessed either in voltage-clamp experiments or in computational models. We show that, while frequent in the literature, mathematical and computational models of bursting that lack the slow negative conductance are fragile and rigid. Our results suggest that modeling the slow negative conductance of cellular models is important when studying the neuromodulation of rhythmic circuits at any broader scale.
1509.09192
Mohammad Soltani
Thierry Platini, Mohammad Soltani, Abhyudai Singh
Stochastic Analysis Of An Incoherent Feedforward Genetic Motif
8 pages
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene products (RNAs, proteins) often occur at low molecular counts inside individual cells, and hence are subject to considerable random fluctuations (noise) in copy number over time. Not surprisingly, cells encode diverse regulatory mechanisms to buffer noise. One such mechanism is the incoherent feedforward circuit. We analyze a simplistic version of this circuit, where an upstream regulator X affects both the production and degradation of a protein Y. Thus, any random increase in X's copy numbers would increase both production and degradation, keeping Y levels unchanged. To study its stochastic dynamics, we formulate this network into a mathematical model using the Chemical Master Equation formulation. We prove that if the functional dependence of Y's production and degradation on X is similar, then the steady-distribution of Y's copy numbers is independent of X. To investigate how fluctuations in Y propagate downstream, a protein Z whose production rate only depend on Y is introduced. Intriguingly, results show that the extent of noise in Z increases with noise in X, in spite of the fact that the magnitude of noise in Y is invariant of X. Such counter intuitive results arise because X enhances the time-scale of fluctuations in Y, which amplifies fluctuations in downstream processes. In summary, while feedforward systems can buffer a protein from noise in its upstream regulators, noise can propagate downstream due to changes in the time-scale of fluctuations.
[ { "created": "Wed, 30 Sep 2015 14:30:08 GMT", "version": "v1" } ]
2015-10-01
[ [ "Platini", "Thierry", "" ], [ "Soltani", "Mohammad", "" ], [ "Singh", "Abhyudai", "" ] ]
Gene products (RNAs, proteins) often occur at low molecular counts inside individual cells, and hence are subject to considerable random fluctuations (noise) in copy number over time. Not surprisingly, cells encode diverse regulatory mechanisms to buffer noise. One such mechanism is the incoherent feedforward circuit. We analyze a simplified version of this circuit, where an upstream regulator X affects both the production and degradation of a protein Y. Thus, any random increase in X's copy numbers would increase both production and degradation, keeping Y levels unchanged. To study its stochastic dynamics, we formulate this network as a mathematical model using the Chemical Master Equation formulation. We prove that if the functional dependence of Y's production and degradation on X is similar, then the steady-state distribution of Y's copy numbers is independent of X. To investigate how fluctuations in Y propagate downstream, a protein Z whose production rate depends only on Y is introduced. Intriguingly, results show that the extent of noise in Z increases with noise in X, in spite of the fact that the magnitude of noise in Y is invariant to X. Such counterintuitive results arise because X enhances the time scale of fluctuations in Y, which amplifies fluctuations in downstream processes. In summary, while feedforward systems can buffer a protein from noise in its upstream regulators, noise can propagate downstream due to changes in the time scale of fluctuations.
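A minimal Gillespie-style simulation of the motif described in this abstract, assuming mass-action propensities in which a fixed upstream level x multiplies both the production and degradation of Y; the rate constants are arbitrary and this is an illustration, not the authors' model code.

    import numpy as np

    rng = np.random.default_rng(2)

    def gillespie_iffl(x, kp=20.0, kd=1.0, kz=5.0, gz=1.0, t_end=200.0):
        """Gillespie simulation: x scales both production (kp*x) and degradation (kd*x*y)
        of Y, while Z is produced at rate kz*y and decays at rate gz*z."""
        y, z, t = 0, 0, 0.0
        samples = []
        while t < t_end:
            rates = np.array([kp * x, kd * x * y, kz * y, gz * z])
            total = rates.sum()
            t += rng.exponential(1.0 / total)
            event = rng.choice(4, p=rates / total)
            if event == 0:   y += 1
            elif event == 1: y -= 1
            elif event == 2: z += 1
            else:            z -= 1
            samples.append((t, y, z))
        return np.array(samples)

    # Because production and degradation of Y share the same dependence on X, the
    # stationary mean of Y (kp/kd = 20) should not change when X is doubled, even
    # though the time scale of Y's fluctuations (and hence the noise passed to Z) does.
    for x in (1.0, 2.0):
        traj = gillespie_iffl(x)
        print(f"x = {x}: mean Y ~ {traj[len(traj) // 2:, 1].mean():.1f}")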
1201.3170
Pascal Buenzli
Peter Pivonka, Pascal R. Buenzli, Stefan Scheiner, Christian Hellmich, Colin R. Dunstan
The influence of bone surface availability in bone remodelling - A mathematical model including coupled geometrical and biomechanical regulations of bone cells
17 pages, 9 figures, 3 tables; Changes in v2: New title; C. Hellmich added as author for his contribution to the biomechanical part of the model, which was rewritten in this version. One figure (Fig 4) added. Some misprints and errors of v1 corrected. Some stylistic rearrangements
Eng Struct (2013) 47:134-147
10.1016/j.engstruct.2012.09.006
null
q-bio.TO physics.bio-ph physics.med-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bone is a biomaterial undergoing continuous renewal. The renewal process is known as bone remodelling and is operated by bone-resorbing cells (osteoclasts) and bone-forming cells (osteoblasts). Both biochemical and biomechanical regulatory mechanisms have been identified in the interaction between osteoclasts and osteoblasts. Here we focus on an additional and poorly understood potential regulatory mechanism of bone cells, that involves the morphology of the microstructure of bone. Bone cells can only remove and replace bone at a bone surface. However, the microscopic availability of bone surface depends in turn on the ever-changing bone microstructure. The importance of this geometrical dependence is unknown and difficult to quantify experimentally. Therefore, we develop a sophisticated mathematical model of bone cell interactions that takes into account biochemical, biomechanical and geometrical regulations. We then investigate numerically the influence of bone surface availability in bone remodelling within a representative bone tissue sample. The interdependence between the bone cells' activity, which modifies the bone microstructure, and changes in the microscopic bone surface availability, which in turn influences bone cell development and activity, is implemented using a remarkable experimental relationship between bone specific surface and bone porosity. Our model suggests that geometrical regulation of the activation of new remodelling events could have a significant effect on bone porosity and bone stiffness. On the other hand, geometrical regulation of late stages of osteoblast and osteoclast differentiation seems less significant. We conclude that the development of osteoporosis is probably accelerated by this geometrical regulation in cortical bone, but probably slowed down in trabecular bone.
[ { "created": "Mon, 16 Jan 2012 07:45:23 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2012 07:16:49 GMT", "version": "v2" } ]
2014-05-21
[ [ "Pivonka", "Peter", "" ], [ "Buenzli", "Pascal R.", "" ], [ "Scheiner", "Stefan", "" ], [ "Hellmich", "Christian", "" ], [ "Dunstan", "Colin R.", "" ] ]
Bone is a biomaterial undergoing continuous renewal. The renewal process is known as bone remodelling and is operated by bone-resorbing cells (osteoclasts) and bone-forming cells (osteoblasts). Both biochemical and biomechanical regulatory mechanisms have been identified in the interaction between osteoclasts and osteoblasts. Here we focus on an additional and poorly understood potential regulatory mechanism of bone cells, that involves the morphology of the microstructure of bone. Bone cells can only remove and replace bone at a bone surface. However, the microscopic availability of bone surface depends in turn on the ever-changing bone microstructure. The importance of this geometrical dependence is unknown and difficult to quantify experimentally. Therefore, we develop a sophisticated mathematical model of bone cell interactions that takes into account biochemical, biomechanical and geometrical regulations. We then investigate numerically the influence of bone surface availability in bone remodelling within a representative bone tissue sample. The interdependence between the bone cells' activity, which modifies the bone microstructure, and changes in the microscopic bone surface availability, which in turn influences bone cell development and activity, is implemented using a remarkable experimental relationship between bone specific surface and bone porosity. Our model suggests that geometrical regulation of the activation of new remodelling events could have a significant effect on bone porosity and bone stiffness. On the other hand, geometrical regulation of late stages of osteoblast and osteoclast differentiation seems less significant. We conclude that the development of osteoporosis is probably accelerated by this geometrical regulation in cortical bone, but probably slowed down in trabecular bone.
0709.2015
Henrik Jeldtoft Jensen
Henrik Jeldtoft Jensen and Elsa Arcaute
Complexity, Collective Effects and Modelling of Ecosystems: formation, function and stability
11 pages and 1 figure
null
null
null
q-bio.PE q-bio.OT
null
We discuss the relevance of studying ecology within the framework of Complexity Science from a statistical mechanics approach. Ecology is concerned with understanding how systems level properties emerge out of the multitude of interactions amongst large numbers of components, leading to ecosystems that possess the prototypical characteristics of complex systems. We argue that statistical mechanics is at present the best methodology available to obtain a quantitative description of complex systems, and that ecology is in urgent need of ``integrative'' approaches that are quantitative and non-stationary. We describe examples where combining statistical mechanics and ecology has led to improved ecological modelling and, at the same time, broadened the scope of statistical mechanics.
[ { "created": "Thu, 13 Sep 2007 08:13:14 GMT", "version": "v1" } ]
2007-09-14
[ [ "Jensen", "Henrik Jeldtoft", "" ], [ "Arcaute", "Elsa", "" ] ]
We discuss the relevance of studying ecology within the framework of Complexity Science from a statistical mechanics approach. Ecology is concerned with understanding how system-level properties emerge out of the multitude of interactions amongst large numbers of components, leading to ecosystems that possess the prototypical characteristics of complex systems. We argue that statistical mechanics is at present the best methodology available to obtain a quantitative description of complex systems, and that ecology is in urgent need of "integrative" approaches that are quantitative and non-stationary. We describe examples where combining statistical mechanics and ecology has led to improved ecological modelling and, at the same time, broadened the scope of statistical mechanics.
1606.05912
Ricardo Martinez-Garcia
Ricardo Martinez-Garcia, Corina E. Tarnita
Seasonality can induce coexistence of multiple bet-hedging strategies in Dictyostelium discoideum via storage effect
33 pages, 7 figures
null
10.1016/j.jtbi.2017.05.019
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
D. discoideum has been recently suggested as an example of bet-hedging. Upon starvation a population of unicellular amoebae splits between aggregators, which form a fruiting body made of a stalk and resistant spores, and non-aggregators. Spores are favored by long starvation periods, but vegetative cells can exploit resources in fast-recovering environments. This partition can be understood as a bet-hedging strategy that evolves in response to stochastic starvation times. A genotype is defined by a different balance between each type of cells. In this framework, if the ecological conditions are defined in terms of the mean starvation time (i.e. time between onset of starvation and the arrival of a new food pulse), a single genotype dominates each environment, which is inconsistent with the huge genetic diversity observed in nature. We investigate whether seasonality, represented by a periodic alternation in the mean starvation times, allows the coexistence of several strategies. We use a non-spatial (well-mixed) setting where different strains compete for a pulse of resources. We find that seasonality, which we model via two seasons, induces a temporal storage effect that can promote the stable coexistence of multiple genotypes. Two conditions need to be met. First, the distributions of starvation times in each season cannot overlap in order to create two well differentiated habitats within the year. Second, numerous growth-starvation cycles have to occur during each season to allow well-adapted strains to grow and survive the subsequent unfavorable period. Additional tradeoffs among life-history traits can expand the range of coexistence and increase the number of coexisting strategies, contributing towards explaining the genetic diversity observed in D. discoideum
[ { "created": "Sun, 19 Jun 2016 21:33:51 GMT", "version": "v1" }, { "created": "Tue, 21 Jun 2016 06:23:24 GMT", "version": "v2" }, { "created": "Wed, 17 May 2017 16:37:10 GMT", "version": "v3" } ]
2017-05-18
[ [ "Martinez-Garcia", "Ricardo", "" ], [ "Tarnita", "Corina E.", "" ] ]
D. discoideum has recently been suggested as an example of bet-hedging. Upon starvation a population of unicellular amoebae splits between aggregators, which form a fruiting body made of a stalk and resistant spores, and non-aggregators. Spores are favored by long starvation periods, but vegetative cells can exploit resources in fast-recovering environments. This partition can be understood as a bet-hedging strategy that evolves in response to stochastic starvation times. A genotype is defined by a different balance between each type of cells. In this framework, if the ecological conditions are defined in terms of the mean starvation time (i.e. time between onset of starvation and the arrival of a new food pulse), a single genotype dominates each environment, which is inconsistent with the huge genetic diversity observed in nature. We investigate whether seasonality, represented by a periodic alternation in the mean starvation times, allows the coexistence of several strategies. We use a non-spatial (well-mixed) setting where different strains compete for a pulse of resources. We find that seasonality, which we model via two seasons, induces a temporal storage effect that can promote the stable coexistence of multiple genotypes. Two conditions need to be met. First, the distributions of starvation times in each season cannot overlap, in order to create two well differentiated habitats within the year. Second, numerous growth-starvation cycles have to occur during each season to allow well-adapted strains to grow and survive the subsequent unfavorable period. Additional tradeoffs among life-history traits can expand the range of coexistence and increase the number of coexisting strategies, contributing towards explaining the genetic diversity observed in D. discoideum.
1306.4747
Ricky Der
Ricky Der, Joshua B. Plotkin
The equilibrium allele frequency distribution for a population with reproductive skew
Submitted to Genetics
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the population genetics of two neutral alleles under reversible mutation in the \Lambda-processes, a population model that features a skewed offspring distribution. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from the equilibrium distribution, but that the form of the offspring distribution itself cannot be uniquely identified. We also introduce an infinite-sites version of the \Lambda-process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulae for the expected number of segregating sizes as a function of sample size. We find that the Wright-Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other \Lambda-processes.
[ { "created": "Thu, 20 Jun 2013 03:29:40 GMT", "version": "v1" } ]
2013-06-21
[ [ "Der", "Ricky", "" ], [ "Plotkin", "Joshua B.", "" ] ]
We study the population genetics of two neutral alleles under reversible mutation in the \Lambda-processes, a population model that features a skewed offspring distribution. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from the equilibrium distribution, but that the form of the offspring distribution itself cannot be uniquely identified. We also introduce an infinite-sites version of the \Lambda-process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulae for the expected number of segregating sites as a function of sample size. We find that the Wright-Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other \Lambda-processes.
1302.0255
Kieran Sharkey
Robert R. Wilkinson and Kieran J. Sharkey
An Exact Relationship Between Invasion Probability and Endemic Prevalence for Markovian SIS Dynamics on Networks
16 pages, 5 figures. Supplementary data available with published version at http://dx.doi.org/10.1371/journal.pone.0069028
PLoS ONE 8(7): e69028
10.1371/journal.pone.0069028
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding models which represent the invasion of network-based systems by infectious agents can give important insights into many real-world situations, including the prevention and control of infectious diseases and computer viruses. Here we consider Markovian susceptible-infectious-susceptible (SIS) dynamics on finite strongly connected networks, applicable to several sexually transmitted diseases and computer viruses. In this context, a theoretical definition of endemic prevalence is easily obtained via the quasi-stationary distribution (QSD). By representing the model as a percolation process and utilising the property of duality, we also provide a theoretical definition of invasion probability. We then show that, for undirected networks, the probability of invasion from any given individual is equal to the (probabilistic) endemic prevalence, following successful invasion, at the individual (we also provide a relationship for the directed case). The total (fractional) endemic prevalence in the population is thus equal to the average invasion probability (across all individuals). Consequently, for such systems, the regions or individuals already supporting a high level of infection are likely to be the source of a successful invasion by another infectious agent. This could be used to inform targeted interventions when there is a threat from an emerging infectious disease.
[ { "created": "Fri, 1 Feb 2013 19:16:15 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2013 15:35:09 GMT", "version": "v2" } ]
2013-08-02
[ [ "Wilkinson", "Robert R.", "" ], [ "Sharkey", "Kieran J.", "" ] ]
Understanding models which represent the invasion of network-based systems by infectious agents can give important insights into many real-world situations, including the prevention and control of infectious diseases and computer viruses. Here we consider Markovian susceptible-infectious-susceptible (SIS) dynamics on finite strongly connected networks, applicable to several sexually transmitted diseases and computer viruses. In this context, a theoretical definition of endemic prevalence is easily obtained via the quasi-stationary distribution (QSD). By representing the model as a percolation process and utilising the property of duality, we also provide a theoretical definition of invasion probability. We then show that, for undirected networks, the probability of invasion from any given individual is equal to the (probabilistic) endemic prevalence, following successful invasion, at the individual (we also provide a relationship for the directed case). The total (fractional) endemic prevalence in the population is thus equal to the average invasion probability (across all individuals). Consequently, for such systems, the regions or individuals already supporting a high level of infection are likely to be the source of a successful invasion by another infectious agent. This could be used to inform targeted interventions when there is a threat from an emerging infectious disease.
1605.09070
Karel Břinda
Karel Břinda, Valentina Boeva, Gregory Kucherov
Dynamic read mapping and online consensus calling for better variant detection
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variant detection from high-throughput sequencing data is an essential step in identification of alleles involved in complex diseases and cancer. To deal with these massive data, elaborated sequence analysis pipelines are employed. A core component of such pipelines is a read mapping module whose accuracy strongly affects the quality of resulting variant calls. We propose a dynamic read mapping approach that significantly improves read alignment accuracy. The general idea of dynamic mapping is to continuously update the reference sequence on the basis of previously computed read alignments. Even though this concept already appeared in the literature, we believe that our work provides the first comprehensive analysis of this approach. To evaluate the benefit of dynamic mapping, we developed a software pipeline (http://github.com/karel-brinda/dymas) that mimics different dynamic mapping scenarios. The pipeline was applied to compare dynamic mapping with the conventional static mapping and, on the other hand, with the so-called iterative referencing - a computationally expensive procedure computing an optimal modification of the reference that maximizes the overall quality of all alignments. We conclude that in all alternatives, dynamic mapping results in a much better accuracy than static mapping, approaching the accuracy of iterative referencing. To correct the reference sequence in the course of dynamic mapping, we developed an online consensus caller named OCOCO (http://github.com/karel-brinda/ococo). OCOCO is the first consensus caller capable to process input reads in the online fashion. Finally, we provide conclusions about the feasibility of dynamic mapping and discuss main obstacles that have to be overcome to implement it. We also review a wide range of possible applications of dynamic mapping with a special emphasis on variant detection.
[ { "created": "Sun, 29 May 2016 22:25:55 GMT", "version": "v1" } ]
2016-05-31
[ [ "Břinda", "Karel", "" ], [ "Boeva", "Valentina", "" ], [ "Kucherov", "Gregory", "" ] ]
Variant detection from high-throughput sequencing data is an essential step in the identification of alleles involved in complex diseases and cancer. To deal with these massive data, elaborate sequence analysis pipelines are employed. A core component of such pipelines is a read mapping module whose accuracy strongly affects the quality of the resulting variant calls. We propose a dynamic read mapping approach that significantly improves read alignment accuracy. The general idea of dynamic mapping is to continuously update the reference sequence on the basis of previously computed read alignments. Even though this concept has already appeared in the literature, we believe that our work provides the first comprehensive analysis of this approach. To evaluate the benefit of dynamic mapping, we developed a software pipeline (http://github.com/karel-brinda/dymas) that mimics different dynamic mapping scenarios. The pipeline was applied to compare dynamic mapping with conventional static mapping and, on the other hand, with so-called iterative referencing - a computationally expensive procedure computing an optimal modification of the reference that maximizes the overall quality of all alignments. We conclude that in all alternatives, dynamic mapping results in much better accuracy than static mapping, approaching the accuracy of iterative referencing. To correct the reference sequence in the course of dynamic mapping, we developed an online consensus caller named OCOCO (http://github.com/karel-brinda/ococo). OCOCO is the first consensus caller capable of processing input reads in an online fashion. Finally, we provide conclusions about the feasibility of dynamic mapping and discuss the main obstacles that have to be overcome to implement it. We also review a wide range of possible applications of dynamic mapping, with a special emphasis on variant detection.
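To illustrate what "online" consensus calling means in practice, here is a minimal sketch that keeps per-position base counts and updates a working consensus as aligned bases stream in. It is illustrative only and does not reflect OCOCO's actual data structures or update rule.

    from collections import Counter

    class OnlineConsensus:
        """Keep per-position nucleotide counts and update the consensus on the fly."""

        def __init__(self, reference):
            self.consensus = list(reference)
            self.counts = [Counter() for _ in reference]

        def add_base(self, position, base):
            """Record one aligned base and switch the consensus if another base now dominates."""
            self.counts[position][base] += 1
            best, n = self.counts[position].most_common(1)[0]
            if best != self.consensus[position] and n > self.counts[position][self.consensus[position]]:
                self.consensus[position] = best

        def sequence(self):
            return "".join(self.consensus)

    cons = OnlineConsensus("ACGTACGT")
    for base in "GGG":                      # three streamed reads support G at position 3
        cons.add_base(3, base)
    print(cons.sequence())                  # -> ACGGACGT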
1311.5696
Kieran Smallbone
Kieran Smallbone
Striking a balance with Recon 2.1
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recon 2 is a highly curated reconstruction of the human metabolic network. Whilst the network is state of the art, it has shortcomings, including the presence of unbalanced reactions involving generic metabolites. By replacing these generic molecules with each of their specific instances, we can ensure full elemental balancing, in turn allowing constraint-based analyses to be performed. The resultant model, called Recon 2.1, is an order of magnitude larger than the original.
[ { "created": "Fri, 22 Nov 2013 10:19:06 GMT", "version": "v1" }, { "created": "Wed, 26 Nov 2014 00:38:24 GMT", "version": "v2" } ]
2014-11-27
[ [ "Smallbone", "Kieran", "" ] ]
Recon 2 is a highly curated reconstruction of the human metabolic network. Whilst the network is state of the art, it has shortcomings, including the presence of unbalanced reactions involving generic metabolites. By replacing these generic molecules with each of their specific instances, we can ensure full elemental balancing, in turn allowing constraint-based analyses to be performed. The resultant model, called Recon 2.1, is an order of magnitude larger than the original.
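The kind of elemental balancing this abstract refers to can be checked mechanically once every metabolite has an explicit formula. The sketch below parses formulas and compares element counts on both sides of a reaction; the hexokinase example uses common charged-species formulas, which may differ in detail from those in Recon.

    import re
    from collections import Counter

    def parse_formula(formula):
        """Parse a formula such as 'C6H12O6' into element counts."""
        counts = Counter()
        for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            counts[element] += int(num) if num else 1
        return counts

    def is_balanced(substrates, products):
        """Check elemental balance of a reaction given {formula: stoichiometric coefficient} dicts."""
        sides = []
        for side in (substrates, products):
            totals = Counter()
            for formula, coeff in side.items():
                for element, n in parse_formula(formula).items():
                    totals[element] += coeff * n
            sides.append(totals)
        return sides[0] == sides[1]

    # Hexokinase: glucose + ATP -> glucose 6-phosphate + ADP + H+
    print(is_balanced({"C6H12O6": 1, "C10H12N5O13P3": 1},
                      {"C6H11O9P": 1, "C10H12N5O10P2": 1, "H": 1}))   # True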
1609.04496
Karina Mazzitello
K. I. Mazzitello, Q. Zhang, M. A. Chrenek, F. Family, H. E. Grossniklaus, J. M. Nickerson, and Y. Jiang
Druse-Induced Morphology Evolution in Retinal Pigment Epithelium
10 pages, 9 figures
null
null
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The retinal pigment epithelium (RPE) is a key site of pathogenesis for many retinal diseases. The formation of drusen in the retina is characteristic of retinal degeneration. We investigate morphological changes in the RPE in the presence of soft drusen using an integrated experimental and modeling approach. We collect RPE flat mount images from donated human eyes and develop 1) statistical tools to quantify the images and 2) a cell-based model to simulate the morphology evolution. We compare three different mechanisms of RPE repair evolution: cell apoptosis, cell fusion, and expansion. Simulations of our RPE morphogenesis model quantitatively reproduce deformations of human RPE morphology due to drusen, suggesting that a purse-string mechanism is sufficient to explain how the RPE heals cell loss caused by drusen damage. We found that drusen beneath the tissue promote cell death in numbers that far exceed the number of cells covering the drusen. Tissue deformations are studied using area distributions, Voronoi domains, and a texture tensor.
[ { "created": "Thu, 15 Sep 2016 02:46:20 GMT", "version": "v1" }, { "created": "Thu, 2 Mar 2017 17:43:01 GMT", "version": "v2" } ]
2017-03-03
[ [ "Mazzitello", "K. I.", "" ], [ "Zhang", "Q.", "" ], [ "Chrenek", "M. A.", "" ], [ "Family", "F.", "" ], [ "Grossniklaus", "H. E.", "" ], [ "Nickerson", "J. M.", "" ], [ "Jiang", "Y.", "" ] ]
The retinal pigment epithelium (RPE) is a key site of pathogenesis for many retinal diseases. The formation of drusen in the retina is characteristic of retinal degeneration. We investigate morphological changes in the RPE in the presence of soft drusen using an integrated experimental and modeling approach. We collect RPE flat mount images from donated human eyes and develop 1) statistical tools to quantify the images and 2) a cell-based model to simulate the morphology evolution. We compare three different mechanisms of RPE repair evolution: cell apoptosis, cell fusion, and expansion. Simulations of our RPE morphogenesis model quantitatively reproduce deformations of human RPE morphology due to drusen, suggesting that a purse-string mechanism is sufficient to explain how the RPE heals cell loss caused by drusen damage. We found that drusen beneath the tissue promote cell death in numbers that far exceed the number of cells covering the drusen. Tissue deformations are studied using area distributions, Voronoi domains, and a texture tensor.
2011.04651
Wengong Jin
Wengong Jin, Regina Barzilay, Tommi Jaakkola
Discovering Synergistic Drug Combinations for COVID with Biological Bottleneck Models
Accepted to NeurIPS 2020 Machine Learning for Molecules Workshop
null
null
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug combinations play an important role in therapeutics due to their better efficacy and reduced toxicity. Recent approaches have applied machine learning to identify synergistic combinations for cancer, but they are not applicable to new diseases with limited combination data. Given that drug synergy is closely tied to biological targets, we propose a \emph{biological bottleneck} model that jointly learns drug-target interaction and synergy. The model consists of two parts: a drug-target interaction module and a target-disease association module. This design enables the model to \emph{explain} how a biological target affects drug synergy. By utilizing additional biological information, our model achieves 0.78 test AUC in drug synergy prediction using only 90 COVID drug combinations for training. We experimentally tested the model predictions in the U.S. National Center for Advancing Translational Sciences (NCATS) facilities and discovered two novel drug combinations (Remdesivir + Reserpine and Remdesivir + IQ-1S) with strong synergy in vitro.
[ { "created": "Mon, 9 Nov 2020 03:30:44 GMT", "version": "v1" }, { "created": "Sat, 28 Nov 2020 18:53:07 GMT", "version": "v2" } ]
2020-12-01
[ [ "Jin", "Wengong", "" ], [ "Barzilay", "Regina", "" ], [ "Jaakkola", "Tommi", "" ] ]
Drug combinations play an important role in therapeutics due to their better efficacy and reduced toxicity. Recent approaches have applied machine learning to identify synergistic combinations for cancer, but they are not applicable to new diseases with limited combination data. Given that drug synergy is closely tied to biological targets, we propose a \emph{biological bottleneck} model that jointly learns drug-target interaction and synergy. The model consists of two parts: a drug-target interaction module and a target-disease association module. This design enables the model to \emph{explain} how a biological target affects drug synergy. By utilizing additional biological information, our model achieves 0.78 test AUC in drug synergy prediction using only 90 COVID drug combinations for training. We experimentally tested the model predictions in the U.S. National Center for Advancing Translational Sciences (NCATS) facilities and discovered two novel drug combinations (Remdesivir + Reserpine and Remdesivir + IQ-1S) with strong synergy in vitro.
2404.17128
Xiaoyu Zhang
Xiaoyu Zhang, Pengcheng Yang, Jiawei Feng, Qiang Luo, Wei Lin and Xin Lu
Network Structure Trumps Neuron Dynamics: Insights from Drosophila Connectome Simulations
null
null
null
null
q-bio.NC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the success of artificial neural networks, the necessity of real network structures in simulating intelligence remains unclear. Utilizing the largest adult Drosophila connectome data set, we constructed a large-scale network communication model framework based on simple neuronal activation mechanisms to simulate the activation behavior observed in the connectome. The results demonstrate that even with simple propagation rules, models based on real neural network structures can generate activation patterns similar to those in the actual brain. Importantly, we found that different neuronal dynamics models all produced similar activation patterns. This consistency across models emphasizes the crucial role of network topology in neural information processing, challenging views that rely solely on neuron count or complex individual neuron dynamics. Moreover, we tested the influence of the network reconnect rate and found that even a 1% reconnect rate destroys the activation patterns observed before. By comparing network distances and spatial distances, we found that network distance better explains the information propagation patterns between neurons, highlighting the importance of topological structure in neural information processing. To facilitate these studies, we developed real-time 3D visualization software for large spatial networks, bridging a crucial gap in existing tools. Our findings underscore the importance of network structure in neural activation and provide new insights into the fundamental principles governing brain functionality.
[ { "created": "Fri, 26 Apr 2024 03:07:14 GMT", "version": "v1" }, { "created": "Sun, 9 Jun 2024 03:34:23 GMT", "version": "v2" }, { "created": "Sun, 30 Jun 2024 13:25:32 GMT", "version": "v3" } ]
2024-07-02
[ [ "Zhang", "Xiaoyu", "" ], [ "Yang", "Pengcheng", "" ], [ "Feng", "Jiawei", "" ], [ "Luo", "Qiang", "" ], [ "Lin", "Wei", "" ], [ "Lu", "Xin", "" ] ]
Despite the success of artificial neural networks, the necessity of real network structures in simulating intelligence remains unclear. Utilizing the largest adult Drosophila connectome data set, we constructed a large-scale network communication model framework based on simple neuronal activation mechanisms to simulate the activation behavior observed in the connectome. The results demonstrate that even with simple propagation rules, models based on real neural network structures can generate activation patterns similar to those in the actual brain. Importantly, we found that different neuronal dynamics models all produced similar activation patterns. This consistency across models emphasizes the crucial role of network topology in neural information processing, challenging views that rely solely on neuron count or complex individual neuron dynamics. Moreover, we tested the influence of the network reconnect rate and found that even a 1% reconnect rate destroys the activation patterns observed before. By comparing network distances and spatial distances, we found that network distance better explains the information propagation patterns between neurons, highlighting the importance of topological structure in neural information processing. To facilitate these studies, we developed real-time 3D visualization software for large spatial networks, bridging a crucial gap in existing tools. Our findings underscore the importance of network structure in neural activation and provide new insights into the fundamental principles governing brain functionality.
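A minimal sketch of the kind of simple propagation rule described above, run on a toy random directed graph rather than the actual Drosophila connectome; the threshold rule, weights, and graph are all assumptions for illustration.

# Sketch of simple activation propagation on a directed, weighted graph (toy network, not the connectome).
import random

def propagate(adj, seeds, threshold=1.0, steps=10):
    """adj: {source: {target: weight}}; a neuron fires at the next step if its summed input from firing neurons >= threshold."""
    active = set(seeds)
    history = [set(active)]
    for _ in range(steps):
        drive = {}
        for src in active:
            for dst, w in adj.get(src, {}).items():
                drive[dst] = drive.get(dst, 0.0) + w
        active = {n for n, inp in drive.items() if inp >= threshold}
        history.append(set(active))
    return history

# Toy random network with 100 neurons and up to 5 outgoing synapses each.
random.seed(0)
adj = {i: {random.randrange(100): random.uniform(0.2, 1.0) for _ in range(5)} for i in range(100)}
waves = propagate(adj, seeds={0, 1, 2}, threshold=0.5, steps=8)
print([len(w) for w in waves])   # size of the active set at each step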
2208.08896
Shuqiang Wang
Heng Kong and Shuqiang Wang
Adversarial Learning Based Structural Brain-network Generative Model for Analyzing Mild Cognitive Impairment
null
null
null
null
q-bio.NC cs.CV eess.IV eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mild cognitive impairment (MCI) is a precursor of Alzheimer's disease (AD), and the detection of MCI is of great clinical significance. Analyzing the structural brain networks of patients is vital for the recognition of MCI. However, current studies on structural brain networks are entirely dependent on specific toolboxes, which is time-consuming and subjective. Few tools can obtain structural brain networks from brain diffusion tensor images. In this work, an adversarial learning-based structural brain-network generative model (SBGM) is proposed to directly learn the structural connections from brain diffusion tensor images. By analyzing the differences in structural brain networks across subjects, we found that the structural brain networks of subjects showed a consistent trend from elderly normal controls (NC) to early mild cognitive impairment (EMCI) to late mild cognitive impairment (LMCI): structural connectivity weakened progressively as the condition worsened. In addition, our proposed model tri-classifies EMCI, LMCI, and NC subjects, achieving a classification accuracy of 83.33\% on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
[ { "created": "Tue, 9 Aug 2022 02:45:53 GMT", "version": "v1" } ]
2022-08-19
[ [ "Kong", "Heng", "" ], [ "Wang", "Shuqiang", "" ] ]
Mild cognitive impairment (MCI) is a precursor of Alzheimer's disease (AD), and the detection of MCI is of great clinical significance. Analyzing the structural brain networks of patients is vital for the recognition of MCI. However, current studies on structural brain networks are entirely dependent on specific toolboxes, which is time-consuming and subjective. Few tools can obtain structural brain networks from brain diffusion tensor images. In this work, an adversarial learning-based structural brain-network generative model (SBGM) is proposed to directly learn the structural connections from brain diffusion tensor images. By analyzing the differences in structural brain networks across subjects, we found that the structural brain networks of subjects showed a consistent trend from elderly normal controls (NC) to early mild cognitive impairment (EMCI) to late mild cognitive impairment (LMCI): structural connectivity weakened progressively as the condition worsened. In addition, our proposed model tri-classifies EMCI, LMCI, and NC subjects, achieving a classification accuracy of 83.33\% on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
1212.1117
Alexander Stewart
Alexander J. Stewart, Robert M. Seymour, Andrew Pomiankowski, Max Reuter
Under-dominance constrains the evolution of negative autoregulation in diploids
null
PLoS Comput Biol, 2013, 9(3): e1002992
10.1371/journal.pcbi.1002992
null
q-bio.PE q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulatory networks have evolved to allow gene expression to rapidly track changes in the environment as well as to buffer perturbations and maintain cellular homeostasis in the absence of change. Theoretical work and empirical investigation in Escherichia coli have shown that negative autoregulation confers both rapid response times and reduced intrinsic noise, which is reflected in the fact that almost half of Escherichia coli transcription factors are negatively autoregulated. However, negative autoregulation is exceedingly rare amongst the transcription factors of Saccharomyces cerevisiae. This difference is all the more surprising because E. coli and S. cerevisiae otherwise have remarkably similar profiles of network motifs. In this study we first show that regulatory interactions amongst the transcription factors of Drosophila melanogaster and humans have a similar dearth of negative autoregulation to that seen in S. cerevisiae. We then present a model demonstrating that this fundamental difference in the noise reduction strategies used amongst species can be explained by constraints on the evolution of negative autoregulation in diploids. We show that regulatory interactions between pairs of homologous genes within the same cell can lead to under-dominance - mutations which result in stronger autoregulation, and decrease noise in homozygotes, paradoxically can cause increased noise in heterozygotes. This severely limits a diploid's ability to evolve negative autoregulation as a noise reduction mechanism. Our work offers a simple and general explanation for a previously unexplained difference between the regulatory architectures of E. coli and yeast, Drosophila and humans. It also demonstrates that the effects of diploidy in gene networks can have counter-intuitive consequences that may profoundly influence the course of evolution.
[ { "created": "Wed, 5 Dec 2012 18:04:15 GMT", "version": "v1" } ]
2013-04-30
[ [ "Stewart", "Alexander J.", "" ], [ "Seymour", "Robert M.", "" ], [ "Pomiankowski", "Andrew", "" ], [ "Reuter", "Max", "" ] ]
Regulatory networks have evolved to allow gene expression to rapidly track changes in the environment as well as to buffer perturbations and maintain cellular homeostasis in the absence of change. Theoretical work and empirical investigation in Escherichia coli have shown that negative autoregulation confers both rapid response times and reduced intrinsic noise, which is reflected in the fact that almost half of Escherichia coli transcription factors are negatively autoregulated. However, negative autoregulation is exceedingly rare amongst the transcription factors of Saccharomyces cerevisiae. This difference is all the more surprising because E. coli and S. cerevisiae otherwise have remarkably similar profiles of network motifs. In this study we first show that regulatory interactions amongst the transcription factors of Drosophila melanogaster and humans have a similar dearth of negative autoregulation to that seen in S. cerevisiae. We then present a model demonstrating that this fundamental difference in the noise reduction strategies used amongst species can be explained by constraints on the evolution of negative autoregulation in diploids. We show that regulatory interactions between pairs of homologous genes within the same cell can lead to under-dominance - mutations which result in stronger autoregulation, and decrease noise in homozygotes, paradoxically can cause increased noise in heterozygotes. This severely limits a diploid's ability to evolve negative autoregulation as a noise reduction mechanism. Our work offers a simple and general explanation for a previously unexplained difference between the regulatory architectures of E. coli and yeast, Drosophila and humans. It also demonstrates that the effects of diploidy in gene networks can have counter-intuitive consequences that may profoundly influence the course of evolution.
1107.5192
Ingo Lohmar
Ingo Lohmar and Baruch Meerson
Switching between phenotypes and population extinction
11 pages, 5 figures. Additional discussion paragraph, minor language improvements; content as published in Phys. Rev. E
Phys. Rev. E 84 (2011) 051901
10.1103/PhysRevE.84.051901
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many types of bacteria can survive under stress by switching stochastically between two different phenotypes: the "normals" who multiply fast, but are vulnerable to stress, and the "persisters" who hardly multiply, but are resilient to stress. Previous theoretical studies of such bacterial populations have focused on the \emph{fitness}: the asymptotic rate of unbounded growth of the population. Yet for an isolated population of established (and not very large) size, a more relevant measure may be the population \emph{extinction risk} due to the interplay of adverse extrinsic variations and intrinsic noise of birth, death and switching processes. Applying a WKB approximation to the pertinent master equation of such a two-population system, we quantify the extinction risk, and find the most likely path to extinction under both favorable and adverse conditions. Analytical results are obtained both in the biologically relevant regime when the switching is rare compared with the birth and death processes, and in the opposite regime of frequent switching. We show that rare switches are most beneficial in reducing the extinction risk.
[ { "created": "Tue, 26 Jul 2011 12:18:08 GMT", "version": "v1" }, { "created": "Wed, 9 Nov 2011 20:21:58 GMT", "version": "v2" } ]
2011-11-10
[ [ "Lohmar", "Ingo", "" ], [ "Meerson", "Baruch", "" ] ]
Many types of bacteria can survive under stress by switching stochastically between two different phenotypes: the "normals" who multiply fast, but are vulnerable to stress, and the "persisters" who hardly multiply, but are resilient to stress. Previous theoretical studies of such bacterial populations have focused on the \emph{fitness}: the asymptotic rate of unbounded growth of the population. Yet for an isolated population of established (and not very large) size, a more relevant measure may be the population \emph{extinction risk} due to the interplay of adverse extrinsic variations and intrinsic noise of birth, death and switching processes. Applying a WKB approximation to the pertinent master equation of such a two-population system, we quantify the extinction risk, and find the most likely path to extinction under both favorable and adverse conditions. Analytical results are obtained both in the biologically relevant regime when the switching is rare compared with the birth and death processes, and in the opposite regime of frequent switching. We show that rare switches are most beneficial in reducing the extinction risk.
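For readers unfamiliar with the technique, the generic shape of a WKB (eikonal) treatment of a master equation is sketched below; this is the standard formalism, not the specific two-population Hamiltonian or action derived in the paper.

Writing the probability of the population state $\mathbf{n}$ as $P(\mathbf{n},t)\asymp e^{-N S(\mathbf{x},t)}$ with $\mathbf{x}=\mathbf{n}/N$ and $N$ the typical population size, the master equation reduces, to leading order in $1/N$, to a Hamilton-Jacobi equation
\[
\partial_t S + H(\mathbf{x},\nabla_{\mathbf{x}} S) = 0, \qquad
H(\mathbf{x},\mathbf{p}) = \sum_r w_r(\mathbf{x})\left(e^{\mathbf{p}\cdot\boldsymbol{\Delta}_r}-1\right),
\]
where the sum runs over the elementary reactions (birth, death, switching) with rates $w_r$ and population increments $\boldsymbol{\Delta}_r$. The mean time to extinction then scales as $\tau \sim e^{N\Delta S}$, with $\Delta S$ the action accumulated along the optimal (most likely) path to extinction.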
q-bio/0703033
Ioana Bena Dr.
Ioana Bena, Michel Droz, Janusz Szwabinski, Andrzej Pekalski
Complex population dynamics as a competition between multiple time-scale phenomena
15 pages, 12 figures. Accepted for publication in Phys. Rev. E
Physical Review E 76, 011908 (2007).
10.1103/PhysRevE.76.011908
null
q-bio.PE cond-mat.other cond-mat.stat-mech physics.bio-ph
null
The role of the selection pressure and mutation amplitude on the behavior of a single-species population evolving on a two-dimensional lattice, in a periodically changing environment, is studied both analytically and numerically. The mean-field level of description allows us to highlight the delicate interplay between the different time-scale processes in the resulting complex dynamics of the system. We clarify the influence of the amplitude and period of the environmental changes on the critical value of the selection pressure corresponding to an "extinct-alive" phase transition of the population. However, the intrinsic stochasticity and the dynamically built-in correlations among the individuals, as well as the role of the mutation-induced variety in the population's evolution, are not appropriately accounted for. A more refined, individual-based level of description has to be considered. The inherent fluctuations do not destroy the "extinct-alive" phase transition, and the mutation amplitude strongly influences the value of the critical selection pressure. The phase diagram in the plane of the population's parameters -- selection and mutation -- is discussed as a function of the characteristics of the environmental variation. The differences between a smooth variation of the environment and an abrupt, catastrophic change are also addressed.
[ { "created": "Thu, 15 Mar 2007 00:16:27 GMT", "version": "v1" }, { "created": "Thu, 7 Jun 2007 14:29:38 GMT", "version": "v2" } ]
2009-11-13
[ [ "Bena", "Ioana", "" ], [ "Droz", "Michel", "" ], [ "Szwabinski", "Janusz", "" ], [ "Pekalski", "Andrzej", "" ] ]
The role of the selection pressure and mutation amplitude on the behavior of a single-species population evolving on a two-dimensional lattice, in a periodically changing environment, is studied both analytically and numerically. The mean-field level of description allows us to highlight the delicate interplay between the different time-scale processes in the resulting complex dynamics of the system. We clarify the influence of the amplitude and period of the environmental changes on the critical value of the selection pressure corresponding to an "extinct-alive" phase transition of the population. However, the intrinsic stochasticity and the dynamically built-in correlations among the individuals, as well as the role of the mutation-induced variety in the population's evolution, are not appropriately accounted for. A more refined, individual-based level of description has to be considered. The inherent fluctuations do not destroy the "extinct-alive" phase transition, and the mutation amplitude strongly influences the value of the critical selection pressure. The phase diagram in the plane of the population's parameters -- selection and mutation -- is discussed as a function of the characteristics of the environmental variation. The differences between a smooth variation of the environment and an abrupt, catastrophic change are also addressed.
1711.00250
Takashi Okada
Takashi Okada, Je-Chiang Tsai, and Atsushi Mochizuki
Structural Bifurcation Analysis in Chemical Reaction Networks
29 pages, 12 figures. v2: FIG S4 corrected
Phys. Rev. E 98, 012417 (2018)
10.1103/PhysRevE.98.012417
RIKEN-iTHEMS-Report-18
q-bio.MN math.DS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In living cells, chemical reactions form a complex network. Complicated dynamics arising from such networks are the origins of biological functions. We propose a novel mathematical method to analyze bifurcation behaviors of a reaction system from the network structure alone. The whole network is decomposed into subnetworks based on "buffering structures". For each subnetwork, the bifurcation condition is studied independently, and the parameters that can induce bifurcations and the chemicals that can exhibit bifurcations are determined. We demonstrate our theory using hypothetical and real networks.
[ { "created": "Wed, 1 Nov 2017 08:37:35 GMT", "version": "v1" }, { "created": "Mon, 6 Nov 2017 06:44:34 GMT", "version": "v2" } ]
2018-08-08
[ [ "Okada", "Takashi", "" ], [ "Tsai", "Je-Chiang", "" ], [ "Mochizuki", "Atsushi", "" ] ]
In living cells, chemical reactions form a complex network. Complicated dynamics arising from such networks are the origins of biological functions. We propose a novel mathematical method to analyze bifurcation behaviors of a reaction system from the network structure alone. The whole network is decomposed into subnetworks based on "buffering structures". For each subnetwork, the bifurcation condition is studied independently, and the parameters that can induce bifurcations and the chemicals that can exhibit bifurcations are determined. We demonstrate our theory using hypothetical and real networks.
0705.4079
Alpan Raval
Alpan Raval
Molecular Clock on a Neutral Network
10 pages
null
10.1103/PhysRevLett.99.138104
null
q-bio.PE q-bio.MN
null
The number of fixed mutations accumulated in an evolving population often displays a variance that is significantly larger than the mean (the overdispersed molecular clock). By examining a generic evolutionary process on a neutral network of high-fitness genotypes, we establish a formalism for computing all cumulants of the full probability distribution of accumulated mutations in terms of graph properties of the neutral network, and use the formalism to prove overdispersion of the molecular clock. We further show that significant overdispersion arises naturally in evolution when the neutral network is highly sparse, exhibits large global fluctuations in neutrality, and small local fluctuations in neutrality. The results are also relevant for elucidating the topological structure of a neutral network from empirical measurements of the substitution process.
[ { "created": "Mon, 28 May 2007 19:01:45 GMT", "version": "v1" } ]
2009-11-13
[ [ "Raval", "Alpan", "" ] ]
The number of fixed mutations accumulated in an evolving population often displays a variance that is significantly larger than the mean (the overdispersed molecular clock). By examining a generic evolutionary process on a neutral network of high-fitness genotypes, we establish a formalism for computing all cumulants of the full probability distribution of accumulated mutations in terms of graph properties of the neutral network, and use the formalism to prove overdispersion of the molecular clock. We further show that significant overdispersion arises naturally in evolution when the neutral network is highly sparse, exhibits large global fluctuations in neutrality, and small local fluctuations in neutrality. The results are also relevant for elucidating the topological structure of a neutral network from empirical measurements of the substitution process.
1305.3902
Sayak Mukherjee
Sayak Mukherjee, Sang-Cheol Seok, Veronica J. Vieland and Jayajit Das
Data-driven quantification of robustness and sensitivity of cell signaling networks
46 pages, 11 figures. Physical Biology, 2013
null
10.1088/1478-3975/10/6/066002
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The robustness and sensitivity of responses generated by cell signaling networks have been associated with the survival and evolvability of organisms. However, existing methods for analyzing the robustness and sensitivity of signaling networks ignore the experimentally observed cell-to-cell variations of protein abundances and cell functions, or contain ad hoc assumptions. We propose and apply a data-driven Maximum Entropy (MaxEnt) based method to quantify the robustness and sensitivity of the Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank-orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionarily selected to vary in individual cells according to their abilities to perturb cell functions. Furthermore, predictions from our approach regarding the distribution of protein abundances and the properties of chemotactic responses in individual cells, based on cell-population-averaged data, are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as to generate predictions of single-cell properties based on population-averaged experimental data in a wide range of cell signaling systems.
[ { "created": "Thu, 16 May 2013 19:46:09 GMT", "version": "v1" }, { "created": "Tue, 29 Oct 2013 20:28:21 GMT", "version": "v2" } ]
2013-10-31
[ [ "Mukherjee", "Sayak", "" ], [ "Seok", "Sang-Cheol", "" ], [ "Vieland", "Veronica J.", "" ], [ "Das", "Jayajit", "" ] ]
The robustness and sensitivity of responses generated by cell signaling networks have been associated with the survival and evolvability of organisms. However, existing methods for analyzing the robustness and sensitivity of signaling networks ignore the experimentally observed cell-to-cell variations of protein abundances and cell functions, or contain ad hoc assumptions. We propose and apply a data-driven Maximum Entropy (MaxEnt) based method to quantify the robustness and sensitivity of the Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank-orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionarily selected to vary in individual cells according to their abilities to perturb cell functions. Furthermore, predictions from our approach regarding the distribution of protein abundances and the properties of chemotactic responses in individual cells, based on cell-population-averaged data, are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as to generate predictions of single-cell properties based on population-averaged experimental data in a wide range of cell signaling systems.
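For orientation, the generic form of a MaxEnt model constrained by measured population averages is
\[
P(\mathbf{x}) = \frac{1}{Z(\boldsymbol{\lambda})}\exp\!\Big(\sum_i \lambda_i f_i(\mathbf{x})\Big), \qquad
Z(\boldsymbol{\lambda}) = \int \! d\mathbf{x}\,\exp\!\Big(\sum_i \lambda_i f_i(\mathbf{x})\Big),
\]
where the Lagrange multipliers $\lambda_i$ are chosen so that the model averages $\langle f_i\rangle_P$ match the cell-population-averaged measurements. This is the textbook construction; the particular observables $f_i$ and the fitting procedure used for the chemotaxis network are specific to the paper and are not reproduced here.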
1203.3954
Klaus Jaffe Dr
Klaus Jaffe, Guillermo Mascitti and Daniella Seguias
Gender differences in time perception and its relation with academic performance: non-linear dynamics in the formation of cognitive systems
Politically incorrect paper practically impossible to publish in a psychology journal
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-linear dynamics is probably much more common in the epigenetic dynamics of living beings than hitherto recognized. Here we report a case of global bifurcation triggered by gender that affects higher cognitive functions in humans. We report a cross-cultural study showing deviations in time perception, as assessed by estimating the duration of brief sounds, according to their durations and to the gender of the perceiver. Results show that the durations of sounds lasting less than 10 s were on average overestimated, whereas those lasting longer were underestimated; estimates of sounds shorter than 1 s were extremely inaccurate. Females consistently gave longer estimates than males. Accuracy in time estimation was correlated with academic performance in disciplines requiring mathematical or scientific skills in male, but not in female, students. This difference in correlation, however, had nothing to do with overall skills in mathematics. Both sexes scored similarly in scientific and technical disciplines, but females had higher grades than males in languages and lower ones in physical education. Our results confirm existing evidence for gender differences in cognitive processing, hinting at the existence of different "mathematical intelligences" with different non-linear relationships between natural or biological mathematical intuition and time perception.
[ { "created": "Sun, 18 Mar 2012 14:29:30 GMT", "version": "v1" }, { "created": "Sat, 24 Mar 2012 13:57:49 GMT", "version": "v2" } ]
2012-03-27
[ [ "Jaffe", "Klaus", "" ], [ "Mascitti", "Guillermo", "" ], [ "Seguias", "Daniella", "" ] ]
Non-linear dynamics is probably much more common in the epigenetic dynamics of living beings than hitherto recognized. Here we report a case of global bifurcation triggered by gender that affects higher cognitive functions in humans. We report a cross-cultural study showing deviations in time perception, as assessed by estimating the duration of brief sounds, according to their durations and to the gender of the perceiver. Results show that the durations of sounds lasting less than 10 s were on average overestimated, whereas those lasting longer were underestimated; estimates of sounds shorter than 1 s were extremely inaccurate. Females consistently gave longer estimates than males. Accuracy in time estimation was correlated with academic performance in disciplines requiring mathematical or scientific skills in male, but not in female, students. This difference in correlation, however, had nothing to do with overall skills in mathematics. Both sexes scored similarly in scientific and technical disciplines, but females had higher grades than males in languages and lower ones in physical education. Our results confirm existing evidence for gender differences in cognitive processing, hinting at the existence of different "mathematical intelligences" with different non-linear relationships between natural or biological mathematical intuition and time perception.
1607.00952
Mahmoud Hassan
Aya Kabbara, Wassim El Falou, Mohamad Khalil, Fabrice Wendling and Mahmoud Hassan
Graph analysis of spontaneous brain network using EEG source connectivity
International Conference on Bio-engineering for Smart Technologies (BioSMART 2016)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploring human brain networks during rest is a topic of great interest. Several structural and functional studies have previously been conducted to study the intrinsic brain networks. In this paper, we focus on investigating the human brain network topology using a dense electroencephalography (EEG) source connectivity approach. We applied graph theoretical methods to functional networks reconstructed from resting-state data acquired using EEG in 14 healthy subjects. Our findings confirmed the existence of sets of brain regions that act as functional hubs. In particular, the isthmus cingulate and the orbitofrontal regions reveal high levels of integration. The results also emphasize the critical role of the default mode network (DMN) in enabling efficient communication between brain regions.
[ { "created": "Mon, 4 Jul 2016 16:38:16 GMT", "version": "v1" } ]
2016-07-05
[ [ "Kabbara", "Aya", "" ], [ "Falou", "Wassim El", "" ], [ "Khalil", "Mohamad", "" ], [ "Wendling", "Fabrice", "" ], [ "Hassan", "Mahmoud", "" ] ]
Exploring human brain networks during rest is a topic of great interest. Several structural and functional studies have previously been conducted to study the intrinsic brain networks. In this paper, we focus on investigating the human brain network topology using a dense electroencephalography (EEG) source connectivity approach. We applied graph theoretical methods to functional networks reconstructed from resting-state data acquired using EEG in 14 healthy subjects. Our findings confirmed the existence of sets of brain regions that act as functional hubs. In particular, the isthmus cingulate and the orbitofrontal regions reveal high levels of integration. The results also emphasize the critical role of the default mode network (DMN) in enabling efficient communication between brain regions.
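The hub analysis can be illustrated with a short networkx sketch on a random stand-in graph; the node count, hub threshold, and use of an unweighted graph are assumptions for illustration, whereas the paper works with weighted networks reconstructed from EEG source connectivity.

# Sketch: identify hub regions in a functional connectivity graph (random toy graph, not the EEG data).
import networkx as nx

G = nx.erdos_renyi_graph(n=68, p=0.1, seed=1)   # stand-in for a 68-region functional network

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

def top_nodes(scores, k):
    """Return the k nodes with the highest scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

# Call a node a "hub" if it ranks in the top 10% for both centrality measures.
k = max(1, len(G) // 10)
hubs = top_nodes(degree, k) & top_nodes(betweenness, k)
print("candidate hub nodes:", sorted(hubs))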
2307.06495
Benjamin Allen
Benjamin Allen
Symmetry in models of natural selection
21 pages, 4 figures
null
null
null
q-bio.PE math.GR math.PR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Symmetry arguments are frequently used -- often implicitly -- in mathematical modeling of natural selection. Symmetry simplifies the analysis of models and reduces the number of distinct population states to be considered. Here, I introduce a formal definition of symmetry in mathematical models of natural selection. This definition applies to a broad class of models that satisfy a minimal set of assumptions, using a framework developed in previous works. In this framework, population structure is represented by a set of sites at which alleles can live, and transitions occur via replacement of some alleles by copies of others. A symmetry is defined as a permutation of sites that preserves probabilities of replacement and mutation. The symmetries of a given selection process form a group, which acts on population states in a way that preserves the Markov chain representing selection. Applying classical results on group actions, I formally characterize the use of symmetry to reduce the states of this Markov chain, and obtain bounds on the number of states in the reduced chain.
[ { "created": "Thu, 13 Jul 2023 00:07:42 GMT", "version": "v1" } ]
2023-07-14
[ [ "Allen", "Benjamin", "" ] ]
Symmetry arguments are frequently used -- often implicitly -- in mathematical modeling of natural selection. Symmetry simplifies the analysis of models and reduces the number of distinct population states to be considered. Here, I introduce a formal definition of symmetry in mathematical models of natural selection. This definition applies to a broad class of models that satisfy a minimal set of assumptions, using a framework developed in previous works. In this framework, population structure is represented by a set of sites at which alleles can live, and transitions occur via replacement of some alleles by copies of others. A symmetry is defined as a permutation of sites that preserves probabilities of replacement and mutation. The symmetries of a given selection process form a group, which acts on population states in a way that preserves the Markov chain representing selection. Applying classical results on group actions, I formally characterize the use of symmetry to reduce the states of this Markov chain, and obtain bounds on the number of states in the reduced chain.
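Two standard group-action facts convey the kind of state-space reduction being formalized; they are offered as background and are not the paper's own definitions or bounds.

A symmetry $\sigma$ of the site set $S$ acts on population states $x$ (assignments of an allele to each site) by $(\sigma\cdot x)_s = x_{\sigma^{-1}(s)}$, and the reduced Markov chain lives on the orbits of this action. Burnside's lemma counts them:
\[
\#\{\text{orbits}\} = \frac{1}{|G|}\sum_{\sigma\in G}\bigl|\mathrm{Fix}(\sigma)\bigr|, \qquad
\mathrm{Fix}(\sigma)=\{x : \sigma\cdot x = x\}.
\]
For example, if all $n$ sites are exchangeable ($G = S_n$) and there are two alleles, the $2^n$ states collapse to the $n+1$ allele-count states.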
0706.1504
George Bass Ph.D.
George E. Bass, Bernd Meibohm, James T. Dalton and Robert Sayre
Free Energy of Activation for the Comorosan Effect
21 pages, 3 figures, 2 tables
null
null
null
q-bio.SC q-bio.BM
null
Initial reaction rate data for lactic dehydrogenase / pyruvate, lactic dehydrogenase / lactate and malic dehydrogenase / malate enzyme reactions were analyzed to obtain activation free energy changes of -329, -195 and -221 cal/mole, respectively, for rate increases associated with time-specific irradiation of the crystalline substrates prior to dissolution and incorporation in the reaction solutions. These energies, presumably, correspond to conformational or vibrational changes in the reactants or the activated complex. For the lactic dehydrogenase / pyruvate reaction, it is estimated that on the order of 10% of the irradiation energy (546 nm, 400 footcandles for 5 seconds) would be required to produce the observed reaction rate increase if a presumed photoproduct is consumed stoichiometrically with the pyruvate substrate. These findings are consistent with the proposition that the observed reaction rate enhancement involves photoproducts derived from oscillatory atmospheric gas reactions at the crystalline enzyme substrate surfaces rather than photo-excitations of the substrate molecules, per se.
[ { "created": "Mon, 11 Jun 2007 16:03:22 GMT", "version": "v1" } ]
2007-06-12
[ [ "Bass", "George E.", "" ], [ "Meibohm", "Bernd", "" ], [ "Dalton", "James T.", "" ], [ "Sayre", "Robert", "" ] ]
Initial reaction rate data for lactic dehydrogenase / pyruvate, lactic dehydrogenase / lactate and malic dehydrogenase / malate enzyme reactions were analyzed to obtain activation free energy changes of -329, -195 and -221 cal/mole, respectively, for rate increases associated with time-specific irradiation of the crystalline substrates prior to dissolution and incorporation in the reaction solutions. These energies, presumably, correspond to conformational or vibrational changes in the reactants or the activated complex. For the lactic dehydrogenase / pyruvate reaction, it is estimated that on the order of 10% of the irradiation energy (546 nm, 400 footcandles for 5 seconds) would be required to produce the observed reaction rate increase if a presumed photoproduct is consumed stoichiometrically with the pyruvate substrate. These findings are consistent with the proposition that the observed reaction rate enhancement involves photoproducts derived from oscillatory atmospheric gas reactions at the crystalline enzyme substrate surfaces rather than photo-excitations of the substrate molecules, per se.
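As a rough consistency check on the scale of these numbers (assuming $T \approx 298\ \mathrm{K}$; the experiments may have used a different temperature), the activation free-energy change attributed to a rate enhancement follows from
\[
\Delta\Delta G^{\ddagger} = -RT\,\ln\frac{k_{\mathrm{irradiated}}}{k_{\mathrm{control}}},
\]
so with $R \approx 1.987\ \mathrm{cal\,mol^{-1}\,K^{-1}}$, a value of $\Delta\Delta G^{\ddagger} = -329\ \mathrm{cal/mol}$ corresponds to a rate ratio of about $e^{329/592} \approx 1.7$, i.e. roughly a 70-75% increase in the initial rate.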
1104.2204
Juergen Reingruber
Juergen Reingruber and David Holcman
Transcription factor search for a DNA promoter in a three-states model
4 pages, 3 figures
null
10.1103/PhysRevE.84.020901
null
q-bio.SC q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To ensure fast gene activation, Transcription Factors (TF) use a mechanism known as facilitated diffusion to find their DNA promoter site. Here we analyze such a process where a TF alternates between 3D and 1D diffusion. In the latter (TF bound to the DNA), the TF further switches between a fast translocation state dominated by interaction with the DNA backbone, and a slow examination state where interaction with DNA base pairs is predominant. We derive a new formula for the mean search time, and show that it is faster and less sensitive to the binding energy fluctuations compared to the case of a single sliding state. We find that for an optimal search, the time spent bound to the DNA is larger compared to the 3D time in the nucleus, in agreement with recent experimental data. Our results further suggest that modifying switching via phosphorylation or methylation of the TF or the DNA can efficiently regulate transcription.
[ { "created": "Tue, 12 Apr 2011 13:16:09 GMT", "version": "v1" } ]
2015-05-27
[ [ "Reingruber", "Juergen", "" ], [ "Holcman", "David", "" ] ]
To ensure fast gene activation, Transcription Factors (TF) use a mechanism known as facilitated diffusion to find their DNA promoter site. Here we analyze such a process where a TF alternates between 3D and 1D diffusion. In the latter (TF bound to the DNA), the TF further switches between a fast translocation state dominated by interaction with the DNA backbone, and a slow examination state where interaction with DNA base pairs is predominant. We derive a new formula for the mean search time, and show that it is faster and less sensitive to the binding energy fluctuations compared to the case of a single sliding state. We find that for an optimal search, the time spent bound to the DNA is larger compared to the 3D time in the nucleus, in agreement with recent experimental data. Our results further suggest that modifying switching via phosphorylation or methylation of the TF or the DNA can efficiently regulate transcription.
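For background, the classic two-state facilitated-diffusion estimate (sliding plus 3D excursions) is often written as
\[
t_s \;\approx\; \frac{L}{\bar n}\left(\tau_{1D}+\tau_{3D}\right), \qquad \bar n \simeq \sqrt{D_1\,\tau_{1D}},
\]
where $L$ is the DNA length in base pairs, $\bar n$ the mean number of sites scanned per sliding round, $\tau_{1D}$ and $\tau_{3D}$ the mean durations of the bound and free excursions, and $D_1$ the sliding diffusivity; minimizing over $\tau_{1D}$ gives the familiar optimum $\tau_{1D}=\tau_{3D}$. This is quoted only as the two-state baseline that the three-state model modifies; the paper's new formula, and its conclusion that the optimal search spends more time bound to the DNA than in 3D, are not reproduced here.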
2001.03019
Burcu Gungor
Hilal Hacilar, O.Ufuk Nalbantoglu, Oya Aran, Burcu Bakir-Gungor
Inflammatory Bowel Disease Biomarkers of Human Gut Microbiota Selected via Ensemble Feature Selection Methods
9 pages, 10 figures
null
null
null
q-bio.QM cs.LG q-bio.GN stat.ML
http://creativecommons.org/licenses/by/4.0/
The tremendous boost in next-generation sequencing and in omics technologies makes it possible to characterize the human gut microbiome (the collective genomes of the microbial community that resides in our gastrointestinal tract). While some of these microorganisms are considered essential regulators of our immune system, others can cause several diseases such as Inflammatory Bowel Disease (IBD), diabetes, and cancer. IBD is a gut-related disorder in which deviations from the healthy gut microbiome are considered to be associated with the disease. Although existing studies attempt to unveil the composition of the gut microbiome in relation to IBD, a comprehensive picture is far from complete. Due to the complexity of metagenomic studies, applications of state-of-the-art machine learning techniques have become popular for addressing a wide range of questions in the field of metagenomic data analysis. In this regard, using an IBD-associated metagenomics dataset, this study utilizes both supervised and unsupervised machine learning algorithms i) to generate a classification model that aids IBD diagnosis, ii) to discover IBD-associated biomarkers, and iii) to find subgroups of IBD patients using k-means and hierarchical clustering. To deal with the high dimensionality of features, we applied robust feature selection algorithms such as Conditional Mutual Information Maximization (CMIM), Fast Correlation Based Filter (FCBF), minimum redundancy maximum relevance (mRMR), and Extreme Gradient Boosting (XGBoost). In our experiments with 10-fold cross-validation, XGBoost had a considerable effect in terms of minimizing the microbiota used for the diagnosis of IBD, thus reducing cost and time. We observed that, compared to single classifiers, ensemble methods such as kNN and logitboost resulted in better performance measures for the classification of IBD.
[ { "created": "Wed, 8 Jan 2020 13:17:26 GMT", "version": "v1" } ]
2020-01-10
[ [ "Hacilar", "Hilal", "" ], [ "Nalbantoglu", "O. Ufuk", "" ], [ "Aran", "Oya", "" ], [ "Bakir-Gungor", "Burcu", "" ] ]
The tremendous boost in next-generation sequencing and in omics technologies makes it possible to characterize the human gut microbiome (the collective genomes of the microbial community that resides in our gastrointestinal tract). While some of these microorganisms are considered essential regulators of our immune system, others can cause several diseases such as Inflammatory Bowel Disease (IBD), diabetes, and cancer. IBD is a gut-related disorder in which deviations from the healthy gut microbiome are considered to be associated with the disease. Although existing studies attempt to unveil the composition of the gut microbiome in relation to IBD, a comprehensive picture is far from complete. Due to the complexity of metagenomic studies, applications of state-of-the-art machine learning techniques have become popular for addressing a wide range of questions in the field of metagenomic data analysis. In this regard, using an IBD-associated metagenomics dataset, this study utilizes both supervised and unsupervised machine learning algorithms i) to generate a classification model that aids IBD diagnosis, ii) to discover IBD-associated biomarkers, and iii) to find subgroups of IBD patients using k-means and hierarchical clustering. To deal with the high dimensionality of features, we applied robust feature selection algorithms such as Conditional Mutual Information Maximization (CMIM), Fast Correlation Based Filter (FCBF), minimum redundancy maximum relevance (mRMR), and Extreme Gradient Boosting (XGBoost). In our experiments with 10-fold cross-validation, XGBoost had a considerable effect in terms of minimizing the microbiota used for the diagnosis of IBD, thus reducing cost and time. We observed that, compared to single classifiers, ensemble methods such as kNN and logitboost resulted in better performance measures for the classification of IBD.
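The overall workflow (filter-style feature selection followed by boosted-tree classification with 10-fold cross-validation) can be sketched as below. Mutual-information filtering stands in for CMIM/mRMR, which are not available in scikit-learn, the data are synthetic, and the hyperparameters are placeholders rather than the study's settings.

# Sketch: filter-style feature selection followed by gradient-boosted classification,
# on synthetic data standing in for the "samples x taxa" abundance matrix (not the actual IBD cohort).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Abundance-like matrix with a handful of informative features.
X, y = make_classification(n_samples=200, n_features=500, n_informative=15, random_state=0)

model = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=50),   # keep the 50 most informative taxa
    XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss"),
)
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print("mean AUC over 10 folds:", round(scores.mean(), 3))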
1312.7331
Ziv Williams
Ziv M Williams
Trans-generational effect of trained aversive and appetitive experiences in Drosophila
11 pages, 3 figures
null
null
null
q-bio.NC q-bio.PE
http://creativecommons.org/licenses/by/3.0/
Associative learning allows animals to rapidly adapt to changes in the environment. Whether and what aspects of such acquired traits may be transmittable across generations remains unclear. Using prolonged olfactory training and subsequent two-forced choice testing in Drosophila melanogaster, it is observed that certain aspects of learned behavior were transmitted from parents to offspring. Offspring of parents exposed to distinct odors during both aversive and appetitive conditioning displayed a heightened sensitivity to those same odors. The conditioned responses associated with those odors, however, were not transmitted to the offspring as they displayed a constitutive preference to the parent-exposed stimuli irrespective of whether they were associated with aversive or appetitive training. Moreover, the degree to which the offspring preferred the conditioned stimuli markedly varied from odor-to-odor. These findings suggest that heightened sensitivities to certain salient stimuli in the environment, but not their associated conditioned behaviors, may be transmittable from parents to offspring. Such trans-generational adaptations may influence animal traits over short evolutionary time-scales.
[ { "created": "Fri, 27 Dec 2013 20:22:23 GMT", "version": "v1" } ]
2013-12-30
[ [ "Williams", "Ziv M", "" ] ]
Associative learning allows animals to rapidly adapt to changes in the environment. Whether and what aspects of such acquired traits may be transmittable across generations remains unclear. Using prolonged olfactory training and subsequent two-forced choice testing in Drosophila melanogaster, it is observed that certain aspects of learned behavior were transmitted from parents to offspring. Offspring of parents exposed to distinct odors during both aversive and appetitive conditioning displayed a heightened sensitivity to those same odors. The conditioned responses associated with those odors, however, were not transmitted to the offspring as they displayed a constitutive preference to the parent-exposed stimuli irrespective of whether they were associated with aversive or appetitive training. Moreover, the degree to which the offspring preferred the conditioned stimuli markedly varied from odor-to-odor. These findings suggest that heightened sensitivities to certain salient stimuli in the environment, but not their associated conditioned behaviors, may be transmittable from parents to offspring. Such trans-generational adaptations may influence animal traits over short evolutionary time-scales.
1404.5441
Sacha S. J. Laurent
Sacha Laurent and Marc Robinson-Rechavi and Nicolas Salamin
Detecting patterns of species diversification in the presence of both rate shifts and mass extinctions
34 pages, 11 figures
BMC Evolutionary Biology 2015 15:157
10.1186/s12862-015-0432-z
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent methodological advances are enabling better examination of speciation and extinction processes and patterns. A major open question is the origin of the large discrepancies in species number between groups of the same age. Existing frameworks to model this diversity either focus on changes between lineages, neglecting global effects such as mass extinctions, or focus on changes over time, which would affect all lineages. Yet it seems probable that both lineage differences and mass extinctions affect the same groups. Here we used simulations to test the performance of two widely used methods under complex scenarios. We report good performance, although with a tendency to over-predict events when the complexity of the scenario increases. Overall, we find that lineage shifts are better detected than mass extinctions. This work has significance for assessing the methods currently used for estimating changes in diversification using phylogenies and for developing new tests.
[ { "created": "Tue, 22 Apr 2014 09:55:12 GMT", "version": "v1" }, { "created": "Mon, 17 Nov 2014 17:13:46 GMT", "version": "v2" }, { "created": "Mon, 31 Aug 2015 12:31:30 GMT", "version": "v3" } ]
2015-09-01
[ [ "Laurent", "Sacha", "" ], [ "Robinson-Rechavi", "Marc", "" ], [ "Salamin", "Nicolas", "" ] ]
Recent methodological advances are enabling better examination of speciation and extinction processes and patterns. A major open question is the origin of the large discrepancies in species number between groups of the same age. Existing frameworks to model this diversity either focus on changes between lineages, neglecting global effects such as mass extinctions, or focus on changes over time, which would affect all lineages. Yet it seems probable that both lineage differences and mass extinctions affect the same groups. Here we used simulations to test the performance of two widely used methods under complex scenarios. We report good performance, although with a tendency to over-predict events when the complexity of the scenario increases. Overall, we find that lineage shifts are better detected than mass extinctions. This work has significance for assessing the methods currently used for estimating changes in diversification using phylogenies and for developing new tests.
1804.00969
Teodoro Dannemann
Teodoro Dannemann, Denis Boyer, Octavio Miramontes
L\'evy flight movements prevent extinctions and maximize population abundances in fragile Lotka Volterra systems
null
PNAS. 201719889, 2018
10.1073/pnas.1719889115
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-scale mobility is ubiquitous in nature and has become instrumental for understanding and modeling animal foraging behavior. However, the impact of individual movements on the long-term stability of populations remains largely unexplored. We analyze deterministic and stochastic Lotka Volterra systems, where mobile predators consume scarce resources (prey) confined in patches. In fragile systems (that is, those unfavorable to species coexistence), the predator species has a maximized abundance and is resilient to degraded prey conditions when individual mobility is multiple scaled. Within the L\'evy flight model, highly superdiffusive foragers rarely encounter prey patches and go extinct, whereas normally diffusing foragers tend to proliferate within patches, causing extinctions by overexploitation. L\'evy flights of intermediate index allow a sustainable balance between patch exploitation and regeneration over wide ranges of demographic rates. Our analytical and simulated results can explain field observations and suggest that scale-free random movements are an important mechanism by which entire populations adapt to scarcity in fragmented ecosystems.
[ { "created": "Fri, 30 Mar 2018 03:37:22 GMT", "version": "v1" } ]
2018-04-04
[ [ "Dannemann", "Teodoro", "" ], [ "Boyer", "Denis", "" ], [ "Miramontes", "Octavio", "" ] ]
Multiple-scale mobility is ubiquitous in nature and has become instrumental for understanding and modeling animal foraging behavior. However, the impact of individual movements on the long-term stability of populations remains largely unexplored. We analyze deterministic and stochastic Lotka Volterra systems, where mobile predators consume scarce resources (prey) confined in patches. In fragile systems (that is, those unfavorable to species coexistence), the predator species has a maximized abundance and is resilient to degraded prey conditions when individual mobility is multiple scaled. Within the L\'evy flight model, highly superdiffusive foragers rarely encounter prey patches and go extinct, whereas normally diffusing foragers tend to proliferate within patches, causing extinctions by overexploitation. L\'evy flights of intermediate index allow a sustainable balance between patch exploitation and regeneration over wide ranges of demographic rates. Our analytical and simulated results can explain field observations and suggest that scale-free random movements are an important mechanism by which entire populations adapt to scarcity in fragmented ecosystems.
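The Lévy-flight ingredient can be illustrated by drawing power-law step lengths with inverse-transform sampling; the exponent convention and cutoff are assumptions for illustration, and the coupling of these movements to the Lotka-Volterra dynamics is not reproduced here.

# Sketch: sample Levy-flight step lengths with a power-law tail p(l) ~ l^(-mu) via inverse-transform sampling.
import numpy as np

def levy_steps(n, mu, l_min=1.0, rng=None):
    """Step lengths with density p(l) proportional to l^(-mu) for l >= l_min (requires mu > 1)."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

rng = np.random.default_rng(0)
for mu in (1.5, 2.0, 3.0):   # small mu: nearly ballistic; larger mu: closer to Brownian-like motion
    steps = levy_steps(100_000, mu, rng=rng)
    print(f"mu = {mu}: mean step = {steps.mean():.1f}, max step = {steps.max():.0f}")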
2101.11656
Sayan Ghosal
Sayan Ghosal, Qiang Chen, Giulio Pergola, Aaron L. Goldman, William Ulrich, Karen F. Berman, Giuseppe Blasi, Leonardo Fazio, Antonio Rampino, Alessandro Bertolino, Daniel R. Weinberger, Venkata S. Mattay, and Archana Venkataraman
G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification
null
null
null
null
q-bio.QM cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers. Our model consists of an encoder, a decoder and a classifier. The encoder learns a non-linear subspace shared between the input data modalities. The classifier and the decoder act as regularizers to ensure that the low-dimensional encoding captures predictive differences between patients and controls. We use a learnable dropout layer to extract interpretable biomarkers from the data, and our unique training strategy can easily accommodate missing data modalities across subjects. We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data. Using 10-fold cross validation, we demonstrate that our model achieves better classification accuracy than baseline methods, and that this performance generalizes to a second dataset collected at a different site. In an exploratory analysis we further show that the biomarkers identified by our model are closely associated with the well-documented deficits in schizophrenia.
[ { "created": "Wed, 27 Jan 2021 19:28:04 GMT", "version": "v1" } ]
2021-01-29
[ [ "Ghosal", "Sayan", "" ], [ "Chen", "Qiang", "" ], [ "Pergola", "Giulio", "" ], [ "Goldman", "Aaron L.", "" ], [ "Ulrich", "William", "" ], [ "Berman", "Karen F.", "" ], [ "Blasi", "Giuseppe", "" ], [ "Fazio", "Leonardo", "" ], [ "Rampino", "Antonio", "" ], [ "Bertolino", "Alessandro", "" ], [ "Weinberger", "Daniel R.", "" ], [ "Mattay", "Venkata S.", "" ], [ "Venkataraman", "Archana", "" ] ]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers. Our model consists of an encoder, a decoder and a classifier. The encoder learns a non-linear subspace shared between the input data modalities. The classifier and the decoder act as regularizers to ensure that the low-dimensional encoding captures predictive differences between patients and controls. We use a learnable dropout layer to extract interpretable biomarkers from the data, and our unique training strategy can easily accommodate missing data modalities across subjects. We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data. Using 10-fold cross validation, we demonstrate that our model achieves better classification accuracy than baseline methods, and that this performance generalizes to a second dataset collected at a different site. In an exploratory analysis we further show that the biomarkers identified by our model are closely associated with the well-documented deficits in schizophrenia.
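A minimal PyTorch sketch of the encoder-decoder-classifier pattern described above (a shared low-dimensional encoding, with reconstruction and classification losses acting as regularizers). The layer sizes, the plain concatenation of the two modalities, and the standard dropout layer are simplifying assumptions; the paper's learnable dropout and missing-modality handling are not reproduced here.

```python
import torch
import torch.nn as nn

class MultimodalCoder(nn.Module):
    def __init__(self, d_img=100, d_snp=200, d_latent=16):
        super().__init__()
        d_in = d_img + d_snp
        self.encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                     nn.Dropout(0.3), nn.Linear(64, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(),
                                     nn.Linear(64, d_in))
        self.classifier = nn.Linear(d_latent, 2)   # patient vs. control logits

    def forward(self, x_img, x_snp):
        x = torch.cat([x_img, x_snp], dim=1)       # simple modality fusion
        z = self.encoder(x)                        # shared low-dimensional code
        return self.decoder(z), self.classifier(z)

model = MultimodalCoder()
x_img, x_snp = torch.randn(8, 100), torch.randn(8, 200)   # placeholder data
y = torch.randint(0, 2, (8,))
recon, logits = model(x_img, x_snp)
loss = nn.functional.mse_loss(recon, torch.cat([x_img, x_snp], dim=1)) \
       + nn.functional.cross_entropy(logits, y)    # reconstruction + diagnosis loss
loss.backward()
```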
2312.11700
Seongwon Kim
Seongwon Kim, Parisa Mollaei, Amir Barati Farimani and Anne Skaja Robinson
Characterization of Phosphorylated Tau-Microtubule complex with Molecular Dynamics (MD) simulation
27pages, 12 figure
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Alzheimer's Disease (AD), a neurodegenerative disorder, is reported as one of the most severe health and socioeconomic problems in current public health. Tau proteins are assumed to be a crucial driving factor of AD that detach from microtubules (MT) and accumulate as neurotoxic aggregates in the brains of AD patients. Extensive experimental and computational research has observed that phosphorylation at specific tau residues enhances aggregation, but the exact mechanisms underlying this phenomenon remain unclear. In this study, we employed molecular dynamics (MD) simulations on a pseudo-phosphorylated tau-MT complex (residues 199-312), incorporating structural data from recent cryo-electron microscopy studies. Simulation results have revealed altered tau conformations after applying pseudo-phosphorylation. Additionally, root-mean-square deviation (RMSD) analyses and dimensionality reduction of dihedral angles revealed key residues responsible for these conformational shifts.
[ { "created": "Mon, 18 Dec 2023 20:56:13 GMT", "version": "v1" } ]
2023-12-20
[ [ "Kim", "Seongwon", "" ], [ "Mollaei", "Parisa", "" ], [ "Farimani", "Amir Barati", "" ], [ "Robinson", "Anne Skaja", "" ] ]
Alzheimer's Disease (AD), a neurodegenerative disorder, is reported as one of the most severe health and socioeconomic problems in current public health. Tau proteins are assumed to be a crucial driving factor of AD that detach from microtubules (MT) and accumulate as neurotoxic aggregates in the brains of AD patients. Extensive experimental and computational research has observed that phosphorylation at specific tau residues enhances aggregation, but the exact mechanisms underlying this phenomenon remain unclear. In this study, we employed molecular dynamics (MD) simulations on a pseudo-phosphorylated tau-MT complex (residues 199-312), incorporating structural data from recent cryo-electron microscopy studies. Simulation results have revealed altered tau conformations after applying pseudo-phosphorylation. Additionally, root-mean-square deviation (RMSD) analyses and dimensionality reduction of dihedral angles revealed key residues responsible for these conformational shifts.
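For readers unfamiliar with the RMSD analysis mentioned above, the short sketch below computes the root-mean-square deviation between two already-superposed coordinate sets with NumPy; the coordinates are synthetic placeholders and no trajectory alignment is performed.

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """RMSD between two (n_atoms, 3) coordinate arrays assumed to be pre-aligned."""
    diff = coords_a - coords_b
    return np.sqrt((diff * diff).sum(axis=1).mean())

ref = np.random.rand(100, 3) * 10.0                     # placeholder reference frame
frame = ref + np.random.normal(0.0, 0.2, ref.shape)     # perturbed "trajectory" frame
print(f"RMSD = {rmsd(ref, frame):.3f} Å")
```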
2311.10403
Junbo Jia
Junbo Jia and Luonan Chen
Velde: constructing cell potential landscapes by RNA velocity vector field decomposition
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Waddington landscape serves as a metaphor illustrating the developmental process of cells, likening it to a small ball rolling down various trajectories into valleys. Constructing an epigenetic landscape of this nature aids in visualizing and gaining insights into cell differentiation. Development encompasses intricate processes involving both cell differentiation and cell cycles. However, current landscape methods solely focus on constructing a potential landscape for cell differentiation, neglecting the accompanying cell cycle. This paper introduces a novel method that simultaneously constructs two types of potential landscapes using single-cell RNA sequencing data. Specifically, it presents the natural Helmholtz-Hodge decomposition (nHHD) of a continuous vector field within a bounded domain in n-dimensional Euclidean space. This decomposition uniquely breaks down the vector field into a gradient field, a rotation field, and a harmonic field. Utilizing this approach, the RNA velocity vector field is separated into a curl-free component representing cell differentiation and a curl component representing the cell cycle. By calculating the corresponding potential functions, potential landscapes for both cell differentiation and the cell cycle are obtained. Finally, the efficacy of this method is demonstrated through its application to synthetic and real datasets.
[ { "created": "Fri, 17 Nov 2023 09:08:54 GMT", "version": "v1" } ]
2023-11-20
[ [ "Jia", "Junbo", "" ], [ "Chen", "Luonan", "" ] ]
The Waddington landscape serves as a metaphor illustrating the developmental process of cells, likening it to a small ball rolling down various trajectories into valleys. Constructing an epigenetic landscape of this nature aids in visualizing and gaining insights into cell differentiation. Development encompasses intricate processes involving both cell differentiation and cell cycles. However, current landscape methods solely focus on constructing a potential landscape for cell differentiation, neglecting the accompanying cell cycle. This paper introduces a novel method that simultaneously constructs two types of potential landscapes using single-cell RNA sequencing data. Specifically, it presents the natural Helmholtz-Hodge decomposition (nHHD) of a continuous vector field within a bounded domain in n-dimensional Euclidean space. This decomposition uniquely breaks down the vector field into a gradient field, a rotation field, and a harmonic field. Utilizing this approach, the RNA velocity vector field is separated into a curl-free component representing cell differentiation and a curl component representing the cell cycle. By calculating the corresponding potential functions, potential landscapes for both cell differentiation and the cell cycle are obtained. Finally, the efficacy of this method is demonstrated through its application to synthetic and real datasets.
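A minimal sketch of the curl-free/divergence-free split on a regular 2-D grid, using an FFT-based Poisson solve under periodic boundary conditions. Real RNA-velocity fields live on irregular point clouds, so this only illustrates the decomposition principle, not the paper's nHHD on a bounded domain.

```python
import numpy as np

def hodge_split_2d(vx, vy):
    """Split (vx, vy) on a periodic grid into a gradient part and the remainder."""
    ny, nx = vx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)
    ky = 2j * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    div_hat = KX * np.fft.fft2(vx) + KY * np.fft.fft2(vy)   # divergence in Fourier space
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                                          # avoid division by zero
    phi_hat = div_hat / k2                                  # solve Laplacian(phi) = div(v)
    phi_hat[0, 0] = 0.0
    gx = np.real(np.fft.ifft2(KX * phi_hat))                # gradient (curl-free) component
    gy = np.real(np.fft.ifft2(KY * phi_hat))
    return (gx, gy), (vx - gx, vy - gy)                     # (curl-free, divergence-free rest)
```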
2308.07465
Nikolai Slavov
Andrew Leduc, Hannah Harens, and Nikolai Slavov
Modeling and interpretation of single-cell proteogenomic data
null
null
null
null
q-bio.GN q-bio.BM q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biological functions stem from coordinated interactions among proteins, nucleic acids and small molecules. Mass spectrometry technologies for reliable, high-throughput single-cell proteomics will add a new modality to genomics and enable data-driven modeling of the molecular mechanisms coordinating proteins and nucleic acids at single-cell resolution. This promising potential requires estimating the reliability of measurements and computational analysis so that models can distinguish biological regulation from technical artifacts. We highlight different measurement modes that can support single-cell proteogenomic analysis and how to estimate their reliability. We then discuss approaches for developing both abstract and mechanistic models that aim to biologically interpret the measured differences across modalities, including specific applications to directed stem cell differentiation and to inferring protein interactions in cancer cells from the buffering of DNA copy-number variations. Single-cell proteogenomic data will support mechanistic models of direct molecular interactions that will provide generalizable and predictive representations of biological systems.
[ { "created": "Mon, 14 Aug 2023 21:25:56 GMT", "version": "v1" }, { "created": "Sat, 4 Nov 2023 20:55:46 GMT", "version": "v2" } ]
2023-11-07
[ [ "Leduc", "Andrew", "" ], [ "Harens", "Hannah", "" ], [ "Slavov", "Nikolai", "" ] ]
Biological functions stem from coordinated interactions among proteins, nucleic acids and small molecules. Mass spectrometry technologies for reliable, high-throughput single-cell proteomics will add a new modality to genomics and enable data-driven modeling of the molecular mechanisms coordinating proteins and nucleic acids at single-cell resolution. This promising potential requires estimating the reliability of measurements and computational analysis so that models can distinguish biological regulation from technical artifacts. We highlight different measurement modes that can support single-cell proteogenomic analysis and how to estimate their reliability. We then discuss approaches for developing both abstract and mechanistic models that aim to biologically interpret the measured differences across modalities, including specific applications to directed stem cell differentiation and to inferring protein interactions in cancer cells from the buffering of DNA copy-number variations. Single-cell proteogenomic data will support mechanistic models of direct molecular interactions that will provide generalizable and predictive representations of biological systems.
2004.12836
Christina Bohk-Ewald
Christina Bohk-Ewald and Christian Dudel and Mikko Myrskyl\"a
A demographic scaling model for estimating the total number of COVID-19 infections
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how widely COVID-19 has spread is critical for examining the pandemic's progression. Despite efforts to carefully monitor the pandemic, the number of confirmed cases may underestimate the total number of infections. We introduce a demographic scaling model to estimate COVID-19 infections using a broadly applicable approach that is based on minimal data requirements: COVID-19 related deaths, infection fatality rates (IFRs), and life tables. As many countries lack reliable estimates of age-specific IFRs, we scale IFRs between countries using remaining life expectancy as a marker to account for differences in age structures, health conditions, and medical services. Across the 10 countries with the most COVID-19 deaths as of May 13, 2020, the number of infections is estimated to be four [95% prediction interval: 2-11] times higher than the number of confirmed cases. Cross-country variation is high. The estimated number of infections is 1.4 million (six times the number of confirmed cases) for Italy; 3.1 million (2.2 times the number of confirmed cases) for the U.S.; and 1.8 times the number of confirmed cases for Germany, where testing has been comparatively extensive. Our prevalence estimates, however, are markedly lower than most others based on local seroprevalence studies. We introduce formulas for quantifying the bias that is required in our data on deaths in order to reproduce estimates published elsewhere. This bias analysis shows that either COVID-19 deaths are severely underestimated, by a factor of two or more; or alternatively, the seroprevalence based results are overestimates and not representative for the total population.
[ { "created": "Fri, 24 Apr 2020 17:26:50 GMT", "version": "v1" }, { "created": "Tue, 26 May 2020 08:46:18 GMT", "version": "v2" } ]
2020-05-27
[ [ "Bohk-Ewald", "Christina", "" ], [ "Dudel", "Christian", "" ], [ "Myrskylä", "Mikko", "" ] ]
Understanding how widely COVID-19 has spread is critical for examining the pandemic's progression. Despite efforts to carefully monitor the pandemic, the number of confirmed cases may underestimate the total number of infections. We introduce a demographic scaling model to estimate COVID-19 infections using a broadly applicable approach that is based on minimal data requirements: COVID-19 related deaths, infection fatality rates (IFRs), and life tables. As many countries lack reliable estimates of age-specific IFRs, we scale IFRs between countries using remaining life expectancy as a marker to account for differences in age structures, health conditions, and medical services. Across the 10 countries with the most COVID-19 deaths as of May 13, 2020, the number of infections is estimated to be four [95% prediction interval: 2-11] times higher than the number of confirmed cases. Cross-country variation is high. The estimated number of infections is 1.4 million (six times the number of confirmed cases) for Italy; 3.1 million (2.2 times the number of confirmed cases) for the U.S.; and 1.8 times the number of confirmed cases for Germany, where testing has been comparatively extensive. Our prevalence estimates, however, are markedly lower than most others based on local seroprevalence studies. We introduce formulas for quantifying the bias that is required in our data on deaths in order to reproduce estimates published elsewhere. This bias analysis shows that either COVID-19 deaths are severely underestimated, by a factor of two or more; or alternatively, the seroprevalence based results are overestimates and not representative for the total population.
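The core scaling arithmetic described above can be summarized in a few lines: infections are estimated as deaths divided by an infection fatality rate, with the reference IFR shifted between countries according to remaining life expectancy. The numbers and the crude linear scaling below are placeholders chosen only to show the mechanics; the paper maps age-specific IFRs via life tables.

```python
# Hypothetical inputs (not the study's values).
deaths = 30_000                 # cumulative COVID-19 deaths in the target country
ifr_reference = 0.010           # age-aggregated IFR in the reference country
e_ref, e_target = 20.0, 24.0    # remaining life expectancy at the typical age at death

# One crude way to scale: populations with higher remaining life expectancy
# are assigned a lower effective IFR.
ifr_scaled = ifr_reference * (e_ref / e_target)

estimated_infections = deaths / ifr_scaled
print(f"Scaled IFR: {ifr_scaled:.4f}")
print(f"Estimated total infections: {estimated_infections:,.0f}")
```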
1910.06113
Thomas Bolton
Thomas A. W. Bolton, Constantin Tuleasca, Gwladys Rey, Diana Wotruba, Julian Gaviria, Herberto Dhanis, Eva Blondiaux, Baptise Gauthier, Lukasz Smigielski, Dimitri Van De Ville
TbCAPs: A ToolBox for Co-Activation Pattern Analysis
15 pages, 4 figures, 1 table
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional magnetic resonance imaging provides rich spatio-temporal data of human brain activity during task and rest. Many recent efforts have focussed on characterising dynamics of brain activity. One notable instance is co-activation pattern (CAP) analysis, a frame-wise analytical approach that disentangles the different functional brain networks interacting with a user-defined seed region. While promising applications in various clinical settings have been demonstrated, there is not yet any centralised, publicly accessible resource to facilitate the deployment of the technique. Here, we release a working version of TbCAPs, a new toolbox for CAP analysis, which includes all steps of the analytical pipeline, introduces new methodological developments that build on already existing concepts, and enables a facilitated inspection of CAPs and resulting metrics of brain dynamics. The toolbox is available on a public academic repository (https://c4science.ch/source/CAP_Toolbox.git). In addition, to illustrate the feasibility and usefulness of our pipeline, we describe an application to the study of human cognition. CAPs are constructed from resting-state fMRI using as seed the right dorsolateral prefrontal cortex, and, in a separate sample, we successfully predict a behavioural measure of continuous attentional performance from the metrics of CAP dynamics (R=0.59).
[ { "created": "Mon, 14 Oct 2019 12:53:52 GMT", "version": "v1" } ]
2019-10-15
[ [ "Bolton", "Thomas A. W.", "" ], [ "Tuleasca", "Constantin", "" ], [ "Rey", "Gwladys", "" ], [ "Wotruba", "Diana", "" ], [ "Gaviria", "Julian", "" ], [ "Dhanis", "Herberto", "" ], [ "Blondiaux", "Eva", "" ], [ "Gauthier", "Baptise", "" ], [ "Smigielski", "Lukasz", "" ], [ "Van De Ville", "Dimitri", "" ] ]
Functional magnetic resonance imaging provides rich spatio-temporal data of human brain activity during task and rest. Many recent efforts have focussed on characterising dynamics of brain activity. One notable instance is co-activation pattern (CAP) analysis, a frame-wise analytical approach that disentangles the different functional brain networks interacting with a user-defined seed region. While promising applications in various clinical settings have been demonstrated, there is not yet any centralised, publicly accessible resource to facilitate the deployment of the technique. Here, we release a working version of TbCAPs, a new toolbox for CAP analysis, which includes all steps of the analytical pipeline, introduces new methodological developments that build on already existing concepts, and enables a facilitated inspection of CAPs and resulting metrics of brain dynamics. The toolbox is available on a public academic repository (https://c4science.ch/source/CAP_Toolbox.git). In addition, to illustrate the feasibility and usefulness of our pipeline, we describe an application to the study of human cognition. CAPs are constructed from resting-state fMRI using as seed the right dorsolateral prefrontal cortex, and, in a separate sample, we successfully predict a behavioural measure of continuous attentional performance from the metrics of CAP dynamics (R=0.59).
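The core of a CAP analysis can be illustrated in a few lines: retain only the fMRI frames in which the seed time course exceeds a threshold, then cluster those frames into a small set of spatial patterns. The array shapes, the z-scoring, and the choice of k below are illustrative assumptions, not the toolbox's defaults.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
data = rng.standard_normal((600, 5000))        # frames x voxels (placeholder data)
seed = data[:, :50].mean(axis=1)               # hypothetical seed time course

# z-score the seed signal and keep supra-threshold frames only
seed_z = (seed - seed.mean()) / seed.std()
active = data[seed_z > 1.0]

# cluster the retained frames into k co-activation patterns
k = 4
caps = KMeans(n_clusters=k, n_init=10, random_state=0).fit(active)
patterns = caps.cluster_centers_                 # k x voxels CAP maps
counts = np.bincount(caps.labels_, minlength=k)  # simple occurrence metric per CAP
```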
2004.05895
Audrey B\"urki
A. B\"urki, S. Elbuy, S. Madec, S. Vasishth
What did we learn from forty years of research on semantic interference? A Bayesian metaanalysis
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When participants in an experiment have to name pictures while ignoring distractor words superimposed on the picture or presented auditorily (i.e., picture-word interference paradigm), they take more time when the word to be named (or target) and distractor words are from the same semantic category (e.g., cat-dog). This experimental effect is known as the semantic interference effect, and is probably one of the most studied in the language production literature. The functional origin of the effect and the exact conditions in which it occurs are however still debated. Since Lupker reported the effect in the first response time experiment about 40 years ago, more than 300 similar experiments have been conducted. The semantic interference effect was replicated in many experiments, but several studies also reported the absence of an effect in a subset of experimental conditions. The aim of the present study is to provide a comprehensive theoretical review of the existing evidence to date and several Bayesian meta-analyses and meta-regressions to determine the size of the effect and explore the experimental conditions in which the effect surfaces. The results are discussed in the light of current debates about the functional origin of the semantic interference effect and its implications for our understanding of the language production system.
[ { "created": "Thu, 2 Apr 2020 15:07:26 GMT", "version": "v1" }, { "created": "Sun, 26 Apr 2020 09:41:47 GMT", "version": "v2" } ]
2020-04-28
[ [ "Bürki", "A.", "" ], [ "Elbuy", "S.", "" ], [ "Madec", "S.", "" ], [ "Vasishth", "S.", "" ] ]
When participants in an experiment have to name pictures while ignoring distractor words superimposed on the picture or presented auditorily (i.e., picture-word interference paradigm), they take more time when the word to be named (or target) and distractor words are from the same semantic category (e.g., cat-dog). This experimental effect is known as the semantic interference effect, and is probably one of the most studied in the language production literature. The functional origin of the effect and the exact conditions in which it occurs are however still debated. Since Lupker reported the effect in the first response time experiment about 40 years ago, more than 300 similar experiments have been conducted. The semantic interference effect was replicated in many experiments, but several studies also reported the absence of an effect in a subset of experimental conditions. The aim of the present study is to provide a comprehensive theoretical review of the existing evidence to date and several Bayesian meta-analyses and meta-regressions to determine the size of the effect and explore the experimental conditions in which the effect surfaces. The results are discussed in the light of current debates about the functional origin of the semantic interference effect and its implications for our understanding of the language production system.
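As a simple frequentist counterpart to the Bayesian meta-analysis described above, the sketch below pools hypothetical per-experiment semantic-interference effects (in ms) with a DerSimonian-Laird random-effects estimator; the effect sizes and variances are invented for illustration only.

```python
import numpy as np

# Hypothetical per-experiment effects (ms) and their sampling variances.
y = np.array([22.0, 15.0, 30.0, 8.0, 18.0])
v = np.array([25.0, 16.0, 36.0, 20.0, 30.0])

w = 1.0 / v                                   # fixed-effect weights
theta_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (y - theta_fe) ** 2)           # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)       # between-study variance (DL estimator)

w_re = 1.0 / (v + tau2)                       # random-effects weights
theta_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled effect: {theta_re:.1f} ms (SE {se_re:.1f}), tau^2 = {tau2:.1f}")
```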
1209.5760
Suzanne Bowen Dr
Suzanne Bowen
Protein function influences frequency of encoded regions containing VNTRs and number of unique interactions
21 pages, 4 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins encoded by genes containing regions of variable number tandem repeats (VNTRs) are known to be polymorphic within species but the influence of their instability in molecular interactions remains unclear. VNTRs are overrepresented in encoding sequence of particular functional groups where their presence could influence protein interactions. Using human consensus coding sequence, this work examines if genomic instability, determined by regions of VNTRs, influences the number of protein interactions. Findings reveal that, in relation to protein function, the frequency of unique interactions in human proteins increases with the number of repeated regions. This supports experimental evidence that repeat expansion may lead to an increase in molecular interactions. Genetic diversity, estimated by Ka/Ks, appeared to decrease as the number of protein-protein interactions increased. Additionally, G+C and CpG content were negatively correlated with increasing occurrence of VNTRs. This may indicate that nucleotide composition along with selective processes can increase genomic stability and thereby restrict the expansion of repeated regions. Proteins involved in acetylation are associated with a high number of repeated regions and interactions but a low G+C and CpG content. In contrast, less interactive membrane proteins contain a lower number of repeated regions but higher levels of G+C and CpGs. This work provides further evidence that VNTRs may provide the genetic variability to generate unique interactions between proteins.
[ { "created": "Tue, 25 Sep 2012 20:32:43 GMT", "version": "v1" }, { "created": "Thu, 27 Sep 2012 14:05:17 GMT", "version": "v2" } ]
2012-09-28
[ [ "Bowen", "Suzanne", "" ] ]
Proteins encoded by genes containing regions of variable number tandem repeats (VNTRs) are known to be polymorphic within species but the influence of their instability in molecular interactions remains unclear. VNTRs are overrepresented in encoding sequence of particular functional groups where their presence could influence protein interactions. Using human consensus coding sequence, this work examines if genomic instability, determined by regions of VNTRs, influences the number of protein interactions. Findings reveal that, in relation to protein function, the frequency of unique interactions in human proteins increases with the number of repeated regions. This supports experimental evidence that repeat expansion may lead to an increase in molecular interactions. Genetic diversity, estimated by Ka/Ks, appeared to decrease as the number of protein-protein interactions increased. Additionally, G+C and CpG content were negatively correlated with increasing occurrence of VNTRs. This may indicate that nucleotide composition along with selective processes can increase genomic stability and thereby restrict the expansion of repeated regions. Proteins involved in acetylation are associated with a high number of repeated regions and interactions but a low G+C and CpG content. In contrast, less interactive membrane proteins contain a lower number of repeated regions but higher levels of G+C and CpGs. This work provides further evidence that VNTRs may provide the genetic variability to generate unique interactions between proteins.
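Since the analysis above relates VNTRs to G+C and CpG content, here is a minimal sketch of how those two quantities are typically computed for a coding sequence; the sequence below is a placeholder.

```python
def gc_and_cpg(seq: str):
    """Return (G+C fraction, observed/expected CpG ratio) for a DNA string."""
    seq = seq.upper()
    n = len(seq)
    g, c = seq.count("G"), seq.count("C")
    gc_frac = (g + c) / n
    cpg_obs = seq.count("CG")
    cpg_exp = (g * c) / n if g and c else 0.0    # expected CpG count under independence
    return gc_frac, (cpg_obs / cpg_exp) if cpg_exp else 0.0

print(gc_and_cpg("ATGCGCGATTACGGCGTACGCATG"))    # toy sequence
```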
2306.01935
Vivek N. Prakash
Setareh Gooshvar, Gopika Madhu, Melissa Ruszczyk, and Vivek N. Prakash
Non-bilaterians as Model Systems for Tissue Mechanics
Review paper, Comments/suggestions are welcome
Integrative and Comparative Biology, 2023
10.1093/icb/icad074
null
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
In animals, epithelial tissues are barriers against the external environment, providing protection against biological, chemical, and physical damage. Depending on the animal's physiology and behavior, these tissues encounter different types of mechanical forces and need to provide a suitable adaptive response to ensure success. Therefore, understanding tissue mechanics in different contexts is an important research area. Here, we review recent tissue mechanics discoveries in a few early-divergent non-bilaterian animals -- Trichoplax adhaerens, Hydra vulgaris, and Aurelia aurita. We highlight each animal's simple body plan and biology, and unique, rapid tissue remodeling phenomena that play a crucial role in its physiology. We also discuss the emergent large-scale mechanics that arise from small-scale phenomena. Finally, we emphasize the enormous potential of these non-bilaterian animals to be model systems for further investigation in tissue mechanics.
[ { "created": "Fri, 2 Jun 2023 22:28:16 GMT", "version": "v1" } ]
2023-12-29
[ [ "Gooshvar", "Setareh", "" ], [ "Madhu", "Gopika", "" ], [ "Ruszczyk", "Melissa", "" ], [ "Prakash", "Vivek N.", "" ] ]
In animals, epithelial tissues are barriers against the external environment, providing protection against biological, chemical, and physical damage. Depending on the animal's physiology and behavior, these tissues encounter different types of mechanical forces and need to provide a suitable adaptive response to ensure success. Therefore, understanding tissue mechanics in different contexts is an important research area. Here, we review recent tissue mechanics discoveries in a few early-divergent non-bilaterian animals -- Trichoplax adhaerens, Hydra vulgaris, and Aurelia aurita. We highlight each animal's simple body plan and biology, and unique, rapid tissue remodeling phenomena that play a crucial role in its physiology. We also discuss the emergent large-scale mechanics that arise from small-scale phenomena. Finally, we emphasize the enormous potential of these non-bilaterian animals to be model systems for further investigation in tissue mechanics.
2407.08224
Wenwen Min
Shuailin Xue, Fangfang Zhu, Changmiao Wang and Wenwen Min
stEnTrans: Transformer-based deep learning for spatial transcriptomics enhancement
ISBRA2024, Code: https://github.com/shuailinxue/stEnTrans
null
null
null
q-bio.QM cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spatial location of cells within tissues and organs is crucial for the manifestation of their specific functions. Spatial transcriptomics technology enables comprehensive measurement of the gene expression patterns in tissues while retaining spatial information. However, current popular spatial transcriptomics techniques either have shallow sequencing depth or low resolution. We present stEnTrans, a deep learning method based on the Transformer architecture that provides comprehensive predictions for gene expression in unmeasured or unexpectedly lost areas and enhances gene expression in original and imputed spots. Utilizing a self-supervised learning approach, stEnTrans establishes proxy tasks on gene expression profiles without requiring additional data, mining intrinsic features of the tissues as supervisory information. We evaluate stEnTrans on six datasets and the results indicate superior performance in enhancing spot resolution and predicting gene expression in unmeasured areas compared to other deep learning and traditional interpolation methods. Additionally, our method can also aid the discovery of spatial patterns in spatial transcriptomics and their enrichment in more biologically significant pathways. Our source code is available at https://github.com/shuailinxue/stEnTrans.
[ { "created": "Thu, 11 Jul 2024 06:50:34 GMT", "version": "v1" } ]
2024-07-12
[ [ "Xue", "Shuailin", "" ], [ "Zhu", "Fangfang", "" ], [ "Wang", "Changmiao", "" ], [ "Min", "Wenwen", "" ] ]
The spatial location of cells within tissues and organs is crucial for the manifestation of their specific functions. Spatial transcriptomics technology enables comprehensive measurement of the gene expression patterns in tissues while retaining spatial information. However, current popular spatial transcriptomics techniques either have shallow sequencing depth or low resolution. We present stEnTrans, a deep learning method based on the Transformer architecture that provides comprehensive predictions for gene expression in unmeasured or unexpectedly lost areas and enhances gene expression in original and imputed spots. Utilizing a self-supervised learning approach, stEnTrans establishes proxy tasks on gene expression profiles without requiring additional data, mining intrinsic features of the tissues as supervisory information. We evaluate stEnTrans on six datasets and the results indicate superior performance in enhancing spot resolution and predicting gene expression in unmeasured areas compared to other deep learning and traditional interpolation methods. Additionally, our method can also aid the discovery of spatial patterns in spatial transcriptomics and their enrichment in more biologically significant pathways. Our source code is available at https://github.com/shuailinxue/stEnTrans.
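A minimal sketch of the kind of self-supervised proxy task described above: randomly mask a fraction of spot expression profiles and train a small Transformer encoder to reconstruct them. The dimensions, masking rate, and plain mean-squared-error objective are assumptions for illustration and do not reproduce stEnTrans.

```python
import torch
import torch.nn as nn

n_spots, n_genes, d_model = 64, 128, 64
x = torch.randn(1, n_spots, n_genes)                 # batch x spots x genes (placeholder)

proj_in = nn.Linear(n_genes, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2)
proj_out = nn.Linear(d_model, n_genes)

mask = torch.rand(1, n_spots, 1) < 0.3               # hide ~30% of spots
x_masked = x.masked_fill(mask, 0.0)

pred = proj_out(encoder(proj_in(x_masked)))
loss = ((pred - x) ** 2)[mask.expand_as(x)].mean()   # reconstruct the masked spots only
loss.backward()
```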
1611.08259
Antti Niemi
Alexandr Nasedkin, Jan Davidsson, Antti J. Niemi, Xubiao Peng
Solution X-ray scattering (S/WAXS) and structure formation in protein dynamics
10 figures
Phys. Rev. E 96, 062405 (2017)
10.1103/PhysRevE.96.062405
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose to develop mean field theory in combination with the Glauber algorithm to model and interpret protein dynamics and structure formation in small to wide angle x-ray scattering (S/WAXS) experiments. We develop the methodology by analysing the Engrailed homeodomain protein as an example. We demonstrate how to interpret S/WAXS data with good precision and over an extended temperature range. We explain experimentally observed phenomena in terms of protein phase structure, and we make predictions for future experiments on how the scattering data behave at different ambient temperature values. We conclude that a combination of mean field theory with the Glauber algorithm has the potential to develop into a highly accurate, computationally effective and predictive tool for analysing S/WAXS data. Finally, we compare our results with those obtained previously in an all-atom molecular dynamics simulation.
[ { "created": "Thu, 24 Nov 2016 17:09:22 GMT", "version": "v1" }, { "created": "Mon, 5 Jun 2017 11:22:17 GMT", "version": "v2" } ]
2017-12-20
[ [ "Nasedkin", "Alexandr", "" ], [ "Davidsson", "Jan", "" ], [ "Niemi", "Antti J.", "" ], [ "Peng", "Xubiao", "" ] ]
We propose to develop mean field theory in combination with the Glauber algorithm to model and interpret protein dynamics and structure formation in small to wide angle x-ray scattering (S/WAXS) experiments. We develop the methodology by analysing the Engrailed homeodomain protein as an example. We demonstrate how to interpret S/WAXS data with good precision and over an extended temperature range. We explain experimentally observed phenomena in terms of protein phase structure, and we make predictions for future experiments on how the scattering data behave at different ambient temperature values. We conclude that a combination of mean field theory with the Glauber algorithm has the potential to develop into a highly accurate, computationally effective and predictive tool for analysing S/WAXS data. Finally, we compare our results with those obtained previously in an all-atom molecular dynamics simulation.
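For readers unfamiliar with the Glauber algorithm mentioned above, the sketch below runs Glauber (heat-bath) updates on a small 1-D Ising-type chain; the energy function and all parameters are generic illustrations, not the protein model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def glauber_sweep(spins, beta=1.0, J=1.0):
    """One sweep of Glauber (heat-bath) updates on a periodic Ising chain."""
    n = len(spins)
    for i in rng.permutation(n):
        h = J * (spins[(i - 1) % n] + spins[(i + 1) % n])   # local field from neighbours
        delta_e = 2.0 * spins[i] * h                          # energy cost of flipping spin i
        if rng.random() < 1.0 / (1.0 + np.exp(beta * delta_e)):
            spins[i] *= -1
    return spins

spins = rng.choice([-1, 1], size=50)
for _ in range(200):
    spins = glauber_sweep(spins, beta=0.8)
print("magnetization:", spins.mean())
```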
1206.0889
Gilles Guillot
Gilles Guillot
Detection of correlation between genotypes and environmental variables. A fast computational approach for genomewide studies
To appear in Spatial Statistics
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomic regions (or loci) displaying outstanding correlation with some environmental variables are likely to be under selection and this is the rationale of recent methods of identifying selected loci and retrieving functional information about them. To be efficient, such methods need to be able to disentangle the potential effect of environmental variables from the confounding effect of population history. For the routine analysis of genome-wide datasets, one also needs fast inference and model selection algorithms. We propose a method based on an explicit spatial model which is an instance of a spatial generalized linear mixed model (SGLMM). For inference, we make use of the INLA-SPDE theoretical and computational framework developed by Rue et al. (2009) and Lindgren et al. (2011). The method we propose allows one to quantify the correlation between genotypes and environmental variables. It works for the most common types of genetic markers, obtained either at the individual or at the population level. Analyzing simulated data produced under a geostatistical model then under an explicit model of selection, we show that the method is efficient. We also re-analyze a dataset relative to nineteen pine weevil (Hylobius abietis) populations across Europe. The proposed method also appears as a statistically sound alternative to the Mantel tests for testing the association between genetic and environmental variables.
[ { "created": "Tue, 5 Jun 2012 11:55:13 GMT", "version": "v1" }, { "created": "Mon, 12 Aug 2013 08:11:53 GMT", "version": "v2" } ]
2013-08-13
[ [ "Guillot", "Gilles", "" ] ]
Genomic regions (or loci) displaying outstanding correlation with some environmental variables are likely to be under selection and this is the rationale of recent methods of identifying selected loci and retrieving functional information about them. To be efficient, such methods need to be able to disentangle the potential effect of environmental variables from the confounding effect of population history. For the routine analysis of genome-wide datasets, one also needs fast inference and model selection algorithms. We propose a method based on an explicit spatial model which is an instance of a spatial generalized linear mixed model (SGLMM). For inference, we make use of the INLA-SPDE theoretical and computational framework developed by Rue et al. (2009) and Lindgren et al. (2011). The method we propose allows one to quantify the correlation between genotypes and environmental variables. It works for the most common types of genetic markers, obtained either at the individual or at the population level. Analyzing simulated data produced under a geostatistical model then under an explicit model of selection, we show that the method is efficient. We also re-analyze a dataset relative to nineteen pine weevil (Hylobius abietis) populations across Europe. The proposed method also appears as a statistically sound alternative to the Mantel tests for testing the association between genetic and environmental variables.
2103.16606
Jia Li
Jia Li, Ilias Rentzeperis, Cees van Leeuwen
Functional and spatial rewiring jointly generate convergent-divergent units in self-organizing networks
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Self-organization through adaptive rewiring of random neural networks generates brain-like topologies comprising modular small-world structures with rich club effects, merely as the product of optimizing the network topology. In the nervous system, spatial organization is optimized no less by rewiring, through minimizing wiring distance and maximizing spatially aligned wiring layouts. We show that such spatial organization principles interact constructively with adaptive rewiring, contributing to establishing the networks' connectedness and modular structures. We use an evolving neural network model with weighted and directed connections, in which neural traffic flow is based on consensus and advection dynamics, to show that wiring cost minimization supports adaptive rewiring in creating convergent-divergent unit structures. Convergent-divergent units consist of a convergent input-hub, connected to a divergent output-hub via subnetworks of intermediate nodes, which may function as the computational core of the unit. The prominence of minimizing wiring distance in the dynamic evolution of the network determines the extent to which the core is encapsulated from the rest of the network, i.e., the context-sensitivity of its computations. This corresponds to the central role convergent-divergent units play in establishing context-sensitivity in neuronal information processing.
[ { "created": "Tue, 30 Mar 2021 18:26:34 GMT", "version": "v1" }, { "created": "Thu, 3 Nov 2022 13:37:25 GMT", "version": "v2" } ]
2022-11-04
[ [ "Li", "Jia", "" ], [ "Rentzeperis", "Ilias", "" ], [ "van Leeuwen", "Cees", "" ] ]
Self-organization through adaptive rewiring of random neural networks generates brain-like topologies comprising modular small-world structures with rich club effects, merely as the product of optimizing the network topology. In the nervous system, spatial organization is optimized no less by rewiring, through minimizing wiring distance and maximizing spatially aligned wiring layouts. We show that such spatial organization principles interact constructively with adaptive rewiring, contributing to establishing the networks' connectedness and modular structures. We use an evolving neural network model with weighted and directed connections, in which neural traffic flow is based on consensus and advection dynamics, to show that wiring cost minimization supports adaptive rewiring in creating convergent-divergent unit structures. Convergent-divergent units consist of a convergent input-hub, connected to a divergent output-hub via subnetworks of intermediate nodes, which may function as the computational core of the unit. The prominence of minimizing wiring distance in the dynamic evolution of the network determines the extent to which the core is encapsulated from the rest of the network, i.e., the context-sensitivity of its computations. This corresponds to the central role convergent-divergent units play in establishing context-sensitivity in neuronal information processing.
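A minimal sketch of adaptive rewiring with a wiring-cost term, loosely in the spirit of the model above: traffic between nodes is approximated by a heat kernel on the current graph, and at each step a poorly used edge is replaced by a high-traffic, spatially close non-edge. The heat-kernel proxy, the cost weighting, and all parameters are assumptions; the paper's consensus/advection dynamics and directed weighted connections are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, tau, lam = 40, 2.0, 0.5                        # nodes, diffusion time, wiring-cost weight
pos = rng.random((n, 2))                           # random spatial embedding
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

A = (rng.random((n, n)) < 0.1).astype(float)       # sparse random start
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

for _ in range(500):
    H = expm(tau * A)                              # heat-kernel "traffic" proxy
    score = H * np.exp(-lam * dist)                # traffic discounted by wiring cost
    i = rng.integers(n)
    nbrs = np.flatnonzero(A[i])
    non = np.flatnonzero((A[i] == 0) & (np.arange(n) != i))
    if len(nbrs) == 0 or len(non) == 0:
        continue
    worst = nbrs[np.argmin(score[i, nbrs])]        # drop the least-used neighbour
    best = non[np.argmax(score[i, non])]           # connect the best-scoring non-neighbour
    A[i, worst] = A[worst, i] = 0.0
    A[i, best] = A[best, i] = 1.0
```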
0805.3675
Michael Yampolsky
Carolyn M. Salafia, Dawn P. Misra, Michael Yampolsky, Adrian K. Charles, Richard K. Miller
Allometric metabolic scaling and fetal and placental weight
null
null
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tested the hypothesis that the fetal-placental relationship scales allometrically and identified modifying factors. Among women delivering after 34 weeks but prior to 43 weeks gestation, 24,601 participants in the Collaborative Perinatal Project (CPP) had complete data for placental gross proportion measures, specifically, disk shape, larger and smaller disk diameters and thickness, and umbilical cord length. The allometric metabolic equation was solved for alpha and beta by rewriting PW = alpha(BW)^beta as Log(PW) = Log(alpha) + beta*Log(BW). Mean beta was 0.78 ± 0.02 (range 0.66, 0.89), 104% of that predicted by a supply-limited fractal system (0.75). Gestational age, maternal age, maternal BMI, parity, smoking, socioeconomic status, infant sex, and changes in placental proportions each had independent and significant effects on alpha. Conclusions: In the CPP cohort, the placental-birth weight relationship scales to approximately the 3/4 power.
[ { "created": "Fri, 23 May 2008 18:03:08 GMT", "version": "v1" }, { "created": "Fri, 25 Jul 2008 16:31:57 GMT", "version": "v2" }, { "created": "Sun, 22 Mar 2009 22:02:32 GMT", "version": "v3" } ]
2009-03-23
[ [ "Salafia", "Carolyn M.", "" ], [ "Misra", "Dawn P.", "" ], [ "Yampolsky", "Michael", "" ], [ "Charles", "Adrian K.", "" ], [ "Miller", "Richard K.", "" ] ]
We tested the hypothesis that the fetal-placental relationship scales allometrically and identified modifying factors. Among women delivering after 34 weeks but prior to 43 weeks gestation, 24,601 participants in the Collaborative Perinatal Project (CPP) had complete data for placental gross proportion measures, specifically, disk shape, larger and smaller disk diameters and thickness, and umbilical cord length. The allometric metabolic equation was solved for alpha and beta by rewriting PW = alpha(BW)^beta as Log(PW) = Log(alpha) + beta*Log(BW). Mean beta was 0.78 ± 0.02 (range 0.66, 0.89), 104% of that predicted by a supply-limited fractal system (0.75). Gestational age, maternal age, maternal BMI, parity, smoking, socioeconomic status, infant sex, and changes in placental proportions each had independent and significant effects on alpha. Conclusions: In the CPP cohort, the placental-birth weight relationship scales to approximately the 3/4 power.
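The scaling fit described above amounts to a linear regression on log-transformed weights. The sketch below recovers beta and alpha from synthetic placental/birth-weight pairs generated with a known exponent, purely to illustrate the estimation step; the data are not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
bw = rng.uniform(2000, 4500, 500)                  # birth weights (g), synthetic
alpha_true, beta_true = 0.9, 0.75
pw = alpha_true * bw ** beta_true * np.exp(rng.normal(0, 0.1, bw.size))

# Log(PW) = Log(alpha) + beta * Log(BW): slope is beta, intercept is log(alpha)
beta_hat, log_alpha_hat = np.polyfit(np.log(bw), np.log(pw), 1)
print(f"beta ≈ {beta_hat:.3f}, alpha ≈ {np.exp(log_alpha_hat):.3f}")
```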
0709.3237
Matthias Keil
Matthias S. Keil
Gradient Representations and the Perception of Luminosity
This is the longer version of an article which is under review for publication in Vision Research
null
null
null
q-bio.NC
null
The neuronal mechanisms that serve to distinguish between light-emitting and light-reflecting objects are largely unknown. It has been suggested that luminosity perception implements a separate pathway in the visual system, such that luminosity constitutes an independent perceptual feature. Recently, a psychophysical study was conducted to address the question of whether luminosity has a feature status or not. However, the results of this study lend support to the hypothesis that luminance gradients are instead a perceptual feature. Here, I show how the perception of luminosity can emerge from a previously proposed neuronal architecture for generating representations of luminance gradients.
[ { "created": "Thu, 20 Sep 2007 14:06:43 GMT", "version": "v1" } ]
2007-09-21
[ [ "Keil", "Matthias S.", "" ] ]
The neuronal mechanisms that serve to distinguish between light-emitting and light-reflecting objects are largely unknown. It has been suggested that luminosity perception implements a separate pathway in the visual system, such that luminosity constitutes an independent perceptual feature. Recently, a psychophysical study was conducted to address the question of whether luminosity has a feature status or not. However, the results of this study lend support to the hypothesis that luminance gradients are instead a perceptual feature. Here, I show how the perception of luminosity can emerge from a previously proposed neuronal architecture for generating representations of luminance gradients.
2005.01200
Gurdip Uppal
Gurdip Uppal, Weiyi Hu, Dervis Can Vural
Evolution of chemotactic hitchhiking
10 pages, 5 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacteria typically reside in heterogeneous environments with various chemogradients where motile cells can gain an advantage over non-motile cells. Since motility is energetically costly, cells must optimize their swimming speed and behavior to maximize their fitness. Here we investigate how cheating strategies might evolve where slow or non-motile microbes exploit faster ones by sticking together and hitching a ride. Starting with physical and biological first principles, we computationally study the effects of sticking on the evolution of motility in a controlled chemostat environment. We find stickiness allows slow cheaters to dominate when nutrients are dispersed at intermediate distances. Here, slow microbes exploit faster ones until they consume the population, leading to a tragedy of the commons. For long races, slow microbes do gain an initial advantage from sticking, but eventually fall behind. Here, fast microbes are more likely to stick to other fast microbes, and cooperate to increase their own population. We therefore find the nature of the hitchhiking interaction, parasitic or mutualistic, depends on the nutrient distribution.
[ { "created": "Sun, 3 May 2020 22:34:18 GMT", "version": "v1" } ]
2020-05-05
[ [ "Uppal", "Gurdip", "" ], [ "Hu", "Weiyi", "" ], [ "Vural", "Dervis Can", "" ] ]
Bacteria typically reside in heterogeneous environments with various chemogradients where motile cells can gain an advantage over non-motile cells. Since motility is energetically costly, cells must optimize their swimming speed and behavior to maximize their fitness. Here we investigate how cheating strategies might evolve where slow or non-motile microbes exploit faster ones by sticking together and hitching a ride. Starting with physical and biological first principles, we computationally study the effects of sticking on the evolution of motility in a controlled chemostat environment. We find stickiness allows slow cheaters to dominate when nutrients are dispersed at intermediate distances. Here, slow microbes exploit faster ones until they consume the population, leading to a tragedy of the commons. For long races, slow microbes do gain an initial advantage from sticking, but eventually fall behind. Here, fast microbes are more likely to stick to other fast microbes, and cooperate to increase their own population. We therefore find the nature of the hitchhiking interaction, parasitic or mutualistic, depends on the nutrient distribution.
1503.04059
Frederic Bartumeus
Joan Garriga, John R. Palmer, Aitana Oltra, Frederic Bartumeus
Expectation-Maximization Binary Clustering for Behavioural Annotation
34 pages main text including 11 (full page) figures
null
10.1371/journal.pone.0151984
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a variant of the well-established Expectation-Maximization Clustering algorithm that is constrained to generate partitions of the input space into high and low values. The motivation of splitting input variables into high and low values is to favour the semantic interpretation of the final clustering. The Expectation-Maximization binary Clustering is especially useful when a bimodal conditional distribution of the variables is expected or at least when a binary discretization of the input space is deemed meaningful. Furthermore, the algorithm deals with the reliability of the input data such that the larger their uncertainty the less their role in the final clustering. We show here its suitability for behavioural annotation of movement trajectories. However, it can be considered as a general purpose algorithm for the clustering or segmentation of multivariate data or temporal series.
[ { "created": "Fri, 13 Mar 2015 13:30:36 GMT", "version": "v1" }, { "created": "Fri, 4 Dec 2015 16:51:01 GMT", "version": "v2" } ]
2016-04-27
[ [ "Garriga", "Joan", "" ], [ "Palmer", "John R.", "" ], [ "Oltra", "Aitana", "" ], [ "Bartumeus", "Frederic", "" ] ]
We present a variant of the well-established Expectation-Maximization Clustering algorithm that is constrained to generate partitions of the input space into high and low values. The motivation of splitting input variables into high and low values is to favour the semantic interpretation of the final clustering. The Expectation-Maximization binary Clustering is especially useful when a bimodal conditional distribution of the variables is expected or at least when a binary discretization of the input space is deemed meaningful. Furthermore, the algorithm deals with the reliability of the input data such that the larger their uncertainty the less their role in the final clustering. We show here its suitability for behavioural annotation of movement trajectories. However, it can be considered as a general purpose algorithm for the clustering or segmentation of multivariate data or temporal series.
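A minimal sketch of the underlying idea: an EM fit of a two-component Gaussian mixture to a single variable, after which the component with the larger mean is labelled "high" and the other "low". The input-uncertainty weighting and multivariate machinery of the toolbox are omitted, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 200)])

# initialise the two components near the extremes ("low" and "high")
mu = np.array([x.min(), x.max()]); sigma = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])

for _ in range(100):
    # E-step: responsibilities of the low/high components for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixing weights, means, and standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = (r[:, 1] > r[:, 0]).astype(int)   # 1 = "high" component, 0 = "low"
print(mu, sigma, pi)
```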
2401.01811
Andrij Rovenchak
Andrij Rovenchak and Maksym Druchok
Machine learning-assisted search for novel coagulants: when machine learning can be efficient even if data availability is low
null
J. Comput. Chem. 45, No. 13, 937-952 (2024)
10.1002/jcc.27292
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Design of new drugs is a challenging process: a candidate molecule should satisfy multiple conditions to act properly and cause the fewest side effects -- perfect candidates selectively attach to and influence only targets, leaving off-targets intact. The amount of experimental data about various properties of molecules constantly grows, promoting data-driven approaches. However, the applicability of typical predictive machine learning techniques can be substantially limited by a lack of experimental data about a particular target. For example, there are many known Thrombin inhibitors (acting as anticoagulants), but a very limited number of known Protein C inhibitors (coagulants). In this study, we present our approach to suggest new inhibitor candidates by building an effective representation of chemical space. For this aim, we developed a deep learning model -- an autoencoder -- trained on a large set of molecules in the SMILES format to map the chemical space. Further, we applied different sampling strategies to generate novel coagulant candidates. Symmetrically, we tested our approach on anticoagulant candidates, where we were able to predict their inhibition towards Thrombin. We also compare our approach with MegaMolBART -- another deep learning generative model, but exploiting similar principles of navigation in a chemical space.
[ { "created": "Wed, 3 Jan 2024 16:14:37 GMT", "version": "v1" } ]
2024-05-07
[ [ "Rovenchak", "Andrij", "" ], [ "Druchok", "Maksym", "" ] ]
Design of new drugs is a challenging process: a candidate molecule should satisfy multiple conditions to act properly and cause the fewest side effects -- perfect candidates selectively attach to and influence only targets, leaving off-targets intact. The amount of experimental data about various properties of molecules constantly grows, promoting data-driven approaches. However, the applicability of typical predictive machine learning techniques can be substantially limited by a lack of experimental data about a particular target. For example, there are many known Thrombin inhibitors (acting as anticoagulants), but a very limited number of known Protein C inhibitors (coagulants). In this study, we present our approach to suggest new inhibitor candidates by building an effective representation of chemical space. For this aim, we developed a deep learning model -- an autoencoder -- trained on a large set of molecules in the SMILES format to map the chemical space. Further, we applied different sampling strategies to generate novel coagulant candidates. Symmetrically, we tested our approach on anticoagulant candidates, where we were able to predict their inhibition towards Thrombin. We also compare our approach with MegaMolBART -- another deep learning generative model, but exploiting similar principles of navigation in a chemical space.
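Before any SMILES autoencoder can be trained, molecules have to be turned into fixed-size numeric arrays. The sketch below shows one common, simplified choice: character-level one-hot encoding with padding. The vocabulary, padding length, and example SMILES are placeholders; the paper's actual tokenization and network are not reproduced.

```python
import numpy as np

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]                     # toy molecules
chars = sorted({ch for s in smiles for ch in s}) + [" "]    # " " acts as the pad token
idx = {ch: i for i, ch in enumerate(chars)}
max_len = 12

def one_hot(s: str) -> np.ndarray:
    """(max_len, vocab) one-hot matrix for a single SMILES string."""
    s = s.ljust(max_len)[:max_len]                           # pad or truncate
    m = np.zeros((max_len, len(chars)), dtype=np.float32)
    for pos, ch in enumerate(s):
        m[pos, idx[ch]] = 1.0
    return m

batch = np.stack([one_hot(s) for s in smiles])               # ready for an encoder network
print(batch.shape)
```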
2403.11516
Ya Li
Yue Ding, Hongqiao Shi, Shuang Song, Yonghui Wang and Ya Li
Perceptual learning in contour detection transfer across changes in contour path and orientation
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of local elements into shape contours is critical for target detection and identification in cluttered scenes. Previous studies have shown that observers can learn to use image regularities for contour integration and target identification. However, we still know little about the generalization of perceptual learning in contour integration. Specifically, whether training on a contour detection task can transfer to an untrained contour type, path, or orientation is still unclear. In a series of four experiments, human perceptual learning in contour detection was studied using psychophysical methods. We trained participants to detect contours in cluttered scenes over several days, which resulted in a significant improvement in sensitivity to the trained contour type. This improved sensitivity was highly specific to contour type, but transferred across changes in contour path and contour orientation. These results suggest that short-term training improves the ability to integrate specific types of contours by optimizing the ability of the visual system to extract specific image regularities. The differential specificity and generalization across different stimulus features may support the involvement of both low-level and higher-level visual areas in perceptual learning in contour detection. These findings provide further insights into understanding the nature and the brain plasticity mechanism of contour integration learning.
[ { "created": "Mon, 18 Mar 2024 07:06:03 GMT", "version": "v1" } ]
2024-03-19
[ [ "Ding", "Yue", "" ], [ "Shi", "Hongqiao", "" ], [ "Song", "Shuang", "" ], [ "Wang", "Yonghui", "" ], [ "Li", "Ya", "" ] ]
The integration of local elements into shape contours is critical for target detection and identification in cluttered scenes. Previous studies have shown that observers can learn to use image regularities for contour integration and target identification. However, we still know little about the generalization of perceptual learning in contour integration. Specifically, whether training on a contour detection task can transfer to an untrained contour type, path, or orientation is still unclear. In a series of four experiments, human perceptual learning in contour detection was studied using psychophysical methods. We trained participants to detect contours in cluttered scenes over several days, which resulted in a significant improvement in sensitivity to the trained contour type. This improved sensitivity was highly specific to contour type, but transferred across changes in contour path and contour orientation. These results suggest that short-term training improves the ability to integrate specific types of contours by optimizing the ability of the visual system to extract specific image regularities. The differential specificity and generalization across different stimulus features may support the involvement of both low-level and higher-level visual areas in perceptual learning in contour detection. These findings provide further insights into understanding the nature and the brain plasticity mechanism of contour integration learning.
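Detection sensitivity in experiments like the one above is often summarized with d' from signal-detection theory. The sketch below computes d' from hypothetical hit and false-alarm counts; the numbers are invented and the paper may use a different sensitivity measure.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction for 0/1 rates."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

print(f"pre-training  d' = {d_prime(60, 40, 30, 70):.2f}")
print(f"post-training d' = {d_prime(85, 15, 20, 80):.2f}")
```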
0801.3382
Jose Luis Toca-Herrera
Veronica Saravia
Hepatocyte Aggregates: Methods of Preparation in the Microgravity Simulating Bioreactor Use in Tissue Engineering
MSc Thesis (Chemical Engineering Department, Rovira i Virgili University, Spain) Supervisors: Dr. Petros Lenas and Dr. Jose L. Toca-Herrera Pages:32, Figures:15
null
null
null
q-bio.TO
null
Tissue engineering concerns three-dimensional cell growth so that bio-artificial tissues can be created and used for transplantation. The recently expressed concerns from the tissue engineering research community for a re-direction of the research activities necessitate the proposition of new methodologies. We propose a methodology concerned with simulating, in bioreactor systems, liver structures as described in liver anatomy. In this way, the hepatocyte microenvironments that determine their function could be re-created in vitro. The approach requires the use of hepatocyte aggregates as entities to load the bioreactor systems. A new bioreactor, the microgravity-simulating rotation bioreactor, has been used for the preparation of cell aggregates. Microcontact printing has been used to produce patterned surfaces. These were tested by adsorbing BSA proteins, and will be used in the future for the immobilization of cell aggregates in order to gain further understanding of the role of cell heterogeneity in the cooperative behaviour of cells in vitro.
[ { "created": "Tue, 22 Jan 2008 14:40:33 GMT", "version": "v1" } ]
2008-01-23
[ [ "Saravia", "Veronica", "" ] ]
Tissue engineering concerns three-dimensional cell growth so that bio-artificial tissues can be created and used for transplantation. The recently expressed concerns from the tissue engineering research community for a re-direction of the research activities necessitate the proposition of new methodologies. We propose a methodology concerned with simulating, in bioreactor systems, liver structures as described in liver anatomy. In this way, the hepatocyte microenvironments that determine their function could be re-created in vitro. The approach requires the use of hepatocyte aggregates as entities to load the bioreactor systems. A new bioreactor, the microgravity-simulating rotation bioreactor, has been used for the preparation of cell aggregates. Microcontact printing has been used to produce patterned surfaces. These were tested by adsorbing BSA proteins, and will be used in the future for the immobilization of cell aggregates in order to gain further understanding of the role of cell heterogeneity in the cooperative behaviour of cells in vitro.
1011.5108
Fabien Campillo
Fabien Campillo (INRIA Sophia Antipolis - INRA/SupAgro UMR 0729 MISTEA - Montpellier), Marc Joannides (INRIA Sophia Antipolis - INRA/SupAgro UMR 0729 MISTEA - Montpellier, I3M), Ir\`ene Larramendy (I3M)
Stochastic models of the chemostat
null
N° RR-7458 (2010)
null
RR-7458
q-bio.QM math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the modeling of the dynamics of the chemostat at its very source. The chemostat is classically represented as a system of ordinary differential equations. Our goal is to establish a stochastic model that is valid at the scale immediately preceding the one corresponding to the deterministic model. At a microscopic scale we present a pure jump stochastic model that gives rise, at the macroscopic scale, to the ordinary differential equation model. At an intermediate scale, a diffusion approximation allows us to propose a model in the form of a system of stochastic differential equations. We expound the mechanism to switch from one model to another, together with the associated simulation procedures. We also describe the domain of validity of the different models.
[ { "created": "Mon, 22 Nov 2010 10:30:20 GMT", "version": "v1" }, { "created": "Wed, 6 Jul 2011 05:47:22 GMT", "version": "v2" } ]
2011-07-07
[ [ "Campillo", "Fabien", "", "INRIA Sophia Antipolis - INRA/SupAgro UMR 0729 MISTEA\n - Montpellier" ], [ "Joannides", "Marc", "", "INRIA Sophia Antipolis - INRA/SupAgro UMR\n 0729 MISTEA - Montpellier, I3M" ], [ "Larramendy", "Irène", "", "I3M" ] ]
We consider the modeling of the dynamics of the chemostat at its very source. The chemostat is classically represented as a system of ordinary differential equations. Our goal is to establish a stochastic model that is valid at the scale immediately preceding the one corresponding to the deterministic model. At a microscopic scale we present a pure jump stochastic model that gives rise, at the macroscopic scale, to the ordinary differential equation model. At an intermediate scale, a diffusion approximation allows us to propose a model in the form of a system of stochastic differential equations. We expound the mechanism to switch from one model to another, together with the associated simulation procedures. We also describe the domain of validity of the different models.
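As a rough illustration of two of the scales discussed above, the sketch below integrates the classical single-species, single-substrate chemostat ODEs (Monod uptake) and then runs one crude Euler-Maruyama trajectory of a diffusion-type approximation with demographic-style noise of size 1/sqrt(V). The noise terms and all parameter values are illustrative assumptions, not the paper's actual models.

```python
# Rough sketch: deterministic chemostat ODEs and a crude diffusion-type approximation.
import numpy as np
from scipy.integrate import solve_ivp

D, S_in, mu_max, K, Y = 0.5, 10.0, 1.0, 2.0, 0.5   # dilution, inflow, Monod parameters, yield

def mu(S):
    return mu_max * S / (K + S)                     # Monod specific growth rate

def ode(t, y):
    S, X = y
    return [D * (S_in - S) - mu(S) * X / Y,         # substrate balance
            (mu(S) - D) * X]                        # biomass balance

sol = solve_ivp(ode, (0.0, 50.0), [S_in, 0.1])
print("deterministic endpoint (S, X) ~", np.round(sol.y[:, -1], 3))

rng = np.random.default_rng(0)
V, dt, n_steps = 1000.0, 0.01, 5000                 # "system size", time step, number of steps
S, X = S_in, 0.1
for _ in range(n_steps):                            # Euler-Maruyama with demographic-style noise
    growth = mu(S) * X
    dWs, dWx = rng.normal(scale=np.sqrt(dt), size=2)
    S += (D * (S_in - S) - growth / Y) * dt + np.sqrt(growth / (Y * V)) * dWs
    X += (growth - D * X) * dt + np.sqrt((growth + D * X) / V) * dWx
    S, X = max(S, 0.0), max(X, 0.0)                 # keep concentrations non-negative
print("one stochastic endpoint (S, X) ~", (round(S, 3), round(X, 3)))
```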
2408.05224
Liu Hong
Mengshou Wang, Liangrong Peng, Baoguo Jia, Liu Hong
Optimal Strategy for Stabilizing Protein Folding Intermediates
19 pages, 5 figures, 2 tables
null
null
null
q-bio.BM math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manipulating the protein population at a certain functional state through chemical stabilizers is crucial for protein-related studies. It not only plays a key role in protein structure analysis and protein folding kinetics, but also affects protein functionality to a large extent and thus has wide applications in medicine, the food industry, etc. However, due to concerns about side effects or the financial costs of stabilizers, identifying optimal strategies for enhancing protein stability with a minimal amount of stabilizers is of great importance. Here we prove that, for either a fixed terminal time (including both finite and infinite cases) or a free one, the optimal control strategy for stabilizing the folding intermediates under a linear strategy for stabilizer addition belongs to the class of Bang-Bang controls. The corresponding optimal switching time is derived analytically, and its phase diagram with respect to several key parameters is explored in detail. The Bang-Bang control breaks down when nonlinear strategies for stabilizer addition are adopted. Our current study on optimal strategies for protein stabilizers not only offers deep insights into the general picture of protein folding kinetics, but also provides valuable theoretical guidance on treatments for protein-related diseases in medicine.
[ { "created": "Sun, 28 Jul 2024 11:36:29 GMT", "version": "v1" } ]
2024-08-13
[ [ "Wang", "Mengshou", "" ], [ "Pengb", "Liangrong", "" ], [ "Jia", "Baoguo", "" ], [ "Hong", "Liu", "" ] ]
Manipulating the protein population at a certain functional state through chemical stabilizers is crucial for protein-related studies. It not only plays a key role in protein structure analysis and protein folding kinetics, but also affects protein functionality to a large extent and thus has wide applications in medicine, the food industry, etc. However, due to concerns about side effects or the financial costs of stabilizers, identifying optimal strategies for enhancing protein stability with a minimal amount of stabilizers is of great importance. Here we prove that, for either a fixed terminal time (including both finite and infinite cases) or a free one, the optimal control strategy for stabilizing the folding intermediates under a linear strategy for stabilizer addition belongs to the class of Bang-Bang controls. The corresponding optimal switching time is derived analytically, and its phase diagram with respect to several key parameters is explored in detail. The Bang-Bang control breaks down when nonlinear strategies for stabilizer addition are adopted. Our current study on optimal strategies for protein stabilizers not only offers deep insights into the general picture of protein folding kinetics, but also provides valuable theoretical guidance on treatments for protein-related diseases in medicine.
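The abstract does not spell out the underlying kinetics, so the sketch below uses a hypothetical three-state folding scheme (U -> I -> F) with a single bang-bang stabilizer input that is switched on once, and simply scans for the switch time that maximizes the intermediate population at the terminal time. Every rate constant and the form of the stabilizer's action are assumptions for illustration only.

```python
# Toy bang-bang sketch (hypothetical kinetics, not the paper's model).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, beta, alpha, u_max, T = 1.0, 0.8, 0.5, 0.9, 1.0, 5.0   # illustrative constants

def rhs(t, y, t_switch):
    U, I, F = y
    u = u_max if t >= t_switch else 0.0          # bang-bang input: off, then fully on
    k1_eff = k1 * (1.0 - beta * u)               # stabilizer mildly slows U -> I
    k2_eff = k2 * (1.0 - alpha * u)              # ... and strongly slows I -> F
    return [-k1_eff * U, k1_eff * U - k2_eff * I, k2_eff * I]

def intermediate_at_T(t_switch):
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0], args=(t_switch,),
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]                          # intermediate population at time T

switch_times = np.linspace(0.0, T, 101)
best = max(switch_times, key=intermediate_at_T)  # brute-force scan of the single switch time
print(f"best switch time ~ {best:.2f}, I(T) = {intermediate_at_T(best):.3f}")
```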
1210.0120
Brant Faircloth
Michael E. Alfaro and Brant C. Faircloth and Laurie Sorenson and Francesco Santini
A phylogenomic perspective on the radiation of ray-finned fishes based upon targeted sequencing of ultraconserved elements
null
(2013) PLoS ONE 8(6): e65923
10.1371/journal.pone.0065923
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ray-finned fishes constitute the dominant radiation of vertebrates with over 30,000 species. Although molecular phylogenetics has begun to disentangle major evolutionary relationships within this vast section of the Tree of Life, there is no widely available approach for efficiently collecting phylogenomic data within fishes, leaving much of the enormous potential of massively parallel sequencing technologies for resolving major radiations in ray-finned fishes unrealized. Here, we provide a genomic perspective on longstanding questions regarding the diversification of major groups of ray-finned fishes through targeted enrichment of ultraconserved nuclear DNA elements (UCEs) and their flanking sequence. Our workflow efficiently and economically generates data sets that are orders of magnitude larger than those produced by traditional approaches and is well-suited to working with museum specimens. Analysis of the UCE data set recovers a well-supported phylogeny at both shallow and deep time-scales that supports a monophyletic relationship between Amia and Lepisosteus (Holostei) and reveals elopomorphs and then osteoglossomorphs to be the earliest diverging teleost lineages. Divergence time estimation based upon 14 fossil calibrations reveals that crown teleosts appeared ~270 Ma at the end of the Permian and that elopomorphs, osteoglossomorphs, ostarioclupeomorphs, and euteleosts diverged from one another by 205 Ma during the Triassic. Our approach additionally reveals that sequence capture of UCE regions and their flanking sequence offers enormous potential for resolving phylogenetic relationships within ray-finned fishes.
[ { "created": "Sat, 29 Sep 2012 16:00:44 GMT", "version": "v1" } ]
2013-06-20
[ [ "Alfaro", "Michael E.", "" ], [ "Faircloth", "Brant C.", "" ], [ "Sorenson", "Laurie", "" ], [ "Santini", "Francesco", "" ] ]
Ray-finned fishes constitute the dominant radiation of vertebrates with over 30,000 species. Although molecular phylogenetics has begun to disentangle major evolutionary relationships within this vast section of the Tree of Life, there is no widely available approach for efficiently collecting phylogenomic data within fishes, leaving much of the enormous potential of massively parallel sequencing technologies for resolving major radiations in ray-finned fishes unrealized. Here, we provide a genomic perspective on longstanding questions regarding the diversification of major groups of ray-finned fishes through targeted enrichment of ultraconserved nuclear DNA elements (UCEs) and their flanking sequence. Our workflow efficiently and economically generates data sets that are orders of magnitude larger than those produced by traditional approaches and is well-suited to working with museum specimens. Analysis of the UCE data set recovers a well-supported phylogeny at both shallow and deep time-scales that supports a monophyletic relationship between Amia and Lepisosteus (Holostei) and reveals elopomorphs and then osteoglossomorphs to be the earliest diverging teleost lineages. Divergence time estimation based upon 14 fossil calibrations reveals that crown teleosts appeared ~270 Ma at the end of the Permian and that elopomorphs, osteoglossomorphs, ostarioclupeomorphs, and euteleosts diverged from one another by 205 Ma during the Triassic. Our approach additionally reveals that sequence capture of UCE regions and their flanking sequence offers enormous potential for resolving phylogenetic relationships within ray-finned fishes.
1410.3972
Eran Elhaik
Eran Elhaik, Tatiana V. Tatarinova, Anatole A. Klyosov, and Dan Graur
An extended reply to Mendez et al.: The 'extremely ancient' chromosome that still isn't
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Earlier this year, we published a scathing critique of a paper by Mendez et al. (2013) in which the claim was made that a Y chromosome was 237,000-581,000 years old. Elhaik et al. (2014) also attacked a popular article in Scientific American by the senior author of Mendez et al. (2013), whose title was "Sex with other human species might have been the secret of Homo sapiens's [sic] success" (Hammer 2013). Five of the 11 authors of Mendez et al. (2013) have now written a "rebuttal," and we were allowed to reply. Unfortunately, our reply was censored for being "too sarcastic and inflamed." References were removed, meanings were castrated, and a dedication in the Acknowledgments was deleted. Now that the so-called rebuttal by 45% of the authors of Mendez et al. (2013) has been published together with our vasectomized reply, we decided to make public our entire reply to the so-called "rebuttal." In fact, we go one step further, and publish a version of the reply that has not even been self-censored.
[ { "created": "Wed, 15 Oct 2014 08:45:15 GMT", "version": "v1" }, { "created": "Mon, 20 Oct 2014 21:22:26 GMT", "version": "v2" } ]
2014-10-22
[ [ "Elhaik", "Eran", "" ], [ "Tatarinova", "Tatiana V.", "" ], [ "Klyosov", "Anatole A.", "" ], [ "Graur", "Dan", "" ] ]
Earlier this year, we published a scathing critique of a paper by Mendez et al. (2013) in which the claim was made that a Y chromosome was 237,000-581,000 years old. Elhaik et al. (2014) also attacked a popular article in Scientific American by the senior author of Mendez et al. (2013), whose title was "Sex with other human species might have been the secret of Homo sapiens's [sic] success" (Hammer 2013). Five of the 11 authors of Mendez et al. (2013) have now written a "rebuttal," and we were allowed to reply. Unfortunately, our reply was censored for being "too sarcastic and inflamed." References were removed, meanings were castrated, and a dedication in the Acknowledgments was deleted. Now that the so-called rebuttal by 45% of the authors of Mendez et al. (2013) has been published together with our vasectomized reply, we decided to make public our entire reply to the so-called "rebuttal." In fact, we go one step further, and publish a version of the reply that has not even been self-censored.
1707.04192
Marco Lehmann
Marco Lehmann, He Xu, Vasiliki Liakoni, Michael Herzog, Wulfram Gerstner, Kerstin Preuschoff
One-shot learning and behavioral eligibility traces in sequential decision making
null
eLife 2019; 8:e47463
10.7554/eLife.47463
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many daily tasks we make multiple decisions before reaching a goal. In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary. Reinforcement learning theory suggests two classes of algorithms solving this credit assignment problem: In classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here we asked whether humans use eligibility traces. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.
[ { "created": "Thu, 13 Jul 2017 16:04:34 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2019 15:22:49 GMT", "version": "v2" }, { "created": "Tue, 12 Nov 2019 10:00:22 GMT", "version": "v3" } ]
2019-11-13
[ [ "Lehmann", "Marco", "" ], [ "Xu", "He", "" ], [ "Liakoni", "Vasiliki", "" ], [ "Herzog", "Michael", "" ], [ "Gerstner", "Wulfram", "" ], [ "Preuschoff", "Kerstin", "" ] ]
In many daily tasks we make multiple decisions before reaching a goal. In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary. Reinforcement learning theory suggests two classes of algorithms solving this credit assignment problem: In classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here we asked whether humans use eligibility traces. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.
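The distinction the abstract draws between one-shot credit assignment and classic temporal-difference learning can be seen in a tiny tabular sketch: a deterministic three-state chain with a single terminal reward, updated once with TD(0) and once with TD(lambda). The environment and parameters are illustrative, not the paper's experimental paradigm.

```python
# Tabular TD(0) vs TD(lambda) on a chain s0 -> s1 -> s2 -> terminal (reward 1 on the last step).
# Eligibility traces propagate the reward to early states after a single episode ("one-shot").
import numpy as np

n_states, gamma, alpha, lam = 3, 1.0, 0.5, 0.9
rewards = [0.0, 0.0, 1.0]                     # reward received on leaving each state

def run_episode(V, use_traces):
    e = np.zeros(n_states)                    # eligibility trace
    for s in range(n_states):
        v_next = V[s + 1] if s + 1 < n_states else 0.0
        delta = rewards[s] + gamma * v_next - V[s]
        if use_traces:
            e[s] += 1.0                       # accumulating trace
            V += alpha * delta * e            # every eligible state is updated
            e *= gamma * lam
        else:
            V[s] += alpha * delta             # only the current state is updated
    return V

print("TD(0)      after 1 episode:", run_episode(np.zeros(n_states), False))
print("TD(lambda) after 1 episode:", run_episode(np.zeros(n_states), True))
```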
1805.05433
Joshua M. Deutsch
J. M. Deutsch
Computational mechanisms in genetic regulation by RNA
18 pages, 10 figures
null
null
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolution of the genome has led to very sophisticated and complex regulation. Because of the abundance of non-coding RNA (ncRNA) in the cell, different species will promiscuously associate with each other, suggesting collective dynamics similar to artificial neural networks. Here we present a simple mechanism allowing ncRNA to perform computations equivalent to neural network algorithms such as Boltzmann machines and the Hopfield model. The quantities analogous to the neural couplings are the equilibrium constants between different RNA species. The relatively rapid equilibration of RNA binding and unbinding is regulated by a slower process that degrades and creates new RNA. The model requires that the creation rate for each species be an increasing function of the ratio of total to unbound RNA. Similar mechanisms have already been found to exist experimentally for ncRNA regulation. With the overall concentration of RNA regulated, equilibrium constants can be chosen to store many different patterns, or many different input-output relations. The network is also quite insensitive to random mutations in equilibrium constants. Therefore one expects that this kind of mechanism will have a much higher mutation rate than ones typically regarded as being under evolutionary constraint.
[ { "created": "Mon, 14 May 2018 20:39:02 GMT", "version": "v1" } ]
2018-05-16
[ [ "Deutsch", "J. M.", "" ] ]
The evolution of the genome has led to very sophisticated and complex regulation. Because of the abundance of non-coding RNA (ncRNA) in the cell, different species will promiscuously associate with each other, suggesting collective dynamics similar to artificial neural networks. Here we present a simple mechanism allowing ncRNA to perform computations equivalent to neural network algorithms such as Boltzmann machines and the Hopfield model. The quantities analogous to the neural couplings are the equilibrium constants between different RNA species. The relatively rapid equilibration of RNA binding and unbinding is regulated by a slower process that degrades and creates new RNA. The model requires that the creation rate for each species be an increasing function of the ratio of total to unbound RNA. Similar mechanisms have already been found to exist experimentally for ncRNA regulation. With the overall concentration of RNA regulated, equilibrium constants can be chosen to store many different patterns, or many different input-output relations. The network is also quite insensitive to random mutations in equilibrium constants. Therefore one expects that this kind of mechanism will have a much higher mutation rate than ones typically regarded as being under evolutionary constraint.
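A minimal Hopfield-style sketch of the analogy drawn in the abstract: a Hebbian coupling matrix (playing the role the abstract assigns to inter-RNA equilibrium constants) stores two binary patterns and recovers one from a corrupted cue. The binary states and update rule are textbook choices, not details of the RNA model itself.

```python
# Minimal Hopfield-network sketch: store two patterns and recall one from a noisy cue.
import numpy as np

rng = np.random.default_rng(1)
N = 50
patterns = rng.choice([-1, 1], size=(2, N))           # two stored +/-1 patterns

W = (patterns.T @ patterns) / N                       # Hebbian couplings
np.fill_diagonal(W, 0.0)

state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)          # corrupt 10 of the 50 units
state[flip] *= -1

for _ in range(5 * N):                                # asynchronous threshold updates
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", int(state @ patterns[0]), "/", N)
```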
0711.2531
Hendrik Blok
Michael Doebeli, Hendrik J. Blok, Olof Leimar, Ulf Dieckmann
Multimodal pattern formation in phenotype distributions of sexual populations
null
Proc. R. Soc. B (2007) 274, 347-357
10.1098/rspb.2006.3725
null
q-bio.PE
null
During bouts of evolutionary diversification, such as adaptive radiations, the emerging species cluster around different locations in phenotype space. How such multimodal patterns in phenotype space can emerge from a single ancestral species is a fundamental question in biology. Frequency-dependent competition is one potential mechanism for such pattern formation, as has previously been shown in models based on the theory of adaptive dynamics. Here we demonstrate that also in models similar to those used in quantitative genetics, phenotype distributions can split into multiple modes under the force of frequency-dependent competition. In sexual populations, this requires assortative mating, and we show that the multimodal splitting of initially unimodal distributions occurs over a range of assortment parameters. In addition, assortative mating can be favoured evolutionarily even if it incurs costs, because it provides a means of alleviating the effects of frequency dependence. Our results reveal that models at both ends of the spectrum between essentially monomorphic (adaptive dynamics) and fully polymorphic (quantitative genetics) yield similar results. This underscores that frequency-dependent selection is a strong agent of pattern formation in phenotype distributions, potentially resulting in adaptive speciation.
[ { "created": "Thu, 15 Nov 2007 23:25:00 GMT", "version": "v1" } ]
2007-11-19
[ [ "Doebeli", "Michael", "" ], [ "Blok", "Hendrik J.", "" ], [ "Leimar", "Olof", "" ], [ "Dieckmann", "Ulf", "" ] ]
During bouts of evolutionary diversification, such as adaptive radiations, the emerging species cluster around different locations in phenotype space. How such multimodal patterns in phenotype space can emerge from a single ancestral species is a fundamental question in biology. Frequency-dependent competition is one potential mechanism for such pattern formation, as has previously been shown in models based on the theory of adaptive dynamics. Here we demonstrate that also in models similar to those used in quantitative genetics, phenotype distributions can split into multiple modes under the force of frequency-dependent competition. In sexual populations, this requires assortative mating, and we show that the multimodal splitting of initially unimodal distributions occurs over a range of assortment parameters. In addition, assortative mating can be favoured evolutionarily even if it incurs costs, because it provides a means of alleviating the effects of frequency dependence. Our results reveal that models at both ends of the spectrum between essentially monomorphic (adaptive dynamics) and fully polymorphic (quantitative genetics) yield similar results. This underscores that frequency-dependent selection is a strong agent of pattern formation in phenotype distributions, potentially resulting in adaptive speciation.
2306.07812
Xu Wang
Xu Wang and Huan Zhao and Weiwei Tu and Quanming Yao
Automated 3D Pre-Training for Molecular Property Prediction
null
null
10.1145/3580305.3599252
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular property prediction is an important problem in drug discovery and materials science. As geometric structures have been demonstrated to be necessary for molecular property prediction, 3D information has been combined with various graph learning methods to boost prediction performance. However, obtaining the geometric structure of molecules is not feasible in many real-world applications due to the high computational cost. In this work, we propose a novel 3D pre-training framework (dubbed 3D PGT), which pre-trains a model on 3D molecular graphs and then fine-tunes it on molecular graphs without 3D structures. Based on the fact that bond length, bond angle, and dihedral angle are three basic geometric descriptors corresponding to a complete molecular 3D conformer, we first develop a multi-task generative pre-training framework based on these three attributes. Next, to automatically fuse these three generative tasks, we design a surrogate metric using the \textit{total energy} to search for the weight distribution of the three pretext tasks, since the total energy corresponds to the quality of the 3D conformer. Extensive experiments on 2D molecular graphs are conducted to demonstrate the accuracy, efficiency and generalization ability of the proposed 3D PGT compared to various pre-training baselines.
[ { "created": "Tue, 13 Jun 2023 14:43:13 GMT", "version": "v1" }, { "created": "Sun, 2 Jul 2023 13:03:27 GMT", "version": "v2" } ]
2023-07-04
[ [ "Wang", "Xu", "" ], [ "Zhao", "Huan", "" ], [ "Tu", "Weiwei", "" ], [ "Yao", "Quanming", "" ] ]
Molecular property prediction is an important problem in drug discovery and materials science. As geometric structures have been demonstrated to be necessary for molecular property prediction, 3D information has been combined with various graph learning methods to boost prediction performance. However, obtaining the geometric structure of molecules is not feasible in many real-world applications due to the high computational cost. In this work, we propose a novel 3D pre-training framework (dubbed 3D PGT), which pre-trains a model on 3D molecular graphs and then fine-tunes it on molecular graphs without 3D structures. Based on the fact that bond length, bond angle, and dihedral angle are three basic geometric descriptors corresponding to a complete molecular 3D conformer, we first develop a multi-task generative pre-training framework based on these three attributes. Next, to automatically fuse these three generative tasks, we design a surrogate metric using the \textit{total energy} to search for the weight distribution of the three pretext tasks, since the total energy corresponds to the quality of the 3D conformer. Extensive experiments on 2D molecular graphs are conducted to demonstrate the accuracy, efficiency and generalization ability of the proposed 3D PGT compared to various pre-training baselines.
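The three geometric descriptors named in the abstract have standard closed forms; the sketch below computes bond length, bond angle and dihedral angle from 3-D coordinates, using the usual atan2 construction for the dihedral. This is generic geometry, not the 3D PGT pre-training code.

```python
# Bond length, bond angle and dihedral angle from 3-D coordinates.
import numpy as np

def bond_length(p1, p2):
    return np.linalg.norm(p2 - p1)

def bond_angle(p1, p2, p3):
    u, v = p1 - p2, p3 - p2
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def dihedral(p1, p2, p3, p4):
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)        # normals of the two planes
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(m1 @ n2, n1 @ n2))    # signed torsion angle

# Example: four atoms of an idealized planar (trans) fragment -> dihedral of 180 degrees
a, b, c, d = map(np.array, ([0., 1., 0.], [0., 0., 0.], [1., 0., 0.], [1., -1., 0.]))
print(bond_length(a, b), bond_angle(a, b, c), dihedral(a, b, c, d))
```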
1806.07477
Heyrim Cho
Heyrim Cho and Doron Levy
The Impact of Competition Between Cancer Cells and Healthy Cells on Optimal Drug Delivery
18 pages
null
10.1051/mmnp/2019043
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cell competition is recognized to be instrumental to the dynamics and structure of the tumor-host interface in invasive cancers. In mild competition scenarios, the healthy tissue and cancer cells can coexist. When the competition is aggressive, competitive cells, the so-called super-competitors, expand by killing other cells. Novel cytotoxic drugs and molecularly targeted drugs are commonly administered as part of cancer therapy. Both types of drugs are susceptible to various mechanisms of drug resistance, obstructing or preventing a successful outcome. In this paper, we develop a cancer growth model that accounts for the competition between cancer cells and healthy cells. The model incorporates resistance to both cytotoxic and targeted drugs. In both cases, the level of drug resistance is assumed to be a continuous variable ranging from fully-sensitive to fully-resistant. Using our model we demonstrate that when the competition is moderate, therapies using both drugs are more effective compared with single drug therapies. However, when cancer cells are highly competitive, targeted drugs become more effective. In this case, therapies that are initiated with a targeted drug and are exposed to it for a sufficiently long time are shown to have better outcomes. The results of the study stress the importance of adjusting the therapy to the pre-treatment resistance levels. We conclude with a study of the spatiotemporal propagation of drug resistance in a competitive setting, verifying that the same conclusions hold in the spatially heterogeneous case.
[ { "created": "Tue, 19 Jun 2018 21:26:02 GMT", "version": "v1" } ]
2022-04-19
[ [ "Cho", "Heyrim", "" ], [ "Levy", "Doron", "" ] ]
Cell competition is recognized to be instrumental to the dynamics and structure of the tumor-host interface in invasive cancers. In mild competition scenarios, the healthy tissue and cancer cells can coexist. When the competition is aggressive, competitive cells, the so-called super-competitors, expand by killing other cells. Novel cytotoxic drugs and molecularly targeted drugs are commonly administered as part of cancer therapy. Both types of drugs are susceptible to various mechanisms of drug resistance, obstructing or preventing a successful outcome. In this paper, we develop a cancer growth model that accounts for the competition between cancer cells and healthy cells. The model incorporates resistance to both cytotoxic and targeted drugs. In both cases, the level of drug resistance is assumed to be a continuous variable ranging from fully-sensitive to fully-resistant. Using our model we demonstrate that when the competition is moderate, therapies using both drugs are more effective compared with single drug therapies. However, when cancer cells are highly competitive, targeted drugs become more effective. In this case, therapies that are initiated with a targeted drug and are exposed to it for a sufficiently long time are shown to have better outcomes. The results of the study stress the importance of adjusting the therapy to the pre-treatment resistance levels. We conclude with a study of the spatiotemporal propagation of drug resistance in a competitive setting, verifying that the same conclusions hold in the spatially heterogeneous case.
2308.09725
Ziwei Yang
Ziwei Yang, Zheng Chen, Yasuko Matsubara, Yasushi Sakurai
MoCLIM: Towards Accurate Cancer Subtyping via Multi-Omics Contrastive Learning with Omics-Inference Modeling
CIKM'23 Long/Full Papers
null
10.1145/3583780.3614970
null
q-bio.GN cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Precision medicine fundamentally aims to establish causality between dysregulated biochemical mechanisms and cancer subtypes. Omics-based cancer subtyping has emerged as a revolutionary approach, as different levels of omics record the biochemical products of the multistep processes in cancers. This paper focuses on fully exploiting the potential of multi-omics data to improve cancer subtyping outcomes, and to this end we develop MoCLIM, a representation learning framework. MoCLIM independently extracts the informative features from distinct omics modalities. Using a unified representation informed by contrastive learning across the different omics modalities, we can cluster the subtypes of a given cancer well in a lower-dimensional latent space. This contrast can be interpreted as a projection of the inter-omics inference observed in biological networks. Experimental results on six cancer datasets demonstrate that our approach significantly improves data fit and subtyping performance even with fewer high-dimensional cancer instances. Moreover, our framework incorporates various medical evaluations as the final component, providing high interpretability in medical analysis.
[ { "created": "Thu, 17 Aug 2023 10:49:48 GMT", "version": "v1" }, { "created": "Thu, 24 Aug 2023 04:38:45 GMT", "version": "v2" } ]
2023-08-25
[ [ "Yang", "Ziwei", "" ], [ "Chen", "Zheng", "" ], [ "Matsubara", "Yasuko", "" ], [ "Sakurai", "Yasushi", "" ] ]
Precision medicine fundamentally aims to establish causality between dysregulated biochemical mechanisms and cancer subtypes. Omics-based cancer subtyping has emerged as a revolutionary approach, as different levels of omics record the biochemical products of the multistep processes in cancers. This paper focuses on fully exploiting the potential of multi-omics data to improve cancer subtyping outcomes, and to this end we develop MoCLIM, a representation learning framework. MoCLIM independently extracts the informative features from distinct omics modalities. Using a unified representation informed by contrastive learning across the different omics modalities, we can cluster the subtypes of a given cancer well in a lower-dimensional latent space. This contrast can be interpreted as a projection of the inter-omics inference observed in biological networks. Experimental results on six cancer datasets demonstrate that our approach significantly improves data fit and subtyping performance even with fewer high-dimensional cancer instances. Moreover, our framework incorporates various medical evaluations as the final component, providing high interpretability in medical analysis.
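As a rough sketch of the kind of contrastive objective such multi-omics fusion frameworks rely on, the snippet below computes an InfoNCE-style loss that pulls together two embedded views of the same sample. The linear "encoders" and random data are stand-ins; MoCLIM's actual architecture and loss may differ.

```python
# InfoNCE-style contrastive loss between two omics views of the same samples (stand-in data).
import numpy as np

rng = np.random.default_rng(0)
n, d_omics1, d_omics2, d_latent, tau = 8, 100, 80, 16, 0.2

x1 = rng.normal(size=(n, d_omics1))               # e.g. expression features (random stand-in)
x2 = rng.normal(size=(n, d_omics2))               # e.g. methylation features (random stand-in)
W1 = rng.normal(size=(d_omics1, d_latent)) / np.sqrt(d_omics1)   # toy linear "encoders"
W2 = rng.normal(size=(d_omics2, d_latent)) / np.sqrt(d_omics2)

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z1, z2 = normalize(x1 @ W1), normalize(x2 @ W2)   # unit-norm embeddings per modality
logits = z1 @ z2.T / tau                           # similarity of every (sample_i, sample_j) pair
# each sample's two views should match each other rather than any other sample
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
print("contrastive loss on random data:", round(float(loss), 3))
```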
2003.03580
Pengli Lu
Pengli Lu and JingJuan Yu
Two new methods for identifying proteins based on the domain protein complexes and topological properties
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recognition of essential proteins can not only help to understand the mechanism of cell operation but also help to study the mechanism of biological evolution. At present, many scholars have been discovering essential proteins according to the topological structure of protein networks and complexes, yet some proteins still cannot be recognized. In this paper, we propose two new methods, complex degree centrality (CDC) and complex in-degree and betweenness definition (CIBD), which integrate the local character of protein complexes with topological properties to determine the essentiality of proteins. First, we give the definitions of complex average centrality (CAC) and complex hybrid centrality (CHC), which both describe properties of protein complexes. Then we propose the new methods CDC and CIBD based on the CAC and CHC definitions. In order to assess these two methods, different Protein-Protein Interaction (PPI) networks of Saccharomyces cerevisiae, DIP, MIPS and YMBD, are used as experimental materials. Experimental results on these networks show that the methods CDC and CIBD can help to improve the precision of predicting essential proteins.
[ { "created": "Sat, 7 Mar 2020 13:56:35 GMT", "version": "v1" } ]
2020-03-10
[ [ "Lu", "Pengli", "" ], [ "Yu", "JingJuan", "" ] ]
The recognition of essential proteins can not only help to understand the mechanism of cell operation but also help to study the mechanism of biological evolution. At present, many scholars have been discovering essential proteins according to the topological structure of protein networks and complexes, yet some proteins still cannot be recognized. In this paper, we propose two new methods, complex degree centrality (CDC) and complex in-degree and betweenness definition (CIBD), which integrate the local character of protein complexes with topological properties to determine the essentiality of proteins. First, we give the definitions of complex average centrality (CAC) and complex hybrid centrality (CHC), which both describe properties of protein complexes. Then we propose the new methods CDC and CIBD based on the CAC and CHC definitions. In order to assess these two methods, different Protein-Protein Interaction (PPI) networks of Saccharomyces cerevisiae, DIP, MIPS and YMBD, are used as experimental materials. Experimental results on these networks show that the methods CDC and CIBD can help to improve the precision of predicting essential proteins.
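The exact CDC and CIBD formulas are not given in the abstract; as a generic illustration of combining a local property with a global topological one, the sketch below ranks the nodes of a toy PPI graph by a weighted mix of degree and betweenness centrality. The graph, the scores and the mixing parameter are all hypothetical.

```python
# Generic hybrid centrality ranking on a toy protein-protein interaction graph.
import networkx as nx

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("D", "F"), ("C", "G")]
G = nx.Graph(edges)

deg = nx.degree_centrality(G)                      # local property
btw = nx.betweenness_centrality(G)                 # global topological property
alpha = 0.5                                        # mixing weight (illustrative)
hybrid = {v: alpha * deg[v] + (1 - alpha) * btw[v] for v in G}

for protein, score in sorted(hybrid.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {score:.3f}")
```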
2305.05086
Brian Sun
Braden Barnett, Yiqi Lyu, Kyle Pichney, Brian Sun, Jixiao Wu
Mechanical Evidence for the Phylogenetic Origin of the Red Panda's False Thumb as an Adaptation to Arboreal Locomotion
14 pages, 10 figures
null
null
null
q-bio.PE cs.RO
http://creativecommons.org/licenses/by/4.0/
We constructed a modular, biomimetic red panda paw with which to experimentally investigate the evolutionary reason for the existence of the false thumbs of red pandas. These thumbs were once believed to have shared a common origin with the similar false thumbs of giant pandas; however, the discovery of a carnivorous fossil ancestor of the red panda that had false thumbs implies that the red panda did not evolve its thumbs to assist in eating bamboo, as the giant panda did, but rather evolved its thumbs for some other purpose. The leading proposal for this purpose is that the thumbs developed to aid arboreal locomotion. To test this hypothesis, we conducted grasp tests on rods 5-15 mm in diameter using a biomimetic paw with 0-16 mm interchangeable thumb lengths. The results of these tests demonstrated an optimal thumb length of 7 mm, which is just above the red panda's true thumb length of 5.5 mm. Given trends in the data that suggest that smaller thumbs are better suited to grasping larger-diameter rods, we conclude that the red panda's thumb being sized below the optimum length suggests an adaptation toward grasping branches as opposed to relatively thinner food items, supporting the new proposal that the red panda's thumbs are an adaptation primarily for climbing rather than food manipulation.
[ { "created": "Mon, 8 May 2023 23:05:39 GMT", "version": "v1" } ]
2023-05-10
[ [ "Barnett", "Braden", "" ], [ "Lyu", "Yiqi", "" ], [ "Pichney", "Kyle", "" ], [ "Sun", "Brian", "" ], [ "Wu", "Jixiao", "" ] ]
We constructed a modular, biomimetic red panda paw with which to experimentally investigate the evolutionary reason for the existence of the false thumbs of red pandas. These thumbs were once believed to have shared a common origin with the similar false thumbs of giant pandas; however, the discovery of a carnivorous fossil ancestor of the red panda that had false thumbs implies that the red panda did not evolve its thumbs to assist in eating bamboo, as the giant panda did, but rather evolved its thumbs for some other purpose. The leading proposal for this purpose is that the thumbs developed to aid arboreal locomotion. To test this hypothesis, we conducted grasp tests on rods 5-15 mm in diameter using a biomimetic paw with 0-16 mm interchangeable thumb lengths. The results of these tests demonstrated an optimal thumb length of 7 mm, which is just above the red panda's true thumb length of 5.5 mm. Given trends in the data that suggest that smaller thumbs are better suited to grasping larger-diameter rods, we conclude that the red panda's thumb being sized below the optimum length suggests an adaptation toward grasping branches as opposed to relatively thinner food items, supporting the new proposal that the red panda's thumbs are an adaptation primarily for climbing rather than food manipulation.
q-bio/0507018
Toby Johnson
Toby Johnson
Bayesian Method for Disease QTL Detection and Mapping, using a Case and Control Design and DNA Pooling
null
Biostatistics (2007) 8:546--565
10.1093/biostatistics/kxl028
null
q-bio.GN q-bio.PE q-bio.QM
null
This paper describes a Bayesian statistical method for determining the genetic basis of a complex genetic trait. The method uses a sample of unrelated individuals classified into two groups, for example cases and controls. Each group is assumed to have been genotyped at a battery of marker loci using DNA pooling, a technique that reduces laboratory effort. The aim is to detect and map a quantitative trait locus (QTL) that is not one of the typed markers. The method works by conducting an exact Bayesian analysis under a number of simplifying population genetic assumptions that are somewhat unrealistic. Despite this, the method is shown to perform acceptably on datasets simulated under a more realistic model, and furthermore is shown to outperform classical single point methods.
[ { "created": "Wed, 13 Jul 2005 11:59:52 GMT", "version": "v1" } ]
2008-02-21
[ [ "Johnson", "Toby", "" ] ]
This paper describes a Bayesian statistical method for determining the genetic basis of a complex genetic trait. The method uses a sample of unrelated individuals classified into two groups, for example cases and controls. Each group is assumed to have been genotyped at a battery of marker loci using DNA pooling, a technique that reduces laboratory effort. The aim is to detect and map a quantitative trait locus (QTL) that is not one of the typed markers. The method works by conducting an exact Bayesian analysis under a number of simplifying population genetic assumptions that are somewhat unrealistic. Despite this, the method is shown to perform acceptably on datasets simulated under a more realistic model, and furthermore is shown to outperform classical single point methods.
2405.03707
Lixin Lin
Lixin Lin, Homayoun Hamedmoghadam, Robert Shorten, Lewi Stone
Quantifying indirect and direct vaccination effects arising in the SIR model
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Vaccination campaigns have both direct and indirect effects that act to control an infectious disease as it spreads through a population. Indirect effects arise when vaccinated individuals block disease transmission in any infection chains they are part of, and this in turn can benefit both vaccinated and unvaccinated individuals. Indirect effects are difficult to quantify in practice, but here, working with the Susceptible-Infected-Recovered (SIR) model, they are analytically calculated in important cases, through pivoting on the Final Size formula for epidemics. Their relationship to herd immunity is also clarified. Furthermore, we identify the important distinction between quantifying indirect effects of vaccination at the "population level" versus the "per capita" individual level, which often results in radically different conclusions. As an important example, the analysis unpacks why the population-level indirect effect can appear significantly larger than its per capita analogue. In addition, we consider a recently proposed epidemiological non-pharmaceutical intervention used during the COVID-19 pandemic, referred to as "shielding", and study its impact in our mathematical analysis. The shielding scheme is extended by the inclusion of limited vaccination.
[ { "created": "Fri, 3 May 2024 20:57:57 GMT", "version": "v1" } ]
2024-05-08
[ [ "Lin", "Lixin", "" ], [ "Hamedmoghadam", "Homayoun", "" ], [ "Shorten", "Robert", "" ], [ "Stone", "Lewi", "" ] ]
Vaccination campaigns have both direct and indirect effects that act to control an infectious disease as it spreads through a population. Indirect effects arise when vaccinated individuals block disease transmission in any infection chains they are part of, and this in turn can benefit both vaccinated and unvaccinated individuals. Indirect effects are difficult to quantify in practice, but here, working with the Susceptible-Infected-Recovered (SIR) model, they are analytically calculated in important cases, through pivoting on the Final Size formula for epidemics. Their relationship to herd immunity is also clarified. Furthermore, we identify the important distinction between quantifying indirect effects of vaccination at the "population level" versus the "per capita" individual level, which often results in radically different conclusions. As an important example, the analysis unpacks why the population-level indirect effect can appear significantly larger than its per capita analogue. In addition, we consider a recently proposed epidemiological non-pharmaceutical intervention used during the COVID-19 pandemic, referred to as "shielding", and study its impact in our mathematical analysis. The shielding scheme is extended by the inclusion of limited vaccination.
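The final size formula the abstract pivots on can be illustrated numerically: with a fraction v vaccinated before the outbreak (all-or-nothing protection), the attack rate among the unvaccinated falls as v grows, which is the indirect effect. R0, the vaccination fractions and the all-or-nothing assumption are illustrative choices, not the paper's exact setting.

```python
# SIR final-size relation z = (1 - v) * (1 - exp(-R0 * z)) with a pre-vaccinated fraction v.
# v is kept below the herd-immunity threshold 1 - 1/R0 so a positive root exists.
import numpy as np
from scipy.optimize import brentq

def final_size(R0, v):
    """Total infected fraction z of the whole population."""
    f = lambda z: z - (1.0 - v) * (1.0 - np.exp(-R0 * z))
    return brentq(f, 1e-12, 1.0)

R0 = 2.5
for v in (0.0, 0.2, 0.4, 0.5):
    z = final_size(R0, v)
    print(f"v = {v:.1f}: overall final size = {z:.3f}, "
          f"attack rate among unvaccinated = {z / (1.0 - v):.3f}")   # indirect protection
```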
1612.02035
Donald Forsdyke Dr.
Donald R. Forsdyke
Elusive preferred hosts or nucleic acid level selection? A commentary on: Evolutionary interpretations of mycobacteriophage biodiversity and host-range through the analysis of codon usage bias (Esposito et al. 2016)
Submitted (less the reference to Meyer et al. 2016) to Microbial Genomics on 8th November 2016
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While confirming the long-held view that viruses do not closely imitate the use of their host's codon catalogue, Esposito and coworkers nevertheless consider it surprising that, despite having the ability to infect the same host, many mycobacteriophages share little or no genetic similarity (i.e. similarity in their GC contents and codon utilization patterns). Arguing correctly that efficient translation of a phage's proteins within a host is likely to be optimized by the phage's ability to match the host's codon usage pattern, it is concluded that the preferred host of many mycobacteriophages is not Mycobacterium smegmatis, despite their having been isolated on that organism. Thus, a virus and its elusive preferred hosts would have had similar GC percentages and codon usages, but the same virus could still infect a less-preferred host (Mycobacterium smegmatis), where the virus-host similarity would be less evident. However, there is another evolutionary interpretation.
[ { "created": "Tue, 6 Dec 2016 21:29:26 GMT", "version": "v1" } ]
2016-12-08
[ [ "Forsdyke", "Donald R.", "" ] ]
While confirming the long-held view that viruses do not closely imitate the use of their host's codon catalogue, Esposito and coworkers nevertheless consider it surprising that, despite having the ability to infect the same host, many mycobacteriophages share little or no genetic similarity (i.e. similarity in their GC contents and codon utilization patterns). Arguing correctly that efficient translation of a phage's proteins within a host is likely to be optimized by the phage's ability to match the host's codon usage pattern, it is concluded that the preferred host of many mycobacteriophages is not Mycobacterium smegmatis, despite their having been isolated on that organism. Thus, a virus and its elusive preferred hosts would have had similar GC percentages and codon usages, but the same virus could still infect a less-preferred host (Mycobacterium smegmatis), where the virus-host similarity would be less evident. However, there is another evolutionary interpretation.
2303.06423
Herv\'e Isambert
Marcel da C\^amara Ribeiro-Dantas, Honghao Li, Vincent Cabeli, Louise Dupuis, Franck Simon, Liza Hettal, Anne-Sophie Hamy, and Herv\'e Isambert
Learning interpretable causal networks from very large datasets, application to 400,000 medical records of breast cancer patients
19 pages, 6 figures, 8 supplementary figures and 5 pages supporting information
null
null
null
q-bio.QM cs.LG physics.data-an q-bio.MN stat.ME
http://creativecommons.org/licenses/by-nc-nd/4.0/
Discovering causal effects is at the core of scientific investigation but remains challenging when only observational data is available. In practice, causal networks are difficult to learn and interpret, and limited to relatively small datasets. We report a more reliable and scalable causal discovery method (iMIIC), based on a general mutual information supremum principle, which greatly improves the precision of inferred causal relations while distinguishing genuine causes from putative and latent causal effects. We showcase iMIIC on synthetic and real-life healthcare data from 396,179 breast cancer patients from the US Surveillance, Epidemiology, and End Results program. More than 90\% of predicted causal effects appear correct, while the remaining unexpected direct and indirect causal effects can be interpreted in terms of diagnostic procedures, therapeutic timing, patient preference or socio-economic disparity. iMIIC's unique capabilities open up new avenues to discover reliable and interpretable causal networks across a range of research fields.
[ { "created": "Sat, 11 Mar 2023 15:18:19 GMT", "version": "v1" } ]
2023-03-14
[ [ "Ribeiro-Dantas", "Marcel da Câmara", "" ], [ "Li", "Honghao", "" ], [ "Cabeli", "Vincent", "" ], [ "Dupuis", "Louise", "" ], [ "Simon", "Franck", "" ], [ "Hettal", "Liza", "" ], [ "Hamy", "Anne-Sophie", "" ], [ "Isambert", "Hervé", "" ] ]
Discovering causal effects is at the core of scientific investigation but remains challenging when only observational data is available. In practice, causal networks are difficult to learn and interpret, and limited to relatively small datasets. We report a more reliable and scalable causal discovery method (iMIIC), based on a general mutual information supremum principle, which greatly improves the precision of inferred causal relations while distinguishing genuine causes from putative and latent causal effects. We showcase iMIIC on synthetic and real-life healthcare data from 396,179 breast cancer patients from the US Surveillance, Epidemiology, and End Results program. More than 90\% of predicted causal effects appear correct, while the remaining unexpected direct and indirect causal effects can be interpreted in terms of diagnostic procedures, therapeutic timing, patient preference or socio-economic disparity. iMIIC's unique capabilities open up new avenues to discover reliable and interpretable causal networks across a range of research fields.
1905.08129
Carsten Conradi
Carsten Conradi and Elisenda Feliu and Maya Mincheva
On the existence of Hopf bifurcations in the sequential and distributive double phosphorylation cycle
null
null
10.3934/mbe.2020027
null
q-bio.MN math.AG math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein phosphorylation cycles are important mechanisms of the post-translational modification of a protein and as such an integral part of intracellular signaling and control. We consider the sequential phosphorylation and dephosphorylation of a protein at two binding sites. While it is known that proteins where phosphorylation is processive and dephosphorylation is distributive admit oscillations (for some values of the rate constants and total concentrations), it is not known whether or not this is the case if both phosphorylation and dephosphorylation are distributive. We study four simplified mass action models of sequential and distributive phosphorylation and show that for each of those there do not exist rate constants and total concentrations where a Hopf bifurcation occurs. To arrive at this result we use convex parameters to parameterize the steady state and Hurwitz matrices.
[ { "created": "Mon, 20 May 2019 14:11:54 GMT", "version": "v1" }, { "created": "Wed, 4 Sep 2019 10:37:29 GMT", "version": "v2" } ]
2019-11-06
[ [ "Conradi", "Carsten", "" ], [ "Feliu", "Elisenda", "" ], [ "Mincheva", "Maya", "" ] ]
Protein phosphorylation cycles are important mechanisms of the post-translational modification of a protein and as such an integral part of intracellular signaling and control. We consider the sequential phosphorylation and dephosphorylation of a protein at two binding sites. While it is known that proteins where phosphorylation is processive and dephosphorylation is distributive admit oscillations (for some values of the rate constants and total concentrations), it is not known whether or not this is the case if both phosphorylation and dephosphorylation are distributive. We study four simplified mass action models of sequential and distributive phosphorylation and show that for each of those there do not exist rate constants and total concentrations where a Hopf bifurcation occurs. To arrive at this result we use convex parameters to parameterize the steady state and Hurwitz matrices.
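The Hurwitz-matrix machinery invoked in the abstract can be sketched generically: build the Hurwitz matrix from a Jacobian's characteristic polynomial and watch the next-to-last leading determinant change sign where a complex eigenvalue pair crosses the imaginary axis (a necessary condition for a simple Hopf bifurcation). The toy Jacobian below is purely illustrative and is not one of the paper's phosphorylation networks.

```python
# Routh-Hurwitz sketch: a simple Hopf bifurcation requires the next-to-last leading
# Hurwitz determinant to vanish while the remaining ones stay positive.
import numpy as np

def hurwitz_determinants(coeffs):
    """Leading principal minors of the Hurwitz matrix of a0*x^n + a1*x^(n-1) + ... + an."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)          # coefficient index a_k, zero outside 0..n
            if 0 <= k <= n:
                H[i, j] = a[k]
    return [np.linalg.det(H[:m, :m]) for m in range(1, n + 1)]

for mu in (-0.5, 0.0, 0.5):
    J = np.array([[mu, -1.0, 0.0],
                  [1.0,  mu, 0.0],
                  [0.0, 0.0, -1.0]])            # toy Jacobian with eigenvalues mu +/- i and -1
    dets = hurwitz_determinants(np.poly(J))     # np.poly gives the characteristic polynomial
    print(f"mu = {mu:+.1f}: Hurwitz determinants = {np.round(dets, 3)}")
# The second determinant changes sign exactly where the pair mu +/- i crosses the
# imaginary axis (mu = 0), i.e. at the Hopf bifurcation of this toy system.
```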
2310.10978
Ka My Dang Dr
Ka My Dang, Yi Jia Zhang, Tianchen Zhang, Chao Wang, Anton Sinner, Piero Coronica, and Joyce K. S. Poon
NeuroQuantify -- An Image Analysis Software for Detection and Quantification of Neurons and Neurites using Deep Learning
null
null
null
null
q-bio.QM eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells, neurites, neurite length and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, which is useful for studying neuronal structures, for example, the study of neurodegenerative diseases and pharmaceuticals. However, automatic and accurate analysis of neuronal structures from phase contrast images has remained challenging. To address this, we have developed NeuroQuantify, an open-source software that uses deep learning to efficiently and quickly segment cells and neurites in phase contrast microscopy images. NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for the quantitative neurite length measurement based on segmentation of phase contrast microscopy images, and (iii) identification of neurite orientations. The user-friendly NeuroQuantify software can be installed and freely downloaded from GitHub https://github.com/StanleyZ0528/neural-image-segmentation.
[ { "created": "Mon, 16 Oct 2023 13:11:59 GMT", "version": "v1" }, { "created": "Thu, 19 Oct 2023 08:33:52 GMT", "version": "v2" } ]
2023-10-20
[ [ "Dang", "Ka My", "" ], [ "Zhang", "Yi Jia", "" ], [ "Zhang", "Tianchen", "" ], [ "Wang", "Chao", "" ], [ "Sinner", "Anton", "" ], [ "Coronica", "Piero", "" ], [ "Poon", "Joyce K. S.", "" ] ]
The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells, neurites, neurite length and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, which is useful for studying neuronal structures, for example, the study of neurodegenerative diseases and pharmaceuticals. However, automatic and accurate analysis of neuronal structures from phase contrast images has remained challenging. To address this, we have developed NeuroQuantify, an open-source software that uses deep learning to efficiently and quickly segment cells and neurites in phase contrast microscopy images. NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for the quantitative neurite length measurement based on segmentation of phase contrast microscopy images, and (iii) identification of neurite orientations. The user-friendly NeuroQuantify software can be installed and freely downloaded from GitHub https://github.com/StanleyZ0528/neural-image-segmentation.
2011.04354
Fedor Garbuzov
F. E. Garbuzov, V. V. Gursky
Nonequilibrium model of short-range repression in gene transcription regulation
null
Phys. Rev. E 104, 014407 (2021)
10.1103/PhysRevE.104.014407
null
q-bio.MN q-bio.GN
http://creativecommons.org/licenses/by-sa/4.0/
Transcription factors are proteins that regulate gene activity by activating or repressing gene transcription. A special class of transcriptional repressors operates via a short-range mechanism, making local DNA regions inaccessible to binding by activators, and thus providing an indirect repressive action on the target gene. This mechanism is commonly modeled assuming that repressors interact with DNA under thermodynamic equilibrium and neglecting some configurations of the gene regulatory region. We elaborate on a more general nonequilibrium model of short-range repression using the graph formalism for transitions between gene states, and we apply analytical calculations to compare it with the equilibrium model in terms of the repression strength and expression noise. In contrast to the equilibrium approach, the new model allows us to separate two basic mechanisms of short-range repression. The first mechanism is associated with the recruiting of factors that mediate chromatin condensation, and the second one concerns the blocking of factors that mediate chromatin loosening. The nonequilibrium model demonstrates better performance on previously published gene expression data obtained for transcription factors controlling Drosophila development, and furthermore it predicts that the first repression mechanism is the most favorable in this system. The presented approach can be scaled to larger gene networks and can be used to infer specific modes and parameters of transcriptional regulation from gene expression data.
[ { "created": "Mon, 9 Nov 2020 11:38:13 GMT", "version": "v1" }, { "created": "Mon, 11 Apr 2022 20:01:05 GMT", "version": "v2" } ]
2022-04-13
[ [ "Garbuzov", "F. E.", "" ], [ "Gursky", "V. V.", "" ] ]
Transcription factors are proteins that regulate gene activity by activating or repressing gene transcription. A special class of transcriptional repressors operates via a short-range mechanism, making local DNA regions inaccessible to binding by activators, and thus providing an indirect repressive action on the target gene. This mechanism is commonly modeled assuming that repressors interact with DNA under thermodynamic equilibrium and neglecting some configurations of the gene regulatory region. We elaborate on a more general nonequilibrium model of short-range repression using the graph formalism for transitions between gene states, and we apply analytical calculations to compare it with the equilibrium model in terms of the repression strength and expression noise. In contrast to the equilibrium approach, the new model allows us to separate two basic mechanisms of short-range repression. The first mechanism is associated with the recruiting of factors that mediate chromatin condensation, and the second one concerns the blocking of factors that mediate chromatin loosening. The nonequilibrium model demonstrates better performance on previously published gene expression data obtained for transcription factors controlling Drosophila development, and furthermore it predicts that the first repression mechanism is the most favorable in this system. The presented approach can be scaled to larger gene networks and can be used to infer specific modes and parameters of transcriptional regulation from gene expression data.
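The graph formalism mentioned in the abstract amounts to solving a master equation on a small set of promoter states without assuming detailed balance. A generic sketch: build the rate (generator) matrix of a hypothetical three-state cycle, take its null vector as the steady state, and weight each state by an assumed transcriptional activity. All states, rates and activities here are invented for illustration.

```python
# Steady state of a generic gene-state transition graph (no detailed balance assumed).
import numpy as np

# rates[i, j] = transition rate from promoter state i to state j (hypothetical 3-state cycle)
rates = np.array([[0.0, 2.0, 0.1],
                  [0.3, 0.0, 1.5],
                  [1.0, 0.2, 0.0]])

L = rates.T - np.diag(rates.sum(axis=1))      # master-equation generator: dp/dt = L @ p
w, v = np.linalg.eig(L)                       # steady state = null vector of the generator
p = np.real(v[:, np.argmin(np.abs(w))])
p = p / p.sum()                               # normalize to a probability distribution

activity = np.array([0.0, 0.2, 1.0])          # assumed transcriptional activity of each state
print("steady-state occupancies:", np.round(p, 3))
print("mean transcription rate :", np.round(activity @ p, 3))
```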
2001.03207
Sally Ellingson
Brian Davis, Kevin Mcloughlin, Jonathan Allen, and Sally Ellingson
Split Optimization for Protein/Ligand Binding Models
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate potential biases in datasets used to make drug binding predictions using machine learning. We investigate a recently published metric called the Asymmetric Validation Embedding (AVE) bias, which is used to quantify this bias and detect overfitting. We compare it to a slightly revised version and introduce a new weighted metric. We find that the new metrics allow us to quantify overfitting while not overly limiting training data, and produce models with greater predictive value.
[ { "created": "Thu, 9 Jan 2020 20:07:57 GMT", "version": "v1" } ]
2020-01-13
[ [ "Davis", "Brian", "" ], [ "Mcloughlin", "Kevin", "" ], [ "Allen", "Jonathan", "" ], [ "Ellingson", "Sally", "" ] ]
In this paper, we investigate potential biases in datasets used to make drug binding predictions using machine learning. We investigate a recently published metric called the Asymmetric Validation Embedding (AVE) bias, which is used to quantify this bias and detect overfitting. We compare it to a slightly revised version and introduce a new weighted metric. We find that the new metrics allow us to quantify overfitting while not overly limiting training data, and produce models with greater predictive value.
q-bio/0604020
Jesus M. Cortes
J. Marro, J.J. Torres, J.M. Cortes
Chaotic hopping between attractors in neural networks
12 pages, 5 figures
null
null
null
q-bio.NC
null
We present a neurobiologically-inspired stochastic cellular automaton whose state jumps with time between the attractors corresponding to a series of stored patterns. The jumping varies from regular to chaotic as the model parameters are modified. The resulting irregular behavior, which mimics the state of attention in which a system shows great adaptability to changing stimuli, is a consequence in the model of short-time presynaptic noise which induces synaptic depression. We discuss results from both a mean-field analysis and Monte Carlo simulations.
[ { "created": "Sun, 16 Apr 2006 21:26:40 GMT", "version": "v1" } ]
2007-05-23
[ [ "Marro", "J.", "" ], [ "Torres", "J. J.", "" ], [ "Cortes", "J. M.", "" ] ]
We present a neurobiologically--inspired stochastic cellular automaton whose state jumps with time between the attractors corresponding to a series of stored patterns. The jumping varies from regular to chaotic as the model parameters are modified. The resulting irregular behavior, which mimics the state of attention in which a system shows great adaptability to changing stimuli, is a consequence in the model of short--time presynaptic noise which induces synaptic depression. We discuss results from both a mean--field analysis and Monte Carlo simulations.
2006.09454
Wenxing Hu
Wenxing Hu, Xianghe Meng, Yuntong Bai, Aiying Zhang, Biao Cai, Gemeng Zhang, Tony W. Wilson, Julia M. Stephen, Vince D. Calhoun, Yu-Ping Wang
Interpretable multimodal fusion networks reveal mechanisms of brain cognition
null
null
null
null
q-bio.NC cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal fusion benefits disease diagnosis by providing a more comprehensive perspective. Developing algorithms is challenging due to data heterogeneity and the complex within- and between-modality associations. Deep-network-based data-fusion models have been developed to capture the complex associations, and the performance in diagnosis has been improved accordingly. Moving beyond diagnosis prediction, evaluation of disease mechanisms is critically important for biomedical research. Deep-network-based data-fusion models, however, are difficult to interpret, bringing about difficulties for studying biological mechanisms. In this work, we develop an interpretable multimodal fusion model, namely gCAM-CCL, which can perform automated diagnosis and result interpretation simultaneously. The gCAM-CCL model can generate interpretable activation maps, which quantify pixel-level contributions of the input features. This is achieved by combining intermediate feature maps using gradient-based weights. Moreover, the estimated activation maps are class-specific, and the captured cross-data associations are interest/label related, which further facilitates class-specific analysis and biological mechanism analysis. We validate the gCAM-CCL model on a brain imaging-genetics study and show that gCAM-CCL performed well for both classification and mechanism analysis. Mechanism analysis suggests that during task-fMRI scans, several object-recognition-related regions of interest (ROIs) are first activated and then several downstream encoding ROIs become involved. Results also suggest that the higher-cognition-performing group may have stronger neurotransmission signaling, while the lower-cognition-performing group may have problems in brain/neuron development resulting from genetic variations.
[ { "created": "Tue, 16 Jun 2020 18:52:50 GMT", "version": "v1" } ]
2020-06-18
[ [ "Hu", "Wenxing", "" ], [ "Meng", "Xianghe", "" ], [ "Bai", "Yuntong", "" ], [ "Zhang", "Aiying", "" ], [ "Cai", "Biao", "" ], [ "Zhang", "Gemeng", "" ], [ "Wilson", "Tony W.", "" ], [ "Stephen", "Julia M.", "" ], [ "Calhoun", "Vince D.", "" ], [ "Wang", "Yu-Ping", "" ] ]
Multimodal fusion benefits disease diagnosis by providing a more comprehensive perspective. Developing algorithms is challenging due to data heterogeneity and the complex within- and between-modality associations. Deep-network-based data-fusion models have been developed to capture the complex associations, and the performance in diagnosis has been improved accordingly. Moving beyond diagnosis prediction, evaluation of disease mechanisms is critically important for biomedical research. Deep-network-based data-fusion models, however, are difficult to interpret, bringing about difficulties for studying biological mechanisms. In this work, we develop an interpretable multimodal fusion model, namely gCAM-CCL, which can perform automated diagnosis and result interpretation simultaneously. The gCAM-CCL model can generate interpretable activation maps, which quantify pixel-level contributions of the input features. This is achieved by combining intermediate feature maps using gradient-based weights. Moreover, the estimated activation maps are class-specific, and the captured cross-data associations are interest/label related, which further facilitates class-specific analysis and biological mechanism analysis. We validate the gCAM-CCL model on a brain imaging-genetics study and show that gCAM-CCL performed well for both classification and mechanism analysis. Mechanism analysis suggests that during task-fMRI scans, several object-recognition-related regions of interest (ROIs) are first activated and then several downstream encoding ROIs become involved. Results also suggest that the higher-cognition-performing group may have stronger neurotransmission signaling, while the lower-cognition-performing group may have problems in brain/neuron development resulting from genetic variations.
0905.2875
Guido Tiana
C. Camilloni, G. Tiana and R. A. Broglia
Atomic-detailed milestones along the folding trajectory of protein G
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The high computational cost of carrying out molecular dynamics simulations of even small-size proteins is a major obstacle in the study, at atomic detail and in explicit solvent, of the physical mechanism that underlies the folding of proteins. Making use of a biasing algorithm based on the principle of the ratchet-and-pawl, we have been able to calculate eight folding trajectories (to an RMSD between 1.2A and 2.5A) of the B1 domain of protein G in explicit solvent without the need for high-performance computing. The simulations show that in the denatured state there is a complex network of cause-effect relationships among contacts, which results in a rather hierarchical folding mechanism. The network displays few local and nonlocal native contacts which are the cause of most of the others, in agreement with the NOE signals obtained in mildly denatured conditions. Nonnative contacts also play an active role in the folding kinetics. The set of conformations corresponding to the transition state displays phi-values with a correlation coefficient of 0.69 with the experimental ones. They are structurally quite homogeneous and topologically native-like, although some of the side chains and most of the hydrogen bonds are not in place.
[ { "created": "Mon, 18 May 2009 12:35:51 GMT", "version": "v1" } ]
2009-05-19
[ [ "Camilloni", "C.", "" ], [ "Tiana", "G.", "" ], [ "Broglia", "R. A.", "" ] ]
The high computational cost of carrying out molecular dynamics simulations of even small-size proteins is a major obstacle in the study, at atomic detail and in explicit solvent, of the physical mechanism that underlies the folding of proteins. Making use of a biasing algorithm based on the principle of the ratchet-and-pawl, we have been able to calculate eight folding trajectories (to an RMSD between 1.2A and 2.5A) of the B1 domain of protein G in explicit solvent without the need for high-performance computing. The simulations show that in the denatured state there is a complex network of cause-effect relationships among contacts, which results in a rather hierarchical folding mechanism. The network displays few local and nonlocal native contacts which are the cause of most of the others, in agreement with the NOE signals obtained in mildly denatured conditions. Nonnative contacts also play an active role in the folding kinetics. The set of conformations corresponding to the transition state displays phi-values with a correlation coefficient of 0.69 with the experimental ones. They are structurally quite homogeneous and topologically native-like, although some of the side chains and most of the hydrogen bonds are not in place.
1902.07942
Janusz Szwabi\'nski
Patrycja Kowalek and Hanna Loch-Olszewska and Janusz Szwabi\'nski
Classification of diffusion modes in single-particle tracking data: Feature-based versus deep-learning approach
null
Phys. Rev. E 100, 032410 (2019)
10.1103/PhysRevE.100.032410
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-particle trajectories measured in microscopy experiments contain important information about dynamic processes occurring in a range of materials including living cells and tissues. However, extracting that information is not a trivial task due to the stochastic nature of the particles' movement and the sampling noise. In this paper, we adopt a deep-learning method known as a convolutional neural network (CNN) to classify modes of diffusion from given trajectories. We compare this fully automated approach working with raw data to classical machine learning techniques that require data preprocessing and extraction of human-engineered features from the trajectories to feed classifiers like random forest or gradient boosting. All methods are tested using simulated trajectories for which the underlying physical model is known. From the results it follows that CNN is usually slightly better than the feature-based methods, but at the cost of much longer processing times. Moreover, there are still some borderline cases in which the classical methods perform better than CNN.
[ { "created": "Thu, 21 Feb 2019 10:05:48 GMT", "version": "v1" }, { "created": "Tue, 26 Feb 2019 08:59:01 GMT", "version": "v2" }, { "created": "Thu, 4 Jul 2019 07:56:37 GMT", "version": "v3" }, { "created": "Mon, 2 Sep 2019 08:25:19 GMT", "version": "v4" } ]
2019-09-25
[ [ "Kowalek", "Patrycja", "" ], [ "Loch-Olszewska", "Hanna", "" ], [ "Szwabiński", "Janusz", "" ] ]
Single-particle trajectories measured in microscopy experiments contain important information about dynamic processes occurring in a range of materials including living cells and tissues. However, extracting that information is not a trivial task due to the stochastic nature of the particles' movement and the sampling noise. In this paper, we adopt a deep-learning method known as a convolutional neural network (CNN) to classify modes of diffusion from given trajectories. We compare this fully automated approach working with raw data to classical machine learning techniques that require data preprocessing and extraction of human-engineered features from the trajectories to feed classifiers like random forest or gradient boosting. All methods are tested using simulated trajectories for which the underlying physical model is known. From the results it follows that CNN is usually slightly better than the feature-based methods, but at the cost of much longer processing times. Moreover, there are still some borderline cases in which the classical methods perform better than CNN.
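As a point of reference for the feature-based baseline described in the abstract above, the following sketch simulates simple normal and directed 2D trajectories, extracts a few hand-crafted features (MSD scaling exponent, efficiency, straightness), and trains a random forest. It is an illustrative toy, not the authors' pipeline; the simulation parameters and feature set are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate(mode, n_steps=100, dt=1.0, D=1.0, v=0.3):
    """2D trajectory: 'normal' = Brownian motion, 'directed' = Brownian motion + drift."""
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, 2))
    if mode == "directed":
        steps[:, 0] += v * dt                      # constant drift along x
    return np.cumsum(steps, axis=0)

def features(traj):
    """Hand-crafted features: MSD scaling exponent, efficiency, straightness."""
    lags = np.arange(1, len(traj) // 4)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1)) for l in lags])
    alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # ~1 for normal, ~2 for directed
    step_sq = np.sum(np.diff(traj, axis=0) ** 2, axis=1)
    net = np.linalg.norm(traj[-1] - traj[0])
    efficiency = net ** 2 / (len(traj) * step_sq.mean())
    straightness = net / np.sqrt(step_sq).sum()
    return [alpha, efficiency, straightness]

X, y = [], []
for label, mode in enumerate(["normal", "directed"]):
    for _ in range(500):
        X.append(features(simulate(mode)))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```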
1910.08724
Zhongqi Tian
Zhong-Qi Kyle Tian and Douglas Zhou
Design of an Efficient Exponential Time Differencing Method for Hodgkin-Huxley Neural Networks
null
null
10.3389/fncom.2020.00040
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The exponential time differencing (ETD) method allows using a large time step to efficiently evolve stiff systems such as Hodgkin-Huxley (HH) neural networks. For pulse-coupled HH networks, the synaptic spike times cannot be predetermined and are convoluted with the neuron's trajectory itself. This presents a challenging issue for the design of an efficient numerical simulation algorithm. The stiffness of the HH equations is quite different between the spike and non-spike regions. Here, we design a second-order adaptive exponential time differencing algorithm (AETD2) for the numerical evolution of HH neural networks. Compared with the regular second-order Runge-Kutta method (RK2), our AETD2 method can use time steps one order of magnitude larger and improve computational efficiency more than ten times while excellently capturing accurate traces of the membrane potentials of HH neurons. This high accuracy and efficiency are robustly obtained and do not depend on the dynamical regime, connectivity structure or network size.
[ { "created": "Sat, 19 Oct 2019 08:41:15 GMT", "version": "v1" }, { "created": "Sun, 17 Nov 2019 05:33:00 GMT", "version": "v2" }, { "created": "Thu, 21 Nov 2019 13:25:31 GMT", "version": "v3" }, { "created": "Wed, 27 Nov 2019 14:49:11 GMT", "version": "v4" } ]
2020-06-29
[ [ "Tian", "Zhong-Qi Kyle", "" ], [ "Zhou", "Douglas", "" ] ]
The exponential time differencing (ETD) method allows using a large time step to efficiently evolve stiff systems such as Hodgkin-Huxley (HH) neural networks. For pulse-coupled HH networks, the synaptic spike times cannot be predetermined and are convoluted with the neuron's trajectory itself. This presents a challenging issue for the design of an efficient numerical simulation algorithm. The stiffness of the HH equations is quite different between the spike and non-spike regions. Here, we design a second-order adaptive exponential time differencing algorithm (AETD2) for the numerical evolution of HH neural networks. Compared with the regular second-order Runge-Kutta method (RK2), our AETD2 method can use time steps one order of magnitude larger and improve computational efficiency more than ten times while excellently capturing accurate traces of the membrane potentials of HH neurons. This high accuracy and efficiency are robustly obtained and do not depend on the dynamical regime, connectivity structure or network size.
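The abstract above contrasts an adaptive second-order ETD scheme with RK2 for stiff Hodgkin-Huxley dynamics. As background, the sketch below shows the basic first-order ETD idea on a toy scalar stiff ODE: the stiff linear part is propagated exactly with an exponential, so the scheme stays stable at step sizes where forward Euler diverges. This is a generic illustration, not the AETD2 algorithm; the coefficient, nonlinearity and step size are arbitrary choices.

```python
import numpy as np

# Toy stiff ODE u' = c*u + N(u): ETD integrates the stiff linear part exactly
# and only approximates the nonlinearity over each step.
c = -50.0                      # stiff linear coefficient
N = lambda u: np.sin(u)        # mild nonlinearity
u0, h, T = 1.0, 0.1, 2.0       # note: forward Euler is unstable for h > 2/|c|

def etd1(u0, h, T):
    u, t = u0, 0.0
    phi = (np.exp(c * h) - 1.0) / c          # exact integral of e^{c s} over one step
    while t < T - 1e-12:
        u = np.exp(c * h) * u + phi * N(u)   # first-order ETD step
        t += h
    return u

def euler(u0, h, T):
    u, t = u0, 0.0
    while t < T - 1e-12:
        u = u + h * (c * u + N(u))           # explicit Euler step
        t += h
    return u

print("ETD1 :", etd1(u0, h, T))    # stays bounded near the slow solution
print("Euler:", euler(u0, h, T))   # grows without bound at this step size
```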
1403.5519
Manuel Jim\'enez-Mart\'in
Manuel Jim\'enez-Mart\'in and Juan Manuel Pastor and Juan Carlos Losada and Javier Galeano
Link aggregation process for modelling weighted mutualistic networks
6 Figures, 2 Tables
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutualism is a biological interaction that is mutually beneficial for both species involved, such as the interaction between plants and their pollinators. Real mutualistic communities can be understood as weighted bipartite networks; they present a nested structure and truncated power-law degree and strength distributions. We present a novel link aggregation model that works on a strength-preferential attachment rule based on the Individual Neutrality hypothesis. The model generates mutualistic networks with emergent nestedness and truncated distributions. We provide some analytical results and compare the simulated and empirical network topology. To further improve the shape of the distributions, we have also studied the role of forbidden interactions in the model and found that the inclusion of forbidden links does not prevent the appearance of super-generalist species. A Python script with the model algorithms is available.
[ { "created": "Fri, 21 Mar 2014 16:58:13 GMT", "version": "v1" } ]
2014-03-24
[ [ "Jiménez-Martín", "Manuel", "" ], [ "Pastor", "Juan Manuel", "" ], [ "Losada", "Juan Carlos", "" ], [ "Galeano", "Javier", "" ] ]
Mutualism is a biological interaction that is mutually beneficial for both species involved, such as the interaction between plants and their pollinators. Real mutualistic communities can be understood as weighted bipartite networks; they present a nested structure and truncated power-law degree and strength distributions. We present a novel link aggregation model that works on a strength-preferential attachment rule based on the Individual Neutrality hypothesis. The model generates mutualistic networks with emergent nestedness and truncated distributions. We provide some analytical results and compare the simulated and empirical network topology. To further improve the shape of the distributions, we have also studied the role of forbidden interactions in the model and found that the inclusion of forbidden links does not prevent the appearance of super-generalist species. A Python script with the model algorithms is available.
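To illustrate the kind of strength-preferential link aggregation the abstract describes, here is a generic sketch (not necessarily the authors' algorithm) that grows a weighted bipartite plant-pollinator network by adding links whose endpoints are chosen with probability proportional to current node strength. The network sizes and number of links are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_pollinators, n_links = 20, 30, 600

# Weighted bipartite adjacency matrix; seed each species with one random link
# so that every node starts with nonzero strength.
W = np.zeros((n_plants, n_pollinators))
for i in range(n_plants):
    W[i, rng.integers(n_pollinators)] += 1
for j in range(n_pollinators):
    W[rng.integers(n_plants), j] += 1

# Link aggregation: each new link picks its endpoints with probability
# proportional to current node strength (sum of incident link weights).
for _ in range(n_links):
    p_plant = W.sum(axis=1) / W.sum()
    p_poll = W.sum(axis=0) / W.sum()
    i = rng.choice(n_plants, p=p_plant)
    j = rng.choice(n_pollinators, p=p_poll)
    W[i, j] += 1

strengths = np.sort(W.sum(axis=1))[::-1]
print("top plant strengths (descending):", strengths[:10])
```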
2212.08211
Carina Curto
Caitlyn Parmelee, Juliana Londono Alvarez, Carina Curto, Katherine Morrison
Sequence generation in inhibition-dominated neural networks
6 pages, 4 figures, appeared in SIAM DSWeb, 2022
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
This is a brief overview of results from [arXiv:2107.10244, ref 11], on network architectures that produce sequential dynamics in a special family of inhibition-dominated neural networks. It was written for SIAM DSWeb.
[ { "created": "Fri, 16 Dec 2022 00:51:05 GMT", "version": "v1" } ]
2022-12-19
[ [ "Parmelee", "Caitlyn", "" ], [ "Alvarez", "Juliana Londono", "" ], [ "Curto", "Carina", "" ], [ "Morrison", "Katherine", "" ] ]
This is a brief overview of results from [arXiv:2107.10244, ref 11], on network architectures that produce sequential dynamics in a special family of inhibition-dominated neural networks. It was written for SIAM DSWeb.
1805.09107
Viktor Stojkoski MSc
Viktor Stojkoski, Zoran Utkovski, Elisabeth Andre, Ljupco Kocarev
Multiplex Network Structure Enhances the Role of Generalized Reciprocity in Promoting Cooperation
Extended abstract of "The Role of Multiplex Network Structure in Cooperation through Generalized Reciprocity"
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In multi-agent systems, cooperative behavior is largely determined by the network structure which dictates the interactions among neighboring agents. These interactions often exhibit multidimensional features, either as relationships of different types or temporal dynamics, both of which may be modeled as a "multiplex" network. Against this background, here we advance the research on cooperation models inspired by generalized reciprocity, a simple pay-it-forward behavioral mechanism, by considering a multidimensional networked society. Our results reveal that a multiplex network structure can act as an enhancer of the role of generalized reciprocity in promoting cooperation by acting as a latent support, even when the parameters in some of the separate network dimensions suggest otherwise (i.e. favor defection). As a result, generalized reciprocity forces the cooperative contributions of the individual agents to concentrate in the dimension which is most favorable for the existence of cooperation.
[ { "created": "Wed, 23 May 2018 13:02:34 GMT", "version": "v1" } ]
2018-05-24
[ [ "Stojkoski", "Viktor", "" ], [ "Utkovski", "Zoran", "" ], [ "Andre", "Elisabeth", "" ], [ "Kocarev", "Ljupco", "" ] ]
In multi-agent systems, cooperative behavior is largely determined by the network structure which dictates the interactions among neighboring agents. These interactions often exhibit multidimensional features, either as relationships of different types or temporal dynamics, both of which may be modeled as a "multiplex" network. Against this background, here we advance the research on cooperation models inspired by generalized reciprocity, a simple pay-it-forward behavioral mechanism, by considering a multidimensional networked society. Our results reveal that a multiplex network structure can act as an enhancer of the role of generalized reciprocity in promoting cooperation by acting as a latent support, even when the parameters in some of the separate network dimensions suggest otherwise (i.e. favor defection). As a result, generalized reciprocity forces the cooperative contributions of the individual agents to concentrate in the dimension which is most favorable for the existence of cooperation.
1907.02730
Koh Onimaru
Koh Onimaru, Luciano Marcon
Systems biology approach to the origin of the tetrapod limb
22 pages, 5 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
It is still not understood how similar genomic sequences have generated diverse and spectacular forms during evolution. The difficulty of bridging phenotypes and genotypes stems from the complexity of multicellular systems, where thousands of genes and cells interact with each other, producing developmental non-linearity. To understand how diverse morphologies have evolved, it is essential to find ways to handle such complex systems. Here, we review the fin-to-limb transition as a case study for the evolution of multicellular systems. We first describe the historical perspective of comparative studies between fins and limbs. Second, we introduce our approach that combines mechanistic theory, computational modeling, and in vivo experiments to provide a mechanical explanation for the morphological difference between fish fins and tetrapod limbs. This approach helps resolve a long-standing debate about anatomical homology between the skeletal elements of fins and limbs. We conclude by proposing that, due to the counter-intuitive dynamics of gene interactions, integrative approaches that combine computer modeling, theory and experiments are essential to understand the evolution of multicellular organisms.
[ { "created": "Fri, 5 Jul 2019 09:03:15 GMT", "version": "v1" } ]
2019-07-08
[ [ "Onimaru", "Koh", "" ], [ "Marcon", "Luciano", "" ] ]
It is still not understood how similar genomic sequences have generated diverse and spectacular forms during evolution. The difficulty of bridging phenotypes and genotypes stems from the complexity of multicellular systems, where thousands of genes and cells interact with each other, producing developmental non-linearity. To understand how diverse morphologies have evolved, it is essential to find ways to handle such complex systems. Here, we review the fin-to-limb transition as a case study for the evolution of multicellular systems. We first describe the historical perspective of comparative studies between fins and limbs. Second, we introduce our approach that combines mechanistic theory, computational modeling, and in vivo experiments to provide a mechanical explanation for the morphological difference between fish fins and tetrapod limbs. This approach helps resolve a long-standing debate about anatomical homology between the skeletal elements of fins and limbs. We conclude by proposing that, due to the counter-intuitive dynamics of gene interactions, integrative approaches that combine computer modeling, theory and experiments are essential to understand the evolution of multicellular organisms.
2305.00338
Jorge Arroyo-Esquivel
Jorge Arroyo-Esquivel and Christopher A Klausmeier and Elena Litchman
Using neural ordinary differential equations to predict complex ecological dynamics from population density data
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Simple models have been used to describe ecological processes for over a century. However, the complexity of ecological systems makes simple models subject to modeling bias due to simplifying assumptions or unaccounted factors, limiting their predictive power. Neural Ordinary Differential Equations (NODEs) have emerged as a machine-learning algorithm that preserves the dynamic nature of the data (Chen et al., 2018). Although preserving the dynamics in the data is an advantage, the question of how NODEs perform as a forecasting tool for ecological communities is unanswered. Here we explore this question using simulated time series of competing species in a time-varying environment. We find that NODEs provide more precise forecasts than ARIMA models. We also find that untuned NODEs have similar forecasting accuracy to untuned Long Short-Term Memory neural networks (LSTMs), and both are outperformed in accuracy and precision by EDM models. However, NODEs generally outperform all other methods when evaluated with the interval score, which assesses precision and accuracy in terms of prediction intervals rather than pointwise accuracy. We also discuss ways to improve the forecasting performance of NODEs. The power of a forecasting tool such as NODEs is that it can provide insights into population dynamics and should thus broaden the approaches to studying time series of ecological communities.
[ { "created": "Sat, 29 Apr 2023 20:26:42 GMT", "version": "v1" }, { "created": "Mon, 28 Aug 2023 14:25:56 GMT", "version": "v2" }, { "created": "Tue, 23 Jan 2024 18:35:46 GMT", "version": "v3" } ]
2024-01-24
[ [ "Arroyo-Esquivel", "Jorge", "" ], [ "Klausmeier", "Christopher A", "" ], [ "Litchman", "Elena", "" ] ]
Simple models have been used to describe ecological processes for over a century. However, the complexity of ecological systems makes simple models subject to modeling bias due to simplifying assumptions or unaccounted factors, limiting their predictive power. Neural Ordinary Differential Equations (NODEs) have emerged as a machine-learning algorithm that preserves the dynamic nature of the data (Chen et al., 2018). Although preserving the dynamics in the data is an advantage, the question of how NODEs perform as a forecasting tool for ecological communities is unanswered. Here we explore this question using simulated time series of competing species in a time-varying environment. We find that NODEs provide more precise forecasts than ARIMA models. We also find that untuned NODEs have similar forecasting accuracy to untuned Long Short-Term Memory neural networks (LSTMs), and both are outperformed in accuracy and precision by EDM models. However, NODEs generally outperform all other methods when evaluated with the interval score, which assesses precision and accuracy in terms of prediction intervals rather than pointwise accuracy. We also discuss ways to improve the forecasting performance of NODEs. The power of a forecasting tool such as NODEs is that it can provide insights into population dynamics and should thus broaden the approaches to studying time series of ecological communities.
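For readers unfamiliar with neural ODEs, the following sketch fits one to simulated two-species competition data using the torchdiffeq package. It is a minimal illustration of the general approach the abstract evaluates, not the authors' code; the network size, learning rate, number of epochs and the toy competition model are all assumptions.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumes the torchdiffeq package is installed

# Toy "observed" data: two competing species from a Lotka-Volterra
# competition model, integrated with the same solver.
def true_rhs(t, y):
    n1, n2 = y[..., 0], y[..., 1]
    d1 = n1 * (1.0 - n1 - 0.5 * n2)
    d2 = 0.8 * n2 * (1.0 - n2 - 0.6 * n1)
    return torch.stack([d1, d2], dim=-1)

t = torch.linspace(0.0, 15.0, 80)
y0 = torch.tensor([0.1, 0.2])
with torch.no_grad():
    y_obs = odeint(true_rhs, y0, t)

# Neural ODE: the right-hand side is a small network learned from the data.
class ODEFunc(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
opt = torch.optim.Adam(func.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    y_pred = odeint(func, y0, t)            # integrate the learned dynamics
    loss = ((y_pred - y_obs) ** 2).mean()   # fit to the observed densities
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```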
2312.07899
Yanjun Li
Qiaosi Tang, Ranjala Ratnayake, Gustavo Seabra, Zhe Jiang, Ruogu Fang, Lina Cui, Yousong Ding, Tamer Kahveci, Jiang Bian, Chenglong Li, Hendrik Luesch, Yanjun Li
Morphological Profiling for Drug Discovery in the Era of Deep Learning
44 pages, 5 figure, 5 tables
null
null
null
q-bio.QM cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Morphological profiling is a valuable tool in phenotypic drug discovery. The advent of high-throughput automated imaging has enabled the capture of a wide range of morphological features of cells or organisms in response to perturbations at single-cell resolution. Concurrently, significant advances in machine learning and deep learning, especially in computer vision, have led to substantial improvements in analyzing large-scale high-content images at high throughput. These efforts have facilitated understanding of compound mechanism-of-action (MOA), drug repurposing, and characterization of cell morphodynamics under perturbation, ultimately contributing to the development of novel therapeutics. In this review, we provide a comprehensive overview of the recent advances in the field of morphological profiling. We summarize the image profiling analysis workflow, survey a broad spectrum of analysis strategies encompassing feature engineering- and deep learning-based approaches, and introduce publicly available benchmark datasets. We place a particular emphasis on the application of deep learning in this pipeline, covering cell segmentation, image representation learning, and multimodal learning. Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.
[ { "created": "Wed, 13 Dec 2023 05:08:32 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2024 21:22:46 GMT", "version": "v2" } ]
2024-01-17
[ [ "Tang", "Qiaosi", "" ], [ "Ratnayake", "Ranjala", "" ], [ "Seabra", "Gustavo", "" ], [ "Jiang", "Zhe", "" ], [ "Fang", "Ruogu", "" ], [ "Cui", "Lina", "" ], [ "Ding", "Yousong", "" ], [ "Kahveci", "Tamer", "" ], [ "Bian", "Jiang", "" ], [ "Li", "Chenglong", "" ], [ "Luesch", "Hendrik", "" ], [ "Li", "Yanjun", "" ] ]
Morphological profiling is a valuable tool in phenotypic drug discovery. The advent of high-throughput automated imaging has enabled the capture of a wide range of morphological features of cells or organisms in response to perturbations at single-cell resolution. Concurrently, significant advances in machine learning and deep learning, especially in computer vision, have led to substantial improvements in analyzing large-scale high-content images at high throughput. These efforts have facilitated understanding of compound mechanism-of-action (MOA), drug repurposing, and characterization of cell morphodynamics under perturbation, ultimately contributing to the development of novel therapeutics. In this review, we provide a comprehensive overview of the recent advances in the field of morphological profiling. We summarize the image profiling analysis workflow, survey a broad spectrum of analysis strategies encompassing feature engineering- and deep learning-based approaches, and introduce publicly available benchmark datasets. We place a particular emphasis on the application of deep learning in this pipeline, covering cell segmentation, image representation learning, and multimodal learning. Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.
2405.13182
Zachary Kilpatrick PhD
Heather L Cihak and Zachary P Kilpatrick
Robustly encoding certainty in a metastable neural circuit model
15 pages, 10 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Localized persistent neural activity can encode delayed estimates of continuous variables. Common experiments require that subjects store and report the feature value (e.g., orientation) of a particular cue (e.g., oriented bar on a screen) after a delay. Visualizing recorded activity of neurons along their feature tuning reveals activity bumps whose centers wander stochastically, degrading the estimate over time. Bump position therefore represents the remembered estimate. Recent work suggests bump amplitude may represent estimate certainty reflecting a probabilistic population code for a Bayesian posterior. Idealized models of this type are fragile due to the fine tuning common to constructed continuum attractors in dynamical systems. Here we propose an alternative metastable model for robustly supporting multiple bump amplitudes by extending neural circuit models to include quantized nonlinearities. Asymptotic projections of circuit activity produce low-dimensional evolution equations for the amplitude and position of bump solutions in response to external stimuli and noise perturbations. Analysis of reduced equations accurately characterizes phase variance and the dynamics of amplitude transitions between stable discrete values. More salient cues generate bumps of higher amplitude which wander less, consistent with the experimental finding that greater certainty correlates with more accurate memories.
[ { "created": "Tue, 21 May 2024 20:13:35 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 19:15:50 GMT", "version": "v2" } ]
2024-08-01
[ [ "Cihak", "Heather L", "" ], [ "Kilpatrick", "Zachary P", "" ] ]
Localized persistent neural activity can encode delayed estimates of continuous variables. Common experiments require that subjects store and report the feature value (e.g., orientation) of a particular cue (e.g., oriented bar on a screen) after a delay. Visualizing recorded activity of neurons along their feature tuning reveals activity bumps whose centers wander stochastically, degrading the estimate over time. Bump position therefore represents the remembered estimate. Recent work suggests bump amplitude may represent estimate certainty reflecting a probabilistic population code for a Bayesian posterior. Idealized models of this type are fragile due to the fine tuning common to constructed continuum attractors in dynamical systems. Here we propose an alternative metastable model for robustly supporting multiple bump amplitudes by extending neural circuit models to include quantized nonlinearities. Asymptotic projections of circuit activity produce low-dimensional evolution equations for the amplitude and position of bump solutions in response to external stimuli and noise perturbations. Analysis of reduced equations accurately characterizes phase variance and the dynamics of amplitude transitions between stable discrete values. More salient cues generate bumps of higher amplitude which wander less, consistent with the experimental finding that greater certainty correlates with more accurate memories.
q-bio/0608029
Michael Deem
D. B. Saakian, E. Munoz, Chin-Kun Hu, and M. W. Deem
Quasispecies Theory for Multiple-Peak Fitness Landscapes
10 pages, 3 figures, 2 tables
Phys. Rev. E 73 (2006) 041913
10.1103/PhysRevE.73.041913
null
q-bio.PE cond-mat.stat-mech
null
We use a path integral representation to solve the Eigen and Crow-Kimura molecular evolution models for the case of multiple fitness peaks with arbitrary fitness and degradation functions. In the general case, we find that the solution to these molecular evolution models can be written as the optimum of a fitness function, with constraints enforced by Lagrange multipliers and with a term accounting for the entropy of the spreading population in sequence space. The results for the Eigen model are applied to consider virus or cancer proliferation under the control of drugs or the immune system.
[ { "created": "Tue, 15 Aug 2006 19:39:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Saakian", "D. B.", "" ], [ "Munoz", "E.", "" ], [ "Hu", "Chin-Kun", "" ], [ "Deem", "M. W.", "" ] ]
We use a path integral representation to solve the Eigen and Crow-Kimura molecular evolution models for the case of multiple fitness peaks with arbitrary fitness and degradation functions. In the general case, we find that the solution to these molecular evolution models can be written as the optimum of a fitness function, with constraints enforced by Lagrange multipliers and with a term accounting for the entropy of the spreading population in sequence space. The results for the Eigen model are applied to consider virus or cancer proliferation under the control of drugs or the immune system.
1707.08984
Amit Chattopadhyay
Jason Laurie, Amit K Chattopadhyay and Darren R Flower
Protein Lipograms
8 pages, 2 columns, 5 figures
Journal of Theoretical Biology, vol 430, pg 109, 2017
10.1016/j.jtbi.2017.07.009
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linguistic analysis of protein sequences is an underexploited technique. Here, we capitalize on the concept of the lipogram to characterize sequences at the proteome level. A lipogram is a literary composition which omits one or more letters. A protein lipogram likewise omits one or more types of amino acid. In this article, we establish a usable terminology for the decomposition of a sequence collection in terms of the lipogram. Next, we characterize Uniref50 using a lipogram decomposition. At the global level, protein lipograms exhibit power-law properties. A clear correlation with metabolic cost is seen. Finally, we use the lipogram construction to differentiate proteomes between the four branches of the tree of life: archaea, bacteria, eukaryotes and viruses. We conclude from this pilot study that the lipogram demonstrates considerable potential as an additional tool for sequence analysis and proteome classification.
[ { "created": "Tue, 25 Jul 2017 11:44:23 GMT", "version": "v1" } ]
2017-07-31
[ [ "Laurie", "Jason", "" ], [ "Chattopadhyay", "Amit K", "" ], [ "Flower", "Darren R", "" ] ]
Linguistic analysis of protein sequences is an underexploited technique. Here, we capitalize on the concept of the lipogram to characterize sequences at the proteome level. A lipogram is a literary composition which omits one or more letters. A protein lipogram likewise omits one or more types of amino acid. In this article, we establish a usable terminology for the decomposition of a sequence collection in terms of the lipogram. Next, we characterize Uniref50 using a lipogram decomposition. At the global level, protein lipograms exhibit power-law properties. A clear correlation with metabolic cost is seen. Finally, we use the lipogram construction to differentiate proteomes between the four branches of the tree of life: archaea, bacteria, eukaryotes and viruses. We conclude from this pilot study that the lipogram demonstrates considerable potential as an additional tool for sequence analysis and proteome classification.
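The lipogram decomposition described above is straightforward to compute. The sketch below, with made-up toy sequences standing in for a real collection such as UniRef50, derives each sequence's omission set and tallies the resulting lipogram classes; it illustrates the concept and is not the authors' implementation.

```python
from collections import Counter

AA = set("ACDEFGHIKLMNPQRSTVWY")  # the 20 standard amino acids

def lipogram(seq):
    """Return the set of amino-acid types absent from a protein sequence."""
    return frozenset(AA - set(seq.upper()))

# Toy sequences; in practice these would be read from a FASTA file.
seqs = [
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "GGSGGSGGSGG",
    "MPKKKPTPIQLNP",
]

# Lipogram signature of the collection: how many sequences omit k amino-acid
# types, and which particular omission sets occur.
sizes = Counter(len(lipogram(s)) for s in seqs)
classes = Counter(lipogram(s) for s in seqs)
print("omitted-count spectrum:", dict(sizes))
for omitted, n in classes.most_common(3):
    print(n, "sequence(s) omit", "".join(sorted(omitted)) or "<none>")
```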
1009.0118
Amaury Lambert
Amaury Lambert
Species abundance distributions in neutral models with immigration or mutation and general lifetimes
16 pages, 4 figures. To appear in Journal of Mathematical Biology. The final publication is available at http://www.springerlink.com
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate \lambda. We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate \mu (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate \theta. We are interested in the species abundance distribution, i.e., in the numbers, denoted I_n(k) in the immigration model and A_n(k) in the mutation model, of species represented by k individuals, k=1,2,...,n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I_t(k);k\ge 1) of species represented by k individuals at time t, are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I_n(k) converges as n\to\infty to a Poisson r.v. with mean \gamma /k, where \gamma:=\mu/\lambda. In the mutation model, as n\to\infty, we obtain the almost sure convergence of n^{-1}A_n(k) to a nonrandom explicit constant. In the case of a critical, linear birth--death process, this constant is given by Fisher's log-series, namely n^{-1}A_n(k) converges to \alpha^{k}/k, where \alpha :=\lambda/(\lambda+\theta). In both models, the abundances of the most abundant species are briefly discussed.
[ { "created": "Wed, 1 Sep 2010 08:32:05 GMT", "version": "v1" } ]
2010-09-02
[ [ "Lambert", "Amaury", "" ] ]
We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate \lambda. We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate \mu (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate \theta. We are interested in the species abundance distribution, i.e., in the numbers, denoted I_n(k) in the immigration model and A_n(k) in the mutation model, of species represented by k individuals, k=1,2,...,n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I_t(k);k\ge 1) of species represented by k individuals at time t, are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I_n(k) converges as n\to\infty to a Poisson r.v. with mean \gamma /k, where \gamma:=\mu/\lambda. In the mutation model, as n\to\infty, we obtain the almost sure convergence of n^{-1}A_n(k) to a nonrandom explicit constant. In the case of a critical, linear birth--death process, this constant is given by Fisher's log-series, namely n^{-1}A_n(k) converges to \alpha^{k}/k, where \alpha :=\lambda/(\lambda+\theta). In both models, the abundances of the most abundant species are briefly discussed.
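A quick way to see the gamma/k limit quoted above is to simulate the conditioned abundance law, Ewens' sampling formula, with the classical Hoppe urn and compare the average number of species of abundance k against gamma/k. This is a standard construction rather than code from the paper; the population size, gamma and number of replicates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def hoppe_urn(n, theta):
    """One Ewens(theta) partition of n individuals via Hoppe's urn:
    each new individual founds a new species with prob. theta/(theta+i),
    otherwise it copies a uniformly chosen existing individual."""
    species, n_species = [], 0
    for i in range(n):
        if rng.random() < theta / (theta + i):
            species.append(n_species)                 # new species
            n_species += 1
        else:
            species.append(species[rng.integers(i)])  # copy an earlier individual
    return np.bincount(np.array(species))              # species abundances

n, gamma, reps = 2000, 2.0, 200
counts = np.zeros(10)
for _ in range(reps):
    ab = hoppe_urn(n, gamma)
    for k in range(1, 11):
        counts[k - 1] += np.sum(ab == k)
counts /= reps

for k in range(1, 11):
    print(f"k={k:2d}  simulated mean #species = {counts[k-1]:6.3f}   gamma/k = {gamma / k:6.3f}")
```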
2101.10056
Vitor Manuel Dinis Pereira
Vitor Manuel Dinis Pereira
Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness
31 pages, 23 figures; according to PhilPapers.org, my manuscript "Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness" has been downloaded 161 times to date (since 2017-11-30) without any substantial criticism, at least none that I am aware of
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Given the hard problem of consciousness (Chalmers, 1995), there are no brain electrophysiological correlates of subjective experience (the felt quality of redness or the redness of red, the experience of dark and light, the quality of depth in a visual field, the sound of a clarinet, the smell of mothball, bodily sensations from pains to orgasms, mental images that are conjured up internally, the felt quality of emotion, the experience of a stream of conscious thought, or the phenomenology of thought). However, there are occipital and left temporal electrophysiological correlates of subjective experience (Pereira, 2015). Notwithstanding, as an evoked signal, the change in event-related brain potential phase (frequency being the change in phase over time) is instantaneous; that is, the frequency will transiently be infinite. A transient peak in frequency (positive or negative), if any, is instantaneous in the electroencephalogram averaging or filtering that event-related brain potentials require, and the underlying structure of the event-related brain potentials in the frequency domain cannot be accounted for by, for example, Wavelet Transform (WT) or Fast Fourier Transform (FFT) analysis, because these derive frequency by convolution rather than by differentiation. However, as I show in the current original research report, one suitable method for analysing the instantaneous change in event-related brain potential phase, and for accounting for a transient peak in frequency (positive or negative), if any, in the underlying structure of the event-related brain potentials, is Empirical Mode Decomposition with post-processing, namely the post-processed Ensemble Empirical Mode Decomposition (postEEMD) of Xie et al. (2014), together with the Hilbert-Huang Transform (HHT).
[ { "created": "Sat, 26 Dec 2020 16:30:40 GMT", "version": "v1" }, { "created": "Fri, 26 Feb 2021 18:23:30 GMT", "version": "v2" }, { "created": "Fri, 5 Mar 2021 18:36:30 GMT", "version": "v3" } ]
2021-03-08
[ [ "Pereira", "Vitor Manuel Dinis", "" ] ]
Given the hard problem of consciousness (Chalmers, 1995), there are no brain electrophysiological correlates of subjective experience (the felt quality of redness or the redness of red, the experience of dark and light, the quality of depth in a visual field, the sound of a clarinet, the smell of mothball, bodily sensations from pains to orgasms, mental images that are conjured up internally, the felt quality of emotion, the experience of a stream of conscious thought, or the phenomenology of thought). However, there are occipital and left temporal electrophysiological correlates of subjective experience (Pereira, 2015). Notwithstanding, as an evoked signal, the change in event-related brain potential phase (frequency being the change in phase over time) is instantaneous; that is, the frequency will transiently be infinite. A transient peak in frequency (positive or negative), if any, is instantaneous in the electroencephalogram averaging or filtering that event-related brain potentials require, and the underlying structure of the event-related brain potentials in the frequency domain cannot be accounted for by, for example, Wavelet Transform (WT) or Fast Fourier Transform (FFT) analysis, because these derive frequency by convolution rather than by differentiation. However, as I show in the current original research report, one suitable method for analysing the instantaneous change in event-related brain potential phase, and for accounting for a transient peak in frequency (positive or negative), if any, in the underlying structure of the event-related brain potentials, is Empirical Mode Decomposition with post-processing, namely the post-processed Ensemble Empirical Mode Decomposition (postEEMD) of Xie et al. (2014), together with the Hilbert-Huang Transform (HHT).
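The final step of the EMD/HHT pipeline mentioned above, extracting instantaneous amplitude and frequency by differentiating the phase of the analytic signal, can be illustrated with scipy alone. The sketch below applies the Hilbert transform to a toy amplitude-modulated 10 Hz oscillation standing in for a single intrinsic mode function; the EMD/EEMD decomposition itself is omitted, and the signal parameters are made up.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Toy narrow-band signal: a 10 Hz oscillation with slowly varying amplitude.
x = (1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 10.0 * t)

analytic = hilbert(x)                        # analytic signal x + i*H[x]
amplitude = np.abs(analytic)                 # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(analytic))        # instantaneous phase
freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency, Hz

print("mean instantaneous frequency:", freq.mean())   # close to 10 Hz
```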
1208.3570
Darya Novopashina S
D. S. Novopashina, E. K. Apartsin, A. G. Venyaminova
Fluorescently labeled bionanotransporters of nucleic acid based on carbon nanotubes
http://www.ujp.bitp.kiev.ua
Ukrainian Journal of Physics, 2009, Vol. 54, no. 1-2, pp. 207-215
null
null
q-bio.BM cond-mat.mtrl-sci physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we propose an approach to the design of a new type of hybrid of oligonucleotides with fluorescein-functionalized single-walled carbon nanotubes. The approach is based on stacking interactions of functionalized nanotubes with pyrene residues in conjugates of oligonucleotides. The amino- and fluorescein-modified single-walled carbon nanotubes were obtained, and their physico-chemical properties were investigated. The effect of the type of carbon nanotube functionalization on the efficacy of sorption of pyrene conjugates of oligonucleotides was examined. The proposed non-covalent hybrids of fluorescein-labeled carbon nanotubes with oligonucleotides may be used for intracellular transport of functional nucleic acids.
[ { "created": "Fri, 17 Aug 2012 10:32:53 GMT", "version": "v1" } ]
2015-03-13
[ [ "Novopashina", "D. S.", "" ], [ "Apartsin", "E. K.", "" ], [ "Venyaminova", "A. G.", "" ] ]
Here we propose an approach to the design of a new type of hybrid of oligonucleotides with fluorescein-functionalized single-walled carbon nanotubes. The approach is based on stacking interactions of functionalized nanotubes with pyrene residues in conjugates of oligonucleotides. The amino- and fluorescein-modified single-walled carbon nanotubes were obtained, and their physico-chemical properties were investigated. The effect of the type of carbon nanotube functionalization on the efficacy of sorption of pyrene conjugates of oligonucleotides was examined. The proposed non-covalent hybrids of fluorescein-labeled carbon nanotubes with oligonucleotides may be used for intracellular transport of functional nucleic acids.
1604.02193
Yana Safonova
Alexander Shlemov, Sergey Bankevich, Andrey Bzikadze, Yana Safonova
New algorithmic challenges of adaptive immune repertoire construction
Paper accepted at the RECOMB-Seq 2016
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: The analysis of antibody and T-cell receptor (TCR) concentrations in serum is a fundamental problem in immunoinformatics. Repertoire construction is a preliminary step in the analysis of clonal lineages, the understanding of immune response dynamics, and the population analysis of immunoglobulin and TCR loci. The emergence of the Illumina MiSeq sequencing machine in 2013 opened new horizons for the investigation of adaptive immune repertoires using highly accurate reads. Reads produced by MiSeq are able to cover repertoires of moderate size. At the same time, the throughput of sequencing machines increases from year to year. This will enable ultra-deep scanning of adaptive immune repertoires and analysis of their diversity. Such data requires both efficient and highly accurate repertoire construction tools. In 2015, Safonova et al. presented IgRepertoireConstructor, a tool for accurate antibody repertoire construction and immunoproteogenomics analysis. Unfortunately, the proposed algorithm was very time- and memory-consuming and could be a bottleneck in processing large immunosequencing libraries. In this paper we overcome this challenge and present IgReC, a novel algorithm for the adaptive repertoire construction problem. IgReC reconstructs a repertoire with high precision even if each input read contains sequencing errors, and performs well on contemporary datasets. Results of computational experiments show that IgReC improves the state of the art in the field. Availability: IgReC is an open-source, freely available program running on Linux platforms. The source code is available at GitHub: yana-safonova.github.io/ig_repertoire_constructor. Contact: safonova.yana@gmail.com
[ { "created": "Thu, 7 Apr 2016 23:04:37 GMT", "version": "v1" } ]
2016-04-11
[ [ "Shlemov", "Alexander", "" ], [ "Bankevich", "Sergey", "" ], [ "Bzikadze", "Andrey", "" ], [ "Safonova", "Yana", "" ] ]
Motivation: The analysis of antibody and T-cell receptor (TCR) concentrations in serum is a fundamental problem in immunoinformatics. Repertoire construction is a preliminary step in the analysis of clonal lineages, the understanding of immune response dynamics, and the population analysis of immunoglobulin and TCR loci. The emergence of the Illumina MiSeq sequencing machine in 2013 opened new horizons for the investigation of adaptive immune repertoires using highly accurate reads. Reads produced by MiSeq are able to cover repertoires of moderate size. At the same time, the throughput of sequencing machines increases from year to year. This will enable ultra-deep scanning of adaptive immune repertoires and analysis of their diversity. Such data requires both efficient and highly accurate repertoire construction tools. In 2015, Safonova et al. presented IgRepertoireConstructor, a tool for accurate antibody repertoire construction and immunoproteogenomics analysis. Unfortunately, the proposed algorithm was very time- and memory-consuming and could be a bottleneck in processing large immunosequencing libraries. In this paper we overcome this challenge and present IgReC, a novel algorithm for the adaptive repertoire construction problem. IgReC reconstructs a repertoire with high precision even if each input read contains sequencing errors, and performs well on contemporary datasets. Results of computational experiments show that IgReC improves the state of the art in the field. Availability: IgReC is an open-source, freely available program running on Linux platforms. The source code is available at GitHub: yana-safonova.github.io/ig_repertoire_constructor. Contact: safonova.yana@gmail.com
0912.4472
John Rhodes
Elizabeth S. Allman, James H. Degnan, John A. Rhodes
Identifying the Rooted Species Tree from the Distribution of Unrooted Gene Trees under the Coalescent
Additional material extends results to polytomous species trees
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene trees are evolutionary trees representing the ancestry of genes sampled from multiple populations. Species trees represent populations of individuals -- each with many genes -- splitting into new populations or species. The coalescent process, which models ancestry of gene copies within populations, is often used to model the probability distribution of gene trees given a fixed species tree. This multispecies coalescent model provides a framework for phylogeneticists to infer species trees from gene trees using maximum likelihood or Bayesian approaches. Because the coalescent models a branching process over time, all trees are typically assumed to be rooted in this setting. Often, however, gene trees inferred by traditional phylogenetic methods are unrooted. We investigate probabilities of unrooted gene trees under the multispecies coalescent model. We show that when there are 4 species with one gene sampled per species, the distribution of unrooted gene tree topologies identifies the unrooted species tree topology and some, but not all, information in the species tree edges (branch lengths). The location of the root on the species tree is not identifiable in this situation. However, for 5 or more species with one gene sampled per species, we show that the distribution of unrooted gene tree topologies identifies the rooted species tree topology and all its internal branch lengths. The length of any pendent branch leading to a leaf of the species tree is also identifiable for any species from which more than one gene is sampled.
[ { "created": "Tue, 22 Dec 2009 18:00:39 GMT", "version": "v1" }, { "created": "Thu, 29 Jul 2010 18:07:14 GMT", "version": "v2" } ]
2010-07-30
[ [ "Allman", "Elizabeth S.", "" ], [ "Degnan", "James H.", "" ], [ "Rhodes", "John A.", "" ] ]
Gene trees are evolutionary trees representing the ancestry of genes sampled from multiple populations. Species trees represent populations of individuals -- each with many genes -- splitting into new populations or species. The coalescent process, which models ancestry of gene copies within populations, is often used to model the probability distribution of gene trees given a fixed species tree. This multispecies coalescent model provides a framework for phylogeneticists to infer species trees from gene trees using maximum likelihood or Bayesian approaches. Because the coalescent models a branching process over time, all trees are typically assumed to be rooted in this setting. Often, however, gene trees inferred by traditional phylogenetic methods are unrooted. We investigate probabilities of unrooted gene trees under the multispecies coalescent model. We show that when there are 4 species with one gene sampled per species, the distribution of unrooted gene tree topologies identifies the unrooted species tree topology and some, but not all, information in the species tree edges (branch lengths). The location of the root on the species tree is not identifiable in this situation. However, for 5 or more species with one gene sampled per species, we show that the distribution of unrooted gene tree topologies identifies the rooted species tree topology and all its internal branch lengths. The length of any pendent branch leading to a leaf of the species tree is also identifiable for any species from which more than one gene is sampled.
1006.0020
Teruhiko Yoneyama
Teruhiko Yoneyama and Mukkai S. Krishnamoorthy
Influence of the Cold War upon Influenza Pandemic of 1957-1958
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/3.0/
The Influenza Pandemic of 1957-1958, also called the Asian Flu Pandemic, was one of the most widespread pandemics in history. In this paper, we model the pandemic, considering the effect of the Cold War. There were some restrictions between Western and Eastern nations due to the Cold War during the pandemic. We expect that such restrictions influenced the spread of the pandemic. We propose a hybrid model to determine how the pandemic spread through the world. The model combines an SEIR-based model for local areas with a network model for the global connections between countries. First, we reproduce the situation in 19 countries. Then, we run another experiment to find the influence of the war on the spread of the pandemic: a simulation considering international relationships in different years. The simulation results show that the impact of the pandemic in each country was strongly influenced by international relationships. This study indicates that if the Cold War had had less effect, Western nations would have had a larger number of deaths, Eastern nations a smaller number, and the worldwide impact would have been somewhat greater.
[ { "created": "Tue, 11 May 2010 22:07:38 GMT", "version": "v1" } ]
2010-06-02
[ [ "Yoneyama", "Teruhiko", "" ], [ "Krishnamoorthy", "Mukkai S.", "" ] ]
The Influenza Pandemic of 1957-1958, also called the Asian Flu Pandemic, was one of the most widespread pandemics in history. In this paper, we model the pandemic, considering the effect of the Cold War. There were some restrictions between Western and Eastern nations due to the Cold War during the pandemic. We expect that such restrictions influenced the spread of the pandemic. We propose a hybrid model to determine how the pandemic spread through the world. The model combines an SEIR-based model for local areas with a network model for the global connections between countries. First, we reproduce the situation in 19 countries. Then, we run another experiment to find the influence of the war on the spread of the pandemic: a simulation considering international relationships in different years. The simulation results show that the impact of the pandemic in each country was strongly influenced by international relationships. This study indicates that if the Cold War had had less effect, Western nations would have had a larger number of deaths, Eastern nations a smaller number, and the worldwide impact would have been somewhat greater.
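The local component of the hybrid model described above is SEIR-based. As a reminder of what that entails, here is a minimal single-population SEIR integration; the rates and population size are illustrative, and the international travel network through which the paper couples such local models is not included.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SEIR sketch for a single population (illustrative rates, per day).
beta, sigma, gamma = 0.6, 1 / 2.0, 1 / 3.0
N = 1e6

def seir(t, y):
    S, E, I, R = y
    return [-beta * S * I / N,               # susceptibles become exposed
            beta * S * I / N - sigma * E,    # exposed become infectious
            sigma * E - gamma * I,           # infectious recover
            gamma * I]

y0 = [N - 10, 0, 10, 0]
sol = solve_ivp(seir, (0, 180), y0, t_eval=np.linspace(0, 180, 181))
print("peak infectious fraction:", sol.y[2].max() / N)
```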
2311.04567
Robert Petryszak
Kevin Troul\'e, Robert Petryszak, Martin Prete, James Cranley, Alicia Harasty, Zewen Kelvin Tuong, Sarah A Teichmann, Luz Garcia-Alonso, Roser Vento-Tormo
CellPhoneDB v5: inferring cell-cell communication from single-cell multiomics data
30 pages, 3 figures and 2 tables. Added previously missing figures and tables; Updated the reference for 'An integrated single-cell reference atlas of the human endometrium' paper
null
null
null
q-bio.CB q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Cell-cell communication is essential for tissue development, regeneration and function, and its disruption can lead to diseases and developmental abnormalities. The revolution of single-cell genomics technologies offers unprecedented insights into cellular identities, opening new avenues to resolve the intricate cellular interactions present in tissue niches. CellPhoneDB is a bioinformatics toolkit designed to infer cell-cell communication by combining a curated repository of bona fide ligand-receptor interactions with a set of computational and statistical methods to integrate them with single-cell genomics data. Importantly, CellPhoneDB captures the multimeric nature of molecular complexes, thus representing cell-cell communication biology faithfully. Here we present CellPhoneDB v5, an updated version of the tool, which offers several new features. Firstly, the repository has been expanded by one-third with the addition of new interactions. These encompass interactions mediated by non-protein ligands such as endocrine hormones and GPCR ligands. Secondly, it includes a differential expression-based methodology for more tailored interaction queries. Thirdly, it incorporates novel computational methods to prioritise specific cell-cell interactions, leveraging other single-cell modalities, such as spatial information or TF activities (i.e., the CellSign module). Finally, we provide CellPhoneDBViz, a module to interactively visualise and share results amongst users. Altogether, CellPhoneDB v5 elevates the precision of cell-cell communication inference, ushering in new perspectives to comprehend tissue biology in both healthy and pathological states.
[ { "created": "Wed, 8 Nov 2023 09:59:03 GMT", "version": "v1" }, { "created": "Mon, 13 Nov 2023 13:41:51 GMT", "version": "v2" } ]
2023-11-14
[ [ "Troulé", "Kevin", "" ], [ "Petryszak", "Robert", "" ], [ "Prete", "Martin", "" ], [ "Cranley", "James", "" ], [ "Harasty", "Alicia", "" ], [ "Tuong", "Zewen Kelvin", "" ], [ "Teichmann", "Sarah A", "" ], [ "Garcia-Alonso", "Luz", "" ], [ "Vento-Tormo", "Roser", "" ] ]
Cell-cell communication is essential for tissue development, regeneration and function, and its disruption can lead to diseases and developmental abnormalities. The revolution of single-cell genomics technologies offers unprecedented insights into cellular identities, opening new avenues to resolve the intricate cellular interactions present in tissue niches. CellPhoneDB is a bioinformatics toolkit designed to infer cell-cell communication by combining a curated repository of bona fide ligand-receptor interactions with a set of computational and statistical methods to integrate them with single-cell genomics data. Importantly, CellPhoneDB captures the multimeric nature of molecular complexes, thus representing cell-cell communication biology faithfully. Here we present CellPhoneDB v5, an updated version of the tool, which offers several new features. Firstly, the repository has been expanded by one-third with the addition of new interactions. These encompass interactions mediated by non-protein ligands such as endocrine hormones and GPCR ligands. Secondly, it includes a differential expression-based methodology for more tailored interaction queries. Thirdly, it incorporates novel computational methods to prioritise specific cell-cell interactions, leveraging other single-cell modalities, such as spatial information or TF activities (i.e., the CellSign module). Finally, we provide CellPhoneDBViz, a module to interactively visualise and share results amongst users. Altogether, CellPhoneDB v5 elevates the precision of cell-cell communication inference, ushering in new perspectives to comprehend tissue biology in both healthy and pathological states.
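As a hedged illustration of the statistical idea behind ligand-receptor inference (a simplified sketch in plain numpy/pandas, not the CellPhoneDB API, and omitting details such as percent-expressed filters and multi-subunit complexes), the snippet below scores an interaction by averaging mean ligand expression in the sender cell type with mean receptor expression in the receiver, and assesses significance by permuting cell-type labels. Gene and cell-type names are hypothetical placeholders.

import numpy as np
import pandas as pd

def lr_score(expr, labels, ligand, receptor, sender, receiver):
    """Average of mean ligand expression in `sender` cells and mean receptor expression
    in `receiver` cells; `expr` is a cells x genes DataFrame, `labels` a Series of cell types."""
    lig = expr.loc[labels == sender, ligand].mean()
    rec = expr.loc[labels == receiver, receptor].mean()
    return (lig + rec) / 2

def permutation_pvalue(expr, labels, ligand, receptor, sender, receiver, n_perm=1000, seed=0):
    """Fraction of label permutations whose score is at least the observed score."""
    rng = np.random.default_rng(seed)
    observed = lr_score(expr, labels, ligand, receptor, sender, receiver)
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = pd.Series(rng.permutation(labels.values), index=labels.index)
        null[i] = lr_score(expr, shuffled, ligand, receptor, sender, receiver)
    return (null >= observed).mean()

if __name__ == "__main__":
    # Toy data with a ligand enriched in "T" cells and a receptor enriched in "B" cells.
    rng = np.random.default_rng(1)
    cells = [f"cell{i}" for i in range(200)]
    labels = pd.Series(["T"] * 100 + ["B"] * 100, index=cells)
    expr = pd.DataFrame(rng.poisson(1.0, size=(200, 2)),
                        index=cells, columns=["LIGAND_X", "RECEPTOR_Y"])
    expr.loc[labels == "T", "LIGAND_X"] += 5
    expr.loc[labels == "B", "RECEPTOR_Y"] += 5
    print(permutation_pvalue(expr, labels, "LIGAND_X", "RECEPTOR_Y", "T", "B"))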
q-bio/0402003
Hiro-Sato Niwa
Hiro-Sato Niwa
Space-irrelevant scaling law for fish school sizes
23 pages, 12 figures, to appear in J. Theor. Biol
J. Theor. Biol. 228 (2004) 347-357
10.1016/j.jtbi.2004.01.011
null
q-bio.PE cond-mat.stat-mech
null
Universal scaling in the power-law size distribution of pelagic fish schools is established. The power-law exponent of size distributions is extracted through the data collapse. The distribution depends on the school size only through the ratio of the size to the expected size of the schools an arbitrary individual engages in. This expected size is linear in the ratio of the spatial population density of fish to the breakup rate of schools. By means of extensive numerical simulations, it is verified that the law is completely independent of the dimension of the space in which the fish move. Besides the scaling analysis of school size distributions, the integrity of schools over extended periods of time is discussed.
[ { "created": "Sun, 1 Feb 2004 21:11:39 GMT", "version": "v1" } ]
2007-05-23
[ [ "Niwa", "Hiro-Sato", "" ] ]
Universal scaling in the power-law size distribution of pelagic fish schools is established. The power-law exponent of size distributions is extracted through the data collapse. The distribution depends on the school size only through the ratio of the size to the expected size of the schools an arbitrary individual engages in. This expected size is linear in the ratio of the spatial population density of fish to the breakup rate of schools. By means of extensive numerical simulations, it is verified that the law is completely independent of the dimension of the space in which the fish move. Besides the scaling analysis of school size distributions, the integrity of schools over extended periods of time is discussed.
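A toy mean-field merge-split simulation (a hedged sketch, not the paper's spatially explicit model; all rates are illustrative) can convey the claims in the abstract: schools merge at a rate proportional to the number of pairs and break up at a constant per-school rate, so the mean school size grows with the encounter-to-breakup ratio, and size distributions for different parameters approximately collapse when sizes are rescaled by the mean.

import random
from collections import Counter

def simulate_schools(n_fish, encounter_rate, split_rate, n_steps=100_000, seed=0):
    """Mean-field merge-split chain: each pair of schools merges at rate encounter_rate,
    and each school breaks uniformly into two nonempty groups at rate split_rate."""
    rng = random.Random(seed)
    schools = [1] * n_fish                      # start from solitary fish
    sizes = Counter()
    for step in range(n_steps):
        m = len(schools)
        merge_weight = encounter_rate * m * (m - 1) / 2
        split_weight = split_rate * m
        if m > 1 and rng.random() < merge_weight / (merge_weight + split_weight):
            i, j = rng.sample(range(m), 2)      # merge two randomly chosen schools
            schools[i] += schools[j]
            schools.pop(j)
        else:
            i = rng.randrange(m)                # attempt to split a randomly chosen school
            if schools[i] > 1:
                k = rng.randint(1, schools[i] - 1)
                schools.append(schools[i] - k)
                schools[i] = k
        if step > n_steps // 2 and step % 50 == 0:
            sizes.update(schools)               # sample sizes after a burn-in period
    return sizes

if __name__ == "__main__":
    for split_rate in (50.0, 100.0):
        hist = simulate_schools(n_fish=2000, encounter_rate=1.0, split_rate=split_rate)
        total = sum(hist.values())
        mean = sum(n * c for n, c in hist.items()) / total
        tail = sum(c for n, c in hist.items() if n > 2 * mean) / total
        # Plotting <N> * P(N) against N / <N> for the two runs would display the
        # data collapse discussed in the abstract.
        print(f"split rate {split_rate}: mean school size {mean:.1f}, P(N > 2<N>) = {tail:.3f}")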