id              string  (length 9 to 13)
submitter       string  (length 4 to 48)
authors         string  (length 4 to 9.62k)
title           string  (length 4 to 343)
comments        string  (length 2 to 480)
journal-ref     string  (length 9 to 309)
doi             string  (length 12 to 138)
report-no       string  (277 classes)
categories      string  (length 8 to 87)
license         string  (9 classes)
orig_abstract   string  (length 27 to 3.76k)
versions        list    (length 1 to 15)
update_date     string  (length 10 to 10)
authors_parsed  list    (length 1 to 147)
abstract        string  (length 24 to 3.75k)
1812.11627
Griffin Chure
Rob Phillips, Nathan M. Belliveau, Griffin Chure, Hernan G. Garcia, Manuel Razo-Mejia, Clarissa Scholes
Figure 1 Theory Meets Figure 2 Experiments in the Study of Gene Expression
55 pages, 27 figures, review article
null
null
null
q-bio.GN q-bio.MN
http://creativecommons.org/licenses/by/4.0/
It is tempting to believe that we now own the genome. The ability to read and re-write it at will has ushered in a stunning period in the history of science. Nonetheless, there is an Achilles heel exposed by all of the genomic data that has accrued: we still don't know how to interpret it. Many genes are subject to sophisticated programs of transcriptional regulation, mediated by DNA sequences that harbor binding sites for transcription factors which can up- or down-regulate gene expression depending upon environmental conditions. This gives rise to an input-output function describing how the level of expression depends upon the parameters of the regulated gene, for instance, on the number and type of binding sites in its regulatory sequence. In recent years, the ability to make precision measurements of expression, coupled with the ability to make increasingly sophisticated theoretical predictions, has enabled an explicit dialogue between theory and experiment that holds the promise of covering this genomic Achilles heel. The goal is to reach a predictive understanding of transcriptional regulation that makes it possible to calculate gene expression levels from DNA regulatory sequence. This review focuses on the canonical simple repression motif to ask how well the models that have been used to characterize it actually work. We consider a hierarchy of increasingly sophisticated experiments in which the minimal parameter set learned at one level is applied to make quantitative predictions at the next. We show that these careful quantitative dissections provide a template for a predictive understanding of the many more complex regulatory arrangements found across all domains of life.
[ { "created": "Sun, 30 Dec 2018 23:06:51 GMT", "version": "v1" } ]
2019-01-01
[ [ "Phillips", "Rob", "" ], [ "Belliveau", "Nathan M.", "" ], [ "Chure", "Griffin", "" ], [ "Garcia", "Hernan G.", "" ], [ "Razo-Mejia", "Manuel", "" ], [ "Scholes", "Clarissa", "" ] ]
It is tempting to believe that we now own the genome. The ability to read and re-write it at will has ushered in a stunning period in the history of science. Nonetheless, there is an Achilles heel exposed by all of the genomic data that has accrued: we still don't know how to interpret it. Many genes are subject to sophisticated programs of transcriptional regulation, mediated by DNA sequences that harbor binding sites for transcription factors which can up- or down-regulate gene expression depending upon environmental conditions. This gives rise to an input-output function describing how the level of expression depends upon the parameters of the regulated gene, for instance, on the number and type of binding sites in its regulatory sequence. In recent years, the ability to make precision measurements of expression, coupled with the ability to make increasingly sophisticated theoretical predictions, has enabled an explicit dialogue between theory and experiment that holds the promise of covering this genomic Achilles heel. The goal is to reach a predictive understanding of transcriptional regulation that makes it possible to calculate gene expression levels from DNA regulatory sequence. This review focuses on the canonical simple repression motif to ask how well the models that have been used to characterize it actually work. We consider a hierarchy of increasingly sophisticated experiments in which the minimal parameter set learned at one level is applied to make quantitative predictions at the next. We show that these careful quantitative dissections provide a template for a predictive understanding of the many more complex regulatory arrangements found across all domains of life.
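The input-output function for the simple repression motif discussed in this abstract is commonly written, in thermodynamic-model form, as fold-change = (1 + (R/N_NS) exp(-Δε/kT))^(-1). A minimal sketch follows; the default number of non-specific sites and the sample parameter values are illustrative assumptions, not numbers taken from the review.

```python
import numpy as np

def fold_change(R, d_eps, n_ns=4.6e6, beta=1.0):
    """Thermodynamic-model fold-change for the simple repression motif.

    R     : number of repressors per cell
    d_eps : repressor-operator binding energy in units of kT (more negative = tighter)
    n_ns  : number of non-specific genomic binding sites (illustrative default)
    """
    return 1.0 / (1.0 + (R / n_ns) * np.exp(-beta * d_eps))

# More repressors, or tighter binding, gives stronger repression
# (a smaller fold-change in expression).
assert fold_change(200, -15.0) < fold_change(10, -15.0)
assert fold_change(100, -15.0) < fold_change(100, -10.0)
```

In this picture the "minimal parameter set" of the review maps onto (R, Δε): once measured in one construct, the same numbers predict the fold-change of others.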
2308.10907
Vivek Thumbigere Math
Himanshi Tanwar, Jeba Mercy Gnanasekaran, Devon Allison, Ling-shiang Chuang, Xuesong He, Mario Aimetti, Giacomo Baima, Massimo Costalonga, Raymond K. Cross, Cynthia Sears, Saurabh Mehandru, Judy Cho, Jean-Frederic Colombel, Jean-Pierre Raufman, Vivek Thumbigere-Math
Unraveling the Link between Periodontitis and Inflammatory Bowel Disease: Challenges and Outlook
Total Words: 7,016 Figures: 3 Tables: 2 Reference: 341
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Periodontitis and Inflammatory Bowel Disease (IBD) are chronic inflammatory conditions, characterized by microbial dysbiosis and hyper-immunoinflammatory responses. Growing evidence suggests an interconnection between periodontitis and IBD, implying a shift from the traditional concept of independent diseases to a complex, reciprocal cycle. This review outlines the evidence supporting an Oral-Gut axis, marked by a higher prevalence of periodontitis in IBD patients and vice versa. The specific mechanisms linking periodontitis and IBD remain to be fully elucidated, but emerging evidence points to the ectopic colonization of the gut by oral bacteria, which promote intestinal inflammation by activating host immune responses. This review presents an in-depth examination of the interconnection between periodontitis and IBD, highlighting the shared microbiological and immunological pathways, and proposing a multi-hit hypothesis in the pathogenesis of periodontitis-mediated intestinal inflammation. Furthermore, the review underscores the critical need for a collaborative approach between dentists and gastroenterologists to provide holistic oral-systemic healthcare.
[ { "created": "Sat, 19 Aug 2023 18:59:42 GMT", "version": "v1" } ]
2023-08-23
[ [ "Tanwar", "Himanshi", "" ], [ "Gnanasekaran", "Jeba Mercy", "" ], [ "Allison", "Devon", "" ], [ "Chuang", "Ling-shiang", "" ], [ "He", "Xuesong", "" ], [ "Aimetti", "Mario", "" ], [ "Baima", "Giacomo", "" ...
Periodontitis and Inflammatory Bowel Disease (IBD) are chronic inflammatory conditions, characterized by microbial dysbiosis and hyper-immunoinflammatory responses. Growing evidence suggests an interconnection between periodontitis and IBD, implying a shift from the traditional concept of independent diseases to a complex, reciprocal cycle. This review outlines the evidence supporting an Oral-Gut axis, marked by a higher prevalence of periodontitis in IBD patients and vice versa. The specific mechanisms linking periodontitis and IBD remain to be fully elucidated, but emerging evidence points to the ectopic colonization of the gut by oral bacteria, which promote intestinal inflammation by activating host immune responses. This review presents an in-depth examination of the interconnection between periodontitis and IBD, highlighting the shared microbiological and immunological pathways, and proposing a multi-hit hypothesis in the pathogenesis of periodontitis-mediated intestinal inflammation. Furthermore, the review underscores the critical need for a collaborative approach between dentists and gastroenterologists to provide holistic oral-systemic healthcare.
0901.3271
Tobias Galla
Tobias Galla
Intrinsic fluctuations in stochastic delay systems: theoretical description and application to a simple model of gene regulation
12 pages, 6 figures
null
10.1103/PhysRevE.80.021909
null
q-bio.MN cond-mat.stat-mech nlin.AO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effects of intrinsic noise on stochastic delay systems are studied within an expansion in the inverse system size. We show that the stochastic nature of the underlying dynamics may induce oscillatory behaviour in parameter ranges where the deterministic system does not sustain cycles, and compute the power spectra of these stochastic oscillations analytically, in good agreement with simulations. The theory is developed in the context of a simple one-dimensional toy model, but is applicable more generally. Gene regulatory systems in particular often contain only a small number of molecules, leading to significant fluctuations in mRNA and protein concentrations. As an application we therefore study a minimalistic model of the expression levels of hes1 mRNA and Hes1 protein, representing the simple motif of an auto-inhibitory feedback loop and motivated by its relevance to somite segmentation.
[ { "created": "Wed, 21 Jan 2009 13:25:01 GMT", "version": "v1" } ]
2015-05-13
[ [ "Galla", "Tobias", "" ] ]
The effects of intrinsic noise on stochastic delay systems are studied within an expansion in the inverse system size. We show that the stochastic nature of the underlying dynamics may induce oscillatory behaviour in parameter ranges where the deterministic system does not sustain cycles, and compute the power spectra of these stochastic oscillations analytically, in good agreement with simulations. The theory is developed in the context of a simple one-dimensional toy model, but is applicable more generally. Gene regulatory systems in particular often contain only a small number of molecules, leading to significant fluctuations in mRNA and protein concentrations. As an application we therefore study a minimalistic model of the expression levels of hes1 mRNA and Hes1 protein, representing the simple motif of an auto-inhibitory feedback loop and motivated by its relevance to somite segmentation.
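The deterministic skeleton of such a delayed auto-inhibitory loop can be integrated with a simple Euler scheme and a history buffer. The sketch below is illustrative only: parameter values are placeholders, a constant pre-history is assumed, and the paper's actual contribution, the intrinsic-noise treatment via a system-size expansion, is omitted.

```python
import numpy as np

# Illustrative deterministic delay model of an auto-inhibitory loop
# (Hes1-like): delayed protein represses mRNA production via a Hill term.
# Parameters are placeholders, not the paper's values.
def simulate(T=500.0, dt=0.01, tau=18.0,
             alpha_m=1.0, alpha_p=1.0, mu=0.03, p0=100.0, h=4):
    n = int(T / dt)
    lag = int(tau / dt)
    m = np.zeros(n)
    p = np.zeros(n)
    m[0], p[0] = 3.0, 100.0
    for t in range(n - 1):
        p_del = p[max(t - lag, 0)]          # delayed protein (constant pre-history)
        dm = alpha_m / (1.0 + (p_del / p0) ** h) - mu * m[t]
        dp = alpha_p * m[t] - mu * p[t]
        m[t + 1] = m[t] + dt * dm
        p[t + 1] = p[t] + dt * dp
    return m, p

m, p = simulate()
```

Depending on the delay and the Hill coefficient, the deterministic trajectory may converge or oscillate; the paper's point is that even in the converging regime intrinsic noise can sustain oscillations.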
2105.00448
Xiaoran Lai
Xiaoran Lai, Håkon A. Taskén, Torgeir Mo, Simon W. Funke, Arnoldo Frigessi, Marie E. Rognes and Alvaro Köhn-Luque
A scalable solver for a stochastic, hybrid cellular automaton model of personalized breast cancer therapy
null
null
10.1002/cnm.3542
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Mathematical modeling and simulation is a promising approach to personalized cancer medicine. Yet, the complexity, heterogeneity and multi-scale nature of cancer pose significant computational challenges. Coupling discrete cell-based models with continuous models using hybrid cellular automata is a powerful approach for mimicking biological complexity and describing the dynamical exchange of information across different scales. However, when clinically relevant cancer portions are taken into account, such models become computationally very expensive. While efficient parallelization techniques for continuous models exist, their coupling with discrete models, particularly cellular automata, necessitates more elaborate solutions. Building upon FEniCS, a popular and powerful scientific computing platform for solving partial differential equations, we developed parallel algorithms to link stochastic cellular automata with differential equations ( https://bitbucket.org/HTasken/cansim ). The algorithms minimize the communication between processes that share cellular automata neighborhood values while also allowing for reproducibility during stochastic updates. We demonstrated the potential of our solution on a complex hybrid cellular automaton model of breast cancer treated with combination chemotherapy. On a single-core processor, we obtained nearly linear scaling with an increasing problem size, whereas weak parallel scaling showed moderate growth in solving time relative to increase in problem size. Finally we applied the algorithm to a problem that is 500 times larger than previous work, allowing us to run personalized therapy simulations based on heterogeneous cell density and tumor perfusion conditions estimated from magnetic resonance imaging data on an unprecedented scale.
[ { "created": "Sun, 2 May 2021 11:38:20 GMT", "version": "v1" } ]
2021-11-23
[ [ "Lai", "Xiaoran", "" ], [ "Taskén", "Håkon A.", "" ], [ "Mo", "Torgeir", "" ], [ "Funke", "Simon W.", "" ], [ "Frigessi", "Arnoldo", "" ], [ "Rognes", "Marie E.", "" ], [ "Köhn-Luque", "Alvaro", "" ] ]
Mathematical modeling and simulation is a promising approach to personalized cancer medicine. Yet, the complexity, heterogeneity and multi-scale nature of cancer pose significant computational challenges. Coupling discrete cell-based models with continuous models using hybrid cellular automata is a powerful approach for mimicking biological complexity and describing the dynamical exchange of information across different scales. However, when clinically relevant cancer portions are taken into account, such models become computationally very expensive. While efficient parallelization techniques for continuous models exist, their coupling with discrete models, particularly cellular automata, necessitates more elaborate solutions. Building upon FEniCS, a popular and powerful scientific computing platform for solving partial differential equations, we developed parallel algorithms to link stochastic cellular automata with differential equations ( https://bitbucket.org/HTasken/cansim ). The algorithms minimize the communication between processes that share cellular automata neighborhood values while also allowing for reproducibility during stochastic updates. We demonstrated the potential of our solution on a complex hybrid cellular automaton model of breast cancer treated with combination chemotherapy. On a single-core processor, we obtained nearly linear scaling with an increasing problem size, whereas weak parallel scaling showed moderate growth in solving time relative to increase in problem size. Finally we applied the algorithm to a problem that is 500 times larger than previous work, allowing us to run personalized therapy simulations based on heterogeneous cell density and tumor perfusion conditions estimated from magnetic resonance imaging data on an unprecedented scale.
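A serial toy version of one hybrid step can clarify both the coupling and the reproducibility point: a continuum field is updated by finite differences, then each lattice cell is updated stochastically with its own seeded generator, so results do not depend on traversal (or, in a parallel setting, process) order. The grid size, dose-response form, and parameters below are invented for illustration and are not from the cansim code.

```python
import numpy as np

# One hybrid step: (1) diffuse a continuum drug field, (2) update the
# discrete cell lattice stochastically with per-cell seeded generators.
def diffuse(c, D=0.1, dt=0.1):
    # Five-point finite-difference Laplacian with periodic boundaries.
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
    return c + dt * D * lap

def ca_step(cells, drug, step, base_seed=42):
    new = cells.copy()
    for idx in np.ndindex(cells.shape):
        if cells[idx]:
            # Seeding on (base_seed, step, row, col) makes each cell's
            # random draw independent of update order: reproducible.
            rng = np.random.default_rng((base_seed, step, *idx))
            p_death = 1.0 - np.exp(-drug[idx])   # assumed dose-response
            if rng.random() < p_death:
                new[idx] = 0
    return new

rng = np.random.default_rng(0)
cells = (rng.random((32, 32)) < 0.3).astype(int)
drug = np.zeros((32, 32))
drug[16, 16] = 50.0                              # a point source of drug
for step in range(20):
    drug = diffuse(drug)
    cells = ca_step(cells, drug, step)
```

In the parallel setting described above, the same per-cell seeding idea lets each process draw identical random numbers for halo cells without extra communication.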
1411.7320
Pierre de Villemereuil
Pierre de Villemereuil and Oscar E. Gaggiotti
A new $F_{\text{ST}}$-based method to uncover local adaptation using environmental variables
19 pages, 5 figures, Supplementary Information at the end of the document
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Genome-scan methods are used for screening genome-wide patterns of DNA polymorphism to detect signatures of positive selection. There are two main types of methods: (i) 'outlier' detection methods based on Fst that detect loci with high differentiation compared to the rest of the genome, and (ii) environmental association methods that test the association between allele frequencies and environmental variables. - We present a new Fst-based genome-scan method, BayeScEnv, which incorporates environmental information in the form of 'environmental differentiation'. It is based on the F-model but, as opposed to existing approaches, considers two locus-specific effects: one due to divergent selection, and another one due to various other processes different from local adaptation (e.g. range expansions, differences in mutation rates across loci or background selection). The method was developed in C++ and is available at http://github.com/devillemereuil/bayescenv. - Simulation studies show that our method has a much lower false positive rate than an existing Fst-based method, BayeScan, under a wide range of demographic scenarios. Although it has lower power, it leads to a better compromise between power and false positive rate. - We apply our method to human and salmon datasets and show that it can be used successfully to study local adaptation. We discuss its scope and compare its mechanics to other existing methods.
[ { "created": "Wed, 26 Nov 2014 18:09:20 GMT", "version": "v1" }, { "created": "Tue, 3 Feb 2015 12:22:56 GMT", "version": "v2" } ]
2015-02-04
[ [ "de Villemereuil", "Pierre", "" ], [ "Gaggiotti", "Oscar E.", "" ] ]
- Genome-scan methods are used for screening genome-wide patterns of DNA polymorphism to detect signatures of positive selection. There are two main types of methods: (i) 'outlier' detection methods based on Fst that detect loci with high differentiation compared to the rest of the genome, and (ii) environmental association methods that test the association between allele frequencies and environmental variables. - We present a new Fst-based genome-scan method, BayeScEnv, which incorporates environmental information in the form of 'environmental differentiation'. It is based on the F-model but, as opposed to existing approaches, considers two locus-specific effects: one due to divergent selection, and another one due to various other processes different from local adaptation (e.g. range expansions, differences in mutation rates across loci or background selection). The method was developed in C++ and is available at http://github.com/devillemereuil/bayescenv. - Simulation studies show that our method has a much lower false positive rate than an existing Fst-based method, BayeScan, under a wide range of demographic scenarios. Although it has lower power, it leads to a better compromise between power and false positive rate. - We apply our method to human and salmon datasets and show that it can be used successfully to study local adaptation. We discuss its scope and compare its mechanics to other existing methods.
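The differentiation statistic underlying such outlier scans is Wright's Fst. BayeScan and BayeScEnv use model-based Bayesian estimators, but the textbook moment definition for a biallelic locus is easy to state and makes the "high differentiation" intuition concrete:

```python
import numpy as np

# Wright's F_ST for one biallelic locus, from per-subpopulation allele
# frequencies: the proportional reduction in heterozygosity within
# subpopulations relative to the total population.
def fst(freqs):
    freqs = np.asarray(freqs, dtype=float)
    p_bar = freqs.mean()                     # total-population allele frequency
    h_t = 2 * p_bar * (1 - p_bar)            # expected total heterozygosity
    h_s = (2 * freqs * (1 - freqs)).mean()   # mean subpopulation heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# A strongly differentiated locus scores far higher than a uniform one,
# which is what flags it as a candidate for divergent selection.
assert fst([0.05, 0.95]) > fst([0.48, 0.52])
assert fst([0.5, 0.5]) == 0.0
```

An outlier scan compares each locus's value against the genome-wide distribution; BayeScEnv's refinement is to ask, in addition, whether that differentiation tracks an environmental variable.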
1704.03355
Vyacheslav Yukalov
V.I. Yukalov, E.P. Yukalova, and D. Sornette
Dynamic Transition in Symbiotic Evolution Induced by Growth Rate Variation
Latex file, 16 pages, 17 figures
Int. J. Bifur. Chaos 27 (2017) 1730013
10.1142/S0218127417300130
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a standard bifurcation of a dynamical system, the stationary points (or more generally attractors) change qualitatively when varying a control parameter. Here we describe a novel and unusual effect, in which the change of a parameter, e.g. a growth rate, does not influence the stationary states, but nevertheless leads to a qualitative change of dynamics. For instance, such a dynamic transition can be between the convergence to a stationary state and a strong increase without stationary states, or between the convergence to one stationary state and that to a different state. This effect is illustrated for a dynamical system describing two symbiotic populations, one of which exhibits a growth rate larger than the other one. We show that, although the stationary states of the dynamical system do not depend on the growth rates, the latter influence the boundary of the basins of attraction. This change of the basins of attraction explains this unusual effect, whereby growth rate variation qualitatively changes the dynamics.
[ { "created": "Tue, 11 Apr 2017 15:27:38 GMT", "version": "v1" } ]
2017-04-26
[ [ "Yukalov", "V. I.", "" ], [ "Yukalova", "E. P.", "" ], [ "Sornette", "D.", "" ] ]
In a standard bifurcation of a dynamical system, the stationary points (or more generally attractors) change qualitatively when varying a control parameter. Here we describe a novel and unusual effect, in which the change of a parameter, e.g. a growth rate, does not influence the stationary states, but nevertheless leads to a qualitative change of dynamics. For instance, such a dynamic transition can be between the convergence to a stationary state and a strong increase without stationary states, or between the convergence to one stationary state and that to a different state. This effect is illustrated for a dynamical system describing two symbiotic populations, one of which exhibits a growth rate larger than the other one. We show that, although the stationary states of the dynamical system do not depend on the growth rates, the latter influence the boundary of the basins of attraction. This change of the basins of attraction explains this unusual effect, whereby growth rate variation qualitatively changes the dynamics.
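The mechanism can be illustrated on a toy system (not the paper's equations): in a mutualistic Lotka-Volterra model, multiplying one population's equation by a growth-rate factor s rescales the speed of that population's dynamics but leaves the zeros of the flow, and hence the stationary states, untouched. Changing basins of attraction then requires several coexisting attractors, which the paper's model has and this minimal sketch does not; the sketch only demonstrates the invariance of the stationary states.

```python
# Toy mutualistic system: s multiplies the second equation, so it changes
# trajectories but not where the right-hand side vanishes.
def flow(x, y, s):
    dx = x * (1.0 - x + 0.5 * y)
    dy = s * y * (1.0 - y + 0.5 * x)
    return dx, dy

def integrate(x, y, s, dt=0.01, n=20000):
    # Plain forward-Euler integration, sufficient for this mild system.
    for _ in range(n):
        dx, dy = flow(x, y, s)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# The coexistence fixed point (2, 2) is stationary for every s > 0.
for s in (0.5, 1.0, 3.0):
    assert flow(2.0, 2.0, s) == (0.0, 0.0)
```

From a generic positive initial condition the toy system converges to (2, 2) regardless of s; in a bistable variant, s would instead decide on which side of the basin boundary that initial condition falls.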
q-bio/0404007
Jean Faber
Jean Faber, Renato Portugal and Luiz Pinguelli Rosa
Information Processing in Brain Microtubules
14 pages, 10 figures, article presented in Quantum Mind 2003 conference
null
null
null
q-bio.SC q-bio.NC quant-ph
null
Models of the mind are based on the possibility of computing in brain microtubules. From this point of view, information processing is the fundamental issue for understanding the brain mechanisms that produce consciousness. The cytoskeleton polymers could store and process information through their dynamic coupling mediated by mechanical energy. We analyze the problem of information transfer and storage in brain microtubules, considering them as a communication channel. We discuss the implications of assuming that consciousness is generated by the subneuronal process.
[ { "created": "Tue, 6 Apr 2004 01:37:03 GMT", "version": "v1" }, { "created": "Tue, 18 Jan 2005 17:21:28 GMT", "version": "v2" } ]
2007-05-23
[ [ "Faber", "Jean", "" ], [ "Portugal", "Renato", "" ], [ "Rosa", "Luiz Pinguelli", "" ] ]
Models of the mind are based on the possibility of computing in brain microtubules. From this point of view, information processing is the fundamental issue for understanding the brain mechanisms that produce consciousness. The cytoskeleton polymers could store and process information through their dynamic coupling mediated by mechanical energy. We analyze the problem of information transfer and storage in brain microtubules, considering them as a communication channel. We discuss the implications of assuming that consciousness is generated by the subneuronal process.
2206.01803
Ariel Félix Gualtieri PhD
Ariel Félix Gualtieri, Carolina de la Cal, Augusto Francisco Toma, Pedro Hecht
Spread of SARS-CoV-2 in a SIS model with vaccination and breakthrough infection
24 pages with 12 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Although previous infection and vaccination provide protection against SARS-CoV-2 infection, both reinfection and breakthrough infection are possible events whose occurrence would increase with time after first exposure to the antigen and with the emergence of new variants of the virus. Periodic vaccination could counteract this decline in protection. In the present work, our aim was to develop and explore a model of SARS-CoV-2 spread with vaccination, reinfection and breakthrough infection. A modified deterministic SIS (Susceptible-Infected-Susceptible) model represented by a system of differential equations was designed. As in any SIS model, the population was divided into susceptible and infected individuals. But in our design, susceptible individuals were, in turn, grouped into three consecutive categories whose susceptibility increases with time after infection or vaccination. The model was studied by means of computer simulations, which were analysed qualitatively. The results obtained show that the prevalence, after oscillating between peaks and valleys, reaches a plateau phase. Moreover, as might be expected, the magnitude of the peaks and plateaus increases as the infection rate rises, the vaccination rate decreases and the rate of decay of protection conferred by vaccination or previous infection increases. Therefore, the present study suggests that, at least under certain conditions, the spread of SARS-CoV-2, although it could experience fluctuations, would finally evolve into an endemic form, with a more or less stable prevalence that would depend on the levels of infection and vaccination, and on the kinetics of post-infection and post-vaccination protection. However, it should be kept in mind that our development is a theoretical scheme with many limitations. For this reason, its predictions should be considered with great care.
[ { "created": "Fri, 3 Jun 2022 20:18:47 GMT", "version": "v1" } ]
2022-06-07
[ [ "Gualtieri", "Ariel Félix", "" ], [ "de la Cal", "Carolina", "" ], [ "Toma", "Augusto Francisco", "" ], [ "Hecht", "Pedro", "" ] ]
Although previous infection and vaccination provide protection against SARS-CoV-2 infection, both reinfection and breakthrough infection are possible events whose occurrence would increase with time after first exposure to the antigen and with the emergence of new variants of the virus. Periodic vaccination could counteract this decline in protection. In the present work, our aim was to develop and explore a model of SARS-CoV-2 spread with vaccination, reinfection and breakthrough infection. A modified deterministic SIS (Susceptible-Infected-Susceptible) model represented by a system of differential equations was designed. As in any SIS model, the population was divided into susceptible and infected individuals. But in our design, susceptible individuals were, in turn, grouped into three consecutive categories whose susceptibility increases with time after infection or vaccination. The model was studied by means of computer simulations, which were analysed qualitatively. The results obtained show that the prevalence, after oscillating between peaks and valleys, reaches a plateau phase. Moreover, as might be expected, the magnitude of the peaks and plateaus increases as the infection rate rises, the vaccination rate decreases and the rate of decay of protection conferred by vaccination or previous infection increases. Therefore, the present study suggests that, at least under certain conditions, the spread of SARS-CoV-2, although it could experience fluctuations, would finally evolve into an endemic form, with a more or less stable prevalence that would depend on the levels of infection and vaccination, and on the kinetics of post-infection and post-vaccination protection. However, it should be kept in mind that our development is a theoretical scheme with many limitations. For this reason, its predictions should be considered with great care.
1912.13167
Xueyuan Zhao
Xueyuan Zhao and Dario Pompili
Transform-Domain Classification of Human Cells based on DNA Methylation Datasets
null
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel method to classify human cells based on transform-domain analysis of DNA methylation data is presented in this work. DNA methylation profiles vary in human cells with the progression of disease stages, and the proposed method exploits this variation to classify normal and disease cells, including cancer cells. The cancer cell types investigated in this work cover hepatocellular (sample size n = 40), colorectal (n = 44), lung (n = 70) and endometrial (n = 87) cancer cells. A new pipeline is proposed that integrates the DNA methylation intensity measurements on all the CpG islands by applying the Walsh-Hadamard Transform (WHT). The study reveals a three-step structure in the transform-domain DNA methylation data, with step values associated with cell status. Further assessments have been carried out on the proposed machine learning pipeline to classify normal and cancer tissue cells. A number of machine learning classifiers are compared for whole-sequence and WHT-sequence classification based on public Whole-Genome Bisulfite Sequencing (WGBS) DNA methylation datasets. The WHT-based method can speed up the computation time by more than one order of magnitude compared with whole original sequence classification, while maintaining comparable classification accuracy with the selected machine learning classifiers. The proposed method has broad applications in expedited classification of diseased and normal human cells from epigenome and genome datasets.
[ { "created": "Tue, 31 Dec 2019 04:18:11 GMT", "version": "v1" } ]
2020-01-01
[ [ "Zhao", "Xueyuan", "" ], [ "Pompili", "Dario", "" ] ]
A novel method to classify human cells based on transform-domain analysis of DNA methylation data is presented in this work. DNA methylation profiles vary in human cells with the progression of disease stages, and the proposed method exploits this variation to classify normal and disease cells, including cancer cells. The cancer cell types investigated in this work cover hepatocellular (sample size n = 40), colorectal (n = 44), lung (n = 70) and endometrial (n = 87) cancer cells. A new pipeline is proposed that integrates the DNA methylation intensity measurements on all the CpG islands by applying the Walsh-Hadamard Transform (WHT). The study reveals a three-step structure in the transform-domain DNA methylation data, with step values associated with cell status. Further assessments have been carried out on the proposed machine learning pipeline to classify normal and cancer tissue cells. A number of machine learning classifiers are compared for whole-sequence and WHT-sequence classification based on public Whole-Genome Bisulfite Sequencing (WGBS) DNA methylation datasets. The WHT-based method can speed up the computation time by more than one order of magnitude compared with whole original sequence classification, while maintaining comparable classification accuracy with the selected machine learning classifiers. The proposed method has broad applications in expedited classification of diseased and normal human cells from epigenome and genome datasets.
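The Walsh-Hadamard Transform at the heart of that pipeline is simple to implement. A standard in-place fast version (O(n log n) for n a power of two; this is the textbook algorithm, not the authors' code) is:

```python
import numpy as np

# Unnormalized fast Walsh-Hadamard transform: repeated butterfly passes,
# doubling the block size h each round.
def fwht(x):
    a = np.array(x, dtype=float)   # copy; transform is done in place on `a`
    n = len(a)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# The WHT is, up to scaling, its own inverse: applying it twice gives n * x.
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
assert np.allclose(fwht(fwht(x)), len(x) * x)
```

The speed-up reported in the abstract comes from classifying a truncated set of transform coefficients instead of the full methylation sequence; since the WHT uses only additions and subtractions, computing those coefficients is cheap.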
2309.03783
Matthew Silk
Iacopo Iacopini, Jennifer R Foote, Nina H Fefferman, Elizabeth P Derryberry and Matthew J Silk
Not your private tête-à-tête: leveraging the power of higher-order networks to study animal communication
null
null
null
null
q-bio.PE cs.SI q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Animal communication is frequently studied with conventional network representations that link pairs of individuals who interact, for example, through vocalisation. However, acoustic signals often have multiple simultaneous receivers, or receivers integrate information from multiple signallers, meaning these interactions are not dyadic. Additionally, non-dyadic social structures often shape an individual's behavioural response to vocal communication. Recently, major advances have been made in the study of these non-dyadic, higher-order networks (e.g., hypergraphs and simplicial complexes). Here, we show how these approaches can provide new insights into vocal communication through three case studies that illustrate how higher-order network models can: a) alter predictions made about the outcome of vocally-coordinated group departures; b) generate different patterns of song synchronisation than models that only include dyadic interactions; and c) inform models of cultural evolution of vocal communication. Together, our examples highlight the potential power of higher-order networks to study animal vocal communication. We then build on our case studies to identify key challenges in applying higher-order network approaches in this context and outline important research questions these techniques could help answer.
[ { "created": "Thu, 7 Sep 2023 15:31:14 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 11:19:13 GMT", "version": "v2" } ]
2023-11-08
[ [ "Iacopini", "Iacopo", "" ], [ "Foote", "Jennifer R", "" ], [ "Fefferman", "Nina H", "" ], [ "Derryberry", "Elizabeth P", "" ], [ "Silk", "Matthew J", "" ] ]
Animal communication is frequently studied with conventional network representations that link pairs of individuals who interact, for example, through vocalisation. However, acoustic signals often have multiple simultaneous receivers, or receivers integrate information from multiple signallers, meaning these interactions are not dyadic. Additionally, non-dyadic social structures often shape an individual's behavioural response to vocal communication. Recently, major advances have been made in the study of these non-dyadic, higher-order networks (e.g., hypergraphs and simplicial complexes). Here, we show how these approaches can provide new insights into vocal communication through three case studies that illustrate how higher-order network models can: a) alter predictions made about the outcome of vocally-coordinated group departures; b) generate different patterns of song synchronisation than models that only include dyadic interactions; and c) inform models of cultural evolution of vocal communication. Together, our examples highlight the potential power of higher-order networks to study animal vocal communication. We then build on our case studies to identify key challenges in applying higher-order network approaches in this context and outline important research questions these techniques could help answer.
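The core distinction the abstract draws can be made concrete in a few lines: a chorus heard by several receivers is one hyperedge (a set of individuals), whereas the dyadic projection replaces it with all pairwise edges and loses the information that the event was collective. The individuals and events below are invented for illustration.

```python
from itertools import combinations

# A hypergraph of vocal interaction events: each event is a frozenset of
# the individuals involved (hypothetical example data).
events = [frozenset({"A", "B", "C"}),   # one song heard by two receivers
          frozenset({"C", "D"})]        # an ordinary duet

# Dyadic projection: every event becomes its set of pairwise edges.
dyadic = {frozenset(pair)
          for e in events
          for pair in combinations(sorted(e), 2)}

# The three-way event survives as a single interaction in the hypergraph...
assert frozenset({"A", "B", "C"}) in events
# ...but its projection is indistinguishable from three separate duets.
assert dyadic == {frozenset(p)
                  for p in [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]}
```

Dynamics defined on the hyperedges (e.g. a threshold for a group departure) can therefore differ from dynamics defined on the projected pairwise graph, which is the source of the altered predictions in the case studies above.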
2004.12319
Marc Timme
Malte Schr\"oder, Andreas Bossert, Moritz Kersting, Sebastian Aeffner, Justin Coetzee, Marc Timme, Jan Schl\"uter
COVID-19 in Africa -- outbreak despite interventions?
16 pages including 3 Figures and Executive Summary
Sci. Rep. 11, 4956 (2021)
10.1038/s41598-021-84487-0
null
q-bio.PE nlin.AO
http://creativecommons.org/licenses/by-sa/4.0/
Few African countries have reported COVID-19 case numbers above $1\,000$ as of April 18, 2020, with South Africa, reporting $3\,034$ cases, being hit hardest in Sub-Saharan Africa. Several African countries, especially South Africa, have already taken strong non-pharmaceutical interventions that include physical distancing, restricted economic, educational and leisure activities, and reduced human mobility options. The required strength and overall effectiveness of such interventions, however, are debated because of simultaneous but opposing interests in most African countries: strongly limited health care capacities and testing capabilities largely conflict with pressured national economies and socio-economic hardships on the individual level, limiting compliance with intervention targets. Here we investigate implications of interventions on the COVID-19 outbreak dynamics, focusing on South Africa before and after the national lockdown enacted on March 27, 2020. Our analysis shows that the initial exponential growth of existing case numbers is consistent with doubling times of about $2.5$ days. After lockdown, the growth remains exponential, now with doubling times of 18 days, in contrast to the subexponential growth reported for Hubei/China after lockdown. Moreover, a scenario analysis of a computational data-driven agent-based mobility model for the Nelson Mandela Bay Municipality (with $1.14$ million inhabitants) hints that keeping current levels of intervention measures and compliance until the end of April is of insufficient length and still too weak, too unspecific, or too inconsistently complied with to avoid overloading local intensive care capacity. Yet, enduring, slightly stronger, more specific interventions combined with sufficient compliance may constitute a viable option for regions in South Africa and potentially for large parts of the African continent.
[ { "created": "Sun, 26 Apr 2020 08:56:42 GMT", "version": "v1" } ]
2021-06-01
[ [ "Schröder", "Malte", "" ], [ "Bossert", "Andreas", "" ], [ "Kersting", "Moritz", "" ], [ "Aeffner", "Sebastian", "" ], [ "Coetzee", "Justin", "" ], [ "Timme", "Marc", "" ], [ "Schlüter", "Jan", "" ] ]
Few African countries have reported COVID-19 case numbers above $1\,000$ as of April 18, 2020, with South Africa, reporting $3\,034$ cases, being hit hardest in Sub-Saharan Africa. Several African countries, especially South Africa, have already taken strong non-pharmaceutical interventions that include physical distancing, restricted economic, educational and leisure activities, and reduced human mobility options. The required strength and overall effectiveness of such interventions, however, are debated because of simultaneous but opposing interests in most African countries: strongly limited health care capacities and testing capabilities largely conflict with pressured national economies and socio-economic hardships on the individual level, limiting compliance with intervention targets. Here we investigate implications of interventions on the COVID-19 outbreak dynamics, focusing on South Africa before and after the national lockdown enacted on March 27, 2020. Our analysis shows that the initial exponential growth of existing case numbers is consistent with doubling times of about $2.5$ days. After lockdown, the growth remains exponential, now with doubling times of 18 days, in contrast to the subexponential growth reported for Hubei/China after lockdown. Moreover, a scenario analysis of a computational data-driven agent-based mobility model for the Nelson Mandela Bay Municipality (with $1.14$ million inhabitants) hints that keeping current levels of intervention measures and compliance until the end of April is of insufficient length and still too weak, too unspecific, or too inconsistently complied with to avoid overloading local intensive care capacity. Yet, enduring, slightly stronger, more specific interventions combined with sufficient compliance may constitute a viable option for regions in South Africa and potentially for large parts of the African continent.
2012.05043
Gustau Camps-Valls
Katja Berger, Jochem Verrelst, Jean-Baptiste F\'eret, Tobias Hank, Matthias Wocher, Wolfram Mauser, Gustau Camps-Valls
Retrieval of aboveground crop nitrogen content with a hybrid machine learning method
null
Preprint version of the paper in International Journal of Applied Earth Observation and Geoinformation, Volume 92, October 2020, 102174
10.1016/j.jag.2020.102174
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Hyperspectral acquisitions have proven to be the most informative Earth observation data source for the estimation of nitrogen (N) content, which is the main limiting nutrient for plant growth and thus agricultural production. In the past, empirical algorithms have been widely employed to retrieve information on this biochemical plant component from canopy reflectance. However, these approaches do not seek a cause-effect relationship based on physical laws. Moreover, most studies relied solely on the correlation of chlorophyll content with nitrogen, and thus neglected the fact that most N is bound in proteins. Our study presents a hybrid retrieval method using a physically-based approach combined with machine learning regression to estimate crop N content. Within the workflow, the leaf optical properties model PROSPECT-PRO, including the newly calibrated specific absorption coefficients (SAC) of proteins, was coupled with the canopy reflectance model 4SAIL to form PROSAIL-PRO. The latter was then employed to generate a training database for advanced probabilistic machine learning methods: a standard homoscedastic Gaussian process (GP) and a heteroscedastic GP regression that accounts for signal-to-noise relations. Both GP models have the property of providing confidence intervals for the estimates, which sets them apart from other machine learners. GP-based band analysis identified optimal spectral settings with ten bands mainly situated in the shortwave infrared (SWIR) spectral region. Use of well-known protein absorption bands from the literature showed comparable results. Finally, the heteroscedastic GP model was successfully applied on airborne hyperspectral data for N mapping. We conclude that GP algorithms, and in particular the heteroscedastic GP, should be implemented for global agricultural monitoring of aboveground N from future imaging spectroscopy data.
[ { "created": "Mon, 7 Dec 2020 13:06:59 GMT", "version": "v1" } ]
2020-12-10
[ [ "Berger", "Katja", "" ], [ "Verrelst", "Jochem", "" ], [ "Féret", "Jean-Baptiste", "" ], [ "Hank", "Tobias", "" ], [ "Wocher", "Matthias", "" ], [ "Mauser", "Wolfram", "" ], [ "Camps-Valls", "Gustau", "" ] ]
Hyperspectral acquisitions have proven to be the most informative Earth observation data source for the estimation of nitrogen (N) content, which is the main limiting nutrient for plant growth and thus agricultural production. In the past, empirical algorithms have been widely employed to retrieve information on this biochemical plant component from canopy reflectance. However, these approaches do not seek a cause-effect relationship based on physical laws. Moreover, most studies relied solely on the correlation of chlorophyll content with nitrogen, and thus neglected the fact that most N is bound in proteins. Our study presents a hybrid retrieval method using a physically-based approach combined with machine learning regression to estimate crop N content. Within the workflow, the leaf optical properties model PROSPECT-PRO, including the newly calibrated specific absorption coefficients (SAC) of proteins, was coupled with the canopy reflectance model 4SAIL to form PROSAIL-PRO. The latter was then employed to generate a training database for advanced probabilistic machine learning methods: a standard homoscedastic Gaussian process (GP) and a heteroscedastic GP regression that accounts for signal-to-noise relations. Both GP models have the property of providing confidence intervals for the estimates, which sets them apart from other machine learners. GP-based band analysis identified optimal spectral settings with ten bands mainly situated in the shortwave infrared (SWIR) spectral region. Use of well-known protein absorption bands from the literature showed comparable results. Finally, the heteroscedastic GP model was successfully applied on airborne hyperspectral data for N mapping. We conclude that GP algorithms, and in particular the heteroscedastic GP, should be implemented for global agricultural monitoring of aboveground N from future imaging spectroscopy data.
2004.00995
Diego Oyarz\'un
Evangelos-Marios Nikolados, Andrea Y. Wei{\ss}e, Diego A. Oyarz\'un
Prediction of cellular burden with host-circuit models
null
null
null
null
q-bio.MN q-bio.BM q-bio.CB q-bio.SC
http://creativecommons.org/licenses/by/4.0/
Heterologous gene expression draws resources from host cells. These resources include vital components to sustain growth and replication, and the resulting cellular burden is a widely recognised bottleneck in the design of robust circuits. In this tutorial we discuss the use of computational models that integrate gene circuits and the physiology of host cells. Through various use cases, we illustrate the power of host-circuit models to predict the impact of design parameters on both burden and circuit functionality. Our approach relies on a new generation of computational models for microbial growth that can flexibly accommodate resource bottlenecks encountered in gene circuit design. Adoption of this modelling paradigm can facilitate fast and robust design cycles in synthetic biology.
[ { "created": "Thu, 2 Apr 2020 13:47:24 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2020 07:39:07 GMT", "version": "v2" } ]
2020-04-06
[ [ "Nikolados", "Evangelos-Marios", "" ], [ "Weiße", "Andrea Y.", "" ], [ "Oyarzún", "Diego A.", "" ] ]
Heterologous gene expression draws resources from host cells. These resources include vital components to sustain growth and replication, and the resulting cellular burden is a widely recognised bottleneck in the design of robust circuits. In this tutorial we discuss the use of computational models that integrate gene circuits and the physiology of host cells. Through various use cases, we illustrate the power of host-circuit models to predict the impact of design parameters on both burden and circuit functionality. Our approach relies on a new generation of computational models for microbial growth that can flexibly accommodate resource bottlenecks encountered in gene circuit design. Adoption of this modelling paradigm can facilitate fast and robust design cycles in synthetic biology.
1403.3792
Jeferson J. Arenzon
Charlotte Rulquin and Jeferson J. Arenzon
Globally synchronized oscillations in complex cyclic games
null
Phys. Rev. E 89 (2014) 032133
10.1103/PhysRevE.89.032133
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Rock-Paper-Scissors (RPS) game and its generalizations with ${\cal S}>3$ species are well-studied models for cyclically interacting populations. Four is, however, the minimum number of species that, by allowing other interactions beyond the single, cyclic loop, breaks both the full intransitivity of the food graph and the one predator, one prey symmetry. L\"utz {\it et al} (J. Theor. Biol. {\bf 317} (2013) 286) have shown the existence, on a square lattice, of two distinct phases, with either four or three coexisting species. In both phases, each agent is eventually replaced by one of its predators but these strategy oscillations remain localized as long as the interactions are short ranged. Distant regions may be either out of phase or cycling through different food web subloops (if any). Here we show that upon replacing a minimum fraction $Q_c$ of the short range interactions by long range ones, there is a Hopf bifurcation and global oscillations become stable. Surprisingly, to build such long distance, global synchronization, the four species coexistence phase requires fewer long range interactions than the three species phase, while one would naively expect the contrary. Moreover, deviations from highly homogeneous conditions ($\chi=0$ or 1) increase $Q_c$ and the more heterogeneous the food web, the harder the synchronization. By further increasing $Q$, while the three species phase remains stable, the four species one has a transition to an absorbing, single species state. The existence of a phase with global oscillations for ${\cal S}>3$, when the interaction graph has multiple subloops and several possible local cycles, leads to the conjecture that global oscillations are a general characteristic, even for large, realistic food webs.
[ { "created": "Sat, 15 Mar 2014 11:55:55 GMT", "version": "v1" } ]
2014-03-28
[ [ "Rulquin", "Charlotte", "" ], [ "Arenzon", "Jeferson J.", "" ] ]
The Rock-Paper-Scissors (RPS) game and its generalizations with ${\cal S}>3$ species are well-studied models for cyclically interacting populations. Four is, however, the minimum number of species that, by allowing other interactions beyond the single, cyclic loop, breaks both the full intransitivity of the food graph and the one predator, one prey symmetry. L\"utz {\it et al} (J. Theor. Biol. {\bf 317} (2013) 286) have shown the existence, on a square lattice, of two distinct phases, with either four or three coexisting species. In both phases, each agent is eventually replaced by one of its predators but these strategy oscillations remain localized as long as the interactions are short ranged. Distant regions may be either out of phase or cycling through different food web subloops (if any). Here we show that upon replacing a minimum fraction $Q_c$ of the short range interactions by long range ones, there is a Hopf bifurcation and global oscillations become stable. Surprisingly, to build such long distance, global synchronization, the four species coexistence phase requires fewer long range interactions than the three species phase, while one would naively expect the contrary. Moreover, deviations from highly homogeneous conditions ($\chi=0$ or 1) increase $Q_c$ and the more heterogeneous the food web, the harder the synchronization. By further increasing $Q$, while the three species phase remains stable, the four species one has a transition to an absorbing, single species state. The existence of a phase with global oscillations for ${\cal S}>3$, when the interaction graph has multiple subloops and several possible local cycles, leads to the conjecture that global oscillations are a general characteristic, even for large, realistic food webs.
1409.5352
Stefan Thurner
Maximilian Sadilek and Stefan Thurner
Physiologically motivated multiplex Kuramoto model describes phase diagram of cortical activity
8 pages, 3 figures
Scientific Reports 5, 10015, (2015)
null
null
q-bio.NC cond-mat.stat-mech nlin.CD physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive a two-layer multiplex Kuramoto model from weakly coupled Wilson-Cowan oscillators on a cortical network with inhibitory synaptic time delays. Depending on the coupling strength and a phase shift parameter, related to cerebral blood flow and GABA concentration, respectively, we numerically identify three macroscopic phases: unsynchronized, synchronized, and chaotic dynamics. These correspond to physiological background-, epileptic seizure-, and resting-state cortical activity, respectively. We also observe frequency suppression at the transition from resting-state to seizure activity.
[ { "created": "Mon, 15 Sep 2014 20:23:46 GMT", "version": "v1" } ]
2016-10-03
[ [ "Sadilek", "Maximilian", "" ], [ "Thurner", "Stefan", "" ] ]
We derive a two-layer multiplex Kuramoto model from weakly coupled Wilson-Cowan oscillators on a cortical network with inhibitory synaptic time delays. Depending on the coupling strength and a phase shift parameter, related to cerebral blood flow and GABA concentration, respectively, we numerically identify three macroscopic phases: unsynchronized, synchronized, and chaotic dynamics. These correspond to physiological background-, epileptic seizure-, and resting-state cortical activity, respectively. We also observe frequency suppression at the transition from resting-state to seizure activity.
2102.04842
Rob Phillips
Rob Phillips
Schr\"{o}dinger's "What is Life?" at 75
null
null
null
null
q-bio.OT physics.bio-ph physics.hist-ph
http://creativecommons.org/licenses/by/4.0/
2019 marked the 75th anniversary of the publication of Erwin Schr\"{o}dinger's "What is Life?", a short book described by Roger Penrose in his preface to a reprint of this classic as "among the most influential scientific writings of the 20th century." In this article, I review the long argument made by Schr\"{o}dinger as he mused on how the laws of physics could help us understand "the events in space and time which take place within the spatial boundary of a living organism." Though Schr\"{o}dinger's book is often hailed for its influence on some of the titans who founded molecular biology, this article takes a different tack. Instead of exploring the way the book touched biologists such as James Watson and Francis Crick, as well as its critical reception by others such as Linus Pauling and Max Perutz, I argue that Schr\"{o}dinger's classic is a timeless manifesto, rather than a dated historical curiosity. "What is Life?" is full of timely outlooks and approaches to understanding the mysterious living world that includes and surrounds us and can instead be viewed as a call to arms to tackle the great unanswered challenges in the study of living matter that remain for 21$^{st}$ century science.
[ { "created": "Sun, 7 Feb 2021 20:04:55 GMT", "version": "v1" } ]
2021-02-10
[ [ "Phillips", "Rob", "" ] ]
2019 marked the 75th anniversary of the publication of Erwin Schr\"{o}dinger's "What is Life?", a short book described by Roger Penrose in his preface to a reprint of this classic as "among the most influential scientific writings of the 20th century." In this article, I review the long argument made by Schr\"{o}dinger as he mused on how the laws of physics could help us understand "the events in space and time which take place within the spatial boundary of a living organism." Though Schr\"{o}dinger's book is often hailed for its influence on some of the titans who founded molecular biology, this article takes a different tack. Instead of exploring the way the book touched biologists such as James Watson and Francis Crick, as well as its critical reception by others such as Linus Pauling and Max Perutz, I argue that Schr\"{o}dinger's classic is a timeless manifesto, rather than a dated historical curiosity. "What is Life?" is full of timely outlooks and approaches to understanding the mysterious living world that includes and surrounds us and can instead be viewed as a call to arms to tackle the great unanswered challenges in the study of living matter that remain for 21$^{st}$ century science.
q-bio/0508026
Carla Goldman
Fernando G. Carvalhaes and Carla Goldman
Shock waves in virus fitness evolution
9 pages 2 figures, submitted to Phys. Rev. Lett
null
null
null
q-bio.PE
null
We consider a nonlinear partial differential equation of conservation type to describe the dynamics of vesicular stomatitis virus observed in aliquots of fixed particle number taken from an evolving clone at periodic intervals of time \cite{novella 95}. The changes in time behavior of fitness function noticed in experimental data are related to a crossover exhibited by the solutions to this equation in the transient regime for pulse-like initial conditions. As a consequence, the average replication rate of the population is predicted to reach a plateau as a power t^{-(1/2)}.
[ { "created": "Sat, 20 Aug 2005 02:34:47 GMT", "version": "v1" } ]
2007-05-23
[ [ "Carvalhaes", "Fernando G.", "" ], [ "Goldman", "Carla", "" ] ]
We consider a nonlinear partial differential equation of conservation type to describe the dynamics of vesicular stomatitis virus observed in aliquots of fixed particle number taken from an evolving clone at periodic intervals of time \cite{novella 95}. The changes in time behavior of fitness function noticed in experimental data are related to a crossover exhibited by the solutions to this equation in the transient regime for pulse-like initial conditions. As a consequence, the average replication rate of the population is predicted to reach a plateau as a power t^{-(1/2)}.
2102.03305
Yuqiang Ma
Hong-ming Ding, Yue-wen Yin, Song-di Ni, Yan-jing Sheng, Yu-qiang Ma
Accurate Evaluation on the Interactions of SARS-CoV-2 with Its Receptor ACE2 and Antibodies CR3022/CB6
18 pages, 5 figures
CHIN.PHYS.LETT. Vol.38, No.1(2021)018701 Express Letter
10.1088/0256-307X/38/1/018701
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spread of the coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has become a global health crisis. The binding affinity of SARS-CoV-2 (in particular the receptor binding domain, RBD) to its receptor angiotensin converting enzyme 2 (ACE2) and to antibodies is of great importance in understanding the infectivity of COVID-19 and evaluating candidate therapeutics for COVID-19. In this work, we propose a new method based on molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) to accurately calculate the free energy of SARS-CoV-2 RBD binding to ACE2 and antibodies. The calculated binding free energy of SARS-CoV-2 RBD to ACE2 is -13.3 kcal/mol, and that of SARS-CoV RBD to ACE2 is -11.4 kcal/mol, which agrees well with experimental results (-11.3 kcal/mol and -10.1 kcal/mol, respectively). Moreover, we take two recently reported antibodies as examples and calculate the free energy of the antibodies binding to SARS-CoV-2 RBD, which is also consistent with the experimental findings. Further, within the framework of the modified MM/PBSA, we determine the key residues and the main driving forces for the SARS-CoV-2 RBD/CB6 interaction by the computational alanine scanning method. The present study offers a computationally efficient and numerically reliable method to evaluate the free energy of SARS-CoV-2 binding to other proteins, which may stimulate the development of therapeutics against COVID-19 in real applications.
[ { "created": "Sun, 17 Jan 2021 13:11:22 GMT", "version": "v1" } ]
2021-02-10
[ [ "Ding", "Hong-ming", "" ], [ "Yin", "Yue-wen", "" ], [ "Ni", "Song-di", "" ], [ "Sheng", "Yan-jing", "" ], [ "Ma", "Yu-qiang", "" ] ]
The spread of the coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has become a global health crisis. The binding affinity of SARS-CoV-2 (in particular the receptor binding domain, RBD) to its receptor angiotensin converting enzyme 2 (ACE2) and to antibodies is of great importance in understanding the infectivity of COVID-19 and evaluating candidate therapeutics for COVID-19. In this work, we propose a new method based on molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) to accurately calculate the free energy of SARS-CoV-2 RBD binding to ACE2 and antibodies. The calculated binding free energy of SARS-CoV-2 RBD to ACE2 is -13.3 kcal/mol, and that of SARS-CoV RBD to ACE2 is -11.4 kcal/mol, which agrees well with experimental results (-11.3 kcal/mol and -10.1 kcal/mol, respectively). Moreover, we take two recently reported antibodies as examples and calculate the free energy of the antibodies binding to SARS-CoV-2 RBD, which is also consistent with the experimental findings. Further, within the framework of the modified MM/PBSA, we determine the key residues and the main driving forces for the SARS-CoV-2 RBD/CB6 interaction by the computational alanine scanning method. The present study offers a computationally efficient and numerically reliable method to evaluate the free energy of SARS-CoV-2 binding to other proteins, which may stimulate the development of therapeutics against COVID-19 in real applications.
1711.02332
Uttam Ghosh
Srijan Sengupta, Uttam Ghosh, Susmita Sarkar and Shantanu Das
Application of Fractional Derivatives in Characterization of ECG graphs of Right Ventricular Hypertrophy Patients
16 pages, 3 figures, 14 tables, submitted on 2017
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are many functions which are continuous everywhere but non-differentiable at some or all points; such functions are termed unreachable functions. Graphs representing such unreachable functions are called unreachable graphs. For example, an ECG is such an unreachable graph. Classical calculus fails in their characterization, as derivatives do not exist at the unreachable points. Such unreachable functions can be characterized by fractional calculus, since fractional derivatives exist at those unreachable points where classical derivatives do not. Definitions of fractional derivatives have been proposed by several mathematicians, such as Grunwald-Letnikov, Riemann-Liouville, Caputo, and Jumarie, to develop the theory of fractional calculus. In this paper, we have used the Jumarie-type fractional derivative and, consequently, the phase transition (P.T.), which is the difference between the left and right fractional derivatives, to characterize those points. A comparative study has been done between a normal ECG sample and a problematic ECG sample (Right Ventricular Hypertrophy) with the help of the above-mentioned mathematical tool.
[ { "created": "Tue, 7 Nov 2017 08:21:55 GMT", "version": "v1" } ]
2017-11-08
[ [ "Sengupta", "Srijan", "" ], [ "Ghosh", "Uttam", "" ], [ "Sarkar", "Susmita", "" ], [ "Das", "Shantanu", "" ] ]
There are many functions which are continuous everywhere but non-differentiable at some or all points; such functions are termed unreachable functions. Graphs representing such unreachable functions are called unreachable graphs. For example, an ECG is such an unreachable graph. Classical calculus fails in their characterization, as derivatives do not exist at the unreachable points. Such unreachable functions can be characterized by fractional calculus, since fractional derivatives exist at those unreachable points where classical derivatives do not. Definitions of fractional derivatives have been proposed by several mathematicians, such as Grunwald-Letnikov, Riemann-Liouville, Caputo, and Jumarie, to develop the theory of fractional calculus. In this paper, we have used the Jumarie-type fractional derivative and, consequently, the phase transition (P.T.), which is the difference between the left and right fractional derivatives, to characterize those points. A comparative study has been done between a normal ECG sample and a problematic ECG sample (Right Ventricular Hypertrophy) with the help of the above-mentioned mathematical tool.
0705.2913
Jan Karbowski
Jan Karbowski
Global and regional brain metabolic scaling and its functional consequences
Brain metabolism scales with its mass well above 3/4 exponent
BMC Biology 5:18 (2007)
null
null
q-bio.NC q-bio.TO
null
Background: Information processing in the brain requires large amounts of metabolic energy, the spatial distribution of which is highly heterogeneous reflecting complex activity patterns in the mammalian brain. Results: Here, it is found based on empirical data that, despite this heterogeneity, the volume-specific cerebral glucose metabolic rate of many different brain structures scales with brain volume with almost the same exponent around -0.15. The exception is white matter, the metabolism of which seems to scale with a standard specific exponent -1/4. The scaling exponents for the total oxygen and glucose consumptions in the brain in relation to its volume are identical and equal to $0.86\pm 0.03$, which is significantly larger than the exponents 3/4 and 2/3 suggested for whole body basal metabolism on body mass. Conclusions: These findings show explicitly that in mammals (i) volume-specific scaling exponents of the cerebral energy expenditure in different brain parts are approximately constant (except brain stem structures), and (ii) the total cerebral metabolic exponent against brain volume is greater than the much-cited Kleiber's 3/4 exponent. The neurophysiological factors that might account for the regional uniformity of the exponents and for the excessive scaling of the total brain metabolism are discussed, along with the relationship between brain metabolic scaling and computation.
[ { "created": "Mon, 21 May 2007 04:13:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Karbowski", "Jan", "" ] ]
Background: Information processing in the brain requires large amounts of metabolic energy, the spatial distribution of which is highly heterogeneous reflecting complex activity patterns in the mammalian brain. Results: Here, it is found based on empirical data that, despite this heterogeneity, the volume-specific cerebral glucose metabolic rate of many different brain structures scales with brain volume with almost the same exponent around -0.15. The exception is white matter, the metabolism of which seems to scale with a standard specific exponent -1/4. The scaling exponents for the total oxygen and glucose consumptions in the brain in relation to its volume are identical and equal to $0.86\pm 0.03$, which is significantly larger than the exponents 3/4 and 2/3 suggested for whole body basal metabolism on body mass. Conclusions: These findings show explicitly that in mammals (i) volume-specific scaling exponents of the cerebral energy expenditure in different brain parts are approximately constant (except brain stem structures), and (ii) the total cerebral metabolic exponent against brain volume is greater than the much-cited Kleiber's 3/4 exponent. The neurophysiological factors that might account for the regional uniformity of the exponents and for the excessive scaling of the total brain metabolism are discussed, along with the relationship between brain metabolic scaling and computation.
2205.00507
Ganchao Wei
Ganchao Wei, Ian H. Stevenson
Dynamic modeling of spike count data with Conway-Maxwell Poisson variability
6 figures
null
null
null
q-bio.NC stat.AP
http://creativecommons.org/publicdomain/zero/1.0/
In many areas of the brain, neural spiking activity covaries with features of the external world, such as sensory stimuli or an animal's movement. Experimental findings suggest that the variability of neural activity changes over time and may provide information about the external world beyond the information provided by the average neural activity. To flexibly track time-varying neural response properties, here we developed a dynamic model with Conway-Maxwell Poisson (CMP) observations. The CMP distribution can flexibly describe firing patterns that are both under- and over-dispersed relative to the Poisson distribution. Here we track parameters of the CMP distribution as they vary over time. Using simulations, we show that a normal approximation can accurately track dynamics in state vectors for both the centering and shape parameters ($\lambda$ and $\nu$). We then fit our model to neural data from neurons in primary visual cortex and "place cells" in the hippocampus. We find that this method out-performs previous dynamic models based on the Poisson distribution. The dynamic CMP model provides a flexible framework for tracking time-varying non-Poisson count data and may also have applications beyond neuroscience.
[ { "created": "Sun, 1 May 2022 16:37:13 GMT", "version": "v1" }, { "created": "Sat, 8 Oct 2022 16:26:54 GMT", "version": "v2" } ]
2022-10-11
[ [ "Wei", "Ganchao", "" ], [ "Stevenson", "Ian H.", "" ] ]
In many areas of the brain, neural spiking activity covaries with features of the external world, such as sensory stimuli or an animal's movement. Experimental findings suggest that the variability of neural activity changes over time and may provide information about the external world beyond the information provided by the average neural activity. To flexibly track time-varying neural response properties, here we developed a dynamic model with Conway-Maxwell Poisson (CMP) observations. The CMP distribution can flexibly describe firing patterns that are both under- and over-dispersed relative to the Poisson distribution. Here we track parameters of the CMP distribution as they vary over time. Using simulations, we show that a normal approximation can accurately track dynamics in state vectors for both the centering and shape parameters ($\lambda$ and $\nu$). We then fit our model to neural data from neurons in primary visual cortex and "place cells" in the hippocampus. We find that this method outperforms previous dynamic models based on the Poisson distribution. The dynamic CMP model provides a flexible framework for tracking time-varying non-Poisson count data and may also have applications beyond neuroscience.
q-bio/0611076
Thierry Rabilloud
Sylvie Luche, V\'eronique Santoni (BPMP), Thierry Rabilloud
Evaluation of nonionic and zwitterionic detergents as membrane protein solubilizers in two-dimensional electrophoresis
website publisher http://www.interscience.wiley.com
Proteomics 3 (03/2003) 249-53
10.1002/pmic.200390037
null
q-bio.GN
null
The solubilizing power of various nonionic and zwitterionic detergents as membrane protein solubilizers for two-dimensional electrophoresis was investigated. Human red blood cell ghosts and Arabidopsis thaliana leaf membrane proteins were used as model systems. Efficient detergents could be found in each class, i.e. with oligooxyethylene, sugar or sulfobetaine polar heads. Among the commercially available nonionic detergents, dodecyl maltoside and decaethylene glycol mono hexadecyl ether proved most efficient. They complement the more classical sulfobetaine detergents to widen the scope of useful detergents for the solubilization of membrane proteins in proteomics.
[ { "created": "Fri, 24 Nov 2006 12:57:48 GMT", "version": "v1" } ]
2016-08-16
[ [ "Luche", "Sylvie", "", "BPMP" ], [ "Santoni", "Véronique", "", "BPMP" ], [ "Rabilloud", "Thierry", "" ] ]
The solubilizing power of various nonionic and zwitterionic detergents as membrane protein solubilizers for two-dimensional electrophoresis was investigated. Human red blood cell ghosts and Arabidopsis thaliana leaf membrane proteins were used as model systems. Efficient detergents could be found in each class, i.e. with oligooxyethylene, sugar or sulfobetaine polar heads. Among the commercially available nonionic detergents, dodecyl maltoside and decaethylene glycol mono hexadecyl ether proved most efficient. They complement the more classical sulfobetaine detergents to widen the scope of useful detergents for the solubilization of membrane proteins in proteomics.
q-bio/0501004
Iaroslav Ispolatov
Iaroslav Ispolatov, Anton Yuryev, Ilya Mazo, and Sergei Maslov
Binding properties and evolution of homodimers in protein-protein interaction networks
16 pages, 3 figures
Nucleic Acids Research 2005 33(11):3629-3635
null
null
q-bio.GN cond-mat.dis-nn q-bio.MN
null
We demonstrate that Protein-Protein Interaction (PPI) networks in several eukaryotic organisms contain significantly more self-interacting proteins than expected if such homodimers appeared randomly in the course of evolution. We also show that on average homodimers have twice as many interaction partners as non-self-interacting proteins. More specifically, the likelihood of a protein to physically interact with itself was found to be proportional to the total number of its binding partners. These properties of dimers are in agreement with a phenomenological model in which individual proteins differ from each other in the degree of their ``stickiness'' or general propensity towards interaction with other proteins, including themselves. A duplication of a self-interacting protein creates a pair of paralogous proteins interacting with each other. We show that such pairs occur more frequently than could be explained by pure chance alone. Similar to homodimers, proteins involved in heterodimers with their paralogs on average have twice as many interaction partners as the rest of the network. The likelihood of a pair of paralogous proteins to interact with each other was also shown to decrease with their sequence similarity. This all points to the conclusion that most interactions between paralogs are inherited from ancestral homodimeric proteins, rather than established de novo after the duplication. We finally discuss possible implications of our empirical observations from functional and evolutionary standpoints.
[ { "created": "Tue, 4 Jan 2005 00:59:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ispolatov", "Iaroslav", "" ], [ "Yuryev", "Anton", "" ], [ "Mazo", "Ilya", "" ], [ "Maslov", "Sergei", "" ] ]
We demonstrate that Protein-Protein Interaction (PPI) networks in several eukaryotic organisms contain significantly more self-interacting proteins than expected if such homodimers appeared randomly in the course of evolution. We also show that on average homodimers have twice as many interaction partners as non-self-interacting proteins. More specifically, the likelihood of a protein to physically interact with itself was found to be proportional to the total number of its binding partners. These properties of dimers are in agreement with a phenomenological model in which individual proteins differ from each other in the degree of their ``stickiness'' or general propensity towards interaction with other proteins, including themselves. A duplication of a self-interacting protein creates a pair of paralogous proteins interacting with each other. We show that such pairs occur more frequently than could be explained by pure chance alone. Similar to homodimers, proteins involved in heterodimers with their paralogs on average have twice as many interaction partners as the rest of the network. The likelihood of a pair of paralogous proteins to interact with each other was also shown to decrease with their sequence similarity. This all points to the conclusion that most interactions between paralogs are inherited from ancestral homodimeric proteins, rather than established de novo after the duplication. We finally discuss possible implications of our empirical observations from functional and evolutionary standpoints.
2202.12406
Marcelo Tragtenberg Dr.
R. V. Stenzinger and M. H. R. Tragtenberg
Cardiac Reentry Modeled by Spatiotemporal Chaos in a Coupled Map Lattice
15 pages, 9 figures. Eur. Phys. J. Spec. Top. (2022)
null
10.1140/epjs/s11734-022-00473-1
null
q-bio.TO nlin.CD nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Arrhythmias are potentially fatal disruptions to the normal heart rhythm, but their underlying dynamics are still poorly understood. Theoretical modeling is an important tool to fill this gap. Typical studies often employ detailed multidimensional conductance-based models. We describe the cardiac muscle with a three-dimensional map-based membrane potential model in lattices. Although maps retain the biophysical behavior of cells and generate computationally efficient tissue models, few studies have used them to understand cardiac dynamics. Our study captures healthy and pathological behaviors with fewer parameters and simpler equations than conductance models. We successfully generalize results obtained previously with reaction-diffusion systems, showing how chaotic properties result in reentry, a pathological propagation of stimuli that evolves to arrhythmias with complex spatiotemporal features. The bifurcation diagram of the single cell is very similar to that obtained in a detailed conductance-based model. We find torsade de pointes, a clinical manifestation of some types of tachycardia in the electrocardiogram, using a generic sampling of the whole network during spiral waves. We also find a novel type of dynamical pattern, with wavefronts composed of synchronized cardiac plateaus and bursts. Our study provides the first in-depth look at the use of map-based models to simulate complex cardiac dynamics.
[ { "created": "Sat, 12 Feb 2022 19:53:36 GMT", "version": "v1" }, { "created": "Sat, 5 Mar 2022 20:18:52 GMT", "version": "v2" } ]
2022-03-08
[ [ "Stenzinger", "R. V.", "" ], [ "Tragtenberg", "M. H. R.", "" ] ]
Arrhythmias are potentially fatal disruptions to the normal heart rhythm, but their underlying dynamics are still poorly understood. Theoretical modeling is an important tool to fill this gap. Typical studies often employ detailed multidimensional conductance-based models. We describe the cardiac muscle with a three-dimensional map-based membrane potential model in lattices. Although maps retain the biophysical behavior of cells and generate computationally efficient tissue models, few studies have used them to understand cardiac dynamics. Our study captures healthy and pathological behaviors with fewer parameters and simpler equations than conductance models. We successfully generalize results obtained previously with reaction-diffusion systems, showing how chaotic properties result in reentry, a pathological propagation of stimuli that evolves to arrhythmias with complex spatiotemporal features. The bifurcation diagram of the single cell is very similar to that obtained in a detailed conductance-based model. We find torsade de pointes, a clinical manifestation of some types of tachycardia in the electrocardiogram, using a generic sampling of the whole network during spiral waves. We also find a novel type of dynamical pattern, with wavefronts composed of synchronized cardiac plateaus and bursts. Our study provides the first in-depth look at the use of map-based models to simulate complex cardiac dynamics.
2007.14919
Jirong Yi
Jirong Yi, Myung Cho, Xiaodong Wu, Weiyu Xu, and Raghu Mudumbai
Error Correction Codes for COVID-19 Virus and Antibody Testing: Using Pooled Testing to Increase Test Reliability
14 pages, 15 figures
null
null
null
q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a novel method to increase the reliability of COVID-19 virus or antibody tests by using specially designed pooled tests. Instead of testing nasal swab or blood samples from individual persons, we propose to test mixtures of samples from many individuals. The pooled sample testing method proposed in this paper also serves a different purpose: increasing test reliability and providing accurate diagnoses even if the tests themselves are not very accurate. Our method uses ideas from compressed sensing and error-correction coding to correct for a certain number of errors in the test results. The intuition is that when each individual's sample is part of many pooled sample mixtures, the test results from all of the sample mixtures contain redundant information about each individual's diagnosis, which can be exploited to automatically correct for wrong test results in exactly the same way that error correction codes correct errors introduced in noisy communication channels. While such redundancy can also be achieved by simply testing each individual's sample multiple times, we present simulations and theoretical arguments showing that our method is significantly more efficient at increasing diagnostic accuracy. In contrast to group testing and compressed sensing, which aim to reduce the number of required tests, the proposed error correction code idea purposefully uses pooled testing to increase test accuracy, and works not only in the "undersampling" regime, but also in the "oversampling" regime, where the number of tests is larger than the number of subjects. The results in this paper run against the traditional belief that, "even though pooled testing increased test capacity, pooled tests were less reliable than testing individuals separately."
[ { "created": "Wed, 29 Jul 2020 15:52:49 GMT", "version": "v1" } ]
2020-07-30
[ [ "Yi", "Jirong", "" ], [ "Cho", "Myung", "" ], [ "Wu", "Xiaodong", "" ], [ "Xu", "Weiyu", "" ], [ "Mudumbai", "Raghu", "" ] ]
We consider a novel method to increase the reliability of COVID-19 virus or antibody tests by using specially designed pooled tests. Instead of testing nasal swab or blood samples from individual persons, we propose to test mixtures of samples from many individuals. The pooled sample testing method proposed in this paper also serves a different purpose: increasing test reliability and providing accurate diagnoses even if the tests themselves are not very accurate. Our method uses ideas from compressed sensing and error-correction coding to correct for a certain number of errors in the test results. The intuition is that when each individual's sample is part of many pooled sample mixtures, the test results from all of the sample mixtures contain redundant information about each individual's diagnosis, which can be exploited to automatically correct for wrong test results in exactly the same way that error correction codes correct errors introduced in noisy communication channels. While such redundancy can also be achieved by simply testing each individual's sample multiple times, we present simulations and theoretical arguments showing that our method is significantly more efficient at increasing diagnostic accuracy. In contrast to group testing and compressed sensing, which aim to reduce the number of required tests, the proposed error correction code idea purposefully uses pooled testing to increase test accuracy, and works not only in the "undersampling" regime, but also in the "oversampling" regime, where the number of tests is larger than the number of subjects. The results in this paper run against the traditional belief that, "even though pooled testing increased test capacity, pooled tests were less reliable than testing individuals separately."
2310.18708
Yoram Burak
Haggai Agmon and Yoram Burak
Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization
To be presented at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by-nc-nd/4.0/
The storage of continuous variables in working memory is hypothesized to be sustained in the brain by the dynamics of recurrent neural networks (RNNs) whose steady states form continuous manifolds. In some cases, it is thought that the synaptic connectivity supports multiple attractor manifolds, each mapped to a different context or task. For example, in hippocampal area CA3, positions in distinct environments are represented by distinct sets of population activity patterns, each forming a continuum. It has been argued that the embedding of multiple continuous attractors in a single RNN inevitably causes detrimental interference: quenched noise in the synaptic connectivity disrupts the continuity of each attractor, replacing it by a discrete set of steady states that can be conceptualized as lying on local minima of an abstract energy landscape. Consequently, population activity patterns exhibit systematic drifts towards one of these discrete minima, thereby degrading the stored memory over time. Here we show that it is possible to dramatically attenuate these detrimental interference effects by adjusting the synaptic weights. Synaptic weight adjustments are derived from a loss function that quantifies the roughness of the energy landscape along each of the embedded attractor manifolds. By minimizing this loss function, the stability of states can be dramatically improved, without compromising the capacity.
[ { "created": "Sat, 28 Oct 2023 13:36:55 GMT", "version": "v1" } ]
2023-10-31
[ [ "Agmon", "Haggai", "" ], [ "Burak", "Yoram", "" ] ]
The storage of continuous variables in working memory is hypothesized to be sustained in the brain by the dynamics of recurrent neural networks (RNNs) whose steady states form continuous manifolds. In some cases, it is thought that the synaptic connectivity supports multiple attractor manifolds, each mapped to a different context or task. For example, in hippocampal area CA3, positions in distinct environments are represented by distinct sets of population activity patterns, each forming a continuum. It has been argued that the embedding of multiple continuous attractors in a single RNN inevitably causes detrimental interference: quenched noise in the synaptic connectivity disrupts the continuity of each attractor, replacing it by a discrete set of steady states that can be conceptualized as lying on local minima of an abstract energy landscape. Consequently, population activity patterns exhibit systematic drifts towards one of these discrete minima, thereby degrading the stored memory over time. Here we show that it is possible to dramatically attenuate these detrimental interference effects by adjusting the synaptic weights. Synaptic weight adjustments are derived from a loss function that quantifies the roughness of the energy landscape along each of the embedded attractor manifolds. By minimizing this loss function, the stability of states can be dramatically improved, without compromising the capacity.
2106.10627
Gerald Pao
Gerald M Pao, Cameron Smith, Joseph Park, Keichi Takahashi, Wassapon Watanakeesuntorn, Hiroaki Natsukawa, Sreekanth H Chalasani, Tom Lorimer, Ryousei Takano, Nuttida Rungratsameetaweemana, George Sugihara
Experimentally testable whole brain manifolds that recapitulate behavior
20 pages, 15 figures; corresponding author: Gerald Pao geraldpao@gmail.com
null
null
null
q-bio.NC math.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose an algorithm grounded in dynamical systems theory that generalizes manifold learning from a global state representation to a network of local interacting manifolds termed a Generative Manifold Network (GMN). Manifolds are discovered using the convergent cross mapping (CCM) causal inference algorithm and are then compressed into a reduced-redundancy network. The representation is a network of manifolds embedded from observational data, where each orthogonal axis of a local manifold is an embedding of an individually identifiable neuron or brain area that has an exact correspondence in the real world. As such, these can be experimentally manipulated to test hypotheses derived from theory and data analysis. Here we demonstrate that this representation preserves the essential features of the brains of flies, larval zebrafish, and humans. In addition to accurate near-term prediction, the GMN model can be used to synthesize realistic time series of whole-brain neuronal activity and locomotion viewed over the long term. Thus, as a final validation of how well GMN captures essential dynamic information, we show that the artificially generated time series can be used as a training set to predict out-of-sample observed fly locomotion, as well as brain activity in out-of-sample withheld data not used in model building. Remarkably, the artificially generated time series show realistic novel behaviors that do not exist in the training data, but that do exist in the out-of-sample observational data. This suggests that GMN captures inherently emergent properties of the network. We suggest our approach may be a generic recipe for mapping time series observations of any complex nonlinear network into a model that is able to generate naturalistic system behaviors, and that identifies variables that have real-world correspondence and can be experimentally manipulated.
[ { "created": "Sun, 20 Jun 2021 05:17:05 GMT", "version": "v1" } ]
2021-06-22
[ [ "Pao", "Gerald M", "" ], [ "Smith", "Cameron", "" ], [ "Park", "Joseph", "" ], [ "Takahashi", "Keichi", "" ], [ "Watanakeesuntorn", "Wassapon", "" ], [ "Natsukawa", "Hiroaki", "" ], [ "Chalasani", "Sreekanth H"...
We propose an algorithm grounded in dynamical systems theory that generalizes manifold learning from a global state representation to a network of local interacting manifolds termed a Generative Manifold Network (GMN). Manifolds are discovered using the convergent cross mapping (CCM) causal inference algorithm and are then compressed into a reduced-redundancy network. The representation is a network of manifolds embedded from observational data, where each orthogonal axis of a local manifold is an embedding of an individually identifiable neuron or brain area that has an exact correspondence in the real world. As such, these can be experimentally manipulated to test hypotheses derived from theory and data analysis. Here we demonstrate that this representation preserves the essential features of the brains of flies, larval zebrafish, and humans. In addition to accurate near-term prediction, the GMN model can be used to synthesize realistic time series of whole-brain neuronal activity and locomotion viewed over the long term. Thus, as a final validation of how well GMN captures essential dynamic information, we show that the artificially generated time series can be used as a training set to predict out-of-sample observed fly locomotion, as well as brain activity in out-of-sample withheld data not used in model building. Remarkably, the artificially generated time series show realistic novel behaviors that do not exist in the training data, but that do exist in the out-of-sample observational data. This suggests that GMN captures inherently emergent properties of the network. We suggest our approach may be a generic recipe for mapping time series observations of any complex nonlinear network into a model that is able to generate naturalistic system behaviors, and that identifies variables that have real-world correspondence and can be experimentally manipulated.
1204.6226
Michael Deem
Keyao Pan, Krystina C. Subieta, and Michael W. Deem
A Novel Sequence-Based Antigenic Distance Measure for H1N1, with Application to Vaccine Effectiveness and the Selection of Vaccine Strains
26 pages, 7 figures, 2 tables, supplement
Protein Engineering, Design & Selection 24 (2011) 291-299
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
H1N1 influenza causes substantial seasonal illness and was the subtype of the 2009 influenza pandemic. Precise measures of antigenic distance between the vaccine and circulating virus strains help researchers design influenza vaccines with high vaccine effectiveness. We here introduce a sequence-based method to predict vaccine effectiveness in humans. Historical epidemiological data show that this sequence-based method is as predictive of vaccine effectiveness as hemagglutination inhibition (HI) assay data from ferret animal model studies. Interestingly, the expected vaccine effectiveness is greater against H1N1 than H3N2, suggesting a stronger immune response against H1N1 than H3N2. The evolution rate of hemagglutinin in H1N1 is also shown to be greater than that in H3N2, presumably due to greater immune selection pressure.
[ { "created": "Fri, 27 Apr 2012 14:25:16 GMT", "version": "v1" } ]
2012-04-30
[ [ "Pan", "Keyao", "" ], [ "Subieta", "Krystina C.", "" ], [ "Deem", "Michael W.", "" ] ]
H1N1 influenza causes substantial seasonal illness and was the subtype of the 2009 influenza pandemic. Precise measures of antigenic distance between the vaccine and circulating virus strains help researchers design influenza vaccines with high vaccine effectiveness. We here introduce a sequence-based method to predict vaccine effectiveness in humans. Historical epidemiological data show that this sequence-based method is as predictive of vaccine effectiveness as hemagglutination inhibition (HI) assay data from ferret animal model studies. Interestingly, the expected vaccine effectiveness is greater against H1N1 than H3N2, suggesting a stronger immune response against H1N1 than H3N2. The evolution rate of hemagglutinin in H1N1 is also shown to be greater than that in H3N2, presumably due to greater immune selection pressure.
0704.2191
Bo Deng
Bo Deng
Mismatch Repair Error Implies Chargaff's Second Parity Rule
null
null
null
null
q-bio.GN
null
Chargaff's second parity rule holds empirically for most types of DNA: along single strands of DNA, the base contents are equal for complementary bases, A = T, G = C. A Markov chain model is constructed to track the evolution of any single base position along single strands of genomes whose organisms are equipped with replication mismatch repair. Under the key assumptions that mismatch error rates depend primarily on the number of hydrogen bonds of nucleotides, and that the mismatch repair process itself makes strand recognition errors, the model shows that the steady state probabilities for any base position to take on one of the 4 nucleotide bases are equal for complementary bases. As a result, Chargaff's second parity rule is the manifestation of the Law of Large Numbers acting on the steady state probabilities. More importantly, because the model pinpoints mismatch repair as a basis of the rule, it is suitable for experimental verification.
[ { "created": "Tue, 17 Apr 2007 16:15:37 GMT", "version": "v1" }, { "created": "Thu, 20 Sep 2007 15:46:34 GMT", "version": "v2" } ]
2007-09-20
[ [ "Deng", "Bo", "" ] ]
Chargaff's second parity rule holds empirically for most types of DNA: along single strands of DNA, the base contents are equal for complementary bases, A = T, G = C. A Markov chain model is constructed to track the evolution of any single base position along single strands of genomes whose organisms are equipped with replication mismatch repair. Under the key assumptions that mismatch error rates depend primarily on the number of hydrogen bonds of nucleotides, and that the mismatch repair process itself makes strand recognition errors, the model shows that the steady state probabilities for any base position to take on one of the 4 nucleotide bases are equal for complementary bases. As a result, Chargaff's second parity rule is the manifestation of the Law of Large Numbers acting on the steady state probabilities. More importantly, because the model pinpoints mismatch repair as a basis of the rule, it is suitable for experimental verification.
1809.09475
Claudio Angione
Supreeta Vijayakumar, Max Conway, Pietro Li\'o, Claudio Angione
Seeing the wood for the trees: a forest of methods for optimisation and omic-network integration in metabolic modelling
null
Briefings in Bioinformatics, bbx053, 2017
10.1093/bib/bbx053
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to be able to see the wood (or the forest) for the trees. Driven by this analogy, we here present a "forest" of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in an interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim to develop a vision of how existing methods could hybridise and become more complex. We then provide the first hands-on tutorial for multi-objective optimisation of metabolic models in R. We finally discuss the implementation of multi-view machine-learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimisation of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning, and a step-by-step R tutorial should be of interest for both the beginner and the advanced user.
[ { "created": "Fri, 21 Sep 2018 20:34:13 GMT", "version": "v1" } ]
2018-09-26
[ [ "Vijayakumar", "Supreeta", "" ], [ "Conway", "Max", "" ], [ "Lió", "Pietro", "" ], [ "Angione", "Claudio", "" ] ]
Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to be able to see the wood (or the forest) for the trees. Driven by this analogy, we here present a "forest" of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in an interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim to develop a vision of how existing methods could hybridise and become more complex. We then provide the first hands-on tutorial for multi-objective optimisation of metabolic models in R. We finally discuss the implementation of multi-view machine-learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimisation of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning, and a step-by-step R tutorial should be of interest for both the beginner and the advanced user.
2010.07506
Zachary Laubach
Zachary M. Laubach, Eleanor J. Murray, Kim L. Hoke, Rebecca J. Safran, Wei Perng
EIC (Expert Information Criterion) not AIC: the cautious biologist's guide to model selection
Word count: 6791 Tables: 1 (Box 1) Figures: 9 References: 57
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
1. A goal of many research programs in biology is to extract meaningful insights from large, complex data sets. Researchers in Ecology, Evolution and Behavior (EEB) often grapple with long-term, observational data sets from which they construct models to address fundamental questions about biology. Similarly, epidemiologists analyze large, complex observational data sets to understand the distribution and determinants of human health and disease. A key difference in the analytical workflows for these two distinct areas of biology is the delineation of data analysis tasks and the explicit use of causal inference methods, widely adopted by epidemiologists. 2. Here, we review the most recent causal inference literature and describe an analytical workflow that has direct applications for EEB researchers. 3. The first half of this commentary defines four distinct analytical tasks (description, prediction, association, and causal inference) and the corresponding approaches to data analysis and model selection. The latter half is dedicated to walking the reader through the steps of causal inference, focusing on examples from EEB. 4. Given increasing interest in causal inference and common misperceptions regarding the task of causal inference, we aim to facilitate an exchange of ideas between disciplinary silos and provide a framework for analyses of all data, though one particularly relevant for observational data.
[ { "created": "Thu, 15 Oct 2020 04:14:07 GMT", "version": "v1" } ]
2020-10-16
[ [ "Laubach", "Zachary M.", "" ], [ "Murray", "Eleanor J.", "" ], [ "Hoke", "Kim L.", "" ], [ "Safran", "Rebecca J.", "" ], [ "Perng", "Wei", "" ] ]
1. A goal of many research programs in biology is to extract meaningful insights from large, complex data sets. Researchers in Ecology, Evolution and Behavior (EEB) often grapple with long-term, observational data sets from which they construct models to address fundamental questions about biology. Similarly, epidemiologists analyze large, complex observational data sets to understand the distribution and determinants of human health and disease. A key difference in the analytical workflows for these two distinct areas of biology is the delineation of data analysis tasks and the explicit use of causal inference methods, widely adopted by epidemiologists. 2. Here, we review the most recent causal inference literature and describe an analytical workflow that has direct applications for EEB researchers. 3. The first half of this commentary defines four distinct analytical tasks (description, prediction, association, and causal inference), and the corresponding approaches to data analysis and model selection. The latter half is dedicated to walking the reader through the steps of causal inference, focusing on examples from EEB. 4. Given increasing interest in causal inference and common misperceptions regarding the task of causal inference, we aim to facilitate an exchange of ideas between disciplinary silos and provide a framework for analyses of all data, though particularly relevant for observational data.
2203.10189
Claudio Maccone
Claudio Maccone
Evo-SETI: A Mathematical Tool for Cladistics, Evolution, and SETI
arXiv admin note: text overlap with arXiv:2103.12062
null
null
null
q-bio.PE astro-ph.EP math.PR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to develop life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdfs) in time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in time. Next, our "Peak-Locus Theorem" translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called "Geometric Brownian Motion" (GBM). Past mass extinctions were all lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The "molecular clock" is re-interpreted as the EvoEntropy straight line in time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) "cubic" evolution: a curve of logarithmic increase.
[ { "created": "Fri, 18 Mar 2022 23:00:23 GMT", "version": "v1" } ]
2022-03-22
[ [ "Maccone", "Claudio", "" ] ]
The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to develop life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdfs) in time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in time. Next, our "Peak-Locus Theorem" translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called "Geometric Brownian Motion" (GBM). Past mass extinctions were all lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The "molecular clock" is re-interpreted as the EvoEntropy straight line in time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) "cubic" evolution: a curve of logarithmic increase.
2006.11975
Hyunsu Lee
Hyunsu Lee
Toward the biological model of the hippocampus as the successor representation agent
9 pages, 1 figure
2022, Biosystems, 213, 104612
10.1016/j.biosystems.2022.104612
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The hippocampus is an essential brain region for spatial memory and learning. Recently, a theoretical model of the hippocampus based on temporal difference (TD) learning has been published. Inspired by successor representation (SR) learning algorithms, which decompose the value function of TD learning into reward and state transition, the authors argued that the firing rate of CA1 place cells in the hippocampus represents the probability of state transition. This theory, called predictive map theory, claims that the hippocampus, representing space, learns the probability of transition from the current state to future states. The neural correlates of expecting the future state are the firing rates of the CA1 place cells. This explanation is plausible for the results recorded in behavioral experiments, but it lacks neurobiological implications. Modifying the SR learning algorithm added biological implications to the predictive map theory. Mirroring the SR learning algorithm's simultaneous need for information about the current and future states, CA1 place cells receive two inputs, from CA3 and the entorhinal cortex. A mathematical transformation showed that the SR learning algorithm is equivalent to a heterosynaptic plasticity rule. Heterosynaptic plasticity phenomena in CA1 are discussed and compared with the modified SR update rule. This study attempted to interpret the TD algorithm as the neurobiological mechanism occurring in place learning, and to integrate the neuroscience and artificial intelligence approaches in the field.
[ { "created": "Mon, 22 Jun 2020 02:44:42 GMT", "version": "v1" }, { "created": "Tue, 26 Oct 2021 07:19:57 GMT", "version": "v2" } ]
2024-02-08
[ [ "Lee", "Hyunsu", "" ] ]
The hippocampus is an essential brain region for spatial memory and learning. Recently, a theoretical model of the hippocampus based on temporal difference (TD) learning has been published. Inspired by successor representation (SR) learning algorithms, which decompose the value function of TD learning into reward and state transition, the authors argued that the firing rate of CA1 place cells in the hippocampus represents the probability of state transition. This theory, called predictive map theory, claims that the hippocampus, representing space, learns the probability of transition from the current state to future states. The neural correlates of expecting the future state are the firing rates of the CA1 place cells. This explanation is plausible for the results recorded in behavioral experiments, but it lacks neurobiological implications. Modifying the SR learning algorithm added biological implications to the predictive map theory. Mirroring the SR learning algorithm's simultaneous need for information about the current and future states, CA1 place cells receive two inputs, from CA3 and the entorhinal cortex. A mathematical transformation showed that the SR learning algorithm is equivalent to a heterosynaptic plasticity rule. Heterosynaptic plasticity phenomena in CA1 are discussed and compared with the modified SR update rule. This study attempted to interpret the TD algorithm as the neurobiological mechanism occurring in place learning, and to integrate the neuroscience and artificial intelligence approaches in the field.
1611.09283
Jose Fontanari
Caroline Franco and Jos\'e F. Fontanari
The spatial dynamics of ecosystem engineers
null
Mathematical Biosciences 292, 76-85 (2017)
10.1016/j.mbs.2017.08.002
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Changes in the abiotic features of ecosystems have rarely been taken into account by population dynamics models, which typically focus on trophic and competitive interactions between species. However, understanding the population dynamics of organisms that must modify their habitats in order to survive, the so-called ecosystem engineers, requires the explicit incorporation of abiotic interactions in the models. Here we study a model of ecosystem engineers that is discrete both in space and time, and where the engineers and their habitats are arranged in patches fixed to the sites of regular lattices. The growth of the engineer population is modeled by the Ricker equation with a density-dependent carrying capacity that is given by the number of modified habitats. A diffusive dispersal stage ensures that a fraction of the engineers move from their birth patches to neighboring patches. We find that dispersal influences the metapopulation dynamics only in the case that the local or single-patch dynamics exhibits chaotic behavior. In that case, it can suppress the chaotic behavior and avoid extinctions in the regime of large intrinsic growth rate of the population.
[ { "created": "Mon, 28 Nov 2016 18:52:36 GMT", "version": "v1" }, { "created": "Mon, 6 Feb 2017 19:50:22 GMT", "version": "v2" }, { "created": "Thu, 10 Aug 2017 23:58:45 GMT", "version": "v3" } ]
2017-08-14
[ [ "Franco", "Caroline", "" ], [ "Fontanari", "José F.", "" ] ]
Changes in the abiotic features of ecosystems have rarely been taken into account by population dynamics models, which typically focus on trophic and competitive interactions between species. However, understanding the population dynamics of organisms that must modify their habitats in order to survive, the so-called ecosystem engineers, requires the explicit incorporation of abiotic interactions in the models. Here we study a model of ecosystem engineers that is discrete both in space and time, and where the engineers and their habitats are arranged in patches fixed to the sites of regular lattices. The growth of the engineer population is modeled by the Ricker equation with a density-dependent carrying capacity that is given by the number of modified habitats. A diffusive dispersal stage ensures that a fraction of the engineers move from their birth patches to neighboring patches. We find that dispersal influences the metapopulation dynamics only in the case that the local or single-patch dynamics exhibits chaotic behavior. In that case, it can suppress the chaotic behavior and avoid extinctions in the regime of large intrinsic growth rate of the population.
1910.13576
Sarah Innes-Gold
Sarah N. Innes-Gold, John P. Berezney, Omar A. Saleh
Single-molecule stretching shows glycosylation sets tension in the hyaluronan-aggrecan bottlebrush
null
null
10.1016/j.bpj.2019.11.1205
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large bottlebrush complexes formed from the polysaccharide hyaluronan (HA) and the proteoglycan aggrecan contribute to cartilage compression resistance and are necessary for healthy joint function. A variety of mechanical forces act on these complexes in the cartilage extracellular matrix, motivating the need for a quantitative description which links their structure and mechanical response. Studies using electron microscopy have imaged the HA-aggrecan brush but require adsorption to a surface, dramatically altering the complex from its native conformation. We use magnetic tweezers force spectroscopy to measure changes in extension and mechanical response of an HA chain as aggrecan monomers bind and form a bottlebrush. This technique directly measures changes undergone by a single complex with time and under varying solution conditions. Upon addition of aggrecan, we find a large swelling effect manifests when the HA chain is under very low external tension (i.e. stretching forces less than ~1 pN). We use models of force-extension behavior to show that repulsion between the aggrecans induces an internal tension in the HA chain. Through reference to theories of bottlebrush polymer behavior, we demonstrate that the experimental values of internal tension are consistent with a polydisperse aggrecan population, likely caused by varying degrees of glycosylation. By enzymatically deglycosylating aggrecan, we show that aggrecan glycosylation is the structural feature which causes HA stiffening. We then construct a simple stochastic binding model to show that variable glycosylation leads to a wide distribution of internal tensions in HA, causing variations in the mechanics at much longer length-scales. Our results provide a mechanistic picture of how flexibility and size of HA and aggrecan lead to the brush architecture and mechanical properties of this important component of cartilage.
[ { "created": "Tue, 29 Oct 2019 23:36:17 GMT", "version": "v1" }, { "created": "Wed, 19 Aug 2020 23:18:04 GMT", "version": "v2" } ]
2023-07-19
[ [ "Innes-Gold", "Sarah N.", "" ], [ "Berezney", "John P.", "" ], [ "Saleh", "Omar A.", "" ] ]
Large bottlebrush complexes formed from the polysaccharide hyaluronan (HA) and the proteoglycan aggrecan contribute to cartilage compression resistance and are necessary for healthy joint function. A variety of mechanical forces act on these complexes in the cartilage extracellular matrix, motivating the need for a quantitative description which links their structure and mechanical response. Studies using electron microscopy have imaged the HA-aggrecan brush but require adsorption to a surface, dramatically altering the complex from its native conformation. We use magnetic tweezers force spectroscopy to measure changes in extension and mechanical response of an HA chain as aggrecan monomers bind and form a bottlebrush. This technique directly measures changes undergone by a single complex with time and under varying solution conditions. Upon addition of aggrecan, we find a large swelling effect manifests when the HA chain is under very low external tension (i.e. stretching forces less than ~1 pN). We use models of force-extension behavior to show that repulsion between the aggrecans induces an internal tension in the HA chain. Through reference to theories of bottlebrush polymer behavior, we demonstrate that the experimental values of internal tension are consistent with a polydisperse aggrecan population, likely caused by varying degrees of glycosylation. By enzymatically deglycosylating aggrecan, we show that aggrecan glycosylation is the structural feature which causes HA stiffening. We then construct a simple stochastic binding model to show that variable glycosylation leads to a wide distribution of internal tensions in HA, causing variations in the mechanics at much longer length-scales. Our results provide a mechanistic picture of how flexibility and size of HA and aggrecan lead to the brush architecture and mechanical properties of this important component of cartilage.
2308.01921
Yihui Ren
Wei Chen, Yihui Ren, Ai Kagawa, Matthew R. Carbone, Samuel Yen-Chi Chen, Xiaohui Qu, Shinjae Yoo, Austin Clyde, Arvind Ramanathan, Rick L. Stevens, Hubertus J. J. van Dam, Deyu Lu
Transferable Graph Neural Fingerprint Models for Quick Response to Future Bio-Threats
8 pages, 5 figures, 2 tables, accepted by ICLMA2023
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Fast screening of drug molecules based on the ligand binding affinity is an important step in the drug discovery pipeline. Graph neural fingerprints are a promising method for developing molecular docking surrogates with high throughput and great fidelity. In this study, we built a COVID-19 drug docking dataset of about 300,000 drug candidates on 23 coronavirus protein targets. With this dataset, we trained graph neural fingerprint docking models for high-throughput virtual COVID-19 drug screening. The graph neural fingerprint models yield high prediction accuracy on docking scores with the mean squared error lower than $0.21$ kcal/mol for most of the docking targets, showing significant improvement over conventional circular fingerprint methods. To make the neural fingerprints transferable to unknown targets, we also propose a transferable graph neural fingerprint method trained on multiple targets. With comparable accuracy to target-specific graph neural fingerprint models, the transferable model exhibits superb training and data efficiency. We highlight that the impact of this study extends beyond the COVID-19 dataset, as our approach for fast virtual ligand screening can be easily adapted and integrated into a general machine learning-accelerated pipeline to battle future bio-threats.
[ { "created": "Mon, 17 Jul 2023 05:59:34 GMT", "version": "v1" }, { "created": "Thu, 14 Sep 2023 17:28:52 GMT", "version": "v2" }, { "created": "Fri, 15 Sep 2023 00:53:02 GMT", "version": "v3" } ]
2023-09-18
[ [ "Chen", "Wei", "" ], [ "Ren", "Yihui", "" ], [ "Kagawa", "Ai", "" ], [ "Carbone", "Matthew R.", "" ], [ "Chen", "Samuel Yen-Chi", "" ], [ "Qu", "Xiaohui", "" ], [ "Yoo", "Shinjae", "" ], [ "Clyde", ...
Fast screening of drug molecules based on the ligand binding affinity is an important step in the drug discovery pipeline. Graph neural fingerprints are a promising method for developing molecular docking surrogates with high throughput and great fidelity. In this study, we built a COVID-19 drug docking dataset of about 300,000 drug candidates on 23 coronavirus protein targets. With this dataset, we trained graph neural fingerprint docking models for high-throughput virtual COVID-19 drug screening. The graph neural fingerprint models yield high prediction accuracy on docking scores with the mean squared error lower than $0.21$ kcal/mol for most of the docking targets, showing significant improvement over conventional circular fingerprint methods. To make the neural fingerprints transferable to unknown targets, we also propose a transferable graph neural fingerprint method trained on multiple targets. With comparable accuracy to target-specific graph neural fingerprint models, the transferable model exhibits superb training and data efficiency. We highlight that the impact of this study extends beyond the COVID-19 dataset, as our approach for fast virtual ligand screening can be easily adapted and integrated into a general machine learning-accelerated pipeline to battle future bio-threats.
2404.14123
Jan Karbowski
Jan Karbowski, Paulina Urban
Cooperativity, information gain, and energy cost during early LTP in dendritic spines
Corrected typo in Eq. (14). The paper is about information and its energy cost during synaptic plasticity (synaptic learning)
Neural Computation 36: 271-311 (2024)
10.1162/neco_a_01632
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech physics.comp-ph q-bio.QM
http://creativecommons.org/licenses/by/4.0/
We investigate a mutual relationship between information and energy during early phase of LTP induction and maintenance in a large-scale system of mutually coupled dendritic spines, with discrete internal states and probabilistic dynamics, within the framework of nonequilibrium stochastic thermodynamics. In order to analyze this computationally intractable stochastic multidimensional system, we introduce a pair approximation, which allows us to reduce the spine dynamics into a lower dimensional manageable system of closed equations. It is found that the rates of information gain and energy attain their maximal values during an initial period of LTP (i.e. during stimulation), and after that they recover to their baseline low values, as opposed to a memory trace that lasts much longer. This suggests that learning phase is much more energy demanding than the memory phase. We show that positive correlations between neighboring spines increase both a duration of memory trace and energy cost during LTP, but the memory time per invested energy increases dramatically for very strong positive synaptic cooperativity, suggesting a beneficial role of synaptic clustering on memory duration. In contrast, information gain after LTP is the largest for negative correlations, and energy efficiency of that information generally declines with increasing synaptic cooperativity. We also find that dendritic spines can use sparse representations for encoding of long-term information, as both energetic and structural efficiencies of retained information and its lifetime exhibit maxima for low fractions of stimulated synapses during LTP. In general, our stochastic thermodynamics approach provides a unifying framework for studying, from first principles, information encoding and its energy cost during learning and memory in stochastic systems of interacting synapses.
[ { "created": "Mon, 22 Apr 2024 12:23:09 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2024 12:43:34 GMT", "version": "v2" } ]
2024-08-07
[ [ "Karbowski", "Jan", "" ], [ "Urban", "Paulina", "" ] ]
We investigate a mutual relationship between information and energy during early phase of LTP induction and maintenance in a large-scale system of mutually coupled dendritic spines, with discrete internal states and probabilistic dynamics, within the framework of nonequilibrium stochastic thermodynamics. In order to analyze this computationally intractable stochastic multidimensional system, we introduce a pair approximation, which allows us to reduce the spine dynamics into a lower dimensional manageable system of closed equations. It is found that the rates of information gain and energy attain their maximal values during an initial period of LTP (i.e. during stimulation), and after that they recover to their baseline low values, as opposed to a memory trace that lasts much longer. This suggests that learning phase is much more energy demanding than the memory phase. We show that positive correlations between neighboring spines increase both a duration of memory trace and energy cost during LTP, but the memory time per invested energy increases dramatically for very strong positive synaptic cooperativity, suggesting a beneficial role of synaptic clustering on memory duration. In contrast, information gain after LTP is the largest for negative correlations, and energy efficiency of that information generally declines with increasing synaptic cooperativity. We also find that dendritic spines can use sparse representations for encoding of long-term information, as both energetic and structural efficiencies of retained information and its lifetime exhibit maxima for low fractions of stimulated synapses during LTP. In general, our stochastic thermodynamics approach provides a unifying framework for studying, from first principles, information encoding and its energy cost during learning and memory in stochastic systems of interacting synapses.
1809.03265
Irmeli Barkefors
Johan Elf and Irmeli Barkefors
Single-molecule kinetics in living cells
Accepted for publication in Annual Review of Biochemistry
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by-sa/4.0/
In the past decades, advances in microscopy have made it possible to study the dynamics of individual biomolecules in vitro and resolve intramolecular kinetics that would otherwise be hidden in ensemble averages. More recently, single-molecule methods have been used to image, localize and track individually labeled macromolecules in the cytoplasm of living cells, allowing investigations of intermolecular kinetics under physiologically relevant conditions. In this review, we illuminate the particular advantages of single-molecule techniques when studying kinetics in living cells and discuss solutions to specific challenges associated with these methods.
[ { "created": "Mon, 10 Sep 2018 12:20:46 GMT", "version": "v1" } ]
2018-09-11
[ [ "Elf", "Johan", "" ], [ "Barkefors", "Irmeli", "" ] ]
In the past decades, advances in microscopy have made it possible to study the dynamics of individual biomolecules in vitro and resolve intramolecular kinetics that would otherwise be hidden in ensemble averages. More recently, single-molecule methods have been used to image, localize and track individually labeled macromolecules in the cytoplasm of living cells, allowing investigations of intermolecular kinetics under physiologically relevant conditions. In this review, we illuminate the particular advantages of single-molecule techniques when studying kinetics in living cells and discuss solutions to specific challenges associated with these methods.
2011.10479
Rodrigo Mendez Rojano
R. Mendez Rojano, M. Zhussupbekov, J. F. Antaki
Multi-scale simulation of thrombus formation at LVAD inlet cannula connection: Importance of Virchow's triad
11 pages, 10 figures, original article
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
As pump thrombosis is reduced in current-generation ventricular assist devices (VAD), adverse events such as bleeding or stroke remain at unacceptable rates. Thrombosis around the VAD inlet cannula (IC) has been highlighted as a possible source of stroke events. Recent computational fluid dynamics (CFD) studies have attempted to characterize the thrombosis risk of different IC-ventricle configurations. However, purely CFD simulations relate thrombosis risk to ad-hoc criteria based on flow characteristics, with little consideration of biochemical factors. This study investigates the genesis of IC thrombosis including two elements of the Virchow's triad: Endothelial injury and Hypercoagulability. To this end a multi-scale thrombosis simulation that includes platelet activity and coagulation reactions was performed. Our results show significant thrombin formation in stagnation regions (|u|< 0.002 m/s) close to the IC wall. In addition, high shear-mediated platelet activation was observed over the leading-edge tip of the cannula which mirrors the thrombus deposition pattern observed clinically. The current study reveals the importance of biochemical factors to the genesis of thrombosis at the ventricular-cannula junction which can inform clinical decisions in terms of anticoagulation/antiplatelet therapy and guide engineers to develop more robust designs.
[ { "created": "Fri, 20 Nov 2020 16:18:40 GMT", "version": "v1" } ]
2020-11-23
[ [ "Rojano", "R. Mendez", "" ], [ "Zhussupbekov", "M.", "" ], [ "Antaki", "J. F.", "" ] ]
As pump thrombosis is reduced in current-generation ventricular assist devices (VAD), adverse events such as bleeding or stroke remain at unacceptable rates. Thrombosis around the VAD inlet cannula (IC) has been highlighted as a possible source of stroke events. Recent computational fluid dynamics (CFD) studies have attempted to characterize the thrombosis risk of different IC-ventricle configurations. However, purely CFD simulations relate thrombosis risk to ad-hoc criteria based on flow characteristics, with little consideration of biochemical factors. This study investigates the genesis of IC thrombosis including two elements of the Virchow's triad: Endothelial injury and Hypercoagulability. To this end a multi-scale thrombosis simulation that includes platelet activity and coagulation reactions was performed. Our results show significant thrombin formation in stagnation regions (|u|< 0.002 m/s) close to the IC wall. In addition, high shear-mediated platelet activation was observed over the leading-edge tip of the cannula which mirrors the thrombus deposition pattern observed clinically. The current study reveals the importance of biochemical factors to the genesis of thrombosis at the ventricular-cannula junction which can inform clinical decisions in terms of anticoagulation/antiplatelet therapy and guide engineers to develop more robust designs.
q-bio/0410030
Axel G. Rossberg
A. G. Rossberg, H. Matsuda, T. Amemiya, K. Itoh
An explanatory model for food-web structure and evolution
16 pages, 3 figures
Ecological Complexity 2, 312-321 (2005)
10.1016/j.ecocom.2005.04.007
null
q-bio.PE
null
Food webs are networks describing who is eating whom in an ecological community. By now it is clear that many aspects of food-web structure are reproducible across diverse habitats, yet little is known about the driving force behind this structure. Evolutionary and population dynamical mechanisms have been considered. We propose a model for the evolutionary dynamics of food-web topology and show that it accurately reproduces observed food-web characteristics in the steady state. It is based on the observation that most consumers are larger than their resource species and the hypothesis that speciation and extinction rates decrease with increasing body mass. Results give strong support to the evolutionary hypothesis.
[ { "created": "Tue, 26 Oct 2004 03:00:36 GMT", "version": "v1" }, { "created": "Wed, 4 May 2005 03:12:15 GMT", "version": "v2" } ]
2007-05-23
[ [ "Rossberg", "A. G.", "" ], [ "Matsuda", "H.", "" ], [ "Amemiya", "T.", "" ], [ "Itoh", "K.", "" ] ]
Food webs are networks describing who is eating whom in an ecological community. By now it is clear that many aspects of food-web structure are reproducible across diverse habitats, yet little is known about the driving force behind this structure. Evolutionary and population dynamical mechanisms have been considered. We propose a model for the evolutionary dynamics of food-web topology and show that it accurately reproduces observed food-web characteristics in the steady state. It is based on the observation that most consumers are larger than their resource species and the hypothesis that speciation and extinction rates decrease with increasing body mass. Results give strong support to the evolutionary hypothesis.
1609.09602
Luis David Garcia-Puente
Ethan Petersen, Nora Youngs, Ryan Kruse, Dane Miyata, Rebecca Garcia, Luis David Garcia Puente
Neural Ideals in SageMath
8 pages, 2 tables, software available at https://github.com/e6-1/NeuralIdeals
null
null
null
q-bio.NC math.AC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major area in neuroscience research is the study of how the brain processes spatial information. Neurons in the brain represent external stimuli via neural codes. These codes often arise from stereotyped stimulus-response maps, associating to each neuron a convex receptive field. An important problem consists in determining what stimulus space features can be extracted directly from a neural code. The neural ideal is an algebraic object that encodes the full combinatorial data of a neural code. This ideal can be expressed in a canonical form that directly translates to a minimal description of the receptive field structure intrinsic to the code. Here, we describe a SageMath package that contains several algorithms related to the canonical form of a neural ideal.
[ { "created": "Fri, 30 Sep 2016 05:43:17 GMT", "version": "v1" } ]
2016-10-03
[ [ "Petersen", "Ethan", "" ], [ "Youngs", "Nora", "" ], [ "Kruse", "Ryan", "" ], [ "Miyata", "Dane", "" ], [ "Garcia", "Rebecca", "" ], [ "Puente", "Luis David Garcia", "" ] ]
A major area in neuroscience research is the study of how the brain processes spatial information. Neurons in the brain represent external stimuli via neural codes. These codes often arise from stereotyped stimulus-response maps, associating to each neuron a convex receptive field. An important problem consists in determining what stimulus space features can be extracted directly from a neural code. The neural ideal is an algebraic object that encodes the full combinatorial data of a neural code. This ideal can be expressed in a canonical form that directly translates to a minimal description of the receptive field structure intrinsic to the code. Here, we describe a SageMath package that contains several algorithms related to the canonical form of a neural ideal.
1505.05815
Pablo G. Camara
Pablo G. Camara, Arnold J. Levine and Raul Rabadan
Inference of Ancestral Recombination Graphs through Topological Data Analysis
33 pages, 12 figures. The accompanying software, instructions and example files used in the manuscript can be obtained from https://github.com/RabadanLab/TARGet
null
10.1371/journal.pcbi.1005071
null
q-bio.QM math.AT q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent explosion of genomic data has underscored the need for interpretable and comprehensive analyses that can capture complex phylogenetic relationships within and across species. Recombination, reassortment and horizontal gene transfer constitute examples of pervasive biological phenomena that cannot be captured by tree-like representations. Starting from hundreds of genomes, we are interested in the reconstruction of potential evolutionary histories leading to the observed data. Ancestral recombination graphs represent potential histories that explicitly accommodate recombination and mutation events across orthologous genomes. However, they are computationally costly to reconstruct, usually being infeasible for more than few tens of genomes. Recently, Topological Data Analysis (TDA) methods have been proposed as robust and scalable methods that can capture the genetic scale and frequency of recombination. We build upon previous TDA developments for detecting and quantifying recombination, and present a novel framework that can be applied to hundreds of genomes and can be interpreted in terms of minimal histories of mutation and recombination events, quantifying the scales and identifying the genomic locations of recombinations. We implement this framework in a software package, called TARGet, and apply it to several examples, including small migration between different populations, human recombination, and horizontal evolution in finches inhabiting the Gal\'apagos Islands.
[ { "created": "Thu, 21 May 2015 18:10:03 GMT", "version": "v1" }, { "created": "Wed, 27 Jul 2016 00:52:22 GMT", "version": "v2" } ]
2016-09-28
[ [ "Camara", "Pablo G.", "" ], [ "Levine", "Arnold J.", "" ], [ "Rabadan", "Raul", "" ] ]
The recent explosion of genomic data has underscored the need for interpretable and comprehensive analyses that can capture complex phylogenetic relationships within and across species. Recombination, reassortment and horizontal gene transfer constitute examples of pervasive biological phenomena that cannot be captured by tree-like representations. Starting from hundreds of genomes, we are interested in the reconstruction of potential evolutionary histories leading to the observed data. Ancestral recombination graphs represent potential histories that explicitly accommodate recombination and mutation events across orthologous genomes. However, they are computationally costly to reconstruct, usually being infeasible for more than few tens of genomes. Recently, Topological Data Analysis (TDA) methods have been proposed as robust and scalable methods that can capture the genetic scale and frequency of recombination. We build upon previous TDA developments for detecting and quantifying recombination, and present a novel framework that can be applied to hundreds of genomes and can be interpreted in terms of minimal histories of mutation and recombination events, quantifying the scales and identifying the genomic locations of recombinations. We implement this framework in a software package, called TARGet, and apply it to several examples, including small migration between different populations, human recombination, and horizontal evolution in finches inhabiting the Gal\'apagos Islands.
2201.04099
Ying Zhang
Ying Zhang and Samuel A. Isaacson
Detailed Balance for Particle Models of Reversible Reactions in Bounded Domains
null
null
10.1063/5.0085296
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In particle-based stochastic reaction-diffusion models, reaction rate and placement kernels are used to decide the probability per time a reaction can occur between reactant particles, and to decide where product particles should be placed. When choosing kernels to use in reversible reactions, a key constraint is to ensure that detailed balance of spatial reaction-fluxes holds at all points at equilibrium. In this work we formulate a general partial-integral differential equation model that encompasses several of the commonly used contact reactivity (e.g. Smoluchowski-Collins-Kimball) and volume reactivity (e.g. Doi) particle models. From these equations we derive a detailed balance condition for the reversible $\textrm{A} + \textrm{B} \leftrightarrows \textrm{C}$ reaction. In bounded domains with no-flux boundary conditions, when choosing unbinding kernels consistent with several commonly used binding kernels, we show that preserving detailed balance of spatial reaction-fluxes at all points requires spatially varying unbinding rate functions near the domain boundary. Brownian Dynamics simulation algorithms can realize such varying rates through ignoring domain boundaries during unbinding and rejecting unbinding events that result in product particles being placed outside the domain.
[ { "created": "Tue, 11 Jan 2022 17:51:14 GMT", "version": "v1" }, { "created": "Fri, 25 Mar 2022 20:07:35 GMT", "version": "v2" }, { "created": "Thu, 14 Apr 2022 12:34:30 GMT", "version": "v3" } ]
2022-06-08
[ [ "Zhang", "Ying", "" ], [ "Isaacson", "Samuel A.", "" ] ]
In particle-based stochastic reaction-diffusion models, reaction rate and placement kernels are used to decide the probability per time a reaction can occur between reactant particles, and to decide where product particles should be placed. When choosing kernels to use in reversible reactions, a key constraint is to ensure that detailed balance of spatial reaction-fluxes holds at all points at equilibrium. In this work we formulate a general partial-integral differential equation model that encompasses several of the commonly used contact reactivity (e.g. Smoluchowski-Collins-Kimball) and volume reactivity (e.g. Doi) particle models. From these equations we derive a detailed balance condition for the reversible $\textrm{A} + \textrm{B} \leftrightarrows \textrm{C}$ reaction. In bounded domains with no-flux boundary conditions, when choosing unbinding kernels consistent with several commonly used binding kernels, we show that preserving detailed balance of spatial reaction-fluxes at all points requires spatially varying unbinding rate functions near the domain boundary. Brownian Dynamics simulation algorithms can realize such varying rates through ignoring domain boundaries during unbinding and rejecting unbinding events that result in product particles being placed outside the domain.
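The rejection scheme described in the abstract's last sentence can be illustrated with a one-dimensional sketch (the domain length, kernel width, and Gaussian placement kernel are illustrative assumptions, not taken from the paper): sample product placements as if the domain were unbounded and reject any unbinding event whose products land outside the domain, which yields an effective unbinding rate that varies near the boundary.

```python
import numpy as np

# 1D sketch (illustrative, not the paper's code): realize a spatially
# varying effective unbinding rate near a no-flux boundary by ignoring
# the boundary when sampling product placements and rejecting any
# placement that falls outside [0, L]. A rejected event leaves C bound.

rng = np.random.default_rng(1)
L = 10.0        # domain length (hypothetical)
sigma = 1.0     # width of the (assumed Gaussian) placement kernel

def try_unbind(xc):
    """Attempt to unbind C at position xc; return (accepted, xa, xb)."""
    dx = rng.normal(0.0, sigma)
    xa, xb = xc + dx, xc - dx    # place A and B symmetrically about xc
    if 0.0 <= xa <= L and 0.0 <= xb <= L:
        return True, xa, xb
    return False, xc, xc         # reject: unbinding did not occur

# Acceptance is lower near the boundary than in the bulk -- this is the
# spatial variation in the effective unbinding rate the abstract mentions.
acc_bulk = np.mean([try_unbind(L / 2)[0] for _ in range(5000)])
acc_edge = np.mean([try_unbind(0.2)[0] for _ in range(5000)])
print(acc_bulk, acc_edge)
```

In the bulk nearly every attempt is accepted, while at `xc = 0.2` only displacements with `|dx| <= 0.2` keep both products inside the domain, so most attempts are rejected.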
1707.06854
Evgeny Ivanko
Evgeny Ivanko
Should Evolution Necessarily be Egolution?
null
E. Ivanko. Is evolution always "egolution": Discussion of evolutionary efficiency of altruistic energy exchange, Ecological Complexity, Elsevier Publishing, 2018, vol. 34, pp. 1-8
10.1016/j.ecocom.2018.02.001
null
q-bio.PE cs.MA
http://creativecommons.org/licenses/by/4.0/
In this article, I study the evolutionary adaptivity of two simple population models based on either an altruistic or an egoistic law of energy exchange. The computational experiments show a convincing advantage for the altruists, which brings us to a brief discussion of genetic algorithms and extraterrestrial life.
[ { "created": "Fri, 21 Jul 2017 11:33:31 GMT", "version": "v1" } ]
2018-03-29
[ [ "Ivanko", "Evgeny", "" ] ]
In this article, I study the evolutionary adaptivity of two simple population models based on either an altruistic or an egoistic law of energy exchange. The computational experiments show a convincing advantage for the altruists, which brings us to a brief discussion of genetic algorithms and extraterrestrial life.
1712.01334
Katja Ried
Katja Ried and Thomas M\"uller and Hans J. Briegel
Modelling collective motion based on the principle of agency
13 pages plus 6 page appendix, 6 figures
PLoS ONE 14(2): e0212044 (2019)
10.1371/journal.pone.0212044
null
q-bio.PE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals. In theoretical models, these rules are normally \emph{assumed} to take a particular form, possibly constrained by heuristic arguments. We propose a new class of models, which describe the individuals as \emph{agents}, capable of deciding for themselves how to act and learning from their experiences. The local interaction rules do not need to be postulated in this model, since they \emph{emerge} from the learning process. We apply this ansatz to a concrete scenario involving marching locusts, in order to model the phenomenon of density-dependent alignment. We show that our learning agent-based model can account for a Fokker-Planck equation that describes the collective motion and, most notably, that the agents can learn the appropriate local interactions, requiring no strong previous assumptions on their form. These results suggest that learning agent-based models are a powerful tool for studying a broader class of problems involving collective motion and animal agency in general.
[ { "created": "Mon, 4 Dec 2017 20:20:59 GMT", "version": "v1" } ]
2019-04-24
[ [ "Ried", "Katja", "" ], [ "Müller", "Thomas", "" ], [ "Briegel", "Hans J.", "" ] ]
Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals. In theoretical models, these rules are normally \emph{assumed} to take a particular form, possibly constrained by heuristic arguments. We propose a new class of models, which describe the individuals as \emph{agents}, capable of deciding for themselves how to act and learning from their experiences. The local interaction rules do not need to be postulated in this model, since they \emph{emerge} from the learning process. We apply this ansatz to a concrete scenario involving marching locusts, in order to model the phenomenon of density-dependent alignment. We show that our learning agent-based model can account for a Fokker-Planck equation that describes the collective motion and, most notably, that the agents can learn the appropriate local interactions, requiring no strong previous assumptions on their form. These results suggest that learning agent-based models are a powerful tool for studying a broader class of problems involving collective motion and animal agency in general.
2406.02846
Andreas Buttensch\"on
Andreas Buttensch\"on, Shona Sinclair, Leah Edelstein-Keshet
How cells stay together; a mechanism for maintenance of a robust cluster explored by local and nonlocal continuum models
20 pages, 7 figures
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Formation of organs and specialized tissues in embryonic development requires migration of cells to specific targets. In some instances, such cells migrate as a robust cluster. Here we explore a recent local approximation of nonlocal continuum models by Falc\'o, Baker, and Carrillo (2023). We apply their theoretical results by specifying biologically based cell-cell interactions, showing how such cell communication results in an effective attraction-repulsion Morse potential. We then explore the clustering instability, the existence and size of the cluster, and its stability. We also extend their work by investigating the accuracy of the local approximation relative to the full nonlocal model.
[ { "created": "Wed, 5 Jun 2024 01:45:13 GMT", "version": "v1" } ]
2024-06-06
[ [ "Buttenschön", "Andreas", "" ], [ "Sinclair", "Shona", "" ], [ "Edelstein-Keshet", "Leah", "" ] ]
Formation of organs and specialized tissues in embryonic development requires migration of cells to specific targets. In some instances, such cells migrate as a robust cluster. Here we explore a recent local approximation of nonlocal continuum models by Falc\'o, Baker, and Carrillo (2023). We apply their theoretical results by specifying biologically based cell-cell interactions, showing how such cell communication results in an effective attraction-repulsion Morse potential. We then explore the clustering instability, the existence and size of the cluster, and its stability. We also extend their work by investigating the accuracy of the local approximation relative to the full nonlocal model.
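An attraction-repulsion Morse potential of the kind mentioned in the abstract has the generic form U(r) = C_r exp(-r/l_r) - C_a exp(-r/l_a), with short-range repulsion (l_r < l_a) and longer-range attraction; a short sketch with hypothetical parameter values (not taken from the paper) locates the preferred cell-cell spacing at the potential minimum:

```python
import numpy as np

# Illustrative sketch: generic attraction-repulsion Morse-type potential.
# All parameter values (C_r, l_r, C_a, l_a) are hypothetical.

def morse(r, C_r=2.0, l_r=0.5, C_a=1.0, l_a=2.0):
    """U(r) = C_r*exp(-r/l_r) - C_a*exp(-r/l_a): repulsion minus attraction."""
    return C_r * np.exp(-r / l_r) - C_a * np.exp(-r / l_a)

r = np.linspace(0.01, 10.0, 2000)
U = morse(r)
r_min = r[np.argmin(U)]      # preferred cell-cell spacing in a cluster
print(f"potential minimum near r = {r_min:.2f}")
```

With these parameters the potential is repulsive at short range (U > 0) and attractive at the minimum (U < 0), the qualitative shape required for a stable cluster with a well-defined spacing.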
1911.08241
Tai Sing Lee
Ziniu Wu and Harold Rockwell and Yimeng Zhang and Shiming Tang and Tai Sing Lee
Complexity and Diversity in Sparse Code Priors Improve Receptive Field Characterization of Macaque V1 Neurons
22 pages
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
System identification techniques -- projection pursuit regression models (PPRs) and convolutional neural networks (CNNs) -- provide state-of-the-art performance in predicting visual cortical neurons' responses to arbitrary input stimuli. However, the constituent kernels recovered by these methods are often noisy and lack coherent structure, making it difficult to understand the underlying component features of a neuron's receptive field. In this paper, we show that using a dictionary of diverse kernels with complex shapes learned from natural scenes based on efficient coding theory, as the front-end for PPRs and CNNs can improve their performance in neuronal response prediction as well as algorithmic data efficiency and convergence speed. Extensive experimental results also indicate that these sparse-code kernels provide important information on the component features of a neuron's receptive field. In addition, we find that models with the complex-shaped sparse code front-end are significantly better than models with a standard orientation-selective Gabor filter front-end for modeling V1 neurons that have been found to exhibit complex pattern selectivity. We show that the relative performance difference due to these two front-ends can be used to produce a sensitive metric for detecting complex selectivity in V1 neurons.
[ { "created": "Tue, 19 Nov 2019 13:00:22 GMT", "version": "v1" }, { "created": "Tue, 24 Nov 2020 03:40:14 GMT", "version": "v2" }, { "created": "Fri, 1 Oct 2021 14:01:04 GMT", "version": "v3" } ]
2021-10-04
[ [ "Wu", "Ziniu", "" ], [ "Rockwell", "Harold", "" ], [ "Zhang", "Yimeng", "" ], [ "Tang", "Shiming", "" ], [ "Lee", "Tai Sing", "" ] ]
System identification techniques -- projection pursuit regression models (PPRs) and convolutional neural networks (CNNs) -- provide state-of-the-art performance in predicting visual cortical neurons' responses to arbitrary input stimuli. However, the constituent kernels recovered by these methods are often noisy and lack coherent structure, making it difficult to understand the underlying component features of a neuron's receptive field. In this paper, we show that using a dictionary of diverse kernels with complex shapes learned from natural scenes based on efficient coding theory, as the front-end for PPRs and CNNs can improve their performance in neuronal response prediction as well as algorithmic data efficiency and convergence speed. Extensive experimental results also indicate that these sparse-code kernels provide important information on the component features of a neuron's receptive field. In addition, we find that models with the complex-shaped sparse code front-end are significantly better than models with a standard orientation-selective Gabor filter front-end for modeling V1 neurons that have been found to exhibit complex pattern selectivity. We show that the relative performance difference due to these two front-ends can be used to produce a sensitive metric for detecting complex selectivity in V1 neurons.
1010.4940
Raphael Voituriez
R.J. Hawkins, R. Poincloux, O. B\'enichou, M. Piel, P.Chavrier, R.Voituriez
Spontaneous contractility--mediated cortical flow generates cell migration in 3-dimensional environments
4 pages, 4 figures
null
10.1016/j.bpj.2011.07.038
null
q-bio.CB cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a generic model of cell motility generated by acto-myosin contraction of the cell cortex. We identify analytically dynamical instabilities of the cortex and show that they trigger spontaneous cortical flows which in turn can induce cell migration in 3-dimensional (3D) environments as well as bleb formation. This contractility--based mechanism, largely independent of actin treadmilling, appears as an alternative to the classical picture of lamellipodial motility on flat substrates. Theoretical predictions are compared to experimental data of tumor cells migrating in 3D matrigel and suggest that this mechanism could be a general mode of cell migration in 3D environments.
[ { "created": "Sun, 24 Oct 2010 08:49:11 GMT", "version": "v1" } ]
2015-05-20
[ [ "Hawkins", "R. J.", "" ], [ "Poincloux", "R.", "" ], [ "Bénichou", "O.", "" ], [ "Piel", "M.", "" ], [ "Chavrier", "P.", "" ], [ "Voituriez", "R.", "" ] ]
We present a generic model of cell motility generated by acto-myosin contraction of the cell cortex. We identify analytically dynamical instabilities of the cortex and show that they trigger spontaneous cortical flows which in turn can induce cell migration in 3-dimensional (3D) environments as well as bleb formation. This contractility--based mechanism, largely independent of actin treadmilling, appears as an alternative to the classical picture of lamellipodial motility on flat substrates. Theoretical predictions are compared to experimental data of tumor cells migrating in 3D matrigel and suggest that this mechanism could be a general mode of cell migration in 3D environments.
1511.04997
Casey Bergman
Danny E. Miller, Kevin R. Cook, Nazanin Yeganehkazemi, Clarissa B. Smith, Alexandria J. Cockrell, R. Scott Hawley, Casey M. Bergman
Rare recombination events generate sequence diversity among balancer chromosomes in Drosophila melanogaster
40 pages, 3 figures, 4 supplemental files available on request
Proc Natl Acad Sci U S A. 2016 113:E1352-61
10.1073/pnas.1601232113
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Multiply inverted balancer chromosomes that suppress exchange with their homologs are an essential part of the genetic toolkit in Drosophila melanogaster. Despite their widespread use, the organization of balancer chromosomes has not been characterized at the molecular level, and the degree of sequence variation among copies of any given balancer chromosome is unknown. To map inversion breakpoints and study potential sequence diversity in the descendants of a structurally identical balancer chromosome, we sequenced a panel of laboratory stocks containing the most widely used X-chromosome balancer, First Multiple 7 (FM7). We mapped the locations of FM7 breakpoints to precise euchromatic coordinates and identified the flanking sequence of breakpoints in heterochromatic regions. Analysis of SNP variation revealed megabase-scale blocks of sequence divergence among currently used FM7 stocks. We present evidence that this divergence arose by rare double crossover events that replaced a female-sterile allele of the singed gene (sn[X2]) on FM7c with wild type sequence from balanced chromosomes, and propose that many FM7c chromosomes in the Bloomington Drosophila Stock Center have lost sn[X2] by this mechanism. Finally, we characterize the original allele of the Bar gene (B[1]) that is carried on FM7 and validate the hypothesis that the origin and subsequent reversion of the B1 duplication is mediated by unequal exchange. Our results reject a simple non-recombining, clonal mode for the laboratory evolution of balancer chromosomes and have implications for how balancer chromosomes should be used in the design and interpretation of genetic experiments in Drosophila.
[ { "created": "Mon, 16 Nov 2015 15:45:17 GMT", "version": "v1" } ]
2016-04-22
[ [ "Miller", "Danny E.", "" ], [ "Cook", "Kevin R.", "" ], [ "Yeganehkazemi", "Nazanin", "" ], [ "Smith", "Clarissa B.", "" ], [ "Cockrell", "Alexandria J.", "" ], [ "Hawley", "R. Scott", "" ], [ "Bergman", "Casey...
Multiply inverted balancer chromosomes that suppress exchange with their homologs are an essential part of the genetic toolkit in Drosophila melanogaster. Despite their widespread use, the organization of balancer chromosomes has not been characterized at the molecular level, and the degree of sequence variation among copies of any given balancer chromosome is unknown. To map inversion breakpoints and study potential sequence diversity in the descendants of a structurally identical balancer chromosome, we sequenced a panel of laboratory stocks containing the most widely used X-chromosome balancer, First Multiple 7 (FM7). We mapped the locations of FM7 breakpoints to precise euchromatic coordinates and identified the flanking sequence of breakpoints in heterochromatic regions. Analysis of SNP variation revealed megabase-scale blocks of sequence divergence among currently used FM7 stocks. We present evidence that this divergence arose by rare double crossover events that replaced a female-sterile allele of the singed gene (sn[X2]) on FM7c with wild type sequence from balanced chromosomes, and propose that many FM7c chromosomes in the Bloomington Drosophila Stock Center have lost sn[X2] by this mechanism. Finally, we characterize the original allele of the Bar gene (B[1]) that is carried on FM7 and validate the hypothesis that the origin and subsequent reversion of the B1 duplication is mediated by unequal exchange. Our results reject a simple non-recombining, clonal mode for the laboratory evolution of balancer chromosomes and have implications for how balancer chromosomes should be used in the design and interpretation of genetic experiments in Drosophila.
1911.05151
Ren Yi
Ren Yi, Pi-Chuan Chang, Gunjan Baid, Andrew Carroll
Learning from Data-Rich Problems: A Case Study on Genetic Variant Calling
Machine Learning for Health (ML4H) at NeurIPS 2019 - Extended Abstract
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Next Generation Sequencing can sample the whole genome (WGS) or the 1-2% of the genome that codes for proteins called the whole exome (WES). Machine learning approaches to variant calling achieve high accuracy in WGS data, but the reduced number of training examples causes training with WES data alone to achieve lower accuracy. We propose and compare three different data augmentation strategies for improving performance on WES data: 1) joint training with WES and WGS data, 2) warmstarting the WES model from a WGS model, and 3) joint training with the sequencing type specified. All three approaches show improved accuracy over a model trained using just WES data, suggesting the ability of models to generalize insights from the greater WGS data while retaining performance on the specialized WES problem. These data augmentation approaches may apply to other problem areas in genomics, where several specialized models would each see only a subset of the genome.
[ { "created": "Tue, 12 Nov 2019 21:31:29 GMT", "version": "v1" }, { "created": "Fri, 15 Nov 2019 22:58:38 GMT", "version": "v2" } ]
2019-11-19
[ [ "Yi", "Ren", "" ], [ "Chang", "Pi-Chuan", "" ], [ "Baid", "Gunjan", "" ], [ "Carroll", "Andrew", "" ] ]
Next Generation Sequencing can sample the whole genome (WGS) or the 1-2% of the genome that codes for proteins called the whole exome (WES). Machine learning approaches to variant calling achieve high accuracy in WGS data, but the reduced number of training examples causes training with WES data alone to achieve lower accuracy. We propose and compare three different data augmentation strategies for improving performance on WES data: 1) joint training with WES and WGS data, 2) warmstarting the WES model from a WGS model, and 3) joint training with the sequencing type specified. All three approaches show improved accuracy over a model trained using just WES data, suggesting the ability of models to generalize insights from the greater WGS data while retaining performance on the specialized WES problem. These data augmentation approaches may apply to other problem areas in genomics, where several specialized models would each see only a subset of the genome.
1309.4286
Fabio Vandin
Fabio Vandin, Alexandra Papoutsaki, Benjamin J. Raphael, Eli Upfal
Accurate Computation of Survival Statistics in Genome-wide Studies
Full version of RECOMB 2013 paper
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.
[ { "created": "Tue, 17 Sep 2013 12:46:07 GMT", "version": "v1" } ]
2013-09-18
[ [ "Vandin", "Fabio", "" ], [ "Papoutsaki", "Alexandra", "" ], [ "Raphael", "Benjamin J.", "" ], [ "Upfal", "Eli", "" ] ]
A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known association to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.
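The contrast between asymptotic and exact p-values for unbalanced groups can be illustrated with a small sketch. This is a generic log-rank statistic plus a label-permutation p-value, not the ExaLT algorithm, and the survival data are made up; the asymptotic p-value uses the chi-square(1) survival function, computed via the complementary error function.

```python
import math
import numpy as np

def logrank(times, events, group):
    """Two-group log-rank chi-square statistic (handles tied event times)."""
    times = np.asarray(times, float)
    events = np.asarray(events, bool)
    group = np.asarray(group, bool)       # True = group 1
    O_minus_E, V = 0.0, 0.0
    for t in np.unique(times[events]):
        at_risk = times >= t
        n, n1 = at_risk.sum(), (at_risk & group).sum()
        d = (events & (times == t)).sum()       # events at time t
        d1 = (events & (times == t) & group).sum()
        O_minus_E += d1 - d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return O_minus_E**2 / V

def perm_pvalue(times, events, group, n_perm=2000, seed=0):
    """Permutation p-value: shuffle group labels, recompute the statistic."""
    rng = np.random.default_rng(seed)
    g = np.asarray(group, bool)
    obs = logrank(times, events, g)
    hits = sum(logrank(times, events, rng.permutation(g)) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Very unbalanced groups: 4 "mutated" patients vs 20 "wild-type" (made up).
times = np.r_[np.array([2., 3., 5., 7.]), np.arange(4., 24.)]
events = np.ones(times.size, bool)
group = np.r_[np.ones(4, bool), np.zeros(20, bool)]

stat = logrank(times, events, group)
p_asym = math.erfc(math.sqrt(stat / 2.0))   # chi-square(1) tail probability
p_perm = perm_pvalue(times, events, group)
print(stat, p_asym, p_perm)
```

With small or unbalanced groups the two p-values can disagree noticeably, which is the regime where the abstract argues an exact computation is needed.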
0812.1180
Francesco Zamponi
Carlo Barbieri, Simona Cocco, Remi Monasson, Francesco Zamponi
Dynamical modelling of molecular constructions and setups for DNA unzipping
null
Phys. Biol. 6, 025003 (2009)
10.1088/1478-3975/6/2/025003
null
q-bio.BM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a dynamical model of DNA mechanical unzipping under the action of a force. The model includes the motion of the fork in the sequence-dependent landscape, the trap(s) acting on the bead(s), and the polymeric components of the molecular construction (unzipped single strands of DNA, and linkers). Different setups are considered to test the model, and the outcome of the simulations is compared to simpler dynamical models existing in the literature where polymers are assumed to be at equilibrium.
[ { "created": "Fri, 5 Dec 2008 16:48:20 GMT", "version": "v1" } ]
2011-09-19
[ [ "Barbieri", "Carlo", "" ], [ "Cocco", "Simona", "" ], [ "Monasson", "Remi", "" ], [ "Zamponi", "Francesco", "" ] ]
We present a dynamical model of DNA mechanical unzipping under the action of a force. The model includes the motion of the fork in the sequence-dependent landscape, the trap(s) acting on the bead(s), and the polymeric components of the molecular construction (unzipped single strands of DNA, and linkers). Different setups are considered to test the model, and the outcome of the simulations is compared to simpler dynamical models existing in the literature where polymers are assumed to be at equilibrium.
0809.2804
Tibor Antal
Tibor Antal, Martin A. Nowak, Arne Traulsen
Strategy abundance in 2x2 games for arbitrary mutation rates
version 2 is the final published version that contains minor changes in response to referee comments
Journal of Theoretical Biology 257, 340 (2009).
10.1016/j.jtbi.2008.11.023
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study evolutionary game dynamics in a well-mixed population of finite size, N. A well-mixed population means that any two individuals are equally likely to interact. In particular, we consider the average abundances of two strategies, A and B, under mutation and selection. The game dynamical interaction between the two strategies is given by the 2x2 payoff matrix [(a,b), (c,d)]. It has previously been shown that A is more abundant than B if (N-2)a+Nb>Nc+(N-2)d. This result has been derived for particular stochastic processes that operate either in the limit of asymptotically small mutation rates or in the limit of weak selection. Here we show that this result in fact holds for a wide class of stochastic birth-death processes for arbitrary mutation rates and for any intensity of selection.
[ { "created": "Wed, 17 Sep 2008 14:19:29 GMT", "version": "v1" }, { "created": "Thu, 25 Dec 2008 13:26:17 GMT", "version": "v2" } ]
2009-02-24
[ [ "Antal", "Tibor", "" ], [ "Nowak", "Martin A.", "" ], [ "Traulsen", "Arne", "" ] ]
We study evolutionary game dynamics in a well-mixed population of finite size, N. A well-mixed population means that any two individuals are equally likely to interact. In particular, we consider the average abundances of two strategies, A and B, under mutation and selection. The game dynamical interaction between the two strategies is given by the 2x2 payoff matrix [(a,b), (c,d)]. It has previously been shown that A is more abundant than B if (N-2)a+Nb>Nc+(N-2)d. This result has been derived for particular stochastic processes that operate either in the limit of asymptotically small mutation rates or in the limit of weak selection. Here we show that this result in fact holds for a wide class of stochastic birth-death processes for arbitrary mutation rates and for any intensity of selection.
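The abundance condition (N-2)a+Nb>Nc+(N-2)d stated in the abstract is straightforward to evaluate directly; a minimal sketch (the helper name and payoff values are ours, chosen for illustration):

```python
# Checks the abundance condition (N-2)a + N*b > N*c + (N-2)d for a 2x2
# payoff matrix [(a, b), (c, d)] and population size N.

def A_more_abundant(a, b, c, d, N):
    """Return True if strategy A is, on average, more abundant than B."""
    return (N - 2) * a + N * b > N * c + (N - 2) * d

# Prisoner's-dilemma-like payoffs: cooperation (A) is not favored.
print(A_more_abundant(3, 0, 5, 1, N=100))   # False
# Payoffs where A's off-diagonal payoff b is large: A is favored.
print(A_more_abundant(3, 4, 1, 2, N=100))   # True
```

Note that for large N the condition approaches a+b > c+d, i.e. risk dominance of A over B.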
2302.00863
David Danko
David C. Danko, James Golden, Charles Vorosmarty, Anthony Cak, Fabio Corsi, Christopher E. Mason, Rafael Maciel-de-Freitas, Dorottya Nagy-Szakal, Niamh B. OHara
The Challenges and Opportunities in Creating an Early Warning System for Global Pandemics
null
null
null
null
q-bio.QM cs.CY
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic revealed that global health, social systems, and economies can be surprisingly fragile in an increasingly interconnected and interdependent world. Yet, during the last half of 2022, and quite remarkably, we began dismantling essential infectious disease monitoring programs in several countries. Absent such programs, localized biological risks will transform into global shocks linked directly to our lack of foresight regarding emerging health risks. Additionally, recent studies indicate that more than half of all infectious diseases could be made worse by climate change, complicating pandemic containment. Despite this complexity, the factors leading to pandemics are largely predictable but can only be realized through a well-designed global early warning system. Such a system should integrate data from genomics, climate and environment, social dynamics, and healthcare infrastructure. The glue for such a system is community-driven modeling, a modern logistics of data, and democratization of AI tools. Using the example of dengue fever in Brazil, we can demonstrate how thoughtfully designed technology platforms can build global-scale precision disease detection and response systems that significantly reduce exposure to systemic shocks, accelerate science-informed public health policies, and deliver reliable healthcare and economic opportunities as an intrinsic part of the global sustainable development agenda.
[ { "created": "Thu, 2 Feb 2023 04:18:47 GMT", "version": "v1" } ]
2023-02-03
[ [ "Danko", "David C.", "" ], [ "Golden", "James", "" ], [ "Vorosmarty", "Charles", "" ], [ "Cak", "Anthony", "" ], [ "Corsi", "Fabio", "" ], [ "Mason", "Christopher E.", "" ], [ "Maciel-de-Freitas", "Rafael", ...
1605.03031
Daniele Marinazzo
Enrico Amico, Daniele Marinazzo, Carol DiPerri, Lizette Heine, Jitka Annen, Charlotte Martial, Mario Dzemidzic, Steven Laureys, and Joaqu\'in Go\~ni
Mapping the functional connectome traits of levels of consciousness
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Examining task-free functional connectivity (FC) in the human brain offers insights on how spontaneous integration and segregation of information relate to human cognition, and how this organization may be altered in different conditions and neurological disorders. This is particularly relevant for patients in disorders of consciousness (DOC) following severe acquired brain damage and coma, one of the most devastating conditions in modern medical care. We present a novel data-driven methodology, connICA, which implements Independent Component Analysis (ICA) for the extraction of robust independent FC patterns (FC-traits) from a set of individual functional connectomes, without imposing any a priori data stratification into groups. We here apply connICA to investigate associations between network traits derived from task-free FC and cognitive/clinical features that define levels of consciousness. Three main independent FC-traits were identified and linked to consciousness-related clinical features. The first one represents the functional configuration associated with a sedative (sevoflurane), the overall effect of the pathology, and the level of arousal. The second FC-trait reflects the disconnection of the visual and sensory-motor connectivity patterns. It also relates to the time since the insult and to the ability of communicating with the external environment. The third FC-trait isolates the connectivity pattern encompassing the fronto-parietal and the default-mode network areas as well as the interaction between left and right hemispheres, which are also associated with the awareness of the self and its surroundings. Each FC-trait represents a distinct functional process with a role in the degradation of conscious states of functional brain networks, shedding further light on the functional subcircuits that get disrupted in severe brain damage.
[ { "created": "Tue, 10 May 2016 14:24:39 GMT", "version": "v1" }, { "created": "Wed, 2 Nov 2016 22:02:55 GMT", "version": "v2" } ]
2016-11-04
[ [ "Amico", "Enrico", "" ], [ "Marinazzo", "Daniele", "" ], [ "DiPerri", "Carol", "" ], [ "Heine", "Lizette", "" ], [ "Annen", "Jitka", "" ], [ "Martial", "Charlotte", "" ], [ "Dzemidzic", "Mario", "" ], [...
1012.0276
Christoph Adami
Christoph Adami, Jory Schossau, and Arend Hintze
Evolution and stability of altruist strategies in microbial games
9 pages, 4 figures. Major rewrite and expansion. New stability analysis
Physical Review E 85 (2012) 011914
10.1103/PhysRevE.85.011914
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When microbes compete for limited resources, they often engage in chemical warfare using bacterial toxins. This competition can be understood in terms of evolutionary game theory (EGT). We study the predictions of EGT for the bacterial "suicide bomber" game in terms of the phase portraits of population dynamics, for parameter combinations that cover all interesting games for two players, and seven of the 38 possible phase portraits of the three-player game. We compare these predictions to simulations of these competitions in finite well-mixed populations, but also allowing for probabilistic rather than pure strategies, as well as Darwinian adaptation over tens of thousands of generations. We find that Darwinian evolution of probabilistic strategies stabilizes games of the rock-paper-scissors type that emerge for parameters describing realistic bacterial populations, and point to ways in which the population fixed point can be selected by changing those parameters.
[ { "created": "Wed, 1 Dec 2010 19:05:35 GMT", "version": "v1" }, { "created": "Thu, 28 Apr 2011 17:47:10 GMT", "version": "v2" }, { "created": "Fri, 4 Nov 2011 20:56:46 GMT", "version": "v3" } ]
2013-11-07
[ [ "Adami", "Christoph", "" ], [ "Schossau", "Jory", "" ], [ "Hintze", "Arend", "" ] ]
2305.02205
Mahault Albarracin Mx
Maxwell J. D. Ramstead, Mahault Albarracin, Alex Kiefer, Brennan Klein, Chris Fields, Karl Friston, and Adam Safron
The inner screen model of consciousness: applying the free energy principle directly to the study of conscious experience
This is the right version of the paper, with the references showing
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This paper presents a model of consciousness that follows directly from the free-energy principle (FEP). We first rehearse the classical and quantum formulations of the FEP. In particular, we consider the inner screen hypothesis that follows from the quantum information theoretic version of the FEP. We then review applications of the FEP to the known sparse (nested and hierarchical) neuro-anatomy of the brain. We focus on the holographic structure of the brain, and how this structure supports (overt and covert) action.
[ { "created": "Sat, 15 Apr 2023 15:19:27 GMT", "version": "v1" }, { "created": "Wed, 28 Jun 2023 15:44:12 GMT", "version": "v2" }, { "created": "Sun, 31 Dec 2023 20:35:30 GMT", "version": "v3" }, { "created": "Tue, 2 Jan 2024 16:45:53 GMT", "version": "v4" } ]
2024-01-03
[ [ "Ramstead", "Maxwell J. D.", "" ], [ "Albarracin", "Mahault", "" ], [ "Kiefer", "Alex", "" ], [ "Klein", "Brennan", "" ], [ "Fields", "Chris", "" ], [ "Friston", "Karl", "" ], [ "Safron", "Adam", "" ] ]
1612.03859
Georgi Kapitanov
Georgi I. Kapitanov, Bruce P. Ayati, James A. Martin
Modeling the Effect of Blunt Impact on Mitochondrial Dysfunction in Cartilage
null
null
10.7717/peerj.3468
null
q-bio.CB physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mounting evidence for the role of oxidative stress in the degeneration of articular cartilage after an injurious impact requires our modeling & simulation efforts to temporarily shift from just describing the effect of mechanical stress and inflammation on osteoarthritis (OA). The hypothesis that the injurious impact causes irreversible damage to chondrocyte mitochondria, which in turn increase their production of free radicals, affecting their energy production and their ability to rebuild the extracellular matrix, has to be modeled and the processes quantified in order to further the understanding of OA, its causes, and viable treatment options. The current article presents a calibrated model that captures the damage oxidative stress incurs on the cell viability, ATP production, and cartilage stability in a cartilage explant after a drop-tower impact. The model validates the biological hypothesis and will be used in further efforts to identify possibilities for treatment and be a part of a bigger modeling & simulation framework for the development of OA.
[ { "created": "Mon, 12 Dec 2016 19:25:26 GMT", "version": "v1" } ]
2023-02-14
[ [ "Kapitanov", "Georgi I.", "" ], [ "Ayati", "Bruce P.", "" ], [ "Martin", "James A.", "" ] ]
1501.05174
David Papo
David Papo
Characterizing the neural correlates of reasoning
8 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain did not develop a dedicated device for reasoning. This fact bears dramatic consequences. While for perceptuo-motor functions neural activity is shaped by the input's statistical properties, and processing is carried out at high speed in hardwired spatially segregated modules, in reasoning, neural activity is driven by internal dynamics, and processing times, stages, and functional brain geometry are largely unconstrained a priori. Here, it is shown that the complex properties of spontaneous activity, which can be ignored in a short-lived event-related world, become prominent at the long time scales of certain forms of reasoning which stretch over sufficiently long periods of time. It is argued that the neural correlates of reasoning should in fact be defined in terms of non-trivial generic properties of spontaneous brain activity, and that this implies resorting to concepts, analytical tools, and ways of designing experiments that are as yet non-standard in cognitive neuroscience. The implications in terms of models of brain activity, shape of the neural correlates, methods of data analysis, observability of the phenomenon and experimental designs are discussed.
[ { "created": "Wed, 21 Jan 2015 14:09:21 GMT", "version": "v1" }, { "created": "Sun, 19 Apr 2015 14:13:12 GMT", "version": "v2" } ]
2015-04-21
[ [ "Papo", "David", "" ] ]
0812.1656
Eytan Domany
Tal Shay, Wanyu L. Lambiv, Anat Reiner, Monika E. Hegi, Eytan Domany
Combining chromosomal arm status and significantly aberrant genomic locations reveals new cancer subtypes
34 pages, 3 figures; to appear in Cancer Informatics
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many types of tumors exhibit chromosomal losses or gains, as well as local amplifications and deletions. Within any given tumor type, sample-specific amplifications and deletions are also observed. Typically, a region that is aberrant in more tumors, or whose copy number change is stronger, would be considered as a more promising candidate to be biologically relevant to cancer. We sought an intuitive method to define such aberrations and prioritize them. We define V, the volume associated with an aberration, as the product of three factors: a. fraction of patients with the aberration, b. the aberration's length and c. its amplitude. Our algorithm compares the values of V derived from real data to a null distribution obtained by permutations, and yields the statistical significance, p value, of the measured value of V. We detected genetic locations that were significantly aberrant and combined them with chromosomal arm status to create a succinct fingerprint of the tumor genome. This genomic fingerprint is used to visualize the tumors, highlighting events that are co-occurring or mutually exclusive. We apply the method to three different public array CGH datasets of Medulloblastoma and Neuroblastoma, and demonstrate its ability to detect chromosomal regions that were known to be altered in the tested cancer types, as well as to suggest new genomic locations to be tested. We identified a potential new subtype of Medulloblastoma, which is analogous to Neuroblastoma type 1.
[ { "created": "Tue, 9 Dec 2008 10:43:34 GMT", "version": "v1" } ]
2008-12-10
[ [ "Shay", "Tal", "" ], [ "Lambiv", "Wanyu L.", "" ], [ "Reiner", "Anat", "" ], [ "Hegi", "Monika E.", "" ], [ "Domany", "Eytan", "" ] ]
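The V statistic described in the abstract above (fraction of patients x length x amplitude, with a permutation null) can be sketched in a few lines. Function and argument names are illustrative, not taken from the paper:

```python
def aberration_volume(has_aberration, length_fraction, amplitude):
    """V = (fraction of patients with the aberration) x (length) x (amplitude),
    following the three-factor product described above."""
    frac = sum(has_aberration) / len(has_aberration)
    return frac * length_fraction * amplitude

def permutation_p_value(v_observed, v_null):
    """One-sided p-value: fraction of permuted V values at least as large
    as the observed one, with the standard +1 correction."""
    hits = sum(1 for v in v_null if v >= v_observed)
    return (hits + 1) / (len(v_null) + 1)

# Two of four patients carry the aberration, spanning 20% of the arm,
# with mean amplitude 3.0 (all numbers illustrative):
v = aberration_volume([1, 1, 0, 0], 0.2, 3.0)   # 0.5 * 0.2 * 3.0 = 0.3
p = permutation_p_value(v, [0.1, 0.2, 0.4])     # (1 + 1) / 4 = 0.5
```

In practice the null distribution would come from many permutations of the patient-by-probe matrix; three values are shown only to keep the sketch self-contained.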
1508.02798
Susan Khor
Susan Khor
The short-cut network within protein residue networks
24 pages. arXiv admin note: text overlap with arXiv:1412.2155
null
null
null
q-bio.MN cs.DM
http://creativecommons.org/licenses/by-nc-sa/4.0/
A protein residue network (PRN) is a network of interacting amino acids within a protein. We describe characteristics of a sparser, highly central and more volatile sub-network of a PRN called the short-cut network (SCN), as a protein folds under molecular dynamics (MD) simulation with the goal of understanding how proteins form navigable small-world networks within themselves. The edges of an SCN are found via a local greedy search on a PRN. SCNs grow in size and transitivity strength as a protein folds, and SCNs from successful MD trajectories are better formed in these terms. Findings from an investigation on how to model the formation of SCNs using dynamic graph theory, and suggestions to move forward are presented. A SCN is enriched with short-range contacts and its formation correlates positively with secondary structure formation. Thus our approach to modeling PRN formation, in essence protein folding from a graph theoretic view point, is more in tune with the notion of increasing order to a random graph than the other way around, and this increase in order coincides with improved navigability of PRNs.
[ { "created": "Wed, 12 Aug 2015 02:28:55 GMT", "version": "v1" } ]
2015-08-13
[ [ "Khor", "Susan", "" ] ]
q-bio/0512009
Frederick Matsen IV
Frederick A. Matsen
A geometric approach to tree shape statistics
null
null
null
null
q-bio.PE
null
This article presents a new way to understand the descriptive ability of tree shape statistics. Where before tree shape statistics were chosen by their ability to distinguish between macroevolutionary models, the ``resolution'' presented in this paper quantifies the ability of a statistic to differentiate between similar and different trees. We term this a ``geometric'' approach to differentiate it from the model-based approach previously explored. A distinct advantage of this perspective is that it allows evaluation of multiple tree shape statistics describing different aspects of tree shape. After developing the methodology, it is applied here to make specific recommendations for a suite of three statistics which will hopefully prove useful in applications. The article ends with an application of the tree shape statistics to clarify the impact of omission of taxa on tree shape.
[ { "created": "Sat, 3 Dec 2005 01:45:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Matsen", "Frederick A.", "" ] ]
q-bio/0603038
Petter Holme
Mikael Huss, Petter Holme
Currency and commodity metabolites: Their identification and relation to the modularity of metabolic networks
null
IET Systems Biology 1, 280-285 (2007)
10.1049/iet-syb:20060077
null
q-bio.MN cond-mat.dis-nn
null
The large-scale shape and function of metabolic networks are intriguing topics of systems biology. Such networks are on one hand commonly regarded as modular (i.e. built by a number of relatively independent subsystems), but on the other hand they are robust in a way not expected of a purely modular system. To address this question we carefully discuss the partition of metabolic networks into subnetworks. The practice of preprocessing such networks by removing the most abundant substrates, "currency metabolites," is formalized into a network-based algorithm. We study partitions for metabolic networks of many organisms and find cores of currency metabolites and modular peripheries of what we call "commodity metabolites." The networks are found to be more modular than random networks but far from perfectly divisible into modules. We argue that cross-modular edges are the key for the robustness of metabolism.
[ { "created": "Fri, 31 Mar 2006 20:37:25 GMT", "version": "v1" } ]
2011-11-10
[ [ "Huss", "Mikael", "" ], [ "Holme", "Petter", "" ] ]
0904.3649
Yves Jouanneau
Elodie Nicolau (LCBM, IFP), L. Kerhoas (INRA), Martine Lettere (INRA), Yves Jouanneau (LCBM), R\'emy Marchal
Biodegradation of 2-ethylhexyl nitrate (2-EHN) by Mycobacterium austroafricanum IFP 2173
null
Applied and Environmental Microbiology 74 (2008) 6187-6193
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
2-Ethylhexyl nitrate (2-EHN) is a major fuel additive which is used to comply with the cetane number of diesel. Because of its wide use and possible accidental release, 2-EHN is a potential pollutant of the environment. In this study, Mycobacterium austroafricanum IFP 2173 was selected among several strains as the best 2-EHN degrader. The 2-EHN biodegradation rate was increased in biphasic cultures where the hydrocarbon was dissolved in an inert non-aqueous phase liquid (NAPL), suggesting that the transfer of the hydrophobic substrate to the cells was a growth-limiting factor. Carbon balance calculation as well as organic carbon measurement indicated a release of metabolites in the culture medium. Further analysis by gas chromatography revealed that a single metabolite accumulated during growth. This metabolite had a molecular mass of 114 Da as determined by GC/MS and was provisionally identified as 4-ethyldihydrofuran-2(3H)-one by LC-MS/MS analysis. Identification was confirmed by analysis of the chemically synthesized lactone. Based on these results, a plausible catabolic pathway is proposed whereby 2-EHN is converted to 4-ethyldihydrofuran-2(3H)-one, which cannot be metabolised further by strain IFP 2173. This putative pathway provides an explanation for the low energetic efficiency of 2-EHN degradation and its poor biodegradability.
[ { "created": "Thu, 23 Apr 2009 10:43:49 GMT", "version": "v1" } ]
2009-04-24
[ [ "Nicolau", "Elodie", "", "LCBM, IFP" ], [ "Kerhoas", "L.", "", "INRA" ], [ "Lettere", "Martine", "", "INRA" ], [ "Jouanneau", "Yves", "", "LCBM" ], [ "Marchal", "Rémy", "" ] ]
1404.6424
Eugene Postnikov
Eugene B. Postnikov and Dmitry V. Tatarenkov
Prediction of flu epidemic activity with dynamical model based on weather forecast
12 pages, 2 figures
Ecological Complexity 15 (2013) 109-113
10.1016/j.ecocom.2013.06.001
null
q-bio.PE math.DS physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The seasonality of respiratory diseases (common cold, influenza, etc.) is a well-known phenomenon studied from ancient times. The development of predictive models remains not only an unsolved problem of mathematical epidemiology but also a matter of great importance for public health safety. Here we show that a SIRS (Susceptible-Infected-Recovered-Susceptible) model reproduces real curves of flu activity with sufficient accuracy. It contains a variable reaction rate, which is a function of mean daily temperature. The proposed change of variables represents the SIRS equations as a second-order ODE with an outer excitation. It reveals the origin of this predictive efficiency and explains analytically the 1:1 dynamical resonance, which is known as a crucial property of epidemic behavior. Our work opens perspectives for the development of instant short-time prediction of a normal level of flu activity based on the weather forecast, and allows a current epidemic level to be estimated more precisely. The latter fact is based on the explicit difference between the expected weather-based activity and instant anomalies.
[ { "created": "Fri, 25 Apr 2014 14:15:59 GMT", "version": "v1" } ]
2014-04-28
[ [ "Postnikov", "Eugene B.", "" ], [ "Tatarenkov", "Dmitry V.", "" ] ]
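The temperature-driven SIRS dynamics summarized in the abstract above can be sketched with a forward-Euler step. The functional form of the temperature-dependent contact rate and all parameter values below are illustrative assumptions, not the paper's calibration:

```python
import math

def beta_of_temperature(t_celsius, beta0=0.5, k=0.04):
    # Contact rate decreasing with mean daily temperature
    # (illustrative exponential form; the paper fits its own dependence).
    return beta0 * math.exp(-k * t_celsius)

def sirs_step(s, i, r, t_celsius, gamma=0.2, xi=0.05, dt=1.0):
    """One Euler step of SIRS with a weather-modulated infection rate.
    s + i + r is conserved by construction."""
    beta = beta_of_temperature(t_celsius)
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    loss_of_immunity = xi * r * dt
    return (s - new_infections + loss_of_immunity,
            i + new_infections - new_recoveries,
            r + new_recoveries - loss_of_immunity)

# Feeding a daily temperature forecast into the step yields the
# "expected weather-based activity" curve mentioned above.
s, i, r = 0.9, 0.1, 0.0
for temp in [10.0, 8.0, 5.0, 3.0]:
    s, i, r = sirs_step(s, i, r, temp)
```

Forcing the model with a periodic (seasonal) temperature signal is what produces the 1:1 resonance discussed in the abstract.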
1806.06545
Nicolas Rougier
Anthony Strock (Mnemosyne), Nicolas Rougier (Mnemosyne), Xavier Hinaut (Mnemosyne)
A Simple Reservoir Model of Working Memory with Real Values
null
International Joint Conference on Neural Networks (IJCNN), Jul 2018, Rio de Janeiro, Brazil
null
null
q-bio.NC cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. A comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves performance.
[ { "created": "Mon, 18 Jun 2018 08:22:45 GMT", "version": "v1" } ]
2018-06-19
[ [ "Strock", "Anthony", "", "Mnemosyne" ], [ "Rougier", "Nicolas", "", "Mnemosyne" ], [ "Hinaut", "Xavier", "", "Mnemosyne" ] ]
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. A comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves performance.
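Echo State Networks of the kind studied above rest on the fading-memory ("echo state") property of a random reservoir. The sketch below (illustrative reservoir size and scaling, not the paper's trained architecture) shows that two different initial reservoir states converge under the same input stream, which is what makes the reservoir state a usable function of the input history:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 100

# Random reservoir, rescaled so the spectral radius is below 1
# (the usual echo-state heuristic).
W = rng.standard_normal((n_units, n_units))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_units)

def run(x0, inputs):
    """Drive the reservoir with a scalar input stream from state x0."""
    x = x0.copy()
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)   # leak-free update for simplicity
    return x

inputs = rng.uniform(-1, 1, size=500)
x_a = run(np.zeros(n_units), inputs)
x_b = run(rng.standard_normal(n_units), inputs)
gap = float(np.linalg.norm(x_a - x_b))  # should shrink toward 0
```

In a full working-memory experiment a linear readout would then be trained on these states; here we only demonstrate the state-convergence property the readout relies on.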
1604.04803
Raunaq Malhotra
Raunaq Malhotra, Manjari Mukhopadhyay, Mary Poss, Raj Acharya
A frame-based representation of genomic sequences for removing errors and rare variant detection in NGS data
13 pages, 3 figures, Submitted to ECCB 2016 Conference
null
null
null
q-bio.GN
http://creativecommons.org/publicdomain/zero/1.0/
We propose a frame-based representation of k-mers for detecting sequencing errors and rare variants in next-generation sequencing data obtained from populations of closely related genomes. Frames are sets of non-orthogonal basis functions, traditionally used in signal processing for noise removal. We define a frame for genomes and sequenced reads to consist of discrete spatial signals of every k-mer of a given size. We show that each k-mer in the sequenced data can be projected onto multiple frames and that these projections are maximized for spatial signals corresponding to the k-mer's substrings. Our proposed classifier, MultiRes, is trained on the projections of k-mers as features used for marking k-mers as erroneous or as true variations in the genome. We evaluate MultiRes on simulated and real viral population datasets and compare it to other error correction methods known in the literature. MultiRes yields 4 to 500 times fewer false-positive k-mer predictions than other methods, which is essential for accurate estimation of viral population diversity and for de-novo assembly. It has high recall of the true k-mers, comparable to other error correction methods. MultiRes also achieves greater than 95% recall for detecting single nucleotide polymorphisms (SNPs) and fewer false-positive SNPs, while detecting a higher number of rare variants than other variant-calling methods for viral populations. The software is freely available from the GitHub link (https://github.com/raunaq-m/MultiRes).
[ { "created": "Sat, 16 Apr 2016 21:56:34 GMT", "version": "v1" } ]
2016-04-19
[ [ "Malhotra", "Raunaq", "" ], [ "Mukhopadhyay", "Manjari", "" ], [ "Poss", "Mary", "" ], [ "Acharya", "Raj", "" ] ]
We propose a frame-based representation of k-mers for detecting sequencing errors and rare variants in next-generation sequencing data obtained from populations of closely related genomes. Frames are sets of non-orthogonal basis functions, traditionally used in signal processing for noise removal. We define a frame for genomes and sequenced reads to consist of discrete spatial signals of every k-mer of a given size. We show that each k-mer in the sequenced data can be projected onto multiple frames and that these projections are maximized for spatial signals corresponding to the k-mer's substrings. Our proposed classifier, MultiRes, is trained on the projections of k-mers as features used for marking k-mers as erroneous or as true variations in the genome. We evaluate MultiRes on simulated and real viral population datasets and compare it to other error correction methods known in the literature. MultiRes yields 4 to 500 times fewer false-positive k-mer predictions than other methods, which is essential for accurate estimation of viral population diversity and for de-novo assembly. It has high recall of the true k-mers, comparable to other error correction methods. MultiRes also achieves greater than 95% recall for detecting single nucleotide polymorphisms (SNPs) and fewer false-positive SNPs, while detecting a higher number of rare variants than other variant-calling methods for viral populations. The software is freely available from the GitHub link (https://github.com/raunaq-m/MultiRes).
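The core observation above, that a k-mer's projection is maximized by signals corresponding to its own substrings, can be illustrated with a toy discrete-signal encoding (a one-hot signal per base and a sliding-window inner product; this is a deliberate simplification of the paper's frame construction):

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def encode(seq):
    """One-hot encode a DNA string as a (length, 4) discrete signal."""
    out = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        out[i, BASES.index(b)] = 1.0
    return out

def projection(kmer, sub):
    """Max sliding-window inner product of a shorter k-mer against a longer
    one: equals the best count of aligned matching bases."""
    big, small = encode(kmer), encode(sub)
    k, s = len(kmer), len(sub)
    return max(float(np.sum(big[i:i + s] * small)) for i in range(k - s + 1))

kmer = "ACGTA"
scores = {"".join(p): projection(kmer, "".join(p))
          for p in product(BASES, repeat=3)}
best = max(scores.values())
# the 3-mers achieving the maximal projection are exactly the substrings
maximizers = {m for m, v in scores.items() if v == best}
```

Only a 3-mer that matches all three aligned positions reaches the maximal score, so the maximizing set coincides with the k-mer's substrings.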
1804.02437
Tatiana Levanova
T. A. Levanova, A. O. Kazakov, A. G. Korotkov, and G. V. Osipov
The impact of electrical couplings on the sequential bursting activity in the ensemble of inhibitory coupled Van der Pol elements
7 pages
null
null
null
q-bio.NC nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new phenomenological model of an ensemble of three neurons with chemical (synaptic) and electrical couplings is studied. Each neuron is modeled by a single Van der Pol oscillator. The influence of the electrical coupling strength and of the frequency mismatch between the elements on the regime of sequential activity is investigated.
[ { "created": "Thu, 29 Mar 2018 13:41:48 GMT", "version": "v1" } ]
2018-04-10
[ [ "Levanova", "T. A.", "" ], [ "Kazakov", "A. O.", "" ], [ "Korotkov", "A. G.", "" ], [ "Osipov", "G. V.", "" ] ]
A new phenomenological model of an ensemble of three neurons with chemical (synaptic) and electrical couplings is studied. Each neuron is modeled by a single Van der Pol oscillator. The influence of the electrical coupling strength and of the frequency mismatch between the elements on the regime of sequential activity is investigated.
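A bare-bones numerical sketch of three Van der Pol oscillators with diffusive (electrical-style) all-to-all coupling and a small frequency mismatch; the coupling form and every parameter value are illustrative, and the paper's model additionally includes chemical (synaptic) couplings with their own functional form:

```python
import numpy as np

def coupled_vdp(mu=1.0, g=0.1, omegas=(1.0, 1.02, 0.98), T=100.0, dt=0.001):
    """Euler integration of three diffusively coupled Van der Pol units:
    x_i'' = mu (1 - x_i^2) x_i' - omega_i^2 x_i + g * sum_j (x_j - x_i)."""
    n = 3
    x = np.array([0.1, 0.2, -0.1])
    v = np.zeros(n)
    om2 = np.array(omegas) ** 2
    xs = []
    for _ in range(int(T / dt)):
        coupling = g * (x.sum() - n * x)   # equals g * sum_j (x_j - x_i)
        a = mu * (1 - x ** 2) * v - om2 * x + coupling
        x, v = x + dt * v, v + dt * a
        xs.append(x.copy())
    return np.array(xs)

xs = coupled_vdp()
# late-time amplitude: each unit should settle near the VdP limit cycle
amp = float(np.max(np.abs(xs[-20000:])))
ok = bool(np.all(np.isfinite(xs)))
```

The interesting regimes in the paper (sequential bursting, its dependence on g and on the mismatch) would be explored by sweeping `g` and `omegas` in such a loop.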
1606.01422
Sherry Towers
Sherry Towers, Fred Brauer, Carlos Castillo-Chavez, Andrew K.I. Falconar, Anuj Mubayi, Claudia M.E. Romero-Vivas
Estimate of the reproduction number of the 2015 Zika virus outbreak in Barranquilla, Colombia, and estimation of the relative role of sexual transmission
6 pages, 2 tables, 1 figure. Final version published by Epidemics
Epidemics 17 (2016) 50-55
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: In 2015, the Zika arbovirus (ZIKV) began circulating in the Americas, rapidly expanding its global geographic range in explosive outbreaks. Unusual among mosquito-borne diseases, ZIKV has been shown to also be sexually transmitted, although sustained autochthonous transmission due to sexual transmission alone has not been observed, indicating the reproduction number (R0) for sexual transmission alone is less than 1. Critical to the assessment of outbreak risk, estimation of the potential attack rates, and assessment of control measures, are estimates of the basic reproduction number, R0. Methods: We estimated the R0 of the 2015 ZIKV outbreak in Barranquilla, Colombia, through an analysis of the exponential rise in clinically identified ZIKV cases (n = 359 to the end of November, 2015). Findings: The rate of exponential rise in cases was rho = 0.076 day^-1, with 95 percent CI [0.066, 0.087] day^-1. We used a vector-borne disease model with additional direct transmission to estimate the R0; assuming the R0 of sexual transmission alone is less than 1, we estimated the total R0 = 3.8 [2.4, 5.6], and that the fraction of cases due to sexual transmission was 0.23 [0.01, 0.47] with 95 percent confidence. Interpretation: This is among the first estimates of R0 for a ZIKV outbreak in the Americas, and also among the first quantifications of the relative impact of sexual transmission.
[ { "created": "Sat, 4 Jun 2016 22:25:34 GMT", "version": "v1" }, { "created": "Tue, 11 Oct 2016 17:42:07 GMT", "version": "v2" }, { "created": "Sun, 15 Jan 2017 16:09:17 GMT", "version": "v3" } ]
2017-01-17
[ [ "Towers", "Sherry", "" ], [ "Brauer", "Fred", "" ], [ "Castillo-Chavez", "Carlos", "" ], [ "Falconar", "Andrew K. I.", "" ], [ "Mubayi", "Anuj", "" ], [ "Romero-Vivas", "Claudia M. E.", "" ] ]
Background: In 2015, the Zika arbovirus (ZIKV) began circulating in the Americas, rapidly expanding its global geographic range in explosive outbreaks. Unusual among mosquito-borne diseases, ZIKV has been shown to also be sexually transmitted, although sustained autochthonous transmission due to sexual transmission alone has not been observed, indicating the reproduction number (R0) for sexual transmission alone is less than 1. Critical to the assessment of outbreak risk, estimation of the potential attack rates, and assessment of control measures, are estimates of the basic reproduction number, R0. Methods: We estimated the R0 of the 2015 ZIKV outbreak in Barranquilla, Colombia, through an analysis of the exponential rise in clinically identified ZIKV cases (n = 359 to the end of November, 2015). Findings: The rate of exponential rise in cases was rho = 0.076 day^-1, with 95 percent CI [0.066, 0.087] day^-1. We used a vector-borne disease model with additional direct transmission to estimate the R0; assuming the R0 of sexual transmission alone is less than 1, we estimated the total R0 = 3.8 [2.4, 5.6], and that the fraction of cases due to sexual transmission was 0.23 [0.01, 0.47] with 95 percent confidence. Interpretation: This is among the first estimates of R0 for a ZIKV outbreak in the Americas, and also among the first quantifications of the relative impact of sexual transmission.
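The link between the observed exponential growth rate and R0 can be sketched with the simplest SIR relation, R0 = 1 + rho/gamma. This is only a back-of-the-envelope version: the paper fits a vector-borne model with an additional direct-transmission route, which accounts for the mosquito generation and yields a substantially larger estimate (3.8), and the 7-day infectious period used here is an assumed illustrative number.

```python
rho = 0.076              # observed exponential growth rate, per day (from the paper)
rho_lo, rho_hi = 0.066, 0.087   # its 95 percent CI (from the paper)

def r0_sir(rho, infectious_period_days):
    """Toy SIR estimate: R0 = 1 + rho / gamma, with gamma the recovery rate
    (reciprocal of an assumed infectious period)."""
    gamma = 1.0 / infectious_period_days
    return 1.0 + rho / gamma

estimate = r0_sir(rho, 7.0)
interval = (r0_sir(rho_lo, 7.0), r0_sir(rho_hi, 7.0))
```

Propagating the CI on rho through the same formula gives a corresponding interval on the toy estimate; a vector-borne generation-interval correction would scale this up toward the paper's value.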
1406.6732
Cesar Omar Flores Garcia
Cesar O. Flores, Timoth\'ee Poisot, Sergi Valverde, and Joshua S. Weitz
BiMAT: a MATLAB(R) package to facilitate the analysis and visualization of bipartite networks
15 pages, 5 figures. Plan to be submitted to a Journal
null
null
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The statistical analysis of the structure of bipartite ecological networks has increased in importance in recent years. Yet, both algorithms and software packages for the analysis of network structure focus on properties of unipartite networks. In response, we describe BiMAT, an object-oriented MATLAB package for the study of the structure of bipartite ecological networks. BiMAT can analyze the structure of networks, including features such as modularity and nestedness, using a selection of widely-adopted algorithms. BiMAT also includes a variety of null models for evaluating the statistical significance of network properties. BiMAT is capable of performing multi-scale analysis of structure - a potential (and under-examined) feature of many biological networks. Finally, BiMAT relies on the graphics capabilities of MATLAB to enable the visualization of the statistical structure of bipartite networks in either matrix or graph layout representations. BiMAT is available as an open-source package at http://ecotheory.biology.gatech.edu/cflores.
[ { "created": "Wed, 25 Jun 2014 23:06:34 GMT", "version": "v1" }, { "created": "Thu, 17 Jul 2014 01:30:05 GMT", "version": "v2" } ]
2014-07-18
[ [ "Flores", "Cesar O.", "" ], [ "Poisot", "Timothée", "" ], [ "Valverde", "Sergi", "" ], [ "Weitz", "Joshua S.", "" ] ]
The statistical analysis of the structure of bipartite ecological networks has increased in importance in recent years. Yet, both algorithms and software packages for the analysis of network structure focus on properties of unipartite networks. In response, we describe BiMAT, an object-oriented MATLAB package for the study of the structure of bipartite ecological networks. BiMAT can analyze the structure of networks, including features such as modularity and nestedness, using a selection of widely-adopted algorithms. BiMAT also includes a variety of null models for evaluating the statistical significance of network properties. BiMAT is capable of performing multi-scale analysis of structure - a potential (and under-examined) feature of many biological networks. Finally, BiMAT relies on the graphics capabilities of MATLAB to enable the visualization of the statistical structure of bipartite networks in either matrix or graph layout representations. BiMAT is available as an open-source package at http://ecotheory.biology.gatech.edu/cflores.
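One of the structural measures BiMAT computes is nestedness. The sketch below implements a compact version of the widely used NODF metric for a small binary bipartite incidence matrix; it is a standard textbook formulation for illustration, not BiMAT's actual MATLAB code, and it ignores some edge cases (e.g. empty rows) that a production implementation handles.

```python
import numpy as np

def nodf(B):
    """NODF nestedness (0..100) of a binary bipartite matrix: average paired
    overlap over row pairs and column pairs where the first member of the
    pair has strictly larger degree; equal-degree pairs contribute zero."""
    def axis_scores(M):
        deg = M.sum(axis=1)
        scores = []
        n = M.shape[0]
        for i in range(n):
            for j in range(n):
                if i != j and deg[i] > deg[j] and deg[j] > 0:
                    shared = float(np.sum(M[i] * M[j]))
                    scores.append(100.0 * shared / deg[j])
                elif i < j and deg[i] == deg[j]:
                    scores.append(0.0)
        return scores

    scores = axis_scores(B) + axis_scores(B.T)
    return sum(scores) / len(scores) if scores else 0.0

perfectly_nested = np.array([[1, 1, 1],
                             [1, 1, 0],
                             [1, 0, 0]])
score = nodf(perfectly_nested)
```

A perfectly nested matrix, where every row's interactions are a subset of those of any higher-degree row, attains the maximal score of 100; BiMAT's null models would then be used to judge whether an observed score is statistically surprising.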
2311.12901
Yaniv Altshuler
Yaniv Altshuler, Tzruya Calvao Chebach, Shalom Cohen
From Microbes to Methane: AI-Based Predictive Modeling of Feed Additive Efficacy in Dairy Cows
51 pages, 24 figures, 11 tables, 93 references
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In an era of increasing pressure to achieve sustainable agriculture, the optimization of livestock feed for enhancing yield and minimizing environmental impact is a paramount objective. This study presents a pioneering approach towards this goal, using rumen microbiome data to predict the efficacy of feed additives in dairy cattle. We collected an extensive dataset that includes methane emissions from 2,190 Holstein cows distributed across 34 distinct sites. The cows were divided into control and experimental groups in a double-blind, unbiased manner, accounting for variables such as age, days in lactation, and average milk yield. The experimental groups were administered one of four leading commercial feed additives: Agolin, Kexxtone, Allimax, and Relyon. Methane emissions were measured individually both before the administration of additives and over a subsequent 12-week period. To develop our predictive model for additive efficacy, rumen microbiome samples were collected from 510 cows from the same herds prior to the study's onset. These samples underwent deep metagenomic shotgun sequencing, yielding an average of 15.7 million reads per sample. Utilizing innovative artificial intelligence techniques we successfully estimated the efficacy of these feed additives across different farms. The model's robustness was further confirmed through validation with independent cohorts, affirming its generalizability and reliability. Our results underscore the transformative capability of using targeted feed additive strategies to both optimize dairy yield and milk composition, and to significantly reduce methane emissions. Specifically, our predictive model demonstrates a scenario where its application could guide the assignment of additives to farms where they are most effective. In doing so, we could achieve an average potential reduction of over 27% in overall emissions.
[ { "created": "Tue, 21 Nov 2023 10:57:28 GMT", "version": "v1" } ]
2023-11-23
[ [ "Altshuler", "Yaniv", "" ], [ "Chebach", "Tzruya Calvao", "" ], [ "Cohen", "Shalom", "" ] ]
In an era of increasing pressure to achieve sustainable agriculture, the optimization of livestock feed for enhancing yield and minimizing environmental impact is a paramount objective. This study presents a pioneering approach towards this goal, using rumen microbiome data to predict the efficacy of feed additives in dairy cattle. We collected an extensive dataset that includes methane emissions from 2,190 Holstein cows distributed across 34 distinct sites. The cows were divided into control and experimental groups in a double-blind, unbiased manner, accounting for variables such as age, days in lactation, and average milk yield. The experimental groups were administered one of four leading commercial feed additives: Agolin, Kexxtone, Allimax, and Relyon. Methane emissions were measured individually both before the administration of additives and over a subsequent 12-week period. To develop our predictive model for additive efficacy, rumen microbiome samples were collected from 510 cows from the same herds prior to the study's onset. These samples underwent deep metagenomic shotgun sequencing, yielding an average of 15.7 million reads per sample. Utilizing innovative artificial intelligence techniques we successfully estimated the efficacy of these feed additives across different farms. The model's robustness was further confirmed through validation with independent cohorts, affirming its generalizability and reliability. Our results underscore the transformative capability of using targeted feed additive strategies to both optimize dairy yield and milk composition, and to significantly reduce methane emissions. Specifically, our predictive model demonstrates a scenario where its application could guide the assignment of additives to farms where they are most effective. In doing so, we could achieve an average potential reduction of over 27% in overall emissions.
1408.6896
Benjamin de Bivort
Sean Buchanan, Jamey Kain and Benjamin de Bivort
Neuronal control of locomotor handedness in Drosophila
14 pages, 13 figures
null
10.1073/pnas.1500804112
null
q-bio.NC
http://creativecommons.org/licenses/by/3.0/
Handedness in humans - better performance using either the left or right hand - is personally familiar, moderately heritable, and regulated by many genes, including those involved in general body symmetry. But behavioral handedness, i.e. lateralization, is a multifaceted phenomenon. For example, people display clockwise or counter-clockwise biases in their walking behavior that is uncorrelated to their hand dominance, and lateralized behavioral biases have been shown in species as disparate as mice (paw usage), octopi (eye usage), and tortoises (side rolled on during righting). However, the mechanisms by which asymmetries are instilled in behavior are unknown, and a system for studying behavioral handedness in a genetically tractable model system is needed. Here we show that Drosophila melanogaster flies exhibit striking variability in their left-right choice behavior during locomotion. Very strongly biased "left-handed" and "right-handed" individuals are common in every line assayed. The handedness of an individual persists for its lifetime, but is not passed on to progeny, suggesting that mechanisms other than genetics determine individual handedness. We use the Drosophila transgenic toolkit to map a specific set of neurons within the central complex that regulates the strength of behavioral handedness within a line. These findings give insights into choice behaviors and laterality in a simple model organism, and demonstrate that individuals from isogenic populations reared under experimentally identical conditions nevertheless display idiosyncratic behaviors.
[ { "created": "Fri, 29 Aug 2014 01:29:45 GMT", "version": "v1" } ]
2016-07-13
[ [ "Buchanan", "Sean", "" ], [ "Kain", "Jamey", "" ], [ "de Bivort", "Benjamin", "" ] ]
Handedness in humans - better performance using either the left or right hand - is personally familiar, moderately heritable, and regulated by many genes, including those involved in general body symmetry. But behavioral handedness, i.e. lateralization, is a multifaceted phenomenon. For example, people display clockwise or counter-clockwise biases in their walking behavior that is uncorrelated to their hand dominance, and lateralized behavioral biases have been shown in species as disparate as mice (paw usage), octopi (eye usage), and tortoises (side rolled on during righting). However, the mechanisms by which asymmetries are instilled in behavior are unknown, and a system for studying behavioral handedness in a genetically tractable model system is needed. Here we show that Drosophila melanogaster flies exhibit striking variability in their left-right choice behavior during locomotion. Very strongly biased "left-handed" and "right-handed" individuals are common in every line assayed. The handedness of an individual persists for its lifetime, but is not passed on to progeny, suggesting that mechanisms other than genetics determine individual handedness. We use the Drosophila transgenic toolkit to map a specific set of neurons within the central complex that regulates the strength of behavioral handedness within a line. These findings give insights into choice behaviors and laterality in a simple model organism, and demonstrate that individuals from isogenic populations reared under experimentally identical conditions nevertheless display idiosyncratic behaviors.
2202.04917
Asim Kumar Ghosh
Kalpita Ghosh and Asim Kumar Ghosh
Study of COVID-19 epidemiological evolution in India with a multi-wave SIR model
Five pages, two-column, six figures
null
null
null
q-bio.PE cs.CY math.DS
http://creativecommons.org/licenses/by/4.0/
The global pandemic due to the outbreak of COVID-19 has ravaged the whole world for more than two years, with all countries suffering since December 2019. In order to control these ongoing waves of epidemiological infections, attempts have been made to understand the dynamics of this pandemic in a deterministic approach with the help of several mathematical models. In this article the characteristics of a multi-wave SIR model, which successfully explains the features of the pandemic waves in India, have been studied. The stability of this model has been analyzed by identifying the equilibrium points and by finding the eigenvalues of the corresponding Jacobian matrices. Complex eigenvalues are found, which give rise to oscillatory solutions for the three categories of population: susceptible, infected and removed. In this model, a finite probability for recovered people to become susceptible again is introduced, and it is this feedback that leads to the oscillatory solutions. The set of differential equations has been solved numerically to obtain the variation of the numbers of susceptible, infected and removed people with time. Finally, in this phenomenological study an additional modification is made in order to explain the aperiodic oscillation, which is found necessary to capture the features of the epidemiological waves in India.
[ { "created": "Thu, 10 Feb 2022 09:18:50 GMT", "version": "v1" } ]
2022-02-11
[ [ "Ghosh", "Kalpita", "" ], [ "Ghosh", "Asim Kumar", "" ] ]
The global pandemic due to the outbreak of COVID-19 has ravaged the whole world for more than two years, with all countries suffering since December 2019. In order to control these ongoing waves of epidemiological infections, attempts have been made to understand the dynamics of this pandemic in a deterministic approach with the help of several mathematical models. In this article the characteristics of a multi-wave SIR model, which successfully explains the features of the pandemic waves in India, have been studied. The stability of this model has been analyzed by identifying the equilibrium points and by finding the eigenvalues of the corresponding Jacobian matrices. Complex eigenvalues are found, which give rise to oscillatory solutions for the three categories of population: susceptible, infected and removed. In this model, a finite probability for recovered people to become susceptible again is introduced, and it is this feedback that leads to the oscillatory solutions. The set of differential equations has been solved numerically to obtain the variation of the numbers of susceptible, infected and removed people with time. Finally, in this phenomenological study an additional modification is made in order to explain the aperiodic oscillation, which is found necessary to capture the features of the epidemiological waves in India.
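The Jacobian-eigenvalue analysis described above can be reproduced for the simplest SIR-with-reinfection (SIRS) model; the parameter values below are illustrative, not fitted to Indian data. At the endemic equilibrium the Jacobian of the reduced two-dimensional system acquires a complex-conjugate pair of eigenvalues with negative real part, i.e. damped oscillations:

```python
import numpy as np

beta, gamma, xi = 0.5, 0.1, 1.0 / 365   # illustrative rates (per day)

# Reduced 2-D SIRS with R = 1 - S - I:
#   S' = -beta S I + xi (1 - S - I)
#   I' =  beta S I - gamma I
S_star = gamma / beta
I_star = xi * (1.0 - S_star) / (gamma + xi)

# Jacobian at the endemic equilibrium (note beta * S_star - gamma = 0,
# so the lower-right entry vanishes)
J = np.array([[-beta * I_star - xi, -beta * S_star - xi],
              [ beta * I_star,       0.0               ]])
eigvals = np.linalg.eigvals(J)
is_complex = bool(np.iscomplex(eigvals).all())
damped = bool((eigvals.real < 0).all())
period_days = float(2 * np.pi / abs(eigvals[0].imag))
```

The imaginary part sets the period of the damped epidemic waves; sustained or aperiodic oscillations, as in the paper, require an extra ingredient such as time-dependent parameters.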
2306.06699
Ahmed BaHammam
Ahmed S. BaHammam, Khaled Trabelsi, Seithikurippu R. Pandi-Perumal, Hiatham Jahrami
Adapting to the Impact of AI in Scientific Writing: Balancing Benefits and Drawbacks while Developing Policies and Regulations
2 Figure, (in press)
Journal of Nature and Science of Medicine 2023, Volume 6, Issue 3
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This article examines the advantages and disadvantages of Large Language Models (LLMs) and Artificial Intelligence (AI) in research and education and proposes the urgent need for an international statement to guide their responsible use. LLMs and AI demonstrate remarkable natural language processing, data analysis, and decision-making capabilities, offering potential benefits such as improved efficiency and transformative solutions. However, concerns regarding ethical considerations, bias, fake publications, and malicious use also arise. The objectives of this paper are to critically evaluate the utility of LLMs and AI in research and education, call for discussions between stakeholders, and discuss the need for an international statement. We identify advantages such as data processing, task automation, and personalized experiences, alongside disadvantages like bias reinforcement, interpretability challenges, inaccurate reporting, and plagiarism. Stakeholders from academia, industry, government, and civil society must engage in open discussions to address the ethical, legal, and societal implications. The proposed international statement should emphasize transparency, accountability, ongoing research, and risk mitigation. Monitoring, evaluation, user education, and awareness are essential components. By fostering discussions and establishing guidelines, we can ensure the responsible and ethical development and use of LLMs and AI, maximizing benefits while minimizing risks.
[ { "created": "Sun, 11 Jun 2023 15:06:55 GMT", "version": "v1" } ]
2023-06-13
[ [ "BaHammam", "Ahmed S.", "" ], [ "Trabelsi", "Khaled", "" ], [ "Pandi-Perumal", "Seithikurippu R.", "" ], [ "Jahrami", "Hiatham", "" ] ]
This article examines the advantages and disadvantages of Large Language Models (LLMs) and Artificial Intelligence (AI) in research and education and proposes the urgent need for an international statement to guide their responsible use. LLMs and AI demonstrate remarkable natural language processing, data analysis, and decision-making capabilities, offering potential benefits such as improved efficiency and transformative solutions. However, concerns regarding ethical considerations, bias, fake publications, and malicious use also arise. The objectives of this paper are to critically evaluate the utility of LLMs and AI in research and education, call for discussions between stakeholders, and discuss the need for an international statement. We identify advantages such as data processing, task automation, and personalized experiences, alongside disadvantages like bias reinforcement, interpretability challenges, inaccurate reporting, and plagiarism. Stakeholders from academia, industry, government, and civil society must engage in open discussions to address the ethical, legal, and societal implications. The proposed international statement should emphasize transparency, accountability, ongoing research, and risk mitigation. Monitoring, evaluation, user education, and awareness are essential components. By fostering discussions and establishing guidelines, we can ensure the responsible and ethical development and use of LLMs and AI, maximizing benefits while minimizing risks.
q-bio/0312044
Peng-Ye Wang
Ping Xie, Shuo-Xing Dou, Peng-Ye Wang
Model for processive movement of myosin V and myosin VI
18 pages, 7 figures
Chinese Physics, Vol.14, No.4 (2005) 744-752
10.1088/1009-1963/14/4/018
null
q-bio.BM
null
Myosin V and myosin VI are two classes of two-headed molecular motors of the myosin superfamily that move processively along helical actin filaments in opposite directions. Here we present a hand-over-hand model for their processive movements. In the model, the moving direction of a dimeric molecular motor is automatically determined by the relative orientation between its two heads in the free state and by its heads' binding orientation on the track filament. This determines that myosin V moves toward the barbed end and myosin VI moves toward the pointed end of actin. During the moving period in one step, one head remains bound to actin for myosin V whereas both heads are detached for myosin VI: the moving manner is determined by the length of the neck domain. This naturally explains the similar dynamic behaviors but opposite moving directions of myosin VI and mutant myosin V (the neck of which is truncated to only one-sixth of the native length). Because of their different moving manners, myosin VI and mutant myosin V exhibit significantly broader step-size distributions than native myosin V. However, all three motors give the same mean step size of 36 nm (the pseudo-repeat of the actin helix). Using the model, we study the dynamics of myosin V quantitatively, with theoretical results in agreement with previous experimental ones.
[ { "created": "Tue, 30 Dec 2003 06:59:28 GMT", "version": "v1" } ]
2009-11-10
[ [ "Xie", "Ping", "" ], [ "Dou", "Shuo-Xing", "" ], [ "Wang", "Peng-Ye", "" ] ]
Myosin V and myosin VI are two classes of two-headed molecular motors of the myosin superfamily that move processively along helical actin filaments in opposite directions. Here we present a hand-over-hand model for their processive movements. In the model, the moving direction of a dimeric molecular motor is automatically determined by the relative orientation between its two heads in the free state and by its heads' binding orientation on the track filament. This determines that myosin V moves toward the barbed end and myosin VI moves toward the pointed end of actin. During the moving period in one step, one head remains bound to actin for myosin V whereas both heads are detached for myosin VI: the moving manner is determined by the length of the neck domain. This naturally explains the similar dynamic behaviors but opposite moving directions of myosin VI and mutant myosin V (the neck of which is truncated to only one-sixth of the native length). Because of their different moving manners, myosin VI and mutant myosin V exhibit significantly broader step-size distributions than native myosin V. However, all three motors give the same mean step size of 36 nm (the pseudo-repeat of the actin helix). Using the model, we study the dynamics of myosin V quantitatively, with theoretical results in agreement with previous experimental ones.
1102.1707
Pamela Reinagel
Philip Meier, Erik Flister, Pamela Reinagel
Collinear features impair visual detection by rats
The first two authors contributed equally; manuscript currently in peer review; this document includes main paper and supplementary materials
Meier, P., Flister, E., & Reinagel, P. (2011). Collinear features impair visual detection by rats. Journal of Vision,11(3):22,1-16
10.1167/11.3.22
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We measure rats' ability to detect an oriented visual target grating located between two flanking stimuli ("flankers"). Flankers varied in contrast, orientation, angular position, and sign. Rats are impaired at detecting visual targets with collinear flankers, compared to configurations where flankers differ from the target in orientation or angular position. In particular, rats are more likely to miss the target when flankers are collinear. The same impairment is found even when the flanker luminance was sign-reversed relative to the target. These findings suggest that contour alignment alters visual processing in rats, despite their lack of orientation columns in visual cortex. This is the first report that the arrangement of visual features relative to each other affects visual behavior in rats. To provide a conceptual framework for our findings, we relate our stimuli to a contrast normalization model of early visual processing. We suggest a pattern-sensitive generalization of the model which could account for a collinear deficit. These experiments were performed using a novel method for automated high-throughput training and testing of visual behavior in rodents.
[ { "created": "Tue, 8 Feb 2011 20:10:46 GMT", "version": "v1" } ]
2012-03-09
[ [ "Meier", "Philip", "" ], [ "Flister", "Erik", "" ], [ "Reinagel", "Pamela", "" ] ]
We measure rats' ability to detect an oriented visual target grating located between two flanking stimuli ("flankers"). Flankers varied in contrast, orientation, angular position, and sign. Rats are impaired at detecting visual targets with collinear flankers, compared to configurations where flankers differ from the target in orientation or angular position. In particular, rats are more likely to miss the target when flankers are collinear. The same impairment is found even when the flanker luminance was sign-reversed relative to the target. These findings suggest that contour alignment alters visual processing in rats, despite their lack of orientation columns in visual cortex. This is the first report that the arrangement of visual features relative to each other affects visual behavior in rats. To provide a conceptual framework for our findings, we relate our stimuli to a contrast normalization model of early visual processing. We suggest a pattern-sensitive generalization of the model which could account for a collinear deficit. These experiments were performed using a novel method for automated high-throughput training and testing of visual behavior in rodents.
2012.03030
Macoto Kikuchi
Tadamune Kaneko and Macoto Kikuchi
Evolution enhances mutational robustness and suppresses the emergence of a new phenotype: A new computational approach for studying evolution
14 pages, 12 figures
PLoS Comput Biol 18 (2022) e1009796
10.1371/journal.pcbi.1009796
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The aim of this paper is two-fold. First, we propose a new computational method to investigate the particularities of evolution. Second, we apply this method to a model of gene regulatory networks (GRNs) and explore the evolution of mutational robustness and bistability. Living systems have developed their functions through evolutionary processes. To understand the particularities of this process theoretically, evolutionary simulation (ES) alone is insufficient because the outcomes of ES depend on evolutionary pathways. We need a reference system for comparison. An appropriate reference system for this purpose is an ensemble of the randomly sampled genotypes. However, generating high-fitness genotypes by simple random sampling is difficult because such genotypes are rare. In this study, we used the multicanonical Monte Carlo method developed in statistical physics to construct a reference ensemble of GRNs and compared it with the outcomes of ES. We obtained the following results. First, mutational robustness was significantly higher in ES than in the reference ensemble at the same fitness level. Second, the emergence of a new phenotype, bistability, was delayed in evolution. Third, the bistable group of GRNs contains many mutationally fragile GRNs compared with those in the non-bistable group. This suggests that the delayed emergence of bistability is a consequence of the mutation-selection mechanism.
[ { "created": "Sat, 5 Dec 2020 13:28:37 GMT", "version": "v1" }, { "created": "Sun, 15 Aug 2021 07:12:44 GMT", "version": "v2" }, { "created": "Tue, 2 Nov 2021 03:30:14 GMT", "version": "v3" }, { "created": "Thu, 16 Dec 2021 06:49:26 GMT", "version": "v4" }, { "cre...
2022-01-21
[ [ "Kaneko", "Tadamune", "" ], [ "Kikuchi", "Macoto", "" ] ]
The aim of this paper is two-fold. First, we propose a new computational method to investigate the particularities of evolution. Second, we apply this method to a model of gene regulatory networks (GRNs) and explore the evolution of mutational robustness and bistability. Living systems have developed their functions through evolutionary processes. To understand the particularities of this process theoretically, evolutionary simulation (ES) alone is insufficient because the outcomes of ES depend on evolutionary pathways. We need a reference system for comparison. An appropriate reference system for this purpose is an ensemble of the randomly sampled genotypes. However, generating high-fitness genotypes by simple random sampling is difficult because such genotypes are rare. In this study, we used the multicanonical Monte Carlo method developed in statistical physics to construct a reference ensemble of GRNs and compared it with the outcomes of ES. We obtained the following results. First, mutational robustness was significantly higher in ES than in the reference ensemble at the same fitness level. Second, the emergence of a new phenotype, bistability, was delayed in evolution. Third, the bistable group of GRNs contains many mutationally fragile GRNs compared with those in the non-bistable group. This suggests that the delayed emergence of bistability is a consequence of the mutation-selection mechanism.
1307.8064
Michael Courtney
Michael W. Courtney and Joshua M. Courtney
Predictions Wrong Again on Dead Zone Area -- Gulf of Mexico Gaining Resistance to Nutrient Loading
5 pages, 1 figure
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mississippi River nutrient loads and water stratification on the Louisiana-Texas shelf contribute to an annually recurring, short-lived hypoxic bottom layer in areas of the northern Gulf of Mexico comprising less than 2% of the total Gulf of Mexico bottom area. This paper observes that NOAA and LUMCON have now published errant predictions of possible record-size areas of temporary bottom-water hypoxia ("dead zones") three times since 2005 (in 2008, 2011, and 2013), and that the LUMCON predictions of the area of hypoxic bottom water average 31% higher than the actual measured hypoxic areas from 2006 to 2014. These systematically high predictions depend on the assumption that the susceptibility of the Gulf of Mexico to forming hypoxic areas in response to nutrient loading has been relatively constant since 2001, though the susceptibility has been occasionally adjusted upward in different models. It has been previously suggested that tropical storms in a given year that occur on the Louisiana-Texas shelf between the peak nutrient loading in spring and formation of the hypoxic zone in summer can mitigate the size of the hypoxic zone that year through mixing of the stratified, well-oxygenated, lighter and warmer surface layers with the oxygen-depleted, heavier and cooler bottom layers. This paper suggests several reasons why the Louisiana-Texas shelf may be systematically growing less susceptible to a given level of nutrient loading over time, so that predictions based on the measured area of temporary bottom-water hypoxia prior to 2006 tend to be too large in recent years.
[ { "created": "Tue, 30 Jul 2013 17:45:16 GMT", "version": "v1" }, { "created": "Tue, 24 Feb 2015 06:13:39 GMT", "version": "v2" } ]
2015-02-25
[ [ "Courtney", "Michael W.", "" ], [ "Courtney", "Joshua M.", "" ] ]
Mississippi River nutrient loads and water stratification on the Louisiana-Texas shelf contribute to an annually recurring, short-lived hypoxic bottom layer in areas of the northern Gulf of Mexico comprising less than 2% of the total Gulf of Mexico bottom area. This paper observes that NOAA and LUMCON have now published errant predictions of possible record-size areas of temporary bottom-water hypoxia ("dead zones") three times since 2005 (in 2008, 2011, and 2013), and that the LUMCON predictions of the area of hypoxic bottom water average 31% higher than the actual measured hypoxic areas from 2006 to 2014. These systematically high predictions depend on the assumption that the susceptibility of the Gulf of Mexico to forming hypoxic areas in response to nutrient loading has been relatively constant since 2001, though the susceptibility has been occasionally adjusted upward in different models. It has been previously suggested that tropical storms in a given year that occur on the Louisiana-Texas shelf between the peak nutrient loading in spring and formation of the hypoxic zone in summer can mitigate the size of the hypoxic zone that year through mixing of the stratified, well-oxygenated, lighter and warmer surface layers with the oxygen-depleted, heavier and cooler bottom layers. This paper suggests several reasons why the Louisiana-Texas shelf may be systematically growing less susceptible to a given level of nutrient loading over time, so that predictions based on the measured area of temporary bottom-water hypoxia prior to 2006 tend to be too large in recent years.
1610.09720
James Wilsenach
James Wilsenach, Pietro Landi and Cang Hui
Evolutionary Fields Can Explain Patterns of High Dimensional Complexity in Ecology
9 pages, 6 figures
Phys. Rev. E 95, 042401 (2017)
10.1103/PhysRevE.95.042401
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the properties that make ecological systems so unique is the range of complex behavioural patterns that can be exhibited by even the simplest communities with only a few species. Much of this complexity is commonly attributed to stochastic factors which have very high degrees of freedom. Orthodox study of the evolution of these simple networks has generally been limited in its ability to explain complexity, since it restricts evolutionary adaptation to an inertia-free process with few degrees of freedom in which only gradual, moderately complex behaviours are possible. We propose a model inspired by particle-mediated field phenomena in classical physics, in combination with fundamental concepts in adaptation, which suggests that small but high-dimensional chaotic dynamics near the adaptive trait optimum could help explain complex properties shared by most ecological datasets, such as aperiodicity and pink, fractal noise spectra. By examining a simple predator-prey model and appealing to real ecological data, we show that this type of complexity could be easily confused for or confounded by stochasticity, especially when spurred on or amplified by stochastic factors that share variational and spectral properties with the underlying dynamics.
[ { "created": "Sun, 30 Oct 2016 22:17:58 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2017 18:21:20 GMT", "version": "v2" }, { "created": "Thu, 6 Apr 2017 14:15:13 GMT", "version": "v3" } ]
2022-03-18
[ [ "Wilsenach", "James", "" ], [ "Landi", "Pietro", "" ], [ "Hui", "Cang", "" ] ]
One of the properties that make ecological systems so unique is the range of complex behavioural patterns that can be exhibited by even the simplest communities with only a few species. Much of this complexity is commonly attributed to stochastic factors which have very high degrees of freedom. Orthodox study of the evolution of these simple networks has generally been limited in its ability to explain complexity, since it restricts evolutionary adaptation to an inertia-free process with few degrees of freedom in which only gradual, moderately complex behaviours are possible. We propose a model inspired by particle-mediated field phenomena in classical physics, in combination with fundamental concepts in adaptation, which suggests that small but high-dimensional chaotic dynamics near the adaptive trait optimum could help explain complex properties shared by most ecological datasets, such as aperiodicity and pink, fractal noise spectra. By examining a simple predator-prey model and appealing to real ecological data, we show that this type of complexity could be easily confused for or confounded by stochasticity, especially when spurred on or amplified by stochastic factors that share variational and spectral properties with the underlying dynamics.
1506.02722
Carl Boettiger
Carl Boettiger, Scott Chamberlain, Rutger Vos, Hilmar Lapp
RNeXML: a package for reading and writing richly annotated phylogenetic, character, and trait data in R
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/3.0/
NeXML is a powerful and extensible exchange standard recently proposed to better meet the expanding needs for phylogenetic data and metadata sharing. Here we present the RNeXML package, which provides users of the R programming language with easy-to-use tools for reading and writing NeXML documents, including rich metadata, in a way that interfaces seamlessly with the extensive library of phylogenetic tools already available in the R ecosystem.
[ { "created": "Mon, 8 Jun 2015 23:12:26 GMT", "version": "v1" } ]
2015-06-10
[ [ "Boettiger", "Carl", "" ], [ "Chamberlain", "Scott", "" ], [ "Vos", "Rutger", "" ], [ "Lapp", "Hilmar", "" ] ]
NeXML is a powerful and extensible exchange standard recently proposed to better meet the expanding needs for phylogenetic data and metadata sharing. Here we present the RNeXML package, which provides users of the R programming language with easy-to-use tools for reading and writing NeXML documents, including rich metadata, in a way that interfaces seamlessly with the extensive library of phylogenetic tools already available in the R ecosystem.
1107.0997
Bruno Goncalves
Nicola Perra, Duygu Balcan, Bruno Gon\c{c}alves, Alessandro Vespignani
Towards a characterization of behavior-disease models
24 pages, 15 figures
PLoS ONE 6(8): e23084 (2011)
10.1371/journal.pone.0023084
null
q-bio.PE cond-mat.stat-mech physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The last decade saw the advent of increasingly realistic epidemic models that leverage the availability of highly detailed census and human mobility data. Data-driven models aim at a granularity down to the level of households or single individuals. However, relatively little systematic work has been done to provide coupled behavior-disease models able to close the feedback loop between behavioral changes triggered in the population by an individual's perception of the disease spread and the actual disease spread itself. While models lacking this coupling can be extremely successful in mild epidemics, they obviously will be of limited use in situations where social disruption or behavioral alterations are induced in the population by knowledge of the disease. Here we propose a characterization of a set of prototypical mechanisms for self-initiated social distancing induced by local and non-local prevalence-based information available to individuals in the population. We characterize the effects of these mechanisms in the framework of a compartmental scheme that enlarges the basic SIR model by considering separate behavioral classes within the population. The transition of individuals in/out of behavioral classes is coupled with the spreading of the disease and provides a rich phase space with multiple epidemic peaks and tipping points. The class of models presented here can be used in the case of data-driven computational approaches to analyze scenarios of social adaptation and behavioral change.
[ { "created": "Tue, 5 Jul 2011 22:00:32 GMT", "version": "v1" } ]
2011-08-10
[ [ "Perra", "Nicola", "" ], [ "Balcan", "Duygu", "" ], [ "Gonçalves", "Bruno", "" ], [ "Vespignani", "Alessandro", "" ] ]
The last decade saw the advent of increasingly realistic epidemic models that leverage the availability of highly detailed census and human mobility data. Data-driven models aim at a granularity down to the level of households or single individuals. However, relatively little systematic work has been done to provide coupled behavior-disease models able to close the feedback loop between behavioral changes triggered in the population by an individual's perception of the disease spread and the actual disease spread itself. While models lacking this coupling can be extremely successful in mild epidemics, they obviously will be of limited use in situations where social disruption or behavioral alterations are induced in the population by knowledge of the disease. Here we propose a characterization of a set of prototypical mechanisms for self-initiated social distancing induced by local and non-local prevalence-based information available to individuals in the population. We characterize the effects of these mechanisms in the framework of a compartmental scheme that enlarges the basic SIR model by considering separate behavioral classes within the population. The transition of individuals in/out of behavioral classes is coupled with the spreading of the disease and provides a rich phase space with multiple epidemic peaks and tipping points. The class of models presented here can be used in the case of data-driven computational approaches to analyze scenarios of social adaptation and behavioral change.
1903.00458
Zak Costello
Zak Costello, Hector Garcia Martin
How to Hallucinate Functional Proteins
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we present a novel approach to protein design and phenotypic inference using a generative model for protein sequences. BioSeqVAE, a variational autoencoder variant, can hallucinate syntactically valid protein sequences that are likely to fold and function. BioSeqVAE is trained on the entire known protein sequence space and learns to generate valid examples of protein sequences in an unsupervised manner. The model is validated by showing that its latent feature space is useful and that it accurately reconstructs sequences. Its usefulness is demonstrated with a selection of relevant downstream design tasks. This work is intended to serve as a computational first step towards a general purpose structure free protein design tool.
[ { "created": "Fri, 1 Mar 2019 18:39:00 GMT", "version": "v1" } ]
2019-03-04
[ [ "Costello", "Zak", "" ], [ "Martin", "Hector Garcia", "" ] ]
Here we present a novel approach to protein design and phenotypic inference using a generative model for protein sequences. BioSeqVAE, a variational autoencoder variant, can hallucinate syntactically valid protein sequences that are likely to fold and function. BioSeqVAE is trained on the entire known protein sequence space and learns to generate valid examples of protein sequences in an unsupervised manner. The model is validated by showing that its latent feature space is useful and that it accurately reconstructs sequences. Its usefulness is demonstrated with a selection of relevant downstream design tasks. This work is intended to serve as a computational first step towards a general purpose structure free protein design tool.
1202.4923
Jan Hasenauer
J. Hasenauer, D. Schittler, and F. Allgower
A computational model for proliferation dynamics of division- and label-structured populations
null
null
null
null
q-bio.PE math.AP math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In most biological studies and processes, cell proliferation and population dynamics play an essential role. Due to this ubiquity, a multitude of mathematical models has been developed to describe these processes. While the simplest models only consider the size of the overall populations, others take division numbers and labeling of the cells into account. In this work, we present a modeling and computational framework for proliferating cell populations undergoing symmetric cell division. In contrast to existing models, the proposed model incorporates both the discrete age structure and the continuous label dynamics. Thus, it allows for the consideration of division-number-dependent parameters as well as the direct comparison of the model prediction with labeling experiments, e.g., performed with carboxyfluorescein succinimidyl ester (CFSE). We prove that under mild assumptions the resulting system of coupled partial differential equations (PDEs) can be decomposed into a system of ordinary differential equations (ODEs) and a set of decoupled PDEs, which reduces the computational effort drastically. Furthermore, the PDEs are solved analytically and the ODE system is truncated, which allows for the prediction of the label distribution of complex systems using a low-dimensional system of ODEs. In addition to modeling of labeling dynamics, we link the label-induced fluorescence to the measured fluorescence, which includes autofluorescence. For the resulting numerically challenging convolution integral, we provide an analytical approximation. This is illustrated by modeling and simulating a proliferating population with a division-number-dependent proliferation rate.
[ { "created": "Wed, 22 Feb 2012 14:49:50 GMT", "version": "v1" } ]
2012-02-23
[ [ "Hasenauer", "J.", "" ], [ "Schittler", "D.", "" ], [ "Allgower", "F.", "" ] ]
In most biological studies and processes, cell proliferation and population dynamics play an essential role. Due to this ubiquity, a multitude of mathematical models has been developed to describe these processes. While the simplest models only consider the size of the overall populations, others take division numbers and labeling of the cells into account. In this work, we present a modeling and computational framework for proliferating cell populations undergoing symmetric cell division. In contrast to existing models, the proposed model incorporates both the discrete age structure and the continuous label dynamics. Thus, it allows for the consideration of division-number-dependent parameters as well as the direct comparison of the model prediction with labeling experiments, e.g., performed with carboxyfluorescein succinimidyl ester (CFSE). We prove that under mild assumptions the resulting system of coupled partial differential equations (PDEs) can be decomposed into a system of ordinary differential equations (ODEs) and a set of decoupled PDEs, which reduces the computational effort drastically. Furthermore, the PDEs are solved analytically and the ODE system is truncated, which allows for the prediction of the label distribution of complex systems using a low-dimensional system of ODEs. In addition to modeling of labeling dynamics, we link the label-induced fluorescence to the measured fluorescence, which includes autofluorescence. For the resulting numerically challenging convolution integral, we provide an analytical approximation. This is illustrated by modeling and simulating a proliferating population with a division-number-dependent proliferation rate.
0705.2706
Massimo Sandal
Francesco Valle, Massimo Sandal, Bruno Samor\'i
The Interplay between Chemistry and Mechanics in the Transduction of a Mechanical Signal into a Biochemical Function
50 pages, 18 figures
null
10.1016/j.plrev.2007.06.001
null
q-bio.BM q-bio.MN
null
There are many processes in biology in which mechanical forces are generated. Force-bearing networks can transduce locally developed mechanical signals very extensively over different parts of the cell or tissues. In this article we provide an overview of this kind of mechanical transduction, focusing in particular on the multiple layers of complexity displayed by the mechanisms that control and trigger the conversion of a mechanical signal into a biochemical function. Single-molecule methodologies, through their capability to introduce force in studies of biological processes in which mechanical stresses are developed, are unveiling subtle intertwining mechanisms between chemistry and mechanics and, in particular, are revealing how chemistry can control mechanics. The possibility that chemistry interplays with mechanics should always be considered in biochemical studies.
[ { "created": "Fri, 18 May 2007 16:11:02 GMT", "version": "v1" }, { "created": "Wed, 23 May 2007 13:26:53 GMT", "version": "v2" } ]
2009-11-13
[ [ "Valle", "Francesco", "" ], [ "Sandal", "Massimo", "" ], [ "Samorí", "Bruno", "" ] ]
There are many processes in biology in which mechanical forces are generated. Force-bearing networks can transduce locally developed mechanical signals very extensively over different parts of the cell or tissues. In this article we provide an overview of this kind of mechanical transduction, focusing in particular on the multiple layers of complexity displayed by the mechanisms that control and trigger the conversion of a mechanical signal into a biochemical function. Single-molecule methodologies, through their capability to introduce force in studies of biological processes in which mechanical stresses are developed, are unveiling subtle intertwining mechanisms between chemistry and mechanics and, in particular, are revealing how chemistry can control mechanics. The possibility that chemistry interplays with mechanics should always be considered in biochemical studies.
2109.03889
Na Yu
Na Yu, Gurpreet Jagdev, Michelle Morgovsky
Noise-induced network bursts and coherence in a calcium-mediated neural network
null
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Noise-induced population bursting has been widely identified to play important roles in information processing. We constructed a mathematical model for a random and sparse neural network where bursting can be induced from the resting state by a global stochastic stimulus. Importantly, the noise-induced bursting dynamics of this network are mediated by calcium conductance. We use two spectral measures to evaluate the network coherence in the context of network bursts: the spike trains of all neurons and the individual bursts of all neurons. Our results show that the coherence of the network is optimized by an optimal level of stochastic stimulus, a phenomenon known as coherence resonance (CR). We also demonstrate that the interplay of calcium conductance and noise intensity can modify the degree of CR.
[ { "created": "Wed, 8 Sep 2021 19:33:12 GMT", "version": "v1" } ]
2021-09-10
[ [ "Yu", "Na", "" ], [ "Jagdev", "Gurpreet", "" ], [ "Morgovsky", "Michelle", "" ] ]
Noise-induced population bursting has been widely identified to play important roles in information processing. We constructed a mathematical model for a random and sparse neural network where bursting can be induced from the resting state by a global stochastic stimulus. Importantly, the noise-induced bursting dynamics of this network are mediated by calcium conductance. We use two spectral measures to evaluate the network coherence in the context of network bursts: the spike trains of all neurons and the individual bursts of all neurons. Our results show that the coherence of the network is optimized by an optimal level of stochastic stimulus, a phenomenon known as coherence resonance (CR). We also demonstrate that the interplay of calcium conductance and noise intensity can modify the degree of CR.
2306.04667
Francesco Ceccarelli Mr
Francesco Ceccarelli, Lorenzo Giusti, Sean B. Holden, Pietro Li\`o
Neural Embeddings for Protein Graphs
10 pages, 5 figures
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Proteins perform much of the work in living organisms, and consequently the development of efficient computational methods for protein representation is essential for advancing large-scale biological research. Most current approaches struggle to efficiently integrate the wealth of information contained in the protein sequence and structure. In this paper, we propose a novel framework for embedding protein graphs in geometric vector spaces, by learning an encoder function that preserves the structural distance between protein graphs. Utilizing Graph Neural Networks (GNNs) and Large Language Models (LLMs), the proposed framework generates structure- and sequence-aware protein representations. We demonstrate that our embeddings are successful in the task of comparing protein structures, while providing a significant speed-up compared to traditional approaches based on structural alignment. Our framework achieves remarkable results in the task of protein structure classification; in particular, when compared to other work, the proposed method shows an average F1-Score improvement of 26% on out-of-distribution (OOD) samples and of 32% when tested on samples coming from the same distribution as the training data. Our approach finds applications in areas such as drug prioritization, drug re-purposing, disease sub-type analysis and elsewhere.
[ { "created": "Wed, 7 Jun 2023 14:50:34 GMT", "version": "v1" } ]
2023-06-09
[ [ "Ceccarelli", "Francesco", "" ], [ "Giusti", "Lorenzo", "" ], [ "Holden", "Sean B.", "" ], [ "Liò", "Pietro", "" ] ]
Proteins perform much of the work in living organisms, and consequently the development of efficient computational methods for protein representation is essential for advancing large-scale biological research. Most current approaches struggle to efficiently integrate the wealth of information contained in the protein sequence and structure. In this paper, we propose a novel framework for embedding protein graphs in geometric vector spaces, by learning an encoder function that preserves the structural distance between protein graphs. Utilizing Graph Neural Networks (GNNs) and Large Language Models (LLMs), the proposed framework generates structure- and sequence-aware protein representations. We demonstrate that our embeddings are successful in the task of comparing protein structures, while providing a significant speed-up compared to traditional approaches based on structural alignment. Our framework achieves remarkable results in the task of protein structure classification; in particular, when compared to other work, the proposed method shows an average F1-Score improvement of 26% on out-of-distribution (OOD) samples and of 32% when tested on samples coming from the same distribution as the training data. Our approach finds applications in areas such as drug prioritization, drug re-purposing, disease sub-type analysis and elsewhere.
2008.07350
Sunilkumar Hosamani Dr
Sunilkumar M. Hosamani
Quantitative Structure Property Analysis of Anti-Covid-19 Drugs
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by recent work on anti-COVID-19 drugs \cite{2}, here we study the quantitative structure-property relationships (QSPR) of phytochemicals screened against SARS-CoV-2 $3CL^{pro}$ with the help of topological indices such as the first Zagreb index $M_{1}$, the second Zagreb index $M_{2}$, the Randi$\acute{c}$ index $R$, the Balaban index $J$, and the sum-connectivity index $SCI(G)$. Our study has revealed that the sum-connectivity index $(SCI)$ and the first Zagreb index $(M_{1})$ are two important parameters to predict the molecular weight and the topological polar surface area of phytochemicals, respectively.
[ { "created": "Thu, 6 Aug 2020 16:42:18 GMT", "version": "v1" } ]
2020-08-18
[ [ "Hosamani", "Sunilkumar M.", "" ] ]
Inspired by recent work on anti-COVID-19 drugs \cite{2}, here we study the quantitative structure-property relationships (QSPR) of phytochemicals screened against SARS-CoV-2 $3CL^{pro}$ with the help of topological indices such as the first Zagreb index $M_{1}$, the second Zagreb index $M_{2}$, the Randi$\acute{c}$ index $R$, the Balaban index $J$, and the sum-connectivity index $SCI(G)$. Our study has revealed that the sum-connectivity index $(SCI)$ and the first Zagreb index $(M_{1})$ are two important parameters to predict the molecular weight and the topological polar surface area of phytochemicals, respectively.
0907.3529
Hao Wang
Mark Pollicott, Hao Wang, and Howie Weiss
Recovering the time-dependent transmission rate from infection data via solution of an inverse ODE problem
22 pages, 6 figures
null
null
null
q-bio.QM math.DS q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The transmission rate of many acute infectious diseases varies significantly in time, but the underlying mechanisms are usually uncertain. They may include seasonal changes in the environment, contact rate, immune system response, etc. The transmission rate has been thought difficult to measure directly. We present a new algorithm to compute the time-dependent transmission rate directly from prevalence data, which makes no assumptions about the number of susceptibles or vital rates. The algorithm follows our complete and explicit solution of a mathematical inverse problem for SIR-type transmission models. We prove that almost any infection profile can be perfectly fitted by an SIR model with variable transmission rate. This clearly shows a serious danger of over-fitting such transmission models. We illustrate the algorithm with historic UK measles data and our observations support the common belief that measles transmission was predominantly driven by school contacts.
[ { "created": "Tue, 21 Jul 2009 02:45:59 GMT", "version": "v1" }, { "created": "Thu, 25 Feb 2010 07:14:23 GMT", "version": "v2" }, { "created": "Fri, 12 Nov 2010 21:25:05 GMT", "version": "v3" }, { "created": "Wed, 15 Jun 2011 23:19:11 GMT", "version": "v4" } ]
2011-06-17
[ [ "Pollicott", "Mark", "" ], [ "Wang", "Hao", "" ], [ "Weiss", "Howie", "" ] ]
The transmission rate of many acute infectious diseases varies significantly in time, but the underlying mechanisms are usually uncertain. They may include seasonal changes in the environment, contact rate, immune system response, etc. The transmission rate has been thought difficult to measure directly. We present a new algorithm to compute the time-dependent transmission rate directly from prevalence data, which makes no assumptions about the number of susceptibles or vital rates. The algorithm follows our complete and explicit solution of a mathematical inverse problem for SIR-type transmission models. We prove that almost any infection profile can be perfectly fitted by an SIR model with variable transmission rate. This clearly shows a serious danger of over-fitting such transmission models. We illustrate the algorithm with historic UK measles data and our observations support the common belief that measles transmission was predominantly driven by school contacts.
1408.1921
Paul Smolen
Paul Smolen, Douglas A. Baxter, John H. Byrne
Simulations Suggest Pharmacological Methods for Rescuing Long-Term Potentiation
21 pages, 4 figures
Journal of Theoretical Biology, 7 Nov 2014, pp. 243-250
10.1016/j.jtbi.2014.07.006
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Congenital cognitive dysfunctions are frequently due to deficits in molecular pathways that underlie synaptic plasticity. For example, Rubinstein-Taybi syndrome (RTS) is due to a mutation in cbp, encoding the histone acetyltransferase CREB-binding protein (CBP). CBP is a transcriptional co-activator for CREB, and induction of CREB-dependent transcription plays a key role in long-term memory (LTM). In animal models of RTS, mutations of cbp impair LTM and late-phase long-term potentiation (LTP). To explore intervention strategies to rescue the deficits in LTP, we extended a previous model of LTP induction to describe histone acetylation and simulated LTP impairment due to cbp mutation. Plausible drug effects were simulated by parameter changes, and many increased LTP. However, no parameter variation consistent with a biochemical effect of a known drug fully restored LTP. Thus, we examined paired parameter variations. A pair that simulated the effects of a phosphodiesterase inhibitor (slowing cAMP degradation) concurrent with a deacetylase inhibitor (prolonging histone acetylation) restored LTP. Importantly, these paired parameter changes did not alter basal synaptic weight. A pair that simulated a phosphodiesterase inhibitor and an acetyltransferase activator was similarly effective. For both pairs, strong additive synergism was present. These results suggest that promoting histone acetylation while simultaneously slowing the degradation of cAMP may constitute a promising strategy for restoring deficits in LTP that may be associated with learning deficits in RTS. More generally, these results illustrate that the strategy of combining modeling and empirical studies may help design effective therapies for improving long-term synaptic plasticity and learning in cognitive disorders.
[ { "created": "Fri, 8 Aug 2014 17:24:28 GMT", "version": "v1" } ]
2014-08-11
[ [ "Smolen", "Paul", "" ], [ "Baxter", "Douglas A.", "" ], [ "Byrne", "John H.", "" ] ]
Congenital cognitive dysfunctions are frequently due to deficits in molecular pathways that underlie synaptic plasticity. For example, Rubinstein-Taybi syndrome (RTS) is due to a mutation in cbp, encoding the histone acetyltransferase CREB-binding protein (CBP). CBP is a transcriptional co-activator for CREB, and induction of CREB-dependent transcription plays a key role in long-term memory (LTM). In animal models of RTS, mutations of cbp impair LTM and late-phase long-term potentiation (LTP). To explore intervention strategies to rescue the deficits in LTP, we extended a previous model of LTP induction to describe histone acetylation and simulated LTP impairment due to cbp mutation. Plausible drug effects were simulated by parameter changes, and many increased LTP. However, no parameter variation consistent with a biochemical effect of a known drug fully restored LTP. Thus, we examined paired parameter variations. A pair that simulated the effects of a phosphodiesterase inhibitor (slowing cAMP degradation) concurrent with a deacetylase inhibitor (prolonging histone acetylation) restored LTP. Importantly, these paired parameter changes did not alter basal synaptic weight. A pair that simulated a phosphodiesterase inhibitor and an acetyltransferase activator was similarly effective. For both pairs, strong additive synergism was present. These results suggest that promoting histone acetylation while simultaneously slowing the degradation of cAMP may constitute a promising strategy for restoring deficits in LTP that may be associated with learning deficits in RTS. More generally, these results illustrate that the strategy of combining modeling and empirical studies may help design effective therapies for improving long-term synaptic plasticity and learning in cognitive disorders.
2403.11979
Pan-Jun Kim
Junghun Chae, Roktaek Lim, Thomas L. P. Martin, Cheol-Min Ghim, Pan-Jun Kim
Enlightening the blind spot of the Michaelis-Menten rate law: The role of relaxation dynamics in molecular complex formation
null
null
null
null
q-bio.MN physics.bio-ph q-bio.BM q-bio.SC
http://creativecommons.org/licenses/by/4.0/
The century-long Michaelis-Menten rate law and its modifications in the modeling of biochemical rate processes stand on the assumption that the concentration of the complex of interacting molecules, at each moment, rapidly approaches an equilibrium (quasi-steady state) compared to the pace of molecular concentration changes. Yet, in the case of actively time-varying molecular concentrations with transient or oscillatory dynamics, the deviation of the complex profile from the quasi-steady state becomes relevant. A recent theoretical approach, known as the effective time-delay scheme (ETS), suggests that the delay by the relaxation time of molecular complex formation contributes to the substantial breakdown of the quasi-steady state assumption. Here, we systematically expand this ETS and inquire into the comprehensive roles of relaxation dynamics in complex formation. Through the modeling of rhythmic protein-protein and protein-DNA interactions and the mammalian circadian clock, our analysis reveals the effect of the relaxation dynamics beyond the time delay, which extends to the dampening of changes in the complex concentration with a reduction in the oscillation amplitude against the quasi-steady state. Interestingly, the combined effect of the time delay and amplitude reduction shapes both qualitative and quantitative oscillatory patterns such as the emergence and variability of the mammalian circadian rhythms. These findings highlight the drawback of the routine assumption of quasi-steady states and enhance the mechanistic understanding of rich time-varying biomolecular activities.
[ { "created": "Mon, 18 Mar 2024 17:15:11 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2024 12:12:57 GMT", "version": "v2" } ]
2024-06-10
[ [ "Chae", "Junghun", "" ], [ "Lim", "Roktaek", "" ], [ "Martin", "Thomas L. P.", "" ], [ "Ghim", "Cheol-Min", "" ], [ "Kim", "Pan-Jun", "" ] ]
The century-long Michaelis-Menten rate law and its modifications in the modeling of biochemical rate processes stand on the assumption that the concentration of the complex of interacting molecules, at each moment, rapidly approaches an equilibrium (quasi-steady state) compared to the pace of molecular concentration changes. Yet, in the case of actively time-varying molecular concentrations with transient or oscillatory dynamics, the deviation of the complex profile from the quasi-steady state becomes relevant. A recent theoretical approach, known as the effective time-delay scheme (ETS), suggests that the delay by the relaxation time of molecular complex formation contributes to the substantial breakdown of the quasi-steady state assumption. Here, we systematically expand this ETS and inquire into the comprehensive roles of relaxation dynamics in complex formation. Through the modeling of rhythmic protein-protein and protein-DNA interactions and the mammalian circadian clock, our analysis reveals the effect of the relaxation dynamics beyond the time delay, which extends to the dampening of changes in the complex concentration with a reduction in the oscillation amplitude against the quasi-steady state. Interestingly, the combined effect of the time delay and amplitude reduction shapes both qualitative and quantitative oscillatory patterns such as the emergence and variability of the mammalian circadian rhythms. These findings highlight the drawback of the routine assumption of quasi-steady states and enhance the mechanistic understanding of rich time-varying biomolecular activities.
0706.2053
Michael Sadovsky
Michael G.Sadovsky, Maria Yu.Senashova, Kristina A.Kourshakova
Simple Model of Complex Reflection Behaviour in Two-Species Community
10 pages, no figures
null
null
null
q-bio.PE
null
A model of smart migration for a two-species community is developed, in which the individuals implement a reflexive strategy of spatial redistribution. Simulations have been used to identify the situations where reflexion gives an advantage over non-reflexive spatial behaviour, and vice versa.
[ { "created": "Thu, 14 Jun 2007 08:05:10 GMT", "version": "v1" } ]
2007-06-15
[ [ "Sadovsky", "Michael G.", "" ], [ "Senashova", "Maria Yu.", "" ], [ "Kourshakova", "Kristina A.", "" ] ]
A model of smart migration for a two-species community is developed, in which the individuals implement a reflexive strategy of spatial redistribution. Simulations have been used to identify the situations where reflexion gives an advantage over non-reflexive spatial behaviour, and vice versa.
1603.07343
YongKeun Park
Jonghee Yoon, KyeoReh Lee, YongKeun Park
A simple and rapid method for detecting living microorganisms in food using laser speckle decorrelation
null
null
null
null
q-bio.QM physics.bio-ph physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Measuring microorganisms in food products is a critical issue for food safety and human health. Although various approaches for detecting low levels of microorganisms in food have been developed, they require high-cost, complex equipment, invasive procedures, and skilled technicians, which limits their widespread use in the food industry. Here, we present a simple, non-destructive, non-contact, and rapid optical method for measuring living microorganisms in meat products using laser speckle decorrelation. By simply measuring dynamic speckle intensity patterns reflected from samples and analyzing the temporal correlation time, the presence of living microorganisms can be non-invasively detected with high sensitivity. We present proof-of-principle demonstrations for detecting E. coli and B. cereus in chicken breast tissues.
[ { "created": "Fri, 18 Mar 2016 01:37:55 GMT", "version": "v1" } ]
2016-03-25
[ [ "Yoon", "Jonghee", "" ], [ "Lee", "KyeoReh", "" ], [ "Park", "YongKeun", "" ] ]
Measuring microorganisms in food products is a critical issue for food safety and human health. Although various approaches for detecting low levels of microorganisms in food have been developed, they require high-cost, complex equipment, invasive procedures, and skilled technicians, which limits their widespread use in the food industry. Here, we present a simple, non-destructive, non-contact, and rapid optical method for measuring living microorganisms in meat products using laser speckle decorrelation. By simply measuring dynamic speckle intensity patterns reflected from samples and analyzing the temporal correlation time, the presence of living microorganisms can be non-invasively detected with high sensitivity. We present proof-of-principle demonstrations for detecting E. coli and B. cereus in chicken breast tissues.
1808.02157
Garren Gaut
Garren Gaut, Xiangrui Li, Zhong-Lin Lu, and Mark Steyvers
Experimental Design Modulates Variance in BOLD Activation: The Variance Design General Linear Model
18 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Typical fMRI studies have focused on either the mean trend in the blood-oxygen-level-dependent (BOLD) time course or functional connectivity (FC). However, other statistics of the neuroimaging data may contain important information. Despite studies showing links between the variance in the BOLD time series (BV) and age and cognitive performance, a formal framework for testing these effects has not yet been developed. We introduce the Variance Design General Linear Model (VDGLM), a novel framework that facilitates the detection of variance effects. We designed the framework for general use in any fMRI study by modeling both mean and variance in BOLD activation as a function of experimental design. The flexibility of this approach allows the VDGLM to i) simultaneously make inferences about a mean or variance effect while controlling for the other and ii) test for variance effects that could be associated with multiple conditions and/or noise regressors. We demonstrate the use of the VDGLM in a working memory application and show that engagement in a working memory task is associated with whole-brain decreases in BOLD variance.
[ { "created": "Mon, 6 Aug 2018 23:57:26 GMT", "version": "v1" } ]
2018-08-08
[ [ "Gaut", "Garren", "" ], [ "Li", "Xiangrui", "" ], [ "Lu", "Zhong-Lin", "" ], [ "Steyvers", "Mark", "" ] ]
Typical fMRI studies have focused on either the mean trend in the blood-oxygen-level-dependent (BOLD) time course or functional connectivity (FC). However, other statistics of the neuroimaging data may contain important information. Despite studies showing links between the variance in the BOLD time series (BV) and age and cognitive performance, a formal framework for testing these effects has not yet been developed. We introduce the Variance Design General Linear Model (VDGLM), a novel framework that facilitates the detection of variance effects. We designed the framework for general use in any fMRI study by modeling both mean and variance in BOLD activation as a function of experimental design. The flexibility of this approach allows the VDGLM to i) simultaneously make inferences about a mean or variance effect while controlling for the other and ii) test for variance effects that could be associated with multiple conditions and/or noise regressors. We demonstrate the use of the VDGLM in a working memory application and show that engagement in a working memory task is associated with whole-brain decreases in BOLD variance.
1902.07787
Jianhua Xing
Oleg Igoshin, Jing Chen, Jianhua Xing, Jian Liu, Timothy C. Elston, Michael Grabe, Kenneth S. Kim, Jasmine Nirody, Padmini Rangamani, Sean Sun, Hongyun Wang, Charles Wolgemuth
Biophysics at the coffee shop: lessons learned working with George Oster
22 pages, 3 figures, accepted in Molecular Biology of the Cell
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past 50 years, the use of mathematical models, derived from physical reasoning, to describe molecular and cellular systems has evolved from an art of the few to a cornerstone of biological inquiry. George Oster stood out as a pioneer of this paradigm shift from descriptive to quantitative biology not only through his numerous research accomplishments, but also through the many students and postdocs he mentored over his long career. Those of us fortunate enough to have worked with George agree that his sharp intellect, physical intuition and passion for scientific inquiry not only inspired us as scientists but also greatly influenced the way we conduct research. We would like to share a few important lessons we learned from George in honor of his memory and with the hope that they may inspire future generations of scientists.
[ { "created": "Wed, 20 Feb 2019 21:45:23 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 14:09:47 GMT", "version": "v2" } ]
2019-03-29
[ [ "Igoshin", "Oleg", "" ], [ "Chen", "Jing", "" ], [ "Xing", "Jianhua", "" ], [ "Liu", "Jian", "" ], [ "Elston", "Timothy C.", "" ], [ "Grabe", "Michael", "" ], [ "Kim", "Kenneth S.", "" ], [ "Nirody"...
Over the past 50 years, the use of mathematical models, derived from physical reasoning, to describe molecular and cellular systems has evolved from an art of the few to a cornerstone of biological inquiry. George Oster stood out as a pioneer of this paradigm shift from descriptive to quantitative biology not only through his numerous research accomplishments, but also through the many students and postdocs he mentored over his long career. Those of us fortunate enough to have worked with George agree that his sharp intellect, physical intuition and passion for scientific inquiry not only inspired us as scientists but also greatly influenced the way we conduct research. We would like to share a few important lessons we learned from George in honor of his memory and with the hope that they may inspire future generations of scientists.
q-bio/0701046
Giovanni Meacci
G. Meacci, J. Ries, E. Fischer-Friedrich, N. Kahya, P. Schwille and K. Kruse
Mobility of Min-proteins in Escherichia coli measured by fluorescence correlation spectroscopy
18 pages, 5 figures
Phys. Biol. 3 (2006) 255-263
10.1088/1478-3975/3/4/003
null
q-bio.SC physics.bio-ph q-bio.BM
null
In the bacterium Escherichia coli, selection of the division site involves pole-to-pole oscillations of the proteins MinD and MinE. Different oscillation mechanisms based on cooperative effects between Min-proteins and on the exchange of Min-proteins between the cytoplasm and the cytoplasmic membrane have been proposed. The parameters characterizing the dynamics of the Min-proteins in vivo are not known. It has therefore been difficult to compare the models quantitatively with experiments. Here, we present in vivo measurements of the mobility of MinD and MinE using fluorescence correlation spectroscopy. Two distinct time-scales are clearly visible in the correlation curves. While the faster time-scale can be attributed to cytoplasmic diffusion, the slower time-scale could result from diffusion of membrane-bound proteins or from protein exchange between the cytoplasm and the membrane. We determine the diffusion constant of cytoplasmic MinD to be approximately 16 $\mu m^{2}$/s, while for MinE we find about 10 $\mu m^{2}$/s, independently of the processes responsible for the slower time-scale. Implications of the measured values for the oscillation mechanism are discussed.
[ { "created": "Mon, 29 Jan 2007 14:57:21 GMT", "version": "v1" } ]
2010-08-17
[ [ "Meacci", "G.", "" ], [ "Ries", "J.", "" ], [ "Fischer-Friedrich", "E.", "" ], [ "Kahya", "N.", "" ], [ "Schwille", "P.", "" ], [ "Kruse", "K.", "" ] ]
In the bacterium Escherichia coli, selection of the division site involves pole-to-pole oscillations of the proteins MinD and MinE. Different oscillation mechanisms based on cooperative effects between Min-proteins and on the exchange of Min-proteins between the cytoplasm and the cytoplasmic membrane have been proposed. The parameters characterizing the dynamics of the Min-proteins in vivo are not known. It has therefore been difficult to compare the models quantitatively with experiments. Here, we present in vivo measurements of the mobility of MinD and MinE using fluorescence correlation spectroscopy. Two distinct time-scales are clearly visible in the correlation curves. While the faster time-scale can be attributed to cytoplasmic diffusion, the slower time-scale could result from diffusion of membrane-bound proteins or from protein exchange between the cytoplasm and the membrane. We determine the diffusion constant of cytoplasmic MinD to be approximately 16 $\mu m^{2}$/s, while for MinE we find about 10 $\mu m^{2}$/s, independently of the processes responsible for the slower time-scale. Implications of the measured values for the oscillation mechanism are discussed.
2011.06085
Alexis Nangue
Alexis Nangue, Alan D. Rendall, Brice Kammegne Tcheugam, Patrick Steve Kamdem Simo
Analysis of an initial value problem for an extracellular and intracellular model of hepatitis C virus infection
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a mathematical analysis of the global dynamics of a viral infection model in vivo is carried out. We study the dynamics of a hepatitis C virus (HCV) model, under therapy, that considers both extracellular and intracellular levels of infection. At present most mathematical modeling of viral kinetics after treatment only addresses the process of infection of a cell by the virus and the release of virions by the cell, while the processes taking place inside the cell are not included. We prove that the solutions of the new model with positive initial values are positive, exist globally in time and are bounded. The model has two virus-free steady states. They are distinguished by the fact that viral RNA is absent inside the cells in the first state and present inside the cells in the second. There are basic reproduction numbers associated to each of these steady states. If the basic reproduction number of the first steady state is less than one then that state is asymptotically stable. If the basic reproduction number of the first steady state is greater than one and that of the second less than one then the second steady state is asymptotically stable. If both basic reproduction numbers are greater than one then we obtain various conclusions which depend on different restrictions on the parameters of the model. Under increasingly strong assumptions we prove that there is at least one positive steady state (infected equilibrium), that there is a unique positive steady state and that the positive steady state is stable. We also give a condition under which every positive solution converges to a positive steady state. This is proved by methods of Li and Muldowney. Finally, we illustrate the theoretical results by numerical simulations.
[ { "created": "Wed, 11 Nov 2020 21:46:55 GMT", "version": "v1" } ]
2020-11-13
[ [ "Nangue", "Alexis", "" ], [ "Rendall", "Alan D.", "" ], [ "Tcheugam", "Brice Kammegne", "" ], [ "Simo", "Patrick Steve Kamdem", "" ] ]
In this paper, a mathematical analysis of the global dynamics of a viral infection model in vivo is carried out. We study the dynamics of a hepatitis C virus (HCV) model, under therapy, that considers both extracellular and intracellular levels of infection. At present most mathematical modeling of viral kinetics after treatment only addresses the process of infection of a cell by the virus and the release of virions by the cell, while the processes taking place inside the cell are not included. We prove that the solutions of the new model with positive initial values are positive, exist globally in time and are bounded. The model has two virus-free steady states. They are distinguished by the fact that viral RNA is absent inside the cells in the first state and present inside the cells in the second. There are basic reproduction numbers associated to each of these steady states. If the basic reproduction number of the first steady state is less than one then that state is asymptotically stable. If the basic reproduction number of the first steady state is greater than one and that of the second less than one then the second steady state is asymptotically stable. If both basic reproduction numbers are greater than one then we obtain various conclusions which depend on different restrictions on the parameters of the model. Under increasingly strong assumptions we prove that there is at least one positive steady state (infected equilibrium), that there is a unique positive steady state and that the positive steady state is stable. We also give a condition under which every positive solution converges to a positive steady state. This is proved by methods of Li and Muldowney. Finally, we illustrate the theoretical results by numerical simulations.
2104.01469
Myung Suh Choi
Myung Suh Choi
Modeling ADHD in Drosophila: Investigating the Effects of Glucose on Dopamine Production Demonstrated by Locomotion
9 pages, 4 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Hyperactivity is one of the hallmarks of ADHD. Aberrant dopamine signaling is a major theme in ADHD, and dopamine production is directly linked to the intensity and persistence of hyperactive conduct. The strength and persistence of hyperactivity responses in Drosophila to startle stimuli were measured in a study to determine the effects of sugar on dopamine development. A total of four experimental groups, namely 1%, 3%, and 5% glucose, as well as a control group, were taken for the diet of Drosophila, and these four different amounts of glucose were introduced to the growth medium where Drosophila was cultured. The movements of Drosophila in the four treatment groups were captured using a camera. This experiment was carried out five times, each time using a different batch of Drosophila. Each group's average velocity over time was also reported. The web adaptation of the Drosophila Activity Monitor (DAM) was used to analyze the captured movies from the camera. Furthermore, when it came to hyperactivity persistence, all four treatment classes were statistically different (p < 0.05). Since the strength and persistence of hyperactive behavior are directly correlated to dopamine output, this study shows that higher glucose intake is associated with more hyperactivity, for both the intensity ({\Delta}V) and persistence.
[ { "created": "Sat, 3 Apr 2021 19:51:16 GMT", "version": "v1" } ]
2021-04-06
[ [ "Choi", "Myung Suh", "" ] ]
Hyperactivity is one of the hallmarks of ADHD. Aberrant dopamine signaling is a major theme in ADHD, and dopamine production is directly linked to the intensity and persistence of hyperactive conduct. The strength and persistence of hyperactivity responses in Drosophila to startle stimuli were measured in a study to determine the effects of sugar on dopamine development. A total of four experimental groups, namely 1%, 3%, and 5% glucose, as well as a control group, were taken for the diet of Drosophila, and these four different amounts of glucose were introduced to the growth medium where Drosophila was cultured. The movements of Drosophila in the four treatment groups were captured using a camera. This experiment was carried out five times, each time using a different batch of Drosophila. Each group's average velocity over time was also reported. The web adaptation of the Drosophila Activity Monitor (DAM) was used to analyze the captured movies from the camera. Furthermore, when it came to hyperactivity persistence, all four treatment classes were statistically different (p < 0.05). Since the strength and persistence of hyperactive behavior are directly correlated to dopamine output, this study shows that higher glucose intake is associated with more hyperactivity, for both the intensity ({\Delta}V) and persistence.
1412.6688
Thiparat Chotibut
Thiparat Chotibut, David R. Nelson
Evolutionary Dynamics with Fluctuating Population Sizes and Strong Mutualism
Updated Version, Published in Phys. Rev. E 92, 022718 on 20 August 2015
Phys. Rev. E 92, 022718 (2015)
10.1103/PhysRevE.92.022718
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.
[ { "created": "Sat, 20 Dec 2014 19:57:53 GMT", "version": "v1" }, { "created": "Mon, 29 Aug 2016 17:08:52 GMT", "version": "v2" } ]
2016-08-30
[ [ "Chotibut", "Thiparat", "" ], [ "Nelson", "David R.", "" ] ]
Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.
1810.12581
J\'ohannes Gu{\dh}brandsson
Sigur{\dh}ur M\'ar Einarsson, Sigur{\dh}ur Gu{\dh}j\'onsson, Ingi R\'unar J\'onsson and J\'ohannes Gu{\dh}brandsson
Deep-diving of Atlantic salmon ($\textit{Salmo salar}$) during their marine feeding migrations
12 pages, 4 figures
Environ Biol Fish (2018) 101(12): 1707-1715
10.1007/s10641-018-0817-0
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data from seven data storage tags recovered from Atlantic salmon marked as smolts were analyzed for depth movements and patterns of deep diving during the marine migration. The salmon mostly stayed at the surface and showed diurnal activity, especially from autumn until spring. During the first months at sea the salmon stayed at shallower depths (< 100 m). The salmon took short deep dives (> 100 m) that were rare or absent during the first summer at sea but increased in frequency and duration, especially in late winter. The maximum depth of the dives varied from 419 to 1187 m. Most dives were short (< 5 hours) but could last up to 33 hours. The duration of dives increased in late winter until spring, and the overall depth and maximum depth per dive increased exponentially over time. The initiation of the dives was more common in evenings and at night, suggesting nocturnal diving. We hypothesized that deep diving is related to feeding of salmon, as mesopelagic fish can be important food for salmon during winter.
[ { "created": "Tue, 30 Oct 2018 08:44:43 GMT", "version": "v1" } ]
2018-11-13
[ [ "Einarsson", "Sigurður Már", "" ], [ "Guðjónsson", "Sigurður", "" ], [ "Jónsson", "Ingi Rúnar", "" ], [ "Guðbrandsson", "Jóhannes", "" ] ]
Data from seven data storage tags recovered from Atlantic salmon marked as smolts were analyzed for depth movements and patterns of deep diving during the marine migration. The salmon mostly stayed at the surface and showed diurnal activity especially from autumn until spring. During the first months at sea the salmon stayed at shallower depths (< 100 m). The salmon took short deep dives (> 100 m) that were rare or absent during the first summer at sea but increased in frequency and duration especially in late winter. The maximum depth of the dives varied from 419 to 1187 m. Most of the dives were short (< 5 hours) but could last up to 33 hours. The duration of dives increased from late winter until spring, and the overall depth and maximum depth per dive increased exponentially over time. The initiation of the dives was more common in evenings and at night, suggesting nocturnal diving. We hypothesized that deep diving is related to feeding, as mesopelagic fish can be an important food source for salmon during winter.
2402.16770
SueYeon Chung
Albert J. Wakhloo, Will Slatton, and SueYeon Chung
Neural population geometry and optimal coding of tasks with shared latent structure
26 Pages and 7 figures in main text. 20 Pages and 7 figures in supplemental material
null
null
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. However, it remains unclear what aspects of neural activity contribute to these computational capabilities. Here, we develop an analytical theory linking the geometry of a neural population's activity to the generalization performance of a linear readout on a set of tasks that depend on a common latent structure. We show that four geometric measures of the activity determine performance across tasks. Using this theory, we find that experimentally observed disentangled representations naturally emerge as an optimal solution to the multi-task learning problem. When data is scarce, these optimal neural codes compress less informative latent variables, and when data is abundant, they expand these variables in the state space. We validate our theory using macaque ventral stream recordings. Our results therefore tie population geometry to multi-task learning.
[ { "created": "Mon, 26 Feb 2024 17:39:23 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2024 17:40:57 GMT", "version": "v2" } ]
2024-04-12
[ [ "Wakhloo", "Albert J.", "" ], [ "Slatton", "Will", "" ], [ "Chung", "SueYeon", "" ] ]
Humans and animals can recognize latent structures in their environment and apply this information to efficiently navigate the world. However, it remains unclear what aspects of neural activity contribute to these computational capabilities. Here, we develop an analytical theory linking the geometry of a neural population's activity to the generalization performance of a linear readout on a set of tasks that depend on a common latent structure. We show that four geometric measures of the activity determine performance across tasks. Using this theory, we find that experimentally observed disentangled representations naturally emerge as an optimal solution to the multi-task learning problem. When data is scarce, these optimal neural codes compress less informative latent variables, and when data is abundant, they expand these variables in the state space. We validate our theory using macaque ventral stream recordings. Our results therefore tie population geometry to multi-task learning.