Dataset schema (field name, type, and observed size range):

id              string   length 9–13
submitter       string   length 4–48
authors         string   length 4–9.62k
title           string   length 4–343
comments        string   length 2–480
journal-ref     string   length 9–309
doi             string   length 12–138
report-no       string   277 distinct values
categories      string   length 8–87
license         string   9 distinct values
orig_abstract   string   length 27–3.76k
versions        list     length 1–15
update_date     string   length 10
authors_parsed  list     length 1–147
abstract        string   length 24–3.75k
2011.11639
Cristina Postigo
Maria Vittoria Barbieri, Luis Simon Monllor-Alcaraz, Cristina Postigo, Miren Lopez de Alda
Improved fully automated method for the determination of medium to highly polar pesticides in surface and groundwater and application in two distinct agriculture-impacted areas
Published in Science of the Total Environment
Science of The Total Environment Volume 745, 25 November 2020, 140650
10.1016/j.scitotenv.2020.140650
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Water is an essential resource for all living organisms. The continuous and increasing use of pesticides in agricultural and urban activities results in the pollution of water resources and represents an environmental risk. To control and reduce pesticide pollution, reliable multi-residue methods for the detection of these compounds in water are needed. In this context, the present work aimed at providing an analytical method for the simultaneous determination of trace levels of 51 target pesticides in water and applying it to the investigation of target pesticides in two agriculture-impacted areas of interest. The method developed, based on an isotopic dilution approach and on-line solid-phase extraction-liquid chromatography-tandem mass spectrometry, is fast, simple, and to a large extent automated, and allows the analysis of most of the target compounds in compliance with European regulations. Further application of the method to the analysis of selected water samples collected at the lowest stretches of the two largest river basins of Catalonia (NE Spain), Llobregat and Ter, revealed the presence of a wide suite of pesticides, and some of them at concentrations above the water quality standards (irgarol and dichlorvos) or the acceptable method detection limits (methiocarb, imidacloprid, and thiacloprid), in the Llobregat, and much cleaner waters in the Ter River basin. Risk assessment of the pesticide concentrations measured in the Llobregat indicated high risk due to the presence of irgarol, dichlorvos, methiocarb, azinphos ethyl, imidacloprid, and diflufenican (hazard quotient (HQ) values>10), and an only moderate potential risk in the Ter River associated to the occurrence of bentazone and irgarol (HQ>1).
[ { "created": "Mon, 23 Nov 2020 17:47:30 GMT", "version": "v1" } ]
2020-11-25
[ [ "Barbieri", "Maria Vittoria", "" ], [ "Monllor-Alcaraz", "Luis Simon", "" ], [ "Postigo", "Cristina", "" ], [ "de Alda", "Miren Lopez", "" ] ]
Water is an essential resource for all living organisms. The continuous and increasing use of pesticides in agricultural and urban activities results in the pollution of water resources and represents an environmental risk. To control and reduce pesticide pollution, reliable multi-residue methods for the detection of these compounds in water are needed. In this context, the present work aimed at providing an analytical method for the simultaneous determination of trace levels of 51 target pesticides in water and applying it to the investigation of target pesticides in two agriculture-impacted areas of interest. The method developed, based on an isotopic dilution approach and on-line solid-phase extraction-liquid chromatography-tandem mass spectrometry, is fast, simple, and to a large extent automated, and allows the analysis of most of the target compounds in compliance with European regulations. Further application of the method to the analysis of selected water samples collected at the lowest stretches of the two largest river basins of Catalonia (NE Spain), Llobregat and Ter, revealed the presence of a wide suite of pesticides, some of them at concentrations above the water quality standards (irgarol and dichlorvos) or the acceptable method detection limits (methiocarb, imidacloprid, and thiacloprid), in the Llobregat, and much cleaner waters in the Ter River basin. Risk assessment of the pesticide concentrations measured in the Llobregat indicated high risk due to the presence of irgarol, dichlorvos, methiocarb, azinphos ethyl, imidacloprid, and diflufenican (hazard quotient (HQ) values > 10), and only a moderate potential risk in the Ter River associated with the occurrence of bentazone and irgarol (HQ > 1).
1309.0547
Ian Dworkin
Christopher H. Chandler, Sudarshan Chari, David Tack and Ian Dworkin
Causes and Consequences of genetic background effects illuminated by integrative genomic analysis
Accepted in Genetics
Genetics 2014 196(4):1321-1336
10.1534/genetics.113.159426
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phenotypic consequences of individual mutations are modulated by the wild type genetic background in which they occur.Although such background dependence is widely observed, we do not know whether general patterns across species and traits exist, nor about the mechanisms underlying it. We also lack knowledge on how mutations interact with genetic background to influence gene expression, and how this in turn mediates mutant phenotypes. Furthermore, how genetic background influences patterns of epistasis remains unclear. To investigate the genetic basis and genomic consequences of genetic background dependence of the scallopedE3 allele on the Drosophila melanogaster wing, we generated multiple novel genome level datasets from a mapping by introgression experiment and a tagged RNA gene expression dataset. In addition we used whole genome re-sequencing of the parental lines two commonly used laboratory strains to predict polymorphic transcription factor binding sites for SD. We integrated these data with previously published genomic datasets from expression microarrays and a modifier mutation screen. By searching for genes showing a congruent signal across multiple datasets, we were able to identify a robust set of candidate loci contributing to the background dependent effects of mutations in sd. We also show that the majority of background-dependent modifiers previously reported are caused by higher-order epistasis, not quantitative non-complementation. These findings provide a useful foundation for more detailed investigations of genetic background dependence in this system, and this approach is likely to prove useful in exploring the genetic basis of other traits as well.
[ { "created": "Mon, 2 Sep 2013 21:12:14 GMT", "version": "v1" }, { "created": "Thu, 23 Jan 2014 20:12:22 GMT", "version": "v2" }, { "created": "Sat, 1 Feb 2014 16:04:28 GMT", "version": "v3" } ]
2014-06-04
[ [ "Chandler", "Christopher H.", "" ], [ "Chari", "Sudarshan", "" ], [ "Tack", "David", "" ], [ "Dworkin", "Ian", "" ] ]
The phenotypic consequences of individual mutations are modulated by the wild-type genetic background in which they occur. Although such background dependence is widely observed, we do not know whether general patterns across species and traits exist, nor about the mechanisms underlying it. We also lack knowledge on how mutations interact with genetic background to influence gene expression, and how this in turn mediates mutant phenotypes. Furthermore, how genetic background influences patterns of epistasis remains unclear. To investigate the genetic basis and genomic consequences of genetic background dependence of the scalloped[E3] allele on the Drosophila melanogaster wing, we generated multiple novel genome-level datasets from a mapping-by-introgression experiment and a tagged RNA gene expression dataset. In addition we used whole-genome re-sequencing of the parental lines, two commonly used laboratory strains, to predict polymorphic transcription factor binding sites for SD. We integrated these data with previously published genomic datasets from expression microarrays and a modifier mutation screen. By searching for genes showing a congruent signal across multiple datasets, we were able to identify a robust set of candidate loci contributing to the background-dependent effects of mutations in sd. We also show that the majority of background-dependent modifiers previously reported are caused by higher-order epistasis, not quantitative non-complementation. These findings provide a useful foundation for more detailed investigations of genetic background dependence in this system, and this approach is likely to prove useful in exploring the genetic basis of other traits as well.
2303.05513
Parsaoran Hutapea
Christopher M. Altenderfer, Frank N. Chang, Parsaoran Hutapea
Development of Pipetting Devices to Separate Protein Complexes
6 pages, 2 figures, Discovery to Commercialization Conference, The Nanotechnology Institute, October 2009, The Chemical Heritage Foundation, Philadelphia, PA
null
null
null
q-bio.OT physics.flu-dyn
http://creativecommons.org/publicdomain/zero/1.0/
The objective of this project is to develop an automated device used to spot protein samples on a hydrophobic membrane to be used for the patented electrophoresis method developed by Chang and Yonan in 2008 [1]. This novel method performs electrophoresis directly on hydrophobic blot membranes as opposed to the previous popular methods such as the 2-D polyacrylamide gel method [2, 3]. This new electrophoresis method significantly reduces the processing time to about 40 minutes, as opposed to the 2-D PAGE method which can take one or two days. Special methods to spot samples on hydrophobic blot membranes were established for successful separation of protein and protein complexes. These spotting methods are used in conjunction with the patented electrophoresis method [1] for effective separation. The automated device effectively replicates these special methods that are repeatable and user independent. The goal of the automated device is to effectively spot and separate protein complexes in addition to single proteins under non-denaturing conditions, thus eliminating the variability of the manual spotting method.
[ { "created": "Tue, 6 Dec 2022 15:44:29 GMT", "version": "v1" } ]
2023-03-13
[ [ "Altenderfer", "Christopher M.", "" ], [ "Chang", "Frank N.", "" ], [ "Hutapea", "Parsaoran", "" ] ]
The objective of this project is to develop an automated device used to spot protein samples on a hydrophobic membrane to be used for the patented electrophoresis method developed by Chang and Yonan in 2008 [1]. This novel method performs electrophoresis directly on hydrophobic blot membranes as opposed to the previous popular methods such as the 2-D polyacrylamide gel method [2, 3]. This new electrophoresis method significantly reduces the processing time to about 40 minutes, as opposed to the 2-D PAGE method which can take one or two days. Special methods to spot samples on hydrophobic blot membranes were established for successful separation of protein and protein complexes. These spotting methods are used in conjunction with the patented electrophoresis method [1] for effective separation. The automated device effectively replicates these special methods in a repeatable, user-independent manner. The goal of the automated device is to effectively spot and separate protein complexes in addition to single proteins under non-denaturing conditions, thus eliminating the variability of the manual spotting method.
1705.09132
Milad Mozafari
Milad Mozafari, Saeed Reza Kheradpisheh, Timoth\'ee Masquelier, Abbas Nowzari-Dalini, Mohammad Ganjtabesh
First-spike based visual categorization using reward-modulated STDP
supplementary materials are added, Caltech face/motorbike demonstration figure is updated, some parts of the main manuscript are moved to the supplementary materials, additional network analysis and performance comparison with deep nets are added
Mozafari, Milad, et al. "First-Spike-Based Visual Categorization Using Reward-Modulated STDP". IEEE Transactions on Neural Networks and Learning Systems (2018). DOI: https://doi.org/10.1109/TNNLS.2018.2826721
10.1109/TNNLS.2018.2826721
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning (RL) has recently regained popularity, with major achievements such as beating the European game of Go champion. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. If this assumption was correct, the neuron was rewarded, i.e. spike-timing-dependent plasticity (STDP) was applied, which reinforced the neuron's selectivity. Otherwise, anti-STDP was applied, which encouraged the neuron to learn something else. As demonstrated on various image datasets (Caltech, ETH-80, and NORB), this reward modulated STDP (R-STDP) approach extracted particularly discriminative visual features, whereas classic unsupervised STDP extracts any feature that consistently repeats. As a result, R-STDP outperformed STDP on these datasets. Furthermore, R-STDP is suitable for online learning, and can adapt to drastic changes such as label permutations. Finally, it is worth mentioning that both feature extraction and classification were done with spikes, using at most one spike per neuron. Thus the network is hardware friendly and energy efficient.
[ { "created": "Thu, 25 May 2017 11:38:16 GMT", "version": "v1" }, { "created": "Tue, 2 Jan 2018 11:31:48 GMT", "version": "v2" }, { "created": "Tue, 10 Jul 2018 12:20:52 GMT", "version": "v3" } ]
2018-07-11
[ [ "Mozafari", "Milad", "" ], [ "Kheradpisheh", "Saeed Reza", "" ], [ "Masquelier", "Timothée", "" ], [ "Nowzari-Dalini", "Abbas", "" ], [ "Ganjtabesh", "Mohammad", "" ] ]
Reinforcement learning (RL) has recently regained popularity, with major achievements such as beating the European Go champion. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. If this assumption was correct, the neuron was rewarded, i.e. spike-timing-dependent plasticity (STDP) was applied, which reinforced the neuron's selectivity. Otherwise, anti-STDP was applied, which encouraged the neuron to learn something else. As demonstrated on various image datasets (Caltech, ETH-80, and NORB), this reward-modulated STDP (R-STDP) approach extracted particularly discriminative visual features, whereas classic unsupervised STDP extracts any feature that consistently repeats. As a result, R-STDP outperformed STDP on these datasets. Furthermore, R-STDP is suitable for online learning, and can adapt to drastic changes such as label permutations. Finally, it is worth mentioning that both feature extraction and classification were done with spikes, using at most one spike per neuron. Thus the network is hardware-friendly and energy-efficient.
q-bio/0411028
Manuel Middendorf
Manuel Middendorf, Anshul Kundaje, Chris Wiggins, Yoav Freund, Christina Leslie
Predicting Genetic Regulatory Response Using Classification
8 pages, 4 figures, presented at Twelfth International Conference on Intelligent Systems for Molecular Biology (ISMB 2004), supplemental website: http://www.cs.columbia.edu/compbio/geneclass
Proceedings of the Twelfth International Conference on Intelligent Systems for Molecular Biology (ISMB 2004), Bioinformatics 20 Suppl 1, I232-I240, 2004
null
null
q-bio.QM
null
We present a novel classification-based method for learning to predict gene regulatory response. Our approach is motivated by the hypothesis that in simple organisms such as Saccharomyces cerevisiae, we can learn a decision rule for predicting whether a gene is up- or down-regulated in a particular experiment based on (1) the presence of binding site subsequences (``motifs'') in the gene's regulatory region and (2) the expression levels of regulators such as transcription factors in the experiment (``parents''). Thus our learning task integrates two qualitatively different data sources: genome-wide cDNA microarray data across multiple perturbation and mutant experiments along with motif profile data from regulatory sequences. We convert the regression task of predicting real-valued gene expression measurement to a classification task of predicting +1 and -1 labels, corresponding to up- and down-regulation beyond the levels of biological and measurement noise in microarray measurements. The learning algorithm employed is boosting with a margin-based generalization of decision trees, alternating decision trees. This large-margin classifier is sufficiently flexible to allow complex logical functions, yet sufficiently simple to give insight into the combinatorial mechanisms of gene regulation. We observe encouraging prediction accuracy on experiments based on the Gasch S. cerevisiae dataset, and we show that we can accurately predict up- and down-regulation on held-out experiments. Our method thus provides predictive hypotheses, suggests biological experiments, and provides interpretable insight into the structure of genetic regulatory networks.
[ { "created": "Fri, 12 Nov 2004 20:39:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Middendorf", "Manuel", "" ], [ "Kundaje", "Anshul", "" ], [ "Wiggins", "Chris", "" ], [ "Freund", "Yoav", "" ], [ "Leslie", "Christina", "" ] ]
We present a novel classification-based method for learning to predict gene regulatory response. Our approach is motivated by the hypothesis that in simple organisms such as Saccharomyces cerevisiae, we can learn a decision rule for predicting whether a gene is up- or down-regulated in a particular experiment based on (1) the presence of binding site subsequences (``motifs'') in the gene's regulatory region and (2) the expression levels of regulators such as transcription factors in the experiment (``parents''). Thus our learning task integrates two qualitatively different data sources: genome-wide cDNA microarray data across multiple perturbation and mutant experiments along with motif profile data from regulatory sequences. We convert the regression task of predicting real-valued gene expression measurement to a classification task of predicting +1 and -1 labels, corresponding to up- and down-regulation beyond the levels of biological and measurement noise in microarray measurements. The learning algorithm employed is boosting with a margin-based generalization of decision trees, alternating decision trees. This large-margin classifier is sufficiently flexible to allow complex logical functions, yet sufficiently simple to give insight into the combinatorial mechanisms of gene regulation. We observe encouraging prediction accuracy on experiments based on the Gasch S. cerevisiae dataset, and we show that we can accurately predict up- and down-regulation on held-out experiments. Our method thus provides predictive hypotheses, suggests biological experiments, and provides interpretable insight into the structure of genetic regulatory networks.
0804.4804
Tamar Friedlander
Tamar Friedlander and Naama Brenner
From cellular properties to population asymptotics in the Population Balance Equation
Exact solution of Eq. 9 is added
PRL 101, 018104 (2008)
10.1103/PhysRevLett.101.018104
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proliferating cell populations at steady state growth often exhibit broad protein distributions with exponential tails. The sources of this variation and its universality are of much theoretical interest. Here we address the problem by asymptotic analysis of the Population Balance Equation. We show that the steady state distribution tail is determined by a combination of protein production and cell division and is insensitive to other model details. Under general conditions this tail is exponential with a dependence on parameters consistent with experiment. We discuss the conditions for this effect to be dominant over other sources of variation and the relation to experiments.
[ { "created": "Wed, 30 Apr 2008 12:06:25 GMT", "version": "v1" }, { "created": "Mon, 2 Jun 2008 13:32:47 GMT", "version": "v2" }, { "created": "Thu, 24 Jul 2008 10:03:03 GMT", "version": "v3" } ]
2008-07-24
[ [ "Friedlander", "Tamar", "" ], [ "Brenner", "Naama", "" ] ]
Proliferating cell populations at steady state growth often exhibit broad protein distributions with exponential tails. The sources of this variation and its universality are of much theoretical interest. Here we address the problem by asymptotic analysis of the Population Balance Equation. We show that the steady state distribution tail is determined by a combination of protein production and cell division and is insensitive to other model details. Under general conditions this tail is exponential with a dependence on parameters consistent with experiment. We discuss the conditions for this effect to be dominant over other sources of variation and the relation to experiments.
0704.0357
Gergely J Sz\"oll\H{o}si
Gergely J Szollosi and Imre Derenyi
Evolutionary games on minimally structured populations
Supporting information available as EPAPS Document No. E-PLEEE8-78-144809 at http://ftp.aip.org/epaps/phys_rev_e/E-PLEEE8-78-144809/
PHYSICAL REVIEW E 78, 031919 (2008)
10.1103/PhysRevE.78.031919
null
q-bio.PE q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population structure induced by both spatial embedding and more general networks of interaction, such as model social networks, have been shown to have a fundamental effect on the dynamics and outcome of evolutionary games. These effects have, however, proved to be sensitive to the details of the underlying topology and dynamics. Here we introduce a minimal population structure that is described by two distinct hierarchical levels of interaction. We believe this model is able to identify effects of spatial structure that do not depend on the details of the topology. We derive the dynamics governing the evolution of a system starting from fundamental individual level stochastic processes through two successive meanfield approximations. In our model of population structure the topology of interactions is described by only two parameters: the effective population size at the local scale and the relative strength of local dynamics to global mixing. We demonstrate, for example, the existence of a continuous transition leading to the dominance of cooperation in populations with hierarchical levels of unstructured mixing as the benefit to cost ratio becomes smaller then the local population size. Applying our model of spatial structure to the repeated prisoner's dilemma we uncover a novel and counterintuitive mechanism by which the constant influx of defectors sustains cooperation. Further exploring the phase space of the repeated prisoner's dilemma and also of the "rock-paper-scissor" game we find indications of rich structure and are able to reproduce several effects observed in other models with explicit spatial embedding, such as the maintenance of biodiversity and the emergence of global oscillations.
[ { "created": "Tue, 3 Apr 2007 11:02:47 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2007 15:49:13 GMT", "version": "v2" }, { "created": "Wed, 15 Oct 2008 11:29:30 GMT", "version": "v3" } ]
2009-11-13
[ [ "Szollosi", "Gergely J", "" ], [ "Derenyi", "Imre", "" ] ]
Population structure induced by both spatial embedding and more general networks of interaction, such as model social networks, has been shown to have a fundamental effect on the dynamics and outcome of evolutionary games. These effects have, however, proved to be sensitive to the details of the underlying topology and dynamics. Here we introduce a minimal population structure that is described by two distinct hierarchical levels of interaction. We believe this model is able to identify effects of spatial structure that do not depend on the details of the topology. We derive the dynamics governing the evolution of a system starting from fundamental individual-level stochastic processes through two successive mean-field approximations. In our model of population structure the topology of interactions is described by only two parameters: the effective population size at the local scale and the relative strength of local dynamics to global mixing. We demonstrate, for example, the existence of a continuous transition leading to the dominance of cooperation in populations with hierarchical levels of unstructured mixing as the benefit-to-cost ratio becomes smaller than the local population size. Applying our model of spatial structure to the repeated prisoner's dilemma we uncover a novel and counterintuitive mechanism by which the constant influx of defectors sustains cooperation. Further exploring the phase space of the repeated prisoner's dilemma and also of the "rock-paper-scissors" game we find indications of rich structure and are able to reproduce several effects observed in other models with explicit spatial embedding, such as the maintenance of biodiversity and the emergence of global oscillations.
2212.01191
Fabian Gr\"unewald
Peter C. Kroon, Fabian Gr\"unewald, Jonathan Barnoud, Marco van Tilburg, Paulo C. T. Souza, Tsjerk A. Wassenaar, Siewert-Jan Marrink
Martinize2 and Vermouth: Unified Framework for Topology Generation
corresponding authors: F. Gr\"unewald f.grunewald[at]rug.nl; S. J. Marrink s.j.marrink[at]rug.nl; changes made in v2: new benchmark test-case; data availability statement; language corrections; changes made in v3: additional test cases for non-protein molecules; updated explanation of the graph matching algorithm; language corrections; link to extended and updated documentation
null
null
null
q-bio.QM cond-mat.other cs.CE
http://creativecommons.org/licenses/by/4.0/
Ongoing advances in force field and computer hardware development enable the use of molecular dynamics (MD) to simulate increasingly complex systems with the ultimate goal of reaching cellular complexity. At the same time, rational design by high-throughput (HT) simulations is another forefront of MD. In these areas, the Martini coarse-grained force field, especially the latest version (i.e. v3), is being actively explored because it offers enhanced spatial-temporal resolution. However, the automation tools for preparing simulations with the Martini force field, accompanying the previous version, were not designed for HT simulations or studies of complex cellular systems. Therefore, they become a major limiting factor. To address these shortcomings, we present the open-source vermouth python library. Vermouth is designed to become the unified framework for developing programs, which prepare, run, and analyze Martini simulations of complex systems. To demonstrate the power of the vermouth library, the martinize2 program is showcased as a generalization of the martinize script, originally aimed to set up simulations of proteins. In contrast to the previous version, martinize2 automatically handles protonation states in proteins and post-translation modifications, offers more options to fine-tune structural biases such as the elastic network, and can convert non-protein molecules such as ligands. Finally, martinize2 is used in two high-complexity benchmarks. The entire I-TASSER protein template database as well as a subset of 200,000 structures from the AlphaFold Protein Structure Database are converted to CG resolution and we illustrate how the checks on input structure quality can safeguard HT applications.
[ { "created": "Tue, 29 Nov 2022 11:23:42 GMT", "version": "v1" }, { "created": "Fri, 30 Jun 2023 12:44:38 GMT", "version": "v2" }, { "created": "Tue, 9 Apr 2024 09:49:09 GMT", "version": "v3" } ]
2024-04-10
[ [ "Kroon", "Peter C.", "" ], [ "Grünewald", "Fabian", "" ], [ "Barnoud", "Jonathan", "" ], [ "van Tilburg", "Marco", "" ], [ "Souza", "Paulo C. T.", "" ], [ "Wassenaar", "Tsjerk A.", "" ], [ "Marrink", "Siewert-Jan", "" ] ]
Ongoing advances in force field and computer hardware development enable the use of molecular dynamics (MD) to simulate increasingly complex systems with the ultimate goal of reaching cellular complexity. At the same time, rational design by high-throughput (HT) simulations is another forefront of MD. In these areas, the Martini coarse-grained force field, especially the latest version (i.e. v3), is being actively explored because it offers enhanced spatial-temporal resolution. However, the automation tools for preparing simulations with the Martini force field, accompanying the previous version, were not designed for HT simulations or studies of complex cellular systems. Therefore, they become a major limiting factor. To address these shortcomings, we present the open-source vermouth Python library. Vermouth is designed to become the unified framework for developing programs that prepare, run, and analyze Martini simulations of complex systems. To demonstrate the power of the vermouth library, the martinize2 program is showcased as a generalization of the martinize script, originally aimed at setting up simulations of proteins. In contrast to the previous version, martinize2 automatically handles protonation states in proteins and post-translational modifications, offers more options to fine-tune structural biases such as the elastic network, and can convert non-protein molecules such as ligands. Finally, martinize2 is used in two high-complexity benchmarks. The entire I-TASSER protein template database as well as a subset of 200,000 structures from the AlphaFold Protein Structure Database are converted to CG resolution, and we illustrate how the checks on input structure quality can safeguard HT applications.
1604.02553
P. Gaspard
Pierre Gaspard
Kinetics and thermodynamics of exonuclease-deficient DNA polymerases
Physical Review E (2016)
Phys. Rev. E 93, 042419 (2016)
10.1103/PhysRevE.93.042419
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A kinetic theory is developed for exonuclease-deficient DNA polymerases, based on the experimental observation that the rates depend not only on the newly incorporated nucleotide, but also on the previous one, leading to the growth of Markovian DNA sequences from a Bernoullian template. The dependences on nucleotide concentrations and template sequence are explicitly taken into account. In this framework, the kinetic and thermodynamic properties of DNA replication, in particular, the mean growth velocity, the error probability, and the entropy production in terms of the rate constants and the concentrations are calculated analytically. Theory is compared with numerical simulations for the DNA polymerases of T7 viruses and human mitochondria.
[ { "created": "Sat, 9 Apr 2016 12:20:36 GMT", "version": "v1" } ]
2018-01-04
[ [ "Gaspard", "Pierre", "" ] ]
A kinetic theory is developed for exonuclease-deficient DNA polymerases, based on the experimental observation that the rates depend not only on the newly incorporated nucleotide, but also on the previous one, leading to the growth of Markovian DNA sequences from a Bernoullian template. The dependences on nucleotide concentrations and template sequence are explicitly taken into account. In this framework, the kinetic and thermodynamic properties of DNA replication, in particular, the mean growth velocity, the error probability, and the entropy production in terms of the rate constants and the concentrations are calculated analytically. Theory is compared with numerical simulations for the DNA polymerases of T7 viruses and human mitochondria.
1801.03843
Francesc Rossell\'o
Tom\'as M. Coronado, Arnau Mir, Francesc Rossell\'o
The probabilities of trees and cladograms under Ford's $\alpha$-model
9 pages
The Scientific World Journal Vol. 2018, Article ID 1916094, 7 pages
10.1155/2018/1916094
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give correct explicit formulas for the probabilities of rooted binary trees and cladograms under Ford's $\alpha$-model.
[ { "created": "Thu, 11 Jan 2018 16:07:53 GMT", "version": "v1" } ]
2019-03-29
[ [ "Coronado", "Tomás M.", "" ], [ "Mir", "Arnau", "" ], [ "Rosselló", "Francesc", "" ] ]
We give correct explicit formulas for the probabilities of rooted binary trees and cladograms under Ford's $\alpha$-model.
2303.00163
Stefanie I. Becker
Stefanie I. Becker, Zachary Hamblin-Frohman, Hongfeng Xia and Zeguo Qiu
Tuning to non-veridical features in attention and perceptual decision-making
33 pages (double-spaced), 6 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
[ { "created": "Wed, 1 Mar 2023 01:32:00 GMT", "version": "v1" } ]
2023-03-02
[ [ "Becker", "Stefanie I.", "" ], [ "Hamblin-Frohman", "Zachary", "" ], [ "Xia", "Hongfeng", "" ], [ "Qiu", "Zeguo", "" ] ]
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
2310.02300
J. C. Phillips
J. C. Phillips
Why and How Did the COVID Pandemic End Abruptly?
5 pages, 1 figure
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Phase transition theory, implemented quantitatively by thermodynamic scaling, has explained the evolution of the Coronavirus's extremely high contagiousness, caused by a few key mutations from CoV2003 to CoV2019 identified among hundreds, as well as the later 2021 evolution to Omicron caused by 30 mutations. It also showed that the 2022 strain BA.5, with five mutations, began a new path. Here we show that the early 2023 BKK strains, with one stiffening mutation, confirm that path, and the single flexing mutation of a later 2023 variant, EG.5, strengthens it further. The few mutations of the new path have greatly reduced pandemic deaths, for mechanical reasons proposed here.
[ { "created": "Tue, 3 Oct 2023 14:39:35 GMT", "version": "v1" } ]
2023-10-05
[ [ "Phillips", "J. C.", "" ] ]
Phase transition theory, implemented quantitatively by thermodynamic scaling, has explained the evolution of the Coronavirus's extremely high contagiousness, caused by a few key mutations from CoV2003 to CoV2019 identified among hundreds, as well as the later 2021 evolution to Omicron caused by 30 mutations. It also showed that the 2022 strain BA.5, with five mutations, began a new path. Here we show that the early 2023 BKK strains, with one stiffening mutation, confirm that path, and the single flexing mutation of a later 2023 variant, EG.5, strengthens it further. The few mutations of the new path have greatly reduced pandemic deaths, for mechanical reasons proposed here.
1803.04364
Sheraz Khan
Sheraz Khan, Javeria Hashmi, Fahimeh Mamashli, Konstantinos Michmizos, Manfred Kitzbichler, Hari Bharadwaj, Yousra Bekhti, Santosh Ganesan, Keri A Garel, Susan Whitfield-Gabrieli, Randy Gollub, Jian Kong, Lucia M Vaina, Kunjan Rana, Steven Stufflebeam, Matti Hamalainen, and Tal Kenet
Maturation Trajectories of Cortical Resting-State Networks Depend on the Mediating Frequency Band
null
null
null
null
q-bio.NC cs.DM stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
The functional significance of resting state networks and their abnormal manifestations in psychiatric disorders are firmly established, as is the importance of the cortical rhythms in mediating these networks. Resting state networks are known to undergo substantial reorganization from childhood to adulthood, but whether distinct cortical rhythms, which are generated by separable neural mechanisms and are often manifested abnormally in psychiatric conditions, mediate maturation differentially, remains unknown. Using magnetoencephalography (MEG) to map frequency band specific maturation of resting state networks from age 7 to 29 in 162 participants (31 independent), we found significant changes with age in networks mediated by the beta (13-30Hz) and gamma (31-80Hz) bands. More specifically, gamma band mediated networks followed an expected asymptotic trajectory, but beta band mediated networks followed a linear trajectory. Network integration increased with age in gamma band mediated networks, while local segregation increased with age in beta band mediated networks. Spatially, the hubs that changed in importance with age in the beta band mediated networks had relatively little overlap with those that showed the greatest changes in the gamma band mediated networks. These findings are relevant for our understanding of the neural mechanisms of cortical maturation, in both typical and atypical development.
[ { "created": "Tue, 13 Feb 2018 01:04:40 GMT", "version": "v1" } ]
2018-03-13
[ [ "Khan", "Sheraz", "" ], [ "Hashmi", "Javeria", "" ], [ "Mamashli", "Fahimeh", "" ], [ "Michmizos", "Konstantinos", "" ], [ "Kitzbichler", "Manfred", "" ], [ "Bharadwaj", "Hari", "" ], [ "Bekhti", "Yousra", "" ], [ "Ganesan", "Santosh", "" ], [ "Garel", "Keri A", "" ], [ "Whitfield-Gabrieli", "Susan", "" ], [ "Gollub", "Randy", "" ], [ "Kong", "Jian", "" ], [ "Vaina", "Lucia M", "" ], [ "Rana", "Kunjan", "" ], [ "Stufflebeam", "Steven", "" ], [ "Hamalainen", "Matti", "" ], [ "Kenet", "Tal", "" ] ]
The functional significance of resting state networks and their abnormal manifestations in psychiatric disorders are firmly established, as is the importance of the cortical rhythms in mediating these networks. Resting state networks are known to undergo substantial reorganization from childhood to adulthood, but whether distinct cortical rhythms, which are generated by separable neural mechanisms and are often manifested abnormally in psychiatric conditions, mediate maturation differentially, remains unknown. Using magnetoencephalography (MEG) to map frequency band specific maturation of resting state networks from age 7 to 29 in 162 participants (31 independent), we found significant changes with age in networks mediated by the beta (13-30Hz) and gamma (31-80Hz) bands. More specifically, gamma band mediated networks followed an expected asymptotic trajectory, but beta band mediated networks followed a linear trajectory. Network integration increased with age in gamma band mediated networks, while local segregation increased with age in beta band mediated networks. Spatially, the hubs that changed in importance with age in the beta band mediated networks had relatively little overlap with those that showed the greatest changes in the gamma band mediated networks. These findings are relevant for our understanding of the neural mechanisms of cortical maturation, in both typical and atypical development.
1906.11039
Nadia Loy
Nadia Loy and Luigi Preziosi
Kinetic models with non-local sensing determining cell polarization and speed according to independent cues
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells move by run and tumble, a kind of dynamics in which the cell alternates runs over straight lines and re-orientations. This erratic motion may be influenced by external factors, like chemicals, nutrients, the extra-cellular matrix, in the sense that the cell measures the external field and elaborates the signal eventually adapting its dynamics. We propose a kinetic transport equation implementing a velocity-jump process in which the transition probability takes into account a double bias, which acts, respectively, on the choice of the direction of motion and of the speed. The double bias depends on two different non-local sensing cues coming from the external environment. We analyze how the size of the cell and the way of sensing the environment with respect to the variation of the external fields affect the cell population dynamics by recovering an appropriate macroscopic limit and directly integrating the kinetic transport equation. A comparison between the solutions of the transport equation and of the proper macroscopic limit is also performed.
[ { "created": "Mon, 17 Jun 2019 16:15:30 GMT", "version": "v1" }, { "created": "Thu, 27 Jun 2019 06:43:59 GMT", "version": "v2" }, { "created": "Mon, 1 Jul 2019 14:06:31 GMT", "version": "v3" } ]
2019-07-02
[ [ "Loy", "Nadia", "" ], [ "Preziosi", "Luigi", "" ] ]
Cells move by run and tumble, a kind of dynamics in which the cell alternates runs over straight lines and re-orientations. This erratic motion may be influenced by external factors, like chemicals, nutrients, the extra-cellular matrix, in the sense that the cell measures the external field and elaborates the signal eventually adapting its dynamics. We propose a kinetic transport equation implementing a velocity-jump process in which the transition probability takes into account a double bias, which acts, respectively, on the choice of the direction of motion and of the speed. The double bias depends on two different non-local sensing cues coming from the external environment. We analyze how the size of the cell and the way of sensing the environment with respect to the variation of the external fields affect the cell population dynamics by recovering an appropriate macroscopic limit and directly integrating the kinetic transport equation. A comparison between the solutions of the transport equation and of the proper macroscopic limit is also performed.
2309.15397
C.C. Alan Fung
Huilin Zhao and Sungchil Yang and Chi Chung Alan Fung
Short-Term Postsynaptic Plasticity Facilitates Predictive Tracking in Continuous Attractors
29 pages, 9 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
The N-methyl-D-aspartate receptor (NMDAR) is a crucial component of synaptic transmission, and its dysfunction is implicated in many neurological diseases and psychiatric conditions. NMDAR-based short-term postsynaptic plasticity (STPP) is a newly discovered postsynaptic response facilitation mechanism. Our group has suggested that long-lasting glutamate binding of NMDAR allows input information to be held for up to 500 ms or longer in brain slices, which contributes to response facilitation. However, the implications of STPP in the dynamics of neuronal populations remain unknown. In this study, we implemented STPP in a continuous attractor neural network (CANN) model to describe the neural information encoded in neuronal populations. Unlike short-term facilitation, which is a kind of presynaptic plasticity, the temporally enhanced synaptic efficacy induced by STPP destabilizes the network state of the CANN by increasing the mobility of the system. This nontrivial dynamical effect enables a CANN with STPP to track a moving stimulus predictively, i.e., the network state responds to the anticipated stimulus. Our findings reveal a novel STPP-based mechanism for sensory prediction that can help develop brain-inspired computational algorithms for prediction.
[ { "created": "Wed, 27 Sep 2023 04:40:51 GMT", "version": "v1" } ]
2023-09-28
[ [ "Zhao", "Huilin", "" ], [ "Yang", "Sungchil", "" ], [ "Fung", "Chi Chung Alan", "" ] ]
The N-methyl-D-aspartate receptor (NMDAR) is a crucial component of synaptic transmission, and its dysfunction is implicated in many neurological diseases and psychiatric conditions. NMDAR-based short-term postsynaptic plasticity (STPP) is a newly discovered postsynaptic response facilitation mechanism. Our group has suggested that long-lasting glutamate binding of NMDAR allows input information to be held for up to 500 ms or longer in brain slices, which contributes to response facilitation. However, the implications of STPP in the dynamics of neuronal populations remain unknown. In this study, we implemented STPP in a continuous attractor neural network (CANN) model to describe the neural information encoded in neuronal populations. Unlike short-term facilitation, which is a kind of presynaptic plasticity, the temporally enhanced synaptic efficacy induced by STPP destabilizes the network state of the CANN by increasing the mobility of the system. This nontrivial dynamical effect enables a CANN with STPP to track a moving stimulus predictively, i.e., the network state responds to the anticipated stimulus. Our findings reveal a novel STPP-based mechanism for sensory prediction that can help develop brain-inspired computational algorithms for prediction.
1205.1912
Jan Urban
Jan Urban
Blank measurement based time-alignment in LC-MS
Urban J., Hrouzek P., Van\v{e}k J., Kopeck\'y J., \v{S}tys D., Time-Alignment in HPLC-MS based on blank measurement, 36th International symposium on High-Performance Liquid Phase separations and Related Techniques, Budapest, Hungary, 2011. Urban J., Hrouzek P., Van\v{e}k J., Kopeck\'y J., \v{S}tys D., Blank measurement based time-alignment in LC-MS, Microscale Bioseparations, Prague, 2010
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we present blank-based time-alignment (BBTA) as a robust analytical approach for the treatment of non-linear time shifts occurring in HPLC-MS data. The need for such a tool in the large datasets recently produced by analytical chemistry and so-called omics studies is evident. The proposed approach is based on the measurement and comparison of evident features in the blank and the analyzed sample. In the first step of the BBTA procedure, the number of compounds is reduced by max-to-mean ratio thresholding, which greatly reduces the computational time. This simple thresholding is followed by the selection of time markers, defined from blank inflection points, which are then used for the transformation function (a second-degree polynomial in the example). The BBTA approach was compared with the Correlation Optimized Warping (COW) method on a real HPLC-MS measurement, and was shown to have a distinctively shorter computational time as well as fewer mathematical presumptions. BBTA is computationally much simpler, faster (more than 1000x), and accurate in comparison with warping. Moreover, marker selection works efficiently without any peak detection: it is sufficient to analyze only the baseline contribution in the analyte measurement with sparse knowledge of blank behavior. Finally, BBTA does not require the use of extra internal standards, and owing to its simplicity it has the potential to become a widespread tool in HPLC-MS data treatment.
[ { "created": "Wed, 9 May 2012 08:51:38 GMT", "version": "v1" } ]
2016-11-11
[ [ "Urban", "Jan", "" ] ]
Here we present blank-based time-alignment (BBTA) as a robust analytical approach for the treatment of non-linear time shifts occurring in HPLC-MS data. The need for such a tool in the large datasets recently produced by analytical chemistry and so-called omics studies is evident. The proposed approach is based on the measurement and comparison of evident features in the blank and the analyzed sample. In the first step of the BBTA procedure, the number of compounds is reduced by max-to-mean ratio thresholding, which greatly reduces the computational time. This simple thresholding is followed by the selection of time markers, defined from blank inflection points, which are then used for the transformation function (a second-degree polynomial in the example). The BBTA approach was compared with the Correlation Optimized Warping (COW) method on a real HPLC-MS measurement, and was shown to have a distinctively shorter computational time as well as fewer mathematical presumptions. BBTA is computationally much simpler, faster (more than 1000x), and accurate in comparison with warping. Moreover, marker selection works efficiently without any peak detection: it is sufficient to analyze only the baseline contribution in the analyte measurement with sparse knowledge of blank behavior. Finally, BBTA does not require the use of extra internal standards, and owing to its simplicity it has the potential to become a widespread tool in HPLC-MS data treatment.
1809.06997
Shou-Wen Wang
Shou-Wen Wang, Lei-Han Tang
Emergence of collective oscillations in adaptive cells
11 pages and 6 figures for the main text. 18 pages and 14 figures for supplementary
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective oscillation of cells in a population has been reported under diverse biological contexts and with vastly different molecular constructs. Could there be common principles similar to those that govern spontaneous oscillation in mechanical or electrical systems? Here, we answer this question in the affirmative by categorising the response of individual cells against a time-varying signal. A positive intracellular signal relay of sufficient gain from participating cells is required to sustain the oscillations, together with phase matching. The two conditions yield quantitative predictions for the onset cell density and frequency in terms of measured single-cell and signal response functions. Through mathematical constructions, we show that cells that adapt to a constant stimulus fulfil the phase requirement by developing a leading phase in an "active" frequency window that enables cell-to-signal energy flow. Analysis of dynamical quorum sensing in several cellular systems with increasing biological complexity reaffirms the pivotal role of adaptation in powering oscillations in an otherwise dissipative cell-to-cell communication channel. The physical conditions identified can be used to design synthetic oscillatory systems.
[ { "created": "Wed, 19 Sep 2018 03:30:04 GMT", "version": "v1" }, { "created": "Thu, 27 Sep 2018 16:20:40 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2019 16:01:26 GMT", "version": "v3" } ]
2019-07-08
[ [ "Wang", "Shou-Wen", "" ], [ "Tang", "Lei-Han", "" ] ]
Collective oscillation of cells in a population has been reported under diverse biological contexts and with vastly different molecular constructs. Could there be common principles similar to those that govern spontaneous oscillation in mechanical or electrical systems? Here, we answer this question in the affirmative by categorising the response of individual cells against a time-varying signal. A positive intracellular signal relay of sufficient gain from participating cells is required to sustain the oscillations, together with phase matching. The two conditions yield quantitative predictions for the onset cell density and frequency in terms of measured single-cell and signal response functions. Through mathematical constructions, we show that cells that adapt to a constant stimulus fulfil the phase requirement by developing a leading phase in an "active" frequency window that enables cell-to-signal energy flow. Analysis of dynamical quorum sensing in several cellular systems with increasing biological complexity reaffirms the pivotal role of adaptation in powering oscillations in an otherwise dissipative cell-to-cell communication channel. The physical conditions identified can be used to design synthetic oscillatory systems.
1008.0717
Tsvi Tlusty
Arbel D. Tadmor, Tsvi Tlusty
A Coarse-Grained Biophysical Model of E. coli and Its Application to Perturbation of the rRNA Operon Copy Number
null
PLoS Comput Biol. 2008 May 2;4(4):e1000038
10.1371/journal.pcbi.1000038
null
q-bio.MN physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a biophysical model of Escherichia coli that predicts growth rate and an effective cellular composition from an effective, coarse-grained representation of its genome. We assume that E. coli is in a state of balanced exponential steady-state growth, growing in a temporally and spatially constant environment, rich in resources. We apply this model to a series of past measurements, where the growth rate and rRNA-to-protein ratio have been measured for seven E. coli strains with an rRNA operon copy number ranging from one to seven (the wild-type copy number). These experiments show that growth rate markedly decreases for strains with fewer than six copies. Using the model, we were able to reproduce these measurements. We show that the model that best fits these data suggests that the volume fraction of macromolecules inside E. coli is not fixed when the rRNA operon copy number is varied. Moreover, the model predicts that increasing the copy number beyond seven results in a cytoplasm densely packed with ribosomes and proteins. Assuming that under such overcrowded conditions prolonged diffusion times tend to weaken binding affinities, the model predicts that growth rate will not increase substantially beyond the wild-type growth rate, as indicated by other experiments. Our model therefore suggests that changing the rRNA operon copy number of wild-type E. coli cells growing in a constant rich environment does not substantially increase their growth rate. Other observations regarding strains with an altered rRNA operon copy number, such as nucleoid compaction and the rRNA operon feedback response, appear to be qualitatively consistent with this model. In addition, we discuss possible design principles suggested by the model and propose further experiments to test its validity.
[ { "created": "Wed, 4 Aug 2010 08:47:56 GMT", "version": "v1" } ]
2010-08-05
[ [ "Tadmor", "Arbel D.", "" ], [ "Tlusty", "Tsvi", "" ] ]
We propose a biophysical model of Escherichia coli that predicts growth rate and an effective cellular composition from an effective, coarse-grained representation of its genome. We assume that E. coli is in a state of balanced exponential steady-state growth, growing in a temporally and spatially constant environment, rich in resources. We apply this model to a series of past measurements, where the growth rate and rRNA-to-protein ratio have been measured for seven E. coli strains with an rRNA operon copy number ranging from one to seven (the wild-type copy number). These experiments show that growth rate markedly decreases for strains with fewer than six copies. Using the model, we were able to reproduce these measurements. We show that the model that best fits these data suggests that the volume fraction of macromolecules inside E. coli is not fixed when the rRNA operon copy number is varied. Moreover, the model predicts that increasing the copy number beyond seven results in a cytoplasm densely packed with ribosomes and proteins. Assuming that under such overcrowded conditions prolonged diffusion times tend to weaken binding affinities, the model predicts that growth rate will not increase substantially beyond the wild-type growth rate, as indicated by other experiments. Our model therefore suggests that changing the rRNA operon copy number of wild-type E. coli cells growing in a constant rich environment does not substantially increase their growth rate. Other observations regarding strains with an altered rRNA operon copy number, such as nucleoid compaction and the rRNA operon feedback response, appear to be qualitatively consistent with this model. In addition, we discuss possible design principles suggested by the model and propose further experiments to test its validity.
q-bio/0502003
Anastasia Anishchenko
A. Anishchenko (1), E. Bienenstock (1), A. Treves (2) ((1) Brown University, (2) SISSA)
Autoassociative Memory Retrieval and Spontaneous Activity Bumps in Small-World Networks of Integrate-and-Fire Neurons
25 pages, 7 figures; submitted to Neural Computation
null
null
null
q-bio.NC
null
Qualitatively, some real networks in the brain could be characterized as 'small worlds', in the sense that the structure of their connections is intermediate between the extremes of an orderly geometric arrangement and of a geometry-independent random mesh. Small worlds can be defined more precisely in terms of their mean path length and clustering coefficient; but is such a precise description useful to better understand how the type of connectivity affects memory retrieval? We have simulated an autoassociative memory network of integrate-and-fire units, positioned on a ring, with the network connectivity varied parametrically between ordered and random. We find that the network retrieves when the connectivity is close to random, and displays the characteristic behavior of ordered nets (localized 'bumps' of activity) when the connectivity is close to ordered. Recent analytical work shows that these two behaviours can coexist in a network of simple threshold-linear units, leading to localized retrieval states. We find that they tend to be mutually exclusive behaviours, however, with our integrate-and-fire units. Moreover, the transition between the two occurs for values of the connectivity parameter which are not simply related to the notion of small worlds.
[ { "created": "Fri, 4 Feb 2005 03:05:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Anishchenko", "A.", "" ], [ "Bienenstock", "E.", "" ], [ "Treves", "A.", "" ] ]
Qualitatively, some real networks in the brain could be characterized as 'small worlds', in the sense that the structure of their connections is intermediate between the extremes of an orderly geometric arrangement and of a geometry-independent random mesh. Small worlds can be defined more precisely in terms of their mean path length and clustering coefficient; but is such a precise description useful to better understand how the type of connectivity affects memory retrieval? We have simulated an autoassociative memory network of integrate-and-fire units, positioned on a ring, with the network connectivity varied parametrically between ordered and random. We find that the network retrieves when the connectivity is close to random, and displays the characteristic behavior of ordered nets (localized 'bumps' of activity) when the connectivity is close to ordered. Recent analytical work shows that these two behaviours can coexist in a network of simple threshold-linear units, leading to localized retrieval states. We find that they tend to be mutually exclusive behaviours, however, with our integrate-and-fire units. Moreover, the transition between the two occurs for values of the connectivity parameter which are not simply related to the notion of small worlds.
2402.09163
Emma Kun
P\'eter Ozsv\'art, Emma Kun
Cosmic radiation drives quasi-periodic changes in the diversity of siliceous marine microplankton
13 Pages, 2 Figures. Submitted to Astrobiology. Comments welcome
null
null
null
q-bio.PE astro-ph.EP astro-ph.GA astro-ph.HE physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Radiolarians are significant contributors to the oceanic primary productivity and the global silica cycle in the last 500 Myr. Their diversity throughout the Phanerozoic shows periodic fluctuations. We identify a possible abiotic candidate for driving these patterns which seems to potentially influence radiolarian diversity changes during this period at a significance level of $\sim 2.2 \sigma$. Our finding suggests a significant correlation between the origination of new radiolaria species and maximum excursions of the Solar system from the Galactic plane, where the magnetic shielding of cosmic rays is expected to be weaker. We connect the particularly strong radiolaria blooming during the Middle Triassic to the so-called Mesozoic dipole-low of the geomagnetic field, which was in its deepest state when radiolarias were blooming. According to the scenario, high-energy cosmic rays presumably implied particular damage to the DNA during the maximum excursions which may trigger large chromosomal abnormalities leading to the appearance of a large number of new genera and species during these periods.
[ { "created": "Wed, 14 Feb 2024 13:27:31 GMT", "version": "v1" } ]
2024-02-15
[ [ "Ozsvárt", "Péter", "" ], [ "Kun", "Emma", "" ] ]
Radiolarians are significant contributors to the oceanic primary productivity and the global silica cycle in the last 500 Myr. Their diversity throughout the Phanerozoic shows periodic fluctuations. We identify a possible abiotic candidate for driving these patterns which seems to potentially influence radiolarian diversity changes during this period at a significance level of $\sim 2.2 \sigma$. Our finding suggests a significant correlation between the origination of new radiolaria species and maximum excursions of the Solar system from the Galactic plane, where the magnetic shielding of cosmic rays is expected to be weaker. We connect the particularly strong radiolaria blooming during the Middle Triassic to the so-called Mesozoic dipole-low of the geomagnetic field, which was in its deepest state when radiolarias were blooming. According to the scenario, high-energy cosmic rays presumably implied particular damage to the DNA during the maximum excursions which may trigger large chromosomal abnormalities leading to the appearance of a large number of new genera and species during these periods.
2103.06562
Renaud Jolivet
Dmytro Grytskyy and Renaud B. Jolivet
A learning rule balancing energy consumption and information maximization in a feed-forward neuronal network
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Information measures are often used to assess the efficacy of neural networks, and learning rules can be derived through optimization procedures on such measures. In biological neural networks, computation is restricted by the amount of available resources. Considering energy restrictions, it is thus reasonable to balance information processing efficacy with energy consumption. Here, we studied networks of non-linear Hawkes neurons and assessed the information flow through these networks using mutual information. We then applied gradient descent for a combination of mutual information and energetic costs to obtain a learning rule. Through this procedure, we obtained a rule containing a sliding threshold, similar to the Bienenstock-Cooper-Munro rule. The rule contains terms local in time and in space plus one global variable common to the whole network. The rule thus belongs to so-called three-factor rules and the global variable could be related to a number of biological processes. In neural networks using this learning rule, frequent inputs get mapped onto low energy orbits of the network while rare inputs aren't learned.
[ { "created": "Thu, 11 Mar 2021 09:41:53 GMT", "version": "v1" } ]
2021-03-12
[ [ "Grytskyy", "Dmytro", "" ], [ "Jolivet", "Renaud B.", "" ] ]
Information measures are often used to assess the efficacy of neural networks, and learning rules can be derived through optimization procedures on such measures. In biological neural networks, computation is restricted by the amount of available resources. Considering energy restrictions, it is thus reasonable to balance information processing efficacy with energy consumption. Here, we studied networks of non-linear Hawkes neurons and assessed the information flow through these networks using mutual information. We then applied gradient descent for a combination of mutual information and energetic costs to obtain a learning rule. Through this procedure, we obtained a rule containing a sliding threshold, similar to the Bienenstock-Cooper-Munro rule. The rule contains terms local in time and in space plus one global variable common to the whole network. The rule thus belongs to so-called three-factor rules and the global variable could be related to a number of biological processes. In neural networks using this learning rule, frequent inputs get mapped onto low energy orbits of the network while rare inputs aren't learned.
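The sliding-threshold behaviour mentioned above (similar to the Bienenstock-Cooper-Munro rule) can be illustrated with a minimal rate-based sketch. This is not the paper's derived three-factor rule, and the constants `eta` and `tau` are illustrative assumptions:

```python
import random

def bcm_step(w, x, theta, eta=0.01, tau=0.1):
    """One rate-based step of a BCM-like rule: the postsynaptic rate is
    y = w.x, the weight change is proportional to x * y * (y - theta),
    and the threshold theta slides towards the running average of y^2."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    w = [wi + eta * xi * y * (y - theta) for wi, xi in zip(w, x)]
    theta += tau * (y * y - theta)  # sliding threshold
    return w, theta, y

rng = random.Random(0)
w, theta = [0.5, 0.5], 0.0
frequent, rare = [1.0, 0.0], [0.0, 1.0]  # unequal input statistics
for _ in range(500):
    x = frequent if rng.random() < 0.9 else rare
    w, theta, y = bcm_step(w, x, theta)
```

In this toy setting the weight serving the frequent input is potentiated while the rarely driven weight decays, echoing the abstract's observation that frequent inputs are learned and rare ones are not.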
1205.1322
Alex Skvortsov
P. Katauskis, P. Skakauskas, A. Skvortsov
The receptor-toxin-antibody interaction: Mathematical model and numerical simulation
6 pages, 7 figures
null
null
null
q-bio.CB
http://creativecommons.org/licenses/publicdomain/
A reaction-diffusion model of receptor-toxin-antibody (RTA) interaction is studied numerically. The protective properties of an antibody against a given toxin are evaluated for a spherical cell placed in a toxin-antibody solution. The parameters selected for the numerical simulation approximately correspond to practically relevant values reported in the literature (ricin and monoclonal antibody 2B11), with significant ranges of variation to allow demonstration of different regimes of intracellular transport. The proposed refinement of the RTA model may become important for the consistent evaluation of the protective potential of an antibody and for the estimation of the time period during which the application of this antibody is most effective. It can be a useful tool for in vitro selection of potential protective antibodies for progression to in vivo evaluation.
[ { "created": "Mon, 7 May 2012 08:49:47 GMT", "version": "v1" }, { "created": "Tue, 5 Jun 2012 07:44:52 GMT", "version": "v2" } ]
2012-06-06
[ [ "Katauskis", "P.", "" ], [ "Skakauskas", "P.", "" ], [ "Skvortsov", "A.", "" ] ]
A reaction-diffusion model of receptor-toxin-antibody (RTA) interaction is studied numerically. The protective properties of an antibody against a given toxin are evaluated for a spherical cell placed in a toxin-antibody solution. The parameters selected for the numerical simulation approximately correspond to practically relevant values reported in the literature (ricin and monoclonal antibody 2B11), with significant ranges of variation to allow demonstration of different regimes of intracellular transport. The proposed refinement of the RTA model may become important for the consistent evaluation of the protective potential of an antibody and for the estimation of the time period during which the application of this antibody is most effective. It can be a useful tool for in vitro selection of potential protective antibodies for progression to in vivo evaluation.
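A well-mixed, forward-Euler toy version of the RTA kinetics (toxin binding to antibody and to receptor by mass action) illustrates the competition the model resolves. The rate constants and initial concentrations here are hypothetical, and the actual model additionally includes diffusion and a spherical-cell geometry:

```python
def rta_step(state, dt, k_ta=1.0, k_rt=0.5):
    """One forward-Euler step of well-mixed RTA kinetics: free toxin T
    binds antibody A (forming the protective complex TA) or receptor R
    (forming the harmful complex RT), both by mass action."""
    T, A, R, TA, RT = state
    v_ta = k_ta * T * A  # toxin-antibody binding rate
    v_rt = k_rt * T * R  # toxin-receptor binding rate
    return (T - dt * (v_ta + v_rt),
            A - dt * v_ta,
            R - dt * v_rt,
            TA + dt * v_ta,
            RT + dt * v_rt)

state = (1.0, 2.0, 1.0, 0.0, 0.0)  # hypothetical initial concentrations
for _ in range(10000):
    state = rta_step(state, dt=0.001)
T, A, R, TA, RT = state
```

Total toxin (free plus bound) is conserved exactly by this scheme, which is a useful sanity check on the integrator; with more antibody than receptor, most toxin ends up neutralised.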
1309.2622
Liane Gabora
Liane Gabora
Five clarifications about cultural evolution
23 pages. arXiv admin note: substantial text overlap with arXiv:1206.4386
Journal of Cognition and Culture, 11, 61-83 (2011)
10.1163/156853711X568699
null
q-bio.PE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reviews and clarifies five misunderstandings about cultural evolution identified by Henrich, Boyd, and Richerson (2008). First, cultural representations are neither discrete nor continuous; they are distributed across neurons that respond to microfeatures. This enables associations to be made, and cultural change to be generated. Second, 'replicator dynamics' do not ensure natural selection. The replicator notion does not capture the distinction between actively interpreted self-assembly code and passively copied self-description, which leads to a fundamental principle of natural selection: inherited information is transmitted, whereas acquired information is not. Third, this principle is violated in culture by the ubiquity of acquired change. Moreover, biased transmission is less important to culture than the creative processes by which novelty is generated. Fourth, there is no objective basis for determining cultural fitness. Fifth, the necessity of randomness is discussed. It is concluded that natural selection is inappropriate as an explanatory framework for culture.
[ { "created": "Tue, 10 Sep 2013 19:31:05 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2019 02:17:23 GMT", "version": "v2" } ]
2019-07-02
[ [ "Gabora", "Liane", "" ] ]
This paper reviews and clarifies five misunderstandings about cultural evolution identified by Henrich, Boyd, and Richerson (2008). First, cultural representations are neither discrete nor continuous; they are distributed across neurons that respond to microfeatures. This enables associations to be made, and cultural change to be generated. Second, 'replicator dynamics' do not ensure natural selection. The replicator notion does not capture the distinction between actively interpreted self-assembly code and passively copied self-description, which leads to a fundamental principle of natural selection: inherited information is transmitted, whereas acquired information is not. Third, this principle is violated in culture by the ubiquity of acquired change. Moreover, biased transmission is less important to culture than the creative processes by which novelty is generated. Fourth, there is no objective basis for determining cultural fitness. Fifth, the necessity of randomness is discussed. It is concluded that natural selection is inappropriate as an explanatory framework for culture.
2310.13806
Brian Hutchinson
Sarah Coffland and Katie Christensen and Filip Jagodzinski and Brian Hutchinson
RoseNet: Predicting Energy Metrics of Double InDel Mutants Using Deep Learning
Presented at Computational Structural Bioinformatics Workshop 2023
Proceedings of the 14th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. ACM BCB 2023
10.1145/3584371.3612951
null
q-bio.BM cs.AI cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An amino acid insertion or deletion, or InDel, can have profound and varying functional impacts on a protein's structure. InDel mutations in the transmembrane conductance regulator protein, for example, give rise to cystic fibrosis. Unfortunately, performing InDel mutations on physical proteins and studying their effects is a time-prohibitive process. Consequently, modeling InDels computationally can supplement and inform wet lab experiments. In this work, we make use of our data sets of exhaustive double InDel mutations for three proteins, which we computationally generated using a robotics-inspired inverse kinematics approach available in Rosetta. We develop and train a neural network, RoseNet, on several structural and energetic metrics output by Rosetta during the mutant generation process. We explore and present how RoseNet is able to emulate the exhaustive data set using deep learning methods, and show to what extent it can predict Rosetta metrics for unseen mutant sequences with two InDels. RoseNet achieves a median Pearson correlation coefficient of 0.775 over all Rosetta scores for the largest protein. Furthermore, a sensitivity analysis is performed to determine the quantity of data required to accurately emulate the structural scores for computationally generated mutants. We show that the model can be trained on minimal data (<50%) and still retain a high level of accuracy.
[ { "created": "Fri, 20 Oct 2023 20:36:13 GMT", "version": "v1" } ]
2023-10-24
[ [ "Coffland", "Sarah", "" ], [ "Christensen", "Katie", "" ], [ "Jagodzinski", "Filip", "" ], [ "Hutchinson", "Brian", "" ] ]
An amino acid insertion or deletion, or InDel, can have profound and varying functional impacts on a protein's structure. InDel mutations in the transmembrane conductance regulator protein, for example, give rise to cystic fibrosis. Unfortunately, performing InDel mutations on physical proteins and studying their effects is a time-prohibitive process. Consequently, modeling InDels computationally can supplement and inform wet lab experiments. In this work, we make use of our data sets of exhaustive double InDel mutations for three proteins, which we computationally generated using a robotics-inspired inverse kinematics approach available in Rosetta. We develop and train a neural network, RoseNet, on several structural and energetic metrics output by Rosetta during the mutant generation process. We explore and present how RoseNet is able to emulate the exhaustive data set using deep learning methods, and show to what extent it can predict Rosetta metrics for unseen mutant sequences with two InDels. RoseNet achieves a median Pearson correlation coefficient of 0.775 over all Rosetta scores for the largest protein. Furthermore, a sensitivity analysis is performed to determine the quantity of data required to accurately emulate the structural scores for computationally generated mutants. We show that the model can be trained on minimal data (<50%) and still retain a high level of accuracy.
2002.00363
Jos\'e A. Cuesta
Susanna Manrubia, Jos\'e A. Cuesta, Jacobo Aguirre, Sebastian E. Ahnert, Lee Altenberg, Alejandro V. Cano, Pablo Catal\'an, Ramon Diaz-Uriarte, Santiago F. Elena, Juan Antonio Garc\'ia-Mart\'in, Paulien Hogeweg, Bhavin S. Khatri, Joachim Krug, Ard A. Louis, Nora S. Martin, Joshua L. Payne, Matthew J. Tarnowski, and Marcel Wei{\ss}
From genotypes to organisms: State-of-the-art and perspectives of a cornerstone in evolutionary dynamics
111 pages, 11 figures uses elsarticle latex class
Physics of Life Reviews 38, 55-106 (2021)
10.1016/j.plrev.2021.03.004
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding how genotypes map onto phenotypes, fitness, and eventually organisms is arguably the next major missing piece in a fully predictive theory of evolution. We refer to this generally as the problem of the genotype-phenotype map. Though we are still far from achieving a complete picture of these relationships, our current understanding of simpler questions, such as the structure induced in the space of genotypes by sequences mapped to molecular structures, has revealed important facts that deeply affect the dynamical description of evolutionary processes. Empirical evidence supporting the fundamental relevance of features such as phenotypic bias is mounting as well, while the synthesis of conceptual and experimental progress leads to questioning current assumptions on the nature of evolutionary dynamics, with cancer progression models and synthetic biology approaches being notable examples. This work takes a critical and constructive look at our current knowledge of how genotypes map onto molecular phenotypes and organismal functions, and discusses theoretical and empirical avenues to broaden and improve this comprehension. As a final goal, this community should aim at deriving an updated picture of evolutionary processes soundly relying on the structural properties of genotype spaces, as revealed by modern techniques of molecular and functional analysis.
[ { "created": "Sun, 2 Feb 2020 11:12:01 GMT", "version": "v1" }, { "created": "Wed, 15 Jul 2020 17:35:29 GMT", "version": "v2" }, { "created": "Wed, 17 Mar 2021 15:11:45 GMT", "version": "v3" } ]
2022-05-20
[ [ "Manrubia", "Susanna", "" ], [ "Cuesta", "José A.", "" ], [ "Aguirre", "Jacobo", "" ], [ "Ahnert", "Sebastian E.", "" ], [ "Altenberg", "Lee", "" ], [ "Cano", "Alejandro V.", "" ], [ "Catalán", "Pablo", "" ], [ "Diaz-Uriarte", "Ramon", "" ], [ "Elena", "Santiago F.", "" ], [ "García-Martín", "Juan Antonio", "" ], [ "Hogeweg", "Paulien", "" ], [ "Khatri", "Bhavin S.", "" ], [ "Krug", "Joachim", "" ], [ "Louis", "Ard A.", "" ], [ "Martin", "Nora S.", "" ], [ "Payne", "Joshua L.", "" ], [ "Tarnowski", "Matthew J.", "" ], [ "Weiß", "Marcel", "" ] ]
Understanding how genotypes map onto phenotypes, fitness, and eventually organisms is arguably the next major missing piece in a fully predictive theory of evolution. We refer to this generally as the problem of the genotype-phenotype map. Though we are still far from achieving a complete picture of these relationships, our current understanding of simpler questions, such as the structure induced in the space of genotypes by sequences mapped to molecular structures, has revealed important facts that deeply affect the dynamical description of evolutionary processes. Empirical evidence supporting the fundamental relevance of features such as phenotypic bias is mounting as well, while the synthesis of conceptual and experimental progress leads to questioning current assumptions on the nature of evolutionary dynamics, with cancer progression models and synthetic biology approaches being notable examples. This work takes a critical and constructive look at our current knowledge of how genotypes map onto molecular phenotypes and organismal functions, and discusses theoretical and empirical avenues to broaden and improve this comprehension. As a final goal, this community should aim at deriving an updated picture of evolutionary processes soundly relying on the structural properties of genotype spaces, as revealed by modern techniques of molecular and functional analysis.
2011.04892
Peixiao Wang
Peixiao Wang, Tao Hu, Hongqiang Liu and Xinyan Zhu
Exploring the impact of under-reported cases on the COVID-19 spatiotemporal distribution using healthcare worker infection data
null
Cities, 2022
10.1016/j.cities.2022.103593
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A timely understanding of the spatiotemporal pattern and development trend of COVID-19 is critical for timely prevention and control. However, the under-reporting of cases is widespread in fields associated with public health. It is also possible to draw biased inferences and formulate inappropriate prevention and control policies if the phenomenon of under-reporting is not taken into account. Therefore, in this paper, a novel framework was proposed to explore the impact of under-reporting on COVID-19 spatiotemporal distributions, and an empirical analysis was carried out using infection data of healthcare workers in Wuhan and Hubei (excluding Wuhan). The results show that (1) the lognormal distribution was the most suitable for describing the evolution of the epidemic over time; (2) the estimated peak infection time of the reported cases lagged behind the peak infection time of the healthcare worker cases, and the estimated infection time interval of the reported cases was smaller than that of the healthcare worker cases; (3) the impact of under-reported cases on the early stages of the pandemic was greater than that on its later stages, and the impact on the early onset areas was greater than that on the late onset areas; (4) although the number of reported cases was lower than the actual number of cases, a high spatial correlation existed between the cumulatively reported cases and the healthcare worker cases. The proposed framework of this study is highly extensible, and relevant researchers can use data sources from other countries to carry out similar research.
[ { "created": "Tue, 10 Nov 2020 04:48:49 GMT", "version": "v1" }, { "created": "Thu, 26 Nov 2020 03:10:18 GMT", "version": "v2" }, { "created": "Mon, 30 Nov 2020 14:06:54 GMT", "version": "v3" }, { "created": "Sun, 13 Dec 2020 14:13:25 GMT", "version": "v4" }, { "created": "Wed, 27 Jan 2021 10:09:03 GMT", "version": "v5" }, { "created": "Thu, 30 Sep 2021 13:33:04 GMT", "version": "v6" }, { "created": "Thu, 20 Jan 2022 09:21:15 GMT", "version": "v7" } ]
2024-01-15
[ [ "Wang", "Peixiao", "" ], [ "Hu", "Tao", "" ], [ "Liu", "Hongqiang", "" ], [ "Zhu", "Xinyan", "" ] ]
A timely understanding of the spatiotemporal pattern and development trend of COVID-19 is critical for timely prevention and control. However, the under-reporting of cases is widespread in fields associated with public health. It is also possible to draw biased inferences and formulate inappropriate prevention and control policies if the phenomenon of under-reporting is not taken into account. Therefore, in this paper, a novel framework was proposed to explore the impact of under-reporting on COVID-19 spatiotemporal distributions, and an empirical analysis was carried out using infection data of healthcare workers in Wuhan and Hubei (excluding Wuhan). The results show that (1) the lognormal distribution was the most suitable for describing the evolution of the epidemic over time; (2) the estimated peak infection time of the reported cases lagged behind the peak infection time of the healthcare worker cases, and the estimated infection time interval of the reported cases was smaller than that of the healthcare worker cases; (3) the impact of under-reported cases on the early stages of the pandemic was greater than that on its later stages, and the impact on the early onset areas was greater than that on the late onset areas; (4) although the number of reported cases was lower than the actual number of cases, a high spatial correlation existed between the cumulatively reported cases and the healthcare worker cases. The proposed framework of this study is highly extensible, and relevant researchers can use data sources from other countries to carry out similar research.
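The lognormal description of the epidemic's evolution over time can be illustrated with a maximum-likelihood fit, since a lognormal's parameters are simply the mean and standard deviation of the log-transformed onset times. The data below are synthetic, drawn from a known lognormal, not the study's healthcare worker data:

```python
import math
import random
import statistics

def fit_lognormal(times):
    """MLE fit of a lognormal distribution: mu and sigma are the mean and
    standard deviation of log-transformed onset times; the density peaks
    at exp(mu - sigma**2)."""
    logs = [math.log(t) for t in times]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)
    return mu, sigma, math.exp(mu - sigma ** 2)

# synthetic onset days drawn from a known lognormal (hypothetical data)
rng = random.Random(42)
onsets = [math.exp(rng.gauss(3.0, 0.4)) for _ in range(5000)]
mu, sigma, mode = fit_lognormal(onsets)
```

Comparing the fitted mode (peak infection time) across data sources is exactly the kind of lag comparison reported in result (2).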
1701.03448
Johannes M\"uller
Johannes M\"uller, Karin M\"unch, Bendix Koopmann, Eva Stadler, Louisa Roselius, Dieter Jahn, Richard M\"unch
Plasmid segregation and accumulation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The segregation of plasmids in a bacterial population is investigated. To this end, a dynamical model is formulated in terms of a size-structured population using a hyperbolic partial differential equation incorporating non-local terms (the fragmentation equation). For a large class of parameter functions, this PDE can be rewritten as an infinite system of ordinary differential equations for the moments of its solution. We investigate the influence of different plasmid production modes, kinetic parameters, and plasmid segregation modes on the equilibrium plasmid distribution. In particular, at small plasmid numbers the distribution is strongly influenced by the production mode, while the kinetic parameters (cell growth rate and basic plasmid reproduction rate) influence the distribution mainly at large plasmid numbers. The plasmid transmission characteristics only gradually influence the distribution, but may become important in biologically relevant cases. We compare the theoretical findings with experimental results.
[ { "created": "Thu, 12 Jan 2017 18:35:23 GMT", "version": "v1" } ]
2017-01-13
[ [ "Müller", "Johannes", "" ], [ "Münch", "Karin", "" ], [ "Koopmann", "Bendix", "" ], [ "Stadler", "Eva", "" ], [ "Roselius", "Louisa", "" ], [ "Jahn", "Dieter", "" ], [ "Münch", "Richard", "" ] ]
The segregation of plasmids in a bacterial population is investigated. To this end, a dynamical model is formulated in terms of a size-structured population using a hyperbolic partial differential equation incorporating non-local terms (the fragmentation equation). For a large class of parameter functions, this PDE can be rewritten as an infinite system of ordinary differential equations for the moments of its solution. We investigate the influence of different plasmid production modes, kinetic parameters, and plasmid segregation modes on the equilibrium plasmid distribution. In particular, at small plasmid numbers the distribution is strongly influenced by the production mode, while the kinetic parameters (cell growth rate and basic plasmid reproduction rate) influence the distribution mainly at large plasmid numbers. The plasmid transmission characteristics only gradually influence the distribution, but may become important in biologically relevant cases. We compare the theoretical findings with experimental results.
1501.00019
Peter Waddell
Peter J. Waddell
Extended Distance-based Phylogenetic Analyses Applied to 3D Homo Fossil Skull Evolution
42 pages, 18 figures
null
null
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article shows how 3D geometric morphometric data can be analyzed using newly developed distance-based evolutionary tree inference methods, with extensions to planar graphs. Application of these methods to 3D representations of the skullcap (calvaria) of 13 diverse skulls in the genus Homo, ranging from Homo erectus (ergaster) at about 1.6 mya, all the way forward to modern humans, yields a remarkably clear phylogenetic tree. Various evolutionary hypotheses are tested. Results of these tests include rejection of the monophyly of Homo heidelbergensis, the Multi-Regional hypothesis, and the hypothesis that the unusual 12,000 year old (12kya) Iwo Eleru skull represents a modern human. Rather, by quantitative phylogenetic analyses the latter is seen to be an old (200-400kya) lineage that probably represents a novel African species, Homo iwoelerueensis. It diverged after the lineage leading to Neanderthals, and may have been driven to extinction in the last 10kya by modern humans, Homo sapiens, another African species of Homo that appeared about 100kya. Another enigmatic skull, Qafzeh 6 from the Middle East about 90kya, appears to be a hybrid of two thirds near, but not, anatomically modern human and one third of an archaic lineage diverging close to classic European Neanderthals. Overall, the tree clearly implies an accelerating rate of skullcap shape change, and by extension, change of the underlying brain, over the last 400kya in Africa. This acceleration may have extended right up to the origin of modern humans. Methods of distance-based evolutionary tree inference are refined and extended, with particular attention to diagnosing the model and achieving a better fit. This includes power transformations of the input data which favor root Procrustes distances.
[ { "created": "Tue, 30 Dec 2014 21:05:27 GMT", "version": "v1" } ]
2015-01-05
[ [ "Waddell", "Peter J.", "" ] ]
This article shows how 3D geometric morphometric data can be analyzed using newly developed distance-based evolutionary tree inference methods, with extensions to planar graphs. Application of these methods to 3D representations of the skullcap (calvaria) of 13 diverse skulls in the genus Homo, ranging from Homo erectus (ergaster) at about 1.6 mya, all the way forward to modern humans, yields a remarkably clear phylogenetic tree. Various evolutionary hypotheses are tested. Results of these tests include rejection of the monophyly of Homo heidelbergensis, the Multi-Regional hypothesis, and the hypothesis that the unusual 12,000 year old (12kya) Iwo Eleru skull represents a modern human. Rather, by quantitative phylogenetic analyses the latter is seen to be an old (200-400kya) lineage that probably represents a novel African species, Homo iwoelerueensis. It diverged after the lineage leading to Neanderthals, and may have been driven to extinction in the last 10kya by modern humans, Homo sapiens, another African species of Homo that appeared about 100kya. Another enigmatic skull, Qafzeh 6 from the Middle East about 90kya, appears to be a hybrid of two thirds near, but not, anatomically modern human and one third of an archaic lineage diverging close to classic European Neanderthals. Overall, the tree clearly implies an accelerating rate of skullcap shape change, and by extension, change of the underlying brain, over the last 400kya in Africa. This acceleration may have extended right up to the origin of modern humans. Methods of distance-based evolutionary tree inference are refined and extended, with particular attention to diagnosing the model and achieving a better fit. This includes power transformations of the input data which favor root Procrustes distances.
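The Procrustes distances underlying distance-based tree inference can be sketched for 2D landmark data, where complex arithmetic makes the optimal rotation explicit; a 3D analysis as in the paper would instead use the Kabsch algorithm, and the shapes below are toy examples:

```python
import cmath
import math

def procrustes_distance(shape_a, shape_b):
    """Partial Procrustes distance between two 2D landmark configurations
    given as lists of complex numbers: remove translation and scale, then
    rotate shape_b optimally onto shape_a."""
    def normalise(pts):
        centroid = sum(pts) / len(pts)
        centred = [p - centroid for p in pts]
        size = math.sqrt(sum(abs(p) ** 2 for p in centred))  # centroid size
        return [p / size for p in centred]
    a, b = normalise(shape_a), normalise(shape_b)
    s = sum(p * q.conjugate() for p, q in zip(a, b))  # complex inner product
    rot = cmath.exp(1j * cmath.phase(s))  # optimal rotation of b onto a
    return math.sqrt(sum(abs(p - rot * q) ** 2 for p, q in zip(a, b)))

square = [0j, 1 + 0j, 1 + 1j, 0 + 1j]
rotated = [p * cmath.exp(0.7j) + (3 + 2j) for p in square]  # same shape
d_same = procrustes_distance(square, rotated)
d_diff = procrustes_distance(square, [0j, 1 + 0j, 2 + 0j, 3 + 0j])
```

A rotated, translated copy of a shape is at distance zero, while a genuinely different configuration is not; a matrix of such pairwise distances is the input to the tree-inference methods described above.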
1405.3946
Hatef Sadeghi
Hatef Sadeghi, S. Bailey and Colin J. Lambert
Silicene-based DNA Nucleobase Sensing
null
Appl. Phys. Lett. 104, 103104 (2014)
10.1063/1.4868123
null
q-bio.QM cond-mat.mes-hall cond-mat.mtrl-sci physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a DNA sequencing scheme based on silicene nanopores. Using first principles theory, we compute the electrical properties of such pores in the absence and presence of nucleobases. Within a two-terminal geometry, we analyze the current-voltage relation in the presence of nucleobases with various orientations. We demonstrate that when nucleobases pass through a pore, even after sampling over many orientations, changes in the electrical properties of the ribbon can be used to discriminate between bases.
[ { "created": "Tue, 13 May 2014 11:21:49 GMT", "version": "v1" } ]
2014-05-16
[ [ "Sadeghi", "Hatef", "" ], [ "Bailey", "S.", "" ], [ "Lambert", "Colin J.", "" ] ]
We propose a DNA sequencing scheme based on silicene nanopores. Using first principles theory, we compute the electrical properties of such pores in the absence and presence of nucleobases. Within a two-terminal geometry, we analyze the current-voltage relation in the presence of nucleobases with various orientations. We demonstrate that when nucleobases pass through a pore, even after sampling over many orientations, changes in the electrical properties of the ribbon can be used to discriminate between bases.
1605.01155
Lincong Wang
Lincong Wang
The contributions of surface charge and geometry to protein-solvent interaction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To better understand protein-solvent interaction, we have analyzed a variety of physical and geometrical properties of the solvent-excluded surfaces (SESs) of a large set of soluble proteins with crystal structures. We discover that all of them have net negative surface charges and permanent electric dipoles. Moreover, both SES area and surface charge, as well as several physical and geometrical properties defined by them, change with protein size via well-fitted power laws. The relevance of these physical and geometrical properties to protein-solvent interaction is supported by strong correlations between them and known hydrophobicity scales, and by their large changes upon protein unfolding. The universal existence of negative surface charge and dipole, the characteristic surface geometry, and the power laws reveal fundamental but distinct roles of surface charge and SES in protein-solvent interaction, and make it possible to describe solvation and the hydrophobic effect using theories of anionic solutes in protic solvents. In particular, the great significance of surface charge for protein-solvent interaction suggests that a change of perception may be needed: from a solvation perspective, folding into a native state optimizes negative surface charge rather than minimizing the hydrophobic surface area.
[ { "created": "Wed, 4 May 2016 06:34:11 GMT", "version": "v1" } ]
2016-05-05
[ [ "Wang", "Lincong", "" ] ]
To better understand protein-solvent interaction, we have analyzed a variety of physical and geometrical properties of the solvent-excluded surfaces (SESs) of a large set of soluble proteins with crystal structures. We discover that all of them have net negative surface charges and permanent electric dipoles. Moreover, both SES area and surface charge, as well as several physical and geometrical properties defined by them, change with protein size via well-fitted power laws. The relevance of these physical and geometrical properties to protein-solvent interaction is supported by strong correlations between them and known hydrophobicity scales, and by their large changes upon protein unfolding. The universal existence of negative surface charge and dipole, the characteristic surface geometry, and the power laws reveal fundamental but distinct roles of surface charge and SES in protein-solvent interaction, and make it possible to describe solvation and the hydrophobic effect using theories of anionic solutes in protic solvents. In particular, the great significance of surface charge for protein-solvent interaction suggests that a change of perception may be needed: from a solvation perspective, folding into a native state optimizes negative surface charge rather than minimizing the hydrophobic surface area.
1012.0985
Andree-Aimee Toucas
Jean Geringer, Laurent Navarro, Bernard Forest
Influence of proteins from physiological solutions on the electrochemical behaviour of the Ti-6Al-4V alloy: reproducibility and time-frequency dependence. ---- Influence de la teneur en prot\'eines de solutions physiologiques sur le comportement \'electrochimique du Ti-6Al-4V : reproductibilit\'e et repr\'esentation temps-fr\'equence
null
Mat\'eriaux & Techniques 98, 1 (2010) 59-68
10.1051/mattech/2010020
JG-M\&T-98-1
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The electrochemical behaviour of biomedical metallic alloys, especially in the field of orthopaedic implants, still raises many questions. This study is dedicated to studying the Ti-6Al-4V alloy by electrochemical impedance spectroscopy (EIS) in various physiological media: Ringer solution, phosphate buffered solution (PBS), PBS solution with albumin, PBS solution with calf serum, and PBS solution with calf serum and an antioxidant (sodium azide). Moreover, deionised water was considered as the reference solution. The reproducibility of the tests was investigated. The time-frequency modulus graphs highlighted that deionised water is the most protective medium for the Ti-6Al-4V alloy. This biomedical alloy is the least protected in the solution constituted by PBS and albumin. The time-frequency graphs make it possible to point out graphic signatures of adsorption of organic and inorganic species (differences between the mean moduli in the studied solutions and the mean modulus in the reference solution). --- Le comportement \'electrochimique des alliages m\'etalliques biom\'edicaux, notamment dans le domaine des implants orthop\'ediques, pose encore de nombreuses questions. Ce travail propose d'\'etudier l'alliage de titane Ti-6Al-4V, par spectroscopie d'imp\'edance \'electrochimique, SIE, dans diff\'erents milieux physiologiques : solution de Ringer, solution \`a base d'un tampon phosphate (PBS), solution PBS avec de l'albumine, solution PBS avec du s\'erum bovin et une solution PBS avec du s\'erum bovin et un antioxydant (azoture de sodium). De plus, une solution d'eau ultra-pure servira de r\'ef\'erence. La reproductibilit\'e des tests a \'et\'e \'etudi\'ee. Les repr\'esentations temps-fr\'equence des modules ont mis en \'evidence que l'eau d\'esionis\'ee est la solution qui pr\'esente le caract\`ere le plus protecteur pour le Ti-6Al-4V. Cet alliage de titane est le moins prot\'eg\'e dans la solution de PBS contenant de l'albumine.
Cette repr\'esentation permet de mettre en \'evidence des signatures graphiques d'adsorption des esp\`eces inorganiques et organiques (diff\'erences entre les moyennes des modules dans les solutions \'etudi\'ees et la moyenne des modules dans la solution de r\'ef\'erence).
[ { "created": "Sun, 5 Dec 2010 10:33:52 GMT", "version": "v1" } ]
2020-07-17
[ [ "Geringer", "Jean", "" ], [ "Navarro", "Laurent", "" ], [ "Forest", "Bernard", "" ] ]
The electrochemical behaviour of biomedical metallic alloys, especially in the field of orthopaedic implants, raises many questions. This study investigates the Ti-6Al-4V alloy by electrochemical impedance spectroscopy (EIS) in various physiological media: Ringer solution, phosphate buffered solution (PBS), PBS with albumin, PBS with calf serum, and PBS with calf serum and an antioxidant (sodium azide). In addition, deionised water was used as the reference solution. The reproducibility of the tests was investigated. The time-frequency modulus graphs showed that deionised water is the most protective medium for the Ti-6Al-4V alloy. This biomedical alloy is least protected in the solution of PBS with albumin. The time-frequency representation reveals graphical signatures of adsorption of organic and inorganic species (differences between the mean moduli in the studied solutions and the mean modulus in the reference solution). --- Le comportement \'electrochimique des alliages m\'etalliques biom\'edicaux, notamment dans le domaine des implants orthop\'ediques, pose encore de nombreuses questions. Ce travail propose d'\'etudier l'alliage de titane Ti-6Al-4V, par spectroscopie d'imp\'edance \'electrochimique, SIE, dans diff\'erents milieux physiologiques : solution de Ringer, solution \`a base d'un tampon phosphate (PBS), solution PBS avec de l'albumine, solution PBS avec du s\'erum bovin et une solution PBS avec du s\'erum bovin et un antioxydant (azoture de sodium). De plus, une solution d'eau ultra-pure servira de r\'ef\'erence. La reproductibilit\'e des tests a \'et\'e \'etudi\'ee. Les repr\'esentations temps-fr\'equence des modules ont mis en \'evidence que l'eau d\'esionis\'ee est la solution qui pr\'esente le caract\`ere le plus protecteur pour le Ti-6Al-4V. Cet alliage de titane est le moins prot\'eg\'e dans la solution de PBS contenant de l'albumine. 
Cette repr\'esentation permet de mettre en \'evidence des signatures graphiques d'adsorption des esp\`eces inorganiques et organiques (diff\'erences entre les moyennes des modules dans les solutions \'etudi\'ees et la moyenne des modules dans la solution de r\'ef\'erence).
1903.11373
Dalit Engelhardt
Dalit Engelhardt
Dynamic Control of Stochastic Evolution: A Deep Reinforcement Learning Approach to Adaptively Targeting Emergent Drug Resistance
null
Journal of Machine Learning Research 21(203): 1-30, 2020
null
null
q-bio.PE cs.LG q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The challenge in controlling stochastic systems in which low-probability events can set the system on catastrophic trajectories is to develop a robust ability to respond to such events without significantly compromising the optimality of the baseline control policy. This paper presents CelluDose, a stochastic simulation-trained deep reinforcement learning adaptive feedback control prototype for automated precision drug dosing targeting stochastic and heterogeneous cell proliferation. Drug resistance can emerge from random and variable mutations in targeted cell populations; in the absence of an appropriate dosing policy, emergent resistant subpopulations can proliferate and lead to treatment failure. Dynamic feedback dosage control holds promise in combatting this phenomenon, but the application of traditional control approaches to such systems is fraught with challenges due to the complexity of cell dynamics, uncertainty in model parameters, and the need in medical applications for a robust controller that can be trusted to properly handle unexpected outcomes. Here, training on a sample biological scenario identified single-drug and combination therapy policies that exhibit a 100% success rate at suppressing cell proliferation and responding to diverse system perturbations while establishing low-dose no-event baselines. These policies were found to be highly robust to variations in a key model parameter subject to significant uncertainty and unpredictable dynamical changes.
[ { "created": "Wed, 27 Mar 2019 12:25:48 GMT", "version": "v1" }, { "created": "Thu, 15 Oct 2020 22:21:53 GMT", "version": "v2" } ]
2020-10-28
[ [ "Engelhardt", "Dalit", "" ] ]
The challenge in controlling stochastic systems in which low-probability events can set the system on catastrophic trajectories is to develop a robust ability to respond to such events without significantly compromising the optimality of the baseline control policy. This paper presents CelluDose, a stochastic simulation-trained deep reinforcement learning adaptive feedback control prototype for automated precision drug dosing targeting stochastic and heterogeneous cell proliferation. Drug resistance can emerge from random and variable mutations in targeted cell populations; in the absence of an appropriate dosing policy, emergent resistant subpopulations can proliferate and lead to treatment failure. Dynamic feedback dosage control holds promise in combatting this phenomenon, but the application of traditional control approaches to such systems is fraught with challenges due to the complexity of cell dynamics, uncertainty in model parameters, and the need in medical applications for a robust controller that can be trusted to properly handle unexpected outcomes. Here, training on a sample biological scenario identified single-drug and combination therapy policies that exhibit a 100% success rate at suppressing cell proliferation and responding to diverse system perturbations while establishing low-dose no-event baselines. These policies were found to be highly robust to variations in a key model parameter subject to significant uncertainty and unpredictable dynamical changes.
1211.0646
Alex Robson
Alex Robson, Kevin Burrage, Mark Leake
Inferring diffusion in single live cells at the single molecule level
combined ms (1-37 pages, 8 figures) and SI (38-55, 3 figures)
null
10.1098/rstb.2012.0029
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The movement of molecules inside living cells is a fundamental feature of biological processes. The ability to both observe and analyse the details of molecular diffusion in vivo at the single molecule and single cell level can add significant insight into understanding molecular architectures of diffusing molecules and the nanoscale environment in which the molecules diffuse. The tool of choice for monitoring dynamic molecular localization in live cells is fluorescence microscopy; in particular, combining total internal reflection fluorescence (TIRF) with fluorescent protein (FP) reporters offers exceptional imaging contrast for dynamic processes in the cell membrane under relatively physiological conditions compared to competing single molecule techniques. There exist several different complex modes of diffusion, and discriminating these from each other is challenging at the molecular level due to underlying stochastic behaviour. Analysis is traditionally performed using mean square displacements of tracked particles; however, this generally requires more data points than is typical for single FP tracks due to photophysical instability. Presented here is a novel approach allowing robust Bayesian ranking of diffusion processes (BARD) to discriminate multiple complex modes probabilistically. It is a computational approach which biologists can use to understand single molecule features in live cells.
[ { "created": "Sat, 3 Nov 2012 22:43:52 GMT", "version": "v1" } ]
2012-11-21
[ [ "Robson", "Alex", "" ], [ "Burrage", "Kevin", "" ], [ "Leake", "Mark", "" ] ]
The movement of molecules inside living cells is a fundamental feature of biological processes. The ability to both observe and analyse the details of molecular diffusion in vivo at the single molecule and single cell level can add significant insight into understanding molecular architectures of diffusing molecules and the nanoscale environment in which the molecules diffuse. The tool of choice for monitoring dynamic molecular localization in live cells is fluorescence microscopy; in particular, combining total internal reflection fluorescence (TIRF) with fluorescent protein (FP) reporters offers exceptional imaging contrast for dynamic processes in the cell membrane under relatively physiological conditions compared to competing single molecule techniques. There exist several different complex modes of diffusion, and discriminating these from each other is challenging at the molecular level due to underlying stochastic behaviour. Analysis is traditionally performed using mean square displacements of tracked particles; however, this generally requires more data points than is typical for single FP tracks due to photophysical instability. Presented here is a novel approach allowing robust Bayesian ranking of diffusion processes (BARD) to discriminate multiple complex modes probabilistically. It is a computational approach which biologists can use to understand single molecule features in live cells.
1904.03531
Sliman Bensmaia
Elizaveta V. Okorokova, James M. Goodman, Nicholas G. Hatsopoulos and Sliman J. Bensmaia
Decoding hand kinematics from population responses in sensorimotor cortex during grasping
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The hand, a complex effector comprising dozens of degrees of freedom of movement, endows us with the ability to flexibly, precisely, and effortlessly interact with objects. The neural signals associated with dexterous hand movements in primary motor cortex (M1) and somatosensory cortex (SC) have received comparatively less attention than have those that are associated with proximal limb control. To fill this gap, we trained three monkeys to grasp objects varying in size, shape and orientation while tracking their hand postures and recording single-unit activity from M1 and SC. We then decoded their hand kinematics across 30 joints from population activity in these areas. We found that we could accurately decode kinematics with a small number of neural signals and that performance was higher for decoding joint angles than joint angular velocities, in contrast to what has been found with proximal limb decoders. We conclude that cortical signals can be used for dexterous hand control in brain machine interface applications and that postural representations in SC may be exploited via intracortical stimulation to close the sensorimotor loop.
[ { "created": "Sat, 6 Apr 2019 21:05:30 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2019 03:58:50 GMT", "version": "v2" } ]
2019-06-21
[ [ "Okorokova", "Elizaveta V.", "" ], [ "Goodman", "James M.", "" ], [ "Hatsopoulos", "Nicholas G.", "" ], [ "Bensmaia", "Sliman J.", "" ] ]
The hand, a complex effector comprising dozens of degrees of freedom of movement, endows us with the ability to flexibly, precisely, and effortlessly interact with objects. The neural signals associated with dexterous hand movements in primary motor cortex (M1) and somatosensory cortex (SC) have received comparatively less attention than have those that are associated with proximal limb control. To fill this gap, we trained three monkeys to grasp objects varying in size, shape and orientation while tracking their hand postures and recording single-unit activity from M1 and SC. We then decoded their hand kinematics across 30 joints from population activity in these areas. We found that we could accurately decode kinematics with a small number of neural signals and that performance was higher for decoding joint angles than joint angular velocities, in contrast to what has been found with proximal limb decoders. We conclude that cortical signals can be used for dexterous hand control in brain machine interface applications and that postural representations in SC may be exploited via intracortical stimulation to close the sensorimotor loop.
1209.3802
Adel Dayarian
Adel Dayarian and Anirvan M. Sengupta
Titration and hysteresis in epigenetic chromatin silencing
null
2013 Phys. Biol. 10 036005
10.1088/1478-3975/10/3/036005
NSF-KITP-12-172
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epigenetic mechanisms of silencing via heritable chromatin modifications play a major role in gene regulation and cell fate specification. We consider a model of epigenetic chromatin silencing in budding yeast, study its bifurcation diagram, and characterize the bistable and monostable regimes. The main focus of this paper is to examine how perturbations altering the activity of histone modifying enzymes affect the epigenetic states. We analyze the implications of constraining the total number of silencing proteins, given by the sum of the proteins bound to nucleosomes and those free in the ambient medium, to be constant. This constraint couples different regions of chromatin through the shared reservoir of ambient silencing proteins. We show that the response of the system to perturbations depends dramatically on the titration effect caused by the above constraint. In particular, for a certain range of overall abundance of silencing proteins, the hysteresis loop changes qualitatively, with a discontinuous jump replaced by a continuous merger of different states. In addition, we find a nonmonotonic dependence of gene expression on the rate of histone deacetylation activity of Sir2. We discuss how these qualitative predictions of our model could be compared with experimental studies of the yeast system under anti-silencing drugs.
[ { "created": "Mon, 17 Sep 2012 21:38:47 GMT", "version": "v1" }, { "created": "Mon, 24 Sep 2012 20:25:40 GMT", "version": "v2" }, { "created": "Fri, 15 Feb 2013 00:32:25 GMT", "version": "v3" } ]
2013-04-24
[ [ "Dayarian", "Adel", "" ], [ "Sengupta", "Anirvan M.", "" ] ]
Epigenetic mechanisms of silencing via heritable chromatin modifications play a major role in gene regulation and cell fate specification. We consider a model of epigenetic chromatin silencing in budding yeast, study its bifurcation diagram, and characterize the bistable and monostable regimes. The main focus of this paper is to examine how perturbations altering the activity of histone modifying enzymes affect the epigenetic states. We analyze the implications of constraining the total number of silencing proteins, given by the sum of the proteins bound to nucleosomes and those free in the ambient medium, to be constant. This constraint couples different regions of chromatin through the shared reservoir of ambient silencing proteins. We show that the response of the system to perturbations depends dramatically on the titration effect caused by the above constraint. In particular, for a certain range of overall abundance of silencing proteins, the hysteresis loop changes qualitatively, with a discontinuous jump replaced by a continuous merger of different states. In addition, we find a nonmonotonic dependence of gene expression on the rate of histone deacetylation activity of Sir2. We discuss how these qualitative predictions of our model could be compared with experimental studies of the yeast system under anti-silencing drugs.
1607.01010
Evelyn Tang
Evelyn Tang, Chad Giusti, Graham Baum, Shi Gu, Eli Pollock, Ari E. Kahn, David Roalf, Tyler M. Moore, Kosha Ruparel, Ruben C. Gur, Raquel E. Gur, Theodore D. Satterthwaite and Danielle S. Bassett
Developmental increases in white matter network controllability support a growing diversity of brain dynamics
In press at Nature Communications
Nature Communications, 1252 (2017)
10.1038/s41467-017-01254-4
null
q-bio.NC cond-mat.dis-nn nlin.CD q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. We use a network representation of diffusion imaging data from 882 youth ages 8 to 22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in children versus older youth. We demonstrate that our results cannot be simply explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture.
[ { "created": "Mon, 4 Jul 2016 20:00:00 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2016 04:03:51 GMT", "version": "v2" }, { "created": "Mon, 2 Oct 2017 20:34:39 GMT", "version": "v3" } ]
2017-11-02
[ [ "Tang", "Evelyn", "" ], [ "Giusti", "Chad", "" ], [ "Baum", "Graham", "" ], [ "Gu", "Shi", "" ], [ "Pollock", "Eli", "" ], [ "Kahn", "Ari E.", "" ], [ "Roalf", "David", "" ], [ "Moore", "Tyler M.", "" ], [ "Ruparel", "Kosha", "" ], [ "Gur", "Ruben C.", "" ], [ "Gur", "Raquel E.", "" ], [ "Satterthwaite", "Theodore D.", "" ], [ "Bassett", "Danielle S.", "" ] ]
As the human brain develops, it increasingly supports coordinated control of neural activity. The mechanism by which white matter evolves to support this coordination is not well understood. We use a network representation of diffusion imaging data from 882 youth ages 8 to 22 to show that white matter connectivity becomes increasingly optimized for a diverse range of predicted dynamics in development. Notably, stable controllers in subcortical areas are negatively related to cognitive performance. Investigating structural mechanisms supporting these changes, we simulate network evolution with a set of growth rules. We find that all brain networks are structured in a manner highly optimized for network control, with distinct control mechanisms predicted in children versus older youth. We demonstrate that our results cannot be simply explained by changes in network modularity. This work reveals a possible mechanism of human brain development that preferentially optimizes dynamic network control over static network architecture.
2312.03338
Daniel Str\"ombom
Daniel Str\"ombom, Autumn Sands, Jason M. Graham, Amanda Crocker, Cameron Cloud, Grace Tulevech, Kelly Ward
Modeling human activity-related spread of the spotted lanternfly (Lycorma delicatula) in the US
14 pages, 5 figures, 2 tables
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spotted lanternfly (Lycorma delicatula) has recently spread from its native range to several other countries and forecasts predict that it may become a global invasive pest. In particular, since its confirmed presence in the United States in 2014 it has established itself as a major invasive pest in the Mid-Atlantic region where it is damaging both naturally occurring and commercially important farmed plants. Quarantine zones have been introduced to contain the infestation, but the spread to new areas continues. At present the pathways and drivers of spread are not well understood. In particular, several human activity related factors have been proposed to contribute to the spread; however, which features of the current spread can be attributed to these factors remains unclear. Here we collect county level data on infestation status and four human activity related factors and use statistical methods to determine whether there is evidence for an association between the factors and infestation. Then we construct a mechanistic network model based on the factors found to be associated with infestation and use it to simulate local spread. We find that the model reproduces key features of the spread from 2014 to 2021. In particular, the growth of the main infestation region and the opening of spread corridors in the westward and southwestern directions is consistent with data, and the model forecasts the correct infestation status at the county level in 2021 with $81\%$ accuracy. We then use the model to forecast the spread up to 2025 in a larger region. Given that this model is based on a few human activity related factors that can be targeted, it may prove useful in informing management and further modeling efforts related to the current spotted lanternfly infestation in the US and potentially for current and future invasions elsewhere globally.
[ { "created": "Wed, 6 Dec 2023 08:20:59 GMT", "version": "v1" } ]
2023-12-07
[ [ "Strömbom", "Daniel", "" ], [ "Sands", "Autumn", "" ], [ "Graham", "Jason M.", "" ], [ "Crocker", "Amanda", "" ], [ "Cloud", "Cameron", "" ], [ "Tulevech", "Grace", "" ], [ "Ward", "Kelly", "" ] ]
The spotted lanternfly (Lycorma delicatula) has recently spread from its native range to several other countries and forecasts predict that it may become a global invasive pest. In particular, since its confirmed presence in the United States in 2014 it has established itself as a major invasive pest in the Mid-Atlantic region where it is damaging both naturally occurring and commercially important farmed plants. Quarantine zones have been introduced to contain the infestation, but the spread to new areas continues. At present the pathways and drivers of spread are not well understood. In particular, several human activity related factors have been proposed to contribute to the spread; however, which features of the current spread can be attributed to these factors remains unclear. Here we collect county level data on infestation status and four human activity related factors and use statistical methods to determine whether there is evidence for an association between the factors and infestation. Then we construct a mechanistic network model based on the factors found to be associated with infestation and use it to simulate local spread. We find that the model reproduces key features of the spread from 2014 to 2021. In particular, the growth of the main infestation region and the opening of spread corridors in the westward and southwestern directions is consistent with data, and the model forecasts the correct infestation status at the county level in 2021 with $81\%$ accuracy. We then use the model to forecast the spread up to 2025 in a larger region. Given that this model is based on a few human activity related factors that can be targeted, it may prove useful in informing management and further modeling efforts related to the current spotted lanternfly infestation in the US and potentially for current and future invasions elsewhere globally.
1602.07100
Koen Haak
Koen V. Haak, Andre F. Marquand, Christian F. Beckmann
Connectopic mapping with resting-state fMRI
null
null
10.1016/j.neuroimage.2017.06.075
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain regions are often topographically connected: nearby locations within one brain area connect with nearby locations in another area. Mapping these connection topographies, or 'connectopies' in short, is crucial for understanding how information is processed in the brain. Here, we propose principled, fully data-driven methods for mapping connectopies using functional magnetic resonance imaging (fMRI) data acquired at rest by combining spectral embedding of voxel-wise connectivity 'fingerprints' with a novel approach to spatial statistical inference. We applied the approach in human primary motor and visual cortex, and show that it can trace biologically plausible, overlapping connectopies in individual subjects that follow these regions' somatotopic and retinotopic maps. As a generic mechanism to perform inference over connectopies, the new spatial statistics approach enables rigorous statistical testing of hypotheses regarding the fine-grained spatial profile of functional connectivity and whether that profile is different between subjects or between experimental conditions. The combined framework offers a fundamental alternative to existing approaches to investigating functional connectivity in the brain, from voxel- or seed-pair wise characterizations of functional association, towards a full, multivariate characterization of spatial topography.
[ { "created": "Tue, 23 Feb 2016 09:35:34 GMT", "version": "v1" }, { "created": "Mon, 17 Jul 2017 09:39:28 GMT", "version": "v2" } ]
2017-07-18
[ [ "Haak", "Koen V.", "" ], [ "Marquand", "Andre F.", "" ], [ "Beckmann", "Christian F.", "" ] ]
Brain regions are often topographically connected: nearby locations within one brain area connect with nearby locations in another area. Mapping these connection topographies, or 'connectopies' in short, is crucial for understanding how information is processed in the brain. Here, we propose principled, fully data-driven methods for mapping connectopies using functional magnetic resonance imaging (fMRI) data acquired at rest by combining spectral embedding of voxel-wise connectivity 'fingerprints' with a novel approach to spatial statistical inference. We applied the approach in human primary motor and visual cortex, and show that it can trace biologically plausible, overlapping connectopies in individual subjects that follow these regions' somatotopic and retinotopic maps. As a generic mechanism to perform inference over connectopies, the new spatial statistics approach enables rigorous statistical testing of hypotheses regarding the fine-grained spatial profile of functional connectivity and whether that profile is different between subjects or between experimental conditions. The combined framework offers a fundamental alternative to existing approaches to investigating functional connectivity in the brain, from voxel- or seed-pair wise characterizations of functional association, towards a full, multivariate characterization of spatial topography.
1401.3254
Marcelo Sobottka
Marcelo Sobottka and Andrew G. Hart
A model capturing novel strand symmetries in bacterial DNA
This is a pre-copy-editing, author-produced preprint of an article accepted for publication in Biochemical and Biophysical Research Communications. The definitive publisher-authenticated version is available online at: http://dx.doi.org/10.1016/j.bbrc.2011.06.072 or http://www.sciencedirect.com/science/article/pii/S0006291X1101045X. 9 pages, 1 figure
Biochemical and Biophysical Research Communications, Volume 410, Issue 4, 15 July 2011, Pages 823-828, ISSN 0006-291X
10.1016/j.bbrc.2011.06.072
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chargaff's second parity rule for short oligonucleotides states that the frequency of any short nucleotide sequence on a strand is approximately equal to the frequency of its reverse complement on the same strand. Recent studies have shown that, with the exception of organellar DNA, this parity rule generally holds for double stranded DNA genomes and fails to hold for single-stranded genomes. While Chargaff's first parity rule is fully explained by the Watson-Crick pairing in the DNA double helix, a definitive explanation for the second parity rule has not yet been determined. In this work, we propose a model based on a hidden Markov process for approximating the distributional structure of primitive DNA sequences. Then, we use the model to provide another possible theoretical explanation for Chargaff's second parity rule, and to predict novel distributional aspects of bacterial DNA sequences.
[ { "created": "Tue, 14 Jan 2014 16:57:24 GMT", "version": "v1" } ]
2014-01-15
[ [ "Sobottka", "Marcelo", "" ], [ "Hart", "Andrew G.", "" ] ]
Chargaff's second parity rule for short oligonucleotides states that the frequency of any short nucleotide sequence on a strand is approximately equal to the frequency of its reverse complement on the same strand. Recent studies have shown that, with the exception of organellar DNA, this parity rule generally holds for double stranded DNA genomes and fails to hold for single-stranded genomes. While Chargaff's first parity rule is fully explained by the Watson-Crick pairing in the DNA double helix, a definitive explanation for the second parity rule has not yet been determined. In this work, we propose a model based on a hidden Markov process for approximating the distributional structure of primitive DNA sequences. Then, we use the model to provide another possible theoretical explanation for Chargaff's second parity rule, and to predict novel distributional aspects of bacterial DNA sequences.
2108.00813
Reza Sameni
Jorge Oliveira, Francesco Renna, Paulo Dias Costa, Marcelo Nogueira, Cristina Oliveira, Carlos Ferreira, Alipio Jorge, Sandra Mattos, Thamine Hatem, Thiago Tavares, Andoni Elola, Ali Bahrami Rad, Reza Sameni, Gari D Clifford, Miguel T. Coimbra
The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification
12 pages, 6 tables, 8 figures, in IEEE Journal of Biomedical and Health Informatics
null
10.1109/JBHI.2021.3137048
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets, where a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is detected most intensely. Such a detailed description for a relatively large number of heart sounds may pave the way for new machine learning algorithms with a real-world application for the detection and analysis of murmur waves for diagnostic purposes.
[ { "created": "Mon, 2 Aug 2021 12:30:40 GMT", "version": "v1" }, { "created": "Fri, 24 Dec 2021 07:53:12 GMT", "version": "v2" } ]
2021-12-28
[ [ "Oliveira", "Jorge", "" ], [ "Renna", "Francesco", "" ], [ "Costa", "Paulo Dias", "" ], [ "Nogueira", "Marcelo", "" ], [ "Oliveira", "Cristina", "" ], [ "Ferreira", "Carlos", "" ], [ "Jorge", "Alipio", "" ], [ "Mattos", "Sandra", "" ], [ "Hatem", "Thamine", "" ], [ "Tavares", "Thiago", "" ], [ "Elola", "Andoni", "" ], [ "Rad", "Ali Bahrami", "" ], [ "Sameni", "Reza", "" ], [ "Clifford", "Gari D", "" ], [ "Coimbra", "Miguel T.", "" ] ]
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets, where a more detailed description of such abnormal waves (e.g., cardiac murmurs) exists. To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is detected most intensely. Such a detailed description for a relatively large number of heart sounds may pave the way for new machine learning algorithms with a real-world application for the detection and analysis of murmur waves for diagnostic purposes.
q-bio/0309026
Matthew Wiener
Ethan D. Gershon, Matthew C. Wiener, Peter E. Latham, and Barry J. Richmond
Coding Strategies in Monkey V1 and Inferior Temporal Cortices
25 pages, 8 figures. Originally submitted to the neuro-sys archive which was never publicly announced (was 9804001)
J. Neurophysiol. 79: 1135-1144, 1998
null
null
q-bio.NC
null
We would like to know whether the statistics of neuronal responses vary across cortical areas. We examined stimulus-elicited spike count response distributions in V1 and IT cortices of awake monkeys. In both areas the distribution of spike counts for each stimulus was well-described by a Gaussian, with the log of the variance in the spike count linearly related to the log of the mean spike count. Two significant differences in response characteristics were found: both the range of spike counts and the slope of the log(variance) vs. log(mean) regression were larger in V1 than in IT. However, neurons in the two areas transmitted approximately the same amount of information about the stimuli, and had about the same channel capacity (the maximum possible transmitted information given noise in the responses). These results suggest that neurons in V1 use more variable signals over a larger dynamic range than neurons in IT, which use less variable signals over a smaller dynamic range. The two coding strategies are approximately as effective in transmitting information.
[ { "created": "Thu, 9 Apr 1998 21:25:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Gershon", "Ethan D.", "" ], [ "Wiener", "Matthew C.", "" ], [ "Latham", "Peter E.", "" ], [ "Richmond", "Barry J.", "" ] ]
We would like to know whether the statistics of neuronal responses vary across cortical areas. We examined stimulus-elicited spike count response distributions in V1 and IT cortices of awake monkeys. In both areas the distribution of spike counts for each stimulus was well-described by a Gaussian, with the log of the variance in the spike count linearly related to the log of the mean spike count. Two significant differences in response characteristics were found: both the range of spike counts and the slope of the log(variance) vs. log(mean) regression were larger in V1 than in IT. However, neurons in the two areas transmitted approximately the same amount of information about the stimuli, and had about the same channel capacity (the maximum possible transmitted information given noise in the responses). These results suggest that neurons in V1 use more variable signals over a larger dynamic range than neurons in IT, which use less variable signals over a smaller dynamic range. The two coding strategies are approximately as effective in transmitting information.
2308.16772
Srdjan Ostojic
Srdjan Ostojic and Stefano Fusi
The computational role of structure in neural activity and connectivity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One major challenge of neuroscience is finding interesting structures in seemingly disorganized neural activity. Often these structures have computational implications that help to understand the functional role of a particular brain area. Here we outline a unified approach to characterize these structures by inspecting the representational geometry and the modularity properties of the recorded activity, and show that this approach can also reveal structures in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent works on model networks performing three classes of computations.
[ { "created": "Thu, 31 Aug 2023 14:49:49 GMT", "version": "v1" } ]
2023-09-01
[ [ "Ostojic", "Srdjan", "" ], [ "Fusi", "Stefano", "" ] ]
One major challenge of neuroscience is finding interesting structures in seemingly disorganized neural activity. Often these structures have computational implications that help to understand the functional role of a particular brain area. Here we outline a unified approach to characterize these structures by inspecting the representational geometry and the modularity properties of the recorded activity, and show that this approach can also reveal structures in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent works on model networks performing three classes of computations.
1508.05159
Ivo Siekmann
Ivo Siekmann and Horst Malchow
Competition of residents and invaders in a variable environment: Response to enemies and dangerous noise
24 pages, 18 figures, submitted to Mathematical Modelling of Natural Phenomena
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The possible control of competitive invasion by infection of the invader and multiplicative noise is studied. The basic model is the Lotka-Volterra competition system with emergent carrying capacities. Several stationary solutions of the non-infected and infected system are identified, as well as parameter ranges of bistability. The latter are used for the numerical study of invasion phenomena. The diffusivities and the infection, but in particular the white and colored multiplicative noise, are the control parameters. It is shown that important drivers of the invasive dynamics include not only competition, possible infection and mobilities, but also the noise, especially its color, and the functional response of populations to the emergence of noise.
[ { "created": "Fri, 21 Aug 2015 01:20:34 GMT", "version": "v1" } ]
2015-08-24
[ [ "Siekmann", "Ivo", "" ], [ "Malchow", "Horst", "" ] ]
The possible control of competitive invasion by infection of the invader and multiplicative noise is studied. The basic model is the Lotka-Volterra competition system with emergent carrying capacities. Several stationary solutions of the non-infected and infected system are identified, as well as parameter ranges of bistability. The latter are used for the numerical study of invasion phenomena. The diffusivities and the infection, but in particular the white and colored multiplicative noise, are the control parameters. It is shown that important drivers of the invasive dynamics include not only competition, possible infection and mobilities, but also the noise, especially its color, and the functional response of populations to the emergence of noise.
1503.01909
Petter Holme
Petter Holme
Shadows of the SIS immortality transition in small networks
Bug fixes from the first version
Phys. Rev. E 92, (2015) 012804
10.1103/PhysRevE.92.012804
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the research on the behavior of the SIS model on networks has concerned the infinite size limit; in particular the phase transition between a state where outbreaks can reach a finite fraction of the population, and a state where only a finite number would be infected. For finite networks, there is also a dynamic transition---the immortality transition---when the per-contact transmission probability $\lambda$ reaches one. If $\lambda < 1$, the probability that an outbreak will survive by an observation time $t$ tends to zero as $t \rightarrow \infty$; if $\lambda = 1$, this probability is one. We show that treating $\lambda = 1$ as a critical point predicts the $\lambda$-dependence of the survival probability also for more moderate $\lambda$-values. The exponent, however, depends on the underlying network. This fact could, by measuring how a vertex's deletion changes the exponent, be used to evaluate the role of a vertex in the outbreak. Our work also confirms an extremely clear separation between the early die-off (from the outbreak failing to take hold in the population) and the later extinctions (corresponding to rare stochastic events of several consecutive transmission events failing to occur).
[ { "created": "Fri, 6 Mar 2015 11:00:08 GMT", "version": "v1" }, { "created": "Wed, 13 May 2015 06:04:23 GMT", "version": "v2" } ]
2015-07-13
[ [ "Holme", "Petter", "" ] ]
Much of the research on the behavior of the SIS model on networks has concerned the infinite size limit; in particular the phase transition between a state where outbreaks can reach a finite fraction of the population, and a state where only a finite number would be infected. For finite networks, there is also a dynamic transition---the immortality transition---when the per-contact transmission probability $\lambda$ reaches one. If $\lambda < 1$, the probability that an outbreak will survive by an observation time $t$ tends to zero as $t \rightarrow \infty$; if $\lambda = 1$, this probability is one. We show that treating $\lambda = 1$ as a critical point predicts the $\lambda$-dependence of the survival probability also for more moderate $\lambda$-values. The exponent, however, depends on the underlying network. This fact could, by measuring how a vertex's deletion changes the exponent, be used to evaluate the role of a vertex in the outbreak. Our work also confirms an extremely clear separation between the early die-off (from the outbreak failing to take hold in the population) and the later extinctions (corresponding to rare stochastic events of several consecutive transmission events failing to occur).
2306.06298
Tandy Warnow
Tandy Warnow, Steven N. Evans, and Luay Nakhleh
Progress on Constructing Phylogenetic Networks for Languages
16 pages, 2 figures
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
In 2006, Warnow, Evans, Ringe, and Nakhleh proposed a stochastic model (hereafter, the WERN 2006 model) of multi-state linguistic character evolution that allowed for homoplasy and borrowing. They proved that if there is no borrowing between languages and homoplastic states are known in advance, then the phylogenetic tree of a set of languages is statistically identifiable under this model, and they presented statistically consistent methods for estimating these phylogenetic trees. However, they left open the question of whether a phylogenetic network -- which would explicitly model borrowing between languages that are in contact -- can be estimated under the model of character evolution. Here, we establish that under some mild additional constraints on the WERN 2006 model, the phylogenetic network topology is statistically identifiable, and we present algorithms to infer the phylogenetic network. We discuss the ramifications for linguistic phylogenetic network estimation in practice, and suggest directions for future research.
[ { "created": "Fri, 9 Jun 2023 23:27:21 GMT", "version": "v1" }, { "created": "Mon, 9 Oct 2023 19:06:56 GMT", "version": "v2" } ]
2023-10-11
[ [ "Warnow", "Tandy", "" ], [ "Evans", "Steven N.", "" ], [ "Nakhleh", "Luay", "" ] ]
In 2006, Warnow, Evans, Ringe, and Nakhleh proposed a stochastic model (hereafter, the WERN 2006 model) of multi-state linguistic character evolution that allowed for homoplasy and borrowing. They proved that if there is no borrowing between languages and homoplastic states are known in advance, then the phylogenetic tree of a set of languages is statistically identifiable under this model, and they presented statistically consistent methods for estimating these phylogenetic trees. However, they left open the question of whether a phylogenetic network -- which would explicitly model borrowing between languages that are in contact -- can be estimated under the model of character evolution. Here, we establish that under some mild additional constraints on the WERN 2006 model, the phylogenetic network topology is statistically identifiable, and we present algorithms to infer the phylogenetic network. We discuss the ramifications for linguistic phylogenetic network estimation in practice, and suggest directions for future research.
2008.07797
Mike Steel Prof.
Andrew Francis, Daniel H. Huson and Mike Steel
Normalising phylogenetic networks
18 pages, 5 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Rooted phylogenetic networks provide a way to describe species' relationships when evolution departs from the simple model of a tree. However, networks inferred from genomic data can be highly tangled, making it difficult to discern the main reticulation signals present. In this paper, we describe a natural way to transform any rooted phylogenetic network into a simpler canonical network, which has desirable mathematical and computational properties, and is based only on the 'visible' nodes in the original network. The method has been implemented and we demonstrate its application to some examples.
[ { "created": "Tue, 18 Aug 2020 08:24:03 GMT", "version": "v1" }, { "created": "Fri, 28 May 2021 03:50:31 GMT", "version": "v2" } ]
2021-05-31
[ [ "Francis", "Andrew", "" ], [ "Huson", "Daniel H.", "" ], [ "Steel", "Mike", "" ] ]
Rooted phylogenetic networks provide a way to describe species' relationships when evolution departs from the simple model of a tree. However, networks inferred from genomic data can be highly tangled, making it difficult to discern the main reticulation signals present. In this paper, we describe a natural way to transform any rooted phylogenetic network into a simpler canonical network, which has desirable mathematical and computational properties, and is based only on the 'visible' nodes in the original network. The method has been implemented and we demonstrate its application to some examples.
2111.04167
Jeremie Unterberger M
J. Unterberger
Exact computation of growth-rate fluctuations in random environment
17 pages
null
null
null
q-bio.PE cond-mat.stat-mech math.PR
http://creativecommons.org/licenses/by-nc-nd/4.0/
We consider a general class of Markovian models describing the growth in a randomly fluctuating environment of a clonal biological population having several phenotypes related by stochastic switching. Phenotypes differ e.g. by the level of gene expression for a population of bacteria. The time-averaged growth rate of the population, $\Lambda$, is self-averaging in the limit of infinite times; it may be understood as the fitness of the population in a context of Darwinian evolution. The observation time $T$ being however typically finite, the growth rate fluctuates. For $T$ finite but large, we obtain the variance of the time-averaged growth rate as the maximum of a functional based on the stationary probability distribution for the phenotypes. This formula is general. In the case of two states, the stationary probability was computed by Hufton, Lin and Galla \cite{HufLin2}, allowing for an explicit expression which can be checked numerically.
[ { "created": "Sun, 7 Nov 2021 19:58:43 GMT", "version": "v1" }, { "created": "Sun, 23 Jan 2022 20:35:26 GMT", "version": "v2" } ]
2022-01-25
[ [ "Unterberger", "J.", "" ] ]
We consider a general class of Markovian models describing the growth in a randomly fluctuating environment of a clonal biological population having several phenotypes related by stochastic switching. Phenotypes differ e.g. by the level of gene expression for a population of bacteria. The time-averaged growth rate of the population, $\Lambda$, is self-averaging in the limit of infinite times; it may be understood as the fitness of the population in a context of Darwinian evolution. The observation time $T$ being however typically finite, the growth rate fluctuates. For $T$ finite but large, we obtain the variance of the time-averaged growth rate as the maximum of a functional based on the stationary probability distribution for the phenotypes. This formula is general. In the case of two states, the stationary probability was computed by Hufton, Lin and Galla \cite{HufLin2}, allowing for an explicit expression which can be checked numerically.
2009.13775
Maryam Ghoojaei
Maryam Ghoojaei, Reza Shirkoohi, Mojtaba Saffari, Amirnader Emamirazavi, Mehrdad Hashemi
Foot-print of Claudin and Occludin Transcriptome in Colorectal Cancer
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background and Purpose: Colorectal cancer, as a leading cause of mortality worldwide, can be regarded as a relatively common and fatal disease with increasing incidence over recent years. Colorectal cancer is characterized by uncontrolled growth of abnormal cells occurring in different parts of the colon. About 90% of deaths associated with cancers occur due to metastasis, which overcomes the body's cellular connections, including tight junctions. Claudin and Occludin are integral membrane proteins found in tight junctions. The aim of the present study was to investigate the expression level of Claudin and Occludin in human colorectal cancer. Method: In this study, 38 colorectal cancer patients who were referred to the Cancer Institute of Imam Khomeini Hospital in Tehran, Iran were studied after obtaining informed consent. First, quantitative extraction of RNA was performed, then the expression levels of the Claudin and Occludin genes were examined by reverse transcription, PCR, and real-time PCR. Findings: The expression levels of both the Claudin and Occludin genes in cases with higher stage and grade of disease, and in the state of metastasis, were higher than those of the control samples. Conclusion: The increased expression level of the mentioned genes can be considered an influential factor in turning normal healthy tissues into cancerous cells.
[ { "created": "Tue, 29 Sep 2020 04:20:52 GMT", "version": "v1" } ]
2020-09-30
[ [ "Ghoojaei", "Maryam", "" ], [ "Shirkoohi", "Reza", "" ], [ "Saffari", "Mojtaba", "" ], [ "Emamirazavi", "Amirnader", "" ], [ "Hashemi", "Mehrdad", "" ] ]
Background and Purpose: Colorectal cancer, as a leading cause of mortality worldwide, can be regarded as a relatively common and fatal disease with increasing incidence over recent years. Colorectal cancer is characterized by uncontrolled growth of abnormal cells occurring in different parts of the colon. About 90% of deaths associated with cancers occur due to metastasis, which overcomes the body's cellular connections, including tight junctions. Claudin and Occludin are integral membrane proteins found in tight junctions. The aim of the present study was to investigate the expression level of Claudin and Occludin in human colorectal cancer. Method: In this study, 38 colorectal cancer patients who were referred to the Cancer Institute of Imam Khomeini Hospital in Tehran, Iran were studied after obtaining informed consent. First, quantitative extraction of RNA was performed, then the expression levels of the Claudin and Occludin genes were examined by reverse transcription, PCR, and real-time PCR. Findings: The expression levels of both the Claudin and Occludin genes in cases with higher stage and grade of disease, and in the state of metastasis, were higher than those of the control samples. Conclusion: The increased expression level of the mentioned genes can be considered an influential factor in turning normal healthy tissues into cancerous cells.
1501.04015
Thomas R. Sokolowski
Thomas R. Sokolowski and Ga\v{s}per Tka\v{c}ik
Optimizing information flow in small genetic networks. IV. Spatial coupling
17 pages, 8 figures (incl. 3 supporting figures)
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We typically think of cells as responding to external signals independently by regulating their gene expression levels, yet they often locally exchange information and coordinate. Can such spatial coupling be of benefit for conveying signals subject to gene regulatory noise? Here we extend our information-theoretic framework for gene regulation to spatially extended systems. As an example, we consider a lattice of nuclei responding to a concentration field of a transcriptional regulator (the "input") by expressing a single diffusible target gene. When input concentrations are low, diffusive coupling markedly improves information transmission; optimal gene activation functions also systematically change. A qualitatively new regulatory strategy emerges where individual cells respond to the input in a nearly step-like fashion that is subsequently averaged out by strong diffusion. While motivated by early patterning events in the Drosophila embryo, our framework is generically applicable to spatially coupled stochastic gene expression models.
[ { "created": "Fri, 16 Jan 2015 15:32:02 GMT", "version": "v1" } ]
2015-01-19
[ [ "Sokolowski", "Thomas R.", "" ], [ "Tkačik", "Gašper", "" ] ]
We typically think of cells as responding to external signals independently by regulating their gene expression levels, yet they often locally exchange information and coordinate. Can such spatial coupling be of benefit for conveying signals subject to gene regulatory noise? Here we extend our information-theoretic framework for gene regulation to spatially extended systems. As an example, we consider a lattice of nuclei responding to a concentration field of a transcriptional regulator (the "input") by expressing a single diffusible target gene. When input concentrations are low, diffusive coupling markedly improves information transmission; optimal gene activation functions also systematically change. A qualitatively new regulatory strategy emerges where individual cells respond to the input in a nearly step-like fashion that is subsequently averaged out by strong diffusion. While motivated by early patterning events in the Drosophila embryo, our framework is generically applicable to spatially coupled stochastic gene expression models.
2305.16241
Alessandro Grecucci Prof
Bianca Monachesi, Alessandro Grecucci, Parisa Ahmadi Ghomroudi, Irene Messina
Understanding the neural architecture of emotion regulation by comparing two different strategies: A meta-analytic approach
32 pages, 3 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
In the emotion regulation literature, the large number of neuroimaging studies on cognitive reappraisal has created the impression that the same top-down, control-related neural mechanisms characterize all emotion regulation strategies. However, top-down processes may coexist with more bottom-up and emotion-focused processes that partially bypass the recruitment of executive functions. A case in point is acceptance-based strategies. To better understand neural commonalities and differences behind different emotion regulation strategies, in the present study we applied a meta-analytic method to fMRI studies of task-related activity of reappraisal and acceptance. Results showed increased activity in the left inferior frontal gyrus and insula for both strategies, decreased activity in the basal ganglia for reappraisal, and decreased activity in limbic regions for acceptance. These findings are discussed in the context of a model of common and specific neural mechanisms of emotion regulation that supports and expands the previous dual-routes models. We suggest that emotion regulation may rely on a core inhibitory circuit, and on strategy-specific top-down and bottom-up processes distinct for different strategies.
[ { "created": "Thu, 25 May 2023 16:52:15 GMT", "version": "v1" } ]
2023-05-26
[ [ "Monachesi", "Bianca", "" ], [ "Grecucci", "Alessandro", "" ], [ "Ghomroudi", "Parisa Ahmadi", "" ], [ "Messina", "Irene", "" ] ]
In the emotion regulation literature, the large number of neuroimaging studies on cognitive reappraisal has created the impression that the same top-down, control-related neural mechanisms characterize all emotion regulation strategies. However, top-down processes may coexist with more bottom-up and emotion-focused processes that partially bypass the recruitment of executive functions. A case in point is acceptance-based strategies. To better understand neural commonalities and differences behind different emotion regulation strategies, in the present study we applied a meta-analytic method to fMRI studies of task-related activity of reappraisal and acceptance. Results showed increased activity in the left inferior frontal gyrus and insula for both strategies, decreased activity in the basal ganglia for reappraisal, and decreased activity in limbic regions for acceptance. These findings are discussed in the context of a model of common and specific neural mechanisms of emotion regulation that supports and expands the previous dual-routes models. We suggest that emotion regulation may rely on a core inhibitory circuit, and on strategy-specific top-down and bottom-up processes distinct for different strategies.
0707.3716
Lorenzo Farina
Maria Concetta Palumbo, Lorenzo Farina, Alberto De Santis, Alessandro Giuliani, Alfredo Colosimo, Giorgio Morelli and Ida Ruberti
Post-transcriptional Regulation Drives Temporal Compartmentalization of the Yeast Metabolic Cycle
13 pages, 4 figures, 1 table
null
null
null
q-bio.CB q-bio.BM
null
The maintenance of a stable periodicity during the yeast metabolic cycle, involving approximately half of the genome, requires a very strict and efficient control of gene expression. For this reason, the metabolic cycle is a very good candidate for testing the role of a class of post-transcriptional regulators, the so-called PUF family, whose genome-wide mRNA binding specificity was recently experimentally assessed. Here we show that an integrated computational analysis of gene expression time series during the metabolic cycle and the mRNA binding specificity of PUF-family proteins allows for a clear demonstration of the very specific role exerted by selective post-transcriptional mRNA degradation in yeast metabolic cycle global regulation.
[ { "created": "Wed, 25 Jul 2007 12:21:28 GMT", "version": "v1" } ]
2007-07-26
[ [ "Palumbo", "Maria Concetta", "" ], [ "Farina", "Lorenzo", "" ], [ "De Santis", "Alberto", "" ], [ "Giuliani", "Alessandro", "" ], [ "Colosimo", "Alfredo", "" ], [ "Morelli", "Giorgio", "" ], [ "Ruberti", "Ida", "" ] ]
The maintenance of a stable periodicity during the yeast metabolic cycle, involving approximately half of the genome, requires a very strict and efficient control of gene expression. For this reason, the metabolic cycle is a very good candidate for testing the role of a class of post-transcriptional regulators, the so-called PUF family, whose genome-wide mRNA binding specificity was recently experimentally assessed. Here we show that an integrated computational analysis of gene expression time series during the metabolic cycle and the mRNA binding specificity of PUF-family proteins allows for a clear demonstration of the very specific role exerted by selective post-transcriptional mRNA degradation in yeast metabolic cycle global regulation.
1504.03145
Johan Elf
Mats Wallden, David Fange, \"Ozden Baltekin, Johan Elf
Fluctuations in growth rates determine the generation time and size distributions of E. coli cells
null
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Isogenic Escherichia coli growing exponentially in a constant environment display large variation in growth-rates, division-sizes and generation-times. It is unclear how these seemingly random cell cycles can be reconciled with the precise regulation required under conditions where the generation time is shorter than the time to replicate the genome. Here we use single molecule microscopy to map the location of the replication machinery to the division cycle of individual cells. We find that the cell-to-cell variation in growth rate is sufficient to explain the corresponding variation in cell size and division timing assuming a simple mechanistic model. In the model, initiation of chromosome replication is triggered at a fixed volume per origin region, and associated with each initiation event is a division event at a growth rate dependent time later. The result implies that cell division in E. coli has no other regulation beyond what is needed to initiate DNA replication at the right volume.
[ { "created": "Mon, 13 Apr 2015 11:53:44 GMT", "version": "v1" }, { "created": "Tue, 13 Oct 2015 18:04:57 GMT", "version": "v2" } ]
2015-10-14
[ [ "Wallden", "Mats", "" ], [ "Fange", "David", "" ], [ "Baltekin", "Özden", "" ], [ "Elf", "Johan", "" ] ]
Isogenic Escherichia coli growing exponentially in a constant environment display large variation in growth-rates, division-sizes and generation-times. It is unclear how these seemingly random cell cycles can be reconciled with the precise regulation required under conditions where the generation time is shorter than the time to replicate the genome. Here we use single molecule microscopy to map the location of the replication machinery to the division cycle of individual cells. We find that the cell-to-cell variation in growth rate is sufficient to explain the corresponding variation in cell size and division timing assuming a simple mechanistic model. In the model, initiation of chromosome replication is triggered at a fixed volume per origin region, and associated with each initiation event is a division event at a growth rate dependent time later. The result implies that cell division in E. coli has no other regulation beyond what is needed to initiate DNA replication at the right volume.
1608.08498
Rudolf Hanel Ass Prof Dr
Rudolf Hanel
Systemic stability, cell differentiation, and evolution - A dynamical systems perspective
8 pages 4 figures
null
null
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species or populations that proliferate faster than others become dominant in numbers. Catalysis allows catalytic sets within a molecular reaction network to dominate the non-catalytic parts of the network by processing most of the available substrate. As a consequence one may consider a 'catalytic fitness' of sets of molecular species. The fittest sets emerge as the expressed chemical backbone or sub-network of larger chemical reaction networks employed by organisms. However, catalytic fitness depends on the systemic context and the stability of systemic dynamics. Unstable reaction networks would easily be reshaped or destroyed by fluctuations of the chemical environment. In this paper we therefore focus on recognizing systemic stability as an evolutionary selection criterion. In fact, instabilities of regulatory system dynamics become predictive of the associated evolutionary forces driving the emergence of large reaction networks that avoid or control inherent instabilities. Systemic instabilities can be identified and analyzed using relatively simple mathematical random network models of complex regulatory systems. Using a statistical ensemble approach one can identify fundamental causes of unstable dynamics, infer evolutionarily preferred network properties, and predict evolutionarily emergent control mechanisms and their entanglement with cell differentiation processes. Surprisingly, what systemic stability tells us here is that cells (or other non-linear regulatory systems) never had to learn how to differentiate, but rather how to avoid and control differentiation. For example, in this framework we can predict that regulatory systems will evolutionarily favor networks where the number of catalytic enhancers is not larger than the number of suppressors.
[ { "created": "Tue, 30 Aug 2016 15:23:02 GMT", "version": "v1" } ]
2016-08-31
[ [ "Hanel", "Rudolf", "" ] ]
Species or populations that proliferate faster than others become dominant in numbers. Catalysis allows catalytic sets within a molecular reaction network to dominate the non-catalytic parts of the network by processing most of the available substrate. As a consequence one may consider a 'catalytic fitness' of sets of molecular species. The fittest sets emerge as the expressed chemical backbone or sub-network of larger chemical reaction networks employed by organisms. However, catalytic fitness depends on the systemic context and the stability of systemic dynamics. Unstable reaction networks would easily be reshaped or destroyed by fluctuations of the chemical environment. In this paper we therefore focus on recognizing systemic stability as an evolutionary selection criterion. In fact, instabilities of regulatory system dynamics become predictive of the associated evolutionary forces driving the emergence of large reaction networks that avoid or control inherent instabilities. Systemic instabilities can be identified and analyzed using relatively simple mathematical random network models of complex regulatory systems. Using a statistical ensemble approach one can identify fundamental causes of unstable dynamics, infer evolutionarily preferred network properties, and predict evolutionarily emergent control mechanisms and their entanglement with cell differentiation processes. Surprisingly, what systemic stability tells us here is that cells (or other non-linear regulatory systems) never had to learn how to differentiate, but rather how to avoid and control differentiation. For example, in this framework we can predict that regulatory systems will evolutionarily favor networks where the number of catalytic enhancers is not larger than the number of suppressors.
1209.1412
Renato Vicente
Roberto H. Schonmann, Robert Boyd and Renato Vicente
The Taylor-Frank method cannot be applied to some biologically important, continuous fitness functions
9 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Taylor-Frank method for making kin selection models when fitness is a nonlinear function of a continuous phenotype requires this function to be differentiable. This assumption sometimes fails for biologically important fitness functions, for instance in microbial data and the theory of repeated n-person games, even when fitness functions are smooth and continuous. In these cases, the Taylor-Frank methodology cannot be used, and a more general form of direct fitness must replace the standard one to account for kin selection, even under weak selection.
[ { "created": "Thu, 6 Sep 2012 21:15:41 GMT", "version": "v1" } ]
2012-09-10
[ [ "Schonmann", "Roberto H.", "" ], [ "Boyd", "Robert", "" ], [ "Vicente", "Renato", "" ] ]
The Taylor-Frank method for making kin selection models when fitness is a nonlinear function of a continuous phenotype requires this function to be differentiable. This assumption sometimes fails for biologically important fitness functions, for instance in microbial data and the theory of repeated n-person games, even when fitness functions are smooth and continuous. In these cases, the Taylor-Frank methodology cannot be used, and a more general form of direct fitness must replace the standard one to account for kin selection, even under weak selection.
2104.14189
Jan Kierfeld
Matthias Schmidt, Jan Kierfeld
Chemomechanical simulation of microtubule dynamics with explicit lateral bond dynamics
37 pages + 14 figures + supplementary material
Front. Phys. 9:673875 (2021)
10.3389/fphy.2021.673875
null
q-bio.SC q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce and parameterize a chemomechanical model of microtubule dynamics on the dimer level, which is based on the allosteric tubulin model and includes attachment, detachment and hydrolysis of tubulin dimers as well as stretching of lateral bonds, bending at longitudinal junctions, and the possibility of lateral bond rupture and formation. The model is computationally efficient such that we reach sufficiently long simulation times to observe repeated catastrophe and rescue events at realistic tubulin concentrations and hydrolysis rates, which allows us to deduce catastrophe and rescue rates. The chemomechanical model also allows us to gain insight into microscopic features of the GTP-tubulin cap structure and microscopic structural features triggering microtubule catastrophes and rescues. Dilution simulations show qualitative agreement with experiments. We also explore the consequences of a possible feedback of mechanical forces onto the hydrolysis process and the GTP-tubulin cap structure.
[ { "created": "Thu, 29 Apr 2021 08:17:45 GMT", "version": "v1" } ]
2021-08-31
[ [ "Schmidt", "Matthias", "" ], [ "Kierfeld", "Jan", "" ] ]
We introduce and parameterize a chemomechanical model of microtubule dynamics on the dimer level, which is based on the allosteric tubulin model and includes attachment, detachment and hydrolysis of tubulin dimers as well as stretching of lateral bonds, bending at longitudinal junctions, and the possibility of lateral bond rupture and formation. The model is computationally efficient such that we reach sufficiently long simulation times to observe repeated catastrophe and rescue events at realistic tubulin concentrations and hydrolysis rates, which allows us to deduce catastrophe and rescue rates. The chemomechanical model also allows us to gain insight into microscopic features of the GTP-tubulin cap structure and microscopic structural features triggering microtubule catastrophes and rescues. Dilution simulations show qualitative agreement with experiments. We also explore the consequences of a possible feedback of mechanical forces onto the hydrolysis process and the GTP-tubulin cap structure.
2008.08814
Hessameddin Akhlaghpour
Hessameddin Akhlaghpour
An RNA-Based Theory of Natural Universal Computation
66 pages total (25 pages main text excluding reference section), 7 figures, appendices in the same document
Journal of Theoretical Biology Volume 537, 21 March 2022, 110984
10.1016/j.jtbi.2021.110984
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Life is confronted with computation problems in a variety of domains including animal behavior, single-cell behavior, and embryonic development. Yet we currently do not know of a naturally existing biological system that is capable of universal computation, i.e., Turing-equivalent in scope. Generic finite-dimensional dynamical systems (which encompass most models of neural networks, intracellular signaling cascades, and gene regulatory networks) fall short of universal computation, but are assumed to be capable of explaining cognition and development. I present a class of models that bridge two concepts from distant fields: combinatory logic (or, equivalently, lambda calculus) and RNA molecular biology. A set of basic RNA editing rules can make it possible to compute any computable function with identical algorithmic complexity to that of Turing machines. The models do not assume extraordinarily complex molecular machinery or any processes that radically differ from what we already know to occur in cells. Distinct independent enzymes can mediate each of the rules and RNA molecules solve the problem of parenthesis matching through their secondary structure. In the most plausible of these models all of the editing rules can be implemented with merely cleavage and ligation operations at fixed positions relative to predefined motifs. This demonstrates that universal computation is well within the reach of molecular biology. It is therefore reasonable to assume that life has evolved - or possibly began with - a universal computer that yet remains to be discovered. The variety of seemingly unrelated computational problems across many scales can potentially be solved using the same RNA-based computation system. Experimental validation of this theory may immensely impact our understanding of memory, cognition, development, disease, evolution, and the early stages of life.
[ { "created": "Thu, 20 Aug 2020 07:38:13 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 18:19:35 GMT", "version": "v2" }, { "created": "Tue, 24 Aug 2021 00:12:18 GMT", "version": "v3" }, { "created": "Sun, 12 Dec 2021 18:31:35 GMT", "version": "v4" } ]
2022-03-10
[ [ "Akhlaghpour", "Hessameddin", "" ] ]
Life is confronted with computation problems in a variety of domains including animal behavior, single-cell behavior, and embryonic development. Yet we currently do not know of a naturally existing biological system that is capable of universal computation, i.e., Turing-equivalent in scope. Generic finite-dimensional dynamical systems (which encompass most models of neural networks, intracellular signaling cascades, and gene regulatory networks) fall short of universal computation, but are assumed to be capable of explaining cognition and development. I present a class of models that bridge two concepts from distant fields: combinatory logic (or, equivalently, lambda calculus) and RNA molecular biology. A set of basic RNA editing rules can make it possible to compute any computable function with identical algorithmic complexity to that of Turing machines. The models do not assume extraordinarily complex molecular machinery or any processes that radically differ from what we already know to occur in cells. Distinct independent enzymes can mediate each of the rules and RNA molecules solve the problem of parenthesis matching through their secondary structure. In the most plausible of these models all of the editing rules can be implemented with merely cleavage and ligation operations at fixed positions relative to predefined motifs. This demonstrates that universal computation is well within the reach of molecular biology. It is therefore reasonable to assume that life has evolved - or possibly began with - a universal computer that yet remains to be discovered. The variety of seemingly unrelated computational problems across many scales can potentially be solved using the same RNA-based computation system. Experimental validation of this theory may immensely impact our understanding of memory, cognition, development, disease, evolution, and the early stages of life.
1501.07448
Igor Goychuk
Igor Goychuk and Andriy Goychuk
Stochastic Wilson-Cowan models of neuronal network dynamics with memory and delay
null
New Journal of Physics, vol. 17, 045029 (2015)
10.1088/1367-2630/17/4/045029
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a simple Markovian class of the stochastic Wilson-Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (the noise sustained criticality) in a balanced feedforward network consisting of the excitatory and inhibitory neurons. Such avalanches emerge due to the network size dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with the critical exponent around -1.16. We show that this power law is robust upon a variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential. It does not reflect any genuine power law dependence.
[ { "created": "Thu, 29 Jan 2015 13:26:14 GMT", "version": "v1" } ]
2015-05-01
[ [ "Goychuk", "Igor", "" ], [ "Goychuk", "Andriy", "" ] ]
We consider a simple Markovian class of the stochastic Wilson-Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (the noise sustained criticality) in a balanced feedforward network consisting of the excitatory and inhibitory neurons. Such avalanches emerge due to the network size dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with the critical exponent around -1.16. We show that this power law is robust upon a variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential. It does not reflect any genuine power law dependence.
2103.12481
Mar\'ia Vallet-Regi
C. Heras, J. Jimenez Holguin, A. L. Doadrio, M. Vallet-Regi, S. Sanchez-Salcedo, A. J. Salinas
Multifunctional antibiotic- and zinc-containing mesoporous bioactive glass scaffolds to fight bone infection
27 pages, 11 figures
Acta Biomaterialia 114, 395-406 (2020)
10.1016/j.actbio.2020.07.044
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bone regeneration is a clinical challenge that requires multiple approaches. Sometimes, it also includes the development of new osteogenic and antibacterial biomaterials to treat the occurrence of possible infection processes derived from surgery. This study evaluates the antibacterial properties of meso-macroporous scaffolds coated with gelatin and based on a bioactive glass and after being doped with 4% ZnO (4ZN-GE) and loaded with saturated and minimally inhibitory concentrations of one of the antibiotics levofloxacin (LEVO), vancomycin (VANCO), rifampicin (RIFAM) or gentamicin (GENTA). After the physicochemical characterization of the materials, inorganic ion and antibiotic release studies were performed from the scaffolds. In addition, molecular modeling allowed the determination of electrostatic potential density maps and hydrogen bonds of the antibiotics and the glass matrix. In vitro antibacterial studies (in plankton, inhibition halos and biofilm destruction) with S. aureus and E. coli as model bacteria showed a synergistic effect of zinc ions and antibiotics. The effect was especially noticeable in planktonic cultures of S. aureus with 4ZN-GE scaffolds loaded with VANCO, LEVO or RIFAM and in cultures of E. coli with LEVO or GENTA. Furthermore, S. aureus biofilms were completely destroyed by 4ZN-GE scaffolds loaded with VANCO, LEVO or RIFAM and total destruction of E. coli biofilm was achieved with 4ZN-GE scaffolds loaded with GENTA or LEVO. This approach could be an important step in the fight against microbial resistance and provide much needed options for the treatment of bone infection.
[ { "created": "Tue, 23 Mar 2021 12:12:15 GMT", "version": "v1" } ]
2021-03-24
[ [ "Heras", "C.", "" ], [ "Holguin", "J. Jimenez", "" ], [ "Doadrio", "A. L.", "" ], [ "Vallet-Regi", "M.", "" ], [ "Sanchez-Salcedo", "S.", "" ], [ "Salinas", "A. J.", "" ] ]
Bone regeneration is a clinical challenge that requires multiple approaches. Sometimes, it also includes the development of new osteogenic and antibacterial biomaterials to treat possible infection processes derived from surgery. This study evaluates the antibacterial properties of gelatin-coated meso-macroporous scaffolds based on a bioactive glass, doped with 4% ZnO (4ZN-GE) and loaded with saturated and minimally inhibitory concentrations of one of the antibiotics levofloxacin (LEVO), vancomycin (VANCO), rifampicin (RIFAM) or gentamicin (GENTA). After the physicochemical characterization of the materials, inorganic ion and antibiotic release studies were performed from the scaffolds. In addition, molecular modeling allowed the determination of electrostatic potential density maps and hydrogen bonds of the antibiotics and the glass matrix. In vitro antibacterial studies (in plankton, inhibition halos and biofilm destruction) with S. aureus and E. coli as model bacteria showed a synergistic effect of zinc ions and antibiotics. The effect was especially noticeable in planktonic cultures of S. aureus with 4ZN-GE scaffolds loaded with VANCO, LEVO or RIFAM and in cultures of E. coli with LEVO or GENTA. Furthermore, S. aureus biofilms were completely destroyed by 4ZN-GE scaffolds loaded with VANCO, LEVO or RIFAM, and total destruction of E. coli biofilm was achieved with 4ZN-GE scaffolds loaded with GENTA or LEVO. This approach could be an important step in the fight against microbial resistance and provide much-needed options for the treatment of bone infection.
1711.05141
Andrei Olypher
Andrei Olifer
Generating behavioral acts of predetermined apparent complexity
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Behavior of natural and artificial agents consists of behavioral episodes or acts. This study introduces a quantitative measure of behavioral acts -- their apparent complexity. The measure is based on the concept of the Kolmogorov complexity. It is an apparent measure because it is determined solely by the readings of the signals that directly encode percepts and actions during behavior. The article describes an algorithm of generating behavioral acts of predetermined apparent complexity. Such acts can be used to evaluate and develop learning abilities of artificial agents.
[ { "created": "Tue, 14 Nov 2017 15:29:52 GMT", "version": "v1" } ]
2017-11-15
[ [ "Olifer", "Andrei", "" ] ]
Behavior of natural and artificial agents consists of behavioral episodes or acts. This study introduces a quantitative measure of behavioral acts -- their apparent complexity. The measure is based on the concept of the Kolmogorov complexity. It is an apparent measure because it is determined solely by the readings of the signals that directly encode percepts and actions during behavior. The article describes an algorithm of generating behavioral acts of predetermined apparent complexity. Such acts can be used to evaluate and develop learning abilities of artificial agents.
1709.06489
Louis Lello
Louis Lello, Steven G. Avery, Laurent Tellier, Ana Vazquez, Gustavo de los Campos, Stephen D.H. Hsu
Accurate Genomic Prediction Of Human Height
17 pages, 10 figures
null
null
null
q-bio.GN cs.LG q-bio.QM stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, $\sim$40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate $\sim$0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the "missing heritability" problem -- i.e., the gap between prediction R-squared and SNP heritability. The $\sim$20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.
[ { "created": "Tue, 19 Sep 2017 15:32:37 GMT", "version": "v1" } ]
2017-09-20
[ [ "Lello", "Louis", "" ], [ "Avery", "Steven G.", "" ], [ "Tellier", "Laurent", "" ], [ "Vazquez", "Ana", "" ], [ "Campos", "Gustavo de los", "" ], [ "Hsu", "Stephen D. H.", "" ] ]
We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, $\sim$40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate $\sim$0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the "missing heritability" problem -- i.e., the gap between prediction R-squared and SNP heritability. The $\sim$20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.
2106.13565
Tancredi Caruso
Tancredi Caruso, Giulio Virginio Clemente, Matthias C Rillig, Diego Garlaschelli
Fluctuating ecological networks: a synthesis of maximum-entropy approaches for pattern detection and process inference
submitted
Methods in Ecology and Evolution, 00, 1-12 (2022)
10.1111/2041-210X.13985
null
q-bio.QM cond-mat.dis-nn physics.data-an
http://creativecommons.org/licenses/by/4.0/
Ecological networks such as plant-pollinator systems and food webs vary in space and time. This variability includes fluctuations in global network properties such as total number and intensity of interactions but also in the local properties of individual nodes such as the number and intensity of species-level interactions. Fluctuations of species properties can significantly affect higher-order network features, e.g. robustness and nestedness. Local fluctuations should therefore be controlled for in applications that rely on null models, especially pattern and perturbation detection. By contrast, most randomization methods for null models used by ecologists treat node-level local properties as hard constraints that cannot fluctuate. Here, we synthesise a set of methods that resolves the limit of hard constraints and is based on statistical mechanics. We illustrate the methods with some practical examples making available open source computer codes. We clarify how this approach can be used by experimental ecologists to detect non-random network patterns with null models that not only rewire but also redistribute interaction strengths by allowing fluctuations in the null model constraints (soft constraints). Null modelling of species heterogeneity through local fluctuations around typical topological and quantitative constraints offers a statistically robust and expanded (e.g. quantitative null models) set of tools to understand the assembly and resilience of ecological networks.
[ { "created": "Fri, 25 Jun 2021 11:23:59 GMT", "version": "v1" }, { "created": "Fri, 11 Mar 2022 09:03:31 GMT", "version": "v2" } ]
2022-12-23
[ [ "Caruso", "Tancredi", "" ], [ "Clemente", "Giulio Virginio", "" ], [ "Rillig", "Matthias C", "" ], [ "Garlaschelli", "Diego", "" ] ]
Ecological networks such as plant-pollinator systems and food webs vary in space and time. This variability includes fluctuations in global network properties such as total number and intensity of interactions but also in the local properties of individual nodes such as the number and intensity of species-level interactions. Fluctuations of species properties can significantly affect higher-order network features, e.g. robustness and nestedness. Local fluctuations should therefore be controlled for in applications that rely on null models, especially pattern and perturbation detection. By contrast, most randomization methods for null models used by ecologists treat node-level local properties as hard constraints that cannot fluctuate. Here, we synthesise a set of methods that resolves the limit of hard constraints and is based on statistical mechanics. We illustrate the methods with some practical examples making available open source computer codes. We clarify how this approach can be used by experimental ecologists to detect non-random network patterns with null models that not only rewire but also redistribute interaction strengths by allowing fluctuations in the null model constraints (soft constraints). Null modelling of species heterogeneity through local fluctuations around typical topological and quantitative constraints offers a statistically robust and expanded (e.g. quantitative null models) set of tools to understand the assembly and resilience of ecological networks.
1706.05702
Jeyashree Krishnan
Jeyashree Krishnan, PierGianLuca Porta Mana, Moritz Helias, Markus Diesmann, Edoardo Di Napoli
Perfect spike detection via time reversal
9 figures, Preliminary results in proceedings of the Bernstein Conference 2016
null
10.3389/fninf.2017.00075
null
q-bio.NC math.DG physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiking neuronal networks are usually simulated with three main simulation schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but faster than an alternative perfect tests based on bisection or root-finding methods. Comparison confirms earlier results that the imperfect test rarely misses spikes (less than a fraction $1/10^8$ of missed spikes) in biologically relevant settings. This study offers an alternative geometric point of view on neuronal dynamics.
[ { "created": "Sun, 18 Jun 2017 19:13:26 GMT", "version": "v1" } ]
2018-01-24
[ [ "Krishnan", "Jeyashree", "" ], [ "Mana", "PierGianLuca Porta", "" ], [ "Helias", "Moritz", "" ], [ "Diesmann", "Markus", "" ], [ "Di Napoli", "Edoardo", "" ] ]
Spiking neuronal networks are usually simulated with three main simulation schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but faster than alternative perfect tests based on bisection or root-finding methods. Comparison confirms earlier results that the imperfect test rarely misses spikes (less than a fraction $1/10^8$ of missed spikes) in biologically relevant settings. This study offers an alternative geometric point of view on neuronal dynamics.
1512.06305
Anca Radulescu
Anca Radulescu, Joanna Herron
Ebola impact and quarantine in a network model
16 pages, 8 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much effort has been directed towards using mathematical models to understand and predict contagious disease, in particular Ebola outbreaks. Classical SIR (susceptible-infected-recovered) compartmental models capture well the dynamics of the outbreak in certain communities, and accurately describe the differences between them based on a variety of parameters. However, repeated resurgence of Ebola contagions suggests that there are components of the global disease dynamics that we don't yet fully understand and can't effectively control. In order to understand the dynamics of a more widespread contagion, we placed SIR models within the framework of dynamic networks, with the communities at risk of contracting the virus acting as nonlinear systems, coupled based on a connectivity graph. We study how the effects of the disease (measured as the outbreak impact and duration) change with respect to local parameters, but also with changes in both short-range and long-range connectivity patterns in the graph. We discuss the implications of optimizing both these measures in increasingly realistic models of coupled communities.
[ { "created": "Sun, 20 Dec 2015 01:53:36 GMT", "version": "v1" } ]
2015-12-22
[ [ "Radulescu", "Anca", "" ], [ "Herron", "Joanna", "" ] ]
Much effort has been directed towards using mathematical models to understand and predict contagious disease, in particular Ebola outbreaks. Classical SIR (susceptible-infected-recovered) compartmental models capture well the dynamics of the outbreak in certain communities, and accurately describe the differences between them based on a variety of parameters. However, repeated resurgence of Ebola contagions suggests that there are components of the global disease dynamics that we don't yet fully understand and can't effectively control. In order to understand the dynamics of a more widespread contagion, we placed SIR models within the framework of dynamic networks, with the communities at risk of contracting the virus acting as nonlinear systems, coupled based on a connectivity graph. We study how the effects of the disease (measured as the outbreak impact and duration) change with respect to local parameters, but also with changes in both short-range and long-range connectivity patterns in the graph. We discuss the implications of optimizing both these measures in increasingly realistic models of coupled communities.
1510.07371
Lior Pachter
Lorian Schaeffer, Harold Pimentel, Nicolas Bray, P\'all Melsted and Lior Pachter
Pseudoalignment for metagenomic read assignment
Replaced accidentally duplicated figure with correct version; fixed some issues with figure generation and labeling; fixed problem with some missing genomes from database; added link to GitHub repo containing analysis code; included assessment of aggregate sensitivity and precision; clarified assessment metrics used
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore connections between metagenomic read assignment and the quantification of transcripts from RNA-Seq data. In particular, we show that the recent idea of pseudoalignment introduced in the RNA-Seq context is suitable in the metagenomics setting. When coupled with the Expectation-Maximization (EM) algorithm, reads can be assigned far more accurately and quickly than is currently possible with state of the art software.
[ { "created": "Mon, 26 Oct 2015 05:58:58 GMT", "version": "v1" }, { "created": "Tue, 1 Dec 2015 14:14:44 GMT", "version": "v2" } ]
2015-12-02
[ [ "Schaeffer", "Lorian", "" ], [ "Pimentel", "Harold", "" ], [ "Bray", "Nicolas", "" ], [ "Melsted", "Páll", "" ], [ "Pachter", "Lior", "" ] ]
We explore connections between metagenomic read assignment and the quantification of transcripts from RNA-Seq data. In particular, we show that the recent idea of pseudoalignment introduced in the RNA-Seq context is suitable in the metagenomics setting. When coupled with the Expectation-Maximization (EM) algorithm, reads can be assigned far more accurately and quickly than is currently possible with state of the art software.
1712.05372
Fr\'ed\'erique Robin
Fr\'ed\'erique Cl\'ement, Fr\'ed\'erique Robin and Romain Yvinec
Analysis and calibration of a linear model for structured cell populations with unidirectional motion : Application to the morphogenesis of ovarian follicles
null
null
null
null
q-bio.PE math.AP math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze a multi-type age-dependent model for cell populations subject to unidirectional motion, in both a stochastic and a deterministic framework. Cells are distributed into successive layers; they may divide and move irreversibly from one layer to the next. We adapt results on the large-time convergence of PDE systems and branching processes to our context, where the Perron-Frobenius or Krein-Rutman theorem cannot be applied. We derive explicit analytical formulas for the asymptotic cell number moments and the stable age distribution. We illustrate these results numerically and apply them to the study of the morphodynamics of ovarian follicles. We prove the structural parameter identifiability of our model in the case of age-independent division rates. Using a set of experimental biological data, we estimate the model parameters to fit the changes in the cell numbers in each layer during the early stages of follicle development.
[ { "created": "Thu, 14 Dec 2017 18:06:44 GMT", "version": "v1" } ]
2017-12-15
[ [ "Clément", "Frédérique", "" ], [ "Robin", "Frédérique", "" ], [ "Yvinec", "Romain", "" ] ]
We analyze a multi-type age-dependent model for cell populations subject to unidirectional motion, in both a stochastic and a deterministic framework. Cells are distributed into successive layers; they may divide and move irreversibly from one layer to the next. We adapt results on the large-time convergence of PDE systems and branching processes to our context, where the Perron-Frobenius or Krein-Rutman theorem cannot be applied. We derive explicit analytical formulas for the asymptotic cell number moments and the stable age distribution. We illustrate these results numerically and apply them to the study of the morphodynamics of ovarian follicles. We prove the structural parameter identifiability of our model in the case of age-independent division rates. Using a set of experimental biological data, we estimate the model parameters to fit the changes in the cell numbers in each layer during the early stages of follicle development.
q-bio/0310001
Rodrick Wallace
Rodrick Wallace
Comorbidity and Anticomorbidity: Autocognitive developmental disorders of structured psychosocial stress
18 pages, 2 figures
null
null
null
q-bio.NC q-bio.TO
null
We examine interacting cognitive modules of human biology which, in the asymptotic limit of long sequences of responses, define the output of an appropriate 'dual' information source. Applying a 'necessary condition' communication theory formalism roughly similar to that of Dretske, but focused entirely on long sequences of signals, we examine the regularities apparent in comorbid psychiatric and chronic physical disorders using an extension of recent perspectives on autoimmune disease. We find that structured psychosocial stress can literally write a distorted image of itself onto child development, resulting in a life course trajectory to characteristic forms of comorbid mind/body dysfunction affecting both dominant and subordinate populations within a pathogenic social hierarchy.
[ { "created": "Wed, 1 Oct 2003 17:11:11 GMT", "version": "v1" }, { "created": "Thu, 23 Oct 2003 16:58:40 GMT", "version": "v2" }, { "created": "Fri, 12 Dec 2003 16:15:20 GMT", "version": "v3" } ]
2007-05-23
[ [ "Wallace", "Rodrick", "" ] ]
We examine interacting cognitive modules of human biology which, in the asymptotic limit of long sequences of responses, define the output of an appropriate 'dual' information source. Applying a 'necessary condition' communication theory formalism roughly similar to that of Dretske, but focused entirely on long sequences of signals, we examine the regularities apparent in comorbid psychiatric and chronic physical disorders using an extension of recent perspectives on autoimmune disease. We find that structured psychosocial stress can literally write a distorted image of itself onto child development, resulting in a life course trajectory to characteristic forms of comorbid mind/body dysfunction affecting both dominant and subordinate populations within a pathogenic social hierarchy.
0804.0696
Heng Lian
Heng Lian
Inference of genetic networks from time course expression data using functional regression with lasso penalty
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical inference of genetic regulatory networks is essential for understanding temporal interactions of regulatory elements inside the cells. For inferences of large networks, identification of network structure is typically achieved under the assumption of sparsity of the networks. When the number of time points in the expression experiment is not too small, we propose to infer the parameters in the ordinary differential equations using techniques from functional data analysis (FDA) by regarding the observed time course expression data as continuous-time curves. For networks with a large number of genes, we take advantage of the sparsity of the networks by penalizing the linear coefficients with an L_1 norm. The ability of the algorithm to infer network structure is demonstrated using the cell-cycle time course data for Saccharomyces cerevisiae.
[ { "created": "Fri, 4 Apr 2008 11:11:01 GMT", "version": "v1" }, { "created": "Sat, 5 Apr 2008 06:59:27 GMT", "version": "v2" } ]
2008-04-07
[ [ "Lian", "Heng", "" ] ]
Statistical inference of genetic regulatory networks is essential for understanding temporal interactions of regulatory elements inside the cells. For inferences of large networks, identification of network structure is typically achieved under the assumption of sparsity of the networks. When the number of time points in the expression experiment is not too small, we propose to infer the parameters in the ordinary differential equations using techniques from functional data analysis (FDA) by regarding the observed time course expression data as continuous-time curves. For networks with a large number of genes, we take advantage of the sparsity of the networks by penalizing the linear coefficients with an L_1 norm. The ability of the algorithm to infer network structure is demonstrated using the cell-cycle time course data for Saccharomyces cerevisiae.
2206.12747
Khaled Mohammed Saifuddin
Khaled Mohammed Saifuddin, Briana Bumgardner, Farhan Tanvir, Esra Akbas
HyGNN: Drug-Drug Interaction Prediction via Hypergraph Neural Network
Some new experiments have been added. One more dataset has been considered. Theoretical part has been updated too
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Drug-Drug Interactions (DDIs) may hamper the functionalities of drugs, and in the worst scenario, they may lead to adverse drug reactions (ADRs). Predicting all DDIs is a challenging and critical problem. Most existing computational models integrate drug-centric information from different sources and leverage it as features in machine learning classifiers to predict DDIs. However, these models have a high chance of failure, especially for new drugs when all the information is not available. This paper proposes a novel Hypergraph Neural Network (HyGNN) model based on only the SMILES string of drugs, available for any drug, for the DDI prediction problem. To capture the drug similarities, we create a hypergraph from drugs' chemical substructures extracted from the SMILES strings. Then, we develop HyGNN consisting of a novel attention-based hypergraph edge encoder to get the representation of drugs as hyperedges and a decoder to predict the interactions between drug pairs. Furthermore, we conduct extensive experiments to evaluate our model and compare it with several state-of-the-art methods. Experimental results demonstrate that our proposed HyGNN model effectively predicts DDIs and impressively outperforms the baselines with a maximum ROC-AUC and PR-AUC of 97.9% and 98.1%, respectively.
[ { "created": "Sat, 25 Jun 2022 22:48:27 GMT", "version": "v1" }, { "created": "Thu, 14 Jul 2022 06:31:25 GMT", "version": "v2" }, { "created": "Fri, 15 Jul 2022 10:14:45 GMT", "version": "v3" }, { "created": "Tue, 18 Apr 2023 09:57:00 GMT", "version": "v4" } ]
2023-04-19
[ [ "Saifuddin", "Khaled Mohammed", "" ], [ "Bumgardner", "Briana", "" ], [ "Tanvir", "Farhan", "" ], [ "Akbas", "Esra", "" ] ]
Drug-Drug Interactions (DDIs) may hamper the functionalities of drugs, and in the worst scenario, they may lead to adverse drug reactions (ADRs). Predicting all DDIs is a challenging and critical problem. Most existing computational models integrate drug-centric information from different sources and leverage it as features in machine learning classifiers to predict DDIs. However, these models have a high chance of failure, especially for new drugs when all the information is not available. This paper proposes a novel Hypergraph Neural Network (HyGNN) model based on only the SMILES string of drugs, available for any drug, for the DDI prediction problem. To capture the drug similarities, we create a hypergraph from drugs' chemical substructures extracted from the SMILES strings. Then, we develop HyGNN consisting of a novel attention-based hypergraph edge encoder to get the representation of drugs as hyperedges and a decoder to predict the interactions between drug pairs. Furthermore, we conduct extensive experiments to evaluate our model and compare it with several state-of-the-art methods. Experimental results demonstrate that our proposed HyGNN model effectively predicts DDIs and impressively outperforms the baselines with a maximum ROC-AUC and PR-AUC of 97.9% and 98.1%, respectively.
2202.07751
Gabriel Ocker
Gabriel Koch Ocker
Dynamics of stochastic integrate-and-fire networks
A previous version of this article, now retracted (https://journals.aps.org/prx/abstract/10.1103/PhysRevX.12.041007), contained errors described in the retraction notice (https://journals.aps.org/prx/abstract/10.1103/PhysRevX.13.029904). These errors have been corrected in the current article, which was re-reviewed and accepted back at Phys. Rev. X
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
The neural dynamics generating sensory, motor, and cognitive functions are commonly understood through field theories for neural population activity. Classic neural field theories are derived from highly simplified models of individual neurons, while biological neurons are highly complex cells. Integrate-and-fire neuron models balance biophysical detail and analytical tractability. Here, we develop a statistical field theory for networks of integrate-and-fire neurons with stochastic spike emission. This reveals an exact mapping to a self-consistent renewal process and a new mean field theory for the activity in these networks. The mean field theory has a rate-dependent leak, approximating the spike-driven resets of the membrane voltage. This gives rise to bistability between quiescent and active states in homogeneous and excitatory-inhibitory pulse-coupled networks. The field-theoretic framework also exposes fluctuation corrections to the mean field theory. We find that due to the spike reset, fluctuations suppress activity. We then examine the roles of spike resets and recurrent inhibition in stabilizing network activity. We calculate the phase diagram for inhibitory stabilization and find that an inhibition-stabilized regime occurs in wide regions of parameter space, consistent with experimental reports of inhibitory stabilization in diverse brain regions. Fluctuations narrow the region of inhibitory stabilization, consistent with their role in suppressing activity through spike resets.
[ { "created": "Tue, 15 Feb 2022 22:05:07 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2022 18:49:50 GMT", "version": "v2" }, { "created": "Fri, 29 Jul 2022 16:39:27 GMT", "version": "v3" }, { "created": "Thu, 16 Feb 2023 19:00:39 GMT", "version": "v4" }, { "created": "Thu, 23 Feb 2023 02:21:37 GMT", "version": "v5" }, { "created": "Fri, 17 Nov 2023 20:36:41 GMT", "version": "v6" } ]
2023-11-21
[ [ "Ocker", "Gabriel Koch", "" ] ]
The neural dynamics generating sensory, motor, and cognitive functions are commonly understood through field theories for neural population activity. Classic neural field theories are derived from highly simplified models of individual neurons, while biological neurons are highly complex cells. Integrate-and-fire neuron models balance biophysical detail and analytical tractability. Here, we develop a statistical field theory for networks of integrate-and-fire neurons with stochastic spike emission. This reveals an exact mapping to a self-consistent renewal process and a new mean field theory for the activity in these networks. The mean field theory has a rate-dependent leak, approximating the spike-driven resets of the membrane voltage. This gives rise to bistability between quiescent and active states in homogeneous and excitatory-inhibitory pulse-coupled networks. The field-theoretic framework also exposes fluctuation corrections to the mean field theory. We find that due to the spike reset, fluctuations suppress activity. We then examine the roles of spike resets and recurrent inhibition in stabilizing network activity. We calculate the phase diagram for inhibitory stabilization and find that an inhibition-stabilized regime occurs in wide regions of parameter space, consistent with experimental reports of inhibitory stabilization in diverse brain regions. Fluctuations narrow the region of inhibitory stabilization, consistent with their role in suppressing activity through spike resets.
1911.01202
Michael Plank
Michael J Plank
Asymptotic expansion approximation for spatial structure arising from directionally biased movement
null
Physica A: Statistical Mechanics and its Applications (2019)
10.1016/j.physa.2019.123290
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial structure can arise in spatial point process models via a range of mechanisms, including neighbour-dependent directionally biased movement. This spatial structure is neglected by mean-field models, but can have important effects on population dynamics. Spatial moment dynamics are one way to obtain a deterministic approximation of a dynamic spatial point process that retains some information about spatial structure. However, the applicability of this approach is limited by the computational cost of numerically solving spatial moment dynamic equations at a sufficient resolution. We present an asymptotic expansion for the equilibrium solution to the spatial moment dynamics equations in the presence of neighbour-dependent directional bias. We show that the asymptotic expansion provides a highly efficient scheme for obtaining approximate equilibrium solutions to the spatial moment dynamics equations when bias is weak. This scheme will be particularly useful for performing parameter inference on spatial moment models.
[ { "created": "Tue, 22 Oct 2019 03:07:58 GMT", "version": "v1" }, { "created": "Tue, 5 Nov 2019 01:54:51 GMT", "version": "v2" } ]
2019-11-06
[ [ "Plank", "Michael J", "" ] ]
Spatial structure can arise in spatial point process models via a range of mechanisms, including neighbour-dependent directionally biased movement. This spatial structure is neglected by mean-field models, but can have important effects on population dynamics. Spatial moment dynamics are one way to obtain a deterministic approximation of a dynamic spatial point process that retains some information about spatial structure. However, the applicability of this approach is limited by the computational cost of numerically solving spatial moment dynamic equations at a sufficient resolution. We present an asymptotic expansion for the equilibrium solution to the spatial moment dynamics equations in the presence of neighbour-dependent directional bias. We show that the asymptotic expansion provides a highly efficient scheme for obtaining approximate equilibrium solutions to the spatial moment dynamics equations when bias is weak. This scheme will be particularly useful for performing parameter inference on spatial moment models.
1903.06375
Seong Jun Park
Seong Jun Park
Exploiting product molecule number to consider reaction rate fluctuation in elementary reactions
null
null
10.1063/5.0091597
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many chemical reactions, reaction rate fluctuation is inevitable. Reaction rates differ whenever a chemical reaction occurs, owing to their dependence on the number of reaction events or the product number. As such, understanding the impact of rate fluctuation on product number counting statistics is of the utmost importance when developing a quantitative explanation of chemical reactions. In this work, we present a master equation that describes reaction rates as a function of product number and time. Our equation reveals the relationship between the reaction rate and product number fluctuation. Product number counting statistics uncovers a stochastic property of the product number; the product number directly manipulates the reaction rate. Specifically, we find that the product number shows super-Poisson characteristics when an increase in the product number induces an increase in the reaction rate, whereas it shows sub-Poisson characteristics when an increase in the product number induces a decrease in the reaction rate. Furthermore, our analysis exploits reaction rate fluctuation, enabling the quantification of the deviation of an elementary reaction process from a renewal process.
[ { "created": "Fri, 15 Mar 2019 05:53:42 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2019 09:48:40 GMT", "version": "v2" } ]
2024-06-19
[ [ "Park", "Seong Jun", "" ] ]
In many chemical reactions, reaction rate fluctuation is inevitable. Reaction rates differ whenever a chemical reaction occurs, owing to their dependence on the number of reaction events or the product number. As such, understanding the impact of rate fluctuation on product number counting statistics is of the utmost importance when developing a quantitative explanation of chemical reactions. In this work, we present a master equation that describes reaction rates as a function of product number and time. Our equation reveals the relationship between the reaction rate and product number fluctuation. Product number counting statistics uncovers a stochastic property of the product number; the product number directly manipulates the reaction rate. Specifically, we find that the product number shows super-Poisson characteristics when an increase in the product number induces an increase in the reaction rate, whereas it shows sub-Poisson characteristics when an increase in the product number induces a decrease in the reaction rate. Furthermore, our analysis exploits reaction rate fluctuation, enabling the quantification of the deviation of an elementary reaction process from a renewal process.
2402.00077
Yuan Chen
Yuan Chen, Ronglai Shen, Xiwen Feng, Katherine Panageas
Unlocking the Power of Multi-institutional Data: Integrating and Harmonizing Genomic Data Across Institutions
null
null
null
null
q-bio.GN cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer is a complex disease driven by genomic alterations, and tumor sequencing is becoming a mainstay of clinical care for cancer patients. The emergence of multi-institution sequencing data presents a powerful resource for learning real-world evidence to enhance precision oncology. GENIE BPC, led by the American Association for Cancer Research, establishes a unique database linking genomic data with clinical information for patients treated at multiple cancer centers. However, leveraging such multi-institutional sequencing data presents significant challenges. Variations in gene panels result in loss of information when the analysis is conducted on common gene sets. Additionally, differences in sequencing techniques and patient heterogeneity across institutions add complexity. High data dimensionality, sparse gene mutation patterns, and weak signals at the individual gene level further complicate matters. Motivated by these real-world challenges, we introduce the Bridge model. It uses a quantile-matched latent variable approach to derive integrated features to preserve information beyond common genes and maximize the utilization of all available data while leveraging information sharing to enhance both learning efficiency and the model's capacity to generalize. By extracting harmonized and noise-reduced lower-dimensional latent variables, the true mutation pattern unique to each individual is captured. We assess the model's performance and parameter estimation through extensive simulation studies. The extracted latent features from the Bridge model consistently excel in predicting patient survival across six cancer types in GENIE BPC data.
[ { "created": "Tue, 30 Jan 2024 23:25:05 GMT", "version": "v1" } ]
2024-02-02
[ [ "Chen", "Yuan", "" ], [ "Shen", "Ronglai", "" ], [ "Feng", "Xiwen", "" ], [ "Panageas", "Katherine", "" ] ]
Cancer is a complex disease driven by genomic alterations, and tumor sequencing is becoming a mainstay of clinical care for cancer patients. The emergence of multi-institution sequencing data presents a powerful resource for learning real-world evidence to enhance precision oncology. GENIE BPC, led by the American Association for Cancer Research, establishes a unique database linking genomic data with clinical information for patients treated at multiple cancer centers. However, leveraging such multi-institutional sequencing data presents significant challenges. Variations in gene panels result in loss of information when the analysis is conducted on common gene sets. Additionally, differences in sequencing techniques and patient heterogeneity across institutions add complexity. High data dimensionality, sparse gene mutation patterns, and weak signals at the individual gene level further complicate matters. Motivated by these real-world challenges, we introduce the Bridge model. It uses a quantile-matched latent variable approach to derive integrated features to preserve information beyond common genes and maximize the utilization of all available data while leveraging information sharing to enhance both learning efficiency and the model's capacity to generalize. By extracting harmonized and noise-reduced lower-dimensional latent variables, the true mutation pattern unique to each individual is captured. We assess the model's performance and parameter estimation through extensive simulation studies. The extracted latent features from the Bridge model consistently excel in predicting patient survival across six cancer types in GENIE BPC data.
q-bio/0607012
Michael Deem
Jeong-Man Park and Michael W. Deem
Schwinger Boson Formulation and Solution of the Crow-Kimura and Eigen Models of Quasispecies Theory
37 pages; 4 figures; to appear in J. Stat. Phys
null
10.1007/s10955-006-9190-z
null
q-bio.PE cond-mat.stat-mech
null
We express the Crow-Kimura and Eigen models of quasispecies theory in a functional integral representation. We formulate the spin coherent state functional integrals using the Schwinger Boson method. In this formulation, we are able to deduce the long-time behavior of these models for arbitrary replication and degradation functions. We discuss the phase transitions that occur in these models as a function of mutation rate. We derive for these models the leading order corrections to the infinite genome length limit.
[ { "created": "Mon, 10 Jul 2006 00:16:41 GMT", "version": "v1" } ]
2009-11-13
[ [ "Park", "Jeong-Man", "" ], [ "Deem", "Michael W.", "" ] ]
We express the Crow-Kimura and Eigen models of quasispecies theory in a functional integral representation. We formulate the spin coherent state functional integrals using the Schwinger Boson method. In this formulation, we are able to deduce the long-time behavior of these models for arbitrary replication and degradation functions. We discuss the phase transitions that occur in these models as a function of mutation rate. We derive for these models the leading order corrections to the infinite genome length limit.
0801.3963
Ines Samengo
Germ\'an Mato and In\'es Samengo
Type I and type II neuron models are selectively driven by differential stimulus features
25 pages and 9 figures. To appear in Neural Computation
null
null
null
q-bio.NC
null
Neurons in the nervous system exhibit an outstanding variety of morphological and physiological properties. However, close to threshold, this remarkable richness may be grouped succinctly into two basic types of excitability, often referred to as type I and type II. The dynamical traits of these two neuron types have been extensively characterized. It would be interesting, however, to understand the information-processing consequences of their dynamical properties. To that end, here we determine the differences between the stimulus features inducing firing in type I and in type II neurons. We work both with realistic conductance-based models and minimal normal forms. We conclude that type I neurons fire in response to scale-free depolarizing stimuli. Type II neurons, instead, are most efficiently driven by input stimuli containing both depolarizing and hyperpolarizing phases, with significant power in the frequency band corresponding to the intrinsic frequencies of the cell.
[ { "created": "Fri, 25 Jan 2008 15:58:52 GMT", "version": "v1" } ]
2008-01-28
[ [ "Mato", "Germán", "" ], [ "Samengo", "Inés", "" ] ]
Neurons in the nervous system exhibit an outstanding variety of morphological and physiological properties. However, close to threshold, this remarkable richness may be grouped succinctly into two basic types of excitability, often referred to as type I and type II. The dynamical traits of these two neuron types have been extensively characterized. It would be interesting, however, to understand the information-processing consequences of their dynamical properties. To that end, here we determine the differences between the stimulus features inducing firing in type I and in type II neurons. We work both with realistic conductance-based models and minimal normal forms. We conclude that type I neurons fire in response to scale-free depolarizing stimuli. Type II neurons, instead, are most efficiently driven by input stimuli containing both depolarizing and hyperpolarizing phases, with significant power in the frequency band corresponding to the intrinsic frequencies of the cell.
1312.3445
Namiko Mitarai
Filippo Botta and Namiko Mitarai
Disturbance accelerates the transition from low- to high- diversity state in a model ecosystem
8 pages, 8 figures. Typos corrected
Phys. Rev. E vol 89, 022704 (2014)
10.1103/PhysRevE.89.022704
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effect of disturbance on a model ecosystem of sessile and mutually competitive species [Mathiesen et al. Phys. Rev. Lett. 107, 188101 (2011); Mitarai et al. Phys. Rev. E 86, 011929 (2012)] is studied. The disturbance stochastically removes individuals from the system, and the created empty sites are re-colonized by neighbouring species. We show that the stable high-diversity state, maintained by occasional cyclic species interactions that create isolated patches of meta-populations, is robust against small disturbance. We further demonstrate that finite disturbance can accelerate the transition from the low- to the high-diversity state by helping the creation of small patches through diffusion of boundaries between species with a stand-off relation.
[ { "created": "Thu, 12 Dec 2013 11:12:45 GMT", "version": "v1" }, { "created": "Mon, 17 Feb 2014 15:13:34 GMT", "version": "v2" } ]
2015-06-18
[ [ "Botta", "Filippo", "" ], [ "Mitarai", "Namiko", "" ] ]
The effect of disturbance on a model ecosystem of sessile and mutually competitive species [Mathiesen et al. Phys. Rev. Lett. 107, 188101 (2011); Mitarai et al. Phys. Rev. E 86, 011929 (2012)] is studied. The disturbance stochastically removes individuals from the system, and the created empty sites are re-colonized by neighbouring species. We show that the stable high-diversity state, maintained by occasional cyclic species interactions that create isolated patches of meta-populations, is robust against small disturbance. We further demonstrate that finite disturbance can accelerate the transition from the low- to the high-diversity state by helping the creation of small patches through diffusion of boundaries between species with a stand-off relation.
1610.07406
Seung Ki Baek
Su Do Yi, Seung Ki Baek, and Jung-Kyoo Choi
Combination with anti-tit-for-tat remedies problems of tit-for-tat
21 pages, 3 figures
J. Theor. Biol. 412, 1 (2017)
10.1016/j.jtbi.2016.09.017
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most important questions in game theory concerns how mutual cooperation can be achieved and maintained in a social dilemma. In Axelrod's tournaments of the iterated prisoner's dilemma, Tit-for-Tat (TFT) demonstrated the role of reciprocity in the emergence of cooperation. However, the stability of TFT does not hold in the presence of implementation error, and a TFT population is prone to neutral drift to unconditional cooperation, which eventually invites defectors. We argue that a combination of TFT and anti-TFT (ATFT) overcomes these difficulties in a noisy environment, provided that ATFT is defined as choosing the opposite to the opponent's last move. According to this TFT-ATFT strategy, a player normally uses TFT; turns to ATFT upon recognizing his or her own error; returns to TFT either when mutual cooperation is recovered or when the opponent unilaterally defects twice in a row. The proposed strategy provides simple and deterministic behavioral rules for correcting implementation error in a way that cannot be exploited by the opponent, and suppresses the neutral drift to unconditional cooperation.
[ { "created": "Mon, 24 Oct 2016 13:39:44 GMT", "version": "v1" } ]
2016-10-25
[ [ "Yi", "Su Do", "" ], [ "Baek", "Seung Ki", "" ], [ "Choi", "Jung-Kyoo", "" ] ]
One of the most important questions in game theory concerns how mutual cooperation can be achieved and maintained in a social dilemma. In Axelrod's tournaments of the iterated prisoner's dilemma, Tit-for-Tat (TFT) demonstrated the role of reciprocity in the emergence of cooperation. However, the stability of TFT does not hold in the presence of implementation error, and a TFT population is prone to neutral drift to unconditional cooperation, which eventually invites defectors. We argue that a combination of TFT and anti-TFT (ATFT) overcomes these difficulties in a noisy environment, provided that ATFT is defined as choosing the opposite to the opponent's last move. According to this TFT-ATFT strategy, a player normally uses TFT; turns to ATFT upon recognizing his or her own error; returns to TFT either when mutual cooperation is recovered or when the opponent unilaterally defects twice in a row. The proposed strategy provides simple and deterministic behavioral rules for correcting implementation error in a way that cannot be exploited by the opponent, and suppresses the neutral drift to unconditional cooperation.
1409.1801
Stephen Plaza
Stephen M. Plaza, Toufiq Parag, Gary B. Huang, Donald J. Olbris, Mathew A. Saunders, Patricia K. Rivlin
Annotating Synapses in Large EM Datasets
null
null
null
null
q-bio.QM cs.CV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience and is becoming a focus of the emerging field of connectomics. To date, electron microscopy (EM) is the most proven technique for identifying and quantifying synaptic connections. As advances in EM make acquiring larger datasets possible, subsequent manual synapse identification ({\em i.e.}, proofreading) for deciphering a connectome becomes a major time bottleneck. Here we introduce a large-scale, high-throughput, and semi-automated methodology to efficiently identify synapses. We successfully applied our methodology to the Drosophila medulla optic lobe, annotating many more synapses than previous connectome efforts. Our approaches are extensible and will make the often complicated process of synapse identification accessible to a wider community of potential proofreaders.
[ { "created": "Fri, 5 Sep 2014 13:52:47 GMT", "version": "v1" }, { "created": "Thu, 4 Dec 2014 16:18:01 GMT", "version": "v2" } ]
2014-12-05
[ [ "Plaza", "Stephen M.", "" ], [ "Parag", "Toufiq", "" ], [ "Huang", "Gary B.", "" ], [ "Olbris", "Donald J.", "" ], [ "Saunders", "Mathew A.", "" ], [ "Rivlin", "Patricia K.", "" ] ]
Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience and is becoming a focus of the emerging field of connectomics. To date, electron microscopy (EM) is the most proven technique for identifying and quantifying synaptic connections. As advances in EM make acquiring larger datasets possible, subsequent manual synapse identification ({\em i.e.}, proofreading) for deciphering a connectome becomes a major time bottleneck. Here we introduce a large-scale, high-throughput, and semi-automated methodology to efficiently identify synapses. We successfully applied our methodology to the Drosophila medulla optic lobe, annotating many more synapses than previous connectome efforts. Our approaches are extensible and will make the often complicated process of synapse identification accessible to a wider community of potential proofreaders.
q-bio/0511028
Guillermo Abramson
G. Abramson, L. Giuggioli, V. M. Kenkre, J. W. Dragoo, R. R. Parmenter, C. A. Parmenter, T. L. Yates
Diffusion and Home Range Parameters for Rodents: Peromyscus maniculatus in New Mexico
The published paper in Ecol. Complexity has an old version of Figure 6. Here we have put the correct version of Figure 6
Ecological Complexity 3 (2006) 64-70
10.1016/j.ecocom.2005.07.001
null
q-bio.PE
null
We analyze data from a long term field project in New Mexico, consisting of repeated sessions of mark-recaptures of Peromyscus maniculatus (Rodentia: Muridae), the host and reservoir of Sin Nombre Virus (Bunyaviridae: Hantavirus). The displacements of the recaptured animals provide a means to study their movement from a statistical point of view. We extract two parameters from the data with the help of a simple model: the diffusion constant of the rodents, and the size of their home range. The short time behavior shows the motion to be approximately diffusive and the diffusion constant to be 470+/-50m^2/day. The long time behavior provides an estimation of the diameter of the rodent home ranges, with an average value of 100+/-25m. As in previous investigations directed at Zygodontomys brevicauda observations in Panama, we use a box model for home range estimation. We also use a harmonic model in the present investigation to study the sensitivity of the conclusions to the model used and find that both models lead to similar estimates.
[ { "created": "Wed, 16 Nov 2005 12:28:38 GMT", "version": "v1" }, { "created": "Thu, 9 Mar 2006 17:47:13 GMT", "version": "v2" } ]
2007-05-23
[ [ "Abramson", "G.", "" ], [ "Giuggioli", "L.", "" ], [ "Kenkre", "V. M.", "" ], [ "Dragoo", "J. W.", "" ], [ "Parmenter", "R. R.", "" ], [ "Parmenter", "C. A.", "" ], [ "Yates", "T. L.", "" ] ]
We analyze data from a long term field project in New Mexico, consisting of repeated sessions of mark-recaptures of Peromyscus maniculatus (Rodentia: Muridae), the host and reservoir of Sin Nombre Virus (Bunyaviridae: Hantavirus). The displacements of the recaptured animals provide a means to study their movement from a statistical point of view. We extract two parameters from the data with the help of a simple model: the diffusion constant of the rodents, and the size of their home range. The short time behavior shows the motion to be approximately diffusive and the diffusion constant to be 470+/-50m^2/day. The long time behavior provides an estimation of the diameter of the rodent home ranges, with an average value of 100+/-25m. As in previous investigations directed at Zygodontomys brevicauda observations in Panama, we use a box model for home range estimation. We also use a harmonic model in the present investigation to study the sensitivity of the conclusions to the model used and find that both models lead to similar estimates.
q-bio/0312042
Anders Irb\"ack
Anders Irb\"ack, Fredrik Sjunnesson
Folding thermodynamics of three beta-sheet peptides: A model study
17 pages, 7 figures, to appear in Proteins
Proteins 56 (2004) 110-116
null
LU TP 03-40
q-bio.BM cond-mat.soft
null
We study the folding thermodynamics of a beta-hairpin and two three-stranded beta-sheet peptides using a simplified sequence-based all-atom model, in which folding is driven mainly by backbone hydrogen bonding and effective hydrophobic attraction. The native populations obtained for these three sequences are in good agreement with experimental data. We also show that the apparent native population depends on which observable is studied; the hydrophobicity energy and the number of native hydrogen bonds give different results. The magnitude of this dependence matches well with the results obtained in two different experiments on the beta-hairpin.
[ { "created": "Tue, 30 Dec 2003 20:54:37 GMT", "version": "v1" } ]
2007-05-23
[ [ "Irbäck", "Anders", "" ], [ "Sjunnesson", "Fredrik", "" ] ]
We study the folding thermodynamics of a beta-hairpin and two three-stranded beta-sheet peptides using a simplified sequence-based all-atom model, in which folding is driven mainly by backbone hydrogen bonding and effective hydrophobic attraction. The native populations obtained for these three sequences are in good agreement with experimental data. We also show that the apparent native population depends on which observable is studied; the hydrophobicity energy and the number of native hydrogen bonds give different results. The magnitude of this dependence matches well with the results obtained in two different experiments on the beta-hairpin.
2009.06577
Guilherme Innocentini
Guilherme C.P. Innocentini and Arran Hodgkinson and Fernando Antoneli and Arnaud Debussche and Ovidiu Radulescu
Push-forward method for piecewise deterministic biochemical simulations
arXiv admin note: text overlap with arXiv:1905.00235
Theoretical Computer Science Volume 893, 21 November 2021, Pages 17-40
10.1016/j.tcs.2021.05.025
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A biochemical network can be simulated by a set of ordinary differential equations (ODE) under well stirred reactor conditions, for large numbers of molecules, and frequent reactions. This is no longer a robust representation when some molecular species are in small numbers and reactions changing them are infrequent. In this case, discrete stochastic events trigger changes of the smooth deterministic dynamics of the biochemical network. Piecewise-deterministic Markov processes (PDMP) are well adapted for describing such situations. Although PDMP models are now well established in biology, these models remain computationally challenging. Previously we have introduced the push-forward method to compute how the probability measure is spread by the deterministic ODE flow of PDMPs, through the use of analytic expressions of the corresponding semigroup. In this paper we provide a more general simulation algorithm that works also for non-integrable systems. The method can be used for biochemical simulations with applications in fundamental biology, biotechnology and biocomputing. This work is an extended version of the work presented at the conference CMSB2019.
[ { "created": "Mon, 14 Sep 2020 17:38:15 GMT", "version": "v1" }, { "created": "Fri, 12 Feb 2021 18:22:30 GMT", "version": "v2" } ]
2021-12-17
[ [ "Innocentini", "Guilherme C. P.", "" ], [ "Hodgkinson", "Arran", "" ], [ "Antoneli", "Fernando", "" ], [ "Debussche", "Arnaud", "" ], [ "Radulescu", "Ovidiu", "" ] ]
A biochemical network can be simulated by a set of ordinary differential equations (ODE) under well stirred reactor conditions, for large numbers of molecules, and frequent reactions. This is no longer a robust representation when some molecular species are in small numbers and reactions changing them are infrequent. In this case, discrete stochastic events trigger changes of the smooth deterministic dynamics of the biochemical network. Piecewise-deterministic Markov processes (PDMP) are well adapted for describing such situations. Although PDMP models are now well established in biology, these models remain computationally challenging. Previously we have introduced the push-forward method to compute how the probability measure is spread by the deterministic ODE flow of PDMPs, through the use of analytic expressions of the corresponding semigroup. In this paper we provide a more general simulation algorithm that works also for non-integrable systems. The method can be used for biochemical simulations with applications in fundamental biology, biotechnology and biocomputing. This work is an extended version of the work presented at the conference CMSB2019.
1412.7695
German Mi\~no Dr
Germ\'an A. Mi\~no-Galaz
Allosteric Communication Pathways and Thermal Rectification in PDZ-2 Protein: A Computational Study
29 pages, 8 Figures. All Results Unchanged. Changed Title. Improved Grammar. Added references. Corrected typos. Elimination of the "Knocking" argument for Asp5-Lys91 Interaction in Results and in Discussion sections
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Allosteric communication in proteins is a central and yet unsolved problem of structural biochemistry. Previous findings, from computational biology (Ota and Agard, 2005), have proposed that heat diffuses in a protein through cognate protein allosteric pathways. This work studied heat diffusion in the well-known PDZ-2 protein, and confirmed that this protein has two cognate allosteric pathways and that heat flows preferentially through these. Also, a new property was observed for protein structures - heat diffuses asymmetrically through the structures. The underlying structure of this asymmetrical heat flow was a normal length hydrogen bond (~2.85 {\AA}) that acted as a thermal rectifier. In contrast, thermal rectification was compromised in short hydrogen bonds (~2.60 {\AA}), giving rise to symmetrical thermal diffusion. Asymmetrical heat diffusion was due, on a higher scale, to the local, structural organization of residues that, in turn, was also mediated by hydrogen bonds. This asymmetrical/symmetrical energy flow may be relevant for allosteric signal communication directionality in proteins and for the control of heat flow in materials science.
[ { "created": "Wed, 24 Dec 2014 15:51:08 GMT", "version": "v1" }, { "created": "Sat, 7 Mar 2015 08:34:58 GMT", "version": "v2" } ]
2015-03-10
[ [ "Miño-Galaz", "Germán A.", "" ] ]
Allosteric communication in proteins is a central and yet unsolved problem of structural biochemistry. Previous findings, from computational biology (Ota and Agard, 2005), have proposed that heat diffuses in a protein through cognate protein allosteric pathways. This work studied heat diffusion in the well-known PDZ-2 protein, and confirmed that this protein has two cognate allosteric pathways and that heat flows preferentially through these. Also, a new property was observed for protein structures - heat diffuses asymmetrically through the structures. The underlying structure of this asymmetrical heat flow was a normal length hydrogen bond (~2.85 {\AA}) that acted as a thermal rectifier. In contrast, thermal rectification was compromised in short hydrogen bonds (~2.60 {\AA}), giving rise to symmetrical thermal diffusion. Asymmetrical heat diffusion was due, on a higher scale, to the local, structural organization of residues that, in turn, was also mediated by hydrogen bonds. This asymmetrical/symmetrical energy flow may be relevant for allosteric signal communication directionality in proteins and for the control of heat flow in materials science.
1211.5730
Claus O. Wilke
Stephanie J. Spielman and Claus O. Wilke
Membrane environment imposes unique selection pressures on transmembrane domains of G protein-coupled receptors
19 pages, 4 figures, to appear in J. Mol. Evol
null
null
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have investigated the influence of the plasma membrane environment on the molecular evolution of G protein-coupled receptors (GPCRs), the largest receptor family in Metazoa. In particular, we have analyzed the site-specific rate variation across the two primary structural partitions, transmembrane (TM) and extramembrane (EM), of these membrane proteins. We find that transmembrane domains evolve more slowly than do extramembrane domains, though TM domains display increased rate heterogeneity relative to their EM counterparts. Although the majority of residues across GPCRs experience strong to weak purifying selection, many GPCRs experience positive selection at both TM and EM residues, albeit with a slight bias towards the EM. Further, a subset of GPCRs, chemosensory receptors (including olfactory and taste receptors), exhibit increased rates of evolution relative to other GPCRs, an effect which is more pronounced in their TM spans. Although it has been previously suggested that the TM's low evolutionary rate is caused by their high percentage of buried residues, we show that their attenuated rate seems to stem from the strong biophysical constraints of the membrane itself, or by functional requirements. In spite of the strong evolutionary constraints acting on the transmembrane spans of GPCRs, positive selection and high levels of evolutionary rate variability are common. Thus, biophysical constraints should not be presumed to preclude a protein's ability to evolve.
[ { "created": "Sun, 25 Nov 2012 05:49:47 GMT", "version": "v1" }, { "created": "Thu, 27 Dec 2012 22:15:02 GMT", "version": "v2" } ]
2013-01-01
[ [ "Spielman", "Stephanie J.", "" ], [ "Wilke", "Claus O.", "" ] ]
We have investigated the influence of the plasma membrane environment on the molecular evolution of G protein-coupled receptors (GPCRs), the largest receptor family in Metazoa. In particular, we have analyzed the site-specific rate variation across the two primary structural partitions, transmembrane (TM) and extramembrane (EM), of these membrane proteins. We find that transmembrane domains evolve more slowly than do extramembrane domains, though TM domains display increased rate heterogeneity relative to their EM counterparts. Although the majority of residues across GPCRs experience strong to weak purifying selection, many GPCRs experience positive selection at both TM and EM residues, albeit with a slight bias towards the EM. Further, a subset of GPCRs, chemosensory receptors (including olfactory and taste receptors), exhibit increased rates of evolution relative to other GPCRs, an effect which is more pronounced in their TM spans. Although it has been previously suggested that the TM's low evolutionary rate is caused by their high percentage of buried residues, we show that their attenuated rate seems to stem from the strong biophysical constraints of the membrane itself, or by functional requirements. In spite of the strong evolutionary constraints acting on the transmembrane spans of GPCRs, positive selection and high levels of evolutionary rate variability are common. Thus, biophysical constraints should not be presumed to preclude a protein's ability to evolve.
2405.04248
Evie Malaia
Michelle McCleod, Sean Borneman, Evie Malaia
Neurocomputational Phenotypes in Female and Male Autistic Individuals
10 pages, 2 figures, 4 tables. Submitted to Journal of Science and Health, University of Alabama
null
null
null
q-bio.NC nlin.CD
http://creativecommons.org/licenses/by/4.0/
Autism Spectrum Disorder (ASD) is characterized by an altered phenotype in social interaction and communication. Additionally, autism typically manifests differently in females as opposed to males: a phenomenon that has likely led to long-term problems in diagnostics of autism in females. These sex-based differences in communicative behavior may originate from differences in neurocomputational properties of brain organization. The present study looked to examine the relationship between one neurocomputational measure of brain organization, the local power-law exponent, in autistic vs. neurotypical, as well as male vs. female participants. To investigate the autistic phenotype in neural organization based on biological sex, we collected continuous resting-state EEG data for 19 autistic young adults (10 F), and 23 controls (14 F), using a 64-channel Net Station EEG acquisition system. The data was analyzed to quantify the 1/f power spectrum. Correlations between power-law exponent and behavioral measures were calculated in a between-group (female vs. male; autistic vs. neurotypical) design. On average, the power-law exponent was significantly greater in the male ASD group than in the female ASD group in fronto-central regions. The differences were more pronounced over the left hemisphere, suggesting neural organization differences in regions responsible for language complexity. These differences provide a potential explanation for behavioral variances in female vs. male autistic young adults.
[ { "created": "Tue, 7 May 2024 12:06:12 GMT", "version": "v1" } ]
2024-05-08
[ [ "McCleod", "Michelle", "" ], [ "Borneman", "Sean", "" ], [ "Malaia", "Evie", "" ] ]
Autism Spectrum Disorder (ASD) is characterized by an altered phenotype in social interaction and communication. Additionally, autism typically manifests differently in females as opposed to males: a phenomenon that has likely led to long-term problems in diagnostics of autism in females. These sex-based differences in communicative behavior may originate from differences in neurocomputational properties of brain organization. The present study looked to examine the relationship between one neurocomputational measure of brain organization, the local power-law exponent, in autistic vs. neurotypical, as well as male vs. female participants. To investigate the autistic phenotype in neural organization based on biological sex, we collected continuous resting-state EEG data for 19 autistic young adults (10 F), and 23 controls (14 F), using a 64-channel Net Station EEG acquisition system. The data was analyzed to quantify the 1/f power spectrum. Correlations between power-law exponent and behavioral measures were calculated in a between-group (female vs. male; autistic vs. neurotypical) design. On average, the power-law exponent was significantly greater in the male ASD group than in the female ASD group in fronto-central regions. The differences were more pronounced over the left hemisphere, suggesting neural organization differences in regions responsible for language complexity. These differences provide a potential explanation for behavioral variances in female vs. male autistic young adults.
2007.06975
Francesco Di Lauro Mr
Francesco Di Lauro, Luc Berthouze, Matthew D. Dorey, Joel C. Miller, Istv\'an Z. Kiss
The impact of network properties and mixing on control measures and disease-induced herd immunity in epidemic models: a mean-field model perspective
25 pages, 9 figures, 1 Table
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The contact structure of a population plays an important role in transmission of infection. Many ``structured models'' capture aspects of the contact structure through an underlying network or a mixing matrix. An important observation in such models is that once a fraction $1-1/\mathcal{R}_0$ has been infected, the residual susceptible population can no longer sustain an epidemic. A recent observation of some structured models is that this threshold can be crossed with a smaller fraction of infected individuals, because the disease acts like a targeted vaccine, preferentially immunizing higher-risk individuals who play a greater role in transmission. Therefore, a limited ``first wave'' may leave behind a residual population that cannot support a second wave once interventions are lifted. In this paper, we systematically analyse a number of mean-field models for networks and other structured populations to address issues relevant to the Covid-19 pandemic. In particular, we consider herd-immunity under several scenarios. We confirm that, in networks with high degree heterogeneity, the first wave confers herd-immunity with significantly fewer infections than equivalent models with lower degree heterogeneity. However, if modelling the intervention as a change in the contact network, then this effect might become more subtle. Indeed, modifying the structure can shield highly connected nodes from becoming infected during the first wave and make the second wave more substantial. We confirm this finding by using an age-structured compartmental model parameterised with real data and comparing lockdown periods implemented either as a global scaling of the mixing matrix or age-specific structural changes. We find that results regarding herd immunity levels are strongly dependent on the model, the duration of lockdown and how lockdown is implemented.
[ { "created": "Tue, 14 Jul 2020 11:26:31 GMT", "version": "v1" } ]
2020-07-15
[ [ "Di Lauro", "Francesco", "" ], [ "Berthouze", "Luc", "" ], [ "Dorey", "Matthew D.", "" ], [ "Miller", "Joel C.", "" ], [ "Kiss", "István Z.", "" ] ]
The contact structure of a population plays an important role in transmission of infection. Many ``structured models'' capture aspects of the contact structure through an underlying network or a mixing matrix. An important observation in such models is that once a fraction $1-1/\mathcal{R}_0$ has been infected, the residual susceptible population can no longer sustain an epidemic. A recent observation of some structured models is that this threshold can be crossed with a smaller fraction of infected individuals, because the disease acts like a targeted vaccine, preferentially immunizing higher-risk individuals who play a greater role in transmission. Therefore, a limited ``first wave'' may leave behind a residual population that cannot support a second wave once interventions are lifted. In this paper, we systematically analyse a number of mean-field models for networks and other structured populations to address issues relevant to the Covid-19 pandemic. In particular, we consider herd-immunity under several scenarios. We confirm that, in networks with high degree heterogeneity, the first wave confers herd-immunity with significantly fewer infections than equivalent models with lower degree heterogeneity. However, if modelling the intervention as a change in the contact network, then this effect might become more subtle. Indeed, modifying the structure can shield highly connected nodes from becoming infected during the first wave and make the second wave more substantial. We confirm this finding by using an age-structured compartmental model parameterised with real data and comparing lockdown periods implemented either as a global scaling of the mixing matrix or age-specific structural changes. We find that results regarding herd immunity levels are strongly dependent on the model, the duration of lockdown and how lockdown is implemented.
1107.2879
Mariano Beguerisse D\'iaz
Mariano Beguerisse-Diaz, Baojun Wang, Radhika Desikan, Mauricio Barahona
Squeeze-and-Breathe Evolutionary Monte Carlo Optimisation with Local Search Acceleration and its application to parameter fitting
15 Pages, 3 Figures, 6 Tables; Availability: Matlab code available from the authors upon request
null
null
null
q-bio.QM cs.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Estimating parameters from data is a key stage of the modelling process, particularly in biological systems where many parameters need to be estimated from sparse and noisy data sets. Over the years, a variety of heuristics have been proposed to solve this complex optimisation problem, with good results in some cases yet with limitations in the biological setting. Results: In this work, we develop an algorithm for model parameter fitting that combines ideas from evolutionary algorithms, sequential Monte Carlo and direct search optimisation. Our method performs well even when the order of magnitude and/or the range of the parameters is unknown. The method refines iteratively a sequence of parameter distributions through local optimisation combined with partial resampling from a historical prior defined over the support of all previous iterations. We exemplify our method with biological models using both simulated and real experimental data and estimate the parameters efficiently even in the absence of a priori knowledge about the parameters.
[ { "created": "Thu, 14 Jul 2011 17:52:39 GMT", "version": "v1" }, { "created": "Sun, 23 Oct 2011 15:10:58 GMT", "version": "v2" }, { "created": "Fri, 4 Nov 2011 17:08:36 GMT", "version": "v3" } ]
2011-11-07
[ [ "Beguerisse-Diaz", "Mariano", "" ], [ "Wang", "Baojun", "" ], [ "Desikan", "Radhika", "" ], [ "Barahona", "Mauricio", "" ] ]
Motivation: Estimating parameters from data is a key stage of the modelling process, particularly in biological systems where many parameters need to be estimated from sparse and noisy data sets. Over the years, a variety of heuristics have been proposed to solve this complex optimisation problem, with good results in some cases yet with limitations in the biological setting. Results: In this work, we develop an algorithm for model parameter fitting that combines ideas from evolutionary algorithms, sequential Monte Carlo and direct search optimisation. Our method performs well even when the order of magnitude and/or the range of the parameters is unknown. The method refines iteratively a sequence of parameter distributions through local optimisation combined with partial resampling from a historical prior defined over the support of all previous iterations. We exemplify our method with biological models using both simulated and real experimental data and estimate the parameters efficiently even in the absence of a priori knowledge about the parameters.
2309.03194
James Malkin Mr
James Malkin, Cian O'Donnell, Conor Houghton, Laurence Aitchison
Signatures of Bayesian inference emerge from energy efficient synapses
29 pages, 11 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANN) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have 1) higher input firing rates and 2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
[ { "created": "Wed, 6 Sep 2023 17:57:07 GMT", "version": "v1" }, { "created": "Sat, 23 Mar 2024 01:42:32 GMT", "version": "v2" }, { "created": "Fri, 21 Jun 2024 09:14:57 GMT", "version": "v3" }, { "created": "Mon, 1 Jul 2024 11:30:59 GMT", "version": "v4" } ]
2024-07-02
[ [ "Malkin", "James", "" ], [ "O'Donnell", "Cian", "" ], [ "Houghton", "Conor", "" ], [ "Aitchison", "Laurence", "" ] ]
Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of the energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANN) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have 1) higher input firing rates and 2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty.
1705.07856
James O'Dwyer
James P. O'Dwyer and Stephen J. Cornell
Cross-scale ecological theory sheds light on the maintenance of biodiversity
31 pages 3 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the first successes of neutral ecology was to predict realistically-broad distributions of rare and abundant species. However, it has remained an outstanding theoretical challenge to describe how this distribution of abundances changes with spatial scale, and this gap has hampered attempts to use observed species abundances as a way to quantify what non-neutral processes are needed to fully explain observed patterns. To address this, we introduce a new formulation of spatial neutral biodiversity theory and derive analytical predictions for the way abundance distributions change with scale. For tropical forest data where neutrality has been extensively tested before now, we apply this approach and identify an incompatibility between neutral fits at regional and local scales. We use this approach to derive a sharp quantification of what remains to be explained by non-neutral processes at the local scale, setting a quantitative target for more general models for the maintenance of biodiversity.
[ { "created": "Mon, 22 May 2017 17:00:42 GMT", "version": "v1" }, { "created": "Wed, 7 Jun 2017 17:09:52 GMT", "version": "v2" }, { "created": "Mon, 16 Jul 2018 19:48:48 GMT", "version": "v3" } ]
2018-07-18
[ [ "O'Dwyer", "James P.", "" ], [ "Cornell", "Stephen J.", "" ] ]
One of the first successes of neutral ecology was to predict realistically-broad distributions of rare and abundant species. However, it has remained an outstanding theoretical challenge to describe how this distribution of abundances changes with spatial scale, and this gap has hampered attempts to use observed species abundances as a way to quantify what non-neutral processes are needed to fully explain observed patterns. To address this, we introduce a new formulation of spatial neutral biodiversity theory and derive analytical predictions for the way abundance distributions change with scale. For tropical forest data where neutrality has been extensively tested before now, we apply this approach and identify an incompatibility between neutral fits at regional and local scales. We use this approach to derive a sharp quantification of what remains to be explained by non-neutral processes at the local scale, setting a quantitative target for more general models for the maintenance of biodiversity.
1604.02399
Matthew Tudor
Tracey Filzen, Peter Kutchukian, Jeffrey Hermes, Jing Li, Matthew Tudor
Representing high throughput expression profiles via perturbation barcodes reveals compound targets
19 pages, 3 figures, 2 tables, 2 supplementary figures
null
10.1371/journal.pcbi.1005335
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 genes, chosen to be representative of the genome as a whole, across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, in addition to predicting a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
[ { "created": "Fri, 8 Apr 2016 16:53:19 GMT", "version": "v1" } ]
2017-04-12
[ [ "Filzen", "Tracey", "" ], [ "Kutchukian", "Peter", "" ], [ "Hermes", "Jeffrey", "" ], [ "Li", "Jing", "" ], [ "Tudor", "Matthew", "" ] ]
High throughput mRNA expression profiling can be used to characterize the response of cell culture models to perturbations such as pharmacologic modulators and genetic perturbations. As profiling campaigns expand in scope, it is important to homogenize, summarize, and analyze the resulting data in a manner that captures significant biological signals in spite of various noise sources such as batch effects and stochastic variation. We used the L1000 platform for large-scale profiling of 978 genes, chosen to be representative of the genome as a whole, across thousands of compound treatments. Here, a method is described that uses deep learning techniques to convert the expression changes of the landmark genes into a perturbation barcode that reveals important features of the underlying data, performing better than the raw data in revealing important biological insights. The barcode captures compound structure and target information, in addition to predicting a compound's high throughput screening promiscuity, to a higher degree than the original data measurements, indicating that the approach uncovers underlying factors of the expression data that are otherwise entangled or masked by noise. Furthermore, we demonstrate that visualizations derived from the perturbation barcode can be used to more sensitively assign functions to unknown compounds through a guilt-by-association approach, which we use to predict and experimentally validate the activity of compounds on the MAPK pathway. The demonstrated application of deep metric learning to large-scale chemical genetics projects highlights the utility of this and related approaches to the extraction of insights and testable hypotheses from big, sometimes noisy data.
2102.10994
Yurui Ming
Yurui Ming
Coherence of Working Memory Study Between Deep Neural Network and Neurophysiology
null
null
null
null
q-bio.NC cs.LG cs.NE eess.SP
http://creativecommons.org/licenses/by/4.0/
The automatic feature extraction capability of deep neural networks (DNNs) gives them the potential to analyse complicated electroencephalogram (EEG) data captured in brain functionality research. This work investigates the potential coherent correspondence between the regions of interest (ROIs) that a DNN explores and the ROIs that conventional neurophysiologically oriented methods work with, exemplified in the case of a working memory study. The attention mechanism induced by global average pooling (GAP) is applied to a public EEG dataset of working memory to unveil these coherent ROIs via a classification problem. The result shows the alignment of ROIs from different research disciplines. This work asserts the confidence and promise of utilizing DNNs for EEG data analysis, albeit lacking an interpretation of the network operations.
[ { "created": "Sat, 6 Feb 2021 09:09:57 GMT", "version": "v1" } ]
2021-02-23
[ [ "Ming", "Yurui", "" ] ]
The automatic feature extraction capability of deep neural networks (DNNs) gives them the potential to analyse complicated electroencephalogram (EEG) data captured in brain functionality research. This work investigates the potential coherent correspondence between the regions of interest (ROIs) that a DNN explores and the ROIs that conventional neurophysiologically oriented methods work with, exemplified in the case of a working memory study. The attention mechanism induced by global average pooling (GAP) is applied to a public EEG dataset of working memory to unveil these coherent ROIs via a classification problem. The result shows the alignment of ROIs from different research disciplines. This work asserts the confidence and promise of utilizing DNNs for EEG data analysis, albeit lacking an interpretation of the network operations.
1901.09975
Farouk Nathoo
Yunlong Nie, Eugene Opoku, Laila Yasmin, Yin Song, Jie Wang, Sidi Wu, Vanessa Scarapicchia, Jodie Gawryluk, Liangliang Wang, Jiguo Cao, Farouk S. Nathoo
Spectral Dynamic Causal Modelling of Resting-State fMRI: Relating Effective Brain Connectivity in the Default Mode Network to Genetics
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We conduct an imaging genetics study to explore how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer's disease and mild cognitive impairment. We develop an analysis of longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) and genetic data obtained from a sample of 111 subjects with a total of 319 rs-fMRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A Dynamic Causal Model (DCM) is fit to the rs-fMRI scans to estimate effective brain connectivity within the DMN and related to a set of single nucleotide polymorphisms (SNPs) contained in an empirical disease-constrained set which is obtained out-of-sample from 663 ADNI subjects having only genome-wide data. We examine longitudinal data in both a 4-region and a 6-region network and relate longitudinal effective brain connectivity networks estimated using spectral DCM to SNPs using both linear mixed effect (LME) models as well as function-on-scalar regression (FSR). In the former case we implement a parametric bootstrap for testing SNP coefficients and make comparisons with p-values obtained from the chi-squared null distribution. We also implement a parametric bootstrap approach for testing regression functions in FSR and we make comparisons between p-values obtained from the parametric bootstrap to p-values obtained using the F-distribution with degrees-of-freedom based on Satterthwaite's approximation. In both networks we report on exploratory patterns of associations with relatively high ranks that exhibit stability to the differing assumptions made by both FSR and LME.
[ { "created": "Mon, 28 Jan 2019 19:58:32 GMT", "version": "v1" }, { "created": "Wed, 30 Jan 2019 21:14:29 GMT", "version": "v2" }, { "created": "Sat, 4 May 2019 03:33:14 GMT", "version": "v3" }, { "created": "Wed, 8 May 2019 18:39:55 GMT", "version": "v4" }, { "created": "Fri, 10 May 2019 07:53:07 GMT", "version": "v5" }, { "created": "Wed, 21 Aug 2019 20:54:57 GMT", "version": "v6" }, { "created": "Sat, 16 Nov 2019 23:32:35 GMT", "version": "v7" }, { "created": "Tue, 2 Jun 2020 05:35:39 GMT", "version": "v8" } ]
2020-06-03
[ [ "Nie", "Yunlong", "" ], [ "Opoku", "Eugene", "" ], [ "Yasmin", "Laila", "" ], [ "Song", "Yin", "" ], [ "Wang", "Jie", "" ], [ "Wu", "Sidi", "" ], [ "Scarapicchia", "Vanessa", "" ], [ "Gawryluk", "Jodie", "" ], [ "Wang", "Liangliang", "" ], [ "Cao", "Jiguo", "" ], [ "Nathoo", "Farouk S.", "" ] ]
We conduct an imaging genetics study to explore how effective brain connectivity in the default mode network (DMN) may be related to genetics within the context of Alzheimer's disease and mild cognitive impairment. We develop an analysis of longitudinal resting-state functional magnetic resonance imaging (rs-fMRI) and genetic data obtained from a sample of 111 subjects with a total of 319 rs-fMRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A Dynamic Causal Model (DCM) is fit to the rs-fMRI scans to estimate effective brain connectivity within the DMN and related to a set of single nucleotide polymorphisms (SNPs) contained in an empirical disease-constrained set which is obtained out-of-sample from 663 ADNI subjects having only genome-wide data. We examine longitudinal data in both a 4-region and a 6-region network and relate longitudinal effective brain connectivity networks estimated using spectral DCM to SNPs using both linear mixed effect (LME) models as well as function-on-scalar regression (FSR). In the former case we implement a parametric bootstrap for testing SNP coefficients and make comparisons with p-values obtained from the chi-squared null distribution. We also implement a parametric bootstrap approach for testing regression functions in FSR and we make comparisons between p-values obtained from the parametric bootstrap to p-values obtained using the F-distribution with degrees-of-freedom based on Satterthwaite's approximation. In both networks we report on exploratory patterns of associations with relatively high ranks that exhibit stability to the differing assumptions made by both FSR and LME.
2006.03915
Damian Knopoff
Nicola Bellomo, Richard Bingham, Mark A.J. Chaplain, Giovanni Dosi, Guido Forni, Damian A. Knopoff, John Lowengrub, Reidun Twarock and Maria Enrica Virgillito
A multi-scale model of virus pandemic: Heterogeneous interactive entities in a globally connected world
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is devoted to the multidisciplinary modelling of a pandemic initiated by an aggressive virus, specifically the so-called \textit{SARS--CoV--2 Severe Acute Respiratory Syndrome, corona virus n.2}. The study is developed within a multiscale framework accounting for the interaction of different spatial scales, from the small scale of the virus itself and cells, to the large scale of individuals and further up to the collective behaviour of populations. An interdisciplinary vision is developed thanks to the contributions of epidemiologists, immunologists and economists as well as those of mathematical modellers. The first part of the paper is devoted to understanding the complex features of the system and to the design of a modelling rationale. The modelling approach is treated in the second part of the paper by showing both how the virus propagates into infected individuals, successfully and not successfully recovered, and also the spatial patterns, which are subsequently studied by kinetic and lattice models. The third part reports the contribution of research in the fields of virology, epidemiology, immune competition, and economy focused also on social behaviours. Finally, a critical analysis is proposed looking ahead to research perspectives.
[ { "created": "Sat, 6 Jun 2020 16:45:59 GMT", "version": "v1" } ]
2020-06-09
[ [ "Bellomo", "Nicola", "" ], [ "Bingham", "Richard", "" ], [ "Chaplain", "Mark A. J.", "" ], [ "Dosi", "Giovanni", "" ], [ "Forni", "Guido", "" ], [ "Knopoff", "Damian A.", "" ], [ "Lowengrub", "John", "" ], [ "Twarock", "Reidun", "" ], [ "Virgillito", "Maria Enrica", "" ] ]
This paper is devoted to the multidisciplinary modelling of a pandemic initiated by an aggressive virus, specifically the so-called \textit{SARS--CoV--2 Severe Acute Respiratory Syndrome, corona virus n.2}. The study is developed within a multiscale framework accounting for the interaction of different spatial scales, from the small scale of the virus itself and cells, to the large scale of individuals and further up to the collective behaviour of populations. An interdisciplinary vision is developed thanks to the contributions of epidemiologists, immunologists and economists as well as those of mathematical modellers. The first part of the paper is devoted to understanding the complex features of the system and to the design of a modelling rationale. The modelling approach is treated in the second part of the paper by showing both how the virus propagates into infected individuals, successfully and not successfully recovered, and also the spatial patterns, which are subsequently studied by kinetic and lattice models. The third part reports the contribution of research in the fields of virology, epidemiology, immune competition, and economy focused also on social behaviours. Finally, a critical analysis is proposed looking ahead to research perspectives.
1905.05030
Daniele Proverbio
Daniele Proverbio and Marco Maggiora
Dynamical strategies for obstacle avoidance during Dictyostelium discoideum aggregation: a Multi-agent system model
null
null
null
null
q-bio.CB nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemotaxis, the movement of an organism in response to chemical stimuli, is a typical feature of many microbiological systems. In particular, the social amoeba \textit{Dictyostelium discoideum} is widely used as a model organism, but it is still not clear how it behaves in heterogeneous environments. A few models focusing on mechanical features have already addressed the question; however, we suggest that phenomenological models focusing on the population dynamics may provide new meaningful data. Consequently, by means of a specific Multi-agent system model, we study the dynamical features emerging from complex social interactions among individuals belonging to amoeba colonies. After defining an appropriate metric to quantitatively estimate the gathering process, we find that: a) obstacles play the role of local topological perturbation, as they alter the flux of chemical signals; b) physical obstacles (blocking the cellular motion and the chemical flux) and purely chemical obstacles (only interfering with chemical flux) elicit similar dynamical behaviors; c) a minimal program for robustly gathering simulated cells does not involve mechanisms for obstacle sensing and avoidance; d) fluctuations of the dynamics concur in preventing multiple stable clusters. Comparing those findings with previous results, we speculate that chemotactic cells can avoid obstacles by simply following the altered chemical gradient. Social interactions are sufficient to guarantee the aggregation of the whole colony past numerous obstacles.
[ { "created": "Mon, 13 May 2019 13:35:27 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 07:03:58 GMT", "version": "v2" }, { "created": "Mon, 16 Dec 2019 08:37:49 GMT", "version": "v3" } ]
2019-12-17
[ [ "Proverbio", "Daniele", "" ], [ "Maggiora", "Marco", "" ] ]
Chemotaxis, the movement of an organism in response to chemical stimuli, is a typical feature of many microbiological systems. In particular, the social amoeba \textit{Dictyostelium discoideum} is widely used as a model organism, but it is still not clear how it behaves in heterogeneous environments. A few models focusing on mechanical features have already addressed the question; however, we suggest that phenomenological models focusing on the population dynamics may provide new meaningful data. Consequently, by means of a specific Multi-agent system model, we study the dynamical features emerging from complex social interactions among individuals belonging to amoeba colonies. After defining an appropriate metric to quantitatively estimate the gathering process, we find that: a) obstacles play the role of local topological perturbation, as they alter the flux of chemical signals; b) physical obstacles (blocking the cellular motion and the chemical flux) and purely chemical obstacles (only interfering with chemical flux) elicit similar dynamical behaviors; c) a minimal program for robustly gathering simulated cells does not involve mechanisms for obstacle sensing and avoidance; d) fluctuations of the dynamics concur in preventing multiple stable clusters. Comparing those findings with previous results, we speculate that chemotactic cells can avoid obstacles by simply following the altered chemical gradient. Social interactions are sufficient to guarantee the aggregation of the whole colony past numerous obstacles.
2008.05612
Steph Owen MPhys
S. Owen
GPU accelerated enumeration and exploration of HP model genotype-phenotype maps for protein folding
25 pages, 20 figures
null
null
null
q-bio.BM physics.bio-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution can be broadly described in terms of mutations of the genotype and the subsequent selection of the phenotype. The full enumeration of a given genotype-phenotype (GP) map is therefore a powerful technique in examining evolutionary landscapes. However, because the number of genotypes typically grows exponentially with genome length, such calculations rapidly become intractable. Here I apply graphics processing unit (GPU) techniques to the hydrophobic-polar (HP) model for protein folding. This GP map is a simple and well-studied model for the complex process of protein folding. Prior studies on relatively small 2D and 3D lattices have been exclusively carried out using conventional central processing unit (CPU) approaches. By using GPU techniques, I was able to reproduce the pioneering calculations of Li et al. [1] with a speed-up of 580-700 fold over a CPU. I was also able to perform the largest enumeration to date of the 6x6 lattice. These novel calculations provide evidence that a popular "plum-pudding" metaphor that suggests that phenotypes are disconnected in genotype space does not describe the data. Instead a "spaghetti" metaphor of connected genotype networks may be more suitable. Furthermore, the data allows the relationships between designability and complexity within GP space to be explored. GPU approaches appear extremely well suited to GP mapping and the success of this work provides a promising introduction for its wider application in this field.
[ { "created": "Sat, 1 Aug 2020 22:59:48 GMT", "version": "v1" }, { "created": "Fri, 27 Nov 2020 16:16:18 GMT", "version": "v2" } ]
2020-11-30
[ [ "Owen", "S.", "" ] ]
Evolution can be broadly described in terms of mutations of the genotype and the subsequent selection of the phenotype. The full enumeration of a given genotype-phenotype (GP) map is therefore a powerful technique in examining evolutionary landscapes. However, because the number of genotypes typically grows exponentially with genome length, such calculations rapidly become intractable. Here I apply graphics processing unit (GPU) techniques to the hydrophobic-polar (HP) model for protein folding. This GP map is a simple and well-studied model for the complex process of protein folding. Prior studies on relatively small 2D and 3D lattices have been exclusively carried out using conventional central processing unit (CPU) approaches. By using GPU techniques, I was able to reproduce the pioneering calculations of Li et al. [1] with a speed-up of 580-700 fold over a CPU. I was also able to perform the largest enumeration to date of the 6x6 lattice. These novel calculations provide evidence that a popular "plum-pudding" metaphor that suggests that phenotypes are disconnected in genotype space does not describe the data. Instead a "spaghetti" metaphor of connected genotype networks may be more suitable. Furthermore, the data allows the relationships between designability and complexity within GP space to be explored. GPU approaches appear extremely well suited to GP mapping and the success of this work provides a promising introduction for its wider application in this field.
1307.3329
Ryan Solava
Ryan W. Solava and Tijana Milenkovi\'c
Revealing missing parts of the interactome
16 pages, 3 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein interaction networks (PINs) are often used to "learn" new biological function from their topology. Since current PINs are noisy, their computational de-noising via link prediction (LP) could improve the learning accuracy. LP uses the existing PIN topology to predict missing and spurious links. Many existing LP methods rely on shared immediate neighborhoods of the nodes to be linked. As such, they have limitations. Thus, in order to comprehensively study which topological properties of nodes in PINs dictate whether the nodes should be linked, we had to introduce novel sensitive LP measures that overcome the limitations of the existing methods. We systematically evaluate the new and existing LP measures by introducing "synthetic" noise to PINs and measuring how well the different measures reconstruct the original PINs. Our main findings are: 1) LP measures that favor nodes which are both "topologically similar" and have large shared extended neighborhoods are superior; 2) using more network topology often though not always improves LP accuracy; and 3) our new LP measures are superior to the existing measures. After evaluating the different methods, we use them to de-noise PINs. Importantly, we manage to improve biological correctness of the PINs by de-noising them, with respect to "enrichment" of the predicted interactions in Gene Ontology terms. Furthermore, we validate a statistically significant portion of the predicted interactions in independent, external PIN data sources. Software executables are freely available upon request.
[ { "created": "Fri, 12 Jul 2013 05:25:07 GMT", "version": "v1" } ]
2013-07-15
[ [ "Solava", "Ryan W.", "" ], [ "Milenković", "Tijana", "" ] ]
Protein interaction networks (PINs) are often used to "learn" new biological function from their topology. Since current PINs are noisy, their computational de-noising via link prediction (LP) could improve the learning accuracy. LP uses the existing PIN topology to predict missing and spurious links. Many existing LP methods rely on shared immediate neighborhoods of the nodes to be linked. As such, they have limitations. Thus, in order to comprehensively study which topological properties of nodes in PINs dictate whether the nodes should be linked, we had to introduce novel sensitive LP measures that overcome the limitations of the existing methods. We systematically evaluate the new and existing LP measures by introducing "synthetic" noise to PINs and measuring how well the different measures reconstruct the original PINs. Our main findings are: 1) LP measures that favor nodes which are both "topologically similar" and have large shared extended neighborhoods are superior; 2) using more network topology often though not always improves LP accuracy; and 3) our new LP measures are superior to the existing measures. After evaluating the different methods, we use them to de-noise PINs. Importantly, we manage to improve biological correctness of the PINs by de-noising them, with respect to "enrichment" of the predicted interactions in Gene Ontology terms. Furthermore, we validate a statistically significant portion of the predicted interactions in independent, external PIN data sources. Software executables are freely available upon request.
1309.2382
Benjamin Torben-Nielsen
Willem A.M. Wybo, Klaus M. Stiefel, Benjamin Torben-Nielsen
The Green's function formalism as a bridge between single and multi-compartmental modeling
16 pages, 2 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons are spatially extended structures that receive and process inputs on their dendrites. It is generally accepted that neuronal computations arise from the active integration of synaptic inputs along a dendrite between the input location and the location of spike generation in the axon initial segment. However, many applications, such as simulations of brain networks, use point-neurons --neurons without a morphological component-- as computational units to keep the conceptual complexity and computational costs low. Inevitably, these applications thus omit a fundamental property of neuronal computation. In this work, we present an approach to model an artificial synapse that mimics dendritic processing without the need to explicitly simulate dendritic dynamics. The model synapse employs an analytic solution for the cable equation to compute the neuron's membrane potential following dendritic inputs. Green's function formalism is used to derive the closed version of the cable equation. We show that by using this synapse model, point-neurons can achieve results that were previously limited to the realms of multi-compartmental models. Moreover, a computational advantage is achieved when only a small number of simulated synapses impinge on a morphologically elaborate neuron. Opportunities and limitations are discussed.
[ { "created": "Tue, 10 Sep 2013 06:10:42 GMT", "version": "v1" } ]
2013-09-11
[ [ "Wybo", "Willem A. M.", "" ], [ "Stiefel", "Klaus M.", "" ], [ "Torben-Nielsen", "Benjamin", "" ] ]
Neurons are spatially extended structures that receive and process inputs on their dendrites. It is generally accepted that neuronal computations arise from the active integration of synaptic inputs along a dendrite between the input location and the location of spike generation in the axon initial segment. However, many applications, such as simulations of brain networks, use point-neurons --neurons without a morphological component-- as computational units to keep the conceptual complexity and computational costs low. Inevitably, these applications thus omit a fundamental property of neuronal computation. In this work, we present an approach to model an artificial synapse that mimics dendritic processing without the need to explicitly simulate dendritic dynamics. The model synapse employs an analytic solution for the cable equation to compute the neuron's membrane potential following dendritic inputs. Green's function formalism is used to derive the closed version of the cable equation. We show that by using this synapse model, point-neurons can achieve results that were previously limited to the realms of multi-compartmental models. Moreover, a computational advantage is achieved when only a small number of simulated synapses impinge on a morphologically elaborate neuron. Opportunities and limitations are discussed.
q-bio/0408027
Morten Kloster
Morten Kloster
Analysis of evolution through competitive selection
12 pages, 10 figures
null
10.1103/PhysRevLett.95.168701
null
q-bio.PE physics.bio-ph
null
Recent studies of in vitro evolution of DNA via protein binding indicate that the evolution behavior is qualitatively different in different parameter regimes. I here present a general theory that is valid for a wide range of parameters, and which reproduces and extends previous results. Specifically, the mean-field theory of a general translation-invariant model can be reduced to the basic diffusion equation with a dynamic boundary condition. The simple analytical form yields both quantitatively accurate predictions and valuable insight into the principles involved. In particular, I introduce a cutoff criterion for finite populations that illustrates both of these qualities.
[ { "created": "Mon, 30 Aug 2004 21:07:49 GMT", "version": "v1" } ]
2009-11-10
[ [ "Kloster", "Morten", "" ] ]
Recent studies of in vitro evolution of DNA via protein binding indicate that the evolution behavior is qualitatively different in different parameter regimes. I here present a general theory that is valid for a wide range of parameters, and which reproduces and extends previous results. Specifically, the mean-field theory of a general translation-invariant model can be reduced to the basic diffusion equation with a dynamic boundary condition. The simple analytical form yields both quantitatively accurate predictions and valuable insight into the principles involved. In particular, I introduce a cutoff criterion for finite populations that illustrates both of these qualities.
2006.05665
Benjamin Ambrosio
Benjamin Ambrosio, M.A. Aziz-Alaoui
On a coupled time-dependent SIR models fitting with New York and New-Jersey states COVID-19 data
null
null
10.20944/preprints202006.0068.v1
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article describes a simple Susceptible Infected Recovered (SIR) model fitting with COVID-19 data for the month of March 2020 in New York (NY) state. The model is a classical SIR, but is non-autonomous; the rate of susceptible people becoming infected is adjusted over time in order to fit the available data. The death rate is also secondarily adjusted. Our fitting is made under the assumption that, due to the limited number of tests, a large part of the infected population has not been tested positive. In the last part, we extend the model to take into account the daily fluxes between New Jersey (NJ) and NY states and fit the data for both states. Our simple model fits the available data, and illustrates typical dynamics of the disease: exponential increase, apex and decrease. The model highlights a decrease in the transmission rate over the period which gives a quantitative illustration about how lockdown policies reduce the spread of the pandemic. The coupled model with NY and NJ states shows a wave in NJ following the NY wave, illustrating the mechanism of spread from one attractive hot spot to its neighbor.
[ { "created": "Wed, 10 Jun 2020 05:45:44 GMT", "version": "v1" } ]
2020-06-11
[ [ "Ambrosio", "Benjamin", "" ], [ "Aziz-Alaoui", "M. A.", "" ] ]
This article describes a simple Susceptible Infected Recovered (SIR) model fitting with COVID-19 data for the month of March 2020 in New York (NY) state. The model is a classical SIR, but is non-autonomous; the rate of susceptible people becoming infected is adjusted over time in order to fit the available data. The death rate is also secondarily adjusted. Our fitting is made under the assumption that, due to the limited number of tests, a large part of the infected population has not been tested positive. In the last part, we extend the model to take into account the daily fluxes between New Jersey (NJ) and NY states and fit the data for both states. Our simple model fits the available data, and illustrates typical dynamics of the disease: exponential increase, apex and decrease. The model highlights a decrease in the transmission rate over the period which gives a quantitative illustration about how lockdown policies reduce the spread of the pandemic. The coupled model with NY and NJ states shows a wave in NJ following the NY wave, illustrating the mechanism of spread from one attractive hot spot to its neighbor.
2206.04989
Gopikrishnan Chirappurathu Remesan
Gopikrishnan C. Remesan and Jennifer A Flegg and Helen M Byrne
Two-phase model of compressive stress induced on a surrounding hyperelastic medium by an expanding tumour
26 pages, 7 figures
null
null
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
\emph{In vitro} experiments in which tumour cells are seeded in a gelatinous medium, or hydrogel, show how mechanical interactions between tumour cells and the tissue in which they are embedded, together with local levels of an externally-supplied, diffusible nutrient (e.g., oxygen), affect the tumour's growth dynamics. In this article, we present a mathematical model that describes these \emph{in vitro} experiments. We use the model to understand how tumour growth generates mechanical deformations in the hydrogel and how these deformations in turn influence the tumour's growth. The hydrogel is viewed as a nonlinear hyperelastic material and the tumour is modelled as a two-phase mixture, comprising a viscous tumour cell phase and an isotropic, inviscid interstitial fluid phase. Using a combination of numerical and analytical techniques, we show how the tumour's growth dynamics change as the mechanical properties of the hydrogel vary. When the hydrogel is soft, nutrient availability dominates the dynamics: the tumour evolves to a large equilibrium configuration where the proliferation rate of nutrient-rich cells on the tumour boundary balances the death rate of nutrient-starved cells in the central, necrotic core. As the hydrogel stiffness increases, mechanical resistance to growth increases and the tumour's equilibrium size decreases. Indeed, for small tumours embedded in stiff hydrogels, the inhibitory force experienced by the tumour cells may be so large that the tumour is eliminated. Analysis of the model identifies parameter regimes in which the presence of the hydrogel drives tumour elimination.
[ { "created": "Fri, 10 Jun 2022 10:52:00 GMT", "version": "v1" } ]
2022-06-13
[ [ "Remesan", "Gopikrishnan C.", "" ], [ "Flegg", "Jennifer A", "" ], [ "Byrne", "Helen M", "" ] ]
\emph{In vitro} experiments in which tumour cells are seeded in a gelatinous medium, or hydrogel, show how mechanical interactions between tumour cells and the tissue in which they are embedded, together with local levels of an externally-supplied, diffusible nutrient (e.g., oxygen), affect the tumour's growth dynamics. In this article, we present a mathematical model that describes these \emph{in vitro} experiments. We use the model to understand how tumour growth generates mechanical deformations in the hydrogel and how these deformations in turn influence the tumour's growth. The hydrogel is viewed as a nonlinear hyperelastic material and the tumour is modelled as a two-phase mixture, comprising a viscous tumour cell phase and an isotropic, inviscid interstitial fluid phase. Using a combination of numerical and analytical techniques, we show how the tumour's growth dynamics change as the mechanical properties of the hydrogel vary. When the hydrogel is soft, nutrient availability dominates the dynamics: the tumour evolves to a large equilibrium configuration where the proliferation rate of nutrient-rich cells on the tumour boundary balances the death rate of nutrient-starved cells in the central, necrotic core. As the hydrogel stiffness increases, mechanical resistance to growth increases and the tumour's equilibrium size decreases. Indeed, for small tumours embedded in stiff hydrogels, the inhibitory force experienced by the tumour cells may be so large that the tumour is eliminated. Analysis of the model identifies parameter regimes in which the presence of the hydrogel drives tumour elimination.
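The qualitative result of this abstract (equilibrium tumour size decreasing with hydrogel stiffness, down to elimination for stiff gels) can be illustrated with a drastically simplified radius ODE: proliferation falls with size through nutrient limitation, while a stiffness-dependent compressive term inhibits growth. This caricature, with made-up rates, is not the paper's two-phase hyperelastic model.

```python
def tumour_radius(s, p=2.0, d=0.5, r0=0.5, dt=0.01, t_end=200.0):
    """Toy ODE for tumour radius r(t) in a hydrogel of stiffness s:
        dr/dt = r * ( p/(1+r) - d - s )
    p/(1+r): nutrient-limited proliferation, weaker for larger tumours;
    d: death rate of nutrient-starved cells; s: stiffness-dependent
    mechanical inhibition. All values are illustrative assumptions."""
    r = r0
    for _ in range(int(t_end / dt)):
        r += dt * r * (p / (1.0 + r) - d - s)
        r = max(r, 0.0)
    return r

# Equilibrium size shrinks as the hydrogel stiffens; for s > p - d the
# inhibitory term dominates even for small tumours and r -> 0 (elimination).
for s in (0.0, 0.5, 1.0, 2.0):
    print(s, tumour_radius(s))
```

The analytic equilibrium of this caricature is r* = p/(d+s) - 1, which reproduces the monotone size decrease with stiffness and the elimination regime identified by the paper's analysis.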
1206.0858
Jin Yang
Jin Yang and John E. Pearson
Origins of concentration dependence of waiting times for single-molecule fluorescence binding
11 pages, 4 figures; J. Chem. Phys., 137, 2012
null
10.1063/1.4729947
null
q-bio.QM physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binary fluorescence time series obtained from single-molecule imaging experiments can be used to infer protein binding kinetics, in particular, association and dissociation rate constants from waiting time statistics of fluorescence intensity changes. In many cases, rate constants inferred from fluorescence time series exhibit nonintuitive dependence on ligand concentration. Here we examine several possible mechanistic and technical origins that may induce ligand dependence of rate constants. Using aggregated Markov models, we show under the condition of detailed balance that non-fluorescent bindings and missed events due to transient interactions, instead of conformation fluctuations, may underlie the dependence of waiting times and thus apparent rate constants on ligand concentrations. In general, waiting times are rational functions of ligand concentration. The shape of concentration dependence is qualitatively affected by the number of binding sites in the single molecule and is quantitatively tuned by model parameters. We also show that ligand dependence can be caused by non-equilibrium conditions which result in violations of detailed balance and require an energy source. As a different but significant mechanism, we examine the effect of ambient buffers that can substantially reduce the effective concentration of ligands that interact with the single molecules. To demonstrate the effects of these mechanisms, we applied our results to analyze the concentration dependence in a single-molecule experiment of EGFR binding to fluorophore-labeled adaptor protein Grb2 by Morimatsu et al. (PNAS, 104:18013, 2007).
[ { "created": "Tue, 5 Jun 2012 09:40:15 GMT", "version": "v1" }, { "created": "Thu, 14 Jun 2012 13:46:16 GMT", "version": "v2" } ]
2012-06-15
[ [ "Yang", "Jin", "" ], [ "Pearson", "John E.", "" ] ]
Binary fluorescence time series obtained from single-molecule imaging experiments can be used to infer protein binding kinetics, in particular, association and dissociation rate constants from waiting time statistics of fluorescence intensity changes. In many cases, rate constants inferred from fluorescence time series exhibit nonintuitive dependence on ligand concentration. Here we examine several possible mechanistic and technical origins that may induce ligand dependence of rate constants. Using aggregated Markov models, we show under the condition of detailed balance that non-fluorescent bindings and missed events due to transient interactions, instead of conformation fluctuations, may underlie the dependence of waiting times and thus apparent rate constants on ligand concentrations. In general, waiting times are rational functions of ligand concentration. The shape of concentration dependence is qualitatively affected by the number of binding sites in the single molecule and is quantitatively tuned by model parameters. We also show that ligand dependence can be caused by non-equilibrium conditions which result in violations of detailed balance and require an energy source. As a different but significant mechanism, we examine the effect of ambient buffers that can substantially reduce the effective concentration of ligands that interact with the single molecules. To demonstrate the effects of these mechanisms, we applied our results to analyze the concentration dependence in a single-molecule experiment of EGFR binding to fluorophore-labeled adaptor protein Grb2 by Morimatsu et al. (PNAS, 104:18013, 2007).
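The abstract's central point (that non-fluorescent binding makes waiting times rational functions of ligand concentration) can be demonstrated with a minimal aggregated Markov model: one fluorescent and one non-fluorescent binding pathway, with the fluorescent bound state treated as absorbing. The states, rates, and values below are a toy assumption, not the models analyzed in the paper.

```python
import numpy as np

def mean_wait_dark(c, k_f, k_n, k_off):
    """Mean waiting time in the dark states before the first fluorescent
    binding event, for a toy aggregated Markov model.
    States: 0 = unbound (dark), 1 = non-fluorescently bound (dark);
    fluorescent binding (rate k_f * c from state 0) is absorbing."""
    # Generator restricted to the dark states; the absorption rate
    # k_f * c appears only through the diagonal of state 0.
    Q = np.array([[-(k_f + k_n) * c, k_n * c],
                  [k_off,            -k_off]])
    # Mean first-passage times tau satisfy Q @ tau = -1.
    tau = np.linalg.solve(Q, -np.ones(2))
    return tau[0]

# Naively inferring an on-rate as 1/(c * tau) gives a value that
# depends on concentration c -- the "nonintuitive dependence" above.
for c in (0.1, 1.0, 10.0):
    tau = mean_wait_dark(c, k_f=1.0, k_n=2.0, k_off=5.0)
    print(c, 1.0 / (c * tau))
```

For this two-state dark aggregate the closed form is tau = 1/(k_f c) + k_n/(k_f k_off), a rational function of c: the apparent association rate constant 1/(c tau) = k_f/(1 + (k_n/k_off) c) decreases as ligand concentration grows.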
1310.3185
Filipe Tostevin
Alexander Buchner, Filipe Tostevin, Florian Hinzpeter, Ulrich Gerland
Optimization of collective enzyme activity via spatial localization
12 pages, 9 figures + 5 pages supplementary material
J. Chem. Phys. 139, 135101 (2013)
10.1063/1.4823504
null
q-bio.SC q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spatial organization of enzymes often plays a crucial role in the functionality and efficiency of enzymatic pathways. To fully understand the design and operation of enzymatic pathways, it is therefore crucial to understand how the relative arrangement of enzymes affects pathway function. Here we investigate the effect of enzyme localization on the flux of a minimal two-enzyme pathway within a reaction-diffusion model. We consider different reaction kinetics, spatial dimensions, and loss mechanisms for intermediate substrate molecules. Our systematic analysis of the different regimes of this model reveals both universal features and distinct characteristics in the phenomenology of these different systems. In particular, the distribution of the second pathway enzyme that maximizes the reaction flux undergoes a generic transition from co-localization with the first enzyme when the catalytic efficiency of the second enzyme is low, to an extended profile when the catalytic efficiency is high. However, the critical transition point and the shape of the extended optimal profile is significantly affected by specific features of the model. We explain the behavior of these different systems in terms of the underlying stochastic reaction and diffusion processes of single substrate molecules.
[ { "created": "Fri, 11 Oct 2013 16:15:30 GMT", "version": "v1" } ]
2015-06-17
[ [ "Buchner", "Alexander", "" ], [ "Tostevin", "Filipe", "" ], [ "Hinzpeter", "Florian", "" ], [ "Gerland", "Ulrich", "" ] ]
The spatial organization of enzymes often plays a crucial role in the functionality and efficiency of enzymatic pathways. To fully understand the design and operation of enzymatic pathways, it is therefore crucial to understand how the relative arrangement of enzymes affects pathway function. Here we investigate the effect of enzyme localization on the flux of a minimal two-enzyme pathway within a reaction-diffusion model. We consider different reaction kinetics, spatial dimensions, and loss mechanisms for intermediate substrate molecules. Our systematic analysis of the different regimes of this model reveals both universal features and distinct characteristics in the phenomenology of these different systems. In particular, the distribution of the second pathway enzyme that maximizes the reaction flux undergoes a generic transition from co-localization with the first enzyme when the catalytic efficiency of the second enzyme is low, to an extended profile when the catalytic efficiency is high. However, the critical transition point and the shape of the extended optimal profile is significantly affected by specific features of the model. We explain the behavior of these different systems in terms of the underlying stochastic reaction and diffusion processes of single substrate molecules.
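The minimal two-enzyme pathway described here can be sketched as a 1D steady-state reaction-diffusion calculation: enzyme 1 injects intermediate at one end, the intermediate diffuses, is lost uniformly, and is consumed by enzyme 2 whose spatial profile we vary. The geometry, discretization, and parameter values are illustrative assumptions; only the qualitative comparison of co-localized versus extended profiles follows the abstract.

```python
import numpy as np

def pathway_flux(e2, k2, loss=1.0, D=1.0, J=1.0):
    """Fraction of intermediate converted by enzyme 2 at steady state.

    1D domain [0,1], n = len(e2) cells. Intermediate enters the first
    cell at flux J (enzyme 1 at x=0), diffuses with coefficient D,
    is lost uniformly at rate `loss`, and reacts at rate k2*e2(x)."""
    n = len(e2)
    h = 1.0 / n
    removal = loss + k2 * e2                 # local removal rate per cell
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = -removal[i]
        if i > 0:                            # diffusive coupling, reflecting ends
            A[i, i] -= D / h**2
            A[i, i - 1] = D / h**2
        if i < n - 1:
            A[i, i] -= D / h**2
            A[i, i + 1] = D / h**2
    b[0] = -J / h                            # source flux into the first cell
    u = np.linalg.solve(A, b)
    return float(np.sum(k2 * e2 * u) * h)    # flux captured by enzyme 2

n = 200
uniform = np.ones(n)                         # extended enzyme-2 profile
clustered = np.zeros(n)
clustered[:10] = n / 10.0                    # same total enzyme, co-localized at x=0
for k2 in (0.1, 100.0):
    print(k2, pathway_flux(clustered, k2), pathway_flux(uniform, k2))
```

Both profiles carry the same total amount of enzyme 2. In this toy setup, co-localization with the source wins when k2 is small, in line with the generic transition described in the abstract.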