Columns (type, length range):

id: stringlengths, 9 to 13
submitter: stringlengths, 4 to 48
authors: stringlengths, 4 to 9.62k
title: stringlengths, 4 to 343
comments: stringlengths, 2 to 480
journal-ref: stringlengths, 9 to 309
doi: stringlengths, 12 to 138
report-no: stringclasses, 277 values
categories: stringlengths, 8 to 87
license: stringclasses, 9 values
orig_abstract: stringlengths, 27 to 3.76k
versions: listlengths, 1 to 15
update_date: stringlengths, 10 to 10
authors_parsed: listlengths, 1 to 147
abstract: stringlengths, 24 to 3.75k
1604.08890
Elizaveta Guseva
Elizaveta A Guseva, Ronald N Zuckermann, Ken A Dill
How did prebiotic polymers become informational foldamers?
12 pages, 10 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mystery about the origins of life is which molecular structures $-$ and what spontaneous processes $-$ drove the autocatalytic transition from simple chemistry to biology. Using the HP lattice model of polymer sequence spaces leads to the prediction that random sequences of hydrophobic ($H$) and polar ($P$) monomers can collapse into relatively compact structures that expose hydrophobic surfaces and act as primitive versions of today's protein catalysts, elongating other such HP polymers as ribosomes now do. Such foldamer-catalysts form an autocatalytic set, growing short chains into longer chains that have particular sequences. The system has the capacity for multimodality: the ability to settle into multiple distinct quasi-stable states characterized by different groups of dominant polymers. This is a testable mechanism that we believe is relevant to the early origins of life.
[ { "created": "Thu, 28 Apr 2016 19:07:45 GMT", "version": "v1" } ]
2016-05-02
[ [ "Guseva", "Elizaveta A", "" ], [ "Zuckermann", "Ronald N", "" ], [ "Dill", "Ken A", "" ] ]
1910.05271
Elena Kalinina
Elena Kalinina, Fabian Pedregosa, Vittorio Iacovella, Emanuele Olivetti, Paolo Avesani
A Test for Shared Patterns in Cross-modal Brain Activation Analysis
5 figures, tables after References (as required by SciRep template)
null
null
null
q-bio.NC cs.LG stat.ML stat.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the extent to which different cognitive modalities (understood here as the set of cognitive processes underlying the elaboration of a stimulus by the brain) rely on overlapping neural representations is a fundamental issue in cognitive neuroscience. In the last decade, the identification of shared activity patterns has mostly been framed as a supervised learning problem. For instance, a classifier is trained to discriminate categories (e.g. faces vs. houses) in modality I (e.g. perception) and tested on the same categories in modality II (e.g. imagery). This type of analysis is often referred to as cross-modal decoding. In this paper we take a different approach and instead formulate the problem of assessing shared patterns across modalities within the framework of statistical hypothesis testing. We propose both an appropriate test statistic and a scheme based on permutation testing to compute the significance of this test while making only minimal distributional assumptions. We call this test the cross-modal permutation test (CMPT). We also provide empirical evidence on synthetic datasets that our approach has greater statistical power than the cross-modal decoding method while maintaining a low Type I error rate (rejecting a true null hypothesis). We compare both approaches on an fMRI dataset with three different cognitive modalities (perception, imagery, visual search). Finally, we show how CMPT can be combined with Searchlight analysis to explore the spatial distribution of shared activity patterns.
[ { "created": "Tue, 8 Oct 2019 19:33:49 GMT", "version": "v1" } ]
2019-10-14
[ [ "Kalinina", "Elena", "" ], [ "Pedregosa", "Fabian", "" ], [ "Iacovella", "Vittorio", "" ], [ "Olivetti", "Emanuele", "" ], [ "Avesani", "Paolo", "" ] ]
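The permutation-testing idea in the abstract above can be sketched with a toy example. The following is a minimal illustration, not the paper's actual CMPT statistic: here the statistic is the mean correlation between grand-mean-centered class-mean patterns of the two modalities, and the null distribution is built by shuffling category labels in one modality only. All names, sizes, and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_stat(X_a, X_b, y_a, y_b):
    """Mean correlation between grand-mean-centered class-mean patterns."""
    classes = np.unique(y_a)
    Ma = np.array([X_a[y_a == c].mean(axis=0) for c in classes])
    Mb = np.array([X_b[y_b == c].mean(axis=0) for c in classes])
    Ma = Ma - Ma.mean(axis=0)   # remove the pattern shared by all classes
    Mb = Mb - Mb.mean(axis=0)
    return float(np.mean([np.corrcoef(Ma[i], Mb[i])[0, 1]
                          for i in range(len(classes))]))

def toy_cmpt(X_a, X_b, y, n_perm=500):
    """Permutation p-value: labels are shuffled in modality B only."""
    observed = cross_modal_stat(X_a, X_b, y, y)
    null = np.array([cross_modal_stat(X_a, X_b, y, rng.permutation(y))
                     for _ in range(n_perm)])
    # +1 correction avoids a p-value of exactly zero
    p = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p

# Synthetic data: 40 trials, 2 classes, class signal shared across modalities
y = np.repeat([0, 1], 20)
proto = rng.normal(size=(2, 50))          # one prototype pattern per class
X_a = proto[y] + 0.5 * rng.normal(size=(40, 50))
X_b = proto[y] + 0.5 * rng.normal(size=(40, 50))

obs, p = toy_cmpt(X_a, X_b, y)
```

With a genuinely shared class structure, the observed statistic is near 1 and the permutation p-value is small; with independent structure across modalities it would not be.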
1512.00745
Jayanta Kumar Das
Jayanta Kumar Das, Atrayee Majumder, Pabitra Pal Choudhury
Understanding of Genetic Code Degeneracy and New Way of Classifying of Protein Family: A Mathematical Approach
pages-5, Tables-6
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genetic code is the set of rules by which information encoded in genetic material (DNA or RNA sequences) is translated into proteins (amino acid sequences) by living cells. The code defines a mapping between tri-nucleotide sequences, called codons, and amino acids. Since there are 20 amino acids and 64 possible tri-nucleotide sequences, more than one of these 64 triplets can code for a single amino acid, which gives rise to the problem of degeneracy. This manuscript explains the underlying logic of the degeneracy of the genetic code from a mathematical point of view, using a parameter named Impression. Classification of protein families is also a long-standing problem in biochemistry and genomics. Proteins belonging to a particular class share certain biochemical properties which are of utmost importance for new drug design. Using the same parameter Impression together with graph-theoretic properties, we have also devised a new way of classifying a protein family.
[ { "created": "Mon, 30 Nov 2015 11:01:49 GMT", "version": "v1" } ]
2015-12-03
[ [ "Das", "Jayanta Kumar", "" ], [ "Majumder", "Atrayee", "" ], [ "Choudhury", "Pabitra Pal", "" ] ]
1205.0321
Ramon Ferrer i Cancho
Ramon Ferrer-i-Cancho and Brenda McCowan
The span of correlations in dolphin whistle sequences
New Tables 3 and 4
Journal of Statistical Mechanics, P06002 (2012)
10.1088/1742-5468/2012/06/P06002
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-range correlations are found in symbolic sequences from human language, music and DNA. Determining the span of correlations in dolphin whistle sequences is crucial for shedding light on their communicative complexity. Dolphin whistles share various statistical properties with human words, e.g. Zipf's law for word frequencies (namely that the probability of the $i$th most frequent word of a text is about $i^{-\alpha}$) and a parallel of the tendency of more frequent words to have more meanings. The finding of Zipf's law for word frequencies in dolphin whistles has been the topic of an intense debate on its implications. One of the major arguments against the relevance of Zipf's law in dolphin whistles is that it is not possible to distinguish the outcome of a die-rolling experiment from that of a linguistic or communicative source producing Zipf's law for word frequencies. Here we show that statistically significant whistle-whistle correlations extend back to the 2nd previous whistle in the sequence using a global randomization test, and to the 4th previous whistle using a local randomization test. None of these correlations would be expected from a die-rolling experiment, or from other simple explanations of Zipf's law for word frequencies, such as Simon's model, that produce sequences of unpredictable elements.
[ { "created": "Wed, 2 May 2012 04:49:19 GMT", "version": "v1" }, { "created": "Wed, 9 May 2012 12:29:23 GMT", "version": "v2" } ]
2014-12-03
[ [ "Ferrer-i-Cancho", "Ramon", "" ], [ "McCowan", "Brenda", "" ] ]
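The global randomization test mentioned above can be illustrated on a synthetic symbolic sequence. This is a hedged sketch of the general logic, not the authors' exact procedure: the statistic here is the fraction of positions whose symbol repeats the symbol k steps back, and the null distribution comes from globally shuffling the sequence, which destroys all sequential dependence while preserving symbol frequencies (and hence any Zipf-like frequency distribution).

```python
import numpy as np

rng = np.random.default_rng(1)

def lag_match_rate(seq, k):
    """Fraction of positions whose symbol equals the symbol k steps back."""
    seq = np.asarray(seq)
    return float(np.mean(seq[k:] == seq[:-k]))

def global_randomization_test(seq, k, n_perm=500):
    observed = lag_match_rate(seq, k)
    null = np.array([lag_match_rate(rng.permutation(seq), k)
                     for _ in range(n_perm)])
    p = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p

# A Markov sequence over 5 symbols with a strong tendency to repeat itself
seq, state = [], 0
for _ in range(500):
    if rng.random() >= 0.7:          # with prob 0.3, jump to a random symbol
        state = int(rng.integers(5))
    seq.append(state)
seq = np.array(seq)

obs, p = global_randomization_test(seq, k=1)
```

For a memoryless (die-rolling) source the observed match rate would sit inside the null distribution; here the lag-1 dependence pushes it well above.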
2303.02015
Birgitta Dresp-Langley
Birgitta Dresp-Langley
The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience
null
Information 2023; 14(2):82
10.3390/info14020082
null
q-bio.NC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two universal functional principles of Adaptive Resonance Theory simulate the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are revisited here on the basis of examples drawn from the original code and from some of the most recent related empirical findings on contextual modulation in the brain, highlighting the potential of Grossberg's pioneering insights and groundbreaking theoretical work for intelligent solutions in the domain of developmental and cognitive robotics.
[ { "created": "Fri, 3 Mar 2023 15:31:14 GMT", "version": "v1" } ]
2023-03-06
[ [ "Dresp-Langley", "Birgitta", "" ] ]
2001.07284
Jose A Capitan
Jose A. Capitan, Sara Cuenda, and David Alonso
Competitive dominance in plant communities: Modeling approaches and theoretical predictions
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantitative predictions about the processes that promote species coexistence are a subject of active research in ecology. In particular, competitive interactions are known to shape and maintain ecological communities, and situations where some species outcompete or dominate others are key to describing natural ecosystems. Here we develop ecological theory using a stochastic, synthetic framework for plant community assembly that leads to predictions amenable to empirical testing. We propose two stochastic continuous-time Markov models that incorporate competitive dominance through a hierarchy of species heights. The first model, which is spatially implicit, predicts both the expected number of species that survive and the conditions under which heights are clustered in realized model communities. The second allows spatially explicit interactions between individuals, together with alternative mechanisms that can help shorter plants overcome height-driven competition, and it demonstrates that clustering patterns persist not only locally but also across increasing spatial scales. Moreover, although plants are height-clustered in the spatially explicit model, plant species abundances are not necessarily skewed toward taller plants.
[ { "created": "Mon, 20 Jan 2020 23:33:08 GMT", "version": "v1" } ]
2020-01-22
[ [ "Capitan", "Jose A.", "" ], [ "Cuenda", "Sara", "" ], [ "Alonso", "David", "" ] ]
q-bio/0508009
Trinh Xuan Hoang
Jayanth R. Banavar, Trinh Xuan Hoang, Amos Maritan
Proteins and polymers
7 pages, 6 figures
J. Chem. Phys. 122, 234910 (2005)
10.1063/1.1940059
null
q-bio.BM cond-mat.soft
null
Proteins, chain molecules of amino acids, behave in ways which are similar to each other yet quite distinct from standard compact polymers. We demonstrate that the Flory theorem, derived for polymer melts, holds for compact protein native state structures and is not incompatible with the existence of structured building blocks such as $\alpha$-helices and $\beta$-strands. We present a discussion on how the notion of the thickness of a polymer chain, besides being useful in describing a chain molecule in the continuum limit, plays a vital role in interpolating between conventional polymer physics and the phase of matter associated with protein structures.
[ { "created": "Mon, 8 Aug 2005 11:48:51 GMT", "version": "v1" } ]
2009-11-11
[ [ "Banavar", "Jayanth R.", "" ], [ "Hoang", "Trinh Xuan", "" ], [ "Maritan", "Amos", "" ] ]
1608.06314
Stephen Montgomery-Smith
Stephen Montgomery-Smith and Hesam Oveys
Age-dependent Branching Processes and Applications to the Luria-Delbr\"uck Experiment
null
Electron. J. Differential Equations, Vol. 2021 (2021), No. 56, pp. 1-22
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbial populations adapt to their environment by acquiring advantageous mutations, but in the early twentieth century, questions arose about how these organisms acquire mutations. The experiment of Salvador Luria and Max Delbr\"uck, which won them a Nobel Prize in 1969, confirmed that mutations do not occur out of necessity, but instead can occur many generations before there is a selective advantage, and thus organisms follow Darwinian rather than Lamarckian evolution. Since then, new areas of research involving microbial evolution have emerged as a result of their experiment. Determining the mutation rate of a cell is one such area. Probability distributions that determine the number of mutants in a large population have been derived by D. E. Lea, C. A. Coulson, and J. B. S. Haldane. However, not much work has been done for the case where the time of cell division depends on cell age, and even less when cell division is asymmetric, which is the case in most microbial populations. Using probability generating function methods, we rigorously construct a probability distribution for the cell population size given a life-span distribution for both mother and daughter cells, and then determine its asymptotic growth rate. We use this to construct a probability distribution for the number of mutants in a large cell population, which can be used with likelihood methods to estimate the cell mutation rate.
[ { "created": "Mon, 22 Aug 2016 21:05:41 GMT", "version": "v1" } ]
2021-06-24
[ [ "Montgomery-Smith", "Stephen", "" ], [ "Oveys", "Hesam", "" ] ]
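The "jackpot" behaviour behind the Luria-Delbr\"uck mutant distribution can be seen in a few lines of simulation. The sketch below uses a deliberately simplified model (synchronous binary fission with a per-daughter mutation probability), not the age-dependent, asymmetric branching process analyzed in the paper; all parameter values are illustrative. Mutations arising in early generations found large mutant clones, so the distribution of mutant counts across parallel cultures is heavily right-skewed.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_culture(generations=15, mu=1e-4):
    """Synchronous binary fission; each daughter cell mutates with prob mu."""
    wild, mutant = 1, 0
    for _ in range(generations):
        daughters = 2 * wild
        new_mutants = rng.binomial(daughters, mu)  # mutations this generation
        wild = daughters - new_mutants
        mutant = 2 * mutant + new_mutants          # existing mutants also divide
    return mutant

# Mutant counts across 2000 independent cultures
counts = np.array([grow_culture() for _ in range(2000)])
```

The sample mean of `counts` sits well above the median, the signature of the heavy-tailed (jackpot-dominated) Luria-Delbr\"uck distribution that makes naive averaging a poor mutation-rate estimator.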
1307.8252
Simone Pigolotti
Simone Pigolotti and Roberto Benzi
Selective advantage of diffusing faster
8 pages, 5 figures (Main Text + Supplementary Information). Accepted version
Phys. Rev. Lett. 112, 188102 (2014)
10.1103/PhysRevLett.112.188102
null
q-bio.PE cond-mat.stat-mech nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a stochastic spatial model of biological competition in which two species have the same birth and death rates, but different diffusion constants. In the absence of this difference, the model can be considered as an off-lattice version of the Voter model and presents similar coarsening properties. We show that even a relative difference in diffusivity on the order of a few percent may lead to a strong bias in the coarsening process favoring the more agile species. We theoretically quantify this selective advantage and present analytical formulas for the average growth of the fastest species and its fixation probability.
[ { "created": "Wed, 31 Jul 2013 08:50:12 GMT", "version": "v1" }, { "created": "Fri, 16 May 2014 09:45:02 GMT", "version": "v2" } ]
2015-06-16
[ [ "Pigolotti", "Simone", "" ], [ "Benzi", "Roberto", "" ] ]
1407.7566
Eric Strobl
Eric V. Strobl, Shyam Visweswaran
Dependence versus Conditional Dependence in Local Causal Discovery from Gene Expression Data
11 pages, 2 algorithms, 4 figures, 5 tables
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Algorithms that discover variables which are causally related to a target may inform the design of experiments. With observational gene expression data, many methods discover causal variables by measuring each variable's degree of statistical dependence with the target using dependence measures (DMs). However, other methods measure each variable's ability to explain the statistical dependence between the target and the remaining variables in the data using conditional dependence measures (CDMs), since this strategy is guaranteed to find the target's direct causes, direct effects, and direct causes of the direct effects in the infinite sample limit. In this paper, we design a new algorithm in order to systematically compare the relative abilities of DMs and CDMs in discovering causal variables from gene expression data. Results: The proposed algorithm using a CDM is sample efficient, since it consistently outperforms other state-of-the-art local causal discovery algorithms when sample sizes are small. However, the proposed algorithm using a CDM outperforms the proposed algorithm using a DM only when sample sizes are above several hundred. These results suggest that accurate causal discovery from gene expression data using current CDM-based algorithms requires datasets with at least several hundred samples. Availability: The proposed algorithm is freely available at https://github.com/ericstrobl/DvCD.
[ { "created": "Mon, 28 Jul 2014 20:52:18 GMT", "version": "v1" } ]
2014-07-30
[ [ "Strobl", "Eric V.", "" ], [ "Visweswaran", "Shyam", "" ] ]
1403.1043
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Coupling-Induced Population Synchronization in An Excitatory Population of Subthreshold Izhikevich Neurons
null
Cognitive Neurodynamics, 7, 495-503 (2013)
10.1007/s11571-013-9256-y
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an excitatory population of subthreshold Izhikevich neurons which exhibit noise-induced firings. By varying the coupling strength $J$, we investigate population synchronization between the noise-induced firings, which may be used for efficient cognitive processing such as sensory perception, multisensory binding, selective attention, and memory formation. As $J$ is increased, rich types of population synchronization (e.g., spike, burst, and fast spike synchronization) are found to occur. Transitions between population synchronization and incoherence are well described in terms of an order parameter $\cal{O}$. Finally, for sufficiently strong coupling, oscillator death (quenching of noise-induced spiking) occurs because each neuron is attracted to a noisy equilibrium state. The oscillator death leads to a transition from firing to non-firing states at the population level, which may be well described in terms of the time-averaged population spike rate $\overline{R}$. In addition to the statistical-mechanical analysis using $\cal{O}$ and $\overline{R}$, each population and individual state is also characterized by using techniques of nonlinear dynamics such as the raster plot of neural spikes, the time series of the membrane potential, and the phase portrait. We note that population synchronization of noise-induced firings may lead to the emergence of synchronous brain rhythms in a noisy environment, associated with diverse cognitive functions.
[ { "created": "Wed, 5 Mar 2014 08:40:33 GMT", "version": "v1" } ]
2014-03-06
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
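An order parameter of the general kind invoked above can be sketched on surrogate data. This uses the generic statistical-mechanical definition (variance of the population-averaged signal divided by the mean variance of individual signals), which may differ from the paper's exact $\cal{O}$, and the signals are synthetic sinusoid-plus-noise traces rather than Izhikevich membrane potentials. The ratio approaches 1 for a perfectly synchronized population and 0 for an incoherent one as the population grows.

```python
import numpy as np

rng = np.random.default_rng(4)

def order_parameter(V):
    """V: (n_neurons, n_time). Variance of the population mean over
    the mean variance of individual traces."""
    pop_mean = V.mean(axis=0)
    return float(pop_mean.var() / V.var(axis=1).mean())

n, T = 100, 2000
rhythm = np.sin(np.linspace(0.0, 20.0 * np.pi, T))   # shared population rhythm

synchronized = rhythm + 0.1 * rng.normal(size=(n, T))  # common signal + noise
incoherent = rng.normal(size=(n, T))                   # independent noise only

O_sync = order_parameter(synchronized)
O_incoh = order_parameter(incoherent)
```

In the incoherent case the population mean self-averages to nearly a constant, so the numerator (and hence the order parameter) vanishes as 1/n.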
2106.06537
Johan Broekaert M.
Johan M. Broekaert
The Auditory Tuning of a Keyboard
6 pages, 5 figures, 5 tables, submitted to MTO, Nature of replacement : Addition of an ACKNOWLEDGEMENT, citations, and a list of WORKS CITED. Note: the initial submission was seen as a kind of "announcement" only, and did therefore not meet requirements that are normal for articles. The goal was to have a very readable text only, without elements enforcing the content of the text
null
null
null
q-bio.NC physics.hist-ph
http://creativecommons.org/publicdomain/zero/1.0/
An optimal, auditorily tunable well (circular) temperament is determined. A temperament that is applicable in practice is derived from this optimum. No other historical temperament fits this optimum as well. A brief comparison of temperaments is worked out.
[ { "created": "Fri, 11 Jun 2021 19:25:13 GMT", "version": "v1" } ]
2021-06-15
[ [ "Broekaert", "Johan M.", "" ] ]
1703.04869
May Anne Mata
May Anne E. Mata, Priscilla E. Greenwood, and Rebecca C. Tyson
The roles of direct and environmental transmission in stochastic avian flu epidemic recurrence
36 pages, 11 figures
null
null
null
q-bio.PE math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an analysis of an avian flu model that yields insight into the role of different transmission routes in the recurrence of avian influenza epidemics. Recent modelling work on avian influenza in wild bird populations takes into account demographic stochasticity and highlights the importance of environmental transmission in determining the outbreak periodicity, but only for a weak between-host transmission rate. We determine the relative contribution of environmental and direct transmission routes to the intensity of outbreaks. We use an approximation method to simulate noise sustained oscillations in a stochastic avian flu model with environmental and direct transmission routes. We see that the oscillations are governed by the product of a rotation and a slowly varying standard Ornstein-Uhlenbeck process (i.e., mean-reverting process). The intrinsic frequency of the damped deterministic version of the system predicts the dominant period of outbreaks. We show, using analytic computation of the intrinsic frequency and theoretical power spectral density, that the outbreak periodicity can be explained in terms of either or both types of transmission. The amplitude of outbreaks tends to be high when both types of transmission are strong.
[ { "created": "Wed, 15 Mar 2017 01:26:33 GMT", "version": "v1" } ]
2017-03-16
[ [ "Mata", "May Anne E.", "" ], [ "Greenwood", "Priscilla E.", "" ], [ "Tyson", "Rebecca C.", "" ] ]
We present an analysis of an avian flu model that yields insight into the role of different transmission routes in the recurrence of avian influenza epidemics. Recent modelling work on avian influenza in wild bird populations takes into account demographic stochasticity and highlights the importance of environmental transmission in determining the outbreak periodicity, but only for a weak between-host transmission rate. We determine the relative contribution of environmental and direct transmission routes to the intensity of outbreaks. We use an approximation method to simulate noise sustained oscillations in a stochastic avian flu model with environmental and direct transmission routes. We see that the oscillations are governed by the product of a rotation and a slowly varying standard Ornstein-Uhlenbeck process (i.e., mean-reverting process). The intrinsic frequency of the damped deterministic version of the system predicts the dominant period of outbreaks. We show, using analytic computation of the intrinsic frequency and theoretical power spectral density, that the outbreak periodicity can be explained in terms of either or both types of transmission. The amplitude of outbreaks tends to be high when both types of transmission are strong.
2006.08702
David Castineira
Courtney Cochrane, David Castineira, Nisreen Shiban and Pavlos Protopapas
Application of Machine Learning to Predict the Risk of Alzheimer's Disease: An Accurate and Practical Solution for Early Diagnostics
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's Disease (AD) ravages the cognitive ability of more than 5 million Americans and creates an enormous strain on the health care system. This paper proposes a machine learning predictive model for AD development without medical imaging and with fewer clinical visits and tests, in hopes of earlier and cheaper diagnoses. Such earlier diagnoses could be critical to the effectiveness of any drug or medical treatment to cure this disease. Our model is trained and validated using demographic, biomarker and cognitive test data from two prominent research studies: Alzheimer's Disease Neuroimaging Initiative (ADNI) and Australian Imaging, Biomarker Lifestyle Flagship Study of Aging (AIBL). We systematically explore different machine learning models, pre-processing methods and feature selection techniques. The most performant model demonstrates greater than 90% accuracy and recall in predicting AD, and the results generalize across sub-studies of ADNI and to the independent AIBL study. We also demonstrate that these results are robust to reducing the number of clinical visits or tests per visit. Using a metaclassification algorithm and longitudinal data analysis we are able to produce a "lean" diagnostic protocol with only 3 tests and 4 clinical visits that can predict Alzheimer's development with 87% accuracy and 79% recall. This novel work can be adapted into a practical early diagnostic tool for predicting the development of Alzheimer's that maximizes accuracy while minimizing the number of necessary diagnostic tests and clinical visits.
[ { "created": "Tue, 2 Jun 2020 14:52:51 GMT", "version": "v1" } ]
2020-06-17
[ [ "Cochrane", "Courtney", "" ], [ "Castineira", "David", "" ], [ "Shiban", "Nisreen", "" ], [ "Protopapas", "Pavlos", "" ] ]
Alzheimer's Disease (AD) ravages the cognitive ability of more than 5 million Americans and creates an enormous strain on the health care system. This paper proposes a machine learning predictive model for AD development without medical imaging and with fewer clinical visits and tests, in hopes of earlier and cheaper diagnoses. Such earlier diagnoses could be critical to the effectiveness of any drug or medical treatment to cure this disease. Our model is trained and validated using demographic, biomarker and cognitive test data from two prominent research studies: Alzheimer's Disease Neuroimaging Initiative (ADNI) and Australian Imaging, Biomarker Lifestyle Flagship Study of Aging (AIBL). We systematically explore different machine learning models, pre-processing methods and feature selection techniques. The most performant model demonstrates greater than 90% accuracy and recall in predicting AD, and the results generalize across sub-studies of ADNI and to the independent AIBL study. We also demonstrate that these results are robust to reducing the number of clinical visits or tests per visit. Using a metaclassification algorithm and longitudinal data analysis we are able to produce a "lean" diagnostic protocol with only 3 tests and 4 clinical visits that can predict Alzheimer's development with 87% accuracy and 79% recall. This novel work can be adapted into a practical early diagnostic tool for predicting the development of Alzheimer's that maximizes accuracy while minimizing the number of necessary diagnostic tests and clinical visits.
1809.03587
Fang Ou
Fang Ou, Cushla McGoverin, Simon Swift, Fr\'ed\'erique Vanholsbeeck
Near real-time enumeration of live and dead bacteria using a fibre-based spectroscopic device
13 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A rapid, cost-effective and easy method that allows on-site determination of the concentration of live and dead bacterial cells using a fibre-based spectroscopic device (the optrode system) is proposed and demonstrated. Identification of live and dead bacteria was achieved by using the commercially available dyes SYTO 9 and propidium iodide, and fluorescence spectra were measured by the optrode. Three spectral processing methods were evaluated for their effectiveness in predicting the original bacterial concentration in the samples: principal components regression (PCR), partial least squares regression (PLSR) and support vector regression (SVR). Without any sample pre-concentration, PCR achieved the most reliable results. It was able to quantify live bacteria from $10^{8}$ down to $10^{6.2}$ bacteria/mL and showed the potential to detect as low as $10^{5.7}$ bacteria/mL. Meanwhile, enumeration of dead bacteria using PCR was achieved between $10^{8}$ and $10^{7}$ bacteria/mL. The general procedures described in this article can be applied or modified for the enumeration of bacteria within populations stained with fluorescent dyes. The optrode is a promising device for the enumeration of live and dead bacterial populations particularly where rapid, on-site measurement and analysis is required.
[ { "created": "Mon, 10 Sep 2018 20:38:28 GMT", "version": "v1" } ]
2018-09-12
[ [ "Ou", "Fang", "" ], [ "McGoverin", "Cushla", "" ], [ "Swift", "Simon", "" ], [ "Vanholsbeeck", "Frédérique", "" ] ]
A rapid, cost-effective and easy method that allows on-site determination of the concentration of live and dead bacterial cells using a fibre-based spectroscopic device (the optrode system) is proposed and demonstrated. Identification of live and dead bacteria was achieved by using the commercially available dyes SYTO 9 and propidium iodide, and fluorescence spectra were measured by the optrode. Three spectral processing methods were evaluated for their effectiveness in predicting the original bacterial concentration in the samples: principal components regression (PCR), partial least squares regression (PLSR) and support vector regression (SVR). Without any sample pre-concentration, PCR achieved the most reliable results. It was able to quantify live bacteria from $10^{8}$ down to $10^{6.2}$ bacteria/mL and showed the potential to detect as low as $10^{5.7}$ bacteria/mL. Meanwhile, enumeration of dead bacteria using PCR was achieved between $10^{8}$ and $10^{7}$ bacteria/mL. The general procedures described in this article can be applied or modified for the enumeration of bacteria within populations stained with fluorescent dyes. The optrode is a promising device for the enumeration of live and dead bacterial populations particularly where rapid, on-site measurement and analysis is required.
2109.12281
Mareike Fischer
Mareike Fischer and Lina Herbst and Sophie Kersting and Luise K\"uhn and Kristina Wicke
Tree balance indices: a comprehensive survey
main manuscript: 23 pages, fact sheets (one per balance index): 41 pages, appendix with detailed proofs: 80 pages. ATTENTION: This manuscript has been superseded by the SpringerNature book "Tree balance indices -- A comprehensive survey", ISBN 978-3-031-39799-8 and 978-3-031-39800-1
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tree balance plays an important role in phylogenetics and other research areas, which is why several indices to measure tree balance have been introduced over the years. Nevertheless, a formal definition of what a balance index actually is and what makes it a useful measure of balance (or, in other cases, imbalance), has so far not been introduced in the literature. While the established indices all summarize the (im)balance of a tree in a single number, they vary in their definitions and underlying principles. It is the aim of the present manuscript to introduce formal definitions of balance and imbalance indices that classify desirable properties of such indices and to analyze and categorize established indices accordingly. In this regard, we review 19 established (im)balance indices from the literature, summarize their general, statistical and combinatorial properties (where known), prove numerous additional results and indicate directions for future research by making explicit open questions and gaps in the literature. We also prove that a few tree shape statistics that have been used to measure tree balance in the literature do not fulfill our definition of an (im)balance index, which might indicate that their properties are not as useful for practical purposes. Moreover, we show that five additional tree shape statistics from other contexts actually are tree (im)balance indices according to our definition. The manuscript is accompanied by the website \url{treebalance.wordpress.com} containing fact sheets of the discussed indices. Moreover, we introduce the software package \verb|treebalance| implemented in $\mathsf{R}$ that can be used to calculate all indices discussed.
[ { "created": "Sat, 25 Sep 2021 05:51:24 GMT", "version": "v1" }, { "created": "Thu, 9 Nov 2023 10:00:38 GMT", "version": "v2" } ]
2023-11-10
[ [ "Fischer", "Mareike", "" ], [ "Herbst", "Lina", "" ], [ "Kersting", "Sophie", "" ], [ "Kühn", "Luise", "" ], [ "Wicke", "Kristina", "" ] ]
Tree balance plays an important role in phylogenetics and other research areas, which is why several indices to measure tree balance have been introduced over the years. Nevertheless, a formal definition of what a balance index actually is and what makes it a useful measure of balance (or, in other cases, imbalance), has so far not been introduced in the literature. While the established indices all summarize the (im)balance of a tree in a single number, they vary in their definitions and underlying principles. It is the aim of the present manuscript to introduce formal definitions of balance and imbalance indices that classify desirable properties of such indices and to analyze and categorize established indices accordingly. In this regard, we review 19 established (im)balance indices from the literature, summarize their general, statistical and combinatorial properties (where known), prove numerous additional results and indicate directions for future research by making explicit open questions and gaps in the literature. We also prove that a few tree shape statistics that have been used to measure tree balance in the literature do not fulfill our definition of an (im)balance index, which might indicate that their properties are not as useful for practical purposes. Moreover, we show that five additional tree shape statistics from other contexts actually are tree (im)balance indices according to our definition. The manuscript is accompanied by the website \url{treebalance.wordpress.com} containing fact sheets of the discussed indices. Moreover, we introduce the software package \verb|treebalance| implemented in $\mathsf{R}$ that can be used to calculate all indices discussed.
1811.03335
Dr. Alexander Paraskevov
A.V. Paraskevov, D.K. Zendrikov
A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves
14 pages, 7 figures
Phys. Biol. 14, 026003 (2017)
10.1088/1478-3975/aa5fc3
null
q-bio.NC cond-mat.dis-nn nlin.AO nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments $\textit{in vitro}$ do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
[ { "created": "Thu, 8 Nov 2018 09:56:49 GMT", "version": "v1" } ]
2018-11-09
[ [ "Paraskevov", "A. V.", "" ], [ "Zendrikov", "D. K.", "" ] ]
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments $\textit{in vitro}$ do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
2201.00195
Jayden Macklin-Cordes
Jayden L. Macklin-Cordes, Erich R. Round
Challenges of sampling and how phylogenetic comparative methods help: With a case study of the Pama-Nyungan laminal contrast
Accepted for publication in Linguistic Typology. Supplementary data at https://doi.org/10.5281/zenodo.5602216. 96 total pages (Main text: 41 pages, 6 figures, 3 tables. Supplementary S1: 34 pages, 1 figure. Supplementary S2: 21 pages)
null
null
null
q-bio.PE cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic comparative methods are new in our field and are shrouded, for most linguists, in at least a little mystery. Yet the path that led to their discovery in comparative biology is so similar to the methodological history of balanced sampling, that it is only an accident of history that they were not discovered by a typologist. Here we clarify the essential logic behind phylogenetic comparative methods and their fundamental relatedness to a deep intellectual tradition focussed on sampling. Then we introduce concepts, methods and tools which will enable typologists to use these methods in everyday typological research. The key commonality of phylogenetic comparative methods and balanced sampling is that they attempt to deal with statistical non-independence due to genealogy. Whereas sampling can never achieve independence and requires most comparative data to be discarded, phylogenetic comparative methods achieve independence while retaining and using all data. We discuss the essential notions of phylogenetic signal; uncertainty about trees; typological averages and proportions that are sensitive to genealogy; comparison across language families; and the effects of areality. Extensive supplementary materials illustrate computational tools for practical analysis and we illustrate the methods discussed with a typological case study of the laminal contrast in Pama-Nyungan.
[ { "created": "Sat, 1 Jan 2022 14:33:20 GMT", "version": "v1" } ]
2022-01-04
[ [ "Macklin-Cordes", "Jayden L.", "" ], [ "Round", "Erich R.", "" ] ]
Phylogenetic comparative methods are new in our field and are shrouded, for most linguists, in at least a little mystery. Yet the path that led to their discovery in comparative biology is so similar to the methodological history of balanced sampling, that it is only an accident of history that they were not discovered by a typologist. Here we clarify the essential logic behind phylogenetic comparative methods and their fundamental relatedness to a deep intellectual tradition focussed on sampling. Then we introduce concepts, methods and tools which will enable typologists to use these methods in everyday typological research. The key commonality of phylogenetic comparative methods and balanced sampling is that they attempt to deal with statistical non-independence due to genealogy. Whereas sampling can never achieve independence and requires most comparative data to be discarded, phylogenetic comparative methods achieve independence while retaining and using all data. We discuss the essential notions of phylogenetic signal; uncertainty about trees; typological averages and proportions that are sensitive to genealogy; comparison across language families; and the effects of areality. Extensive supplementary materials illustrate computational tools for practical analysis and we illustrate the methods discussed with a typological case study of the laminal contrast in Pama-Nyungan.
2010.09568
Vince Grolmusz
Laszlo Keresztes and Evelin Szogi and Balint Varga and Vince Grolmusz
Introducing and Applying Newtonian Blurring: An Augmented Dataset of 126,000 Human Connectomes at braingraph.org
null
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaussian blurring is a well-established method for image data augmentation: it may generate a large set of images from a small set of pictures for training and testing purposes for Artificial Intelligence (AI) applications. When we apply AI for non-imagelike biological data, hardly any related method exists. Here we introduce the "Newtonian blurring" in human braingraph (or connectome) augmentation: Started from a dataset of 1053 subjects, we first repeat a probabilistic weighted braingraph construction algorithm 10 times for describing the connections of distinct cerebral areas, then take 7 repetitions in every possible way, delete the lower and upper extremes, and average the remaining 7-2=5 edge-weights for the data of each subject. This way we augment the 1053 graph-set to 120 x 1053 = 126,360 graphs. In augmentation techniques, it is an important requirement that no artificial additions should be introduced into the dataset. Gaussian blurring and also this Newtonian blurring satisfy this goal. The resulting dataset of 126,360 graphs, each in 5 resolutions (i.e., 631,800 graphs in total), is freely available at the site https://braingraph.org/cms/download-pit-group-connectomes/. Augmenting with Newtonian blurring may also be applicable in other non-image related fields, where probabilistic processing and data averaging are implemented.
[ { "created": "Mon, 19 Oct 2020 14:51:59 GMT", "version": "v1" }, { "created": "Tue, 20 Oct 2020 07:36:01 GMT", "version": "v2" }, { "created": "Wed, 21 Oct 2020 16:31:26 GMT", "version": "v3" } ]
2020-10-22
[ [ "Keresztes", "Laszlo", "" ], [ "Szogi", "Evelin", "" ], [ "Varga", "Balint", "" ], [ "Grolmusz", "Vince", "" ] ]
Gaussian blurring is a well-established method for image data augmentation: it may generate a large set of images from a small set of pictures for training and testing purposes for Artificial Intelligence (AI) applications. When we apply AI for non-imagelike biological data, hardly any related method exists. Here we introduce the "Newtonian blurring" in human braingraph (or connectome) augmentation: Started from a dataset of 1053 subjects, we first repeat a probabilistic weighted braingraph construction algorithm 10 times for describing the connections of distinct cerebral areas, then take 7 repetitions in every possible way, delete the lower and upper extremes, and average the remaining 7-2=5 edge-weights for the data of each subject. This way we augment the 1053 graph-set to 120 x 1053 = 126,360 graphs. In augmentation techniques, it is an important requirement that no artificial additions should be introduced into the dataset. Gaussian blurring and also this Newtonian blurring satisfy this goal. The resulting dataset of 126,360 graphs, each in 5 resolutions (i.e., 631,800 graphs in total), is freely available at the site https://braingraph.org/cms/download-pit-group-connectomes/. Augmenting with Newtonian blurring may also be applicable in other non-image related fields, where probabilistic processing and data averaging are implemented.
1810.12244
James Faeder
Jose-Juan Tapia, Ali Sinan Saglam, Jacob Czech, Robert Kuczewski, Thomas M. Bartol, Terrence J. Sejnowski, and James R. Faeder
MCell-R: A particle-resolution network-free spatial modeling framework
null
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial heterogeneity can have dramatic effects on the biochemical networks that drive cell regulation and decision-making. For this reason, a number of methods have been developed to model spatial heterogeneity and incorporated into widely used modeling platforms. Unfortunately, the standard approaches for specifying and simulating chemical reaction networks become untenable when dealing with multi-state, multi-component systems that are characterized by combinatorial complexity. To address this issue, we developed MCell-R, a framework that extends the particle-based spatial Monte Carlo simulator, MCell, with the rule-based model specification and simulation capabilities provided by BioNetGen and NFsim. The BioNetGen syntax enables the specification of biomolecules as structured objects whose components can have different internal states that represent such features as covalent modification and conformation and which can bind components of other molecules to form molecular complexes. The network-free simulation algorithm used by NFsim enables efficient simulation of rule-based models even when the size of the network implied by the biochemical rules is too large to enumerate explicitly, which frequently occurs in detailed models of biochemical signaling. The result is a framework that can efficiently simulate systems characterized by combinatorial complexity at the level of spatially-resolved individual molecules over biologically relevant time and length scales.
[ { "created": "Mon, 29 Oct 2018 16:44:58 GMT", "version": "v1" } ]
2018-10-30
[ [ "Tapia", "Jose-Juan", "" ], [ "Saglam", "Ali Sinan", "" ], [ "Czech", "Jacob", "" ], [ "Kuczewski", "Robert", "" ], [ "Bartol", "Thomas M.", "" ], [ "Sejnowski", "Terrence J.", "" ], [ "Faeder", "James R.", ...
Spatial heterogeneity can have dramatic effects on the biochemical networks that drive cell regulation and decision-making. For this reason, a number of methods have been developed to model spatial heterogeneity and incorporated into widely used modeling platforms. Unfortunately, the standard approaches for specifying and simulating chemical reaction networks become untenable when dealing with multi-state, multi-component systems that are characterized by combinatorial complexity. To address this issue, we developed MCell-R, a framework that extends the particle-based spatial Monte Carlo simulator, MCell, with the rule-based model specification and simulation capabilities provided by BioNetGen and NFsim. The BioNetGen syntax enables the specification of biomolecules as structured objects whose components can have different internal states that represent such features as covalent modification and conformation and which can bind components of other molecules to form molecular complexes. The network-free simulation algorithm used by NFsim enables efficient simulation of rule-based models even when the size of the network implied by the biochemical rules is too large to enumerate explicitly, which frequently occurs in detailed models of biochemical signaling. The result is a framework that can efficiently simulate systems characterized by combinatorial complexity at the level of spatially-resolved individual molecules over biologically relevant time and length scales.
1006.0459
Oren Elrad
Oren M. Elrad and Michael F. Hagan
Encapsulation of a polymer by an icosahedral virus
This is an author-created, un-copyedited version of an article accepted for publication in Physical Biology. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The definitive publisher authenticated version is expected to be published online in November 2010
null
10.1088/1478-3975/7/4/045003
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coat proteins of many viruses spontaneously form icosahedral capsids around nucleic acids or other polymers. Elucidating the role of the packaged polymer in capsid formation could promote biomedical efforts to block viral replication and enable use of capsids in nanomaterials applications. To this end, we perform Brownian dynamics on a coarse-grained model that describes the dynamics of icosahedral capsid assembly around a flexible polymer. We identify several mechanisms by which the polymer plays an active role in its encapsulation, including cooperative polymer-protein motions. These mechanisms are related to experimentally controllable parameters such as polymer length, protein concentration, and solution conditions. Furthermore, the simulations demonstrate that assembly mechanisms are correlated to encapsulation efficiency, and we present a phase diagram that predicts assembly outcomes as a function of experimental parameters. We anticipate that our simulation results will provide a framework for designing in vitro assembly experiments on single-stranded RNA virus capsids.
[ { "created": "Wed, 2 Jun 2010 18:06:10 GMT", "version": "v1" }, { "created": "Mon, 20 Sep 2010 19:11:06 GMT", "version": "v2" } ]
2015-05-19
[ [ "Elrad", "Oren M.", "" ], [ "Hagan", "Michael F.", "" ] ]
The coat proteins of many viruses spontaneously form icosahedral capsids around nucleic acids or other polymers. Elucidating the role of the packaged polymer in capsid formation could promote biomedical efforts to block viral replication and enable use of capsids in nanomaterials applications. To this end, we perform Brownian dynamics on a coarse-grained model that describes the dynamics of icosahedral capsid assembly around a flexible polymer. We identify several mechanisms by which the polymer plays an active role in its encapsulation, including cooperative polymer-protein motions. These mechanisms are related to experimentally controllable parameters such as polymer length, protein concentration, and solution conditions. Furthermore, the simulations demonstrate that assembly mechanisms are correlated to encapsulation efficiency, and we present a phase diagram that predicts assembly outcomes as a function of experimental parameters. We anticipate that our simulation results will provide a framework for designing in vitro assembly experiments on single-stranded RNA virus capsids.
1505.07513
Jitendra Jonnagaddala
Jitendra Jonnagaddala, Damian Sue
A Report on the Workshop on Biobanking Informatics
5 Pages, Workshop
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Workshop on Biobanking Informatics in NSW 2013 (WBIN13) was held on Friday, 10 May 2013 at The Wallace Wurth Building in the University of New South Wales. This report summarises the keynotes, presentations and discussions in WBIN13 which discusses current research in the field of Biobanking Informatics in Australia and internationally.
[ { "created": "Wed, 27 May 2015 23:33:34 GMT", "version": "v1" } ]
2015-05-29
[ [ "Jonnagaddala", "Jitendra", "" ], [ "Sue", "Damian", "" ] ]
The Workshop on Biobanking Informatics in NSW 2013 (WBIN13) was held on Friday, 10 May 2013 at The Wallace Wurth Building in the University of New South Wales. This report summarises the keynotes, presentations and discussions in WBIN13 which discusses current research in the field of Biobanking Informatics in Australia and internationally.
2207.10080
Ningyu Zhang
Siyuan Cheng, Xiaozhuan Liang, Zhen Bi, Huajun Chen, Ningyu Zhang
Multi-modal Protein Knowledge Graph Construction and Applications
Accepted by AAAI 2023 (Student Abstract). Dataset available in https://zjunlp.github.io/project/ProteinKG65/
null
null
null
q-bio.QM cs.AI cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing data-centric methods for protein science generally cannot sufficiently capture and leverage biology knowledge, which may be crucial for many protein tasks. To facilitate research in this field, we create ProteinKG65, a knowledge graph for protein science. Using gene ontology and Uniprot knowledge base as a basis, we transform and integrate various kinds of knowledge with aligned descriptions and protein sequences, respectively, to GO terms and protein entities. ProteinKG65 is mainly dedicated to providing a specialized protein knowledge graph, bringing the knowledge of Gene Ontology to protein function and structure prediction. We also illustrate the potential applications of ProteinKG65 with a prototype. Our dataset can be downloaded at https://w3id.org/proteinkg65.
[ { "created": "Fri, 27 May 2022 08:18:56 GMT", "version": "v1" }, { "created": "Sat, 17 Sep 2022 09:35:07 GMT", "version": "v2" }, { "created": "Mon, 14 Nov 2022 16:26:52 GMT", "version": "v3" } ]
2022-11-15
[ [ "Cheng", "Siyuan", "" ], [ "Liang", "Xiaozhuan", "" ], [ "Bi", "Zhen", "" ], [ "Chen", "Huajun", "" ], [ "Zhang", "Ningyu", "" ] ]
Existing data-centric methods for protein science generally cannot sufficiently capture and leverage biology knowledge, which may be crucial for many protein tasks. To facilitate research in this field, we create ProteinKG65, a knowledge graph for protein science. Using the Gene Ontology and the UniProt knowledge base as a basis, we transform and integrate various kinds of knowledge, attaching aligned descriptions and protein sequences to GO terms and protein entities, respectively. ProteinKG65 is mainly dedicated to providing a specialized protein knowledge graph, bringing the knowledge of the Gene Ontology to protein function and structure prediction. We also illustrate the potential applications of ProteinKG65 with a prototype. Our dataset can be downloaded at https://w3id.org/proteinkg65.
2101.00650
Songting Shi
Songting Shi
A Tutorial on the Mathematical Model of Single Cell Variational Inference
null
null
null
null
q-bio.OT cs.LG q-bio.GN stat.ML
http://creativecommons.org/licenses/by/4.0/
A large amount of sequencing data has accumulated over the past decades and continues to grow, so we need methods that can handle ever more sequencing data. Thanks to the rapid development of computing technologies, we can now process large amounts of data in a reasonable time using neural-network-based models. This tutorial introduces the mathematical model of single-cell variational inference (scVI), which uses a variational auto-encoder (built on neural networks) to learn the distribution of the data and gain insights from it. It is written for beginners in a simple and intuitive way, with many derivation details, to encourage more researchers into this field.
[ { "created": "Sun, 3 Jan 2021 16:02:36 GMT", "version": "v1" } ]
2021-01-05
[ [ "Shi", "Songting", "" ] ]
A large amount of sequencing data has accumulated over the past decades and continues to grow, so we need methods that can handle ever more sequencing data. Thanks to the rapid development of computing technologies, we can now process large amounts of data in a reasonable time using neural-network-based models. This tutorial introduces the mathematical model of single-cell variational inference (scVI), which uses a variational auto-encoder (built on neural networks) to learn the distribution of the data and gain insights from it. It is written for beginners in a simple and intuitive way, with many derivation details, to encourage more researchers into this field.
1302.2234
Nen Saito
Nen Saito, Shuji Ishihara and Kunihiko Kaneko
Evolution of Genetic Redundancy : The Relevance of Complexity in Genotype-Phenotype Mapping
5 pages, 2 figures
null
10.1088/1367-2630/16/6/063013
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic redundancy is ubiquitous and can be found in any organism. However, it has been argued that genetic redundancy reduces total population fitness, and therefore, redundancy is unlikely to evolve. In this letter, we study an evolutionary model with high-dimensional genotype-phenotype mapping (GPM) to investigate the relevance of complexity in GPM to the evolution of genetic redundancy. By applying the replica method to deal with quenched randomness, the redundancy dependence of the fitness is analytically obtained, which demonstrates that genetic redundancy can indeed evolve, provided that the GPM is complex. Our result provides a novel insight into how genetic redundancy evolves.
[ { "created": "Sat, 9 Feb 2013 14:09:21 GMT", "version": "v1" } ]
2015-06-15
[ [ "Saito", "Nen", "" ], [ "Ishihara", "Shuji", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Genetic redundancy is ubiquitous and can be found in any organism. However, it has been argued that genetic redundancy reduces total population fitness, and therefore, redundancy is unlikely to evolve. In this letter, we study an evolutionary model with high-dimensional genotype-phenotype mapping (GPM) to investigate the relevance of complexity in GPM to the evolution of genetic redundancy. By applying the replica method to deal with quenched randomness, the redundancy dependence of the fitness is analytically obtained, which demonstrates that genetic redundancy can indeed evolve, provided that the GPM is complex. Our result provides a novel insight into how genetic redundancy evolves.
2311.16308
Alexis B\'enichou
Alexis B\'enichou, Jean-Baptiste Masson, Christian L. Vestergaard
Compression-based inference of network motif sets
null
null
null
null
q-bio.QM cond-mat.stat-mech cs.SI physics.data-an q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Physical and functional constraints on biological networks lead to complex topological patterns across multiple scales in their organization. A particular type of higher-order network feature that has received considerable interest is network motifs, defined as statistically regular subgraphs. These may implement fundamental logical and computational circuits and are referred to as ``building blocks of complex networks''. Their well-defined structures and small sizes also enable the testing of their functions in synthetic and natural biological experiments. The statistical inference of network motifs is, however, fraught with difficulties, from defining and sampling the right null model to accounting for the large number of possible motifs and their potential correlations in statistical testing. Here we develop a framework for motif mining based on lossless network compression using subgraph contractions. The minimum description length principle allows us to select the most significant set of motifs as well as other prominent network features in terms of their combined compression of the network. The approach inherently accounts for multiple testing and correlations between subgraphs and does not rely on a priori specification of an appropriate null model. This provides an alternative definition of motif significance which guarantees more robust statistical inference. Our approach overcomes the common problems in classic testing-based motif analysis. We apply our methodology to perform comparative connectomics by evaluating the compressibility and the circuit motifs of a range of synaptic-resolution neural connectomes.
[ { "created": "Mon, 27 Nov 2023 20:49:11 GMT", "version": "v1" } ]
2023-11-29
[ [ "Bénichou", "Alexis", "" ], [ "Masson", "Jean-Baptiste", "" ], [ "Vestergaard", "Christian L.", "" ] ]
Physical and functional constraints on biological networks lead to complex topological patterns across multiple scales in their organization. A particular type of higher-order network feature that has received considerable interest is network motifs, defined as statistically regular subgraphs. These may implement fundamental logical and computational circuits and are referred to as ``building blocks of complex networks''. Their well-defined structures and small sizes also enable the testing of their functions in synthetic and natural biological experiments. The statistical inference of network motifs is, however, fraught with difficulties, from defining and sampling the right null model to accounting for the large number of possible motifs and their potential correlations in statistical testing. Here we develop a framework for motif mining based on lossless network compression using subgraph contractions. The minimum description length principle allows us to select the most significant set of motifs as well as other prominent network features in terms of their combined compression of the network. The approach inherently accounts for multiple testing and correlations between subgraphs and does not rely on a priori specification of an appropriate null model. This provides an alternative definition of motif significance which guarantees more robust statistical inference. Our approach overcomes the common problems in classic testing-based motif analysis. We apply our methodology to perform comparative connectomics by evaluating the compressibility and the circuit motifs of a range of synaptic-resolution neural connectomes.
q-bio/0610006
Liu Quanxing
Quan-Xing Liu and Zhen Jin
Formation of spatial patterns in epidemic model with constant removal rate of the infectives
7 figures, 7 pages; The modification according to the referees' remark
J. Stat. Mech. (2007) P05002
10.1088/1742-5468/2007/05/P05002
null
q-bio.PE
null
This paper addresses the question of how population diffusion affects the formation of spatial patterns in a spatial epidemic model via Turing mechanisms. In particular, we present a theoretical analysis of the results of numerical simulations in two dimensions. Moreover, there is a critical value for the system within the linear regime. Below the critical value the spatial patterns are impermanent, whereas above it stationary spot and stripe patterns can coexist over time. We have observed the striking formation of spatial patterns during the evolution, but isolated ordered spot patterns do not emerge in the space.
[ { "created": "Tue, 3 Oct 2006 02:03:48 GMT", "version": "v1" }, { "created": "Tue, 10 Oct 2006 07:16:31 GMT", "version": "v2" }, { "created": "Mon, 20 Nov 2006 01:51:44 GMT", "version": "v3" }, { "created": "Tue, 6 Feb 2007 07:35:32 GMT", "version": "v4" } ]
2009-09-29
[ [ "Liu", "Quan-Xing", "" ], [ "Jin", "Zhen", "" ] ]
This paper addresses the question of how population diffusion affects the formation of spatial patterns in a spatial epidemic model via Turing mechanisms. In particular, we present a theoretical analysis of the results of numerical simulations in two dimensions. Moreover, there is a critical value for the system within the linear regime. Below the critical value the spatial patterns are impermanent, whereas above it stationary spot and stripe patterns can coexist over time. We have observed the striking formation of spatial patterns during the evolution, but isolated ordered spot patterns do not emerge in the space.
2112.12147
Mahta Ramezanian Panahi
Mahta Ramezanian Panahi, Germ\'an Abrevaya, Jean-Christophe Gagnon-Audet, Vikram Voleti, Irina Rish and Guillaume Dumas
Generative Models of Brain Dynamics -- A review
Updated to two-column format with 15 pages (excluding refs), 3 figs, submitted to Frontiers
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The principled design and discovery of biologically- and physically-informed models of neuronal dynamics has been advancing since the mid-twentieth century. Recent developments in artificial intelligence (AI) have accelerated this progress. This review article gives a high-level overview of the approaches across different scales of organization and levels of abstraction. The studies covered in this paper include fundamental models in computational neuroscience, nonlinear dynamics, data-driven methods, as well as emergent practices. While not all of these models span the intersection of neuroscience, AI, and system dynamics, all of them do or can work in tandem as generative models, which, as we argue, provide superior properties for the analysis of neuroscientific data. We discuss the limitations and unique dynamical traits of brain data and the complementary need for hypothesis- and data-driven modeling. By way of conclusion, we present several hybrid generative models from recent literature in scientific machine learning, which can be efficiently deployed to yield interpretable models of neural dynamics.
[ { "created": "Wed, 22 Dec 2021 18:59:21 GMT", "version": "v1" }, { "created": "Thu, 23 Dec 2021 18:53:38 GMT", "version": "v2" } ]
2021-12-24
[ [ "Panahi", "Mahta Ramezanian", "" ], [ "Abrevaya", "Germán", "" ], [ "Gagnon-Audet", "Jean-Christophe", "" ], [ "Voleti", "Vikram", "" ], [ "Rish", "Irina", "" ], [ "Dumas", "Guillaume", "" ] ]
The principled design and discovery of biologically- and physically-informed models of neuronal dynamics has been advancing since the mid-twentieth century. Recent developments in artificial intelligence (AI) have accelerated this progress. This review article gives a high-level overview of the approaches across different scales of organization and levels of abstraction. The studies covered in this paper include fundamental models in computational neuroscience, nonlinear dynamics, data-driven methods, as well as emergent practices. While not all of these models span the intersection of neuroscience, AI, and system dynamics, all of them do or can work in tandem as generative models, which, as we argue, provide superior properties for the analysis of neuroscientific data. We discuss the limitations and unique dynamical traits of brain data and the complementary need for hypothesis- and data-driven modeling. By way of conclusion, we present several hybrid generative models from recent literature in scientific machine learning, which can be efficiently deployed to yield interpretable models of neural dynamics.
1404.1017
Harold P. de Vladar
Harold P. de Vladar and Nick Barton
Stability and response of polygenic traits to stabilizing selection and mutation
Accepted in Genetics
null
null
null
q-bio.PE
http://creativecommons.org/licenses/publicdomain/
When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation-selection balance is $2 \mu/S$ per locus, where $\mu$ is the mutation rate and $S$ the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric, and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute by $2 \mu/S$, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of $2\sqrt{\mu / S}$ remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits of equal effects, and the fitness equilibria are more similar. We find that if the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded by $\sqrt{\mu S}$. Long-term response leads in general to well-adapted traits, unlike the case of equal effects that often end up at a sub-optimal fitness peak. However, the particular peaks to which the populations converge are extremely sensitive to the initial states, and to the speed of the shift of the optimum trait value.
[ { "created": "Thu, 3 Apr 2014 17:13:29 GMT", "version": "v1" } ]
2014-04-04
[ [ "de Vladar", "Harold P.", "" ], [ "Barton", "Nick", "" ] ]
When polygenic traits are under stabilizing selection, many different combinations of alleles allow close adaptation to the optimum. If alleles have equal effects, all combinations that result in the same deviation from the optimum are equivalent. Furthermore, the genetic variance that is maintained by mutation-selection balance is $2 \mu/S$ per locus, where $\mu$ is the mutation rate and $S$ the strength of stabilizing selection. In reality, alleles vary in their effects, making the fitness landscape asymmetric, and complicating analysis of the equilibria. We show that the resulting genetic variance depends on the fraction of alleles near fixation, which contribute by $2 \mu/S$, and on the total mutational effects of alleles that are at intermediate frequency. The interplay between stabilizing selection and mutation leads to a sharp transition: alleles with effects smaller than a threshold value of $2\sqrt{\mu / S}$ remain polymorphic, whereas those with larger effects are fixed. The genetic load in equilibrium is less than for traits of equal effects, and the fitness equilibria are more similar. We find that if the optimum is displaced, alleles with effects close to the threshold value sweep first, and their rate of increase is bounded by $\sqrt{\mu S}$. Long-term response leads in general to well-adapted traits, unlike the case of equal effects that often end up at a sub-optimal fitness peak. However, the particular peaks to which the populations converge are extremely sensitive to the initial states, and to the speed of the shift of the optimum trait value.
2308.07818
Willem Diepeveen
Willem Diepeveen, Carlos Esteve-Yag\"ue, Jan Lellmann, Ozan \"Oktem, Carola-Bibiane Sch\"onlieb
Riemannian geometry for efficient analysis of protein dynamics data
null
null
null
null
q-bio.BM cs.NA math.DG math.NA
http://creativecommons.org/licenses/by-nc-nd/4.0/
An increasingly common viewpoint is that protein dynamics data sets reside in a non-linear subspace of low conformational energy. Ideal data analysis tools for such data sets should therefore account for such non-linear geometry. The Riemannian geometry setting can be suitable for a variety of reasons. First, it comes with a rich structure to account for a wide range of geometries that can be modelled after an energy landscape. Second, many standard data analysis tools initially developed for data in Euclidean space can also be generalised to data on a Riemannian manifold. In the context of protein dynamics, a conceptual challenge comes from the lack of a suitable smooth manifold and the lack of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computational feasibility in computing geodesics and related mappings poses a major challenge. This work considers these challenges. The first part of the paper develops a novel local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold of point clouds modulo rigid body group actions and a Riemannian structure that is based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant for protein dynamics data. It performs exceptionally well on coarse-grained molecular dynamics simulated data. In particular, the geodesics with given start- and end-points approximately recover corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension even for large-sized deformations within seconds on a laptop.
[ { "created": "Tue, 15 Aug 2023 14:52:09 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2023 10:47:23 GMT", "version": "v2" } ]
2023-10-27
[ [ "Diepeveen", "Willem", "" ], [ "Esteve-Yagüe", "Carlos", "" ], [ "Lellmann", "Jan", "" ], [ "Öktem", "Ozan", "" ], [ "Schönlieb", "Carola-Bibiane", "" ] ]
An increasingly common viewpoint is that protein dynamics data sets reside in a non-linear subspace of low conformational energy. Ideal data analysis tools for such data sets should therefore account for such non-linear geometry. The Riemannian geometry setting can be suitable for a variety of reasons. First, it comes with a rich structure to account for a wide range of geometries that can be modelled after an energy landscape. Second, many standard data analysis tools initially developed for data in Euclidean space can also be generalised to data on a Riemannian manifold. In the context of protein dynamics, a conceptual challenge comes from the lack of a suitable smooth manifold and the lack of guidelines for constructing a smooth Riemannian structure based on an energy landscape. In addition, computational feasibility in computing geodesics and related mappings poses a major challenge. This work considers these challenges. The first part of the paper develops a novel local approximation technique for computing geodesics and related mappings on Riemannian manifolds in a computationally feasible manner. The second part constructs a smooth manifold of point clouds modulo rigid body group actions and a Riemannian structure that is based on an energy landscape for protein conformations. The resulting Riemannian geometry is tested on several data analysis tasks relevant for protein dynamics data. It performs exceptionally well on coarse-grained molecular dynamics simulated data. In particular, the geodesics with given start- and end-points approximately recover corresponding molecular dynamics trajectories for proteins that undergo relatively ordered transitions with medium sized deformations. The Riemannian protein geometry also gives physically realistic summary statistics and retrieves the underlying dimension even for large-sized deformations within seconds on a laptop.
2402.11472
Yingying Wang
Yingying Wang, Yun Xiong, Xixi Wu, Xiangguo Sun, Jiawei Zhang
Advanced Drug Interaction Event Prediction
null
null
null
null
q-bio.BM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting drug-drug interaction adverse events, so-called DDI events, is increasingly valuable as it facilitates the study of mechanisms underlying drug use or adverse reactions. Existing models often neglect the distinctive characteristics of individual event classes when integrating multi-source features, which contributes to systematic unfairness when dealing with highly imbalanced event samples. Moreover, the limited capacity of these models to abstract the unique attributes of each event subclass considerably hampers their application in predicting rare drug-drug interaction events with a limited sample size. Reducing dataset bias and abstracting event subclass characteristics are two unresolved challenges. Recently, prompt tuning with frozen pre-trained graph models, namely the "pre-train, prompt, fine-tune" strategy, has demonstrated impressive performance in few-shot tasks. Motivated by this, we propose an advanced method to address these challenges. Specifically, our proposed approach entails a hierarchical pre-training task that aims to capture crucial aspects of drug molecular structure and intermolecular interactions while effectively mitigating implicit dataset bias within the node embeddings. Furthermore, we construct a prototypical graph by strategically sampling data from distinct event types and design subgraph prompts utilizing pre-trained node features. Through comprehensive benchmark experiments, we validate the efficacy of our subgraph prompts in accurately representing event classes and achieve exemplary results in both overall and subclass prediction tasks.
[ { "created": "Sun, 18 Feb 2024 06:22:01 GMT", "version": "v1" }, { "created": "Thu, 9 May 2024 08:26:51 GMT", "version": "v2" }, { "created": "Tue, 21 May 2024 12:47:40 GMT", "version": "v3" }, { "created": "Wed, 22 May 2024 19:39:52 GMT", "version": "v4" } ]
2024-05-24
[ [ "Wang", "Yingying", "" ], [ "Xiong", "Yun", "" ], [ "Wu", "Xixi", "" ], [ "Sun", "Xiangguo", "" ], [ "Zhang", "Jiawei", "" ] ]
Predicting drug-drug interaction adverse events, so-called DDI events, is increasingly valuable as it facilitates the study of mechanisms underlying drug use or adverse reactions. Existing models often neglect the distinctive characteristics of individual event classes when integrating multi-source features, which contributes to systematic unfairness when dealing with highly imbalanced event samples. Moreover, the limited capacity of these models to abstract the unique attributes of each event subclass considerably hampers their application in predicting rare drug-drug interaction events with a limited sample size. Reducing dataset bias and abstracting event subclass characteristics are two unresolved challenges. Recently, prompt tuning with frozen pre-trained graph models, namely the "pre-train, prompt, fine-tune" strategy, has demonstrated impressive performance in few-shot tasks. Motivated by this, we propose an advanced method to address these challenges. Specifically, our proposed approach entails a hierarchical pre-training task that aims to capture crucial aspects of drug molecular structure and intermolecular interactions while effectively mitigating implicit dataset bias within the node embeddings. Furthermore, we construct a prototypical graph by strategically sampling data from distinct event types and design subgraph prompts utilizing pre-trained node features. Through comprehensive benchmark experiments, we validate the efficacy of our subgraph prompts in accurately representing event classes and achieve exemplary results in both overall and subclass prediction tasks.
1910.01559
Jesus Malo
Jesus Malo
Spatio-Chromatic Information available from different Neural Layers via Gaussianization
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How much visual information about the retinal images can be extracted from the different layers of the visual pathway? Separate subsystems (e.g. opponent channels, spatial filters, nonlinearities of the texture sensors) have been suggested to be organized for optimal information transmission. However, the efficiency of these different layers has not been measured when they operate together on colorimetrically calibrated natural images and using multivariate information-theoretic units over the joint spatio-chromatic array of responses. In this work we present a statistical tool to address this question in an appropriate (multivariate) way. Specifically, we propose an empirical estimate of the information transmitted by the system based on a recent Gaussianization technique that reduces the challenging multivariate PDF estimation problem to a set of simpler univariate estimations. Total correlation measured using the proposed estimator is consistent with predictions based on the analytical Jacobian of a standard spatio-chromatic model of the retina-cortex pathway. If the noise at a certain representation is proportional to the dynamic range of the response, and one assumes sensors of equivalent noise level, transmitted information shows the following trends: (1) progressively deeper representations are better in terms of the amount of information about the input, (2) the transmitted information up to the cortical representation follows the PDF of natural scenes over the chromatic and achromatic dimensions of the stimulus space, (3) the contribution of spatial transforms to capture visual information is substantially bigger than the contribution of chromatic transforms, and (4) nonlinearities of the responses contribute substantially to the transmitted information but less than the linear transforms.
[ { "created": "Thu, 3 Oct 2019 15:51:43 GMT", "version": "v1" }, { "created": "Thu, 31 Oct 2019 17:36:31 GMT", "version": "v2" }, { "created": "Mon, 25 May 2020 12:44:57 GMT", "version": "v3" } ]
2020-05-26
[ [ "Malo", "Jesus", "" ] ]
How much visual information about the retinal images can be extracted from the different layers of the visual pathway? Separate subsystems (e.g. opponent channels, spatial filters, nonlinearities of the texture sensors) have been suggested to be organized for optimal information transmission. However, the efficiency of these different layers has not been measured when they operate together on colorimetrically calibrated natural images and using multivariate information-theoretic units over the joint spatio-chromatic array of responses. In this work we present a statistical tool to address this question in an appropriate (multivariate) way. Specifically, we propose an empirical estimate of the information transmitted by the system based on a recent Gaussianization technique that reduces the challenging multivariate PDF estimation problem to a set of simpler univariate estimations. Total correlation measured using the proposed estimator is consistent with predictions based on the analytical Jacobian of a standard spatio-chromatic model of the retina-cortex pathway. If the noise at a certain representation is proportional to the dynamic range of the response, and one assumes sensors of equivalent noise level, transmitted information shows the following trends: (1) progressively deeper representations are better in terms of the amount of information about the input, (2) the transmitted information up to the cortical representation follows the PDF of natural scenes over the chromatic and achromatic dimensions of the stimulus space, (3) the contribution of spatial transforms to capture visual information is substantially bigger than the contribution of chromatic transforms, and (4) nonlinearities of the responses contribute substantially to the transmitted information but less than the linear transforms.
2110.03842
Michael Fuchs
Michael Fuchs, Hexuan Liu, Guan-Ru Yu
A Short Note on the Exact Counting of Tree-Child Networks
6 pages
null
null
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by/4.0/
Tree-child networks are an important network class which are used in phylogenetics to model reticulate evolution. In a recent paper, Pons and Batle (2021) conjectured a relation between tree-child networks and certain words. In this short note, we prove their conjecture for the (important) class of one-component tree-child networks.
[ { "created": "Fri, 8 Oct 2021 00:59:48 GMT", "version": "v1" } ]
2021-10-11
[ [ "Fuchs", "Michael", "" ], [ "Liu", "Hexuan", "" ], [ "Yu", "Guan-Ru", "" ] ]
Tree-child networks are an important network class which are used in phylogenetics to model reticulate evolution. In a recent paper, Pons and Batle (2021) conjectured a relation between tree-child networks and certain words. In this short note, we prove their conjecture for the (important) class of one-component tree-child networks.
0804.4375
Erez Ben-Yaacov
Erez Ben-Yaacov, Yonina Eldar
A Fast and Flexible Method for the Segmentation of aCGH Data
7 pages, 5 figures, preprint, accepted for publication in Bioinformatics (Proceedings of ECCB08)
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Array Comparative Genomic Hybridization (aCGH) is used to scan the entire genome for variations in DNA copy number. A central task in the analysis of aCGH data is the segmentation into groups of probes sharing the same DNA copy number. Some well-known segmentation methods suffer from very long running times, preventing interactive data analysis. Results: We suggest a new segmentation method based on wavelet decomposition and thresholding, which detects significant breakpoints in the data. Our algorithm is over 1,000 times faster than leading approaches, with similar performance. Another key advantage of the proposed method is its simplicity and flexibility. Due to its intuitive structure it can be easily generalized to incorporate several types of side information. Here we consider two extensions which include side information indicating the reliability of each measurement, and compensating for a changing variability in the measurement noise. The resulting algorithm outperforms existing methods, both in terms of speed and performance, when applied to real high-density CGH data. Availability: Implementation is available under software tab at: http://www.ee.technion.ac.il/Sites/People/YoninaEldar/ Contact: yonina@ee.technion.ac.il
[ { "created": "Mon, 28 Apr 2008 11:10:22 GMT", "version": "v1" } ]
2008-04-29
[ [ "Ben-Yaacov", "Erez", "" ], [ "Eldar", "Yonina", "" ] ]
Motivation: Array Comparative Genomic Hybridization (aCGH) is used to scan the entire genome for variations in DNA copy number. A central task in the analysis of aCGH data is the segmentation into groups of probes sharing the same DNA copy number. Some well-known segmentation methods suffer from very long running times, preventing interactive data analysis. Results: We suggest a new segmentation method based on wavelet decomposition and thresholding, which detects significant breakpoints in the data. Our algorithm is over 1,000 times faster than leading approaches, with similar performance. Another key advantage of the proposed method is its simplicity and flexibility. Due to its intuitive structure it can be easily generalized to incorporate several types of side information. Here we consider two extensions which include side information indicating the reliability of each measurement, and compensating for a changing variability in the measurement noise. The resulting algorithm outperforms existing methods, both in terms of speed and performance, when applied to real high-density CGH data. Availability: Implementation is available under software tab at: http://www.ee.technion.ac.il/Sites/People/YoninaEldar/ Contact: yonina@ee.technion.ac.il
1805.09133
Supreeth Prajwal Shashikumar
Supreeth P. Shashikumar, Amit J. Shah, Gari D. Clifford, and Shamim Nemati
Detection of Paroxysmal Atrial Fibrillation using Attention-based Bidirectional Recurrent Neural Networks
Accepted to the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2018), London, UK, 2018
null
null
null
q-bio.NC cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Detection of atrial fibrillation (AF), a type of cardiac arrhythmia, is difficult since many cases of AF are usually clinically silent and undiagnosed. In particular, paroxysmal AF is a form of AF that occurs occasionally and has a higher probability of being undetected. In this work, we present an attention-based deep learning framework for detection of paroxysmal AF episodes from a sequence of windows. Time-frequency representations of 30-second recording windows, over a 10-minute data segment, are fed sequentially into a deep convolutional neural network for image-based feature extraction; the extracted features are then presented to a bidirectional recurrent neural network with an attention layer for AF detection. To demonstrate the effectiveness of the proposed framework for transient AF detection, we use a database of 24-hour Holter electrocardiogram (ECG) recordings acquired from 2850 patients at the University of Virginia heart station. The algorithm achieves an AUC of 0.94 on the testing set, which exceeds the performance of baseline models. We also demonstrate the cross-domain generalizability of the approach by adapting the learned model parameters from one recording modality (ECG) to another (photoplethysmogram) with improved AF detection performance. The proposed high-accuracy, low-false-alarm algorithm for detecting paroxysmal AF has potential applications in long-term monitoring using wearable sensors.
[ { "created": "Mon, 7 May 2018 20:34:17 GMT", "version": "v1" } ]
2018-05-24
[ [ "Shashikumar", "Supreeth P.", "" ], [ "Shah", "Amit J.", "" ], [ "Clifford", "Gari D.", "" ], [ "Nemati", "Shamim", "" ] ]
Detection of atrial fibrillation (AF), a type of cardiac arrhythmia, is difficult since many cases of AF are usually clinically silent and undiagnosed. In particular, paroxysmal AF is a form of AF that occurs occasionally and has a higher probability of being undetected. In this work, we present an attention-based deep learning framework for detection of paroxysmal AF episodes from a sequence of windows. Time-frequency representations of 30-second recording windows, over a 10-minute data segment, are fed sequentially into a deep convolutional neural network for image-based feature extraction; the extracted features are then presented to a bidirectional recurrent neural network with an attention layer for AF detection. To demonstrate the effectiveness of the proposed framework for transient AF detection, we use a database of 24-hour Holter electrocardiogram (ECG) recordings acquired from 2850 patients at the University of Virginia heart station. The algorithm achieves an AUC of 0.94 on the testing set, which exceeds the performance of baseline models. We also demonstrate the cross-domain generalizability of the approach by adapting the learned model parameters from one recording modality (ECG) to another (photoplethysmogram) with improved AF detection performance. The proposed high-accuracy, low-false-alarm algorithm for detecting paroxysmal AF has potential applications in long-term monitoring using wearable sensors.
1006.1327
Robert Burger PhD
John Robert Burger
The Electron Capture Hypothesis - A Challenge to Neuroscientists
Editing for clarity; Figs 5 & 6 changed to Figs 4 & 5
null
null
null
q-bio.NC physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lower-speed impinging ions (with hydration shells) cannot traverse ion channels once internal charge goes positive. Yet neural pulse waveforms fail to show the expected risetime distortion beginning at zero voltage. Observed waveforms cannot be explained unless electron capture is considered.
[ { "created": "Mon, 7 Jun 2010 18:43:28 GMT", "version": "v1" }, { "created": "Wed, 15 Sep 2010 17:57:48 GMT", "version": "v2" }, { "created": "Fri, 17 Sep 2010 16:52:49 GMT", "version": "v3" } ]
2010-09-20
[ [ "Burger", "John Robert", "" ] ]
Lower-speed impinging ions (with hydration shells) cannot traverse ion channels once internal charge goes positive. Yet neural pulse waveforms fail to show the expected risetime distortion beginning at zero voltage. Observed waveforms cannot be explained unless electron capture is considered.
0904.3124
Kyung Hyuk Kim
Kyung Hyuk Kim, Herbert M. Sauro
Stochastic Control Analysis for Biochemical Reaction Systems
34 pages, 11 figures
null
null
null
q-bio.QM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate how stochastic reaction processes are affected by external perturbations. We describe an extension of the deterministic metabolic control analysis (MCA) to the stochastic regime. We introduce stochastic sensitivities for mean and covariance values of reactant concentrations and reaction fluxes and show that there exist MCA-like summation theorems among these sensitivities. The summation theorems for flux variances are shown to depend on the size of the measurement time window ($\epsilon$), within which reaction events are counted for measuring a single flux. The degree of the $\epsilon$-dependency can become significant for processes involving multi-time-scale dynamics and is estimated by introducing a new measure of time scale separation. This $\epsilon$-dependency is shown to be closely related to the power-law scaling observed in flux fluctuations in various complex networks. We propose a systematic way to control fluctuations of reactant concentrations while minimizing changes in mean concentration levels. Such orthogonal control is obtained by introducing a control vector indicating the strength and direction of parameter perturbations leading to a sensitive control. We also propose a possible implication in the control of flux fluctuation: The control distribution for flux fluctuations changes with the measurement time window size, $\epsilon$. When a control engineer applies a specific control operation on a reaction system, the system can respond contrary to what is expected, depending on the time window size $\epsilon$.
[ { "created": "Mon, 20 Apr 2009 21:43:18 GMT", "version": "v1" }, { "created": "Tue, 21 Apr 2009 20:12:55 GMT", "version": "v2" }, { "created": "Fri, 21 Aug 2009 20:05:43 GMT", "version": "v3" } ]
2009-08-21
[ [ "Kim", "Kyung Hyuk", "" ], [ "Sauro", "Herbert M.", "" ] ]
In this paper, we investigate how stochastic reaction processes are affected by external perturbations. We describe an extension of the deterministic metabolic control analysis (MCA) to the stochastic regime. We introduce stochastic sensitivities for mean and covariance values of reactant concentrations and reaction fluxes and show that there exist MCA-like summation theorems among these sensitivities. The summation theorems for flux variances are shown to depend on the size of the measurement time window ($\epsilon$), within which reaction events are counted for measuring a single flux. The degree of the $\epsilon$-dependency can become significant for processes involving multi-time-scale dynamics and is estimated by introducing a new measure of time scale separation. This $\epsilon$-dependency is shown to be closely related to the power-law scaling observed in flux fluctuations in various complex networks. We propose a systematic way to control fluctuations of reactant concentrations while minimizing changes in mean concentration levels. Such orthogonal control is obtained by introducing a control vector indicating the strength and direction of parameter perturbations leading to a sensitive control. We also propose a possible implication in the control of flux fluctuation: The control distribution for flux fluctuations changes with the measurement time window size, $\epsilon$. When a control engineer applies a specific control operation on a reaction system, the system can respond contrary to what is expected, depending on the time window size $\epsilon$.
0909.3129
Wentian Li
Wentian Li, Annette Lee, Peter K Gregersen
Copy-number-variation and copy-number-alteration region detection by cumulative plots
null
BMC Bioinformatics, 10(suppl 1):S67 (2009)
10.1186/1471-2105-10-S1-S67
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Regions with copy number variations (in germline cells) or copy number alteration (in somatic cells) are of great interest for human disease gene mapping and cancer studies. They represent a new type of mutation and are larger in scale than single nucleotide polymorphisms. Using genotyping microarrays for copy number variation detection has become standard, and there is a need for improved analysis methods. Results: We apply the cumulative plot to the detection of regions with copy number variation/alteration, on samples taken from a chronic lymphocytic leukemia patient. Two sets of whole-genome genotyping of 317k single nucleotide polymorphisms, one from the normal cell and another from the cancer cell, are analyzed. We demonstrate the utility of the cumulative plot in detecting a 9Mb (9 x 10^6 bases) hemizygous deletion and a 1Mb homozygous deletion on chromosome 13. We also show the possibility of detecting smaller copy number variation/alteration regions below the 100kb range. Conclusions: As a graphic tool, the cumulative plot is an intuitive and scale-free (window-less) way of detecting copy number variation/alteration regions, especially when such regions are small.
[ { "created": "Wed, 16 Sep 2009 23:57:31 GMT", "version": "v1" } ]
2012-05-07
[ [ "Li", "Wentian", "" ], [ "Lee", "Annette", "" ], [ "Gregersen", "Peter K", "" ] ]
Background: Regions with copy number variations (in germline cells) or copy number alteration (in somatic cells) are of great interest for human disease gene mapping and cancer studies. They represent a new type of mutation and are larger in scale than single nucleotide polymorphisms. Using genotyping microarrays for copy number variation detection has become standard, and there is a need for improved analysis methods. Results: We apply the cumulative plot to the detection of regions with copy number variation/alteration, on samples taken from a chronic lymphocytic leukemia patient. Two sets of whole-genome genotyping of 317k single nucleotide polymorphisms, one from the normal cell and another from the cancer cell, are analyzed. We demonstrate the utility of the cumulative plot in detecting a 9Mb (9 x 10^6 bases) hemizygous deletion and a 1Mb homozygous deletion on chromosome 13. We also show the possibility of detecting smaller copy number variation/alteration regions below the 100kb range. Conclusions: As a graphic tool, the cumulative plot is an intuitive and scale-free (window-less) way of detecting copy number variation/alteration regions, especially when such regions are small.
0705.2504
Yuichi Togashi
Yuichi Togashi, Alexander S. Mikhailov
Nonlinear Relaxation Dynamics in Elastic Networks and Design Principles of Molecular Machines
12 pages, 9 figures
Proc. Natl. Acad. Sci. (USA) 104, 8697 (2007)
10.1073/pnas.0702950104
null
q-bio.BM cond-mat.soft physics.chem-ph
null
Analyzing nonlinear conformational relaxation dynamics in elastic networks corresponding to two classical motor proteins, we find that they respond by well-defined internal mechanical motions to various initial deformations and that these motions are robust against external perturbations. We show that this behavior is not characteristic for random elastic networks. However, special network architectures with such properties can be designed by evolutionary optimization methods. Using them, an example of an artificial elastic network, operating as a cyclic machine powered by ligand binding, is constructed.
[ { "created": "Thu, 17 May 2007 10:21:26 GMT", "version": "v1" } ]
2007-06-13
[ [ "Togashi", "Yuichi", "" ], [ "Mikhailov", "Alexander S.", "" ] ]
Analyzing nonlinear conformational relaxation dynamics in elastic networks corresponding to two classical motor proteins, we find that they respond by well-defined internal mechanical motions to various initial deformations and that these motions are robust against external perturbations. We show that this behavior is not characteristic for random elastic networks. However, special network architectures with such properties can be designed by evolutionary optimization methods. Using them, an example of an artificial elastic network, operating as a cyclic machine powered by ligand binding, is constructed.
1810.12777
Daniel Cooney
Daniel B. Cooney
The Replicator Dynamics for Multilevel Selection in Evolutionary Games
44 pages, 7 figures, Version 2: Revised Discussion
Journal of Mathematical Biology (2019), 1-54
10.1007/s00285-019-01352-5
null
q-bio.PE math.AP math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a stochastic model for evolution of group-structured populations in which interactions between group members correspond to the Prisoner's Dilemma or the Hawk-Dove game. Selection operates at two organization levels: individuals compete with peer group members based on individual payoff, while groups also compete with other groups based on average payoff of group members. In the Prisoner's Dilemma, this creates a tension between the two levels of selection, as defectors are favored at the individual level, whereas groups with at least some cooperators outperform groups of defectors at the between-group level. In the limit of infinite group size and infinite number of groups, we derive a non-local PDE that describes the probability distribution of group compositions in the population. For special families of payoff matrices, we characterize the long-time behavior of solutions of our equation, finding a threshold level of between-group selection required to sustain density steady states and the survival of cooperation. When all-cooperator groups are most fit, the average and most abundant group composition at steady state range from featuring all-defector groups when individual-level selection dominates to featuring all-cooperator groups when group-level selection dominates. When the most fit groups have a mix of cooperators and defectors, then the average and most abundant group compositions always feature a smaller fraction of cooperators than required for the optimal mix, even in the limit where group-level selection is infinitely stronger than individual-level selection. In such cases, the conflict between the two levels of selection cannot be decoupled, and cooperation cannot be sustained at all in the case when between-group competition favors an even mix of cooperators and defectors.
[ { "created": "Tue, 30 Oct 2018 14:43:06 GMT", "version": "v1" }, { "created": "Sun, 16 Dec 2018 20:02:47 GMT", "version": "v2" } ]
2019-04-12
[ [ "Cooney", "Daniel B.", "" ] ]
We consider a stochastic model for evolution of group-structured populations in which interactions between group members correspond to the Prisoner's Dilemma or the Hawk-Dove game. Selection operates at two organization levels: individuals compete with peer group members based on individual payoff, while groups also compete with other groups based on average payoff of group members. In the Prisoner's Dilemma, this creates a tension between the two levels of selection, as defectors are favored at the individual level, whereas groups with at least some cooperators outperform groups of defectors at the between-group level. In the limit of infinite group size and infinite number of groups, we derive a non-local PDE that describes the probability distribution of group compositions in the population. For special families of payoff matrices, we characterize the long-time behavior of solutions of our equation, finding a threshold level of between-group selection required to sustain density steady states and the survival of cooperation. When all-cooperator groups are most fit, the average and most abundant group composition at steady state range from featuring all-defector groups when individual-level selection dominates to featuring all-cooperator groups when group-level selection dominates. When the most fit groups have a mix of cooperators and defectors, then the average and most abundant group compositions always feature a smaller fraction of cooperators than required for the optimal mix, even in the limit where group-level selection is infinitely stronger than individual-level selection. In such cases, the conflict between the two levels of selection cannot be decoupled, and cooperation cannot be sustained at all in the case when between-group competition favors an even mix of cooperators and defectors.
2306.01634
Aaron Ge
Aaron Ge, Tongwu Zhang, Clara Bodelon, Montserrat Garcia-Closas, Jonas Almeida, Jeya Balasubramanian
A FAIR platform for reproducing mutational signature detection on tumor sequencing data
Our proposed in-browser platform is publicly available under the MIT license at https://aaronge-2020.github.io/Sig3-Detection/. No data leaves this privacy-preserving environment, which can be cloned or forked and served from other domains with no restrictions. All the code and relevant data used to create this platform can be found at https://github.com/aaronge-2020/Sig3-Detection
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
This paper presents a portable, privacy-preserving, in-browser platform for the reproducible assessment of mutational signature detection methods from sparse sequencing data generated by targeted gene panels. The platform aims to address the reproducibility challenges in mutational signature research by adhering to the FAIR principles, making it findable, accessible, interoperable, and reusable. Our approach focuses on the detection of specific mutational signatures, such as SBS3, which have been linked to specific mutagenic processes. The platform relies on publicly available data, simulation, downsampling techniques, and machine learning algorithms to generate training data and labels and to train and evaluate models. The key achievement of our platform is its transparency, reusability, and privacy preservation, enabling researchers and clinicians to analyze mutational signatures with the guarantee that no data circulates outside the client machine.
[ { "created": "Fri, 2 Jun 2023 15:53:29 GMT", "version": "v1" } ]
2023-06-05
[ [ "Ge", "Aaron", "" ], [ "Zhang", "Tongwu", "" ], [ "Bodelon", "Clara", "" ], [ "Garcia-Closas", "Montserrat", "" ], [ "Almeida", "Jonas", "" ], [ "Balasubramanian", "Jeya", "" ] ]
This paper presents a portable, privacy-preserving, in-browser platform for the reproducible assessment of mutational signature detection methods from sparse sequencing data generated by targeted gene panels. The platform aims to address the reproducibility challenges in mutational signature research by adhering to the FAIR principles, making it findable, accessible, interoperable, and reusable. Our approach focuses on the detection of specific mutational signatures, such as SBS3, which have been linked to specific mutagenic processes. The platform relies on publicly available data, simulation, downsampling techniques, and machine learning algorithms to generate training data and labels and to train and evaluate models. The key achievement of our platform is its transparency, reusability, and privacy preservation, enabling researchers and clinicians to analyze mutational signatures with the guarantee that no data circulates outside the client machine.
1108.0209
David Murrugarra
Reinhard Laubenbacher, David Murrugarra, and Alan Veliz-Cuba
Structure and Dynamics of Polynomial Dynamical Systems
10 pages, 3 figures. NSF CMMI Research and Innovation Conference 2011
null
null
null
q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete models have a long tradition in engineering, including finite state machines, Boolean networks, Petri nets, and agent-based models. Of particular importance is the question of how the model structure constrains its dynamics. This paper discusses an algebraic framework to study such questions. The systems discussed here are given by mappings on an affine space over a finite field, whose coordinate functions are polynomials. They form a general class of models which can represent many discrete model types. Assigning to such a system its dependency graph, that is, the directed graph that indicates the variable dependencies, provides a mapping from systems to graphs. A basic property of this mapping is derived and used to prove that dynamical systems with an acyclic dependency graph can only have a unique fixed point in their phase space and no periodic orbits. This result is then applied to a published model of in vitro virus competition.
[ { "created": "Sun, 31 Jul 2011 22:21:14 GMT", "version": "v1" } ]
2011-08-02
[ [ "Laubenbacher", "Reinhard", "" ], [ "Murrugarra", "David", "" ], [ "Veliz-Cuba", "Alan", "" ] ]
Discrete models have a long tradition in engineering, including finite state machines, Boolean networks, Petri nets, and agent-based models. Of particular importance is the question of how the model structure constrains its dynamics. This paper discusses an algebraic framework to study such questions. The systems discussed here are given by mappings on an affine space over a finite field, whose coordinate functions are polynomials. They form a general class of models which can represent many discrete model types. Assigning to such a system its dependency graph, that is, the directed graph that indicates the variable dependencies, provides a mapping from systems to graphs. A basic property of this mapping is derived and used to prove that dynamical systems with an acyclic dependency graph can only have a unique fixed point in their phase space and no periodic orbits. This result is then applied to a published model of in vitro virus competition.
q-bio/0609014
Gabriele Scheler
Gabriele Scheler
Dynamic re-wiring of protein interaction: The case of transactivation
4 pages; presented at NIPS 2004 workshop
null
null
null
q-bio.MN
null
We are looking at local protein interaction networks from the perspective of directed, labeled graphs with quantitative values for monotonic changes in concentrations. These systems can be used to perform stability analysis for a stable attractor, given initial values. They can also show re-configuration of whole system states by dynamic insertion of links, given specific patterns of input. The latter issue seems particularly relevant for the concept of multistability in cellular memory. We attempt to show that this level of analysis is well-suited for a number of relevant biological subsystems, such as transactivation in cardiac myocytes or G-protein coupling to adrenergic receptors. In particular, we analyse the 'motif' of an "overflow gate" as a concentration-dependent system reconfiguration.
[ { "created": "Sun, 10 Sep 2006 03:03:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Scheler", "Gabriele", "" ] ]
We are looking at local protein interaction networks from the perspective of directed, labeled graphs with quantitative values for monotonic changes in concentrations. These systems can be used to perform stability analysis for a stable attractor, given initial values. They can also show re-configuration of whole system states by dynamic insertion of links, given specific patterns of input. The latter issue seems particularly relevant for the concept of multistability in cellular memory. We attempt to show that this level of analysis is well-suited for a number of relevant biological subsystems, such as transactivation in cardiac myocytes or G-protein coupling to adrenergic receptors. In particular, we analyse the 'motif' of an "overflow gate" as a concentration-dependent system reconfiguration.
1505.05096
Guo-Wei Wei
Kristopher Opron, Kelin Xia and Guo-Wei Wei
Capturing protein multiscale thermal fluctuations
16 pages, 8 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing elastic network models are typically parametrized at a given cutoff distance and often fail to properly predict the thermal fluctuation of many macromolecules that involve multiple characteristic length scales. We introduce a multiscale flexibility-rigidity index (mFRI) method to resolve this problem. The proposed mFRI utilizes two or three correlation kernels parametrized at different length scales to capture protein interactions at corresponding scales. It is about 20% more accurate than the Gaussian network model (GNM) in the B-factor prediction of a set of 364 proteins. Additionally, the present method is able to deliver accurate predictions for multiscale macromolecules that fail GNM. Finally, for a protein of $N$ residues, mFRI is of linear scaling (O(N)) in computational complexity, in contrast to the order of O(N^3) for GNM.
[ { "created": "Tue, 19 May 2015 17:32:13 GMT", "version": "v1" }, { "created": "Wed, 20 May 2015 01:43:43 GMT", "version": "v2" } ]
2015-05-21
[ [ "Opron", "Kristopher", "" ], [ "Xia", "Kelin", "" ], [ "Wei", "Guo-Wei", "" ] ]
Existing elastic network models are typically parametrized at a given cutoff distance and often fail to properly predict the thermal fluctuation of many macromolecules that involve multiple characteristic length scales. We introduce a multiscale flexibility-rigidity index (mFRI) method to resolve this problem. The proposed mFRI utilizes two or three correlation kernels parametrized at different length scales to capture protein interactions at corresponding scales. It is about 20% more accurate than the Gaussian network model (GNM) in the B-factor prediction of a set of 364 proteins. Additionally, the present method is able to deliver accurate predictions for multiscale macromolecules that fail GNM. Finally, for a protein of $N$ residues, mFRI is of linear scaling (O(N)) in computational complexity, in contrast to the order of O(N^3) for GNM.
1408.1869
Nicolae Radu Zabet
Nicolae Radu Zabet
Negative Feedback and Physical Limits of Genes
17 pages, 7 figures, 1 table
Journal of Theoretical Biology 248:1 (2011) 82-91
10.1016/j.jtbi.2011.06.021
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper compares the auto-repressed gene to a simple one (a gene without auto-regulation) in terms of response time and output noise under the assumption of fixed metabolic cost. The analysis shows that, in the case of a non-vanishing leak expression rate, the negative feedback reduces both the switching-on and switching-off times of a gene. The noise of the auto-repressed gene will be lower than that of the simple gene only for low leak expression rates. Summing up, for low but non-vanishing leak expression rates, the auto-repressed gene is both faster and less noisy than the simple one.
[ { "created": "Fri, 8 Aug 2014 14:36:06 GMT", "version": "v1" } ]
2014-08-11
[ [ "Zabet", "Nicolae Radu", "" ] ]
This paper compares the auto-repressed gene to a simple one (a gene without auto-regulation) in terms of response time and output noise under the assumption of fixed metabolic cost. The analysis shows that, in the case of a non-vanishing leak expression rate, the negative feedback reduces both the switching-on and switching-off times of a gene. The noise of the auto-repressed gene will be lower than that of the simple gene only for low leak expression rates. Summing up, for low but non-vanishing leak expression rates, the auto-repressed gene is both faster and less noisy than the simple one.
1502.01409
David Budden
David M Budden, Daniel G Hurley and Edmund J Crampin
TREEOME: A framework for epigenetic and transcriptomic data integration to explore regulatory interactions controlling transcription
14 pages, 6 figures
Epigenetics & Chromatin (2015) 8:21
10.1186/s13072-015-0013-9
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Predictive modelling of gene expression is a powerful framework for the in silico exploration of transcriptional regulatory interactions through the integration of high-throughput -omics data. A major limitation of previous approaches is their inability to handle conditional and synergistic interactions that emerge when collectively analysing genes subject to different regulatory mechanisms. This limitation reduces overall predictive power and thus the reliability of downstream biological inference. Results: We introduce an analytical modelling framework (TREEOME: tree of models of expression) that integrates epigenetic and transcriptomic data by separating genes into putative regulatory classes. Current predictive modelling approaches have found both DNA methylation and histone modification epigenetic data to provide little or no improvement in accuracy of prediction of transcript abundance despite, for example, distinct anti-correlation between mRNA levels and promoter-localised DNA methylation. To improve on this, in TREEOME we evaluate four possible methods of formulating gene-level DNA methylation metrics, which provide a foundation for identifying gene-level methylation events and subsequent differential analysis, whereas most previous techniques operate at the level of individual CpG dinucleotides. We demonstrate TREEOME by integrating gene-level DNA methylation (bisulfite-seq) and histone modification (ChIP-seq) data to accurately predict genome-wide mRNA transcript abundance (RNA-seq) for H1-hESC and GM12878 cell lines. Availability: TREEOME is implemented using open-source software and made available as a pre-configured bootable reference environment. All scripts and data presented in this study are available online at http://sourceforge.net/projects/budden2015treeome/.
[ { "created": "Thu, 5 Feb 2015 02:06:14 GMT", "version": "v1" } ]
2018-08-14
[ [ "Budden", "David M", "" ], [ "Hurley", "Daniel G", "" ], [ "Crampin", "Edmund J", "" ] ]
Motivation: Predictive modelling of gene expression is a powerful framework for the in silico exploration of transcriptional regulatory interactions through the integration of high-throughput -omics data. A major limitation of previous approaches is their inability to handle conditional and synergistic interactions that emerge when collectively analysing genes subject to different regulatory mechanisms. This limitation reduces overall predictive power and thus the reliability of downstream biological inference. Results: We introduce an analytical modelling framework (TREEOME: tree of models of expression) that integrates epigenetic and transcriptomic data by separating genes into putative regulatory classes. Current predictive modelling approaches have found both DNA methylation and histone modification epigenetic data to provide little or no improvement in accuracy of prediction of transcript abundance despite, for example, distinct anti-correlation between mRNA levels and promoter-localised DNA methylation. To improve on this, in TREEOME we evaluate four possible methods of formulating gene-level DNA methylation metrics, which provide a foundation for identifying gene-level methylation events and subsequent differential analysis, whereas most previous techniques operate at the level of individual CpG dinucleotides. We demonstrate TREEOME by integrating gene-level DNA methylation (bisulfite-seq) and histone modification (ChIP-seq) data to accurately predict genome-wide mRNA transcript abundance (RNA-seq) for H1-hESC and GM12878 cell lines. Availability: TREEOME is implemented using open-source software and made available as a pre-configured bootable reference environment. All scripts and data presented in this study are available online at http://sourceforge.net/projects/budden2015treeome/.
2004.03384
Kerstin Ritter
Matthias Ritter, Derek V.M. Ott, Friedemann Paul, John-Dylan Haynes, Kerstin Ritter
Covid-19 -- A simple statistical model for predicting ICU load in early phases of the disease
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
One major bottleneck in the ongoing COVID-19 pandemic is the limited number of critical care beds. Due to the dynamic development of infections and the time lag between when patients are infected and when a proportion of them enters an intensive care unit (ICU), the need for future intensive care can easily be underestimated. To infer future ICU load from reported infections, we suggest a simple statistical model that (1) accounts for time lags and (2) allows for making predictions depending on different future growth of infections. We have evaluated our model for three regions, namely Berlin (Germany), Lombardy (Italy), and Madrid (Spain). Before extensive containment measures made an impact, we first estimate the region-specific model parameters. Whereas for Berlin, an ICU rate of 6%, a time lag of 6 days, and an average stay of 12 days in ICU provide the best fit of the data, for Lombardy and Madrid the ICU rate was higher (18% and 15%) and the time lag (0 and 3 days) and the average stay (4 and 8 days) in ICU shorter. The region-specific models are then used to predict future ICU load assuming either a continued exponential phase with varying growth rates (0-15%) or linear growth. Thus, the model can help to predict a potential exceedance of ICU capacity. Although our predictions are based on small data sets and disregard non-stationary dynamics, our model is simple, robust, and can be used in early phases of the disease when data are scarce.
[ { "created": "Mon, 6 Apr 2020 17:54:18 GMT", "version": "v1" }, { "created": "Mon, 27 Jul 2020 14:50:28 GMT", "version": "v2" } ]
2020-07-28
[ [ "Ritter", "Matthias", "" ], [ "Ott", "Derek V. M.", "" ], [ "Paul", "Friedemann", "" ], [ "Haynes", "John-Dylan", "" ], [ "Ritter", "Kerstin", "" ] ]
One major bottleneck in the ongoing COVID-19 pandemic is the limited number of critical care beds. Due to the dynamic development of infections and the time lag between when patients are infected and when a proportion of them enters an intensive care unit (ICU), the need for future intensive care can easily be underestimated. To infer future ICU load from reported infections, we suggest a simple statistical model that (1) accounts for time lags and (2) allows for making predictions depending on different future growth of infections. We have evaluated our model for three regions, namely Berlin (Germany), Lombardy (Italy), and Madrid (Spain). Before extensive containment measures made an impact, we first estimate the region-specific model parameters. Whereas for Berlin, an ICU rate of 6%, a time lag of 6 days, and an average stay of 12 days in ICU provide the best fit of the data, for Lombardy and Madrid the ICU rate was higher (18% and 15%) and the time lag (0 and 3 days) and the average stay (4 and 8 days) in ICU shorter. The region-specific models are then used to predict future ICU load assuming either a continued exponential phase with varying growth rates (0-15%) or linear growth. Thus, the model can help to predict a potential exceedance of ICU capacity. Although our predictions are based on small data sets and disregard non-stationary dynamics, our model is simple, robust, and can be used in early phases of the disease when data are scarce.
2005.11255
Jenny Poulton
Jenny Marie Poulton, Thomas Edward Ouldridge
Edge-effects dominate copying thermodynamics for finite-length molecular oligomers
null
null
10.1088/1367-2630/ac0389
null
q-bio.SC cond-mat.stat-mech q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living systems produce copies of information-carrying molecules such as DNA by assembling monomer units into finite-length oligomer (short polymer) copies. We explore the role of initiation and termination of the copy process in the thermodynamics of copying. By splitting the free-energy change of copy formation into informational and chemical terms, we show that copy accuracy plays no direct role in the overall thermodynamics. Instead, it is thermodynamically costly to produce outputs that are more similar to the oligomers in the environment than sequences obtained by randomly sampling monomers. Copy accuracy can be thermodynamically neutral, or even favoured, depending on the surroundings. Oligomer copying mechanisms can thus function as information engines that interconvert chemical and information-based free energy. Hard thermodynamic constraints on accuracy derived for infinite-length polymers instead manifest as kinetic barriers experienced while the copy is template-attached. These barriers are easily surmounted by shorter oligomers.
[ { "created": "Fri, 22 May 2020 16:05:11 GMT", "version": "v1" }, { "created": "Mon, 15 Mar 2021 16:32:11 GMT", "version": "v2" } ]
2021-08-11
[ [ "Poulton", "Jenny Marie", "" ], [ "Ouldridge", "Thomas Edward", "" ] ]
Living systems produce copies of information-carrying molecules such as DNA by assembling monomer units into finite-length oligomer (short polymer) copies. We explore the role of initiation and termination of the copy process in the thermodynamics of copying. By splitting the free-energy change of copy formation into informational and chemical terms, we show that copy accuracy plays no direct role in the overall thermodynamics. Instead, it is thermodynamically costly to produce outputs that are more similar to the oligomers in the environment than sequences obtained by randomly sampling monomers. Copy accuracy can be thermodynamically neutral, or even favoured, depending on the surroundings. Oligomer copying mechanisms can thus function as information engines that interconvert chemical and information-based free energy. Hard thermodynamic constraints on accuracy derived for infinite-length polymers instead manifest as kinetic barriers experienced while the copy is template-attached. These barriers are easily surmounted by shorter oligomers.
1605.03090
Yogesh Virkar
Yogesh S. Virkar and Woodrow L. Shew and Juan G. Restrepo and Edward Ott
Metabolite transport through glial networks stabilizes the dynamics of learning
8 pages, 5 figures
Phys. Rev. E 94, 042310 (2016)
10.1103/PhysRevE.94.042310
null
q-bio.NC cond-mat.dis-nn nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model comprised of two interacting networks: a model neural network interconnected by synapses which undergo spike-timing dependent plasticity (STDP); and a model glial network interconnected by gap junctions which diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.
[ { "created": "Tue, 10 May 2016 16:37:30 GMT", "version": "v1" } ]
2016-10-26
[ [ "Virkar", "Yogesh S.", "" ], [ "Shew", "Woodrow L.", "" ], [ "Restrepo", "Juan G.", "" ], [ "Ott", "Edward", "" ] ]
Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model comprised of two interacting networks: a model neural network interconnected by synapses which undergo spike-timing dependent plasticity (STDP); and a model glial network interconnected by gap junctions which diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.
2206.14874
Zachary Fox
Zachary R Fox
Extracting Information from Stochastic Trajectories of Gene Expression
6 pages, 4 figures
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
Gene expression is a stochastic process in which cells produce biomolecules essential to the function of life. Modern experimental methods allow for the measurement of biomolecules at single-cell and single-molecule resolution over time. Mathematical models are used to make sense of these experiments. The codesign of experiments and models allows one to use models to design optimal experiments, and to find experiments which provide as much information as possible about relevant model parameters. Here, we provide a formulation of Fisher information for trajectories sampled from the continuous time Markov processes often used to model biological systems, and apply the result to potentially correlated measurements of stochastic gene expression. We validate the result on two commonly used models of gene expression and show it can be used to optimize measurement periods for simulated single-cell fluorescence microscopy experiments. Finally, we use a connection between Fisher information and mutual information to derive channel capacities of nonlinearly regulated gene expression.
[ { "created": "Wed, 29 Jun 2022 19:38:04 GMT", "version": "v1" } ]
2022-07-01
[ [ "Fox", "Zachary R", "" ] ]
Gene expression is a stochastic process in which cells produce biomolecules essential to the function of life. Modern experimental methods allow for the measurement of biomolecules at single-cell and single-molecule resolution over time. Mathematical models are used to make sense of these experiments. The codesign of experiments and models allows one to use models to design optimal experiments, and to find experiments which provide as much information as possible about relevant model parameters. Here, we provide a formulation of Fisher information for trajectories sampled from the continuous time Markov processes often used to model biological systems, and apply the result to potentially correlated measurements of stochastic gene expression. We validate the result on two commonly used models of gene expression and show it can be used to optimize measurement periods for simulated single-cell fluorescence microscopy experiments. Finally, we use a connection between Fisher information and mutual information to derive channel capacities of nonlinearly regulated gene expression.
1603.05261
Richard Betzel
Richard F. Betzel, Shi Gu, John D. Medaglia, Fabio Pasqualetti, Danielle S. Bassett
Optimally controlling the human connectome: the role of network topology
23 pages, 6 figures, 9 supplementary figures
null
10.1038/srep30770
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To meet ongoing cognitive demands, the human brain must seamlessly transition from one brain state to another, in the process drawing on different cognitive systems. How does the brain's network of anatomical connections help facilitate such transitions? Which features of this network contribute to making one transition easy and another transition difficult? Here, we address these questions using network control theory. We calculate the optimal input signals to drive the brain to and from states dominated by different cognitive systems. The input signals allow us to assess the contributions made by different brain regions. We show that such contributions, which we measure as energy, are correlated with regions' weighted degrees. We also show that the network communicability, a measure of direct and indirect connectedness between brain regions, predicts the extent to which brain regions compensate when input to another region is suppressed. Finally, we identify optimal states in which the brain should start (and finish) in order to minimize transition energy. We show that the optimal target states display high activity in hub regions, implicating the brain's rich club. Furthermore, when rich club organization is destroyed, the energy cost associated with state transitions increases significantly, demonstrating that it is the richness of brain regions that makes them ideal targets.
[ { "created": "Wed, 16 Mar 2016 20:13:56 GMT", "version": "v1" } ]
2016-09-08
[ [ "Betzel", "Richard F.", "" ], [ "Gu", "Shi", "" ], [ "Medaglia", "John D.", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Bassett", "Danielle S.", "" ] ]
To meet ongoing cognitive demands, the human brain must seamlessly transition from one brain state to another, in the process drawing on different cognitive systems. How does the brain's network of anatomical connections help facilitate such transitions? Which features of this network contribute to making one transition easy and another transition difficult? Here, we address these questions using network control theory. We calculate the optimal input signals to drive the brain to and from states dominated by different cognitive systems. The input signals allow us to assess the contributions made by different brain regions. We show that such contributions, which we measure as energy, are correlated with regions' weighted degrees. We also show that the network communicability, a measure of direct and indirect connectedness between brain regions, predicts the extent to which brain regions compensate when input to another region is suppressed. Finally, we identify optimal states in which the brain should start (and finish) in order to minimize transition energy. We show that the optimal target states display high activity in hub regions, implicating the brain's rich club. Furthermore, when rich club organization is destroyed, the energy cost associated with state transitions increases significantly, demonstrating that it is the richness of brain regions that makes them ideal targets.
2205.02665
Adrien Peyrache
Adrien Peyrache
Querying hippocampal replay with subcortical inputs
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
During sleep, the hippocampus recapitulates neuronal patterns corresponding to behavioral trajectories during previous experiences. This hippocampal replay supports the formation of long-term memories. Yet, whether replay originates within the hippocampal circuitry or is initiated by extrahippocampal inputs is unknown. Here, I review recent findings regarding the organization of neuronal activity upstream to the hippocampus, in the head-direction (HD) and grid cell networks. I argue that hippocampal activity is under the influence of primary spatial signals, which originate from subcortical structures and set the stage for memory replay. In turn, hippocampal replay resets the HD network activity to select a new direction for the next replay event. This reciprocal interaction between the HD network and the hippocampus may be essential in providing meaning to hippocampal activity, specifically by training decoders of hippocampal sequences. Neuronal dynamics in thalamo-hippocampal loops may thus be instrumental for memory processes during sleep.
[ { "created": "Thu, 5 May 2022 14:16:05 GMT", "version": "v1" } ]
2022-05-06
[ [ "Peyrache", "Adrien", "" ] ]
During sleep, the hippocampus recapitulates neuronal patterns corresponding to behavioral trajectories during previous experiences. This hippocampal replay supports the formation of long-term memories. Yet, whether replay originates within the hippocampal circuitry or is initiated by extrahippocampal inputs is unknown. Here, I review recent findings regarding the organization of neuronal activity upstream to the hippocampus, in the head-direction (HD) and grid cell networks. I argue that hippocampal activity is under the influence of primary spatial signals, which originate from subcortical structures and set the stage for memory replay. In turn, hippocampal replay resets the HD network activity to select a new direction for the next replay event. This reciprocal interaction between the HD network and the hippocampus may be essential in providing meaning to hippocampal activity, specifically by training decoders of hippocampal sequences. Neuronal dynamics in thalamo-hippocampal loops may thus be instrumental for memory processes during sleep.
1802.04087
Min Xu
Chang Liu, Xiangrui Zeng, Ruogu Lin, Xiaodan Liang, Zachary Freyberg, Eric Xing, Min Xu
Deep learning based supervised semantic segmentation of Electron Cryo-Subtomograms
9 pages
IEEE International Conference on Image Processing (ICIP) 2018
null
null
q-bio.QM cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution. It enables analyzing the native structures of macromolecular complexes and their spatial organization inside single cells. However, due to the high degree of structural complexity and practical imaging limitations, systematic macromolecular structural recovery inside CECT images remains challenging. Particularly, the recovery of a macromolecule is likely to be biased by its neighbor structures due to the high molecular crowding. To reduce the bias, here we introduce a novel 3D convolutional neural network inspired by Fully Convolutional Network and Encoder-Decoder Architecture for the supervised segmentation of macromolecules of interest in subtomograms. The tests of our models on realistically simulated CECT data demonstrate that our new approach has significantly improved segmentation performance compared to our baseline approach. Also, we demonstrate that the proposed model has generalization ability to segment new structures that do not exist in training data.
[ { "created": "Mon, 12 Feb 2018 14:54:49 GMT", "version": "v1" } ]
2018-05-16
[ [ "Liu", "Chang", "" ], [ "Zeng", "Xiangrui", "" ], [ "Lin", "Ruogu", "" ], [ "Liang", "Xiaodan", "" ], [ "Freyberg", "Zachary", "" ], [ "Xing", "Eric", "" ], [ "Xu", "Min", "" ] ]
Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution. It enables analyzing the native structures of macromolecular complexes and their spatial organization inside single cells. However, due to the high degree of structural complexity and practical imaging limitations, systematic macromolecular structural recovery inside CECT images remains challenging. Particularly, the recovery of a macromolecule is likely to be biased by its neighbor structures due to the high molecular crowding. To reduce the bias, here we introduce a novel 3D convolutional neural network inspired by Fully Convolutional Network and Encoder-Decoder Architecture for the supervised segmentation of macromolecules of interest in subtomograms. The tests of our models on realistically simulated CECT data demonstrate that our new approach has significantly improved segmentation performance compared to our baseline approach. Also, we demonstrate that the proposed model has generalization ability to segment new structures that do not exist in training data.
1412.4875
Petter Holme
Petter Holme, Taro Takaguchi
Time evolution of predictability of epidemics on networks
null
Phys. Rev. E 91, 042811 (2015)
10.1103/PhysRevE.91.042811
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information -- i.e., knowing the state of each individual with respect to the disease -- the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.
[ { "created": "Tue, 16 Dec 2014 04:55:20 GMT", "version": "v1" }, { "created": "Tue, 5 May 2015 12:34:23 GMT", "version": "v2" } ]
2015-05-20
[ [ "Holme", "Petter", "" ], [ "Takaguchi", "Taro", "" ] ]
Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information -- i.e., knowing the state of each individual with respect to the disease -- the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability decreases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.
1508.01737
David Schnoerr
David Schnoerr, Guido Sanguinetti and Ramon Grima
Comparison of different moment-closure approximations for stochastic chemical kinetics
36 pages, 14 figures
J. Chem. Phys. 143, 185101 (2015)
10.1063/1.4934990
null
q-bio.QM physics.chem-ph q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years moment-closure approximations (MA) of the chemical master equation have become a popular method for the study of stochastic effects in chemical reaction systems. Several different MA methods have been proposed and applied in the literature, but it remains unclear how they perform with respect to each other. In this paper we study the normal, Poisson, log-normal and central-moment-neglect MAs by applying them to understand the stochastic properties of chemical systems whose deterministic rate equations show the properties of bistability, ultrasensitivity and oscillatory behaviour. Our results suggest that the normal MA is favourable over the other studied MAs. In particular we found that (i) the size of the region of parameter space where a closure gives physically meaningful results, e.g. positive mean and variance, is considerably larger for the normal closure than for the other three closures; (ii) the accuracy of the predictions of the four closures (relative to simulations using the stochastic simulation algorithm) is comparable in those regions of parameter space where all closures give physically meaningful results; (iii) the Poisson and log-normal MAs are not uniquely defined for systems involving conservation laws in molecule numbers. We also describe the new software package MOCA which enables the automated numerical analysis of various MA methods in a graphical user interface and which was used to perform the comparative analysis presented in this paper. MOCA allows the user to develop novel closure methods and can treat polynomial, non-polynomial, as well as time-dependent propensity functions, thus being applicable to virtually any chemical reaction system.
[ { "created": "Fri, 7 Aug 2015 16:00:34 GMT", "version": "v1" }, { "created": "Sat, 7 Nov 2015 11:18:55 GMT", "version": "v2" } ]
2015-11-17
[ [ "Schnoerr", "David", "" ], [ "Sanguinetti", "Guido", "" ], [ "Grima", "Ramon", "" ] ]
In recent years moment-closure approximations (MA) of the chemical master equation have become a popular method for the study of stochastic effects in chemical reaction systems. Several different MA methods have been proposed and applied in the literature, but it remains unclear how they perform with respect to each other. In this paper we study the normal, Poisson, log-normal and central-moment-neglect MAs by applying them to understand the stochastic properties of chemical systems whose deterministic rate equations show the properties of bistability, ultrasensitivity and oscillatory behaviour. Our results suggest that the normal MA is favourable over the other studied MAs. In particular we found that (i) the size of the region of parameter space where a closure gives physically meaningful results, e.g. positive mean and variance, is considerably larger for the normal closure than for the other three closures; (ii) the accuracy of the predictions of the four closures (relative to simulations using the stochastic simulation algorithm) is comparable in those regions of parameter space where all closures give physically meaningful results; (iii) the Poisson and log-normal MAs are not uniquely defined for systems involving conservation laws in molecule numbers. We also describe the new software package MOCA which enables the automated numerical analysis of various MA methods in a graphical user interface and which was used to perform the comparative analysis presented in this paper. MOCA allows the user to develop novel closure methods and can treat polynomial, non-polynomial, as well as time-dependent propensity functions, thus being applicable to virtually any chemical reaction system.
2210.07345
Megan Chambers
Megan Chambers, Natalie Johnston, Ian Livengood, Miya Spinelli, Radmila Sazdanovic, Mette S Olufsen
A Topological Data Analysis Study on Murine Pulmonary Arterial Trees with Pulmonary Hypertension
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Pulmonary hypertension (PH), defined by a mean pulmonary arterial blood pressure above 20 mmHg, is a cardiovascular disease impacting the pulmonary vasculature. PH is accompanied by vascular remodeling, wherein vessels become stiffer, large vessels dilate, and smaller vessels constrict. Some types of PH, including hypoxia-induced PH (HPH), lead to microvascular rarefaction. The goal of this study is to analyze the change in pulmonary arterial network morphometry in the presence of HPH. To do so, we use novel methods from topological data analysis (TDA), employing persistent homology to quantify arterial network morphometry for control and hypertensive mice. These methods are used to characterize arterial trees extracted from micro-computed tomography (micro-CT) images. To compare results between control and hypertensive animals, we normalize generated networks using three pruning algorithms. This proof-of-concept study shows that the pruning methods affect the spatial tree statistics and complexities of the trees. Results show that HPH trees have higher depth and that the directional complexities correlate with branch number, except for trees pruned by vessel radius, where the left and anterior complexity are lower compared to control trees. While more data is required to make a conclusion about the overall effect of HPH on network topology, this study provides a framework for analyzing the topology of biological networks and is a step towards the extraction of relevant information for diagnosing and detecting HPH.
[ { "created": "Thu, 13 Oct 2022 20:35:37 GMT", "version": "v1" }, { "created": "Wed, 1 Feb 2023 23:49:41 GMT", "version": "v2" } ]
2023-02-03
[ [ "Chambers", "Megan", "" ], [ "Johnston", "Natalie", "" ], [ "Livengood", "Ian", "" ], [ "Spinelli", "Miya", "" ], [ "Sazdanovic", "Radmila", "" ], [ "Olufsen", "Mette S", "" ] ]
Pulmonary hypertension (PH), defined by a mean pulmonary arterial blood pressure above 20 mmHg, is a cardiovascular disease impacting the pulmonary vasculature. PH is accompanied by vascular remodeling, wherein vessels become stiffer, large vessels dilate, and smaller vessels constrict. Some types of PH, including hypoxia-induced PH (HPH), lead to microvascular rarefaction. The goal of this study is to analyze the change in pulmonary arterial network morphometry in the presence of HPH. To do so, we use novel methods from topological data analysis (TDA), employing persistent homology to quantify arterial network morphometry for control and hypertensive mice. These methods are used to characterize arterial trees extracted from micro-computed tomography (micro-CT) images. To compare results between control and hypertensive animals, we normalize generated networks using three pruning algorithms. This proof-of-concept study shows that the pruning methods affect the spatial tree statistics and complexities of the trees. Results show that HPH trees have higher depth and that the directional complexities correlate with branch number, except for trees pruned by vessel radius, where the left and anterior complexity are lower compared to control trees. While more data is required to make a conclusion about the overall effect of HPH on network topology, this study provides a framework for analyzing the topology of biological networks and is a step towards the extraction of relevant information for diagnosing and detecting HPH.
q-bio/0509013
Atul Narang
Jason T. Noel, Brenton Cox, Atul Narang
Identification of the growth-limiting step in continuous cultures from initial rates measured in response to substrate-excess conditions
12 pages
null
null
null
q-bio.MN
null
When steady state chemostat cultures are abruptly exposed to substrate-excess conditions, they exhibit long lags before adjusting to the new environment. The identity of the rate-limiting step for this slow response can be inferred from the initial yields and specific growth rates measured by exposing steady state cultures at various dilution rates to substrate-excess conditions. We measured these parameters for glucose-limited cultures of E. coli ML308 growing at various dilution rates between 0.03 and 0.6 1/hr. In all the cases, the initial yields were 20-30% less than the steady state yields. The decline of the yield implies that overflow metabolism is triggered in response to excess glucose. It is therefore unlikely that the initial response of the cells is limited by substrate uptake. The initial specific growth rates of cultures growing at low dilution rates (D = 0.03, 0.05, 0.075, 0.1, 0.3 1/hr) were significantly higher than the steady state specific growth rates. However, the increment in the specific growth rate decreased with the dilution rate, and at D=0.6 1/hr, there was no improvement in the specific growth rate. The initial specific growth rates varied hyperbolically with the dilution rate, decreasing sharply at dilution rates below 0.1 1/hr and saturating at D=0.6 1/hr. This is consistent with a picture in which the initial response is limited by the activity of glutamate dehydrogenase.
[ { "created": "Mon, 12 Sep 2005 20:22:46 GMT", "version": "v1" } ]
2007-05-23
[ [ "Noel", "Jason T.", "" ], [ "Cox", "Brenton", "" ], [ "Narang", "Atul", "" ] ]
When steady state chemostat cultures are abruptly exposed to substrate-excess conditions, they exhibit long lags before adjusting to the new environment. The identity of the rate-limiting step for this slow response can be inferred from the initial yields and specific growth rates measured by exposing steady state cultures at various dilution rates to substrate-excess conditions. We measured these parameters for glucose-limited cultures of E. coli ML308 growing at various dilution rates between 0.03 and 0.6 1/hr. In all cases, the initial yields were 20-30% less than the steady state yields. The decline of the yield implies that overflow metabolism is triggered in response to excess glucose. It is therefore unlikely that the initial response of the cells is limited by substrate uptake. The initial specific growth rates of cultures growing at low dilution rates (D = 0.03, 0.05, 0.075, 0.1, 0.3 1/hr) were significantly higher than the steady state specific growth rates. However, the increment in the specific growth rate decreased with the dilution rate, and at D=0.6 1/hr, there was no improvement in the specific growth rate. The initial specific growth rates varied hyperbolically with the dilution rate, decreasing sharply at dilution rates below 0.1 1/hr and saturating at D=0.6 1/hr. This is consistent with a picture in which the initial response is limited by the activity of glutamate dehydrogenase.
2309.14841
Florian Ahrens
Florian Ahrens, Mihai Pomarlan, Daniel Be{\ss}ler, Thorsten Fehr, Michael Beetz, Manfred Herrmann
Towards a Neuronally Consistent Ontology for Robotic Agents
Preprint of paper accepted for the European Conference on Artificial Intelligence (ECAI) 2023 (minor typo corrections)
null
null
null
q-bio.NC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Collaborative Research Center for Everyday Activity Science & Engineering (CRC EASE) aims to enable robots to perform environmental interaction tasks with close to human capacity. It therefore employs a shared ontology to model the activity of both kinds of agents, empowering robots to learn from human experiences. To properly describe these human experiences, the ontology will strongly benefit from incorporating characteristics of neuronal information processing which are not accessible from a behavioral perspective alone. We, therefore, propose the analysis of human neuroimaging data for evaluation and validation of concepts and events defined in the ontology model underlying most of the CRC projects. In an exploratory analysis, we employed an Independent Component Analysis (ICA) on functional Magnetic Resonance Imaging (fMRI) data from participants who were presented with the same complex video stimuli of activities as robotic and human agents in different environments and contexts. We then correlated the activity patterns of brain networks represented by derived components with timings of annotated event categories as defined by the ontology model. The present results demonstrate a subset of common networks with stable correlations and specificity towards particular event classes and groups, associated with environmental and contextual factors. These neuronal characteristics will open up avenues for adapting the ontology model to be more consistent with human information processing.
[ { "created": "Tue, 26 Sep 2023 11:13:02 GMT", "version": "v1" } ]
2023-09-27
[ [ "Ahrens", "Florian", "" ], [ "Pomarlan", "Mihai", "" ], [ "Beßler", "Daniel", "" ], [ "Fehr", "Thorsten", "" ], [ "Beetz", "Michael", "" ], [ "Herrmann", "Manfred", "" ] ]
The Collaborative Research Center for Everyday Activity Science & Engineering (CRC EASE) aims to enable robots to perform environmental interaction tasks with close to human capacity. It therefore employs a shared ontology to model the activity of both kinds of agents, empowering robots to learn from human experiences. To properly describe these human experiences, the ontology will strongly benefit from incorporating characteristics of neuronal information processing which are not accessible from a behavioral perspective alone. We, therefore, propose the analysis of human neuroimaging data for evaluation and validation of concepts and events defined in the ontology model underlying most of the CRC projects. In an exploratory analysis, we employed an Independent Component Analysis (ICA) on functional Magnetic Resonance Imaging (fMRI) data from participants who were presented with the same complex video stimuli of activities as robotic and human agents in different environments and contexts. We then correlated the activity patterns of brain networks represented by derived components with timings of annotated event categories as defined by the ontology model. The present results demonstrate a subset of common networks with stable correlations and specificity towards particular event classes and groups, associated with environmental and contextual factors. These neuronal characteristics will open up avenues for adapting the ontology model to be more consistent with human information processing.
0806.3489
Juan G. Restrepo
Juan G. Restrepo and Alain Karma
Line-Defect Patterns of Unstable Spiral Waves in Cardiac Tissue
4 pages, 5 figures
null
10.1103/PhysRevE.79.030906
null
q-bio.TO q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spiral wave propagation in period-2 excitable media is often accompanied by line-defects, the locus of points with period-1 oscillations. Here we investigate spiral line-defects in cardiac tissue where period-2 behavior has a known arrhythmogenic role. We find that the number of line defects, which is constrained to be an odd integer, is three for a freely rotating spiral, with and without meander, but one for a spiral anchored around a fixed heterogeneity. We interpret this finding analytically using a simple theory where spiral wave unstable modes with different numbers of line-defects correspond to quantized solutions of a Helmholtz equation. Furthermore, the slow inward rotation of spiral line-defects is described in different regimes.
[ { "created": "Fri, 20 Jun 2008 23:28:46 GMT", "version": "v1" } ]
2009-11-13
[ [ "Restrepo", "Juan G.", "" ], [ "Karma", "Alain", "" ] ]
Spiral wave propagation in period-2 excitable media is often accompanied by line-defects, the locus of points with period-1 oscillations. Here we investigate spiral line-defects in cardiac tissue where period-2 behavior has a known arrhythmogenic role. We find that the number of line defects, which is constrained to be an odd integer, is three for a freely rotating spiral, with and without meander, but one for a spiral anchored around a fixed heterogeneity. We interpret this finding analytically using a simple theory where spiral wave unstable modes with different numbers of line-defects correspond to quantized solutions of a Helmholtz equation. Furthermore, the slow inward rotation of spiral line-defects is described in different regimes.
2108.01973
Farzad Fatehi
Farzad Fatehi, Richard J. Bingham, Pierre-Philippe Dechant, Peter G. Stockley, and Reidun Twarock
Therapeutic Interfering Particles Exploiting Viral Replication and Assembly Mechanisms Show Promising Performance: A Modelling Study
Accepted version for publication in Scientific Reports after a minor revision
Scientific Reports, 11, 23847 (2021)
10.1038/s41598-021-03168-0
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Defective interfering particles arise spontaneously during a viral infection as mutants lacking essential parts of the viral genome. Their ability to replicate in the presence of the wild-type (WT) virus (at the expense of viable viral particles) is mimicked and exploited by therapeutic interfering particles. We propose a strategy for the design of therapeutic interfering RNAs (tiRNAs) against positive-sense single-stranded RNA viruses that assemble via packaging signal-mediated assembly. These tiRNAs contain both an optimised version of the virus assembly manual that is encoded by multiple dispersed RNA packaging signals and a replication signal for viral polymerase, but lack any protein coding information. We use an intracellular model for hepatitis C viral (HCV) infection that captures key aspects of the competition dynamics between tiRNAs and viral genomes for virally produced capsid protein and polymerase. We show that only a small increase in the assembly and replication efficiency of the tiRNAs compared with WT virus is required in order to achieve a treatment efficacy greater than 99%. This demonstrates that the proposed tiRNA design could be a promising treatment option for RNA viral infections.
[ { "created": "Wed, 4 Aug 2021 11:30:34 GMT", "version": "v1" }, { "created": "Wed, 15 Dec 2021 15:48:42 GMT", "version": "v2" } ]
2021-12-16
[ [ "Fatehi", "Farzad", "" ], [ "Bingham", "Richard J.", "" ], [ "Dechant", "Pierre-Philippe", "" ], [ "Stockley", "Peter G.", "" ], [ "Twarock", "Reidun", "" ] ]
Defective interfering particles arise spontaneously during a viral infection as mutants lacking essential parts of the viral genome. Their ability to replicate in the presence of the wild-type (WT) virus (at the expense of viable viral particles) is mimicked and exploited by therapeutic interfering particles. We propose a strategy for the design of therapeutic interfering RNAs (tiRNAs) against positive-sense single-stranded RNA viruses that assemble via packaging signal-mediated assembly. These tiRNAs contain both an optimised version of the virus assembly manual that is encoded by multiple dispersed RNA packaging signals and a replication signal for viral polymerase, but lack any protein coding information. We use an intracellular model for hepatitis C viral (HCV) infection that captures key aspects of the competition dynamics between tiRNAs and viral genomes for virally produced capsid protein and polymerase. We show that only a small increase in the assembly and replication efficiency of the tiRNAs compared with WT virus is required in order to achieve a treatment efficacy greater than 99%. This demonstrates that the proposed tiRNA design could be a promising treatment option for RNA viral infections.
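The headline claim, that a small per-replication advantage suffices for near-complete takeover, reflects a generic property of replicator dynamics. The sketch below uses a plain two-type discrete replicator iteration with hypothetical numbers; it is not the authors' intracellular HCV model, which tracks capsid protein and polymerase explicitly.

```python
def replicator_share(advantage, rounds, x0=0.5):
    """Fraction of the population held by a replicator whose per-round
    growth factor is (1 + advantage) relative to the wild type."""
    x = x0
    for _ in range(rounds):
        fit_mut, fit_wt = 1.0 + advantage, 1.0
        mean_fit = x * fit_mut + (1 - x) * fit_wt
        x = x * fit_mut / mean_fit      # discrete replicator equation
    return x

# A hypothetical 5% per-round edge compounded over 200 replication rounds
share = replicator_share(0.05, 200)
```

The odds ratio grows by the factor (1 + advantage) each round, so even a few-percent edge compounds to near-fixation over many replication cycles; with no advantage the share stays at its initial value.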
0904.1815
Josh Mitteldorf PhD
Josh Mitteldorf
Female Fertility and Longevity
null
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Does bearing children shorten a woman's life expectancy? Several demographic studies, historic and current, have found no such effect. But the Caerphilly cohort study is by far the most prominent and most frequently cited, and it answers in the affirmative. Why has this study found an effect that others fail to see? Their analysis is based on Poisson regression, a statistical technique that is accurate only if the underlying data are Poisson distributed. But the distribution of the number of children born to women in the Caerphilly database departs strongly from Poisson at the high end. This makes the result overly sensitive to a handful of women with 15 children or more who lived before 1700. When these 5 women are removed from a database of more than 2,900, the Poisson regression no longer shows a significant result. Bi-linear regression relating life span to fertility and date of birth results in a positive coefficient for fertility.
[ { "created": "Sat, 11 Apr 2009 17:16:37 GMT", "version": "v1" } ]
2009-04-14
[ [ "Mitteldorf", "Josh", "" ] ]
Does bearing children shorten a woman's life expectancy? Several demographic studies, historic and current, have found no such effect. But the Caerphilly cohort study is by far the most prominent and most frequently cited, and it answers in the affirmative. Why has this study found an effect that others fail to see? Their analysis is based on Poisson regression, a statistical technique that is accurate only if the underlying data are Poisson distributed. But the distribution of the number of children born to women in the Caerphilly database departs strongly from Poisson at the high end. This makes the result overly sensitive to a handful of women with 15 children or more who lived before 1700. When these 5 women are removed from a database of more than 2,900, the Poisson regression no longer shows a significant result. Bi-linear regression relating life span to fertility and date of birth results in a positive coefficient for fertility.
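The outsized influence of a few very high counts is visible in the Poisson log-likelihood itself: under a modest mean, a count of 15 contributes an order of magnitude more (negative) log-likelihood than a typical count, so a handful of such points can drive the fit. The numbers below are hypothetical, chosen only to illustrate the sensitivity described above.

```python
import math

def poisson_loglik(k, lam):
    """Log-probability of observing count k under Poisson(lam)."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

# With a typical completed family size near 2.5 children, one woman
# with 15 children weighs on the likelihood like many typical women.
ll_typical = poisson_loglik(2, 2.5)
ll_extreme = poisson_loglik(15, 2.5)
ratio = ll_extreme / ll_typical     # both negative, so ratio > 1
```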
2004.05639
Efr\'en M. Benavides
Efren M. Benavides
Robust predictive model for Carriers, Infections and Recoveries (CIR): first update for CoVid-19 in Spain
9 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article reports a first update on the assessment of the model previously presented in arXiv:2003.13890v1. Newly available data have been used to feed the model, and the comparison with real data still shows good agreement. The main novelty of the model is that it keeps track of the date of infection of a single individual and uses stochastic distributions to aggregate individuals who share the same date of infection. In addition, it uses two types of infections, mild and serious, with a different recovery time. These features are implemented in a set of differential equations which determine the number of Carriers, Infections, Recoveries, Hospitalized and Deaths.
[ { "created": "Sun, 12 Apr 2020 15:55:43 GMT", "version": "v1" } ]
2020-04-14
[ [ "Benavides", "Efren M.", "" ] ]
This article reports a first update on the assessment of the model previously presented in arXiv:2003.13890v1. Newly available data have been used to feed the model, and the comparison with real data still shows good agreement. The main novelty of the model is that it keeps track of the date of infection of a single individual and uses stochastic distributions to aggregate individuals who share the same date of infection. In addition, it uses two types of infections, mild and serious, with a different recovery time. These features are implemented in a set of differential equations which determine the number of Carriers, Infections, Recoveries, Hospitalized and Deaths.
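As a generic illustration of the kind of compartmental system described here, the sketch below integrates a minimal carrier/infected/recovered model with explicit Euler steps. The rate constants, the single infection class, and the equations are assumptions for the sketch; the paper's model additionally stamps individuals by infection date and distinguishes mild from serious infections.

```python
def simulate_cir(days, dt=0.1, beta=0.3, sigma=0.2, gamma=0.1, c0=0.01):
    """Explicit-Euler integration of a toy system with a susceptible
    pool S and fractions of the population in each compartment:
        S' = -beta*S*C            (susceptibles become carriers)
        C' =  beta*S*C - sigma*C  (carriers develop infection)
        I' =  sigma*C - gamma*I   (infected recover)
        R' =  gamma*I
    """
    s, c, i, r = 1.0 - c0, c0, 0.0, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * c
        dc = beta * s * c - sigma * c
        di = sigma * c - gamma * i
        dr = gamma * i
        s, c, i, r = s + dt * ds, c + dt * dc, i + dt * di, r + dt * dr
    return s, c, i, r
```

Because the four derivatives sum to zero, the total population is conserved at every Euler step, which is a quick sanity check on any implementation of such a system.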
1510.08813
Daniel Okamoto
Daniel K. Okamoto
Competition among eggs shifts to cooperation along a sperm supply gradient in an external fertilizer
In Press in The American Naturalist. 22 pages, 4 figures, 3 tables, 4 appendices
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Competition among gametes for fertilization imposes strong selection. For external fertilizers, this selective pressure extends to eggs for which spawning conditions can range from sperm limitation (competition among eggs) to sexual conflict (overabundance of competing sperm toxic to eggs). Yet existing fertilization models ignore dynamics that can alter the functional nature of gamete interactions. These factors include attraction of sperm to eggs, egg crowding effects or other nonlinearities in per capita rates of sperm-egg interaction. Such processes potentially allow egg concentrations to drastically affect viable fertilization probabilities. I experimentally tested whether such egg effects occur using the urchin $\textit{Strongylocentrotus purpuratus}$ and parameterized a newly derived model of fertilization dynamics and existing models modified to include such interactions. The experiments revealed that at low sperm concentrations, eggs compete for sperm while at high sperm concentrations eggs cooperatively reduce abnormal fertilization (a proxy for polyspermy). I show that these observations are consistent with declines in the per capita rate at which sperm and eggs interact as eggs increase in density. The results suggest a fitness trade-off of egg release during spawning: as sperm range from scarce to superabundant, interactions among eggs transition from highly competitive to cooperative in terms of viable fertilization probabilities.
[ { "created": "Thu, 29 Oct 2015 18:32:52 GMT", "version": "v1" }, { "created": "Wed, 25 Nov 2015 16:57:49 GMT", "version": "v2" }, { "created": "Fri, 11 Dec 2015 14:59:20 GMT", "version": "v3" } ]
2015-12-14
[ [ "Okamoto", "Daniel K.", "" ] ]
Competition among gametes for fertilization imposes strong selection. For external fertilizers, this selective pressure extends to eggs for which spawning conditions can range from sperm limitation (competition among eggs) to sexual conflict (overabundance of competing sperm toxic to eggs). Yet existing fertilization models ignore dynamics that can alter the functional nature of gamete interactions. These factors include attraction of sperm to eggs, egg crowding effects or other nonlinearities in per capita rates of sperm-egg interaction. Such processes potentially allow egg concentrations to drastically affect viable fertilization probabilities. I experimentally tested whether such egg effects occur using the urchin $\textit{Strongylocentrotus purpuratus}$ and parameterized a newly derived model of fertilization dynamics and existing models modified to include such interactions. The experiments revealed that at low sperm concentrations, eggs compete for sperm while at high sperm concentrations eggs cooperatively reduce abnormal fertilization (a proxy for polyspermy). I show that these observations are consistent with declines in the per capita rate at which sperm and eggs interact as eggs increase in density. The results suggest a fitness trade-off of egg release during spawning: as sperm range from scarce to superabundant, interactions among eggs transition from highly competitive to cooperative in terms of viable fertilization probabilities.
1502.07829
Saloni Agrawal
Asif Javed, Saloni Agrawal, Pauline C. Ng
Phen-Gen: combining phenotype and genotype to analyze rare disorders
null
Nat Methods. 2014 Sep;11(9):935-7
10.1038/nmeth.3046
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Phen-Gen, a method which combines patient disease symptoms and sequencing data with prior domain knowledge to identify the causative gene(s) for rare disorders.
[ { "created": "Fri, 27 Feb 2015 08:54:49 GMT", "version": "v1" } ]
2015-03-02
[ [ "Javed", "Asif", "" ], [ "Agrawal", "Saloni", "" ], [ "Ng", "Pauline C.", "" ] ]
We introduce Phen-Gen, a method which combines patient disease symptoms and sequencing data with prior domain knowledge to identify the causative gene(s) for rare disorders.
2002.07064
Michele Gentili
Michele Gentili, Leonardo Martini, Manuela Petti, Lorenzo Farina and Luca Becchetti
Biological Random Walks: integrating heterogeneous data in disease gene prioritization
null
2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 2019, 1-8
10.1109/CIBCB.2019.8791472
null
q-bio.MN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a unified framework to leverage biological information in network propagation-based gene prioritization algorithms. Preliminary results on breast cancer data show significant improvements over state-of-the-art baselines, such as the prioritization of genes that are not identified as potential candidates by interactome-based algorithms, but that appear to be involved in, or potentially related to, breast cancer, according to a functional analysis based on recent literature.
[ { "created": "Fri, 14 Feb 2020 17:46:35 GMT", "version": "v1" } ]
2020-02-18
[ [ "Gentili", "Michele", "" ], [ "Martini", "Leonardo", "" ], [ "Petti", "Manuela", "" ], [ "Farina", "Lorenzo", "" ], [ "Becchetti", "Luca", "" ] ]
This work proposes a unified framework to leverage biological information in network propagation-based gene prioritization algorithms. Preliminary results on breast cancer data show significant improvements over state-of-the-art baselines, such as the prioritization of genes that are not identified as potential candidates by interactome-based algorithms, but that appear to be involved in, or potentially related to, breast cancer, according to a functional analysis based on recent literature.
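Network propagation in gene prioritization is typically implemented as a random walk with restart from a set of seed genes. The sketch below is a minimal power-iteration version on a hypothetical toy graph; the authors' Biological Random Walks method additionally integrates heterogeneous biological data into the walk, which this sketch does not attempt.

```python
def rwr(adj, seeds, restart=0.3, tol=1e-10):
    """Random walk with restart on an undirected graph {node: [neighbours]}.
    Returns the stationary visiting probabilities (they sum to 1)."""
    nodes = list(adj)
    seed_mass = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    p = dict(seed_mass)
    while True:
        # restart mass returns to the seeds; the rest follows edges
        nxt = {v: restart * seed_mass[v] for v in nodes}
        for v in nodes:
            share = (1.0 - restart) * p[v] / len(adj[v])
            for u in adj[v]:
                nxt[u] += share
        if max(abs(nxt[v] - p[v]) for v in nodes) < tol:
            return nxt
        p = nxt

# Hypothetical toy interactome: a path a - b - c - d, seeded at "a"
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
scores = rwr(graph, seeds={"a"})
```

Nodes closer to the seed set accumulate higher scores, which is the ranking signal such prioritizers exploit.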
2407.03977
Lars Lammers
Lars Lammers, Tom M. W. Nye, Stephan F. Huckemann
Statistics for Phylogenetic Trees in the Presence of Stickiness
37 pages, 16 figures
null
null
null
q-bio.PE math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Samples of phylogenetic trees arise in a variety of evolutionary and biomedical applications, and the Fr\'echet mean in Billera-Holmes-Vogtmann tree space is a summary tree shown to have advantages over other mean or consensus trees. However, use of the Fr\'echet mean raises computational and statistical issues which we explore in this paper. The Fr\'echet sample mean is known often to contain fewer internal edges than the trees in the sample, and in this circumstance calculating the mean by iterative schemes can be problematic due to slow convergence. We present new methods for identifying edges which must lie in the Fr\'echet sample mean and apply these to a data set of gene trees relating organisms from the apicomplexa which cause a variety of parasitic infections. When a sample of trees contains a significant level of heterogeneity in the branching patterns, or topologies, displayed by the trees then the Fr\'echet mean is often a star tree, lacking any internal edges. In this situation, and beyond it, the population Fr\'echet mean is affected by a non-Euclidean phenomenon called stickiness, which impacts the asymptotics, and we examine two data sets for which the mean tree is a star tree. The first consists of trees representing the physical shape of artery structures in a sample of medical images of human brains in which the branching patterns are very diverse. The second consists of gene trees from a population of baboons in which there is evidence of substantial hybridization. We develop hypothesis tests which work in the presence of stickiness. The first is a test for the presence of a given edge in the Fr\'echet population mean; the second is a two-sample test for differences in two distributions which share the same sticky population mean.
[ { "created": "Thu, 4 Jul 2024 14:50:42 GMT", "version": "v1" } ]
2024-07-08
[ [ "Lammers", "Lars", "" ], [ "Nye", "Tom M. W.", "" ], [ "Huckemann", "Stephan F.", "" ] ]
Samples of phylogenetic trees arise in a variety of evolutionary and biomedical applications, and the Fr\'echet mean in Billera-Holmes-Vogtmann tree space is a summary tree shown to have advantages over other mean or consensus trees. However, use of the Fr\'echet mean raises computational and statistical issues which we explore in this paper. The Fr\'echet sample mean is known often to contain fewer internal edges than the trees in the sample, and in this circumstance calculating the mean by iterative schemes can be problematic due to slow convergence. We present new methods for identifying edges which must lie in the Fr\'echet sample mean and apply these to a data set of gene trees relating organisms from the apicomplexa which cause a variety of parasitic infections. When a sample of trees contains a significant level of heterogeneity in the branching patterns, or topologies, displayed by the trees then the Fr\'echet mean is often a star tree, lacking any internal edges. In this situation, and beyond it, the population Fr\'echet mean is affected by a non-Euclidean phenomenon called stickiness, which impacts the asymptotics, and we examine two data sets for which the mean tree is a star tree. The first consists of trees representing the physical shape of artery structures in a sample of medical images of human brains in which the branching patterns are very diverse. The second consists of gene trees from a population of baboons in which there is evidence of substantial hybridization. We develop hypothesis tests which work in the presence of stickiness. The first is a test for the presence of a given edge in the Fr\'echet population mean; the second is a two-sample test for differences in two distributions which share the same sticky population mean.
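Stickiness is easiest to see in the simplest tree-like space, the 3-spider (three half-lines glued at the origin), where the Fr\'echet mean can be found per leg in closed form. The sketch below, with hypothetical sample points, shows a heterogeneous sample whose mean sits at the origin and stays there under a small perturbation; BHV tree space behaves analogously but in higher dimension, with the star tree playing the role of the origin.

```python
def frechet_mean_spider(points):
    """Frechet mean on the 3-spider: three half-lines (legs 0, 1, 2)
    glued at the origin. A point is (leg, radius) with radius >= 0;
    distance is |r1 - r2| on the same leg and r1 + r2 across legs.
    Returns (leg, radius); the origin is reported as (None, 0.0)."""
    n = len(points)
    best = (None, 0.0, sum(r * r for _, r in points))  # value at origin
    for leg in range(3):
        # on this leg the Frechet function is quadratic in t >= 0,
        # minimised at t = (sum of same-leg radii - sum of others) / n
        pull = sum(r if l == leg else -r for l, r in points)
        t = max(pull / n, 0.0)
        val = sum((t - r) ** 2 if l == leg else (t + r) ** 2
                  for l, r in points)
        if val < best[2] - 1e-12:
            best = (leg, t, val)
    return best[:2]
```

With one point on each leg the mean is the origin, and nudging one point outward does not move it: that is the sticky behaviour the hypothesis tests above are designed for. A sample concentrated on one leg gives an ordinary interior mean.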
2311.10563
Andreas Grigorjew
Andreas Grigorjew, Fernando H. C. Dias, Andrea Cracco, Romeo Rizzi, Alexandru I. Tomescu
Accelerating ILP solvers for Minimum Flow Decompositions through search space and dimensionality reductions
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Given a flow network, the Minimum Flow Decomposition (MFD) problem is to find the smallest possible set of weighted paths whose superposition equals the flow. It is a classical, strongly NP-hard problem that has proven useful in RNA transcript assembly and in applications outside of Bioinformatics. We improve an existing ILP (Integer Linear Programming) model by Dias et al. [RECOMB 2022] for DAGs by decreasing the solver's search space using solution safety and several other optimizations. This results in a significant speedup compared to the original ILP, of up to 55-90x on average on the hardest instances. Moreover, we show that our optimizations apply also to MFD problem variants, resulting in similar speedups, going up to 123x on the hardest instances. We also developed an ILP model of reduced dimensionality for an MFD variant in which the solution path weights are restricted to a given set. This model can find an optimal MFD solution for most instances, and overall, its accuracy significantly outperforms that of previous greedy algorithms while being up to an order of magnitude faster than our optimized ILP.
[ { "created": "Fri, 17 Nov 2023 14:55:56 GMT", "version": "v1" } ]
2023-11-20
[ [ "Grigorjew", "Andreas", "" ], [ "Dias", "Fernando H. C.", "" ], [ "Cracco", "Andrea", "" ], [ "Rizzi", "Romeo", "" ], [ "Tomescu", "Alexandru I.", "" ] ]
Given a flow network, the Minimum Flow Decomposition (MFD) problem is to find the smallest possible set of weighted paths whose superposition equals the flow. It is a classical, strongly NP-hard problem that has proven useful in RNA transcript assembly and in applications outside of Bioinformatics. We improve an existing ILP (Integer Linear Programming) model by Dias et al. [RECOMB 2022] for DAGs by decreasing the solver's search space using solution safety and several other optimizations. This results in a significant speedup compared to the original ILP, of up to 55-90x on average on the hardest instances. Moreover, we show that our optimizations apply also to MFD problem variants, resulting in similar speedups, going up to 123x on the hardest instances. We also developed an ILP model of reduced dimensionality for an MFD variant in which the solution path weights are restricted to a given set. This model can find an optimal MFD solution for most instances, and overall, its accuracy significantly outperforms that of previous greedy algorithms while being up to an order of magnitude faster than our optimized ILP.
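For context on what the ILP solves exactly, here is the classic greedy-width heuristic that MFD work commonly compares against: repeatedly peel off a maximum-bottleneck source-to-sink path. The toy flow network is hypothetical; this is the baseline family of greedy algorithms mentioned above, not the authors' ILP.

```python
def greedy_flow_decomposition(flow, source, sink):
    """Decompose a DAG flow, given as {u: {v: flow value}}, into weighted
    source-to-sink paths by repeatedly removing a maximum-bottleneck
    (widest) path -- the classic greedy-width heuristic."""
    paths = []
    while any(flow.get(source, {}).values()):
        # widest-path relaxation: width[v] = best bottleneck from source
        width, prev, todo = {source: float("inf")}, {}, [source]
        while todo:
            u = todo.pop()
            for v, f in flow.get(u, {}).items():
                w = min(width[u], f)
                if w > width.get(v, 0):
                    width[v], prev[v] = w, u
                    todo.append(v)
        # recover the path, then subtract its bottleneck from the flow
        w, path, v = width[sink], [sink], sink
        while v != source:
            v = prev[v]
            path.append(v)
        path.reverse()
        for a, b in zip(path, path[1:]):
            flow[a][b] -= w
        paths.append((w, path))
    return paths

# Hypothetical toy flow: 3 units via a, 2 units via b
flow = {"s": {"a": 3, "b": 2}, "a": {"t": 3}, "b": {"t": 2}}
paths = greedy_flow_decomposition(flow, "s", "t")
```

Each peeled path zeroes at least one edge, so the loop terminates, and the superposition of the returned weighted paths equals the input flow; minimality, however, is exactly what the heuristic does not guarantee and the ILP does.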
1403.6358
Ariful Azad
Ariful Azad, Bartek Rajwa, Alex Pothen
Immunophenotypes of Acute Myeloid Leukemia From Flow Cytometry Data Using Templates
9 pages, 5 figures
null
null
null
q-bio.QM cs.CE
http://creativecommons.org/licenses/publicdomain/
Motivation: We investigate whether a template-based classification pipeline could be used to identify immunophenotypes in (and thereby classify) a heterogeneous disease with many subtypes. The disease we consider here is Acute Myeloid Leukemia (AML), which is heterogeneous at the morphologic, cytogenetic and molecular levels, with several known subtypes. The prognosis and treatment for AML depend on the subtype. Results: We apply flowMatch, an algorithmic pipeline for flow cytometry data created in earlier work, to compute templates succinctly summarizing classes of AML and healthy samples. We develop a scoring function that accounts for features of the AML data such as heterogeneity to identify immunophenotypes corresponding to various AML subtypes, including APL. All of the AML samples in the test set are classified correctly with high confidence. Availability: flowMatch is available at www.bioconductor.org/packages/devel/bioc/html/flowMatch.html; programs specific to immunophenotyping AML are at www.cs.purdue.edu/homes/aazad/software.html.
[ { "created": "Sat, 22 Mar 2014 02:23:28 GMT", "version": "v1" } ]
2014-03-26
[ [ "Azad", "Ariful", "" ], [ "Rajwa", "Bartek", "" ], [ "Pothen", "Alex", "" ] ]
Motivation: We investigate whether a template-based classification pipeline could be used to identify immunophenotypes in (and thereby classify) a heterogeneous disease with many subtypes. The disease we consider here is Acute Myeloid Leukemia (AML), which is heterogeneous at the morphologic, cytogenetic and molecular levels, with several known subtypes. The prognosis and treatment for AML depend on the subtype. Results: We apply flowMatch, an algorithmic pipeline for flow cytometry data created in earlier work, to compute templates succinctly summarizing classes of AML and healthy samples. We develop a scoring function that accounts for features of the AML data such as heterogeneity to identify immunophenotypes corresponding to various AML subtypes, including APL. All of the AML samples in the test set are classified correctly with high confidence. Availability: flowMatch is available at www.bioconductor.org/packages/devel/bioc/html/flowMatch.html; programs specific to immunophenotyping AML are at www.cs.purdue.edu/homes/aazad/software.html.
1301.5357
Steven Frank
Steven A. Frank
Natural selection. VI. Partitioning the information in fitness and characters by path analysis
null
Journal of Evolutionary Biology 26:457-471 (2013)
10.1111/jeb.12066
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Three steps aid in the analysis of selection. First, describe phenotypes by their component causes. Components include genes, maternal effects, symbionts, and any other predictors of phenotype that are of interest. Second, describe fitness by its component causes, such as an individual's phenotype, its neighbors' phenotypes, resource availability, and so on. Third, put the predictors of phenotype and fitness into an exact equation for evolutionary change, providing a complete expression of selection and other evolutionary processes. The complete expression separates the distinct causal roles of the various hypothesized components of phenotypes and fitness. Traditionally, those components are given by the covariance, variance, and regression terms of evolutionary models. I show how to interpret those statistical expressions with respect to information theory. The resulting interpretation allows one to read the fundamental equations of selection and evolution as sentences that express how various causes lead to the accumulation of information by selection and the decay of information by other evolutionary processes. The interpretation in terms of information leads to a deeper understanding of selection and heritability, and a clearer sense of how to formulate causal hypotheses about evolutionary process. Kin selection appears as a particular type of causal analysis that partitions social effects into meaningful components.
[ { "created": "Tue, 22 Jan 2013 22:31:47 GMT", "version": "v1" } ]
2013-02-14
[ [ "Frank", "Steven A.", "" ] ]
Three steps aid in the analysis of selection. First, describe phenotypes by their component causes. Components include genes, maternal effects, symbionts, and any other predictors of phenotype that are of interest. Second, describe fitness by its component causes, such as an individual's phenotype, its neighbors' phenotypes, resource availability, and so on. Third, put the predictors of phenotype and fitness into an exact equation for evolutionary change, providing a complete expression of selection and other evolutionary processes. The complete expression separates the distinct causal roles of the various hypothesized components of phenotypes and fitness. Traditionally, those components are given by the covariance, variance, and regression terms of evolutionary models. I show how to interpret those statistical expressions with respect to information theory. The resulting interpretation allows one to read the fundamental equations of selection and evolution as sentences that express how various causes lead to the accumulation of information by selection and the decay of information by other evolutionary processes. The interpretation in terms of information leads to a deeper understanding of selection and heritability, and a clearer sense of how to formulate causal hypotheses about evolutionary process. Kin selection appears as a particular type of causal analysis that partitions social effects into meaningful components.
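The covariance and regression terms referred to here are the standard Price-equation partition of character change, Delta zbar = Cov(w, z)/wbar + E(w * Delta z)/wbar, splitting change into selection and transmission. Below is a toy computation with hypothetical fitnesses and character values (an illustration of the standard partition, not code reproduced from the paper).

```python
def price_partition(w, z, z_prime):
    """Price equation on paired lists: w[i] = fitness of parent i,
    z[i] = parental character, z_prime[i] = mean offspring character.
    Returns (selection, transmission, total change in mean character)."""
    n = len(w)
    wbar = sum(w) / n
    zbar = sum(z) / n
    cov_wz = sum(wi * zi for wi, zi in zip(w, z)) / n - wbar * zbar
    selection = cov_wz / wbar                      # Cov(w, z) / wbar
    transmission = sum(wi * (zpi - zi)
                       for wi, zi, zpi in zip(w, z, z_prime)) / (n * wbar)
    return selection, transmission, selection + transmission

# Hypothetical numbers: the fitter parent carries the higher character
# value, and transmission is perfect (offspring match their parents)
sel, trans, total = price_partition([1.0, 2.0], [0.0, 1.0], [0.0, 1.0])
```

With perfect transmission the transmission term vanishes and the whole change in the mean character is attributed to selection, i.e. to the covariance between fitness and character.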
1804.10828
Hiroshi Ashikaga
Hiroshi Ashikaga, Konstantinos N. Aronis, Susumu Tao, Ryan G. James
Causal Scale Shift Associated with Phase Transition to Human Atrial Fibrillation
9 pages, 7 figures
null
null
null
q-bio.TO nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An example of phase transition in natural complex systems is the qualitative and sudden change in the heart rhythm between sinus rhythm and atrial fibrillation (AF), the most common irregular heart rhythm in humans. While the system behavior is centrally controlled by the behavior of the sinoatrial node in sinus rhythm, the macro-scale collective behavior of the heart causes the micro-scale behavior in AF. To quantitatively analyze this causation shift associated with phase transition in human heart, we evaluated the causal architecture of the human cardiac system using the time series of multi-lead intracardiac unipolar electrograms in a series of spatiotemporal scales by generating a stochastic renormalization group. We found that the phase transition between sinus rhythm and AF is associated with a significant shift of the peak causation from macroscopic to microscopic scales. Causal architecture analysis may improve our understanding of causality in phase transitions in other natural and social complex systems.
[ { "created": "Sat, 28 Apr 2018 16:27:34 GMT", "version": "v1" }, { "created": "Tue, 1 May 2018 00:44:11 GMT", "version": "v2" } ]
2018-05-02
[ [ "Ashikaga", "Hiroshi", "" ], [ "Aronis", "Konstantinos N.", "" ], [ "Tao", "Susumu", "" ], [ "James", "Ryan G.", "" ] ]
An example of phase transition in natural complex systems is the qualitative and sudden change in the heart rhythm between sinus rhythm and atrial fibrillation (AF), the most common irregular heart rhythm in humans. While the system behavior is centrally controlled by the behavior of the sinoatrial node in sinus rhythm, the macro-scale collective behavior of the heart causes the micro-scale behavior in AF. To quantitatively analyze this causation shift associated with phase transition in human heart, we evaluated the causal architecture of the human cardiac system using the time series of multi-lead intracardiac unipolar electrograms in a series of spatiotemporal scales by generating a stochastic renormalization group. We found that the phase transition between sinus rhythm and AF is associated with a significant shift of the peak causation from macroscopic to microscopic scales. Causal architecture analysis may improve our understanding of causality in phase transitions in other natural and social complex systems.
1701.08995
Leo van Iersel
Leo van Iersel, Vincent Moulton, Eveline de Swart and Taoyang Wu
Binets: fundamental building blocks for phylogenetic networks
null
null
null
null
q-bio.PE cs.DS math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are a generalization of evolutionary trees that are used by biologists to represent the evolution of organisms which have undergone reticulate evolution. Essentially, a phylogenetic network is a directed acyclic graph having a unique root in which the leaves are labelled by a given set of species. Recently, some approaches have been developed to construct phylogenetic networks from collections of networks on 2- and 3-leaved networks, which are known as binets and trinets, respectively. Here we study in more depth properties of collections of binets, one of the simplest possible types of networks into which a phylogenetic network can be decomposed. More specifically, we show that if a collection of level-1 binets is compatible with some binary network, then it is also compatible with a binary level-1 network. Our proofs are based on useful structural results concerning lowest stable ancestors in networks. In addition, we show that, although the binets do not determine the topology of the network, they do determine the number of reticulations in the network, which is one of its most important parameters. We also consider algorithmic questions concerning binets. We show that deciding whether an arbitrary set of binets is compatible with some network is at least as hard as the well-known Graph Isomorphism problem. However, if we restrict to level-1 binets, it is possible to decide in polynomial time whether there exists a binary network that displays all the binets. We also show that to find a network that displays a maximum number of the binets is NP-hard, but that there exists a simple polynomial-time 1/3-approximation algorithm for this problem. It is hoped that these results will eventually assist in the development of new methods for constructing phylogenetic networks from collections of smaller networks.
[ { "created": "Tue, 31 Jan 2017 11:18:42 GMT", "version": "v1" } ]
2017-02-01
[ [ "van Iersel", "Leo", "" ], [ "Moulton", "Vincent", "" ], [ "de Swart", "Eveline", "" ], [ "Wu", "Taoyang", "" ] ]
Phylogenetic networks are a generalization of evolutionary trees that are used by biologists to represent the evolution of organisms which have undergone reticulate evolution. Essentially, a phylogenetic network is a directed acyclic graph having a unique root in which the leaves are labelled by a given set of species. Recently, some approaches have been developed to construct phylogenetic networks from collections of networks on 2- and 3-leaved networks, which are known as binets and trinets, respectively. Here we study in more depth properties of collections of binets, one of the simplest possible types of networks into which a phylogenetic network can be decomposed. More specifically, we show that if a collection of level-1 binets is compatible with some binary network, then it is also compatible with a binary level-1 network. Our proofs are based on useful structural results concerning lowest stable ancestors in networks. In addition, we show that, although the binets do not determine the topology of the network, they do determine the number of reticulations in the network, which is one of its most important parameters. We also consider algorithmic questions concerning binets. We show that deciding whether an arbitrary set of binets is compatible with some network is at least as hard as the well-known Graph Isomorphism problem. However, if we restrict to level-1 binets, it is possible to decide in polynomial time whether there exists a binary network that displays all the binets. We also show that to find a network that displays a maximum number of the binets is NP-hard, but that there exists a simple polynomial-time 1/3-approximation algorithm for this problem. It is hoped that these results will eventually assist in the development of new methods for constructing phylogenetic networks from collections of smaller networks.
1402.5289
Paolo Moretti
Pablo Villegas, Paolo Moretti, Miguel A. Mu\~noz
Frustrated hierarchical synchronization and emergent complexity in the human connectome network
4 Figures
Scientific reports 4 (2014) 5990
10.1038/srep05990
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spontaneous emergence of coherent behavior through synchronization plays a key role in neural function, and its anomalies often lie at the basis of pathologies. Here we employ a parsimonious (mesoscopic) approach to study analytically and computationally the synchronization (Kuramoto) dynamics on the actual human-brain connectome network. We elucidate the existence of a so-far-uncovered intermediate phase, placed between the standard synchronous and asynchronous phases, i.e. between order and disorder. This novel phase stems from the hierarchical modular organization of the connectome. Where one would expect a hierarchical synchronization process, we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales, gives rise to frustrated synchronization, metastability, and chimera-like states, resulting in a very rich and complex phenomenology. We uncover the origin of the dynamic freezing behind these features by using spectral graph theory and discuss how the emerging complex synchronization patterns relate to the need for the brain to access --in a robust though flexible way-- a large variety of functional attractors and dynamical repertoires without ad hoc fine-tuning to a critical point.
[ { "created": "Fri, 21 Feb 2014 13:17:15 GMT", "version": "v1" }, { "created": "Thu, 3 Jul 2014 15:07:31 GMT", "version": "v2" } ]
2014-09-30
[ [ "Villegas", "Pablo", "" ], [ "Moretti", "Paolo", "" ], [ "Muñoz", "Miguel A.", "" ] ]
The spontaneous emergence of coherent behavior through synchronization plays a key role in neural function, and its anomalies often lie at the basis of pathologies. Here we employ a parsimonious (mesoscopic) approach to study analytically and computationally the synchronization (Kuramoto) dynamics on the actual human-brain connectome network. We elucidate the existence of a so-far-uncovered intermediate phase, placed between the standard synchronous and asynchronous phases, i.e. between order and disorder. This novel phase stems from the hierarchical modular organization of the connectome. Where one would expect a hierarchical synchronization process, we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales, gives rise to frustrated synchronization, metastability, and chimera-like states, resulting in a very rich and complex phenomenology. We uncover the origin of the dynamic freezing behind these features by using spectral graph theory and discuss how the emerging complex synchronization patterns relate to the need for the brain to access --in a robust though flexible way-- a large variety of functional attractors and dynamical repertoires without ad hoc fine-tuning to a critical point.
1909.10344
Laura Wadkin MMath
L E Wadkin, S Orozco-Fuentes, I Neganova, M Lako, A Shukurov and N G Parker
The recent advances in the mathematical modelling of human pluripotent stem cells
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human pluripotent stem cells hold great promise for developments in regenerative medicine and drug design. The mathematical modelling of stem cells and their properties is necessary to understand and quantify key behaviours and develop non-invasive prognostic modelling tools to assist in the optimisation of laboratory experiments. Here, the recent advances in the mathematical modelling of hPSCs are discussed, including cell kinematics, cell proliferation and colony formation, and pluripotency and differentiation.
[ { "created": "Mon, 23 Sep 2019 12:58:29 GMT", "version": "v1" } ]
2019-09-24
[ [ "Wadkin", "L E", "" ], [ "Orozco-Fuentes", "S", "" ], [ "Neganova", "I", "" ], [ "Lako", "M", "" ], [ "Shukurov", "A", "" ], [ "Parker", "N G", "" ] ]
Human pluripotent stem cells hold great promise for developments in regenerative medicine and drug design. The mathematical modelling of stem cells and their properties is necessary to understand and quantify key behaviours and develop non-invasive prognostic modelling tools to assist in the optimisation of laboratory experiments. Here, the recent advances in the mathematical modelling of hPSCs are discussed, including cell kinematics, cell proliferation and colony formation, and pluripotency and differentiation.
0906.3912
Vladimir Privman
Vladimir Privman, Valber Pedrosa, Dmitriy Melnikov, Marcos Pita, Aleksandr Simonian, Evgeny Katz
Enzymatic AND-Gate Based on Electrode-Immobilized Glucose-6-Phosphate Dehydrogenase: Towards Digital Biosensors and Biochemical Logic Systems with Low Noise
null
Biosens. Bioelectron. 25, 695-701 (2009)
10.1016/j.bios.2009.08.014
null
q-bio.MN cond-mat.soft q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrode-immobilized glucose-6-phosphate dehydrogenase is used to catalyze an enzymatic reaction which carries out the AND logic gate. This logic function is considered here in the context of biocatalytic processes utilized for the biocomputing applications for "digital" (threshold) sensing/actuation. We outline the response functions desirable for such applications and report the first experimental realization of a sigmoid-shape response in one of the inputs. A kinetic model is developed and utilized to evaluate the extent to which the experimentally realized gate is close to optimal.
[ { "created": "Mon, 22 Jun 2009 03:26:34 GMT", "version": "v1" } ]
2010-10-12
[ [ "Privman", "Vladimir", "" ], [ "Pedrosa", "Valber", "" ], [ "Melnikov", "Dmitriy", "" ], [ "Pita", "Marcos", "" ], [ "Simonian", "Aleksandr", "" ], [ "Katz", "Evgeny", "" ] ]
Electrode-immobilized glucose-6-phosphate dehydrogenase is used to catalyze an enzymatic reaction which carries out the AND logic gate. This logic function is considered here in the context of biocatalytic processes utilized for the biocomputing applications for "digital" (threshold) sensing/actuation. We outline the response functions desirable for such applications and report the first experimental realization of a sigmoid-shape response in one of the inputs. A kinetic model is developed and utilized to evaluate the extent to which the experimentally realized gate is close to optimal.
1706.01188
Diederik Aerts
Diederik Aerts, Jonito Aerts Argu\"elles, Lester Beltran, Suzette Geriente, Massimiliano Sassoli de Bianchi, Sandro Sozzo and Tomas Veloz
Spin and Wind Directions II: A Bell State Quantum Model
This is the second half of a two-part article, the first half being entitled 'Spin and Wind Directions I: Identifying Entanglement in Nature and Cognition' and to be found at arXiv:1508.00434
Foundations of Science, 23, pp. 337-365 (2018)
10.1007/s10699-017-9530-2
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the first half of this two-part article, we analyzed a cognitive psychology experiment where participants were asked to select pairs of directions that they considered to be the best example of 'Two Different Wind Directions', and showed that the data violate the CHSH version of Bell's inequality, with same magnitude as in typical Bell-test experiments in physics. In this second part, we complete our analysis by presenting a symmetrized version of the experiment, still violating the CHSH inequality but now also obeying the marginal law, for which we provide a full quantum modeling in Hilbert space, using a singlet state and suitably chosen product measurements. We also address some of the criticisms that have been recently directed at experiments of this kind, according to which they would not highlight the presence of genuine forms of entanglement. We explain that these criticisms are based on a view of entanglement that is too restrictive, thus unable to capture all possible ways physical and conceptual entities can connect and form systems behaving as a whole. We also provide an example of a mechanical model showing that the violations of the marginal law and Bell inequalities are generally to be associated with different mechanisms.
[ { "created": "Mon, 5 Jun 2017 04:24:31 GMT", "version": "v1" } ]
2019-02-12
[ [ "Aerts", "Diederik", "" ], [ "Arguëlles", "Jonito Aerts", "" ], [ "Beltran", "Lester", "" ], [ "Geriente", "Suzette", "" ], [ "de Bianchi", "Massimiliano Sassoli", "" ], [ "Sozzo", "Sandro", "" ], [ "Veloz", "T...
In the first half of this two-part article, we analyzed a cognitive psychology experiment where participants were asked to select pairs of directions that they considered to be the best example of 'Two Different Wind Directions', and showed that the data violate the CHSH version of Bell's inequality, with same magnitude as in typical Bell-test experiments in physics. In this second part, we complete our analysis by presenting a symmetrized version of the experiment, still violating the CHSH inequality but now also obeying the marginal law, for which we provide a full quantum modeling in Hilbert space, using a singlet state and suitably chosen product measurements. We also address some of the criticisms that have been recently directed at experiments of this kind, according to which they would not highlight the presence of genuine forms of entanglement. We explain that these criticisms are based on a view of entanglement that is too restrictive, thus unable to capture all possible ways physical and conceptual entities can connect and form systems behaving as a whole. We also provide an example of a mechanical model showing that the violations of the marginal law and Bell inequalities are generally to be associated with different mechanisms.
1410.8497
Jaan Aru
Kristjan Korjus, Andero Uusberg, Helen Uibo, Nele Kuldkepp, Kairi Kreegipuu, J\"uri Allik, Raul Vicente, Jaan Aru
Personality cannot be predicted from the power of resting state EEG
14 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present study we asked whether it is possible to decode personality traits from resting state EEG data. EEG was recorded from a large sample of subjects (N = 309) who had answered questionnaires measuring personality trait scores of the 5 dimensions as well as the 10 subordinate aspects of the Big Five. Machine learning algorithms were used to build a classifier to predict each personality trait from power spectra of the resting state EEG data. The results indicate that the five dimensions as well as their subordinate aspects could not be predicted from the resting state EEG data. Finally, to demonstrate that this result is not due to systematic algorithmic or implementation mistakes the same methods were used to successfully classify whether the subject had eyes open or eyes closed and whether the subject was male or female. These results indicate that the extraction of personality traits from the power spectra of resting state EEG is extremely noisy, if possible at all.
[ { "created": "Thu, 30 Oct 2014 18:59:13 GMT", "version": "v1" } ]
2014-10-31
[ [ "Korjus", "Kristjan", "" ], [ "Uusberg", "Andero", "" ], [ "Uibo", "Helen", "" ], [ "Kuldkepp", "Nele", "" ], [ "Kreegipuu", "Kairi", "" ], [ "Allik", "Jüri", "" ], [ "Vicente", "Raul", "" ], [ "Aru...
In the present study we asked whether it is possible to decode personality traits from resting state EEG data. EEG was recorded from a large sample of subjects (N = 309) who had answered questionnaires measuring personality trait scores of the 5 dimensions as well as the 10 subordinate aspects of the Big Five. Machine learning algorithms were used to build a classifier to predict each personality trait from power spectra of the resting state EEG data. The results indicate that the five dimensions as well as their subordinate aspects could not be predicted from the resting state EEG data. Finally, to demonstrate that this result is not due to systematic algorithmic or implementation mistakes the same methods were used to successfully classify whether the subject had eyes open or eyes closed and whether the subject was male or female. These results indicate that the extraction of personality traits from the power spectra of resting state EEG is extremely noisy, if possible at all.
2105.08342
Anne Modat
J\'er\^ome Prunier (SETE), Keoni Saint-P\'e (SETE), Simon Blanchet (SETE, EDB), G\'eraldine Loot (SETE, EDB), Olivier Rey (IHPE)
Molecular approaches reveal weak sibship aggregation and a high dispersal propensity in a non-native fish parasite
null
Ecology and Evolution, Wiley Open Access, 2021
10.1002/ece3.7415
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inferring parameters related to the aggregation pattern of parasites and to their dispersal propensity is important for predicting their ecological consequences and evolutionary potential. Nonetheless, it is notoriously difficult to infer these parameters from wildlife parasites given the difficulty in tracking these organisms. Molecular-based inferences constitute a promising approach that has yet rarely been applied in the wild. Here, we combined several population genetic analyses including sibship reconstruction to document the genetic structure, patterns of sibship aggregation and the dispersal dynamics of a non-native parasite of fish, the freshwater copepod ectoparasite Tracheliastes polycolpus. We collected parasites according to a hierarchical sampling design, with the sampling of all parasites from all host individuals captured in eight sites spread along an upstream-downstream river gradient. Individual multilocus genotypes were obtained from 14 microsatellite markers, and used to assign parasites to full-sib families and to investigate the genetic structure of T. polycolpus among both hosts and sampling sites. The distribution of full-sibs obtained among the sampling sites was used to estimate individual dispersal distances within families. Our results showed that T. polycolpus sibs tend to be aggregated within sites but not within host individuals. We detected important upstream-to-downstream dispersal events of T. polycolpus between sites (modal distance: 25.4 km; 95% CI [22.9, 27.7]), becoming scarcer as the geographic distance from their family core location increases. Such a dispersal pattern likely contributes to the strong isolation-by-distance observed at the river scale. We also detected some downstream-to-upstream dispersal events (modal distance: 2.6 km; 95% CI [2.2-23.3]) that likely result from movements of infected hosts. Within each site, the dispersal of free-living infective larvae among hosts likely contributes to increasing genetic diversity on hosts, possibly fostering the evolutionary potential of T. polycolpus.
[ { "created": "Tue, 18 May 2021 08:10:40 GMT", "version": "v1" } ]
2021-05-19
[ [ "Prunier", "Jérôme", "", "SETE" ], [ "Saint-Pé", "Keoni", "", "SETE" ], [ "Blanchet", "Simon", "", "SETE, EDB" ], [ "Loot", "Géraldine", "", "SETE, EDB" ], [ "Rey", "Olivier", "", "IHPE" ] ]
Inferring parameters related to the aggregation pattern of parasites and to their dispersal propensity is important for predicting their ecological consequences and evolutionary potential. Nonetheless, it is notoriously difficult to infer these parameters from wildlife parasites given the difficulty in tracking these organisms. Molecular-based inferences constitute a promising approach that has yet rarely been applied in the wild. Here, we combined several population genetic analyses including sibship reconstruction to document the genetic structure, patterns of sibship aggregation and the dispersal dynamics of a non-native parasite of fish, the freshwater copepod ectoparasite Tracheliastes polycolpus. We collected parasites according to a hierarchical sampling design, with the sampling of all parasites from all host individuals captured in eight sites spread along an upstream-downstream river gradient. Individual multilocus genotypes were obtained from 14 microsatellite markers, and used to assign parasites to full-sib families and to investigate the genetic structure of T. polycolpus among both hosts and sampling sites. The distribution of full-sibs obtained among the sampling sites was used to estimate individual dispersal distances within families. Our results showed that T. polycolpus sibs tend to be aggregated within sites but not within host individuals. We detected important upstream-to-downstream dispersal events of T. polycolpus between sites (modal distance: 25.4 km; 95% CI [22.9, 27.7]), becoming scarcer as the geographic distance from their family core location increases. Such a dispersal pattern likely contributes to the strong isolation-by-distance observed at the river scale. We also detected some downstream-to-upstream dispersal events (modal distance: 2.6 km; 95% CI [2.2-23.3]) that likely result from movements of infected hosts. Within each site, the dispersal of free-living infective larvae among hosts likely contributes to increasing genetic diversity on hosts, possibly fostering the evolutionary potential of T. polycolpus.
2112.10362
Arti Dua
Subham Pal, Manmath Panigrahy and Arti Dua
Non-classical transient regime and violation of detailed balance in mesoscopic Michaelis-Menten kinetics
10 pages, 6 figures
null
null
null
q-bio.MN physics.bio-ph physics.chem-ph q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Classical (deterministic) and single-enzyme (stochastic) descriptions of the Michaelis-Menten (MM) kinetics assume fast equilibration between enzyme and complex, and identify detailed balance as a sufficient condition for the hyperbolic substrate dependence of the MM equation (MME). Stochastic MM kinetics based on the chemical master equation (CME), with no a priori assumption of fast equilibration, however, unravels an observably long non-classical transient regime at mesoscopic enzyme concentrations. The enzymatic velocity in the transient regime is non-hyperbolic for product turnovers below a critical time, but asymptotically recovers the hyperbolic MME at long times. Here, we use this description to introduce a new kinetic measure, the turnover number dependent fractional enzyme velocity. This measure quantifies the degree of non-hyperbolicity in the non-classical transient regime with respect to the hyperbolic MME. From this, we obtain a generalized rate parameter condition for detailed balance in mesoscopic MM kinetics. This condition, while subsuming the fast equilibrium approximation of the classical MM kinetics, provides a strict lower bound on the magnitude of the catalytic rate parameter. Further, from the condition of stationarity of the generating function solution of the CME, we quantify the duration of the non-classical regime. Our results show that the violation of detailed balance condition in the transient regime is inextricably linked to the non-hyperbolic substrate dependence of the enzymatic velocity. In the steady-state, when an effective equilibrium between enzyme and complex is asymptotically established, the condition of detailed balance emerges as a sufficient condition for the hyperbolic substrate dependence of the MME.
[ { "created": "Mon, 20 Dec 2021 07:00:49 GMT", "version": "v1" } ]
2021-12-21
[ [ "Pal", "Subham", "" ], [ "Panigrahy", "Manmath", "" ], [ "Dua", "Arti", "" ] ]
Classical (deterministic) and single-enzyme (stochastic) descriptions of the Michaelis-Menten (MM) kinetics assume fast equilibration between enzyme and complex, and identify detailed balance as a sufficient condition for the hyperbolic substrate dependence of the MM equation (MME). Stochastic MM kinetics based on the chemical master equation (CME), with no a priori assumption of fast equilibration, however, unravels an observably long non-classical transient regime at mesoscopic enzyme concentrations. The enzymatic velocity in the transient regime is non-hyperbolic for product turnovers below a critical time, but asymptotically recovers the hyperbolic MME at long times. Here, we use this description to introduce a new kinetic measure, the turnover number dependent fractional enzyme velocity. This measure quantifies the degree of non-hyperbolicity in the non-classical transient regime with respect to the hyperbolic MME. From this, we obtain a generalized rate parameter condition for detailed balance in mesoscopic MM kinetics. This condition, while subsuming the fast equilibrium approximation of the classical MM kinetics, provides a strict lower bound on the magnitude of the catalytic rate parameter. Further, from the condition of stationarity of the generating function solution of the CME, we quantify the duration of the non-classical regime. Our results show that the violation of detailed balance condition in the transient regime is inextricably linked to the non-hyperbolic substrate dependence of the enzymatic velocity. In the steady-state, when an effective equilibrium between enzyme and complex is asymptotically established, the condition of detailed balance emerges as a sufficient condition for the hyperbolic substrate dependence of the MME.
1205.3417
Leo van Iersel
Leo van Iersel, Steven Kelk, Nela Leki\'c and Celine Scornavacca
A practical approximation algorithm for solving massive instances of hybridization number for binary and nonbinary trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work and are publicly available. We also apply our methods to real data.
[ { "created": "Tue, 15 May 2012 15:33:13 GMT", "version": "v1" }, { "created": "Mon, 21 May 2012 07:42:02 GMT", "version": "v2" }, { "created": "Thu, 1 May 2014 12:13:04 GMT", "version": "v3" } ]
2014-05-02
[ [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ], [ "Lekić", "Nela", "" ], [ "Scornavacca", "Celine", "" ] ]
Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work and are publicly available. We also apply our methods to real data.
2402.11589
Michael Habeck
Felix Lambrecht, Andreas Kr\"opelin, Mario L\"uttich, Michael Habeck, David Haselbach, Holger Stark
CowScape: Quantitative reconstruction of the conformational landscape of biological macromolecules from cryo-EM data
15 pages + 4 figures (main text)
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Cryo-EM data processing typically focuses on the structure of the main conformational state under investigation and discards images that belong to other states. This approach can reach atomic resolution, but ignores vast amounts of valuable information about the underlying conformational ensemble and its dynamics. CowScape analyzes an entire cryo-EM dataset and thereby obtains a quantitative description of structural variability of macromolecular complexes that represents the biochemically relevant conformational space. By combining extensive image classification with principal component analysis (PCA) of the classified 3D volumes and kernel density estimation, CowScape can be used as a quantitative tool to analyze this variability. PCA projects all 3D structures along the major modes spanning a low-dimensional space that captures a large portion of structural variability. The number of particle images in a given state can be used to calculate an energy landscape based on kernel density estimation and Boltzmann inversion. By revealing allosteric interactions in macromolecular complexes, CowScape allows us to distinguish and interpret dynamic changes in macromolecular complexes during function and regulation.
[ { "created": "Sun, 18 Feb 2024 13:47:26 GMT", "version": "v1" } ]
2024-02-20
[ [ "Lambrecht", "Felix", "" ], [ "Kröpelin", "Andreas", "" ], [ "Lüttich", "Mario", "" ], [ "Habeck", "Michael", "" ], [ "Haselbach", "David", "" ], [ "Stark", "Holger", "" ] ]
Cryo-EM data processing typically focuses on the structure of the main conformational state under investigation and discards images that belong to other states. This approach can reach atomic resolution, but ignores vast amounts of valuable information about the underlying conformational ensemble and its dynamics. CowScape analyzes an entire cryo-EM dataset and thereby obtains a quantitative description of structural variability of macromolecular complexes that represents the biochemically relevant conformational space. By combining extensive image classification with principal component analysis (PCA) of the classified 3D volumes and kernel density estimation, CowScape can be used as a quantitative tool to analyze this variability. PCA projects all 3D structures along the major modes spanning a low-dimensional space that captures a large portion of structural variability. The number of particle images in a given state can be used to calculate an energy landscape based on kernel density estimation and Boltzmann inversion. By revealing allosteric interactions in macromolecular complexes, CowScape allows us to distinguish and interpret dynamic changes in macromolecular complexes during function and regulation.
2307.15471
Joao Pedro De Magalhaes
Kasit Chatsirisupachai, Jo\~ao Pedro de Magalh\~aes
Somatic mutations in human ageing: New insights from DNA sequencing and inherited mutations
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The accumulation of somatic mutations is a driver of cancer and has long been associated with ageing. Due to limitations in quantifying mutation burden with age in non-cancerous tissues, the impact of somatic mutations in other ageing phenotypes is unclear. Recent advances in DNA sequencing technologies have allowed the large-scale quantification of somatic mutations in ageing. These studies have revealed a gradual accumulation of mutations in most normal tissues with age as well as a substantial clonal expansion driven mostly by cancer-related mutations. Nevertheless, because of the relatively modest burden of age-related somatic mutations identified so far and their stochastic nature, it is difficult to envision how somatic mutation accumulation alone can explain most ageing phenotypes that develop gradually. Studies across species have also found that longer-lived species have lower somatic mutation rates, though these could be explained by selective pressures to reduce or postpone cancer as longevity increases. Overall, with a few exceptions like cancer, results from recent DNA sequencing studies do not add weight to the idea that somatic mutations with age drive ageing phenotypes and the phenotypic role, if any, of somatic mutations in ageing remains unclear. Recent studies in patients with somatic mutation burden and no signs of accelerated ageing further question the role of somatic mutations in ageing.
[ { "created": "Fri, 28 Jul 2023 10:43:36 GMT", "version": "v1" } ]
2023-07-31
[ [ "Chatsirisupachai", "Kasit", "" ], [ "de Magalhães", "João Pedro", "" ] ]
The accumulation of somatic mutations is a driver of cancer and has long been associated with ageing. Due to limitations in quantifying mutation burden with age in non-cancerous tissues, the impact of somatic mutations in other ageing phenotypes is unclear. Recent advances in DNA sequencing technologies have allowed the large-scale quantification of somatic mutations in ageing. These studies have revealed a gradual accumulation of mutations in most normal tissues with age as well as a substantial clonal expansion driven mostly by cancer-related mutations. Nevertheless, because of the relatively modest burden of age-related somatic mutations identified so far and their stochastic nature, it is difficult to envision how somatic mutation accumulation alone can explain most ageing phenotypes that develop gradually. Studies across species have also found that longer-lived species have lower somatic mutation rates, though these could be explained by selective pressures to reduce or postpone cancer as longevity increases. Overall, with a few exceptions like cancer, results from recent DNA sequencing studies do not add weight to the idea that somatic mutations with age drive ageing phenotypes and the phenotypic role, if any, of somatic mutations in ageing remains unclear. Recent studies in patients with somatic mutation burden and no signs of accelerated ageing further question the role of somatic mutations in ageing.
1208.0636
Mike Steel Prof.
Sha Zhu, Mike Steel
Is the Random Tree Puzzle process the same as the Yule-Harding process?
8 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been suggested that a Random Tree Puzzle (RTP) process leads to a Yule-Harding (YH) distribution, when the number of taxa becomes large. In this study, we formalize this conjecture, and we prove that the two tree distributions converge for two particular properties, which suggests that the conjecture may be true. However, we present evidence that, while the two distributions are close, the RTP appears to converge on a different distribution than does the YH.
[ { "created": "Fri, 3 Aug 2012 00:53:47 GMT", "version": "v1" }, { "created": "Thu, 9 Aug 2012 04:08:00 GMT", "version": "v2" } ]
2012-08-10
[ [ "Zhu", "Sha", "" ], [ "Steel", "Mike", "" ] ]
It has been suggested that a Random Tree Puzzle (RTP) process leads to a Yule-Harding (YH) distribution, when the number of taxa becomes large. In this study, we formalize this conjecture, and we prove that the two tree distributions converge for two particular properties, which suggests that the conjecture may be true. However, we present evidence that, while the two distributions are close, the RTP appears to converge on a different distribution than does the YH.
2311.10913
William Howard-Snyder
William Howard-Snyder, Will Dumm, Mary Barker, Ognian Milanov, Claris Winston, David H. Rich, Frederick A Matsen IV
Densely sampled phylogenies frequently deviate from maximum parsimony in simple and local ways
18 pages, 7 figures, submitted to RECOMB 2024
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Why do phylogenetic algorithms fail when they return incorrect answers? This simple question has not been answered in detail, even for maximum parsimony (MP), the simplest phylogenetic criterion. Understanding MP has recently gained relevance in the regime of extremely dense sampling, where each virus sample commonly differs by zero or one mutation from another previously sampled virus. Although recent research shows that evolutionary histories in this regime are close to being maximally parsimonious, the structure of their deviations from MP is not yet understood. In this paper, we develop algorithms to understand how the correct tree deviates from being MP in the densely sampled case. By applying these algorithms to simulations that realistically mimic the evolution of SARS-CoV-2, we find that simulated trees frequently only deviate from maximally parsimonious trees locally, through simple structures consisting of the same mutation appearing independently on sister branches.
[ { "created": "Fri, 17 Nov 2023 23:46:19 GMT", "version": "v1" } ]
2023-11-21
[ [ "Howard-Snyder", "William", "" ], [ "Dumm", "Will", "" ], [ "Barker", "Mary", "" ], [ "Milanov", "Ognian", "" ], [ "Winston", "Claris", "" ], [ "Rich", "David H.", "" ], [ "Matsen", "Frederick A", "IV" ] ...
Why do phylogenetic algorithms fail when they return incorrect answers? This simple question has not been answered in detail, even for maximum parsimony (MP), the simplest phylogenetic criterion. Understanding MP has recently gained relevance in the regime of extremely dense sampling, where each virus sample commonly differs by zero or one mutation from another previously sampled virus. Although recent research shows that evolutionary histories in this regime are close to being maximally parsimonious, the structure of their deviations from MP is not yet understood. In this paper, we develop algorithms to understand how the correct tree deviates from being MP in the densely sampled case. By applying these algorithms to simulations that realistically mimic the evolution of SARS-CoV-2, we find that simulated trees frequently only deviate from maximally parsimonious trees locally, through simple structures consisting of the same mutation appearing independently on sister branches.
1705.10922
Sahil Shah
Sahil D. Shah and Rosemary Braun
Network-based identification of disease genes in expression data: the GeneSurrounder method
We have extended the application and evaluation of our GeneSurrounder method to a second disease (gene expression data from bladder cancer) and added additional analyses of GeneSurrounder's ability to identify known cancer-associated genes
null
null
null
q-bio.QM q-bio.GN q-bio.MN stat.AP stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of high-throughput transcription profiling technologies has enabled identification of genes and pathways associated with disease, providing new avenues for precision medicine. A key challenge is to analyze this data in the context of the regulatory networks and pathways that control cellular processes, while still obtaining insights that can be used to design new diagnostic and therapeutic interventions. While classical differential expression analysis provides specific and hence targetable gene-level insights, it does not include any systems-level information. On the other hand, pathway analyses integrate systems-level information with expression data, but are often limited in their ability to indicate specific molecular targets. We introduce GeneSurrounder, an analysis method that takes into account the complex structure of interaction networks to identify specific genes that disrupt pathway activity in a disease-specific manner. GeneSurrounder integrates transcriptomic data and pathway network information in a novel two-step procedure to detect genes that (i) appear to influence the expression of other genes local to it in the network and (ii) are part of a subnetwork of differentially expressed genes. Combined, this evidence can be used to pinpoint specific genes that have a mechanistic role in the phenotype of interest. Applying GeneSurrounder to three distinct ovarian cancer studies using a global KEGG network, we show that our method is able to identify biologically relevant genes and genes missed by single-gene association tests, integrate pathway and expression data, and yield more consistent results across multiple studies of the same phenotype than competing methods.
[ { "created": "Wed, 31 May 2017 02:40:18 GMT", "version": "v1" }, { "created": "Wed, 9 Jan 2019 23:08:36 GMT", "version": "v2" } ]
2019-01-11
[ [ "Shah", "Sahil D.", "" ], [ "Braun", "Rosemary", "" ] ]
The advent of high-throughput transcription profiling technologies has enabled identification of genes and pathways associated with disease, providing new avenues for precision medicine. A key challenge is to analyze this data in the context of the regulatory networks and pathways that control cellular processes, while still obtaining insights that can be used to design new diagnostic and therapeutic interventions. While classical differential expression analysis provides specific and hence targetable gene-level insights, it does not include any systems-level information. On the other hand, pathway analyses integrate systems-level information with expression data, but are often limited in their ability to indicate specific molecular targets. We introduce GeneSurrounder, an analysis method that takes into account the complex structure of interaction networks to identify specific genes that disrupt pathway activity in a disease-specific manner. GeneSurrounder integrates transcriptomic data and pathway network information in a novel two-step procedure to detect genes that (i) appear to influence the expression of other genes local to it in the network and (ii) are part of a subnetwork of differentially expressed genes. Combined, this evidence can be used to pinpoint specific genes that have a mechanistic role in the phenotype of interest. Applying GeneSurrounder to three distinct ovarian cancer studies using a global KEGG network, we show that our method is able to identify biologically relevant genes and genes missed by single-gene association tests, integrate pathway and expression data, and yield more consistent results across multiple studies of the same phenotype than competing methods.
2401.14928
Sean Edwards
Sean M. Edwards, Amy L. Harding, Joseph A. Leedale, Steve D. Webb, Helen E. Colley, Craig Murdoch, Rachel N. Bearon
An innovative in silico model of the oral mucosa reveals the impact of extracellular spaces on chemical permeation through epithelium
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
In pharmaceutical therapeutic design or toxicology, accurately predicting the permeation of chemicals through human epithelial tissues is crucial, where permeation is significantly influenced by the tissue's cellular architecture. Current mathematical models for multi-layered epithelium such as the oral mucosa only use simplistic 'bricks and mortar' geometries and therefore do not account for the complex cellular architecture of these tissues at the microscale level, such as the extensive plasma membrane convolutions that define the extracellular spaces between cells. Chemicals often permeate tissues via this paracellular route, meaning that permeation is underestimated. To address this, measurements of human buccal mucosal tissue were conducted to ascertain the width and tortuosity of extracellular spaces across the epithelium. Using mechanistic mathematical modelling, we show that the convoluted geometry of extracellular spaces significantly impacts chemical permeation and that this can be approximated, provided that extracellular tortuosity is accounted for. We next developed an advanced physically-relevant in silico model of oral mucosal chemical permeation using partial differential equations, fitted to chemical permeation in vitro assays on tissue-engineered human oral mucosa. Tissue geometries were measured and captured in silico, and permeation examined and predicted for chemicals with different physicochemical properties. The effect of altering the extracellular space to mimic permeation enhancers was also assessed by perturbing the in silico model. This novel in vitro-in silico approach has the potential to expedite pharmaceutical innovation for testing oromucosal chemical permeation, providing a more accurate, physiologically-relevant model which can reduce animal testing with early screening based on chemical properties.
[ { "created": "Fri, 26 Jan 2024 15:06:31 GMT", "version": "v1" } ]
2024-01-29
[ [ "Edwards", "Sean M.", "" ], [ "Harding", "Amy L.", "" ], [ "Leedale", "Joseph A.", "" ], [ "Webb", "Steve D.", "" ], [ "Colley", "Helen E.", "" ], [ "Murdoch", "Craig", "" ], [ "Bearon", "Rachel N.", "" ]...
In pharmaceutical therapeutic design or toxicology, accurately predicting the permeation of chemicals through human epithelial tissues is crucial, where permeation is significantly influenced by the tissue's cellular architecture. Current mathematical models for multi-layered epithelium such as the oral mucosa only use simplistic 'bricks and mortar' geometries and therefore do not account for the complex cellular architecture of these tissues at the microscale level, such as the extensive plasma membrane convolutions that define the extracellular spaces between cells. Chemicals often permeate tissues via this paracellular route, meaning that permeation is underestimated. To address this, measurements of human buccal mucosal tissue were conducted to ascertain the width and tortuosity of extracellular spaces across the epithelium. Using mechanistic mathematical modelling, we show that the convoluted geometry of extracellular spaces significantly impacts chemical permeation and that this can be approximated, provided that extracellular tortuosity is accounted for. We next developed an advanced physically-relevant in silico model of oral mucosal chemical permeation using partial differential equations, fitted to chemical permeation in vitro assays on tissue-engineered human oral mucosa. Tissue geometries were measured and captured in silico, and permeation examined and predicted for chemicals with different physicochemical properties. The effect of altering the extracellular space to mimic permeation enhancers was also assessed by perturbing the in silico model. This novel in vitro-in silico approach has the potential to expedite pharmaceutical innovation for testing oromucosal chemical permeation, providing a more accurate, physiologically-relevant model which can reduce animal testing with early screening based on chemical properties.
2004.14291
Victor M. Perez-Garcia
V\'ictor M. P\'erez-Garc\'ia, O. Le\'on-Triana, M. Rosa, A. P\'erez-Mart\'inez
CAR T cells for T-cell leukemias: Insights from mathematical models
null
null
10.1016/j.cnsns.2020.105684
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Immunotherapy has the potential to change the way all cancer types are treated and cured. Cancer immunotherapies use elements of the patient immune system to attack tumor cells. One of the most successful types of immunotherapy is CAR-T cells. This treatment works by extracting a patient's T-cells and adding to them an antigen receptor allowing tumor cells to be recognized and targeted. These new cells are called CAR-T cells and are re-infused back into the patient after expansion in vitro. This approach has been successfully used to treat B-cell malignancies (B-cell leukemias and lymphomas). However, its application to the treatment of T-cell leukemias faces several problems. One of these is fratricide, since the CAR-T cells target both tumor and other CAR-T cells. This leads to nonlinear dynamical phenomena amenable to mathematical modeling. In this paper we construct a mathematical model describing the competition of CAR-T, tumor and normal T-cells and study some basic properties of the model and its practical implications. Specifically, we found that the model reproduced the observed difficulties for in-vitro expansion of the therapeutic cells found in the laboratory. The mathematical model predicted that CAR-T cell expansion in the patient would be possible due to the initial presence of a large number of targets. We also show that, in the context of our mathematical approach, CAR-T cells could control tumor growth but not eradicate the disease.
[ { "created": "Sun, 26 Apr 2020 15:57:49 GMT", "version": "v1" } ]
2021-02-03
[ [ "Pérez-García", "Víctor M.", "" ], [ "León-Triana", "O.", "" ], [ "Rosa", "M.", "" ], [ "Pérez-Martínez", "A.", "" ] ]
Immunotherapy has the potential to change the way all cancer types are treated and cured. Cancer immunotherapies use elements of the patient immune system to attack tumor cells. One of the most successful types of immunotherapy is CAR-T cells. This treatment works by extracting a patient's T-cells and adding to them an antigen receptor allowing tumor cells to be recognized and targeted. These new cells are called CAR-T cells and are re-infused back into the patient after expansion in vitro. This approach has been successfully used to treat B-cell malignancies (B-cell leukemias and lymphomas). However, its application to the treatment of T-cell leukemias faces several problems. One of these is fratricide, since the CAR-T cells target both tumor and other CAR-T cells. This leads to nonlinear dynamical phenomena amenable to mathematical modeling. In this paper we construct a mathematical model describing the competition of CAR-T, tumor and normal T-cells and study some basic properties of the model and its practical implications. Specifically, we found that the model reproduced the observed difficulties for in-vitro expansion of the therapeutic cells found in the laboratory. The mathematical model predicted that CAR-T cell expansion in the patient would be possible due to the initial presence of a large number of targets. We also show that, in the context of our mathematical approach, CAR-T cells could control tumor growth but not eradicate the disease.
1605.08612
Sonja Schmid
Sonja Schmid, Markus G\"otz, Thorsten Hugel
Experiment-friendly kinetic analysis of single molecule data in and out of equilibrium
11 pages, 8 figures
Biophysical Journal 111,1375-1384, October 4, 2016
10.1016/j.bpj.2016.08.023
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple and robust technique to extract kinetic rate models and thermodynamic quantities from single molecule time traces. SMACKS (Single Molecule Analysis of Complex Kinetic Sequences) is a maximum likelihood approach that works equally well for long trajectories as for a set of short ones. It resolves all statistically relevant rates and also their uncertainties. This is achieved by optimizing one global kinetic model based on the complete dataset, while allowing for experimental variations between individual trajectories. In particular, neither a priori models nor equilibrium have to be assumed. The power of SMACKS is demonstrated on the kinetics of the multi-domain protein Hsp90 measured by smFRET (single molecule F\"orster resonance energy transfer). Experiments in and out of equilibrium are analyzed and compared to simulations, shedding new light on the role of Hsp90's ATPase function. SMACKS pushes the boundaries of single molecule kinetics far beyond current methods.
[ { "created": "Fri, 27 May 2016 12:44:24 GMT", "version": "v1" } ]
2022-03-10
[ [ "Schmid", "Sonja", "" ], [ "Götz", "Markus", "" ], [ "Hugel", "Thorsten", "" ] ]
We present a simple and robust technique to extract kinetic rate models and thermodynamic quantities from single molecule time traces. SMACKS (Single Molecule Analysis of Complex Kinetic Sequences) is a maximum likelihood approach that works equally well for long trajectories as for a set of short ones. It resolves all statistically relevant rates and also their uncertainties. This is achieved by optimizing one global kinetic model based on the complete dataset, while allowing for experimental variations between individual trajectories. In particular, neither a priori models nor equilibrium have to be assumed. The power of SMACKS is demonstrated on the kinetics of the multi-domain protein Hsp90 measured by smFRET (single molecule F\"orster resonance energy transfer). Experiments in and out of equilibrium are analyzed and compared to simulations, shedding new light on the role of Hsp90's ATPase function. SMACKS pushes the boundaries of single molecule kinetics far beyond current methods.
1210.1472
Manish Gupta
Naman Turakhia, Nilay Chheda, Manish K. Gupta, Ruchin Shah and Jigar Raisinghani
Biospectrogram: a tool for spectral analysis of biological sequences
2 pages, 1 figure, submitted to Bioinformatics Journal, Biospectrogram is available at http://www.guptalab.org/biospectrogram
null
null
null
q-bio.QM cs.CE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Summary: Biospectrogram is open-source software for the spectral analysis of DNA and protein sequences. The software can fetch (from the NCBI server), import and manage biological data. One can analyze the data using Digital Signal Processing (DSP) techniques, since the software allows the user to convert the symbolic data into numerical data using 23 popular encodings, apply popular transformations such as the Fast Fourier Transform (FFT), and export the result. The ability to export both encoding files and transform files as a MATLAB .m file gives the user the option to apply a variety of DSP techniques. Users can also perform window analysis (both sliding, in forward and backward directions, and stagnant) with windows of different sizes and search for meaningful spectral patterns with the help of the exported MATLAB file in a dynamic manner by choosing the time delay in the plot using Biospectrogram. Random encodings and user-defined encodings allow the software to search many possibilities in spectral space. Availability: Biospectrogram is written in Java and is freely available for download from http://www.guptalab.org/biospectrogram. The software has been optimized to run on Windows, Mac OS X and Linux. A user manual and a YouTube (product demo) tutorial are also available on the website. We are in the process of acquiring an open-source license for it.
[ { "created": "Thu, 4 Oct 2012 14:42:50 GMT", "version": "v1" } ]
2012-10-05
[ [ "Turakhia", "Naman", "" ], [ "Chheda", "Nilay", "" ], [ "Gupta", "Manish K.", "" ], [ "Shah", "Ruchin", "" ], [ "Raisinghani", "Jigar", "" ] ]
Summary: Biospectrogram is open-source software for the spectral analysis of DNA and protein sequences. The software can fetch (from the NCBI server), import and manage biological data. One can analyze the data using Digital Signal Processing (DSP) techniques, since the software allows the user to convert the symbolic data into numerical data using 23 popular encodings, apply popular transformations such as the Fast Fourier Transform (FFT), and export the result. The ability to export both encoding files and transform files as a MATLAB .m file gives the user the option to apply a variety of DSP techniques. Users can also perform window analysis (both sliding, in forward and backward directions, and stagnant) with windows of different sizes and search for meaningful spectral patterns with the help of the exported MATLAB file in a dynamic manner by choosing the time delay in the plot using Biospectrogram. Random encodings and user-defined encodings allow the software to search many possibilities in spectral space. Availability: Biospectrogram is written in Java and is freely available for download from http://www.guptalab.org/biospectrogram. The software has been optimized to run on Windows, Mac OS X and Linux. A user manual and a YouTube (product demo) tutorial are also available on the website. We are in the process of acquiring an open-source license for it.
q-bio/0703061
Roderick C. Dewar
Roderick C. Dewar, Annabel Porte
Statistical mechanics unifies different ecological patterns
38 pages, 4 figures, final revision, major rewrite with many clarifications and simplifications, amended title. Accepted by Journal of Theoretical Biology, 12 December 2007
null
null
null
q-bio.PE
null
Recently there has been growing interest in the use of Maximum Relative Entropy (MaxREnt) as a tool for statistical inference in ecology. In contrast, here we propose MaxREnt as a tool for applying statistical mechanics to ecology. We use MaxREnt to explain and predict species abundance patterns in ecological communities in terms of the most probable behaviour under given environmental constraints, in the same way that statistical mechanics explains and predicts the behaviour of thermodynamic systems. We show that MaxREnt unifies a number of different ecological patterns: (i) at relatively local scales a unimodal biodiversity-productivity relationship is predicted in good agreement with published data on grassland communities, (ii) the predicted relative frequency of rare vs. abundant species is very similar to the empirical lognormal distribution, (iii) both neutral and non-neutral species abundance patterns are explained, (iv) on larger scales a monotonic biodiversity-productivity relationship is predicted in agreement with the species-energy law, (v) energetic equivalence and power-law self-thinning behaviour are predicted in resource-rich communities. We identify mathematical similarities between these ecological patterns and the behaviour of thermodynamic systems, and conclude that the explanation of ecological patterns is not unique to ecology but rather reflects the generic statistical behaviour of complex systems with many degrees of freedom under very general types of environmental constraints.
[ { "created": "Wed, 28 Mar 2007 08:33:24 GMT", "version": "v1" }, { "created": "Mon, 10 Sep 2007 15:02:04 GMT", "version": "v2" }, { "created": "Thu, 13 Dec 2007 10:18:51 GMT", "version": "v3" } ]
2007-12-13
[ [ "Dewar", "Roderick C.", "" ], [ "Porte", "Annabel", "" ] ]
Recently there has been growing interest in the use of Maximum Relative Entropy (MaxREnt) as a tool for statistical inference in ecology. In contrast, here we propose MaxREnt as a tool for applying statistical mechanics to ecology. We use MaxREnt to explain and predict species abundance patterns in ecological communities in terms of the most probable behaviour under given environmental constraints, in the same way that statistical mechanics explains and predicts the behaviour of thermodynamic systems. We show that MaxREnt unifies a number of different ecological patterns: (i) at relatively local scales a unimodal biodiversity-productivity relationship is predicted in good agreement with published data on grassland communities, (ii) the predicted relative frequency of rare vs. abundant species is very similar to the empirical lognormal distribution, (iii) both neutral and non-neutral species abundance patterns are explained, (iv) on larger scales a monotonic biodiversity-productivity relationship is predicted in agreement with the species-energy law, (v) energetic equivalence and power-law self-thinning behaviour are predicted in resource-rich communities. We identify mathematical similarities between these ecological patterns and the behaviour of thermodynamic systems, and conclude that the explanation of ecological patterns is not unique to ecology but rather reflects the generic statistical behaviour of complex systems with many degrees of freedom under very general types of environmental constraints.
2407.01248
Rodrigo Dorantes-Gilardi
Rodrigo Dorantes-Gilardi, Kerry Ivey, Lauren Costa, Rachael Matty, Kelly Cho, John Michael Gaziano and Albert-L\'aszl\'o Barab\'asi
Quantifying the Impact of Biobanks and Cohort Studies
14 pages, 5 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Biobanks advance biomedical and clinical research by collecting and offering data and biological samples for numerous studies. However, the impact of these repositories varies greatly due to differences in their purpose, scope, governance, and data collected. Here, we computationally identified 2,663 biobanks and their textual mentions in 228,761 scientific articles, 16,210 grants, 15,469 patents, 1,769 clinical trials, and 9,468 public policy documents, helping characterize the academic communities that utilize and support them. We found a strong concentration of biobank-related research on a few diseases, where 20\% of publications focus on obesity, Alzheimer's disease, breast cancer, and diabetes. Moreover, collaboration, rather than citation count, shapes the community's recognition of a biobank. We show that, on average, 41.1\% of articles fail to reference any of the biobank's reference papers and 59.6\% include a biobank member as a co-author. Using a generalized linear model, we identified the key factors that contribute to the impact of a biobank, finding that an impactful biobank tends to be more open to external researchers, and that quality data -- especially linked medical records -- as opposed to large data, correlates with a higher impact in science, innovation, and disease. The collected data and findings are accessible through an open-access web application intended to inform strategies to expand access and maximize the value of these valuable resources.
[ { "created": "Mon, 1 Jul 2024 12:51:39 GMT", "version": "v1" } ]
2024-07-02
[ [ "Dorantes-Gilardi", "Rodrigo", "" ], [ "Ivey", "Kerry", "" ], [ "Costa", "Lauren", "" ], [ "Matty", "Rachael", "" ], [ "Cho", "Kelly", "" ], [ "Gaziano", "John Michael", "" ], [ "Barabási", "Albert-László", ...
Biobanks advance biomedical and clinical research by collecting and offering data and biological samples for numerous studies. However, the impact of these repositories varies greatly due to differences in their purpose, scope, governance, and data collected. Here, we computationally identified 2,663 biobanks and their textual mentions in 228,761 scientific articles, 16,210 grants, 15,469 patents, 1,769 clinical trials, and 9,468 public policy documents, helping characterize the academic communities that utilize and support them. We found a strong concentration of biobank-related research on a few diseases, where 20\% of publications focus on obesity, Alzheimer's disease, breast cancer, and diabetes. Moreover, collaboration, rather than citation count, shapes the community's recognition of a biobank. We show that, on average, 41.1\% of articles fail to reference any of the biobank's reference papers and 59.6\% include a biobank member as a co-author. Using a generalized linear model, we identified the key factors that contribute to the impact of a biobank, finding that an impactful biobank tends to be more open to external researchers, and that quality data -- especially linked medical records -- as opposed to large data, correlates with a higher impact in science, innovation, and disease. The collected data and findings are accessible through an open-access web application intended to inform strategies to expand access and maximize the value of these valuable resources.
2101.05956
Sean Lawley
Gregory Handy and Sean D Lawley
Revising Berg-Purcell for finite receptor kinetics
26 pages, 5 figures
null
10.1016/j.bpj.2021.03.021
null
q-bio.QM math.PR q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From nutrient uptake, to chemoreception, to synaptic transmission, many systems in cell biology depend on molecules diffusing and binding to membrane receptors. Mathematical analysis of such systems often neglects the fact that receptors process molecules at finite kinetic rates. A key example is the celebrated formula of Berg and Purcell for the rate that cell surface receptors capture extracellular molecules. Indeed, this influential result is only valid if receptors transport molecules through the cell wall at a rate much faster than molecules arrive at receptors. From a mathematical perspective, ignoring receptor kinetics is convenient because it makes the diffusing molecules independent. In contrast, including receptor kinetics introduces correlations between the diffusing molecules since, for example, bound receptors may be temporarily blocked from binding additional molecules. In this work, we present a modeling framework for coupling bulk diffusion to surface receptors with finite kinetic rates. The framework uses boundary homogenization to couple the diffusion equation to nonlinear ordinary differential equations on the boundary. We use this framework to derive an explicit formula for the cellular uptake rate and show that the analysis of Berg and Purcell significantly overestimates uptake in some typical biophysical scenarios. We confirm our analysis by numerical simulations of a many particle stochastic system.
[ { "created": "Fri, 15 Jan 2021 03:33:21 GMT", "version": "v1" } ]
2021-06-16
[ [ "Handy", "Gregory", "" ], [ "Lawley", "Sean D", "" ] ]
From nutrient uptake, to chemoreception, to synaptic transmission, many systems in cell biology depend on molecules diffusing and binding to membrane receptors. Mathematical analysis of such systems often neglects the fact that receptors process molecules at finite kinetic rates. A key example is the celebrated formula of Berg and Purcell for the rate that cell surface receptors capture extracellular molecules. Indeed, this influential result is only valid if receptors transport molecules through the cell wall at a rate much faster than molecules arrive at receptors. From a mathematical perspective, ignoring receptor kinetics is convenient because it makes the diffusing molecules independent. In contrast, including receptor kinetics introduces correlations between the diffusing molecules since, for example, bound receptors may be temporarily blocked from binding additional molecules. In this work, we present a modeling framework for coupling bulk diffusion to surface receptors with finite kinetic rates. The framework uses boundary homogenization to couple the diffusion equation to nonlinear ordinary differential equations on the boundary. We use this framework to derive an explicit formula for the cellular uptake rate and show that the analysis of Berg and Purcell significantly overestimates uptake in some typical biophysical scenarios. We confirm our analysis by numerical simulations of a many particle stochastic system.
1706.04946
Stefano Fusi
Stefano Fusi
Computational models of long term plasticity and memory
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memory is often defined as the mental capacity of retaining information about facts, events, procedures and more generally about any type of previous experience. Memories are remembered as long as they influence our thoughts, feelings, and behavior at the present time. Memory is also one of the fundamental components of learning, our ability to acquire any type of knowledge or skills. In the brain it is not easy to identify the physical substrate of memory. Basically, any long-lasting alteration of a biochemical process can be considered a form of memory, although some of these alterations last only a few milliseconds, and most of them, if taken individually, cannot influence our behavior. However, if we want to understand memory, we need to keep in mind that memory is not a unitary phenomenon, and it certainly involves several distinct mechanisms that operate at different spatial and temporal levels. One of the goals of theoretical neuroscience is to try to understand how these processes are orchestrated to store memories rapidly and preserve them over a lifetime. Theorists have mostly focused on synaptic plasticity, as it is one of the most studied memory mechanisms in experimental neuroscience and it is known to be highly effective in training artificial neural networks to perform real world tasks. Some of the synaptic plasticity models are purely phenomenological, some others have been designed to solve computational problems. In this article I will review some of these models and I will try to identify computational principles that underlie memory storage and preservation.
[ { "created": "Thu, 15 Jun 2017 16:13:24 GMT", "version": "v1" } ]
2017-06-16
[ [ "Fusi", "Stefano", "" ] ]
Memory is often defined as the mental capacity of retaining information about facts, events, procedures and more generally about any type of previous experience. Memories are remembered as long as they influence our thoughts, feelings, and behavior at the present time. Memory is also one of the fundamental components of learning, our ability to acquire any type of knowledge or skills. In the brain it is not easy to identify the physical substrate of memory. Basically, any long-lasting alteration of a biochemical process can be considered a form of memory, although some of these alterations last only a few milliseconds, and most of them, if taken individually, cannot influence our behavior. However, if we want to understand memory, we need to keep in mind that memory is not a unitary phenomenon, and it certainly involves several distinct mechanisms that operate at different spatial and temporal levels. One of the goals of theoretical neuroscience is to try to understand how these processes are orchestrated to store memories rapidly and preserve them over a lifetime. Theorists have mostly focused on synaptic plasticity, as it is one of the most studied memory mechanisms in experimental neuroscience and it is known to be highly effective in training artificial neural networks to perform real world tasks. Some of the synaptic plasticity models are purely phenomenological, some others have been designed to solve computational problems. In this article I will review some of these models and I will try to identify computational principles that underlie memory storage and preservation.
1104.5674
Stanley Lazic
Stanley E. Lazic
Using causal models to distinguish between neurogenesis-dependent and -independent effects on behaviour
To be published in the Journal of the Royal Society Interface
null
10.1098/rsif.2011.0510
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been a substantial amount of research on the relationship between hippocampal neurogenesis and behaviour over the past fifteen years, but the causal role that new neurons have on cognitive and affective behavioural tasks is still far from clear. This is partly due to the difficulty of manipulating levels of neurogenesis without inducing off-target effects, which might also influence behaviour. In addition, the analytical methods typically used do not directly test whether neurogenesis mediates the effect of an intervention on behaviour. Previous studies may have incorrectly attributed changes in behavioural performance to neurogenesis because the role of known (or unknown) neurogenesis-independent mechanisms were not formally taken into consideration during the analysis. Causal models can tease apart complex causal relationships and were used to demonstrate that the effect of exercise on pattern separation is via neurogenesis-independent mechanisms. Many studies in the neurogenesis literature would benefit from the use of statistical methods that can separate neurogenesis-dependent from neurogenesis-independent effects on behaviour.
[ { "created": "Fri, 29 Apr 2011 16:32:00 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2011 20:35:55 GMT", "version": "v2" } ]
2014-11-11
[ [ "Lazic", "Stanley E.", "" ] ]
There has been a substantial amount of research on the relationship between hippocampal neurogenesis and behaviour over the past fifteen years, but the causal role that new neurons have on cognitive and affective behavioural tasks is still far from clear. This is partly due to the difficulty of manipulating levels of neurogenesis without inducing off-target effects, which might also influence behaviour. In addition, the analytical methods typically used do not directly test whether neurogenesis mediates the effect of an intervention on behaviour. Previous studies may have incorrectly attributed changes in behavioural performance to neurogenesis because the role of known (or unknown) neurogenesis-independent mechanisms were not formally taken into consideration during the analysis. Causal models can tease apart complex causal relationships and were used to demonstrate that the effect of exercise on pattern separation is via neurogenesis-independent mechanisms. Many studies in the neurogenesis literature would benefit from the use of statistical methods that can separate neurogenesis-dependent from neurogenesis-independent effects on behaviour.
1311.1241
Ngan Nguyen
Ngan Nguyen, Glenn Hickey, Brian J. Raney, Joel Armstrong, Hiram Clawson, Ann Zweig, Jim Kent, David Haussler, Benedict Paten
Comparative Assembly Hubs: Web Accessible Browsers for Comparative Genomics
10 pages, 3 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a pipeline to easily generate collections of web accessible UCSC genome browsers interrelated by an alignment. Using the alignment, all annotations and the alignment itself can be efficiently viewed with reference to any genome in the collection, symmetrically. A new, intelligently scaled alignment display makes it simple to view all changes between the genomes at all levels of resolution, from substitutions to complex structural rearrangements, including duplications.
[ { "created": "Tue, 5 Nov 2013 22:28:16 GMT", "version": "v1" } ]
2013-11-07
[ [ "Nguyen", "Ngan", "" ], [ "Hickey", "Glenn", "" ], [ "Raney", "Brian J.", "" ], [ "Armstrong", "Joel", "" ], [ "Clawson", "Hiram", "" ], [ "Zweig", "Ann", "" ], [ "Kent", "Jim", "" ], [ "Haussler", ...
We introduce a pipeline to easily generate collections of web accessible UCSC genome browsers interrelated by an alignment. Using the alignment, all annotations and the alignment itself can be efficiently viewed with reference to any genome in the collection, symmetrically. A new, intelligently scaled alignment display makes it simple to view all changes between the genomes at all levels of resolution, from substitutions to complex structural rearrangements, including duplications.
1806.00897
Griffin Chure
Griffin Chure, Heun Jin Lee, Rob Phillips
Connecting the dots between mechanosensitive channel abundance, osmotic shock, and survival at single-cell resolution
null
null
null
null
q-bio.CB q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Rapid changes in extracellular osmolarity are one of many insults microbial cells face on a daily basis. To protect against such shocks, Escherichia coli and other microbes express several types of transmembrane channels which open and close in response to changes in membrane tension. In E. coli, one of the most abundant channels is the mechanosensitive channel of large conductance (MscL). While this channel has been heavily characterized through structural methods, electrophysiology, and theoretical modeling, our understanding of its physiological role in preventing cell death by alleviating high membrane tension remains tenuous. In this work, we examine the contribution of MscL alone to cell survival after osmotic shock at single cell resolution using quantitative fluorescence microscopy. We conduct these experiments in an E. coli strain which is lacking all mechanosensitive channel genes save for MscL whose expression is tuned across three orders of magnitude through modifications of the Shine-Dalgarno sequence. While theoretical models suggest that only a few MscL channels would be needed to alleviate even large changes in osmotic pressure, we find that between 500 and 700 channels per cell are needed to convey upwards of 80% survival. This number agrees with the average MscL copy number measured in wild-type E. coli cells through proteomic studies and quantitative Western blotting. Furthermore, we observe zero survival events in cells with less than 100 channels per cell. This work opens new questions concerning the contribution of other mechanosensitive channels to survival as well as regulation of their activity.
[ { "created": "Sun, 3 Jun 2018 23:40:52 GMT", "version": "v1" } ]
2018-06-05
[ [ "Chure", "Griffin", "" ], [ "Lee", "Heun Jin", "" ], [ "Phillips", "Rob", "" ] ]
Rapid changes in extracellular osmolarity are one of many insults microbial cells face on a daily basis. To protect against such shocks, Escherichia coli and other microbes express several types of transmembrane channels which open and close in response to changes in membrane tension. In E. coli, one of the most abundant channels is the mechanosensitive channel of large conductance (MscL). While this channel has been heavily characterized through structural methods, electrophysiology, and theoretical modeling, our understanding of its physiological role in preventing cell death by alleviating high membrane tension remains tenuous. In this work, we examine the contribution of MscL alone to cell survival after osmotic shock at single cell resolution using quantitative fluorescence microscopy. We conduct these experiments in an E. coli strain which is lacking all mechanosensitive channel genes save for MscL whose expression is tuned across three orders of magnitude through modifications of the Shine-Dalgarno sequence. While theoretical models suggest that only a few MscL channels would be needed to alleviate even large changes in osmotic pressure, we find that between 500 and 700 channels per cell are needed to convey upwards of 80% survival. This number agrees with the average MscL copy number measured in wild-type E. coli cells through proteomic studies and quantitative Western blotting. Furthermore, we observe zero survival events in cells with less than 100 channels per cell. This work opens new questions concerning the contribution of other mechanosensitive channels to survival as well as regulation of their activity.
2205.04235
Raul Fernandez Rojas
Niraj Hirachan, Anita Mathews, Julio Romero, Raul Fernandez Rojas
Measuring Cognitive Workload Using Multimodal Sensors
null
null
null
null
q-bio.NC cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
This study aims to identify a set of indicators to estimate cognitive workload using a multimodal sensing approach and machine learning. A set of three cognitive tests were conducted to induce cognitive workload in twelve participants at two levels of task difficulty (Easy and Hard). Four sensors were used to measure the participants' physiological change, including, Electrocardiogram (ECG), electrodermal activity (EDA), respiration (RESP), and blood oxygen saturation (SpO2). To understand the perceived cognitive workload, NASA-TLX was used after each test and analysed using a Chi-Square test. Three well-known classifiers (LDA, SVM, and DT) were trained and tested independently using the physiological data. The statistical analysis showed that participants' perceived cognitive workload was significantly different (p<0.001) between the tests, which demonstrated the validity of the experimental conditions to induce different cognitive levels. Classification results showed that a fusion of ECG and EDA presented good discriminating power (acc=0.74) for cognitive workload detection. This study provides preliminary results in the identification of a possible set of indicators of cognitive workload. Future work needs to be carried out to validate the indicators using more realistic scenarios and with a larger population.
[ { "created": "Thu, 5 May 2022 23:18:00 GMT", "version": "v1" } ]
2022-05-10
[ [ "Hirachan", "Niraj", "" ], [ "Mathews", "Anita", "" ], [ "Romero", "Julio", "" ], [ "Rojas", "Raul Fernandez", "" ] ]
This study aims to identify a set of indicators to estimate cognitive workload using a multimodal sensing approach and machine learning. A set of three cognitive tests were conducted to induce cognitive workload in twelve participants at two levels of task difficulty (Easy and Hard). Four sensors were used to measure the participants' physiological change, including, Electrocardiogram (ECG), electrodermal activity (EDA), respiration (RESP), and blood oxygen saturation (SpO2). To understand the perceived cognitive workload, NASA-TLX was used after each test and analysed using a Chi-Square test. Three well-known classifiers (LDA, SVM, and DT) were trained and tested independently using the physiological data. The statistical analysis showed that participants' perceived cognitive workload was significantly different (p<0.001) between the tests, which demonstrated the validity of the experimental conditions to induce different cognitive levels. Classification results showed that a fusion of ECG and EDA presented good discriminating power (acc=0.74) for cognitive workload detection. This study provides preliminary results in the identification of a possible set of indicators of cognitive workload. Future work needs to be carried out to validate the indicators using more realistic scenarios and with a larger population.
2209.15611
Kevin Wu
Kevin E. Wu, Kevin K. Yang, Rianne van den Berg, James Y. Zou, Alex X. Lu, Ava P. Amini
Protein structure generation via folding diffusion
null
null
null
null
q-bio.BM cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
The ability to computationally generate novel yet physically foldable protein structures could lead to new biological discoveries and new treatments targeting yet incurable diseases. Despite recent advances in protein structure prediction, directly generating diverse, novel protein structures from neural networks remains difficult. In this work, we present a new diffusion-based generative model that designs protein backbone structures via a procedure that mirrors the native folding process. We describe protein backbone structure as a series of consecutive angles capturing the relative orientation of the constituent amino acid residues, and generate new structures by denoising from a random, unfolded state towards a stable folded structure. Not only does this mirror how proteins biologically twist into energetically favorable conformations, the inherent shift and rotational invariance of this representation crucially alleviates the need for complex equivariant networks. We train a denoising diffusion probabilistic model with a simple transformer backbone and demonstrate that our resulting model unconditionally generates highly realistic protein structures with complexity and structural patterns akin to those of naturally-occurring proteins. As a useful resource, we release the first open-source codebase and trained models for protein structure diffusion.
[ { "created": "Fri, 30 Sep 2022 17:35:53 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2022 04:05:41 GMT", "version": "v2" } ]
2022-11-28
[ [ "Wu", "Kevin E.", "" ], [ "Yang", "Kevin K.", "" ], [ "Berg", "Rianne van den", "" ], [ "Zou", "James Y.", "" ], [ "Lu", "Alex X.", "" ], [ "Amini", "Ava P.", "" ] ]
The ability to computationally generate novel yet physically foldable protein structures could lead to new biological discoveries and new treatments targeting yet incurable diseases. Despite recent advances in protein structure prediction, directly generating diverse, novel protein structures from neural networks remains difficult. In this work, we present a new diffusion-based generative model that designs protein backbone structures via a procedure that mirrors the native folding process. We describe protein backbone structure as a series of consecutive angles capturing the relative orientation of the constituent amino acid residues, and generate new structures by denoising from a random, unfolded state towards a stable folded structure. Not only does this mirror how proteins biologically twist into energetically favorable conformations, the inherent shift and rotational invariance of this representation crucially alleviates the need for complex equivariant networks. We train a denoising diffusion probabilistic model with a simple transformer backbone and demonstrate that our resulting model unconditionally generates highly realistic protein structures with complexity and structural patterns akin to those of naturally-occurring proteins. As a useful resource, we release the first open-source codebase and trained models for protein structure diffusion.
1807.00527
Henrik Jeldtoft Jensen
Katharina Brinck and Henrik Jeldtoft Jensen
Bottom-up versus top-down control and the transfer of information in complex model ecosystems
14 pages, 3 figures and 3 tables. Submitted to J Theo. Bio
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ecological systems are emergent features of ecological and adaptive dynamics of a community of interacting species. By natural selection through the abiotic environment and by co-adaptation within the community, species evolve, thereby giving rise to the ecological networks we regard as ecosystems. This reductionist perspective can be contrasted with the view that as species have to fit in the surrounding system, the system itself exerts selection pressure on the evolutionary pathways of the species. This interplay of bottom-up and top-down control in the development and growth of ecological systems has long been discussed; however, empirical ecosystem data are scarce and a comprehensive mathematical framework is lacking. We present a way of quantifying the relative weight of natural selection and coadaptation grounded in information theory, to assess the relative role of bottom-up and top-down control in the evolution of ecological systems, and analyse the information transfer in an individual-based stochastic complex systems model, the Tangled Nature Model of evolutionary ecology. We show that ecological communities evolve from mainly bottom-up controlled early-successional systems to more strongly top-down controlled late-successional systems, as coadaptation progresses. Species which have a high influence on selection are also generally more abundant. Hence our findings imply that ecological communities are shaped by a dialogue of bottom-up and top-down control, where the role of the systemic selection and integrity becomes more pronounced the further the ecosystem is developed.
[ { "created": "Mon, 2 Jul 2018 08:30:11 GMT", "version": "v1" } ]
2018-07-03
[ [ "Brinck", "Katharina", "" ], [ "Jensen", "Henrik Jeldtoft", "" ] ]
Ecological systems are emergent features of ecological and adaptive dynamics of a community of interacting species. By natural selection through the abiotic environment and by co-adaptation within the community, species evolve, thereby giving rise to the ecological networks we regard as ecosystems. This reductionist perspective can be contrasted with the view that as species have to fit in the surrounding system, the system itself exerts selection pressure on the evolutionary pathways of the species. This interplay of bottom-up and top-down control in the development and growth of ecological systems has long been discussed; however, empirical ecosystem data are scarce and a comprehensive mathematical framework is lacking. We present a way of quantifying the relative weight of natural selection and coadaptation grounded in information theory, to assess the relative role of bottom-up and top-down control in the evolution of ecological systems, and analyse the information transfer in an individual-based stochastic complex systems model, the Tangled Nature Model of evolutionary ecology. We show that ecological communities evolve from mainly bottom-up controlled early-successional systems to more strongly top-down controlled late-successional systems, as coadaptation progresses. Species which have a high influence on selection are also generally more abundant. Hence our findings imply that ecological communities are shaped by a dialogue of bottom-up and top-down control, where the role of the systemic selection and integrity becomes more pronounced the further the ecosystem is developed.
1211.1990
Choongseok Park
Choongseok Park and Leonid Rubchinsky
Potential mechanisms for imperfect synchronization in parkinsonian basal ganglia
27 pages, 9 figures
PLoS One. 2012; 7(12): e51530
10.1371/journal.pone.0051530
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural activity in the brain of parkinsonian patients is characterized by intermittently synchronized oscillatory dynamics. This imperfect synchronization, observed in the beta frequency band, is believed to be related to the hypokinetic motor symptoms of the disorder. Our study explores potential mechanisms behind this intermittent synchrony. We study the response of a bursting pallidal neuron to different patterns of synaptic input from a subthalamic nucleus (STN) neuron. We show how an external globus pallidus (GPe) neuron is sensitive to the phase of the input from the STN cell and can exhibit intermittent phase-locking with the input in the beta band. The temporal properties of this intermittent phase-locking show similarities to the intermittent synchronization observed in experiments. We also study the synchronization of GPe cells to synaptic input from the STN cell with dependence on the dopamine-modulated parameters. Dopamine also affects the cellular properties of neurons. We show how the changes in firing patterns of STN neurons due to the lack of dopamine may lead to transition from a lower to a higher coherent state, roughly matching the synchrony levels observed in basal ganglia in normal and parkinsonian states. The intermittent nature of the neural beta band synchrony in Parkinson's disease is achieved in the model due to the interplay of the timing of STN input to pallidum and pallidal neuronal dynamics, resulting in sensitivity of pallidal output to the phase of the arriving STN input. Thus the mechanism considered here (the change in firing pattern of subthalamic neurons through the dopamine-induced change of membrane properties) may be one of the potential mechanisms responsible for the generation of the intermittent synchronization observed in Parkinson's disease.
[ { "created": "Thu, 8 Nov 2012 21:19:23 GMT", "version": "v1" } ]
2013-02-11
[ [ "Park", "Choongseok", "" ], [ "Rubchinsky", "Leonid", "" ] ]
Neural activity in the brain of parkinsonian patients is characterized by intermittently synchronized oscillatory dynamics. This imperfect synchronization, observed in the beta frequency band, is believed to be related to the hypokinetic motor symptoms of the disorder. Our study explores potential mechanisms behind this intermittent synchrony. We study the response of a bursting pallidal neuron to different patterns of synaptic input from a subthalamic nucleus (STN) neuron. We show how an external globus pallidus (GPe) neuron is sensitive to the phase of the input from the STN cell and can exhibit intermittent phase-locking with the input in the beta band. The temporal properties of this intermittent phase-locking show similarities to the intermittent synchronization observed in experiments. We also study the synchronization of GPe cells to synaptic input from the STN cell with dependence on the dopamine-modulated parameters. Dopamine also affects the cellular properties of neurons. We show how the changes in firing patterns of STN neurons due to the lack of dopamine may lead to transition from a lower to a higher coherent state, roughly matching the synchrony levels observed in basal ganglia in normal and parkinsonian states. The intermittent nature of the neural beta band synchrony in Parkinson's disease is achieved in the model due to the interplay of the timing of STN input to pallidum and pallidal neuronal dynamics, resulting in sensitivity of pallidal output to the phase of the arriving STN input. Thus the mechanism considered here (the change in firing pattern of subthalamic neurons through the dopamine-induced change of membrane properties) may be one of the potential mechanisms responsible for the generation of the intermittent synchronization observed in Parkinson's disease.
1309.3329
Alberto d'Onofrio
Alberto d'Onofrio
Fractal growth of tumors and other cellular populations: Linking the mechanistic to the phenomenological modeling and vice versa
8 pages
Chaos Solitons and Fractals 41: 875-880 (2009)
10.1016/j.chaos.2008.04.014
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we study and extend the mechanistic mean field theory of growth of cellular populations proposed by Mombach et al in (Mombach J. C. M. et al., Europhysics Letters, 59 (2002) 923) (MLBI model), and we demonstrate that the original model and our generalizations lead to inferences of biological interest. In the first part of this paper, we show that the model under study is widely general since it admits, as particular cases, the main phenomenological models of cellular growth. In the second part of this work, we generalize the \emph{MLBI} model to a wider family of models by allowing the cells to have a generic unspecified biologically plausible interaction. Then, we derive a relationship between this generic microscopic interaction function and the growth rate of the corresponding macroscopic model. Finally, we propose to use this relationship in order to help the investigation of the biological plausibility of phenomenological models of cancer growth.
[ { "created": "Thu, 12 Sep 2013 23:05:17 GMT", "version": "v1" } ]
2013-09-16
[ [ "d'Onofrio", "Alberto", "" ] ]
In this paper we study and extend the mechanistic mean field theory of growth of cellular populations proposed by Mombach et al in (Mombach J. C. M. et al., Europhysics Letters, 59 (2002) 923) (MLBI model), and we demonstrate that the original model and our generalizations lead to inferences of biological interest. In the first part of this paper, we show that the model under study is widely general since it admits, as particular cases, the main phenomenological models of cellular growth. In the second part of this work, we generalize the \emph{MLBI} model to a wider family of models by allowing the cells to have a generic unspecified biologically plausible interaction. Then, we derive a relationship between this generic microscopic interaction function and the growth rate of the corresponding macroscopic model. Finally, we propose to use this relationship in order to help the investigation of the biological plausibility of phenomenological models of cancer growth.