id: stringlengths (9–13)
submitter: stringlengths (4–48)
authors: stringlengths (4–9.62k)
title: stringlengths (4–343)
comments: stringlengths (2–480)
journal-ref: stringlengths (9–309)
doi: stringlengths (12–138)
report-no: stringclasses (277 values)
categories: stringlengths (8–87)
license: stringclasses (9 values)
orig_abstract: stringlengths (27–3.76k)
versions: listlengths (1–15)
update_date: stringlengths (10–10)
authors_parsed: listlengths (1–147)
abstract: stringlengths (24–3.75k)
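The schema above fixes the field set each record carries. A minimal sketch of how a record keyed by those fields might be checked for completeness (the field names are taken verbatim from the schema; the record layout as a flat dict is an assumption, and the example values are abbreviated from the first record below):

```python
# Field names copied from the schema listing above.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def validate(record: dict) -> bool:
    """Return True iff the record carries exactly the schema's fields."""
    return set(record) == set(FIELDS)

# Hypothetical record stub; only two values filled in for illustration.
example = {f: None for f in FIELDS}
example.update({"id": "1810.11918", "submitter": "Le Yan"})
print(validate(example))  # prints True
```

A record with a missing or extra key would fail the same check, which makes this a cheap sanity gate before any downstream parsing.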
1810.11918
Le Yan
Le Yan, Richard Neher, and Boris I Shraiman
Phylodynamics of rapidly adapting pathogens: extinction and speciation of a Red Queen
15 pages, 9 figures, 1 table
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Rapidly evolving pathogens like influenza viruses can persist by accumulating antigenic novelty fast enough to evade the adaptive immunity of the host population, yet without continuous accumulation of genetic diversity. This dynamical state is often compared to the Red Queen evolving as fast as it can just to maintain its foothold in the host population: accumulation of antigenic novelty is balanced by the build-up of host immunity. Such Red Queen States (RQS) of continuous adaptation in large, rapidly mutating populations are well understood in terms of Traveling Wave (TW) theories of population genetics. Here we make explicit the mapping of the established multi-strain Susceptible-Infected-Recovered (SIR) model onto the TW theory and demonstrate that a pathogen can persist in an RQS if cross-immunity is long-ranged and its population size is large enough to allow rapid adaptation. We then investigate the stability of this state, focusing on the rate of extinction and the rate of "speciation", defined as antigenic divergence of viral strains beyond the range of cross-inhibition. RQS are transient, but in a certain range of evolutionary parameters they can persist for a time long compared to the typical time to the most recent common ancestor ($T_{MRCA}$). In this range the steady TW is unstable, and the antigenic advance of the lead strains relative to the typical co-circulating viruses tends to oscillate. This results in large fluctuations in prevalence that facilitate extinction. We demonstrate that the rate of TW fission into antigenically uncoupled viral populations is related to fluctuations of $T_{MRCA}$, and we construct a "phase diagram" identifying different regimes of viral phylodynamics as a function of evolutionary parameters.
[ { "created": "Mon, 29 Oct 2018 01:29:34 GMT", "version": "v1" } ]
2018-10-30
[ [ "Yan", "Le", "" ], [ "Neher", "Richard", "" ], [ "Shraiman", "Boris I", "" ] ]
1405.5007
Igor Goychuk
Igor Goychuk
Stochastic modeling of excitable dynamics: improved Langevin model for mesoscopic channel noise
V.M. Mladenov and P.C. Ivanov (Eds.): NDES 2014, Communications in Computer and Information Science, vol. 438 (Springer, Switzerland, 2014), pp. 325-332
Communications in Computer and Information Science 438, 325 (2014)
10.1007/978-3-319-08672-9_38
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The influence of mesoscopic channel noise on the excitable dynamics of living cells has become a hot subject within the last decade, and traditional biophysical models of neuronal dynamics, such as the Hodgkin-Huxley model, have been generalized to incorporate such effects. There remains, however, a controversy about how to do this in a proper and computationally efficient way. Here we introduce an improved Langevin description of stochastic Hodgkin-Huxley dynamics with natural boundary conditions for the gating variables. It consistently describes the channel noise variance in good agreement with the discrete-state model. Moreover, we show by comparison with our improved Langevin model that two earlier Langevin models by Fox and Lu also work excellently, starting from several hundred ion channels, upon imposing numerically reflecting boundary conditions for the gating variables.
[ { "created": "Tue, 20 May 2014 09:20:34 GMT", "version": "v1" } ]
2014-09-24
[ [ "Goychuk", "Igor", "" ] ]
2112.10989
Yifei Li
Yifei Li, Pascal R. Buenzli and Matthew J. Simpson
Interpreting how nonlinear diffusion affects the fate of bistable populations using a discrete modelling framework
40 pages, 11 figures, 1 supplementary material document
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding whether a population will survive and flourish or become extinct is a central question in population biology. One way of exploring this question is to study population dynamics using reaction-diffusion equations, where migration is usually represented as a linear diffusion term, and birth-death is represented with a bistable source term. While linear diffusion is most commonly employed to study migration, there are several limitations of this approach, such as the inability of linear diffusion-based models to predict a well-defined population front. One way to overcome this is to generalise the constant diffusivity, $D$, to a nonlinear diffusivity function $D(C)$, where $C>0$ is the density. While it has been formally established that the choice of $D(C)$ affects long-term survival or extinction of a bistable population, working solely in a classical continuum framework makes it difficult to understand precisely how the choice of $D(C)$ affects survival or extinction. Here, we address this question by working with a simple discrete simulation model that is easy to interpret. The continuum limit of the discrete model is a nonlinear reaction-diffusion equation, where the flux involves a nonlinear diffusion term and the source term is given by the strong Allee effect bistable model. We study population extinction/survival using this very intuitive discrete framework together with numerical solutions of the reaction-diffusion continuum limit equation. This approach provides clear insight into how the choice of $D(C)$ either encourages or suppresses population extinction relative to the classical linear diffusion model.
[ { "created": "Tue, 21 Dec 2021 05:12:48 GMT", "version": "v1" }, { "created": "Fri, 7 Jan 2022 02:56:37 GMT", "version": "v2" } ]
2022-01-10
[ [ "Li", "Yifei", "" ], [ "Buenzli", "Pascal R.", "" ], [ "Simpson", "Matthew J.", "" ] ]
0803.2904
Gabriel Cardona
Gabriel Cardona, Merce Llabres, Francesc Rossello, Gabriel Valiente
A Distance Metric for Tree-Sibling Time Consistent Phylogenetic Networks
16 pages, 16 figures
null
null
null
q-bio.PE cs.CE cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The presence of reticulate evolutionary events in phylogenies turns phylogenetic trees into phylogenetic networks. These events imply in particular that there may exist multiple evolutionary paths from a non-extant species to an extant one, and this multiplicity makes the comparison of phylogenetic networks much more difficult than the comparison of phylogenetic trees. In fact, all attempts to define a sound distance measure on the class of all phylogenetic networks have failed so far. Thus, the only practical solutions have been either the use of rough estimates of similarity (based on comparison of the trees embedded in the networks), or narrowing the class of phylogenetic networks to a certain class where such a distance is known and can be efficiently computed. The first approach has the problem that one may identify two networks as equivalent when they are not; the second one has the drawback that there may not exist algorithms to reconstruct such networks from biological sequences. We present in this paper a distance measure on the class of tree-sibling time consistent phylogenetic networks, which generalize tree-child time consistent phylogenetic networks, and thus also galled-trees. The practical interest of this distance measure is twofold: it can be computed in polynomial time by means of simple algorithms, and there also exist polynomial-time algorithms for reconstructing networks of this class from DNA sequence data. The Perl package Bio::PhyloNetwork, included in the BioPerl bundle, implements many algorithms on phylogenetic networks, including the computation of the distance presented in this paper.
[ { "created": "Wed, 19 Mar 2008 22:24:11 GMT", "version": "v1" } ]
2008-03-21
[ [ "Cardona", "Gabriel", "" ], [ "Llabres", "Merce", "" ], [ "Rossello", "Francesc", "" ], [ "Valiente", "Gabriel", "" ] ]
1903.07317
Fabian Eitel
Moritz Böhle and Fabian Eitel and Martin Weygandt and Kerstin Ritter
Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification
null
Front. Aging Neurosci., 31 July 2019
10.3389/fnagi.2019.00194
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have led to state-of-the-art results in many medical imaging tasks including Alzheimer's disease (AD) detection based on structural magnetic resonance imaging (MRI) data. However, the network decisions are often perceived as being highly non-transparent, making it difficult to apply these algorithms in clinical routine. In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data. Similarly to other visualization methods, LRP produces a heatmap in the input space indicating the importance/relevance of each voxel contributing to the final classification outcome. In contrast to susceptibility maps produced by guided backpropagation ("Which change in voxels would change the outcome most?"), the LRP method is able to directly highlight positive contributions to the network classification in the input space. In particular, we show that (1) the LRP method is very specific for individuals ("Why does this person have AD?") with high inter-patient variability, (2) there is very little relevance for AD in healthy controls and (3) areas that exhibit a lot of relevance correlate well with what is known from literature. To quantify the latter, we compute size-corrected metrics of the summed relevance per brain area, e.g., relevance density or relevance gain. Although these metrics produce very individual "fingerprints" of relevance patterns for AD patients, a lot of importance is put on areas in the temporal lobe including the hippocampus. After discussing several limitations such as sensitivity toward the underlying model and computation parameters, we conclude that LRP might have a high potential to assist clinicians in explaining neural network decisions for diagnosing AD (and potentially other diseases) based on structural MRI data.
[ { "created": "Mon, 18 Mar 2019 09:18:06 GMT", "version": "v1" }, { "created": "Tue, 27 Aug 2019 15:46:15 GMT", "version": "v2" } ]
2019-08-28
[ [ "Böhle", "Moritz", "" ], [ "Eitel", "Fabian", "" ], [ "Weygandt", "Martin", "" ], [ "Ritter", "Kerstin", "" ] ]
0905.1410
Yasser Roudi
Yasser Roudi, Erik Aurell, John Hertz
Statistical physics of pairwise probability models
25 pages, 3 figures
Front. Comput. Neurosci (2009) 3:22
10.3389/neuro.10.022.2009
NORDITA-2009-25
q-bio.QM cond-mat.dis-nn q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
[ { "created": "Sat, 9 May 2009 14:10:37 GMT", "version": "v1" } ]
2009-11-30
[ [ "Roudi", "Yasser", "" ], [ "Aurell", "Erik", "" ], [ "Hertz", "John", "" ] ]
1603.09195
Vladimir Boza
Vladimír Boža, Broňa Brejová and Tomáš Vinař
DeepNano: Deep Recurrent Neural Networks for Base Calling in MinION Nanopore Reads
null
PLoS ONE 12(6): e0178751
10.1371/journal.pone.0178751
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: The MinION device by Oxford Nanopore is the first portable sequencing device. MinION is able to produce very long reads (reads over 100 kbp have been reported); however, it suffers from a high sequencing error rate. In this paper, we show that the error rate can be reduced by improving the base calling process. Results: We present the first open-source DNA base caller for the MinION sequencing platform by Oxford Nanopore. By employing carefully crafted recurrent neural networks, our tool improves the base calling accuracy compared to the default base caller supplied by the manufacturer. This advance may further enhance the applicability of MinION for genome sequencing and various clinical applications. Availability: DeepNano can be downloaded at http://compbio.fmph.uniba.sk/deepnano/. Contact: boza@fmph.uniba.sk
[ { "created": "Wed, 30 Mar 2016 13:52:59 GMT", "version": "v1" } ]
2017-06-29
[ [ "Boža", "Vladimír", "" ], [ "Brejová", "Broňa", "" ], [ "Vinař", "Tomáš", "" ] ]
1810.08725
Sergei Gepshtein
Sergei Gepshtein, Ambarish S. Pawar, Sergey Saveliev, Thomas D. Albright
Neural wave interference and intrinsic tuning in distributed excitatory-inhibitory networks
15 pages, 5 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We developed a model of cortical computation that implements key features of cortical circuitry and is capable of describing propagation of neural signals between cortical locations in response to spatially distributed stimuli. The model is based on the canonical neural circuit that consists of excitatory and inhibitory cells interacting through reciprocal connections, with recurrent feedback. The canonical circuit is used as a node in a distributed network with nearest neighbor coupling between the nodes. We find that this system is characterized by intrinsic preference for spatial frequency. The value of preferred frequency depends on the relative weights of excitatory and inhibitory connections between cells. This balance of excitation and inhibition changes as stimulus contrast increases, which is why intrinsic spatial frequency is predicted to change with contrast in a manner determined by stimulus temporal frequency. The dynamics of network preference is consistent with properties of the cortical area MT in alert macaque monkeys.
[ { "created": "Sat, 20 Oct 2018 01:20:12 GMT", "version": "v1" } ]
2018-10-23
[ [ "Gepshtein", "Sergei", "" ], [ "Pawar", "Ambarish S.", "" ], [ "Saveliev", "Sergey", "" ], [ "Albright", "Thomas D.", "" ] ]
2004.07208
Sandor D. Katz
Z. Fodor, S.D. Katz, T.G. Kovacs
Why integral equations should be used instead of differential equations to describe the dynamics of epidemics
11 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is of vital importance to understand and track the dynamics of rapidly unfolding epidemics. The health and economic consequences of the current COVID-19 pandemic provide a poignant case. Here we point out that, since they are based on differential equations, the most widely used models of epidemic spread are plagued by an approximation that is not justified in the case of the current COVID-19 pandemic. Taking the example of data from New York City, we show that currently used models significantly underestimate the initial basic reproduction number ($R_0$). The correct description, based on integral equations, can be implemented in most of the reported models, and it much more accurately accounts for the dynamics of the epidemic after sharp changes in $R_0$ due to restrictive public congregation measures. It also provides a novel way to determine the incubation period. Most importantly, as we demonstrate for several countries, this method allows an accurate monitoring of $R_0$ and thus a fine-tuning of any restrictive measures. Integral-equation-based models not only provide the conceptually correct description, they also have more predictive power than differential-equation-based models; therefore we see no reason for using the latter.
[ { "created": "Wed, 15 Apr 2020 17:09:44 GMT", "version": "v1" }, { "created": "Mon, 27 Apr 2020 16:15:56 GMT", "version": "v2" } ]
2020-04-28
[ [ "Fodor", "Z.", "" ], [ "Katz", "S. D.", "" ], [ "Kovacs", "T. G.", "" ] ]
2004.04768
Zhibo Yang
Jianyuan Deng, Zhibo Yang, Yao Li, Dimitris Samaras, Fusheng Wang
Towards Better Opioid Antagonists Using Deep Reinforcement Learning
10 pages, 7 figures
null
null
null
q-bio.BM cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Naloxone, an opioid antagonist, has been widely used to save lives from opioid overdose, a leading cause of death in the opioid epidemic. However, naloxone has short brain retention ability, which limits its therapeutic efficacy. Developing better opioid antagonists is critical in combating the opioid epidemic. Instead of exhaustively searching a huge chemical space for better opioid antagonists, we adopt reinforcement learning, which allows efficient gradient-based search towards molecules with desired physicochemical and/or biological properties. Specifically, we implement a deep reinforcement learning framework to discover potential lead compounds as better opioid antagonists with enhanced brain retention ability. A customized multi-objective reward function is designed to bias the generation towards molecules with both sufficient opioid antagonistic effect and enhanced brain retention ability. Thorough evaluation demonstrates that with this framework we are able to identify valid, novel and feasible molecules with multiple desired properties, which have high potential in drug discovery.
[ { "created": "Thu, 26 Mar 2020 15:28:50 GMT", "version": "v1" } ]
2020-04-13
[ [ "Deng", "Jianyuan", "" ], [ "Yang", "Zhibo", "" ], [ "Li", "Yao", "" ], [ "Samaras", "Dimitris", "" ], [ "Wang", "Fusheng", "" ] ]
1710.10861
Caroline Gr\"onwall
Caroline Gronwall, Khaled Amara, Uta Hardt, Akilan Krishnamurthy, Johanna Steen, Marianne Engstrom, Meng Sun, A. Jimmy Ytterberg, Roman A. Zubarev, Dagmar Scheel-Toellner, Jeffrey D. Greenberg, Lars Klareskog, Anca I. Catrina, Vivianne Malmstrom and Gregg J. Silverman
Autoreactivity to malondialdehyde-modifications in rheumatoid arthritis is linked to disease activity and synovial pathogenesis
null
J Autoimmun. 2017 Nov; 84:29-45
10.1016/j.jaut.2017.06.004
null
q-bio.BM q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Oxidation-associated malondialdehyde (MDA) modification of proteins can generate immunogenic neo-epitopes that are recognized by autoantibodies. In health, IgM antibodies to MDA-adducts are part of the natural antibody pool, while elevated levels of IgG anti-MDA are associated with inflammatory conditions. Yet, in human autoimmune disease, IgG anti-MDA responses have not been well characterized and their potential contribution to disease pathogenesis is not known. Here, we investigate MDA modifications and anti-MDA-modified protein autoreactivity in rheumatoid arthritis (RA). While RA is primarily associated with autoreactivity to citrullinated antigens, we also observed increases in serum IgG anti-MDA in RA patients compared to controls. IgG anti-MDA levels significantly correlated with disease activity by DAS28-ESR and serum TNF-alpha, IL-6, and CRP. Mass spectrometry analysis of RA synovial tissue identified MDA-modified proteins and revealed shared peptides between MDA-modified and citrullinated actin and vimentin. Furthermore, anti-MDA autoreactivity among synovial B cells was discovered when investigating recombinant monoclonal antibodies (mAbs) cloned from single B cells. Several clones were highly specific for MDA modification, with no cross-reactivity to other antigen modifications. The mAbs recognized MDA-adducts in a variety of proteins. Interestingly, the most reactive clone, which originated from an IgG1-bearing memory B cell, was encoded by germline variable genes and showed similarity to previously reported natural IgM. Other anti-MDA clones displayed somatic hypermutations and lower reactivity. These anti-MDA antibodies had significant in vitro functional properties and induced enhanced osteoclastogenesis, while the natural-antibody-related high-reactivity clone did not. We postulate that these may represent distinctly different facets of anti-MDA autoreactive responses.
[ { "created": "Mon, 30 Oct 2017 10:50:19 GMT", "version": "v1" } ]
2017-10-31
[ [ "Gronwall", "Caroline", "" ], [ "Amara", "Khaled", "" ], [ "Hardt", "Uta", "" ], [ "Krishnamurthy", "Akilan", "" ], [ "Steen", "Johanna", "" ], [ "Engstrom", "Marianne", "" ], [ "Sun", "Meng", "" ], [ "Ytterberg", "A. Jimmy", "" ], [ "Zubarev", "Roman A.", "" ], [ "Scheel-Toellner", "Dagmar", "" ], [ "Greenberg", "Jeffrey D.", "" ], [ "Klareskog", "Lars", "" ], [ "Catrina", "Anca I.", "" ], [ "Malmstrom", "Vivianne", "" ], [ "Silverman", "Gregg J.", "" ] ]
Oxidation-associated malondialdehyde (MDA) modification of proteins can generate immunogenic neo-epitopes that are recognized by autoantibodies. In health, IgM antibodies to MDA-adducts are part of the natural antibody pool, while elevated levels of IgG anti-MDA are associated with inflammatory conditions. Yet, in human autoimmune disease, IgG anti-MDA responses have not been well characterized and their potential contribution to disease pathogenesis is not known. Here, we investigate MDA modifications and anti-MDA-modified protein autoreactivity in rheumatoid arthritis (RA). While RA is primarily associated with autoreactivity to citrullinated antigens, we also observed increases in serum IgG anti-MDA in RA patients compared to controls. IgG anti-MDA levels significantly correlated with disease activity by DAS28-ESR and serum TNF-alpha, IL-6, and CRP. Mass spectrometry analysis of RA synovial tissue identified MDA-modified proteins and revealed shared peptides between MDA-modified and citrullinated actin and vimentin. Furthermore, anti-MDA autoreactivity among synovial B cells was discovered when investigating recombinant monoclonal antibodies (mAbs) cloned from single B cells. Several clones were highly specific for MDA modification, with no cross-reactivity to other antigen modifications. The mAbs recognized MDA-adducts in a variety of proteins. Interestingly, the most reactive clone, which originated from an IgG1-bearing memory B cell, was encoded by germline variable genes and showed similarity to previously reported natural IgM. Other anti-MDA clones displayed somatic hypermutations and lower reactivity. These anti-MDA antibodies had significant in vitro functional properties and induced enhanced osteoclastogenesis, while the natural-antibody-related high-reactivity clone did not. We postulate that these may represent distinctly different facets of anti-MDA autoreactive responses.
1711.01629
Rakesh Malladi
Rakesh Malladi, Don H Johnson, Giridhar P Kalamangalam, Nitin Tandon and Behnaam Aazhang
Mutual Information in Frequency and its Application to Measure Cross-Frequency Coupling in Epilepsy
This paper is accepted for publication in IEEE Transactions on Signal Processing and contains 15 pages, 9 figures and 1 table
null
10.1109/TSP.2018.2821627
null
q-bio.NC cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in the data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify cross-frequency coupling in neuroscience cannot detect whether two frequency components in non-Gaussian brain recordings are statistically independent or not. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramer representations of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency, one based on kernel density estimation and the other based on the nearest-neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical-stimulation-based treatments of epilepsy.
[ { "created": "Sun, 5 Nov 2017 18:16:06 GMT", "version": "v1" }, { "created": "Thu, 15 Mar 2018 05:47:37 GMT", "version": "v2" } ]
2018-05-23
[ [ "Malladi", "Rakesh", "" ], [ "Johnson", "Don H", "" ], [ "Kalamangalam", "Giridhar P", "" ], [ "Tandon", "Nitin", "" ], [ "Aazhang", "Behnaam", "" ] ]
We define a metric, mutual information in frequency (MI-in-frequency), to detect and quantify the statistical dependence between different frequency components in the data, referred to as cross-frequency coupling, and apply it to electrophysiological recordings from the brain to infer cross-frequency coupling. The current metrics used to quantify cross-frequency coupling in neuroscience cannot detect whether two frequency components in non-Gaussian brain recordings are statistically independent or not. Our MI-in-frequency metric, based on Shannon's mutual information between the Cramer representations of stochastic processes, overcomes this shortcoming and can detect statistical dependence in frequency between non-Gaussian signals. We then describe two data-driven estimators of MI-in-frequency, one based on kernel density estimation and the other based on the nearest-neighbor algorithm, and validate their performance on simulated data. We then use MI-in-frequency to estimate mutual information between two data streams that are dependent across time, without making any parametric model assumptions. Finally, we use the MI-in-frequency metric to investigate cross-frequency coupling in the seizure onset zone from electrocorticographic recordings during seizures. The inferred cross-frequency coupling characteristics are essential to optimize the spatial and spectral parameters of electrical-stimulation-based treatments of epilepsy.
1201.3749
Steffen Waldherr
Steffen Waldherr and Bernard Haasdonk
Efficient parametric analysis of the chemical master equation through model order reduction
23 pages, 8 figures, 2 tables
BMC Systems Biology 2012, 6:81
10.1186/1752-0509-6-81
null
q-bio.QM math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Stochastic biochemical reaction networks are commonly modelled by the chemical master equation, and can be simulated as first-order linear differential equations through a finite state projection. Due to the very high state space dimension of these equations, numerical simulations are computationally expensive. This is a particular problem for analysis tasks requiring repeated simulations for different parameter values. Such tasks are computationally expensive to the point of infeasibility with the chemical master equation. Results: In this article, we apply parametric model order reduction techniques in order to construct accurate low-dimensional parametric models of the chemical master equation. These surrogate models can be used in various parametric analysis tasks such as identifiability analysis, parameter estimation, or sensitivity analysis. As biological examples, we consider two models for gene regulation networks, a bistable switch and a network displaying stochastic oscillations. Conclusions: The results show that the parametric model reduction yields efficient models of stochastic biochemical reaction networks, and that these models can be useful for systems biology applications involving parametric analysis problems such as parameter exploration, optimization, estimation or sensitivity analysis.
[ { "created": "Wed, 18 Jan 2012 10:41:12 GMT", "version": "v1" }, { "created": "Mon, 9 Jul 2012 15:36:07 GMT", "version": "v2" } ]
2012-07-10
[ [ "Waldherr", "Steffen", "" ], [ "Haasdonk", "Bernard", "" ] ]
Background: Stochastic biochemical reaction networks are commonly modelled by the chemical master equation, and can be simulated as first-order linear differential equations through a finite state projection. Due to the very high state space dimension of these equations, numerical simulations are computationally expensive. This is a particular problem for analysis tasks requiring repeated simulations for different parameter values. Such tasks are computationally expensive to the point of infeasibility with the chemical master equation. Results: In this article, we apply parametric model order reduction techniques in order to construct accurate low-dimensional parametric models of the chemical master equation. These surrogate models can be used in various parametric analysis tasks such as identifiability analysis, parameter estimation, or sensitivity analysis. As biological examples, we consider two models for gene regulation networks, a bistable switch and a network displaying stochastic oscillations. Conclusions: The results show that the parametric model reduction yields efficient models of stochastic biochemical reaction networks, and that these models can be useful for systems biology applications involving parametric analysis problems such as parameter exploration, optimization, estimation or sensitivity analysis.
2007.09856
Bijan Sarkar
Bijan Sarkar
The cooperation-defection evolution on social networks
A scientific explanation can be accomplished through a simpler logical procedure; a time-consuming trick can also be replaced by an elegant one
Physica A: Statistical Mechanics and its Applications, Volume 584, 15 December 2021, 126381
10.1016/j.physa.2021.126381
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Without contributing, defectors take more benefit from social resources than cooperators, which reflects a specific character of individuals. However, natural physical mechanisms of our society promote cooperation. Thus, in the long run, evolution with respect to genetic variation is something more than social evolution with respect to fitness. The evolutionary paths of cooperation and defection are correlated, but are not full complements of each other. Yet, typically a single specific mechanism operated by some rules is used to explain the enhancement of cooperation, while an independent analysis of the defection evolutionary mechanism is ignored. Moreover, the execution of a particular evolutionary rule through an algorithmic method over long times is highly sensitive to the model parameters. Theoretically, biodiversity of two types rarely persists. Here I describe the evolutionary outcome under demographic fluctuation. Using both an analytical procedure and an algorithmic method, the article concludes that the intratype fitness of individual species is the key factor not only for surviving, but for thriving. Taking random drift into consideration, the experimental outcomes show that the dominant enhancement of cooperation over defection is qualitatively independent of the environmental scenario. Collectively, the set of rules becomes an evolutionary principle for cooperation enhancement.
[ { "created": "Mon, 20 Jul 2020 02:58:33 GMT", "version": "v1" }, { "created": "Wed, 15 Sep 2021 14:04:42 GMT", "version": "v2" } ]
2021-09-16
[ [ "Sarkar", "Bijan", "" ] ]
Without contributing, defectors take more benefit from social resources than cooperators, which reflects a specific character of individuals. However, natural physical mechanisms of our society promote cooperation. Thus, in the long run, evolution with respect to genetic variation is something more than social evolution with respect to fitness. The evolutionary paths of cooperation and defection are correlated, but are not full complements of each other. Yet, typically a single specific mechanism operated by some rules is used to explain the enhancement of cooperation, while an independent analysis of the defection evolutionary mechanism is ignored. Moreover, the execution of a particular evolutionary rule through an algorithmic method over long times is highly sensitive to the model parameters. Theoretically, biodiversity of two types rarely persists. Here I describe the evolutionary outcome under demographic fluctuation. Using both an analytical procedure and an algorithmic method, the article concludes that the intratype fitness of individual species is the key factor not only for surviving, but for thriving. Taking random drift into consideration, the experimental outcomes show that the dominant enhancement of cooperation over defection is qualitatively independent of the environmental scenario. Collectively, the set of rules becomes an evolutionary principle for cooperation enhancement.
1307.7933
Aaron Darling
Viraj Deshpande, Eric DK Fung, Son Pham, and Vineet Bafna
Cerulean: A hybrid assembly using high throughput short and long reads
Peer-reviewed and presented as part of the 13th Workshop on Algorithms in Bioinformatics (WABI2013)
null
null
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genome assembly using high throughput data with short reads, arguably, remains an unresolvable task in repetitive genomes: when the length of a repeat exceeds the read length, it becomes difficult to unambiguously connect the flanking regions. The emergence of third generation sequencing (Pacific Biosciences) with long reads enables the opportunity to resolve complicated repeats that could not be resolved by the short read data. However, these long reads have a high error rate, and it is an uphill task to assemble the genome without using additional high quality short reads. Recently, Koren et al. (2012) proposed an approach to use high quality short read data to correct these long reads and, thus, make the assembly from long reads possible. However, due to the large size of both datasets (short and long reads), error correction of these long reads requires excessively high computational resources, even on small bacterial genomes. In this work, instead of error correction of long reads, we first assemble the short reads and later map these long reads onto the assembly graph to resolve repeats. Contribution: We present a hybrid assembly approach that is both computationally effective and produces high quality assemblies. Our algorithm first operates with a simplified version of the assembly graph consisting only of long contigs and gradually improves the assembly by adding smaller contigs in each iteration. In contrast to the state-of-the-art long read error correction technique, which requires high computational resources and a long running time on a supercomputer even for bacterial genome datasets, our software can produce a comparable assembly using only a standard desktop in a short running time.
[ { "created": "Tue, 30 Jul 2013 12:05:30 GMT", "version": "v1" } ]
2013-07-31
[ [ "Deshpande", "Viraj", "" ], [ "Fung", "Eric DK", "" ], [ "Pham", "Son", "" ], [ "Bafna", "Vineet", "" ] ]
Genome assembly using high throughput data with short reads, arguably, remains an unresolvable task in repetitive genomes: when the length of a repeat exceeds the read length, it becomes difficult to unambiguously connect the flanking regions. The emergence of third generation sequencing (Pacific Biosciences) with long reads enables the opportunity to resolve complicated repeats that could not be resolved by the short read data. However, these long reads have a high error rate, and it is an uphill task to assemble the genome without using additional high quality short reads. Recently, Koren et al. (2012) proposed an approach to use high quality short read data to correct these long reads and, thus, make the assembly from long reads possible. However, due to the large size of both datasets (short and long reads), error correction of these long reads requires excessively high computational resources, even on small bacterial genomes. In this work, instead of error correction of long reads, we first assemble the short reads and later map these long reads onto the assembly graph to resolve repeats. Contribution: We present a hybrid assembly approach that is both computationally effective and produces high quality assemblies. Our algorithm first operates with a simplified version of the assembly graph consisting only of long contigs and gradually improves the assembly by adding smaller contigs in each iteration. In contrast to the state-of-the-art long read error correction technique, which requires high computational resources and a long running time on a supercomputer even for bacterial genome datasets, our software can produce a comparable assembly using only a standard desktop in a short running time.
q-bio/0506029
Moo Young Choi
J. Choi, M.Y. Choi, and B.-G. Yoon
Dynamic model for failures in biological systems
To appear in Europhys. Lett.
null
10.1209/epl/i2004-10544-3
null
q-bio.CB physics.bio-ph
null
A dynamic model for failures in biological organisms is proposed and studied both analytically and numerically. Each cell in the organism dies under sufficiently strong stress, and is then allowed to heal with some probability. It is found that, unlike the case of no healing, the organism in general does not completely break down even in the presence of noise. A characteristic time evolution is revealed: the system tends to resist the stress longer than a system without healing, followed by a sudden breakdown with some fraction of cells surviving. When the noise is weak, the critical stress beyond which the system breaks down increases rapidly as the healing parameter is raised from zero, indicative of the importance of healing in biological systems.
[ { "created": "Mon, 20 Jun 2005 20:26:47 GMT", "version": "v1" } ]
2009-11-11
[ [ "Choi", "J.", "" ], [ "Choi", "M. Y.", "" ], [ "Yoon", "B. -G.", "" ] ]
A dynamic model for failures in biological organisms is proposed and studied both analytically and numerically. Each cell in the organism dies under sufficiently strong stress, and is then allowed to heal with some probability. It is found that, unlike the case of no healing, the organism in general does not completely break down even in the presence of noise. A characteristic time evolution is revealed: the system tends to resist the stress longer than a system without healing, followed by a sudden breakdown with some fraction of cells surviving. When the noise is weak, the critical stress beyond which the system breaks down increases rapidly as the healing parameter is raised from zero, indicative of the importance of healing in biological systems.
2405.06377
Robert Worden
Robert Worden
The Evolution of Language and Human Rationality
12 pages; presented at the 14th EvoLang conference, 2024
null
null
null
q-bio.NC q-bio.PE
http://creativecommons.org/licenses/by/4.0/
If language evolved by sexual selection to display superior intelligence, then we require conversational skills to impress other people, gain high social status, and get a mate. Conversational skills include a Theory of Mind, a sense of self, self-esteem, and social emotions. To be impressive, we must converse fluently and fast. The syntax of an utterance is defined by fast unification of feature structures. The pragmatic skills of conversation are also learned and deployed as feature structures; we rehearse conversations as verbal thoughts. Many aspects of our mental lives (such as our Theory of Mind and our social emotions) work by fast, pre-conscious unification of learned feature structures, rather than by rational deliberation. As we think, we use the Fast Theory of Mind to infer (unreliably) how a Shadow Audience will regard what we think, say, and do. These forces, which determine our motivations and actions, are less rational and deliberate than we like to suppose.
[ { "created": "Fri, 10 May 2024 10:25:21 GMT", "version": "v1" } ]
2024-05-13
[ [ "Worden", "Robert", "" ] ]
If language evolved by sexual selection to display superior intelligence, then we require conversational skills to impress other people, gain high social status, and get a mate. Conversational skills include a Theory of Mind, a sense of self, self-esteem, and social emotions. To be impressive, we must converse fluently and fast. The syntax of an utterance is defined by fast unification of feature structures. The pragmatic skills of conversation are also learned and deployed as feature structures; we rehearse conversations as verbal thoughts. Many aspects of our mental lives (such as our Theory of Mind and our social emotions) work by fast, pre-conscious unification of learned feature structures, rather than by rational deliberation. As we think, we use the Fast Theory of Mind to infer (unreliably) how a Shadow Audience will regard what we think, say, and do. These forces, which determine our motivations and actions, are less rational and deliberate than we like to suppose.
2101.01548
Ahlem Karbab
Karbab Ahlem
Extraction, isolation, structure elucidation and evaluation of toxicity, anti-inflammatory and analgesic activity of Pituranthos scoparius constituents
null
null
null
null
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present work aimed to conduct an ethnobotanical survey on Pituranthos scoparius and to assess the toxicity, anti-inflammatory (in vitro and in vivo), in vitro antioxidant, and analgesic effects of the stems and roots of Pituranthos scoparius; furthermore, to isolate and elucidate the chemical constituents of the n-butanol stem extract of P. scoparius (ButE) and determine the toxicity and anti-inflammatory effects of these compounds in addition to the ButE. Data from the ethnopharmacological study showed that 24.47% of people used this plant in folk medicine. Four compounds were isolated from ButE. These compounds were characterized by means of NMR and high-resolution mass spectral (HRMS) data.
[ { "created": "Fri, 1 Jan 2021 19:32:47 GMT", "version": "v1" } ]
2021-01-06
[ [ "Ahlem", "Karbab", "" ] ]
The present work aimed to conduct an ethnobotanical survey on Pituranthos scoparius and to assess the toxicity, anti-inflammatory (in vitro and in vivo), in vitro antioxidant, and analgesic effects of the stems and roots of Pituranthos scoparius; furthermore, to isolate and elucidate the chemical constituents of the n-butanol stem extract of P. scoparius (ButE) and determine the toxicity and anti-inflammatory effects of these compounds in addition to the ButE. Data from the ethnopharmacological study showed that 24.47% of people used this plant in folk medicine. Four compounds were isolated from ButE. These compounds were characterized by means of NMR and high-resolution mass spectral (HRMS) data.
1408.2474
Daniele Cappelletti
Daniele Cappelletti and Carsten Wiuf
Elimination of Intermediate Species in Multiscale Stochastic Reaction Networks
null
null
10.1214/15-AAP1166
null
q-bio.MN math.DS math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study networks of biochemical reactions modelled by continuous-time Markov processes. Such networks typically contain many molecular species and reactions and are hard to study analytically as well as by simulation. In particular, we are interested in reaction networks with intermediate species, such as the substrate-enzyme complex in the Michaelis-Menten mechanism. These species are present in virtually all real-world networks; they are typically short-lived, degraded at a fast rate, and hard to observe experimentally. We provide conditions under which the Markov process of a multiscale reaction network with intermediate species is approximated in finite-dimensional distributions by the Markov process of a simpler reduced reaction network without intermediate species. We do so by embedding the Markov processes into a one-parameter family of processes, where reaction rates and species abundances are scaled in the parameter. Further, we show that there are close links between these stochastic models and deterministic ODE models of the same networks.
[ { "created": "Mon, 11 Aug 2014 17:23:43 GMT", "version": "v1" }, { "created": "Mon, 5 Oct 2015 01:31:46 GMT", "version": "v2" }, { "created": "Mon, 30 Nov 2015 15:40:54 GMT", "version": "v3" } ]
2018-05-22
[ [ "Cappelletti", "Daniele", "" ], [ "Wiuf", "Carsten", "" ] ]
We study networks of biochemical reactions modelled by continuous-time Markov processes. Such networks typically contain many molecular species and reactions and are hard to study analytically as well as by simulation. In particular, we are interested in reaction networks with intermediate species, such as the substrate-enzyme complex in the Michaelis-Menten mechanism. These species are present in virtually all real-world networks; they are typically short-lived, degraded at a fast rate, and hard to observe experimentally. We provide conditions under which the Markov process of a multiscale reaction network with intermediate species is approximated in finite-dimensional distributions by the Markov process of a simpler reduced reaction network without intermediate species. We do so by embedding the Markov processes into a one-parameter family of processes, where reaction rates and species abundances are scaled in the parameter. Further, we show that there are close links between these stochastic models and deterministic ODE models of the same networks.
1606.00821
Diego Mateos
R. Guevara Erra, D. M. Mateos, R. Wennberg, J.L. Perez Velazquez
Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness
17 pages, 4 figures, 2 tables
null
10.1103/PhysRevE.94.052402
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
It has been said that complexity lies between order and disorder. In the case of brain activity, and physiology in general, complexity issues are being considered with increased emphasis. We sought to identify features of brain organization that are optimal for sensory processing, and that may guide the emergence of cognition and consciousness, by analysing neurophysiological recordings in conscious and unconscious states. We find a surprisingly simple result: normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing the highest entropy values. Therefore, the information content is larger in the networks associated with conscious states, suggesting that consciousness could be the result of an optimization of information processing. These findings encapsulate three main current theories of cognition, as discussed in the text, and more specifically the conceptualization of consciousness in terms of brain complexity. We hope our study represents a preliminary attempt at finding organising principles of brain function that will help to guide, in a more formal sense, inquiry into how consciousness arises from the organization of matter.
[ { "created": "Wed, 1 Jun 2016 17:45:43 GMT", "version": "v1" }, { "created": "Mon, 9 Jan 2017 19:42:45 GMT", "version": "v2" } ]
2017-01-11
[ [ "Erra", "R. Guevara", "" ], [ "Mateos", "D. M.", "" ], [ "Wennberg", "R.", "" ], [ "Velazquez", "J. L. Perez", "" ] ]
It has been said that complexity lies between order and disorder. In the case of brain activity, and physiology in general, complexity issues are being considered with increased emphasis. We sought to identify features of brain organization that are optimal for sensory processing, and that may guide the emergence of cognition and consciousness, by analysing neurophysiological recordings in conscious and unconscious states. We find a surprisingly simple result: normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing the highest entropy values. Therefore, the information content is larger in the networks associated with conscious states, suggesting that consciousness could be the result of an optimization of information processing. These findings encapsulate three main current theories of cognition, as discussed in the text, and more specifically the conceptualization of consciousness in terms of brain complexity. We hope our study represents a preliminary attempt at finding organising principles of brain function that will help to guide, in a more formal sense, inquiry into how consciousness arises from the organization of matter.
2301.11126
Yue Wang
Yue Wang
Three facets of mathematical cancer biology research
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
Cancer, as uncontrollable cell growth, is related to many branches of biology. In this review, we discuss three mathematical approaches to studying cancer biology: population dynamics, gene regulation, and developmental biology. If we understood all the biochemical mechanisms of cancer cells, we could directly calculate how the cancer cell population behaves. Inversely, just from cell count data, we can use population dynamics to infer the mechanisms. Cancer cells emerge from certain genetic mutations, which affect the expression of other genes through gene regulation. Therefore, knowledge of gene regulation can help with cancer prevention and treatment. Developmental biology studies the acquisition and maintenance of normal cellular function, which can inspire cancer biology from the opposite direction. Besides, cancer cells implanted into an embryo can differentiate into normal tissues, which provides a possible approach to curing cancer. This review illustrates the role of mathematics in these three fields: what mathematical models are used, what data analysis tools are applied, and what mathematical theorems need to be proved. We hope that applied mathematicians, and even pure mathematicians, can find meaningful mathematical problems related to cancer biology.
[ { "created": "Tue, 24 Jan 2023 18:06:52 GMT", "version": "v1" } ]
2023-01-27
[ [ "Wang", "Yue", "" ] ]
Cancer, as uncontrolled cell growth, is related to many branches of biology. In this review, we will discuss three mathematical approaches for studying cancer biology: population dynamics, gene regulation, and developmental biology. If we understand all biochemical mechanisms of cancer cells, we can directly calculate how the cancer cell population behaves. Conversely, from cell count data alone, we can use population dynamics to infer the mechanisms. Cancer cells emerge from certain genetic mutations, which affect the expression of other genes through gene regulation. Therefore, knowledge of gene regulation can help with cancer prevention and treatment. Developmental biology studies the acquisition and maintenance of normal cellular function, which informs cancer biology from the opposite direction. Moreover, cancer cells implanted into an embryo can differentiate into normal tissues, which suggests a possible approach to curing cancer. This review illustrates the role of mathematics in these three fields: what mathematical models are used, what data analysis tools are applied, and what mathematical theorems need to be proved. We hope that applied mathematicians and even pure mathematicians can find meaningful mathematical problems related to cancer biology.
1112.5905
Boris Shraiman
Kevin K. Chiou, Lars Hufnagel and Boris I. Shraiman
Mechanical Stress Inference for Two Dimensional Cell Arrays
null
null
10.1371/journal.pcbi.1002512
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many morphogenetic processes involve mechanical rearrangement of epithelial tissues that is driven by precisely regulated cytoskeletal forces and cell adhesion. The mechanical state of the cell and intercellular adhesion are not only the targets of regulation, but are themselves likely signals that coordinate developmental processes. Yet, because it is difficult to directly measure mechanical stress {\it in vivo} on a sub-cellular scale, little is understood about the role of mechanics in development. Here we present an alternative approach which takes advantage of the recent progress in live imaging of morphogenetic processes and uses computational analysis of high-resolution images of epithelial tissues to infer the relative magnitudes of forces acting within and between cells. We model intracellular stress in terms of bulk pressure and interfacial tension, allowing these parameters to vary from cell to cell and from interface to interface. Assuming that epithelial cell layers are close to mechanical equilibrium, we use the observed geometry of the two-dimensional cell array to infer interfacial tensions and intracellular pressures. We present the mathematical formulation of the proposed Mechanical Inverse method and apply it to the analysis of epithelial cell layers observed at the onset of ventral furrow formation in the {\it Drosophila} embryo and in the process of hair-cell determination in the avian cochlea. The analysis reveals mechanical anisotropy in the former process and mechanical heterogeneity, correlated with cell differentiation, in the latter process. The method opens a way for quantitative and detailed experimental tests of models of cell and tissue mechanics.
[ { "created": "Tue, 27 Dec 2011 01:15:10 GMT", "version": "v1" } ]
2015-06-03
[ [ "Chiou", "Kevin K.", "" ], [ "Hufnagel", "Lars", "" ], [ "Shraiman", "Boris I.", "" ] ]
Many morphogenetic processes involve mechanical rearrangement of epithelial tissues that is driven by precisely regulated cytoskeletal forces and cell adhesion. The mechanical state of the cell and intercellular adhesion are not only the targets of regulation, but are themselves likely signals that coordinate developmental processes. Yet, because it is difficult to directly measure mechanical stress {\it in vivo} on a sub-cellular scale, little is understood about the role of mechanics in development. Here we present an alternative approach which takes advantage of the recent progress in live imaging of morphogenetic processes and uses computational analysis of high-resolution images of epithelial tissues to infer the relative magnitudes of forces acting within and between cells. We model intracellular stress in terms of bulk pressure and interfacial tension, allowing these parameters to vary from cell to cell and from interface to interface. Assuming that epithelial cell layers are close to mechanical equilibrium, we use the observed geometry of the two-dimensional cell array to infer interfacial tensions and intracellular pressures. We present the mathematical formulation of the proposed Mechanical Inverse method and apply it to the analysis of epithelial cell layers observed at the onset of ventral furrow formation in the {\it Drosophila} embryo and in the process of hair-cell determination in the avian cochlea. The analysis reveals mechanical anisotropy in the former process and mechanical heterogeneity, correlated with cell differentiation, in the latter process. The method opens a way for quantitative and detailed experimental tests of models of cell and tissue mechanics.
1511.09166
Benjamin Dickens
Benjamin Dickens, Charles K. Fisher, and Pankaj Mehta
An analytically tractable model for community ecology with many species
15 pages, 4 figures
Phys. Rev. E 94, 022423 (2016)
10.1103/PhysRevE.94.022423
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration give rise to observed patterns in species composition and diversity. Here, we present a simple, analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, the amount of diversity, and demographic and environmental stochasticity affect species composition in a community. In the PA model, species are treated as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. These results suggest that the concepts of ecological "phases" and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly.
[ { "created": "Mon, 30 Nov 2015 05:38:45 GMT", "version": "v1" } ]
2016-09-07
[ [ "Dickens", "Benjamin", "" ], [ "Fisher", "Charles K.", "" ], [ "Mehta", "Pankaj", "" ] ]
A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration give rise to observed patterns in species composition and diversity. Here, we present a simple, analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, the amount of diversity, and demographic and environmental stochasticity affect species composition in a community. In the PA model, species are treated as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. These results suggest that the concepts of ecological "phases" and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly.
1705.03407
Ulysse Herbach
Ulysse Herbach, Arnaud Bonnaffoux, Thibault Espinasse, Olivier Gandrillon
Inferring gene regulatory networks from single-cell data: a mechanistic approach
null
null
10.1186/s12918-017-0487-0
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent development of single-cell transcriptomics has enabled gene expression to be measured in individual cells instead of being population-averaged. Despite this considerable precision improvement, inferring regulatory networks remains challenging because stochasticity now proves to play a fundamental role in gene expression. In particular, mRNA synthesis is now acknowledged to occur in a highly bursty manner. We propose to view the inference problem as a fitting procedure for a mechanistic gene network model that is inherently stochastic and takes not only protein, but also mRNA levels into account. We first explain how to build and simulate this network model based upon the coupling of genes that are described as piecewise-deterministic Markov processes. Our model is modular and can be used to implement various biochemical hypotheses including causal interactions between genes. However, a naive fitting procedure would be intractable. By performing a relevant approximation of the stationary distribution, we derive a tractable procedure that corresponds to a statistical hidden Markov model with interpretable parameters. This approximation turns out to be extremely close to the theoretical distribution in the case of a simple toggle-switch, and we show that it can indeed fit real single-cell data. As a first step toward inference, our approach was applied to a number of simple two-gene networks simulated in silico from the mechanistic model and satisfactorily recovered the original networks. Our results demonstrate that functional interactions between genes can be inferred from the distribution of a mechanistic, dynamical stochastic model that is able to describe gene expression in individual cells. This approach seems promising in relation to the current explosion of single-cell expression data.
[ { "created": "Tue, 9 May 2017 16:16:45 GMT", "version": "v1" }, { "created": "Wed, 24 May 2017 18:42:40 GMT", "version": "v2" }, { "created": "Sat, 25 Nov 2017 02:59:55 GMT", "version": "v3" } ]
2017-11-28
[ [ "Herbach", "Ulysse", "" ], [ "Bonnaffoux", "Arnaud", "" ], [ "Espinasse", "Thibault", "" ], [ "Gandrillon", "Olivier", "" ] ]
The recent development of single-cell transcriptomics has enabled gene expression to be measured in individual cells instead of being population-averaged. Despite this considerable precision improvement, inferring regulatory networks remains challenging because stochasticity now proves to play a fundamental role in gene expression. In particular, mRNA synthesis is now acknowledged to occur in a highly bursty manner. We propose to view the inference problem as a fitting procedure for a mechanistic gene network model that is inherently stochastic and takes not only protein, but also mRNA levels into account. We first explain how to build and simulate this network model based upon the coupling of genes that are described as piecewise-deterministic Markov processes. Our model is modular and can be used to implement various biochemical hypotheses including causal interactions between genes. However, a naive fitting procedure would be intractable. By performing a relevant approximation of the stationary distribution, we derive a tractable procedure that corresponds to a statistical hidden Markov model with interpretable parameters. This approximation turns out to be extremely close to the theoretical distribution in the case of a simple toggle-switch, and we show that it can indeed fit real single-cell data. As a first step toward inference, our approach was applied to a number of simple two-gene networks simulated in silico from the mechanistic model and satisfactorily recovered the original networks. Our results demonstrate that functional interactions between genes can be inferred from the distribution of a mechanistic, dynamical stochastic model that is able to describe gene expression in individual cells. This approach seems promising in relation to the current explosion of single-cell expression data.
1912.03949
Emmanuelle Bayer
Dawei Yan, Shri Yadav (IIT Roorkee), Andrea Paterlini, William Nicolas (LBM), Ilya Belevich, Magali Grison (LBM), Anne Vaten, Leila Karami, Sedeer El-Showk, Jung-Youn Lee, Gosia Murawska (LBNL), Jenny Mortimer (LBNL), Michael Knoblauch, Eija Jokitalo, Jonathan Markham, Emmanuelle Bayer (LBM), Yk\"a Helariutta
Sphingolipid biosynthesis modulates plasmodesmal ultrastructure and phloem unloading
Nature Plants, Nature Publishing Group, In press
null
10.1038/s41477-019-0429-5
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During phloem unloading, multiple cell-to-cell transport events move organic substances to the root meristem. Although the primary unloading event from the sieve elements to the phloem pole pericycle has been characterized to some extent, little is known about post-sieve element unloading. Here, we report a novel gene, PHLOEM UNLOADING MODULATOR (PLM), in the absence of which plasmodesmata-mediated symplastic transport through the phloem pole pericycle--endodermis interface is specifically enhanced. Increased unloading is attributable to a defect in the formation of the endoplasmic reticulum--plasma membrane tethers during plasmodesmal morphogenesis, resulting in the majority of pores lacking a visible cytoplasmic sleeve. PLM encodes a putative enzyme required for the biosynthesis of sphingolipids with very-long-chain fatty acids. Taken together, our results indicate that post-sieve element unloading involves sphingolipid metabolism, which affects plasmodesmal ultrastructure. They also raise the question of how and why plasmodesmata with no cytoplasmic sleeve facilitate molecular trafficking.
[ { "created": "Mon, 9 Dec 2019 10:22:21 GMT", "version": "v1" } ]
2019-12-10
[ [ "Yan", "Dawei", "", "IIT Roorkee" ], [ "Yadav", "Shri", "", "IIT Roorkee" ], [ "Paterlini", "Andrea", "", "LBM" ], [ "Nicolas", "William", "", "LBM" ], [ "Belevich", "Ilya", "", "LBM" ], [ "Grison", "Magali", "", "LBM" ], [ "Vaten", "Anne", "", "LBNL" ], [ "Karami", "Leila", "", "LBNL" ], [ "El-Showk", "Sedeer", "", "LBNL" ], [ "Lee", "Jung-Youn", "", "LBNL" ], [ "Murawska", "Gosia", "", "LBNL" ], [ "Mortimer", "Jenny", "", "LBNL" ], [ "Knoblauch", "Michael", "", "LBM" ], [ "Jokitalo", "Eija", "", "LBM" ], [ "Markham", "Jonathan", "", "LBM" ], [ "Bayer", "Emmanuelle", "", "LBM" ], [ "Helariutta", "Ykä", "" ] ]
During phloem unloading, multiple cell-to-cell transport events move organic substances to the root meristem. Although the primary unloading event from the sieve elements to the phloem pole pericycle has been characterized to some extent, little is known about post-sieve element unloading. Here, we report a novel gene, PHLOEM UNLOADING MODULATOR (PLM), in the absence of which plasmodesmata-mediated symplastic transport through the phloem pole pericycle--endodermis interface is specifically enhanced. Increased unloading is attributable to a defect in the formation of the endoplasmic reticulum--plasma membrane tethers during plasmodesmal morphogenesis, resulting in the majority of pores lacking a visible cytoplasmic sleeve. PLM encodes a putative enzyme required for the biosynthesis of sphingolipids with very-long-chain fatty acids. Taken together, our results indicate that post-sieve element unloading involves sphingolipid metabolism, which affects plasmodesmal ultrastructure. They also raise the question of how and why plasmodesmata with no cytoplasmic sleeve facilitate molecular trafficking.
1411.4321
Andrew Kennard
Andrew S. Kennard (1 and 2), Matteo Osella (3), Avelino Javer (1), Jacopo Grilli (4 and 5), Philippe Nghe (6 and 7), Sander Tans (6), Pietro Cicuta (1), Marco Cosentino Lagomarsino (8 and 9) ((1) Cavendish Laboratory University of Cambridge, (2) Biophysics Program Stanford University, (3) Dipartimento di Fisica and INFN University of Torino, (4) Department of Ecology and Evolution University of Chicago, (5) Dipartimento di Fisica e Astronomica 'G. Galilei' Universit\`a di Padova, (6) FOM Institute AMOLF, (7) Laboratoire de Biochimie, CNRS/ESPCI \'Ecole Sup\'erieure de Physique et de Chimie Industrielles, (8) Computational and Quantitative Biology Sorbonne Universit\'es UPMC Univ Paris 06, (9) CNRS)
Individuality and universality in the growth-division laws of single E. coli cells
39 pages, 7 main figures, 17 supplementary figures
Phys. Rev. E (2016) 93(1): 012408
10.1103/PhysRevE.93.012408
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The mean size of exponentially dividing E. coli cells cultured in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time and individual growth rate are only starting to be characterized. Recent studies in bacteria (i) revealed the near constancy of the size extension in a single cell cycle (adder mechanism), and (ii) reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here, we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means both collapse size and doubling-time distributions across different conditions, and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms (including the adder mechanism as a particular case) requiring a certain scaling form of the so-called "division hazard rate function", which defines the probability rate of dividing as a function of measurable parameters. This gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.
[ { "created": "Sun, 16 Nov 2014 23:09:52 GMT", "version": "v1" }, { "created": "Thu, 5 Feb 2015 10:08:56 GMT", "version": "v2" }, { "created": "Mon, 22 Jun 2015 07:03:35 GMT", "version": "v3" }, { "created": "Sat, 26 Dec 2015 18:54:18 GMT", "version": "v4" } ]
2016-01-25
[ [ "Kennard", "Andrew S.", "", "1 and 2" ], [ "Osella", "Matteo", "", "4 and 5" ], [ "Javer", "Avelino", "", "4 and 5" ], [ "Grilli", "Jacopo", "", "4 and 5" ], [ "Nghe", "Philippe", "", "6 and 7" ], [ "Tans", "Sander", "", "8 and 9" ], [ "Cicuta", "Pietro", "", "8 and 9" ], [ "Lagomarsino", "Marco Cosentino", "", "8 and 9" ] ]
The mean size of exponentially dividing E. coli cells cultured in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time and individual growth rate are only starting to be characterized. Recent studies in bacteria (i) revealed the near constancy of the size extension in a single cell cycle (adder mechanism), and (ii) reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here, we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means both collapse size and doubling-time distributions across different conditions, and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms (including the adder mechanism as a particular case) requiring a certain scaling form of the so-called "division hazard rate function", which defines the probability rate of dividing as a function of measurable parameters. This gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.
2003.13967
Raphael Wittkowski
Michael te Vrugt, Jens Bickmann, Raphael Wittkowski
Effects of social distancing and isolation on epidemic spreading: a dynamical density functional theory model
9 pages, 3 figures
Nature Communications 11, 5576 (2020)
10.1038/s41467-020-19024-0
null
q-bio.PE physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For preventing the spread of epidemics such as the coronavirus disease COVID-19, social distancing and the isolation of infected persons are crucial. However, existing reaction-diffusion equations for epidemic spreading are incapable of describing these effects. We present an extended model for disease spread based on combining an SIR model with a dynamical density functional theory where social distancing and isolation of infected persons are explicitly taken into account. The model shows interesting nonequilibrium phase separation associated with a reduction of the number of infections, and allows for new insights into the control of pandemics.
[ { "created": "Tue, 31 Mar 2020 06:17:01 GMT", "version": "v1" }, { "created": "Fri, 8 May 2020 16:55:26 GMT", "version": "v2" } ]
2020-11-18
[ [ "Vrugt", "Michael te", "" ], [ "Bickmann", "Jens", "" ], [ "Wittkowski", "Raphael", "" ] ]
For preventing the spread of epidemics such as the coronavirus disease COVID-19, social distancing and the isolation of infected persons are crucial. However, existing reaction-diffusion equations for epidemic spreading are incapable of describing these effects. We present an extended model for disease spread based on combining an SIR model with a dynamical density functional theory where social distancing and isolation of infected persons are explicitly taken into account. The model shows interesting nonequilibrium phase separation associated with a reduction of the number of infections, and allows for new insights into the control of pandemics.
2212.08826
Xin Xia
Xin Xia, Yansen Su, Chunhou Zheng, Xiangxiang Zeng
Molecule optimization via multi-objective evolutionary in implicit chemical space
38 pages, 6 figures, 74 references
null
null
null
q-bio.BM cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods have been used to accelerate the molecule optimization process. However, efficient search for optimized molecules satisfying several properties with scarce labeled data remains a challenge for machine-learning molecule optimization. In this study, we propose MOMO, a multi-objective molecule optimization framework that addresses this challenge by combining learning of chemical knowledge with Pareto-based multi-objective evolutionary search. To learn chemistry, it employs a self-supervised codec to construct an implicit chemical space and acquire a continuous representation of molecules. To explore the established chemical space, MOMO uses multi-objective evolution to comprehensively and efficiently search for similar molecules with multiple desirable properties. We demonstrate the high performance of MOMO on four multi-objective property and similarity optimization tasks, and illustrate the search capability of MOMO through case studies. Remarkably, our approach significantly outperforms previous approaches in optimizing three objectives simultaneously. The results demonstrate the optimization capability of MOMO and suggest it can improve the success rate of lead-molecule optimization.
[ { "created": "Sat, 17 Dec 2022 09:09:23 GMT", "version": "v1" } ]
2022-12-20
[ [ "Xia", "Xin", "" ], [ "Su", "Yansen", "" ], [ "Zheng", "Chunhou", "" ], [ "Zeng", "Xiangxiang", "" ] ]
Machine learning methods have been used to accelerate the molecule optimization process. However, efficient search for optimized molecules satisfying several properties with scarce labeled data remains a challenge for machine-learning molecule optimization. In this study, we propose MOMO, a multi-objective molecule optimization framework that addresses this challenge by combining learning of chemical knowledge with Pareto-based multi-objective evolutionary search. To learn chemistry, it employs a self-supervised codec to construct an implicit chemical space and acquire a continuous representation of molecules. To explore the established chemical space, MOMO uses multi-objective evolution to comprehensively and efficiently search for similar molecules with multiple desirable properties. We demonstrate the high performance of MOMO on four multi-objective property and similarity optimization tasks, and illustrate the search capability of MOMO through case studies. Remarkably, our approach significantly outperforms previous approaches in optimizing three objectives simultaneously. The results demonstrate the optimization capability of MOMO and suggest it can improve the success rate of lead-molecule optimization.
1104.5216
Michael Courtney
Simeon Cole-Fletcher, Lucas Marin-Salcedo, Ajaya Rana, and Michael Courtney
Errors in Length-weight Parameters at FishBase.org
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To investigate possible errors, length-weight parameters from FishBase.org were used to graph length-weight curves for six different species: channel catfish, black crappie, largemouth bass, rainbow trout, flathead catfish, and lake trout along with the standard weight curves (Anderson and Neumann 1996, Bister et al. 2000). Parameters noted as doubtful by FishBase were excluded. For each species, variations in curves were noted, and the minimum and maximum predicted weights for a 30 cm long fish were compared with each other and with the standard weight for that length. For lake trout, additional comparisons were made between the parameters and study details reported in FishBase.org for 6 of 8 length-weight relationships and those reported in the reference (Carlander 1969) for those 6 relationships. In all species studied, minimum and maximum curves produced with the length-weight parameters at FishBase.org are notably different from each other, and in many cases predict weights that are clearly absurd. For example, one set of parameters predicts a 30 cm rainbow trout weighing 44 g. For a 30 cm length, the ranges of weights (relative to the standard weight) for each species are: channel catfish (31.4% to 193.1%), black crappie (54.0% to 149.0%), largemouth bass (28.8% to 130.4%), rainbow trout (14.9% to 113.4%), flathead catfish (29.3% to 250.7%), and lake trout (44.0% to 152.7%). Length-weight tables at FishBase.org are not generally reliable and the on-line database contains dubious parameters. Quality assurance will probably require a systematic review with more careful and comprehensive methods than those currently employed.
[ { "created": "Wed, 27 Apr 2011 19:01:41 GMT", "version": "v1" } ]
2011-04-28
[ [ "Cole-Fletcher", "Simeon", "" ], [ "Marin-Salcedo", "Lucas", "" ], [ "Rana", "Ajaya", "" ], [ "Courtney", "Michael", "" ] ]
To investigate possible errors, length-weight parameters from FishBase.org were used to graph length-weight curves for six different species: channel catfish, black crappie, largemouth bass, rainbow trout, flathead catfish, and lake trout along with the standard weight curves (Anderson and Neumann 1996, Bister et al. 2000). Parameters noted as doubtful by FishBase were excluded. For each species, variations in curves were noted, and the minimum and maximum predicted weights for a 30 cm long fish were compared with each other and with the standard weight for that length. For lake trout, additional comparisons were made between the parameters and study details reported in FishBase.org for 6 of 8 length-weight relationships and those reported in the reference (Carlander 1969) for those 6 relationships. In all species studied, minimum and maximum curves produced with the length-weight parameters at FishBase.org are notably different from each other, and in many cases predict weights that are clearly absurd. For example, one set of parameters predicts a 30 cm rainbow trout weighing 44 g. For a 30 cm length, the ranges of weights (relative to the standard weight) for each species are: channel catfish (31.4% to 193.1%), black crappie (54.0% to 149.0%), largemouth bass (28.8% to 130.4%), rainbow trout (14.9% to 113.4%), flathead catfish (29.3% to 250.7%), and lake trout (44.0% to 152.7%). Length-weight tables at FishBase.org are not generally reliable and the on-line database contains dubious parameters. Quality assurance will probably require a systematic review with more careful and comprehensive methods than those currently employed.
1805.10827
Yanlong Sun
Yanlong Sun, Hongbin Wang
Learning Temporal Structures of Random Patterns
15 pages, 5 figures
null
null
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A cornerstone of human statistical learning is the ability to extract temporal regularities / patterns from random sequences. Here we present a method of computing pattern time statistics with generating functions for first-order Markov trials and independent Bernoulli trials. We show that the pattern time statistics cover a wide range of measurements commonly used in existing studies of both human and machine learning of stochastic processes, including probability of alternation, temporal correlation between pattern events, and related variance / risk measures. Moreover, we show that recurrent processing and event segmentation by pattern overlap may provide a coherent explanation for the sensitivity of the human brain to the rich statistics and the latent structures in the learning environment.
[ { "created": "Mon, 28 May 2018 09:12:49 GMT", "version": "v1" } ]
2018-06-29
[ [ "Sun", "Yanlong", "" ], [ "Wang", "Hongbin", "" ] ]
A cornerstone of human statistical learning is the ability to extract temporal regularities / patterns from random sequences. Here we present a method of computing pattern time statistics with generating functions for first-order Markov trials and independent Bernoulli trials. We show that the pattern time statistics cover a wide range of measurements commonly used in existing studies of both human and machine learning of stochastic processes, including probability of alternation, temporal correlation between pattern events, and related variance / risk measures. Moreover, we show that recurrent processing and event segmentation by pattern overlap may provide a coherent explanation for the sensitivity of the human brain to the rich statistics and the latent structures in the learning environment.
q-bio/0611046
Michael Meyer-Hermann
Michael Meyer-Hermann, Philip K. Maini, Dagmar Iber
An analysis of B cell selection mechanisms in germinal centers
25 pages, 1 table, 6 figures, supplementary material not included
Math Med Biol 23 (2006) 255
null
null
q-bio.CB physics.bio-ph q-bio.TO
null
Affinity maturation of antibodies during immune responses is achieved by multiple rounds of somatic hypermutation and subsequent preferential selection of those B cells that express B cell receptors with improved binding characteristics for the antigen. The mechanism underlying B cell selection has not yet been defined. By employing an agent-based model, we show that for physiologically reasonable parameter values affinity maturation can neither be driven by competition for binding sites nor antigen -- even in the presence of competing secreted antibodies. Within the tested mechanisms, only clonal competition for T cell help or a refractory time for the interaction of centrocytes with follicular dendritic cells are found to enable affinity maturation while generating the experimentally observed germinal center characteristics and tolerating large variations in the initial antigen density.
[ { "created": "Wed, 15 Nov 2006 17:06:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Meyer-Hermann", "Michael", "" ], [ "Maini", "Philip K.", "" ], [ "Iber", "Dagmar", "" ] ]
Affinity maturation of antibodies during immune responses is achieved by multiple rounds of somatic hypermutation and subsequent preferential selection of those B cells that express B cell receptors with improved binding characteristics for the antigen. The mechanism underlying B cell selection has not yet been defined. By employing an agent-based model, we show that for physiologically reasonable parameter values affinity maturation can neither be driven by competition for binding sites nor antigen -- even in the presence of competing secreted antibodies. Within the tested mechanisms, only clonal competition for T cell help or a refractory time for the interaction of centrocytes with follicular dendritic cells are found to enable affinity maturation while generating the experimentally observed germinal center characteristics and tolerating large variations in the initial antigen density.
1911.05479
Asim Iqbal
Asim Iqbal, Phil Dong, Christopher M Kim, Heeun Jang
Decoding Neural Responses in Mouse Visual Cortex through a Deep Neural Network
null
2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019
10.1109/IJCNN.2019.8852121
null
q-bio.NC cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding a code to unravel the population of neural responses that leads to a distinct animal behavior has been a long-standing question in the field of neuroscience. With the recent advances in machine learning, it is shown that the hierarchically Deep Neural Networks (DNNs) perform optimally in decoding unique features out of complex datasets. In this study, we utilize the power of a DNN to explore the computational principles in the mammalian brain by exploiting the Neuropixel data from Allen Brain Institute. We decode the neural responses from mouse visual cortex to predict the presented stimuli to the animal for natural (bear, trees, cheetah, etc.) and artificial (drifted gratings, orientated bars, etc.) classes. Our results indicate that neurons in mouse visual cortex encode the features of natural and artificial objects in a distinct manner, and such neural code is consistent across animals. We investigate this by applying transfer learning to train a DNN on the neural responses of a single animal and test its generalized performance across multiple animals. Within a single animal, DNN is able to decode the neural responses with as much as 100% classification accuracy. Across animals, this accuracy is reduced to 91%. This study demonstrates the potential of utilizing the DNN models as a computational framework to understand the neural coding principles in the mammalian brain.
[ { "created": "Sat, 26 Oct 2019 05:02:33 GMT", "version": "v1" } ]
2019-11-14
[ [ "Iqbal", "Asim", "" ], [ "Dong", "Phil", "" ], [ "Kim", "Christopher M", "" ], [ "Jang", "Heeun", "" ] ]
Finding a code to unravel the population of neural responses that leads to a distinct animal behavior has been a long-standing question in the field of neuroscience. With the recent advances in machine learning, it has been shown that hierarchical Deep Neural Networks (DNNs) perform optimally in decoding unique features out of complex datasets. In this study, we utilize the power of a DNN to explore the computational principles in the mammalian brain by exploiting the Neuropixels data from the Allen Brain Institute. We decode the neural responses from mouse visual cortex to predict the stimuli presented to the animal for natural (bear, trees, cheetah, etc.) and artificial (drifted gratings, orientated bars, etc.) classes. Our results indicate that neurons in mouse visual cortex encode the features of natural and artificial objects in a distinct manner, and such neural code is consistent across animals. We investigate this by applying transfer learning to train a DNN on the neural responses of a single animal and test its generalized performance across multiple animals. Within a single animal, the DNN is able to decode the neural responses with as much as 100% classification accuracy. Across animals, this accuracy is reduced to 91%. This study demonstrates the potential of utilizing DNN models as a computational framework to understand the neural coding principles in the mammalian brain.
1310.8598
Onur Varol
Onur Varol, Deniz Yuret, Burak Erman, Alkan Kabak\c{c}{\i}o\u{g}lu
Mode-coupling points to functionally important residues in Myosin II
17 pages, 6 figures, 1 table
null
null
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/3.0/
Relevance of mode coupling to energy/information transfer during protein function, particularly in the context of allosteric interactions is widely accepted. However, existing evidence in favor of this hypothesis comes essentially from model systems. We here report a novel formal analysis of the near-native dynamics of myosin II, which allows us to explore the impact of the interaction between possibly non-Gaussian vibrational modes on fluctutational dynamics. We show that, an information-theoretic measure based on mode coupling {\it alone} yields a ranking of residues with a statistically significant bias favoring the functionally critical locations identified by experiments on myosin II.
[ { "created": "Thu, 31 Oct 2013 17:14:34 GMT", "version": "v1" } ]
2013-11-01
[ [ "Varol", "Onur", "" ], [ "Yuret", "Deniz", "" ], [ "Erman", "Burak", "" ], [ "Kabakçıoğlu", "Alkan", "" ] ]
Relevance of mode coupling to energy/information transfer during protein function, particularly in the context of allosteric interactions, is widely accepted. However, existing evidence in favor of this hypothesis comes essentially from model systems. We here report a novel formal analysis of the near-native dynamics of myosin II, which allows us to explore the impact of the interaction between possibly non-Gaussian vibrational modes on fluctuational dynamics. We show that an information-theoretic measure based on mode coupling {\it alone} yields a ranking of residues with a statistically significant bias favoring the functionally critical locations identified by experiments on myosin II.
1108.3464
Richard A Neher
Richard A. Neher, Boris I. Shraiman, and Daniel S. Fisher
Rate of Adaptation in Large Sexual Populations
null
null
null
NSF-ITP-09-219
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation often involves the acquisition of a large number of genomic changes which arise as mutations in single individuals. In asexual populations, combinations of mutations can fix only when they arise in the same lineage, but for populations in which genetic information is exchanged, beneficial mutations can arise in different individuals and be combined later. In large populations, when the product of the population size N and the total beneficial mutation rate U_b is large, many new beneficial alleles can be segregating in the population simultaneously. We calculate the rate of adaptation, v, in several models of such sexual populations and show that v is linear in NU_b only in sufficiently small populations. In large populations, v increases much more slowly as log NU_b. The prefactor of this logarithm, however, increases as the square of the recombination rate. This acceleration of adaptation by recombination implies a strong evolutionary advantage of sex.
[ { "created": "Wed, 17 Aug 2011 12:18:40 GMT", "version": "v1" } ]
2011-08-18
[ [ "Neher", "Richard A.", "" ], [ "Shraiman", "Boris I.", "" ], [ "Fisher", "Daniel S.", "" ] ]
Adaptation often involves the acquisition of a large number of genomic changes which arise as mutations in single individuals. In asexual populations, combinations of mutations can fix only when they arise in the same lineage, but for populations in which genetic information is exchanged, beneficial mutations can arise in different individuals and be combined later. In large populations, when the product of the population size N and the total beneficial mutation rate U_b is large, many new beneficial alleles can be segregating in the population simultaneously. We calculate the rate of adaptation, v, in several models of such sexual populations and show that v is linear in NU_b only in sufficiently small populations. In large populations, v increases much more slowly as log NU_b. The prefactor of this logarithm, however, increases as the square of the recombination rate. This acceleration of adaptation by recombination implies a strong evolutionary advantage of sex.
2404.17329
Paul Eisenhuth
Paul Eisenhuth, Fabian Liessmann, Rocco Moretti, Jens Meiler
REvoLd: Ultra-Large Library Screening with an Evolutionary Algorithm in Rosetta
29 pages, 9 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ultra-large make-on-demand compound libraries now contain billions of readily available compounds. This represents a golden opportunity for in-silico drug discovery. One challenge, however, is the time and computational cost of an exhaustive screen of such large libraries when receptor flexibility is taken into account. We propose an evolutionary algorithm to search combinatorial make-on-demand chemical space efficiently without enumerating all molecules. We exploit the feature of make-on-demand compound libraries, namely that they are constructed from lists of substrates and chemical reactions. Our novel algorithm RosettaEvolutionaryLigand (REvoLd) explores the vast search space of combinatorial libraries for protein-ligand docking with full ligand and receptor flexibility through RosettaLigand. A benchmark of REvoLd on five drug targets showed improvements in hit rates by factors between 869 and 1,622 compared to random selections. REvoLd is available as an application within the Rosetta software suite.
[ { "created": "Fri, 26 Apr 2024 11:22:57 GMT", "version": "v1" } ]
2024-04-29
[ [ "Eisenhuth", "Paul", "" ], [ "Liessmann", "Fabian", "" ], [ "Moretti", "Rocco", "" ], [ "Meiler", "Jens", "" ] ]
Ultra-large make-on-demand compound libraries now contain billions of readily available compounds. This represents a golden opportunity for in-silico drug discovery. One challenge, however, is the time and computational cost of an exhaustive screen of such large libraries when receptor flexibility is taken into account. We propose an evolutionary algorithm to search combinatorial make-on-demand chemical space efficiently without enumerating all molecules. We exploit the feature of make-on-demand compound libraries, namely that they are constructed from lists of substrates and chemical reactions. Our novel algorithm RosettaEvolutionaryLigand (REvoLd) explores the vast search space of combinatorial libraries for protein-ligand docking with full ligand and receptor flexibility through RosettaLigand. A benchmark of REvoLd on five drug targets showed improvements in hit rates by factors between 869 and 1,622 compared to random selections. REvoLd is available as an application within the Rosetta software suite.
2009.01445
Ernest Montbrio
Ernest Montbri\'o and Diego Paz\'o
Exact mean-field theory explains the dual role of electrical synapses in collective synchronization
null
Phys. Rev. Lett. 125, 248101 (2020)
10.1103/PhysRevLett.125.248101
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrical synapses play a major role in setting up neuronal synchronization, but the precise mechanisms whereby these synapses contribute to synchrony are subtle and remain elusive. To investigate these mechanisms mean-field theories for quadratic integrate-and-fire neurons with electrical synapses have been recently put forward. Still, the validity of these theories is controversial since they assume that the neurons produce unrealistic, symmetric spikes, ignoring the well-known impact of spike shape on synchronization. Here we show that the assumption of symmetric spikes can be relaxed in such theories. The resulting mean-field equations reveal a dual role of electrical synapses: First, they equalize membrane potentials favoring the emergence of synchrony. Second, electrical synapses act as "virtual chemical synapses", which can be either excitatory or inhibitory depending upon the spike shape. Our results offer a precise mathematical explanation of the intricate effect of electrical synapses in collective synchronization. This reconciles previous theoretical and numerical works, and confirms the suitability of recent low-dimensional mean-field theories to investigate electrically coupled neuronal networks.
[ { "created": "Thu, 3 Sep 2020 04:27:19 GMT", "version": "v1" } ]
2020-12-15
[ [ "Montbrió", "Ernest", "" ], [ "Pazó", "Diego", "" ] ]
Electrical synapses play a major role in setting up neuronal synchronization, but the precise mechanisms whereby these synapses contribute to synchrony are subtle and remain elusive. To investigate these mechanisms mean-field theories for quadratic integrate-and-fire neurons with electrical synapses have been recently put forward. Still, the validity of these theories is controversial since they assume that the neurons produce unrealistic, symmetric spikes, ignoring the well-known impact of spike shape on synchronization. Here we show that the assumption of symmetric spikes can be relaxed in such theories. The resulting mean-field equations reveal a dual role of electrical synapses: First, they equalize membrane potentials favoring the emergence of synchrony. Second, electrical synapses act as "virtual chemical synapses", which can be either excitatory or inhibitory depending upon the spike shape. Our results offer a precise mathematical explanation of the intricate effect of electrical synapses in collective synchronization. This reconciles previous theoretical and numerical works, and confirms the suitability of recent low-dimensional mean-field theories to investigate electrically coupled neuronal networks.
2308.07413
Charles Harris
Charles Harris, Kieran Didi, Arian R. Jamasb, Chaitanya K. Joshi, Simon V. Mathis, Pietro Lio, Tom Blundell
Benchmarking Generated Poses: How Rational is Structure-based Drug Design with Generative Models?
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Deep generative models for structure-based drug design (SBDD), where molecule generation is conditioned on a 3D protein pocket, have received considerable interest in recent years. These methods offer the promise of higher-quality molecule generation by explicitly modelling the 3D interaction between a potential drug and a protein receptor. However, previous work has primarily focused on the quality of the generated molecules themselves, with limited evaluation of the 3D molecule \emph{poses} that these methods produce, with most work simply discarding the generated pose and only reporting a "corrected" pose after redocking with traditional methods. Little is known about whether generated molecules satisfy known physical constraints for binding and the extent to which redocking alters the generated interactions. We introduce PoseCheck, an extensive analysis of multiple state-of-the-art methods and find that generated molecules have significantly more physical violations and fewer key interactions compared to baselines, calling into question the implicit assumption that providing rich 3D structure information improves molecule complementarity. We make recommendations for future research tackling identified failure modes and hope our benchmark can serve as a springboard for future SBDD generative modelling work to have a real-world impact.
[ { "created": "Mon, 14 Aug 2023 19:01:21 GMT", "version": "v1" } ]
2023-08-16
[ [ "Harris", "Charles", "" ], [ "Didi", "Kieran", "" ], [ "Jamasb", "Arian R.", "" ], [ "Joshi", "Chaitanya K.", "" ], [ "Mathis", "Simon V.", "" ], [ "Lio", "Pietro", "" ], [ "Blundell", "Tom", "" ] ]
Deep generative models for structure-based drug design (SBDD), where molecule generation is conditioned on a 3D protein pocket, have received considerable interest in recent years. These methods offer the promise of higher-quality molecule generation by explicitly modelling the 3D interaction between a potential drug and a protein receptor. However, previous work has primarily focused on the quality of the generated molecules themselves, with limited evaluation of the 3D molecule \emph{poses} that these methods produce, with most work simply discarding the generated pose and only reporting a "corrected" pose after redocking with traditional methods. Little is known about whether generated molecules satisfy known physical constraints for binding and the extent to which redocking alters the generated interactions. We introduce PoseCheck, an extensive analysis of multiple state-of-the-art methods and find that generated molecules have significantly more physical violations and fewer key interactions compared to baselines, calling into question the implicit assumption that providing rich 3D structure information improves molecule complementarity. We make recommendations for future research tackling identified failure modes and hope our benchmark can serve as a springboard for future SBDD generative modelling work to have a real-world impact.
2011.08713
Glyn Nelson Dr
Glyn Nelson, Laurent Gelman, Orestis Faklaris, Roland Nitschke, Alex Laude
Interpretation of Confocal ISO 21073: 2019 confocal microscopes: Optical data of fluorescence confocal microscopes for biological imaging- Recommended Methodology for Quality Control
When drawing up this document, it became apparent that further work was required to determine best methodology and further minimal QC tests, as well as extend to other common imaging modalities. As a consequence, QUAREP-LiMi (https://quarep.org) was established to produce a more definitive and expansive QC Methodology manual for light microscopy to supersede this document
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The performance of a confocal imaging system may be no better than a general-purpose widefield system if it is not properly maintained or quality controlled. The publication of ISO 21073, 'Confocal microscopes- Optical data of fluorescence confocal microscopes for biological imaging', set a standard for the minimal Quality Control (QC) that should be performed for Confocal Microscopes. Here we describe methodology for performing the QC requirements to satisfy ISO 21073, as well as suggesting other QC methods that should be performed to obtain a minimum level of information about the microscope system.
[ { "created": "Tue, 17 Nov 2020 15:38:41 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2020 08:19:33 GMT", "version": "v2" } ]
2020-11-20
[ [ "Nelson", "Glyn", "" ], [ "Gelman", "Laurent", "" ], [ "Faklaris", "Orestis", "" ], [ "Nitschke", "Roland", "" ], [ "Laude", "Alex", "" ] ]
The performance of a confocal imaging system may be no better than a general-purpose widefield system if it is not properly maintained or quality controlled. The publication of ISO 21073, 'Confocal microscopes- Optical data of fluorescence confocal microscopes for biological imaging', set a standard for the minimal Quality Control (QC) that should be performed for Confocal Microscopes. Here we describe methodology for performing the QC requirements to satisfy ISO 21073, as well as suggesting other QC methods that should be performed to obtain a minimum level of information about the microscope system.
1705.02336
Sergei Shedko
Sergei V. Shedko
Revision of nucleotide substitution rate in mtDNA control region of white sturgeon Acipenser transmontanus (Acipenseridae)
8 pages, in Russian, 1 figure
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The raw data from study of variation of D-loop mtDNA of white sturgeon Acipenser transmontanus (Mol. Biol. Evol. 1993. 10: 326-341) was re-analyzed. Re-calculated nucleotide substitution rate ({\mu}) was 0.782-0.939 x 10-7 substitutions/site/year/lineage, which was 1.4 times less than the estimate given in above-mentioned publication. The use of new {\mu} has led to an increase in estimates of long-term female effective population size (Nef) and coalescence times for mtDNA haplotypes, previously calculated for samples of Amur sturgeon A. schrenckii and kaluga A. dauricus, but it did not affect the conclusions on the critical state of their natural populations.
[ { "created": "Fri, 5 May 2017 02:41:28 GMT", "version": "v1" } ]
2017-05-09
[ [ "Shedko", "Sergei V.", "" ] ]
The raw data from a study of variation in the D-loop mtDNA of white sturgeon Acipenser transmontanus (Mol. Biol. Evol. 1993. 10: 326-341) were re-analyzed. The re-calculated nucleotide substitution rate ({\mu}) was 0.782-0.939 x 10^-7 substitutions/site/year/lineage, which is 1.4 times lower than the estimate given in the above-mentioned publication. The use of the new {\mu} led to an increase in estimates of long-term female effective population size (Nef) and coalescence times for mtDNA haplotypes, previously calculated for samples of Amur sturgeon A. schrenckii and kaluga A. dauricus, but it did not affect the conclusions on the critical state of their natural populations.
1312.2041
John Storey
Wei Hao, Minsun Song, and John D. Storey
Probabilistic models of genetic variation in structured populations applied to global human studies
Wei Hao and Minsun Song contributed equally to this work
null
10.1093/bioinformatics/btv641
null
q-bio.PE q-bio.GN q-bio.QM stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important, unsolved problem is how to formulate and estimate probabilistic models of observed genotypes that allow for complex population structure. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis (PCA) can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly mixed-membership model as a special case. Noting some drawbacks of this approach, we introduce a new "logistic factor analysis" (LFA) framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the human genome diversity panel and 1000 genomes project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions.
[ { "created": "Sat, 7 Dec 2013 00:14:17 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2015 03:41:05 GMT", "version": "v2" } ]
2017-01-10
[ [ "Hao", "Wei", "" ], [ "Song", "Minsun", "" ], [ "Storey", "John D.", "" ] ]
Modern population genetics studies typically involve genome-wide genotyping of individuals from a diverse network of ancestries. An important, unsolved problem is how to formulate and estimate probabilistic models of observed genotypes that allow for complex population structure. We formulate two general probabilistic models, and we propose computationally efficient algorithms to estimate them. First, we show how principal component analysis (PCA) can be utilized to estimate a general model that includes the well-known Pritchard-Stephens-Donnelly mixed-membership model as a special case. Noting some drawbacks of this approach, we introduce a new "logistic factor analysis" (LFA) framework that seeks to directly model the logit transformation of probabilities underlying observed genotypes in terms of latent variables that capture population structure. We demonstrate these advances on data from the human genome diversity panel and 1000 genomes project, where we are able to identify SNPs that are highly differentiated with respect to structure while making minimal modeling assumptions.
2311.17103
Hu Dayu
Dayu Hu, Zhibin Dong, Ke Liang, Jun Wang, Siwei Wang and Xinwang Liu
Single-cell Multi-view Clustering via Community Detection with Unknown Number of Clusters
null
null
null
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell multi-view clustering enables the exploration of cellular heterogeneity within the same cell from different views. Despite the development of several multi-view clustering methods, two primary challenges persist. Firstly, most existing methods treat the information from both single-cell RNA (scRNA) and single-cell Assay of Transposase Accessible Chromatin (scATAC) views as equally significant, overlooking the substantial disparity in data richness between the two views. This oversight frequently leads to a degradation in overall performance. Additionally, the majority of clustering methods necessitate manual specification of the number of clusters by users. However, for biologists dealing with cell data, precisely determining the number of distinct cell types poses a formidable challenge. To this end, we introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data, which seamlessly integrates information from different views without the need for a predefined number of clusters. The scUNC method comprises several steps: initially, it employs a cross-view fusion network to create an effective embedding, which is then utilized to generate initial clusters via community detection. Subsequently, the clusters are automatically merged and optimized until no further clusters can be merged. We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets. The results underscored that scUNC outperforms the other baseline methods.
[ { "created": "Tue, 28 Nov 2023 08:34:58 GMT", "version": "v1" } ]
2023-11-30
[ [ "Hu", "Dayu", "" ], [ "Dong", "Zhibin", "" ], [ "Liang", "Ke", "" ], [ "Wang", "Jun", "" ], [ "Wang", "Siwei", "" ], [ "Liu", "Xinwang", "" ] ]
Single-cell multi-view clustering enables the exploration of cellular heterogeneity within the same cell from different views. Despite the development of several multi-view clustering methods, two primary challenges persist. Firstly, most existing methods treat the information from both single-cell RNA (scRNA) and single-cell Assay of Transposase Accessible Chromatin (scATAC) views as equally significant, overlooking the substantial disparity in data richness between the two views. This oversight frequently leads to a degradation in overall performance. Additionally, the majority of clustering methods necessitate manual specification of the number of clusters by users. However, for biologists dealing with cell data, precisely determining the number of distinct cell types poses a formidable challenge. To this end, we introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data, which seamlessly integrates information from different views without the need for a predefined number of clusters. The scUNC method comprises several steps: initially, it employs a cross-view fusion network to create an effective embedding, which is then utilized to generate initial clusters via community detection. Subsequently, the clusters are automatically merged and optimized until no further clusters can be merged. We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets. The results underscored that scUNC outperforms the other baseline methods.
2209.05728
Guy Florian Draenert
Guy Florian Draenert, Gergo Mitov
Lack of corundum, carbon residues and revealing gaps on dental implants
further information available on request
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Surface modification is an important topic to improve dental implants. Corundum residues, which are part of current dental implant blasting, disappeared on Straumann dental implants in recent publications. In our investigations of the surface of 4 different Straumann implants using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX) we found the following three main findings: surfaces are nearly corundum-free, disseminated gap-framed corundum particles and significant molecular carbon residues. The data strongly suggest that Straumann applies a modified surface technology on dental implants to remove corundum residues and involving unclear carbons. One explanation could be, a Straumann patent involving a dextran coating allowing easy corundum particle removal by aqueous solution, while unintended molecular carbon residues cannot explain all findings. This change of the production process without a new approval by the FDA would be a violation of US federal law and the carbon bindings are a possible danger to patients.
[ { "created": "Tue, 13 Sep 2022 04:45:44 GMT", "version": "v1" } ]
2022-09-14
[ [ "Draenert", "Guy Florian", "" ], [ "Mitov", "Gergo", "" ] ]
Surface modification is an important topic for improving dental implants. Corundum residues, which result from current dental implant blasting, have disappeared from Straumann dental implants in recent publications. In our investigations of the surfaces of 4 different Straumann implants using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), we made three main findings: the surfaces are nearly corundum-free, show disseminated gap-framed corundum particles, and carry significant molecular carbon residues. The data strongly suggest that Straumann applies a modified surface technology to dental implants to remove corundum residues, involving unclear carbons. One explanation could be a Straumann patent involving a dextran coating that allows easy corundum particle removal by aqueous solution, although unintended molecular carbon residues cannot explain all findings. This change of the production process without a new approval by the FDA would be a violation of US federal law, and the carbon bindings are a possible danger to patients.
2112.08366
Ruiwei Feng
Ruiwei Feng, Yufeng Xie, Minshan Lai, Danny Z. Chen, Ji Cao, Jian Wu
AGMI: Attention-Guided Multi-omics Integration for Drug Response Prediction with Graph Neural Networks
null
null
10.1109/BIBM52615.2021.9669314
null
q-bio.GN cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate drug response prediction (DRP) is a crucial yet challenging task in precision medicine. This paper presents a novel Attention-Guided Multi-omics Integration (AGMI) approach for DRP, which first constructs a Multi-edge Graph (MeG) for each cell line, and then aggregates multi-omics features to predict drug response using a novel structure, called Graph edge-aware Network (GeNet). For the first time, our AGMI approach explores gene constraint based multi-omics integration for DRP with the whole-genome using GNNs. Empirical experiments on the CCLE and GDSC datasets show that our AGMI largely outperforms state-of-the-art DRP methods by 8.3%--34.2% on four metrics. Our data and code are available at https://github.com/yivan-WYYGDSG/AGMI.
[ { "created": "Wed, 15 Dec 2021 07:42:46 GMT", "version": "v1" }, { "created": "Mon, 10 Jan 2022 02:46:36 GMT", "version": "v2" } ]
2022-01-20
[ [ "Feng", "Ruiwei", "" ], [ "Xie", "Yufeng", "" ], [ "Lai", "Minshan", "" ], [ "Chen", "Danny Z.", "" ], [ "Cao", "Ji", "" ], [ "Wu", "Jian", "" ] ]
Accurate drug response prediction (DRP) is a crucial yet challenging task in precision medicine. This paper presents a novel Attention-Guided Multi-omics Integration (AGMI) approach for DRP, which first constructs a Multi-edge Graph (MeG) for each cell line, and then aggregates multi-omics features to predict drug response using a novel structure, called Graph edge-aware Network (GeNet). For the first time, our AGMI approach explores gene-constraint-based multi-omics integration for DRP with the whole genome using GNNs. Empirical experiments on the CCLE and GDSC datasets show that our AGMI largely outperforms state-of-the-art DRP methods by 8.3%--34.2% on four metrics. Our data and code are available at https://github.com/yivan-WYYGDSG/AGMI.
1209.5559
Alex Susemihl
Alex Susemihl, Ron Meir, Manfred Opper
Dynamic State Estimation Based on Poisson Spike Trains: Towards a Theory of Optimal Encoding
26 pages, 9 figures
J. Stat. Mech. (2013) P03009
10.1088/1742-5468/2013/03/P03009
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
[ { "created": "Tue, 25 Sep 2012 09:51:33 GMT", "version": "v1" } ]
2013-09-13
[ [ "Susemihl", "Alex", "" ], [ "Meir", "Ron", "" ], [ "Opper", "Manfred", "" ] ]
Neurons in the nervous system convey information to higher brain regions by the generation of spike trains. An important question in the field of computational neuroscience is how these sensory neurons encode environmental information in a way which may be simply analyzed by subsequent systems. Many aspects of the form and function of the nervous system have been understood using the concepts of optimal population coding. Most studies, however, have neglected the aspect of temporal coding. Here we address this shortcoming through a filtering theory of inhomogeneous Poisson processes. We derive exact relations for the minimal mean squared error of the optimal Bayesian filter and by optimizing the encoder, obtain optimal codes for populations of neurons. We also show that a class of non-Markovian, smooth stimuli are amenable to the same treatment, and provide results for the filtering and prediction error which hold for a general class of stochastic processes. This sets a sound mathematical framework for a population coding theory that takes temporal aspects into account. It also formalizes a number of studies which discussed temporal aspects of coding using time-window paradigms, by stating them in terms of correlation times and firing rates. We propose that this kind of analysis allows for a systematic study of temporal coding and will bring further insights into the nature of the neural code.
1001.4212
Nikolai Sinitsyn
N. A. Sinitsyn and I. Nemenman
Time-dependent corrections to effective rate and event statistics in Michaelis-Menten kinetics
11 pages
IET Syst Biol 4, 409, 2010
10.1049/iet-syb.2010.0064
Technical report: LA-UR-08-04425
q-bio.QM cond-mat.stat-mech physics.chem-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We generalize the concept of the geometric phase in stochastic kinetics to a noncyclic evolution. Its application is demonstrated on kinetics of the Michaelis-Menten reaction. It is shown that the nonperiodic geometric phase is responsible for the correction to the Michaelis-Menten law when parameters, such as a substrate concentration, are changing with time. We apply these ideas to a model of chemical reactions in a bacterial culture of a growing size, where the geometric correction qualitatively changes the outcome of the reaction kinetics.
[ { "created": "Sun, 24 Jan 2010 00:13:16 GMT", "version": "v1" } ]
2010-11-24
[ [ "Sinitsyn", "N. A.", "" ], [ "Nemenman", "I.", "" ] ]
We generalize the concept of the geometric phase in stochastic kinetics to a noncyclic evolution. Its application is demonstrated on kinetics of the Michaelis-Menten reaction. It is shown that the nonperiodic geometric phase is responsible for the correction to the Michaelis-Menten law when parameters, such as a substrate concentration, are changing with time. We apply these ideas to a model of chemical reactions in a bacterial culture of a growing size, where the geometric correction qualitatively changes the outcome of the reaction kinetics.
2405.02674
Josinaldo Menezes Da Silva
R. Barbalho, S. Rodrigues, M. Tenorio, J. Menezes
Ambush strategy enhances organisms' performance in rock-paper-scissors games
8 pages, 5 figures
BioSystems 240, 105229 (2024)
10.1016/j.biosystems.2024.105229
null
q-bio.PE nlin.AO nlin.PS physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a five-species cyclic system wherein individuals of one species strategically adapt their movements to enhance their performance in the spatial rock-paper-scissors game. Environmental cues enable the awareness of the presence of organisms targeted for elimination in the cyclic game. If the local density of target organisms is sufficiently high, individuals move towards concentrated areas for direct attack; otherwise, they employ an ambush tactic, maximising the chances of success by targeting regions likely to be dominated by opponents. Running stochastic simulations, we discover that the ambush strategy enhances the likelihood of individual success compared to direct attacks alone, leading to uneven spatial patterns characterised by spiral waves. We compute the autocorrelation function and measure how the ambush tactic unbalances the organisms' spatial organisation by calculating the characteristic length scale of typical spatial domains of each species. We demonstrate that the threshold for local species density influences the ambush strategy's effectiveness, while the neighbourhood perception range significantly impacts decision-making accuracy. The outcomes show that long-range perception improves performance by over 60\%, although there is potential interference in decision-making under high attack triggers. Understanding how organisms' adaptation to their environment enhances their performance may be helpful not only for ecologists but also for data scientists aiming to improve artificial intelligence systems.
[ { "created": "Sat, 4 May 2024 14:23:59 GMT", "version": "v1" } ]
2024-06-05
[ [ "Barbalho", "R.", "" ], [ "Rodrigues", "S.", "" ], [ "Tenorio", "M.", "" ], [ "Menezes", "J.", "" ] ]
We study a five-species cyclic system wherein individuals of one species strategically adapt their movements to enhance their performance in the spatial rock-paper-scissors game. Environmental cues enable the awareness of the presence of organisms targeted for elimination in the cyclic game. If the local density of target organisms is sufficiently high, individuals move towards concentrated areas for direct attack; otherwise, they employ an ambush tactic, maximising the chances of success by targeting regions likely to be dominated by opponents. Running stochastic simulations, we discover that the ambush strategy enhances the likelihood of individual success compared to direct attacks alone, leading to uneven spatial patterns characterised by spiral waves. We compute the autocorrelation function and measure how the ambush tactic unbalances the organisms' spatial organisation by calculating the characteristic length scale of typical spatial domains of each species. We demonstrate that the threshold for local species density influences the ambush strategy's effectiveness, while the neighbourhood perception range significantly impacts decision-making accuracy. The outcomes show that long-range perception improves performance by over 60\%, although there is potential interference in decision-making under high attack triggers. Understanding how organisms' adaptation to their environment enhances their performance may be helpful not only for ecologists but also for data scientists aiming to improve artificial intelligence systems.
1502.00155
Ulrich S. Schwarz
Marvin A. Boettcher, Heinrich C. R. Klein and Ulrich S. Schwarz (Heidelberg University)
Role of dynamic capsomere supply for viral capsid self-assembly
Revtex, 26 pages, 7 EPS figures
Phys. Biol. 12:016014 (2015)
10.1088/1478-3975/12/1/016014
null
q-bio.SC physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many viruses rely on the self-assembly of their capsids to protect and transport their genomic material. For many viral systems, in particular for human viruses like hepatitis B, adeno or human immunodeficiency virus, that lead to persistent infections, capsomeres are continuously produced in the cytoplasm of the host cell while completed capsids exit the cell for a new round of infection. Here we use coarse-grained Brownian dynamics simulations of a generic patchy particle model to elucidate the role of the dynamic supply of capsomeres for the reversible self-assembly of empty T1 icosahedral virus capsids. We find that for high rates of capsomere influx only a narrow range of bond strengths exists for which a steady state of continuous capsid production is possible. For bond strengths smaller and larger than this optimal value, the reaction volume becomes crowded by small and large intermediates, respectively. For lower rates of capsomere influx a broader range of bond strengths exists for which a steady state of continuous capsid production is established, although now the production rate of capsids is smaller. Thus our simulations suggest that the importance of an optimal bond strength for viral capsid assembly typical for in vitro conditions can be reduced by the dynamic influx of capsomeres in a cellular environment.
[ { "created": "Sat, 31 Jan 2015 19:43:07 GMT", "version": "v1" } ]
2015-02-03
[ [ "Boettcher", "Marvin A.", "", "Heidelberg University" ], [ "Klein", "Heinrich C. R.", "", "Heidelberg University" ], [ "Schwarz", "Ulrich S.", "", "Heidelberg University" ] ]
Many viruses rely on the self-assembly of their capsids to protect and transport their genomic material. For many viral systems, in particular for human viruses like hepatitis B, adeno or human immunodeficiency virus, that lead to persistent infections, capsomeres are continuously produced in the cytoplasm of the host cell while completed capsids exit the cell for a new round of infection. Here we use coarse-grained Brownian dynamics simulations of a generic patchy particle model to elucidate the role of the dynamic supply of capsomeres for the reversible self-assembly of empty T1 icosahedral virus capsids. We find that for high rates of capsomere influx only a narrow range of bond strengths exists for which a steady state of continuous capsid production is possible. For bond strengths smaller and larger than this optimal value, the reaction volume becomes crowded by small and large intermediates, respectively. For lower rates of capsomere influx a broader range of bond strengths exists for which a steady state of continuous capsid production is established, although now the production rate of capsids is smaller. Thus our simulations suggest that the importance of an optimal bond strength for viral capsid assembly typical for in vitro conditions can be reduced by the dynamic influx of capsomeres in a cellular environment.
1703.08792
Burkhard Morgenstern
Chris-Andre Leimeister, Thomas Dencker, Burkhard Morgenstern
Anchor points for genome alignment based on Filtered Spaced Word Matches
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alignment of large genomic sequences is a fundamental task in computational genome analysis. Most methods for genomic alignment use high-scoring local alignments as {\em anchor points} to reduce the search space of the alignment procedure. Speed and quality of these methods therefore depend on the underlying anchor points. Herein, we propose to use {\em Filtered Spaced Word Matches} to calculate anchor points for genome alignment. To evaluate this approach, we used these anchor points in the widely used alignment pipeline {\em Mugsy}. For distantly related sequence sets, we could substantially improve the quality of alignments produced by {\em Mugsy}.
[ { "created": "Sun, 26 Mar 2017 09:21:07 GMT", "version": "v1" } ]
2017-03-28
[ [ "Leimeister", "Chris-Andre", "" ], [ "Dencker", "Thomas", "" ], [ "Morgenstern", "Burkhard", "" ] ]
Alignment of large genomic sequences is a fundamental task in computational genome analysis. Most methods for genomic alignment use high-scoring local alignments as {\em anchor points} to reduce the search space of the alignment procedure. Speed and quality of these methods therefore depend on the underlying anchor points. Herein, we propose to use {\em Filtered Spaced Word Matches} to calculate anchor points for genome alignment. To evaluate this approach, we used these anchor points in the widely used alignment pipeline {\em Mugsy}. For distantly related sequence sets, we could substantially improve the quality of alignments produced by {\em Mugsy}.
1902.04851
Edmund Crampin
Hilary Hunt, Agne Tilunaite, Greg Bass, Christian Soeller, H. Llewelyn Roderick, Vijay Rajagopal, Edmund J. Crampin
Ca2+ release via IP3 receptors shapes the cytosolic Ca2+ transient for hypertrophic signalling in ventricular cardiomyocytes
Biophysical Journal, in press
null
10.1016/j.bpj.2020.08.001
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Calcium (Ca2+) plays a central role in mediating both contractile function and hypertrophic signalling in ventricular cardiomyocytes. L-type Ca2+ channels trigger release of Ca2+ from ryanodine receptors (RyRs) for cellular contraction, while signalling downstream of Gq coupled receptors stimulates Ca2+ release via inositol 1,4,5-trisphosphate receptors (IP3Rs), engaging hypertrophic signalling pathways. Modulation of the amplitude, duration, and duty cycle of the cytosolic Ca2+ contraction signal, and spatial localisation, have all been proposed to encode this hypertrophic signal. Given current knowledge of IP3Rs, we develop a model describing the effect of functional interaction (cross-talk) between RyR and IP3R channels on the Ca2+ transient, and examine the sensitivity of the Ca2+ transient shape to properties of IP3R activation. A key result of our study is that IP3R activation increases Ca2+ transient duration for a broad range of IP3R properties, but the effect of IP3R activation on Ca2+ transient amplitude is dependent on IP3 concentration. Furthermore we demonstrate that IP3-mediated Ca2+ release in the cytosol increases the duty cycle of the Ca2+ transient, the fraction of the cycle for which [Ca2+] is elevated, across a broad range of parameter values and IP3 concentrations. When coupled to a model of downstream transcription factor (NFAT) activation, we demonstrate that there is a high correspondence between the Ca2+ transient duty cycle and the proportion of activated NFAT in the nucleus. These findings suggest increased cytosolic Ca2+ duty cycle as a plausible mechanism for IP3-dependent hypertrophic signalling via Ca2+-sensitive transcription factors such as NFAT in ventricular cardiomyocytes.
[ { "created": "Wed, 13 Feb 2019 10:51:18 GMT", "version": "v1" }, { "created": "Wed, 6 Mar 2019 04:48:58 GMT", "version": "v2" }, { "created": "Fri, 17 Jan 2020 04:38:12 GMT", "version": "v3" }, { "created": "Mon, 17 Aug 2020 06:30:01 GMT", "version": "v4" } ]
2020-08-18
[ [ "Hunt", "Hilary", "" ], [ "Tilunaite", "Agne", "" ], [ "Bass", "Greg", "" ], [ "Soeller", "Christian", "" ], [ "Roderick", "H. Llewelyn", "" ], [ "Rajagopal", "Vijay", "" ], [ "Crampin", "Edmund J.", "" ] ]
Calcium (Ca2+) plays a central role in mediating both contractile function and hypertrophic signalling in ventricular cardiomyocytes. L-type Ca2+ channels trigger release of Ca2+ from ryanodine receptors (RyRs) for cellular contraction, while signalling downstream of Gq coupled receptors stimulates Ca2+ release via inositol 1,4,5-trisphosphate receptors (IP3Rs), engaging hypertrophic signalling pathways. Modulation of the amplitude, duration, and duty cycle of the cytosolic Ca2+ contraction signal, and spatial localisation, have all been proposed to encode this hypertrophic signal. Given current knowledge of IP3Rs, we develop a model describing the effect of functional interaction (cross-talk) between RyR and IP3R channels on the Ca2+ transient, and examine the sensitivity of the Ca2+ transient shape to properties of IP3R activation. A key result of our study is that IP3R activation increases Ca2+ transient duration for a broad range of IP3R properties, but the effect of IP3R activation on Ca2+ transient amplitude is dependent on IP3 concentration. Furthermore we demonstrate that IP3-mediated Ca2+ release in the cytosol increases the duty cycle of the Ca2+ transient, the fraction of the cycle for which [Ca2+] is elevated, across a broad range of parameter values and IP3 concentrations. When coupled to a model of downstream transcription factor (NFAT) activation, we demonstrate that there is a high correspondence between the Ca2+ transient duty cycle and the proportion of activated NFAT in the nucleus. These findings suggest increased cytosolic Ca2+ duty cycle as a plausible mechanism for IP3-dependent hypertrophic signalling via Ca2+-sensitive transcription factors such as NFAT in ventricular cardiomyocytes.
2102.05236
Pan Wang
Pan Wang, Rui Zhou, Shuo Wang, Ling Li, Wenjia Bai, Jialu Fan, Chunlin Li, Peter Childs, and Yike Guo
A General Framework for Revealing Human Mind with auto-encoding GANs
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Addressing the question of visualising the human mind could help us to find regions that are associated with observed cognition and responsible for expressing the elusive mental image, leading to a better understanding of cognitive function. The traditional approach treats brain decoding as a classification problem, reading the mind through statistical analysis of brain activity. However, human thought is rich and varied, and is often influenced by a combination of object features rather than a specific category. For this reason, we propose an end-to-end brain decoding framework which translates brain activity into an image by latent space alignment. To find the correspondence from brain signal features to image features, we embedded them into two latent spaces with modality-specific encoders and then aligned the two spaces by minimising the distance between paired latent representations. The proposed framework was trained by simultaneous electroencephalogram and functional MRI data, which were recorded when the subjects were viewing or imagining a set of image stimuli. In this paper, we focused on implementing the fMRI experiment. Our experimental results demonstrated the feasibility of translating brain activity to an image. The reconstructed image matches the image stimuli approximately in both shape and colour. Our framework provides a promising direction for building a direct visualisation to reveal the human mind.
[ { "created": "Wed, 10 Feb 2021 03:18:46 GMT", "version": "v1" } ]
2021-02-11
[ [ "Wang", "Pan", "" ], [ "Zhou", "Rui", "" ], [ "Wang", "Shuo", "" ], [ "Li", "Ling", "" ], [ "Bai", "Wenjia", "" ], [ "Fan", "Jialu", "" ], [ "Li", "Chunlin", "" ], [ "Childs", "Peter", "" ], [ "Guo", "Yike", "" ] ]
Addressing the question of visualising the human mind could help us to find regions that are associated with observed cognition and responsible for expressing the elusive mental image, leading to a better understanding of cognitive function. The traditional approach treats brain decoding as a classification problem, reading the mind through statistical analysis of brain activity. However, human thought is rich and varied, and is often influenced by a combination of object features rather than a specific category. For this reason, we propose an end-to-end brain decoding framework which translates brain activity into an image by latent space alignment. To find the correspondence from brain signal features to image features, we embedded them into two latent spaces with modality-specific encoders and then aligned the two spaces by minimising the distance between paired latent representations. The proposed framework was trained by simultaneous electroencephalogram and functional MRI data, which were recorded when the subjects were viewing or imagining a set of image stimuli. In this paper, we focused on implementing the fMRI experiment. Our experimental results demonstrated the feasibility of translating brain activity to an image. The reconstructed image matches the image stimuli approximately in both shape and colour. Our framework provides a promising direction for building a direct visualisation to reveal the human mind.
2405.04557
Da Zhou Prof.
Yuman Wang, Shuli Chen, Jie Hu, and Da Zhou
Determining cell population size from cell fraction in cell plasticity models
null
null
null
null
q-bio.QM q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Quantifying the size of cell populations is crucial for understanding biological processes such as growth, injury repair, and disease progression. Often, experimental data offer information in the form of relative frequencies of distinct cell types, rather than absolute cell counts. This emphasizes the need to devise effective strategies for estimating absolute cell quantities from fraction data. In response to this challenge, we present two computational approaches grounded in stochastic cell population models: the first-order moment method (FOM) and the second-order moment method (SOM). These methods explicitly establish mathematical mappings from cell fraction to cell population size using moment equations of the stochastic models. Notably, our investigation demonstrates that the SOM method obviates the requirement for a priori knowledge of the initial population size, highlighting the utility of incorporating variance details from cell proportions. The robustness of both the FOM and SOM methods was analyzed from different perspectives. Additionally, we extended the application of the FOM and SOM methods to various biological mechanisms within the context of cell plasticity models. Our methodologies not only assist in mitigating the inherent limitations of experimental techniques when only fraction data is available for detecting cell population size, but they also offer new insights into utilizing the stochastic characteristics of cell population dynamics to quantify interactions between different biomasses within the system.
[ { "created": "Tue, 7 May 2024 08:44:18 GMT", "version": "v1" } ]
2024-05-09
[ [ "Wang", "Yuman", "" ], [ "Chen", "Shuli", "" ], [ "Hu", "Jie", "" ], [ "Zhou", "Da", "" ] ]
Quantifying the size of cell populations is crucial for understanding biological processes such as growth, injury repair, and disease progression. Often, experimental data offer information in the form of relative frequencies of distinct cell types, rather than absolute cell counts. This emphasizes the need to devise effective strategies for estimating absolute cell quantities from fraction data. In response to this challenge, we present two computational approaches grounded in stochastic cell population models: the first-order moment method (FOM) and the second-order moment method (SOM). These methods explicitly establish mathematical mappings from cell fraction to cell population size using moment equations of the stochastic models. Notably, our investigation demonstrates that the SOM method obviates the requirement for a priori knowledge of the initial population size, highlighting the utility of incorporating variance details from cell proportions. The robustness of both the FOM and SOM methods was analyzed from different perspectives. Additionally, we extended the application of the FOM and SOM methods to various biological mechanisms within the context of cell plasticity models. Our methodologies not only assist in mitigating the inherent limitations of experimental techniques when only fraction data is available for detecting cell population size, but they also offer new insights into utilizing the stochastic characteristics of cell population dynamics to quantify interactions between different biomasses within the system.
1912.00270
Gerrit Hilgen
Gerrit Hilgen (Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle, NE2 4HH, United Kingdom)
Challenges for automated spike sorting: beware of pharmacological manipulations
mini review, 1 figure, 7 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of large-scale and high-density extracellular recording devices allows simultaneous recording from thousands of neurons. However, the complexity and size of the data make it mandatory to develop robust algorithms for fully automated spike sorting. Here it is shown that limitations imposed by biological constraints, such as changes in spike waveforms induced under different drug regimes, should be carefully taken into consideration in future developments.
[ { "created": "Sat, 30 Nov 2019 21:51:28 GMT", "version": "v1" } ]
2019-12-03
[ [ "Hilgen", "Gerrit", "", "Biosciences Institute, Faculty of Medical Sciences,\n Newcastle University, Newcastle, NE2 4HH, United Kingdom" ] ]
The advent of large-scale and high-density extracellular recording devices allows simultaneous recording from thousands of neurons. However, the complexity and size of the data make it mandatory to develop robust algorithms for fully automated spike sorting. Here it is shown that limitations imposed by biological constraints, such as changes in spike waveforms induced under different drug regimes, should be carefully taken into consideration in future developments.
1309.3436
David Holcman
N. Hoze, D. Holcman
Potential wells for AMPA receptors organized in ring nanodomains
4 figures extension of Hoze et al, PNAS 2012
null
null
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By combining high-density super-resolution imaging with a novel stochastic analysis, we report here a peculiar nano-structure organization revealed by the density function of individual AMPA receptors moving on the surface of cultured hippocampal dendrites. High-density regions of hundreds of nanometers for the trajectories are associated with local molecular assembly generated by direct molecular interactions due to physical potential wells. We found here that for some of these regions, the potential wells are organized in ring structures. We could find up to 3 wells in a single ring. Inside a ring, receptors move in a small band whose width is of the order of hundreds of nanometers. In addition, rings are transient structures and can be observed for tens of minutes. Potential wells located in a ring are also transient and the position of their peaks can shift with time. We conclude that these rings can trap receptors in a unique geometrical structure contributing to shape receptor trafficking, a process that sustains synaptic transmission and plasticity.
[ { "created": "Fri, 13 Sep 2013 12:27:24 GMT", "version": "v1" }, { "created": "Tue, 24 Sep 2013 17:58:42 GMT", "version": "v2" } ]
2014-07-01
[ [ "Hoze", "N.", "" ], [ "Holcman", "D.", "" ] ]
By combining high-density super-resolution imaging with a novel stochastic analysis, we report here a peculiar nano-structure organization revealed by the density function of individual AMPA receptors moving on the surface of cultured hippocampal dendrites. High-density regions of hundreds of nanometers for the trajectories are associated with local molecular assembly generated by direct molecular interactions due to physical potential wells. We found here that for some of these regions, the potential wells are organized in ring structures. We could find up to 3 wells in a single ring. Inside a ring, receptors move in a small band whose width is of the order of hundreds of nanometers. In addition, rings are transient structures and can be observed for tens of minutes. Potential wells located in a ring are also transient and the position of their peaks can shift with time. We conclude that these rings can trap receptors in a unique geometrical structure contributing to shape receptor trafficking, a process that sustains synaptic transmission and plasticity.
2006.02359
Simon DeDeo
Zachary Wojtowicz and Simon DeDeo
From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning
19 pages, 1 figure, comments welcome
Trends in Cognitive Sciences (2020)
10.1016/j.tics.2020.09.013
null
q-bio.NC cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work in cognitive science has uncovered a diversity of explanatory values, or dimensions along which we judge explanations as better or worse. We propose a Bayesian account of how these values fit together to guide explanation. The resulting taxonomy provides a set of predictors for which explanations people prefer and shows how core values from psychology, statistics, and the philosophy of science emerge from a common mathematical framework. In addition to operationalizing the explanatory virtues associated with, for example, scientific argument-making, this framework also enables us to reinterpret the explanatory vices that drive conspiracy theories, delusions, and extremist ideologies.
[ { "created": "Wed, 3 Jun 2020 16:11:45 GMT", "version": "v1" } ]
2020-10-29
[ [ "Wojtowicz", "Zachary", "" ], [ "DeDeo", "Simon", "" ] ]
Recent work in cognitive science has uncovered a diversity of explanatory values, or dimensions along which we judge explanations as better or worse. We propose a Bayesian account of how these values fit together to guide explanation. The resulting taxonomy provides a set of predictors for which explanations people prefer and shows how core values from psychology, statistics, and the philosophy of science emerge from a common mathematical framework. In addition to operationalizing the explanatory virtues associated with, for example, scientific argument-making, this framework also enables us to reinterpret the explanatory vices that drive conspiracy theories, delusions, and extremist ideologies.
0809.1630
Alexei Koulakov
Alexei Koulakov, Tomas Hromadka, and Anthony M. Zador
Correlated connectivity and the distribution of firing rates in the neocortex
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two recent experimental observations pose a challenge to many cortical models. First, the activity in the auditory cortex is sparse, and firing rates can be described by a lognormal distribution. Second, the distribution of non-zero synaptic strengths between nearby cortical neurons can also be described by a lognormal distribution. Here we use a simple model of cortical activity to reconcile these observations. The model makes the experimentally testable prediction that synaptic efficacies onto a given cortical neuron are statistically correlated, i.e. it predicts that some neurons receive many more strong connections than other neurons. We propose a simple Hebb-like learning rule which gives rise to both lognormal firing rates and synaptic efficacies. Our results represent a first step toward reconciling sparse activity and sparse connectivity in cortical networks.
[ { "created": "Tue, 9 Sep 2008 18:44:58 GMT", "version": "v1" } ]
2008-09-10
[ [ "Koulakov", "Alexei", "" ], [ "Hromadka", "Tomas", "" ], [ "Zador", "Anthony M.", "" ] ]
Two recent experimental observations pose a challenge to many cortical models. First, the activity in the auditory cortex is sparse, and firing rates can be described by a lognormal distribution. Second, the distribution of non-zero synaptic strengths between nearby cortical neurons can also be described by a lognormal distribution. Here we use a simple model of cortical activity to reconcile these observations. The model makes the experimentally testable prediction that synaptic efficacies onto a given cortical neuron are statistically correlated, i.e. it predicts that some neurons receive many more strong connections than other neurons. We propose a simple Hebb-like learning rule which gives rise to both lognormal firing rates and synaptic efficacies. Our results represent a first step toward reconciling sparse activity and sparse connectivity in cortical networks.
1811.08718
Yunming Xiao
Yunming Xiao, Bin Wu
Close spatial arrangement of mutants favors and disfavors fixation
23 pages, 8 figures
null
10.1371/journal.pcbi.1007212
null
q-bio.PE cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperation is ubiquitous across all levels of biological systems, ranging from microbial communities to human societies. It, however, seemingly contradicts evolutionary theory, since cooperators are exploited by free-riders and thus are disfavored by natural selection. Many studies based on evolutionary game theory have tried to solve this puzzle and figure out why cooperation exists and how it emerges. Network reciprocity is one mechanism that promotes cooperation, where nodes refer to individuals and links refer to social relationships. The spatial arrangement of mutant individuals, i.e. the clustering of mutants, plays a key role in network reciprocity. In addition, many other mechanisms supporting cooperation suggest that the clustering of mutants plays an important role in the expansion of mutants. However, the clustering of mutants and the game dynamics are typically coupled, so it is still unclear how the clustering of mutants alone alters the evolutionary dynamics. To this end, we employ a minimal model with frequency-independent fitness on a circle, which disentangles the clustering of mutants from the game dynamics. The distance between two mutants on the circle is adopted as a natural indicator of the clustering of mutants, or assortment. We find that assortment amplifies selection for connected mutants compared with separated ones. Nevertheless, once mutants are separated, the more dispersed they are, the greater the chance of invasion, giving rise to a counterintuitive non-monotonic effect of clustering. On the other hand, we find that less assortative mutants speed up fixation. Our model shows that the clustering of mutants plays a non-trivial role in fixation, which emerges even when game interaction is absent.
[ { "created": "Wed, 21 Nov 2018 13:32:29 GMT", "version": "v1" }, { "created": "Wed, 21 Aug 2019 02:17:39 GMT", "version": "v2" } ]
2019-09-23
[ [ "Xiao", "Yunming", "" ], [ "Wu", "Bin", "" ] ]
Cooperation is ubiquitous across all levels of biological systems, ranging from microbial communities to human societies. It, however, seemingly contradicts evolutionary theory, since cooperators are exploited by free-riders and thus are disfavored by natural selection. Many studies based on evolutionary game theory have tried to solve this puzzle and figure out why cooperation exists and how it emerges. Network reciprocity is one mechanism that promotes cooperation, where nodes refer to individuals and links refer to social relationships. The spatial arrangement of mutant individuals, i.e. the clustering of mutants, plays a key role in network reciprocity. In addition, many other mechanisms supporting cooperation suggest that the clustering of mutants plays an important role in the expansion of mutants. However, the clustering of mutants and the game dynamics are typically coupled, so it is still unclear how the clustering of mutants alone alters the evolutionary dynamics. To this end, we employ a minimal model with frequency-independent fitness on a circle, which disentangles the clustering of mutants from the game dynamics. The distance between two mutants on the circle is adopted as a natural indicator of the clustering of mutants, or assortment. We find that assortment amplifies selection for connected mutants compared with separated ones. Nevertheless, once mutants are separated, the more dispersed they are, the greater the chance of invasion, giving rise to a counterintuitive non-monotonic effect of clustering. On the other hand, we find that less assortative mutants speed up fixation. Our model shows that the clustering of mutants plays a non-trivial role in fixation, which emerges even when game interaction is absent.
1507.07580
Stefano Fusi
Marcus K. Benna and Stefano Fusi
Computational principles of biological memory
21 pages + 46 pages of suppl. info
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Memories are stored, retained, and recollected through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions we construct a broad class of synaptic models that efficiently harnesses biological complexity to preserve numerous memories. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Importantly, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects.
[ { "created": "Mon, 27 Jul 2015 20:29:33 GMT", "version": "v1" } ]
2015-07-29
[ [ "Benna", "Marcus K.", "" ], [ "Fusi", "Stefano", "" ] ]
Memories are stored, retained, and recollected through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions we construct a broad class of synaptic models that efficiently harnesses biological complexity to preserve numerous memories. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Importantly, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects.
2004.05069
Gianluca Calcagni
Gianluca Calcagni, Justin A. Harris, Ricardo Pell\'on
Beyond Rescorla-Wagner: the ups and downs of learning
39 pages, 11 figures, 8 tables. v2: discussion improved, figures added, some parts shortened, conclusions unchanged, matches published version
Comput. Brain Behav. (2021)
10.1007/s42113-021-00103-4
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We check the robustness of a recently proposed dynamical model of associative Pavlovian learning that extends the Rescorla-Wagner (RW) model in a natural way and predicts progressively damped oscillations in the response of the subjects. Using data from two experiments, we compare the dynamical oscillatory model (DOM) with an oscillatory model made of the superposition of the RW learning curve and oscillations. Not only do the data clearly show an oscillatory pattern, but they also favor the DOM over the added-oscillation model, thus indicating that these oscillations are the manifestation of an associative process. The latter is interpreted as subjects making predictions about trial outcomes that are more extended in time than in the RW model, but with more uncertainty.
[ { "created": "Fri, 10 Apr 2020 15:16:18 GMT", "version": "v1" }, { "created": "Sun, 18 Apr 2021 20:13:59 GMT", "version": "v2" } ]
2021-04-20
[ [ "Calcagni", "Gianluca", "" ], [ "Harris", "Justin A.", "" ], [ "Pellón", "Ricardo", "" ] ]
We check the robustness of a recently proposed dynamical model of associative Pavlovian learning that extends the Rescorla-Wagner (RW) model in a natural way and predicts progressively damped oscillations in the response of the subjects. Using data from two experiments, we compare the dynamical oscillatory model (DOM) with an oscillatory model made of the superposition of the RW learning curve and oscillations. Not only do the data clearly show an oscillatory pattern, but they also favor the DOM over the added-oscillation model, thus indicating that these oscillations are the manifestation of an associative process. The latter is interpreted as subjects making predictions about trial outcomes that are more extended in time than in the RW model, but with more uncertainty.
1607.00104
Conrad Burden
Conrad J. Burden and Yurong Tang
An approximate stationary solution for multi-allele neutral diffusion with low mutation rates
34 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of determining the stationary distribution of the multi-allelic, neutral-evolution Wright-Fisher model in the diffusion limit. A full solution to this problem for an arbitrary K x K mutation rate matrix involves solving for the stationary solution of a forward Kolmogorov equation over a (K - 1)-dimensional simplex, and remains intractable. In most practical situations mutation rates are slow on the scale of the diffusion limit and the solution is heavily concentrated on the corners and edges of the simplex. In this paper we present a practical approximate solution for slow mutation rates in the form of a set of line densities along the edges of the simplex. The method of solution relies on parameterising the general non-reversible rate matrix as the sum of a reversible part and a set of (K - 1)(K - 2)/2 independent terms corresponding to fluxes of probability along closed paths around faces of the simplex. The solution is potentially a first step in estimating non-reversible evolutionary rate matrices from observed allele frequency spectra.
[ { "created": "Fri, 1 Jul 2016 03:46:34 GMT", "version": "v1" } ]
2016-07-04
[ [ "Burden", "Conrad J.", "" ], [ "Tang", "Yurong", "" ] ]
We address the problem of determining the stationary distribution of the multi-allelic, neutral-evolution Wright-Fisher model in the diffusion limit. A full solution to this problem for an arbitrary K x K mutation rate matrix involves solving for the stationary solution of a forward Kolmogorov equation over a (K - 1)-dimensional simplex, and remains intractable. In most practical situations mutation rates are slow on the scale of the diffusion limit and the solution is heavily concentrated on the corners and edges of the simplex. In this paper we present a practical approximate solution for slow mutation rates in the form of a set of line densities along the edges of the simplex. The method of solution relies on parameterising the general non-reversible rate matrix as the sum of a reversible part and a set of (K - 1)(K - 2)/2 independent terms corresponding to fluxes of probability along closed paths around faces of the simplex. The solution is potentially a first step in estimating non-reversible evolutionary rate matrices from observed allele frequency spectra.
0906.2872
Thierry Rabilloud
Christian Villiers (U823), Mireille Chevallet (BBSI), H\'el\`ene Diemer (IPHC), Rachel Couderc (U823), Heidi Freitas (U823), Alain Van Dorsselaer (IPHC), Patrice N Marche (U823), Thierry Rabilloud (BBSI)
From secretome analysis to immunology: chitosan induces major alterations in the activation of dendritic cells via a TLR4-dependent mechanism
null
Mol Cell Proteomics 8, 6 (2009) 1252-64
10.1074/mcp.M800589-MCP200
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dendritic cells are known to be activated by a wide range of microbial products, leading to cytokine production and increased levels of membrane markers such as major histocompatibility complex class II molecules. Such activated dendritic cells possess the capacity to activate na\"ive T cells. In the present study we demonstrated that immature dendritic cells secrete both the YM1 lectin and lipocalin-2. By testing the ligands of these two proteins, chitosan and siderophores, respectively, we also demonstrated that chitosan, a degradation product of various fungal and protozoal cell walls, induces an activation of dendritic cells at the membrane level, as shown by the up-regulation of membrane proteins such as class II molecules, CD80 and CD86 via a TLR4-dependent mechanism, but is not able to induce cytokine production. This led to the production of activated dendritic cells unable to stimulate T cells. However, costimulation with other microbial products overcame this partial activation and restored the capacity of these activated dendritic cells to stimulate T cells. In addition, successive stimulation with chitosan and then by lipopolysaccharide induced a dose-dependent change in the cytokine IL-12/IL-10 balance produced by the dendritic cells.
[ { "created": "Tue, 16 Jun 2009 08:48:57 GMT", "version": "v1" } ]
2009-06-17
[ [ "Villiers", "Christian", "", "U823" ], [ "Chevallet", "Mireille", "", "BBSI" ], [ "Diemer", "Hélène", "", "IPHC" ], [ "Couderc", "Rachel", "", "U823" ], [ "Freitas", "Heidi", "", "U823" ], [ "Van Dorsselaer", "Alain", "", "IPHC" ], [ "Marche", "Patrice N", "", "U823" ], [ "Rabilloud", "Thierry", "", "BBSI" ] ]
Dendritic cells are known to be activated by a wide range of microbial products, leading to cytokine production and increased levels of membrane markers such as major histocompatibility complex class II molecules. Such activated dendritic cells possess the capacity to activate na\"ive T cells. In the present study we demonstrated that immature dendritic cells secrete both the YM1 lectin and lipocalin-2. By testing the ligands of these two proteins, chitosan and siderophores, respectively, we also demonstrated that chitosan, a degradation product of various fungal and protozoal cell walls, induces an activation of dendritic cells at the membrane level, as shown by the up-regulation of membrane proteins such as class II molecules, CD80 and CD86 via a TLR4-dependent mechanism, but is not able to induce cytokine production. This led to the production of activated dendritic cells unable to stimulate T cells. However, costimulation with other microbial products overcame this partial activation and restored the capacity of these activated dendritic cells to stimulate T cells. In addition, successive stimulation with chitosan and then by lipopolysaccharide induced a dose-dependent change in the cytokine IL-12/IL-10 balance produced by the dendritic cells.
2011.11638
Cristina Postigo
Kaidi Hu, Josefina Toran, Ester Lopez-Garcia, Maria Vittoria Barbieri, Cristina Postigo, Miren Lopez de Alda, Gloria Caminal, Montserrat Sarra, Paqui Blanquez
Fungal bioremediation of diuron-contaminated waters: evaluation of its degradation and the effect of amendable factors on its removal in a trickle-bed reactor under non-sterile conditions
Published in Science of the Total Environment
Science of The Total Environment Volume 743, 15 November 2020, 140628
10.1016/j.scitotenv.2020.140628
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The occurrence of the extensively used herbicide diuron in the environment poses a severe threat to the ecosystem and human health. Four different ligninolytic fungi were studied as biodegradation candidates for the removal of diuron. Among them, T. versicolor was the most effective species, degrading rapidly not only diuron (83%) but also the major metabolite 3,4-dichloroaniline (100%), after 7-day incubation. During diuron degradation, five transformation products (TPs) were found to be formed and the structures for three of them are tentatively proposed. According to the identified TPs, a hydroxylated intermediate 3-(3,4-dichlorophenyl)-1-hydroxymethyl-1-methylurea (DCPHMU) was further metabolized into the N-dealkylated compounds 3-(3,4-dichlorophenyl)-1-methylurea (DCPMU) and 3,4-dichlorophenylurea (DCPU). The discovery of DCPHMU suggests a relevant role of hydroxylation for subsequent N-demethylation, helping to better understand the main reaction mechanisms of diuron detoxification. Experiments also evidenced that degradation reactions may occur intracellularly and be catalyzed by the cytochrome P450 system. A response surface method, established by central composite design, assisted in evaluating the effect of operational variables in a trickle-bed bioreactor immobilized with T. versicolor on diuron removal. The best performance was obtained at low recycling ratios and influent flow rates. Furthermore, results indicate that the contact time between the contaminant and immobilized fungi plays a crucial role in diuron removal. This study represents a pioneering step forward among techniques for bioremediation of pesticide-contaminated waters using fungal reactors at a real scale.
[ { "created": "Mon, 23 Nov 2020 17:08:49 GMT", "version": "v1" } ]
2020-11-25
[ [ "Hu", "Kaidi", "" ], [ "Toran", "Josefina", "" ], [ "Lopez-Garcia", "Ester", "" ], [ "Barbieri", "Maria Vittoria", "" ], [ "Postigo", "Cristina", "" ], [ "de Alda", "Miren Lopez", "" ], [ "Caminal", "Gloria", "" ], [ "Sarra", "Montserrat", "" ], [ "Blanquez", "Paqui", "" ] ]
The occurrence of the extensively used herbicide diuron in the environment poses a severe threat to the ecosystem and human health. Four different ligninolytic fungi were studied as biodegradation candidates for the removal of diuron. Among them, T. versicolor was the most effective species, degrading rapidly not only diuron (83%) but also the major metabolite 3,4-dichloroaniline (100%), after 7-day incubation. During diuron degradation, five transformation products (TPs) were found to be formed and the structures for three of them are tentatively proposed. According to the identified TPs, a hydroxylated intermediate 3-(3,4-dichlorophenyl)-1-hydroxymethyl-1-methylurea (DCPHMU) was further metabolized into the N-dealkylated compounds 3-(3,4-dichlorophenyl)-1-methylurea (DCPMU) and 3,4-dichlorophenylurea (DCPU). The discovery of DCPHMU suggests a relevant role of hydroxylation for subsequent N-demethylation, helping to better understand the main reaction mechanisms of diuron detoxification. Experiments also evidenced that degradation reactions may occur intracellularly and be catalyzed by the cytochrome P450 system. A response surface method, established by central composite design, assisted in evaluating the effect of operational variables in a trickle-bed bioreactor immobilized with T. versicolor on diuron removal. The best performance was obtained at low recycling ratios and influent flow rates. Furthermore, results indicate that the contact time between the contaminant and immobilized fungi plays a crucial role in diuron removal. This study represents a pioneering step forward among techniques for bioremediation of pesticide-contaminated waters using fungal reactors at a real scale.
2005.11186
Constantinos Siettos
Evangelos Galaris, Ioannis Gallos, Ivan Myatchin, Lieven Lagae, Constantinos Siettos
EEG source localization analysis in epileptic children during a visual working-memory task
null
International Journal of Numerical Methods in Biomedical Engineering, 36:e3404, 2020
10.1002/cnm.3404
null
q-bio.NC cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We localize the sources of brain activity of children with epilepsy based on EEG recordings acquired during a visual discrimination working memory task. For the numerical solution of the inverse problem, with the aid of age-specific MRI scans processed from a publicly available database, we use and compare three regularization numerical methods, namely the standardized Low Resolution Electromagnetic Tomography (sLORETA), the weighted Minimum Norm Estimation (wMNE) and the dynamic Statistical Parametric Mapping (dSPM). We show that all three methods provide the same spatio-temporal patterns of differences between epileptic and control children. In particular, our analysis reveals statistically significant differences between the two groups in regions of the Parietal Cortex, indicating that these may serve as "biomarkers" for diagnostic purposes and ultimately localized treatment.
[ { "created": "Fri, 22 May 2020 13:43:23 GMT", "version": "v1" } ]
2023-03-16
[ [ "Galaris", "Evangelos", "" ], [ "Gallos", "Ioannis", "" ], [ "Myatchin", "Ivan", "" ], [ "Lagae", "Lieven", "" ], [ "Siettos", "Constantinos", "" ] ]
We localize the sources of brain activity of children with epilepsy based on EEG recordings acquired during a visual discrimination working memory task. For the numerical solution of the inverse problem, with the aid of age-specific MRI scans processed from a publicly available database, we use and compare three regularization numerical methods, namely the standardized Low Resolution Electromagnetic Tomography (sLORETA), the weighted Minimum Norm Estimation (wMNE) and the dynamic Statistical Parametric Mapping (dSPM). We show that all three methods provide the same spatio-temporal patterns of differences between epileptic and control children. In particular, our analysis reveals statistically significant differences between the two groups in regions of the Parietal Cortex, indicating that these may serve as "biomarkers" for diagnostic purposes and ultimately localized treatment.
1404.6932
Kunihiko Goto
Kunihiko Goto and Toshio Nakaye
Dynamic and integrated mechanical movements of a rat brain associated with evoked potentials
7 pages,and 3 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/publicdomain/
By using a piezoelectric sensor, it was demonstrated that the visual evoked potential of a rat brain was accompanied by mechanical movements of the brain when it was excited. A phase of upward movement was found to be followed by a phase of downward movement. The largest upward movement was a rise in swelling pressure on the order of 100 micrograms, which was about 40 times larger than that of the bullfrog sympathetic ganglion. The waves of mechanical movements were more complicated than those of the evoked potentials. These findings are thought to be due to the fact that the evoked potentials are propagated from the immediate surroundings of the sensor, while the mechanical signals are produced from anywhere beneath the piezo sensor. The mechanisms of mechanical movements propagated in the brain by electrical stimulation are discussed.
[ { "created": "Mon, 28 Apr 2014 11:09:46 GMT", "version": "v1" } ]
2014-04-29
[ [ "Goto", "Kunihiko", "" ], [ "Nakaye", "Toshio", "" ] ]
By using a piezoelectric sensor, it was demonstrated that the visual evoked potential of a rat brain was accompanied by mechanical movements of the brain when it was excited. A phase of upward movement was found to be followed by a phase of downward movement. The largest upward movement was a rise in swelling pressure on the order of 100 micrograms, which was about 40 times larger than that of the bullfrog sympathetic ganglion. The waves of mechanical movements were more complicated than those of the evoked potentials. These findings are thought to be due to the fact that the evoked potentials are propagated from the immediate surroundings of the sensor, while the mechanical signals are produced from anywhere beneath the piezo sensor. The mechanisms of mechanical movements propagated in the brain by electrical stimulation are discussed.
2307.13079
Siavash Ahrar
Brian T. Le, Katherine M. Auer, David A. Lopez, Justin P. Shum, Brian Suarsana, Ga-Young Kelly Suh, Per Niklas Hedde, Siavash Ahrar
Orthogonal-view Microscope for the Biomechanics Investigations of Aquatic Organisms
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Microscopes are essential for the biomechanics and hydrodynamic investigation of small aquatic organisms. We report a DIY microscope (GLUBscope) that enables the visualization of organisms from two orthogonal imaging planes (top and side views). Compared to conventional imaging systems, this approach provides a comprehensive visualization strategy for organisms that can have complex shapes and morphologies. The microscope was constructed by combining custom 3D-printed parts and off-the-shelf components. The system is designed for modularity and reconfigurability. Open-source design files and build instructions are provided in this report. Additionally, proof-of-use experiments that combine the GLUBscope with an analysis pipeline, particularly with Hydra and other organisms, were demonstrated. Beyond the applications demonstrated, the system can be used or modified for various imaging applications.
[ { "created": "Mon, 24 Jul 2023 19:06:00 GMT", "version": "v1" } ]
2023-07-26
[ [ "Le", "Brian T.", "" ], [ "Auer", "Katherine M.", "" ], [ "Lopez", "David A.", "" ], [ "Shum", "Justin P.", "" ], [ "Suarsana", "Brian", "" ], [ "Suh", "Ga-Young Kelly", "" ], [ "Hedde", "Per Niklas", "" ], [ "Ahrar", "Siavash", "" ] ]
Microscopes are essential for the biomechanics and hydrodynamic investigation of small aquatic organisms. We report a DIY microscope (GLUBscope) that enables the visualization of organisms from two orthogonal imaging planes (top and side views). Compared to conventional imaging systems, this approach provides a comprehensive visualization strategy for organisms that can have complex shapes and morphologies. The microscope was constructed by combining custom 3D-printed parts and off-the-shelf components. The system is designed for modularity and reconfigurability. Open-source design files and build instructions are provided in this report. Additionally, proof-of-use experiments that combine the GLUBscope with an analysis pipeline, particularly with Hydra and other organisms, were demonstrated. Beyond the applications demonstrated, the system can be used or modified for various imaging applications.
1609.01664
Olga Vsevolozhskaya
Olga A. Vsevolozhskaya, Gabriel Ruiz, Dmitri V. Zaykin
Assessment of P-value variability in the current replicability crisis
Corrected typos
null
null
null
q-bio.GN stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Increased availability of data and accessibility of computational tools in recent years have created unprecedented opportunities for scientific research driven by statistical analysis. Inherent limitations of statistics impose constraints on the reliability of conclusions drawn from data, but misuse of statistical methods is a growing concern. Significance, hypothesis testing, and the accompanying P-values are being scrutinized as the most widely applied and abused practices. One line of critique is that P-values are inherently unfit to fulfill their ostensible role as measures of a scientific hypothesis's credibility. It has also been suggested that while P-values may have a role as summary measures of effect, researchers underappreciate the degree of randomness in the P-value. High variability of P-values suggests that, having obtained a small P-value in one study, one is nevertheless likely to obtain a much larger P-value in a similarly powered replication study. Thus, "replicability of the P-value" is itself questionable. To characterize P-value variability one can use prediction intervals whose endpoints reflect the likely spread of P-values that could have been obtained by a replication study. Unfortunately, the intervals currently in use, the P-intervals, are based on unrealistic implicit assumptions. Namely, P-intervals are constructed with assumptions that imply substantial chances of encountering large values of effect size in an observational study, which leads to bias. As an alternative to P-intervals, we develop a method that gives researchers flexibility by providing them with the means to control these assumptions. Unlike endpoints of P-intervals, endpoints of our intervals are directly interpreted as probabilistic bounds for replication P-values and are resistant to selection bias contingent upon approximate prior knowledge of the effect size distribution.
[ { "created": "Tue, 6 Sep 2016 17:35:29 GMT", "version": "v1" }, { "created": "Wed, 7 Sep 2016 21:10:42 GMT", "version": "v2" }, { "created": "Sat, 10 Sep 2016 15:28:19 GMT", "version": "v3" } ]
2016-09-13
[ [ "Vsevolozhskaya", "Olga A.", "" ], [ "Ruiz", "Gabriel", "" ], [ "Zaykin", "Dmitri V.", "" ] ]
Increased availability of data and accessibility of computational tools in recent years have created unprecedented opportunities for scientific research driven by statistical analysis. Inherent limitations of statistics impose constraints on the reliability of conclusions drawn from data, but misuse of statistical methods is a growing concern. Significance, hypothesis testing, and the accompanying P-values are being scrutinized as the most widely applied and abused practices. One line of critique is that P-values are inherently unfit to fulfill their ostensible role as measures of a scientific hypothesis's credibility. It has also been suggested that while P-values may have a role as summary measures of effect, researchers underappreciate the degree of randomness in the P-value. High variability of P-values suggests that, having obtained a small P-value in one study, one is nevertheless likely to obtain a much larger P-value in a similarly powered replication study. Thus, "replicability of the P-value" is itself questionable. To characterize P-value variability one can use prediction intervals whose endpoints reflect the likely spread of P-values that could have been obtained by a replication study. Unfortunately, the intervals currently in use, the P-intervals, are based on unrealistic implicit assumptions. Namely, P-intervals are constructed with assumptions that imply substantial chances of encountering large values of effect size in an observational study, which leads to bias. As an alternative to P-intervals, we develop a method that gives researchers flexibility by providing them with the means to control these assumptions. Unlike endpoints of P-intervals, endpoints of our intervals are directly interpreted as probabilistic bounds for replication P-values and are resistant to selection bias contingent upon approximate prior knowledge of the effect size distribution.
0901.1657
Thu-Hien To
Thu-Hien To and Michel Habib
Level-k Phylogenetic Network can be Constructed from a Dense Triplet Set in Polynomial Time
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a dense triplet set $\mathcal{T}$, two interesting questions arise: Does there exist any phylogenetic network consistent with $\mathcal{T}$? And if so, can we find an effective algorithm to construct one? For networks of level $k=0$, 1, or 2, these questions were answered with effective polynomial algorithms. For higher levels $k$, partial answers were recently obtained with an $O(|\mathcal{T}|^{k+1})$ time algorithm for simple networks. In this paper we give a complete answer to the general case. The main idea is to use a special property of SN-sets in a level-$k$ network. As a consequence, we can also find the level-$k$ network with the minimum number of reticulations in polynomial time.
[ { "created": "Mon, 12 Jan 2009 20:54:44 GMT", "version": "v1" } ]
2009-01-13
[ [ "To", "Thu-Hien", "" ], [ "Habib", "Michel", "" ] ]
Given a dense triplet set $\mathcal{T}$, two interesting questions arise: Does there exist any phylogenetic network consistent with $\mathcal{T}$? And if so, can we find an effective algorithm to construct one? For networks of level $k=0$, 1, or 2, these questions were answered with effective polynomial algorithms. For higher levels $k$, partial answers were recently obtained with an $O(|\mathcal{T}|^{k+1})$ time algorithm for simple networks. In this paper we give a complete answer to the general case. The main idea is to use a special property of SN-sets in a level-$k$ network. As a consequence, we can also find the level-$k$ network with the minimum number of reticulations in polynomial time.
2211.05661
Fanwang Meng
Wenwen Liu, Cheng Luo, Hecheng Wang, Fanwang Meng
A Benchmarking Dataset with 2440 Organic Molecules for Volume Distribution at Steady State
null
null
null
null
q-bio.QM physics.bio-ph physics.chem-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Background: The volume of distribution at steady state (VDss) is a fundamental pharmacokinetic (PK) property of drugs, which measures how effectively a drug molecule is distributed throughout the body. Along with the clearance (CL), it determines the half-life and, therefore, the drug dosing interval. However, the limited size of available molecular datasets restricts the generalizability of reported machine learning models. Objective: This study aims to provide a clean and comprehensive dataset for human VDss as a benchmarking data source, fostering and benefiting future predictive studies. Moreover, several predictive models were also built with machine learning regression algorithms. Methods: The dataset was curated from 13 publicly accessible data sources and the DrugBank database, entirely from intravenous drug administration, and then underwent extensive data cleaning. Molecular descriptors were calculated with Mordred, and feature selection was conducted for constructing predictive models. Five machine learning methods were used to build regression models, grid search was used to optimize hyperparameters, and ten-fold cross-validation was used to evaluate each model. Results: An enriched dataset of VDss (https://github.com/da-wen-er/VDss) was constructed with 2440 molecules. Among the prediction models, the LightGBM model was the most stable and had the best internal prediction ability, with Q2 = 0.837 and R2 = 0.814; for the other four models, Q2 was higher than 0.79. Conclusions: To the best of our knowledge, this is the largest dataset for VDss, and it can be used as a benchmark for computational studies of VDss. Moreover, the regression models reported in this study can be of use for pharmacokinetics-related studies.
[ { "created": "Thu, 10 Nov 2022 15:46:17 GMT", "version": "v1" } ]
2022-11-11
[ [ "Liu", "Wenwen", "" ], [ "Luo", "Cheng", "" ], [ "Wang", "Hecheng", "" ], [ "Meng", "Fanwang", "" ] ]
Background: The volume of distribution at steady state (VDss) is a fundamental pharmacokinetic (PK) property of drugs, which measures how effectively a drug molecule is distributed throughout the body. Along with the clearance (CL), it determines the half-life and, therefore, the drug dosing interval. However, the limited size of available molecular datasets restricts the generalizability of reported machine learning models. Objective: This study aims to provide a clean and comprehensive dataset for human VDss as a benchmarking data source, fostering and benefiting future predictive studies. Moreover, several predictive models were also built with machine learning regression algorithms. Methods: The dataset was curated from 13 publicly accessible data sources and the DrugBank database, entirely from intravenous drug administration, and then underwent extensive data cleaning. Molecular descriptors were calculated with Mordred, and feature selection was conducted for constructing predictive models. Five machine learning methods were used to build regression models, grid search was used to optimize hyperparameters, and ten-fold cross-validation was used to evaluate each model. Results: An enriched dataset of VDss (https://github.com/da-wen-er/VDss) was constructed with 2440 molecules. Among the prediction models, the LightGBM model was the most stable and had the best internal prediction ability, with Q2 = 0.837 and R2 = 0.814; for the other four models, Q2 was higher than 0.79. Conclusions: To the best of our knowledge, this is the largest dataset for VDss, and it can be used as a benchmark for computational studies of VDss. Moreover, the regression models reported in this study can be of use for pharmacokinetics-related studies.
1311.3717
Amin Emad
Jonathan G. Ligo, Minji Kim, Amin Emad, Olgica Milenkovic and Venugopal V. Veeravalli
MCUIUC -- A New Framework for Metagenomic Read Compression
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics is an emerging field of molecular biology concerned with analyzing the genomes of environmental samples comprising many diverse organisms. Given the nature of metagenomic data, one usually has to sequence the genomic material of all organisms in a batch, leading to a mix of reads coming from different DNA sequences. In deep high-throughput sequencing experiments, the volume of the raw reads is extremely high, frequently exceeding 600 Gb. With an ever-increasing demand for storing such reads for future studies, the issue of efficient metagenomic compression becomes of paramount importance. We present the first known approach to metagenome read compression, termed MCUIUC (Metagenomic Compression at UIUC). The gist of the proposed algorithm is to perform classification of reads based on unique organism identifiers, followed by reference-based alignment of reads for individually identified organisms, and metagenomic assembly of unclassified reads. Once assembly and classification are completed, lossless reference-based compression is performed via positional encoding. We evaluate the performance of the algorithm on moderate-sized synthetic metagenomic samples involving 15 randomly selected organisms and describe future directions for improving the proposed compression method.
[ { "created": "Fri, 15 Nov 2013 03:57:05 GMT", "version": "v1" } ]
2013-11-20
[ [ "Ligo", "Jonathan G.", "" ], [ "Kim", "Minji", "" ], [ "Emad", "Amin", "" ], [ "Milenkovic", "Olgica", "" ], [ "Veeravalli", "Venugopal V.", "" ] ]
Metagenomics is an emerging field of molecular biology concerned with analyzing the genomes of environmental samples comprising many diverse organisms. Given the nature of metagenomic data, one usually has to sequence the genomic material of all organisms in a batch, leading to a mix of reads coming from different DNA sequences. In deep high-throughput sequencing experiments, the volume of the raw reads is extremely high, frequently exceeding 600 Gb. With an ever-increasing demand for storing such reads for future studies, the issue of efficient metagenomic compression becomes of paramount importance. We present the first known approach to metagenome read compression, termed MCUIUC (Metagenomic Compression at UIUC). The gist of the proposed algorithm is to perform classification of reads based on unique organism identifiers, followed by reference-based alignment of reads for individually identified organisms, and metagenomic assembly of unclassified reads. Once assembly and classification are completed, lossless reference-based compression is performed via positional encoding. We evaluate the performance of the algorithm on moderate-sized synthetic metagenomic samples involving 15 randomly selected organisms and describe future directions for improving the proposed compression method.
2007.03245
Oleksandr Oliynyk
Keke Hu, Yan-Ling Liu, Alexander Oleinick (PASTEUR), Michael Mirkin, Wei-Hua Huang, Christian Amatore (PASTEUR)
Nanoelectrodes for intracellular measurements of reactive oxygen and nitrogen species in single living cells
null
Current Opinion in Electrochemistry, Elsevier, 2020, 22, pp.44-50
10.1016/j.coelec.2020.04.003
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reactive oxygen and nitrogen species (ROS and RNS) play important roles in various physiological processes (e.g., phagocytosis) and pathological conditions (e.g., cancer). The primary ROS/RNS, viz., hydrogen peroxide, peroxynitrite ion, nitric oxide, and nitrite ion, can be oxidized at different electrode potentials and therefore detected and quantified by electroanalytical techniques. Nanometer-sized electrochemical probes are especially suitable for measuring ROS/RNS in single cells and cellular organelles. In this article, we survey recent advances in localized measurements of ROS/RNS inside single cells and discuss several methodological issues, including optimization of nanoelectrode geometry, precise positioning of an electrochemical probe inside a cell, and interpretation of electroanalytical data.
[ { "created": "Tue, 7 Jul 2020 07:29:23 GMT", "version": "v1" } ]
2020-07-08
[ [ "Hu", "Keke", "", "PASTEUR" ], [ "Liu", "Yan-Ling", "", "PASTEUR" ], [ "Oleinick", "Alexander", "", "PASTEUR" ], [ "Mirkin", "Michael", "", "PASTEUR" ], [ "Huang", "Wei-Hua", "", "PASTEUR" ], [ "Amatore", "Christian", "", "PASTEUR" ] ]
Reactive oxygen and nitrogen species (ROS and RNS) play important roles in various physiological processes (e.g., phagocytosis) and pathological conditions (e.g., cancer). The primary ROS/RNS, viz., hydrogen peroxide, peroxynitrite ion, nitric oxide, and nitrite ion, can be oxidized at different electrode potentials and therefore detected and quantified by electroanalytical techniques. Nanometer-sized electrochemical probes are especially suitable for measuring ROS/RNS in single cells and cellular organelles. In this article, we survey recent advances in localized measurements of ROS/RNS inside single cells and discuss several methodological issues, including optimization of nanoelectrode geometry, precise positioning of an electrochemical probe inside a cell, and interpretation of electroanalytical data.
1903.04866
Donald Forsdyke Dr.
Donald R. Forsdyke
Success of Alignment-Free Oligonucleotide (k-mer) Analysis Confirms Relative Importance of Genomes not Genes in Speciation and Phylogeny
A 25 page review
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The utility of DNA sequence substrings (k-mers) in alignment-free phylogenetic classification, including that of bacteria and viruses, is increasingly recognized. However, its biological basis eludes many twenty-first century practitioners. A path from the nineteenth century recognition of the informational basis of heredity to the modern era can be discerned. Crick's DNA "unpairing postulate" predicted that recombinational pairing of homologous DNAs during meiosis would be mediated by short k-mers in the loops of stem-loop structures extruded from classical duplex helices. The complementary "kissing" duplex loops - like tRNA anticodon-codon k-mer duplexes - would seed a more extensive pairing that would then extend until limited by lack of homology or other factors. Indeed, this became the principle behind alignment-based methods that assessed similarity by degree of DNA-DNA reassociation in vitro. These are now seen as less sensitive than alignment-free methods that are closely consistent, both theoretically and mechanistically, with chromosomal anti-recombination models for the initiation of divergence into new species. The analytical power of k-mer differences supports the theses that evolutionary advance sometimes serves the needs of nucleic acids (genomes) rather than proteins (genes), and that such differences have often played a role in early speciation events.
[ { "created": "Tue, 12 Mar 2019 12:21:36 GMT", "version": "v1" }, { "created": "Thu, 25 Apr 2019 19:37:06 GMT", "version": "v2" } ]
2019-04-29
[ [ "Forsdyke", "Donald R.", "" ] ]
The utility of DNA sequence substrings (k-mers) in alignment-free phylogenetic classification, including that of bacteria and viruses, is increasingly recognized. However, its biological basis eludes many twenty-first century practitioners. A path from the nineteenth century recognition of the informational basis of heredity to the modern era can be discerned. Crick's DNA "unpairing postulate" predicted that recombinational pairing of homologous DNAs during meiosis would be mediated by short k-mers in the loops of stem-loop structures extruded from classical duplex helices. The complementary "kissing" duplex loops - like tRNA anticodon-codon k-mer duplexes - would seed a more extensive pairing that would then extend until limited by lack of homology or other factors. Indeed, this became the principle behind alignment-based methods that assessed similarity by degree of DNA-DNA reassociation in vitro. These are now seen as less sensitive than alignment-free methods that are closely consistent, both theoretically and mechanistically, with chromosomal anti-recombination models for the initiation of divergence into new species. The analytical power of k-mer differences supports the theses that evolutionary advance sometimes serves the needs of nucleic acids (genomes) rather than proteins (genes), and that such differences have often played a role in early speciation events.
2110.04913
Diederik Aerts
Diederik Aerts and Lester Beltran
Are Words the Quanta of Human Language? Extending the Domain of Quantum Cognition
27 pages, 3 figures
Entropy 24, 6 (2022)
10.3390/e24010006
null
q-bio.NC cs.CL quant-ph
http://creativecommons.org/licenses/by/4.0/
In previous research, we showed that 'texts that tell a story' exhibit a statistical structure that is not Maxwell-Boltzmann but Bose-Einstein. Our explanation is that this is due to the presence of 'indistinguishability' in human language, a result of the same words in different parts of the story being indistinguishable from one another. In the current article, we set out to provide an explanation for these Bose-Einstein statistics. We show that it is the presence of 'meaning' in 'stories' that gives rise to the lack of independence characteristic of Bose-Einstein statistics, and provide conclusive evidence that 'words can be considered the quanta of human language', structurally similar to how 'photons are the quanta of light'. Using several studies on entanglement from our Brussels research group, we also show that it is the presence of 'meaning' in texts that makes the von Neumann entropy of a total text smaller relative to the entropy of the words composing it. We explain how the new insights in this article fit in with the research domain called 'quantum cognition', where quantum probability models and quantum vector spaces are used in human cognition, and are also relevant to the use of quantum structures in information retrieval and natural language processing, where they introduce 'quantization' and 'Bose-Einstein statistics' as relevant quantum effects. Inspired by the conceptuality interpretation of quantum mechanics, and relying on the new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics. We likewise note how it can also give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.
[ { "created": "Sun, 10 Oct 2021 22:02:06 GMT", "version": "v1" }, { "created": "Tue, 21 Dec 2021 03:59:22 GMT", "version": "v2" } ]
2023-02-27
[ [ "Aerts", "Diederik", "" ], [ "Beltran", "Lester", "" ] ]
In previous research, we showed that 'texts that tell a story' exhibit a statistical structure that is not Maxwell-Boltzmann but Bose-Einstein. Our explanation is that this is due to the presence of 'indistinguishability' in human language, a result of the same words in different parts of the story being indistinguishable from one another. In the current article, we set out to provide an explanation for these Bose-Einstein statistics. We show that it is the presence of 'meaning' in 'stories' that gives rise to the lack of independence characteristic of Bose-Einstein statistics, and provide conclusive evidence that 'words can be considered the quanta of human language', structurally similar to how 'photons are the quanta of light'. Using several studies on entanglement from our Brussels research group, we also show that it is the presence of 'meaning' in texts that makes the von Neumann entropy of a total text smaller relative to the entropy of the words composing it. We explain how the new insights in this article fit in with the research domain called 'quantum cognition', where quantum probability models and quantum vector spaces are used in human cognition, and are also relevant to the use of quantum structures in information retrieval and natural language processing, where they introduce 'quantization' and 'Bose-Einstein statistics' as relevant quantum effects. Inspired by the conceptuality interpretation of quantum mechanics, and relying on the new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics. We likewise note how it can also give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.
2206.00455
Bangwei Guo
Bangwei Guo, Xingyu Li, Miaomiao Yang, Hong Zhang, Xu Steven Xu
A robust and lightweight deep attention multiple instance learning algorithm for predicting genetic alterations
null
null
null
null
q-bio.QM cs.AI cs.CV cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep-learning models based on whole-slide digital pathology images (WSIs) have become increasingly popular for predicting molecular biomarkers. Instance-based models have been the mainstream strategy for predicting genetic alterations using WSIs, although bag-based models along with self-attention-based algorithms have been proposed for other digital pathology applications. In this paper, we propose a novel Attention-based Multiple Instance Mutation Learning (AMIML) model for predicting gene mutations. AMIML comprises successive 1-D convolutional layers, a decoder, and a residual weight connection to facilitate further integration of a lightweight attention mechanism that detects the most predictive image patches. Using data for 24 clinically relevant genes from four cancer cohorts in The Cancer Genome Atlas (TCGA) studies (UCEC, BRCA, GBM and KIRC), we compared AMIML with one popular instance-based model and four recently published bag-based models (e.g., CHOWDER, HE2RNA, etc.). AMIML demonstrated excellent robustness, not only outperforming all five baseline algorithms for the vast majority of the tested genes (17 out of 24), but also providing near-best performance for the other seven genes. Conversely, the performance of the published baseline algorithms varied across different cancers/genes. In addition, compared to the published models for genetic alterations, AMIML provided a significant improvement for predicting a wide range of genes (e.g., KMT2C, TP53, and SETD2 for KIRC; ERBB2, BRCA1, and BRCA2 for BRCA; JAK1, POLE, and MTOR for UCEC) and produced outstanding predictive models for other clinically relevant gene mutations that have not been reported in the current literature. Furthermore, with its flexible and interpretable attention-based MIL pooling mechanism, AMIML can further zero in on and detect predictive image patches.
[ { "created": "Tue, 31 May 2022 15:45:29 GMT", "version": "v1" } ]
2022-06-02
[ [ "Guo", "Bangwei", "" ], [ "Li", "Xingyu", "" ], [ "Yang", "Miaomiao", "" ], [ "Zhang", "Hong", "" ], [ "Xu", "Xu Steven", "" ] ]
Deep-learning models based on whole-slide digital pathology images (WSIs) have become increasingly popular for predicting molecular biomarkers. Instance-based models have been the mainstream strategy for predicting genetic alterations using WSIs, although bag-based models along with self-attention-based algorithms have been proposed for other digital pathology applications. In this paper, we propose a novel Attention-based Multiple Instance Mutation Learning (AMIML) model for predicting gene mutations. AMIML comprises successive 1-D convolutional layers, a decoder, and a residual weight connection to facilitate further integration of a lightweight attention mechanism that detects the most predictive image patches. Using data for 24 clinically relevant genes from four cancer cohorts in The Cancer Genome Atlas (TCGA) studies (UCEC, BRCA, GBM and KIRC), we compared AMIML with one popular instance-based model and four recently published bag-based models (e.g., CHOWDER, HE2RNA, etc.). AMIML demonstrated excellent robustness, not only outperforming all five baseline algorithms for the vast majority of the tested genes (17 out of 24), but also providing near-best performance for the other seven genes. Conversely, the performance of the published baseline algorithms varied across different cancers/genes. In addition, compared to the published models for genetic alterations, AMIML provided a significant improvement for predicting a wide range of genes (e.g., KMT2C, TP53, and SETD2 for KIRC; ERBB2, BRCA1, and BRCA2 for BRCA; JAK1, POLE, and MTOR for UCEC) and produced outstanding predictive models for other clinically relevant gene mutations that have not been reported in the current literature. Furthermore, with its flexible and interpretable attention-based MIL pooling mechanism, AMIML can further zero in on and detect predictive image patches.
2307.14099
Jeremi K. Ochab
Jakub Janarek, Zbigniew Drogosz, Jacek Grela, Jeremi K. Ochab, Pawe{\l} O\'swi\k{e}cimka
Investigating structural and functional aspects of the brain's criticality in stroke
24 pages, 11 figures
Scientific Reports 13: 12341 (2023)
10.1038/s41598-023-39467-x
null
q-bio.NC cond-mat.dis-nn q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the question of the brain's critical dynamics after an injury such as a stroke. It is hypothesized that the healthy brain operates near a phase transition (critical point), which provides optimal conditions for information transmission and responses to inputs. If structural damage could cause the critical point to disappear and thus make self-organized criticality unachievable, it would offer a theoretical explanation for the post-stroke impairment of brain function. In our contribution, however, we demonstrate, using network models of the brain, that the dynamics remain critical even after a stroke. In cases where the average size of the second-largest cluster of active nodes, one of the commonly used indicators of criticality, shows anomalous behavior, this results from the loss of integrity of the network, quantifiable within graph theory, and not from genuinely non-critical dynamics. We propose a new simple model of an artificial stroke that explains this anomaly. The proposed interpretation of the results is confirmed by an analysis of real connectomes acquired from post-stroke patients and a control group. The results presented refer to neurobiological data; however, the conclusions reached apply to a broad class of complex systems that admit a critical state.
[ { "created": "Wed, 26 Jul 2023 10:58:53 GMT", "version": "v1" } ]
2023-08-01
[ [ "Janarek", "Jakub", "" ], [ "Drogosz", "Zbigniew", "" ], [ "Grela", "Jacek", "" ], [ "Ochab", "Jeremi K.", "" ], [ "Oświęcimka", "Paweł", "" ] ]
This paper addresses the question of the brain's critical dynamics after an injury such as a stroke. It is hypothesized that the healthy brain operates near a phase transition (critical point), which provides optimal conditions for information transmission and responses to inputs. If structural damage could cause the critical point to disappear and thus make self-organized criticality unachievable, it would offer a theoretical explanation for the post-stroke impairment of brain function. In our contribution, however, we demonstrate, using network models of the brain, that the dynamics remain critical even after a stroke. In cases where the average size of the second-largest cluster of active nodes, one of the commonly used indicators of criticality, shows anomalous behavior, this results from the loss of integrity of the network, quantifiable within graph theory, and not from genuinely non-critical dynamics. We propose a new simple model of an artificial stroke that explains this anomaly. The proposed interpretation of the results is confirmed by an analysis of real connectomes acquired from post-stroke patients and a control group. The results presented refer to neurobiological data; however, the conclusions reached apply to a broad class of complex systems that admit a critical state.
1811.07140
Xi Han
X. Han, L. Zhang, K. Zhou, X. Wang
Deep learning framework DNN with conditional WGAN for protein solubility prediction
7 pages, 4 figures, 3 tables, journal Bioinformatics (submitted)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein solubility plays a critical role in improving the production yield of recombinant proteins in the biocatalysis and pharmaceutical fields. To some extent, protein solubility can represent the function and activity of biocatalysts, which are mainly composed of recombinant proteins. Highly soluble proteins are more effective in biocatalytic processes and can reduce the cost of biocatalysts. Screening proteins by experiments in vivo is time-consuming and expensive. In the literature, a large number of machine learning models have been investigated, but the parameters of those models are underdetermined given the insufficient data on protein solubility. A data augmentation algorithm that can enlarge the dataset of protein solubility and improve the performance of the prediction model is highly desired, as it can alleviate the common issue of insufficient data when developing machine learning models for biotechnology applications. We implemented a novel approach in which a data augmentation algorithm, conditional WGAN, was used to improve the prediction performance of a DNN for protein solubility from protein sequences by generating artificial data. After adding mimic data produced by the conditional WGAN, the prediction performance, measured by $R^{2}$, improved compared with the $R^{2}$ without data augmentation. After tuning the hyperparameters of the two algorithms and organizing the dataset, we achieved an $R^{2}$ value of $45.04\%$, an improvement of about $10\%$ in $R^{2}$ compared with the previous study using the same dataset. Data augmentation opens the door to applications of machine learning models on biological data, as machine learning models often fail to be well trained on small datasets.
[ { "created": "Sat, 17 Nov 2018 10:34:24 GMT", "version": "v1" } ]
2018-11-20
[ [ "Han", "X.", "" ], [ "Zhang", "L.", "" ], [ "Zhou", "K.", "" ], [ "Wang", "X.", "" ] ]
Protein solubility plays a critical role in improving the production yield of recombinant proteins in the biocatalysis and pharmaceutical fields. To some extent, protein solubility can represent the function and activity of biocatalysts, which are mainly composed of recombinant proteins. Highly soluble proteins are more effective in biocatalytic processes and can reduce the cost of biocatalysts. Screening proteins by experiments in vivo is time-consuming and expensive. In the literature, a large number of machine learning models have been investigated, but the parameters of those models are underdetermined given the insufficient data on protein solubility. A data augmentation algorithm that can enlarge the dataset of protein solubility and improve the performance of the prediction model is highly desired, as it can alleviate the common issue of insufficient data when developing machine learning models for biotechnology applications. We implemented a novel approach in which a data augmentation algorithm, conditional WGAN, was used to improve the prediction performance of a DNN for protein solubility from protein sequences by generating artificial data. After adding mimic data produced by the conditional WGAN, the prediction performance, measured by $R^{2}$, improved compared with the $R^{2}$ without data augmentation. After tuning the hyperparameters of the two algorithms and organizing the dataset, we achieved an $R^{2}$ value of $45.04\%$, an improvement of about $10\%$ in $R^{2}$ compared with the previous study using the same dataset. Data augmentation opens the door to applications of machine learning models on biological data, as machine learning models often fail to be well trained on small datasets.
2107.11856
Amine Amor
Amine Amor (1), Pietro Lio' (1), Vikash Singh (1), Ramon Vi\~nas Torn\'e (1), Helena Andres Terre (1)
Graph Representation Learning on Tissue-Specific Multi-Omics
This paper was accepted at the 2021 ICML Workshop on Computational Biology
null
null
null
q-bio.GN cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Combining different modalities of data from human tissues has been critical in advancing biomedical research and personalised medical care. In this study, we leverage a graph embedding model (i.e., VGAE) to perform link prediction on tissue-specific Gene-Gene Interaction (GGI) networks. Through ablation experiments, we prove that the combination of multiple biological modalities (i.e., multi-omics) leads to powerful embeddings and better link prediction performance. Our evaluation shows that the integration of gene methylation profiles and RNA-sequencing data significantly improves the link prediction performance. Overall, the combination of RNA-sequencing and gene methylation data leads to a link prediction accuracy of 71% on GGI networks. By harnessing graph representation learning on multi-omics data, our work brings novel insights to the current literature on multi-omics integration in bioinformatics.
[ { "created": "Sun, 25 Jul 2021 17:38:45 GMT", "version": "v1" } ]
2021-07-27
[ [ "Amor", "Amine", "" ], [ "Lio'", "Pietro", "" ], [ "Singh", "Vikash", "" ], [ "Torné", "Ramon Viñas", "" ], [ "Terre", "Helena Andres", "" ] ]
Combining different modalities of data from human tissues has been critical in advancing biomedical research and personalised medical care. In this study, we leverage a graph embedding model (i.e., VGAE) to perform link prediction on tissue-specific Gene-Gene Interaction (GGI) networks. Through ablation experiments, we prove that the combination of multiple biological modalities (i.e., multi-omics) leads to powerful embeddings and better link prediction performance. Our evaluation shows that the integration of gene methylation profiles and RNA-sequencing data significantly improves the link prediction performance. Overall, the combination of RNA-sequencing and gene methylation data leads to a link prediction accuracy of 71% on GGI networks. By harnessing graph representation learning on multi-omics data, our work brings novel insights to the current literature on multi-omics integration in bioinformatics.
1307.4922
Kazuhiro Takemoto
Kazuhiro Takemoto, Takeyuki Tamura, Tatsuya Akutsu
Theoretical estimation of metabolic network robustness against multiple reaction knockouts using branching process approximation
20 pages, 6 figures, 2 tables
Physica A 392, 5525 (2013)
10.1016/j.physa.2013.07.003
null
q-bio.MN physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In our previous study, we showed that the branching process approximation is useful for estimating metabolic robustness, measured using the impact degree. By applying a theory of random family forests, we here extend the branching process approximation to consider the knockout of {\it multiple} reactions, inspired by the importance of multiple knockouts reported by recent computational and experimental studies. In addition, we propose a better definition of the number of offspring of each reaction node, allowing for an improved estimation of the impact degree distribution obtained as a result of a single knockout. Importantly, our proposed approach is also applicable to multiple knockouts. The comparisons between theoretical predictions and numerical results using real-world metabolic networks demonstrate the validity of the modeling based on random family forests for estimating the impact degree distributions resulting from the knockout of multiple reactions.
[ { "created": "Thu, 18 Jul 2013 12:30:48 GMT", "version": "v1" } ]
2013-08-21
[ [ "Takemoto", "Kazuhiro", "" ], [ "Tamura", "Takeyuki", "" ], [ "Akutsu", "Tatsuya", "" ] ]
In our previous study, we showed that the branching process approximation is useful for estimating metabolic robustness, measured using the impact degree. By applying a theory of random family forests, we here extend the branching process approximation to consider the knockout of {\it multiple} reactions, inspired by the importance of multiple knockouts reported by recent computational and experimental studies. In addition, we propose a better definition of the number of offspring of each reaction node, allowing for an improved estimation of the impact degree distribution obtained as a result of a single knockout. Importantly, our proposed approach is also applicable to multiple knockouts. The comparisons between theoretical predictions and numerical results using real-world metabolic networks demonstrate the validity of the modeling based on random family forests for estimating the impact degree distributions resulting from the knockout of multiple reactions.
2112.12290
Alan S. R. Fermin
Alan S. R. Fermin, Karl Friston, Shigeto Yamawaki
Insula Interoception, Active Inference and Feeling Representation
22 pages, 3 figures, opinion
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The body sends interoceptive visceral information through deep brain structures to the cerebral cortex. The insula cortex, organized in hierarchical modules, is the major cortical region receiving interoceptive afferents and contains visceral topographic maps. Yet, the biological significance of the insula's modular architecture in relation to deep brain regions remains unsolved. In this opinion, we propose the Insula Hierarchical Modular Adaptive Interoception Control (IMAC) model to suggest that insula modules (granular, dysgranular and agranular subregions), forming networks with prefrontal (supplementary motor area, dorsolateral and ventromedial cortices) and striatum (posterior, dorsomedial and ventromedial) subregions, are specialized for higher-order interoceptive representations, recruited in a context-dependent manner to support habitual, model-based and exploratory adaptive behavior. We then discuss how insula interoceptive representations, or metaceptions, could give rise to conscious interoceptive feelings built up from low-order visceral representations and associated basic emotions located in deep interoceptive brain structures.
[ { "created": "Thu, 23 Dec 2021 00:51:40 GMT", "version": "v1" } ]
2021-12-24
[ [ "Fermin", "Alan S. R.", "" ], [ "Friston", "Karl", "" ], [ "Yamawaki", "Shigeto", "" ] ]
The body sends interoceptive visceral information through deep brain structures to the cerebral cortex. The insula cortex, organized in hierarchical modules, is the major cortical region receiving interoceptive afferents and contains visceral topographic maps. Yet, the biological significance of the insula's modular architecture in relation to deep brain regions remains unsolved. In this opinion, we propose the Insula Hierarchical Modular Adaptive Interoception Control (IMAC) model to suggest that insula modules (granular, dysgranular and agranular subregions), forming networks with prefrontal (supplementary motor area, dorsolateral and ventromedial cortices) and striatum (posterior, dorsomedial and ventromedial) subregions, are specialized for higher-order interoceptive representations, recruited in a context-dependent manner to support habitual, model-based and exploratory adaptive behavior. We then discuss how insula interoceptive representations, or metaceptions, could give rise to conscious interoceptive feelings built up from low-order visceral representations and associated basic emotions located in deep interoceptive brain structures.
1011.5240
Ben Vanderlei
Ben Vanderlei, James J. Feng, and Leah Edelstein-Keshet
A computational model of cell polarization and motility coupling mechanics and biochemistry
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The motion of a eukaryotic cell presents a variety of interesting and challenging problems from both a modeling and a computational perspective. The processes span many spatial scales (from molecular to tissue) as well as disparate time scales, with reaction kinetics on the order of seconds, and the deformation and motion of the cell occurring on the order of minutes. The computational difficulty, even in 2D, resides in the fact that the problem is inherently one of deforming, non-stationary domains, bounded by an elastic perimeter, inside of which there is redistribution of biochemical signaling substances. Here we report the results of a computational scheme using the immersed boundary method to address this problem. We adopt a simple reaction-diffusion system that represents an internal regulatory mechanism controlling the polarization of a cell, and determining the strength of protrusion forces at the front of its elastic perimeter. Using this computational scheme we are able to study the effect of protrusive and elastic forces on cell shapes on their own, the distribution of the reaction-diffusion system in irregular domains on its own, and the coupled mechanical-chemical system. We find that this representation of cell crawling can recover important aspects of the spontaneous polarization and motion of certain types of crawling cells.
[ { "created": "Tue, 23 Nov 2010 21:24:07 GMT", "version": "v1" } ]
2010-11-25
[ [ "Vanderlei", "Ben", "" ], [ "Feng", "James J.", "" ], [ "Edelstein-Keshet", "Leah", "" ] ]
The motion of a eukaryotic cell presents a variety of interesting and challenging problems from both a modeling and a computational perspective. The processes span many spatial scales (from molecular to tissue) as well as disparate time scales, with reaction kinetics on the order of seconds, and the deformation and motion of the cell occurring on the order of minutes. The computational difficulty, even in 2D, resides in the fact that the problem is inherently one of deforming, non-stationary domains, bounded by an elastic perimeter, inside of which there is redistribution of biochemical signaling substances. Here we report the results of a computational scheme using the immersed boundary method to address this problem. We adopt a simple reaction-diffusion system that represents an internal regulatory mechanism controlling the polarization of a cell, and determining the strength of protrusion forces at the front of its elastic perimeter. Using this computational scheme we are able to study the effect of protrusive and elastic forces on cell shapes on their own, the distribution of the reaction-diffusion system in irregular domains on its own, and the coupled mechanical-chemical system. We find that this representation of cell crawling can recover important aspects of the spontaneous polarization and motion of certain types of crawling cells.
2106.08067
Jaroslav Albert
Jaroslav Albert
Stochastic fluctuations in protein interaction networks are nearly Poissonian
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Gene regulatory networks are comprised of biochemical reactions, which are inherently stochastic. Each reaction channel contributes to this stochasticity in a different measure. In this paper we study the stochastic dynamics of protein interaction networks (PIN) that are made up of monomers and dimers. The network is defined by the dimers, which are formed by hybridizing two monomers. The size of a PIN was defined as the number of monomers that interact with at least one other monomer (including itself). We generated 4200 random PIN of sizes between 2 and 8 (600 per size) and simulated via the Gillespie algorithm the stochastic evolution of copy numbers of all monomers and dimers until they reached a steady state. The simulations revealed that the Fano factors of both monomers and dimers in all networks and at all time points were close to one, either from below or above. Only 10% of Fano factors for monomers were above 1.3 and 10% of Fano factors for dimers were above 1.17, with 5.54 and 3.47 as the maximum values recorded for monomers and dimers, respectively. These findings suggest that PIN in a real biological setting contribute to the overall stochastic noise in a way that is close to Poisson. Our results also show a correlation between stochastic noise, network size and network connectivity: for monomers, the Fano factors tend towards 1 from above, while the Fano factors for dimers tend towards 1 from below. For monomers, this tendency is amplified with increased network connectivity.
[ { "created": "Tue, 15 Jun 2021 11:47:37 GMT", "version": "v1" } ]
2021-06-16
[ [ "Albert", "Jaroslav", "" ] ]
Gene regulatory networks are comprised of biochemical reactions, which are inherently stochastic. Each reaction channel contributes to this stochasticity in a different measure. In this paper we study the stochastic dynamics of protein interaction networks (PIN) that are made up of monomers and dimers. The network is defined by the dimers, which are formed by hybridizing two monomers. The size of a PIN was defined as the number of monomers that interact with at least one other monomer (including itself). We generated 4200 random PIN of sizes between 2 and 8 (600 per size) and simulated via the Gillespie algorithm the stochastic evolution of copy numbers of all monomers and dimers until they reached a steady state. The simulations revealed that the Fano factors of both monomers and dimers in all networks and at all time points were close to one, either from below or above. Only 10% of Fano factors for monomers were above 1.3 and 10% of Fano factors for dimers were above 1.17, with 5.54 and 3.47 as the maximum values recorded for monomers and dimers, respectively. These findings suggest that PIN in a real biological setting contribute to the overall stochastic noise in a way that is close to Poisson. Our results also show a correlation between stochastic noise, network size and network connectivity: for monomers, the Fano factors tend towards 1 from above, while the Fano factors for dimers tend towards 1 from below. For monomers, this tendency is amplified with increased network connectivity.
1810.03766
Daniel Hurley G
Daniel G. Hurley, Joseph Cursons, Matthew Faria, David M. Budden, Vijay Rajagopal, Edmund J. Crampin
Reference environments: A universal tool for reproducibility in computational biology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The drive for reproducibility in the computational sciences has provoked discussion and effort across a broad range of perspectives: technological, legislative/policy, education, and publishing. Discussion on these topics is not new, but the need to adopt standards for reproducibility of claims made based on computational results is now clear to researchers, publishers and policymakers alike. Many technologies exist to support and promote reproduction of computational results: containerisation tools like Docker, literate programming approaches such as Sweave, knitr or IPython, and cloud environments like Amazon Web Services. But these technologies are tied to specific programming languages (e.g. Sweave/knitr to R; IPython to Python) or to platforms (e.g. Docker for 64-bit Linux environments only). To date, no single approach is able to span the broad range of technologies and platforms represented in computational biology and biotechnology. To enable reproducibility across computational biology, we demonstrate an approach and provide a set of tools that is suitable for all computational work and is not tied to a particular programming language or platform. We present published examples from a series of papers in different areas of computational biology, spanning the major languages and technologies in the field (Python/R/MATLAB/Fortran/C/Java). Our approach produces a transparent and flexible process for replication and recomputation of results. Ultimately, its most valuable aspect is the decoupling of methods in computational biology from their implementation. Separating the 'how' (method) of a publication from the 'where' (implementation) promotes genuinely open science and benefits the scientific community as a whole.
[ { "created": "Tue, 9 Oct 2018 01:20:21 GMT", "version": "v1" } ]
2018-10-10
[ [ "Hurley", "Daniel G.", "" ], [ "Cursons", "Joseph", "" ], [ "Faria", "Matthew", "" ], [ "Budden", "David M.", "" ], [ "Rajagopal", "Vijay", "" ], [ "Crampin", "Edmund J.", "" ] ]
The drive for reproducibility in the computational sciences has provoked discussion and effort across a broad range of perspectives: technological, legislative/policy, education, and publishing. Discussion on these topics is not new, but the need to adopt standards for reproducibility of claims made based on computational results is now clear to researchers, publishers and policymakers alike. Many technologies exist to support and promote reproduction of computational results: containerisation tools like Docker, literate programming approaches such as Sweave, knitr or IPython, and cloud environments like Amazon Web Services. But these technologies are tied to specific programming languages (e.g. Sweave/knitr to R; IPython to Python) or to platforms (e.g. Docker for 64-bit Linux environments only). To date, no single approach is able to span the broad range of technologies and platforms represented in computational biology and biotechnology. To enable reproducibility across computational biology, we demonstrate an approach and provide a set of tools that is suitable for all computational work and is not tied to a particular programming language or platform. We present published examples from a series of papers in different areas of computational biology, spanning the major languages and technologies in the field (Python/R/MATLAB/Fortran/C/Java). Our approach produces a transparent and flexible process for replication and recomputation of results. Ultimately, its most valuable aspect is the decoupling of methods in computational biology from their implementation. Separating the 'how' (method) of a publication from the 'where' (implementation) promotes genuinely open science and benefits the scientific community as a whole.
0801.0796
Wang Weiming
Weiming Wang, Lei Zhang, Hailing Wang, Zhenqing Li
Pattern formation of a predator-prey system with Ivlev-type functional response
null
null
null
null
q-bio.PE
null
In this paper, we investigate pattern emergence in a predator-prey system with Ivlev-type functional response and reaction-diffusion. We study how diffusion affects the stability of the predator-prey coexistence equilibrium and derive the conditions for Hopf and Turing bifurcation in the spatial domain. Based on the bifurcation analysis, we present, via numerical simulation, the spatial pattern formation and the evolution of the system near the coexistence equilibrium point. We find that pure Hopf instability leads to the formation of spiral patterns, while pure Turing instability destroys the spiral pattern and leads to the formation of a chaotic spatial pattern. Furthermore, we perform three categories of initial perturbations, in which predators are introduced in a small domain around the coexistence equilibrium point, to illustrate the emergence of spatiotemporal patterns. We also find that at the beginning of the evolution of the spatial pattern, the special initial conditions have an effect on the formation of spatial patterns, though this effect diminishes as the iterations proceed. This indicates that for a prey-dependent type predator-prey model, pattern formation does depend on the initial conditions, while for a predator-dependent type it does not. Our results show that modeling by reaction-diffusion equations is an appropriate tool for investigating fundamental mechanisms of complex spatiotemporal dynamics.
[ { "created": "Sat, 5 Jan 2008 09:08:50 GMT", "version": "v1" } ]
2008-01-08
[ [ "Wang", "Weiming", "" ], [ "Zhang", "Lei", "" ], [ "Wang", "Hailing", "" ], [ "Li", "Zhenqing", "" ] ]
In this paper, we investigate pattern emergence in a predator-prey system with Ivlev-type functional response and reaction-diffusion. We study how diffusion affects the stability of the predator-prey coexistence equilibrium and derive the conditions for Hopf and Turing bifurcation in the spatial domain. Based on the bifurcation analysis, we present, via numerical simulation, the spatial pattern formation and the evolution of the system near the coexistence equilibrium point. We find that pure Hopf instability leads to the formation of spiral patterns, while pure Turing instability destroys the spiral pattern and leads to the formation of a chaotic spatial pattern. Furthermore, we perform three categories of initial perturbations, in which predators are introduced in a small domain around the coexistence equilibrium point, to illustrate the emergence of spatiotemporal patterns. We also find that at the beginning of the evolution of the spatial pattern, the special initial conditions have an effect on the formation of spatial patterns, though this effect diminishes as the iterations proceed. This indicates that for a prey-dependent type predator-prey model, pattern formation does depend on the initial conditions, while for a predator-dependent type it does not. Our results show that modeling by reaction-diffusion equations is an appropriate tool for investigating fundamental mechanisms of complex spatiotemporal dynamics.
1302.2666
Michael B\"orsch
Thomas Heitkamp, Hendrik Sielaff, Anja Korn, Marc Renz, Nawid Zarrabi, Michael Boersch
Monitoring subunit rotation in single FRET-labeled FoF1-ATP synthase in an anti-Brownian electrokinetic trap
12 pages, 4 figures
null
10.1117/12.2002966
null
q-bio.BM physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FoF1-ATP synthase is the membrane protein catalyzing the synthesis of the 'biological energy currency' adenosine triphosphate (ATP). The enzyme uses internal subunit rotation for the mechanochemical conversion of a proton motive force to the chemical bond. We apply single-molecule F\"orster resonance energy transfer (FRET) to monitor subunit rotation in the two coupled motors F1 and Fo. To this end, enzymes have to be isolated from the plasma membranes of Escherichia coli, fluorescently labeled and reconstituted into 120-nm sized lipid vesicles to yield proteoliposomes. These freely diffusing proteoliposomes occasionally traverse the confocal detection volume, resulting in a burst of photons. Conformational dynamics of the enzyme are identified by sequential changes of FRET efficiencies within a single photon burst. The observation times can be extended by capturing single proteoliposomes in an anti-Brownian electrokinetic trap (ABELtrap, invented by A. E. Cohen and W. E. Moerner). Here we describe the preparation procedures of FoF1-ATP synthase and simulate FRET efficiency trajectories for 'trapped' proteoliposomes. Hidden Markov Models are applied at signal-to-background ratio limits for identifying the dwells and substeps of the rotary enzyme when running at low ATP concentrations, excited by low laser power, and confined by the ABELtrap.
[ { "created": "Mon, 11 Feb 2013 23:15:07 GMT", "version": "v1" } ]
2015-06-15
[ [ "Heitkamp", "Thomas", "" ], [ "Sielaff", "Hendrik", "" ], [ "Korn", "Anja", "" ], [ "Renz", "Marc", "" ], [ "Zarrabi", "Nawid", "" ], [ "Boersch", "Michael", "" ] ]
FoF1-ATP synthase is the membrane protein catalyzing the synthesis of the 'biological energy currency' adenosine triphosphate (ATP). The enzyme uses internal subunit rotation for the mechanochemical conversion of a proton motive force to the chemical bond. We apply single-molecule F\"orster resonance energy transfer (FRET) to monitor subunit rotation in the two coupled motors F1 and Fo. To this end, enzymes have to be isolated from the plasma membranes of Escherichia coli, fluorescently labeled and reconstituted into 120-nm sized lipid vesicles to yield proteoliposomes. These freely diffusing proteoliposomes occasionally traverse the confocal detection volume, resulting in a burst of photons. Conformational dynamics of the enzyme are identified by sequential changes of FRET efficiencies within a single photon burst. The observation times can be extended by capturing single proteoliposomes in an anti-Brownian electrokinetic trap (ABELtrap, invented by A. E. Cohen and W. E. Moerner). Here we describe the preparation procedures of FoF1-ATP synthase and simulate FRET efficiency trajectories for 'trapped' proteoliposomes. Hidden Markov Models are applied at signal-to-background ratio limits for identifying the dwells and substeps of the rotary enzyme when running at low ATP concentrations, excited by low laser power, and confined by the ABELtrap.
2007.13815
Anjalika Nande
Anjalika Nande, Andrew Ferdowsian, Eric Lubin, Erez Yoeli and Martin Nowak
DyPy: A Python Library for Simulating Matrix-Form Games
null
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Evolutionary Game Theory (EGT) simulations are used to model populations undergoing biological and cultural evolution in a range of fields, from biology to economics to linguistics. In this paper we present DyPy, an open source Python package that can perform evolutionary simulations for any matrix form game for three common evolutionary dynamics: Moran, Wright-Fisher and Replicator. We discuss the basic components of this package and illustrate how it can be used to run a variety of simulations. Our package allows a user to run such simulations fairly easily without much prior Python knowledge. We hope that this will be a great asset to researchers in a number of different fields.
[ { "created": "Mon, 27 Jul 2020 19:04:39 GMT", "version": "v1" } ]
2020-07-29
[ [ "Nande", "Anjalika", "" ], [ "Ferdowsian", "Andrew", "" ], [ "Lubin", "Eric", "" ], [ "Yoeli", "Erez", "" ], [ "Nowak", "Martin", "" ] ]
Evolutionary Game Theory (EGT) simulations are used to model populations undergoing biological and cultural evolution in a range of fields, from biology to economics to linguistics. In this paper we present DyPy, an open source Python package that can perform evolutionary simulations for any matrix form game for three common evolutionary dynamics: Moran, Wright-Fisher and Replicator. We discuss the basic components of this package and illustrate how it can be used to run a variety of simulations. Our package allows a user to run such simulations fairly easily without much prior Python knowledge. We hope that this will be a great asset to researchers in a number of different fields.
2112.13283
Judit Aizpuru
Judit Aizpuru and Annina Karolin Kemmer and Jong Woo Kim and Stefan Born and Peter Neubauer and Mariano N. Cruz Bournazou and Tilman Barz
Fitting nonlinear models to continuous oxygen data with oscillatory signal variations via a loss based on Dynamic Time Warping
null
null
null
null
q-bio.QM cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High throughput experimental systems play an important role in bioprocess development, as they provide an efficient way of analysing different experimental conditions and performing strain discrimination in phases prior to industrial-scale production. At the millilitre scale, these systems are combinations of parallel mini-bioreactors, liquid handling robots and automated workflows for data handling and model-based operation. For successfully monitoring cultivation conditions and improving the overall process quality by model-based approaches, a proper model identification is crucial. However, the quality and amount of measurements make this task challenging considering the complexity of the bio-processes. The Dissolved Oxygen Tension is often the only measurement which is available online, and therefore, a good understanding of the errors in this signal is important for performing a robust estimation. Some of the expected errors will provoke uncertainties in the time domain of the measurement, and in those cases, the common Weighted Least Squares estimation procedure can fail to provide good results. Moreover, these errors will have an even larger effect in the fed-batch phase where bolus feeding is applied, as this generates fast dynamic responses in the signal. In the present work, the performance of the Weighted Least Squares estimator is analysed in silico when the expected time uncertainties are present in the oxygen signal. As an alternative, a loss based on the Dynamic Time Warping measure is proposed. The results show how the latter procedure outperforms the former in reconstructing the oxygen signal and, in addition, returns less biased parameter estimates.
[ { "created": "Sat, 25 Dec 2021 20:31:27 GMT", "version": "v1" } ]
2021-12-28
[ [ "Aizpuru", "Judit", "" ], [ "Kemmer", "Annina Karolin", "" ], [ "Kim", "Jong Woo", "" ], [ "Born", "Stefan", "" ], [ "Neubauer", "Peter", "" ], [ "Bournazou", "Mariano N. Cruz", "" ], [ "Barz", "Tilman", "" ] ]
High throughput experimental systems play an important role in bioprocess development, as they provide an efficient way of analysing different experimental conditions and performing strain discrimination in phases prior to industrial-scale production. At the millilitre scale, these systems are combinations of parallel mini-bioreactors, liquid handling robots and automated workflows for data handling and model-based operation. For successfully monitoring cultivation conditions and improving the overall process quality by model-based approaches, a proper model identification is crucial. However, the quality and amount of measurements make this task challenging considering the complexity of the bio-processes. The Dissolved Oxygen Tension is often the only measurement which is available online, and therefore, a good understanding of the errors in this signal is important for performing a robust estimation. Some of the expected errors will provoke uncertainties in the time domain of the measurement, and in those cases, the common Weighted Least Squares estimation procedure can fail to provide good results. Moreover, these errors will have an even larger effect in the fed-batch phase where bolus feeding is applied, as this generates fast dynamic responses in the signal. In the present work, the performance of the Weighted Least Squares estimator is analysed in silico when the expected time uncertainties are present in the oxygen signal. As an alternative, a loss based on the Dynamic Time Warping measure is proposed. The results show how the latter procedure outperforms the former in reconstructing the oxygen signal and, in addition, returns less biased parameter estimates.
1504.06290
Zachary Kilpatrick PhD
Zachary McCleney and Zachary P. Kilpatrick
Entrainment in up and down states of neural populations: non-smooth and stochastic models
23 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the impact of noise on a neural population rate model of up and down states. Up and down states are typically observed in neuronal networks as a slow oscillation, where the population switches between high and low firing rates (Sanchez-Vives and McCormick, 2000). A neural population model with spike rate adaptation is used to model such slow oscillations, and the timescale of adaptation determines the oscillation period. Furthermore, the period depends non-monotonically on the background tonic input driving the population, having long periods for very weak and very strong stimuli. Using both linearization and fast-slow timescale separation methods, we can compute the phase sensitivity function of the slow oscillation. We find that the phase response is most strongly impacted by perturbations to the adaptation variable. Phase sensitivity functions can then be utilized to quantify the impact of noise on oscillating populations. Noise alters the period of oscillations by speeding up the rate of transition between the up and down states. When common noise is presented to two distinct populations, their transitions will eventually become entrained to one another through stochastic synchrony.
[ { "created": "Thu, 23 Apr 2015 18:36:01 GMT", "version": "v1" } ]
2015-04-24
[ [ "McCleney", "Zachary", "" ], [ "Kilpatrick", "Zachary P.", "" ] ]
We study the impact of noise on a neural population rate model of up and down states. Up and down states are typically observed in neuronal networks as a slow oscillation, where the population switches between high and low firing rates (Sanchez-Vives and McCormick, 2000). A neural population model with spike rate adaptation is used to model such slow oscillations, and the timescale of adaptation determines the oscillation period. Furthermore, the period depends non-monotonically on the background tonic input driving the population, having long periods for very weak and very strong stimuli. Using both linearization and fast-slow timescale separation methods, we can compute the phase sensitivity function of the slow oscillation. We find that the phase response is most strongly impacted by perturbations to the adaptation variable. Phase sensitivity functions can then be utilized to quantify the impact of noise on oscillating populations. Noise alters the period of oscillations by speeding up the rate of transition between the up and down states. When common noise is presented to two distinct populations, their transitions will eventually become entrained to one another through stochastic synchrony.
1208.4027
Edgardo Brigatti
E. Brigatti, M. N\'u\~nez-L\'opez and M. Oliva
Analysis of a spatial Lotka-Volterra model with a finite range predator-prey interaction
7 pages, 7 figures
Eur. Phys. J. B 81, 321--326 (2011)
10.1140/epjb/e2011-10826-6
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform an analysis of a recent spatial version of the classical Lotka-Volterra model, where a finite scale controls individuals' interaction. We study the behavior of the predator-prey dynamics in spatial dimensions higher than one, showing how spatial patterns can emerge for some values of the interaction range and of the diffusion parameter.
[ { "created": "Mon, 20 Aug 2012 14:48:41 GMT", "version": "v1" } ]
2012-08-21
[ [ "Brigatti", "E.", "" ], [ "Núñez-López", "M.", "" ], [ "Oliva", "M.", "" ] ]
We perform an analysis of a recent spatial version of the classical Lotka-Volterra model, where a finite scale controls individuals' interaction. We study the behavior of the predator-prey dynamics in spatial dimensions higher than one, showing how spatial patterns can emerge for some values of the interaction range and of the diffusion parameter.
1604.00069
Thomas Gorochowski
Thomas E. Gorochowski, Rafal Bogacz, Matthew Jones
Cross-Frequency Coupling of Neuronal Oscillations During Cognition
null
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How the brain co-ordinates the actions of distant regions in an efficient manner is an open problem. Many believe that cross-frequency coupling between the amplitude of high frequency local field potential oscillations in one region and the phase of lower frequency signals in another may form a possible mechanism. This work provides a preliminary study from both an experimental and theoretical viewpoint, concentrating on possible coupling between the hippocampus and pre-frontal cortex in rats during tasks involving working memory, spatial navigation and decision making processes. Attempts to search for such coupling events are made using newly developed MATLAB scripts. This leads to the discovery of increased envelope-to-signal correlation (ESC) between the 1-10 Hz hippocampus theta and 30-40 Hz pre-frontal cortex gamma bands when a choice turn is approached during a T-maze task. From a theoretical perspective, a standard connectionist modelling approach is extended to allow for the formation of oscillations. Although detrimental to overall average task performance, this did lead to a reduced increase in performance variation as noise was increased, when compared to a standard non-oscillating model.
[ { "created": "Thu, 31 Mar 2016 22:26:05 GMT", "version": "v1" } ]
2016-04-04
[ [ "Gorochowski", "Thomas E.", "" ], [ "Bogacz", "Rafal", "" ], [ "Jones", "Matthew", "" ] ]
How the brain co-ordinates the actions of distant regions in an efficient manner is an open problem. Many believe that cross-frequency coupling between the amplitude of high frequency local field potential oscillations in one region and the phase of lower frequency signals in another may form a possible mechanism. This work provides a preliminary study from both an experimental and theoretical viewpoint, concentrating on possible coupling between the hippocampus and pre-frontal cortex in rats during tasks involving working memory, spatial navigation and decision making processes. Attempts to search for such coupling events are made using newly developed MATLAB scripts. This leads to the discovery of increased envelope-to-signal correlation (ESC) between the 1-10 Hz hippocampus theta and 30-40 Hz pre-frontal cortex gamma bands when a choice turn is approached during a T-maze task. From a theoretical perspective, a standard connectionist modelling approach is extended to allow for the formation of oscillations. Although detrimental to overall average task performance, this did lead to a reduced increase in performance variation as noise was increased, when compared to a standard non-oscillating model.
1907.00816
David Hansel
Alexandre Mahrach, Guang Chen, Nuo Li, Carl van Vreeswijk, David Hansel
Mechanisms underlying the response of mouse cortical networks to optogenetic manipulation
null
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GABAergic interneurons can be subdivided into three subclasses: parvalbumin positive (PV), somatostatin positive (SOM) and serotonin positive neurons. With principal cells (PCs) they form complex networks. We examine PCs and PV responses in mouse anterior lateral motor cortex (ALM) and barrel cortex (S1) upon PV photostimulation in vivo. In layer 5, the PV response is paradoxical: photoexcitation reduces their activity. This is not the case in ALM layer 2/3. We combine analytical calculations and numerical simulations to investigate how these results constrain the architecture. Two-population models cannot account for the results. Networks with three inhibitory populations and V1-like architecture account for the data in ALM layer 2/3. Our data in layer 5 can be accounted for if SOM neurons receive inputs only from PCs and PV neurons. In both four-population models, the paradoxical effect implies not too strong recurrent excitation. It is not evidence for stabilization by inhibition.
[ { "created": "Mon, 1 Jul 2019 14:22:03 GMT", "version": "v1" } ]
2019-07-02
[ [ "Mahrach", "Alexandre", "" ], [ "Chen", "Guang", "" ], [ "Li", "Nuo", "" ], [ "van Vreeswijk", "Carl", "" ], [ "Hansel", "David", "" ] ]
GABAergic interneurons can be subdivided into three subclasses: parvalbumin positive (PV), somatostatin positive (SOM) and serotonin positive neurons. With principal cells (PCs) they form complex networks. We examine PCs and PV responses in mouse anterior lateral motor cortex (ALM) and barrel cortex (S1) upon PV photostimulation in vivo. In layer 5, the PV response is paradoxical: photoexcitation reduces their activity. This is not the case in ALM layer 2/3. We combine analytical calculations and numerical simulations to investigate how these results constrain the architecture. Two-population models cannot account for the results. Networks with three inhibitory populations and V1-like architecture account for the data in ALM layer 2/3. Our data in layer 5 can be accounted for if SOM neurons receive inputs only from PCs and PV neurons. In both four-population models, the paradoxical effect implies not too strong recurrent excitation. It is not evidence for stabilization by inhibition.
2205.09548
Lixue Cheng
Lixue Cheng, Ziyi Yang, Changyu Hsieh, Benben Liao, Shengyu Zhang
ODBO: Bayesian Optimization with Search Space Prescreening for Directed Protein Evolution
27 pages, 13 figures
null
null
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directed evolution is a versatile technique in protein engineering that mimics the process of natural selection by iteratively alternating between mutagenesis and screening in order to search for sequences that optimize a given property of interest, such as catalytic activity and binding affinity to a specified target. However, the space of possible proteins is too large to search exhaustively in the laboratory, and functional proteins are scarce in the vast sequence space. Machine learning (ML) approaches can accelerate directed evolution by learning to map protein sequences to functions without building a detailed model of the underlying physics, chemistry and biological pathways. Despite the great potential held by these ML methods, they encounter severe challenges in identifying the most suitable sequences for a targeted function. These failures can be attributed to the common practice of adopting a high-dimensional feature representation for protein sequences and inefficient search methods. To address these issues, we propose an efficient, experimental design-oriented closed-loop optimization framework for protein directed evolution, termed ODBO, which employs a combination of a novel low-dimensional protein encoding strategy and Bayesian optimization enhanced with search space prescreening via outlier detection. We further design an initial sample selection strategy to minimize the number of experimental samples for training ML models. We conduct and report four protein directed evolution experiments that substantiate the capability of the proposed framework for finding the variants with properties of interest. We expect the ODBO framework to greatly reduce the experimental cost and time cost of directed evolution, and it can be further generalized as a powerful tool for adaptive experimental design in a broader context.
[ { "created": "Thu, 19 May 2022 13:21:31 GMT", "version": "v1" }, { "created": "Fri, 20 May 2022 13:52:17 GMT", "version": "v2" }, { "created": "Fri, 14 Oct 2022 09:58:09 GMT", "version": "v3" }, { "created": "Mon, 24 Oct 2022 09:20:44 GMT", "version": "v4" }, { "created": "Tue, 25 Oct 2022 01:28:22 GMT", "version": "v5" }, { "created": "Wed, 1 May 2024 15:20:12 GMT", "version": "v6" } ]
2024-05-02
[ [ "Cheng", "Lixue", "" ], [ "Yang", "Ziyi", "" ], [ "Hsieh", "Changyu", "" ], [ "Liao", "Benben", "" ], [ "Zhang", "Shengyu", "" ] ]
Directed evolution is a versatile technique in protein engineering that mimics the process of natural selection by iteratively alternating between mutagenesis and screening in order to search for sequences that optimize a given property of interest, such as catalytic activity and binding affinity to a specified target. However, the space of possible proteins is too large to search exhaustively in the laboratory, and functional proteins are scarce in the vast sequence space. Machine learning (ML) approaches can accelerate directed evolution by learning to map protein sequences to functions without building a detailed model of the underlying physics, chemistry and biological pathways. Despite the great potential held by these ML methods, they encounter severe challenges in identifying the most suitable sequences for a targeted function. These failures can be attributed to the common practice of adopting a high-dimensional feature representation for protein sequences and inefficient search methods. To address these issues, we propose an efficient, experimental design-oriented closed-loop optimization framework for protein directed evolution, termed ODBO, which employs a combination of a novel low-dimensional protein encoding strategy and Bayesian optimization enhanced with search space prescreening via outlier detection. We further design an initial sample selection strategy to minimize the number of experimental samples for training ML models. We conduct and report four protein directed evolution experiments that substantiate the capability of the proposed framework for finding the variants with properties of interest. We expect the ODBO framework to greatly reduce the experimental cost and time cost of directed evolution, and it can be further generalized as a powerful tool for adaptive experimental design in a broader context.
0901.3067
Anastasia Deckard
Anastasia C. Deckard, Frank T. Bergmann, and Herbert M. Sauro
Enumeration and Online Library of Mass-Action Reaction Networks
17 pages, 6 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/3.0/
The aim of this work is to make available to the community a large collection of mass-action reaction networks of a given size for further research. The set is limited to what can be computed on a modern multi-core desktop in reasonable time (< 20 days). We have currently generated over 47 million unique reaction networks. All currently generated sets of networks are available and as new sets are completed they will also be made available. Also provided are programs for translating them into different formats, along with documentation and examples. Source code and binaries for all the programs are included. These can be downloaded from (http://www.sys-bio.org/networkenumeration). This library of networks will allow for thorough studies of the reaction network space. Additionally, these methods serve as an example for future work on enumerating other types of biological networks, such as genetic regulatory networks and mass-action networks that include regulation.
[ { "created": "Mon, 19 Jan 2009 21:35:36 GMT", "version": "v1" } ]
2009-01-21
[ [ "Deckard", "Anastasia C.", "" ], [ "Bergmann", "Frank T.", "" ], [ "Sauro", "Herbert M.", "" ] ]
The aim of this work is to make available to the community a large collection of mass-action reaction networks of a given size for further research. The set is limited to what can be computed on a modern multi-core desktop in reasonable time (< 20 days). We have currently generated over 47 million unique reaction networks. All currently generated sets of networks are available and as new sets are completed they will also be made available. Also provided are programs for translating them into different formats, along with documentation and examples. Source code and binaries for all the programs are included. These can be downloaded from (http://www.sys-bio.org/networkenumeration). This library of networks will allow for thorough studies of the reaction network space. Additionally, these methods serve as an example for future work on enumerating other types of biological networks, such as genetic regulatory networks and mass-action networks that include regulation.
2304.02729
Giulia Bernardini
Giulia Bernardini and Leo van Iersel and Esther Julien and Leen Stougie
Constructing Phylogenetic Networks via Cherry Picking and Machine Learning
42 pages, 20 figures, submitted to Algorithms for Molecular Biology (special issue of WABI 2022)
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by/4.0/
Combining a set of phylogenetic trees into a single phylogenetic network that explains all of them is a fundamental challenge in evolutionary studies. Existing methods are computationally expensive and can either handle only small numbers of phylogenetic trees or are limited to severely restricted classes of networks. In this paper, we apply the recently-introduced theoretical framework of cherry picking to design a class of efficient heuristics that are guaranteed to produce a network containing each of the input trees, for datasets consisting of binary trees. Some of the heuristics in this framework are based on the design and training of a machine learning model that captures essential information on the structure of the input trees and guides the algorithms towards better solutions. We also propose simple and fast randomised heuristics that prove to be very effective when run multiple times. Unlike the existing exact methods, our heuristics are applicable to datasets of practical size, and the experimental study we conducted on both simulated and real data shows that these solutions are qualitatively good, always within some small constant factor from the optimum. Moreover, our machine-learned heuristics are one of the first applications of machine learning to phylogenetics and show its promise.
[ { "created": "Fri, 31 Mar 2023 15:04:42 GMT", "version": "v1" } ]
2023-04-07
[ [ "Bernardini", "Giulia", "" ], [ "van Iersel", "Leo", "" ], [ "Julien", "Esther", "" ], [ "Stougie", "Leen", "" ] ]
Combining a set of phylogenetic trees into a single phylogenetic network that explains all of them is a fundamental challenge in evolutionary studies. Existing methods are computationally expensive and can either handle only small numbers of phylogenetic trees or are limited to severely restricted classes of networks. In this paper, we apply the recently-introduced theoretical framework of cherry picking to design a class of efficient heuristics that are guaranteed to produce a network containing each of the input trees, for datasets consisting of binary trees. Some of the heuristics in this framework are based on the design and training of a machine learning model that captures essential information on the structure of the input trees and guides the algorithms towards better solutions. We also propose simple and fast randomised heuristics that prove to be very effective when run multiple times. Unlike the existing exact methods, our heuristics are applicable to datasets of practical size, and the experimental study we conducted on both simulated and real data shows that these solutions are qualitatively good, always within some small constant factor from the optimum. Moreover, our machine-learned heuristics are one of the first applications of machine learning to phylogenetics and show its promise.
1309.3535
Luc Berthouze
Caroline Hartley and Timothy J Taylor and Istvan Z Kiss and Simon F Farmer and Luc Berthouze
Identification of criticality in neuronal avalanches: II. A theoretical and empirical investigation of the driven case
48 pages, 18 figures
The Journal of Mathematical Neuroscience 2014, 4:9
10.1186/2190-8567-4-9
null
q-bio.NC math-ph math.MP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The observation of apparent power-laws in neuronal systems has led to the suggestion that the brain is at, or close to, a critical state and may be a self-organised critical system. Within the framework of self-organised criticality a separation of timescales is thought to be crucial for the observation of power-law dynamics and computational models are often constructed with this property. However, this is not necessarily a characteristic of physiological neural networks - external input does not only occur when the network is at rest/a steady state. In this paper we study a simple neuronal network model driven by a continuous external input (i.e. the model does not have a separation of timescales) and analytically tuned to operate in the region of a critical state (it reaches the critical regime exactly in the absence of input - the case studied in the companion paper to this article). The system displays avalanche dynamics in the form of cascades of neuronal firing separated by periods of silence. We observe partial scale-free behaviour in the distribution of avalanche size for low levels of external input. We analytically derive the distributions of waiting times and investigate their temporal behaviour in relation to different levels of external input, showing that the system's dynamics can exhibit partial long-range temporal correlations. We further show that as the system approaches the critical state by two alternative 'routes', different markers of criticality (partial scale-free behaviour and long-range temporal correlations) are displayed. This suggests that signatures of criticality exhibited by a particular system in close proximity to a critical state are dependent on the region in parameter space at which the system (currently) resides.
[ { "created": "Fri, 13 Sep 2013 18:33:55 GMT", "version": "v1" } ]
2014-10-22
[ [ "Hartley", "Caroline", "" ], [ "Taylor", "Timothy J", "" ], [ "Kiss", "Istvan Z", "" ], [ "Farmer", "Simon F", "" ], [ "Berthouze", "Luc", "" ] ]
The observation of apparent power-laws in neuronal systems has led to the suggestion that the brain is at, or close to, a critical state and may be a self-organised critical system. Within the framework of self-organised criticality a separation of timescales is thought to be crucial for the observation of power-law dynamics and computational models are often constructed with this property. However, this is not necessarily a characteristic of physiological neural networks - external input does not only occur when the network is at rest/a steady state. In this paper we study a simple neuronal network model driven by a continuous external input (i.e. the model does not have a separation of timescales) and analytically tuned to operate in the region of a critical state (it reaches the critical regime exactly in the absence of input - the case studied in the companion paper to this article). The system displays avalanche dynamics in the form of cascades of neuronal firing separated by periods of silence. We observe partial scale-free behaviour in the distribution of avalanche size for low levels of external input. We analytically derive the distributions of waiting times and investigate their temporal behaviour in relation to different levels of external input, showing that the system's dynamics can exhibit partial long-range temporal correlations. We further show that as the system approaches the critical state by two alternative 'routes', different markers of criticality (partial scale-free behaviour and long-range temporal correlations) are displayed. This suggests that signatures of criticality exhibited by a particular system in close proximity to a critical state are dependent on the region in parameter space at which the system (currently) resides.
1403.1957
Joel Miller
Joel C Miller and Istvan Z Kiss
Epidemic spread in networks: Existing methods and current challenges
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the spread of infectious disease through contact networks of Configuration Model type. We assume that the disease spreads through contacts and infected individuals recover into an immune state. We discuss a number of existing mathematical models used to investigate this system, and show relations between the underlying assumptions of the models. In the process we offer simplifications of some of the existing models. The distinctions between the underlying assumptions are subtle, and in many if not most cases this subtlety is irrelevant. Indeed, under appropriate conditions the models are equivalent. We compare the benefits and disadvantages of the different models, and discuss their application to other populations (e.g., clustered networks). Finally we discuss ongoing challenges for network-based epidemic modeling.
[ { "created": "Sat, 8 Mar 2014 10:11:58 GMT", "version": "v1" } ]
2014-03-11
[ [ "Miller", "Joel C", "" ], [ "Kiss", "Istvan Z", "" ] ]
We consider the spread of infectious disease through contact networks of Configuration Model type. We assume that the disease spreads through contacts and infected individuals recover into an immune state. We discuss a number of existing mathematical models used to investigate this system, and show relations between the underlying assumptions of the models. In the process we offer simplifications of some of the existing models. The distinctions between the underlying assumptions are subtle, and in many if not most cases this subtlety is irrelevant. Indeed, under appropriate conditions the models are equivalent. We compare the benefits and disadvantages of the different models, and discuss their application to other populations (e.g., clustered networks). Finally we discuss ongoing challenges for network-based epidemic modeling.
2308.02589
Dan Gorbonos
Dan Gorbonos, Felix Oberhauser, Luke L. Costello, Yannick G\"unzel, Einat Couzin-Fuchs, Benjamin Koger, Iain D. Couzin
An Effective Hydrodynamic Description of Marching Locusts
21 pages, 13 figures
null
null
null
q-bio.QM cond-mat.stat-mech physics.bio-ph physics.flu-dyn q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental question in complex systems is how to relate interactions between individual components ("microscopic description") to the global properties of the system ("macroscopic description"). Another fundamental question is whether such a macroscopic description exists at all and how well it describes the large-scale properties. Here, we address these questions using, as a canonical example of a self-organizing complex system, the collective motion of desert locusts. One of the world's most devastating insect plagues begins when flightless juvenile locusts form "marching bands". Moving through semiarid habitats in the search for food, these bands display remarkable coordinated motion. We investigated how well physical models can describe the flow of locusts within a band. For this, we filmed locusts within marching bands during an outbreak in Kenya and automatically tracked all individuals passing through the camera frame. We first analysed the spatial topology of nearest neighbors and found individuals to be isotropically distributed. Despite this apparent randomness, a local order was observed in regions of high density with a clear second neighbor peak in the radial distribution function, akin to an ordered fluid. Furthermore, reconstructing individual locust trajectories revealed a highly-aligned movement, consistent with the one-dimensional version of the Toner-Tu equations, which are a generalization of the Navier-Stokes equations for fluids, used to describe the equivalent macroscopic fluid properties of active particles. Using this effective Toner-Tu equation, which relates the gradient of the pressure to the acceleration, we show that the effective "pressure" of locusts increases as a linear function of density in segments with highest polarization. Our study thus demonstrates an effective hydrodynamic description of flow dynamics in plague locust swarms.
[ { "created": "Thu, 3 Aug 2023 18:56:58 GMT", "version": "v1" } ]
2023-08-08
[ [ "Gorbonos", "Dan", "" ], [ "Oberhauser", "Felix", "" ], [ "Costello", "Luke L.", "" ], [ "Günzel", "Yannick", "" ], [ "Couzin-Fuchs", "Einat", "" ], [ "Koger", "Benjamin", "" ], [ "Couzin", "Iain D.", "" ] ]
A fundamental question in complex systems is how to relate interactions between individual components ("microscopic description") to the global properties of the system ("macroscopic description"). Another fundamental question is whether such a macroscopic description exists at all and how well it describes the large-scale properties. Here, we address these questions using, as a canonical example of a self-organizing complex system, the collective motion of desert locusts. One of the world's most devastating insect plagues begins when flightless juvenile locusts form "marching bands". Moving through semiarid habitats in the search for food, these bands display remarkable coordinated motion. We investigated how well physical models can describe the flow of locusts within a band. For this, we filmed locusts within marching bands during an outbreak in Kenya and automatically tracked all individuals passing through the camera frame. We first analysed the spatial topology of nearest neighbors and found individuals to be isotropically distributed. Despite this apparent randomness, a local order was observed in regions of high density with a clear second neighbor peak in the radial distribution function, akin to an ordered fluid. Furthermore, reconstructing individual locust trajectories revealed a highly-aligned movement, consistent with the one-dimensional version of the Toner-Tu equations, which are a generalization of the Navier-Stokes equations for fluids, used to describe the equivalent macroscopic fluid properties of active particles. Using this effective Toner-Tu equation, which relates the gradient of the pressure to the acceleration, we show that the effective "pressure" of locusts increases as a linear function of density in segments with highest polarization. Our study thus demonstrates an effective hydrodynamic description of flow dynamics in plague locust swarms.
1610.09536
Osman Kahraman
Osman Kahraman, Yiwei Li and Christoph A. Haselwandter
Stochastic single-molecule dynamics of synaptic membrane protein domains
Main text (7 pages, 4 figures, 1 table) and supplementary material (3 pages, 3 figures)
EPL, 115 (2016) 68006
10.1209/0295-5075/115/68006
null
q-bio.SC physics.bio-ph q-bio.BM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by single-molecule experiments on synaptic membrane protein domains, we use a stochastic lattice model to study protein reaction and diffusion processes in crowded membranes. We find that the stochastic reaction-diffusion dynamics of synaptic proteins provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the single-molecule trajectories observed for synaptic proteins, and spatially inhomogeneous protein lifetimes at the cell membrane. Our results suggest that central aspects of the single-molecule and collective dynamics observed for membrane protein domains can be understood in terms of stochastic reaction-diffusion processes at the cell membrane.
[ { "created": "Sat, 29 Oct 2016 16:09:18 GMT", "version": "v1" }, { "created": "Mon, 7 Nov 2016 19:25:07 GMT", "version": "v2" } ]
2016-11-08
[ [ "Kahraman", "Osman", "" ], [ "Li", "Yiwei", "" ], [ "Haselwandter", "Christoph A.", "" ] ]
Motivated by single-molecule experiments on synaptic membrane protein domains, we use a stochastic lattice model to study protein reaction and diffusion processes in crowded membranes. We find that the stochastic reaction-diffusion dynamics of synaptic proteins provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the single-molecule trajectories observed for synaptic proteins, and spatially inhomogeneous protein lifetimes at the cell membrane. Our results suggest that central aspects of the single-molecule and collective dynamics observed for membrane protein domains can be understood in terms of stochastic reaction-diffusion processes at the cell membrane.
2109.05545
Grace Hwang
Elise Buckley, Joseph D. Monaco, Kevin M. Schultz, Robert Chalmers, Armin Hadzic, Kechen Zhang, Grace M. Hwang, M. Dwight Carr
An interdisciplinary approach to high school curriculum development: Swarming Powered by Neuroscience
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article discusses how to create an interactive virtual training program at the intersection of neuroscience, robotics, and computer science for high school students. A four-day microseminar, titled Swarming Powered by Neuroscience (SPN), was conducted virtually through a combination of presentations and interactive computer game simulations, delivered by subject matter experts in neuroscience, mathematics, multi-agent swarm robotics, and education. The objective of this research was to determine if taking an interdisciplinary approach to high school education would enhance the students' learning experiences in fields such as neuroscience, robotics, or computer science. This study found that student engagement with neuroscience improved by 16.6%, while interest in robotics and computer science improved by 2.7% and 1.8%, respectively. The curriculum materials, developed for the SPN microseminar, can be used by high school teachers to further evaluate interdisciplinary instruction across life and physical sciences and computer science.
[ { "created": "Sun, 12 Sep 2021 16:00:00 GMT", "version": "v1" } ]
2021-09-14
[ [ "Buckley", "Elise", "" ], [ "Monaco", "Joseph D.", "" ], [ "Schultz", "Kevin M.", "" ], [ "Chalmers", "Robert", "" ], [ "Hadzic", "Armin", "" ], [ "Zhang", "Kechen", "" ], [ "Hwang", "Grace M.", "" ], [ "Carr", "M. Dwight", "" ] ]
This article discusses how to create an interactive virtual training program at the intersection of neuroscience, robotics, and computer science for high school students. A four-day microseminar, titled Swarming Powered by Neuroscience (SPN), was conducted virtually through a combination of presentations and interactive computer game simulations, delivered by subject matter experts in neuroscience, mathematics, multi-agent swarm robotics, and education. The objective of this research was to determine if taking an interdisciplinary approach to high school education would enhance the students' learning experiences in fields such as neuroscience, robotics, or computer science. This study found that student engagement with neuroscience improved by 16.6%, while interest in robotics and computer science improved by 2.7% and 1.8%, respectively. The curriculum materials, developed for the SPN microseminar, can be used by high school teachers to further evaluate interdisciplinary instruction across life and physical sciences and computer science.
2308.13877
Suraj Rajendran
Akshay Bhalla, Suraj Rajendran
Applications of machine Learning to improve the efficiency and range of microbial biosynthesis: a review of state-of-art techniques
null
null
null
null
q-bio.SC cs.LG q-bio.BM
http://creativecommons.org/licenses/by/4.0/
In the modern world, technology is at its peak. Different avenues in programming and technology have been explored for data analysis, automation, and robotics. Machine learning is key to optimizing data analysis, making accurate predictions, and hastening/improving existing functions. Thus, presently, the field of machine learning in artificial intelligence is being developed and its uses in varying fields are being explored. One field in which its uses stand out is that of microbial biosynthesis. In this paper, a comprehensive overview of the differing machine learning programs used in biosynthesis is provided, alongside brief descriptions of the fields of machine learning and microbial biosynthesis separately. This information includes past trends, modern developments, future improvements, explanations of processes, and current problems they face. Thus, this paper's main contribution is to distill developments in, and provide a holistic explanation of, 2 key fields and their applicability to improve industry/research. It also highlights challenges and research directions, acting to instigate more research and development in the growing fields. Finally, the paper aims to act as a reference for academics performing research, industry professionals improving their processes, and students looking to understand the concept of machine learning in biosynthesis.
[ { "created": "Sat, 26 Aug 2023 13:27:40 GMT", "version": "v1" }, { "created": "Sat, 14 Oct 2023 23:27:42 GMT", "version": "v2" } ]
2023-10-18
[ [ "Bhalla", "Akshay", "" ], [ "Rajendran", "Suraj", "" ] ]
In the modern world, technology is at its peak. Different avenues in programming and technology have been explored for data analysis, automation, and robotics. Machine learning is key to optimizing data analysis, making accurate predictions, and accelerating and improving existing functions. Thus, the field of machine learning in artificial intelligence is presently being developed and its uses in varying fields are being explored. One field in which its uses stand out is that of microbial biosynthesis. In this paper, a comprehensive overview of the differing machine learning programs used in biosynthesis is provided, alongside brief descriptions of the fields of machine learning and microbial biosynthesis separately. This information includes past trends, modern developments, future improvements, explanations of processes, and current problems they face. Thus, this paper's main contribution is to distill developments in, and provide a holistic explanation of, two key fields and their applicability to improving industry and research. It also highlights challenges and research directions, acting to instigate more research and development in these growing fields. Finally, the paper aims to act as a reference for academics performing research, industry professionals improving their processes, and students looking to understand the concept of machine learning in biosynthesis.
2305.08238
David Graff
David E. Graff, Edward O. Pyzer-Knapp, Kirk E. Jordan, Eugene I. Shakhnovich, Connor W. Coley
Evaluating the roughness of structure-property relationships using pretrained molecular representations
18 pages, 13 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantitative structure-property relationships (QSPRs) aid in understanding molecular properties as a function of molecular structure. When the correlation between structure and property weakens, a dataset is described as "rough," but this characteristic is partly a function of the chosen representation. Among possible molecular representations are those from recently-developed "foundation models" for chemistry which learn molecular representation from unlabeled samples via self-supervision. However, the performance of these pretrained representations on property prediction benchmarks is mixed when compared to baseline approaches. We sought to understand these trends in terms of the roughness of the underlying QSPR surfaces. We introduce a reformulation of the roughness index (ROGI), ROGI-XD, to enable comparison of ROGI values across representations and evaluate various pretrained representations and those constructed by simple fingerprints and descriptors. We show that pretrained representations do not produce smoother QSPR surfaces, in agreement with previous empirical results of model accuracy. Our findings suggest that imposing stronger assumptions of smoothness with respect to molecular structure during model pretraining can aid in the downstream generation of smoother QSPR surfaces.
[ { "created": "Sun, 14 May 2023 20:10:10 GMT", "version": "v1" } ]
2023-05-16
[ [ "Graff", "David E.", "" ], [ "Pyzer-Knapp", "Edward O.", "" ], [ "Jordan", "Kirk E.", "" ], [ "Shakhnovich", "Eugene I.", "" ], [ "Coley", "Connor W.", "" ] ]
Quantitative structure-property relationships (QSPRs) aid in understanding molecular properties as a function of molecular structure. When the correlation between structure and property weakens, a dataset is described as "rough," but this characteristic is partly a function of the chosen representation. Among possible molecular representations are those from recently-developed "foundation models" for chemistry which learn molecular representation from unlabeled samples via self-supervision. However, the performance of these pretrained representations on property prediction benchmarks is mixed when compared to baseline approaches. We sought to understand these trends in terms of the roughness of the underlying QSPR surfaces. We introduce a reformulation of the roughness index (ROGI), ROGI-XD, to enable comparison of ROGI values across representations and evaluate various pretrained representations and those constructed by simple fingerprints and descriptors. We show that pretrained representations do not produce smoother QSPR surfaces, in agreement with previous empirical results of model accuracy. Our findings suggest that imposing stronger assumptions of smoothness with respect to molecular structure during model pretraining can aid in the downstream generation of smoother QSPR surfaces.
1712.03594
Hwai-Ray Tung
H. Tung
Precluding Oscillations in Michaelis-Menten Approximations of Dual-site Phosphorylation Systems
null
Mathematical Biosciences, Volume 306, 2018, Pages 56-59
10.1016/j.mbs.2018.10.008
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Oscillations play a major role in a number of biological systems, from predator-prey models of ecology to circadian clocks. In this paper we focus on the question of whether oscillations exist within dual-site phosphorylation systems. Previously, Wang and Sontag showed, using monotone systems theory, that the Michaelis-Menten (MM) approximation of the distributive and sequential dual-site phosphorylation system lacks oscillations. However, biological systems are generally not purely distributive; there is generally some processive behavior as well. Accordingly, this paper focuses on the MM approximation of a general sequential dual-site phosphorylation system that contains both processive and distributive components, termed the composite system. Expanding on the methods of Bozeman and Morales, we preclude oscillations in the MM approximation of the composite system. This implies the lack of oscillations in the MM approximations of the processive and distributive systems, shown previously, as well as in the MM approximation of the partially processive and partially distributive mixed-mechanism system.
[ { "created": "Sun, 10 Dec 2017 21:17:03 GMT", "version": "v1" }, { "created": "Wed, 31 Oct 2018 02:38:39 GMT", "version": "v2" } ]
2020-08-03
[ [ "Tung", "H.", "" ] ]
Oscillations play a major role in a number of biological systems, from predator-prey models of ecology to circadian clocks. In this paper we focus on the question of whether oscillations exist within dual-site phosphorylation systems. Previously, Wang and Sontag showed, using monotone systems theory, that the Michaelis-Menten (MM) approximation of the distributive and sequential dual-site phosphorylation system lacks oscillations. However, biological systems are generally not purely distributive; there is generally some processive behavior as well. Accordingly, this paper focuses on the MM approximation of a general sequential dual-site phosphorylation system that contains both processive and distributive components, termed the composite system. Expanding on the methods of Bozeman and Morales, we preclude oscillations in the MM approximation of the composite system. This implies the lack of oscillations in the MM approximations of the processive and distributive systems, shown previously, as well as in the MM approximation of the partially processive and partially distributive mixed-mechanism system.
1506.00288
Vijay Singh
Vijay Singh, Ilya Nemenman
Accurate sensing of multiple ligands with a single receptor
6 pages, 4 figures
null
null
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells use surface receptors to estimate the concentration of external ligands. Limits on the accuracy of such estimations have been well studied for pairs of ligand and receptor species. However, the environment typically contains many ligands, which can bind to the same receptors with different affinities, resulting in cross-talk. In traditional rate models, such cross-talk prevents accurate inference of individual ligand concentrations. In contrast, here we show that knowing the precise timing sequence of stochastic binding and unbinding events allows one receptor to provide information about multiple ligands simultaneously and with a high accuracy. We argue that such high-accuracy estimation of multiple concentrations can be realized by the familiar kinetic proofreading mechanism.
[ { "created": "Sun, 31 May 2015 20:43:24 GMT", "version": "v1" } ]
2015-06-02
[ [ "Singh", "Vijay", "" ], [ "Nemenman", "Ilya", "" ] ]
Cells use surface receptors to estimate the concentration of external ligands. Limits on the accuracy of such estimations have been well studied for pairs of ligand and receptor species. However, the environment typically contains many ligands, which can bind to the same receptors with different affinities, resulting in cross-talk. In traditional rate models, such cross-talk prevents accurate inference of individual ligand concentrations. In contrast, here we show that knowing the precise timing sequence of stochastic binding and unbinding events allows one receptor to provide information about multiple ligands simultaneously and with high accuracy. We argue that such high-accuracy estimation of multiple concentrations can be realized by the familiar kinetic proofreading mechanism.