Dataset schema (column name, type, observed range):

id: string, 9-13 characters
submitter: string, 4-48 characters
authors: string, 4-9.62k characters
title: string, 4-343 characters
comments: string, 2-480 characters
journal-ref: string, 9-309 characters
doi: string, 12-138 characters
report-no: string, 277 distinct values
categories: string, 8-87 characters
license: string, 9 distinct values
orig_abstract: string, 27-3.76k characters
versions: list, 1-15 items
update_date: string, 10 characters
authors_parsed: list, 1-147 items
abstract: string, 24-3.75k characters
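The records below are flattened rows in the column order of this schema. A minimal sketch of reading one back programmatically; the `parse_record` helper and the abbreviated sample record are illustrative only (long fields are truncated here), not part of any dataset tooling:

```python
# Column names from the schema above, in row order.
COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def parse_record(values):
    """Pair one flattened row of field values with the column names."""
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(values)}")
    return dict(zip(COLUMNS, values))

# Abbreviated first record from the dump (abstract fields truncated).
record = parse_record([
    "2010.15493",
    "Laura Wadkin MMath",
    "Laura E Wadkin, Sirio Orozco-Fuentes, Irina Neganova, Majlinda Lako, "
    "Nicholas G Parker and Anvar Shukurov",
    "An introduction to the mathematical modelling of iPSCs",
    None, None, None, None,            # comments, journal-ref, doi, report-no
    "q-bio.CB",
    "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "The aim of this chapter is to convey...",
    [{"created": "Thu, 29 Oct 2020 11:25:47 GMT", "version": "v1"}],
    "2020-10-30",
    [["Wadkin", "Laura E", ""]],
    "The aim of this chapter is to convey...",
])
```

In practice the rows would come from a dataset loader rather than a hand-written list; the sketch only makes the column order explicit.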
id: 2010.15493
submitter: Laura Wadkin MMath
authors: Laura E Wadkin, Sirio Orozco-Fuentes, Irina Neganova, Majlinda Lako, Nicholas G Parker and Anvar Shukurov
title: An introduction to the mathematical modelling of iPSCs
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.CB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
The aim of this chapter is to convey the importance and usefulness of mathematical modelling as a tool to achieve a deeper understanding of stem cell biology. We introduce key mathematical concepts (random walk theory, differential equations and agent-based modelling) which form the basis of current descriptions of induced pluripotent stem cells. We hope to encourage a meaningful dialogue between biologists and mathematicians and highlight the value of such an interdisciplinary approach.
versions: [ { "created": "Thu, 29 Oct 2020 11:25:47 GMT", "version": "v1" } ]
update_date: 2020-10-30
authors_parsed: [ [ "Wadkin", "Laura E", "" ], [ "Orozco-Fuentes", "Sirio", "" ], [ "Neganova", "Irina", "" ], [ "Lako", "Majlinda", "" ], [ "Parker", "Nicholas G", "" ], [ "Shukurov", "Anvar", "" ] ]
abstract:
The aim of this chapter is to convey the importance and usefulness of mathematical modelling as a tool to achieve a deeper understanding of stem cell biology. We introduce key mathematical concepts (random walk theory, differential equations and agent-based modelling) which form the basis of current descriptions of induced pluripotent stem cells. We hope to encourage a meaningful dialogue between biologists and mathematicians and highlight the value of such an interdisciplinary approach.
id: 1706.05121
submitter: Massimo Stella
authors: Massimo Stella, Sanja Selakovic, Alberto Antonioni and Cecilia S. Andreazzi
title: Community interactions determine role of species in parasite spread amplification: the ecomultiplex network model
comments: 11 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM physics.bio-ph physics.data-an q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Most zoonoses are multi-host parasites with multiple transmission routes that are usually investigated separately despite their potential interplay. As a unifying framework for modelling parasite spread through different paths of infection, we suggest "ecomultiplex" networks, i.e. multiplex networks representing interacting animal communities with (i) spatial structure and (ii) metabolic scaling. We exploit this ecological framework for testing potential control strategies for $T. cruzi$ spread in two real-world ecosystems. Our investigation highlights two interesting results. Firstly, the ecomultiplex topology can be as efficient as more data-demanding epidemiological measures in identifying which species facilitate parasite spread. Secondly, the interplay between predator-prey and host-parasite interactions leads to a phenomenon of parasite amplification in which top predators facilitate $T. cruzi$ spread, offering a theoretical interpretation of previous empirical findings. Our approach is broadly applicable and could provide novel insights in designing immunisation strategies for pathogens with multiple transmission routes in real-world ecosystems.
versions: [ { "created": "Fri, 16 Jun 2017 00:41:53 GMT", "version": "v1" } ]
update_date: 2017-06-19
authors_parsed: [ [ "Stella", "Massimo", "" ], [ "Selakovic", "Sanja", "" ], [ "Antonioni", "Alberto", "" ], [ "Andreazzi", "Cecilia S.", "" ] ]
abstract:
Most zoonoses are multi-host parasites with multiple transmission routes that are usually investigated separately despite their potential interplay. As a unifying framework for modelling parasite spread through different paths of infection, we suggest "ecomultiplex" networks, i.e. multiplex networks representing interacting animal communities with (i) spatial structure and (ii) metabolic scaling. We exploit this ecological framework for testing potential control strategies for $T. cruzi$ spread in two real-world ecosystems. Our investigation highlights two interesting results. Firstly, the ecomultiplex topology can be as efficient as more data-demanding epidemiological measures in identifying which species facilitate parasite spread. Secondly, the interplay between predator-prey and host-parasite interactions leads to a phenomenon of parasite amplification in which top predators facilitate $T. cruzi$ spread, offering a theoretical interpretation of previous empirical findings. Our approach is broadly applicable and could provide novel insights in designing immunisation strategies for pathogens with multiple transmission routes in real-world ecosystems.
id: 2201.00921
submitter: Chris Fields
authors: Chris Fields, James F. Glazebrook and Michael Levin
title: Neurons as hierarchies of quantum reference frames
comments: 40 pages, 7 figures
journal-ref: BioSystems 219: 104714, 2022
doi: 10.1016/j.biosystems.2022.104714
report-no: null
categories: q-bio.NC quant-ph
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract:
Conceptual and mathematical models of neurons have lagged behind empirical understanding for decades. Here we extend previous work in modeling biological systems with fully scale-independent quantum information-theoretic tools to develop a uniform, scalable representation of synapses, dendritic and axonal processes, neurons, and local networks of neurons. In this representation, hierarchies of quantum reference frames act as hierarchical active-inference systems. The resulting model enables specific predictions of correlations between synaptic activity, dendritic remodeling, and trophic reward. We summarize how the model may be generalized to nonneural cells and tissues in developmental and regenerative contexts.
versions: [ { "created": "Tue, 4 Jan 2022 00:53:56 GMT", "version": "v1" } ]
update_date: 2022-07-21
authors_parsed: [ [ "Fields", "Chris", "" ], [ "Glazebrook", "James F.", "" ], [ "Levin", "Michael", "" ] ]
abstract:
Conceptual and mathematical models of neurons have lagged behind empirical understanding for decades. Here we extend previous work in modeling biological systems with fully scale-independent quantum information-theoretic tools to develop a uniform, scalable representation of synapses, dendritic and axonal processes, neurons, and local networks of neurons. In this representation, hierarchies of quantum reference frames act as hierarchical active-inference systems. The resulting model enables specific predictions of correlations between synaptic activity, dendritic remodeling, and trophic reward. We summarize how the model may be generalized to nonneural cells and tissues in developmental and regenerative contexts.
id: q-bio/0612005
submitter: Henrik Jeldtoft Jensen
authors: Alastair Windus and Henrik Jeldtoft Jensen
title: Allee Effects and Extinction in a Lattice Model
comments: 7 pages and 8 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: null
orig_abstract:
In the interest of conservation, the importance of having a large habitat available for a species is widely known. Here, we introduce a lattice-based model for a population and look at the importance of fluctuations as well as that of the population density, particularly with respect to Allee effects. We examine the model analytically and by Monte Carlo simulations and find that, while the size of the habitat is important, there exists a critical population density below which extinction is assured. This has large consequences with respect to conservation, especially in the design of habitats and for populations whose density has become small. In particular, we find that the probability of survival for small populations can be increased by a reduction in the size of the habitat and show that there exists an optimal size reduction.
versions: [ { "created": "Tue, 5 Dec 2006 13:26:55 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Windus", "Alastair", "" ], [ "Jensen", "Henrik Jeldtoft", "" ] ]
abstract:
In the interest of conservation, the importance of having a large habitat available for a species is widely known. Here, we introduce a lattice-based model for a population and look at the importance of fluctuations as well as that of the population density, particularly with respect to Allee effects. We examine the model analytically and by Monte Carlo simulations and find that, while the size of the habitat is important, there exists a critical population density below which extinction is assured. This has large consequences with respect to conservation, especially in the design of habitats and for populations whose density has become small. In particular, we find that the probability of survival for small populations can be increased by a reduction in the size of the habitat and show that there exists an optimal size reduction.
id: 2010.15334
submitter: Hayato Chiba
authors: Kiyoshi Kotani, Akihiko Akao, Hayato Chiba
title: Bifurcation of the neuronal population dynamics of the modified theta model: transition to macroscopic gamma oscillation
comments: null
journal-ref: null
doi: 10.1016/j.physd.2020.132789
report-no: null
categories: q-bio.NC math.DS nlin.AO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Interactions of inhibitory neurons produce gamma oscillations (30--80 Hz) in the local field potential, which is known to be involved in functions such as cognition and attention. In this study, the modified theta model is considered to investigate the theoretical relationship between the microscopic structure of inhibitory neurons and their gamma oscillations under a wide class of distribution functions of tonic currents on individual neurons. The stability and bifurcation of gamma oscillations for the Vlasov equation of the model is investigated by the generalized spectral theory. It is shown that as a connection probability of neurons increases, a pair of generalized eigenvalues crosses the imaginary axis twice, which implies that a stable gamma oscillation exists only when the connection probability has a value within a suitable range. On the other hand, when the distribution of tonic currents on individual neurons is the Lorentzian distribution, the Vlasov equation is reduced to a finite dimensional dynamical system. The bifurcation analyses of the reduced equation exhibit equivalent results with the generalized spectral theory. It is also demonstrated that the numerical computations of neuronal population follow the analyses of the generalized spectral theory as well as the bifurcation analysis of the reduced equation.
versions: [ { "created": "Thu, 29 Oct 2020 03:18:20 GMT", "version": "v1" } ]
update_date: 2021-02-03
authors_parsed: [ [ "Kotani", "Kiyoshi", "" ], [ "Akao", "Akihiko", "" ], [ "Chiba", "Hayato", "" ] ]
abstract:
Interactions of inhibitory neurons produce gamma oscillations (30--80 Hz) in the local field potential, which is known to be involved in functions such as cognition and attention. In this study, the modified theta model is considered to investigate the theoretical relationship between the microscopic structure of inhibitory neurons and their gamma oscillations under a wide class of distribution functions of tonic currents on individual neurons. The stability and bifurcation of gamma oscillations for the Vlasov equation of the model is investigated by the generalized spectral theory. It is shown that as a connection probability of neurons increases, a pair of generalized eigenvalues crosses the imaginary axis twice, which implies that a stable gamma oscillation exists only when the connection probability has a value within a suitable range. On the other hand, when the distribution of tonic currents on individual neurons is the Lorentzian distribution, the Vlasov equation is reduced to a finite dimensional dynamical system. The bifurcation analyses of the reduced equation exhibit equivalent results with the generalized spectral theory. It is also demonstrated that the numerical computations of neuronal population follow the analyses of the generalized spectral theory as well as the bifurcation analysis of the reduced equation.
id: 2201.06725
submitter: Yijun Li
authors: Stefan Stanojevic, Yijun Li, Lana X. Garmire
title: Computational Methods for Single-Cell Multi-Omics Integration and Alignment
comments: 26 pages, 4 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://creativecommons.org/licenses/by-sa/4.0/
orig_abstract:
Recently developed technologies to generate single-cell genomic data have made a revolutionary impact in the field of biology. Multi-omics assays offer even greater opportunities to understand cellular states and biological processes. However, the problem of integrating different -omics data with very different dimensionality and statistical properties remains quite challenging. A growing body of computational tools are being developed for this task, leveraging ideas ranging from machine translation to the theory of networks and representing a new frontier on the interface of biology and data science. Our goal in this review paper is to provide a comprehensive, up-to-date survey of computational techniques for the integration of multi-omics and alignment of multiple modalities of genomics data in the single cell research field.
versions: [ { "created": "Tue, 18 Jan 2022 04:00:58 GMT", "version": "v1" } ]
update_date: 2022-01-19
authors_parsed: [ [ "Stanojevic", "Stefan", "" ], [ "Li", "Yijun", "" ], [ "Garmire", "Lana X.", "" ] ]
abstract:
Recently developed technologies to generate single-cell genomic data have made a revolutionary impact in the field of biology. Multi-omics assays offer even greater opportunities to understand cellular states and biological processes. However, the problem of integrating different -omics data with very different dimensionality and statistical properties remains quite challenging. A growing body of computational tools are being developed for this task, leveraging ideas ranging from machine translation to the theory of networks and representing a new frontier on the interface of biology and data science. Our goal in this review paper is to provide a comprehensive, up-to-date survey of computational techniques for the integration of multi-omics and alignment of multiple modalities of genomics data in the single cell research field.
id: 1311.1496
submitter: Julián Candia
authors: Julián Candia, Jayanth R. Banavar, Wolfgang Losert
title: Understanding Health and Disease with Multidimensional Single-Cell Methods
comments: 25 pages, 7 figures; revised version with minor changes. To appear in J. Phys.: Condens. Matter
journal-ref: J. Phys.: Condens. Matter 26 (2014) 073102
doi: null
report-no: null
categories: q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Current efforts in the biomedical sciences and related interdisciplinary fields are focused on gaining a molecular understanding of health and disease, which is a problem of daunting complexity that spans many orders of magnitude in characteristic length scales, from small molecules that regulate cell function to cell ensembles that form tissues and organs working together as an organism. In order to uncover the molecular nature of the emergent properties of a cell, it is essential to measure multiple cell components simultaneously in the same cell. In turn, cell heterogeneity requires multiple cells to be measured in order to understand health and disease in the organism. This review summarizes current efforts towards a data-driven framework that leverages single-cell technologies to build robust signatures of healthy and diseased phenotypes. While some approaches focus on multicolor flow cytometry data and other methods are designed to analyze high-content image-based screens, we emphasize the so-called Supercell/SVM paradigm (recently developed by the authors of this review and collaborators) as a unified framework that captures mesoscopic-scale emergence to build reliable phenotypes. Beyond their specific contributions to basic and translational biomedical research, these efforts illustrate, from a larger perspective, the powerful synergy that might be achieved from bringing together methods and ideas from statistical physics, data mining, and mathematics to solve the most pressing problems currently facing the life sciences.
versions: [ { "created": "Wed, 6 Nov 2013 20:53:53 GMT", "version": "v1" }, { "created": "Sun, 1 Dec 2013 16:12:22 GMT", "version": "v2" } ]
update_date: 2014-01-24
authors_parsed: [ [ "Candia", "Julián", "" ], [ "Banavar", "Jayanth R.", "" ], [ "Losert", "Wolfgang", "" ] ]
abstract:
Current efforts in the biomedical sciences and related interdisciplinary fields are focused on gaining a molecular understanding of health and disease, which is a problem of daunting complexity that spans many orders of magnitude in characteristic length scales, from small molecules that regulate cell function to cell ensembles that form tissues and organs working together as an organism. In order to uncover the molecular nature of the emergent properties of a cell, it is essential to measure multiple cell components simultaneously in the same cell. In turn, cell heterogeneity requires multiple cells to be measured in order to understand health and disease in the organism. This review summarizes current efforts towards a data-driven framework that leverages single-cell technologies to build robust signatures of healthy and diseased phenotypes. While some approaches focus on multicolor flow cytometry data and other methods are designed to analyze high-content image-based screens, we emphasize the so-called Supercell/SVM paradigm (recently developed by the authors of this review and collaborators) as a unified framework that captures mesoscopic-scale emergence to build reliable phenotypes. Beyond their specific contributions to basic and translational biomedical research, these efforts illustrate, from a larger perspective, the powerful synergy that might be achieved from bringing together methods and ideas from statistical physics, data mining, and mathematics to solve the most pressing problems currently facing the life sciences.
id: 0901.4589
submitter: Tijana Milenkovic
authors: Natasa Przulj
title: Biological network comparison using graphlet degree distribution
comments: Proceedings of the 2006 European Conference on Computational Biology, ECCB'06, Eilat, Israel, January 21-24, 2007
journal-ref: Bioinformatics 2007 23: e177-e183
doi: 10.1093/bioinformatics/btl301
report-no: null
categories: q-bio.MN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Analogous to biological sequence comparison, comparing cellular networks is an important problem that could provide insight into biological understanding and therapeutics. For technical reasons, comparing large networks is computationally infeasible, and thus heuristics such as the degree distribution have been sought. It is easy to demonstrate that two networks are different by simply showing a short list of properties in which they differ. It is much harder to show that two networks are similar, as it requires demonstrating their similarity in all of their exponentially many properties. Clearly, it is computationally prohibitive to analyze all network properties, but the larger the number of constraints we impose in determining network similarity, the more likely it is that the networks will truly be similar. We introduce a new systematic measure of a network's local structure that imposes a large number of similarity constraints on networks being compared. In particular, we generalize the degree distribution, which measures the number of nodes 'touching' k edges, into distributions measuring the number of nodes 'touching' k graphlets, where graphlets are small connected non-isomorphic subgraphs of a large network. Our new measure of network local structure consists of 73 graphlet degree distributions (GDDs) of graphlets with 2-5 nodes, but it is easily extendible to a greater number of constraints (i.e. graphlets). Furthermore, we show a way to combine the 73 GDDs into a network 'agreement' measure. Based on this new network agreement measure, we show that almost all of the 14 eukaryotic PPI networks, including human, are better modeled by geometric random graphs than by Erdos-Renyi, random scale-free, or Barabasi-Albert scale-free networks.
versions: [ { "created": "Thu, 29 Jan 2009 00:30:12 GMT", "version": "v1" } ]
update_date: 2009-01-30
authors_parsed: [ [ "Przulj", "Natasa", "" ] ]
abstract:
Analogous to biological sequence comparison, comparing cellular networks is an important problem that could provide insight into biological understanding and therapeutics. For technical reasons, comparing large networks is computationally infeasible, and thus heuristics such as the degree distribution have been sought. It is easy to demonstrate that two networks are different by simply showing a short list of properties in which they differ. It is much harder to show that two networks are similar, as it requires demonstrating their similarity in all of their exponentially many properties. Clearly, it is computationally prohibitive to analyze all network properties, but the larger the number of constraints we impose in determining network similarity, the more likely it is that the networks will truly be similar. We introduce a new systematic measure of a network's local structure that imposes a large number of similarity constraints on networks being compared. In particular, we generalize the degree distribution, which measures the number of nodes 'touching' k edges, into distributions measuring the number of nodes 'touching' k graphlets, where graphlets are small connected non-isomorphic subgraphs of a large network. Our new measure of network local structure consists of 73 graphlet degree distributions (GDDs) of graphlets with 2-5 nodes, but it is easily extendible to a greater number of constraints (i.e. graphlets). Furthermore, we show a way to combine the 73 GDDs into a network 'agreement' measure. Based on this new network agreement measure, we show that almost all of the 14 eukaryotic PPI networks, including human, are better modeled by geometric random graphs than by Erdos-Renyi, random scale-free, or Barabasi-Albert scale-free networks.
id: 1309.0455
submitter: Bahram Houchmandzadeh
authors: Bahram Houchmandzadeh (LIPhy)
title: An alternative to the breeder's and Lande's equations
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
The breeder's equation is a cornerstone of quantitative genetics and is widely used in evolutionary modeling. The equation which reads R=h^{2}S relates response to selection R (the mean phenotype of the progeny) to the selection differential S (mean phenotype of selected parents) through a simple proportionality relation. The validity of this relation however relies strongly on the normal (Gaussian) distribution of parent's genotype which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian, an alternative, exact linear equation in the form of R'=j^{2}S' can be derived, regardless of the parental genotype distribution. Here R' and S' stand for the mean phenotypic lag behind the mean of the fitness function in the offspring and selected populations. To demonstrate this relation, we derive the exact functional relation between the mean phenotype in the selected and the offspring population and deduce all cases that lead to a linear relation between these quantities. These computations, which are confirmed by individual based numerical simulations, generalize naturally to the multivariate Lande's equation \Delta\mathbf{\bar{z}}=GP^{-1}\mathbf{S} .
versions: [ { "created": "Mon, 2 Sep 2013 16:25:28 GMT", "version": "v1" } ]
update_date: 2013-09-03
authors_parsed: [ [ "Houchmandzadeh", "Bahram", "", "LIPhy" ] ]
abstract:
The breeder's equation is a cornerstone of quantitative genetics and is widely used in evolutionary modeling. The equation which reads R=h^{2}S relates response to selection R (the mean phenotype of the progeny) to the selection differential S (mean phenotype of selected parents) through a simple proportionality relation. The validity of this relation however relies strongly on the normal (Gaussian) distribution of parent's genotype which is an unobservable quantity and cannot be ascertained. In contrast, we show here that if the fitness (or selection) function is Gaussian, an alternative, exact linear equation in the form of R'=j^{2}S' can be derived, regardless of the parental genotype distribution. Here R' and S' stand for the mean phenotypic lag behind the mean of the fitness function in the offspring and selected populations. To demonstrate this relation, we derive the exact functional relation between the mean phenotype in the selected and the offspring population and deduce all cases that lead to a linear relation between these quantities. These computations, which are confirmed by individual based numerical simulations, generalize naturally to the multivariate Lande's equation \Delta\mathbf{\bar{z}}=GP^{-1}\mathbf{S} .
id: 2404.15805
submitter: Wei Chen
authors: Shujian Jiao, Bingxuan Li, Lei Wang, Xiaojin Zhang, Wei Chen, Jiajie Peng, Zhongyu Wei
title: Beyond ESM2: Graph-Enhanced Protein Sequence Modeling with Efficient Clustering
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
Proteins are essential to life's processes, underpinning evolution and diversity. Advances in sequencing technology have revealed millions of proteins, underscoring the need for sophisticated pre-trained protein models for biological analysis and AI development. Facebook's ESM2, the most advanced protein language model to date, leverages a masked prediction task for unsupervised learning, crafting amino acid representations with notable biochemical accuracy. Yet, it falls short in delivering functional protein insights, signaling an opportunity for enhancing representation quality. Our study addresses this gap by incorporating protein family classification into ESM2's training. This approach, augmented with a Community Propagation-Based Clustering Algorithm, improves global protein representations, while a contextual prediction task fine-tunes local amino acid accuracy. Significantly, our model achieved state-of-the-art results in several downstream experiments, demonstrating the power of combining global and local methodologies to substantially boost protein representation quality.
versions: [ { "created": "Wed, 24 Apr 2024 11:09:43 GMT", "version": "v1" } ]
update_date: 2024-04-25
authors_parsed: [ [ "Jiao", "Shujian", "" ], [ "Li", "Bingxuan", "" ], [ "Wang", "Lei", "" ], [ "Zhang", "Xiaojin", "" ], [ "Chen", "Wei", "" ], [ "Peng", "Jiajie", "" ], [ "Wei", "Zhongyu", "" ] ]
abstract:
Proteins are essential to life's processes, underpinning evolution and diversity. Advances in sequencing technology have revealed millions of proteins, underscoring the need for sophisticated pre-trained protein models for biological analysis and AI development. Facebook's ESM2, the most advanced protein language model to date, leverages a masked prediction task for unsupervised learning, crafting amino acid representations with notable biochemical accuracy. Yet, it falls short in delivering functional protein insights, signaling an opportunity for enhancing representation quality. Our study addresses this gap by incorporating protein family classification into ESM2's training. This approach, augmented with a Community Propagation-Based Clustering Algorithm, improves global protein representations, while a contextual prediction task fine-tunes local amino acid accuracy. Significantly, our model achieved state-of-the-art results in several downstream experiments, demonstrating the power of combining global and local methodologies to substantially boost protein representation quality.
id: 2407.12382
submitter: Hugues Berry
authors: Nathan Quiblier (AISTROSIGHT), Jan-Michael Rye (AISTROSIGHT), Pierre Leclerc (PhLAM), Henri Truong (PhLAM), Abdelkrim Hannou (PhLAM), Laurent Héliot (PhLAM), Hugues Berry (AISTROSIGHT)
title: Enhancing Fluorescence Correlation Spectroscopy with Machine Learning for Advanced Analysis of Anomalous Diffusion
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract:
The random motion of molecules in living cells has consistently been reported to deviate from standard Brownian motion, a behavior coined as "anomalous diffusion". Fluorescence Correlation Spectroscopy (FCS) is a powerful method to quantify molecular motions in living cells but its application is limited to a subset of random motions and to long acquisition times. Here, we propose a new analysis approach that frees FCS of these limitations by using machine learning to infer the underlying model of motion and estimate the motion parameters. Using simulated FCS recordings, we show that this approach enlarges the range of anomalous motions available in FCS. We further validate our approach via experimental FCS recordings of calibrated fluorescent beads in increasing concentrations of glycerol in water. Taken together, our approach significantly augments the analysis power of FCS to capacities that are similar to those of the best-in-class algorithms for single-particle-tracking experiments.
versions: [ { "created": "Wed, 17 Jul 2024 08:02:25 GMT", "version": "v1" } ]
update_date: 2024-07-18
authors_parsed: [ [ "Quiblier", "Nathan", "", "AISTROSIGHT" ], [ "Rye", "Jan-Michael", "", "AISTROSIGHT" ], [ "Leclerc", "Pierre", "", "PhLAM" ], [ "Truong", "Henri", "", "PhLAM" ], [ "Hannou", "Abdelkrim", "", "PhLAM" ], [ "Héliot", "Laurent", "", "PhLAM" ], [ "Berry", "Hugues", "", "AISTROSIGHT" ] ]
abstract:
The random motion of molecules in living cells has consistently been reported to deviate from standard Brownian motion, a behavior coined as "anomalous diffusion". Fluorescence Correlation Spectroscopy (FCS) is a powerful method to quantify molecular motions in living cells but its application is limited to a subset of random motions and to long acquisition times. Here, we propose a new analysis approach that frees FCS of these limitations by using machine learning to infer the underlying model of motion and estimate the motion parameters. Using simulated FCS recordings, we show that this approach enlarges the range of anomalous motions available in FCS. We further validate our approach via experimental FCS recordings of calibrated fluorescent beads in increasing concentrations of glycerol in water. Taken together, our approach significantly augments the analysis power of FCS to capacities that are similar to those of the best-in-class algorithms for single-particle-tracking experiments.
id: q-bio/0610055
submitter: Illes Farkas
authors: Illes J. Farkas, Chuang Wu, Chakra Chennubhotla, Ivet Bahar, Zoltan N. Oltvai
title: Topological basis of signal integration in the transcriptional-regulatory network of the yeast, Saccharomyces cerevisiae
comments: Color figures, supplement and free full text article available at http://www.biomedcentral.com/1471-2105/7/478/abstract
journal-ref: BMC Bioinformatics 2006, 7:478
doi: 10.1186/1471-2105-7-478
report-no: null
categories: q-bio.MN q-bio.GN
license: null
orig_abstract:
BACKGROUND. Signal recognition and information processing is a fundamental cellular function, which in part involves comprehensive transcriptional regulatory (TR) mechanisms carried out in response to complex environmental signals in the context of the cell's own internal state. However, the network topological basis of developing such integrated responses remains poorly understood. RESULTS. By studying the TR network of the yeast Saccharomyces cerevisiae we show that an intermediate layer of transcription factors naturally segregates into distinct subnetworks. In these topological units transcription factors are densely interlinked in a largely hierarchical manner and respond to external signals by utilizing a fraction of these subnets. CONCLUSIONS. As transcriptional regulation represents the "slow" component of overall information processing, the identified topology suggests a model in which successive waves of transcriptional regulation originating from distinct fractions of the TR network control robust integrated responses to complex stimuli.
versions: [ { "created": "Mon, 30 Oct 2006 09:19:53 GMT", "version": "v1" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Farkas", "Illes J.", "" ], [ "Wu", "Chuang", "" ], [ "Chennubhotla", "Chakra", "" ], [ "Bahar", "Ivet", "" ], [ "Oltvai", "Zoltan N.", "" ] ]
abstract:
BACKGROUND. Signal recognition and information processing is a fundamental cellular function, which in part involves comprehensive transcriptional regulatory (TR) mechanisms carried out in response to complex environmental signals in the context of the cell's own internal state. However, the network topological basis of developing such integrated responses remains poorly understood. RESULTS. By studying the TR network of the yeast Saccharomyces cerevisiae we show that an intermediate layer of transcription factors naturally segregates into distinct subnetworks. In these topological units transcription factors are densely interlinked in a largely hierarchical manner and respond to external signals by utilizing a fraction of these subnets. CONCLUSIONS. As transcriptional regulation represents the "slow" component of overall information processing, the identified topology suggests a model in which successive waves of transcriptional regulation originating from distinct fractions of the TR network control robust integrated responses to complex stimuli.
1905.07279
Sirio Orozco-Fuentes
Sirio Orozco-Fuentes, Irina Neganova, Laura E. Wadkin, Andrew W. Baggaley, Rafael A. Barrio, Majlinda Lako, Anvar Shukurov, Nicholas G. Parker
Quantification of the morphological characteristics of hESC colonies
null
null
null
null
q-bio.QM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The maintenance of the pluripotent state in human embryonic stem cells (hESCs) is critical for further application in regenerative medicine, drug testing and studies of fundamental biology. Currently, the selection of the best quality cells and colonies for propagation is typically performed by eye, in terms of the displayed morphological features, such as prominent/abundant nucleoli and a colony with a tightly packed appearance and a well-defined edge. Using image analysis and computational tools, we precisely quantify these properties using phase-contrast images of hESC colonies of different sizes (0.1 -- 1.1$\, \text{mm}^2$) during days 2, 3 and 4 after plating. Our analyses reveal noticeable differences in their structure influenced directly by the colony area $A$. Large colonies ($A > 0.6 \, \text{mm}^2$) have cells with smaller nuclei and a shorter intercellular distance when compared with small colonies ($A < 0.2 \, \text{mm}^2$). The gaps between the cells, which are present in small and medium sized colonies with $A \le 0.6 \, \text{mm}^2$, disappear in large colonies ($A > 0.6 \, \text{mm}^2$) due to the proliferation of the cells in the bulk. This increases the colony density and the number of nearest neighbours. We also detect the self-organisation of cells in the colonies, where newly divided (smallest) cells cluster together in patches, separated from larger cells at the final stages of the cell cycle. This might directly influence cell-to-cell interactions and the community effects within the colonies, since the segregation induced by size differences allows the interchange of neighbours as the cells proliferate and the colony grows. Our findings are relevant to efforts to determine the quality of hESC colonies and to establish a colony characteristics database.
[ { "created": "Fri, 17 May 2019 14:11:44 GMT", "version": "v1" } ]
2019-05-20
[ [ "Orozco-Fuentes", "Sirio", "" ], [ "Neganova", "Irina", "" ], [ "Wadkin", "Laura E.", "" ], [ "Baggaley", "Andrew W.", "" ], [ "Barrio", "Rafael A.", "" ], [ "Lako", "Majlinda", "" ], [ "Shukurov", "Anvar", "" ], [ "Parker", "Nicholas G.", "" ] ]
The maintenance of the pluripotent state in human embryonic stem cells (hESCs) is critical for further application in regenerative medicine, drug testing and studies of fundamental biology. Currently, the selection of the best quality cells and colonies for propagation is typically performed by eye, in terms of the displayed morphological features, such as prominent/abundant nucleoli and a colony with a tightly packed appearance and a well-defined edge. Using image analysis and computational tools, we precisely quantify these properties using phase-contrast images of hESC colonies of different sizes (0.1 -- 1.1$\, \text{mm}^2$) during days 2, 3 and 4 after plating. Our analyses reveal noticeable differences in their structure influenced directly by the colony area $A$. Large colonies ($A > 0.6 \, \text{mm}^2$) have cells with smaller nuclei and a shorter intercellular distance when compared with small colonies ($A < 0.2 \, \text{mm}^2$). The gaps between the cells, which are present in small and medium sized colonies with $A \le 0.6 \, \text{mm}^2$, disappear in large colonies ($A > 0.6 \, \text{mm}^2$) due to the proliferation of the cells in the bulk. This increases the colony density and the number of nearest neighbours. We also detect the self-organisation of cells in the colonies, where newly divided (smallest) cells cluster together in patches, separated from larger cells at the final stages of the cell cycle. This might directly influence cell-to-cell interactions and the community effects within the colonies, since the segregation induced by size differences allows the interchange of neighbours as the cells proliferate and the colony grows. Our findings are relevant to efforts to determine the quality of hESC colonies and to establish a colony characteristics database.
2402.17153
Yun S. Song
Michael Celentano, William S. DeWitt, Sebastian Prillo, Yun S. Song
Exact and efficient phylodynamic simulation from arbitrarily large populations
37 pages, 7 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Many biological studies involve inferring the evolutionary history of a sample of individuals from a large population and interpreting the reconstructed tree. Such an ascertained tree typically represents only a small part of a comprehensive population tree and is distorted by survivorship and sampling biases. Inferring evolutionary parameters from ascertained trees requires modeling both the underlying population dynamics and the ascertainment process. A crucial component of this phylodynamic modeling involves tree simulation, which is used to benchmark probabilistic inference methods. To simulate an ascertained tree, one must first simulate the full population tree and then prune unobserved lineages. Consequently, the computational cost is determined not by the size of the final simulated tree, but by the size of the population tree in which it is embedded. In most biological scenarios, simulations of the entire population are prohibitively expensive due to computational demands placed on lineages without sampled descendants. Here, we address this challenge by proving that, for any partially ascertained process from a general multi-type birth-death-mutation-sampling model, there exists an equivalent process with complete sampling and no death, a property which we leverage to develop a highly efficient algorithm for simulating trees. Our algorithm scales linearly with the size of the final simulated tree and is independent of the population size, enabling simulations from extremely large populations beyond the reach of current methods but essential for various biological applications. We anticipate that this unprecedented speedup will significantly advance the development of novel inference methods that require extensive training data.
[ { "created": "Tue, 27 Feb 2024 02:37:53 GMT", "version": "v1" }, { "created": "Sat, 10 Aug 2024 07:59:31 GMT", "version": "v2" } ]
2024-08-13
[ [ "Celentano", "Michael", "" ], [ "DeWitt", "William S.", "" ], [ "Prillo", "Sebastian", "" ], [ "Song", "Yun S.", "" ] ]
Many biological studies involve inferring the evolutionary history of a sample of individuals from a large population and interpreting the reconstructed tree. Such an ascertained tree typically represents only a small part of a comprehensive population tree and is distorted by survivorship and sampling biases. Inferring evolutionary parameters from ascertained trees requires modeling both the underlying population dynamics and the ascertainment process. A crucial component of this phylodynamic modeling involves tree simulation, which is used to benchmark probabilistic inference methods. To simulate an ascertained tree, one must first simulate the full population tree and then prune unobserved lineages. Consequently, the computational cost is determined not by the size of the final simulated tree, but by the size of the population tree in which it is embedded. In most biological scenarios, simulations of the entire population are prohibitively expensive due to computational demands placed on lineages without sampled descendants. Here, we address this challenge by proving that, for any partially ascertained process from a general multi-type birth-death-mutation-sampling model, there exists an equivalent process with complete sampling and no death, a property which we leverage to develop a highly efficient algorithm for simulating trees. Our algorithm scales linearly with the size of the final simulated tree and is independent of the population size, enabling simulations from extremely large populations beyond the reach of current methods but essential for various biological applications. We anticipate that this unprecedented speedup will significantly advance the development of novel inference methods that require extensive training data.
1208.4973
Simone Pigolotti
Simone Pigolotti, Roberto Benzi, Prasad Perlekar, Mogens H. Jensen, Federico Toschi, David R. Nelson
Growth, competition and cooperation in spatial population genetics
29 pages, 14 figures; revised version including a section with results in the presence of fluid flows
Theoretical Population Biology 84, 72-86 (2013)
10.1016/j.tpb.2012.12.002
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study an individual based model describing competition in space between two different alleles. Although the model is similar in spirit to classic models of spatial population genetics such as the stepping stone model, here however space is continuous and the total density of competing individuals fluctuates due to demographic stochasticity. By means of analytics and numerical simulations, we study the behavior of fixation probabilities, fixation times, and heterozygosity, in a neutral setting and in cases where the two species can compete or cooperate. By concluding with examples in which individuals are transported by fluid flows, we argue that this model is a natural choice to describe competition in marine environments.
[ { "created": "Fri, 24 Aug 2012 13:33:37 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2012 12:53:15 GMT", "version": "v2" } ]
2015-03-20
[ [ "Pigolotti", "Simone", "" ], [ "Benzi", "Roberto", "" ], [ "Perlekar", "Prasad", "" ], [ "Jensen", "Mogens H.", "" ], [ "Toschi", "Federico", "" ], [ "Nelson", "David R.", "" ] ]
We study an individual based model describing competition in space between two different alleles. Although the model is similar in spirit to classic models of spatial population genetics such as the stepping stone model, here however space is continuous and the total density of competing individuals fluctuates due to demographic stochasticity. By means of analytics and numerical simulations, we study the behavior of fixation probabilities, fixation times, and heterozygosity, in a neutral setting and in cases where the two species can compete or cooperate. By concluding with examples in which individuals are transported by fluid flows, we argue that this model is a natural choice to describe competition in marine environments.
2110.10483
Thorsten Hugel
Benedikt Sohmen, Christian Beck, Tilo Seydel, Ingo Hoffmann, Bianca Hermann, Mark N\"uesch, Marco Grimaldo, Frank Schreiber, Steffen Wolf, Felix Roosen-Runge and Thorsten Hugel
The onset of molecule-spanning dynamics in a multi-domain protein
29 main text pages, 7 main text figures; 27 SI pages, 15 SI figures
null
null
null
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Protein dynamics has been investigated on a wide range of time scales. Nano- and picosecond dynamics have been assigned to local fluctuations, while slower dynamics have been attributed to larger conformational changes. However, it is largely unknown how local fluctuations can lead to global allosteric changes. Here we show that molecule-spanning dynamics on the 100 ns time scale precede larger allosteric changes. We assign global real-space movements to dynamic modes on the 100 ns time scale, which became possible through a combination of single-molecule fluorescence, quasi-elastic neutron scattering and all-atom MD simulations. Additionally, we demonstrate the effect of Sba1, a co-chaperone of Hsp90, on these molecule-spanning dynamics, which implies the functional importance of such dynamics. Our integrative approach provides comprehensive insights into molecule-spanning dynamics on the nanosecond time scale for a multi-domain protein and indicates that such dynamics are the molecular basis for allostery and large conformational changes in proteins.
[ { "created": "Wed, 20 Oct 2021 10:59:02 GMT", "version": "v1" }, { "created": "Thu, 17 Mar 2022 15:45:40 GMT", "version": "v2" } ]
2022-03-18
[ [ "Sohmen", "Benedikt", "" ], [ "Beck", "Christian", "" ], [ "Seydel", "Tilo", "" ], [ "Hoffmann", "Ingo", "" ], [ "Hermann", "Bianca", "" ], [ "Nüesch", "Mark", "" ], [ "Grimaldo", "Marco", "" ], [ "Schreiber", "Frank", "" ], [ "Wolf", "Steffen", "" ], [ "Roosen-Runge", "Felix", "" ], [ "Hugel", "Thorsten", "" ] ]
Protein dynamics has been investigated on a wide range of time scales. Nano- and picosecond dynamics have been assigned to local fluctuations, while slower dynamics have been attributed to larger conformational changes. However, it is largely unknown how local fluctuations can lead to global allosteric changes. Here we show that molecule-spanning dynamics on the 100 ns time scale precede larger allosteric changes. We assign global real-space movements to dynamic modes on the 100 ns time scale, which became possible through a combination of single-molecule fluorescence, quasi-elastic neutron scattering and all-atom MD simulations. Additionally, we demonstrate the effect of Sba1, a co-chaperone of Hsp90, on these molecule-spanning dynamics, which implies the functional importance of such dynamics. Our integrative approach provides comprehensive insights into molecule-spanning dynamics on the nanosecond time scale for a multi-domain protein and indicates that such dynamics are the molecular basis for allostery and large conformational changes in proteins.
2208.02818
Fran\c{c}ois Hug
Fran\c{c}ois Hug, Simon Avrillon, Jaime Ib\'a\~nez, Dario Farina
Common Synaptic Input, Synergies, and Size Principle: Control of Spinal Motor Neurons for Movement Generation
12 pages; 1 figure; review paper or opinion piece
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Understanding how movement is controlled by the central nervous system remains a major challenge, with ongoing debate about basic features underlying this control. In this review, we introduce a new conceptual framework for the distribution of common input to spinal motor neurons. Specifically, this framework is based on the following assumptions: 1) motor neurons are grouped into functional groups (clusters) based on the common inputs they receive; 2) clusters may significantly differ from the classical definition of motor neuron pools, such that they may span across muscles and/or involve only a portion of a muscle; 3) clusters represent functional modules used by the central nervous system to reduce the dimensionality of the control; and 4) selective volitional control of single motor neurons within a cluster receiving common inputs cannot be achieved. We discuss this framework and its underlying theoretical and experimental evidence.
[ { "created": "Wed, 27 Jul 2022 05:10:13 GMT", "version": "v1" } ]
2022-08-08
[ [ "Hug", "François", "" ], [ "Avrillon", "Simon", "" ], [ "Ibáñez", "Jaime", "" ], [ "Farina", "Dario", "" ] ]
Understanding how movement is controlled by the central nervous system remains a major challenge, with ongoing debate about basic features underlying this control. In this review, we introduce a new conceptual framework for the distribution of common input to spinal motor neurons. Specifically, this framework is based on the following assumptions: 1) motor neurons are grouped into functional groups (clusters) based on the common inputs they receive; 2) clusters may significantly differ from the classical definition of motor neuron pools, such that they may span across muscles and/or involve only a portion of a muscle; 3) clusters represent functional modules used by the central nervous system to reduce the dimensionality of the control; and 4) selective volitional control of single motor neurons within a cluster receiving common inputs cannot be achieved. We discuss this framework and its underlying theoretical and experimental evidence.
1604.01943
Michele Giugliano
Rocco Pulizzi, Gabriele Musumeci, Chris Van Den Haute, Sebastian Van De Vijver, Veerle Baekelandt, Michele Giugliano
Brief wide-field photostimuli evoke and modulate oscillatory reverberating activity in cortical networks
23 pages, 7 figures, 2 supplemental figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The manipulation of cell assemblies by optogenetics is pivotal to advance neuroscience and neuroengineering. In in vivo applications, photostimulation often broadly addresses a population of cells simultaneously, leading to feed-forward and to reverberating responses in recurrent microcircuits. The former arise from direct activation of targets downstream, and are straightforward to interpret. The latter are a consequence of feedback connectivity and may reflect a variety of time-scales and complex dynamical properties. We investigated wide-field photostimulation in cortical networks in vitro, employing substrate-integrated microelectrode arrays and long-term cultured neuronal networks. We characterized the effect of brief light pulses, while restricting the expression of channelrhodopsin to principal neurons. We evoked robust reverberating responses, oscillating in the physiological gamma frequency range, and found that such a frequency could be reliably manipulated by varying the light pulse duration, not its intensity. By pharmacology, mathematical modelling, and intracellular recordings, we conclude that gamma oscillations likely emerge, as in vivo, from the excitatory-inhibitory interplay and that, unexpectedly, the light stimuli transiently facilitate excitatory synaptic transmission. Of relevance for in vitro models of (dys)functional cortical microcircuitry and in vivo manipulations of cell assemblies, we give for the first time evidence of network-level consequences of the alteration of synaptic physiology by optogenetics.
[ { "created": "Thu, 7 Apr 2016 10:02:02 GMT", "version": "v1" } ]
2016-04-08
[ [ "Pulizzi", "Rocco", "" ], [ "Musumeci", "Gabriele", "" ], [ "Haute", "Chris Van Den", "" ], [ "Van De Vijver", "Sebastian", "" ], [ "Baekelandt", "Veerle", "" ], [ "Giugliano", "Michele", "" ] ]
The manipulation of cell assemblies by optogenetics is pivotal to advance neuroscience and neuroengineering. In in vivo applications, photostimulation often broadly addresses a population of cells simultaneously, leading to feed-forward and to reverberating responses in recurrent microcircuits. The former arise from direct activation of targets downstream, and are straightforward to interpret. The latter are a consequence of feedback connectivity and may reflect a variety of time-scales and complex dynamical properties. We investigated wide-field photostimulation in cortical networks in vitro, employing substrate-integrated microelectrode arrays and long-term cultured neuronal networks. We characterized the effect of brief light pulses, while restricting the expression of channelrhodopsin to principal neurons. We evoked robust reverberating responses, oscillating in the physiological gamma frequency range, and found that such a frequency could be reliably manipulated by varying the light pulse duration, not its intensity. By pharmacology, mathematical modelling, and intracellular recordings, we conclude that gamma oscillations likely emerge, as in vivo, from the excitatory-inhibitory interplay and that, unexpectedly, the light stimuli transiently facilitate excitatory synaptic transmission. Of relevance for in vitro models of (dys)functional cortical microcircuitry and in vivo manipulations of cell assemblies, we give for the first time evidence of network-level consequences of the alteration of synaptic physiology by optogenetics.
2311.06034
Guido Tiana
A. Zambon, R. Zecchina, G. Tiana
Structure of the space of folding protein sequences defined by large language models
null
null
null
null
q-bio.BM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Proteins populate a manifold in the high-dimensional sequence space whose geometrical structure guides their natural evolution. Leveraging recently-developed structure prediction tools based on transformer models, we first examine the protein sequence landscape as defined by the folding score function. This landscape shares characteristics with optimization challenges encountered in machine learning and constraint satisfaction problems. Our analysis reveals that natural proteins predominantly reside in wide, flat minima within this energy landscape. To investigate further, we employ statistical mechanics algorithms specifically designed to explore regions with high local entropy in relatively flat landscapes. Our findings indicate that these specialized algorithms can identify valleys with higher entropy compared to those found using traditional methods such as Monte Carlo Markov Chains. In a proof-of-concept case, we find that these highly entropic minima exhibit significant similarities to natural sequences, especially in critical key sites and local entropy. Additionally, evaluations through Molecular Dynamics suggest that the stability of these sequences closely resembles that of natural proteins. Our tool combines advancements in machine learning and statistical physics, providing new insights into the exploration of sequence landscapes where wide, flat minima coexist alongside a majority of narrower minima.
[ { "created": "Fri, 10 Nov 2023 12:43:06 GMT", "version": "v1" } ]
2023-11-13
[ [ "Zambon", "A.", "" ], [ "Zecchina", "R.", "" ], [ "Tiana", "G.", "" ] ]
Proteins populate a manifold in the high-dimensional sequence space whose geometrical structure guides their natural evolution. Leveraging recently-developed structure prediction tools based on transformer models, we first examine the protein sequence landscape as defined by the folding score function. This landscape shares characteristics with optimization challenges encountered in machine learning and constraint satisfaction problems. Our analysis reveals that natural proteins predominantly reside in wide, flat minima within this energy landscape. To investigate further, we employ statistical mechanics algorithms specifically designed to explore regions with high local entropy in relatively flat landscapes. Our findings indicate that these specialized algorithms can identify valleys with higher entropy compared to those found using traditional methods such as Monte Carlo Markov Chains. In a proof-of-concept case, we find that these highly entropic minima exhibit significant similarities to natural sequences, especially in critical key sites and local entropy. Additionally, evaluations through Molecular Dynamics suggest that the stability of these sequences closely resembles that of natural proteins. Our tool combines advancements in machine learning and statistical physics, providing new insights into the exploration of sequence landscapes where wide, flat minima coexist alongside a majority of narrower minima.
1001.0113
Nikesh Dattani
Nikesh S. Dattani
A new method for identifying vertebrates using only their mitochondrial DNA
7 Pages, 6 Figures
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new method for determining whether or not a mitochondrial DNA (mtDNA) sequence belongs to a vertebrate is described and tested. This method only needs the mtDNA sequence of the organism in question, and unlike alignment based methods, it does not require it to be compared with anything else. The method is tested on all 1877 mtDNA sequences that were on NCBI's nucleotide database on August 12, 2009, and works in 94.57% of the cases. Furthermore, all organisms on which this method failed are closely related phylogenetically in comparison to all other organisms included in the study. A list of potential extensions to this method and open problems that emerge out of this study is presented at the end.
[ { "created": "Thu, 31 Dec 2009 10:41:18 GMT", "version": "v1" } ]
2010-01-05
[ [ "Dattani", "Nikesh S.", "" ] ]
A new method for determining whether or not a mitochondrial DNA (mtDNA) sequence belongs to a vertebrate is described and tested. This method only needs the mtDNA sequence of the organism in question, and unlike alignment based methods, it does not require it to be compared with anything else. The method is tested on all 1877 mtDNA sequences that were on NCBI's nucleotide database on August 12, 2009, and works in 94.57% of the cases. Furthermore, all organisms on which this method failed are closely related phylogenetically in comparison to all other organisms included in the study. A list of potential extensions to this method and open problems that emerge out of this study is presented at the end.
1001.0170
Michael Khasin
M. Khasin, M.I. Dykman and B. Meerson
Speeding up disease extinction with a limited amount of vaccine
null
null
10.1103/PhysRevE.81.051925
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an optimal vaccination protocol where the vaccine is in short supply. In this case, disease extinction results from a large and rare fluctuation. We show that the probability of such a fluctuation can be exponentially increased by vaccination. For periodic vaccination with a fixed average rate, the optimal vaccination protocol is model independent and presents a sequence of short pulses. The effect of vaccination can be resonantly enhanced if the pulse period coincides with the characteristic period of the disease dynamics or its multiples. This resonant effect is illustrated using a simple epidemic model. If the system is periodically modulated, the pulses must be synchronized with the modulation, whereas in the case of a wrong phase the vaccination can lead to a negative result. The analysis is based on the theory of fluctuation-induced population extinction in periodically modulated systems that we develop.
[ { "created": "Thu, 31 Dec 2009 19:01:58 GMT", "version": "v1" }, { "created": "Sun, 31 Jan 2010 23:29:07 GMT", "version": "v2" } ]
2013-05-29
[ [ "Khasin", "M.", "" ], [ "Dykman", "M. I.", "" ], [ "Meerson", "B.", "" ] ]
We consider an optimal vaccination protocol where the vaccine is in short supply. In this case, disease extinction results from a large and rare fluctuation. We show that the probability of such a fluctuation can be exponentially increased by vaccination. For periodic vaccination with a fixed average rate, the optimal vaccination protocol is model independent and presents a sequence of short pulses. The effect of vaccination can be resonantly enhanced if the pulse period coincides with the characteristic period of the disease dynamics or its multiples. This resonant effect is illustrated using a simple epidemic model. If the system is periodically modulated, the pulses must be synchronized with the modulation, whereas in the case of a wrong phase the vaccination can lead to a negative result. The analysis is based on the theory of fluctuation-induced population extinction in periodically modulated systems that we develop.
1801.04824
Radha Srinivasan
Kiran Vishwasrao, Arjumanara Surti, S. Radha
Susceptibility of Methicillin Resistant Staphylococcus aureus to Vancomycin using Liposomal Drug Delivery System
null
null
null
null
q-bio.TO cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Staphylococcus aureus, responsible for nosocomial infections, is a significant threat to public health. The increasing resistance of S. aureus to various antibiotics has drawn it into prime focus for research on designing an appropriate drug delivery system. The emergence of Methicillin Resistant Staphylococcus aureus (MRSA) in 1961 necessitated the use of vancomycin, "the drug of last resort", to treat these infections. Unfortunately, S. aureus has already started gaining resistance to vancomycin. Liposome encapsulation of drugs has earlier been shown to provide an efficient method of microbial inhibition in many cases. We have studied the effect of liposome-encapsulated vancomycin on MRSA and evaluated the antibacterial activity of the liposome-entrapped drug in comparison to that of the free drug based on the minimum inhibitory concentration (MIC) of the drug. The MIC for liposomal vancomycin was found to be about half of that of free vancomycin. The growth response of MRSA showed that liposomal vancomycin induced the culture to go into a bacteriostatic state and phagocytic killing was enhanced. Administration of the antibiotic encapsulated in liposomes was thus shown to greatly improve drug delivery and to help overcome the drug resistance of MRSA.
[ { "created": "Mon, 15 Jan 2018 14:39:10 GMT", "version": "v1" } ]
2018-01-16
[ [ "Vishwasrao", "Kiran", "" ], [ "Surti", "Arjumanara", "" ], [ "Radha", "S.", "" ] ]
Staphylococcus aureus, responsible for nosocomial infections, is a significant threat to public health. The increasing resistance of S. aureus to various antibiotics has drawn it into prime focus for research on designing an appropriate drug delivery system. The emergence of Methicillin Resistant Staphylococcus aureus (MRSA) in 1961 necessitated the use of vancomycin, "the drug of last resort", to treat these infections. Unfortunately, S. aureus has already started gaining resistance to vancomycin. Liposome encapsulation of drugs has earlier been shown to provide an efficient method of microbial inhibition in many cases. We have studied the effect of liposome-encapsulated vancomycin on MRSA and evaluated the antibacterial activity of the liposome-entrapped drug in comparison to that of the free drug based on the minimum inhibitory concentration (MIC) of the drug. The MIC for liposomal vancomycin was found to be about half of that of free vancomycin. The growth response of MRSA showed that liposomal vancomycin induced the culture to go into a bacteriostatic state and phagocytic killing was enhanced. Administration of the antibiotic encapsulated in liposomes was thus shown to greatly improve drug delivery and to help overcome the drug resistance of MRSA.
q-bio/0403006
Matthew Berryman
Sabrina L. Spencer, Matthew J. Berryman, Jose A. Garcia and Derek Abbott
An ordinary differential equation model for the multistep transformation to cancer
12 pages, submitted to Journal of Theoretical Biology
null
null
null
q-bio.CB q-bio.TO
null
Cancer is viewed as a multistep process whereby a normal cell is transformed into a cancer cell through the acquisition of mutations. We reduce the complexities of cancer progression to a simple set of underlying rules that govern the transformation of normal cells to malignant cells. In doing so, we derive an ordinary differential equation model that explores how the balance of angiogenesis, cell death rates, genetic instability, and replication rates give rise to different kinetics in the development of cancer. The key predictions of the model are that cancer develops fastest through a particular ordering of mutations and that mutations in genes that maintain genomic integrity would be the most deleterious type of mutations to inherit. In addition, we perform a sensitivity analysis on the parameters included in the model to determine the probable contribution of each. This paper presents a novel approach to viewing the genetic basis of cancer from a systems biology perspective and provides the groundwork for other models that can be directly tied to clinical and molecular data.
[ { "created": "Thu, 4 Mar 2004 03:37:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Spencer", "Sabrina L.", "" ], [ "Berryman", "Matthew J.", "" ], [ "Garcia", "Jose A.", "" ], [ "Abbott", "Derek", "" ] ]
Cancer is viewed as a multistep process whereby a normal cell is transformed into a cancer cell through the acquisition of mutations. We reduce the complexities of cancer progression to a simple set of underlying rules that govern the transformation of normal cells to malignant cells. In doing so, we derive an ordinary differential equation model that explores how the balance of angiogenesis, cell death rates, genetic instability, and replication rates give rise to different kinetics in the development of cancer. The key predictions of the model are that cancer develops fastest through a particular ordering of mutations and that mutations in genes that maintain genomic integrity would be the most deleterious type of mutations to inherit. In addition, we perform a sensitivity analysis on the parameters included in the model to determine the probable contribution of each. This paper presents a novel approach to viewing the genetic basis of cancer from a systems biology perspective and provides the groundwork for other models that can be directly tied to clinical and molecular data.
2009.04518
Kristine Heiney
Kristine Heiney, Gunnar Tufte, Stefano Nichele
On Artificial Life and Emergent Computation in Physical Substrates
Accepted conference paper. HPCS2020, Session 3: BICAS
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
In living systems, we often see the emergence of the ingredients necessary for computation -- the capacity for information transmission, storage, and modification -- begging the question of how we may exploit or imitate such biological systems in unconventional computing applications. What can we gain from artificial life in the advancement of computing technology? Artificial life provides us with powerful tools for understanding the dynamic behavior of biological systems and capturing this behavior in manmade substrates. With this approach, we can move towards a new computing paradigm concerned with harnessing emergent computation in physical substrates not governed by the constraints of Moore's law and ultimately realize massively parallel and distributed computing technology. In this paper, we argue that the lens of artificial life offers valuable perspectives for the advancement of high-performance computing technology. We first present a brief foundational background on artificial life and some relevant tools that may be applicable to unconventional computing. Two specific substrates are then discussed in detail: biological neurons and ensembles of nanomagnets. These substrates are the focus of the authors' ongoing work, and they are illustrative of the two sides of the approach outlined here -- the close study of living systems and the construction of artificial systems to produce life-like behaviors. We conclude with a philosophical discussion on what we can learn from approaching computation with the curiosity inherent to the study of artificial life. The main contribution of this paper is to present the great potential of using artificial life methodologies to uncover and harness the inherent computational power of physical substrates toward applications in unconventional high-performance computing.
[ { "created": "Wed, 9 Sep 2020 18:59:53 GMT", "version": "v1" } ]
2020-09-11
[ [ "Heiney", "Kristine", "" ], [ "Tufte", "Gunnar", "" ], [ "Nichele", "Stefano", "" ] ]
In living systems, we often see the emergence of the ingredients necessary for computation -- the capacity for information transmission, storage, and modification -- begging the question of how we may exploit or imitate such biological systems in unconventional computing applications. What can we gain from artificial life in the advancement of computing technology? Artificial life provides us with powerful tools for understanding the dynamic behavior of biological systems and capturing this behavior in manmade substrates. With this approach, we can move towards a new computing paradigm concerned with harnessing emergent computation in physical substrates not governed by the constraints of Moore's law and ultimately realize massively parallel and distributed computing technology. In this paper, we argue that the lens of artificial life offers valuable perspectives for the advancement of high-performance computing technology. We first present a brief foundational background on artificial life and some relevant tools that may be applicable to unconventional computing. Two specific substrates are then discussed in detail: biological neurons and ensembles of nanomagnets. These substrates are the focus of the authors' ongoing work, and they are illustrative of the two sides of the approach outlined here -- the close study of living systems and the construction of artificial systems to produce life-like behaviors. We conclude with a philosophical discussion on what we can learn from approaching computation with the curiosity inherent to the study of artificial life. The main contribution of this paper is to present the great potential of using artificial life methodologies to uncover and harness the inherent computational power of physical substrates toward applications in unconventional high-performance computing.
1407.3812
Brenno Cabella
Brenno Caetano Troca Cabella, Fernando Meloni and Alexandre Souto Martinez
Could sampling make hares eat lynxes?
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cycles in population dynamics are widely found in nature. These cycles are understood as emerging from the interaction between two or more coupled species. Here, we argue that data regarding population dynamics are prone to misinterpretation when sampling is conducted at a slow rate compared to the population cycle period. This effect, known as aliasing, is well described in other areas, such as signal processing and computer graphics. However, to the best of our knowledge, aliasing has never been addressed in the population dynamics context or in coupled oscillatory systems. To illustrate aliasing, the Lotka-Volterra model oscillatory regime is numerically sampled, creating prey-predator cycles. Inadequate sampling periods produce inversions in the cause-effect relationship and an increase in cycle period, as reported in the well-known hare-lynx paradox. More generally, slow acquisition rates may distort data, producing deceptive patterns and eventually leading to data misinterpretation.
[ { "created": "Mon, 14 Jul 2014 20:46:17 GMT", "version": "v1" } ]
2014-07-16
[ [ "Cabella", "Brenno Caetano Troca", "" ], [ "Meloni", "Fernando", "" ], [ "Martinez", "Alexandre Souto", "" ] ]
Cycles in population dynamics are widely found in nature. These cycles are understood as emerging from the interaction between two or more coupled species. Here, we argue that data regarding population dynamics are prone to misinterpretation when sampling is conducted at a slow rate compared to the population cycle period. This effect, known as aliasing, is well described in other areas, such as signal processing and computer graphics. However, to the best of our knowledge, aliasing has never been addressed in the population dynamics context or in coupled oscillatory systems. To illustrate aliasing, the Lotka-Volterra model oscillatory regime is numerically sampled, creating prey-predator cycles. Inadequate sampling periods produce inversions in the cause-effect relationship and an increase in cycle period, as reported in the well-known hare-lynx paradox. More generally, slow acquisition rates may distort data, producing deceptive patterns and eventually leading to data misinterpretation.
2311.09260
Anh Bui Huynh Quoc
Huynh Quoc Anh Bui, Trong Hop Do, Thanh Binh Nguyen
A Proposed Artificial Neural Network based Approach for Molecules Bitter Prediction
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent years, the development of Artificial Intelligence (AI) has offered the possibility to tackle many interdisciplinary problems, and the field of chemistry is not an exception. Drug analysis is crucial in drug discovery, playing an important role in human life. However, this task encounters many difficulties due to the wide range of computational chemistry methods. Drug analysis also involves a massive amount of work, including determining taste. Thus, applying deep learning to predict a molecule's bitterness is inevitable to accelerate innovation in drug analysis by reducing the time spent. This paper proposes an artificial neural network (ANN) based approach (EC-ANN) for the molecule's bitter prediction. Our approach took the SMILE (Simplified molecular-input line-entry system) string of a molecule as the input data for the prediction, and the 256-bit ECFP descriptor is the input vector for our network. It showed impressive results compared to state-of-the-art, with a higher performance on two out of three test sets according to the experiences on three popular test sets: Phyto-Dictionary, Unimi, and Bitter-new set [1]. For the Phyto-Dictionary test set, our model recorded 0.95 and 0.983 in F1-score and AUPR, respectively, depicted as the highest score in F1-score. For the Unimi test set, our model achieved 0.88 in F1-score and 0.88 in AUPR, which is roughly 12.3% higher than the peak of previous models [1, 2, 3, 4, 5].
[ { "created": "Wed, 15 Nov 2023 04:22:29 GMT", "version": "v1" } ]
2023-11-17
[ [ "Bui", "Huynh Quoc Anh", "" ], [ "Do", "Trong Hop", "" ], [ "Nguyen", "Thanh Binh", "" ] ]
In recent years, the development of Artificial Intelligence (AI) has offered the possibility to tackle many interdisciplinary problems, and the field of chemistry is not an exception. Drug analysis is crucial in drug discovery, playing an important role in human life. However, this task encounters many difficulties due to the wide range of computational chemistry methods. Drug analysis also involves a massive amount of work, including determining taste. Thus, applying deep learning to predict a molecule's bitterness is inevitable to accelerate innovation in drug analysis by reducing the time spent. This paper proposes an artificial neural network (ANN) based approach (EC-ANN) for the molecule's bitter prediction. Our approach took the SMILE (Simplified molecular-input line-entry system) string of a molecule as the input data for the prediction, and the 256-bit ECFP descriptor is the input vector for our network. It showed impressive results compared to state-of-the-art, with a higher performance on two out of three test sets according to the experiences on three popular test sets: Phyto-Dictionary, Unimi, and Bitter-new set [1]. For the Phyto-Dictionary test set, our model recorded 0.95 and 0.983 in F1-score and AUPR, respectively, depicted as the highest score in F1-score. For the Unimi test set, our model achieved 0.88 in F1-score and 0.88 in AUPR, which is roughly 12.3% higher than the peak of previous models [1, 2, 3, 4, 5].
1309.0248
Hayriye Gulbudak
Hayriye Gulbudak and Maia Martcheva
Forward Hysteresis and Backward Bifurcation Caused by Culling in an Avian Influenza Model
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emerging threat of a human pandemic caused by the H5N1 avian influenza virus strain magnifies the need for controlling the incidence of H5N1 infection in domestic bird populations. Culling is one of the most widely used control measures and has proved effective for isolated outbreaks. However, the socio-economic impacts of mass culling, in the face of a disease which has become endemic in many regions of the world, can affect the implementation and success of culling as a control measure. We use mathematical modeling to understand the dynamics of avian influenza under different culling approaches. We incorporate culling into an SI model by considering the per capita culling rates to be general functions of the number of infected birds. Complex dynamics of the system, such as backward bifurcation and forward hysteresis, along with bi-stability, are detected and analyzed for two distinct culling scenarios. In these cases, employing other control measures temporarily can drastically change the dynamics of the solutions to a more favorable outcome for disease control.
[ { "created": "Sun, 1 Sep 2013 17:51:16 GMT", "version": "v1" } ]
2013-09-03
[ [ "Gulbudak", "Hayriye", "" ], [ "Martcheva", "Maia", "" ] ]
The emerging threat of a human pandemic caused by the H5N1 avian influenza virus strain magnifies the need for controlling the incidence of H5N1 infection in domestic bird populations. Culling is one of the most widely used control measures and has proved effective for isolated outbreaks. However, the socio-economic impacts of mass culling, in the face of a disease which has become endemic in many regions of the world, can affect the implementation and success of culling as a control measure. We use mathematical modeling to understand the dynamics of avian influenza under different culling approaches. We incorporate culling into an SI model by considering the per capita culling rates to be general functions of the number of infected birds. Complex dynamics of the system, such as backward bifurcation and forward hysteresis, along with bi-stability, are detected and analyzed for two distinct culling scenarios. In these cases, employing other control measures temporarily can drastically change the dynamics of the solutions to a more favorable outcome for disease control.
2103.13844
Ali Kishk
Ali Kishk, Maria Pires Pacheco, Thomas Sauter
DCcov: Repositioning of Drugs and Drug Combinations for SARS-CoV-2 Infected Lung through Constraint-Based Modelling
null
null
10.1016/j.isci.2021.103331
null
q-bio.MN q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The 2019 coronavirus disease (COVID-19) became a worldwide pandemic with currently no effective antiviral drug except treatments for symptomatic therapy. Flux balance analysis is an efficient method to analyze metabolic networks. It allows optimizing for a metabolic function and thus e.g., predicting the growth rate of a specific cell or the production rate of a metabolite of interest. Here flux balance analysis was applied on human lung cells infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) to reposition metabolic drugs and drug combinations against the replication of the SARS-CoV-2 virus within the host tissue. Making use of expression data sets of infected lung tissue, genome-scale COVID-19-specific metabolic models were reconstructed. Then host-specific essential genes and gene-pairs were determined through in-silico knockouts that permit reducing the viral biomass production without affecting the host biomass. Key pathways that are associated with COVID-19 severity in lung tissue are related to oxidative stress, as well as ferroptosis, sphingolipid metabolism, cysteine metabolism, and fat digestion. By in-silico screening of FDA approved drugs on the putative disease-specific essential genes and gene-pairs, 45 drugs and 99 drug combinations were predicted as promising candidates for COVID-19 focused drug repositioning (https://github.com/sysbiolux/DCcov). Among the 45 drug candidates, six antiviral drugs were found and seven drugs that are being tested in clinical trials against COVID-19. Other drugs like gemcitabine, rosuvastatin and acetylcysteine, and drug combinations like azathioprine-pemetrexed might offer new chances for treating COVID-19.
[ { "created": "Thu, 25 Mar 2021 13:51:37 GMT", "version": "v1" } ]
2021-11-12
[ [ "Kishk", "Ali", "" ], [ "Pacheco", "Maria Pires", "" ], [ "Sauter", "Thomas", "" ] ]
The 2019 coronavirus disease (COVID-19) became a worldwide pandemic with currently no effective antiviral drug except treatments for symptomatic therapy. Flux balance analysis is an efficient method to analyze metabolic networks. It allows optimizing for a metabolic function and thus e.g., predicting the growth rate of a specific cell or the production rate of a metabolite of interest. Here flux balance analysis was applied on human lung cells infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) to reposition metabolic drugs and drug combinations against the replication of the SARS-CoV-2 virus within the host tissue. Making use of expression data sets of infected lung tissue, genome-scale COVID-19-specific metabolic models were reconstructed. Then host-specific essential genes and gene-pairs were determined through in-silico knockouts that permit reducing the viral biomass production without affecting the host biomass. Key pathways that are associated with COVID-19 severity in lung tissue are related to oxidative stress, as well as ferroptosis, sphingolipid metabolism, cysteine metabolism, and fat digestion. By in-silico screening of FDA approved drugs on the putative disease-specific essential genes and gene-pairs, 45 drugs and 99 drug combinations were predicted as promising candidates for COVID-19 focused drug repositioning (https://github.com/sysbiolux/DCcov). Among the 45 drug candidates, six antiviral drugs were found and seven drugs that are being tested in clinical trials against COVID-19. Other drugs like gemcitabine, rosuvastatin and acetylcysteine, and drug combinations like azathioprine-pemetrexed might offer new chances for treating COVID-19.
q-bio/0510020
Miodrag Krmar
Vladan Pankovic, Rade Glavatovic, Milan Predojevic
$C_{E}PT$ Symmetry of the Simple Ecological Dynamical Equations
11 pages, no figures
null
null
NS-05/B-12
q-bio.PE
null
It is shown that all simple ecological, i.e. population dynamical equations (unlimited exponential population growth (or decrease) dynamics, logistic or Verhulst equation, usual and generalized Lotka-Volterra equations) hold a symmetry, called $C_{E}PT$ symmetry. Namely, all simple ecological dynamical equations are invariant (symmetric) in respect to successive application of the time reversal transformation - $T$, space coordinates reversal or parity transformation - $P$, and predator-prey reversal transformation - $C_{E}$ that changes preys in the predators or pure (healthy) in the impure (fatal) environment, and vice versa. It is deeply conceptually analogous to remarkable $CPT$ symmetry of the fundamental physical dynamical equations. Further, it is shown that by more accurate, "microscopic" analysis, given $C_{E}PT$ symmetry becomes explicitly broken.
[ { "created": "Mon, 10 Oct 2005 09:26:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pankovic", "Vladan", "" ], [ "Glavatovic", "Rade", "" ], [ "Predojevic", "Milan", "" ] ]
It is shown that all simple ecological, i.e. population dynamical equations (unlimited exponential population growth (or decrease) dynamics, logistic or Verhulst equation, usual and generalized Lotka-Volterra equations) hold a symmetry, called $C_{E}PT$ symmetry. Namely, all simple ecological dynamical equations are invariant (symmetric) in respect to successive application of the time reversal transformation - $T$, space coordinates reversal or parity transformation - $P$, and predator-prey reversal transformation - $C_{E}$ that changes preys in the predators or pure (healthy) in the impure (fatal) environment, and vice versa. It is deeply conceptually analogous to remarkable $CPT$ symmetry of the fundamental physical dynamical equations. Further, it is shown that by more accurate, "microscopic" analysis, given $C_{E}PT$ symmetry becomes explicitly broken.
2210.17270
Egor I. Kiselev
Egor I. Kiselev and Florian G. Pflug and Arndt von Haeseler
Critical growth of cerebral tissue in organoids: theory and experiments
null
null
10.1103/PhysRevLett.131.178402
null
q-bio.NC cond-mat.soft q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a Fokker-Planck theory of tissue growth with three types of cells (symmetrically dividing, asymmetrically dividing and non-dividing) as main agents to study the growth dynamics of human cerebral organoids. Fitting the theory to lineage tracing data obtained in next generation sequencing experiments, we show that the growth of cerebral organoids is a critical process. We derive analytical expressions describing the time evolution of clonal lineage sizes and show how power-law distributions arise in the limit of long times due to the vanishing of a characteristic growth scale. We discuss that the independence of critical growth on initial conditions could be biologically advantageous.
[ { "created": "Mon, 31 Oct 2022 12:58:27 GMT", "version": "v1" }, { "created": "Sun, 13 Aug 2023 15:45:55 GMT", "version": "v2" }, { "created": "Fri, 25 Aug 2023 15:51:29 GMT", "version": "v3" } ]
2024-06-28
[ [ "Kiselev", "Egor I.", "" ], [ "Pflug", "Florian G.", "" ], [ "von Haeseler", "Arndt", "" ] ]
We develop a Fokker-Planck theory of tissue growth with three types of cells (symmetrically dividing, asymmetrically dividing and non-dividing) as main agents to study the growth dynamics of human cerebral organoids. Fitting the theory to lineage tracing data obtained in next generation sequencing experiments, we show that the growth of cerebral organoids is a critical process. We derive analytical expressions describing the time evolution of clonal lineage sizes and show how power-law distributions arise in the limit of long times due to the vanishing of a characteristic growth scale. We discuss that the independence of critical growth on initial conditions could be biologically advantageous.
2308.15225
Mrugsen Gopnarayan
Mrugsen Nagsen Gopnarayan, Jaan Aru, Sebastian Gluth
From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions
Review paper, 13 pages, 1 figure
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than only the decision itself. Similarly, data that track the ongoing decision process such as eye movements or neural recordings contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and incorporate related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
[ { "created": "Tue, 29 Aug 2023 11:27:22 GMT", "version": "v1" }, { "created": "Thu, 7 Sep 2023 15:54:26 GMT", "version": "v2" } ]
2023-09-08
[ [ "Gopnarayan", "Mrugsen Nagsen", "" ], [ "Aru", "Jaan", "" ], [ "Gluth", "Sebastian", "" ] ]
Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than only the decision itself. Similarly, data that track the ongoing decision process such as eye movements or neural recordings contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and incorporate related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
2111.02961
Essam Rashed
Sachiko Kodera, Essam A. Rashed, Akimasa Hirata
Mobility-Dependent and Mobility-Compensated Effective Reproduction Number of COVID-19 Viral Variants: New Metric for Infectivity Evaluation
14 pages, 4 figures, 2 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During epidemics, estimation of the effective reproduction number (ERN) associated with infectious disease is a challenging topic for policy development and medical resource management. There is still an open question about the dominant factors to characterize in corona virus disease 2019 (COVID-19), although recent studies based on nonlinear regression with machine learning suggested mobility. The emergence of new viral variants is common in widespread pandemics. However, understanding the potential ERN of new variants is required for policy revision, including lockdown constraints. In this study, we proposed time-averaged mobility at transit stations as a surrogate to correlate with ERN using data from three prefectures in Japan. The latency and duration to average over the mobility were 6-8 days and 6-7, respectively (R2 was 0.109-0.512 in Tokyo, 0.365-0.607 in Osaka, and 0.317-0.631 in Aichi). The same linear correlation was confirmed in Singapore and London. The mobility-adjusted ERN of the alpha variant was 15%-30%, and was 20%-40% higher than the standard type in Osaka, Aichi, and London. Similarly, the ERN of the delta variant was 20%-40% higher than that of the standard type in Osaka and Aichi. The proposed metric would be useful for proper evaluation of the infectivity of different variants in terms of ERN.
[ { "created": "Mon, 1 Nov 2021 07:53:51 GMT", "version": "v1" } ]
2021-11-05
[ [ "Kodera", "Sachiko", "" ], [ "Rashed", "Essam A.", "" ], [ "Hirata", "Akimasa", "" ] ]
During epidemics, estimation of the effective reproduction number (ERN) associated with infectious disease is a challenging topic for policy development and medical resource management. There is still an open question about the dominant factors to characterize in corona virus disease 2019 (COVID-19), although recent studies based on nonlinear regression with machine learning suggested mobility. The emergence of new viral variants is common in widespread pandemics. However, understanding the potential ERN of new variants is required for policy revision, including lockdown constraints. In this study, we proposed time-averaged mobility at transit stations as a surrogate to correlate with ERN using data from three prefectures in Japan. The latency and duration to average over the mobility were 6-8 days and 6-7, respectively (R2 was 0.109-0.512 in Tokyo, 0.365-0.607 in Osaka, and 0.317-0.631 in Aichi). The same linear correlation was confirmed in Singapore and London. The mobility-adjusted ERN of the alpha variant was 15%-30%, and was 20%-40% higher than the standard type in Osaka, Aichi, and London. Similarly, the ERN of the delta variant was 20%-40% higher than that of the standard type in Osaka and Aichi. The proposed metric would be useful for proper evaluation of the infectivity of different variants in terms of ERN.
1701.05619
Yani Zhao
Yani Zhao, Mateusz Chwastyk and Marek Cieplak
Topological transformations in proteins: effects of heating and proximity of an interface
7 figures
Scientific Reports 7, 39851 (2017)
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a structure-based coarse-grained model of proteins, we study the mechanism of unfolding of knotted proteins through heating. We find that the dominant mechanisms of unfolding depend on the temperature applied and are generally distinct from those identified for folding at its optimal temperature. In particular, for shallowly knotted proteins, folding usually involves formation of two loops whereas unfolding through high-temperature heating is dominated by untying of single loops. Untying the knots is found to generally precede unfolding unless the protein is deeply knotted and the heating temperature exceeds a threshold value. We then use a phenomenological model of the air-water interface to show that such an interface can untie shallow knots, but it can also make knots in proteins that are natively unknotted.
[ { "created": "Thu, 19 Jan 2017 22:05:03 GMT", "version": "v1" } ]
2017-01-23
[ [ "Zhao", "Yani", "" ], [ "Chwastyk", "Mateusz", "" ], [ "Cieplak", "Marek", "" ] ]
Using a structure-based coarse-grained model of proteins, we study the mechanism of unfolding of knotted proteins through heating. We find that the dominant mechanisms of unfolding depend on the temperature applied and are generally distinct from those identified for folding at its optimal temperature. In particular, for shallowly knotted proteins, folding usually involves formation of two loops whereas unfolding through high-temperature heating is dominated by untying of single loops. Untying the knots is found to generally precede unfolding unless the protein is deeply knotted and the heating temperature exceeds a threshold value. We then use a phenomenological model of the air-water interface to show that such an interface can untie shallow knots, but it can also make knots in proteins that are natively unknotted.
q-bio/0604025
Alexei Koulakov
A. Koulakov, A. Gelperin, and D. Rinberg
Combinatorial on/off Model for Olfactory Coding
null
null
null
null
q-bio.NC
null
We present a model for olfactory coding based on spatial representation of glomerular responses. In this model distinct odorants activate specific subsets of glomeruli, dependent upon the odorant's chemical identity and concentration. The glomerular response specificities are understood statistically, based on experimentally measured distributions of detection thresholds. A simple version of the model, in which glomerular responses are binary (the on/off model), allows us to account quantitatively for the following results of human/rodent olfactory psychophysics: 1) just noticeable differences in the perceived concentration of a single odor (Weber ratios) are dC/C ~ 0.04; 2) the number of simultaneously perceived odors can be as high as 12; 3) extensive lesions of the olfactory bulb do not lead to significant changes in detection or discrimination thresholds. We conclude that a combinatorial code based on a binary glomerular response is sufficient to account for the discrimination capacity of the mammalian olfactory system.
[ { "created": "Wed, 19 Apr 2006 16:22:45 GMT", "version": "v1" } ]
2007-05-23
[ [ "Koulakov", "A.", "" ], [ "Gelperin", "A.", "" ], [ "Rinberg", "D.", "" ] ]
We present a model for olfactory coding based on spatial representation of glomerular responses. In this model distinct odorants activate specific subsets of glomeruli, dependent upon the odorant's chemical identity and concentration. The glomerular response specificities are understood statistically, based on experimentally measured distributions of detection thresholds. A simple version of the model, in which glomerular responses are binary (the on/off model), allows us to account quantitatively for the following results of human/rodent olfactory psychophysics: 1) just noticeable differences in the perceived concentration of a single odor (Weber ratios) are dC/C ~ 0.04; 2) the number of simultaneously perceived odors can be as high as 12; 3) extensive lesions of the olfactory bulb do not lead to significant changes in detection or discrimination thresholds. We conclude that a combinatorial code based on a binary glomerular response is sufficient to account for the discrimination capacity of the mammalian olfactory system.
2006.06109
Jona Carmon
Jona Carmon, Moritz Bammel, Peter Brugger, Bigna Lenggenhager
Uncertainty promotes neuroreductionism: A behavioral online study on folk psychological causal inference from neuroimaging data
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Introduction. Increased efforts in neuroscience try to understand mental disorders as brain disorders. In the present study we investigate how common a neuroreductionist inclination is among highly educated people. In particular, we shed light on implicit presuppositions of mental disorders little is known about in the public, exemplified here by the case of Body Integrity Dysphoria (BID) that is considered a mental disorder for the first time in ICD-11. Methods. Identically graphed, simulated data of mind-brain correlations were shown in three contexts with presumably different presumptions about causality. 738 highly-educated laymen rated plausibility of causality attribution from brain to mind and from mind to brain for correlations between brain structural properties and mental phenomena. We contrasted participants' plausibility ratings of causality in the contexts of commonly perceived brain-lesion induced behavior (aphasia), behavior-induced training effects (piano playing), and a newly described mental disorder (BID). Results. The findings reveal the expected context-dependent modulation of causality attributions in the contexts of aphasia and piano playing. Furthermore, we observed a significant tendency to more readily attribute causal inference from brain to mind than vice versa with respect to BID. Conclusion. In some contexts, exemplified here by aphasia and piano playing, unidirectional causality attributions may be justified. However, with respect to BID, we critically discuss presumably unjustified neuroreductionist inclinations under causal uncertainty. Finally, we emphasize the need for a presupposition-free approach in psychiatry.
[ { "created": "Wed, 10 Jun 2020 23:28:00 GMT", "version": "v1" }, { "created": "Wed, 26 May 2021 13:55:57 GMT", "version": "v2" } ]
2021-05-27
[ [ "Carmon", "Jona", "" ], [ "Bammel", "Moritz", "" ], [ "Brugger", "Peter", "" ], [ "Lenggenhager", "Bigna", "" ] ]
Introduction. Increasing efforts in neuroscience aim to understand mental disorders as brain disorders. In the present study we investigate how common a neuroreductionist inclination is among highly educated people. In particular, we shed light on implicit presuppositions about mental disorders of which little is known to the public, exemplified here by the case of Body Integrity Dysphoria (BID), which is classified as a mental disorder for the first time in ICD-11. Methods. Identically graphed, simulated data of mind-brain correlations were shown in three contexts with presumably different presumptions about causality. 738 highly educated laymen rated the plausibility of causality attributions from brain to mind and from mind to brain for correlations between brain structural properties and mental phenomena. We contrasted participants' plausibility ratings of causality in the contexts of commonly perceived brain-lesion-induced behavior (aphasia), behavior-induced training effects (piano playing), and a newly described mental disorder (BID). Results. The findings reveal the expected context-dependent modulation of causality attributions in the contexts of aphasia and piano playing. Furthermore, we observed a significant tendency to attribute causality more readily from brain to mind than vice versa with respect to BID. Conclusion. In some contexts, exemplified here by aphasia and piano playing, unidirectional causality attributions may be justified. However, with respect to BID, we critically discuss presumably unjustified neuroreductionist inclinations under causal uncertainty. Finally, we emphasize the need for a presupposition-free approach in psychiatry.
1805.11081
Emma Towlson
Emma K. Towlson, Petra E. Vertes, Gang Yan, Yee Lian Chew, Denise S. Walker, William R. Schafer, and Albert-Laszlo Barabasi
Caenorhabditis elegans and the network control framework - FAQs
19 pages, 5 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Control is essential to the functioning of any neural system. Indeed, under healthy conditions the brain must be able to continuously maintain a tight functional control between the system's inputs and outputs. One may therefore hypothesise that the brain's wiring is predetermined by the need to maintain control across multiple scales, maintaining the stability of key internal variables, and producing behaviour in response to environmental cues. Recent advances in network control have offered a powerful mathematical framework to explore the structure-function relationship in complex biological, social, and technological networks, and are beginning to yield important and precise insights for neuronal systems. The network control paradigm promises a predictive, quantitative framework to unite the distinct datasets necessary to fully describe a nervous system, and provide mechanistic explanations for the observed structure and function relationships. Here, we provide a thorough review of the network control framework as applied to C. elegans, in the style of a FAQ. We present the theoretical, computational, and experimental aspects of network control, and discuss its current capabilities and limitations, together with the next likely advances and improvements. We further present the Python code to enable exploration of control principles in a manner specific to this prototypical organism.
[ { "created": "Mon, 28 May 2018 17:49:12 GMT", "version": "v1" } ]
2018-05-29
[ [ "Towlson", "Emma K.", "" ], [ "Vertes", "Petra E.", "" ], [ "Yan", "Gang", "" ], [ "Chew", "Yee Lian", "" ], [ "Walker", "Denise S.", "" ], [ "Schafer", "William R.", "" ], [ "Barabasi", "Albert-Laszlo", "" ] ]
Control is essential to the functioning of any neural system. Indeed, under healthy conditions the brain must be able to continuously maintain a tight functional control between the system's inputs and outputs. One may therefore hypothesise that the brain's wiring is predetermined by the need to maintain control across multiple scales, maintaining the stability of key internal variables, and producing behaviour in response to environmental cues. Recent advances in network control have offered a powerful mathematical framework to explore the structure-function relationship in complex biological, social, and technological networks, and are beginning to yield important and precise insights for neuronal systems. The network control paradigm promises a predictive, quantitative framework to unite the distinct datasets necessary to fully describe a nervous system, and provide mechanistic explanations for the observed structure and function relationships. Here, we provide a thorough review of the network control framework as applied to C. elegans, in the style of a FAQ. We present the theoretical, computational, and experimental aspects of network control, and discuss its current capabilities and limitations, together with the next likely advances and improvements. We further present the Python code to enable exploration of control principles in a manner specific to this prototypical organism.
1309.7405
Liane Gabora
Liane Gabora and Patrick Colgan
A Model of the Mechanisms Underlying Exploratory Behaviour
17 pages
In S. Wilson & J. A. Meyer (Eds.), Proceedings of the First International Conference on the Simulation of Adaptive Behavior (pp. 475-484). Cambridge, MA: MIT Press. (1990)
null
null
q-bio.PE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A model of the mechanisms underlying exploratory behaviour, based on empirical research and refined using a computer simulation, is presented. The behaviour of killifish from two lakes, one with killifish predators and one without, was compared in the laboratory. Plotting average activity in a novel environment versus time resulted in an inverted-U-shaped curve for both groups; however, the curve for killifish from the lake without predators was (1) steeper, (2) reached a peak value earlier, (3) reached a higher peak value, and (4) subsumed less area than the curve for killifish from the lake with predators. We hypothesize that the shape of the exploration curve reflects a competition between motivational subsystems that excite and inhibit exploratory behaviour in a way that is tuned to match the affordance probabilities of the animal's environment. A computer implementation of this model produced curves which differed along the same four dimensions that differentiate the two killifish curves. All four differences were reproduced in the model by tuning a single parameter: the time-dependent component of the decay rate of the exploration-inhibiting subsystem.
[ { "created": "Sat, 28 Sep 2013 01:57:54 GMT", "version": "v1" }, { "created": "Fri, 5 Jul 2019 20:09:19 GMT", "version": "v2" }, { "created": "Tue, 9 Jul 2019 20:01:08 GMT", "version": "v3" } ]
2019-07-11
[ [ "Gabora", "Liane", "" ], [ "Colgan", "Patrick", "" ] ]
A model of the mechanisms underlying exploratory behaviour, based on empirical research and refined using a computer simulation, is presented. The behaviour of killifish from two lakes, one with killifish predators and one without, was compared in the laboratory. Plotting average activity in a novel environment versus time resulted in an inverted-U-shaped curve for both groups; however, the curve for killifish from the lake without predators was (1) steeper, (2) reached a peak value earlier, (3) reached a higher peak value, and (4) subsumed less area than the curve for killifish from the lake with predators. We hypothesize that the shape of the exploration curve reflects a competition between motivational subsystems that excite and inhibit exploratory behaviour in a way that is tuned to match the affordance probabilities of the animal's environment. A computer implementation of this model produced curves which differed along the same four dimensions that differentiate the two killifish curves. All four differences were reproduced in the model by tuning a single parameter: the time-dependent component of the decay rate of the exploration-inhibiting subsystem.
2011.03419
Isabelle Landrieu
Francesco Bosica (TU/e), Sebastian Andrei (TU/e), Jo\~ao Filipe Neves (ERL 9002 - BSI, RID-AGE), Peter Brandt, Anders Gunnarsson, Isabelle Landrieu (ERL 9002 - BSI, RID-AGE), Christian Ottmann (TU/e), Gavin O'Mahony
Design Of Drug-Like Protein-Protein Interaction Stabilizers Guided By Chelation-Controlled Bioactive Conformation Stabilization
null
Chemistry - A European Journal, Wiley-VCH Verlag, 2020, 26 (31), pp.7131-7139
10.1002/chem.202001608
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The protein-protein interactions (PPIs) of 14-3-3 proteins are a model system for studying PPI stabilization. The complex natural product Fusicoccin A stabilizes many 14-3-3 PPIs but is not amenable for use in SAR studies, motivating the search for more drug-like chemical matter. However, drug-like 14-3-3 PPI stabilizers enabling such study have remained elusive. An X-ray crystal structure of a PPI in complex with an extremely low-potency stabilizer uncovered an unexpected non-protein-interacting, ligand-chelated Mg2+, leading to the discovery of metal-ion-dependent 14-3-3 PPI stabilization potency. This originates from a novel chelation-controlled bioactive conformation stabilization effect. Metal chelation has been associated with pan-assay interference compounds (PAINS) and frequent-hitter behavior, but chelation can evidently also lead to true potency gains and find use as a medicinal chemistry strategy to guide compound optimization. To demonstrate this, we exploited the effect to design the first potent, selective and drug-like 14-3-3 PPI stabilizers.
[ { "created": "Fri, 6 Nov 2020 15:12:02 GMT", "version": "v1" } ]
2020-11-09
[ [ "Bosica", "Francesco", "", "TU/e" ], [ "Andrei", "Sebastian", "", "TU/e" ], [ "Neves", "João Filipe", "", "ERL 9002 - BSI, RID-AGE" ], [ "Brandt", "Peter", "" ], [ "Gunnarsson", "Anders", "" ], [ "Landrieu", "Isabelle", "", "ERL 9002 - BSI, RID-AGE" ], [ "Ottmann", "Christian", "", "TU/e" ], [ "O'Mahony", "Gavin", "" ] ]
The protein-protein interactions (PPIs) of 14-3-3 proteins are a model system for studying PPI stabilization. The complex natural product Fusicoccin A stabilizes many 14-3-3 PPIs but is not amenable for use in SAR studies, motivating the search for more drug-like chemical matter. However, drug-like 14-3-3 PPI stabilizers enabling such study have remained elusive. An X-ray crystal structure of a PPI in complex with an extremely low-potency stabilizer uncovered an unexpected non-protein-interacting, ligand-chelated Mg2+, leading to the discovery of metal-ion-dependent 14-3-3 PPI stabilization potency. This originates from a novel chelation-controlled bioactive conformation stabilization effect. Metal chelation has been associated with pan-assay interference compounds (PAINS) and frequent-hitter behavior, but chelation can evidently also lead to true potency gains and find use as a medicinal chemistry strategy to guide compound optimization. To demonstrate this, we exploited the effect to design the first potent, selective and drug-like 14-3-3 PPI stabilizers.
1802.10131
Christoph Bauermeister
Christoph Bauermeister, Hanna Keren, Jochen Braun
Broadly heterogeneous network topology begets order-based representation by privileged neurons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How spiking activity reverberates through neuronal networks, how evoked and spontaneous activity interact and blend, and how the combined activities represent external stimulation are pivotal questions in neuroscience. We simulated minimal models of unstructured spiking networks in silico, asking whether and how gentle external stimulation might be subsequently reflected in spontaneous activity fluctuations. Consistent with earlier findings in silico and in vitro, we observe a privileged sub-population of 'pioneer neurons' that, by their firing order, reliably encode previous external stimulation. We show that the distinctive role of pioneer neurons is owed to a combination of exceptional sensitivity to, and pronounced influence on, network activity. We further show that broadly heterogeneous connection topology - a broad "middle class" in degree of connectedness - not only increases the number of 'pioneer neurons' in unstructured networks, but also renders the emergence of 'pioneer neurons' more robust to changes in the excitatory-inhibitory balance. In conclusion, we offer a minimal model for the emergence and representational role of 'pioneer neurons', as observed experimentally in vitro. In addition, we show how broadly heterogeneous connectivity can enhance the representational capacity of unstructured networks.
[ { "created": "Tue, 27 Feb 2018 19:36:05 GMT", "version": "v1" } ]
2018-03-01
[ [ "Bauermeister", "Christoph", "" ], [ "Keren", "Hanna", "" ], [ "Braun", "Jochen", "" ] ]
How spiking activity reverberates through neuronal networks, how evoked and spontaneous activity interact and blend, and how the combined activities represent external stimulation are pivotal questions in neuroscience. We simulated minimal models of unstructured spiking networks in silico, asking whether and how gentle external stimulation might be subsequently reflected in spontaneous activity fluctuations. Consistent with earlier findings in silico and in vitro, we observe a privileged sub-population of 'pioneer neurons' that, by their firing order, reliably encode previous external stimulation. We show that the distinctive role of pioneer neurons is owed to a combination of exceptional sensitivity to, and pronounced influence on, network activity. We further show that broadly heterogeneous connection topology - a broad "middle class" in degree of connectedness - not only increases the number of 'pioneer neurons' in unstructured networks, but also renders the emergence of 'pioneer neurons' more robust to changes in the excitatory-inhibitory balance. In conclusion, we offer a minimal model for the emergence and representational role of 'pioneer neurons', as observed experimentally in vitro. In addition, we show how broadly heterogeneous connectivity can enhance the representational capacity of unstructured networks.
1510.06629
Gard Spreemann
Gard Spreemann, Benjamin Dunn, Magnus Bakke Botnan, Nils A. Baas
Using persistent homology to reveal hidden information in neural data
null
null
null
null
q-bio.NC math.AT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method, based on persistent homology, to uncover topological properties of a priori unknown covariates of neuron activity. Our input data consist of spike train measurements of a set of neurons of interest, a candidate list of the known stimuli that govern neuron activity, and the corresponding state of the animal throughout the experiment performed. Using a generalized linear model for neuron activity and simple assumptions on the effects of the external stimuli, we infer away any contribution to the observed spike trains by the candidate stimuli. Persistent homology then reveals useful information about any further, unknown, covariates.
[ { "created": "Thu, 22 Oct 2015 13:57:03 GMT", "version": "v1" } ]
2015-10-23
[ [ "Spreemann", "Gard", "" ], [ "Dunn", "Benjamin", "" ], [ "Botnan", "Magnus Bakke", "" ], [ "Baas", "Nils A.", "" ] ]
We propose a method, based on persistent homology, to uncover topological properties of a priori unknown covariates of neuron activity. Our input data consist of spike train measurements of a set of neurons of interest, a candidate list of the known stimuli that govern neuron activity, and the corresponding state of the animal throughout the experiment performed. Using a generalized linear model for neuron activity and simple assumptions on the effects of the external stimuli, we infer away any contribution to the observed spike trains by the candidate stimuli. Persistent homology then reveals useful information about any further, unknown, covariates.
2406.02610
Li Wang
Li Wang, Xiangzheng Fu, Jiahao Yang, Xinyi Zhang, Xiucai Ye, Yiping Liu, Tetsuya Sakurai, and Xiangxiang Zeng
MoFormer: Multi-objective Antimicrobial Peptide Generation Based on Conditional Transformer Joint Multi-modal Fusion Descriptor
null
null
null
null
q-bio.QM cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning holds great promise for optimizing existing peptides with more desirable properties, a critical step towards accelerating new drug discovery. Despite the recent emergence of several methods for generating optimized antimicrobial peptides (AMPs), multi-objective optimization remains quite challenging due to the idealism-realism tradeoff. Here, we establish a multi-objective AMP synthesis pipeline (MoFormer) for the simultaneous optimization of multiple attributes of AMPs. MoFormer improves the desired attributes of AMP sequences in a highly structured latent space, guided by conditional constraints and fine-grained multi-modal descriptors. We show that MoFormer outperforms existing methods in generating AMPs with enhanced antimicrobial activity and minimal hemolysis. We also utilize a Pareto-based non-dominated sorting algorithm and proxies based on large-model fine-tuning to hierarchically rank the candidates. We demonstrate substantial property improvements with MoFormer from two perspectives: (1) employing molecular simulations and scoring interactions among amino acids to decipher the structure and functionality of AMPs; and (2) visualizing the latent space to examine the qualities and distribution features of the generated sequences, verifying an effective means of facilitating multi-objective AMP optimization under design constraints.
[ { "created": "Mon, 3 Jun 2024 07:17:18 GMT", "version": "v1" } ]
2024-06-06
[ [ "Wang", "Li", "" ], [ "Fu", "Xiangzheng", "" ], [ "Yang", "Jiahao", "" ], [ "Zhang", "Xinyi", "" ], [ "Ye", "Xiucai", "" ], [ "Liu", "Yiping", "" ], [ "Sakurai", "Tetsuya", "" ], [ "Zeng", "Xiangxiang", "" ] ]
Deep learning holds great promise for optimizing existing peptides with more desirable properties, a critical step towards accelerating new drug discovery. Despite the recent emergence of several methods for generating optimized antimicrobial peptides (AMPs), multi-objective optimization remains quite challenging due to the idealism-realism tradeoff. Here, we establish a multi-objective AMP synthesis pipeline (MoFormer) for the simultaneous optimization of multiple attributes of AMPs. MoFormer improves the desired attributes of AMP sequences in a highly structured latent space, guided by conditional constraints and fine-grained multi-modal descriptors. We show that MoFormer outperforms existing methods in generating AMPs with enhanced antimicrobial activity and minimal hemolysis. We also utilize a Pareto-based non-dominated sorting algorithm and proxies based on large-model fine-tuning to hierarchically rank the candidates. We demonstrate substantial property improvements with MoFormer from two perspectives: (1) employing molecular simulations and scoring interactions among amino acids to decipher the structure and functionality of AMPs; and (2) visualizing the latent space to examine the qualities and distribution features of the generated sequences, verifying an effective means of facilitating multi-objective AMP optimization under design constraints.
2301.09565
Mark van Rossum
Maxime Girard, Jiamu Jiang, Mark CW van Rossum
Estimating the energy requirements for long term memory formation
7 pages, 1 figure
null
null
null
q-bio.NC cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Brains consume metabolic energy to process information, but also to store memories. The energy required for memory formation can be substantial, for instance in fruit flies memory formation leads to a shorter lifespan upon subsequent starvation (Mery and Kawecki, 2005). Here we estimate that the energy required corresponds to about 10mJ/bit and compare this to biophysical estimates as well as energy requirements in computer hardware. We conclude that biological memory storage is expensive, but the reason behind it is not known.
[ { "created": "Mon, 16 Jan 2023 13:02:22 GMT", "version": "v1" }, { "created": "Wed, 8 Feb 2023 11:29:32 GMT", "version": "v2" } ]
2023-02-09
[ [ "Girard", "Maxime", "" ], [ "Jiang", "Jiamu", "" ], [ "van Rossum", "Mark CW", "" ] ]
Brains consume metabolic energy to process information, but also to store memories. The energy required for memory formation can be substantial, for instance in fruit flies memory formation leads to a shorter lifespan upon subsequent starvation (Mery and Kawecki, 2005). Here we estimate that the energy required corresponds to about 10mJ/bit and compare this to biophysical estimates as well as energy requirements in computer hardware. We conclude that biological memory storage is expensive, but the reason behind it is not known.
2005.11279
Eric Ngondiep
Eric Ngondiep
An efficient explicit approach for predicting the Covid-19 spreading with undetected infectious: The case of Cameroon
26 pages, 11 figures, 11 tables
null
null
null
q-bio.PE cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers an explicit numerical scheme for solving a mathematical model of the propagation of the Covid-19 epidemic with undetected infectious cases. We analyze the stability and convergence rate of the new approach in the $L^{\infty}$-norm. The proposed method is less time-consuming. Furthermore, the method is stable, at least second-order convergent, and can serve as a robust tool for the integration of general systems of ordinary differential equations. A wide set of numerical experiments considering the case of Cameroon is presented and discussed.
[ { "created": "Thu, 21 May 2020 05:37:57 GMT", "version": "v1" }, { "created": "Tue, 26 May 2020 05:09:52 GMT", "version": "v2" } ]
2020-05-27
[ [ "Ngondiep", "Eric", "" ] ]
This paper considers an explicit numerical scheme for solving a mathematical model of the propagation of the Covid-19 epidemic with undetected infectious cases. We analyze the stability and convergence rate of the new approach in the $L^{\infty}$-norm. The proposed method is less time-consuming. Furthermore, the method is stable, at least second-order convergent, and can serve as a robust tool for the integration of general systems of ordinary differential equations. A wide set of numerical experiments considering the case of Cameroon is presented and discussed.
2310.00139
Youssef Kora
Youssef Kora and Christoph Simon
Coarse-graining and criticality in the human connectome
11 pages, 11 figures
null
null
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the face of the stupefying complexity of the human brain, network analysis is a most useful tool that allows one to greatly simplify the problem, typically by approximating the billions of neurons comprising the brain by means of a coarse-grained picture with a practicable number of nodes. But even such relatively small and coarse networks, such as the human connectome with its 100-1000 nodes, may present challenges for some computationally demanding analyses that are incapable of handling networks with more than a handful of nodes. With such applications in mind, we set out to further coarse-grain the human connectome by taking a modularity-based approach, the goal being to produce a network of a relatively small number of modules. We applied this approach to study critical phenomena in the brain; we formulated a hypothesis based on the coarse-grained networks in the context of criticality in the Wilson-Cowan and Ising models, and we verified the hypothesis, which connected a transition value of the former with the critical temperature of the latter, using the original networks. We found that the qualitative behavior of the coarse-grained networks reflected that of the original networks, albeit to a less pronounced extent. This, in principle, allows for the drawing of similar qualitative conclusions by analysing the smaller networks, which opens the door to studying the human connectome in contexts typically regarded as computationally intractable, such as Integrated Information Theory and quantum models of the human brain.
[ { "created": "Fri, 29 Sep 2023 20:57:14 GMT", "version": "v1" } ]
2023-10-03
[ [ "Kora", "Youssef", "" ], [ "Simon", "Christoph", "" ] ]
In the face of the stupefying complexity of the human brain, network analysis is a most useful tool that allows one to greatly simplify the problem, typically by approximating the billions of neurons comprising the brain by means of a coarse-grained picture with a practicable number of nodes. But even such relatively small and coarse networks, such as the human connectome with its 100-1000 nodes, may present challenges for some computationally demanding analyses that are incapable of handling networks with more than a handful of nodes. With such applications in mind, we set out to further coarse-grain the human connectome by taking a modularity-based approach, the goal being to produce a network of a relatively small number of modules. We applied this approach to study critical phenomena in the brain; we formulated a hypothesis based on the coarse-grained networks in the context of criticality in the Wilson-Cowan and Ising models, and we verified the hypothesis, which connected a transition value of the former with the critical temperature of the latter, using the original networks. We found that the qualitative behavior of the coarse-grained networks reflected that of the original networks, albeit to a less pronounced extent. This, in principle, allows for the drawing of similar qualitative conclusions by analysing the smaller networks, which opens the door to studying the human connectome in contexts typically regarded as computationally intractable, such as Integrated Information Theory and quantum models of the human brain.
2110.05574
Andreas K\"uhnapfel
Andreas K\"uhnapfel, Katrin Horn, Ulrike Klotz, Michael Kiehntopf, Maciej Rosolowski, Markus Loeffler, Peter Ahnert, Norbert Suttorp, Martin Witzenrath, Markus Scholz
Genetic Regulation of Cytokine Response in Patients with Acute Community-acquired Pneumonia
null
null
null
null
q-bio.QM stat.AP
http://creativecommons.org/licenses/by/4.0/
Background: Community-acquired pneumonia (CAP) is an acute disease condition with a high risk of rapid deterioration. We analysed the influence of genetics on cytokine regulation to obtain a better understanding of patients' heterogeneity. Methods: For up to N=389 genotyped participants of the PROGRESS study of hospitalised CAP patients, we performed a genome-wide association study of ten cytokines: IL-1b, IL-6, IL-8, IL-10, IL-12, MCP-1 (MCAF), MIP-1a (CCL3), VEGF, VCAM-1, and ICAM-1. Consecutive secondary analyses were performed to identify independent hits and corresponding causal variants. Results: 102 SNPs from 14 loci showed genome-wide significant associations with five of the cytokines. The most interesting associations were found at 6p21.1 for VEGF (p=1.58x10E-20), at 17q21.32 (p=1.51x10E-9) and at 10p12.1 (p=2.76x10E-9) for IL-1b, at 10p13 for MIP-1a (CCL3) (p=2.28x10E-9), and at 9q34.12 for IL-10 (p=4.52x10E-8). Functionally plausible genes could be assigned to the majority of loci, including genes involved in cytokine secretion, granulocyte function, and ciliary kinetics. Conclusions: This is the first context-specific genetic association study of blood cytokine concentrations in CAP patients, revealing numerous biologically plausible candidate genes. Two of the loci were also associated with atherosclerosis with probable common or consecutive pathomechanisms.
[ { "created": "Mon, 11 Oct 2021 19:32:43 GMT", "version": "v1" } ]
2021-10-13
[ [ "Kühnapfel", "Andreas", "" ], [ "Horn", "Katrin", "" ], [ "Klotz", "Ulrike", "" ], [ "Kiehntopf", "Michael", "" ], [ "Rosolowski", "Maciej", "" ], [ "Loeffler", "Markus", "" ], [ "Ahnert", "Peter", "" ], [ "Suttorp", "Norbert", "" ], [ "Witzenrath", "Martin", "" ], [ "Scholz", "Markus", "" ] ]
Background: Community-acquired pneumonia (CAP) is an acute disease condition with a high risk of rapid deterioration. We analysed the influence of genetics on cytokine regulation to obtain a better understanding of patients' heterogeneity. Methods: For up to N=389 genotyped participants of the PROGRESS study of hospitalised CAP patients, we performed a genome-wide association study of ten cytokines: IL-1b, IL-6, IL-8, IL-10, IL-12, MCP-1 (MCAF), MIP-1a (CCL3), VEGF, VCAM-1, and ICAM-1. Consecutive secondary analyses were performed to identify independent hits and corresponding causal variants. Results: 102 SNPs from 14 loci showed genome-wide significant associations with five of the cytokines. The most interesting associations were found at 6p21.1 for VEGF (p=1.58x10E-20), at 17q21.32 (p=1.51x10E-9) and at 10p12.1 (p=2.76x10E-9) for IL-1b, at 10p13 for MIP-1a (CCL3) (p=2.28x10E-9), and at 9q34.12 for IL-10 (p=4.52x10E-8). Functionally plausible genes could be assigned to the majority of loci, including genes involved in cytokine secretion, granulocyte function, and ciliary kinetics. Conclusions: This is the first context-specific genetic association study of blood cytokine concentrations in CAP patients, revealing numerous biologically plausible candidate genes. Two of the loci were also associated with atherosclerosis with probable common or consecutive pathomechanisms.
1909.04557
Iain Moyles
Iain R Moyles, John G. Donohue, Andrew C. Fowler
Quasi-steady uptake and bacterial community assembly in a mathematical model of soil-phosphorus mobility
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We mathematically model the uptake of phosphorus by a soil community consisting of a plant and two bacterial groups: copiotrophs and oligotrophs. Four equilibrium states emerge, one for each of the species monopolising the resource and dominating the community and one with coexistence of all species. We show that the dynamics are controlled by the ratio of chemical adsorption to bacterial death permitting either oscillatory states or quasi-steady uptake. We show how a steady state can emerge which has soil and plant nutrient content unresponsive to increased fertilization. However, the additional fertilization supports the copiotrophs leading to community reassembly. Our results demonstrate the importance of time-series measurements in nutrient uptake experiments.
[ { "created": "Tue, 10 Sep 2019 15:08:06 GMT", "version": "v1" }, { "created": "Mon, 26 Oct 2020 19:18:52 GMT", "version": "v2" } ]
2020-10-28
[ [ "Moyles", "Iain R", "" ], [ "Donohue", "John G.", "" ], [ "Fowler", "Andrew C.", "" ] ]
We mathematically model the uptake of phosphorus by a soil community consisting of a plant and two bacterial groups: copiotrophs and oligotrophs. Four equilibrium states emerge, one for each of the species monopolising the resource and dominating the community and one with coexistence of all species. We show that the dynamics are controlled by the ratio of chemical adsorption to bacterial death permitting either oscillatory states or quasi-steady uptake. We show how a steady state can emerge which has soil and plant nutrient content unresponsive to increased fertilization. However, the additional fertilization supports the copiotrophs leading to community reassembly. Our results demonstrate the importance of time-series measurements in nutrient uptake experiments.
2008.07183
Adam Griffin
Adam Griffin and Simon E.F. Spencer and Gareth O. Roberts
An epidemic model for an evolving pathogen with strain-dependent immunity
34 pages, 7 figures, in review
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Between pandemics, the influenza virus exhibits periods of incremental evolution via a process known as antigenic drift. This process gives rise to a sequence of strains of the pathogen that are continuously replaced by newer strains, preventing a build-up of immunity in the host population. In this paper, a parsimonious epidemic model is defined that attempts to capture the dynamics of evolving strains within a host population. The `evolving strains' epidemic model has many properties that lie between those of the Susceptible-Infected-Susceptible and the Susceptible-Infected-Removed epidemic models, due to the fact that individuals can only be infected by each strain once, but remain susceptible to reinfection by newly emerged strains. Coupling results are used to identify key properties, such as the time to extinction. A range of reproduction numbers are explored to characterize the model, including a novel quasi-stationary reproduction number that can be used to describe the re-emergence of the pathogen into a population with `average' levels of strain immunity, analogous to the beginning of the winter peak in influenza. Finally, the quasi-stationary distribution of the evolving strains model is explored via simulation.
[ { "created": "Mon, 17 Aug 2020 09:53:54 GMT", "version": "v1" } ]
2020-08-18
[ [ "Griffin", "Adam", "" ], [ "Spencer", "Simon E. F.", "" ], [ "Roberts", "Gareth O.", "" ] ]
Between pandemics, the influenza virus exhibits periods of incremental evolution via a process known as antigenic drift. This process gives rise to a sequence of strains of the pathogen that are continuously replaced by newer strains, preventing a build-up of immunity in the host population. In this paper, a parsimonious epidemic model is defined that attempts to capture the dynamics of evolving strains within a host population. The `evolving strains' epidemic model has many properties that lie between those of the Susceptible-Infected-Susceptible and the Susceptible-Infected-Removed epidemic models, due to the fact that individuals can only be infected by each strain once, but remain susceptible to reinfection by newly emerged strains. Coupling results are used to identify key properties, such as the time to extinction. A range of reproduction numbers are explored to characterize the model, including a novel quasi-stationary reproduction number that can be used to describe the re-emergence of the pathogen into a population with `average' levels of strain immunity, analogous to the beginning of the winter peak in influenza. Finally, the quasi-stationary distribution of the evolving strains model is explored via simulation.
2106.05440
Estari Mamidala Dr
Estari Mamidalaa, Rakesh Davella, Swapna Gurrapu, Munipally Praveen Kumar, Abhiav
In silico Prediction of Mozenavir as potential drug for SARS-CoV-2 infection via Binding Multiple Drug Targets
32 pages, 11 figures, 2 tables
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Since the epidemic began in November 2019, no viable medicine against SARS-CoV-2 has been discovered. The typical drug discovery strategy requires several years of rigorous research and development as well as a significant financial commitment, which is not feasible in the face of the current epidemic. Through molecular docking and dynamics simulation studies, we evaluated the FDA-approved drug mozenavir against the most important viral targets, including spike (S) glycoprotein, transmembrane serine protease 2 (TMPRSS2), RNA-dependent RNA polymerase (RdRp), main protease (Mpro), human angiotensin-converting enzyme 2 (ACE-2), and furin. These targets are critical for viral replication and infection propagation because they play a key role in replication/transcription and host cell recognition. Molecular docking revealed that the antiviral medication mozenavir showed a stronger affinity for SARS-CoV-2 target proteins than reference medicines in this investigation. Molecular dynamics modelling confirmed that mozenavir increases the stability of the complexes, validating the molecular docking findings. Among the COVID-19 target proteins, furin showed the greatest binding affinity for mozenavir (-12.04 kcal/mol), forming several hydrogen bonds as well as polar and hydrophobic interactions, suggesting that mozenavir might be used as an antiviral treatment against SARS-CoV-2. Overall, the present in silico results will be valuable in identifying crucial targets for subsequent experimental investigations that might help combat COVID-19 by blocking the proteolytic activity of the protease furin.
[ { "created": "Thu, 10 Jun 2021 00:43:23 GMT", "version": "v1" } ]
2021-06-11
[ [ "Mamidalaa", "Estari", "" ], [ "Davella", "Rakesh", "" ], [ "Gurrapu", "Swapna", "" ], [ "Kumar", "Munipally Praveen", "" ], [ "Abhiav", "", "" ] ]
Since the epidemic began in November 2019, no viable medicine against SARS-CoV-2 has been discovered. The typical drug discovery strategy requires several years of rigorous research and development as well as a significant financial commitment, which is not feasible in the face of the current epidemic. Through molecular docking and dynamics simulation studies, we evaluated the FDA-approved drug mozenavir against the most important viral targets, including spike (S) glycoprotein, transmembrane serine protease 2 (TMPRSS2), RNA-dependent RNA polymerase (RdRp), main protease (Mpro), human angiotensin-converting enzyme 2 (ACE-2), and furin. These targets are critical for viral replication and infection propagation because they play a key role in replication/transcription and host cell recognition. Molecular docking revealed that the antiviral medication mozenavir showed a stronger affinity for SARS-CoV-2 target proteins than reference medicines in this investigation. Molecular dynamics modelling confirmed that mozenavir increases the stability of the complexes, validating the molecular docking findings. Among the COVID-19 target proteins, furin showed the greatest binding affinity for mozenavir (-12.04 kcal/mol), forming several hydrogen bonds as well as polar and hydrophobic interactions, suggesting that mozenavir might be used as an antiviral treatment against SARS-CoV-2. Overall, the present in silico results will be valuable in identifying crucial targets for subsequent experimental investigations that might help combat COVID-19 by blocking the proteolytic activity of the protease furin.
q-bio/0606008
Indranil Mitra Mr
Indranil Mitra, Sisir Roy
Relevance of Quantum Mechanics in Circuit Implementation of Ion channels in Brain Dynamics
12 pages, Submitted to Physical Rev. E
null
null
PACS No.:87.10.+e
q-bio.NC quant-ph
null
With an increasing amount of experimental evidence pouring in from neurobiological investigations, it is appropriate to study viable reductionist models that may explain some of the features of brain activity. It is well known that the Hodgkin-Huxley (HH) model has been quite successful in explaining neural phenomena. The idea of circuit equivalents and the membrane voltages corresponding to neurons has been remarkable, but it is essentially a classical result. In view of some recent results showing that quantum mechanics may be important at suitable length scales inside the brain, an important question is to find a proper quantum analogue of the HH scheme that reduces to the well-known HH model in a suitable limit. Starting from the ideas of the neuro-manifold and the relevance of quantum mechanics at some length scales in the ion channels, we investigate this situation by considering the Schr\"odinger equation on an arbitrary manifold with a metric, which is in some sense a special case of the heat kernel equation. To establish its relevance to brain studies and make contact with HH models, we then construct plausible circuit equivalents of it. We find that a proper quantum mechanical description, and its circuit implementation, requires incorporating non-commutativity into the circuit model. The metric is a dynamical entity governing spacetime, and it plays a very distinct role in the equivalent circuits. Using the methods of stochastic quantization, we construct a specific case and find that the HH model inductances get renormalized in the quantum limit.
[ { "created": "Thu, 8 Jun 2006 20:59:34 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mitra", "Indranil", "" ], [ "Roy", "Sisir", "" ] ]
With an increasing amount of experimental evidence pouring in from neurobiological investigations, it is appropriate to study viable reductionist models that may explain some of the features of brain activity. It is well known that the Hodgkin-Huxley (HH) model has been quite successful in explaining neural phenomena. The idea of circuit equivalents and the membrane voltages corresponding to neurons has been remarkable, but it is essentially a classical result. In view of some recent results showing that quantum mechanics may be important at suitable length scales inside the brain, an important question is to find a proper quantum analogue of the HH scheme that reduces to the well-known HH model in a suitable limit. Starting from the ideas of the neuro-manifold and the relevance of quantum mechanics at some length scales in the ion channels, we investigate this situation by considering the Schr\"odinger equation on an arbitrary manifold with a metric, which is in some sense a special case of the heat kernel equation. To establish its relevance to brain studies and make contact with HH models, we then construct plausible circuit equivalents of it. We find that a proper quantum mechanical description, and its circuit implementation, requires incorporating non-commutativity into the circuit model. The metric is a dynamical entity governing spacetime, and it plays a very distinct role in the equivalent circuits. Using the methods of stochastic quantization, we construct a specific case and find that the HH model inductances get renormalized in the quantum limit.
0803.3717
Claudiu Giuraniuc
R. Blossey and C. V. Giuraniuc
Mean-field vs. stochastic models for transcriptional regulation
null
Physical Review E 78, 031909 (2008)
10.1103/PhysRevE.78.031909
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a minimal model description for the dynamics of transcriptional regulatory networks. It is studied within a mean-field approximation, i.e., by deterministic ode's representing the reaction kinetics, and by stochastic simulations employing the Gillespie algorithm. We elucidate the different results both approaches can deliver, depending on the network under study, and in particular depending on the level of detail retained in the respective description. Two examples are addressed in detail: the repressilator, a transcriptional clock based on a three-gene network realized experimentally in E. coli, and a bistable two-gene circuit under external driving, a transcriptional network motif recently proposed to play a role in cellular development.
[ { "created": "Wed, 26 Mar 2008 12:39:19 GMT", "version": "v1" }, { "created": "Tue, 8 Apr 2008 13:52:54 GMT", "version": "v2" }, { "created": "Wed, 20 Aug 2008 11:08:34 GMT", "version": "v3" } ]
2009-11-13
[ [ "Blossey", "R.", "" ], [ "Giuraniuc", "C. V.", "" ] ]
We introduce a minimal model description for the dynamics of transcriptional regulatory networks. It is studied within a mean-field approximation, i.e., by deterministic ode's representing the reaction kinetics, and by stochastic simulations employing the Gillespie algorithm. We elucidate the different results both approaches can deliver, depending on the network under study, and in particular depending on the level of detail retained in the respective description. Two examples are addressed in detail: the repressilator, a transcriptional clock based on a three-gene network realized experimentally in E. coli, and a bistable two-gene circuit under external driving, a transcriptional network motif recently proposed to play a role in cellular development.
1010.1866
Jakub Truszkowski
Daniel G. Brown and Jakub Truszkowski
Fast error-tolerant quartet phylogeny algorithms
null
null
null
null
q-bio.PE cs.CE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an algorithm for phylogenetic reconstruction using quartets that returns the correct topology for $n$ taxa in $O(n \log n)$ time with high probability, in a probabilistic model where a quartet is not consistent with the true topology of the tree with constant probability, independent of other quartets. Our incremental algorithm relies upon a search tree structure for the phylogeny that is balanced, with high probability, no matter what the true topology is. Our experimental results show that our method is comparable in runtime to the fastest heuristics, while still offering consistency guarantees.
[ { "created": "Sat, 9 Oct 2010 19:16:55 GMT", "version": "v1" } ]
2010-10-12
[ [ "Brown", "Daniel G.", "" ], [ "Truszkowski", "Jakub", "" ] ]
We present an algorithm for phylogenetic reconstruction using quartets that returns the correct topology for $n$ taxa in $O(n \log n)$ time with high probability, in a probabilistic model where a quartet is not consistent with the true topology of the tree with constant probability, independent of other quartets. Our incremental algorithm relies upon a search tree structure for the phylogeny that is balanced, with high probability, no matter what the true topology is. Our experimental results show that our method is comparable in runtime to the fastest heuristics, while still offering consistency guarantees.
1307.5684
Lucas Paletta
Jason Satel, Ross Story, Matthew D. Hilchey, Zhiguo Wang and Raymond M. Klein
Using a Dynamic Neural Field Model to Explore a Direct Collicular Inhibition Account of Inhibition of Return
null
null
null
ISACS/2013/01
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When the interval between a transient flash of light (a "cue") and a second visual response signal (a "target") exceeds 200 ms, responding is slowest in the direction indicated by the first signal. This phenomenon is commonly referred to as inhibition of return (IOR). The dynamic neural field (DNF) model has proven to have broad explanatory power for IOR, effectively capturing many empirical results. Previous work has used a short-term depression (STD) implementation of IOR, but this approach fails to explain many behavioral phenomena observed in the literature. Here, we explore a variant model of IOR involving a combination of STD and delayed direct collicular inhibition. We demonstrate that this hybrid model can better reproduce established behavioural results. We use the results of this model to propose several experiments that would yield particularly valuable insight into the nature of the neurophysiological mechanisms underlying IOR.
[ { "created": "Mon, 22 Jul 2013 12:47:18 GMT", "version": "v1" } ]
2013-07-23
[ [ "Satel", "Jason", "" ], [ "Story", "Ross", "" ], [ "Hilchey", "Matthew D.", "" ], [ "Wang", "Zhiguo", "" ], [ "Klein", "Raymond M.", "" ] ]
When the interval between a transient flash of light (a "cue") and a second visual response signal (a "target") exceeds 200 ms, responding is slowest in the direction indicated by the first signal. This phenomenon is commonly referred to as inhibition of return (IOR). The dynamic neural field (DNF) model has proven to have broad explanatory power for IOR, effectively capturing many empirical results. Previous work has used a short-term depression (STD) implementation of IOR, but this approach fails to explain many behavioral phenomena observed in the literature. Here, we explore a variant model of IOR involving a combination of STD and delayed direct collicular inhibition. We demonstrate that this hybrid model can better reproduce established behavioural results. We use the results of this model to propose several experiments that would yield particularly valuable insight into the nature of the neurophysiological mechanisms underlying IOR.
1912.06207
Hidenori Tanaka
Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane McIntosh, Stephen A. Baccus, Surya Ganguli
From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction
null
Neural Information Processing Systems (NeurIPS), 2019
null
null
q-bio.NC cs.LG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience, and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yields a new mechanistic hypothesis. Thus overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations, by providing a new roadmap to go beyond comparing neural representations to extracting and understand computational mechanisms.
[ { "created": "Thu, 12 Dec 2019 20:54:08 GMT", "version": "v1" } ]
2019-12-16
[ [ "Tanaka", "Hidenori", "" ], [ "Nayebi", "Aran", "" ], [ "Maheswaranathan", "Niru", "" ], [ "McIntosh", "Lane", "" ], [ "Baccus", "Stephen A.", "" ], [ "Ganguli", "Surya", "" ] ]
Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience, and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yields a new mechanistic hypothesis. Thus overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations, by providing a new roadmap to go beyond comparing neural representations to extracting and understand computational mechanisms.
2205.10912
Diksha Gupta
Diksha Gupta and Carlos D. Brody
Limitations of a proposed correction for slow drifts in decision criterion
18 pages, 4 figures
null
null
null
q-bio.NC cs.LG cs.NE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Trial history biases in decision-making tasks are thought to reflect systematic updates of decision variables; their precise nature therefore informs conclusions about underlying heuristic strategies and learning processes. However, random drifts in decision variables can corrupt this inference by mimicking the signatures of systematic updates. Hence, identifying the trial-by-trial evolution of decision variables requires methods that can robustly account for such drifts. Recent studies (Lak'20, Mendon\c{c}a'20) have made important advances in this direction, by proposing a convenient method to correct for the influence of slow drifts in decision criterion, a key decision variable. Here we apply this correction to a variety of updating scenarios, and evaluate its performance. We show that the correction fails for a wide range of commonly assumed systematic updating strategies, distorting one's inference away from the veridical strategies towards a narrow subset. To address these limitations, we propose a model-based approach for disambiguating systematic updates from random drifts, and demonstrate its success on real and synthetic datasets. We show that this approach accurately recovers the latent trajectory of drifts in decision criterion as well as the generative systematic updates from simulated data. Our results offer recommendations for methods to account for the interactions between history biases and slow drifts, and highlight the advantages of incorporating assumptions about the generative process directly into models of decision-making.
[ { "created": "Sun, 22 May 2022 19:33:19 GMT", "version": "v1" } ]
2022-05-24
[ [ "Gupta", "Diksha", "" ], [ "Brody", "Carlos D.", "" ] ]
Trial history biases in decision-making tasks are thought to reflect systematic updates of decision variables; their precise nature therefore informs conclusions about underlying heuristic strategies and learning processes. However, random drifts in decision variables can corrupt this inference by mimicking the signatures of systematic updates. Hence, identifying the trial-by-trial evolution of decision variables requires methods that can robustly account for such drifts. Recent studies (Lak'20, Mendon\c{c}a'20) have made important advances in this direction, by proposing a convenient method to correct for the influence of slow drifts in decision criterion, a key decision variable. Here we apply this correction to a variety of updating scenarios, and evaluate its performance. We show that the correction fails for a wide range of commonly assumed systematic updating strategies, distorting one's inference away from the veridical strategies towards a narrow subset. To address these limitations, we propose a model-based approach for disambiguating systematic updates from random drifts, and demonstrate its success on real and synthetic datasets. We show that this approach accurately recovers the latent trajectory of drifts in decision criterion as well as the generative systematic updates from simulated data. Our results offer recommendations for methods to account for the interactions between history biases and slow drifts, and highlight the advantages of incorporating assumptions about the generative process directly into models of decision-making.
1304.7496
Nabanita Dasgupta-Schubert
Omar S. Castillo Baltasar, Nabanita Dasgupta-Schubert, Christian Schubert
A Systems Model of the Eco-physiological Response of Plants to Environmental Heavy Metal Concentrations
33 pages, 12 figures. 1 table
Published in JOURNAL OF ENVIRONMENTAL STATISTICS, volume 4, issue 10, page 1, April 2013
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ecophysiological response of plants to environmental heavy metal (HM) stress is indicated by the profile of tissue HM concentration (Cp) versus the concentration of the HM in the substrate (Cs). We report a systems biology approach to the modelling of the Cp-Cs profile using, as a loose analogy, the Verhulst model of population dynamics, but formulated in the concentration domain. The HM is conceptualized as an ecological organism that `colonizes' the resource zone of the plant cells, driven by the potential supplied by the higher HM concentration in the substrate. Infinite occupation by the HM is limited by the eventual saturation of the cellular binding sites. The solution of the differential equation results in the logistic equation, the r-K model. The model is tested for three metallophilic plants, T. erecta, S. vulgaris and E. splendens, growing in different types of substrates contaminated to varying extents by different copper compounds. The model fitted the experimental Cp-Cs profiles well. The r and K parameter values, and secondary quantities derived from them, allowed a quantification of the number of Cu binding sites per cell at saturation and the sensitivities (affinities) of these sites for Cu in the three experimental systems, as well as the extraction of information related to the substrate phyto-availability of the Cu. Thus, even though the model operates at the systems level, it permits useful insights into underlying processes that ultimately derive from the cumulative molecular processes of HM homeostasis. The chief advantages of the model are its simplicity, fewer arbitrary parameters, and the non-specification of constraints on substrate and plant type.
[ { "created": "Sun, 28 Apr 2013 18:45:45 GMT", "version": "v1" } ]
2013-04-30
[ [ "Baltasar", "Omar S. Castillo", "" ], [ "Dasgupta-Schubert", "Nabanita", "" ], [ "Schubert", "Christian", "" ] ]
The ecophysiological response of plants to environmental heavy metal (HM) stress is indicated by the profile of tissue HM concentration (Cp) versus the concentration of the HM in the substrate (Cs). We report a systems biology approach to the modelling of the Cp-Cs profile using, as a loose analogy, the Verhulst model of population dynamics, but formulated in the concentration domain. The HM is conceptualized as an ecological organism that `colonizes' the resource zone of the plant cells, driven by the potential supplied by the higher HM concentration in the substrate. Infinite occupation by the HM is limited by the eventual saturation of the cellular binding sites. The solution of the differential equation results in the logistic equation, the r-K model. The model is tested for three metallophilic plants, T. erecta, S. vulgaris and E. splendens, growing in different types of substrates contaminated to varying extents by different copper compounds. The model fitted the experimental Cp-Cs profiles well. The r and K parameter values, and secondary quantities derived from them, allowed a quantification of the number of Cu binding sites per cell at saturation and the sensitivities (affinities) of these sites for Cu in the three experimental systems, as well as the extraction of information related to the substrate phyto-availability of the Cu. Thus, even though the model operates at the systems level, it permits useful insights into underlying processes that ultimately derive from the cumulative molecular processes of HM homeostasis. The chief advantages of the model are its simplicity, fewer arbitrary parameters, and the non-specification of constraints on substrate and plant type.
2305.01520
Namkyeong Lee
Namkyeong Lee, Dongmin Hyun, Gyoung S. Na, Sungwon Kim, Junseok Lee, Chanyoung Park
Conditional Graph Information Bottleneck for Molecular Relational Learning
ICML 2023
null
null
null
q-bio.MN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular relational learning, whose goal is to learn the interaction behavior between molecular pairs, has gained a surge of interest in the molecular sciences due to its wide range of applications. Graph neural networks have recently shown great success in molecular relational learning by modeling a molecule as a graph structure and considering atom-level interactions between two molecules. Despite their success, existing molecular relational learning methods tend to overlook the nature of chemistry, i.e., a chemical compound is composed of multiple substructures, such as functional groups, that cause distinctive chemical reactions. In this work, we propose a novel relational learning framework, called CGIB, that predicts the interaction behavior between a pair of graphs by detecting core subgraphs therein. The main idea is, given a pair of graphs, to find a subgraph of one graph that contains the minimal sufficient information regarding the task at hand, conditioned on the paired graph, based on the principle of the conditional graph information bottleneck. We argue that our proposed method mimics the nature of chemical reactions, i.e., the core substructure of a molecule varies depending on which other molecule it interacts with. Extensive experiments on various tasks with real-world datasets demonstrate the superiority of CGIB over state-of-the-art baselines. Our code is available at https://github.com/Namkyeong/CGIB.
[ { "created": "Sat, 29 Apr 2023 01:17:43 GMT", "version": "v1" }, { "created": "Sun, 9 Jul 2023 23:30:46 GMT", "version": "v2" } ]
2023-07-11
[ [ "Lee", "Namkyeong", "" ], [ "Hyun", "Dongmin", "" ], [ "Na", "Gyoung S.", "" ], [ "Kim", "Sungwon", "" ], [ "Lee", "Junseok", "" ], [ "Park", "Chanyoung", "" ] ]
Molecular relational learning, whose goal is to learn the interaction behavior between molecular pairs, has gained a surge of interest in the molecular sciences due to its wide range of applications. Graph neural networks have recently shown great success in molecular relational learning by modeling a molecule as a graph structure and considering atom-level interactions between two molecules. Despite their success, existing molecular relational learning methods tend to overlook the nature of chemistry, i.e., a chemical compound is composed of multiple substructures, such as functional groups, that cause distinctive chemical reactions. In this work, we propose a novel relational learning framework, called CGIB, that predicts the interaction behavior between a pair of graphs by detecting core subgraphs therein. The main idea is, given a pair of graphs, to find a subgraph of one graph that contains the minimal sufficient information regarding the task at hand, conditioned on the paired graph, based on the principle of the conditional graph information bottleneck. We argue that our proposed method mimics the nature of chemical reactions, i.e., the core substructure of a molecule varies depending on which other molecule it interacts with. Extensive experiments on various tasks with real-world datasets demonstrate the superiority of CGIB over state-of-the-art baselines. Our code is available at https://github.com/Namkyeong/CGIB.
1604.02467
Fabio Vandin
Tommy Hansen, Fabio Vandin
Finding Mutated Subnetworks Associated with Survival in Cancer
This paper was selected for oral presentation at RECOMB 2016 and an abstract is published in the conference proceedings
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Next-generation sequencing technologies allow the measurement of somatic mutations in a large number of patients from the same cancer type. One of the main goals in analyzing these mutations is the identification of mutations associated with clinical parameters, such as survival time. This goal is hindered by the genetic heterogeneity of mutations in cancer, due to the fact that genes and mutations act in the context of pathways. To identify mutations associated with survival time it is therefore crucial to study mutations in the context of interaction networks. In this work we study the problem of identifying subnetworks of a large gene-gene interaction network that have mutations associated with survival. We formally define the associated computational problem by using a score for subnetworks based on the test statistic of the log-rank test, a widely used statistical test for comparing the survival of two populations. We show that the computational problem is NP-hard and we propose a novel algorithm, called Network of Mutations Associated with Survival (NoMAS), to solve it. NoMAS is based on the color-coding technique, which has been previously used in other applications to find the highest scoring subnetwork with high probability when the subnetwork score is additive. In our case the score is not additive; nonetheless, we prove that under a reasonable model for mutations in cancer NoMAS does identify the optimal solution with high probability. We test NoMAS on simulated and cancer data, comparing it to approaches based on single gene tests and to various greedy approaches. We show that our method does indeed find the optimal solution and performs better than the other approaches. Moreover, on two cancer datasets our method identifies subnetworks with significant association to survival when none of the genes has significant association with survival when considered in isolation.
[ { "created": "Fri, 8 Apr 2016 20:09:07 GMT", "version": "v1" }, { "created": "Sat, 10 Sep 2016 11:25:43 GMT", "version": "v2" } ]
2016-09-13
[ [ "Hansen", "Tommy", "" ], [ "Vandin", "Fabio", "" ] ]
Next-generation sequencing technologies allow the measurement of somatic mutations in a large number of patients from the same cancer type. One of the main goals in analyzing these mutations is the identification of mutations associated with clinical parameters, such as survival time. This goal is hindered by the genetic heterogeneity of mutations in cancer, due to the fact that genes and mutations act in the context of pathways. To identify mutations associated with survival time it is therefore crucial to study mutations in the context of interaction networks. In this work we study the problem of identifying subnetworks of a large gene-gene interaction network that have mutations associated with survival. We formally define the associated computational problem by using a score for subnetworks based on the test statistic of the log-rank test, a widely used statistical test for comparing the survival of two populations. We show that the computational problem is NP-hard and we propose a novel algorithm, called Network of Mutations Associated with Survival (NoMAS), to solve it. NoMAS is based on the color-coding technique, which has been previously used in other applications to find the highest scoring subnetwork with high probability when the subnetwork score is additive. In our case the score is not additive; nonetheless, we prove that under a reasonable model for mutations in cancer NoMAS does identify the optimal solution with high probability. We test NoMAS on simulated and cancer data, comparing it to approaches based on single gene tests and to various greedy approaches. We show that our method does indeed find the optimal solution and performs better than the other approaches. Moreover, on two cancer datasets our method identifies subnetworks with significant association to survival when none of the genes has significant association with survival when considered in isolation.
2404.10869
Yujiang Wang
Vytene Janiukstyte, Csaba Kozma, Thomas W. Owen, Umair J Chaudhury, Beate Diehl, Louis Lemieux, John S Duncan, Fergus Rugg-Gunn, Jane de Tisi, Yujiang Wang, Peter N. Taylor
Alpha rhythm slowing in temporal epilepsy across Scalp EEG and MEG
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
EEG slowing is reported in various neurological disorders including Alzheimer's, Parkinson's and Epilepsy. Here, we investigate alpha rhythm slowing in individuals with refractory temporal lobe epilepsy (TLE), compared to healthy controls, using scalp electroencephalography (EEG) and magnetoencephalography (MEG). We retrospectively analysed data from 17 (46) healthy controls and 22 (24) individuals with TLE who underwent scalp EEG (MEG) recordings as part of presurgical evaluation. Resting-state, eyes-closed recordings were source reconstructed using the standardized low-resolution brain electromagnetic tomography (sLORETA) method. We extracted low (slow) 6-9 Hz and high (fast) 10-11 Hz alpha relative band power and calculated the alpha power ratio by dividing low (slow) alpha by high (fast) alpha. This ratio was computed for all brain regions in all individuals. Alpha oscillations were slower in individuals with TLE than controls (p<0.05). This effect was present in both the ipsilateral and contralateral hemispheres, and across widespread brain regions. Alpha slowing in TLE was found in both EEG and MEG recordings. We interpret greater low (slow)-alpha as greater deviation from health.
[ { "created": "Tue, 16 Apr 2024 19:37:28 GMT", "version": "v1" } ]
2024-04-18
[ [ "Janiukstyte", "Vytene", "" ], [ "Kozma", "Csaba", "" ], [ "Owen", "Thomas W.", "" ], [ "Chaudhury", "Umair J", "" ], [ "Diehl", "Beate", "" ], [ "Lemieux", "Louis", "" ], [ "Duncan", "John S", "" ], [ "Rugg-Gunn", "Fergus", "" ], [ "de Tisi", "Jane", "" ], [ "Wang", "Yujiang", "" ], [ "Taylor", "Peter N.", "" ] ]
EEG slowing is reported in various neurological disorders including Alzheimer's, Parkinson's and Epilepsy. Here, we investigate alpha rhythm slowing in individuals with refractory temporal lobe epilepsy (TLE), compared to healthy controls, using scalp electroencephalography (EEG) and magnetoencephalography (MEG). We retrospectively analysed data from 17 (46) healthy controls and 22 (24) individuals with TLE who underwent scalp EEG (MEG) recordings as part of presurgical evaluation. Resting-state, eyes-closed recordings were source reconstructed using the standardized low-resolution brain electromagnetic tomography (sLORETA) method. We extracted low (slow) 6-9 Hz and high (fast) 10-11 Hz alpha relative band power and calculated the alpha power ratio by dividing low (slow) alpha by high (fast) alpha. This ratio was computed for all brain regions in all individuals. Alpha oscillations were slower in individuals with TLE than controls (p<0.05). This effect was present in both the ipsilateral and contralateral hemispheres, and across widespread brain regions. Alpha slowing in TLE was found in both EEG and MEG recordings. We interpret greater low (slow)-alpha as greater deviation from health.
1704.02342
Elizaveta Solomonova
Elizaveta Solomonova
Sleep Paralysis: phenomenology, neurophysiology and treatment
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Sleep paralysis is an experience of being temporarily unable to move or talk during the transitional periods between sleep and wakefulness: at sleep onset or upon awakening. Feeling of paralysis may be accompanied by a variety of vivid and intense sensory experiences, including mentation in visual, auditory, and tactile modalities, as well as a distinct feeling of presence. This chapter discusses a variety of sleep paralysis experiences from the perspective of enactive cognition and cultural neurophenomenology. Current knowledge of neurophysiology and associated conditions is presented, and some techniques for coping with sleep paralysis are proposed. As an experience characterized by a hybrid state of dreaming and waking, sleep paralysis offers a unique window into phenomenology of spontaneous thought in sleep.
[ { "created": "Fri, 7 Apr 2017 18:33:59 GMT", "version": "v1" } ]
2017-04-11
[ [ "Solomonova", "Elizaveta", "" ] ]
Sleep paralysis is an experience of being temporarily unable to move or talk during the transitional periods between sleep and wakefulness: at sleep onset or upon awakening. Feeling of paralysis may be accompanied by a variety of vivid and intense sensory experiences, including mentation in visual, auditory, and tactile modalities, as well as a distinct feeling of presence. This chapter discusses a variety of sleep paralysis experiences from the perspective of enactive cognition and cultural neurophenomenology. Current knowledge of neurophysiology and associated conditions is presented, and some techniques for coping with sleep paralysis are proposed. As an experience characterized by a hybrid state of dreaming and waking, sleep paralysis offers a unique window into phenomenology of spontaneous thought in sleep.
1801.09247
Nikolaos Sfakianakis PhD
Nikolaos Sfakianakis and Aaron Brunk
Stability, convergence, and sensitivity analysis of the Filament Based Lamellipodium Model and the corresponding FEM
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on the study of the Filament Based Lamellipodium Model (FBLM) and the corresponding Finite Element Method (FEM) from a numerical point of view. We study fundamental numerical properties of the FEM and justify the further use of the FBLM. We demonstrate that the FEM satisfies a timestep stability condition that is consistent with the nature of the problem. We propose a particular strategy to automatically adapt the time step of the method. We show that the FEM converges with respect to the (two-dimensional) space discretization in a series of characteristic and representative experiments. We embed and couple the FBLM with a complex extracellular environment comprised of chemical and haptic components and study their combined time evolution. In this light, we study the sensitivity of the FBLM to several of its controlling parameters and discuss their influence on the development of the model.
[ { "created": "Sun, 28 Jan 2018 16:44:07 GMT", "version": "v1" } ]
2018-01-30
[ [ "Sfakianakis", "Nikolaos", "" ], [ "Brunk", "Aaron", "" ] ]
This paper focuses on the study of the Filament Based Lamellipodium Model (FBLM) and the corresponding Finite Element Method (FEM) from a numerical point of view. We study fundamental numerical properties of the FEM and justify the further use of the FBLM. We demonstrate that the FEM satisfies a timestep stability condition that is consistent with the nature of the problem. We propose a particular strategy to automatically adapt the time step of the method. We show that the FEM converges with respect to the (two-dimensional) space discretization in a series of characteristic and representative experiments. We embed and couple the FBLM with a complex extracellular environment comprised of chemical and haptic components and study their combined time evolution. In this light, we study the sensitivity of the FBLM to several of its controlling parameters and discuss their influence on the development of the model.
q-bio/0702031
Tristan Ursell
Tristan Ursell, Kerwyn Huang, Eric Peterson, Rob Phillips
Cooperative Gating and Spatial Organization of Membrane Proteins through Elastic Interactions
12 pages, 6 figures, 63 references, submitted to PLoS Computational Biology
null
10.1371/journal.pcbi.0030081
null
q-bio.SC q-bio.QM
null
Biological membranes are elastic media in which the presence of a transmembrane protein leads to local bilayer deformation. The energetics of deformation allow two membrane proteins in close proximity to influence each other's equilibrium conformation via their local deformations, and spatially organize the proteins based on their geometry. We use the mechanosensitive channel of large conductance (MscL) as a case study to examine the implications of bilayer-mediated elastic interactions on protein conformational statistics and clustering. The deformations around MscL cost energy on the order of 10 kT and extend ~3 nm from the protein edge; such elastic forces induce cooperative gating, and we propose experiments to measure these effects. Additionally, since elastic interactions are coupled to protein conformation, we find that conformational changes can severely alter the average separation between two proteins. This has important implications for how conformational changes organize membrane proteins into functional groups within membranes.
[ { "created": "Wed, 14 Feb 2007 00:19:43 GMT", "version": "v1" } ]
2015-06-26
[ [ "Ursell", "Tristan", "" ], [ "Huang", "Kerwyn", "" ], [ "Peterson", "Eric", "" ], [ "Phillips", "Rob", "" ] ]
Biological membranes are elastic media in which the presence of a transmembrane protein leads to local bilayer deformation. The energetics of deformation allow two membrane proteins in close proximity to influence each other's equilibrium conformation via their local deformations, and spatially organize the proteins based on their geometry. We use the mechanosensitive channel of large conductance (MscL) as a case study to examine the implications of bilayer-mediated elastic interactions on protein conformational statistics and clustering. The deformations around MscL cost energy on the order of 10 kT and extend ~3 nm from the protein edge; such elastic forces induce cooperative gating, and we propose experiments to measure these effects. Additionally, since elastic interactions are coupled to protein conformation, we find that conformational changes can severely alter the average separation between two proteins. This has important implications for how conformational changes organize membrane proteins into functional groups within membranes.
2207.02339
Hisashi Inaba
Odo Diekmann and Hisashi Inaba
A systematic procedure for incorporating separable static heterogeneity into compartmental epidemic models
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show how to modify a compartmental epidemic model, without changing the dimension, such that separable static heterogeneity is taken into account. The derivation is based on the Kermack-McKendrick renewal equation.
[ { "created": "Tue, 5 Jul 2022 22:11:34 GMT", "version": "v1" }, { "created": "Tue, 22 Nov 2022 23:42:10 GMT", "version": "v2" } ]
2022-11-24
[ [ "Diekmann", "Odo", "" ], [ "Inaba", "Hisashi", "" ] ]
In this paper, we show how to modify a compartmental epidemic model, without changing the dimension, such that separable static heterogeneity is taken into account. The derivation is based on the Kermack-McKendrick renewal equation.
2008.02484
O\u{g}ul Esen
O\u{g}ul Esen, Eduardo Fern\'andez-Saiz, Cristina Sard\'on, Marcin Zaj\k{a}c
Geometry and solutions of an epidemic SIS model permitting fluctuations and quantization
null
null
null
null
q-bio.PE math-ph math.MP physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some recent works reveal that there are models of differential equations for the mean and variance of infected individuals that reproduce the SIS epidemic model at some point. This stochastic SIS epidemic model can be interpreted as a Hamiltonian system, therefore we wondered if it could be geometrically handled through the theory of Lie--Hamilton systems, and this happened to be the case. The principal result is that we are able to obtain a general solution for the stochastic SIS epidemic model (with fluctuations) in the form of a nonlinear superposition rule that includes particular stochastic solutions and certain constants to be related to initial conditions of the contagion process. The choice of these initial conditions will be crucial to display the expected behavior of the curve of infections during the epidemic. We shall limit these constants to nonsingular regimes and display graphics of the behavior of the solutions. As one could expect, the increase of infected individuals follows a sigmoid-like curve. Lie--Hamiltonian systems admit a quantum deformation, and so does the stochastic SIS epidemic model. We present this generalization as well. If one wants to study the evolution of an SIS epidemic under the influence of a constant heat source (like centrally heated buildings), one can make use of quantum stochastic differential equations coming from the so-called quantum deformation.
[ { "created": "Thu, 6 Aug 2020 07:15:30 GMT", "version": "v1" } ]
2020-08-07
[ [ "Esen", "Oğul", "" ], [ "Fernández-Saiz", "Eduardo", "" ], [ "Sardón", "Cristina", "" ], [ "Zając", "Marcin", "" ] ]
Some recent works reveal that there are models of differential equations for the mean and variance of infected individuals that reproduce the SIS epidemic model at some point. This stochastic SIS epidemic model can be interpreted as a Hamiltonian system, therefore we wondered if it could be geometrically handled through the theory of Lie--Hamilton systems, and this happened to be the case. The principal result is that we are able to obtain a general solution for the stochastic SIS epidemic model (with fluctuations) in the form of a nonlinear superposition rule that includes particular stochastic solutions and certain constants to be related to initial conditions of the contagion process. The choice of these initial conditions will be crucial to display the expected behavior of the curve of infections during the epidemic. We shall limit these constants to nonsingular regimes and display graphics of the behavior of the solutions. As one could expect, the increase of infected individuals follows a sigmoid-like curve. Lie--Hamiltonian systems admit a quantum deformation, and so does the stochastic SIS epidemic model. We present this generalization as well. If one wants to study the evolution of an SIS epidemic under the influence of a constant heat source (like centrally heated buildings), one can make use of quantum stochastic differential equations coming from the so-called quantum deformation.
2004.07397
Gustavo Libotte
Gustavo Barbosa Libotte, Fran S\'ergio Lobato, Gustavo Mendes Platt, Ant\^onio Jos\'e da Silva Neto
Determination of an Optimal Control Strategy for Vaccine Administration in COVID-19 Pandemic Treatment
null
null
null
null
q-bio.PE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For decades, mathematical models have been used to predict the behavior of physical and biologic systems, and to define strategies aiming at the minimization of the effects of different types of diseases. At present, the development of mathematical models to simulate the dynamic behavior of the novel coronavirus disease (COVID-19) is considered an important theme due to the quantity of infected people worldwide. In this work, the aim is to determine an optimal control strategy for vaccine administration in COVID-19 pandemic treatment considering real data from China. For this purpose, an inverse problem is formulated and solved in order to determine the parameters of the compartmental SIR (Susceptible-Infectious-Recovered) model. To solve such an inverse problem, the Differential Evolution (DE) algorithm is employed. After this step, two optimal control problems (mono- and multi-objective) to determine the optimal strategy for vaccine administration in COVID-19 pandemic treatment are proposed. The first consists of minimizing the quantity of infected individuals during the treatment. The second considers minimizing together the quantity of infected individuals and the prescribed vaccine concentration during the treatment, i.e., a multi-objective optimal control problem. The solution of each optimal control problem is obtained using the DE and Multi-Objective Differential Evolution (MODE) algorithms, respectively. The results regarding the proposed multi-objective optimal control problem provide a set of evidence from which an optimal strategy for vaccine administration can be chosen, according to a given criterion.
[ { "created": "Wed, 15 Apr 2020 23:50:01 GMT", "version": "v1" }, { "created": "Mon, 20 Apr 2020 23:18:32 GMT", "version": "v2" } ]
2020-04-22
[ [ "Libotte", "Gustavo Barbosa", "" ], [ "Lobato", "Fran Sérgio", "" ], [ "Platt", "Gustavo Mendes", "" ], [ "Neto", "Antônio José da Silva", "" ] ]
For decades, mathematical models have been used to predict the behavior of physical and biologic systems, and to define strategies aiming at the minimization of the effects of different types of diseases. At present, the development of mathematical models to simulate the dynamic behavior of the novel coronavirus disease (COVID-19) is considered an important theme due to the quantity of infected people worldwide. In this work, the aim is to determine an optimal control strategy for vaccine administration in COVID-19 pandemic treatment considering real data from China. For this purpose, an inverse problem is formulated and solved in order to determine the parameters of the compartmental SIR (Susceptible-Infectious-Recovered) model. To solve such an inverse problem, the Differential Evolution (DE) algorithm is employed. After this step, two optimal control problems (mono- and multi-objective) to determine the optimal strategy for vaccine administration in COVID-19 pandemic treatment are proposed. The first consists of minimizing the quantity of infected individuals during the treatment. The second considers minimizing together the quantity of infected individuals and the prescribed vaccine concentration during the treatment, i.e., a multi-objective optimal control problem. The solution of each optimal control problem is obtained using the DE and Multi-Objective Differential Evolution (MODE) algorithms, respectively. The results regarding the proposed multi-objective optimal control problem provide a set of evidence from which an optimal strategy for vaccine administration can be chosen, according to a given criterion.
q-bio/0607047
Vadim N. Biktashev
Vadim N. Biktashev, Rebecca Suckley, Yury E. Elkin and Radostin D. Simitev
Asymptotic analysis and analytical solutions of a model of cardiac excitation
43 pages, 10 figures, submitted to Bull Math Biol 2007/04/04
null
null
null
q-bio.TO
null
We describe an asymptotic approach to gated ionic models of single-cell cardiac excitability. It has a form essentially different from the Tikhonov fast-slow form assumed in standard asymptotic reductions of excitable systems. This is of interest since the standard approaches have been previously found inadequate to describe phenomena such as the dissipation of cardiac wave fronts and the shape of action potential at repolarization. The proposed asymptotic description overcomes these deficiencies by allowing, among other non-Tikhonov features, that a dynamical variable may change its character from fast to slow within a single solution. The general asymptotic approach is best demonstrated on an example which should be both simple and generic. The classical model of Purkinje fibers (Noble, 1962) has the simplest functional form of all cardiac models but according to the current understanding it assigns a physiologically incorrect role to the Na current. This leads us to suggest an ``Archetypal Model'' with the simplicity of the Noble model but with a structure more typical to contemporary cardiac models. We demonstrate that the Archetypal Model admits a complete asymptotic solution in quadratures. To validate our asymptotic approach, we proceed to consider an exactly solvable ``caricature'' of the Archetypal Model and demonstrate that the asymptotic of its exact solution coincides with the solutions obtained by substituting the ``caricature'' right-hand sides into the asymptotic solution of the generic Archetypal Model. This is necessary, because, unlike in standard asymptotic descriptions, no general results exist which can guarantee the proximity of the non-Tikhonov asymptotic solutions to the solutions of the corresponding detailed ionic model.
[ { "created": "Tue, 25 Jul 2006 18:36:21 GMT", "version": "v1" }, { "created": "Mon, 9 Apr 2007 18:53:24 GMT", "version": "v2" } ]
2007-05-23
[ [ "Biktashev", "Vadim N.", "" ], [ "Suckley", "Rebecca", "" ], [ "Elkin", "Yury E.", "" ], [ "Simitev", "Radostin D.", "" ] ]
We describe an asymptotic approach to gated ionic models of single-cell cardiac excitability. It has a form essentially different from the Tikhonov fast-slow form assumed in standard asymptotic reductions of excitable systems. This is of interest since the standard approaches have been previously found inadequate to describe phenomena such as the dissipation of cardiac wave fronts and the shape of action potential at repolarization. The proposed asymptotic description overcomes these deficiencies by allowing, among other non-Tikhonov features, that a dynamical variable may change its character from fast to slow within a single solution. The general asymptotic approach is best demonstrated on an example which should be both simple and generic. The classical model of Purkinje fibers (Noble, 1962) has the simplest functional form of all cardiac models but according to the current understanding it assigns a physiologically incorrect role to the Na current. This leads us to suggest an ``Archetypal Model'' with the simplicity of the Noble model but with a structure more typical to contemporary cardiac models. We demonstrate that the Archetypal Model admits a complete asymptotic solution in quadratures. To validate our asymptotic approach, we proceed to consider an exactly solvable ``caricature'' of the Archetypal Model and demonstrate that the asymptotic of its exact solution coincides with the solutions obtained by substituting the ``caricature'' right-hand sides into the asymptotic solution of the generic Archetypal Model. This is necessary, because, unlike in standard asymptotic descriptions, no general results exist which can guarantee the proximity of the non-Tikhonov asymptotic solutions to the solutions of the corresponding detailed ionic model.
1511.00083
Subutai Ahmad
Jeff Hawkins and Subutai Ahmad
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex
Submitted for publication
Frontiers in Neural Circuits 10:23 (2016) 1-13
10.3389/fncir.2016.00023
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
[ { "created": "Sat, 31 Oct 2015 06:03:05 GMT", "version": "v1" }, { "created": "Tue, 1 Dec 2015 21:20:26 GMT", "version": "v2" } ]
2016-04-25
[ [ "Hawkins", "Jeff", "" ], [ "Ahmad", "Subutai", "" ] ]
Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
2403.14149
Jaroslav Albert
Jaroslav Albert
Exact analytic expressions for discrete first-passage time probability distributions in Markov networks
11 pages, 4 figures
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The first-passage time (FPT) is the time it takes a system variable to cross a given boundary for the first time. In the context of Markov networks, the FPT is the time a random walker takes to reach a particular node (target) by hopping from one node to another. If the walker pauses at each node for a period of time drawn from a continuous distribution, the FPT will be a continuous variable; if the pauses last exactly one unit of time, the FPT will be discrete and equal to the number of hops. We derive an exact analytical expression for the discrete first-passage time (DFPT) in Markov networks. Our approach is as follows: first, we divide each edge (connection between two nodes) of the network into $h$ unidirectional edges connecting a cascade of $h$ fictitious nodes and compute the continuous FPT (CFPT). Second, we set the transition rates along the edges to $h$, and show that as $h\to\infty$, the distribution of travel times between any two nodes of the original network approaches a delta function centered at 1, which is equivalent to pauses lasting 1 unit of time. Using this approach, we also compute the joint-probability distributions for the DFPT, the target node, and the node from which the target node was reached. A comparison with simulation confirms the validity of our approach.
[ { "created": "Thu, 21 Mar 2024 05:50:52 GMT", "version": "v1" } ]
2024-03-22
[ [ "Albert", "Jaroslav", "" ] ]
The first-passage time (FPT) is the time it takes a system variable to cross a given boundary for the first time. In the context of Markov networks, the FPT is the time a random walker takes to reach a particular node (target) by hopping from one node to another. If the walker pauses at each node for a period of time drawn from a continuous distribution, the FPT will be a continuous variable; if the pauses last exactly one unit of time, the FPT will be discrete and equal to the number of hops. We derive an exact analytical expression for the discrete first-passage time (DFPT) in Markov networks. Our approach is as follows: first, we divide each edge (connection between two nodes) of the network into $h$ unidirectional edges connecting a cascade of $h$ fictitious nodes and compute the continuous FPT (CFPT). Second, we set the transition rates along the edges to $h$, and show that as $h\to\infty$, the distribution of travel times between any two nodes of the original network approaches a delta function centered at 1, which is equivalent to pauses lasting 1 unit of time. Using this approach, we also compute the joint-probability distributions for the DFPT, the target node, and the node from which the target node was reached. A comparison with simulation confirms the validity of our approach.
1706.06031
Dmitry Petrov
Dmitry Petrov, Alexander Ivanov, Joshua Faskowitz, Boris Gutman, Daniel Moyer, Julio Villalon, Neda Jahanshad and Paul Thompson
Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise Classification
Accepted for MICCAI 2017, 8 pages, 3 figures
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is no consensus on how to construct structural brain networks from diffusion MRI. How variations in pre-processing steps affect network reliability and its ability to distinguish subjects remains opaque. In this work, we address this issue by comparing 35 structural connectome-building pipelines. We vary diffusion reconstruction models, tractography algorithms and parcellations. Next, we classify structural connectome pairs as either belonging to the same individual or not. Connectome weights and eight topological derivative measures form our feature set. For experiments, we use three test-retest datasets from the Consortium for Reliability and Reproducibility (CoRR) comprised of a total of 105 individuals. We also compare pairwise classification results to a commonly used parametric test-retest measure, Intraclass Correlation Coefficient (ICC).
[ { "created": "Mon, 19 Jun 2017 16:05:11 GMT", "version": "v1" } ]
2017-06-20
[ [ "Petrov", "Dmitry", "" ], [ "Ivanov", "Alexander", "" ], [ "Faskowitz", "Joshua", "" ], [ "Gutman", "Boris", "" ], [ "Moyer", "Daniel", "" ], [ "Villalon", "Julio", "" ], [ "Jahanshad", "Neda", "" ], [ "Thompson", "Paul", "" ] ]
There is no consensus on how to construct structural brain networks from diffusion MRI. How variations in pre-processing steps affect network reliability and its ability to distinguish subjects remains opaque. In this work, we address this issue by comparing 35 structural connectome-building pipelines. We vary diffusion reconstruction models, tractography algorithms and parcellations. Next, we classify structural connectome pairs as either belonging to the same individual or not. Connectome weights and eight topological derivative measures form our feature set. For experiments, we use three test-retest datasets from the Consortium for Reliability and Reproducibility (CoRR) comprised of a total of 105 individuals. We also compare pairwise classification results to a commonly used parametric test-retest measure, Intraclass Correlation Coefficient (ICC).
1211.6397
Jason Graham
Jason M. Graham, Bruce P. Ayati, Sarah A. Holstein, James A. Martin
The Role of Osteocytes in Targeted Bone Remodeling: A Mathematical Model
null
PLoS ONE 2013 May 22; 8(5): e63884
10.1371/journal.pone.0063884
null
q-bio.TO q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Until recently many studies of bone remodeling at the cellular level have focused on the behavior of mature osteoblasts and osteoclasts, and their respective precursor cells, with the role of osteocytes and bone lining cells left largely unexplored. This is particularly true with respect to the mathematical modeling of bone remodeling. However, there is increasing evidence that osteocytes play important roles in the cycle of targeted bone remodeling, in serving as a significant source of RANKL to support osteoclastogenesis, and in secreting the bone formation inhibitor sclerostin. Moreover, there is also increasing interest in sclerostin, an osteocyte-secreted bone formation inhibitor, and its role in regulating local response to changes in the bone microenvironment. Here we develop a cell population model of bone remodeling that includes the role of osteocytes, sclerostin, and allows for the possibility of RANKL expression by osteocyte cell populations. This model extends and complements many of the existing mathematical models for bone remodeling but can be used to explore aspects of the process of bone remodeling that were previously beyond the scope of prior modeling work. Through numerical simulations we demonstrate that our model can be used to theoretically explore many of the most recent experimental results for bone remodeling, and can be utilized to assess the effects of novel bone-targeting agents on the bone remodeling process.
[ { "created": "Tue, 27 Nov 2012 19:30:33 GMT", "version": "v1" }, { "created": "Wed, 28 Nov 2012 22:10:16 GMT", "version": "v2" }, { "created": "Tue, 28 May 2013 18:42:20 GMT", "version": "v3" } ]
2013-05-29
[ [ "Graham", "Jason M.", "" ], [ "Ayati", "Bruce P.", "" ], [ "Holstein", "Sarah A.", "" ], [ "Martin", "James A.", "" ] ]
Until recently many studies of bone remodeling at the cellular level have focused on the behavior of mature osteoblasts and osteoclasts, and their respective precursor cells, with the role of osteocytes and bone lining cells left largely unexplored. This is particularly true with respect to the mathematical modeling of bone remodeling. However, there is increasing evidence that osteocytes play important roles in the cycle of targeted bone remodeling, in serving as a significant source of RANKL to support osteoclastogenesis, and in secreting the bone formation inhibitor sclerostin. Moreover, there is also increasing interest in sclerostin, an osteocyte-secreted bone formation inhibitor, and its role in regulating local response to changes in the bone microenvironment. Here we develop a cell population model of bone remodeling that includes the role of osteocytes, sclerostin, and allows for the possibility of RANKL expression by osteocyte cell populations. This model extends and complements many of the existing mathematical models for bone remodeling but can be used to explore aspects of the process of bone remodeling that were previously beyond the scope of prior modeling work. Through numerical simulations we demonstrate that our model can be used to theoretically explore many of the most recent experimental results for bone remodeling, and can be utilized to assess the effects of novel bone-targeting agents on the bone remodeling process.
2310.15206
Petra Proch\'azkov\'a Schrumpfov\'a
Luk\'a\v{s} Nevos\'ad, Bo\v{z}ena Klodov\'a, David Honys, Radka Svobodov\'a, Tom\'a\v{s} Ra\v{c}ek and Petra Proch\'azkov\'a Schrumpfov\'a
GOLEM: distribution of Gene regulatOry eLEMents within the plant promoters
4 pages, 1 figure
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Motivation: The regulation of gene expression during tissue development is extremely complex. One of the key regulatory mechanisms of gene expression involves the recognition of regulatory motifs by various proteins in the promoter regions of many genes. Localisation of these motifs in proximity to the transcription start site (TSS) or translation start site (ATG) is critical for regulating the initiation and rate of transcription. The levels of transcription of individual genes, regulated by these motifs, can vary significantly in different tissues and developmental stages, especially during tightly regulated processes such as sexual reproduction. However, the precise localisation and visualisation of the regulatory motifs within gene promoters with respect to gene transcription in specific tissues can be challenging. Results: Here, we introduce a program called GOLEM (Gene regulatOry eLEMents) which enables users to precisely locate any motif of interest with respect to TSS or ATG within the relevant plant genomes across the plant Tree of Life (Marchantia, Physcomitrium, Amborella, Oryza, Zea, Solanum and Arabidopsis). The visualisation of the motifs is performed with respect to the transcript levels of particular genes in leaves and male reproductive tissues, and can be compared with genome-wide distribution regardless of the transcription level. Availability and implementation: GOLEM is freely available at https://golem.ncbr.muni.cz and its source codes are provided under the MIT licence at GitHub at https://github.com/sb-ncbr/golem.
[ { "created": "Mon, 23 Oct 2023 11:49:12 GMT", "version": "v1" } ]
2023-10-25
[ [ "Nevosád", "Lukáš", "" ], [ "Klodová", "Božena", "" ], [ "Honys", "David", "" ], [ "Svobodová", "Radka", "" ], [ "Raček", "Tomáš", "" ], [ "Schrumpfová", "Petra Procházková", "" ] ]
Motivation: The regulation of gene expression during tissue development is extremely complex. One of the key regulatory mechanisms of gene expression involves the recognition of regulatory motifs by various proteins in the promoter regions of many genes. Localisation of these motifs in proximity to the transcription start site (TSS) or translation start site (ATG) is critical for regulating the initiation and rate of transcription. The levels of transcription of individual genes, regulated by these motifs, can vary significantly in different tissues and developmental stages, especially during tightly regulated processes such as sexual reproduction. However, the precise localisation and visualisation of the regulatory motifs within gene promoters with respect to gene transcription in specific tissues can be challenging. Results: Here, we introduce a program called GOLEM (Gene regulatOry eLEMents) which enables users to precisely locate any motif of interest with respect to TSS or ATG within the relevant plant genomes across the plant Tree of Life (Marchantia, Physcomitrium, Amborella, Oryza, Zea, Solanum and Arabidopsis). The visualisation of the motifs is performed with respect to the transcript levels of particular genes in leaves and male reproductive tissues, and can be compared with genome-wide distribution regardless of the transcription level. Availability and implementation: GOLEM is freely available at https://golem.ncbr.muni.cz and its source codes are provided under the MIT licence at GitHub at https://github.com/sb-ncbr/golem.
2008.11718
Ville-Pekka Sepp\"a
Ville-Pekka Sepp\"a, Javier Gracia-Tabuenca, Anne Kotaniemi-Syrj\"anen, Kristiina Malmstr\"om, Anton Hult, Anna Pelkonen, Mika J. M\"akel\"a, Jari Viik, L. Pekka Malmberg
Expiratory variability index (EVI) is associated with asthma risk, wheeze and lung function in infants with recurrent respiratory symptoms
null
null
null
null
q-bio.QM physics.med-ph
http://creativecommons.org/licenses/by/4.0/
Recurrent respiratory symptoms are common in infants but the paucity of lung function tests suitable for routine use in infants is a widely acknowledged clinical problem. In this study we evaluated tidal breathing variability (expiratory variability index, EVI) measured at home during sleep using impedance pneumography (IP) as a marker of lower airway obstruction in 36 infants (mean age 12.8 [range 6-23] months) with recurrent respiratory symptoms. Lowered EVI was associated with lower lung function (VmaxFRC), higher asthma risk, and obstructive symptoms, but not with nasal congestion. EVI measured using IP is a potential technique for lung function testing in infants.
[ { "created": "Wed, 26 Aug 2020 10:05:16 GMT", "version": "v1" } ]
2020-08-28
[ [ "Seppä", "Ville-Pekka", "" ], [ "Gracia-Tabuenca", "Javier", "" ], [ "Kotaniemi-Syrjänen", "Anne", "" ], [ "Malmström", "Kristiina", "" ], [ "Hult", "Anton", "" ], [ "Pelkonen", "Anna", "" ], [ "Mäkelä", "Mika J.", "" ], [ "Viik", "Jari", "" ], [ "Malmberg", "L. Pekka", "" ] ]
Recurrent respiratory symptoms are common in infants but the paucity of lung function tests suitable for routine use in infants is a widely acknowledged clinical problem. In this study we evaluated tidal breathing variability (expiratory variability index, EVI) measured at home during sleep using impedance pneumography (IP) as a marker of lower airway obstruction in 36 infants (mean age 12.8 [range 6-23] months) with recurrent respiratory symptoms. Lowered EVI was associated with lower lung function (VmaxFRC), higher asthma risk, and obstructive symptoms, but not with nasal congestion. EVI measured using IP is a potential technique for lung function testing in infants.
2011.03344
Cathal Ryan Mr.
Cathal Ryan, Christophe Gu\'eret, Donagh Berry, Brian Mac Namee
Can We Detect Mastitis earlier than Farmers?
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this study was to build a modelling framework that would allow us to detect mastitis infections before they would normally be found by farmers, through the introduction of machine learning techniques. In the making of this we created two different modelling frameworks: one that works on the premise of detecting Sub Clinical mastitis infections at one Somatic Cell Count recording in advance, called SMA, and the other tries to detect both Sub Clinical mastitis infections as well as Clinical mastitis infections at any time the cow is milked, called AMA. We also introduce the idea of two different feature sets for our study; these represent different characteristics that should be taken into account when detecting infections: the idea of a cow differing from a farm mean, and also trends in the lactation. We reported that the results for SMA are better than those created by AMA for Sub Clinical infections, yet it has the significant disadvantage of only being able to classify Sub Clinical infections, due to how we recorded Sub Clinical infections as being any time a Somatic Cell Count measurement went above a certain threshold, whereas CM could appear at any stage of lactation. Thus in some cases the lower accuracy values for AMA might in fact be more beneficial to farmers.
[ { "created": "Thu, 5 Nov 2020 14:36:01 GMT", "version": "v1" } ]
2020-11-09
[ [ "Ryan", "Cathal", "" ], [ "Guéret", "Christophe", "" ], [ "Berry", "Donagh", "" ], [ "Mac Namee", "Brian", "" ] ]
The aim of this study was to build a modelling framework that would allow us to detect mastitis infections before they would normally be found by farmers, through the introduction of machine learning techniques. In the making of this we created two different modelling frameworks: one that works on the premise of detecting Sub Clinical mastitis infections at one Somatic Cell Count recording in advance, called SMA, and the other tries to detect both Sub Clinical mastitis infections as well as Clinical mastitis infections at any time the cow is milked, called AMA. We also introduce the idea of two different feature sets for our study; these represent different characteristics that should be taken into account when detecting infections: the idea of a cow differing from a farm mean, and also trends in the lactation. We reported that the results for SMA are better than those created by AMA for Sub Clinical infections, yet it has the significant disadvantage of only being able to classify Sub Clinical infections, due to how we recorded Sub Clinical infections as being any time a Somatic Cell Count measurement went above a certain threshold, whereas CM could appear at any stage of lactation. Thus in some cases the lower accuracy values for AMA might in fact be more beneficial to farmers.
2009.00403
Ferdinando Insalata
Ferdinando Insalata, Hanne Hoitzing, Juvid Aryaman and Nick S. Jones
Survival of the densest accounts for the expansion of mitochondrial mutations in ageing
48 pages, 8 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The expansion of deleted mitochondrial DNA (mtDNA) molecules has been linked to ageing, particularly in skeletal muscle fibres; its mechanism has remained unclear for three decades. Previous accounts assigned a replicative advantage to the deletions, but there is evidence that cells can, instead, selectively remove defective mtDNA. We present a spatial model that, without a replicative advantage, but instead through a combination of enhanced density for mutants and noise, produces a wave of expanding mutations with wave speed consistent with experimental data, unlike a standard model based on replicative advantage. We provide a formula that predicts that the wave speed drops with copy number, in agreement with experimental data. Crucially, our model yields travelling waves of mutants even if mutants are preferentially eliminated. Justified by this exemplar of how noise, density and spatial structure affect muscle ageing, we introduce the mechanism of stochastic survival of the densest, an alternative to replicative advantage, that may underpin other phenomena, like the evolution of altruism.
[ { "created": "Tue, 1 Sep 2020 13:13:33 GMT", "version": "v1" } ]
2020-09-02
[ [ "Insalata", "Ferdinando", "" ], [ "Hoitzing", "Hanne", "" ], [ "Aryaman", "Juvid", "" ], [ "Jones", "Nick S.", "" ] ]
The expansion of deleted mitochondrial DNA (mtDNA) molecules has been linked to ageing, particularly in skeletal muscle fibres; its mechanism has remained unclear for three decades. Previous accounts assigned a replicative advantage to the deletions, but there is evidence that cells can, instead, selectively remove defective mtDNA. We present a spatial model that, without a replicative advantage, but instead through a combination of enhanced density for mutants and noise, produces a wave of expanding mutations with wave speed consistent with experimental data, unlike a standard model based on replicative advantage. We provide a formula that predicts that the wave speed drops with copy number, in agreement with experimental data. Crucially, our model yields travelling waves of mutants even if mutants are preferentially eliminated. Justified by this exemplar of how noise, density and spatial structure affect muscle ageing, we introduce the mechanism of stochastic survival of the densest, an alternative to replicative advantage, that may underpin other phenomena, like the evolution of altruism.
1503.00999
Paul Moore
P.J. Moore
A hierarchical narrative framework for OCD
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper gives an explanatory framework for obsessive-compulsive disorder (OCD) based on a generative model of cognition. The framework is constructed using the new concept of a 'formal narrative' which is a sequence of cognitive states inferred from sense data. First we propose that human cognition uses a hierarchy of narratives to predict changes in the natural and social environment. Each layer in the hierarchy represents a distinct 'view of the world', but it also contributes to a global unitary perspective. Second, the generative models used for cognitive inference can create new narratives from those states already experienced by an individual. We hypothesise that when a threat is recognised, narratives are generated as a cognitive model of possible threat scenarios. Using this framework, we suggest that OCD arises from a dysfunction in sub-surface levels of inference while the global unitary perspective remains intact. The failure of inference is felt as the external world being 'not just right', and its automatic correction by the perceptual system is experienced as compulsion. Ordering and symmetry obsessions are the effects of the perceptual system trying to achieve precise inference. Checking behaviour arises because the security system attempts to finesse inference as part of its protection behaviour. Similarly, fear of harm and distressing thoughts occur because the failure of inference results in an indistinct view of the past or the future. A wide variety of symptoms in OCD is thus explained by a single dysfunction.
[ { "created": "Tue, 3 Mar 2015 16:33:04 GMT", "version": "v1" } ]
2015-03-04
[ [ "Moore", "P. J.", "" ] ]
This paper gives an explanatory framework for obsessive-compulsive disorder (OCD) based on a generative model of cognition. The framework is constructed using the new concept of a 'formal narrative' which is a sequence of cognitive states inferred from sense data. First we propose that human cognition uses a hierarchy of narratives to predict changes in the natural and social environment. Each layer in the hierarchy represents a distinct 'view of the world', but it also contributes to a global unitary perspective. Second, the generative models used for cognitive inference can create new narratives from those states already experienced by an individual. We hypothesise that when a threat is recognised, narratives are generated as a cognitive model of possible threat scenarios. Using this framework, we suggest that OCD arises from a dysfunction in sub-surface levels of inference while the global unitary perspective remains intact. The failure of inference is felt as the external world being 'not just right', and its automatic correction by the perceptual system is experienced as compulsion. Ordering and symmetry obsessions are the effects of the perceptual system trying to achieve precise inference. Checking behaviour arises because the security system attempts to finesse inference as part of its protection behaviour. Similarly, fear of harm and distressing thoughts occur because the failure of inference results in an indistinct view of the past or the future. A wide variety of symptoms in OCD is thus explained by a single dysfunction.
1509.02507
Gennadi Glinsky
Gennadi Glinsky
Phenotypic divergence of Homo sapiens is driven by the evolution of human-specific genomic regulatory networks via two mechanistically distinct pathways of creation of divergent regulatory DNA sequences
13 pages; 15 references; 12 tables; 1 figure
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thousands of candidate human-specific regulatory sequences (HSRS) have been identified, supporting the hypothesis that unique to human phenotypes result from human-specific alterations of genomic regulatory networks. Here, conservation patterns analysis of 18,364 candidate HSRS was carried out based on definition of the sequence conservation threshold as the minimum ratio of bases that must remap of 1.00. A total of 5,535 candidate HSRS were identified that are: i) highly conserved in Great Apes; ii) evolved by the exaptation of highly conserved ancestral DNA; iii) defined by either the acceleration of mutation rates on the human lineage or the functional divergence from nonhuman primates. The exaptation of highly conserved ancestral DNA pathway seems mechanistically distinct from the evolution of regulatory DNA segments driven by the species-specific expansion of transposable elements. Present analysis supports the idea that phenotypic divergence of Homo sapiens is driven by the evolution of human-specific genomic regulatory networks via two mechanistically distinct pathways of creation of divergent sequences of regulatory DNA: i) exaptation of the highly conserved ancestral regulatory DNA segments; ii) human-specific insertions of transposable elements.
[ { "created": "Tue, 8 Sep 2015 19:27:52 GMT", "version": "v1" }, { "created": "Fri, 30 Oct 2015 18:35:56 GMT", "version": "v2" } ]
2015-11-02
[ [ "Glinsky", "Gennadi", "" ] ]
Thousands of candidate human-specific regulatory sequences (HSRS) have been identified, supporting the hypothesis that unique to human phenotypes result from human-specific alterations of genomic regulatory networks. Here, conservation patterns analysis of 18,364 candidate HSRS was carried out based on definition of the sequence conservation threshold as the minimum ratio of bases that must remap of 1.00. A total of 5,535 candidate HSRS were identified that are: i) highly conserved in Great Apes; ii) evolved by the exaptation of highly conserved ancestral DNA; iii) defined by either the acceleration of mutation rates on the human lineage or the functional divergence from nonhuman primates. The exaptation of highly conserved ancestral DNA pathway seems mechanistically distinct from the evolution of regulatory DNA segments driven by the species-specific expansion of transposable elements. Present analysis supports the idea that phenotypic divergence of Homo sapiens is driven by the evolution of human-specific genomic regulatory networks via two mechanistically distinct pathways of creation of divergent sequences of regulatory DNA: i) exaptation of the highly conserved ancestral regulatory DNA segments; ii) human-specific insertions of transposable elements.
1904.10399
Lili Su
Lili Su and Chia-Jung Chang and Nancy Lynch
Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits
null
null
null
null
q-bio.NC cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based $k$--WTA model wherein $n$ randomly generated input spike trains compete with each other based on their underlying statistics, and $k$ winners are supposed to be selected. We slot the time evenly with each time slot of length $1\, ms$, and model the $n$ input spike trains as $n$ independent Bernoulli processes. The Bernoulli process is a good approximation of the popular Poisson process but is more biologically relevant as it takes the refractory periods into account. Due to the randomness in the input spike trains, no circuits can guarantee to successfully select the correct winners in finite time. We focus on analytically characterizing the minimal amount of time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the decision time. We show that to have a (minimax) decision error $\le \delta$ (where $\delta \in (0,1)$), the computation time of any WTA circuit is at least \[ ((1-\delta) \log(k(n -k)+1) -1)T_{\mathcal{R}}, \] where $T_{\mathcal{R}}$ is a difficulty parameter of a WTA task that is independent of $\delta$, $n$, and $k$. We then design a simple WTA circuit whose decision time is \[ O( \log\frac{1}{\delta}+\log k(n-k))T_{\mathcal{R}}). \] It turns out that for any fixed $\delta \in (0,1)$, this decision time is order-optimal in terms of its scaling in $n$, $k$, and $T_{\mathcal{R}}$.
[ { "created": "Sun, 21 Apr 2019 02:45:09 GMT", "version": "v1" } ]
2019-04-24
[ [ "Su", "Lili", "" ], [ "Chang", "Chia-Jung", "" ], [ "Lynch", "Nancy", "" ] ]
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based $k$--WTA model wherein $n$ randomly generated input spike trains compete with each other based on their underlying statistics, and $k$ winners are supposed to be selected. We slot the time evenly with each time slot of length $1\, ms$, and model the $n$ input spike trains as $n$ independent Bernoulli processes. The Bernoulli process is a good approximation of the popular Poisson process but is more biologically relevant as it takes the refractory periods into account. Due to the randomness in the input spike trains, no circuits can guarantee to successfully select the correct winners in finite time. We focus on analytically characterizing the minimal amount of time needed so that a target minimax decision accuracy (success probability) can be reached. We first derive an information-theoretic lower bound on the decision time. We show that to have a (minimax) decision error $\le \delta$ (where $\delta \in (0,1)$), the computation time of any WTA circuit is at least \[ ((1-\delta) \log(k(n -k)+1) -1)T_{\mathcal{R}}, \] where $T_{\mathcal{R}}$ is a difficulty parameter of a WTA task that is independent of $\delta$, $n$, and $k$. We then design a simple WTA circuit whose decision time is \[ O( \log\frac{1}{\delta}+\log k(n-k))T_{\mathcal{R}}). \] It turns out that for any fixed $\delta \in (0,1)$, this decision time is order-optimal in terms of its scaling in $n$, $k$, and $T_{\mathcal{R}}$.
2005.13618
Martin Frasch
Aude Castel, Yael Frank, John Feltner, Floyd Karp, Catherine Albright, Martin G. Frasch
Monitoring fetal electroencephalogram intrapartum: a systematic literature review
Prospero number: CRD42020147474. 55 pages, 8 figures
Front. Pediatr. 2020
10.3389/fped.2020.00584
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: Studies about the feasibility of monitoring fetal electroencephalogram (fEEG) during labor began in the early 1940s. By the 1970s, clear diagnostic and prognostic benefits from intrapartum fEEG monitoring were reported, but until today, this monitoring technology has remained a curiosity. Objectives: Our goal was to review the studies reporting the use of fEEG including the insights from interpreting fEEG patterns in response to uterine contractions during labor. We also used the most relevant information gathered from clinical studies to provide recommendations for enrollment in the unique environment of a labor and delivery unit. Data sources: PubMed. Eligibility criteria: The search strategy was: ("fetus"[MeSH Terms] OR "fetus"[All Fields] OR "fetal"[All Fields]) AND ("electroencephalography"[MeSH Terms] OR "electroencephalography"[All Fields] OR "eeg"[All Fields]) AND (Clinical Trial[ptyp] AND "humans"[MeSH Terms]). Because the landscape of fEEG research has been international, we included studies in English, French, German, and Russian. Results: From 256 screened studies, 40 studies were ultimately included in the qualitative analysis. We summarize and report features of fEEG which clearly show its potential to act as a direct biomarker of fetal brain health during delivery, ancillary to fetal heart rate monitoring. However, clinical prospective studies are needed to further establish the utility of fEEG monitoring intrapartum. We identified clinical study designs likely to succeed in bringing this intrapartum monitoring modality to the bedside. Limitations: Despite 80 years of studies in clinical cohorts and animal models, the field of research on intrapartum fEEG is still nascent and shows great promise to augment the currently practiced electronic fetal monitoring.
[ { "created": "Wed, 27 May 2020 19:58:31 GMT", "version": "v1" }, { "created": "Tue, 23 Jun 2020 07:22:11 GMT", "version": "v2" } ]
2020-08-11
[ [ "Castel", "Aude", "" ], [ "Frank", "Yael", "" ], [ "Feltner", "John", "" ], [ "Karp", "Floyd", "" ], [ "Albright", "Catherine", "" ], [ "Frasch", "Martin G.", "" ] ]
Background: Studies about the feasibility of monitoring fetal electroencephalogram (fEEG) during labor began in the early 1940s. By the 1970s, clear diagnostic and prognostic benefits from intrapartum fEEG monitoring were reported, but until today, this monitoring technology has remained a curiosity. Objectives: Our goal was to review the studies reporting the use of fEEG including the insights from interpreting fEEG patterns in response to uterine contractions during labor. We also used the most relevant information gathered from clinical studies to provide recommendations for enrollment in the unique environment of a labor and delivery unit. Data sources: PubMed. Eligibility criteria: The search strategy was: ("fetus"[MeSH Terms] OR "fetus"[All Fields] OR "fetal"[All Fields]) AND ("electroencephalography"[MeSH Terms] OR "electroencephalography"[All Fields] OR "eeg"[All Fields]) AND (Clinical Trial[ptyp] AND "humans"[MeSH Terms]). Because the landscape of fEEG research has been international, we included studies in English, French, German, and Russian. Results: From 256 screened studies, 40 studies were ultimately included in the qualitative analysis. We summarize and report features of fEEG which clearly show its potential to act as a direct biomarker of fetal brain health during delivery, ancillary to fetal heart rate monitoring. However, clinical prospective studies are needed to further establish the utility of fEEG monitoring intrapartum. We identified clinical study designs likely to succeed in bringing this intrapartum monitoring modality to the bedside. Limitations: Despite 80 years of studies in clinical cohorts and animal models, the field of research on intrapartum fEEG is still nascent and shows great promise to augment the currently practiced electronic fetal monitoring.
0707.0329
Mareike Fischer
Mareike Fischer, Mike Steel
Expected Anomalies in the Fossil Record
null
Fischer, M. and Steel, M. (2008). Expected anomalies in the fossil record. Evolutionary bioinformatics online 4: 61--67
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of intermediates in the fossil record has been frequently discussed ever since Darwin. The extent of `gaps' (missing transitional stages) has been used to argue against gradual evolution from a common ancestor. Traditionally, gaps have often been explained by the improbability of fossilization and the discontinuous selection of found fossils. Here we take an analytical approach and demonstrate why, under certain sampling conditions, we may not expect intermediates to be found. Using a simple null model, we show mathematically that the question of whether a taxon sampled from some time in the past is likely to be morphologically intermediate to other samples (dated earlier and later) depends on the shape and dimensions of the underlying phylogenetic tree that connects the taxa, and the times from which the fossils are sampled.
[ { "created": "Tue, 3 Jul 2007 01:24:43 GMT", "version": "v1" }, { "created": "Tue, 26 Aug 2008 23:50:22 GMT", "version": "v2" } ]
2008-08-27
[ [ "Fischer", "Mareike", "" ], [ "Steel", "Mike", "" ] ]
The problem of intermediates in the fossil record has been frequently discussed ever since Darwin. The extent of `gaps' (missing transitional stages) has been used to argue against gradual evolution from a common ancestor. Traditionally, gaps have often been explained by the improbability of fossilization and the discontinuous selection of found fossils. Here we take an analytical approach and demonstrate why, under certain sampling conditions, we may not expect intermediates to be found. Using a simple null model, we show mathematically that the question of whether a taxon sampled from some time in the past is likely to be morphologically intermediate to other samples (dated earlier and later) depends on the shape and dimensions of the underlying phylogenetic tree that connects the taxa, and the times from which the fossils are sampled.
1512.07459
Steven Kelk
Steven Kelk, Mareike Fischer, Vincent Moulton, Taoyang Wu
Reduction rules for the maximum parsimony distance on phylogenetic trees
Added material on graph minors, MSOL and treewidth
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In phylogenetics, distances are often used to measure the incongruence between a pair of phylogenetic trees that are reconstructed by different methods or using different regions of the genome. Motivated by the maximum parsimony principle in tree inference, we recently introduced the maximum parsimony (MP) distance, which enjoys various attractive properties due to its connection with several other well-known tree distances, such as TBR and SPR. Here we show that computing the MP distance between two trees, an NP-hard problem in general, is fixed parameter tractable in terms of the TBR distance between the tree pair. Our approach is based on two reduction rules--the chain reduction and the subtree reduction--that are widely used in computing TBR and SPR distances. More precisely, we show that reducing chains to length 4 (but not shorter) preserves the MP distance. In addition, we describe a generalization of the subtree reduction which allows the pendant subtrees to be rooted in different places, and show that this still preserves the MP distance. On a slightly different note we also show that Monadic Second Order Logic (MSOL), posited over an auxiliary graph structure known as the display graph (obtained by merging the two trees at their leaves), can be used to obtain an alternative proof that computation of MP distance is fixed parameter tractable in terms of TBR-distance. We conclude with an extended discussion in which we focus on similarities and differences between MP distance and TBR distance and present a number of open problems. One particularly intriguing question, emerging from the MSOL formulation, is whether two trees with bounded MP distance induce display graphs of bounded treewidth.
[ { "created": "Wed, 23 Dec 2015 13:07:16 GMT", "version": "v1" }, { "created": "Sat, 30 Apr 2016 09:22:27 GMT", "version": "v2" }, { "created": "Thu, 7 Jul 2016 09:28:18 GMT", "version": "v3" } ]
2016-07-08
[ [ "Kelk", "Steven", "" ], [ "Fischer", "Mareike", "" ], [ "Moulton", "Vincent", "" ], [ "Wu", "Taoyang", "" ] ]
In phylogenetics, distances are often used to measure the incongruence between a pair of phylogenetic trees that are reconstructed by different methods or using different regions of the genome. Motivated by the maximum parsimony principle in tree inference, we recently introduced the maximum parsimony (MP) distance, which enjoys various attractive properties due to its connection with several other well-known tree distances, such as TBR and SPR. Here we show that computing the MP distance between two trees, an NP-hard problem in general, is fixed parameter tractable in terms of the TBR distance between the tree pair. Our approach is based on two reduction rules--the chain reduction and the subtree reduction--that are widely used in computing TBR and SPR distances. More precisely, we show that reducing chains to length 4 (but not shorter) preserves the MP distance. In addition, we describe a generalization of the subtree reduction which allows the pendant subtrees to be rooted in different places, and show that this still preserves the MP distance. On a slightly different note we also show that Monadic Second Order Logic (MSOL), posited over an auxiliary graph structure known as the display graph (obtained by merging the two trees at their leaves), can be used to obtain an alternative proof that computation of MP distance is fixed parameter tractable in terms of TBR-distance. We conclude with an extended discussion in which we focus on similarities and differences between MP distance and TBR distance and present a number of open problems. One particularly intriguing question, emerging from the MSOL formulation, is whether two trees with bounded MP distance induce display graphs of bounded treewidth.
1709.01076
Daniele Ramazzotti
Daniele Ramazzotti and Alex Graudenzi and Luca De Sano and Marco Antoniotti and Giulio Caravagna
Learning mutational graphs of individual tumour evolution from single-cell and multi-region sequencing data
null
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background. A large number of algorithms is being developed to reconstruct evolutionary models of individual tumours from genome sequencing data. Most methods can analyze multiple samples collected either through bulk multi-region sequencing experiments or the sequencing of individual cancer cells. However, the same method can rarely support both data types. Results. We introduce TRaIT, a computational framework to infer mutational graphs that model the accumulation of multiple types of somatic alterations driving tumour evolution. Compared to other tools, TRaIT supports multi-region and single-cell sequencing data within the same statistical framework, and delivers expressive models that capture many complex evolutionary phenomena. TRaIT improves accuracy, robustness to data-specific errors and computational complexity compared to competing methods. Conclusions. We show that the application of TRaIT to single-cell and multi-region cancer datasets can produce accurate and reliable models of single-tumour evolution, quantify the extent of intra-tumour heterogeneity and generate new testable experimental hypotheses.
[ { "created": "Mon, 4 Sep 2017 15:35:11 GMT", "version": "v1" }, { "created": "Fri, 22 Mar 2019 23:42:19 GMT", "version": "v2" } ]
2019-03-26
[ [ "Ramazzotti", "Daniele", "" ], [ "Graudenzi", "Alex", "" ], [ "De Sano", "Luca", "" ], [ "Antoniotti", "Marco", "" ], [ "Caravagna", "Giulio", "" ] ]
Background. A large number of algorithms is being developed to reconstruct evolutionary models of individual tumours from genome sequencing data. Most methods can analyze multiple samples collected either through bulk multi-region sequencing experiments or the sequencing of individual cancer cells. However, the same method can rarely support both data types. Results. We introduce TRaIT, a computational framework to infer mutational graphs that model the accumulation of multiple types of somatic alterations driving tumour evolution. Compared to other tools, TRaIT supports multi-region and single-cell sequencing data within the same statistical framework, and delivers expressive models that capture many complex evolutionary phenomena. TRaIT improves accuracy, robustness to data-specific errors and computational complexity compared to competing methods. Conclusions. We show that the application of TRaIT to single-cell and multi-region cancer datasets can produce accurate and reliable models of single-tumour evolution, quantify the extent of intra-tumour heterogeneity and generate new testable experimental hypotheses.
1812.02870
Liane Gabora
Liane Gabora and Mike Unrau
The Role of Engagement, Honing, and Mindfulness in Creativity
22 pages;
in Mullen, C. (Ed.) Creativity under duress in education? Resistive theories, practices, and actions, Creativity Theory and Action in Education Vol. 3.(2019)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As both our external world and inner worlds become more complex, we are faced with more novel challenges, hardships, and duress. Creative thinking is needed to provide fresh perspectives and solve new problems. Because creativity can be conducive to accessing and reliving traumatic memories, emotional scars may be exacerbated by creative practices before these are transformed and released. Therefore, in preparing our youth to thrive in an increasingly unpredictable world, it could be helpful to cultivate in them an understanding of the creative process and its relationship to hardship, as well as tools and techniques for fostering not just creativity but self-awareness and mindfulness. This chapter is a review of theories of creativity through the lens of their capacity to account for the relationship between creativity and hardship, as well as the therapeutic effects of creativity. We also review theories and research on aspects of mindfulness attending to potential therapeutic effects of creativity. Drawing upon the creativity and mindfulness literatures, we sketch out what an introductory 'creativity and mindfulness' module might look like as part of an educational curriculum designed to address the unique challenges of the 21st Century.
[ { "created": "Fri, 7 Dec 2018 01:55:30 GMT", "version": "v1" }, { "created": "Wed, 13 Mar 2019 21:31:36 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2019 21:18:11 GMT", "version": "v3" } ]
2019-07-09
[ [ "Gabora", "Liane", "" ], [ "Unrau", "Mike", "" ] ]
As both our external world and inner worlds become more complex, we are faced with more novel challenges, hardships, and duress. Creative thinking is needed to provide fresh perspectives and solve new problems. Because creativity can be conducive to accessing and reliving traumatic memories, emotional scars may be exacerbated by creative practices before these are transformed and released. Therefore, in preparing our youth to thrive in an increasingly unpredictable world, it could be helpful to cultivate in them an understanding of the creative process and its relationship to hardship, as well as tools and techniques for fostering not just creativity but self-awareness and mindfulness. This chapter is a review of theories of creativity through the lens of their capacity to account for the relationship between creativity and hardship, as well as the therapeutic effects of creativity. We also review theories and research on aspects of mindfulness attending to potential therapeutic effects of creativity. Drawing upon the creativity and mindfulness literatures, we sketch out what an introductory 'creativity and mindfulness' module might look like as part of an educational curriculum designed to address the unique challenges of the 21st Century.
1110.2966
Irina Manina
I.V. Manina, N.M. Peretolchina, N.S. Saprikina, A.M. Kozlov, I.N. Mikhaylova, A.Yu.Barishnikov
Experimental Researches of Cutaneous Melanoma Immunotherapy by Antitumor Cell-Whole GM-CSF-Producing Vaccines
8 pages, 4 figures; ISSN 1726-9784. Russian Journal of Biotherapy. /2010.-#3- P.47-50
Russian Journal of Biotherapy. 3/2010.-#3- P.47-50
null
null
q-bio.OT
http://creativecommons.org/licenses/publicdomain/
Various approaches to increasing the efficiency of antitumor therapy by combining vaccinotherapy, chemotherapy and surgical excision of primary tumor nodes, as well as comparative analyses of therapeutic and preventive applications of antitumoral vaccines, were carried out in an experimental melanoma model. It was postulated that preventive vaccination is able to reduce tumor incidence by 70%. The combination of vaccinotherapy and surgical treatment of melanoma increases the antimetastatic activity of vaccination by 43%. We conclude that the combined therapy would lead to a more effective antitumor response.
[ { "created": "Wed, 12 Oct 2011 18:32:11 GMT", "version": "v1" } ]
2011-10-14
[ [ "Manina", "I. V.", "" ], [ "Peretolchina", "N. M.", "" ], [ "Saprikina", "N. S.", "" ], [ "Kozlov", "A. M.", "" ], [ "Mikhaylova", "I. N.", "" ], [ "Barishnikov", "A. Yu.", "" ] ]
Various approaches to increasing the efficiency of antitumor therapy by combining vaccinotherapy, chemotherapy and surgical excision of primary tumor nodes, as well as comparative analyses of therapeutic and preventive applications of antitumoral vaccines, were carried out in an experimental melanoma model. It was postulated that preventive vaccination is able to reduce tumor incidence by 70%. The combination of vaccinotherapy and surgical treatment of melanoma increases the antimetastatic activity of vaccination by 43%. We conclude that the combined therapy would lead to a more effective antitumor response.
2405.10911
Riccardo Ravasio
Riccardo Ravasio, Kabir Husain, Constantine G. Evans, Rob Phillips, Marco Ribezzi, Jack W. Szostak, Arvind Murugan
A minimal scenario for the origin of non-equilibrium order
null
null
null
null
q-bio.PE cond-mat.stat-mech q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Extant life contains numerous non-equilibrium mechanisms to create order not achievable at equilibrium; it is generally assumed that these mechanisms evolved because the resulting order was sufficiently beneficial to overcome associated costs of time and energy. Here, we identify a broad range of conditions under which non-equilibrium order-creating mechanisms will evolve as an inevitable consequence of self-replication, even if the order is not directly functional. We show that models of polymerases, when expanded to include known stalling effects, can evolve kinetic proofreading through selection for fast replication alone, consistent with data from recent mutational screens. Similarly, replication contingent on fast self-assembly can select for non-equilibrium instabilities and result in more ordered structures without any direct selection for order. We abstract these results into a framework that predicts that self-replication intrinsically amplifies dissipative order-enhancing mechanisms if the distribution of replication times is wide enough. Our work suggests the intriguing possibility that non-equilibrium order can arise more easily than assumed, even before that order is directly functional, with consequences impacting such diverse phenomena as the evolution of mutation rates, kinetic traps in self-assembly, and the origin of life.
[ { "created": "Fri, 17 May 2024 16:59:19 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 03:50:04 GMT", "version": "v2" } ]
2024-07-25
[ [ "Ravasio", "Riccardo", "" ], [ "Husain", "Kabir", "" ], [ "Evans", "Constantine G.", "" ], [ "Phillips", "Rob", "" ], [ "Ribezzi", "Marco", "" ], [ "Szostak", "Jack W.", "" ], [ "Murugan", "Arvind", "" ] ]
Extant life contains numerous non-equilibrium mechanisms to create order not achievable at equilibrium; it is generally assumed that these mechanisms evolved because the resulting order was sufficiently beneficial to overcome associated costs of time and energy. Here, we identify a broad range of conditions under which non-equilibrium order-creating mechanisms will evolve as an inevitable consequence of self-replication, even if the order is not directly functional. We show that models of polymerases, when expanded to include known stalling effects, can evolve kinetic proofreading through selection for fast replication alone, consistent with data from recent mutational screens. Similarly, replication contingent on fast self-assembly can select for non-equilibrium instabilities and result in more ordered structures without any direct selection for order. We abstract these results into a framework that predicts that self-replication intrinsically amplifies dissipative order-enhancing mechanisms if the distribution of replication times is wide enough. Our work suggests the intriguing possibility that non-equilibrium order can arise more easily than assumed, even before that order is directly functional, with consequences impacting such diverse phenomena as the evolution of mutation rates, kinetic traps in self-assembly, and the origin of life.
1909.05660
Juan Miguel Valverde
Juan Miguel Valverde, Vandad Imani, John D. Lewis, Jussi Tohka
Predicting intelligence based on cortical WM/GM contrast, cortical thickness and volumetry
Submission to the ABCD Neurocognitive Prediction Challenge at MICCAI 2019
null
null
null
q-bio.NC cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a four-layer fully-connected neural network (FNN) for predicting fluid intelligence scores from T1-weighted MR images for the ABCD-challenge. In addition to the volumes of brain structures, the FNN uses cortical WM/GM contrast and cortical thickness at 78 cortical regions. These last two measurements were derived from the T1-weighted MR images using cortical surfaces produced by the CIVET pipeline. The age and gender of the subjects and the scanner manufacturer are also used as features for the learning algorithm. This yielded 283 features provided to the FNN with two hidden layers of 20 and 15 nodes. The method was applied to the data from the ABCD study. Trained with a training set of 3736 subjects, the proposed method achieved a MSE of 71.596 and a correlation of 0.151 in the validation set of 415 subjects. For the final submission, the model was trained with 3568 subjects and it achieved a MSE of 94.0270 in the test set comprised of 4383 subjects.
[ { "created": "Mon, 9 Sep 2019 11:53:19 GMT", "version": "v1" } ]
2019-09-13
[ [ "Valverde", "Juan Miguel", "" ], [ "Imani", "Vandad", "" ], [ "Lewis", "John D.", "" ], [ "Tohka", "Jussi", "" ] ]
We propose a four-layer fully-connected neural network (FNN) for predicting fluid intelligence scores from T1-weighted MR images for the ABCD-challenge. In addition to the volumes of brain structures, the FNN uses cortical WM/GM contrast and cortical thickness at 78 cortical regions. These last two measurements were derived from the T1-weighted MR images using cortical surfaces produced by the CIVET pipeline. The age and gender of the subjects and the scanner manufacturer are also used as features for the learning algorithm. This yielded 283 features provided to the FNN with two hidden layers of 20 and 15 nodes. The method was applied to the data from the ABCD study. Trained with a training set of 3736 subjects, the proposed method achieved a MSE of 71.596 and a correlation of 0.151 in the validation set of 415 subjects. For the final submission, the model was trained with 3568 subjects and it achieved a MSE of 94.0270 in the test set comprised of 4383 subjects.
1712.04519
David Cowburn
Samuel Sparks, Deniz B. Temel, Michael P. Rout, and David Cowburn
Deciphering the 'fuzzy' interaction of FG nucleoporins and transport factors using SANS
Minor revisions and reformatting
Structure 2018 26 477-84
10.1016/j.str.2018.01.010
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The largely intrinsically disordered phenylalanine-glycine-rich nucleoporins (FG Nups) underlie a selectivity mechanism, which enables the rapid translocation of transport factors (TFs) through the nuclear pore complexes (NPCs). Conflicting models of NPC transport have assumed that FG Nups undergo different conformational transitions upon interacting with TFs. To selectively characterize conformational changes in FG Nups induced by TFs we performed small-angle neutron scattering (SANS) with contrast matching. Conformational ensembles derived from SANS data indicate that an increase in the overall size of FG Nups is associated with TF interaction. Moreover, the organization of the FG motif in the interacting state is consistent with prior experimental analyses defining that FG motifs undergo conformational restriction upon interacting with TFs. These results provide structural insights into a highly dynamic interaction and illustrate how functional disorder imparts rapid and selective FG Nup / TF interactions.
[ { "created": "Tue, 12 Dec 2017 21:04:30 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2018 21:31:29 GMT", "version": "v2" } ]
2019-03-14
[ [ "Sparks", "Samuel", "" ], [ "Temel", "Deniz B.", "" ], [ "Rout", "Michael P.", "" ], [ "Cowburn", "David", "" ] ]
The largely intrinsically disordered phenylalanine-glycine-rich nucleoporins (FG Nups) underlie a selectivity mechanism, which enables the rapid translocation of transport factors (TFs) through the nuclear pore complexes (NPCs). Conflicting models of NPC transport have assumed that FG Nups undergo different conformational transitions upon interacting with TFs. To selectively characterize conformational changes in FG Nups induced by TFs we performed small-angle neutron scattering (SANS) with contrast matching. Conformational ensembles derived from SANS data indicate that an increase in the overall size of FG Nups is associated with TF interaction. Moreover, the organization of the FG motif in the interacting state is consistent with prior experimental analyses defining that FG motifs undergo conformational restriction upon interacting with TFs. These results provide structural insights into a highly dynamic interaction and illustrate how functional disorder imparts rapid and selective FG Nup / TF interactions.
q-bio/0401034
Maximilian Schlosshauer
Maximilian Schlosshauer and David Baker
Realistic protein-protein association rates from a simple diffusional model neglecting long-range interactions, free energy barriers, and landscape ruggedness
9 pages, 5 figures, 1 table. One figure and a few comments added for clarification
Protein Science (13), 1660-1669 (2004)
null
null
q-bio.BM
null
We develop a simple but rigorous model of protein-protein association kinetics based on diffusional association on free energy landscapes obtained by sampling configurations within and surrounding the native complex binding funnels. Guided by results obtained on exactly solvable model problems, we transform the problem of diffusion in a potential into free diffusion in the presence of an absorbing zone spanning the entrance to the binding funnel. The free diffusion problem is solved using a recently derived analytic expression for the rate of association of asymmetrically oriented molecules. Despite the required high steric specificity and the absence of long-range attractive interactions, the computed rates are typically on the order of 10^4-10^6 M-1 s-1, several orders of magnitude higher than rates obtained using a purely probabilistic model in which the association rate for free diffusion of uniformly reactive molecules is multiplied by the probability of a correct alignment of the two partners in a random collision. As the association rates of many protein-protein complexes are also in the 10^5-10^6 M-1 s-1 range, our results suggest that free energy barriers arising from desolvation and/or side-chain freezing during complex formation or increased ruggedness within the binding funnel, which are completely neglected in our simple diffusional model, do not contribute significantly to the dynamics of protein-protein association. The transparent physical interpretation of our approach that computes association rates directly from the size and geometry of protein-protein binding funnels makes it a useful complement to Brownian dynamics simulations.
[ { "created": "Mon, 26 Jan 2004 21:11:21 GMT", "version": "v1" }, { "created": "Mon, 19 Jul 2004 22:16:54 GMT", "version": "v2" } ]
2007-05-23
[ [ "Schlosshauer", "Maximilian", "" ], [ "Baker", "David", "" ] ]
We develop a simple but rigorous model of protein-protein association kinetics based on diffusional association on free energy landscapes obtained by sampling configurations within and surrounding the native complex binding funnels. Guided by results obtained on exactly solvable model problems, we transform the problem of diffusion in a potential into free diffusion in the presence of an absorbing zone spanning the entrance to the binding funnel. The free diffusion problem is solved using a recently derived analytic expression for the rate of association of asymmetrically oriented molecules. Despite the required high steric specificity and the absence of long-range attractive interactions, the computed rates are typically on the order of 10^4-10^6 M-1 s-1, several orders of magnitude higher than rates obtained using a purely probabilistic model in which the association rate for free diffusion of uniformly reactive molecules is multiplied by the probability of a correct alignment of the two partners in a random collision. As the association rates of many protein-protein complexes are also in the 10^5-10^6 M-1 s-1 range, our results suggest that free energy barriers arising from desolvation and/or side-chain freezing during complex formation or increased ruggedness within the binding funnel, which are completely neglected in our simple diffusional model, do not contribute significantly to the dynamics of protein-protein association. The transparent physical interpretation of our approach that computes association rates directly from the size and geometry of protein-protein binding funnels makes it a useful complement to Brownian dynamics simulations.
1912.09205
Viktor Stojkoski MSc
Viktor Stojkoski, Marko Karbevski, Zoran Utkovski, Lasko Basnarkov, Ljupco Kocarev
Evolution of cooperation in networked heterogeneous fluctuating environments
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fluctuating environments are situations where the spatio-temporal stochasticity plays a significant role in the evolutionary dynamics. The study of the evolution of cooperation in these environments typically assumes a homogeneous, well mixed population, whose constituents are endowed with identical capabilities. In this paper, we generalize these results by developing a systematic study for the cooperation dynamics in fluctuating environments under the consideration of structured, heterogeneous populations with individual entities subjected to general behavioral rules. Considering complex network topologies, and a behavioral rule based on generalized reciprocity, we perform a detailed analysis of the effect of the underlying interaction structure on the evolutionary stability of cooperation. We find that, in the presence of environmental fluctuations, the cooperation dynamics can lead to the creation of multiple network components, each with distinct evolutionary properties. This is paralleled to the freezing state in the Random Energy Model. We utilize this result to examine the applicability of our generalized reciprocity behavioral rule in a variety of settings. We thereby show that the introduced rule leads to steady state cooperative behavior that is always greater than or equal to the one predicted by the evolutionary stability analysis of unconditional cooperation. As a consequence, the implementation of our results may go beyond explaining the evolution of cooperation. In particular, they can be directly applied in domains that deal with the development of artificial systems able to adequately mimic reality, such as reinforcement learning.
[ { "created": "Thu, 19 Dec 2019 14:07:09 GMT", "version": "v1" }, { "created": "Wed, 15 Apr 2020 20:09:07 GMT", "version": "v2" }, { "created": "Tue, 16 Jun 2020 13:19:37 GMT", "version": "v3" }, { "created": "Tue, 9 Mar 2021 16:19:44 GMT", "version": "v4" } ]
2021-03-10
[ [ "Stojkoski", "Viktor", "" ], [ "Karbevski", "Marko", "" ], [ "Utkovski", "Zoran", "" ], [ "Basnarkov", "Lasko", "" ], [ "Kocarev", "Ljupco", "" ] ]
Fluctuating environments are situations where the spatio-temporal stochasticity plays a significant role in the evolutionary dynamics. The study of the evolution of cooperation in these environments typically assumes a homogeneous, well mixed population, whose constituents are endowed with identical capabilities. In this paper, we generalize these results by developing a systematic study for the cooperation dynamics in fluctuating environments under the consideration of structured, heterogeneous populations with individual entities subjected to general behavioral rules. Considering complex network topologies, and a behavioral rule based on generalized reciprocity, we perform a detailed analysis of the effect of the underlying interaction structure on the evolutionary stability of cooperation. We find that, in the presence of environmental fluctuations, the cooperation dynamics can lead to the creation of multiple network components, each with distinct evolutionary properties. This is paralleled to the freezing state in the Random Energy Model. We utilize this result to examine the applicability of our generalized reciprocity behavioral rule in a variety of settings. We thereby show that the introduced rule leads to steady state cooperative behavior that is always greater than or equal to the one predicted by the evolutionary stability analysis of unconditional cooperation. As a consequence, the implementation of our results may go beyond explaining the evolution of cooperation. In particular, they can be directly applied in domains that deal with the development of artificial systems able to adequately mimic reality, such as reinforcement learning.
2002.10804
Yangsong Zhang
Cunbo Li, Peiyang Li, Yangsong Zhang, Ning Li, Yajing Si, Fali Li, Dezhong Yao and Peng Xu
Hierarchical emotion-recognition framework based on discriminative brain neural network topology and ensemble co-decision strategy
null
null
null
null
q-bio.NC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain neural networks characterize various information propagation patterns for different emotional states. However, the statistical features based on traditional graph theory may ignore the spatial network difference. To reveal these inherent spatial features and increase the stability of emotional recognition, we proposed a hierarchical framework that can perform the multiple emotion recognitions with the multiple emotion-related spatial network topology patterns (MESNP) by combining a supervised learning with ensemble co-decision strategy. To evaluate the performance of our proposed MESNP approach, we conduct both off-line and simulated on-line experiments with two public datasets, i.e., MAHNOB and DEAP. The experiment results demonstrated that MESNP can significantly enhance the classification performance for the multiple emotions. The highest accuracies of off-line experiments for MAHNOB-HCI and DEAP achieved 99.93% (3 classes) and 83.66% (4 classes), respectively. For simulated on-line experiments, we also obtained the best classification accuracies with 100% (3 classes) for MAHNOB and 99.22% (4 classes) for DEAP by proposed MESNP. These results further proved the efficiency of MESNP for structured feature extraction in multi-classification emotional tasks.
[ { "created": "Tue, 25 Feb 2020 11:47:53 GMT", "version": "v1" } ]
2020-02-26
[ [ "Li", "Cunbo", "" ], [ "Li", "Peiyang", "" ], [ "Zhang", "Yangsong", "" ], [ "Li", "Ning", "" ], [ "Si", "Yajing", "" ], [ "Li", "Fali", "" ], [ "Yao", "Dezhong", "" ], [ "Xu", "Peng", "" ] ]
Brain neural networks characterize various information propagation patterns for different emotional states. However, the statistical features based on traditional graph theory may ignore the spatial network difference. To reveal these inherent spatial features and increase the stability of emotional recognition, we proposed a hierarchical framework that can perform the multiple emotion recognitions with the multiple emotion-related spatial network topology patterns (MESNP) by combining a supervised learning with ensemble co-decision strategy. To evaluate the performance of our proposed MESNP approach, we conduct both off-line and simulated on-line experiments with two public datasets, i.e., MAHNOB and DEAP. The experiment results demonstrated that MESNP can significantly enhance the classification performance for the multiple emotions. The highest accuracies of off-line experiments for MAHNOB-HCI and DEAP achieved 99.93% (3 classes) and 83.66% (4 classes), respectively. For simulated on-line experiments, we also obtained the best classification accuracies with 100% (3 classes) for MAHNOB and 99.22% (4 classes) for DEAP by proposed MESNP. These results further proved the efficiency of MESNP for structured feature extraction in multi-classification emotional tasks.
1006.5846
Vadim N. Biktashev
Vadim N. Biktashev, Irina V. Biktasheva, Narine A. Sarvazyan
Evolution of spiral and scroll waves of excitation in a mathematical model of ischaemic border zone
26 pages, 13 figures, appendix and 2 movies, as accepted to PLoS ONE 2011/08/08
PLoS ONE, 6(9):e24388, 2011
10.1371/journal.pone.0024388
null
q-bio.TO nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abnormal electrical activity from the boundaries of ischemic cardiac tissue is recognized as one of the major causes in generation of ischemia-reperfusion arrhythmias. Here we present a theoretical analysis of the waves of electrical activity that can arise on the boundary of a cardiac cell network upon its recovery from ischaemia-like conditions. The main factors included in our analysis are macroscopic gradients of the cell-to-cell coupling and cell excitability and microscopic heterogeneity of individual cells. The interplay between these factors allows one to explain how spirals form, drift together with the moving boundary, get transiently pinned to local inhomogeneities, and finally penetrate into the bulk of the well-coupled tissue where they reach macroscopic scale. The asymptotic theory of the drift of spiral and scroll waves based on response functions provides an explanation of the drifts involved in this mechanism, with the exception of effects due to the discreteness of cardiac tissue. In particular, this asymptotic theory allows an extrapolation of 2D events into 3D, which has shown that cells within the border zone can give rise to 3D analogues of spirals, the scroll waves. When and if such scroll waves escape into a better coupled tissue, they are likely to collapse due to the positive filament tension. However, our simulations have shown that such collapse of newly generated scrolls is not inevitable and that under certain conditions filament tension becomes negative, causing scroll filaments to expand and multiply, leading to a fibrillation-like state within small areas of cardiac tissue.
[ { "created": "Wed, 30 Jun 2010 13:05:00 GMT", "version": "v1" }, { "created": "Tue, 7 Jun 2011 15:31:45 GMT", "version": "v2" }, { "created": "Tue, 23 Aug 2011 06:59:57 GMT", "version": "v3" } ]
2015-05-19
[ [ "Biktashev", "Vadim N.", "" ], [ "Biktasheva", "Irina V.", "" ], [ "Sarvazyan", "Narine A.", "" ] ]
Abnormal electrical activity from the boundaries of ischemic cardiac tissue is recognized as one of the major causes in generation of ischemia-reperfusion arrhythmias. Here we present a theoretical analysis of the waves of electrical activity that can arise on the boundary of a cardiac cell network upon its recovery from ischaemia-like conditions. The main factors included in our analysis are macroscopic gradients of the cell-to-cell coupling and cell excitability and microscopic heterogeneity of individual cells. The interplay between these factors allows one to explain how spirals form, drift together with the moving boundary, get transiently pinned to local inhomogeneities, and finally penetrate into the bulk of the well-coupled tissue where they reach macroscopic scale. The asymptotic theory of the drift of spiral and scroll waves based on response functions provides an explanation of the drifts involved in this mechanism, with the exception of effects due to the discreteness of cardiac tissue. In particular, this asymptotic theory allows an extrapolation of 2D events into 3D, which has shown that cells within the border zone can give rise to 3D analogues of spirals, the scroll waves. When and if such scroll waves escape into a better coupled tissue, they are likely to collapse due to the positive filament tension. However, our simulations have shown that such collapse of newly generated scrolls is not inevitable and that under certain conditions filament tension becomes negative, causing scroll filaments to expand and multiply, leading to a fibrillation-like state within small areas of cardiac tissue.
2003.10086
Alexander Siegenfeld
Alexander F. Siegenfeld and Yaneer Bar-Yam
Eliminating COVID-19: The Impact of Travel and Timing
null
Communications Physics 3, 204 (2020)
10.1038/s42005-020-00470-7
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We analyze the spread of COVID-19 by considering the transmission of the disease among individuals both within and between regions. A set of regions can be defined as any partition of a population such that travel/social contact within each region far exceeds that between them. COVID-19 can be eliminated if the region-to-region reproductive number---i.e. the average number of other regions to which a single infected region will transmit the virus---is reduced to less than one. We find that this region-to-region reproductive number is proportional to the travel rate between regions and exponential in the length of the time-delay before region-level control measures are imposed. Thus, reductions in travel and the speed with which regions take action play decisive roles in whether COVID-19 is eliminated from a collection of regions. If, on average, infected regions (including those that become re-infected in the future) impose social distancing measures shortly after active spreading begins within them, the number of infected regions, and thus the number of regions in which such measures are required, will exponentially decrease over time. Elimination will in this case be a stable fixed point even after the social distancing measures have been lifted from most of the regions.
[ { "created": "Mon, 23 Mar 2020 05:17:27 GMT", "version": "v1" }, { "created": "Thu, 23 Apr 2020 19:45:40 GMT", "version": "v2" } ]
2020-11-11
[ [ "Siegenfeld", "Alexander F.", "" ], [ "Bar-Yam", "Yaneer", "" ] ]
We analyze the spread of COVID-19 by considering the transmission of the disease among individuals both within and between regions. A set of regions can be defined as any partition of a population such that travel/social contact within each region far exceeds that between them. COVID-19 can be eliminated if the region-to-region reproductive number---i.e. the average number of other regions to which a single infected region will transmit the virus---is reduced to less than one. We find that this region-to-region reproductive number is proportional to the travel rate between regions and exponential in the length of the time-delay before region-level control measures are imposed. Thus, reductions in travel and the speed with which regions take action play decisive roles in whether COVID-19 is eliminated from a collection of regions. If, on average, infected regions (including those that become re-infected in the future) impose social distancing measures shortly after active spreading begins within them, the number of infected regions, and thus the number of regions in which such measures are required, will exponentially decrease over time. Elimination will in this case be a stable fixed point even after the social distancing measures have been lifted from most of the regions.
1906.02086
Harry Dudley
Harry J. Dudley, Zhiyong Jason Ren, and David M. Bortz
Competitive Exclusion in a DAE Model for Microbial Electrolysis Cells
null
null
null
null
q-bio.PE math.CA math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microbial electrolysis cells (MECs) employ electroactive bacteria to perform extracellular electron transfer, enabling hydrogen generation from biodegradable substrates. In previous work, we developed and analyzed a differential-algebraic equation (DAE) model for MECs. The model resembles a chemostat with ordinary differential equations (ODEs) for concentrations of substrate, microorganisms, and an extracellular mediator involved in electron transfer. There is also an algebraic constraint for electric current and hydrogen production. Our goal is to determine the outcome of competition between methanogenic archaea and electroactive bacteria, because only the latter contribute to electric current and resulting hydrogen production. We investigate asymptotic stability in two industrially relevant versions of the model. An important aspect of chemostat models is the principle of competitive exclusion -- only microbes which grow at the lowest substrate concentration will survive as $t\to\infty$. We show that if methanogens grow at the lowest substrate concentration, then the equilibrium corresponding to competitive exclusion by methanogens is globally asymptotically stable. The analogous result for electroactive bacteria is not necessarily true. We show that local asymptotic stability of exclusion by electroactive bacteria is not guaranteed, even in a simplified version of the model. In this case, even if electroactive bacteria can grow at the lowest substrate concentration, a few additional conditions are required to guarantee local asymptotic stability. We also provide numerical simulations supporting these arguments. Our results suggest operating conditions that are most conducive to success of electroactive bacteria and the resulting current and hydrogen production in MECs. This will help identify when methane production or electricity and hydrogen production are favored.
[ { "created": "Wed, 5 Jun 2019 15:45:36 GMT", "version": "v1" }, { "created": "Fri, 14 Feb 2020 03:50:15 GMT", "version": "v2" }, { "created": "Mon, 6 Jul 2020 21:32:31 GMT", "version": "v3" } ]
2022-06-03
[ [ "Dudley", "Harry J.", "" ], [ "Ren", "Zhiyong Jason", "" ], [ "Bortz", "David M.", "" ] ]
Microbial electrolysis cells (MECs) employ electroactive bacteria to perform extracellular electron transfer, enabling hydrogen generation from biodegradable substrates. In previous work, we developed and analyzed a differential-algebraic equation (DAE) model for MECs. The model resembles a chemostat with ordinary differential equations (ODEs) for concentrations of substrate, microorganisms, and an extracellular mediator involved in electron transfer. There is also an algebraic constraint for electric current and hydrogen production. Our goal is to determine the outcome of competition between methanogenic archaea and electroactive bacteria, because only the latter contribute to electric current and resulting hydrogen production. We investigate asymptotic stability in two industrially relevant versions of the model. An important aspect of chemostat models is the principle of competitive exclusion -- only microbes which grow at the lowest substrate concentration will survive as $t\to\infty$. We show that if methanogens grow at the lowest substrate concentration, then the equilibrium corresponding to competitive exclusion by methanogens is globally asymptotically stable. The analogous result for electroactive bacteria is not necessarily true. We show that local asymptotic stability of exclusion by electroactive bacteria is not guaranteed, even in a simplified version of the model. In this case, even if electroactive bacteria can grow at the lowest substrate concentration, a few additional conditions are required to guarantee local asymptotic stability. We also provide numerical simulations supporting these arguments. Our results suggest operating conditions that are most conducive to success of electroactive bacteria and the resulting current and hydrogen production in MECs. This will help identify when methane production or electricity and hydrogen production are favored.
2004.12541
YangQuan Chen Prof.
Conghui Xu, Yongguang Yu, QuanChen Yang, and Zhenzhen Lu
Forecast analysis of the epidemics trend of COVID-19 in the United States by a generalized fractional-order SEIR model
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
In this paper, a generalized fractional-order SEIR model is proposed, denoted by SEIQRP model, which has a basic guiding significance for the prediction of the possible outbreak of infectious diseases like COVID-19 and other insect diseases in the future. Firstly, some qualitative properties of the model are analyzed. The basic reproduction number $R_{0}$ is derived. When $R_{0}<1$, the disease-free equilibrium point is unique and locally asymptotically stable. When $R_{0}>1$, the endemic equilibrium point is also unique. Furthermore, some conditions are established to ensure the local asymptotic stability of disease-free and endemic equilibrium points. The trend of COVID-19 spread in the United States is predicted. Considering the influence of individual behavior and government mitigation measures, a modified SEIQRP model is proposed, defined as the SEIQRPD model. According to the real data of the United States, it is found that our improved model has a better prediction ability for the epidemic trend in the next two weeks. Hence, the epidemic trend of the United States in the next two weeks is investigated, and the peak of isolated cases is predicted. The modified SEIQRP model successfully captures the development process of COVID-19, which provides an important reference for understanding the trend of the outbreak.
[ { "created": "Mon, 27 Apr 2020 01:56:57 GMT", "version": "v1" }, { "created": "Wed, 29 Apr 2020 06:37:47 GMT", "version": "v2" } ]
2020-04-30
[ [ "Xu", "Conghui", "" ], [ "Yu", "Yongguang", "" ], [ "Yang", "QuanChen", "" ], [ "Lu", "Zhenzhen", "" ] ]
In this paper, a generalized fractional-order SEIR model is proposed, denoted by SEIQRP model, which has a basic guiding significance for the prediction of the possible outbreak of infectious diseases like COVID-19 and other insect diseases in the future. Firstly, some qualitative properties of the model are analyzed. The basic reproduction number $R_{0}$ is derived. When $R_{0}<1$, the disease-free equilibrium point is unique and locally asymptotically stable. When $R_{0}>1$, the endemic equilibrium point is also unique. Furthermore, some conditions are established to ensure the local asymptotic stability of disease-free and endemic equilibrium points. The trend of COVID-19 spread in the United States is predicted. Considering the influence of individual behavior and government mitigation measures, a modified SEIQRP model is proposed, defined as the SEIQRPD model. According to the real data of the United States, it is found that our improved model has a better prediction ability for the epidemic trend in the next two weeks. Hence, the epidemic trend of the United States in the next two weeks is investigated, and the peak of isolated cases is predicted. The modified SEIQRP model successfully captures the development process of COVID-19, which provides an important reference for understanding the trend of the outbreak.
2103.05075
Alex Williams
Alex H. Williams and Scott W. Linderman
Statistical Neuroscience in the Single Trial Limit
25 pages, 3 figures
null
null
null
q-bio.NC stat.AP
http://creativecommons.org/licenses/by-sa/4.0/
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic "noise" and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure -- including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions -- and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
[ { "created": "Mon, 8 Mar 2021 21:08:14 GMT", "version": "v1" }, { "created": "Thu, 4 Nov 2021 19:37:38 GMT", "version": "v2" } ]
2021-11-08
[ [ "Williams", "Alex H.", "" ], [ "Linderman", "Scott W.", "" ] ]
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic "noise" and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure -- including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions -- and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
1701.00166
J. Nathan Kutz
Lucas Stolerman, and Pedro Maia, and J. Nathan Kutz
Data-Driven Forecast of Dengue Outbreaks in Brazil: A Critical Assessment of Climate Conditions for Different Capitals
29 pages, 12 figures, 8 tables
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Local climate conditions play a major role in the development of the mosquito population responsible for transmitting Dengue Fever. Since the {\em Aedes Aegypti} mosquito is also a primary vector for the recent Zika and Chikungunya epidemics across the Americas, a detailed monitoring of periods with favorable climate conditions for mosquito profusion may improve the timing of vector-control efforts and other urgent public health strategies. We apply dimensionality reduction techniques and machine-learning algorithms to climate time series data and analyze their connection to the occurrence of Dengue outbreaks for seven major cities in Brazil. Specifically, we have identified two key variables and a period during the annual cycle that are highly predictive of epidemic outbreaks. The key variables are the frequency of precipitation and temperature during an approximately two month window of the winter season preceding the outbreak. Thus simple climate signatures may be influencing Dengue outbreaks even months before their occurrence. Some of the more challenging datasets required usage of compressive-sensing procedures to estimate missing entries for temperature and precipitation records. Our results indicate that each Brazilian capital considered has a unique frequency of precipitation and temperature signature in the winter preceding a Dengue outbreak. Such climate contributions on vector populations are key factors in dengue dynamics which could lead to more accurate prediction models and early warning systems. Finally, we show that critical temperature and precipitation signatures may vary significantly from city to city, suggesting that the interplay between climate variables and dengue outbreaks is more complex than generally appreciated.
[ { "created": "Sat, 31 Dec 2016 20:55:47 GMT", "version": "v1" } ]
2017-01-03
[ [ "Stolerman", "Lucas", "" ], [ "Maia", "Pedro", "" ], [ "Kutz", "J. Nathan", "" ] ]
Local climate conditions play a major role in the development of the mosquito population responsible for transmitting Dengue Fever. Since the {\em Aedes Aegypti} mosquito is also a primary vector for the recent Zika and Chikungunya epidemics across the Americas, a detailed monitoring of periods with favorable climate conditions for mosquito profusion may improve the timing of vector-control efforts and other urgent public health strategies. We apply dimensionality reduction techniques and machine-learning algorithms to climate time series data and analyze their connection to the occurrence of Dengue outbreaks for seven major cities in Brazil. Specifically, we have identified two key variables and a period during the annual cycle that are highly predictive of epidemic outbreaks. The key variables are the frequency of precipitation and temperature during an approximately two month window of the winter season preceding the outbreak. Thus simple climate signatures may be influencing Dengue outbreaks even months before their occurrence. Some of the more challenging datasets required usage of compressive-sensing procedures to estimate missing entries for temperature and precipitation records. Our results indicate that each Brazilian capital considered has a unique frequency of precipitation and temperature signature in the winter preceding a Dengue outbreak. Such climate contributions on vector populations are key factors in dengue dynamics which could lead to more accurate prediction models and early warning systems. Finally, we show that critical temperature and precipitation signatures may vary significantly from city to city, suggesting that the interplay between climate variables and dengue outbreaks is more complex than generally appreciated.
1812.06400
Sebastian Raschka
Sebastian Raschka
Automated discovery of GPCR bioactive ligands
null
Current Opinion in Structural Biology 2019, 55:17-24
10.1016/j.sbi.2019.02.011
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While G-protein coupled receptors (GPCRs) constitute the largest class of membrane proteins, structures and endogenous ligands of a large portion of GPCRs remain unknown. Due to the involvement of GPCRs in various signaling pathways and physiological roles, the identification of endogenous ligands as well as designing novel drugs is of high interest to the research and medical communities. Along with highlighting the recent advances in structure-based ligand discovery, including docking and molecular dynamics, this article focuses on the latest advances for automating the discovery of bioactive ligands using machine learning. Machine learning is centered around the development and applications of algorithms that can learn from data automatically. Such an approach offers immense opportunities for bioactivity prediction as well as quantitative structure-activity relationship studies. This review describes the most recent and successful applications of machine learning for bioactive ligand discovery, concluding with an outlook on deep learning methods that are capable of automatically extracting salient information from structural data as a promising future direction for rapid and efficient bioactive ligand discovery.
[ { "created": "Sun, 16 Dec 2018 06:00:57 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 03:05:04 GMT", "version": "v2" } ]
2019-03-29
[ [ "Raschka", "Sebastian", "" ] ]
While G-protein coupled receptors (GPCRs) constitute the largest class of membrane proteins, structures and endogenous ligands of a large portion of GPCRs remain unknown. Due to the involvement of GPCRs in various signaling pathways and physiological roles, the identification of endogenous ligands as well as designing novel drugs is of high interest to the research and medical communities. Along with highlighting the recent advances in structure-based ligand discovery, including docking and molecular dynamics, this article focuses on the latest advances for automating the discovery of bioactive ligands using machine learning. Machine learning is centered around the development and applications of algorithms that can learn from data automatically. Such an approach offers immense opportunities for bioactivity prediction as well as quantitative structure-activity relationship studies. This review describes the most recent and successful applications of machine learning for bioactive ligand discovery, concluding with an outlook on deep learning methods that are capable of automatically extracting salient information from structural data as a promising future direction for rapid and efficient bioactive ligand discovery.
1903.04987
Alexandra Jurczak
E. Krock, A. Jurczak, C.I. Svensson
Pain pathogenesis in rheumatoid arthritis -- what have we learned from animal models
33 pages, 3 figures
PAIN 2018 Sep; 159 Suppl:S98-S109
10.1097/j.pain.0000000000001333
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rheumatoid arthritis (RA) is a systemic autoimmune disease characterized by joint inflammation and joint pain. Much of RA treatment is focused on suppressing inflammation, with the idea being that if inflammation is controlled other symptoms, such as pain, will disappear. However, pain is the most common complaint of RA patients, is often still present following the resolution of inflammation, and can develop prior to the onset of inflammation. Thus, further research is needed to better understand RA-associated pain mechanisms. A number of preclinical rodent models commonly used in rheumatology research have been developed based on bedside-to-bench and reverse translational approaches. These models include collagen-induced arthritis, antigen-induced arthritis, streptococcal cell wall-induced arthritis, collagen antibody-induced arthritis, serum transfer from K/BxN transgenic mice and tumor necrosis factor (TNF)-transgene mice. They have led to increased understanding of RA pathogenesis and have aided the development of successful RA treatments. More recently, these models have been used to elucidate the complexities of RA associated pain. Several potentially modifiable mechanisms, other than inflammation, have been investigated in these models and may in turn lead to more effective treatments. Furthermore, preclinical research can indicate when specific treatment strategies may benefit specific patient subgroups or at which disease stage they are best used. This review not only highlights RA-associated pain mechanisms, but also suggests the usefulness of preclinical animal research based on a bedside-to-bench approach.
[ { "created": "Tue, 12 Mar 2019 15:20:01 GMT", "version": "v1" } ]
2019-03-13
[ [ "Krock", "E.", "" ], [ "Jurczak", "A.", "" ], [ "Svensson", "C. I.", "" ] ]
Rheumatoid arthritis (RA) is a systemic autoimmune disease characterized by joint inflammation and joint pain. Much of RA treatment is focused on suppressing inflammation, with the idea being that if inflammation is controlled other symptoms, such as pain, will disappear. However, pain is the most common complaint of RA patients, is often still present following the resolution of inflammation, and can develop prior to the onset of inflammation. Thus, further research is needed to better understand RA-associated pain mechanisms. A number of preclinical rodent models commonly used in rheumatology research have been developed based on bedside-to-bench and reverse translational approaches. These models include collagen-induced arthritis, antigen-induced arthritis, streptococcal cell wall-induced arthritis, collagen antibody-induced arthritis, serum transfer from K/BxN transgenic mice and tumor necrosis factor (TNF)-transgene mice. They have led to increased understanding of RA pathogenesis and have aided the development of successful RA treatments. More recently, these models have been used to elucidate the complexities of RA associated pain. Several potentially modifiable mechanisms, other than inflammation, have been investigated in these models and may in turn lead to more effective treatments. Furthermore, preclinical research can indicate when specific treatment strategies may benefit specific patient subgroups or at which disease stage they are best used. This review not only highlights RA-associated pain mechanisms, but also suggests the usefulness of preclinical animal research based on a bedside-to-bench approach.
2110.11310
Ashish B. George
Ashish B. George and Kirill S. Korolev
Ecological landscapes guide the assembly of optimal microbial communities
null
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Assembling optimal microbial communities is key for various applications in biofuel production, agriculture, and human health. Finding the optimal community is challenging because the number of possible communities grows exponentially with the number of species, and so an exhaustive search cannot be performed even for a dozen species. A heuristic search that improves community function by adding or removing one species at a time is more practical, but it is unknown whether this strategy can discover an optimal or nearly optimal community. Using consumer-resource models with and without cross-feeding, we investigate how the efficacy of search depends on the distribution of resources, niche overlap, cross-feeding, and other aspects of community ecology. We show that search efficacy is determined by the ruggedness of the appropriately-defined ecological landscape. We identify specific ruggedness measures that are both predictive of search performance and robust to noise and low sampling density. The feasibility of our approach is demonstrated using experimental data from a soil microbial community. Overall, our results establish conditions necessary for the success of the heuristic search and provide concrete design principles for building high-performing microbial consortia.
[ { "created": "Thu, 21 Oct 2021 17:42:36 GMT", "version": "v1" }, { "created": "Wed, 15 Dec 2021 23:28:52 GMT", "version": "v2" } ]
2021-12-17
[ [ "George", "Ashish B.", "" ], [ "Korolev", "Kirill S.", "" ] ]
Assembling optimal microbial communities is key for various applications in biofuel production, agriculture, and human health. Finding the optimal community is challenging because the number of possible communities grows exponentially with the number of species, and so an exhaustive search cannot be performed even for a dozen species. A heuristic search that improves community function by adding or removing one species at a time is more practical, but it is unknown whether this strategy can discover an optimal or nearly optimal community. Using consumer-resource models with and without cross-feeding, we investigate how the efficacy of search depends on the distribution of resources, niche overlap, cross-feeding, and other aspects of community ecology. We show that search efficacy is determined by the ruggedness of the appropriately-defined ecological landscape. We identify specific ruggedness measures that are both predictive of search performance and robust to noise and low sampling density. The feasibility of our approach is demonstrated using experimental data from a soil microbial community. Overall, our results establish conditions necessary for the success of the heuristic search and provide concrete design principles for building high-performing microbial consortia.
0905.2935
Yuriy Pershin
Yuriy V. Pershin and Massimiliano Di Ventra
Experimental demonstration of associative memory with memristive neural networks
null
Neural Networks 23, 881 (2010)
null
null
q-bio.NC cond-mat.mes-hall q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When someone mentions the name of a known person we immediately recall her face and possibly many other traits. This is because we possess the so-called associative memory, that is, the ability to correlate different memories to the same fact or event. Associative memory is such a fundamental and encompassing human ability (and not just human) that the network of neurons in our brain must perform it quite easily. The question is then whether electronic neural networks (electronic schemes that act somewhat similarly to human brains) can be built to perform this type of function. Although the field of neural networks has developed for many years, a key element, namely the synapses between adjacent neurons, has been lacking a satisfactory electronic representation. The reason for this is that a passive circuit element able to reproduce the synapse behaviour needs to remember its past dynamical history, store a continuous set of states, and be "plastic" according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory-resistor (memristor for short). In particular, by using simple and inexpensive off-the-shelf components we have built a memristor emulator which realizes all required synaptic properties. Most importantly, we have demonstrated experimentally the formation of associative memory in a simple neural network consisting of three electronic neurons connected by two memristor-emulator synapses. This experimental demonstration opens up new possibilities in the understanding of neural processes using memory devices, an important step forward to reproduce complex learning, adaptive and spontaneous behaviour with electronic neural networks.
[ { "created": "Mon, 18 May 2009 17:16:03 GMT", "version": "v1" }, { "created": "Fri, 18 Sep 2009 16:00:11 GMT", "version": "v2" }, { "created": "Mon, 18 Jan 2010 02:39:52 GMT", "version": "v3" } ]
2010-08-26
[ [ "Pershin", "Yuriy V.", "" ], [ "Di Ventra", "Massimiliano", "" ] ]
When someone mentions the name of a known person we immediately recall her face and possibly many other traits. This is because we possess the so-called associative memory, that is, the ability to correlate different memories to the same fact or event. Associative memory is such a fundamental and encompassing human ability (and not just human) that the network of neurons in our brain must perform it quite easily. The question is then whether electronic neural networks (electronic schemes that act somewhat similarly to human brains) can be built to perform this type of function. Although the field of neural networks has developed for many years, a key element, namely the synapses between adjacent neurons, has been lacking a satisfactory electronic representation. The reason for this is that a passive circuit element able to reproduce the synapse behaviour needs to remember its past dynamical history, store a continuous set of states, and be "plastic" according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory-resistor (memristor for short). In particular, by using simple and inexpensive off-the-shelf components we have built a memristor emulator which realizes all required synaptic properties. Most importantly, we have demonstrated experimentally the formation of associative memory in a simple neural network consisting of three electronic neurons connected by two memristor-emulator synapses. This experimental demonstration opens up new possibilities in the understanding of neural processes using memory devices, an important step forward to reproduce complex learning, adaptive and spontaneous behaviour with electronic neural networks.
2310.13018
Ilia Sucholutsky
Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert M\"uller, Mariya Toneva, Thomas L. Griffiths
Getting aligned on representational alignment
Working paper, changes to be made in upcoming revisions
null
null
null
q-bio.NC cs.AI cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological and artificial information processing systems form representations that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the extent to which the representations formed by these diverse systems agree? Do similarities in representations then translate into similar behavior? How can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most active research areas in cognitive science, neuroscience, and machine learning. For example, cognitive scientists measure the representational alignment of multiple individuals to identify shared cognitive priors, neuroscientists align fMRI responses from multiple individuals into a shared representational space for group-level analyses, and ML researchers distill knowledge from teacher models into student models by increasing their alignment. Unfortunately, there is limited knowledge transfer between research communities interested in representational alignment, so progress in one field often ends up being rediscovered independently in another. Thus, greater cross-field communication would be advantageous. To improve communication between these fields, we propose a unifying framework that can serve as a common language between researchers studying representational alignment. We survey the literature from all three fields and demonstrate how prior work fits into this framework. Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that our work can catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with their suggestions for future revisions.
[ { "created": "Wed, 18 Oct 2023 17:47:58 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2023 17:49:18 GMT", "version": "v2" } ]
2023-11-03
[ [ "Sucholutsky", "Ilia", "" ], [ "Muttenthaler", "Lukas", "" ], [ "Weller", "Adrian", "" ], [ "Peng", "Andi", "" ], [ "Bobu", "Andreea", "" ], [ "Kim", "Been", "" ], [ "Love", "Bradley C.", "" ], [ "Grant", "Erin", "" ], [ "Groen", "Iris", "" ], [ "Achterberg", "Jascha", "" ], [ "Tenenbaum", "Joshua B.", "" ], [ "Collins", "Katherine M.", "" ], [ "Hermann", "Katherine L.", "" ], [ "Oktar", "Kerem", "" ], [ "Greff", "Klaus", "" ], [ "Hebart", "Martin N.", "" ], [ "Jacoby", "Nori", "" ], [ "Zhang", "Qiuyi", "" ], [ "Marjieh", "Raja", "" ], [ "Geirhos", "Robert", "" ], [ "Chen", "Sherol", "" ], [ "Kornblith", "Simon", "" ], [ "Rane", "Sunayana", "" ], [ "Konkle", "Talia", "" ], [ "O'Connell", "Thomas P.", "" ], [ "Unterthiner", "Thomas", "" ], [ "Lampinen", "Andrew K.", "" ], [ "Müller", "Klaus-Robert", "" ], [ "Toneva", "Mariya", "" ], [ "Griffiths", "Thomas L.", "" ] ]
Biological and artificial information processing systems form representations that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the extent to which the representations formed by these diverse systems agree? Do similarities in representations then translate into similar behavior? How can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most active research areas in cognitive science, neuroscience, and machine learning. For example, cognitive scientists measure the representational alignment of multiple individuals to identify shared cognitive priors, neuroscientists align fMRI responses from multiple individuals into a shared representational space for group-level analyses, and ML researchers distill knowledge from teacher models into student models by increasing their alignment. Unfortunately, there is limited knowledge transfer between research communities interested in representational alignment, so progress in one field often ends up being rediscovered independently in another. Thus, greater cross-field communication would be advantageous. To improve communication between these fields, we propose a unifying framework that can serve as a common language between researchers studying representational alignment. We survey the literature from all three fields and demonstrate how prior work fits into this framework. Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that our work can catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with their suggestions for future revisions.
q-bio/0409034
Enrico Carlon
Enrico Carlon, Mehdi Lejard Malki, Ralf Blossey
Exons, introns and DNA thermodynamics
4 pages, 8 figures - Final version as published. See also Phys. Rev. Focus 15, story 17
Phys. Rev. Lett. 94, 178101 (2005)
10.1103/PhysRevLett.94.178101
null
q-bio.BM cond-mat.stat-mech physics.bio-ph
null
The genes of eukaryotes are characterized by protein coding fragments, the exons, interrupted by introns, i.e. stretches of DNA which do not carry any useful information for protein synthesis. We have analyzed the melting behavior of randomly selected human cDNA sequences obtained from the genomic DNA by removing all introns. A clear correspondence is observed between exons and melting domains. This finding may provide new insights in the physical mechanisms underlying the evolution of genes.
[ { "created": "Wed, 29 Sep 2004 15:48:35 GMT", "version": "v1" }, { "created": "Tue, 27 Sep 2005 08:28:24 GMT", "version": "v2" } ]
2009-11-10
[ [ "Carlon", "Enrico", "" ], [ "Malki", "Mehdi Lejard", "" ], [ "Blossey", "Ralf", "" ] ]
The genes of eukaryotes are characterized by protein coding fragments, the exons, interrupted by introns, i.e. stretches of DNA which do not carry any useful information for protein synthesis. We have analyzed the melting behavior of randomly selected human cDNA sequences obtained from the genomic DNA by removing all introns. A clear correspondence is observed between exons and melting domains. This finding may provide new insights in the physical mechanisms underlying the evolution of genes.