column          type           min    max
id              stringlengths  9      13
submitter       stringlengths  4      48
authors         stringlengths  4      9.62k
title           stringlengths  4      343
comments        stringlengths  2      480
journal-ref     stringlengths  9      309
doi             stringlengths  12     138
report-no       stringclasses  277 values
categories      stringlengths  8      87
license         stringclasses  9 values
orig_abstract   stringlengths  27     3.76k
versions        listlengths    1      15
update_date     stringlengths  10     10
authors_parsed  listlengths    1      147
abstract        stringlengths  24     3.75k
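The column summary above (name, dtype, and min/max string or list length) can be captured as a small schema and used to sanity-check individual records. A minimal sketch in Python: the field names and bounds come from the listing above, the "k" suffixes are read as thousands, and the `check_record` helper itself is an assumption for illustration, not part of the dataset.

```python
# Schema reconstructed from the column summary: column -> (kind, min, max).
# For stringclasses columns only the number of distinct values is known,
# so they carry no length bounds here.
SCHEMA = {
    "id": ("stringlengths", 9, 13),
    "submitter": ("stringlengths", 4, 48),
    "authors": ("stringlengths", 4, 9620),
    "title": ("stringlengths", 4, 343),
    "comments": ("stringlengths", 2, 480),
    "journal-ref": ("stringlengths", 9, 309),
    "doi": ("stringlengths", 12, 138),
    "report-no": ("stringclasses", None, None),
    "categories": ("stringlengths", 8, 87),
    "license": ("stringclasses", None, None),
    "orig_abstract": ("stringlengths", 27, 3760),
    "versions": ("listlengths", 1, 15),
    "update_date": ("stringlengths", 10, 10),
    "authors_parsed": ("listlengths", 1, 147),
    "abstract": ("stringlengths", 24, 3750),
}

def check_record(record):
    """Return names of fields whose length violates the schema bounds.

    Missing or null fields and stringclasses columns are skipped, since
    several columns in the data (doi, report-no, ...) are frequently null.
    """
    bad = []
    for name, (kind, lo, hi) in SCHEMA.items():
        value = record.get(name)
        if value is None or kind == "stringclasses":
            continue
        if not (lo <= len(value) <= hi):
            bad.append(name)
    return bad

# First record of the preview, abbreviated to a few fields:
sample = {"id": "2005.12330", "update_date": "2021-01-27",
          "versions": [{"version": "v1"}]}
print(check_record(sample))  # → []
```

Length checks of this kind catch truncated fields early; a record with, say, a one-character `id` would be flagged as `["id"]`.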
2005.12330
Alessandro Ingrosso
Alessandro Ingrosso
Optimal Learning with Excitatory and Inhibitory synapses
16 pages, 5 figures
null
10.1371/journal.pcbi.1008536
null
q-bio.NC cond-mat.dis-nn cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
[ { "created": "Mon, 25 May 2020 18:25:54 GMT", "version": "v1" } ]
2021-01-27
[ [ "Ingrosso", "Alessandro", "" ] ]
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
1307.2298
Marisa Eisenberg
Marisa C. Eisenberg, Michael A. L. Hayashi
Determining Structurally Identifiable Parameter Combinations Using Subset Profiling
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifiability is a necessary condition for successful parameter estimation of dynamic system models. A major component of identifiability analysis is determining the identifiable parameter combinations, the functional forms for the dependencies between unidentifiable parameters. Identifiable combinations can help in model reparameterization and also in determining which parameters may be experimentally measured to recover model identifiability. Several numerical approaches to determining identifiability of differential equation models have been developed; however, the question of determining identifiable combinations remains incompletely addressed. In this paper, we present a new approach that uses parameter subset selection methods based on the Fisher Information Matrix, together with the profile likelihood, to effectively estimate identifiable combinations. We demonstrate this approach on several example models in pharmacokinetics, cellular biology, and physiology.
[ { "created": "Mon, 8 Jul 2013 23:09:45 GMT", "version": "v1" }, { "created": "Fri, 4 Oct 2013 01:00:40 GMT", "version": "v2" } ]
2013-10-07
[ [ "Eisenberg", "Marisa C.", "" ], [ "Hayashi", "Michael A. L.", "" ] ]
Identifiability is a necessary condition for successful parameter estimation of dynamic system models. A major component of identifiability analysis is determining the identifiable parameter combinations, the functional forms for the dependencies between unidentifiable parameters. Identifiable combinations can help in model reparameterization and also in determining which parameters may be experimentally measured to recover model identifiability. Several numerical approaches to determining identifiability of differential equation models have been developed; however, the question of determining identifiable combinations remains incompletely addressed. In this paper, we present a new approach that uses parameter subset selection methods based on the Fisher Information Matrix, together with the profile likelihood, to effectively estimate identifiable combinations. We demonstrate this approach on several example models in pharmacokinetics, cellular biology, and physiology.
2111.02689
Ido Kanter
Shira Sardi, Roni Vardi, Yael Tugendhaft, Anton Sheinin, Amir Goldental, and Ido Kanter
Long anisotropic absolute refractory periods with rapid rise-times to reliable responsiveness
28 pages, 6 figures
Phys. Rev. E 105, 014401 (2022)
10.1103/PhysRevE.105.014401
null
q-bio.NC physics.bio-ph q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability for re-excitation in a given time-lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that an average anisotropic absolute RP could exceed 10 milliseconds and its tail 20 milliseconds, independent of a large stimulation frequency range. It is an order of magnitude longer than anticipated and comparable with the decaying membrane potential timescale. It is followed by a sharp rise-time (relative RP) of merely ~1 millisecond to complete responsiveness. Extracellular stimulations result in longer absolute RPs than solely intracellular ones, and a pair of extracellular stimulations from two different routes exhibits distinct absolute RPs, depending on their order. Our results indicate that a neuron is an accurate excitable element, where the diverse RPs cannot be attributed solely to the soma and imply fast mutual interactions between different stimulation routes and dendrites. Further elucidation of neuronal computational capabilities and their interplay with adaptation mechanisms is warranted.
[ { "created": "Thu, 4 Nov 2021 08:56:44 GMT", "version": "v1" }, { "created": "Mon, 3 Jan 2022 16:21:02 GMT", "version": "v2" } ]
2022-01-04
[ [ "Sardi", "Shira", "" ], [ "Vardi", "Roni", "" ], [ "Tugendhaft", "Yael", "" ], [ "Sheinin", "Anton", "" ], [ "Goldental", "Amir", "" ], [ "Kanter", "Ido", "" ] ]
Refractoriness is a fundamental property of excitable elements, such as neurons, indicating the probability for re-excitation in a given time-lag, and is typically linked to the neuronal hyperpolarization following an evoked spike. Here we measured the refractory periods (RPs) in neuronal cultures and observed that an average anisotropic absolute RP could exceed 10 milliseconds and its tail 20 milliseconds, independent of a large stimulation frequency range. It is an order of magnitude longer than anticipated and comparable with the decaying membrane potential timescale. It is followed by a sharp rise-time (relative RP) of merely ~1 millisecond to complete responsiveness. Extracellular stimulations result in longer absolute RPs than solely intracellular ones, and a pair of extracellular stimulations from two different routes exhibits distinct absolute RPs, depending on their order. Our results indicate that a neuron is an accurate excitable element, where the diverse RPs cannot be attributed solely to the soma and imply fast mutual interactions between different stimulation routes and dendrites. Further elucidation of neuronal computational capabilities and their interplay with adaptation mechanisms is warranted.
1212.0465
Kristina Crona
Kristina Crona
Polytopes, graphs and fitness landscapes
To appear in "Recent Advances in the Theory and Application of Fitness Landscapes" (A. Engelbrecht and H. Richter, eds.). Springer Series in Emergence, Complexity, and Computation, 2013
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Darwinian evolution can be illustrated as an uphill walk in a landscape, where the surface consists of genotypes, the height coordinates represent fitness, and each step corresponds to a point mutation. Epistasis, roughly defined as the dependence between the fitness effects of mutations, is a key concept in the theory of adaptation. Important recent approaches depend on graphs and polytopes. Fitness graphs are useful for describing coarse properties of a landscape, such as mutational trajectories and the number of peaks. The graphs have been used for relating global and local properties of fitness landscapes. The geometric theory of gene interaction, or the shape theory, is the most fine-scaled approach to epistasis. Shapes, defined as triangulations of polytopes for any number of loci, replace the well established concepts of positive and negative epistasis for two mutations. From the shape one can identify the fittest populations, i.e., populations where allele shuffling (recombination) will not increase the mean fitness. Shapes and graphs provide complementary information. The approaches make no structural assumptions about the underlying fitness landscapes, which make them well suited for empirical work.
[ { "created": "Mon, 3 Dec 2012 17:53:59 GMT", "version": "v1" }, { "created": "Tue, 7 May 2013 18:22:16 GMT", "version": "v2" } ]
2013-05-08
[ [ "Crona", "Kristina", "" ] ]
Darwinian evolution can be illustrated as an uphill walk in a landscape, where the surface consists of genotypes, the height coordinates represent fitness, and each step corresponds to a point mutation. Epistasis, roughly defined as the dependence between the fitness effects of mutations, is a key concept in the theory of adaptation. Important recent approaches depend on graphs and polytopes. Fitness graphs are useful for describing coarse properties of a landscape, such as mutational trajectories and the number of peaks. The graphs have been used for relating global and local properties of fitness landscapes. The geometric theory of gene interaction, or the shape theory, is the most fine-scaled approach to epistasis. Shapes, defined as triangulations of polytopes for any number of loci, replace the well established concepts of positive and negative epistasis for two mutations. From the shape one can identify the fittest populations, i.e., populations where allele shuffling (recombination) will not increase the mean fitness. Shapes and graphs provide complementary information. The approaches make no structural assumptions about the underlying fitness landscapes, which make them well suited for empirical work.
q-bio/0611082
Thierry Rabilloud
Aymeric Rivollier, Laure Perrin-Cocon, Sylvie Luche (DRDC), H\'el\`ene Diemer, Jean-Marc Strub, Daniel Hanau, Alain van Dorsselaer, Vincent Lotteau, Chantal Rabourdin-Combe, Thierry Rabilloud (DRDC), Christine Servet-Delprat
High expression of antioxidant proteins in dendritic cells: possible implications in atherosclerosis
copyright: American Society of Biochemistry and Molecular Biology
Mol Cell Proteomics 5 (04/2006) 726-36
10.1074/mcp.M500262-MCP200
null
q-bio.GN
null
Dendritic cells (DCs) display the unique ability to activate naive T cells and to initiate primary T cell responses revealed in DC-T cell alloreactions. DCs frequently operate under stress conditions. Oxidative stress enhances the production of inflammatory cytokines by DCs. We performed a proteomic analysis to see which major changes occur, at the protein expression level, during DC differentiation and maturation. Comparative two-dimensional gel analysis of the monocyte, immature DC, and mature DC stages was performed. Manganese superoxide dismutase (Mn-SOD) reached 0.7% of the gel-displayed proteins at the mature DC stage. This important amount of Mn-SOD is a primary antioxidant defense system against superoxide radicals, but its product, H(2)O(2), is also deleterious for cells. Peroxiredoxin (Prx) enzymes play an important role in eliminating such peroxide. Prx1 expression level continuously increased during DC differentiation and maturation, whereas Prx6 continuously decreased, and Prx2 peaked at the immature DC stage. As a consequence, DCs were more resistant than monocytes to apoptosis induced by high amounts of oxidized low density lipoproteins containing toxic organic peroxides and hydrogen peroxide. Furthermore DC-stimulated T cells produced high levels of receptor activator of nuclear factor kappaB ligand, a chemotactic and survival factor for monocytes and DCs. This study provides insights into the original ability of DCs to express very high levels of antioxidant enzymes such as Mn-SOD and Prx1, to detoxify oxidized low density lipoproteins, and to induce high levels of receptor activator of nuclear factor kappaB ligand by the T cells they activate and further emphasizes the role that DCs might play in atherosclerosis, a pathology recognized as a chronic inflammatory disorder.
[ { "created": "Fri, 24 Nov 2006 09:03:01 GMT", "version": "v1" } ]
2016-08-16
[ [ "Rivollier", "Aymeric", "", "DRDC" ], [ "Perrin-Cocon", "Laure", "", "DRDC" ], [ "Luche", "Sylvie", "", "DRDC" ], [ "Diemer", "Hélène", "", "DRDC" ], [ "Strub", "Jean-Marc", "", "DRDC" ], [ "Hanau", "Daniel", "", "DRDC" ], [ "van Dorsselaer", "Alain", "", "DRDC" ], [ "Lotteau", "Vincent", "", "DRDC" ], [ "Rabourdin-Combe", "Chantal", "", "DRDC" ], [ "Rabilloud", "Thierry", "", "DRDC" ], [ "Servet-Delprat", "Christine", "" ] ]
Dendritic cells (DCs) display the unique ability to activate naive T cells and to initiate primary T cell responses revealed in DC-T cell alloreactions. DCs frequently operate under stress conditions. Oxidative stress enhances the production of inflammatory cytokines by DCs. We performed a proteomic analysis to see which major changes occur, at the protein expression level, during DC differentiation and maturation. Comparative two-dimensional gel analysis of the monocyte, immature DC, and mature DC stages was performed. Manganese superoxide dismutase (Mn-SOD) reached 0.7% of the gel-displayed proteins at the mature DC stage. This important amount of Mn-SOD is a primary antioxidant defense system against superoxide radicals, but its product, H(2)O(2), is also deleterious for cells. Peroxiredoxin (Prx) enzymes play an important role in eliminating such peroxide. Prx1 expression level continuously increased during DC differentiation and maturation, whereas Prx6 continuously decreased, and Prx2 peaked at the immature DC stage. As a consequence, DCs were more resistant than monocytes to apoptosis induced by high amounts of oxidized low density lipoproteins containing toxic organic peroxides and hydrogen peroxide. Furthermore DC-stimulated T cells produced high levels of receptor activator of nuclear factor kappaB ligand, a chemotactic and survival factor for monocytes and DCs. This study provides insights into the original ability of DCs to express very high levels of antioxidant enzymes such as Mn-SOD and Prx1, to detoxify oxidized low density lipoproteins, and to induce high levels of receptor activator of nuclear factor kappaB ligand by the T cells they activate and further emphasizes the role that DCs might play in atherosclerosis, a pathology recognized as a chronic inflammatory disorder.
2208.07763
Guido Caldarelli
Mirko Hu (1), Guido Caldarelli (2), Tommaso Gili (3) ( (1) University of Parma, Department of Medicine and Surgery (2) Ca'Foscari University of Venice, Department of Molecular Science and Nanosystems (3) Network Unit, IMT Alti Studi Lucca )
Network analysis of a complex disease: the gut microbiota in the inflammatory bowel disease case
15 pages, 4 figures, style files included in submission
null
null
null
q-bio.QM cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
Inflammatory bowel diseases (IBD) are complex diseases in which the gut microbiota is attacked by the immune system of genetically predisposed subjects when they are exposed to yet unclear environmental factors. The complexity of this class of diseases makes them suitable to be represented and studied with network science. In the project, the metagenomic data of the gut microbiota of control, Crohn's disease, and ulcerative colitis subjects were divided into three ranges (prevalent, common, uncommon). Then, correlation networks and co-expression networks were used to represent these data. The former involved the calculation of Pearson's correlation and the use of the percolation threshold to binarize the adjacency matrix, whereas the latter involved the construction of bipartite networks and the monopartite projection after binarization of the biadjacency matrix. Then, centrality measures and community detection were applied to the networks so built. The main results concerned the modules of "Bacteroides", which were connected in the control subjects' correlation network; "Faecalibacterium prausnitzii", where co-enzyme A became central in the IBD correlation networks; and "Escherichia coli", whose module occupies different positions in the networks of the different diagnoses.
[ { "created": "Tue, 16 Aug 2022 14:20:39 GMT", "version": "v1" } ]
2022-08-17
[ [ "Hu", "Mirko", "" ], [ "Caldarelli", "Guido", "" ], [ "Gili", "Tommaso", "" ] ]
Inflammatory bowel diseases (IBD) are complex diseases in which the gut microbiota is attacked by the immune system of genetically predisposed subjects when they are exposed to yet unclear environmental factors. The complexity of this class of diseases makes them suitable to be represented and studied with network science. In the project, the metagenomic data of the gut microbiota of control, Crohn's disease, and ulcerative colitis subjects were divided into three ranges (prevalent, common, uncommon). Then, correlation networks and co-expression networks were used to represent these data. The former involved the calculation of Pearson's correlation and the use of the percolation threshold to binarize the adjacency matrix, whereas the latter involved the construction of bipartite networks and the monopartite projection after binarization of the biadjacency matrix. Then, centrality measures and community detection were applied to the networks so built. The main results concerned the modules of "Bacteroides", which were connected in the control subjects' correlation network; "Faecalibacterium prausnitzii", where co-enzyme A became central in the IBD correlation networks; and "Escherichia coli", whose module occupies different positions in the networks of the different diagnoses.
1807.02668
Yasser A. Ahmed
Yasser A. Ahmed, Safwat Ali, Ahmed Ghallab
Hair histology as a tool for forensic identification of some domestic animal species
8 pages, 3 Figures
EXCLI Journal 2018, 17:663-670
10.17179/excli2018-1478
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Animal hair examination at a crime scene may provide valuable information in forensic investigations. However, local reference databases for animal hair identification are rare. In the present study, we provide a differential histological analysis of the hair of some domestic animals in Upper Egypt. For this purpose, guard hairs of large ruminants (buffalo, camel and cow), small ruminants (sheep and goat), equines (horse and donkey) and canines (dog and cat) were collected and a comparative analysis was performed by light microscopy. Based on the hair cuticle scale pattern, the type and diameter of the medulla, and the pigmentation, characteristic differential features of each animal species were identified. The cuticle scale pattern was imbricate in all tested animals except the donkey, in which coronal scales were identified. The cuticle scale margin type, shape and the distance in between were characteristic for each animal species. The hair medulla was continuous in most of the tested animal species, with the exception of sheep, in which a fragmental medulla was detected. The diameter of the hair medulla and its margins differed according to the animal species. Hair shaft pigmentation was not detected in any of the tested animals, with the exception of camel and buffalo, in which granules and streak-like pigmentation were detected. In conclusion, the present study provides a first step towards the preparation of a complete local reference database for animal hair identification that can be used in forensic investigations.
[ { "created": "Sat, 7 Jul 2018 14:19:51 GMT", "version": "v1" } ]
2018-07-10
[ [ "Ahmed", "Yasser A.", "" ], [ "Ali", "Safwat", "" ], [ "Ghallab", "Ahmed", "" ] ]
Animal hair examination at a crime scene may provide valuable information in forensic investigations. However, local reference databases for animal hair identification are rare. In the present study, we provide a differential histological analysis of the hair of some domestic animals in Upper Egypt. For this purpose, guard hairs of large ruminants (buffalo, camel and cow), small ruminants (sheep and goat), equines (horse and donkey) and canines (dog and cat) were collected and a comparative analysis was performed by light microscopy. Based on the hair cuticle scale pattern, the type and diameter of the medulla, and the pigmentation, characteristic differential features of each animal species were identified. The cuticle scale pattern was imbricate in all tested animals except the donkey, in which coronal scales were identified. The cuticle scale margin type, shape and the distance in between were characteristic for each animal species. The hair medulla was continuous in most of the tested animal species, with the exception of sheep, in which a fragmental medulla was detected. The diameter of the hair medulla and its margins differed according to the animal species. Hair shaft pigmentation was not detected in any of the tested animals, with the exception of camel and buffalo, in which granules and streak-like pigmentation were detected. In conclusion, the present study provides a first step towards the preparation of a complete local reference database for animal hair identification that can be used in forensic investigations.
2007.04136
Marco Broccardo
Ziqi Wang, Marco Broccardo, Arnaud Mignan, Didier Sornette
The dynamics of entropy in the COVID-19 outbreaks
null
null
null
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the unfolding of the COVID-19 pandemic, mathematical modeling of epidemics has been perceived and used as a central element in understanding, predicting, and governing the pandemic event. However, soon it became clear that long term predictions were extremely challenging to address. Moreover, it is still unclear which metric shall be used for a global description of the evolution of the outbreaks. Yet a robust modeling of pandemic dynamics and a consistent choice of the transmission metric is crucial for an in-depth understanding of the macroscopic phenomenology and better-informed mitigation strategies. In this study, we propose a Markovian stochastic framework designed to describe the evolution of entropy during the COVID-19 pandemic and the instantaneous reproductive ratio. We then introduce and use entropy-based metrics of global transmission to measure the impact and temporal evolution of a pandemic event. In the formulation of the model, the temporal evolution of the outbreak is modeled by the master equation of a nonlinear Markov process for a statistically averaged individual, leading to a clear physical interpretation. We also provide a full Bayesian inversion scheme for calibration. The time evolution of the entropy rate, the absolute change in the system entropy, and the instantaneous reproductive ratio are natural and transparent outputs of this framework. The framework has the appealing property of being applicable to any compartmental epidemic model. As an illustration, we apply the proposed approach to a simple modification of the Susceptible-Exposed-Infected-Removed (SEIR) model. Applying the model to the Hubei region, South Korean, Italian, Spanish, German, and French COVID-19 data-sets, we discover a significant difference in the absolute change of entropy but highly regular trends for both the entropy evolution and the instantaneous reproductive ratio.
[ { "created": "Wed, 8 Jul 2020 14:07:45 GMT", "version": "v1" } ]
2020-07-09
[ [ "Wang", "Ziqi", "" ], [ "Broccardo", "Marco", "" ], [ "Mignan", "Arnaud", "" ], [ "Sornette", "Didier", "" ] ]
With the unfolding of the COVID-19 pandemic, mathematical modeling of epidemics has been perceived and used as a central element in understanding, predicting, and governing the pandemic event. However, soon it became clear that long term predictions were extremely challenging to address. Moreover, it is still unclear which metric shall be used for a global description of the evolution of the outbreaks. Yet a robust modeling of pandemic dynamics and a consistent choice of the transmission metric is crucial for an in-depth understanding of the macroscopic phenomenology and better-informed mitigation strategies. In this study, we propose a Markovian stochastic framework designed to describe the evolution of entropy during the COVID-19 pandemic and the instantaneous reproductive ratio. We then introduce and use entropy-based metrics of global transmission to measure the impact and temporal evolution of a pandemic event. In the formulation of the model, the temporal evolution of the outbreak is modeled by the master equation of a nonlinear Markov process for a statistically averaged individual, leading to a clear physical interpretation. We also provide a full Bayesian inversion scheme for calibration. The time evolution of the entropy rate, the absolute change in the system entropy, and the instantaneous reproductive ratio are natural and transparent outputs of this framework. The framework has the appealing property of being applicable to any compartmental epidemic model. As an illustration, we apply the proposed approach to a simple modification of the Susceptible-Exposed-Infected-Removed (SEIR) model. Applying the model to the Hubei region, South Korean, Italian, Spanish, German, and French COVID-19 data-sets, we discover a significant difference in the absolute change of entropy but highly regular trends for both the entropy evolution and the instantaneous reproductive ratio.
1902.03277
Shawn Gu
Shawn Gu and Tijana Milenkovic
Data-driven network alignment
null
null
10.1371/journal.pone.0234978
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological network alignment (NA) aims to find a node mapping between species' molecular networks that uncovers similar network regions, thus allowing for transfer of functional knowledge between the aligned nodes. However, current NA methods do not end up aligning functionally related nodes. A likely reason is that they assume it is topologically similar nodes that are functionally related. However, we show that this assumption does not hold well. So, a paradigm shift is needed with how the NA problem is approached. We redefine NA as a data-driven framework, TARA (daTA-dRiven network Alignment), which attempts to learn the relationship between topological relatedness and functional relatedness without assuming that topological relatedness corresponds to topological similarity, like traditional NA methods do. TARA trains a classifier to predict whether two nodes from different networks are functionally related based on their network topological patterns. We find that TARA is able to make accurate predictions. TARA then takes each pair of nodes that are predicted as related to be part of an alignment. Like traditional NA methods, TARA uses this alignment for the across-species transfer of functional knowledge. Clearly, TARA as currently implemented uses topological but not protein sequence information for this task. We find that TARA outperforms existing state-of-the-art NA methods that also use topological information, WAVE and SANA, and even outperforms or complements a state-of-the-art NA method that uses both topological and sequence information, PrimAlign. Hence, adding sequence information to TARA, which is our future work, is likely to further improve its performance.
[ { "created": "Fri, 8 Feb 2019 20:16:38 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2019 03:14:57 GMT", "version": "v2" }, { "created": "Mon, 14 Oct 2019 17:29:55 GMT", "version": "v3" }, { "created": "Fri, 12 Jun 2020 19:53:58 GMT", "version": "v4" } ]
2020-09-09
[ [ "Gu", "Shawn", "" ], [ "Milenkovic", "Tijana", "" ] ]
Biological network alignment (NA) aims to find a node mapping between species' molecular networks that uncovers similar network regions, thus allowing for transfer of functional knowledge between the aligned nodes. However, current NA methods do not end up aligning functionally related nodes. A likely reason is that they assume it is topologically similar nodes that are functionally related. However, we show that this assumption does not hold well. So, a paradigm shift is needed with how the NA problem is approached. We redefine NA as a data-driven framework, TARA (daTA-dRiven network Alignment), which attempts to learn the relationship between topological relatedness and functional relatedness without assuming that topological relatedness corresponds to topological similarity, like traditional NA methods do. TARA trains a classifier to predict whether two nodes from different networks are functionally related based on their network topological patterns. We find that TARA is able to make accurate predictions. TARA then takes each pair of nodes that are predicted as related to be part of an alignment. Like traditional NA methods, TARA uses this alignment for the across-species transfer of functional knowledge. Clearly, TARA as currently implemented uses topological but not protein sequence information for this task. We find that TARA outperforms existing state-of-the-art NA methods that also use topological information, WAVE and SANA, and even outperforms or complements a state-of-the-art NA method that uses both topological and sequence information, PrimAlign. Hence, adding sequence information to TARA, which is our future work, is likely to further improve its performance.
1805.11359
Hendrik Richter
Hendrik Richter
Properties of interaction networks, structure coefficients, and benefit-to-cost ratios
null
null
null
null
q-bio.PE cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In structured populations the spatial arrangement of cooperators and defectors on the interaction graph together with the structure of the graph itself determines the game dynamics and particularly whether or not fixation of cooperation (or defection) is favored. For a single cooperator (and a single defector) and a network described by a regular graph the question of fixation can be addressed by a single parameter, the structure coefficient. As this quantity is generic for any regular graph, we may call it the generic structure coefficient. For two and more cooperators (or several defectors) fixation properties can also be assigned by structure coefficients. These structure coefficients, however, depend on the arrangement of cooperators and defectors which we may interpret as a configuration of the game. Moreover, the coefficients are specific for a given interaction network modeled as regular graph, which is why we may call them specific structure coefficients. In this paper, we study how specific structure coefficients vary over interaction graphs and link the distributions obtained over different graphs to spectral properties of interaction networks. We also discuss implications for the benefit-to-cost ratios of donation games.
[ { "created": "Tue, 29 May 2018 11:25:13 GMT", "version": "v1" }, { "created": "Wed, 6 Jun 2018 09:29:55 GMT", "version": "v2" } ]
2018-06-07
[ [ "Richter", "Hendrik", "" ] ]
In structured populations the spatial arrangement of cooperators and defectors on the interaction graph together with the structure of the graph itself determines the game dynamics and particularly whether or not fixation of cooperation (or defection) is favored. For a single cooperator (and a single defector) and a network described by a regular graph the question of fixation can be addressed by a single parameter, the structure coefficient. As this quantity is generic for any regular graph, we may call it the generic structure coefficient. For two and more cooperators (or several defectors) fixation properties can also be assigned by structure coefficients. These structure coefficients, however, depend on the arrangement of cooperators and defectors which we may interpret as a configuration of the game. Moreover, the coefficients are specific for a given interaction network modeled as regular graph, which is why we may call them specific structure coefficients. In this paper, we study how specific structure coefficients vary over interaction graphs and link the distributions obtained over different graphs to spectral properties of interaction networks. We also discuss implications for the benefit-to-cost ratios of donation games.
2011.03442
Martin Reczko
Dimitra N. Panou and Martin Reczko
DeepFoldit -- A Deep Reinforcement Learning Neural Network Folding Proteins
108 pages, 66 figures
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite considerable progress, ab initio protein structure prediction remains suboptimal. A crowdsourcing approach is the online puzzle video game Foldit, that provided several useful results that matched or even outperformed algorithmically computed solutions. Using Foldit, the WeFold crowd had several successful participations in the Critical Assessment of Techniques for Protein Structure Prediction. Based on the recent Foldit standalone version, we trained a deep reinforcement neural network called DeepFoldit to improve the score assigned to an unfolded protein, using the Q-learning method with experience replay. This paper is focused on model improvement through hyperparameter tuning. We examined various implementations by examining different model architectures and changing hyperparameter values to improve the accuracy of the model. The new model hyper-parameters also improved its ability to generalize. Initial results, from the latest implementation, show that given a set of small unfolded training proteins, DeepFoldit learns action sequences that improve the score both on the training set and on novel test proteins. Our approach combines the intuitive user interface of Foldit with the efficiency of deep reinforcement learning.
[ { "created": "Wed, 28 Oct 2020 16:05:42 GMT", "version": "v1" } ]
2020-11-09
[ [ "Panou", "Dimitra N.", "" ], [ "Reczko", "Martin", "" ] ]
Despite considerable progress, ab initio protein structure prediction remains suboptimal. A crowdsourcing approach is the online puzzle video game Foldit, that provided several useful results that matched or even outperformed algorithmically computed solutions. Using Foldit, the WeFold crowd had several successful participations in the Critical Assessment of Techniques for Protein Structure Prediction. Based on the recent Foldit standalone version, we trained a deep reinforcement neural network called DeepFoldit to improve the score assigned to an unfolded protein, using the Q-learning method with experience replay. This paper is focused on model improvement through hyperparameter tuning. We examined various implementations by examining different model architectures and changing hyperparameter values to improve the accuracy of the model. The new model hyper-parameters also improved its ability to generalize. Initial results, from the latest implementation, show that given a set of small unfolded training proteins, DeepFoldit learns action sequences that improve the score both on the training set and on novel test proteins. Our approach combines the intuitive user interface of Foldit with the efficiency of deep reinforcement learning.
1909.07276
Stephanus Marnus Stoltz
Marnus Stoltz, Boris Bauemer, Remco Bouckaert, Colin Fox, Gordon Hiscott, David Bryant
Bayesian inference of species trees using diffusion models
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/4.0/
We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with a new algorithm for numerically computing likelihoods of quantitative traits. The diffusion approach allows for analysis of datasets containing hundreds or thousands of individuals. The method, which we call \snapper, has been implemented as part of the Beast2 package. We introduce the models, the efficient algorithms, and report performance of \snapper on simulated data sets and on SNP data from rattlesnakes and freshwater turtles.
[ { "created": "Mon, 16 Sep 2019 15:27:29 GMT", "version": "v1" }, { "created": "Tue, 17 Sep 2019 09:50:04 GMT", "version": "v2" } ]
2019-09-18
[ [ "Stoltz", "Marnus", "" ], [ "Bauemer", "Boris", "" ], [ "Bouckaert", "Remco", "" ], [ "Fox", "Colin", "" ], [ "Hiscott", "Gordon", "" ], [ "Bryant", "David", "" ] ]
We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with a new algorithm for numerically computing likelihoods of quantitative traits. The diffusion approach allows for analysis of datasets containing hundreds or thousands of individuals. The method, which we call \snapper, has been implemented as part of the Beast2 package. We introduce the models, the efficient algorithms, and report performance of \snapper on simulated data sets and on SNP data from rattlesnakes and freshwater turtles.
1410.5103
Joseph Crawford
Joseph Crawford and Tijana Milenkovi\'c
GREAT: GRaphlet Edge-based network AlignmenT
16 pages, 9 figures
null
null
null
q-bio.MN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network alignment aims to find regions of topological or functional similarities between networks. In computational biology, it can be used to transfer biological knowledge from a well-studied species to a poorly-studied species between aligned network regions. Typically, existing network aligners first compute similarities between nodes in different networks (via a node cost function) and then aim to find a high-scoring alignment (node mapping between the networks) with respect to "node conservation", typically the total node cost function over all aligned nodes. Only after an alignment is constructed, the existing methods evaluate its quality with respect to an alternative measure, such as "edge conservation". Thus, we recently aimed to directly optimize edge conservation while constructing an alignment, which improved alignment quality. Here, we approach a novel idea of maximizing both node and edge conservation, and we also approach this idea from a novel perspective, by aligning optimally edges between networks first in order to improve node cost function needed to then align well nodes between the networks. In the process, unlike the existing measures of edge conservation that treat each conserved edge the same, we favor conserved edges that are topologically similar over conserved edges that are topologically dissimilar. We show that our novel method, which we call GRaphlet Edge AlignmenT (GREAT), improves upon state-of-the-art methods that aim to optimize node conservation only or edge conservation only.
[ { "created": "Sun, 19 Oct 2014 19:03:50 GMT", "version": "v1" } ]
2014-10-21
[ [ "Crawford", "Joseph", "" ], [ "Milenković", "Tijana", "" ] ]
Network alignment aims to find regions of topological or functional similarities between networks. In computational biology, it can be used to transfer biological knowledge from a well-studied species to a poorly-studied species between aligned network regions. Typically, existing network aligners first compute similarities between nodes in different networks (via a node cost function) and then aim to find a high-scoring alignment (node mapping between the networks) with respect to "node conservation", typically the total node cost function over all aligned nodes. Only after an alignment is constructed, the existing methods evaluate its quality with respect to an alternative measure, such as "edge conservation". Thus, we recently aimed to directly optimize edge conservation while constructing an alignment, which improved alignment quality. Here, we approach a novel idea of maximizing both node and edge conservation, and we also approach this idea from a novel perspective, by aligning optimally edges between networks first in order to improve node cost function needed to then align well nodes between the networks. In the process, unlike the existing measures of edge conservation that treat each conserved edge the same, we favor conserved edges that are topologically similar over conserved edges that are topologically dissimilar. We show that our novel method, which we call GRaphlet Edge AlignmenT (GREAT), improves upon state-of-the-art methods that aim to optimize node conservation only or edge conservation only.
1603.06342
Kieran Fox
Kieran C. R. Fox, Matthew L. Dixon, Savannah Nijeboer, Manesh Girn, James L. Floman, Michael Lifshitz, Melissa Ellamil, Peter Sedlmeier, Kalina Christoff
Functional neuroanatomy of meditation: A review and meta-analysis of 78 functional neuroimaging investigations
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Meditation is a family of mental practices that encompasses a wide array of techniques employing distinctive mental strategies. We systematically reviewed 78 functional neuroimaging (fMRI and PET) studies of meditation, and used activation likelihood estimation to meta-analyze 257 peak foci from 31 experiments involving 527 participants. We found reliably dissociable patterns of brain activation and deactivation for four common styles of meditation (focused attention, mantra recitation, open monitoring, and compassion/loving-kindness), and suggestive differences for three others (visualization, sense-withdrawal, and non-dual awareness practices). Overall, dissociable activation patterns are congruent with the psychological and behavioral aims of each practice. Some brain areas are recruited consistently across multiple techniques - including insula, pre/supplementary motor cortices, dorsal anterior cingulate cortex, and frontopolar cortex - but convergence is the exception rather than the rule. A preliminary effect-size meta-analysis found medium effects for both activations (d = .59) and deactivations (d = -.74), suggesting potential practical significance. Our meta-analysis supports the neurophysiological dissociability of meditation practices, but also raises many methodological concerns and suggests avenues for future research.
[ { "created": "Mon, 21 Mar 2016 07:28:24 GMT", "version": "v1" } ]
2016-03-22
[ [ "Fox", "Kieran C. R.", "" ], [ "Dixon", "Matthew L.", "" ], [ "Nijeboer", "Savannah", "" ], [ "Girn", "Manesh", "" ], [ "Floman", "James L.", "" ], [ "Lifshitz", "Michael", "" ], [ "Ellamil", "Melissa", "" ], [ "Sedlmeier", "Peter", "" ], [ "Christoff", "Kalina", "" ] ]
Meditation is a family of mental practices that encompasses a wide array of techniques employing distinctive mental strategies. We systematically reviewed 78 functional neuroimaging (fMRI and PET) studies of meditation, and used activation likelihood estimation to meta-analyze 257 peak foci from 31 experiments involving 527 participants. We found reliably dissociable patterns of brain activation and deactivation for four common styles of meditation (focused attention, mantra recitation, open monitoring, and compassion/loving-kindness), and suggestive differences for three others (visualization, sense-withdrawal, and non-dual awareness practices). Overall, dissociable activation patterns are congruent with the psychological and behavioral aims of each practice. Some brain areas are recruited consistently across multiple techniques - including insula, pre/supplementary motor cortices, dorsal anterior cingulate cortex, and frontopolar cortex - but convergence is the exception rather than the rule. A preliminary effect-size meta-analysis found medium effects for both activations (d = .59) and deactivations (d = -.74), suggesting potential practical significance. Our meta-analysis supports the neurophysiological dissociability of meditation practices, but also raises many methodological concerns and suggests avenues for future research.
2005.14532
Tomasz Piasecki
Tomasz Piasecki, Piotr B. Mucha, Magdalena Rosi\'nska
A new SEIR type model including quarantine effects and its application to analysis of Covid-19 pandemia in Poland in March-April 2020
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contact tracing and quarantine are well established non-pharmaceutical epidemic control tools. The paper aims to clarify the impact of these measures in COVID-19 epidemic. A new deterministic model is introduced (SEIRQ: susceptible, exposed, infectious, removed, quarantined) with Q compartment capturing individuals and releasing them with delay. We obtain a simple rule defining the reproduction number $\mathcal{R}$ in terms of quarantine parameters, ratio of diagnosed cases and transmission parameters. The model is applied to the epidemic in Poland in March - April 2020, when social distancing measures were in place. We investigate 3 scenarios corresponding to different ratios of diagnosed cases. Our results show that depending on the scenario contact tracing could have prevented from 50\% to over 90\% of cases. The effects of quarantine are limited by fraction of undiagnosed cases. Taking into account the transmission intensity in Poland prior to introduction of social restrictions it is unlikely that the control of the epidemic could be achieved without any social distancing measures.
[ { "created": "Fri, 29 May 2020 12:39:29 GMT", "version": "v1" } ]
2020-06-01
[ [ "Piasecki", "Tomasz", "" ], [ "Mucha", "Piotr B.", "" ], [ "Rosińska", "Magdalena", "" ] ]
Contact tracing and quarantine are well established non-pharmaceutical epidemic control tools. The paper aims to clarify the impact of these measures in COVID-19 epidemic. A new deterministic model is introduced (SEIRQ: susceptible, exposed, infectious, removed, quarantined) with Q compartment capturing individuals and releasing them with delay. We obtain a simple rule defining the reproduction number $\mathcal{R}$ in terms of quarantine parameters, ratio of diagnosed cases and transmission parameters. The model is applied to the epidemic in Poland in March - April 2020, when social distancing measures were in place. We investigate 3 scenarios corresponding to different ratios of diagnosed cases. Our results show that depending on the scenario contact tracing could have prevented from 50\% to over 90\% of cases. The effects of quarantine are limited by fraction of undiagnosed cases. Taking into account the transmission intensity in Poland prior to introduction of social restrictions it is unlikely that the control of the epidemic could be achieved without any social distancing measures.
1004.4992
Mitchell Berger
C B Prior and M A Berger
The evaluation of directionally writhing polymers
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss the appropriate techniques for modelling the geometry of open ended elastic polymer molecules. The molecule is assumed to have fixed endpoints on a boundary surface. In particular we discuss the concept of the winding number, a directional measure of the linking of two curves, which can be shown to be invariant to the set of continuous deformations vanishing at the polymer's end-point and which forbid it from passing through itself. This measure is shown to be the appropriate constraint required to evaluate the geometrical properties of a constrained DNA molecule. Using the net winding measure we define a model of an open ended constrained DNA molecule which combines the necessary constraint of self-avoidance with being analytically tractable. This model builds upon the local models of Bouchiat and Mezard (2000). In particular, we present a new derivation of the polar writhe expression, which detects both the local winding of the curve and non local winding between different sections of the curve. We then show that this expression correctly tracks the net twisting of a DNA molecule subject to rotation at the endpoints, unlike other definitions used in the literature.
[ { "created": "Wed, 28 Apr 2010 10:47:54 GMT", "version": "v1" } ]
2010-04-29
[ [ "Prior", "C B", "" ], [ "Berger", "M A", "" ] ]
We discuss the appropriate techniques for modelling the geometry of open ended elastic polymer molecules. The molecule is assumed to have fixed endpoints on a boundary surface. In particular we discuss the concept of the winding number, a directional measure of the linking of two curves, which can be shown to be invariant to the set of continuous deformations vanishing at the polymer's end-point and which forbid it from passing through itself. This measure is shown to be the appropriate constraint required to evaluate the geometrical properties of a constrained DNA molecule. Using the net winding measure we define a model of an open ended constrained DNA molecule which combines the necessary constraint of self-avoidance with being analytically tractable. This model builds upon the local models of Bouchiat and Mezard (2000). In particular, we present a new derivation of the polar writhe expression, which detects both the local winding of the curve and non local winding between different sections of the curve. We then show that this expression correctly tracks the net twisting of a DNA molecule subject to rotation at the endpoints, unlike other definitions used in the literature.
2309.15366
Vanessa Lopez-Marrero
Vanessa Lopez-Marrero, Patrick R. Johnstone, Gilchan Park, Xihaier Luo
Density Estimation via Measure Transport: Outlook for Applications in the Biological Sciences
46 pages; 18 figures; minor revisions; DOI added
Stat. Anal. Data Min.: ASA Data Sci. J. 17 (2024)
10.1002/sam.11687
null
q-bio.QM cs.LG physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One among several advantages of measure transport methods is that they allow for a unified framework for processing and analysis of data distributed according to a wide class of probability measures. Within this context, we present results from computational studies aimed at assessing the potential of measure transport techniques, specifically, the use of triangular transport maps, as part of a workflow intended to support research in the biological sciences. Scenarios characterized by the availability of limited amount of sample data, which are common in domains such as radiation biology, are of particular interest. We find that when estimating a distribution density function given limited amount of sample data, adaptive transport maps are advantageous. In particular, statistics gathered from computing series of adaptive transport maps, trained on a series of randomly chosen subsets of the set of available data samples, leads to uncovering information hidden in the data. As a result, in the radiation biology application considered here, this approach provides a tool for generating hypotheses about gene relationships and their dynamics under radiation exposure.
[ { "created": "Wed, 27 Sep 2023 02:36:42 GMT", "version": "v1" }, { "created": "Fri, 29 Dec 2023 20:39:16 GMT", "version": "v2" }, { "created": "Sun, 24 Mar 2024 05:06:16 GMT", "version": "v3" }, { "created": "Mon, 13 May 2024 02:17:52 GMT", "version": "v4" } ]
2024-05-14
[ [ "Lopez-Marrero", "Vanessa", "" ], [ "Johnstone", "Patrick R.", "" ], [ "Park", "Gilchan", "" ], [ "Luo", "Xihaier", "" ] ]
One among several advantages of measure transport methods is that they allow for a unified framework for processing and analysis of data distributed according to a wide class of probability measures. Within this context, we present results from computational studies aimed at assessing the potential of measure transport techniques, specifically, the use of triangular transport maps, as part of a workflow intended to support research in the biological sciences. Scenarios characterized by the availability of limited amount of sample data, which are common in domains such as radiation biology, are of particular interest. We find that when estimating a distribution density function given limited amount of sample data, adaptive transport maps are advantageous. In particular, statistics gathered from computing series of adaptive transport maps, trained on a series of randomly chosen subsets of the set of available data samples, leads to uncovering information hidden in the data. As a result, in the radiation biology application considered here, this approach provides a tool for generating hypotheses about gene relationships and their dynamics under radiation exposure.
2306.01791
Blessing Emerenini
Blessing O. Emerenini, Doris Hartung, Ricardo N. G Reyes Grimaldo, Claire Canner, Maya Williams, Ephraim Agyingi, and Robert Osgood
Understanding Biofilm-Phage Interactions in Cystic Fibrosis Patients Using Mathematical Frameworks
null
null
null
null
q-bio.QM math.DS
http://creativecommons.org/licenses/by/4.0/
When planktonic bacteria adhere together to a surface, they begin to form biofilms, or communities of bacteria. Biofilm formation in a host can be extremely problematic if left untreated, especially since antibiotics can be ineffective in treating the bacteria. Certain lung diseases such as cystic fibrosis can cause the formation of biofilms in the lungs and can be fatal. With antibiotic-resistant bacteria, the use of phage therapy has been introduced as an alternative or an additive to the use of antibiotics in order to combat biofilm growth. Phage therapy utilizes phages, or viruses that attack bacteria, in order to penetrate and eradicate biofilms. In order to evaluate the effectiveness of phage therapy against biofilm bacteria, we adapt an ordinary differential equation model to describe the dynamics of phage-biofilm combat in the lungs. We then create our own phage-biofilm model with ordinary differential equations and stochastic modeling. Then, simulations of parameter alterations in both models are investigated to assess how they will affect the efficiency of phage therapy against bacteria. By increasing the phage mortality rate, the biofilm growth can be balanced and allow the biofilm to be more vulnerable to antibiotics. Thus, phage therapy is an effective aid in biofilm treatment.
[ { "created": "Thu, 1 Jun 2023 03:52:36 GMT", "version": "v1" } ]
2023-06-06
[ [ "Emerenini", "Blessing O.", "" ], [ "Hartung", "Doris", "" ], [ "Grimaldo", "Ricardo N. G Reyes", "" ], [ "Canner", "Claire", "" ], [ "Williams", "Maya", "" ], [ "Agyingi", "Ephraim", "" ], [ "Osgood", "Robert", "" ] ]
When planktonic bacteria adhere together to a surface, they begin to form biofilms, or communities of bacteria. Biofilm formation in a host can be extremely problematic if left untreated, especially since antibiotics can be ineffective in treating the bacteria. Certain lung diseases such as cystic fibrosis can cause the formation of biofilms in the lungs and can be fatal. With antibiotic-resistant bacteria, the use of phage therapy has been introduced as an alternative or an additive to the use of antibiotics in order to combat biofilm growth. Phage therapy utilizes phages, or viruses that attack bacteria, in order to penetrate and eradicate biofilms. In order to evaluate the effectiveness of phage therapy against biofilm bacteria, we adapt an ordinary differential equation model to describe the dynamics of phage-biofilm combat in the lungs. We then create our own phage-biofilm model with ordinary differential equations and stochastic modeling. Then, simulations of parameter alterations in both models are investigated to assess how they will affect the efficiency of phage therapy against bacteria. By increasing the phage mortality rate, the biofilm growth can be balanced and allow the biofilm to be more vulnerable to antibiotics. Thus, phage therapy is an effective aid in biofilm treatment.
2211.05032
Chris Watkins
Jenny M. Poulton, Lee Altenberg, Chris Watkins
Evolution with recombination as a Metropolis-Hastings sampling procedure
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a population genetic model of evolution, which includes haploid selection, mutation, recombination, and drift. The mutation-selection equilibrium can be expressed exactly in closed form for arbitrary fitness functions without resorting to diffusion approximations. Tractability is achieved by generating new offspring using n-parent rather than 2-parent recombination. While this enforces linkage equilibrium among offspring, it allows analysis of the whole population under linkage disequilibrium. We derive a general and exact relationship between fitness fluctuations and response to selection. Our assumptions allow analytical calculation of the stationary distribution of the model for a variety of non-trivial fitness functions. These results allow us to speak to genetic architecture, i.e., what stationary distributions result from different fitness functions. This paper presents methods for exactly deriving stationary states for finite and infinite populations. This method can be applied to many fitness functions, and we give exact calculations for four of these. These results allow us to investigate metastability, tradeoffs between fitness functions, and even consider error-correcting codes.
[ { "created": "Wed, 9 Nov 2022 17:07:12 GMT", "version": "v1" }, { "created": "Fri, 24 Feb 2023 22:08:48 GMT", "version": "v2" } ]
2023-02-28
[ [ "Poulton", "Jenny M.", "" ], [ "Altenberg", "Lee", "" ], [ "Watkins", "Chris", "" ] ]
This work presents a population genetic model of evolution, which includes haploid selection, mutation, recombination, and drift. The mutation-selection equilibrium can be expressed exactly in closed form for arbitrary fitness functions without resorting to diffusion approximations. Tractability is achieved by generating new offspring using n-parent rather than 2-parent recombination. While this enforces linkage equilibrium among offspring, it allows analysis of the whole population under linkage disequilibrium. We derive a general and exact relationship between fitness fluctuations and response to selection. Our assumptions allow analytical calculation of the stationary distribution of the model for a variety of non-trivial fitness functions. These results allow us to speak to genetic architecture, i.e., what stationary distributions result from different fitness functions. This paper presents methods for exactly deriving stationary states for finite and infinite populations. This method can be applied to many fitness functions, and we give exact calculations for four of these. These results allow us to investigate metastability, tradeoffs between fitness functions, and even consider error-correcting codes.
1412.4312
Stephen Hedges
S. Blair Hedges, Julie Marin, Michael Suleski, Madeline Paymer, and Sudhir Kumar
Tree of life reveals clock-like speciation and diversification
17 pages, 6 figures, submitted to journal
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomic data are rapidly resolving the tree of living species calibrated to time, the timetree of life, which will provide a framework for research in diverse fields of science. Previous analyses of taxonomically restricted timetrees have found a decline in the rate of diversification in many groups of organisms, often attributed to ecological interactions among species. Here we have synthesized a global timetree of life from 2,274 studies representing 50,632 species and examined the pattern and rate of diversification as well as the timing of speciation. We found that species diversity has been mostly expanding overall and in many smaller groups of species, and that the rate of diversification in eukaryotes has been mostly constant. We also identified, and avoided, potential biases that may have influenced previous analyses of diversification including low levels of taxon sampling, small clade size, and the inclusion of stem branches in clade analyses. We found consistency in time-to-speciation among plants and animals, approximately two million years, as measured by intervals of crown and stem species times. Together, this clock-like change at different levels suggests that speciation and diversification are processes dominated by random events and that adaptive change is largely a separate process.
[ { "created": "Sun, 14 Dec 2014 04:47:40 GMT", "version": "v1" } ]
2014-12-16
[ [ "Hedges", "S. Blair", "" ], [ "Marin", "Julie", "" ], [ "Suleski", "Michael", "" ], [ "Paymer", "Madeline", "" ], [ "Kumar", "Sudhir", "" ] ]
Genomic data are rapidly resolving the tree of living species calibrated to time, the timetree of life, which will provide a framework for research in diverse fields of science. Previous analyses of taxonomically restricted timetrees have found a decline in the rate of diversification in many groups of organisms, often attributed to ecological interactions among species. Here we have synthesized a global timetree of life from 2,274 studies representing 50,632 species and examined the pattern and rate of diversification as well as the timing of speciation. We found that species diversity has been mostly expanding overall and in many smaller groups of species, and that the rate of diversification in eukaryotes has been mostly constant. We also identified, and avoided, potential biases that may have influenced previous analyses of diversification including low levels of taxon sampling, small clade size, and the inclusion of stem branches in clade analyses. We found consistency in time-to-speciation among plants and animals, approximately two million years, as measured by intervals of crown and stem species times. Together, this clock-like change at different levels suggests that speciation and diversification are processes dominated by random events and that adaptive change is largely a separate process.
2009.08940
Pavel Krapivsky
P. L. Krapivsky
An infection process near criticality: Influence of the initial condition
13 pages, 3 figures
J. Stat. Mech. 013501 (2021)
10.1088/1742-5468/abd4cd
null
q-bio.PE cond-mat.stat-mech math.PR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate how the initial number of infected individuals affects the behavior of the critical susceptible-infected-recovered process. We analyze the outbreak size distribution, duration of the outbreaks, and the role of fluctuations.
[ { "created": "Fri, 18 Sep 2020 17:28:54 GMT", "version": "v1" }, { "created": "Thu, 3 Dec 2020 16:37:25 GMT", "version": "v2" } ]
2021-06-09
[ [ "Krapivsky", "P. L.", "" ] ]
We investigate how the initial number of infected individuals affects the behavior of the critical susceptible-infected-recovered process. We analyze the outbreak size distribution, duration of the outbreaks, and the role of fluctuations.
1606.01017
Mahmoud Hassan
Ahmad Mheich, Mahmoud Hassan, Olivier Dufor, Mohamad Khalil and Fabrice Wendling
Combining EEG source connectivity and network similarity: Application to object categorization in the human brain
5 pages, 2 figures. Accepted for 2016 IEEE Workshop on Statistical Signal Processing
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major challenge in cognitive neuroscience is to evaluate the ability of the human brain to categorize or group visual stimuli based on common features. This categorization process is very fast, occurring on a time scale of a few hundred milliseconds. However, accurate tracking of the spatiotemporal dynamics of large-scale brain networks is still an unsolved issue. Here, we show the combination of a recently developed method, called dense-EEG source connectivity, to identify functional brain networks with excellent temporal and spatial resolution, and an algorithm, called SimNet, to compute brain network similarity. Two categories of visual stimuli were analysed in this study: immobile and mobile. Network similarity was assessed within each category (intra-condition) and between categories (inter-condition). Results showed high similarity within each category and low similarity between the two categories. A significant difference between similarities computed in the intra- and inter-conditions was observed during the period of 120-190 ms, supposed to be related to visual recognition and memory access. We speculate that these observations will be very helpful toward understanding object categorization in the human brain from a network perspective.
[ { "created": "Fri, 3 Jun 2016 09:32:51 GMT", "version": "v1" } ]
2016-06-06
[ [ "Mheich", "Ahmad", "" ], [ "Hassan", "Mahmoud", "" ], [ "Dufor", "Olivier", "" ], [ "Khalil", "Mohamad", "" ], [ "Wendling", "Fabrice", "" ] ]
A major challenge in cognitive neuroscience is to evaluate the ability of the human brain to categorize or group visual stimuli based on common features. This categorization process is very fast, occurring on a time scale of a few hundred milliseconds. However, accurate tracking of the spatiotemporal dynamics of large-scale brain networks is still an unsolved issue. Here, we show the combination of a recently developed method, called dense-EEG source connectivity, to identify functional brain networks with excellent temporal and spatial resolution, and an algorithm, called SimNet, to compute brain network similarity. Two categories of visual stimuli were analysed in this study: immobile and mobile. Network similarity was assessed within each category (intra-condition) and between categories (inter-condition). Results showed high similarity within each category and low similarity between the two categories. A significant difference between similarities computed in the intra- and inter-conditions was observed during the period of 120-190 ms, supposed to be related to visual recognition and memory access. We speculate that these observations will be very helpful toward understanding object categorization in the human brain from a network perspective.
1901.04465
Noah Rosenberg
Zoe M. Himwich and Noah A. Rosenberg
Roadblocked monotonic paths and the enumeration of coalescent histories for non-matching caterpillar gene trees and species trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a gene tree topology and a species tree topology, a coalescent history represents a possible mapping of the list of gene tree coalescences to associated branches of a species tree on which those coalescences take place. Enumerative properties of coalescent histories have been of interest in the analysis of relationships between gene trees and species trees. The simplest enumerative result identifies a bijection between coalescent histories for a matching caterpillar gene tree and species tree with monotonic paths that do not cross the diagonal of a square lattice, establishing that the associated number of coalescent histories for $n$-taxon matching caterpillar trees ($n \geqslant 2$) is the Catalan number $C_{n-1} = \frac{1}{n} {2n-2 \choose n-1}$. Here, we show that a similar bijection applies for \emph{non-matching} caterpillars, connecting coalescent histories for a non-matching caterpillar gene tree and species tree to a class of \emph{roadblocked} monotonic paths. The result provides a simplified algorithm for enumerating coalescent histories in the non-matching caterpillar case. It enables a rapid proof of a known result that given a caterpillar species tree, no non-matching caterpillar gene tree has a number of coalescent histories exceeding that of the matching gene tree. Additional results on coalescent histories can be obtained by a bijection between permissible roadblocked monotonic paths and Dyck paths. We study the number of coalescent histories for non-matching caterpillar gene trees that differ from the species tree by nearest-neighbor-interchange and subtree-prune-and-regraft moves, characterizing the non-matching caterpillar with the largest number of coalescent histories. We discuss the implications of the results for the study of the combinatorics of gene trees and species trees.
[ { "created": "Mon, 14 Jan 2019 18:58:38 GMT", "version": "v1" } ]
2019-01-15
[ [ "Himwich", "Zoe M.", "" ], [ "Rosenberg", "Noah A.", "" ] ]
Given a gene tree topology and a species tree topology, a coalescent history represents a possible mapping of the list of gene tree coalescences to associated branches of a species tree on which those coalescences take place. Enumerative properties of coalescent histories have been of interest in the analysis of relationships between gene trees and species trees. The simplest enumerative result identifies a bijection between coalescent histories for a matching caterpillar gene tree and species tree with monotonic paths that do not cross the diagonal of a square lattice, establishing that the associated number of coalescent histories for $n$-taxon matching caterpillar trees ($n \geqslant 2$) is the Catalan number $C_{n-1} = \frac{1}{n} {2n-2 \choose n-1}$. Here, we show that a similar bijection applies for \emph{non-matching} caterpillars, connecting coalescent histories for a non-matching caterpillar gene tree and species tree to a class of \emph{roadblocked} monotonic paths. The result provides a simplified algorithm for enumerating coalescent histories in the non-matching caterpillar case. It enables a rapid proof of a known result that given a caterpillar species tree, no non-matching caterpillar gene tree has a number of coalescent histories exceeding that of the matching gene tree. Additional results on coalescent histories can be obtained by a bijection between permissible roadblocked monotonic paths and Dyck paths. We study the number of coalescent histories for non-matching caterpillar gene trees that differ from the species tree by nearest-neighbor-interchange and subtree-prune-and-regraft moves, characterizing the non-matching caterpillar with the largest number of coalescent histories. We discuss the implications of the results for the study of the combinatorics of gene trees and species trees.
1603.07211
Shuo Chen
Shuo Chen, F. DuBois Bowman, and Yishi Xing
Differentially Expressed Functional Connectivity Networks with K-partite Graph Topology
null
null
null
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging brain network studies suggest that interactions between various distributed neuronal populations may be characterized by an organized complex topological structure. Many brain diseases are associated with altered topological patterns of brain connectivity. Therefore, a key inquiry of connectivity analysis is to identify network-level differentially expressed connections that have low false positive rates, sufficient statistical power, and high reproducibility. In this paper, we propose a novel statistical approach to fulfill this goal by leveraging the topological structure of differentially expressed functional connections or edges in a graphical representation. We propose a new algorithm to automatically detect the latent topology of a k-partite graph structure, and we also provide statistical inferential techniques to test the detected topology. We evaluate our new methods via extensive numerical studies. We also apply our new approach to resting state fMRI data (24 cases and 18 controls) for Parkinson's disease research. The detected connectivity network biomarker with the k-partite graph topological structure reveals underlying neural features distinguishing Parkinson's disease patients from healthy control subjects.
[ { "created": "Wed, 23 Mar 2016 14:54:56 GMT", "version": "v1" } ]
2016-03-24
[ [ "Chen", "Shuo", "" ], [ "Bowman", "F. DuBois", "" ], [ "Xing", "Yishi", "" ] ]
Emerging brain network studies suggest that interactions between various distributed neuronal populations may be characterized by an organized complex topological structure. Many brain diseases are associated with altered topological patterns of brain connectivity. Therefore, a key inquiry of connectivity analysis is to identify network-level differentially expressed connections that have low false positive rates, sufficient statistical power, and high reproducibility. In this paper, we propose a novel statistical approach to fulfill this goal by leveraging the topological structure of differentially expressed functional connections or edges in a graphical representation. We propose a new algorithm to automatically detect the latent topology of a k-partite graph structure, and we also provide statistical inferential techniques to test the detected topology. We evaluate our new methods via extensive numerical studies. We also apply our new approach to resting state fMRI data (24 cases and 18 controls) for Parkinson's disease research. The detected connectivity network biomarker with the k-partite graph topological structure reveals underlying neural features distinguishing Parkinson's disease patients from healthy control subjects.
2010.06456
Jakub Tomczak
Ewelina Weglarz-Tomczak, Jakub M. Tomczak, Agoston E. Eiben, Stanley Brul
Population-based Optimization for Kinetic Parameter Identification in Glycolytic Pathway in Saccharomyces cerevisiae
Code at https://github.com/jmtomczak/popi
null
null
null
q-bio.BM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models in systems biology are mathematical descriptions of biological processes that are used to answer questions and gain a better understanding of biological phenomena. Dynamic models represent the network through the rates of production and consumption of the individual species. The ordinary differential equations that describe the rates of the reactions in the model include a set of parameters. The parameters are important quantities for understanding and analyzing biological systems. Moreover, perturbations of the kinetic parameters are correlated with upregulation of the system by cell-intrinsic and cell-extrinsic factors, including mutations and environmental changes. Here, we aim at using well-established models of biological pathways to identify parameter values and pinpoint their potential perturbation/deviation. We present our population-based optimization framework that is able to identify kinetic parameters in the dynamic model based on only input and output data (i.e., time courses of selected metabolites). Our approach can deal with the identification of non-measurable parameters as well as with discovering deviations of the parameters. We present our proposed optimization framework on the example of the well-studied glycolytic pathway in Saccharomyces cerevisiae.
[ { "created": "Sat, 19 Sep 2020 21:57:28 GMT", "version": "v1" } ]
2020-10-14
[ [ "Weglarz-Tomczak", "Ewelina", "" ], [ "Tomczak", "Jakub M.", "" ], [ "Eiben", "Agoston E.", "" ], [ "Brul", "Stanley", "" ] ]
Models in systems biology are mathematical descriptions of biological processes that are used to answer questions and gain a better understanding of biological phenomena. Dynamic models represent the network through the rates of production and consumption of the individual species. The ordinary differential equations that describe the rates of the reactions in the model include a set of parameters. The parameters are important quantities for understanding and analyzing biological systems. Moreover, perturbations of the kinetic parameters are correlated with upregulation of the system by cell-intrinsic and cell-extrinsic factors, including mutations and environmental changes. Here, we aim at using well-established models of biological pathways to identify parameter values and pinpoint their potential perturbation/deviation. We present our population-based optimization framework that is able to identify kinetic parameters in the dynamic model based on only input and output data (i.e., time courses of selected metabolites). Our approach can deal with the identification of non-measurable parameters as well as with discovering deviations of the parameters. We present our proposed optimization framework on the example of the well-studied glycolytic pathway in Saccharomyces cerevisiae.
1212.6678
Jeffrey Shaman
Jeffrey Shaman, Alicia Karspeck, Marc Lipsitch
Week 51 Influenza Forecast for the 2012-2013 U.S. Season
arXiv admin note: text overlap with arXiv:1212.5750
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This document is part of a series of near real-time weekly influenza forecasts made during the 2012-2013 influenza season. Here we present results of a forecast initiated following assimilation of observations for Week 51 (i.e. the forecast begins December 23, 2012) for municipalities in the United States. The forecast was made on December 28, 2012. Results from forecasts initiated the four previous weeks (Weeks 47-50) are also presented. Predictions generated with an alternate SIRS model, run without absolute humidity forcing (no AH), are also presented.
[ { "created": "Sun, 30 Dec 2012 00:58:23 GMT", "version": "v1" } ]
2013-01-01
[ [ "Shaman", "Jeffrey", "" ], [ "Karspeck", "Alicia", "" ], [ "Lipsitch", "Marc", "" ] ]
This document is part of a series of near real-time weekly influenza forecasts made during the 2012-2013 influenza season. Here we present results of a forecast initiated following assimilation of observations for Week 51 (i.e. the forecast begins December 23, 2012) for municipalities in the United States. The forecast was made on December 28, 2012. Results from forecasts initiated the four previous weeks (Weeks 47-50) are also presented. Predictions generated with an alternate SIRS model, run without absolute humidity forcing (no AH), are also presented.
2308.05125
Mamata Das
Mamata Das, Selvakumar K., P.J.A. Alphonse
Two Novel Approaches to Detect Community: A Case Study of Omicron Lineage Variants PPI Network
23 pages, 11 figures
null
null
null
q-bio.MN cs.LG q-bio.QM
http://creativecommons.org/licenses/by/4.0/
The capacity to identify and analyze protein-protein interactions, along with their internal modular organization, plays a crucial role in comprehending the intricate mechanisms underlying biological processes at the molecular level. We can learn a lot about the structure and dynamics of these interactions by using network analysis. We can improve our understanding of the biological roots of disease pathogenesis by recognizing network communities. This knowledge, in turn, holds significant potential for driving advancements in drug discovery and facilitating personalized medicine approaches for disease treatment. In this study, we aimed to uncover the communities within the variant B.1.1.529 (Omicron virus) using two novel proposed algorithms (ABCDE and ALCDE) and four widely recognized algorithms: Girvan-Newman, Louvain, Leiden, and Label Propagation. Each of these algorithms has established prominence in the field and offers unique perspectives on identifying communities within complex networks. We also compare the networks by their global properties, summary statistics, subgraph counts, and graphlets, and validate them by modularity. By employing these approaches, we sought to gain deeper insights into the structural organization and interconnections present within the Omicron virus network.
[ { "created": "Wed, 9 Aug 2023 03:51:20 GMT", "version": "v1" } ]
2023-08-11
[ [ "Das", "Mamata", "" ], [ "K.", "Selvakumar", "" ], [ "Alphonse", "P. J. A.", "" ] ]
The capacity to identify and analyze protein-protein interactions, along with their internal modular organization, plays a crucial role in comprehending the intricate mechanisms underlying biological processes at the molecular level. We can learn a lot about the structure and dynamics of these interactions by using network analysis. We can improve our understanding of the biological roots of disease pathogenesis by recognizing network communities. This knowledge, in turn, holds significant potential for driving advancements in drug discovery and facilitating personalized medicine approaches for disease treatment. In this study, we aimed to uncover the communities within the variant B.1.1.529 (Omicron virus) using two novel proposed algorithms (ABCDE and ALCDE) and four widely recognized algorithms: Girvan-Newman, Louvain, Leiden, and Label Propagation. Each of these algorithms has established prominence in the field and offers unique perspectives on identifying communities within complex networks. We also compare the networks by their global properties, summary statistics, subgraph counts, and graphlets, and validate them by modularity. By employing these approaches, we sought to gain deeper insights into the structural organization and interconnections present within the Omicron virus network.
1311.5769
Chris Greenman
Roshan A, Jones PH and Greenman CD
An Exact, Time-Independent Approach to Clone Size Distributions in Normal and Mutated Cells
18 Pages; 6 Figures
null
null
null
q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological tools such as genetic lineage tracing, three dimensional confocal microscopy and next generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Population-wide clone size distributions in vivo are complicated by multiple cell types, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parameterize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, an alternative, exact, formulation of the classic Luria-Delbruck problem emerges. This approach can be extended beyond neutral models of mutant clonal evolution, and also describe some distributions relating to sub-clones within a tumour. The approaches above are generally applicable to any Markovian branching process where the dynamics of different "coloured" daughter branches are of interest.
[ { "created": "Fri, 22 Nov 2013 14:51:39 GMT", "version": "v1" } ]
2013-11-25
[ [ "A", "Roshan", "" ], [ "PH", "Jones", "" ], [ "CD", "Greenman", "" ] ]
Biological tools such as genetic lineage tracing, three dimensional confocal microscopy and next generation DNA sequencing are providing new ways to quantify the distribution of clones of normal and mutated cells. Population-wide clone size distributions in vivo are complicated by multiple cell types, and overlapping birth and death processes. This has led to the increased need for mathematically informed models to understand their biological significance. Standard approaches usually require knowledge of clonal age. We show that modelling on clone size independent of time is an alternative method that offers certain analytical advantages; it can help parameterize these models, and obtain distributions for counts of mutated or proliferating cells, for example. When applied to a general birth-death process common in epithelial progenitors this takes the form of a gambler's ruin problem, the solution of which relates to counting Motzkin lattice paths. Applying this approach to mutational processes, an alternative, exact, formulation of the classic Luria-Delbruck problem emerges. This approach can be extended beyond neutral models of mutant clonal evolution, and also describe some distributions relating to sub-clones within a tumour. The approaches above are generally applicable to any Markovian branching process where the dynamics of different "coloured" daughter branches are of interest.
q-bio/0408021
Anders Eriksson
Anders Eriksson, Kristian Lindgren
Cooperation driven by mutations in multi-person Prisoner's Dilemma
22 pages, 7 figures. Accepted for publication in Journal of Theoretical Biology
Journal of Theoretical Biology, 2005, 232(3), pp. 399 - 409
10.1016/j.jtbi.2004.08.020
null
q-bio.PE
null
The n-person Prisoner's Dilemma is a widely used model for populations where individuals interact in groups. The evolutionary stability of populations has been analysed in the literature for the case where mutations in the population may be considered as isolated events. For this case, and assuming simple trigger strategies and many iterations per game, we analyse the rate of convergence to the evolutionarily stable populations. We find that for some values of the payoff parameters of the Prisoner's Dilemma this rate is so low that the assumption, that mutations in the population are infrequent on that timescale, is unreasonable. Furthermore, the problem is compounded as the group size is increased. In order to address this issue, we derive a deterministic approximation of the evolutionary dynamics with explicit, stochastic mutation processes, valid when the population size is large. We then analyse how the evolutionary dynamics depends on the following factors: mutation rate, group size, the value of the payoff parameters, and the structure of the initial population. In order to carry out the simulations for groups of more than just a few individuals, we derive an efficient way of calculating the fitness values. We find that when the mutation rate per individual and generation is very low, the dynamics is characterised by populations which are evolutionarily stable. As the mutation rate is increased, other fixed points with a higher degree of cooperation become stable. For some values of the payoff parameters, the system is characterised by (apparently) stable limit cycles dominated by cooperative behaviour. The parameter regions corresponding to high degree of cooperation grow in size with the mutation rate, and in number with the group size.
[ { "created": "Thu, 26 Aug 2004 08:24:35 GMT", "version": "v1" } ]
2007-05-23
[ [ "Eriksson", "Anders", "" ], [ "Lindgren", "Kristian", "" ] ]
The n-person Prisoner's Dilemma is a widely used model for populations where individuals interact in groups. The evolutionary stability of populations has been analysed in the literature for the case where mutations in the population may be considered as isolated events. For this case, and assuming simple trigger strategies and many iterations per game, we analyse the rate of convergence to the evolutionarily stable populations. We find that for some values of the payoff parameters of the Prisoner's Dilemma this rate is so low that the assumption, that mutations in the population are infrequent on that timescale, is unreasonable. Furthermore, the problem is compounded as the group size is increased. In order to address this issue, we derive a deterministic approximation of the evolutionary dynamics with explicit, stochastic mutation processes, valid when the population size is large. We then analyse how the evolutionary dynamics depends on the following factors: mutation rate, group size, the value of the payoff parameters, and the structure of the initial population. In order to carry out the simulations for groups of more than just a few individuals, we derive an efficient way of calculating the fitness values. We find that when the mutation rate per individual and generation is very low, the dynamics is characterised by populations which are evolutionarily stable. As the mutation rate is increased, other fixed points with a higher degree of cooperation become stable. For some values of the payoff parameters, the system is characterised by (apparently) stable limit cycles dominated by cooperative behaviour. The parameter regions corresponding to high degree of cooperation grow in size with the mutation rate, and in number with the group size.
2407.12976
Reza Mahdavi
Reza Mahdavi, Sameereh Hashemi Najafabadi, Mohammad Adel Ghiass, Silmu Valaskivi, Hannu V\"alim\"aki, Joose Kreutzer, Charlotte Hamngren Blomqvist, Stefano Romeo, Pasi Kallio, Caroline Beck Adiels
Design, Fabrication, and Characterization of a User-Friendly Microfluidic Device for Studying Liver Zonation-on-Chip (ZoC)
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Liver zonation is a fundamental characteristic of hepatocyte spatial heterogeneity, which is challenging to recapitulate in traditional cell cultures. This study presents a novel microfluidic device designed to induce zonation in liver cell cultures by establishing an oxygen gradient using standard laboratory gases. The device consists of two layers: a bottom layer containing a gas channel network that delivers high and low oxygenated gases to create three distinct zones within the cell culture chamber in the layer above. Computational simulations and ratiometric oxygen sensing were employed to validate the oxygen gradient, demonstrating that stable oxygen levels were achieved within two hours. Liver zonation was confirmed using immunofluorescence staining, which showed zonated albumin production in HepG2 cells directly correlating with oxygen levels and mimicking in-vivo zonation behavior. This user-friendly device supports studies on liver zonation and related metabolic disease mechanisms in vitro. It can also be utilized for experiments that necessitate precise gas concentration gradients, such as hypoxia-related research areas focused on angiogenesis and cancer development.
[ { "created": "Wed, 17 Jul 2024 19:46:48 GMT", "version": "v1" } ]
2024-07-19
[ [ "Mahdavi", "Reza", "" ], [ "Najafabadi", "Sameereh Hashemi", "" ], [ "Ghiass", "Mohammad Adel", "" ], [ "Valaskivi", "Silmu", "" ], [ "Välimäki", "Hannu", "" ], [ "Kreutzer", "Joose", "" ], [ "Blomqvist", "Charlotte Hamngren", "" ], [ "Romeo", "Stefano", "" ], [ "Kallio", "Pasi", "" ], [ "Adiels", "Caroline Beck", "" ] ]
Liver zonation is a fundamental characteristic of hepatocyte spatial heterogeneity, which is challenging to recapitulate in traditional cell cultures. This study presents a novel microfluidic device designed to induce zonation in liver cell cultures by establishing an oxygen gradient using standard laboratory gases. The device consists of two layers: a bottom layer containing a gas channel network that delivers high and low oxygenated gases to create three distinct zones within the cell culture chamber in the layer above. Computational simulations and ratiometric oxygen sensing were employed to validate the oxygen gradient, demonstrating that stable oxygen levels were achieved within two hours. Liver zonation was confirmed using immunofluorescence staining, which showed zonated albumin production in HepG2 cells directly correlating with oxygen levels and mimicking in-vivo zonation behavior. This user-friendly device supports studies on liver zonation and related metabolic disease mechanisms in vitro. It can also be utilized for experiments that necessitate precise gas concentration gradients, such as hypoxia-related research areas focused on angiogenesis and cancer development.
1103.3032
Rhiju Das Rhiju Das
Kyle Beauchamp, Parin Sripakdeevong, Rhiju Das
Why Can't We Predict RNA Structure At Atomic Resolution?
K. Beauchamp & P. Sripakdeevong are equally contributing authors. Submission for book: RNA 3D Structure Analysis and Prediction, editors: N. Leontis & E. Westhof
null
null
null
q-bio.BM physics.bio-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
No existing algorithm can start with arbitrary RNA sequences and return the precise, three-dimensional structures that ensure their biological function. This chapter outlines current algorithms for automated RNA structure prediction (including our own FARNA-FARFAR), highlights their successes, and dissects their limitations, using a tetraloop and the sarcin/ricin motif as examples. The barriers to future advances are considered in light of three particular challenges: improving computational sampling, reducing reliance on experimentally solved structures, and avoiding coarse-grained representations of atomic-level interactions. To help meet these challenges and better understand the current state of the field, we propose an ongoing community-wide CASP-style experiment for evaluating the performance of current structure prediction algorithms.
[ { "created": "Tue, 15 Mar 2011 20:40:15 GMT", "version": "v1" } ]
2011-03-17
[ [ "Beauchamp", "Kyle", "" ], [ "Sripakdeevong", "Parin", "" ], [ "Das", "Rhiju", "" ] ]
No existing algorithm can start with arbitrary RNA sequences and return the precise, three-dimensional structures that ensure their biological function. This chapter outlines current algorithms for automated RNA structure prediction (including our own FARNA-FARFAR), highlights their successes, and dissects their limitations, using a tetraloop and the sarcin/ricin motif as examples. The barriers to future advances are considered in light of three particular challenges: improving computational sampling, reducing reliance on experimentally solved structures, and avoiding coarse-grained representations of atomic-level interactions. To help meet these challenges and better understand the current state of the field, we propose an ongoing community-wide CASP-style experiment for evaluating the performance of current structure prediction algorithms.
1708.07958
Richard Betzel
Richard F. Betzel, Danielle S. Bassett
Generative Models for Network Neuroscience: Prospects and Promise
19 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and to identify principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modeling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, utility in intuiting mechanisms, and a short history on their use in network science broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including \emph{C. elegans}, \emph{Drosophila}, mouse, rat, cat, macaque, and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modeling approach for network neuroscience.
[ { "created": "Sat, 26 Aug 2017 11:30:35 GMT", "version": "v1" } ]
2017-08-29
[ [ "Betzel", "Richard F.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Network neuroscience is the emerging discipline concerned with investigating the complex patterns of interconnections found in neural systems, and with identifying principles with which to understand them. Within this discipline, one particularly powerful approach is network generative modeling, in which wiring rules are algorithmically implemented to produce synthetic network architectures with the same properties as observed in empirical network data. Successful models can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. Here we review the prospects and promise of generative models for network neuroscience. We begin with a primer on network generative models, with a discussion of compressibility and predictability, utility in intuiting mechanisms, and a short history on their use in network science broadly. We then discuss generative models in practice and application, paying particular attention to the critical need for cross-validation. Next, we review generative models of biological neural networks, both at the cellular and large-scale level, and across a variety of species including \emph{C. elegans}, \emph{Drosophila}, mouse, rat, cat, macaque, and human. We offer a careful treatment of a few relevant distinctions, including differences between generative models and null models, sufficiency and redundancy, inferring and claiming mechanism, and functional and structural connectivity. We close with a discussion of future directions, outlining exciting frontiers both in empirical data collection efforts as well as in method and theory development that, together, further the utility of the generative network modeling approach for network neuroscience.
2201.11147
Ningyu Zhang
Ningyu Zhang, Zhen Bi, Xiaozhuan Liang, Siyuan Cheng, Haosen Hong, Shumin Deng, Jiazhang Lian, Qiang Zhang, Huajun Chen
OntoProtein: Protein Pretraining With Gene Ontology Embedding
Accepted by ICLR 2022
null
null
null
q-bio.BM cs.AI cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-supervised protein language models have proven their effectiveness in learning protein representations. With the increasing computational power, current protein language models pre-trained with millions of diverse sequences can advance the parameter scale from million-level to billion-level and achieve remarkable improvement. However, those prevailing approaches rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better protein representations. We argue that informative biology knowledge in KGs can enhance protein representation with external knowledge. In this work, we propose OntoProtein, the first general framework that incorporates the structure of GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph that consists of GO and its related proteins, and gene annotation texts or protein sequences describe all nodes in the graph. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embedding during pre-training. Experimental results show that OntoProtein can surpass state-of-the-art methods with pre-trained protein language models in the TAPE benchmark and yield better performance compared with baselines in protein-protein interaction and protein function prediction. Code and datasets are available in https://github.com/zjunlp/OntoProtein.
[ { "created": "Sun, 23 Jan 2022 14:49:49 GMT", "version": "v1" }, { "created": "Tue, 15 Feb 2022 08:23:15 GMT", "version": "v2" }, { "created": "Wed, 13 Apr 2022 03:27:39 GMT", "version": "v3" }, { "created": "Wed, 4 May 2022 23:48:02 GMT", "version": "v4" }, { "created": "Sun, 29 May 2022 14:45:45 GMT", "version": "v5" }, { "created": "Fri, 3 Jun 2022 16:31:08 GMT", "version": "v6" } ]
2022-11-02
[ [ "Zhang", "Ningyu", "" ], [ "Bi", "Zhen", "" ], [ "Liang", "Xiaozhuan", "" ], [ "Cheng", "Siyuan", "" ], [ "Hong", "Haosen", "" ], [ "Deng", "Shumin", "" ], [ "Lian", "Jiazhang", "" ], [ "Zhang", "Qiang", "" ], [ "Chen", "Huajun", "" ] ]
Self-supervised protein language models have proven their effectiveness in learning protein representations. With the increasing computational power, current protein language models pre-trained with millions of diverse sequences can advance the parameter scale from million-level to billion-level and achieve remarkable improvement. However, those prevailing approaches rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better protein representations. We argue that informative biology knowledge in KGs can enhance protein representation with external knowledge. In this work, we propose OntoProtein, the first general framework that incorporates the structure of GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph that consists of GO and its related proteins, and gene annotation texts or protein sequences describe all nodes in the graph. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embedding during pre-training. Experimental results show that OntoProtein can surpass state-of-the-art methods with pre-trained protein language models in the TAPE benchmark and yield better performance compared with baselines in protein-protein interaction and protein function prediction. Code and datasets are available in https://github.com/zjunlp/OntoProtein.
2407.04424
Benoit Baillif
Benoit Baillif, Jason Cole, Patrick McCabe, Andreas Bender
Benchmarking structure-based three-dimensional molecular generative models using GenBench3D: ligand conformation quality matters
null
null
null
null
q-bio.QM q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Three-dimensional (3D) deep molecular generative models offer the advantage of goal-directed generation based on 3D-dependent properties, such as binding affinity for structure-based design within binding pockets. Traditional benchmarks created to evaluate SMILES or molecular graph generators, such as GuacaMol or MOSES, are of limited use for evaluating 3D generators, as they do not assess the quality of the generated molecular conformation. In this work, we hence developed GenBench3D, which implements a new benchmark for models producing molecules within a binding pocket. Our main contribution is the Validity3D metric, evaluating the conformation quality using the likelihood of bond lengths and valence angles based on reference values observed in the Cambridge Structural Database. The LiGAN, 3D-SBDD, Pocket2Mol, TargetDiff, DiffSBDD and ResGen models were benchmarked. We show that only between 0% and 11% of generated molecules have valid conformations. Performing local relaxation of generated molecules in the pocket considerably improved the Validity3D for all models by a minimum increase of 40%. For LiGAN, 3D-SBDD, or TargetDiff, the set of valid relaxed molecules shows on average a higher (i.e. worse) Vina score than the set of raw generated molecules, indicating that the binding affinity of raw generated molecules might be overestimated. Using the other scoring functions, that give higher importance to ligand strain, only yields improved scores when using valid relaxed molecules. Using valid relaxed molecules, TargetDiff and Pocket2Mol show better median Vina, Glide and Gold PLP scores than other models. We have publicly released GenBench3D on GitHub for broader use: https://github.com/bbaillif/genbench3d
[ { "created": "Fri, 5 Jul 2024 11:17:18 GMT", "version": "v1" } ]
2024-07-08
[ [ "Baillif", "Benoit", "" ], [ "Cole", "Jason", "" ], [ "McCabe", "Patrick", "" ], [ "Bender", "Andreas", "" ] ]
Three-dimensional (3D) deep molecular generative models offer the advantage of goal-directed generation based on 3D-dependent properties, such as binding affinity for structure-based design within binding pockets. Traditional benchmarks created to evaluate SMILES or molecular graph generators, such as GuacaMol or MOSES, are of limited use for evaluating 3D generators, as they do not assess the quality of the generated molecular conformation. In this work, we hence developed GenBench3D, which implements a new benchmark for models producing molecules within a binding pocket. Our main contribution is the Validity3D metric, evaluating the conformation quality using the likelihood of bond lengths and valence angles based on reference values observed in the Cambridge Structural Database. The LiGAN, 3D-SBDD, Pocket2Mol, TargetDiff, DiffSBDD and ResGen models were benchmarked. We show that only between 0% and 11% of generated molecules have valid conformations. Performing local relaxation of generated molecules in the pocket considerably improved the Validity3D for all models by a minimum increase of 40%. For LiGAN, 3D-SBDD, or TargetDiff, the set of valid relaxed molecules shows on average a higher (i.e. worse) Vina score than the set of raw generated molecules, indicating that the binding affinity of raw generated molecules might be overestimated. Using the other scoring functions, that give higher importance to ligand strain, only yields improved scores when using valid relaxed molecules. Using valid relaxed molecules, TargetDiff and Pocket2Mol show better median Vina, Glide and Gold PLP scores than other models. We have publicly released GenBench3D on GitHub for broader use: https://github.com/bbaillif/genbench3d
1810.05192
Aviv Regev
Aviv Regev, Sarah Teichmann, Orit Rozenblatt-Rosen, Michael Stubbington, Kristin Ardlie, Ido Amit, Paola Arlotta, Gary Bader, Christophe Benoist, Moshe Biton, Bernd Bodenmiller, Benoit Bruneau, Peter Campbell, Mary Carmichael, Piero Carninci, Leslie Castelo-Soccio, Menna Clatworthy, Hans Clevers, Christian Conrad, Roland Eils, Jeremy Freeman, Lars Fugger, Berthold Goettgens, Daniel Graham, Anna Greka, Nir Hacohen, Muzlifah Haniffa, Ingo Helbig, Robert Heuckeroth, Sekar Kathiresan, Seung Kim, Allon Klein, Bartha Knoppers, Arnold Kriegstein, Eric Lander, Jane Lee, Ed Lein, Sten Linnarsson, Evan Macosko, Sonya MacParland, Robert Majovski, Partha Majumder, John Marioni, Ian McGilvray, Miriam Merad, Musa Mhlanga, Shalin Naik, Martijn Nawijn, Garry Nolan, Benedict Paten, Dana Pe'er, Anthony Philippakis, Chris Ponting, Steve Quake, Jayaraj Rajagopal, Nikolaus Rajewsky, Wolf Reik, Jennifer Rood, Kourosh Saeb-Parsy, Herbert Schiller, Steve Scott, Alex Shalek, Ehud Shapiro, Jay Shin, Kenneth Skeldon, Michael Stratton, Jenna Streicher, Henk Stunnenberg, Kai Tan, Deanne Taylor, Adrian Thorogood, Ludovic Vallier, Alexander van Oudenaarden, Fiona Watt, Wilko Weicher, Jonathan Weissman, Andrew Wells, Barbara Wold, Ramnik Xavier, Xiaowei Zhuang, Human Cell Atlas Organizing Committee
The Human Cell Atlas White Paper
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Human Cell Atlas (HCA) will be made up of comprehensive reference maps of all human cells - the fundamental units of life - as a basis for understanding fundamental human biological processes and diagnosing, monitoring, and treating disease. It will help scientists understand how genetic variants impact disease risk, define drug toxicities, discover better therapies, and advance regenerative medicine. A resource of such ambition and scale should be built in stages, increasing in size, breadth, and resolution as technologies develop and understanding deepens. We will therefore pursue Phase 1 as a suite of flagship projects in key tissues, systems, and organs. We will bring together experts in biology, medicine, genomics, technology development and computation (including data analysis, software engineering, and visualization). We will also need standardized experimental and computational methods that will allow us to compare diverse cell and tissue types - and samples across human communities - in consistent ways, ensuring that the resulting resource is truly global. This document, the first version of the HCA White Paper, was written by experts in the field with feedback and suggestions from the HCA community, gathered during recent international meetings. The White Paper, released at the close of this yearlong planning process, will be a living document that evolves as the HCA community provides additional feedback, as technological and computational advances are made, and as lessons are learned during the construction of the atlas.
[ { "created": "Thu, 11 Oct 2018 18:22:25 GMT", "version": "v1" } ]
2018-10-15
[ [ "Regev", "Aviv", "" ], [ "Teichmann", "Sarah", "" ], [ "Rozenblatt-Rosen", "Orit", "" ], [ "Stubbington", "Michael", "" ], [ "Ardlie", "Kristin", "" ], [ "Amit", "Ido", "" ], [ "Arlotta", "Paola", "" ], [ "Bader", "Gary", "" ], [ "Benoist", "Christophe", "" ], [ "Biton", "Moshe", "" ], [ "Bodenmiller", "Bernd", "" ], [ "Bruneau", "Benoit", "" ], [ "Campbell", "Peter", "" ], [ "Carmichael", "Mary", "" ], [ "Carninci", "Piero", "" ], [ "Castelo-Soccio", "Leslie", "" ], [ "Clatworthy", "Menna", "" ], [ "Clevers", "Hans", "" ], [ "Conrad", "Christian", "" ], [ "Eils", "Roland", "" ], [ "Freeman", "Jeremy", "" ], [ "Fugger", "Lars", "" ], [ "Goettgens", "Berthold", "" ], [ "Graham", "Daniel", "" ], [ "Greka", "Anna", "" ], [ "Hacohen", "Nir", "" ], [ "Haniffa", "Muzlifah", "" ], [ "Helbig", "Ingo", "" ], [ "Heuckeroth", "Robert", "" ], [ "Kathiresan", "Sekar", "" ], [ "Kim", "Seung", "" ], [ "Klein", "Allon", "" ], [ "Knoppers", "Bartha", "" ], [ "Kriegstein", "Arnold", "" ], [ "Lander", "Eric", "" ], [ "Lee", "Jane", "" ], [ "Lein", "Ed", "" ], [ "Linnarsson", "Sten", "" ], [ "Macosko", "Evan", "" ], [ "MacParland", "Sonya", "" ], [ "Majovski", "Robert", "" ], [ "Majumder", "Partha", "" ], [ "Marioni", "John", "" ], [ "McGilvray", "Ian", "" ], [ "Merad", "Miriam", "" ], [ "Mhlanga", "Musa", "" ], [ "Naik", "Shalin", "" ], [ "Nawijn", "Martijn", "" ], [ "Nolan", "Garry", "" ], [ "Paten", "Benedict", "" ], [ "Pe'er", "Dana", "" ], [ "Philippakis", "Anthony", "" ], [ "Ponting", "Chris", "" ], [ "Quake", "Steve", "" ], [ "Rajagopal", "Jayaraj", "" ], [ "Rajewsky", "Nikolaus", "" ], [ "Reik", "Wolf", "" ], [ "Rood", "Jennifer", "" ], [ "Saeb-Parsy", "Kourosh", "" ], [ "Schiller", "Herbert", "" ], [ "Scott", "Steve", "" ], [ "Shalek", "Alex", "" ], [ "Shapiro", "Ehud", "" ], [ "Shin", "Jay", "" ], [ "Skeldon", "Kenneth", "" ], [ "Stratton", "Michael", "" ], [ "Streicher", "Jenna", "" ], [ "Stunnenberg", "Henk", "" ], [ "Tan", "Kai", "" ], [ "Taylor", "Deanne", "" ], [ "Thorogood", "Adrian", "" ], [ "Vallier", "Ludovic", "" ], [ "van Oudenaarden", "Alexander", "" ], [ "Watt", "Fiona", "" ], [ "Weicher", "Wilko", "" ], [ "Weissman", "Jonathan", "" ], [ "Wells", "Andrew", "" ], [ "Wold", "Barbara", "" ], [ "Xavier", "Ramnik", "" ], [ "Zhuang", "Xiaowei", "" ], [ "Committee", "Human Cell Atlas Organizing", "" ] ]
The Human Cell Atlas (HCA) will be made up of comprehensive reference maps of all human cells - the fundamental units of life - as a basis for understanding fundamental human biological processes and diagnosing, monitoring, and treating disease. It will help scientists understand how genetic variants impact disease risk, define drug toxicities, discover better therapies, and advance regenerative medicine. A resource of such ambition and scale should be built in stages, increasing in size, breadth, and resolution as technologies develop and understanding deepens. We will therefore pursue Phase 1 as a suite of flagship projects in key tissues, systems, and organs. We will bring together experts in biology, medicine, genomics, technology development and computation (including data analysis, software engineering, and visualization). We will also need standardized experimental and computational methods that will allow us to compare diverse cell and tissue types - and samples across human communities - in consistent ways, ensuring that the resulting resource is truly global. This document, the first version of the HCA White Paper, was written by experts in the field with feedback and suggestions from the HCA community, gathered during recent international meetings. The White Paper, released at the close of this yearlong planning process, will be a living document that evolves as the HCA community provides additional feedback, as technological and computational advances are made, and as lessons are learned during the construction of the atlas.
2310.09252
Jeya Balaji Balasubramanian
Jeya Balaji Balasubramanian, Parichoy Pal Choudhury, Srijon Mukhopadhyay, Thomas Ahearn, Nilanjan Chatterjee, Montserrat Garc\'ia-Closas, Jonas S. Almeida
Wasm-iCARE: a portable and privacy-preserving web module to build, validate, and apply absolute risk models
10 pages, 2 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Objective: Absolute risk models estimate an individual's future disease risk over a specified time interval. Applications utilizing server-side risk tooling, such as the R-based iCARE (R-iCARE), to build, validate, and apply absolute risk models, face serious limitations in portability and privacy due to their need for circulating user data in remote servers for operation. Our objective was to overcome these limitations. Materials and Methods: We refactored R-iCARE into a Python package (Py-iCARE) then compiled it to WebAssembly (Wasm-iCARE): a portable web module, which operates entirely within the privacy of the user's device. Results: We showcase the portability and privacy of Wasm-iCARE through two applications: for researchers to statistically validate risk models, and to deliver them to end-users. Both applications run entirely on the client-side, requiring no downloads or installations, and keep user data on-device during risk calculation. Conclusions: Wasm-iCARE fosters accessible and privacy-preserving risk tools, accelerating their validation and delivery.
[ { "created": "Fri, 13 Oct 2023 17:09:57 GMT", "version": "v1" } ]
2023-10-16
[ [ "Balasubramanian", "Jeya Balaji", "" ], [ "Choudhury", "Parichoy Pal", "" ], [ "Mukhopadhyay", "Srijon", "" ], [ "Ahearn", "Thomas", "" ], [ "Chatterjee", "Nilanjan", "" ], [ "García-Closas", "Montserrat", "" ], [ "Almeida", "Jonas S.", "" ] ]
Objective: Absolute risk models estimate an individual's future disease risk over a specified time interval. Applications utilizing server-side risk tooling, such as the R-based iCARE (R-iCARE), to build, validate, and apply absolute risk models, face serious limitations in portability and privacy due to their need for circulating user data in remote servers for operation. Our objective was to overcome these limitations. Materials and Methods: We refactored R-iCARE into a Python package (Py-iCARE) then compiled it to WebAssembly (Wasm-iCARE): a portable web module, which operates entirely within the privacy of the user's device. Results: We showcase the portability and privacy of Wasm-iCARE through two applications: for researchers to statistically validate risk models, and to deliver them to end-users. Both applications run entirely on the client-side, requiring no downloads or installations, and keep user data on-device during risk calculation. Conclusions: Wasm-iCARE fosters accessible and privacy-preserving risk tools, accelerating their validation and delivery.
2211.05922
Jing Shuang (Lisa) Li
Jing Shuang Li, Anish A. Sarma, Terrence J. Sejnowski, John C. Doyle
Internal feedback in the cortical perception-action loop enables fast and accurate behavior
Submitted to PNAS
null
null
null
q-bio.NC cs.SY eess.SY math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception-action loop. This loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backwards, from motor towards sensory areas. Internal feedback is ubiquitous in neural sensorimotor systems and recent advances in control theory show how internal feedback compensates internal delays. This is accomplished by filtering out self-generated and other predictable changes in early sensory areas so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components. For example, fast, giant neurons are necessarily less accurate than smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function -- how different parts of cortex control different parts of the body -- and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological and behavioral observations, including motor signals in visual cortex, heterogeneous kinetics of sensory receptors and the presence of giant Betz cells in motor cortex, Meynert cells in visual cortex and giant von Economo cells in the prefrontal cortex of humans as well as internal feedback patterns and unexplained heterogeneity in other neural systems.
[ { "created": "Thu, 10 Nov 2022 23:43:26 GMT", "version": "v1" }, { "created": "Tue, 10 Jan 2023 18:08:36 GMT", "version": "v2" } ]
2023-01-11
[ [ "Li", "Jing Shuang", "" ], [ "Sarma", "Anish A.", "" ], [ "Sejnowski", "Terrence J.", "" ], [ "Doyle", "John C.", "" ] ]
Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception-action loop. This loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backwards, from motor towards sensory areas. Internal feedback is ubiquitous in neural sensorimotor systems and recent advances in control theory show how internal feedback compensates internal delays. This is accomplished by filtering out self-generated and other predictable changes in early sensory areas so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components. For example, fast, giant neurons are necessarily less accurate than smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function -- how different parts of cortex control different parts of the body -- and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological and behavioral observations, including motor signals in visual cortex, heterogeneous kinetics of sensory receptors and the presence of giant Betz cells in motor cortex, Meynert cells in visual cortex and giant von Economo cells in the prefrontal cortex of humans as well as internal feedback patterns and unexplained heterogeneity in other neural systems.
1606.01358
Daqing Guo
Daqing Guo, Mingming Chen, Matjaz Perc, Shengdun Wu, Chuan Xia, Yangsong Zhang, Peng Xu, Yang Xia, Dezhong Yao
Firing regulation of fast-spiking interneurons by autaptic inhibition
6 pages, 5 figures
EPL 114 (2016) 30001
10.1209/0295-5075/114/30001
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fast-spiking (FS) interneurons in the brain are self-innervated by powerful inhibitory GABAergic autaptic connections. By computational modelling, we investigate how autaptic inhibition regulates the firing response of such interneurons. Our results indicate that autaptic inhibition both boosts the current threshold for action potential generation and modulates the input-output gain of FS interneurons. The autaptic transmission delay is identified as a key parameter that controls the firing patterns and determines multistability regions of FS interneurons. Furthermore, we observe that neuronal noise influences the firing regulation of FS interneurons by autaptic inhibition and extends their dynamic range for encoding inputs. Importantly, autaptic inhibition modulates noise-induced irregular firing of FS interneurons, such that coherent firing appears at an optimal autaptic inhibition level. Our results reveal the functional roles of autaptic inhibition in taming the firing dynamics of FS interneurons.
[ { "created": "Sat, 4 Jun 2016 09:45:00 GMT", "version": "v1" } ]
2016-07-04
[ [ "Guo", "Daqing", "" ], [ "Chen", "Mingming", "" ], [ "Perc", "Matjaz", "" ], [ "Wu", "Shengdun", "" ], [ "Xia", "Chuan", "" ], [ "Zhang", "Yangsong", "" ], [ "Xu", "Peng", "" ], [ "Xia", "Yang", "" ], [ "Yao", "Dezhong", "" ] ]
Fast-spiking (FS) interneurons in the brain are self-innervated by powerful inhibitory GABAergic autaptic connections. By computational modelling, we investigate how autaptic inhibition regulates the firing response of such interneurons. Our results indicate that autaptic inhibition both boosts the current threshold for action potential generation and modulates the input-output gain of FS interneurons. The autaptic transmission delay is identified as a key parameter that controls the firing patterns and determines multistability regions of FS interneurons. Furthermore, we observe that neuronal noise influences the firing regulation of FS interneurons by autaptic inhibition and extends their dynamic range for encoding inputs. Importantly, autaptic inhibition modulates noise-induced irregular firing of FS interneurons, such that coherent firing appears at an optimal autaptic inhibition level. Our results reveal the functional roles of autaptic inhibition in taming the firing dynamics of FS interneurons.
1309.7966
Miguel Angel Munoz
Jesus M Cortes, Mathieu Desroches, Serafim Rodrigues, Romain Veltz, Miguel A. Munoz, Terrence J. Sejnowski
Short-term synaptic plasticity in the deterministic Tsodyks-Markram model leads to unpredictable network dynamics
Published in PNAS (Early edition) sept. 2013
null
10.1073/pnas.1316071110
null
q-bio.NC nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Short-Term Synaptic Plasticity (STSP) strongly affects the neural dynamics of cortical networks. The Tsodyks and Markram (TM) model for STSP accurately accounts for a wide range of physiological responses at different types of cortical synapses. Here, we report for the first time a route to chaotic behavior via a Shilnikov homoclinic bifurcation that dynamically organises some of the responses in the TM model. In particular, the presence of such a homoclinic bifurcation strongly affects the shape of the trajectories in the phase space and induces highly irregular transient dynamics; indeed, in the vicinity of the Shilnikov homoclinic bifurcation, the number of population spikes and their precise timing are unpredictable and highly sensitive to the initial conditions. Such an irregular deterministic dynamics has its counterpart in stochastic/network versions of the TM model: the existence of the Shilnikov homoclinic bifurcation generates complex and irregular spiking patterns and --acting as a sort of springboard-- facilitates transitions between the down-state and unstable periodic orbits. The interplay between the (deterministic) homoclinic bifurcation and stochastic effects may give rise to some of the complex dynamics observed in neural systems.
[ { "created": "Mon, 30 Sep 2013 19:12:05 GMT", "version": "v1" } ]
2014-03-05
[ [ "Cortes", "Jesus M", "" ], [ "Desroches", "Mathieu", "" ], [ "Rodrigues", "Serafim", "" ], [ "Veltz", "Romain", "" ], [ "Munoz", "Miguel A.", "" ], [ "Sejnowski", "Terrence J.", "" ] ]
Short-Term Synaptic Plasticity (STSP) strongly affects the neural dynamics of cortical networks. The Tsodyks and Markram (TM) model for STSP accurately accounts for a wide range of physiological responses at different types of cortical synapses. Here, we report for the first time a route to chaotic behavior via a Shilnikov homoclinic bifurcation that dynamically organises some of the responses in the TM model. In particular, the presence of such a homoclinic bifurcation strongly affects the shape of the trajectories in the phase space and induces highly irregular transient dynamics; indeed, in the vicinity of the Shilnikov homoclinic bifurcation, the number of population spikes and their precise timing are unpredictable and highly sensitive to the initial conditions. Such an irregular deterministic dynamics has its counterpart in stochastic/network versions of the TM model: the existence of the Shilnikov homoclinic bifurcation generates complex and irregular spiking patterns and --acting as a sort of springboard-- facilitates transitions between the down-state and unstable periodic orbits. The interplay between the (deterministic) homoclinic bifurcation and stochastic effects may give rise to some of the complex dynamics observed in neural systems.
2303.03277
Antonio Francesco Zirattu
Antonio Francesco Zirattu, Marta Biondo, Matteo Osella and Michele Caselle
The effect of a linear feedback mechanism in a homeostasis model
9 pages, 6 figures
null
null
null
q-bio.PE q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Feedback loops are essential for regulating cell proliferation and maintaining the delicate balance between cell division and cell death. Thanks to the exact solution of a few simple models of cell growth it is by now clear that stochastic fluctuations play a central role in this process and that cell growth (and in particular the robustness and stability of homeostasis) can be properly addressed only as a stochastic process. Using epidermal homeostasis as a prototypical example, we show that it is possible to discriminate among different feedback strategies which turn out to be characterized by different, experimentally testable, behaviours. In particular, we focus on the so-called Dynamical Heterogeneity model, an epidermal homeostasis model that takes into account two well known cellular features: the plasticity of the cells and their adaptability to face environmental stimuli. We show that specific choices of the parameter on which the feedback is applied may decrease the fluctuations of the homeostatic population level and improve the recovery of the system after an external perturbation.
[ { "created": "Mon, 6 Mar 2023 16:47:34 GMT", "version": "v1" } ]
2023-03-07
[ [ "Zirattu", "Antonio Francesco", "" ], [ "Biondo", "Marta", "" ], [ "Osella", "Matteo", "" ], [ "Caselle", "Michele", "" ] ]
Feedback loops are essential for regulating cell proliferation and maintaining the delicate balance between cell division and cell death. Thanks to the exact solution of a few simple models of cell growth it is by now clear that stochastic fluctuations play a central role in this process and that cell growth (and in particular the robustness and stability of homeostasis) can be properly addressed only as a stochastic process. Using epidermal homeostasis as a prototypical example, we show that it is possible to discriminate among different feedback strategies which turn out to be characterized by different, experimentally testable, behaviours. In particular, we focus on the so-called Dynamical Heterogeneity model, an epidermal homeostasis model that takes into account two well known cellular features: the plasticity of the cells and their adaptability to face environmental stimuli. We show that specific choices of the parameter on which the feedback is applied may decrease the fluctuations of the homeostatic population level and improve the recovery of the system after an external perturbation.
1309.1569
Iaroslav Ispolatov
Iaroslav Ispolatov and Anne Muesch
A model for the self-organization of vesicular flux and protein distributions in the Golgi apparatus
15 pages, 6 figures
PLoS Comput Biol 9(7): e1003125. (2013)
10.1371/journal.pcbi.1003125
null
q-bio.SC nlin.AO physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generation of two non-identical membrane compartments via exchange of vesicles is considered to require two types of vesicles specified by distinct cytosolic coats that selectively recruit cargo and two membrane-bound SNARE pairs that specify fusion and differ in their affinities for each type of vesicles. The mammalian Golgi complex is composed of 6-8 non-identical cisternae that undergo gradual maturation and replacement yet features only two SNARE pairs. We present a model that explains how the distinct composition of Golgi cisternae can be generated with two and even a single SNARE pair and one vesicle coat. A decay of active SNARE concentration in aging cisternae provides the seed for a cis > trans SNARE gradient that generates the predominantly retrograde vesicle flux which further enhances the gradient. This flux in turn yields the observed inhomogeneous steady-state distribution of Golgi enzymes, which compete with each other and with the SNAREs for incorporation into transport vesicles. We show analytically that the steady state SNARE concentration decays exponentially with the cisterna number. Numerical solutions of rate equations reproduce the experimentally observed SNARE gradients, overlapping enzyme peaks in cis, medial and trans and the reported change in vesicle nature across Golgi: Vesicles originating from younger cisternae mostly contain Golgi enzymes and SNAREs enriched in these cisternae and extensively recycle through the Endoplasmic Reticulum (ER), while the other subpopulation of vesicles contains Golgi proteins prevalent in older cisternae and hardly reaches the ER.
[ { "created": "Fri, 6 Sep 2013 08:52:11 GMT", "version": "v1" } ]
2017-02-07
[ [ "Ispolatov", "Iaroslav", "" ], [ "Muesch", "Anne", "" ] ]
The generation of two non-identical membrane compartments via exchange of vesicles is considered to require two types of vesicles specified by distinct cytosolic coats that selectively recruit cargo and two membrane-bound SNARE pairs that specify fusion and differ in their affinities for each type of vesicles. The mammalian Golgi complex is composed of 6-8 non-identical cisternae that undergo gradual maturation and replacement yet features only two SNARE pairs. We present a model that explains how the distinct composition of Golgi cisternae can be generated with two and even a single SNARE pair and one vesicle coat. A decay of active SNARE concentration in aging cisternae provides the seed for a cis > trans SNARE gradient that generates the predominantly retrograde vesicle flux which further enhances the gradient. This flux in turn yields the observed inhomogeneous steady-state distribution of Golgi enzymes, which compete with each other and with the SNAREs for incorporation into transport vesicles. We show analytically that the steady state SNARE concentration decays exponentially with the cisterna number. Numerical solutions of rate equations reproduce the experimentally observed SNARE gradients, overlapping enzyme peaks in cis, medial and trans and the reported change in vesicle nature across Golgi: Vesicles originating from younger cisternae mostly contain Golgi enzymes and SNAREs enriched in these cisternae and extensively recycle through the Endoplasmic Reticulum (ER), while the other subpopulation of vesicles contains Golgi proteins prevalent in older cisternae and hardly reaches the ER.
1502.04239
Raunaq Malhotra
Raunaq Malhotra, Manjari Mukhopadhyay, Steven Wu, Allen Rodrigo, Mary Poss, Raj Acharya
Maximum Likelihood de novo reconstruction of viral populations using paired end sequencing data
14 Pages, 3 Figures, Submitted to TCBB Journal; Updated Results section, Updated Supplementary Section. Link for tool: https://github.com/raunaq-m/MLEHaplo
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We present MLEHaplo, a maximum likelihood de novo assembly algorithm for reconstructing viral haplotypes in a virus population from paired-end next generation sequencing (NGS) data. Using the pairing information of reads in our proposed Viral Path Reconstruction Algorithm (ViPRA), we generate a small subset of paths from a De Bruijn graph of reads that serve as candidate paths for true viral haplotypes. Our proposed method MLEHaplo then generates a maximum likelihood estimate of the viral population using the paths reconstructed by ViPRA. We evaluate and compare MLEHaplo on simulated datasets of 1200 base pairs at different sequence coverages, on HCV strains with sequencing errors, and on a lab mixture of five HIV-1 strains. MLEHaplo reconstructs full-length viral haplotypes having 100% sequence identity to the true viral haplotypes in most of the small-genome simulated viral populations at 250x sequencing coverage. While reference-based methods either under-estimate or over-estimate the viral haplotypes, MLEHaplo limits the over-estimation to 3 times the size of the true viral haplotypes, reconstructs the full phylogeny of the HCV strains to greater than 99% sequence identity, and captures more sequencing variation in the HIV-1 strains dataset compared to their known consensus sequences.
[ { "created": "Sat, 14 Feb 2015 20:12:34 GMT", "version": "v1" }, { "created": "Tue, 3 Mar 2015 00:18:37 GMT", "version": "v2" }, { "created": "Sat, 16 Apr 2016 23:42:11 GMT", "version": "v3" } ]
2016-04-19
[ [ "Malhotra", "Raunaq", "" ], [ "Wu", "Manjari Mukhopadhyay Steven", "" ], [ "Rodrigo", "Allen", "" ], [ "Poss", "Mary", "" ], [ "Acharya", "Raj", "" ] ]
We present MLEHaplo, a maximum likelihood de novo assembly algorithm for reconstructing viral haplotypes in a virus population from paired-end next generation sequencing (NGS) data. Using the pairing information of reads in our proposed Viral Path Reconstruction Algorithm (ViPRA), we generate a small subset of paths from a De Bruijn graph of reads that serve as candidate paths for true viral haplotypes. Our proposed method MLEHaplo then generates a maximum likelihood estimate of the viral population using the paths reconstructed by ViPRA. We evaluate and compare MLEHaplo on simulated datasets of 1200 base pairs at different sequence coverages, on HCV strains with sequencing errors, and on a lab mixture of five HIV-1 strains. MLEHaplo reconstructs full-length viral haplotypes having 100% sequence identity to the true viral haplotypes in most of the small-genome simulated viral populations at 250x sequencing coverage. While reference-based methods either under-estimate or over-estimate the viral haplotypes, MLEHaplo limits the over-estimation to 3 times the size of the true viral haplotypes, reconstructs the full phylogeny of the HCV strains to greater than 99% sequence identity, and captures more sequencing variation in the HIV-1 strains dataset compared to their known consensus sequences.
2004.09751
Seyedmehdi Abtahi
SeyedMehdi Abtahi and Mojtaba Sharifi
Machine Learning Method to Control and Observe for Treatment and Monitoring of Hepatitis B Virus
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hepatitis type B is one of the most common infectious diseases worldwide and can pose severe threats to human health, to the point that it may contribute to severe liver damage or cancer. Over the past two decades a large number of dynamic models have been presented, based on experimental data, to predict HBV infection behavior. Besides, several kinds of controllers have been employed to obtain effective solutions for HBV treatment. In this essay we consider the nonlinear HBV dynamic model, which is subjected to both parametric and non-parametric uncertainties, without using any linearization. In previous control methods, three HBV dynamic states should be measured: virus, safe, and infected cells. However, in most biological systems only the amount of virus is experimentally measured. Accordingly, the need emerges for a method that can estimate the amount of required drug from the virus data alone. An ANFIS method is developed in this work to provide an intelligent controller for the drug dosage based on the number of viruses, together with an estimator for the amount of infected and uninfected cells. This controller is first trained using data provided by a previous adaptive control strategy. After that, to improve the closed-loop system capabilities, two unmeasured state variables of the fundamental dynamics are estimated through the training phase of the ANFIS observer. The results of simulations demonstrate that the proposed intelligent controller tracks the desired descending virus population with high accuracy.
[ { "created": "Tue, 21 Apr 2020 04:52:09 GMT", "version": "v1" } ]
2020-04-22
[ [ "Abtahi", "SeyedMehdi", "" ], [ "Sharifi", "Mojtaba", "" ] ]
Hepatitis type B is one of the most common infectious diseases worldwide and can pose severe threats to human health, to the point that it may contribute to severe liver damage or cancer. Over the past two decades a large number of dynamic models have been presented, based on experimental data, to predict HBV infection behavior. Besides, several kinds of controllers have been employed to obtain effective solutions for HBV treatment. In this essay we consider the nonlinear HBV dynamic model, which is subjected to both parametric and non-parametric uncertainties, without using any linearization. In previous control methods, three HBV dynamic states should be measured: virus, safe, and infected cells. However, in most biological systems only the amount of virus is experimentally measured. Accordingly, the need emerges for a method that can estimate the amount of required drug from the virus data alone. An ANFIS method is developed in this work to provide an intelligent controller for the drug dosage based on the number of viruses, together with an estimator for the amount of infected and uninfected cells. This controller is first trained using data provided by a previous adaptive control strategy. After that, to improve the closed-loop system capabilities, two unmeasured state variables of the fundamental dynamics are estimated through the training phase of the ANFIS observer. The results of simulations demonstrate that the proposed intelligent controller tracks the desired descending virus population with high accuracy.
1105.6329
Daniel Gamermann Dr.
D. Gamermann, A. Montagud, P. Aparicio, E. Navarro, J. Triana, F. R. Villatoro, J. F. Urchuegu\'ia, P. Fern\'andez de C\'ordoba
A modular synthetic device to calibrate promoters
24 pages, 11 figures
null
10.1142/S0218339012500015
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this contribution, a design of a synthetic calibration genetic circuit to characterize the relative strength of different sensing promoters is proposed and its specifications and performance are analyzed via an effective mathematical model. Our calibrator device possesses certain novel and useful features like modularity (and thus the possibility of being used in many different biological contexts), simplicity, being based on a single cell, high sensitivity and fast response. To uncover the critical model parameters and the corresponding parameter domain at which the calibrator performance will be optimal, a sensitivity analysis of the model parameters was carried out over a given range of sensing protein concentrations (acting as input). Our analysis suggests that the half saturation constants for repression, sensing and difference in binding cooperativity (Hill coefficients) for repression are the key to the performance of the proposed device. They furthermore are determinant for the sensing speed of the device, showing that it is possible to produce detectable differences in the repression protein concentrations and in turn in the corresponding fluorescence in less than two hours. This analysis paves the way for the design, experimental construction and validation of a new family of functional genetic circuits for the purpose of calibrating promoters.
[ { "created": "Tue, 31 May 2011 16:06:13 GMT", "version": "v1" } ]
2012-06-05
[ [ "Gamermann", "D.", "" ], [ "Montagud", "A.", "" ], [ "Aparicio", "P.", "" ], [ "Navarro", "E.", "" ], [ "Triana", "J.", "" ], [ "Villatoro", "F. R.", "" ], [ "Urchueguía", "J. F.", "" ], [ "de Córdoba", "P. Fernández", "" ] ]
In this contribution, a design of a synthetic calibration genetic circuit to characterize the relative strength of different sensing promoters is proposed and its specifications and performance are analyzed via an effective mathematical model. Our calibrator device possesses certain novel and useful features like modularity (and thus the possibility of being used in many different biological contexts), simplicity, being based on a single cell, high sensitivity and fast response. To uncover the critical model parameters and the corresponding parameter domain at which the calibrator performance will be optimal, a sensitivity analysis of the model parameters was carried out over a given range of sensing protein concentrations (acting as input). Our analysis suggests that the half saturation constants for repression, sensing and difference in binding cooperativity (Hill coefficients) for repression are the key to the performance of the proposed device. They furthermore are determinant for the sensing speed of the device, showing that it is possible to produce detectable differences in the repression protein concentrations and in turn in the corresponding fluorescence in less than two hours. This analysis paves the way for the design, experimental construction and validation of a new family of functional genetic circuits for the purpose of calibrating promoters.
2004.06790
Kelsey Linnell
Kelsey Linnell, Thayer Alshaabi, Thomas McAndrew, Jeanie Lim, Peter Sheridan Dodds, and Christopher M. Danforth
The sleep loss insult of Spring Daylight Savings in the US is absorbed by Twitter users within 48 hours
null
null
null
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep loss has been linked to heart disease, diabetes, cancer, and an increase in accidents, all of which are among the leading causes of death in the United States. Population-scale sleep studies have the potential to advance public health by helping to identify at-risk populations, changes in collective sleep patterns, and to inform policy change. Prior research suggests other kinds of health indicators such as depression and obesity can be estimated using social media activity. However, the inability to effectively measure collective sleep with publicly available data has limited large-scale academic studies. Here, we investigate the passive estimation of sleep loss through a proxy analysis of Twitter activity profiles. We use "Spring Forward" events, which occur at the beginning of Daylight Savings Time in the United States, as a natural experimental condition to estimate spatial differences in sleep loss across the United States. On average, peak Twitter activity occurs roughly 45 minutes later on the Sunday following Spring Forward. By Monday morning however, activity curves are realigned with the week before, suggesting that at least on Twitter, the lost hour of early Sunday morning has been quickly absorbed.
[ { "created": "Fri, 10 Apr 2020 19:02:10 GMT", "version": "v1" } ]
2020-04-16
[ [ "Linnell", "Kelsey", "" ], [ "Alshaabi", "Thayer", "" ], [ "McAndrew", "Thomas", "" ], [ "Lim", "Jeanie", "" ], [ "Dodds", "Peter Sheridan", "" ], [ "Danforth", "Christopher M.", "" ] ]
Sleep loss has been linked to heart disease, diabetes, cancer, and an increase in accidents, all of which are among the leading causes of death in the United States. Population-scale sleep studies have the potential to advance public health by helping to identify at-risk populations, changes in collective sleep patterns, and to inform policy change. Prior research suggests other kinds of health indicators such as depression and obesity can be estimated using social media activity. However, the inability to effectively measure collective sleep with publicly available data has limited large-scale academic studies. Here, we investigate the passive estimation of sleep loss through a proxy analysis of Twitter activity profiles. We use "Spring Forward" events, which occur at the beginning of Daylight Savings Time in the United States, as a natural experimental condition to estimate spatial differences in sleep loss across the United States. On average, peak Twitter activity occurs roughly 45 minutes later on the Sunday following Spring Forward. By Monday morning however, activity curves are realigned with the week before, suggesting that at least on Twitter, the lost hour of early Sunday morning has been quickly absorbed.
1610.08484
Gregory Kiar
Gregory Kiar, Krzysztof J. Gorgolewski, Dean Kleissas, William Gray Roncal, Brian Litt, Brian Wandell, Russel A. Poldrack, Martin Wiener, R. Jacob Vogelstein, Randal Burns, Joshua T. Vogelstein
Science In the Cloud (SIC): A use case in MRI Connectomics
13 pages, 5 figures, 4 tables, 2 appendices
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern technologies are enabling scientists to collect extraordinary amounts of complex and sophisticated data across a huge range of scales like never before. With this onslaught of data, we can allow the focal point to shift towards answering the question of how we can analyze and understand the massive amounts of data in front of us. Unfortunately, lack of standardized sharing mechanisms and practices often make reproducing or extending scientific results very difficult. With the creation of data organization structures and tools which drastically improve code portability, we now have the opportunity to design such a framework for communicating extensible scientific discoveries. Our proposed solution leverages these existing technologies and standards, and provides an accessible and extensible model for reproducible research, called "science in the cloud" (sic). Exploiting scientific containers, cloud computing and cloud data services, we show the capability to launch a computer in the cloud and run a web service which enables intimate interaction with the tools and data presented. We hope this model will inspire the community to produce reproducible and, importantly, extensible results which will enable us to collectively accelerate the rate at which scientific breakthroughs are discovered, replicated, and extended.
[ { "created": "Wed, 26 Oct 2016 19:43:47 GMT", "version": "v1" }, { "created": "Tue, 1 Nov 2016 03:48:58 GMT", "version": "v2" }, { "created": "Fri, 11 Nov 2016 06:40:40 GMT", "version": "v3" }, { "created": "Fri, 20 Jan 2017 18:33:30 GMT", "version": "v4" }, { "created": "Tue, 14 Feb 2017 17:24:09 GMT", "version": "v5" } ]
2017-02-15
[ [ "Kiar", "Gregory", "" ], [ "Gorgolewski", "Krzysztof J.", "" ], [ "Kleissas", "Dean", "" ], [ "Roncal", "William Gray", "" ], [ "Litt", "Brian", "" ], [ "Wandell", "Brian", "" ], [ "Poldrack", "Russel A.", "" ], [ "Wiener", "Martin", "" ], [ "Vogelstein", "R. Jacob", "" ], [ "Burns", "Randal", "" ], [ "Vogelstein", "Joshua T.", "" ] ]
Modern technologies are enabling scientists to collect extraordinary amounts of complex and sophisticated data across a huge range of scales like never before. With this onslaught of data, we can allow the focal point to shift towards answering the question of how we can analyze and understand the massive amounts of data in front of us. Unfortunately, lack of standardized sharing mechanisms and practices often make reproducing or extending scientific results very difficult. With the creation of data organization structures and tools which drastically improve code portability, we now have the opportunity to design such a framework for communicating extensible scientific discoveries. Our proposed solution leverages these existing technologies and standards, and provides an accessible and extensible model for reproducible research, called "science in the cloud" (sic). Exploiting scientific containers, cloud computing and cloud data services, we show the capability to launch a computer in the cloud and run a web service which enables intimate interaction with the tools and data presented. We hope this model will inspire the community to produce reproducible and, importantly, extensible results which will enable us to collectively accelerate the rate at which scientific breakthroughs are discovered, replicated, and extended.
1507.03197
Chao Wang
Jianwei Zhu, Haicang Zhang, Chao Wang, Bin Ling, Wei-Mou Zheng, Dongbo Bu
TOPO: Improving remote homologue recognition via identifying common protein structure framework
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein structure prediction remains a challenge in the field of computational biology. Traditional protein structure prediction approaches include template-based modelling (say, homology modelling and threading) and ab initio prediction. A threading algorithm takes a query protein sequence as input, recognizes the most likely fold, and finally reports the alignments of the query sequence to structure-known templates as output. The existing threading approaches mainly utilize information such as protein sequence profiles, solvent accessibility, and contact probability, and correctly recognize folds for some proteins. However, the existing threading approaches perform poorly for remote homology proteins. How to improve fold recognition for remote homology proteins remains a difficult task in protein structure prediction.
[ { "created": "Sun, 12 Jul 2015 07:31:22 GMT", "version": "v1" } ]
2015-07-14
[ [ "Zhu", "Jianwei", "" ], [ "Zhang", "Haicang", "" ], [ "Wang", "Chao", "" ], [ "Ling", "Bin", "" ], [ "Zheng", "Wei-Mou", "" ], [ "Bu", "Dongbo", "" ] ]
Protein structure prediction remains a challenge in the field of computational biology. Traditional protein structure prediction approaches include template-based modelling (say, homology modelling and threading) and ab initio prediction. A threading algorithm takes a query protein sequence as input, recognizes the most likely fold, and finally reports the alignments of the query sequence to structure-known templates as output. The existing threading approaches mainly utilize information such as protein sequence profiles, solvent accessibility, and contact probability, and correctly recognize folds for some proteins. However, the existing threading approaches perform poorly for remote homology proteins. How to improve fold recognition for remote homology proteins remains a difficult task in protein structure prediction.
1509.08954
Mauricio Girardi-Schappo
Mauricio Girardi-Schappo, Germano S. Bortolotto, Jheniffer J. Gonsalves, Leonel T. Pinto, Marcelo H. R. Tragtenberg
Griffiths phase and long-range correlations in a biologically motivated V1 model
14 pages, 5 figures, PACS: 05.70.Jk,45.70.Ht,87.19.lt
Scientific Reports 6, Article number: 29561 (2016)
10.1038/srep29561
null
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Activity in the brain propagates as waves of firing neurons, namely avalanches. These waves' size and duration distributions have been experimentally shown to display a stable power-law profile, long-range correlations and $1/f^{b}$ power spectrum \textit{in vivo} and \textit{in vitro}. We study an avalanching biologically motivated model of the mammalian visual cortex and find an extended critical-like region -- a Griffiths phase -- characterized by divergent susceptibility and zero order parameter. This phase lies close to the expected experimental value of the \textit{excitatory postsynaptic potential} in the cortex suggesting that critical behavior may be found in the visual system. Avalanches are not perfectly power-law distributed, but it is possible to collapse the distributions and define a cutoff avalanche size that diverges as the network size is increased inside the critical region. The avalanches present long-range correlations and $1/f^{b}$ power spectrum, matching experiments. The phase transition is analytically determined by a mean-field approximation.
[ { "created": "Fri, 18 Sep 2015 19:10:53 GMT", "version": "v1" }, { "created": "Tue, 2 Feb 2016 12:54:53 GMT", "version": "v2" }, { "created": "Fri, 18 Nov 2016 18:56:06 GMT", "version": "v3" } ]
2016-11-21
[ [ "Girardi-Schappo", "Mauricio", "" ], [ "Bortolotto", "Germano S.", "" ], [ "Gonsalves", "Jheniffer J.", "" ], [ "Pinto", "Leonel T.", "" ], [ "Tragtenberg", "Marcelo H. R.", "" ] ]
Activity in the brain propagates as waves of firing neurons, namely avalanches. These waves' size and duration distributions have been experimentally shown to display a stable power-law profile, long-range correlations and $1/f^{b}$ power spectrum \textit{in vivo} and \textit{in vitro}. We study an avalanching biologically motivated model of the mammalian visual cortex and find an extended critical-like region -- a Griffiths phase -- characterized by divergent susceptibility and zero order parameter. This phase lies close to the expected experimental value of the \textit{excitatory postsynaptic potential} in the cortex suggesting that critical behavior may be found in the visual system. Avalanches are not perfectly power-law distributed, but it is possible to collapse the distributions and define a cutoff avalanche size that diverges as the network size is increased inside the critical region. The avalanches present long-range correlations and $1/f^{b}$ power spectrum, matching experiments. The phase transition is analytically determined by a mean-field approximation.
2101.07654
Yuzhe Lu
Yuzhe Lu, Haichun Yang, Zheyu Zhu, Ruining Deng, Agnes B. Fogo, and Yuankai Huo
Improve Global Glomerulosclerosis Classification with Imbalanced Data using CircleMix Augmentation
null
null
null
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classification of glomerular lesions is a routine and essential task in renal pathology. Recently, machine learning approaches, especially deep learning algorithms, have been used to perform computer-aided lesion characterization of glomeruli. However, one major challenge of developing such methods is the naturally imbalanced distribution of different lesions. In this paper, we propose CircleMix, a novel data augmentation technique, to improve the accuracy of classifying globally sclerotic glomeruli with a hierarchical learning strategy. Different from the recently proposed CutMix method, the CircleMix augmentation is optimized for ball-shaped biomedical objects, such as glomeruli. 6,861 glomeruli with five classes (normal, periglomerular fibrosis, obsolescent glomerulosclerosis, solidified glomerulosclerosis, and disappearing glomerulosclerosis) were employed to develop and evaluate the proposed methods. From five-fold cross-validation, the proposed CircleMix augmentation achieved superior performance (Balanced Accuracy=73.0%) compared with the EfficientNet-B0 baseline (Balanced Accuracy=69.4%).
[ { "created": "Sat, 16 Jan 2021 22:35:38 GMT", "version": "v1" } ]
2021-01-20
[ [ "Lu", "Yuzhe", "" ], [ "Yang", "Haichun", "" ], [ "Zhu", "Zheyu", "" ], [ "Deng", "Ruining", "" ], [ "Fogo", "Agnes B.", "" ], [ "Huo", "Yuankai", "" ] ]
The classification of glomerular lesions is a routine and essential task in renal pathology. Recently, machine learning approaches, especially deep learning algorithms, have been used to perform computer-aided lesion characterization of glomeruli. However, one major challenge of developing such methods is the naturally imbalanced distribution of different lesions. In this paper, we propose CircleMix, a novel data augmentation technique, to improve the accuracy of classifying globally sclerotic glomeruli with a hierarchical learning strategy. Different from the recently proposed CutMix method, the CircleMix augmentation is optimized for ball-shaped biomedical objects, such as glomeruli. 6,861 glomeruli with five classes (normal, periglomerular fibrosis, obsolescent glomerulosclerosis, solidified glomerulosclerosis, and disappearing glomerulosclerosis) were employed to develop and evaluate the proposed methods. From five-fold cross-validation, the proposed CircleMix augmentation achieved superior performance (Balanced Accuracy=73.0%) compared with the EfficientNet-B0 baseline (Balanced Accuracy=69.4%).
2210.02880
Isaac H Lichter-Marck
Isaac Lichter-Marck
Plant evolution on rock outcrops and cliffs: contrasting patterns of diversification following edaphic specialization
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sheer cliffs on mountains and in deep canyons are among the world's most iconic landmarks, but our understanding of the enigmatic flora that lives in these vertical rock landscapes remains fragmented. In this article, I review and synthesize recent studies on the evolution of specialization onto bare rock and its consequences for plant diversification. Putative adaptations commonly associated with growth on bare rock include specialized root structures, stress tolerant leaf traits, and reduced dispersibility. Fitness tradeoffs are a principal explanation for edaphic specialization, but adaptation to bare environments stands apart as a precursor environment to specialization in other stressful habitats, such as chemically harsh soils. In species level phylogenies, many rock specialist plants are evolutionarily isolated and form the sister lineage to congeners found on decomposed substrates, suggesting that specialization onto bare rock may be an evolutionary trap or dead end. In other cases, rock specialists form diverse clades, suggesting archipelago speciation or ecological release enabled by innovations in stress tolerance. Today, in environments severely impacted by poor management and introduced organisms, such as islands, cliffs serve as important refuges for stress tolerant endangered plants. New technology for climbing safety and drone assisted reconnaissance open new possibilities for the discovery and conservation of cliff plants in these landscapes but come with the inherent risks of increasing their accessibility. The future success of efforts to save plants from extinction may depend on our understanding of the unique and resilient flora endemic to cliffy places.
[ { "created": "Wed, 5 Oct 2022 17:24:20 GMT", "version": "v1" } ]
2022-10-07
[ [ "Lichter-Marck", "Isaac", "" ] ]
Sheer cliffs on mountains and in deep canyons are among the world's most iconic landmarks, but our understanding of the enigmatic flora that lives in these vertical rock landscapes remains fragmented. In this article, I review and synthesize recent studies on the evolution of specialization onto bare rock and its consequences for plant diversification. Putative adaptations commonly associated with growth on bare rock include specialized root structures, stress tolerant leaf traits, and reduced dispersibility. Fitness tradeoffs are a principal explanation for edaphic specialization, but adaptation to bare environments stands apart as a precursor environment to specialization in other stressful habitats, such as chemically harsh soils. In species level phylogenies, many rock specialist plants are evolutionarily isolated and form the sister lineage to congeners found on decomposed substrates, suggesting that specialization onto bare rock may be an evolutionary trap or dead end. In other cases, rock specialists form diverse clades, suggesting archipelago speciation or ecological release enabled by innovations in stress tolerance. Today, in environments severely impacted by poor management and introduced organisms, such as islands, cliffs serve as important refuges for stress tolerant endangered plants. New technology for climbing safety and drone assisted reconnaissance open new possibilities for the discovery and conservation of cliff plants in these landscapes but come with the inherent risks of increasing their accessibility. The future success of efforts to save plants from extinction may depend on our understanding of the unique and resilient flora endemic to cliffy places.
0708.0703
Yong Chen
Yong Chen, Lianchun Yu, and Shao-Meng Qin
Detection of subthreshold pulses in neurons with channel noise
14 pages, 9 figures
Physical Review E 78, 051909 (2008)
10.1103/PhysRevE.78.051909
null
q-bio.NC q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons are subject to various kinds of noise. In addition to synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal processing properties of the neuron. In this paper, we studied the response of a stochastic Hodgkin-Huxley neuron to transient subthreshold input pulses. It was found that the average response time decreases but its variance increases as the amplitude of channel noise increases. In the case of single pulse detection, we show that channel noise enables a neuron to detect subthreshold signals and that an optimal membrane area (or channel noise intensity) exists for a single neuron to achieve optimal performance. However, the detection ability of a single neuron is limited by large errors. Here, we test a simple neuronal network that can enhance the pulse-detecting abilities of neurons and find that dozens of neurons can perfectly detect subthreshold pulses. The phenomenon of intrinsic stochastic resonance is also found both at the level of single neurons and at the level of networks. At the network level, the detection ability of networks can be optimized for the number of neurons comprising the network.
[ { "created": "Mon, 6 Aug 2007 05:15:44 GMT", "version": "v1" }, { "created": "Fri, 14 Nov 2008 00:25:11 GMT", "version": "v2" } ]
2008-11-14
[ [ "Chen", "Yong", "" ], [ "Yu", "Lianchun", "" ], [ "Qin", "Shao-Meng", "" ] ]
Neurons are subject to various kinds of noise. In addition to synaptic noise, the stochastic opening and closing of ion channels represents an intrinsic source of noise that affects the signal processing properties of the neuron. In this paper, we studied the response of a stochastic Hodgkin-Huxley neuron to transient subthreshold input pulses. It was found that the average response time decreases but its variance increases as the amplitude of channel noise increases. In the case of single pulse detection, we show that channel noise enables a neuron to detect subthreshold signals and that an optimal membrane area (or channel noise intensity) exists for a single neuron to achieve optimal performance. However, the detection ability of a single neuron is limited by large errors. Here, we test a simple neuronal network that can enhance the pulse-detecting abilities of neurons and find that dozens of neurons can perfectly detect subthreshold pulses. The phenomenon of intrinsic stochastic resonance is also found both at the level of single neurons and at the level of networks. At the network level, the detection ability of networks can be optimized for the number of neurons comprising the network.
1411.1612
Andrey Demichev
Andrey Demichev
Effect of Activity and Inter-Cluster Correlations on Information-Theoretic Properties of Neural Networks
16 pages, 9 figures
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On the basis of solutions of the master equation for networks with a small number of neurons, it is shown that the conditional entropy and integrated information of neural networks depend on their average activity and inter-cluster correlations.
[ { "created": "Thu, 6 Nov 2014 13:48:41 GMT", "version": "v1" } ]
2014-11-07
[ [ "Demichev", "Andrey", "" ] ]
On the basis of solutions of the master equation for networks with a small number of neurons, it is shown that the conditional entropy and integrated information of neural networks depend on their average activity and inter-cluster correlations.
2003.11230
Carlos Armando De Castro Ing.
Carlos Armando De Castro
SIR Model for COVID-19 calibrated with existing data and projected for Colombia
Model has been proven incorrect by actual data
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper we develop an SIR epidemiological model with parameters calculated according to existing data at the time of writing (24/03/2020); the data are from Italy, South Korea and Colombia. The model is then used to project the evolution of the COVID-19 epidemic in Colombia for different scenarios, using the population data for the country and known initial conditions at the start of the simulation.
[ { "created": "Wed, 25 Mar 2020 06:15:26 GMT", "version": "v1" }, { "created": "Mon, 14 Sep 2020 16:42:29 GMT", "version": "v2" }, { "created": "Thu, 24 Sep 2020 19:52:53 GMT", "version": "v3" } ]
2020-09-28
[ [ "De Castro", "Carlos Armando", "" ] ]
In this paper we develop an SIR epidemiological model with parameters calculated according to existing data at the time of writing (24/03/2020); the data are from Italy, South Korea and Colombia. The model is then used to project the evolution of the COVID-19 epidemic in Colombia for different scenarios, using the population data for the country and known initial conditions at the start of the simulation.
1507.08484
Hideyasu Shimadzu
Hideyasu Shimadzu, Maria Dornelas and Anne E. Magurran
Measuring temporal turnover in ecological communities
null
Methods in Ecology and Evolution 6(12) (2015) 1384-1394
10.1111/2041-210X.12438
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Range migrations in response to climate change, invasive species and the emergence of novel ecosystems highlight the importance of temporal turnover in community composition as a fundamental part of global change in the Anthropocene. Temporal turnover is usually quantified using a variety of metrics initially developed to capture spatial change. However, temporal turnover is the consequence of unidirectional community dynamics resulting from processes such as population growth, colonisation and local extinction. Here, we develop a framework based on community dynamics, and propose a new temporal turnover measure. A simulation study and an analysis of an estuarine fish community both clearly demonstrate that our proposed turnover measure offers additional insights relative to spatial-context-based metrics. Our approach reveals whether community turnover is due to shifts in community composition or in community abundance, and identifies the species and/or environmental factors that are responsible for any change.
[ { "created": "Thu, 30 Jul 2015 13:03:06 GMT", "version": "v1" }, { "created": "Fri, 31 Jul 2015 11:11:34 GMT", "version": "v2" }, { "created": "Thu, 21 Apr 2016 23:28:24 GMT", "version": "v3" } ]
2016-04-25
[ [ "Shimadzu", "Hideyasu", "" ], [ "Dornelas", "Maria", "" ], [ "Magurran", "Anne E.", "" ] ]
Range migrations in response to climate change, invasive species and the emergence of novel ecosystems highlight the importance of temporal turnover in community composition as a fundamental part of global change in the Anthropocene. Temporal turnover is usually quantified using a variety of metrics initially developed to capture spatial change. However, temporal turnover is the consequence of unidirectional community dynamics resulting from processes such as population growth, colonisation and local extinction. Here, we develop a framework based on community dynamics, and propose a new temporal turnover measure. A simulation study and an analysis of an estuarine fish community both clearly demonstrate that our proposed turnover measure offers additional insights relative to spatial-context-based metrics. Our approach reveals whether community turnover is due to shifts in community composition or in community abundance, and identifies the species and/or environmental factors that are responsible for any change.
2211.04115
Mathilde Keck
Adrien Flahault, Mathilde Keck, Pierre-Emmanuel Girault-Sotias, Lucie Esteoulle (LIT), Nadia de Mota, Dominique Bonnet (LIT), Catherine Llorens-Cortes
LIT01-196, a Metabolically Stable Apelin-17 Analog, Normalizes Blood Pressure in Hypertensive DOCA-Salt Rats via a NO Synthase-dependent Mechanism
null
Frontiers in Pharmacology, Frontiers, 2021, 12
10.3389/fphar.2021.715095
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apelin is a neuro-vasoactive peptide that plays a major role in the control of cardiovascular functions and water balance, but has an in vivo half-life in the minute range, limiting its therapeutic use. We previously developed LIT01-196, a systemically active, metabolically stable apelin-17 analog, produced by chemical addition of a fluorocarbon chain to the N-terminal part of apelin-17. LIT01-196 behaves as a potent full agonist for the apelin receptor and has an in vivo half-life in the bloodstream of 28 min after intravenous (i.v.) and 156 min after subcutaneous (s.c.) administration in conscious normotensive rats. We aimed to investigate the effects of systemic LIT01-196 administration on arterial blood pressure, heart rate, fluid balance and electrolytes in conscious normotensive and hypertensive deoxycorticosterone acetate (DOCA)-salt rats. Acute i.v. LIT01-196 administration, in increasing doses, dose-dependently decreases arterial blood pressure, with ED50 values of 9.8 and 3.1 nmol/kg in normotensive and hypertensive rats, respectively. In both cases, this effect occurs via a nitric oxide-dependent mechanism. Moreover, acute s.c. LIT01-196 administration (90 nmol/kg) normalizes arterial blood pressure in conscious hypertensive DOCA-salt rats for more than 7 h. The LIT01-196-induced blood pressure decrease remains unchanged after 4 consecutive daily s.c. administrations of 90 nmol/kg and does not induce any alteration of plasma sodium and potassium levels or of kidney function, as shown by the lack of change in plasma creatinine and urea nitrogen levels. Activating the apelin receptor with LIT01-196 may constitute a novel approach for the treatment of hypertension.
[ { "created": "Tue, 8 Nov 2022 09:26:11 GMT", "version": "v1" } ]
2022-11-09
[ [ "Flahault", "Adrien", "", "LIT" ], [ "Keck", "Mathilde", "", "LIT" ], [ "Girault-Sotias", "Pierre-Emmanuel", "", "LIT" ], [ "Esteoulle", "Lucie", "", "LIT" ], [ "de Mota", "Nadia", "", "LIT" ], [ "Bonnet", "Dominique", "", "LIT" ], [ "Llorens-Cortes", "Catherine", "" ] ]
Apelin is a neuro-vasoactive peptide that plays a major role in the control of cardiovascular functions and water balance, but has an in vivo half-life in the minute range, limiting its therapeutic use. We previously developed LIT01-196, a systemically active, metabolically stable apelin-17 analog, produced by chemical addition of a fluorocarbon chain to the N-terminal part of apelin-17. LIT01-196 behaves as a potent full agonist for the apelin receptor and has an in vivo half-life in the bloodstream of 28 min after intravenous (i.v.) and 156 min after subcutaneous (s.c.) administration in conscious normotensive rats. We aimed to investigate the effects of systemic LIT01-196 administration on arterial blood pressure, heart rate, fluid balance and electrolytes in conscious normotensive and hypertensive deoxycorticosterone acetate (DOCA)-salt rats. Acute i.v. LIT01-196 administration, in increasing doses, dose-dependently decreases arterial blood pressure, with ED50 values of 9.8 and 3.1 nmol/kg in normotensive and hypertensive rats, respectively. In both cases, this effect occurs via a nitric oxide-dependent mechanism. Moreover, acute s.c. LIT01-196 administration (90 nmol/kg) normalizes arterial blood pressure in conscious hypertensive DOCA-salt rats for more than 7 h. The LIT01-196-induced blood pressure decrease remains unchanged after 4 consecutive daily s.c. administrations of 90 nmol/kg and does not induce any alteration of plasma sodium and potassium levels or of kidney function, as shown by the lack of change in plasma creatinine and urea nitrogen levels. Activating the apelin receptor with LIT01-196 may constitute a novel approach for the treatment of hypertension.
2405.01715
Matheus Henrique Pimenta-Zanon
Matheus Henrique Pimenta-Zanon and Andr\'e Yoshiaki Kashiwabara and Andr\'e Lu\'is Laforga Vanzela and Fabricio Martins Lopes
Identification of SNPs in genomes using GRAMEP, an alignment-free method based on the Principle of Maximum Entropy
null
null
null
null
q-bio.GN cs.IT math.IT stat.AP
http://creativecommons.org/licenses/by-nc-sa/4.0/
Advances in high throughput sequencing technologies provide a large number of genomes to be analyzed, so computational methodologies play a crucial role in analyzing and extracting knowledge from the data generated. Investigating genomic mutations is critical because of their impact on chromosomal evolution, genetic disorders, and diseases. It is common to adopt sequence alignment for analyzing genomic variations; however, this approach can be computationally expensive and potentially arbitrary in scenarios with large datasets. Here, we present a novel method for identifying single nucleotide polymorphisms (SNPs) in DNA sequences from assembled genomes. This method uses the principle of maximum entropy to select the most informative k-mers specific to the variant under investigation. The use of this informative k-mer set enables the detection of variant-specific mutations in comparison to a reference sequence. In addition, our method offers the possibility of classifying novel sequences with no need for organism-specific information. GRAMEP demonstrated high accuracy in both in silico simulations and analyses of real viral genomes, including Dengue, HIV, and SARS-CoV-2. Our approach maintained accurate SARS-CoV-2 variant identification while demonstrating a lower computational cost compared to the gold-standard statistical tools. The source code for this proof-of-concept implementation is freely available at https://github.com/omatheuspimenta/GRAMEP.
[ { "created": "Thu, 2 May 2024 20:21:24 GMT", "version": "v1" } ]
2024-05-06
[ [ "Pimenta-Zanon", "Matheus Henrique", "" ], [ "Kashiwabara", "André Yoshiaki", "" ], [ "Vanzela", "André Luís Laforga", "" ], [ "Lopes", "Fabricio Martins", "" ] ]
Advances in high throughput sequencing technologies provide a large number of genomes to be analyzed, so computational methodologies play a crucial role in analyzing and extracting knowledge from the data generated. Investigating genomic mutations is critical because of their impact on chromosomal evolution, genetic disorders, and diseases. It is common to adopt sequence alignment for analyzing genomic variations; however, this approach can be computationally expensive and potentially arbitrary in scenarios with large datasets. Here, we present a novel method for identifying single nucleotide polymorphisms (SNPs) in DNA sequences from assembled genomes. This method uses the principle of maximum entropy to select the most informative k-mers specific to the variant under investigation. The use of this informative k-mer set enables the detection of variant-specific mutations in comparison to a reference sequence. In addition, our method offers the possibility of classifying novel sequences with no need for organism-specific information. GRAMEP demonstrated high accuracy in both in silico simulations and analyses of real viral genomes, including Dengue, HIV, and SARS-CoV-2. Our approach maintained accurate SARS-CoV-2 variant identification while demonstrating a lower computational cost compared to the gold-standard statistical tools. The source code for this proof-of-concept implementation is freely available at https://github.com/omatheuspimenta/GRAMEP.
2010.00305
Tsuyoshi Hondou
Tsuyoshi Hondou
Economic irreversibility in pandemic control processes: Rigorous modeling of delayed countermeasures and consequential cost increases
18 pages, including 6 figures
J. Phys. Soc. Jpn. 90, 114007 (2021) [8 Pages]
10.7566/JPSJ.90.114007
null
q-bio.PE econ.TH physics.bio-ph physics.med-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After the first lockdown in response to the COVID-19 outbreak, many countries faced difficulties in balancing infection control with economics. Due to limited prior knowledge, economists began researching this issue using cost-benefit analysis and found that infection control processes significantly affect economic efficiency. A UK study used economic parameters to numerically demonstrate an optimal balance in the process, including keeping the infected population stationary. However, universally applicable knowledge, which is indispensable for the guiding principles of infection control, has not yet been clearly developed because of the methodological limitations of simulation studies. Here, we propose a simple model and theoretically prove the universal result of economic irreversibility by applying the idea of thermodynamics to pandemic control. This means that delaying infection control measures is more expensive than implementing infection control measures early while keeping infected populations stationary. This implies that once the infected population increases, society cannot return to its previous state without extra expenditures. This universal result is analytically obtained by focusing on the infection-spreading phase of pandemics, and is applicable not just to COVID-19, regardless of "herd immunity." It also confirms the numerical observation of stationary infected populations in its optimally efficient process. Our findings suggest that economic irreversibility is a guiding principle for balancing infection control with economic effects.
[ { "created": "Thu, 1 Oct 2020 11:28:45 GMT", "version": "v1" }, { "created": "Wed, 21 Oct 2020 09:38:49 GMT", "version": "v2" }, { "created": "Thu, 5 Nov 2020 08:33:41 GMT", "version": "v3" }, { "created": "Wed, 18 Nov 2020 06:28:02 GMT", "version": "v4" }, { "created": "Mon, 14 Dec 2020 07:04:00 GMT", "version": "v5" }, { "created": "Wed, 16 Dec 2020 06:17:22 GMT", "version": "v6" }, { "created": "Tue, 22 Dec 2020 06:58:49 GMT", "version": "v7" }, { "created": "Sat, 2 Jan 2021 11:22:52 GMT", "version": "v8" }, { "created": "Thu, 4 Mar 2021 09:39:55 GMT", "version": "v9" } ]
2021-10-19
[ [ "Hondou", "Tsuyoshi", "" ] ]
After the first lockdown in response to the COVID-19 outbreak, many countries faced difficulties in balancing infection control with economics. Due to limited prior knowledge, economists began researching this issue using cost-benefit analysis and found that infection control processes significantly affect economic efficiency. A UK study used economic parameters to numerically demonstrate an optimal balance in the process, including keeping the infected population stationary. However, universally applicable knowledge, which is indispensable for the guiding principles of infection control, has not yet been clearly developed because of the methodological limitations of simulation studies. Here, we propose a simple model and theoretically prove the universal result of economic irreversibility by applying the idea of thermodynamics to pandemic control. This means that delaying infection control measures is more expensive than implementing infection control measures early while keeping infected populations stationary. This implies that once the infected population increases, society cannot return to its previous state without extra expenditures. This universal result is analytically obtained by focusing on the infection-spreading phase of pandemics, and is applicable not just to COVID-19, regardless of "herd immunity." It also confirms the numerical observation of stationary infected populations in its optimally efficient process. Our findings suggest that economic irreversibility is a guiding principle for balancing infection control with economic effects.
2005.03686
Chittaranjan Hens
Sirshendu Bhattacharyya, Pritam Sinha, Rina De, Chittaranjan Hens
Mortality makes coexistence vulnerable in evolutionary game of rock-paper-scissors
null
Phys. Rev. E 102, 012220 (2020)
10.1103/PhysRevE.102.012220
null
q-bio.PE nlin.AO nlin.CD physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple species in an ecosystem are believed to compete cyclically for survival and thus maintain balance in nature. Stochasticity also has an inevitable role in these dynamics. Considering these attributes of nature, the stochastic dynamics of the rock-paper-scissors model, based on the idea of cyclic dominance, becomes an effective tool to capture different aspects of ecosystems. The evolutionary dynamics of this model crucially depends on different interactions representing different natural habits. In this framework we explore the role of the mortality of individual organisms in the collective survival of a species. For this purpose a new parameter called `natural death' is introduced. It is meant to bring about the decease of an individual irrespective of any intra- and interspecific interaction. We perform Monte Carlo simulations followed by a stability analysis of the different fixed points of the defined rate equations and observe that the natural death rate is surprisingly one of the most significant factors in deciding whether an ecosystem ends up with coexistence or single-species survival.
[ { "created": "Thu, 7 May 2020 18:12:31 GMT", "version": "v1" } ]
2020-08-05
[ [ "Bhattacharyya", "Sirshendu", "" ], [ "Sinha", "Pritam", "" ], [ "De", "Rina", "" ], [ "Hens", "Chittaranjan", "" ] ]
Multiple species in an ecosystem are believed to compete cyclically for survival and thus maintain balance in nature. Stochasticity also has an inevitable role in these dynamics. Considering these attributes of nature, the stochastic dynamics of the rock-paper-scissors model, based on the idea of cyclic dominance, becomes an effective tool to capture different aspects of ecosystems. The evolutionary dynamics of this model crucially depends on different interactions representing different natural habits. In this framework we explore the role of the mortality of individual organisms in the collective survival of a species. For this purpose a new parameter called `natural death' is introduced. It is meant to bring about the decease of an individual irrespective of any intra- and interspecific interaction. We perform Monte Carlo simulations followed by a stability analysis of the different fixed points of the defined rate equations and observe that the natural death rate is surprisingly one of the most significant factors in deciding whether an ecosystem ends up with coexistence or single-species survival.
1804.02454
Keisuke Okamura
Keisuke Okamura
Affinity-based extension of non-extensive entropy and statistical mechanics
[v1] 57 pages, many figures. [v2] Version published in Physica A; 35 pages, 8 figures
Physica A 557 (2020) 124849
10.1016/j.physa.2020.124849
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Tsallis' non-extensive entropy is extended to incorporate the dependence on affinities between the microstates of a system. At the core of our construction of the extended entropy ($\mathcal{H}$) is the concept of the effective number of dissimilar states, termed the effective diversity ($\mathit{\Delta}$). It is a unique integrated measure derived from the probability distribution among states and the affinities between states. The effective diversity is related to the extended entropy through a Boltzmann-equation-like relation, $\mathcal{H}=\ln_{q}\mathit{\Delta}$, in terms of the Tsallis $q$-logarithm. A new principle called the Nesting Principle is established, stating that the effective diversity remains invariant under an arbitrary grouping of the constituent states. It is shown that this invariance property holds only for $q=2$; however, the invariance is recovered for general $q$ in the zero-affinity limit (i.e. the Tsallis and Boltzmann-Gibbs case). Using the affinity-based extended Tsallis entropy, the microcanonical and the canonical ensembles are constructed in the presence of general between-state affinities. It is shown that the classic postulate of equal a priori probabilities no longer holds but is modified by affinity-dependent terms. As an illustration, a two-level system is investigated by the extended canonical method, which shows that the thermal behaviours of the thermodynamic quantities at equilibrium are affected by the between-state affinity. Furthermore, some applications and implications of the affinity-based extended diversity/entropy for information theory and biodiversity theory are addressed in the appendices.
[ { "created": "Thu, 29 Mar 2018 16:23:23 GMT", "version": "v1" }, { "created": "Sun, 6 Feb 2022 11:42:47 GMT", "version": "v2" } ]
2022-02-08
[ [ "Okamura", "Keisuke", "" ] ]
Tsallis' non-extensive entropy is extended to incorporate the dependence on affinities between the microstates of a system. At the core of our construction of the extended entropy ($\mathcal{H}$) is the concept of the effective number of dissimilar states, termed the effective diversity ($\mathit{\Delta}$). It is a unique integrated measure derived from the probability distribution among states and the affinities between states. The effective diversity is related to the extended entropy through a Boltzmann-equation-like relation, $\mathcal{H}=\ln_{q}\mathit{\Delta}$, in terms of the Tsallis $q$-logarithm. A new principle called the Nesting Principle is established, stating that the effective diversity remains invariant under an arbitrary grouping of the constituent states. It is shown that this invariance property holds only for $q=2$; however, the invariance is recovered for general $q$ in the zero-affinity limit (i.e. the Tsallis and Boltzmann-Gibbs case). Using the affinity-based extended Tsallis entropy, the microcanonical and the canonical ensembles are constructed in the presence of general between-state affinities. It is shown that the classic postulate of equal a priori probabilities no longer holds but is modified by affinity-dependent terms. As an illustration, a two-level system is investigated by the extended canonical method, which shows that the thermal behaviours of the thermodynamic quantities at equilibrium are affected by the between-state affinity. Furthermore, some applications and implications of the affinity-based extended diversity/entropy for information theory and biodiversity theory are addressed in the appendices.
2307.02169
K. Anton Feenstra
Annika Jacobsen, Erik van Dijk, Halima Mouhib, Bas Stringer, Olga Ivanova, Jose Gavald\'a-Garci\'a, Laura Hoekstra, K. Anton Feenstra, Sanne Abeln
Introduction to Protein Structure
editorial responsability: Laura Hoekstra, K. Anton Feenstra, Sanne Abeln. This chapter is part of the book "Introduction to Protein Structural Bioinformatics". The Preface arXiv:1801.09442 contains links to all the (published) chapters. The update adds available arxiv hyperlinks for the chapters
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which is where the previous topics meet to explore three-dimensional protein structures through computational analysis. We provide an overview of existing computational techniques to validate, simulate, predict and analyse protein structures. More importantly, it aims to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. Within the living cell, protein molecules perform specific functions, typically by interacting with other proteins, DNA, RNA or small molecules. They take on specific three-dimensional structures, encoded by their amino acid sequences, which allow them to function within the cell. Hence, the understanding of a protein's function is tightly coupled to its sequence and its three-dimensional structure. Before going into protein structure analysis and prediction, and protein folding and dynamics, we give here a short and concise introduction to the basics of protein structures.
[ { "created": "Wed, 5 Jul 2023 10:12:55 GMT", "version": "v1" }, { "created": "Thu, 6 Jul 2023 18:07:01 GMT", "version": "v2" } ]
2023-07-10
[ [ "Jacobsen", "Annika", "" ], [ "van Dijk", "Erik", "" ], [ "Mouhib", "Halima", "" ], [ "Stringer", "Bas", "" ], [ "Ivanova", "Olga", "" ], [ "Gavaldá-Garciá", "Jose", "" ], [ "Hoekstra", "Laura", "" ], [ "Feenstra", "K. Anton", "" ], [ "Abeln", "Sanne", "" ] ]
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which is where the previous topics meet to explore three-dimensional protein structures through computational analysis. We provide an overview of existing computational techniques to validate, simulate, predict and analyse protein structures. More importantly, it aims to provide practical knowledge about how and when to use such techniques. We will consider proteins from three major vantage points: Protein structure quantification, Protein structure prediction, and Protein simulation & dynamics. Within the living cell, protein molecules perform specific functions, typically by interacting with other proteins, DNA, RNA or small molecules. They take on specific three-dimensional structures, encoded by their amino acid sequences, which allow them to function within the cell. Hence, the understanding of a protein's function is tightly coupled to its sequence and its three-dimensional structure. Before going into protein structure analysis and prediction, and protein folding and dynamics, we give here a short and concise introduction to the basics of protein structures.
0808.0751
Cecilia Lagorio
C. Lagorio, M. V. Migueles, L. A. Braunstein, E. L\'opez, P. A. Macri
Effects of epidemic threshold definition on disease spread statistics
12 pages, 8 figures
null
10.1016/j.physa.2008.10.045
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the statistical properties of the SIR epidemics in heterogeneous networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size s_c. Using percolation theory to calculate the average fractional size <M_SIR> of an epidemic, we find that the strength of the spanning link percolation cluster $P_{\infty}$ is an upper bound to <M_SIR>. For small values of s_c, $P_{\infty}$ is no longer a good approximation, and the average fractional size has to be computed directly. The value of s_c for which $P_{\infty}$ is a good approximation is found to depend on the transmissibility T of the SIR. We also study Q, the probability that an SIR propagation reaches the epidemic mass s_c, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice s_c on predictions of average outcome sizes of computer failure epidemics.
[ { "created": "Wed, 6 Aug 2008 00:38:32 GMT", "version": "v1" } ]
2009-11-13
[ [ "Lagorio", "C.", "" ], [ "Migueles", "M. V.", "" ], [ "Braunstein", "L. A.", "" ], [ "López", "E.", "" ], [ "Macri", "P. A.", "" ] ]
We study the statistical properties of the SIR epidemics in heterogeneous networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size s_c. Using percolation theory to calculate the average fractional size <M_SIR> of an epidemic, we find that the strength of the spanning link percolation cluster $P_{\infty}$ is an upper bound to <M_SIR>. For small values of s_c, $P_{\infty}$ is no longer a good approximation, and the average fractional size has to be computed directly. The value of s_c for which $P_{\infty}$ is a good approximation is found to depend on the transmissibility T of the SIR. We also study Q, the probability that an SIR propagation reaches the epidemic mass s_c, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice s_c on predictions of average outcome sizes of computer failure epidemics.
1103.2366
Gerhard Werner MD
Gerhard Werner
Consciousness Viewed in the Framework of Brain Phase Space Dynamics, Criticality, and the Renormalization Group
null
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/3.0/
To set the stage for viewing Consciousness in terms of brain phase space dynamics and criticality, I will first review currently prominent theoretical conceptualizations and, where appropriate, identify ill-advised and flawed notions in Theoretical Neuroscience that may impede viewing Consciousness as a phenomenon in Physics. I will furthermore introduce relevant facts that tend not to receive adequate attention in much of the current Consciousness discourse. As a new approach to conceptualizing Consciousness, I propose considering it as a collective achievement of the brain's complex neural dynamics that is amenable to study in the framework of state space dynamics and criticality. In Physics, concepts of phase space transitions and the Renormalization Group are powerful tools for interpreting phenomena involving many scales of length and time in complex systems. The significance of these concepts lies in their accounting for the emergence of different levels of new collective behaviors in complex systems, each level with its distinct ontology, organization and laws, as a new pattern of reality. The presumption of this proposal is that the subjectivity of Consciousness is the epistemic interpretation of a level of reality that originates in phase transitions of the brain-body-environment system.
[ { "created": "Mon, 7 Mar 2011 23:51:22 GMT", "version": "v1" } ]
2011-03-14
[ [ "Werner", "Gerhard", "" ] ]
To set the stage for viewing Consciousness in terms of brain phase space dynamics and criticality, I will first review currently prominent theoretical conceptualizations and, where appropriate, identify ill-advised and flawed notions in Theoretical Neuroscience that may impede viewing Consciousness as a phenomenon in Physics. I will furthermore introduce relevant facts that tend not to receive adequate attention in much of the current Consciousness discourse. As a new approach to conceptualizing Consciousness, I propose considering it as a collective achievement of the brain's complex neural dynamics that is amenable to study in the framework of state space dynamics and criticality. In Physics, concepts of phase space transitions and the Renormalization Group are powerful tools for interpreting phenomena involving many scales of length and time in complex systems. The significance of these concepts lies in their accounting for the emergence of different levels of new collective behaviors in complex systems, each level with its distinct ontology, organization and laws, as a new pattern of reality. The presumption of this proposal is that the subjectivity of Consciousness is the epistemic interpretation of a level of reality that originates in phase transitions of the brain-body-environment system.
1901.04399
Janina Hesse
Janina Hesse, Susanne Schreiber
How to correctly quantify neuronal phase-response curves from noisy recordings
13 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At the level of individual neurons, various coding properties can be inferred from the input-output relationship of a cell. For small inputs, this relation is captured by the phase-response curve (PRC), which measures the effect of a small perturbation on the timing of the subsequent spike. Experimentally, however, an accurate experimental estimation of PRCs is challenging. Despite elaborate measurement efforts, experimental PRC estimates often cannot be related to those from modeling studies. In particular, experimental PRCs rarely resemble the generic PRC expected close to spike initiation, which is indicative of the underlying spike-onset bifurcation. Here, we show for conductance-based model neurons that the correspondence between theoretical and measured phase-response curve is lost when the stimuli used for the estimation are too large. In this case, the derived phase-response curve is distorted beyond recognition and takes on a generic shape that reflects the measurement protocol, but not the real neuronal dynamics. We discuss how to identify appropriate stimulus strengths for perturbation and noise-stimulation methods, which permit to estimate PRCs that reliably reflect the spike-onset bifurcation -- a task that is particularly difficult if a lower bound for the stimulus amplitude is dictated by prominent intrinsic neuronal noise.
[ { "created": "Mon, 14 Jan 2019 16:58:10 GMT", "version": "v1" } ]
2019-01-15
[ [ "Hesse", "Janina", "" ], [ "Schreiber", "Susanne", "" ] ]
At the level of individual neurons, various coding properties can be inferred from the input-output relationship of a cell. For small inputs, this relation is captured by the phase-response curve (PRC), which measures the effect of a small perturbation on the timing of the subsequent spike. Experimentally, however, an accurate experimental estimation of PRCs is challenging. Despite elaborate measurement efforts, experimental PRC estimates often cannot be related to those from modeling studies. In particular, experimental PRCs rarely resemble the generic PRC expected close to spike initiation, which is indicative of the underlying spike-onset bifurcation. Here, we show for conductance-based model neurons that the correspondence between theoretical and measured phase-response curve is lost when the stimuli used for the estimation are too large. In this case, the derived phase-response curve is distorted beyond recognition and takes on a generic shape that reflects the measurement protocol, but not the real neuronal dynamics. We discuss how to identify appropriate stimulus strengths for perturbation and noise-stimulation methods, which permit to estimate PRCs that reliably reflect the spike-onset bifurcation -- a task that is particularly difficult if a lower bound for the stimulus amplitude is dictated by prominent intrinsic neuronal noise.
0907.4289
Thorsten Erdmann
Thorsten Erdmann, Martin Howard, Pieter Rein ten Wolde
The role of spatial averaging in the precision of gene expression patterns
5 pages, 4 EPS figures
null
10.1103/PhysRevLett.103.258101
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During embryonic development, differentiating cells respond via gene expression to positional cues from morphogen gradients. While gene expression is often highly erratic, embryonic development is precise. We show by theory and simulations that diffusion of the expressed protein can enhance the precision of its expression domain. While diffusion lessens the sharpness of the expression boundary, it also reduces super-Poissonian noise by washing out bursts of gene expression. Balancing these effects yields an optimal diffusion constant maximizing the precision of the expression domain.
[ { "created": "Fri, 24 Jul 2009 13:52:13 GMT", "version": "v1" } ]
2015-05-13
[ [ "Erdmann", "Thorsten", "" ], [ "Howard", "Martin", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
During embryonic development, differentiating cells respond via gene expression to positional cues from morphogen gradients. While gene expression is often highly erratic, embryonic development is precise. We show by theory and simulations that diffusion of the expressed protein can enhance the precision of its expression domain. While diffusion lessens the sharpness of the expression boundary, it also reduces super-Poissonian noise by washing out bursts of gene expression. Balancing these effects yields an optimal diffusion constant maximizing the precision of the expression domain.
1503.00162
Rallis Karamichalis
Rallis Karamichalis, Lila Kari, Stavros Konstantinidis, Steffen Kopecki
An investigation into inter- and intragenomic variations of graphic genomic signatures
14 pages, 6 figures, 5 tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide, on an extensive dataset and using several different distances, confirmation of the hypothesis that CGR patterns are preserved along a genomic DNA sequence, and are different for DNA sequences originating from genomes of different species. This finding lends support to the theory that CGRs of genomic sequences can act as graphic genomic signatures. In particular, we compare the CGR patterns of over five hundred different 150,000 bp genomic sequences originating from the genomes of six organisms, each belonging to one of the kingdoms of life: H. sapiens, S. cerevisiae, A. thaliana, P. falciparum, E. coli, and P. furiosus. We also provide preliminary evidence of this method's applicability to closely related species by comparing H. sapiens (chromosome 21) sequences and over one hundred and fifty genomic sequences, also 150,000 bp long, from P. troglodytes (Animalia; chromosome Y), for a total length of more than 101 million basepairs analyzed. We compute pairwise distances between CGRs of these genomic sequences using six different distances, and construct Molecular Distance Maps that visualize all sequences as points in a two-dimensional or three-dimensional space, to simultaneously display their interrelationships. Our analysis confirms that CGR patterns of DNA sequences from the same genome are in general quantitatively similar, while being different for DNA sequences from genomes of different species. Our analysis of the performance of the assessed distances uses three different quality measures and suggests that several distances outperform the Euclidean distance, which has so far been almost exclusively used for such studies. In particular we show that, for this dataset, DSSIM (Structural Dissimilarity Index) and the descriptor distance (introduced here) are best able to classify genomic sequences.
[ { "created": "Sat, 28 Feb 2015 18:13:53 GMT", "version": "v1" }, { "created": "Tue, 10 Mar 2015 02:43:42 GMT", "version": "v2" } ]
2015-03-11
[ [ "Karamichalis", "Rallis", "" ], [ "Kari", "Lila", "" ], [ "Konstantinidis", "Stavros", "" ], [ "Kopecki", "Steffen", "" ] ]
We provide, on an extensive dataset and using several different distances, confirmation of the hypothesis that CGR patterns are preserved along a genomic DNA sequence, and are different for DNA sequences originating from genomes of different species. This finding lends support to the theory that CGRs of genomic sequences can act as graphic genomic signatures. In particular, we compare the CGR patterns of over five hundred different 150,000 bp genomic sequences originating from the genomes of six organisms, each belonging to one of the kingdoms of life: H. sapiens, S. cerevisiae, A. thaliana, P. falciparum, E. coli, and P. furiosus. We also provide preliminary evidence of this method's applicability to closely related species by comparing H. sapiens (chromosome 21) sequences and over one hundred and fifty genomic sequences, also 150,000 bp long, from P. troglodytes (Animalia; chromosome Y), for a total length of more than 101 million basepairs analyzed. We compute pairwise distances between CGRs of these genomic sequences using six different distances, and construct Molecular Distance Maps that visualize all sequences as points in a two-dimensional or three-dimensional space, to simultaneously display their interrelationships. Our analysis confirms that CGR patterns of DNA sequences from the same genome are in general quantitatively similar, while being different for DNA sequences from genomes of different species. Our analysis of the performance of the assessed distances uses three different quality measures and suggests that several distances outperform the Euclidean distance, which has so far been almost exclusively used for such studies. In particular we show that, for this dataset, DSSIM (Structural Dissimilarity Index) and the descriptor distance (introduced here) are best able to classify genomic sequences.
1108.5154
Martin Kreidl
Martin Kreidl
Note on expected internode distances for gene trees in species trees
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper on 'Estimating Species Trees from Unrooted Gene Trees' Liu and Yu observe that the distance matrix on the underlying taxon set, which is built up from expected internode distances on gene trees under the multispecies coalescent, is tree-like, and that the underlying additive tree has the same topology as the true species tree. Hence they suggest to use (observed) average internode distances on gene trees as an input for the neighbor joining algorithm to estimate the underlying species tree in a statistically consistent way. In this note we give a rigorous proof of their above mentioned observation.
[ { "created": "Thu, 25 Aug 2011 18:45:10 GMT", "version": "v1" } ]
2011-08-26
[ [ "Kreidl", "Martin", "" ] ]
In a recent paper on 'Estimating Species Trees from Unrooted Gene Trees' Liu and Yu observe that the distance matrix on the underlying taxon set, which is built up from expected internode distances on gene trees under the multispecies coalescent, is tree-like, and that the underlying additive tree has the same topology as the true species tree. Hence they suggest to use (observed) average internode distances on gene trees as an input for the neighbor joining algorithm to estimate the underlying species tree in a statistically consistent way. In this note we give a rigorous proof of their above mentioned observation.
1506.07624
Jiajia Dong
J. J. Dong, B. Skinner, N. Breecher, B. Schmittmann, R.K.P. Zia
Spatial structures in a simple model of population dynamics for parasite-host interactions
6 pages, 6 figures
null
10.1209/0295-5075/111/48001
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatial patterning can be crucially important for understanding the behavior of interacting populations. Here we investigate a simple model of parasite and host populations in which parasites are random walkers that must come into contact with a host in order to reproduce. We focus on the spatial arrangement of parasites around a single host, and we derive using analytics and numerical simulations the necessary conditions placed on the parasite fecundity and lifetime for the populations' long-term survival. We also show that the parasite population can be pushed to extinction by a large drift velocity, but, counterintuitively, a small drift velocity generally increases the parasite population.
[ { "created": "Thu, 25 Jun 2015 06:10:02 GMT", "version": "v1" } ]
2015-09-07
[ [ "Dong", "J. J.", "" ], [ "Skinner", "B.", "" ], [ "Breecher", "N.", "" ], [ "Schmittmann", "B.", "" ], [ "Zia", "R. K. P.", "" ] ]
Spatial patterning can be crucially important for understanding the behavior of interacting populations. Here we investigate a simple model of parasite and host populations in which parasites are random walkers that must come into contact with a host in order to reproduce. We focus on the spatial arrangement of parasites around a single host, and we derive using analytics and numerical simulations the necessary conditions placed on the parasite fecundity and lifetime for the populations' long-term survival. We also show that the parasite population can be pushed to extinction by a large drift velocity, but, counterintuitively, a small drift velocity generally increases the parasite population.
2207.00815
Kazufumi Hosoda
Kazufumi Hosoda, Shigeto Seno, Tsutomu Murata
Simulating reaction time for Eureka effect in visual object recognition using artificial neural network
2 pages, 2 figures
null
null
null
q-bio.NC cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human brain can recognize objects hidden in even severely degraded images after observing them for a while, which is known as a type of Eureka effect, possibly associated with human creativity. A previous psychological study suggests that the basis of this "Eureka recognition" is neural processes of coincidence of multiple stochastic activities. Here we constructed an artificial-neural-network-based model that simulated the characteristics of the human Eureka recognition.
[ { "created": "Thu, 30 Jun 2022 10:58:12 GMT", "version": "v1" } ]
2022-07-05
[ [ "Hosoda", "Kazufumi", "" ], [ "Seno", "Shigeto", "" ], [ "Murata", "Tsutomu", "" ] ]
The human brain can recognize objects hidden in even severely degraded images after observing them for a while, which is known as a type of Eureka effect, possibly associated with human creativity. A previous psychological study suggests that the basis of this "Eureka recognition" is neural processes of coincidence of multiple stochastic activities. Here we constructed an artificial-neural-network-based model that simulated the characteristics of the human Eureka recognition.
1209.1654
Giuseppe Jurman
Giuseppe Jurman and Michele Filosi and Roberto Visintainer and Samantha Riccadonna and Cesare Furlanello
Stability Indicators in Network Reconstruction
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The number of algorithms available to reconstruct a biological network from a dataset of high-throughput measurements is nowadays overwhelming, but evaluating their performance when the gold standard is unknown is a difficult task. Here we propose to use a few reconstruction stability tools as a quantitative solution to this problem. We introduce four indicators to quantitatively assess the stability of a reconstructed network in terms of variability with respect to data subsampling. In particular, we give a measure of the mutual distances among the set of networks generated by a collection of data subsets (and from the network generated on the whole dataset) and we rank nodes and edges according to their decreasing variability within the same set of networks. As a key ingredient, we employ a global/local network distance combined with a bootstrap procedure. We demonstrate the use of the indicators in a controlled situation on a toy dataset, and we show their application on a miRNA microarray dataset with paired tumoral and non-tumoral tissues extracted from a cohort of 241 hepatocellular carcinoma patients.
[ { "created": "Fri, 7 Sep 2012 21:23:35 GMT", "version": "v1" } ]
2012-09-11
[ [ "Jurman", "Giuseppe", "" ], [ "Filosi", "Michele", "" ], [ "Visintainer", "Roberto", "" ], [ "Riccadonna", "Samantha", "" ], [ "Furlanello", "Cesare", "" ] ]
The number of algorithms available to reconstruct a biological network from a dataset of high-throughput measurements is nowadays overwhelming, but evaluating their performance when the gold standard is unknown is a difficult task. Here we propose to use a few reconstruction stability tools as a quantitative solution to this problem. We introduce four indicators to quantitatively assess the stability of a reconstructed network in terms of variability with respect to data subsampling. In particular, we give a measure of the mutual distances among the set of networks generated by a collection of data subsets (and from the network generated on the whole dataset) and we rank nodes and edges according to their decreasing variability within the same set of networks. As a key ingredient, we employ a global/local network distance combined with a bootstrap procedure. We demonstrate the use of the indicators in a controlled situation on a toy dataset, and we show their application on a miRNA microarray dataset with paired tumoral and non-tumoral tissues extracted from a cohort of 241 hepatocellular carcinoma patients.
2310.19481
Thomas Williams
Thomas Williams, James M. McCaw, James Osborne
Spatial information allows inference of the prevalence of direct cell-to-cell viral infection
null
null
null
null
q-bio.QM q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The role of direct cell-to-cell spread in viral infections - where virions spread between host and susceptible cells without needing to be secreted into the extracellular environment - has come to be understood as essential to the dynamics of medically significant viruses like hepatitis C and influenza. Recent work in both the experimental and mathematical modelling literature has attempted to quantify the prevalence of cell-to-cell infection compared to the conventional free virus route using a variety of methods and experimental data. However, estimates are subject to significant uncertainty and moreover rely on data collected by inhibiting one mode of infection by either chemical or physical factors, which may influence the other mode of infection to an extent which is difficult to quantify. In this work, we conduct a simulation-estimation study to probe the practical identifiability of the proportion of cell-to-cell infection, using two standard mathematical models and synthetic data that would likely be realistic to obtain in the laboratory. We show that this quantity cannot be estimated using non-spatial data alone, and that the collection of data which describes the spatial structure of the infection is necessary to infer the proportion of cell-to-cell infection. Our results provide guidance for the design of relevant experiments and mathematical tools for accurately inferring the prevalence of cell-to-cell infection in $\textit{in vitro}$ and $\textit{in vivo}$ contexts.
[ { "created": "Mon, 30 Oct 2023 12:09:57 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 00:44:46 GMT", "version": "v2" }, { "created": "Mon, 20 May 2024 04:14:00 GMT", "version": "v3" } ]
2024-05-21
[ [ "Williams", "Thomas", "" ], [ "McCaw", "James M.", "" ], [ "Osborne", "James", "" ] ]
The role of direct cell-to-cell spread in viral infections - where virions spread between host and susceptible cells without needing to be secreted into the extracellular environment - has come to be understood as essential to the dynamics of medically significant viruses like hepatitis C and influenza. Recent work in both the experimental and mathematical modelling literature has attempted to quantify the prevalence of cell-to-cell infection compared to the conventional free virus route using a variety of methods and experimental data. However, estimates are subject to significant uncertainty and moreover rely on data collected by inhibiting one mode of infection by either chemical or physical factors, which may influence the other mode of infection to an extent which is difficult to quantify. In this work, we conduct a simulation-estimation study to probe the practical identifiability of the proportion of cell-to-cell infection, using two standard mathematical models and synthetic data that would likely be realistic to obtain in the laboratory. We show that this quantity cannot be estimated using non-spatial data alone, and that the collection of data which describes the spatial structure of the infection is necessary to infer the proportion of cell-to-cell infection. Our results provide guidance for the design of relevant experiments and mathematical tools for accurately inferring the prevalence of cell-to-cell infection in $\textit{in vitro}$ and $\textit{in vivo}$ contexts.
2308.03714
Tom Chou
Xiangting Li, Sara Habibipour, Tom Chou, Otto O. Yang
The role of APOBEC3-induced mutations in the differential evolution of monkeypox virus
22 pages, 7 figures, Supplement
null
null
null
q-bio.PE cond-mat.stat-mech q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent studies show that newly sampled monkeypox virus (MPXV) genomes exhibit mutations consistent with Apolipoprotein B mRNA Editing Catalytic Polypeptide-like3 (APOBEC3)-mediated editing, compared to MPXV genomes collected earlier. It is unclear whether these single nucleotide polymorphisms (SNPs) result from APOBEC3-induced editing or are a consequence of genetic drift within one or more MPXV animal reservoirs. We develop a simple method based on a generalization of the General-Time-Reversible (GTR) model to show that the observed SNPs are likely the result of APOBEC3-induced editing. The statistical features allow us to extract lineage information and estimate evolutionary events.
[ { "created": "Mon, 7 Aug 2023 16:35:27 GMT", "version": "v1" } ]
2023-08-08
[ [ "Li", "Xiangting", "" ], [ "Habibipour", "Sara", "" ], [ "Chou", "Tom", "" ], [ "Yang", "Otto O.", "" ] ]
Recent studies show that newly sampled monkeypox virus (MPXV) genomes exhibit mutations consistent with Apolipoprotein B mRNA Editing Catalytic Polypeptide-like3 (APOBEC3)-mediated editing, compared to MPXV genomes collected earlier. It is unclear whether these single nucleotide polymorphisms (SNPs) result from APOBEC3-induced editing or are a consequence of genetic drift within one or more MPXV animal reservoirs. We develop a simple method based on a generalization of the General-Time-Reversible (GTR) model to show that the observed SNPs are likely the result of APOBEC3-induced editing. The statistical features allow us to extract lineage information and estimate evolutionary events.
1206.2524
Daniel Manzano
Daniel Manzano
Quantum transport in quantum networks and photosynthetic complexes at the steady state
10 pages, single column, 6 figures. Accepted for publication in Plos One
PLoS ONE 8(2): e57041 (2013)
10.1371/journal.pone.0057041
null
q-bio.BM cond-mat.other physics.bio-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, several works have analysed the efficiency of photosynthetic complexes in a transient scenario and how that efficiency is affected by environmental noise. Here, following a quantum master equation approach, we study the energy and excitation transport in fully connected networks both in general and in the particular case of the Fenna-Matthew-Olson complex. The analysis is carried out for the steady state of the system where the excitation energy is constantly "flowing" through the system. Steady state transport scenarios are particularly relevant if the evolution of the quantum system is not conditioned on the arrival of individual excitations. By adding dephasing to the system, we analyse the possibility of noise-enhancement of the quantum transport.
[ { "created": "Sat, 9 Jun 2012 19:29:17 GMT", "version": "v1" }, { "created": "Wed, 23 Jan 2013 09:44:16 GMT", "version": "v2" }, { "created": "Mon, 25 Feb 2013 09:49:35 GMT", "version": "v3" } ]
2013-02-28
[ [ "Manzano", "Daniel", "" ] ]
Recently, several works have analysed the efficiency of photosynthetic complexes in a transient scenario and how that efficiency is affected by environmental noise. Here, following a quantum master equation approach, we study the energy and excitation transport in fully connected networks both in general and in the particular case of the Fenna-Matthew-Olson complex. The analysis is carried out for the steady state of the system where the excitation energy is constantly "flowing" through the system. Steady state transport scenarios are particularly relevant if the evolution of the quantum system is not conditioned on the arrival of individual excitations. By adding dephasing to the system, we analyse the possibility of noise-enhancement of the quantum transport.
2311.00997
Mrinmoy Chakrabarty
Dolcy Dhar, Manasi Chaturvedi, Saanvi Sehwag, Chehak Malhotra, Udit, Chetan Saraf, Mrinmoy Chakrabarty
Gray matter volume correlates of Comorbid Depression in Autism Spectrum Disorder
33 pages, 3 figures, 3 tables, journal submission
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autism Spectrum Disorder (ASD) involves diverse neurodevelopmental syndromes with significant deficits in communication, motor behaviours, emotional and social comprehension. Often, individuals with ASD exhibit comorbid conditions, one of the most prevalent being depression characterized by a persistent change in mood and diminished interest in previously enjoyable activities. Due to communicative challenges and lack of appropriate assessments in individuals with ASD, comorbid depression can often go undiagnosed during routine clinical examinations, which may aggravate their problems. The current literature on comorbid depression in adults with ASD is limited. Therefore, understanding the neural basis of the comorbid psychopathology of depression in ASD is crucial for identifying objective brain-based markers for its timely and effective management. Towards this end, using structural MRI and phenotypic data from the Autism Brain Imaging Data Exchange II (ABIDE II) repository, we specifically examined the pattern of relationship regional grey matter volume (rGMV) has with comorbid depression and autism severity within regions of a priori interest in adults with ASD (n = 44). The severity of comorbid depression correlated negatively with the rGMV of the right thalamus. Additionally, a significant interaction was evident between the severity of comorbid depression and core ASD symptoms towards explaining the rGMV in the left cerebellum crus II. The whole-brain regional rGMV differences between ASD and typically developed (TD, n = 39) adults remained inconclusive. The results further the understanding of the neurobiological underpinnings of comorbid depression in adults with ASD and are relevant in exploring structural neuroimaging-based biomarkers in the same cohort.
[ { "created": "Thu, 2 Nov 2023 05:25:11 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 09:36:21 GMT", "version": "v2" } ]
2024-03-27
[ [ "Dhar", "Dolcy", "" ], [ "Chaturvedi", "Manasi", "" ], [ "Sehwag", "Saanvi", "" ], [ "Malhotra", "Chehak", "" ], [ "Udit", "", "" ], [ "Saraf", "Chetan", "" ], [ "Chakrabarty", "Mrinmoy", "" ] ]
Autism Spectrum Disorder (ASD) involves diverse neurodevelopmental syndromes with significant deficits in communication, motor behaviours, emotional and social comprehension. Often, individuals with ASD exhibit comorbid conditions, one of the most prevalent being depression characterized by a persistent change in mood and diminished interest in previously enjoyable activities. Due to communicative challenges and lack of appropriate assessments in individuals with ASD, comorbid depression can often go undiagnosed during routine clinical examinations, which may aggravate their problems. The current literature on comorbid depression in adults with ASD is limited. Therefore, understanding the neural basis of the comorbid psychopathology of depression in ASD is crucial for identifying objective brain-based markers for its timely and effective management. Towards this end, using structural MRI and phenotypic data from the Autism Brain Imaging Data Exchange II (ABIDE II) repository, we specifically examined the pattern of relationship regional grey matter volume (rGMV) has with comorbid depression and autism severity within regions of a priori interest in adults with ASD (n = 44). The severity of comorbid depression correlated negatively with the rGMV of the right thalamus. Additionally, a significant interaction was evident between the severity of comorbid depression and core ASD symptoms towards explaining the rGMV in the left cerebellum crus II. The whole-brain regional rGMV differences between ASD and typically developed (TD, n = 39) adults remained inconclusive. The results further the understanding of the neurobiological underpinnings of comorbid depression in adults with ASD and are relevant in exploring structural neuroimaging-based biomarkers in the same cohort.
1407.8508
Alessandro Filisetti Dr.
Roberto Serra, Alessandro Filisetti, Marco Villani, Alex Graudenzi, Chiara Damiani and Tommaso Panini
A stochastic model of catalytic reaction networks in protocells
20 pages, 5 figures
null
10.1007/s11047-014-9445-6
null
q-bio.MN cs.CE math.DS nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protocells are supposed to have played a key role in the self-organizing processes leading to the emergence of life. Existing models either (i) describe protocell architecture and dynamics, given the existence of sets of collectively self-replicating molecules for granted, or (ii) describe the emergence of the aforementioned sets from an ensemble of random molecules in a simple experimental setting (e.g. a closed system or a steady-state flow reactor) that does not properly describe a protocell. In this paper we present a model that goes beyond these limitations by describing the dynamics of sets of replicating molecules within a lipid vesicle. We adopt the simplest possible protocell architecture, by considering a semi-permeable membrane that selects the molecular types that are allowed to enter or exit the protocell and by assuming that the reactions take place in the aqueous phase in the internal compartment. As a first approximation, we ignore the protocell growth and division dynamics. The behavior of catalytic reaction networks is then simulated by means of a stochastic model that accounts for the creation and the extinction of species and reactions. While this is not yet an exhaustive protocell model, it already provides clues regarding some processes that are relevant for understanding the conditions that can enable a population of protocells to undergo evolution and selection.
[ { "created": "Wed, 30 Jul 2014 15:53:49 GMT", "version": "v1" } ]
2014-08-01
[ [ "Serra", "Roberto", "" ], [ "Filisetti", "Alessandro", "" ], [ "Villani", "Marco", "" ], [ "Graudenzi", "Alex", "" ], [ "Damiani", "Chiara", "" ], [ "Panini", "Tommaso", "" ] ]
Protocells are supposed to have played a key role in the self-organizing processes leading to the emergence of life. Existing models either (i) describe protocell architecture and dynamics, taking the existence of sets of collectively self-replicating molecules for granted, or (ii) describe the emergence of the aforementioned sets from an ensemble of random molecules in a simple experimental setting (e.g. a closed system or a steady-state flow reactor) that does not properly describe a protocell. In this paper we present a model that goes beyond these limitations by describing the dynamics of sets of replicating molecules within a lipid vesicle. We adopt the simplest possible protocell architecture, by considering a semi-permeable membrane that selects the molecular types that are allowed to enter or exit the protocell and by assuming that the reactions take place in the aqueous phase in the internal compartment. As a first approximation, we ignore the protocell growth and division dynamics. The behavior of catalytic reaction networks is then simulated by means of a stochastic model that accounts for the creation and the extinction of species and reactions. While this is not yet an exhaustive protocell model, it already provides clues regarding some processes that are relevant for understanding the conditions that can enable a population of protocells to undergo evolution and selection.
1709.03894
Renata Rychtarikova
Renata Rychtarikova, Georg Steiner, Gero Kramer, Michael B. Fischer and Dalibor Stys
Application of electromagnetic centroids to colocalization of fluorescing objects in tissue sections
24 pages, 8 figures. arXiv admin note: text overlap with arXiv:1612.04368
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Light microscopy as well as image acquisition and processing suffer from physical and technical prejudices which preclude a correct interpretation of biological observations which can be reflected in, e.g., medical and pharmacological praxis. Using the examples of a diffracting microbead and fluorescently labelled tissue, this article clarifies some ignored aspects of image build-up in the light microscope and introduces algorithms for maximal extraction of information from the 3D microscopic experiments. We provided a correct set-up of the microscope and we sought a voxel (3D pixel) called an electromagnetic centroid which localizes the information about the object. In diffraction imaging and light emission, this voxel shows a minimal intensity change in two consecutive optical cuts. This approach further enabled us to identify z-stack of a DAPI-stained tissue section where at least one object of a relevant fluorescent marker was in focus. The spatial corrections (overlaps) of the DAPI-labelled region with in-focus autofluorescent regions then enabled us to co-localize these three regions in the optimal way when considering physical laws and information theory. We demonstrate that superresolution down to the Nobelish level can be obtained from commonplace widefield bright-field and fluorescence microscopy and bring new perspectives on co-localization in fluorescent microscopy.
[ { "created": "Mon, 11 Sep 2017 10:33:36 GMT", "version": "v1" }, { "created": "Fri, 2 Nov 2018 19:01:39 GMT", "version": "v2" }, { "created": "Wed, 14 Aug 2019 13:18:48 GMT", "version": "v3" } ]
2019-08-15
[ [ "Rychtarikova", "Renata", "" ], [ "Steiner", "Georg", "" ], [ "Kramer", "Gero", "" ], [ "Fischer", "Michael B.", "" ], [ "Stys", "Dalibor", "" ] ]
Light microscopy as well as image acquisition and processing suffer from physical and technical prejudices which preclude a correct interpretation of biological observations which can be reflected in, e.g., medical and pharmacological praxis. Using the examples of a diffracting microbead and fluorescently labelled tissue, this article clarifies some ignored aspects of image build-up in the light microscope and introduces algorithms for maximal extraction of information from the 3D microscopic experiments. We provided a correct set-up of the microscope and we sought a voxel (3D pixel) called an electromagnetic centroid which localizes the information about the object. In diffraction imaging and light emission, this voxel shows a minimal intensity change in two consecutive optical cuts. This approach further enabled us to identify z-stack of a DAPI-stained tissue section where at least one object of a relevant fluorescent marker was in focus. The spatial corrections (overlaps) of the DAPI-labelled region with in-focus autofluorescent regions then enabled us to co-localize these three regions in the optimal way when considering physical laws and information theory. We demonstrate that superresolution down to the Nobelish level can be obtained from commonplace widefield bright-field and fluorescence microscopy and bring new perspectives on co-localization in fluorescent microscopy.
2106.06348
Saint-Clair Chabert-Liddell
Saint-Clair Chabert-Liddell, Pierre Barbillon, Sophie Donnet
Impact of the mesoscale structure of a bipartite ecological interaction network on its robustness through a probabilistic modeling
null
null
10.1002/env.2709
null
q-bio.PE stat.AP stat.ME
http://creativecommons.org/licenses/by-nc-nd/4.0/
The robustness of an ecological network quantifies the resilience of the ecosystem it represents to species loss. It corresponds to the proportion of species that are disconnected from the rest of the network when extinctions occur sequentially. Classically, the robustness is calculated for a given network, from the simulation of a large number of extinction sequences. The link between network structure and robustness remains an open question. Setting a joint probabilistic model on the network and the extinction sequences allows analysis of this relation. Bipartite stochastic block models have proven their ability to model bipartite networks e.g. plant-pollinator networks: species are divided into blocks and interaction probabilities are determined by the blocks of membership. Analytical expressions of the expectation and variance of robustness are obtained under this model, for different distributions of primary extinction sequences. The impact of the network structure on the robustness is analyzed through a set of properties and numerical illustrations. The analysis of a collection of bipartite ecological networks allows us to compare the empirical approach to our probabilistic approach, and illustrates the relevance of the latter when it comes to computing the robustness of a partially observed or incompletely sampled network.
[ { "created": "Fri, 11 Jun 2021 12:42:21 GMT", "version": "v1" }, { "created": "Mon, 1 Nov 2021 12:51:49 GMT", "version": "v2" } ]
2021-11-25
[ [ "Chabert-Liddell", "Saint-Clair", "" ], [ "Barbillon", "Pierre", "" ], [ "Donnet", "Sophie", "" ] ]
The robustness of an ecological network quantifies the resilience of the ecosystem it represents to species loss. It corresponds to the proportion of species that are disconnected from the rest of the network when extinctions occur sequentially. Classically, the robustness is calculated for a given network, from the simulation of a large number of extinction sequences. The link between network structure and robustness remains an open question. Setting a joint probabilistic model on the network and the extinction sequences allows analysis of this relation. Bipartite stochastic block models have proven their ability to model bipartite networks e.g. plant-pollinator networks: species are divided into blocks and interaction probabilities are determined by the blocks of membership. Analytical expressions of the expectation and variance of robustness are obtained under this model, for different distributions of primary extinction sequences. The impact of the network structure on the robustness is analyzed through a set of properties and numerical illustrations. The analysis of a collection of bipartite ecological networks allows us to compare the empirical approach to our probabilistic approach, and illustrates the relevance of the latter when it comes to computing the robustness of a partially observed or incompletely sampled network.
1409.6216
Vivek Shenoy
Abhilash Nair, Brendon M. Baker, Britta Trappmann, Christopher S. Chen and Vivek B. Shenoy
Remodeling of Fibrous Extracellular Matrices by Contractile Cells: Predictions from Discrete Fiber Network Simulations
Accepted for publication in the Biophysical Journal
null
null
null
q-bio.CB cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contractile forces exerted on the surrounding extracellular matrix (ECM) lead to the alignment and stretching of constituent fibers within the vicinity of cells. As a consequence, the matrix reorganizes to form thick bundles of aligned fibers that enable force transmission over distances larger than the size of the cells. Contractile force-mediated remodeling of ECM fibers has bearing on a number of physiologic and pathophysiologic phenomena. In this work, we present a computational model to capture cell-mediated remodeling within fibrous matrices using finite element based discrete fiber network simulations. The model is shown to accurately capture collagen alignment, heterogeneous deformations, and long-range force transmission observed experimentally. The zone of mechanical influence surrounding a single contractile cell and the interaction between two cells are predicted from the strain-induced alignment of fibers. Through parametric studies, the effect of cell contractility and cell shape anisotropy on matrix remodeling and force transmission are quantified and summarized in a phase diagram. For highly contractile and elongated cells, we find a sensing distance that is ten times the cell size, in agreement with experimental observations.
[ { "created": "Mon, 22 Sep 2014 15:56:28 GMT", "version": "v1" } ]
2014-09-23
[ [ "Nair", "Abhilash", "" ], [ "Baker", "Brendon M.", "" ], [ "Trappmann", "Britta", "" ], [ "Chen", "Christopher S.", "" ], [ "Shenoy", "Vivek B.", "" ] ]
Contractile forces exerted on the surrounding extracellular matrix (ECM) lead to the alignment and stretching of constituent fibers within the vicinity of cells. As a consequence, the matrix reorganizes to form thick bundles of aligned fibers that enable force transmission over distances larger than the size of the cells. Contractile force-mediated remodeling of ECM fibers has bearing on a number of physiologic and pathophysiologic phenomena. In this work, we present a computational model to capture cell-mediated remodeling within fibrous matrices using finite element based discrete fiber network simulations. The model is shown to accurately capture collagen alignment, heterogeneous deformations, and long-range force transmission observed experimentally. The zone of mechanical influence surrounding a single contractile cell and the interaction between two cells are predicted from the strain-induced alignment of fibers. Through parametric studies, the effect of cell contractility and cell shape anisotropy on matrix remodeling and force transmission are quantified and summarized in a phase diagram. For highly contractile and elongated cells, we find a sensing distance that is ten times the cell size, in agreement with experimental observations.
2002.05643
Stelios Mylonas
Stelios K. Mylonas (1), Apostolos Axenopoulos (1), Petros Daras (1) ((1) Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece)
DeepSurf: A surface-based deep learning approach for the prediction of ligand binding sites on proteins
null
null
10.1093/bioinformatics/btab009
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
The knowledge of potentially druggable binding sites on proteins is an important preliminary step towards the discovery of novel drugs. The computational prediction of such areas can be boosted by following the recent major advances in the deep learning field and by exploiting the increasing availability of proper data. In this paper, a novel computational method for the prediction of potential binding sites is proposed, called DeepSurf. DeepSurf combines a surface-based representation, where a number of 3D voxelized grids are placed on the protein's surface, with state-of-the-art deep learning architectures. After being trained on the large database of scPDB, DeepSurf demonstrates superior results on three diverse testing datasets, by surpassing all its main deep learning-based competitors, while attaining competitive performance to a set of traditional non-data-driven approaches.
[ { "created": "Thu, 13 Feb 2020 17:22:39 GMT", "version": "v1" }, { "created": "Tue, 16 Feb 2021 16:23:19 GMT", "version": "v2" } ]
2021-02-17
[ [ "Mylonas", "Stelios K.", "" ], [ "Axenopoulos", "Apostolos", "" ], [ "Daras", "Petros", "" ] ]
The knowledge of potentially druggable binding sites on proteins is an important preliminary step towards the discovery of novel drugs. The computational prediction of such areas can be boosted by following the recent major advances in the deep learning field and by exploiting the increasing availability of proper data. In this paper, a novel computational method for the prediction of potential binding sites is proposed, called DeepSurf. DeepSurf combines a surface-based representation, where a number of 3D voxelized grids are placed on the protein's surface, with state-of-the-art deep learning architectures. After being trained on the large database of scPDB, DeepSurf demonstrates superior results on three diverse testing datasets, by surpassing all its main deep learning-based competitors, while attaining competitive performance to a set of traditional non-data-driven approaches.
1311.1417
Nicolas Perony
Thomas O. Richardson, Nicolas Perony, Claudio J. Tessone, Christophe A.H. Bousquet, Marta B. Manser, Frank Schweitzer
Dynamical coupling during collective animal motion
20 pages, 4 figures. Supplemental information: 16 pages, 9 figures. This manuscript was originally submitted to PNAS on June 5, 2013
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The measurement of information flows within moving animal groups has recently been a topic of considerable interest, and it has become clear that the individual(s) that drive collective movement may change over time, and that such individuals may not necessarily always lead from the front. However, methods to quantify the influence of specific individuals on the behaviour of other group members, and the direction of information flow in moving groups, are lacking at the level of empirical studies and theoretical models. Using high spatio-temporal resolution GPS trajectories of foraging meerkats, Suricata suricatta, we provide an information-theoretic framework to identify dynamical coupling between animals independent of their relative spatial positions. Based on this identification, we then compare designations of individuals as either drivers or responders against designations provided by the relative spatial position. We find that not only does coupling occur both from the frontal to the trailing individuals and vice versa, but also that the coupling direction is a non-linear function of the relative position. This provides evidence for (i) intermittent fluctuation of the coupling strength and (ii) alternation in the coupling direction within foraging meerkat pairs. The framework we introduce allows for a detailed description of the dynamical patterns of mutual influence between all pairs of individuals within moving animal groups. We argue that applying an information-theoretic perspective to the study of coordinated phenomena in animal groups will eventually help to understand cause and effect in collective behaviour.
[ { "created": "Wed, 6 Nov 2013 15:15:27 GMT", "version": "v1" } ]
2013-11-07
[ [ "Richardson", "Thomas O.", "" ], [ "Perony", "Nicolas", "" ], [ "Tessone", "Claudio J.", "" ], [ "Bousquet", "Christophe A. H.", "" ], [ "Manser", "Marta B.", "" ], [ "Schweitzer", "Frank", "" ] ]
The measurement of information flows within moving animal groups has recently been a topic of considerable interest, and it has become clear that the individual(s) that drive collective movement may change over time, and that such individuals may not necessarily always lead from the front. However, methods to quantify the influence of specific individuals on the behaviour of other group members, and the direction of information flow in moving groups, are lacking at the level of empirical studies and theoretical models. Using high spatio-temporal resolution GPS trajectories of foraging meerkats, Suricata suricatta, we provide an information-theoretic framework to identify dynamical coupling between animals independent of their relative spatial positions. Based on this identification, we then compare designations of individuals as either drivers or responders against designations provided by the relative spatial position. We find that not only does coupling occur both from the frontal to the trailing individuals and vice versa, but also that the coupling direction is a non-linear function of the relative position. This provides evidence for (i) intermittent fluctuation of the coupling strength and (ii) alternation in the coupling direction within foraging meerkat pairs. The framework we introduce allows for a detailed description of the dynamical patterns of mutual influence between all pairs of individuals within moving animal groups. We argue that applying an information-theoretic perspective to the study of coordinated phenomena in animal groups will eventually help to understand cause and effect in collective behaviour.
1010.4618
Olivier Garet
Olivier Garet (IECL), R\'egine Marchand (IECL)
Growth of a population of bacteria in a dynamical hostile environment
31 pages, 1 figure
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the growth of a population of bacteria in a dynamical hostile environment corresponding to the immune system of the colonised organism. The immune cells evolve as subcritical open clusters of oriented percolation and are perpetually reinforced by an immigration process, while the bacteria try to grow as a supercritical oriented percolation in the remaining empty space. For appropriate values of the parameters, we prove that the population of bacteria grows linearly. In this perspective, we build general tools to study dependent percolation models arising from renormalization processes.
[ { "created": "Fri, 22 Oct 2010 06:22:59 GMT", "version": "v1" }, { "created": "Tue, 14 Feb 2012 08:50:41 GMT", "version": "v2" }, { "created": "Fri, 4 Oct 2013 13:56:21 GMT", "version": "v3" } ]
2013-10-07
[ [ "Garet", "Olivier", "", "IECL" ], [ "Marchand", "Régine", "", "IECL" ] ]
We study the growth of a population of bacteria in a dynamical hostile environment corresponding to the immune system of the colonised organism. The immune cells evolve as subcritical open clusters of oriented percolation and are perpetually reinforced by an immigration process, while the bacteria try to grow as a supercritical oriented percolation in the remaining empty space. For appropriate values of the parameters, we prove that the population of bacteria grows linearly. In this perspective, we build general tools to study dependent percolation models arising from renormalization processes.
1811.11040
Jos\'e Halloy
Leo Cazenille, Nicolas Bredeche, Jos\'e Halloy
Modelling zebrafish collective behaviours with multilayer perceptrons optimised by evolutionary algorithms
22 pages, 11 figures, 1 table. arXiv admin note: text overlap with arXiv:1808.03166
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Collective movements are pervasive behaviours among social organisms and have led to the development of many models. However, modelling animal trajectories and social interactions in simple bounded environments remains a challenge. Moreover, advances in the understanding of the sensory-motor loop and the information processing by animals are leading to revisions of the traditional assumptions made in decision-making algorithms. In this context, we develop a methodology based on artificial neural networks (ANN) to describe the collective motion of small zebrafish groups in a bounded environment. Although ANN models are commonly used in artificial systems they are still under-explored to model animal collective behaviours. Here, we present a methodology to calibrate Multilayer Perceptrons by learning from real fish experimental data. The ANNs are trained using either supervised learning or various forms of evolutionary reinforcement learning methods (using the CMA-ES and NSGA-III algorithms). We reveal that ANN models trained using evolutionary methods are capable of generating realistic collective motions for groups of 5 zebrafish including the tank wall effects, a feature that is lacking in previous models. Finally, we also discuss the benefits of optimised ANNs as candidates for driving a robotic lure with biologically realistic behaviour, a method that is becoming increasingly popular to gather data and validate assumptions on collective behaviours.
[ { "created": "Mon, 26 Nov 2018 15:31:21 GMT", "version": "v1" } ]
2018-11-28
[ [ "Cazenille", "Leo", "" ], [ "Bredeche", "Nicolas", "" ], [ "Halloy", "José", "" ] ]
Collective movements are pervasive behaviours among social organisms and have led to the development of many models. However, modelling animal trajectories and social interactions in simple bounded environments remains a challenge. Moreover, advances in the understanding of the sensory-motor loop and the information processing by animals are leading to revisions of the traditional assumptions made in decision-making algorithms. In this context, we develop a methodology based on artificial neural networks (ANN) to describe the collective motion of small zebrafish groups in a bounded environment. Although ANN models are commonly used in artificial systems they are still under-explored to model animal collective behaviours. Here, we present a methodology to calibrate Multilayer Perceptrons by learning from real fish experimental data. The ANNs are trained using either supervised learning or various forms of evolutionary reinforcement learning methods (using the CMA-ES and NSGA-III algorithms). We reveal that ANN models trained using evolutionary methods are capable of generating realistic collective motions for groups of 5 zebrafish including the tank wall effects, a feature that is lacking in previous models. Finally, we also discuss the benefits of optimised ANNs as candidates for driving a robotic lure with biologically realistic behaviour, a method that is becoming increasingly popular to gather data and validate assumptions on collective behaviours.
2005.08040
Greta Simionato
G. Simionato, K. Hinkelmann, R. Chachanidze, P. Bianchi, E. Fermo, R. van Wijk, M. Leonetti, C. Wagner, L. Kaestner, S. Quint
Artificial neural networks for 3D cell shape recognition from confocal images
17 pages, 8 figures
null
null
null
q-bio.QM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a dual-stage neural network architecture for analyzing fine shape details from microscopy recordings in 3D. The system, tested on red blood cells, uses training data from both healthy donors and patients with a congenital blood disease. Characteristic shape features are revealed from the spherical harmonics spectrum of each cell and are automatically processed to create a reproducible and unbiased shape recognition and classification for diagnostic and theragnostic use.
[ { "created": "Sat, 16 May 2020 16:52:55 GMT", "version": "v1" }, { "created": "Thu, 28 May 2020 16:56:04 GMT", "version": "v2" } ]
2020-05-29
[ [ "Simionato", "G.", "" ], [ "Hinkelmann", "K.", "" ], [ "Chachanidze", "R.", "" ], [ "Bianchi", "P.", "" ], [ "Fermo", "E.", "" ], [ "van Wijk", "R.", "" ], [ "Leonetti", "M.", "" ], [ "Wagner", "C.", "" ], [ "Kaestner", "L.", "" ], [ "Quint", "S.", "" ] ]
We present a dual-stage neural network architecture for analyzing fine shape details from microscopy recordings in 3D. The system, tested on red blood cells, uses training data from both healthy donors and patients with a congenital blood disease. Characteristic shape features are revealed from the spherical harmonics spectrum of each cell and are automatically processed to create a reproducible and unbiased shape recognition and classification for diagnostic and theragnostic use.
1605.04463
Carina Curto
Katherine Morrison, Anda Degeratu, Vladimir Itskov, and Carina Curto
Diversity of emergent dynamics in competitive threshold-linear networks
27 pages, 15 figures. Various revisions to the text; results are unchanged
null
null
null
q-bio.NC nlin.AO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Threshold-linear networks consist of simple units interacting in the presence of a threshold nonlinearity. Competitive threshold-linear networks have long been known to exhibit multistability, where the activity of the network settles into one of potentially many steady states. In this work, we find conditions that guarantee the absence of steady states, while maintaining bounded activity. These conditions lead us to define a combinatorial family of competitive threshold-linear networks, parametrized by a simple directed graph. By exploring this family, we discover that threshold-linear networks are capable of displaying a surprisingly rich variety of nonlinear dynamics, including limit cycles, quasiperiodic attractors, and chaos. In particular, several types of nonlinear behaviors can co-exist in the same network. Our mathematical results also enable us to engineer networks with multiple dynamic patterns. Taken together, these theoretical and computational findings suggest that threshold-linear networks may be a valuable tool for understanding the relationship between network connectivity and emergent dynamics. The new Matlab package CTLN Basic 2.0 can be used to reproduce the simulations in Figures 1-13. The package is available at https://github.com/nebneuron/CTLN-Basic-2.0
[ { "created": "Sat, 14 May 2016 20:10:00 GMT", "version": "v1" }, { "created": "Mon, 12 Sep 2022 17:54:13 GMT", "version": "v2" }, { "created": "Sat, 15 Oct 2022 23:07:09 GMT", "version": "v3" }, { "created": "Sat, 14 Oct 2023 16:01:29 GMT", "version": "v4" } ]
2023-10-17
[ [ "Morrison", "Katherine", "" ], [ "Degeratu", "Anda", "" ], [ "Itskov", "Vladimir", "" ], [ "Curto", "Carina", "" ] ]
Threshold-linear networks consist of simple units interacting in the presence of a threshold nonlinearity. Competitive threshold-linear networks have long been known to exhibit multistability, where the activity of the network settles into one of potentially many steady states. In this work, we find conditions that guarantee the absence of steady states, while maintaining bounded activity. These conditions lead us to define a combinatorial family of competitive threshold-linear networks, parametrized by a simple directed graph. By exploring this family, we discover that threshold-linear networks are capable of displaying a surprisingly rich variety of nonlinear dynamics, including limit cycles, quasiperiodic attractors, and chaos. In particular, several types of nonlinear behaviors can co-exist in the same network. Our mathematical results also enable us to engineer networks with multiple dynamic patterns. Taken together, these theoretical and computational findings suggest that threshold-linear networks may be a valuable tool for understanding the relationship between network connectivity and emergent dynamics. The new Matlab package CTLN Basic 2.0 can be used to reproduce the simulations in Figures 1-13. The package is available at https://github.com/nebneuron/CTLN-Basic-2.0
1912.04403
Miguel Vizcardo Mr.
Miguel Vizcardo, Antonio Ravelo, Pedro Gomis
Analysis of dysautonomia in patients with Chagas Cardiomyopathy
6 pages, 6 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chagas disease (American trypanosomiasis) is caused by a flagellated parasite, Trypanosoma cruzi, transmitted by an insect of the genus Triatoma and also by blood transfusions. In Latin America the number of infected people is approximately 6 million, with a population exposed to the risk of infection of 550000. Our interest is to develop a non-invasive, low-cost methodology capable of early detection of any cardiac alteration produced by T. cruzi. We analyzed the 24 hour RR records in patients with ECG abnormalities (CH2), patients without ECG alterations (CH1) who had positive serological findings for Chagas disease, and healthy controls (Control) matched by sex and age. We found significant differences between the Control, CH1 and CH2 groups that show dysautonomia and denervation of the autonomic nervous system.
[ { "created": "Fri, 6 Dec 2019 00:55:30 GMT", "version": "v1" } ]
2019-12-11
[ [ "Vizcardo", "Miguel", "" ], [ "Ravelo", "Antonio", "" ], [ "Gomis", "Pedro", "" ] ]
Chagas disease (American trypanosomiasis) is caused by a flagellated parasite, Trypanosoma cruzi, transmitted by an insect of the genus Triatoma and also by blood transfusions. In Latin America the number of infected people is approximately 6 million, with a population exposed to the risk of infection of 550000. Our interest is to develop a non-invasive, low-cost methodology capable of early detection of any cardiac alteration produced by T. cruzi. We analyzed the 24 hour RR records in patients with ECG abnormalities (CH2), patients without ECG alterations (CH1) who had positive serological findings for Chagas disease, and healthy controls (Control) matched by sex and age. We found significant differences between the Control, CH1 and CH2 groups that show dysautonomia and denervation of the autonomic nervous system.
2212.05089
Ricardo Ugarte
Ricardo Ugarte
FMO Study of the Interaction Energy between Human Estrogen Receptor $\alpha$ and Selected Ligands
15 pages, 9 figures. arXiv admin note: text overlap with arXiv:2012.10822
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Fragment molecular orbital (FMO) calculations were performed in aqueous media which allowed us to obtain the interaction energy between the human estrogen receptor $\alpha$ ligand-binding domain (ER) and the selected ligands (L): 17$\beta$-estradiol (E2), 17$\alpha$-estradiol (17$\alpha$-E2), estriol (E3), genistein (GNT), diethylstilbestrol (DES), bisphenol A (BPA), bisphenol AF (BPAF), hydroxychlor (HPTE) and methoxychlor (DMDT). These calculations were carried out on representative structures of L-ER complexes obtained from molecular dynamics simulations. The MP2/6-31G(d) L-ER FMO interaction energy in kcal/mol is as follows: E3 (-100.1) < GNT (-95.8) < E2 (-88.5) < BPA (-84.7) < DES (-82.6) < BPAF (-80.6) < 17$\alpha$-E2 (-78.7) < HPTE (-75.9) < DMDT (-46.3). The central hydrophobic core of the ligands interacts attractively with several apolar amino acid residues of ER. Glu 353 and His 524 interact strongly with most ligands through a hydrogen bond with the hydroxyl group of the phenol A-ring and the terminal hydroxylated ring, respectively. Water molecules were found at the binding site of the receptor. In our model systems we have demonstrated what is generally observed in ligand-receptor complexes: the steric and chemical complementarity of the groups on the ligand and binding site surfaces.
[ { "created": "Fri, 9 Dec 2022 19:18:28 GMT", "version": "v1" } ]
2022-12-13
[ [ "Ugarte", "Ricardo", "" ] ]
Fragment molecular orbital (FMO) calculations were performed in aqueous media which allowed us to obtain the interaction energy between the human estrogen receptor $\alpha$ ligand-binding domain (ER) and the selected ligands (L): 17$\beta$-estradiol (E2), 17$\alpha$-estradiol (17$\alpha$-E2), estriol (E3), genistein (GNT), diethylstilbestrol (DES), bisphenol A (BPA), bisphenol AF (BPAF), hydroxychlor (HPTE) and methoxychlor (DMDT). These calculations were carried out on representative structures of L-ER complexes obtained from molecular dynamics simulations. The MP2/6-31G(d) L-ER FMO interaction energy in kcal/mol is as follows: E3 (-100.1) < GNT (-95.8) < E2 (-88.5) < BPA (-84.7) < DES (-82.6) < BPAF (-80.6) < 17$\alpha$-E2 (-78.7) < HPTE (-75.9) < DMDT (-46.3). The central hydrophobic core of the ligands interacts attractively with several apolar amino acid residues of ER. Glu 353 and His 524 interact strongly with most ligands through a hydrogen bond with the hydroxyl group of the phenol A-ring and the terminal hydroxylated ring, respectively. Water molecules were found at the binding site of the receptor. In our model systems we have demonstrated what is generally observed in ligand-receptor complexes: the steric and chemical complementarity of the groups on the ligand and binding site surfaces.
2306.00873
Joanna Masel
Joanna Masel, James Petrie, Jason Bay, Wolfgang Ebbers, Aalekh Sharan, Scott Leibrand, Andreas Gebhard, Samuel Zimmerman
Digital contact tracing/notification for SARS-CoV-2: navigating six points of failure
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Digital contact tracing/notification was initially hailed as a promising strategy to combat SARS-CoV-2, but in most jurisdictions it did not live up to its promise. To avert a given transmission event, both parties must have adopted the tech, it must detect the contact, the primary case must be promptly diagnosed, notifications must be triggered, and the secondary case must change their behavior to avoid the focal tertiary transmission event. If we approximate these as independent events, achieving a 26% reduction in R(t) would require 80% success rates at each of these six points of failure. Here we review the six failure rates experienced by a variety of digital contact tracing/notification schemes, including Singapore's TraceTogether, India's Aarogya Setu, and leading implementations of the Google Apple Exposure Notification system. This leads to a number of recommendations, e.g. that tracing/notification apps be multi-functional and integrated with testing, manual contact tracing, and the gathering of critical scientific data, and that the narrative be framed in terms of user autonomy rather than user privacy.
[ { "created": "Thu, 1 Jun 2023 16:34:45 GMT", "version": "v1" }, { "created": "Fri, 7 Jul 2023 01:34:39 GMT", "version": "v2" } ]
2023-07-10
[ [ "Masel", "Joanna", "" ], [ "Petrie", "James", "" ], [ "Bay", "Jason", "" ], [ "Ebbers", "Wolfgang", "" ], [ "Sharan", "Aalekh", "" ], [ "Leibrand", "Scott", "" ], [ "Gebhard", "Andreas", "" ], [ "Zimmerman", "Samuel", "" ] ]
Digital contact tracing/notification was initially hailed as a promising strategy to combat SARS-CoV-2, but in most jurisdictions it did not live up to its promise. To avert a given transmission event, both parties must have adopted the tech, it must detect the contact, the primary case must be promptly diagnosed, notifications must be triggered, and the secondary case must change their behavior to avoid the focal tertiary transmission event. If we approximate these as independent events, achieving a 26% reduction in R(t) would require 80% success rates at each of these six points of failure. Here we review the six failure rates experienced by a variety of digital contact tracing/notification schemes, including Singapore's TraceTogether, India's Aarogya Setu, and leading implementations of the Google Apple Exposure Notification system. This leads to a number of recommendations, e.g. that tracing/notification apps be multi-functional and integrated with testing, manual contact tracing, and the gathering of critical scientific data, and that the narrative be framed in terms of user autonomy rather than user privacy.
1712.06666
Weishi Liu
Liwei Zhang, Bob Eisenberg, Weishi Liu
An effect of large permanent charge: Decreasing flux to zero with increasing transmembrane potential to infinity
27 pages, 13 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we examine effects of large permanent charges on ionic flow through ion channels based on a quasi-one dimensional Poisson-Nernst-Planck model. It turns out large positive permanent charges inhibit the flux of cation as expected, but strikingly, as the transmembrane electrochemical potential for anion increases in a particular way, the flux of anion decreases. The latter phenomenon was observed experimentally but the cause seemed to be unclear. The mechanisms for these phenomena are examined with the help of the profiles of the ionic concentrations, electric fields and electrochemical potentials. The underlying reasons for the near zero flux of cation and for the decreasing flux of anion are shown to be different over different regions of the permanent charge. Our model is oversimplified. More structural detail and more correlations between ions can and should be included. But the basic finding seems striking and important and deserving of further investigation.
[ { "created": "Tue, 12 Dec 2017 16:45:45 GMT", "version": "v1" } ]
2017-12-20
[ [ "Zhang", "Liwei", "" ], [ "Eisenberg", "Bob", "" ], [ "Liu", "Weishi", "" ] ]
In this work, we examine effects of large permanent charges on ionic flow through ion channels based on a quasi-one dimensional Poisson-Nernst-Planck model. It turns out large positive permanent charges inhibit the flux of cation as expected, but strikingly, as the transmembrane electrochemical potential for anion increases in a particular way, the flux of anion decreases. The latter phenomenon was observed experimentally but the cause seemed to be unclear. The mechanisms for these phenomena are examined with the help of the profiles of the ionic concentrations, electric fields and electrochemical potentials. The underlying reasons for the near zero flux of cation and for the decreasing flux of anion are shown to be different over different regions of the permanent charge. Our model is oversimplified. More structural detail and more correlations between ions can and should be included. But the basic finding seems striking and important and deserving of further investigation.
2103.10012
Xianhao Chen
Xianhao Chen, Guangyu Zhu, Lan Zhang, Yuguang Fang, Linke Guo, and Xinguang Chen
Age-Stratified COVID-19 Spread Analysis and Vaccination: A Multitype Random Network Approach
11 pages, 9 figures
null
10.1109/TNSE.2021.3075222
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The risk for severe illness and mortality from COVID-19 significantly increases with age. As a result, age-stratified modeling for COVID-19 dynamics is the key to study how to reduce hospitalizations and mortality from COVID-19. By taking advantage of network theory, we develop an age-stratified epidemic model for COVID-19 in complex contact networks. Specifically, we present an extension of the standard SEIR (susceptible-exposed-infectious-removed) compartmental model, called the age-stratified SEAHIR (susceptible-exposed-asymptomatic-hospitalized-infectious-removed) model, to capture the spread of COVID-19 over multitype random networks with general degree distributions. We derive several key epidemiological metrics and then propose an age-stratified vaccination strategy to decrease the mortality and hospitalizations. Through extensive study, we discover that the outcome of vaccination prioritization depends on the reproduction number R0. Specifically, the elderly should be prioritized only when R0 is relatively high. If ongoing intervention policies, such as universal masking, could suppress R0 at a relatively low level, prioritizing the high-transmission age group (i.e., adults aged 20-39) is most effective to reduce both mortality and hospitalizations. These conclusions provide useful recommendations for age-based vaccination prioritization for COVID-19.
[ { "created": "Thu, 18 Mar 2021 04:21:19 GMT", "version": "v1" } ]
2021-05-12
[ [ "Chen", "Xianhao", "" ], [ "Zhu", "Guangyu", "" ], [ "Zhang", "Lan", "" ], [ "Fang", "Yuguang", "" ], [ "Guo", "Linke", "" ], [ "Chen", "Xinguang", "" ] ]
The risk for severe illness and mortality from COVID-19 significantly increases with age. As a result, age-stratified modeling for COVID-19 dynamics is the key to study how to reduce hospitalizations and mortality from COVID-19. By taking advantage of network theory, we develop an age-stratified epidemic model for COVID-19 in complex contact networks. Specifically, we present an extension of the standard SEIR (susceptible-exposed-infectious-removed) compartmental model, called the age-stratified SEAHIR (susceptible-exposed-asymptomatic-hospitalized-infectious-removed) model, to capture the spread of COVID-19 over multitype random networks with general degree distributions. We derive several key epidemiological metrics and then propose an age-stratified vaccination strategy to decrease the mortality and hospitalizations. Through extensive study, we discover that the outcome of vaccination prioritization depends on the reproduction number R0. Specifically, the elderly should be prioritized only when R0 is relatively high. If ongoing intervention policies, such as universal masking, could suppress R0 at a relatively low level, prioritizing the high-transmission age group (i.e., adults aged 20-39) is most effective to reduce both mortality and hospitalizations. These conclusions provide useful recommendations for age-based vaccination prioritization for COVID-19.
1008.3961
Yulin Chang
Yulin V. Chang
Toward A Quantitative Understanding of Gas Exchange in the Lung
9 pages, 3 figures
null
null
null
q-bio.QM physics.med-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we present a mathematical framework that quantifies the gas-exchange processes in the lung. The theory is based on the solution of the one-dimensional diffusion equation on a simplified model of lung septum. Gases dissolved into different compartments of the lung are all treated separately with physiologically important parameters. The model can be applied in magnetic resonance of hyperpolarized xenon for quantification of lung parameters such as surface-to-volume ratio and the air-blood barrier thickness. In general this model provides a description of a broad range of biological exchange processes that are driven by diffusion.
[ { "created": "Tue, 24 Aug 2010 04:35:05 GMT", "version": "v1" } ]
2010-08-25
[ [ "Chang", "Yulin V.", "" ] ]
In this work we present a mathematical framework that quantifies the gas-exchange processes in the lung. The theory is based on the solution of the one-dimensional diffusion equation on a simplified model of lung septum. Gases dissolved into different compartments of the lung are all treated separately with physiologically important parameters. The model can be applied in magnetic resonance of hyperpolarized xenon for quantification of lung parameters such as surface-to-volume ratio and the air-blood barrier thickness. In general this model provides a description of a broad range of biological exchange processes that are driven by diffusion.
2302.11669
Sebastian Wild
Evarista Onokpasa and Sebastian Wild and Prudence W. H. Wong
RNA secondary structures: from ab initio prediction to better compression, and back
paper at Data Compression Conference 2023
null
null
null
q-bio.BM cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we use the biological domain knowledge incorporated into stochastic models for ab initio RNA secondary-structure prediction to improve the state of the art in joint compression of RNA sequence and structure data (Liu et al., BMC Bioinformatics, 2008). Moreover, we show that, conversely, compression ratio can serve as a cheap and robust proxy for comparing the prediction quality of different stochastic models, which may help guide the search for better RNA structure prediction models. Our results build on expert stochastic context-free grammar models of RNA secondary structures (Dowell & Eddy, BMC Bioinformatics, 2004; Nebel & Scheid, Theory in Biosciences, 2011) combined with different (static and adaptive) models for rule probabilities and arithmetic coding. We provide a prototype implementation and an extensive empirical evaluation, where we illustrate how grammar features and probability models affect compression ratios.
[ { "created": "Wed, 22 Feb 2023 21:45:33 GMT", "version": "v1" } ]
2023-02-24
[ [ "Onokpasa", "Evarista", "" ], [ "Wild", "Sebastian", "" ], [ "Wong", "Prudence W. H.", "" ] ]
In this paper, we use the biological domain knowledge incorporated into stochastic models for ab initio RNA secondary-structure prediction to improve the state of the art in joint compression of RNA sequence and structure data (Liu et al., BMC Bioinformatics, 2008). Moreover, we show that, conversely, compression ratio can serve as a cheap and robust proxy for comparing the prediction quality of different stochastic models, which may help guide the search for better RNA structure prediction models. Our results build on expert stochastic context-free grammar models of RNA secondary structures (Dowell & Eddy, BMC Bioinformatics, 2004; Nebel & Scheid, Theory in Biosciences, 2011) combined with different (static and adaptive) models for rule probabilities and arithmetic coding. We provide a prototype implementation and an extensive empirical evaluation, where we illustrate how grammar features and probability models affect compression ratios.
1703.03449
Roc\'io Espada
Roc\'io Espada, R. Gonzalo Parra, Thierry Mora, Aleksandra M. Walczak, Diego U. Ferreiro
Inferring repeat protein energetics from evolutionary information
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural protein sequences contain a record of their history. A common constraint in a given protein family is the ability to fold to specific structures, and it has been shown possible to infer the main native ensemble by analyzing covariations in extant sequences. Still, many natural proteins that fold into the same structural topology show different stabilization energies, and these are often related to their physiological behavior. We propose a description for the energetic variation given by sequence modifications in repeat proteins, systems for which the overall problem is simplified by their inherent symmetry. We explicitly account for single amino acid and pair-wise interactions and treat higher order correlations with a single term. We show that the resulting force field can be interpreted with structural detail. We trace the variations in the energetic scores of natural proteins and relate them to their experimental characterization. The resulting energetic force field allows the prediction of the folding free energy change for several mutants, and can be used to generate synthetic sequences that are statistically indistinguishable from the natural counterparts.
[ { "created": "Thu, 9 Mar 2017 20:11:23 GMT", "version": "v1" }, { "created": "Wed, 15 Mar 2017 17:09:10 GMT", "version": "v2" } ]
2017-03-16
[ [ "Espada", "Rocío", "" ], [ "Parra", "R. Gonzalo", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Ferreiro", "Diego U.", "" ] ]
Natural protein sequences contain a record of their history. A common constraint in a given protein family is the ability to fold to specific structures, and it has been shown possible to infer the main native ensemble by analyzing covariations in extant sequences. Still, many natural proteins that fold into the same structural topology show different stabilization energies, and these are often related to their physiological behavior. We propose a description for the energetic variation given by sequence modifications in repeat proteins, systems for which the overall problem is simplified by their inherent symmetry. We explicitly account for single amino acid and pair-wise interactions and treat higher order correlations with a single term. We show that the resulting force field can be interpreted with structural detail. We trace the variations in the energetic scores of natural proteins and relate them to their experimental characterization. The resulting energetic force field allows the prediction of the folding free energy change for several mutants, and can be used to generate synthetic sequences that are statistically indistinguishable from the natural counterparts.
1907.00263
Emily Toomey
Emily Toomey, Ken Segall, and Karl K. Berggren
A Power Efficient Artificial Neuron Using Superconducting Nanowires
12 figures, 1 table
null
null
null
q-bio.NC cond-mat.supr-con cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rising societal demand for more information-processing capacity with lower power consumption, alternative architectures inspired by the parallelism and robustness of the human brain have recently emerged as possible solutions. In particular, spiking neural networks (SNNs) offer a bio-realistic approach, relying on pulses analogous to action potentials as units of information. While software encoded networks provide flexibility and precision, they are often computationally expensive. As a result, hardware SNNs based on the spiking dynamics of a device or circuit represent an increasingly appealing direction. Here, we propose to use superconducting nanowires as a platform for the development of an artificial neuron. Building on an architecture first proposed for Josephson junctions, we rely on the intrinsic nonlinearity of two coupled nanowires to generate spiking behavior, and use electrothermal circuit simulations to demonstrate that the nanowire neuron reproduces multiple characteristics of biological neurons. Furthermore, by harnessing the nonlinearity of the superconducting nanowire's inductance, we develop a design for a variable inductive synapse capable of both excitatory and inhibitory control. We demonstrate that this synapse design supports direct fanout, a feature that has been difficult to achieve in other superconducting architectures, and that the nanowire neuron's nominal energy performance is competitive with that of current technologies.
[ { "created": "Sat, 29 Jun 2019 19:28:25 GMT", "version": "v1" } ]
2019-07-02
[ [ "Toomey", "Emily", "" ], [ "Segall", "Ken", "" ], [ "Berggren", "Karl K.", "" ] ]
With the rising societal demand for more information-processing capacity with lower power consumption, alternative architectures inspired by the parallelism and robustness of the human brain have recently emerged as possible solutions. In particular, spiking neural networks (SNNs) offer a bio-realistic approach, relying on pulses analogous to action potentials as units of information. While software encoded networks provide flexibility and precision, they are often computationally expensive. As a result, hardware SNNs based on the spiking dynamics of a device or circuit represent an increasingly appealing direction. Here, we propose to use superconducting nanowires as a platform for the development of an artificial neuron. Building on an architecture first proposed for Josephson junctions, we rely on the intrinsic nonlinearity of two coupled nanowires to generate spiking behavior, and use electrothermal circuit simulations to demonstrate that the nanowire neuron reproduces multiple characteristics of biological neurons. Furthermore, by harnessing the nonlinearity of the superconducting nanowire's inductance, we develop a design for a variable inductive synapse capable of both excitatory and inhibitory control. We demonstrate that this synapse design supports direct fanout, a feature that has been difficult to achieve in other superconducting architectures, and that the nanowire neuron's nominal energy performance is competitive with that of current technologies.
q-bio/0609012
Kei Tokita
Haruyuki Irie and Kei Tokita
Species-area relationship for power-law species abundance distribution
8 pages, 1 figure
null
null
null
q-bio.PE
null
We studied the mathematical relations between species abundance distributions (SADs) and species-area relationships (SARs) and found that a power-law SAR can be generally derived from a power-law SAD without a special assumption such as the ``canonical hypothesis''. In the present analysis, an SAR-exponent is obtained as a function of an SAD-exponent for a finite number of species. We also studied the inverse problem, from SARs to SADs, and found that a power-SAD can be derived from a power-SAR under the condition that the functional form of the corresponding SAD is invariant for changes in the number of species. We also discuss general relationships among lognormal SADs, the broken-stick model (exponential SADs), linear SARs and logarithmic SARs. These results suggest the existence of a common mechanism for SADs and SARs, which could prove a useful tool for theoretical and experimental studies on biodiversity and species coexistence.
[ { "created": "Fri, 8 Sep 2006 09:26:31 GMT", "version": "v1" }, { "created": "Sat, 9 Sep 2006 03:48:17 GMT", "version": "v2" }, { "created": "Mon, 25 Sep 2006 00:16:53 GMT", "version": "v3" } ]
2007-05-23
[ [ "Irie", "Haruyuki", "" ], [ "Tokita", "Kei", "" ] ]
We studied the mathematical relations between species abundance distributions (SADs) and species-area relationships (SARs) and found that a power-law SAR can be generally derived from a power-law SAD without a special assumption such as the ``canonical hypothesis''. In the present analysis, an SAR-exponent is obtained as a function of an SAD-exponent for a finite number of species. We also studied the inverse problem, from SARs to SADs, and found that a power-SAD can be derived from a power-SAR under the condition that the functional form of the corresponding SAD is invariant for changes in the number of species. We also discuss general relationships among lognormal SADs, the broken-stick model (exponential SADs), linear SARs and logarithmic SARs. These results suggest the existence of a common mechanism for SADs and SARs, which could prove a useful tool for theoretical and experimental studies on biodiversity and species coexistence.
2011.06387
Nisheet Patel
Nisheet Patel, Luigi Acerbi, Alexandre Pouget
Dynamic allocation of limited memory resources in reinforcement learning
In Advances in Neural Information Processing Systems 33 (NeurIPS 2020). [16 pages: 9 main + 3 references + 4 supplementary; 4 figures: 3 main + 1 supplementary]
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Biological brains are inherently limited in their capacity to process and store information, but are nevertheless capable of solving complex tasks with apparent ease. Intelligent behavior is related to these limitations, since resource constraints drive the need to generalize and assign importance differentially to features in the environment or memories of past experiences. Recently, there have been parallel efforts in reinforcement learning and neuroscience to understand strategies adopted by artificial and biological agents to circumvent limitations in information storage. However, the two threads have been largely separate. In this article, we propose a dynamical framework to maximize expected reward under constraints of limited resources, which we implement with a cost function that penalizes precise representations of action-values in memory, each of which may vary in its precision. We derive from first principles an algorithm, Dynamic Resource Allocator (DRA), which we apply to two standard tasks in reinforcement learning and a model-based planning task, and find that it allocates more resources to items in memory that have a higher impact on cumulative rewards. Moreover, DRA learns faster when starting with a higher resource budget than what it eventually allocates for performing well on tasks, which may explain why frontal cortical areas in biological brains appear more engaged in early stages of learning before settling to lower asymptotic levels of activity. Our work provides a normative solution to the problem of learning how to allocate costly resources to a collection of uncertain memories in a manner that is capable of adapting to changes in the environment.
[ { "created": "Thu, 12 Nov 2020 13:58:07 GMT", "version": "v1" }, { "created": "Fri, 13 Nov 2020 11:37:12 GMT", "version": "v2" } ]
2020-11-16
[ [ "Patel", "Nisheet", "" ], [ "Acerbi", "Luigi", "" ], [ "Pouget", "Alexandre", "" ] ]
Biological brains are inherently limited in their capacity to process and store information, but are nevertheless capable of solving complex tasks with apparent ease. Intelligent behavior is related to these limitations, since resource constraints drive the need to generalize and assign importance differentially to features in the environment or memories of past experiences. Recently, there have been parallel efforts in reinforcement learning and neuroscience to understand strategies adopted by artificial and biological agents to circumvent limitations in information storage. However, the two threads have been largely separate. In this article, we propose a dynamical framework to maximize expected reward under constraints of limited resources, which we implement with a cost function that penalizes precise representations of action-values in memory, each of which may vary in its precision. We derive from first principles an algorithm, Dynamic Resource Allocator (DRA), which we apply to two standard tasks in reinforcement learning and a model-based planning task, and find that it allocates more resources to items in memory that have a higher impact on cumulative rewards. Moreover, DRA learns faster when starting with a higher resource budget than what it eventually allocates for performing well on tasks, which may explain why frontal cortical areas in biological brains appear more engaged in early stages of learning before settling to lower asymptotic levels of activity. Our work provides a normative solution to the problem of learning how to allocate costly resources to a collection of uncertain memories in a manner that is capable of adapting to changes in the environment.
2303.06734
Emmanouil Giannakakis
Emmanouil Giannakakis, Sina Khajehabdollahi, Anna Levina
Environmental variability and network structure determine the optimal plasticity mechanisms in embodied agents
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The evolutionary balance between innate and learned behaviors is highly intricate, and different organisms have found different solutions to this problem. We hypothesize that the emergence and exact form of learning behaviors is naturally connected with the statistics of environmental fluctuations and tasks an organism needs to solve. Here, we study how different aspects of simulated environments shape an evolved synaptic plasticity rule in static and moving artificial agents. We demonstrate that environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity. Interestingly, the form of the emerging plasticity rule is additionally determined by the details of the task the artificial organisms are aiming to solve. Moreover, we show that co-evolution between static connectivity and interacting plasticity mechanisms in distinct sub-networks changes the function and form of the emerging plasticity rules in embodied agents performing a foraging task.
[ { "created": "Sun, 12 Mar 2023 19:29:31 GMT", "version": "v1" } ]
2023-03-14
[ [ "Giannakakis", "Emmanouil", "" ], [ "Khajehabdollahi", "Sina", "" ], [ "Levina", "Anna", "" ] ]
The evolutionary balance between innate and learned behaviors is highly intricate, and different organisms have found different solutions to this problem. We hypothesize that the emergence and exact form of learning behaviors is naturally connected with the statistics of environmental fluctuations and tasks an organism needs to solve. Here, we study how different aspects of simulated environments shape an evolved synaptic plasticity rule in static and moving artificial agents. We demonstrate that environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity. Interestingly, the form of the emerging plasticity rule is additionally determined by the details of the task the artificial organisms are aiming to solve. Moreover, we show that co-evolution between static connectivity and interacting plasticity mechanisms in distinct sub-networks changes the function and form of the emerging plasticity rules in embodied agents performing a foraging task.
2211.04764
Tripti Goel Dr
Shradha Verma, Tripti Goel, and M Tanveer
Quantitative Susceptibility Mapping in Cognitive Decline: A Review of Technical Aspects and Applications
null
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the human brain, essential iron molecules for proper neurological functioning exist in transferrin (tf) and ferritin (Fe3) forms. However, an unusual increment of iron manifests as iron overload, which reacts with hydrogen peroxide. This reaction generates hydroxyl radicals and iron's higher oxidation states. Further, this reaction causes tissue damage or cognitive decline in the brain and also leads to neurodegenerative diseases. The susceptibility difference due to iron overload within the volume of interest (VOI) is responsible for field perturbation in MRI and can help in estimating the neural disorder. The quantitative susceptibility mapping (QSM) technique can estimate susceptibility alteration and assist in quantifying the local tissue susceptibility differences. It has attracted many researchers and clinicians to diagnose and detect neural disorders such as Parkinson's, Alzheimer's, Multiple Sclerosis, and aging. The paper presents a systematic review illustrating QSM fundamentals and its processing steps, including phase unwrapping, background field removal, and susceptibility inversion. Using QSM, the present work delivers novel predictive biomarkers for various neural disorders. It can strengthen new researchers' fundamental knowledge and provides insight into its applicability for cognitive decline disclosure. The paper discusses the future scope of QSM processing stages and their applications in identifying new biomarkers for neural disorders.
[ { "created": "Wed, 9 Nov 2022 09:37:58 GMT", "version": "v1" } ]
2022-11-10
[ [ "Verma", "Shradha", "" ], [ "Goel", "Tripti", "" ], [ "Tanveer", "M", "" ] ]
In the human brain, essential iron molecules for proper neurological functioning exist in transferrin (tf) and ferritin (Fe3) forms. However, an unusual increment of iron manifests as iron overload, which reacts with hydrogen peroxide. This reaction generates hydroxyl radicals and iron's higher oxidation states. Further, this reaction causes tissue damage or cognitive decline in the brain and also leads to neurodegenerative diseases. The susceptibility difference due to iron overload within the volume of interest (VOI) is responsible for field perturbation in MRI and can help in estimating the neural disorder. The quantitative susceptibility mapping (QSM) technique can estimate susceptibility alteration and assist in quantifying the local tissue susceptibility differences. It has attracted many researchers and clinicians to diagnose and detect neural disorders such as Parkinson's, Alzheimer's, Multiple Sclerosis, and aging. The paper presents a systematic review illustrating QSM fundamentals and its processing steps, including phase unwrapping, background field removal, and susceptibility inversion. Using QSM, the present work delivers novel predictive biomarkers for various neural disorders. It can strengthen new researchers' fundamental knowledge and provides insight into its applicability for cognitive decline disclosure. The paper discusses the future scope of QSM processing stages and their applications in identifying new biomarkers for neural disorders.
1711.09114
Michael Deem
Shubham Tripathi and Michael W. Deem
The standard genetic code facilitates exploration of the space of functional nucleotide sequences
31 pages, 10 figures, 1 table
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The standard genetic code is well known to be optimized for minimizing the phenotypic effects of single nucleotide substitutions, a property that was likely selected for during the emergence of a universal code. Given the fitness advantage afforded by high standing genetic diversity in a population in a dynamic environment, it is possible that selection to explore a large fraction of the space of functional proteins also occurred. To determine whether selection for such a property played a role during the emergence of the nearly universal genetic code, we investigated the number of functional variants of the Escherichia coli PhoQ protein explored at different time scales under translation using different genetic codes. We found that the standard genetic code is highly optimal for exploring a large fraction of the space of functional PhoQ variants at intermediate time scales as compared to random codes. Environmental changes, in response to which genetic diversity in a population provides a fitness advantage, are likely to have occurred at these intermediate time scales. Our results indicate that the ability of the standard code to explore a large fraction of the space of functional sequence variants arises from a balance between robustness and flexibility and is largely independent of the property of the standard code to minimize the phenotypic effects of mutations. We propose that selection to explore a large fraction of the functional sequence space while minimizing the phenotypic effects of mutations contributed towards the emergence of the standard code as the universal genetic code.
[ { "created": "Fri, 24 Nov 2017 19:29:52 GMT", "version": "v1" } ]
2017-11-28
[ [ "Tripathi", "Shubham", "" ], [ "Deem", "Michael W.", "" ] ]
The standard genetic code is well known to be optimized for minimizing the phenotypic effects of single nucleotide substitutions, a property that was likely selected for during the emergence of a universal code. Given the fitness advantage afforded by high standing genetic diversity in a population in a dynamic environment, it is possible that selection to explore a large fraction of the space of functional proteins also occurred. To determine whether selection for such a property played a role during the emergence of the nearly universal genetic code, we investigated the number of functional variants of the Escherichia coli PhoQ protein explored at different time scales under translation using different genetic codes. We found that the standard genetic code is highly optimal for exploring a large fraction of the space of functional PhoQ variants at intermediate time scales as compared to random codes. Environmental changes, in response to which genetic diversity in a population provides a fitness advantage, are likely to have occurred at these intermediate time scales. Our results indicate that the ability of the standard code to explore a large fraction of the space of functional sequence variants arises from a balance between robustness and flexibility and is largely independent of the property of the standard code to minimize the phenotypic effects of mutations. We propose that selection to explore a large fraction of the functional sequence space while minimizing the phenotypic effects of mutations contributed towards the emergence of the standard code as the universal genetic code.
1911.00625
Hui Xue PhD
Hui Xue, Rhodri Davies, Louis AE Brown, Kristopher D Knott, Tushar Kotecha, Marianna Fontana, Sven Plein, James C Moon, Peter Kellman
Automated Inline Analysis of Myocardial Perfusion MRI with Deep Learning
This work has been submitted to Radiology: Artificial Intelligence for possible publication
null
null
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent development of quantitative myocardial blood flow (MBF) mapping allows direct evaluation of absolute myocardial perfusion, by computing pixel-wise flow maps. Clinical studies suggest quantitative evaluation would be more desirable for objectivity and efficiency. Objective assessment can be further facilitated by segmenting the myocardium and automatically generating reports following the AHA model. This will free user interaction for analysis and lead to a 'one-click' solution to improve workflow. This paper proposes a deep neural network based computational workflow for inline myocardial perfusion analysis. Adenosine stress and rest perfusion scans were acquired from three hospitals. The training set included N=1,825 perfusion series from 1,034 patients. The independent test set included 200 scans from 105 patients. Data were consecutively acquired at each site. A convolutional neural network (CNN) model was trained to provide segmentation of the LV cavity, myocardium, and right ventricle by processing incoming 2D+T perfusion Gd series. Model outputs were compared to manual ground-truth for accuracy of segmentation and flow measures derived on a global and per-sector basis. The trained models were integrated onto MR scanners for effective inference. Segmentation accuracy and myocardial flow measures were compared between CNN models and manual ground-truth. The mean Dice ratio of CNN derived myocardium was 0.93 +/- 0.04. Both global flow and per-sector values showed no significant difference, compared to manual results. The AHA 16 segment model was automatically generated and reported on the MR scanner. As a result, the fully automated analysis of perfusion flow mapping was achieved. This solution was integrated on the MR scanner, enabling 'one-click' analysis and reporting of myocardial blood flow.
[ { "created": "Sat, 2 Nov 2019 01:33:56 GMT", "version": "v1" }, { "created": "Fri, 29 May 2020 14:22:58 GMT", "version": "v2" } ]
2020-06-01
[ [ "Xue", "Hui", "" ], [ "Davies", "Rhodri", "" ], [ "Brown", "Louis AE", "" ], [ "Knott", "Kristopher D", "" ], [ "Kotecha", "Tushar", "" ], [ "Fontana", "Marianna", "" ], [ "Plein", "Sven", "" ], [ "Moon", "James C", "" ], [ "Kellman", "Peter", "" ] ]
Recent development of quantitative myocardial blood flow (MBF) mapping allows direct evaluation of absolute myocardial perfusion, by computing pixel-wise flow maps. Clinical studies suggest quantitative evaluation would be more desirable for objectivity and efficiency. Objective assessment can be further facilitated by segmenting the myocardium and automatically generating reports following the AHA model. This will free user interaction for analysis and lead to a 'one-click' solution to improve workflow. This paper proposes a deep neural network based computational workflow for inline myocardial perfusion analysis. Adenosine stress and rest perfusion scans were acquired from three hospitals. The training set included N=1,825 perfusion series from 1,034 patients. The independent test set included 200 scans from 105 patients. Data were consecutively acquired at each site. A convolutional neural network (CNN) model was trained to provide segmentation of the LV cavity, myocardium, and right ventricle by processing incoming 2D+T perfusion Gd series. Model outputs were compared to manual ground-truth for accuracy of segmentation and flow measures derived on a global and per-sector basis. The trained models were integrated onto MR scanners for effective inference. Segmentation accuracy and myocardial flow measures were compared between CNN models and manual ground-truth. The mean Dice ratio of CNN derived myocardium was 0.93 +/- 0.04. Both global flow and per-sector values showed no significant difference, compared to manual results. The AHA 16 segment model was automatically generated and reported on the MR scanner. As a result, the fully automated analysis of perfusion flow mapping was achieved. This solution was integrated on the MR scanner, enabling 'one-click' analysis and reporting of myocardial blood flow.
2004.03126
Indranil Mukhopadhyay
Sarmistha Das, Pramit Ghosh, Bandana Sen, and Indranil Mukhopadhyay
Critical community size for COVID-19 -- a model based approach to provide a rationale behind the lockdown
13 pages, 3 figures
Statistics and Applications {ISSN 2452-7395(online)} Volume 18, No. 1, 2020 (New Series), pp 181-196
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Restrictive mass quarantine or lockdown has been implemented as the most important controlling measure to fight against COVID-19. Many countries have enforced 2 - 4 weeks' lockdown and are extending the period depending on their current disease scenario. Most probably the 14-day period of estimated communicability of COVID-19 prompted such a decision. But the idea that, if the susceptible population drops below a certain threshold, the infection would naturally die out in small communities after a fixed time (following the outbreak), unless the disease is reintroduced from outside, was proposed by Bartlett in 1957. This threshold was termed the Critical Community Size (CCS). Methods: We propose an SEIR model that explains COVID-19 disease dynamics. Using our model, we have calculated the country-specific expected time to extinction (TTE) and CCS that would essentially determine the ideal number of lockdown days required and the size of the quarantined population. Findings: With the given country-wise rates of death, recovery and other parameters, we have identified that, if at a place the total number of susceptible population drops below the CCS, infection will cease to exist after a period of TTE days, unless it is introduced from outside. But the disease will almost die out much sooner. We have calculated the country-specific estimate of the ideal number of lockdown days. Thus, a smaller lockdown phase is sufficient to contain COVID-19. On a cautionary note, our model indicates another rise in infection almost a year later but of a lesser magnitude.
[ { "created": "Tue, 7 Apr 2020 04:51:56 GMT", "version": "v1" } ]
2020-10-28
[ [ "Das", "Sarmistha", "" ], [ "Ghosh", "Pramit", "" ], [ "Sen", "Bandana", "" ], [ "Mukhopadhyay", "Indranil", "" ] ]
Background: Restrictive mass quarantine or lockdown has been implemented as the most important controlling measure to fight against COVID-19. Many countries have enforced 2 - 4 weeks' lockdown and are extending the period depending on their current disease scenario. Most probably the 14-day period of estimated communicability of COVID-19 prompted such a decision. But the idea that, if the susceptible population drops below a certain threshold, the infection would naturally die out in small communities after a fixed time (following the outbreak), unless the disease is reintroduced from outside, was proposed by Bartlett in 1957. This threshold was termed the Critical Community Size (CCS). Methods: We propose an SEIR model that explains COVID-19 disease dynamics. Using our model, we have calculated the country-specific expected time to extinction (TTE) and CCS that would essentially determine the ideal number of lockdown days required and the size of the quarantined population. Findings: With the given country-wise rates of death, recovery and other parameters, we have identified that, if at a place the total number of susceptible population drops below the CCS, infection will cease to exist after a period of TTE days, unless it is introduced from outside. But the disease will almost die out much sooner. We have calculated the country-specific estimate of the ideal number of lockdown days. Thus, a smaller lockdown phase is sufficient to contain COVID-19. On a cautionary note, our model indicates another rise in infection almost a year later but of a lesser magnitude.
1112.3988
Aleksandar Stojmirovi\'c
Aleksandar Stojmirovi\'c and Yi-Kuo Yu
Information Flow in Interaction Networks
30 pages, 5 figures. This paper was published in 2007 in Journal of Computational Biology. The version posted here does not include post peer-review changes
J. Comput. Biol., 14 (8): 1115-1143, 2007
10.1089/cmb.2007.0069
null
q-bio.MN
http://creativecommons.org/licenses/publicdomain/
Interaction networks, consisting of agents linked by their interactions, are ubiquitous across many disciplines of modern science. Many methods of analysis of interaction networks have been proposed, mainly concentrating on node degree distribution or aiming to discover clusters of agents that are very strongly connected among themselves. These methods are principally based on graph theory or machine learning. We present a mathematically simple formalism for modelling context-specific information propagation in interaction networks based on random walks. The context is provided by selection of sources and destinations of information and by use of potential functions that direct the flow towards the destinations. We also use the concept of dissipation to model the aging of information as it diffuses from its source. Using examples from yeast protein-protein interaction networks and some of the histone acetyltransferases involved in control of transcription, we demonstrate the utility of the concepts and the mathematical constructs introduced in this paper.
[ { "created": "Fri, 16 Dec 2011 22:33:10 GMT", "version": "v1" } ]
2011-12-20
[ [ "Stojmirović", "Aleksandar", "" ], [ "Yu", "Yi-Kuo", "" ] ]
Interaction networks, consisting of agents linked by their interactions, are ubiquitous across many disciplines of modern science. Many methods of analysis of interaction networks have been proposed, mainly concentrating on node degree distribution or aiming to discover clusters of agents that are very strongly connected among themselves. These methods are principally based on graph theory or machine learning. We present a mathematically simple formalism for modelling context-specific information propagation in interaction networks based on random walks. The context is provided by selection of sources and destinations of information and by use of potential functions that direct the flow towards the destinations. We also use the concept of dissipation to model the aging of information as it diffuses from its source. Using examples from yeast protein-protein interaction networks and some of the histone acetyltransferases involved in control of transcription, we demonstrate the utility of the concepts and the mathematical constructs introduced in this paper.