Dataset columns (name, type, length range or number of classes):

id              stringlengths    9 - 13
submitter       stringlengths    4 - 48
authors         stringlengths    4 - 9.62k
title           stringlengths    4 - 343
comments        stringlengths    2 - 480
journal-ref     stringlengths    9 - 309
doi             stringlengths    12 - 138
report-no       stringclasses    277 values
categories      stringlengths    8 - 87
license         stringclasses    9 values
orig_abstract   stringlengths    27 - 3.76k
versions        listlengths      1 - 15
update_date     stringlengths    10 - 10
authors_parsed  listlengths      1 - 147
abstract        stringlengths    24 - 3.75k
1308.5850
Roland Langrock
Roland Langrock, J. Grant C. Hopcraft, Paul G. Blackwell, Victoria Goodall, Ruth King, Mu Niu, Toby A. Patterson, Martin W. Pedersen, Anna Skarin, Robert S. Schick
Modelling group dynamic animal movement
null
Methods in Ecology and Evolution, 2014, Vol. 5, Issue 2, pages 190-199
10.1111/2041-210X.12155
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias in non-ideal scenarios, we show that generally the estimation of models of this type is both feasible and ecologically informative. We illustrate the approach using real movement data from 11 reindeer (Rangifer tarandus). Results indicate a directional bias towards a group centroid for reindeer in an encamped state. Though the attraction to the group centroid is relatively weak, our model successfully captures group-influenced movement dynamics. Specifically, as compared to a regular mixture of correlated random walks, the group dynamic model more accurately predicts the non-diffusive behaviour of a cohesive mobile group.
[ { "created": "Tue, 27 Aug 2013 12:52:33 GMT", "version": "v1" } ]
2015-05-21
[ [ "Langrock", "Roland", "" ], [ "Hopcraft", "J. Grant C.", "" ], [ "Blackwell", "Paul G.", "" ], [ "Goodall", "Victoria", "" ], [ "King", "Ruth", "" ], [ "Niu", "Mu", "" ], [ "Patterson", "Toby A.", "" ], [ "Pedersen", "Martin W.", "" ], [ "Skarin", "Anna", "" ], [ "Schick", "Robert S.", "" ] ]
Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias in non-ideal scenarios, we show that generally the estimation of models of this type is both feasible and ecologically informative. We illustrate the approach using real movement data from 11 reindeer (Rangifer tarandus). Results indicate a directional bias towards a group centroid for reindeer in an encamped state. Though the attraction to the group centroid is relatively weak, our model successfully captures group-influenced movement dynamics. Specifically, as compared to a regular mixture of correlated random walks, the group dynamic model more accurately predicts the non-diffusive behaviour of a cohesive mobile group.
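A minimal sketch of the modelling idea in the abstract above: a group-level random walk for the centroid, plus individual-level biased random walks whose behavioural state follows a two-state Markov chain. All parameter values (transition matrix, step scales, attraction strengths) are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_ind = 500, 11                   # time steps; the paper tracks 11 reindeer
P = np.array([[0.9, 0.1],            # hypothetical 2-state transition matrix:
              [0.2, 0.8]])           # state 0 = encamped, 1 = exploratory
step_sd = np.array([0.1, 1.0])       # step-length scale per state (assumed)
attract = np.array([0.3, 0.05])      # pull towards the centroid, per state

centroid = np.zeros((T, 2))          # group-level model: simple random walk
states = np.zeros((T, n_ind), dtype=int)
pos = np.zeros((T, n_ind, 2))

for t in range(1, T):
    centroid[t] = centroid[t - 1] + rng.normal(0, 0.3, 2)
    for i in range(n_ind):
        s = rng.choice(2, p=P[states[t - 1, i]])        # hidden Markov state
        states[t, i] = s
        drift = attract[s] * (centroid[t - 1] - pos[t - 1, i])
        pos[t, i] = pos[t - 1, i] + drift + rng.normal(0, step_sd[s], 2)

# encamped animals cluster around the moving centre; exploratory ones scatter
print("mean distance to centroid:",
      np.linalg.norm(pos[-1] - centroid[-1], axis=1).mean())
```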
1908.08482
Christian Quirouette
Christian Quirouette, Nada P. Younis, Micaela B. Reddy, Catherine A.A. Beauchemin
A mathematical model describing the localization and spread of influenza A virus infection within the human respiratory tract
27 pages, 11 figures, 2 supplementary videos
PLoS Comput Biol. 2020 Apr 13;16(4):e1007705
10.1371/journal.pcbi.1007705
RIKEN-iTHEMS-Report-19
q-bio.CB q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Within the human respiratory tract (HRT), viruses diffuse through the periciliary fluid (PCF) bathing the epithelium, and travel upwards via advection towards the nose and mouth, as the mucus escalator entrains the PCF. While many mathematical models (MMs) to date have described the course of influenza A virus (IAV) infections in vivo, none have considered the impact of both diffusion and advection on the kinetics and localization of the infection. The MM herein represents the HRT as a one-dimensional track extending from the nose down to a depth of 30 cm, wherein stationary cells interact with the concentration of IAV moving along within the PCF. When IAV advection and diffusion are both considered, the former is found to dominate infection kinetics, and a 10-fold increase in the virus production rate is required to counter its effects. The MM predicts that advection prevents infection from disseminating below the depth at which virus first deposits. Because virus is entrained upwards, the upper HRT sees the most virus, whereas the lower HRT sees far less. As such, infection peaks and resolves faster in the upper than in the lower HRT, making it appear as though infection progresses from the upper towards the lower HRT. When the spatial MM is expanded to include cellular regeneration and an immune response, it can capture the time course of infection with a seasonal and an avian IAV strain by shifting parameters in a manner consistent with what is expected to differ between these two types of infection. The impact of antiviral therapy with neuraminidase inhibitors was also investigated. This new MM offers a convenient and unique platform from which to study the localization and spread of respiratory viral infections within the HRT.
[ { "created": "Thu, 22 Aug 2019 16:31:40 GMT", "version": "v1" } ]
2020-04-15
[ [ "Quirouette", "Christian", "" ], [ "Younis", "Nada P.", "" ], [ "Reddy", "Micaela B.", "" ], [ "Beauchemin", "Catherine A. A.", "" ] ]
Within the human respiratory tract (HRT), viruses diffuse through the periciliary fluid (PCF) bathing the epithelium, and travel upwards via advection towards the nose and mouth, as the mucus escalator entrains the PCF. While many mathematical models (MMs) to date have described the course of influenza A virus (IAV) infections in vivo, none have considered the impact of both diffusion and advection on the kinetics and localization of the infection. The MM herein represents the HRT as a one-dimensional track extending from the nose down to a depth of 30 cm, wherein stationary cells interact with the concentration of IAV moving along within the PCF. When IAV advection and diffusion are both considered, the former is found to dominate infection kinetics, and a 10-fold increase in the virus production rate is required to counter its effects. The MM predicts that advection prevents infection from disseminating below the depth at which virus first deposits. Because virus is entrained upwards, the upper HRT sees the most virus, whereas the lower HRT sees far less. As such, infection peaks and resolves faster in the upper than in the lower HRT, making it appear as though infection progresses from the upper towards the lower HRT. When the spatial MM is expanded to include cellular regeneration and an immune response, it can capture the time course of infection with a seasonal and an avian IAV strain by shifting parameters in a manner consistent with what is expected to differ between these two types of infection. The impact of antiviral therapy with neuraminidase inhibitors was also investigated. This new MM offers a convenient and unique platform from which to study the localization and spread of respiratory viral infections within the HRT.
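The advection-diffusion picture above can be made concrete with an explicit finite-difference scheme on a 1D track. The diffusivity, PCF speed, and deposit location below are assumed round numbers, not the paper's fitted parameters.

```python
import numpy as np

L, nx = 30.0, 300                  # 30 cm track discretised into nx cells
dx = L / nx
D, w = 3e-4, 1e-2                  # assumed diffusivity (cm^2/s), PCF speed (cm/s)
dt = 0.4 * min(dx * dx / (2 * D), dx / w)   # explicit-scheme stability limits

V = np.zeros(nx)
V[nx // 2] = 1.0                   # virus deposited mid-track (15 cm depth)

def step(V):
    Vn = V.copy()
    # central difference for diffusion; upwind difference for advection
    # towards x = 0 (the nose), so information comes from larger x
    Vn[1:-1] += dt * (D * (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
                      + w * (V[2:] - V[1:-1]) / dx)
    Vn[0] += dt * w * (V[1] - V[0]) / dx    # outflow at the nose end
    return Vn

for _ in range(300):
    V = step(V)
print("centre of mass (cm):", (np.arange(nx) * dx * V).sum() / V.sum())
# the peak drifts towards the nose: advection, not diffusion, dominates spread
```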
1811.01425
Francesco Maria Sabatini Dr
Francesco Maria Sabatini, Borja Jiménez-Alfaro, Sabina Burrascano, Andrea Lora, Milan Chytrý
Beta-diversity of Central European forests decreases along an elevational gradient due to the variation in local community assembly processes
Accepted version 25 pages, 5 figures, 1 table
Ecography 41(6): 1038-1048 (2018)
10.1111/ecog.02809
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Beta-diversity has been repeatedly shown to decline with increasing elevation, but the causes of this pattern remain unclear, partly because they are confounded by coincident variation in alpha- and gamma-diversity. We used 8,795 forest vegetation-plot records from the Czech National Phytosociological Database to compare the observed patterns of beta-diversity to null-model expectations (beta-deviation) controlling for the effects of alpha- and gamma-diversity. We tested whether beta-diversity patterns along a 1,200 m elevation gradient exclusively depend on the effect of varying species pool size, or also on the variation of the magnitude of community assembly mechanisms determining the distribution of species across communities (e.g., environmental filtering, dispersal limitation). The null model we used is a novel extension of an existing null model designed for presence/absence data and was specifically designed to disrupt the effect of community assembly mechanisms, while retaining some key features of observed communities such as average species richness and species abundance distribution. Analyses were replicated in ten subregions with comparable elevation ranges. Beta-diversity declined along the elevation gradient due to a decrease in gamma-diversity, which was steeper than the decrease in alpha-diversity. This pattern persisted after controlling for alpha- and gamma-diversity variation, and the results were robust when different resampling schemes and diversity metrics were used. We conclude that in temperate forests the pattern of decreasing beta-diversity with elevation does not exclusively depend on variation in species pool size, as has been hypothesized, but also on variation in community assembly mechanisms. The results were consistent across resampling schemes and diversity measures, thus supporting the use of vegetation plot databases for understanding...
[ { "created": "Sun, 4 Nov 2018 19:52:19 GMT", "version": "v1" } ]
2018-11-06
[ [ "Sabatini", "Francesco Maria", "" ], [ "Jiménez-Alfaro", "Borja", "" ], [ "Burrascano", "Sabina", "" ], [ "Lora", "Andrea", "" ], [ "Chytrý", "Milan", "" ] ]
Beta-diversity has been repeatedly shown to decline with increasing elevation, but the causes of this pattern remain unclear, partly because they are confounded by coincident variation in alpha- and gamma-diversity. We used 8,795 forest vegetation-plot records from the Czech National Phytosociological Database to compare the observed patterns of beta-diversity to null-model expectations (beta-deviation) controlling for the effects of alpha- and gamma-diversity. We tested whether beta-diversity patterns along a 1,200 m elevation gradient exclusively depend on the effect of varying species pool size, or also on the variation of the magnitude of community assembly mechanisms determining the distribution of species across communities (e.g., environmental filtering, dispersal limitation). The null model we used is a novel extension of an existing null model designed for presence/absence data and was specifically designed to disrupt the effect of community assembly mechanisms, while retaining some key features of observed communities such as average species richness and species abundance distribution. Analyses were replicated in ten subregions with comparable elevation ranges. Beta-diversity declined along the elevation gradient due to a decrease in gamma-diversity, which was steeper than the decrease in alpha-diversity. This pattern persisted after controlling for alpha- and gamma-diversity variation, and the results were robust when different resampling schemes and diversity metrics were used. We conclude that in temperate forests the pattern of decreasing beta-diversity with elevation does not exclusively depend on variation in species pool size, as has been hypothesized, but also on variation in community assembly mechanisms. The results were consistent across resampling schemes and diversity measures, thus supporting the use of vegetation plot databases for understanding...
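The beta-deviation logic above (observed beta-diversity compared against a null distribution) can be sketched as follows. The paper uses a purpose-built null model preserving richness and abundance structure; this sketch substitutes a generic column-permutation null and mean pairwise Jaccard dissimilarity, purely to illustrate the z-score construction.

```python
import numpy as np

rng = np.random.default_rng(1)
comm = (rng.random((40, 120)) < 0.15).astype(int)   # toy plots x species matrix

def beta_pairwise(m):
    """Mean pairwise Jaccard dissimilarity between plots."""
    vals = []
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            union = np.sum(m[i] | m[j])
            if union:
                vals.append(1 - np.sum(m[i] & m[j]) / union)
    return np.mean(vals)

obs = beta_pairwise(comm)

# Generic null: permute each species column independently across plots,
# preserving species frequencies (plot richness is preserved only on average).
null = np.empty(199)
for k in range(199):
    null[k] = beta_pairwise(np.apply_along_axis(rng.permutation, 0, comm))

beta_dev = (obs - null.mean()) / null.std()     # the beta-deviation z-score
print(f"observed beta = {obs:.3f}, beta-deviation = {beta_dev:.2f}")
```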
1211.1607
Vincenzo Forgetta
Vincenzo Forgetta and Ken Dewar
CGB: A UNIX shell program to create custom instances of the UCSC Genome Browser
8 pages, 1 figure and 1 table
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The UCSC Genome Browser is a popular tool for the exploration and analysis of reference genomes. Mirrors of the UCSC Genome Browser and its contents exist at multiple geographic locations, and this mirror procedure has been modified to support genome sequences not maintained by UCSC and generated by individual researchers. While straightforward, this procedure is lengthy and tedious and would benefit from automation, especially when processing many genome sequences. We present a Unix shell program that facilitates the creation of custom instances of the UCSC Genome Browser for genome sequences not being maintained by UCSC. It automates many steps of the browser creation process, provides password protection for each browser instance, and automates the creation of basic annotation tracks. As an example we generate a custom UCSC Genome Browser for a bacterial genome obtained from a massively parallel sequencing platform.
[ { "created": "Wed, 7 Nov 2012 17:10:14 GMT", "version": "v1" } ]
2012-11-29
[ [ "Forgetta", "Vincenzo", "" ], [ "Dewar", "Ken", "" ] ]
The UCSC Genome Browser is a popular tool for the exploration and analysis of reference genomes. Mirrors of the UCSC Genome Browser and its contents exist at multiple geographic locations, and this mirror procedure has been modified to support genome sequences not maintained by UCSC and generated by individual researchers. While straightforward, this procedure is lengthy and tedious and would benefit from automation, especially when processing many genome sequences. We present a Unix shell program that facilitates the creation of custom instances of the UCSC Genome Browser for genome sequences not being maintained by UCSC. It automates many steps of the browser creation process, provides password protection for each browser instance, and automates the creation of basic annotation tracks. As an example we generate a custom UCSC Genome Browser for a bacterial genome obtained from a massively parallel sequencing platform.
1301.2366
Andrei Zinovyev Dr.
Andrei Zinovyev, Simon Fourquet, Laurent Tournier, Laurence Calzone and Emmanuel Barillot
Cell death and life in cancer: mathematical modeling of cell fate decisions
null
Advances in Experimental Medicine and Biology, Vol. 736 (Goryanin, I. and Goryachev A., eds.), Springer, 2012, 682p
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumor development is characterized by a compromised balance between cell life and death decision mechanisms, which are tightly regulated in normal cells. Understanding this process provides insights for developing new treatments for fighting cancer. We present a study of a mathematical model describing the cellular choice between survival and two alternative cell death modalities: apoptosis and necrosis. The model is implemented in a discrete modeling formalism and allows us to predict the probabilities of particular cellular phenotypes in response to engagement of cell death receptors. Using an original parameter sensitivity analysis developed for discrete dynamic systems, we determine the parameters that appear to be critical in the cellular fate decision and discuss how they are exploited by existing cancer therapies.
[ { "created": "Fri, 11 Jan 2013 00:32:16 GMT", "version": "v1" } ]
2013-01-14
[ [ "Zinovyev", "Andrei", "" ], [ "Fourquet", "Simon", "" ], [ "Tournier", "Laurent", "" ], [ "Calzone", "Laurence", "" ], [ "Barillot", "Emmanuel", "" ] ]
Tumor development is characterized by a compromised balance between cell life and death decision mechanisms, which are tightly regulated in normal cells. Understanding this process provides insights for developing new treatments for fighting cancer. We present a study of a mathematical model describing the cellular choice between survival and two alternative cell death modalities: apoptosis and necrosis. The model is implemented in a discrete modeling formalism and allows us to predict the probabilities of particular cellular phenotypes in response to engagement of cell death receptors. Using an original parameter sensitivity analysis developed for discrete dynamic systems, we determine the parameters that appear to be critical in the cellular fate decision and discuss how they are exploited by existing cancer therapies.
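A toy illustration of the discrete (logical) modeling formalism, with phenotype probabilities estimated from stochastic asynchronous updates. The network below is an invented five-node caricature, not the published model; the node rules and the phenotype read-outs (ATP-dependent apoptosis vs necrosis) are assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Invented 5-node logical network: DR (death receptor), C8 (caspase 8),
# MOMP (mitochondrial permeabilisation), ATP, SURV (survival signalling).
def update(s):
    dr, c8, momp, atp, surv = s
    return np.array([
        dr,                    # receptor engagement is a fixed input
        dr and not surv,       # caspase 8 active unless survival signals block it
        c8,                    # MOMP follows caspase 8
        atp and not momp,      # ATP collapses after mitochondrial damage
        not c8,                # survival signalling inhibited by caspase 8
    ], dtype=bool)

def run(n_steps=60):
    s = rng.random(5) < 0.5
    s[0] = True                            # receptor engaged in every run
    for _ in range(n_steps):
        i = rng.integers(5)                # asynchronous: one random node per step
        s[i] = update(s)[i]
    if s[2] and s[3]:
        return "apoptosis"                 # MOMP with ATP still available
    if s[2]:
        return "necrosis"                  # MOMP after ATP depletion
    return "survival"

print(Counter(run() for _ in range(2000)))  # estimated phenotype probabilities
```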
2204.09798
Josinaldo Menezes
J. Menezes, S. Rodrigues, S. Batista
Mobility unevenness in rock-paper-scissors models
7 pages, 7 figures
Ecological Complexity 52, 101028 (2022)
10.1016/j.ecocom.2022.101028
null
q-bio.PE cond-mat.stat-mech nlin.AO nlin.PS physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a tritrophic system whose cyclic dominance is modelled by the rock-paper-scissors game. We consider that organisms of one or two species are affected by movement limitations, which unbalances the cyclic spatial game. Performing stochastic simulations, we show that mobility unevenness controls the population dynamics. In the case of one slow species, the predominant species depends on the level of mobility restriction, with the slow species being preponderant if the mobility limitations are substantial. If two species face mobility limitations, our outcomes show that being more dispersive does not constitute an advantage in terms of population growth. On the contrary, if organisms move with higher mobility, they expose themselves to enemies more frequently and are more vulnerable to being eliminated. Finally, our findings show that biodiversity benefits in regions where species are slowed. Biodiversity loss for high-mobility organisms, common in cyclic systems, may be avoided, with the coexistence probability being higher under strong mobility limitations. Our results may help biologists understand the dynamics of unbalanced spatial systems where organisms' dispersal is fundamental to biodiversity conservation.
[ { "created": "Wed, 20 Apr 2022 21:59:01 GMT", "version": "v1" } ]
2023-03-13
[ [ "Menezes", "J.", "" ], [ "Rodrigues", "S.", "" ], [ "Batista", "S.", "" ] ]
We investigate a tritrophic system whose cyclic dominance is modelled by the rock-paper-scissors game. We consider that organisms of one or two species are affected by movement limitations, which unbalances the cyclic spatial game. Performing stochastic simulations, we show that mobility unevenness controls the population dynamics. In the case of one slow species, the predominant species depends on the level of mobility restriction, with the slow species being preponderant if the mobility limitations are substantial. If two species face mobility limitations, our outcomes show that being more dispersive does not constitute an advantage in terms of population growth. On the contrary, if organisms move with higher mobility, they expose themselves to enemies more frequently and are more vulnerable to being eliminated. Finally, our findings show that biodiversity benefits in regions where species are slowed. Biodiversity loss for high-mobility organisms, common in cyclic systems, may be avoided, with the coexistence probability being higher under strong mobility limitations. Our results may help biologists understand the dynamics of unbalanced spatial systems where organisms' dispersal is fundamental to biodiversity conservation.
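A minimal lattice sketch of a rock-paper-scissors model with unequal mobility, in the spirit of the abstract above. Rates, lattice size, and the mobility ratio for the "slow" species are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
grid = rng.integers(0, 4, (N, N))           # 0 = empty, 1..3 = cyclic species
mobility = np.array([0.0, 1.0, 1.0, 0.2])   # species 3 moves rarely (assumed)
moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

def interact():
    i, j = rng.integers(N, size=2)
    s = grid[i, j]
    if s == 0:
        return
    di, dj = moves[rng.integers(4)]
    ni, nj = (i + di) % N, (j + dj) % N     # periodic boundaries
    t, r = grid[ni, nj], rng.random()
    if t == s % 3 + 1:
        grid[ni, nj] = 0                    # predation: 1 beats 2 beats 3 beats 1
    elif t == 0 and r < 0.5:
        grid[ni, nj] = s                    # reproduction into an empty site
    elif r < mobility[s]:
        grid[i, j], grid[ni, nj] = t, s     # movement: swap with the neighbour

for _ in range(100 * N * N):                # 100 generations
    interact()
print("abundances:", [float(np.mean(grid == s)) for s in (1, 2, 3)])
```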
0801.2708
Ioana Bena Dr.
Ioana Bena, Michel Droz, Janusz Szwabinski, Andrzej Pekalski
How bad is it to be slow-reacting? On the effect of the delay in response to a changing environment on a population's survival
7 pages, 4 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
null
We consider a simple model population whose individuals react with a certain delay to temporal variations of their habitat. We investigate the impact of such a delayed response on the survival chances of the population, both in a periodically changing environment and in the case of an abrupt change of it. It is found that for populations with a low degree of mutation-induced variability, being "slow-reacting" decreases the extinction risk in the face of environmental changes. On the contrary, for populations with a high mutation amplitude, the delayed reaction reduces the survival chances.
[ { "created": "Thu, 17 Jan 2008 15:50:41 GMT", "version": "v1" } ]
2008-01-18
[ [ "Bena", "Ioana", "" ], [ "Droz", "Michel", "" ], [ "Szwabinski", "Janusz", "" ], [ "Pekalski", "Andrzej", "" ] ]
We consider a simple model population whose individuals react with a certain delay to temporal variations of their habitat. We investigate the impact of such a delayed response on the survival chances of the population, both in a periodically changing environment and in the case of an abrupt change of it. It is found that for populations with a low degree of mutation-induced variability, being "slow-reacting" decreases the extinction risk in the face of environmental changes. On the contrary, for populations with a high mutation amplitude, the delayed reaction reduces the survival chances.
1508.00684
Denys Dutykh
Ramon Escobedo (BCAM), Denys Dutykh (LAMA), Cristina Muro (AEPA), Lee Spector, Raymond Coppinger
Group Size Effect on the Success of Wolves Hunting
20 pages, 4 figures, 8 references. Other author's papers can be downloaded at http://www.denys-dutykh.com/
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social foraging shows unexpected features such as the existence of a group size threshold to accomplish a successful hunt. Above this threshold, additional individuals do not increase the probability of capturing the prey. Recent direct observations of wolves in Yellowstone Park show that the group size threshold when hunting their most formidable prey, bison, is nearly three times greater than when hunting elk, a prey that is considerably less challenging to capture than bison. These observations provide empirical support to a computational particle model of group hunting which was previously shown to be effective in explaining why hunting success peaks at apparently small pack sizes when hunting elk. The model is based on considering two critical distances between wolves and prey: the minimal safe distance at which wolves stand from the prey, and the avoidance distance at which wolves move away from each other when they approach the prey. The minimal safe distance is longer when the prey is more dangerous to hunt. We show that the model effectively explains why the group size threshold is greater when the minimal safe distance is longer. Although both distances are longer when the prey is more dangerous, they contribute oppositely to the value of the group size threshold: the group size threshold is smaller when the avoidance distance is longer. This unexpected mechanism gives rise to a global increase of the group size threshold when considering bison with respect to elk, but other prey more dangerous than elk can lead to specific critical distances that give rise to the same group size threshold. Our results show that the computational model can guide further research on group size effects, suggesting that more experimental observations should be obtained for other kinds of prey, e.g. moose.
[ { "created": "Tue, 4 Aug 2015 07:20:08 GMT", "version": "v1" } ]
2015-08-05
[ [ "Escobedo", "Ramon", "", "BCAM" ], [ "Dutykh", "Denys", "", "LAMA" ], [ "Muro", "Cristina", "", "AEPA" ], [ "Spector", "Lee", "" ], [ "Coppinger", "Raymond", "" ] ]
Social foraging shows unexpected features such as the existence of a group size threshold to accomplish a successful hunt. Above this threshold, additional individuals do not increase the probability of capturing the prey. Recent direct observations of wolves in Yellowstone Park show that the group size threshold when hunting their most formidable prey, bison, is nearly three times greater than when hunting elk, a prey that is considerably less challenging to capture than bison. These observations provide empirical support to a computational particle model of group hunting which was previously shown to be effective in explaining why hunting success peaks at apparently small pack sizes when hunting elk. The model is based on considering two critical distances between wolves and prey: the minimal safe distance at which wolves stand from the prey, and the avoidance distance at which wolves move away from each other when they approach the prey. The minimal safe distance is longer when the prey is more dangerous to hunt. We show that the model effectively explains why the group size threshold is greater when the minimal safe distance is longer. Although both distances are longer when the prey is more dangerous, they contribute oppositely to the value of the group size threshold: the group size threshold is smaller when the avoidance distance is longer. This unexpected mechanism gives rise to a global increase of the group size threshold when considering bison with respect to elk, but other prey more dangerous than elk can lead to specific critical distances that give rise to the same group size threshold. Our results show that the computational model can guide further research on group size effects, suggesting that more experimental observations should be obtained for other kinds of prey, e.g. moose.
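The two-distance mechanism above lends itself to a compact particle sketch: wolves are attracted to the prey until the minimal safe distance and repel each other inside the avoidance distance. Distances, step size, and pack size below are assumed values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
n_wolves = 6
d_safe, d_avoid = 3.0, 1.5        # minimal safe and avoidance distances (assumed)
prey = np.zeros(2)                # stationary prey for simplicity
wolves = rng.normal(0, 10, (n_wolves, 2))

for _ in range(500):
    # attraction towards the prey, switched off inside the safe distance
    to_prey = prey - wolves
    dist = np.linalg.norm(to_prey, axis=1, keepdims=True)
    pull = to_prey / dist * (dist > d_safe)
    # pairwise repulsion between wolves closer than the avoidance distance
    push = np.zeros_like(wolves)
    for a in range(n_wolves):
        for b in range(n_wolves):
            if a != b:
                d = wolves[a] - wolves[b]
                r = np.linalg.norm(d)
                if r < d_avoid:
                    push[a] += d / r
    wolves += 0.1 * (pull + push)

# wolves settle on a ring of radius ~d_safe around the prey; whether the ring
# closes around it depends on pack size relative to the two critical distances
print(np.linalg.norm(wolves - prey, axis=1).round(2))
```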
2110.13603
Stav Marcus
Stav Marcus, Ari M. Turner and Guy Bunin
Local and collective transitions in sparsely-interacting ecological communities
15 pages, 11 figures
null
10.1371/journal.pcbi.1010274
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactions in natural communities can be highly heterogeneous, with any given species interacting appreciably with only some of the others, a situation commonly represented by sparse interaction networks. We study the consequences of sparse competitive interactions, in a theoretical model of a community assembled from a species pool. We find that communities can be in a number of different regimes, depending on the interaction strength. When interactions are strong, the network of coexisting species breaks up into small subgraphs, while for weaker interactions these graphs are larger and more complex, eventually encompassing all species. This process is driven by emergence of new allowed subgraphs as interaction strength decreases, leading to sharp changes in diversity and other community properties, and at weaker interactions to two distinct collective transitions: a percolation transition, and a transition between having a unique equilibrium and having multiple alternative equilibria. Understanding community structure is thus made up of two parts: first, finding which subgraphs are allowed at a given interaction strength, and secondly, a discrete problem of matching these structures over the entire community. In a shift from the focus of many previous theories, these different regimes can be traversed by modifying the interaction strength alone, without need for heterogeneity in either interaction strengths or the number of competitors per species.
[ { "created": "Tue, 26 Oct 2021 12:00:35 GMT", "version": "v1" } ]
2022-10-12
[ [ "Marcus", "Stav", "" ], [ "Turner", "Ari M.", "" ], [ "Bunin", "Guy", "" ] ]
Interactions in natural communities can be highly heterogeneous, with any given species interacting appreciably with only some of the others, a situation commonly represented by sparse interaction networks. We study the consequences of sparse competitive interactions, in a theoretical model of a community assembled from a species pool. We find that communities can be in a number of different regimes, depending on the interaction strength. When interactions are strong, the network of coexisting species breaks up into small subgraphs, while for weaker interactions these graphs are larger and more complex, eventually encompassing all species. This process is driven by emergence of new allowed subgraphs as interaction strength decreases, leading to sharp changes in diversity and other community properties, and at weaker interactions to two distinct collective transitions: a percolation transition, and a transition between having a unique equilibrium and having multiple alternative equilibria. Understanding community structure is thus made up of two parts: first, finding which subgraphs are allowed at a given interaction strength, and secondly, a discrete problem of matching these structures over the entire community. In a shift from the focus of many previous theories, these different regimes can be traversed by modifying the interaction strength alone, without need for heterogeneity in either interaction strengths or the number of competitors per species.
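A sketch of the kind of disordered Lotka-Volterra experiment described above, on a sparse random competition graph. Pool size, degree, and interaction strength are assumptions; the paper additionally includes a small immigration rate, omitted here for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(5)
S, k, a = 60, 4, 0.9           # pool size, mean degree, interaction strength (assumed)
A = np.zeros((S, S))
for i in range(S):             # sparse symmetric competition network
    for j in rng.choice(S, k, replace=False):
        if i != j:
            A[i, j] = A[j, i] = a

def glv(t, x):
    return x * (1.0 - x - A @ x)   # Lotka-Volterra competition, carrying capacity 1

sol = solve_ivp(glv, (0, 2000), rng.random(S) * 0.1 + 0.01, rtol=1e-8)
alive = sol.y[:, -1] > 1e-6
print(f"coexisting species: {alive.sum()} of {S}")
# at strong a the survivors form small disconnected subgraphs of A;
# rerunning with a = 0.3 yields larger, more complex coexisting structures
```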
1507.06890
Ziyue Gao
Ziyue Gao, Minyoung J. Wyman, Guy Sella and Molly Przeworski
Interpreting the dependence of mutation rates on age and time
5 figures, 2 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutations can arise from the chance misincorporation of nucleotides during DNA replication or from DNA lesions that are not repaired correctly. We introduce a model that relates the source of mutations to their accumulation with cell divisions, providing a framework for understanding how mutation rates depend on sex, age and absolute time. We show that the accrual of mutations should track cell divisions not only when mutations are replicative in origin but also when they are non-replicative and repaired efficiently. One implication is that the higher incidence of cancer in rapidly renewing tissues, an observation ascribed to replication errors, could instead reflect exogenous or endogenous mutagens. We further find that only mutations that arise from inefficiently repaired lesions will accrue according to absolute time; thus, in the absence of selection on mutation rates, the phylogenetic "molecular clock" should not be expected to run steadily across species.
[ { "created": "Fri, 24 Jul 2015 15:30:45 GMT", "version": "v1" } ]
2015-07-27
[ [ "Gao", "Ziyue", "" ], [ "Wyman", "Minyoung J.", "" ], [ "Sella", "Guy", "" ], [ "Przeworski", "Molly", "" ] ]
Mutations can arise from the chance misincorporation of nucleotides during DNA replication or from DNA lesions that are not repaired correctly. We introduce a model that relates the source of mutations to their accumulation with cell divisions, providing a framework for understanding how mutation rates depend on sex, age and absolute time. We show that the accrual of mutations should track cell divisions not only when mutations are replicative in origin but also when they are non-replicative and repaired efficiently. One implication is that the higher incidence of cancer in rapidly renewing tissues, an observation ascribed to replication errors, could instead reflect exogenous or endogenous mutagens. We further find that only mutations that arise from inefficiently repaired lesions will accrue according to absolute time; thus, in the absence of selection on mutation rates, the phylogenetic "molecular clock" should not be expected to run steadily across species.
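One way to make the divisions-versus-time argument concrete, with assumed notation (not the authors'): lesions arise at rate u, are repaired at rate rho, and a lesion still unrepaired at the next replication (every tau time units) becomes a mutation.

```latex
% A lesion arriving a time s before replication escapes repair with
% probability e^{-\rho s}, so the expected lesion-derived mutations per
% division are u \int_0^{\tau} e^{-\rho s}\,ds = (u/\rho)(1 - e^{-\rho\tau}).
% With mu replicative errors per division and d(t) divisions by age t:
\[
  \mathbb{E}[M(t)] \;\approx\; \mu\, d(t)
  \;+\; \frac{u}{\rho}\bigl(1 - e^{-\rho\tau}\bigr)\, d(t).
\]
% Efficient repair (\rho\tau \gg 1): the lesion term is ~ (u/\rho)\, d(t),
% so mutations track cell divisions. Inefficient repair (\rho\tau \ll 1):
% it is ~ u\tau\, d(t) = u t, so mutations accrue with absolute time,
% matching the dichotomy argued in the abstract.
```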
1108.4951
Areejit Samal
Areejit Samal, Andreas Wagner and Olivier C. Martin
Environmental versatility promotes modularity in genome-scale metabolic networks
34 pages, 4 main figures, 7 additional figures, 2 additional tables
BMC Systems Biology, 5:135 (2011)
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ubiquity of modules in biological networks may result from an evolutionary benefit of a modular organization. For instance, modularity may increase the rate of adaptive evolution, because modules can be easily combined into new arrangements that may benefit their carrier. Conversely, modularity may emerge as a by-product of some trait. We here ask whether this last scenario may play a role in genome-scale metabolic networks that need to sustain life in one or more chemical environments. For such networks, we define a network module as a maximal set of reactions that are fully coupled, i.e., whose fluxes can only vary in fixed proportions. This definition overcomes limitations of purely graph based analyses of metabolism by exploiting the functional links between reactions. We call a metabolic network viable in a given chemical environment if it can synthesize all of an organism's biomass compounds from nutrients in this environment. An organism's metabolism is highly versatile if it can sustain life in many different chemical environments. We here ask whether versatility affects the modularity of metabolic networks.
[ { "created": "Wed, 24 Aug 2011 20:50:27 GMT", "version": "v1" } ]
2011-08-26
[ [ "Samal", "Areejit", "" ], [ "Wagner", "Andreas", "" ], [ "Martin", "Olivier C.", "" ] ]
The ubiquity of modules in biological networks may result from an evolutionary benefit of a modular organization. For instance, modularity may increase the rate of adaptive evolution, because modules can be easily combined into new arrangements that may benefit their carrier. Conversely, modularity may emerge as a by-product of some trait. We here ask whether this last scenario may play a role in genome-scale metabolic networks that need to sustain life in one or more chemical environments. For such networks, we define a network module as a maximal set of reactions that are fully coupled, i.e., whose fluxes can only vary in fixed proportions. This definition overcomes limitations of purely graph based analyses of metabolism by exploiting the functional links between reactions. We call a metabolic network viable in a given chemical environment if it can synthesize all of an organism's biomass compounds from nutrients in this environment. An organism's metabolism is highly versatile if it can sustain life in many different chemical environments. We here ask whether versatility affects the modularity of metabolic networks.
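The definition of a module as a maximal set of fully coupled reactions can be illustrated on a toy stoichiometric matrix. The paper's analysis presumably relies on dedicated flux-coupling algorithms; the sampling heuristic below (proportionality of fluxes across random steady-state samples) is only a sketch of the concept.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix (metabolites x reactions). The linear step through
# metabolite A forces reactions R0 and R1 to carry identical flux: fully coupled.
S = np.array([
    [1, -1,  0,  0],   # metabolite A: produced by R0, consumed by R1
    [0,  1, -1, -1],   # metabolite B: produced by R1, consumed by R2 and R3
])
K = null_space(S)      # basis of the steady-state flux space
samples = K @ np.random.default_rng(6).normal(size=(K.shape[1], 200))

def fully_coupled(i, j, tol=1e-9):
    """Fluxes i and j vary only in fixed proportion across steady states."""
    return abs(abs(np.corrcoef(samples[i], samples[j])[0, 1]) - 1.0) < tol

n_rxn = S.shape[1]
for i in range(n_rxn):
    for j in range(i + 1, n_rxn):
        if fully_coupled(i, j):
            print(f"R{i} and R{j} are fully coupled")
```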
1910.09738
Ali Madani
Ali Madani, Cyna Shirazinejad, Jia Rui Ong, Hengameh Shams, Mohammad Mofrad
ProDyn0: Inferring calponin homology domain stretching behavior using graph neural networks
8 pages, 2 figures, 2 tables
ICLR 2019: Representation learning on graphs and manifolds
null
null
q-bio.QM cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Graph neural networks are a quickly emerging field for non-Euclidean data that leverage the inherent graphical structure to predict node, edge, and global-level properties of a system. Protein properties cannot easily be understood as a simple sum of their parts (i.e. amino acids); therefore, understanding their dynamical properties in the context of graphs is attractive for revealing how perturbations to their structure can affect their global function. To tackle this problem, we generate a database of 2020 mutated calponin homology (CH) domains undergoing large-scale separation in molecular dynamics. To predict the mechanosensitive force response, we develop neural message passing networks and residual gated graph convnets, which predict the protein-dependent force separation at 86.63 percent accuracy for force mode classification, 81.59 kJ/mol/nm MAE for max force magnitude, and 76.99 psec MAE for max force time, significantly better than non-graph-based deep learning techniques. Towards uniting geometric learning techniques and biophysical observables, we premiere our simulation database as a benchmark dataset for further development/evaluation of graph neural network architectures.
[ { "created": "Tue, 22 Oct 2019 02:42:58 GMT", "version": "v1" } ]
2019-10-23
[ [ "Madani", "Ali", "" ], [ "Shirazinejad", "Cyna", "" ], [ "Ong", "Jia Rui", "" ], [ "Shams", "Hengameh", "" ], [ "Mofrad", "Mohammad", "" ] ]
Graph neural networks are a quickly emerging field for non-Euclidean data that leverage the inherent graphical structure to predict node, edge, and global-level properties of a system. Protein properties cannot easily be understood as a simple sum of their parts (i.e. amino acids); therefore, understanding their dynamical properties in the context of graphs is attractive for revealing how perturbations to their structure can affect their global function. To tackle this problem, we generate a database of 2020 mutated calponin homology (CH) domains undergoing large-scale separation in molecular dynamics. To predict the mechanosensitive force response, we develop neural message passing networks and residual gated graph convnets, which predict the protein-dependent force separation at 86.63 percent accuracy for force mode classification, 81.59 kJ/mol/nm MAE for max force magnitude, and 76.99 psec MAE for max force time, significantly better than non-graph-based deep learning techniques. Towards uniting geometric learning techniques and biophysical observables, we premiere our simulation database as a benchmark dataset for further development/evaluation of graph neural network architectures.
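A bare-bones numpy sketch of one neural message-passing round with a pooled graph-level readout, the general architecture family named above. Feature widths, the random contact map, and the untrained weights are all placeholders; the paper's actual models are trained networks with their own architectures.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 12, 8                          # residues, feature width (placeholders)
X = rng.normal(size=(n, d))           # node features
A = rng.random((n, n)) < 0.3          # random stand-in for a contact map
A = (np.triu(A, 1) | np.triu(A, 1).T).astype(float)   # symmetric, no self-loops

W_msg = rng.normal(size=(d, d)) * 0.1          # untrained weights, shared
W_upd = rng.normal(size=(2 * d, d)) * 0.1      # across both rounds below
W_out = rng.normal(size=(d, 1)) * 0.1

def mp_layer(X, A):
    msgs = A @ np.maximum(X @ W_msg, 0)                 # sum over neighbours
    return np.maximum(np.hstack([X, msgs]) @ W_upd, 0)  # update keeps a skip input

H = mp_layer(mp_layer(X, A), A)       # two message-passing rounds
y_hat = H.mean(axis=0) @ W_out        # mean-pool readout -> graph-level scalar
print("graph-level prediction:", float(y_hat))
```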
2102.05452
Nitai Bar
Nitai Bar, Jonathan A. Sobel, Thomas Penzel, Yosi Shamay, Joachim A. Behar
From sleep medicine to medicine during sleep: A clinical perspective
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sleep has a profound influence on the physiology of body systems and biological processes. Molecular studies have shown circadian-regulated shifts in protein expression patterns across human tissues, further emphasizing the unique functional, behavioral and pharmacokinetic landscape of sleep. Thus, many pathological processes are also expected to exhibit sleep-specific manifestations. Nevertheless, sleep is seldom utilized for the study, detection and treatment of non-sleep-specific pathologies. Modern advances in biosensor technologies have enabled remote, non-invasive recording of a growing number of physiologic parameters and biomarkers. Sleep is an ideal time frame for the collection of long and clean physiological time series data which can then be analyzed using data-driven algorithms such as deep learning. In this perspective paper, we aim to highlight the potential of sleep as an auspicious time for diagnosis, management and therapy of non-sleep-specific pathologies. We introduce key clinical studies in selected medical fields, which leveraged novel technologies and the advantageous period of sleep to diagnose, monitor and treat pathologies. We then discuss possible opportunities to further harness this new paradigm and modern technologies to explore human health and disease during sleep and to advance the development of novel clinical applications: From sleep medicine to medicine during sleep.
[ { "created": "Tue, 9 Feb 2021 18:45:42 GMT", "version": "v1" } ]
2021-02-11
[ [ "Bar", "Nitai", "" ], [ "Sobel", "Jonathan A.", "" ], [ "Penzel", "Thomas", "" ], [ "Shamay", "Yosi", "" ], [ "Behar", "Joachim A.", "" ] ]
Sleep has a profound influence on the physiology of body systems and biological processes. Molecular studies have shown circadian-regulated shifts in protein expression patterns across human tissues, further emphasizing the unique functional, behavioral and pharmacokinetic landscape of sleep. Thus, many pathological processes are also expected to exhibit sleep-specific manifestations. Nevertheless, sleep is seldom utilized for the study, detection and treatment of non-sleep-specific pathologies. Modern advances in biosensor technologies have enabled remote, non-invasive recording of a growing number of physiologic parameters and biomarkers. Sleep is an ideal time frame for the collection of long and clean physiological time series data which can then be analyzed using data-driven algorithms such as deep learning. In this perspective paper, we aim to highlight the potential of sleep as an auspicious time for diagnosis, management and therapy of non-sleep-specific pathologies. We introduce key clinical studies in selected medical fields, which leveraged novel technologies and the advantageous period of sleep to diagnose, monitor and treat pathologies. We then discuss possible opportunities to further harness this new paradigm and modern technologies to explore human health and disease during sleep and to advance the development of novel clinical applications: From sleep medicine to medicine during sleep.
2010.07162
Konstantinos Spiliotis
Konstantinos Spiliotis and Jens Starke and Denise Franz and Angelika Richter and Rüdiger Köhling
Deep brain stimulation for movement disorder treatment: Exploring frequency-dependent efficacy in a computational network model
40 pages, 16 figures
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large-scale computational model of the basal ganglia (BG) network is proposed to describe movement disorders and their treatment by deep brain stimulation (DBS). The model of this complex network considers four areas of the basal ganglia network: the subthalamic nucleus (STN) as target area of DBS, globus pallidus, both pars externa and pars interna (GPe-GPi), and the thalamus (THA). Parkinsonian conditions are simulated by assuming reduced dopaminergic input and correspondingly pronounced inhibitory or disinhibited projections to GPe and GPi. Macroscopic quantities can be derived which correlate closely to thalamic responses and hence motor programme fidelity. It can be demonstrated that, depending on different levels of striatal projections to the GPe and GPi, the dynamics of these macroscopic quantities switch from normal to parkinsonian conditions. Simulating DBS on the STN affects the dynamics of the entire network, increasing the thalamic activity to levels close to normal, while differing from both normal and parkinsonian dynamics. Using the mentioned macroscopic quantities, the model proposes optimal DBS frequency ranges above 130 Hz.
[ { "created": "Wed, 14 Oct 2020 15:28:27 GMT", "version": "v1" }, { "created": "Fri, 12 Mar 2021 14:57:39 GMT", "version": "v2" } ]
2021-03-15
[ [ "Spiliotis", "Konstantinos", "" ], [ "Starke", "Jens", "" ], [ "Franz", "Denise", "" ], [ "Richter", "Angelika", "" ], [ "Köhling", "Rüdiger", "" ] ]
A large-scale computational model of the basal ganglia (BG) network is proposed to describe movement disorders and their treatment by deep brain stimulation (DBS). The model of this complex network considers four areas of the basal ganglia network: the subthalamic nucleus (STN) as target area of DBS, globus pallidus, both pars externa and pars interna (GPe-GPi), and the thalamus (THA). Parkinsonian conditions are simulated by assuming reduced dopaminergic input and correspondingly pronounced inhibitory or disinhibited projections to GPe and GPi. Macroscopic quantities can be derived which correlate closely to thalamic responses and hence motor programme fidelity. It can be demonstrated that, depending on different levels of striatal projections to the GPe and GPi, the dynamics of these macroscopic quantities switch from normal to parkinsonian conditions. Simulating DBS on the STN affects the dynamics of the entire network, increasing the thalamic activity to levels close to normal, while differing from both normal and parkinsonian dynamics. Using the mentioned macroscopic quantities, the model proposes optimal DBS frequency ranges above 130 Hz.
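A four-population firing-rate caricature of the STN-GPe-GPi-thalamus loop with periodic DBS pulses to the STN. Connection signs, weights, and time constants are assumptions for illustration; the paper's model is a large-scale network, not this reduced system.

```python
import numpy as np

# Rate-model caricature of the loop; population order = [STN, GPe, GPi, THA].
W = np.array([
    [ 0.0, -1.0,  0.0, 0.0],   # STN <- inhibition from GPe
    [ 1.0, -0.5,  0.0, 0.0],   # GPe <- excitation from STN, self-inhibition
    [ 1.0, -1.0,  0.0, 0.0],   # GPi <- excitation from STN, inhibition from GPe
    [ 0.0,  0.0, -1.5, 0.0],   # THA <- inhibition from GPi
])
tau = np.array([6.0, 14.0, 14.0, 8.0])    # time constants (ms), assumed
drive = np.array([0.5, 0.2, 0.2, 1.0])    # tonic inputs, assumed
dt, n_steps = 0.1, 20000                  # 2 s of simulated time

def simulate(dbs_hz=0.0, amp=2.0):
    r, tha = np.zeros(4), 0.0
    for k in range(n_steps):
        I = W @ r + drive
        if dbs_hz and (k * dt) % (1000.0 / dbs_hz) < dt:
            I[0] += amp                    # brief stimulation pulse to the STN
        r += dt / tau * (-r + np.maximum(I, 0.0))
        tha += r[3]
    return tha / n_steps

for f in (0.0, 50.0, 130.0):
    print(f"DBS {f:5.0f} Hz -> mean thalamic rate {simulate(f):.3f}")
```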
1907.04319
EPTCS
Ozan Kahramanoğulları (University of Trento, Department of Mathematics)
On Quantitative Comparison of Chemical Reaction Network Models
In Proceedings HCVS/PERR 2019, arXiv:1907.03523
EPTCS 296, 2019, pp. 14-27
10.4204/EPTCS.296.5
null
q-bio.MN cs.DM cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chemical reaction networks (CRNs) provide a convenient language for modelling a broad variety of biological systems. These models are commonly studied with respect to the time series they generate in deterministic or stochastic simulations. Their dynamic behaviours are then analysed, often by using deterministic methods based on differential equations with a focus on the steady states. Here, we propose a method for comparing CRNs with respect to their behaviour in stochastic simulations. Our method is based on using the flux graphs that are delivered by stochastic simulations as abstract representations of their dynamic behaviour. This allows us to compare the behaviour of any two CRNs for any time interval, and define a notion of equivalence on them that overlaps with graph isomorphism at the lowest level of representation. The similarity between the compared CRNs can be quantified in terms of their distance. The results can then be used to refine the models or to replace a larger model with a smaller one that produces the same behaviour or vice versa.
[ { "created": "Tue, 9 Jul 2019 06:01:27 GMT", "version": "v1" } ]
2019-07-11
[ [ "Kahramanoğulları", "Ozan", "", "University of Trento, Department of\n Mathematics" ] ]
Chemical reaction networks (CRNs) provide a convenient language for modelling a broad variety of biological systems. These models are commonly studied with respect to the time series they generate in deterministic or stochastic simulations. Their dynamic behaviours are then analysed, often by using deterministic methods based on differential equations with a focus on the steady states. Here, we propose a method for comparing CRNs with respect to their behaviour in stochastic simulations. Our method is based on using the flux graphs that are delivered by stochastic simulations as abstract representations of their dynamic behaviour. This allows us to compare the behaviour of any two CRNs for any time interval, and define a notion of equivalence on them that overlaps with graph isomorphism at the lowest level of representation. The similarity between the compared CRNs can be quantified in terms of their distance. The results can then be used to refine the models or to replace a larger model with a smaller one that produces the same behaviour or vice versa.
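The flux-graph idea above can be approximated by counting how often each reaction channel fires in a stochastic (Gillespie) simulation and comparing normalised firing profiles. This is a crude stand-in for the paper's flux-graph construction and distance, meant only to show the workflow.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(8)

def firing_profile(reactions, x0, t_end):
    """Gillespie SSA; returns how often each channel fired, a crude stand-in
    for the flux-graph abstraction of a simulation run."""
    x, t, fired = dict(x0), 0.0, Counter()
    while t < t_end:
        props = [k * np.prod([x[s] for s in subs]) for subs, prods, k in reactions]
        total = float(sum(props))
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        i = rng.choice(len(reactions), p=np.array(props) / total)
        subs, prods, _ = reactions[i]
        for s in subs:
            x[s] -= 1
        for s in prods:
            x[s] += 1
        fired[i] += 1
    return fired

# Two CRNs with the same structure but scaled rates: compare normalised profiles.
crn_a = [(("A",), ("B",), 1.0), (("B",), ("A",), 1.0)]
crn_b = [(("A",), ("B",), 2.0), (("B",), ("A",), 2.0)]
pa = firing_profile(crn_a, {"A": 100, "B": 0}, 50.0)
pb = firing_profile(crn_b, {"A": 100, "B": 0}, 50.0)
na = np.array([pa[i] for i in range(2)]) / sum(pa.values())
nb = np.array([pb[i] for i in range(2)]) / sum(pb.values())
print("L1 distance between normalised profiles:", float(np.abs(na - nb).sum()))
```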
1309.2032
Subhadip Raychaudhuri
Subhadip Raychaudhuri, Somkanya C Das
Monte Carlo study elucidates the type 1/type 2 choice in apoptotic death signaling in normal and cancer cells
35 pages, 15 figures
Cells 2013 2:361-392
null
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apoptotic cell death is coordinated through two distinct (type 1 and type 2) intracellular signaling pathways. How the type 1/type 2 choice is made remains a fundamental problem in the biology of apoptosis and has implications for apoptosis-related diseases and therapy. We study the problem of type 1/type 2 choice in silico utilizing a kinetic Monte Carlo model of cell death signaling. Our results show that the type 1/type 2 choice is linked to deterministic versus stochastic cell death activation, elucidating a unique regulatory control of the apoptotic pathways. Consistent with previous findings, our results indicate that caspase 8 activation level is a key regulator of the choice between deterministic type 1 and stochastic type 2 pathways, irrespective of cell types. Expression levels of signaling molecules downstream also regulate the type 1/type 2 choice. A simplified model of DISC clustering elucidates the mechanism of increased active caspase 8 generation, and type 1 activation, in cancer cells having increased sensitivity to death receptor activation. We demonstrate that rapid deterministic activation of the type 1 pathway can selectively target those cancer cells, especially if XIAP is also inhibited, while inherent cell-to-cell variability would allow normal cells to stay protected.
[ { "created": "Mon, 9 Sep 2013 02:19:28 GMT", "version": "v1" } ]
2013-09-10
[ [ "Raychaudhuri", "Subhadip", "" ], [ "Das", "Somkanya C", "" ] ]
Apoptotic cell death is coordinated through two distinct (type 1 and type 2) intracellular signaling pathways. How the type 1/type 2 choice is made remains a fundamental problem in the biology of apoptosis and has implications for apoptosis-related diseases and therapy. We study the problem of type 1/type 2 choice in silico utilizing a kinetic Monte Carlo model of cell death signaling. Our results show that the type 1/type 2 choice is linked to deterministic versus stochastic cell death activation, elucidating a unique regulatory control of the apoptotic pathways. Consistent with previous findings, our results indicate that caspase 8 activation level is a key regulator of the choice between deterministic type 1 and stochastic type 2 pathways, irrespective of cell types. Expression levels of signaling molecules downstream also regulate the type 1/type 2 choice. A simplified model of DISC clustering elucidates the mechanism of increased active caspase 8 generation, and type 1 activation, in cancer cells having increased sensitivity to death receptor activation. We demonstrate that rapid deterministic activation of the type 1 pathway can selectively target those cancer cells, especially if XIAP is also inhibited, while inherent cell-to-cell variability would allow normal cells to stay protected.
1303.1904
Hugues Berry
Anne-Sophie Coquel (Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL, LIRIS), Jean-Pascal Jacob (MAP5), Maël Primet (MAP5), Alice Demarez (MAP5), Mariella Dimiccoli (MAP5), Thomas Julou (LPS), Lionel Moisan (MAP5), Ariel B. Lindner, Hugues Berry (Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL)
Localization of protein aggregation in Escherichia coli is governed by diffusion and nucleoid macromolecular crowding effect
PLoS Computational Biology (2013)
null
10.1371/journal.pcbi.1003038
null
q-bio.CB cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli) where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intracellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results show that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of "soft" intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli.
[ { "created": "Fri, 8 Mar 2013 07:53:49 GMT", "version": "v1" } ]
2015-06-15
[ [ "Coquel", "Anne-Sophie", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL,\n LIRIS" ], [ "Jacob", "Jean-Pascal", "", "MAP5" ], [ "Primet", "Maël", "", "MAP5" ], [ "Demarez", "Alice", "", "MAP5" ], [ "Dimiccoli", "Mariella", "", "MAP5" ], [ "Julou", "Thomas", "", "LPS" ], [ "Moisan", "Lionel", "", "MAP5" ], [ "Lindner", "Ariel B.", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL" ], [ "Berry", "Hugues", "", "Insa Lyon / INRIA Grenoble Rhône-Alpes / UCBL" ] ]
Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging of Escherichia coli (E. coli) where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intracellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results show that the aggregates passively diffuse within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, the aggregate displacements along the cell long axis are confined to a region that roughly corresponds to the nucleoid-free space in the cell pole, thus confirming the importance of increased macromolecular crowding in the nucleoids. We thus used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are sufficient and necessary to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates in the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of "soft" intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli.
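The single-particle-tracking analysis described above reduces to estimating a diffusion constant from the mean squared displacement and converting it to a hydrodynamic radius via Stokes-Einstein. The simulated track, frame rate, and effective viscosity below are assumed values, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(9)
kB, T_K = 1.380649e-23, 310.0      # J/K, body temperature
eta = 1.0                          # Pa*s, assumed effective cytoplasmic viscosity

# Simulate one 2D Brownian track, then recover D from the mean squared displacement
D_true = 1e-14                     # m^2/s, assumed aggregate-scale diffusivity
dt, n = 0.1, 2000                  # frame interval (s) and number of frames
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n, 2))
track = np.cumsum(steps, axis=0)

lags = np.arange(1, 21)
msd = np.array([np.mean(np.sum((track[l:] - track[:-l])**2, axis=1)) for l in lags])
D_hat = np.polyfit(lags * dt, msd, 1)[0] / 4.0      # MSD = 4 D t in two dimensions

r_hat = kB * T_K / (6 * np.pi * eta * D_hat)        # Stokes-Einstein radius
print(f"D_hat = {D_hat:.2e} m^2/s, implied radius = {r_hat * 1e9:.0f} nm")
```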
2306.11031
Emil Mallmin
Emil Mallmin, Arne Traulsen and Silvia De Monte
Chaotic turnover of rare and abundant species in a strongly interacting model community
15 pages, 7 figures
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in metagenomic methods have revealed an astonishing number and diversity of microbial lifeforms, most of which are rare relative to the most-abundant, dominant taxa. The ecological and evolutionary mechanisms that generate and sustain broadly observed microbial diversity patterns remain debated. One possibility is that complex interactions between numerous taxa are a main driver of the composition of a microbial community. Lotka-Volterra equations with disordered interactions between species offer a minimal yet rich modelling framework to investigate this hypothesis. We consider communities with strong, mostly competitive interactions, where species-rich coexistence equilibria are typically unstable. When species extinction is prevented by a small rate of immigration, one generically finds a sustained chaotic phase, where all species participate in a continuous turnover of who is rare and who is dominant. The distribution of rare species' abundances -- in a snapshot of the whole community, and for each species individually in time -- follows a distribution with a prominent power-law trend with exponent $\nu>1$. We formulate a focal-species model in terms of a logistic growth equation with coloured noise that reproduces dynamical features of the disordered Lotka-Volterra model. With its use, we discover that $\nu$ is mainly determined by three effective parameters of the dominant community, such as its timescale of turnover. Approximate proportionalities between the effective parameters constrain the variation of $\nu$ across the range of interaction statistics resulting in chaotic turnover. We discuss our findings in the context of marine plankton communities, where chaos, boom-bust dynamics, and a power-law abundance distribution have been observed.
[ { "created": "Mon, 19 Jun 2023 15:48:31 GMT", "version": "v1" } ]
2023-06-21
[ [ "Mallmin", "Emil", "" ], [ "Traulsen", "Arne", "" ], [ "De Monte", "Silvia", "" ] ]
Advances in metagenomic methods have revealed an astonishing number and diversity of microbial lifeforms, most of which are rare relative to the most-abundant, dominant taxa. The ecological and evolutionary mechanisms that generate and sustain broadly observed microbial diversity patterns remain debated. One possibility is that complex interactions between numerous taxa are a main driver of the composition of a microbial community. Lotka-Volterra equations with disordered interactions between species offer a minimal yet rich modelling framework to investigate this hypothesis. We consider communities with strong, mostly competitive interactions, where species-rich coexistence equilibria are typically unstable. When species extinction is prevented by a small rate of immigration, one generically finds a sustained chaotic phase, where all species participate in a continuous turnover of who is rare and who is dominant. The distribution of rare species' abundances -- in a snapshot of the whole community, and for each species individually in time -- follows a distribution with a prominent power-law trend with exponent $\nu>1$. We formulate a focal-species model in terms of a logistic growth equation with coloured noise that reproduces dynamical features of the disordered Lotka-Volterra model. With its use, we discover that $\nu$ is mainly determined by three effective parameters of the dominant community, such as its timescale of turnover. Approximate proportionalities between the effective parameters constrain the variation of $\nu$ across the range of interaction statistics resulting in chaotic turnover. We discuss our findings in the context of marine plankton communities, where chaos, boom-bust dynamics, and a power-law abundance distribution have been observed.
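To make the dynamical setting concrete, here is a minimal Python sketch (an editorial illustration, not the authors' code; the species count, interaction statistics, immigration rate and the plain Euler stepping are all assumptions chosen for display) of disordered Lotka-Volterra dynamics with strong, mostly competitive couplings and an immigration floor.

import numpy as np

rng = np.random.default_rng(0)
S, lam = 50, 1e-8                     # species count, immigration (assumed)
A = rng.normal(0.5, 0.3, (S, S))      # strong, mostly competitive couplings
np.fill_diagonal(A, 0.0)              # self-limitation sits in the -x_i term

x, dt = rng.uniform(0.05, 0.5, S), 0.01
for _ in range(100_000):              # plain Euler stepping, t = 0 .. 1000
    x += dt * (x * (1.0 - x - A @ x) + lam)
    np.maximum(x, lam, out=x)         # immigration floor prevents extinction

print("dominant species (x > 1e-2):", int((x > 1e-2).sum()), "of", S)
print("five rarest abundances:", np.sort(x)[:5])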
1703.04342
Daihai He
Alice P.Y. Chiu, Duo Yu, Jonathan Dushoff and Daihai He
Patterns of Influenza Vaccination Coverage in the United States from 2009 to 2015
10 pages, 2 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Background: Globally, influenza is a major cause of morbidity, hospitalization and mortality. Influenza vaccination has shown substantial protective effectiveness in the United States. We investigated state-level patterns of coverage rates of seasonal and pandemic influenza vaccination, among the overall population in the U.S. and specifically among children and the elderly, from 2009/10 to 2014/15, and associations with ecological factors. Methods and Findings: We obtained state-level influenza vaccination coverage rates from national surveys, and state-level socio-demographic and health data from a variety of sources. We employed a retrospective ecological study design, and used mixed-model regression to determine the levels of ecological association of the state-level vaccination rates with these factors, both with and without region as a factor, for the three populations. We found that health-care access is positively and significantly associated with mean influenza vaccination coverage rates across all populations and models. We also found that the prevalence of asthma in adults is negatively and significantly associated with mean influenza vaccination coverage rates in the elderly population. Conclusions: Health-care access has a robust, positive association with state-level vaccination rates across different populations. This highlights a potential population-level advantage of expanding health-care access.
[ { "created": "Mon, 13 Mar 2017 11:40:53 GMT", "version": "v1" } ]
2017-03-14
[ [ "Chiu", "Alice P. Y.", "" ], [ "Yu", "Duo", "" ], [ "Dushoff", "Jonathan", "" ], [ "He", "Daihai", "" ] ]
Background: Globally, influenza is a major cause of morbidity, hospitalization and mortality. Influenza vaccination has shown substantial protective effectiveness in the United States. We investigated state-level patterns of coverage rates of seasonal and pandemic influenza vaccination, among the overall population in the U.S. and specifically among children and the elderly, from 2009/10 to 2014/15, and associations with ecological factors. Methods and Findings: We obtained state-level influenza vaccination coverage rates from national surveys, and state-level socio-demographic and health data from a variety of sources. We employed a retrospective ecological study design, and used mixed-model regression to determine the levels of ecological association of the state-level vaccination rates with these factors, both with and without region as a factor, for the three populations. We found that health-care access is positively and significantly associated with mean influenza vaccination coverage rates across all populations and models. We also found that the prevalence of asthma in adults is negatively and significantly associated with mean influenza vaccination coverage rates in the elderly population. Conclusions: Health-care access has a robust, positive association with state-level vaccination rates across different populations. This highlights a potential population-level advantage of expanding health-care access.
2007.01043
Francesco Di Lauro Mr
Francesco Di Lauro, Jean-Charles Croix, Luc Berthouze and Istv\'an Kiss
PDE-limits of stochastic SIS epidemics on networks
16 pages, 7 figures, code available online
null
null
null
q-bio.PE math.PR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stochastic epidemic models on networks are inherently high-dimensional and the resulting exact models are numerically intractable even for modest network sizes. Mean-field models provide an alternative but can only capture average quantities, thus offering little or no information about variability in the outcome of the exact process. In this paper we conjecture, and verify numerically, that it is possible to construct PDE-limits of the exact stochastic SIS epidemics on regular and Erd\H{o}s-R\'enyi networks. To do this we first approximate the exact stochastic process at population level by a Birth-and-Death process (BD) (with a state space of $O(N)$ rather than $O(2^N)$) whose coefficients are determined numerically from Gillespie simulations of the exact epidemic on explicit networks. We numerically demonstrate that the coefficients of the resulting BD process are density-dependent, a crucial condition for the existence of a PDE limit. Extensive numerical tests for regular and Erd\H{o}s-R\'enyi networks show excellent agreement between the outcome of simulations and the numerical solution of the Fokker-Planck equations. Apart from a significant reduction in dimensionality, the PDE also provides the means to derive the epidemic outbreak threshold linking network and disease dynamics parameters, albeit in an implicit way. Perhaps more importantly, it enables the formulation and numerical evaluation of likelihoods for epidemic and network inference, as illustrated in a worked-out example.
[ { "created": "Thu, 2 Jul 2020 12:01:14 GMT", "version": "v1" } ]
2020-07-03
[ [ "Di Lauro", "Francesco", "" ], [ "Croix", "Jean-Charles", "" ], [ "Berthouze", "Luc", "" ], [ "Kiss", "István", "" ] ]
Stochastic epidemic models on networks are inherently high-dimensional and the resulting exact models are numerically intractable even for modest network sizes. Mean-field models provide an alternative but can only capture average quantities, thus offering little or no information about variability in the outcome of the exact process. In this paper we conjecture, and verify numerically, that it is possible to construct PDE-limits of the exact stochastic SIS epidemics on regular and Erd\H{o}s-R\'enyi networks. To do this we first approximate the exact stochastic process at population level by a Birth-and-Death process (BD) (with a state space of $O(N)$ rather than $O(2^N)$) whose coefficients are determined numerically from Gillespie simulations of the exact epidemic on explicit networks. We numerically demonstrate that the coefficients of the resulting BD process are density-dependent, a crucial condition for the existence of a PDE limit. Extensive numerical tests for regular and Erd\H{o}s-R\'enyi networks show excellent agreement between the outcome of simulations and the numerical solution of the Fokker-Planck equations. Apart from a significant reduction in dimensionality, the PDE also provides the means to derive the epidemic outbreak threshold linking network and disease dynamics parameters, albeit in an implicit way. Perhaps more importantly, it enables the formulation and numerical evaluation of likelihoods for epidemic and network inference, as illustrated in a worked-out example.
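As an editorial illustration of the construction described above (not the paper's code; the network size, SIS rates and stopping time below are assumed values), the following sketch runs a Gillespie simulation of SIS dynamics on an Erd\H{o}s-R\'enyi graph and records the mean number of S-I edges as a function of the number infected k, the quantity that determines the birth rate of the approximating Birth-and-Death process.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N, p, tau, gamma = 200, 0.05, 0.3, 1.0      # network and rate assumptions
G = nx.fast_gnp_random_graph(N, p, seed=1)
nbrs = [list(G.neighbors(i)) for i in range(N)]

infected = set(rng.choice(N, 10, replace=False))
si_by_k, t = {}, 0.0
while infected and t < 50.0:
    k = len(infected)
    si_edges = [(i, j) for i in infected for j in nbrs[i] if j not in infected]
    si_by_k.setdefault(k, []).append(len(si_edges))
    rate_inf, rate_rec = tau * len(si_edges), gamma * k
    total = rate_inf + rate_rec
    t += rng.exponential(1.0 / total)
    if rng.random() < rate_inf / total:            # infection along an S-I edge
        infected.add(si_edges[rng.integers(len(si_edges))][1])
    else:                                          # recovery of a random case
        infected.remove(list(infected)[rng.integers(k)])

for k in sorted(si_by_k)[:5]:                      # birth rate a_k = tau * E[SI | k]
    print("k =", k, " mean S-I edges:", np.mean(si_by_k[k]))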
1812.11290
Vahe Galstyan Mr.
Vahe Galstyan, Luke Funk, Tal Einav, Rob Phillips
Combinatorial Control through Allostery
null
null
null
null
q-bio.BM physics.bio-ph physics.chem-ph
http://creativecommons.org/licenses/by/4.0/
Many instances of cellular signaling and transcriptional regulation involve switch-like molecular responses to the presence or absence of input ligands. To understand how these responses come about and how they can be harnessed, we develop a statistical mechanical model to characterize the types of Boolean logic that can arise from allosteric molecules following the Monod-Wyman-Changeux (MWC) model. Building upon previous work, we show how an allosteric molecule regulated by two inputs can elicit AND, OR, NAND and NOR responses, but is unable to realize XOR or XNOR gates. Next, we demonstrate the ability of an MWC molecule to perform ratiometric sensing - a response behavior where activity depends monotonically on the ratio of ligand concentrations. We then extend our analysis to more general schemes of combinatorial control involving either additional binding sites for the two ligands or an additional third ligand and show how these additions can cause a switch in the logic behavior of the molecule. Overall, our results demonstrate the wide variety of control schemes that biological systems can implement using simple mechanisms.
[ { "created": "Sat, 29 Dec 2018 05:43:48 GMT", "version": "v1" } ]
2019-01-01
[ [ "Galstyan", "Vahe", "" ], [ "Funk", "Luke", "" ], [ "Einav", "Tal", "" ], [ "Phillips", "Rob", "" ] ]
Many instances of cellular signaling and transcriptional regulation involve switch-like molecular responses to the presence or absence of input ligands. To understand how these responses come about and how they can be harnessed, we develop a statistical mechanical model to characterize the types of Boolean logic that can arise from allosteric molecules following the Monod-Wyman-Changeux (MWC) model. Building upon previous work, we show how an allosteric molecule regulated by two inputs can elicit AND, OR, NAND and NOR responses, but is unable to realize XOR or XNOR gates. Next, we demonstrate the ability of an MWC molecule to perform ratiometric sensing - a response behavior where activity depends monotonically on the ratio of ligand concentrations. We then extend our analysis to more general schemes of combinatorial control involving either additional binding sites for the two ligands or an additional third ligand and show how these additions can cause a switch in the logic behavior of the molecule. Overall, our results demonstrate the wide variety of control schemes that biological systems can implement using simple mechanisms.
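The MWC activity underlying these logic computations admits a compact closed form; the sketch below (standard MWC algebra, with illustrative parameter values that are not taken from the paper) evaluates the active-state probability for two ligands with one binding site each and prints an AND-like truth table.

import numpy as np

def mwc_activity(c1, c2, KA=(1.0, 1.0), KI=(1e3, 1e3), e_AI=10.0):
    # Probability of the active state at ligand concentrations c1, c2.
    # KA/KI: dissociation constants in the active/inactive state; e_AI:
    # free-energy bias (in kT) toward the inactive state when unbound.
    act = (1 + c1 / KA[0]) * (1 + c2 / KA[1])
    ina = np.exp(e_AI) * (1 + c1 / KI[0]) * (1 + c2 / KI[1])
    return act / (act + ina)

for c1 in (0.0, 1e4):                # absent vs saturating ligand 1
    for c2 in (0.0, 1e4):            # absent vs saturating ligand 2
        print(int(c1 > 0), int(c2 > 0), round(mwc_activity(c1, c2), 3))
# Activity is high only when both ligands are present: an AND gate.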
0706.1908
Jason Locasale W
Jason W. Locasale
Computational investigations into the origins of 'short-term' biochemical memory in T cell activation
11 pages, published July 18th 2007
Locasale JW (2007) Computational Investigations into the Origins of Short-Term Biochemical Memory in T cell Activation. PLoS ONE 2(7): e627
10.1371/journal.pone.0000627
null
q-bio.MN physics.bio-ph q-bio.CB q-bio.SC
null
Recent studies have reported that T cells can integrate signals between interrupted encounters with Antigen Presenting Cells (APCs) in such a way that the process of signal integration exhibits a form of memory. Here, we carry out a computational study using a simple mathematical model of T cell activation to investigate the ramifications of interrupted T cell-APC contacts on signal integration. We consider several mechanisms of how signal integration at these time scales may be achieved and conclude that feedback control of immediate early gene products (IEGs) appears to be a highly plausible mechanism that allows for effective signal integration and cytokine production from multiple exposures to APCs. Analysis of these computer simulations provides an experimental roadmap involving several testable predictions.
[ { "created": "Wed, 13 Jun 2007 14:10:12 GMT", "version": "v1" }, { "created": "Fri, 15 Jun 2007 16:28:17 GMT", "version": "v2" }, { "created": "Wed, 18 Jul 2007 18:23:23 GMT", "version": "v3" } ]
2007-07-18
[ [ "Locasale", "Jason W.", "" ] ]
Recent studies have reported that T cells can integrate signals between interrupted encounters with Antigen Presenting Cells (APCs) in such a way that the process of signal integration exhibits a form of memory. Here, we carry out a computational study using a simple mathematical model of T cell activation to investigate the ramifications of interrupted T cell-APC contacts on signal integration. We consider several mechanisms of how signal integration at these time scales may be achieved and conclude that feedback control of immediate early gene products (IEGs) appears to be a highly plausible mechanism that allows for effective signal integration and cytokine production from multiple exposures to APCs. Analysis of these computer simulations provides an experimental roadmap involving several testable predictions.
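A toy illustration of the feedback idea (invented rates, deliberately much simpler than the paper's model): an immediate-early-gene-like species with slow decay carries signal across the gap between two stimulus pulses, so the downstream response integrates interrupted contacts.

import numpy as np
from scipy.integrate import solve_ivp

def stim(t):                        # two APC-contact pulses with a gap
    return 1.0 if (0.0 <= t < 10.0) or (20.0 <= t < 30.0) else 0.0

def rhs(t, y):
    g, r = y                        # g: IEG-like product, r: downstream response
    dg = stim(t) - 0.05 * g         # slow decay => memory across the gap
    dr = 0.5 * g - 0.1 * r
    return [dg, dr]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0],
                t_eval=[10.0, 20.0, 30.0, 60.0],
                max_step=0.5)       # small steps so the pulses are resolved
print(np.round(sol.y.T, 2))         # g persists between pulses; r integrates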
q-bio/0401044
Lorenzo Farina
Lorenzo Farina, and Ilaria Mogno
A Fast Reconstruction Algorithm for Gene Networks
12 pages, 3 figures
null
null
null
q-bio.QM q-bio.GN
null
This paper deals with gene networks whose dynamics is assumed to be generated by a continuous-time, linear, time-invariant (LTI), finite-dimensional system at steady state. In particular, we deal with the problem of network reconstruction in the typical practical situation in which the available data are largely insufficient to uniquely determine the network. In order to remove this ambiguity, we exploit the biologically motivated a priori assumption of network sparseness, and propose a new algorithm for network reconstruction with very low computational complexity (linear in the number of genes), so that it can also handle very large networks (say, thousands of genes). Its performance is tested both on artificial data (generated with linear models) and on real data obtained by Gardner et al. from the SOS pathway in Escherichia coli.
[ { "created": "Fri, 30 Jan 2004 18:55:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Farina", "Lorenzo", "" ], [ "Mogno", "Ilaria", "" ] ]
This paper deals with gene networks whose dynamics is assumed to be generated by a continuous-time, linear, time-invariant (LTI), finite-dimensional system at steady state. In particular, we deal with the problem of network reconstruction in the typical practical situation in which the available data are largely insufficient to uniquely determine the network. In order to remove this ambiguity, we exploit the biologically motivated a priori assumption of network sparseness, and propose a new algorithm for network reconstruction with very low computational complexity (linear in the number of genes), so that it can also handle very large networks (say, thousands of genes). Its performance is tested both on artificial data (generated with linear models) and on real data obtained by Gardner et al. from the SOS pathway in Escherichia coli.
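The problem setting can be sketched in a few lines. Note that the paper proposes its own reconstruction algorithm with complexity linear in the number of genes; the sketch below uses off-the-shelf lasso regression purely as a stand-in for the sparsity-exploiting recovery step, with invented sizes and rates. At steady state A x_k + u_k = 0 for each known perturbation u_k, so each row of A solves an underdetermined sparse linear regression.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, m, k = 50, 20, 3            # genes, experiments, links per gene (assumed)
A = np.zeros((n, n))
for i in range(n):             # sparse random network
    A[i, rng.choice(n, k, replace=False)] = rng.normal(0, 0.3, k)
np.fill_diagonal(A, -1.0)      # self-degradation

U = rng.normal(0, 1, (n, m))                  # known perturbations
X = np.linalg.solve(A, -U)                    # steady states satisfy A X = -U

A_hat = np.zeros_like(A)
for i in range(n):             # recover each row of A independently
    fit = Lasso(alpha=0.05, fit_intercept=False, max_iter=50000)
    fit.fit(X.T, -U[i])
    A_hat[i] = fit.coef_

support = np.abs(A) > 1e-9
recovered = np.abs(A_hat) > 1e-2
print("true links found:", int((support & recovered).sum()), "/", int(support.sum()))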
1701.03164
Danielle Oliveira Costa Santos Dr.
A.J. da Silva, S. Floquet, D.O.C. Santos
Statistical crossover and nonextensive behavior of the neuronal short-term depression
13 pages, 4 figures, J Biol Phys (2017)
null
10.1007/s10867-017-9474-3
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theoretical basis of neuronal coding, associated with short-term degradation in synaptic transmission, is a matter of debate in the literature. In fact, electrophysiological signals are commonly characterized as inversely proportional to stimulus intensity. Among theoretical descriptions of this phenomenon, models based on $1/f$-dependency are employed to investigate the biophysical properties of short-term synaptic depression. In this work we formulated a model based on a paradigmatic \textit{q}-differential equation to obtain a generalized formalism useful for investigating nonextensivity in this specific type of synaptic plasticity. Our analysis reveals nonextensivity in data from electrophysiological recordings and also a statistical crossover in neurotransmission. In particular, the statistical transitions provide additional support to the hypothesis of heterogeneous release probability of neurotransmitters. On the other hand, the simple vesicle model agrees with data only at low-frequency stimulation. Thus, the present work provides a method to demonstrate that short-term depression is governed not only by random mechanisms but also by a nonextensive behavior. Our findings also reconcile morphological and electrophysiological investigations into a coherent biophysical scenario.
[ { "created": "Tue, 27 Dec 2016 13:16:06 GMT", "version": "v1" }, { "created": "Sun, 5 Feb 2017 12:09:22 GMT", "version": "v2" }, { "created": "Tue, 14 Nov 2017 21:36:15 GMT", "version": "v3" }, { "created": "Fri, 17 Nov 2017 21:48:46 GMT", "version": "v4" } ]
2017-11-21
[ [ "da Silva", "A. J.", "" ], [ "Floquet", "S.", "" ], [ "Santos", "D. O. C.", "" ] ]
The theoretical basis of neuronal coding, associated with short-term degradation in synaptic transmission, is a matter of debate in the literature. In fact, electrophysiological signals are commonly characterized as inversely proportional to stimulus intensity. Among theoretical descriptions of this phenomenon, models based on $1/f$-dependency are employed to investigate the biophysical properties of short-term synaptic depression. In this work we formulated a model based on a paradigmatic \textit{q}-differential equation to obtain a generalized formalism useful for investigating nonextensivity in this specific type of synaptic plasticity. Our analysis reveals nonextensivity in data from electrophysiological recordings and also a statistical crossover in neurotransmission. In particular, the statistical transitions provide additional support to the hypothesis of heterogeneous release probability of neurotransmitters. On the other hand, the simple vesicle model agrees with data only at low-frequency stimulation. Thus, the present work provides a method to demonstrate that short-term depression is governed not only by random mechanisms but also by a nonextensive behavior. Our findings also reconcile morphological and electrophysiological investigations into a coherent biophysical scenario.
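For reference, the paradigmatic q-differential equation dy/dx = -lam*y^q has the q-exponential as its solution. The sketch below (standard Tsallis algebra, with arbitrary illustrative lam and q, not values fitted to any recordings) shows the crossover from exponential decay at q = 1 to a power-law tail for q > 1.

import numpy as np

def q_exp(x, lam=1.0, q=1.5):
    # Solves dy/dx = -lam * y**q with y(0) = 1, for q >= 1 and x >= 0.
    if abs(q - 1.0) < 1e-12:
        return np.exp(-lam * x)
    base = 1.0 + (q - 1.0) * lam * x      # positive for q > 1 and x >= 0
    return base ** (-1.0 / (q - 1.0))     # power-law tail, exponent 1/(q-1)

x = np.array([0.1, 1.0, 10.0, 100.0])
print("q=1.0:", q_exp(x, q=1.0))          # pure exponential decay
print("q=1.5:", q_exp(x, q=1.5))          # crossover to a ~x**(-2) tail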
q-bio/0511015
Sandra Kanani
Sandra Kanani, Alain Pumir, Valentine Krinsky
Genetically engineered cardiac pacemaker: stem cells transfected with HCN2 gene and myocytes - a model
null
null
null
null
q-bio.CB
null
Artificial biological pacemakers were developed and tested in canine ventricles. Next steps will require obtaining oscillations that are sensitive to external regulation and robust with respect to long-term drifts in the expression levels of pacemaker currents and gap junctions. We introduce mathematical models intended to be used in parallel with the experiments. The models describe human mesenchymal stem cells ({\it hMSC}) transfected with HCN2 genes and connected to myocytes. They are intended to mimic experiments on oscillation induction in a cell pair, in cell culture and in cardiac tissue. We give examples of oscillations in a cell pair and in a one-dimensional cell culture, and of the dependence of oscillations on the number of pacemaker channels per cell and the number of gap junctions. The models make it possible to mimic experiments with levels of gene expression not yet achieved, and to predict whether the work to achieve these levels will significantly increase the quality of oscillations. This provides arguments for selecting the directions of the experimental work.
[ { "created": "Mon, 14 Nov 2005 11:22:50 GMT", "version": "v1" }, { "created": "Tue, 15 Nov 2005 10:14:21 GMT", "version": "v2" }, { "created": "Wed, 16 Nov 2005 15:16:58 GMT", "version": "v3" } ]
2007-05-23
[ [ "Kanani", "Sandra", "" ], [ "Pumir", "Alain", "" ], [ "Krinsky", "Valentine", "" ] ]
Artificial biological pacemakers were developed and tested in canine ventricles. Next steps will require obtaining oscillations that are sensitive to external regulation and robust with respect to long-term drifts in the expression levels of pacemaker currents and gap junctions. We introduce mathematical models intended to be used in parallel with the experiments. The models describe human mesenchymal stem cells ({\it hMSC}) transfected with HCN2 genes and connected to myocytes. They are intended to mimic experiments on oscillation induction in a cell pair, in cell culture and in cardiac tissue. We give examples of oscillations in a cell pair and in a one-dimensional cell culture, and of the dependence of oscillations on the number of pacemaker channels per cell and the number of gap junctions. The models make it possible to mimic experiments with levels of gene expression not yet achieved, and to predict whether the work to achieve these levels will significantly increase the quality of oscillations. This provides arguments for selecting the directions of the experimental work.
0907.1159
Vasile Morariu
Vasile V. Morariu, Luiza Buimaga-Iarinca
Autoregressive Modeling of Coding Sequence Lengths in Bacterial Genome
11 pages, 5 figures, 2 tables
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous investigation of coding sequence lengths (CDS) in the bacterial circular chromosome revealed short-range correlation in the series of these data. We have further analyzed the averaged periodograms of these series and found that the organization of CDS can be well described by first-order autoregressive processes. This involves interaction between neighboring terms. Autoregressive analysis may have great potential in modeling various physical and biological processes, such as the light emission of galaxies, protein organization, cell flickering, cognitive processes and perhaps others.
[ { "created": "Tue, 7 Jul 2009 08:05:45 GMT", "version": "v1" } ]
2009-07-08
[ [ "Morariu", "Vasile V.", "" ], [ "Buimaga-Iarinca", "Luiza", "" ] ]
Previous investigation of coding sequence lengths (CDS) in the bacterial circular chromosome revealed short-range correlation in the series of these data. We have further analyzed the averaged periodograms of these series and found that the organization of CDS can be well described by first-order autoregressive processes. This involves interaction between neighboring terms. Autoregressive analysis may have great potential in modeling various physical and biological processes, such as the light emission of galaxies, protein organization, cell flickering, cognitive processes and perhaps others.
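A minimal illustration of the modelling step (synthetic data, not the genomic series analyzed in the paper): fit the AR(1) coefficient by the lag-1 autocorrelation and compare the periodogram with the spectral density implied by the fitted process.

import numpy as np

rng = np.random.default_rng(3)
n, phi_true = 4096, 0.4
x = np.zeros(n)
eps = rng.normal(0, 1, n)
for t in range(1, n):                       # stand-in for a CDS-length series
    x[t] = phi_true * x[t - 1] + eps[t]

xc = x - x.mean()
phi_hat = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)   # lag-1 autocorrelation
sigma2 = np.dot(xc, xc) / n * (1 - phi_hat ** 2)     # innovation variance

freqs = np.fft.rfftfreq(n)[1:]
pgram = np.abs(np.fft.rfft(xc)[1:]) ** 2 / n
spec = sigma2 / (1 - 2 * phi_hat * np.cos(2 * np.pi * freqs) + phi_hat ** 2)
print("estimated phi:", round(phi_hat, 3))
print("periodogram / AR(1)-spectrum mean ratio:", round((pgram / spec).mean(), 3))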
1705.08753
Rukhsan Ul Haq Wani
Rukhsan Ul Haq and Shalini Harkar
Quantum theory of time perception: phases, clocks and quantum algebra
8 pages, typos corrected, references added
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experience of time is one of the primordial human experiences and is deeply tied to human consciousness. But despite this intimate relation of time with human conscious experience, time has proved to be very elusive. Particularly in physics, though there is already some understanding of time, there are still many paradoxes that plague this understanding. In this paper we take a rather different route to the question of time. We first attempt to develop a theoretical understanding of time perception. Quite interestingly, we find that quantum theory provides an algebraic formulation within which we can understand some essential aspects of time perception by the human mind. We then ask whether a similar formalism can furnish an understanding of time itself, and find connections between our formulation and similar work by other researchers. Our underlying approach to the question of time has been inspired by W. R. Hamilton, who considered algebra to be the science of pure time. Hence our work has an extensive algebraic flavor. Our work also incorporates another approach, based on Kauffman's iterant algebra, which relates time to underlying recursions and oscillations. We believe that our work will initiate further investigations in this direction.
[ { "created": "Mon, 1 May 2017 13:08:10 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2017 14:54:38 GMT", "version": "v2" } ]
2017-06-16
[ [ "Haq", "Rukhsan Ul", "" ], [ "Harkar", "Shalini", "" ] ]
Experience of time is one of the primordial human experiences and is deeply tied to human consciousness. But despite this intimate relation of time with human conscious experience, time has proved to be very elusive. Particularly in physics, though there is already some understanding of time, there are still many paradoxes that plague this understanding. In this paper we take a rather different route to the question of time. We first attempt to develop a theoretical understanding of time perception. Quite interestingly, we find that quantum theory provides an algebraic formulation within which we can understand some essential aspects of time perception by the human mind. We then ask whether a similar formalism can furnish an understanding of time itself, and find connections between our formulation and similar work by other researchers. Our underlying approach to the question of time has been inspired by W. R. Hamilton, who considered algebra to be the science of pure time. Hence our work has an extensive algebraic flavor. Our work also incorporates another approach, based on Kauffman's iterant algebra, which relates time to underlying recursions and oscillations. We believe that our work will initiate further investigations in this direction.
0808.3622
Steven N. Evans
Kenneth W. Wachter and David R. Steinsaltz and Steven N. Evans
Vital rates from the action of mutation accumulation
17 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
New models for evolutionary processes of mutation accumulation allow hypotheses about the age-specificity of mutational effects to be translated into predictions of heterogeneous population hazard functions. We apply these models to questions in the biodemography of longevity, including proposed explanations of Gompertz hazards and mortality plateaus, and use them to explore the possibility of melding evolutionary and functional models of aging.
[ { "created": "Wed, 27 Aug 2008 03:27:24 GMT", "version": "v1" }, { "created": "Thu, 27 Aug 2009 18:00:11 GMT", "version": "v2" } ]
2009-08-27
[ [ "Wachter", "Kenneth W.", "" ], [ "Steinsaltz", "David R.", "" ], [ "Evans", "Steven N.", "" ] ]
New models for evolutionary processes of mutation accumulation allow hypotheses about the age-specificity of mutational effects to be translated into predictions of heterogeneous population hazard functions. We apply these models to questions in the biodemography of longevity, including proposed explanations of Gompertz hazards and mortality plateaus, and use them to explore the possibility of melding evolutionary and functional models of aging.
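One classic calculation behind such hazard predictions is the gamma-frailty/Gompertz composition, sketched below with illustrative parameters (this is the textbook construction, not the paper's specific evolutionary models): heterogeneity in a multiplicative frailty turns an exponentially rising individual hazard into a population hazard that plateaus.

import numpy as np

a, b, k, r = 1e-4, 0.1, 2.0, 2.0            # illustrative parameter values

def pop_hazard(t):
    # Gompertz baseline h0(t) = a*exp(b*t); frailty Z ~ Gamma(shape=k, rate=r)
    # gives population hazard k*h0(t)/(r + H0(t)), with H0 the cumulative hazard.
    H0 = (a / b) * (np.exp(b * t) - 1.0)
    return k * a * np.exp(b * t) / (r + H0)

for t in (0.0, 50.0, 100.0, 200.0, 400.0):
    print("t =", t, " hazard =", pop_hazard(t))
print("predicted plateau k*b =", k * b)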
1403.4636
Adam Cohen
Peng Zou (Harvard University), Yongxin Zhao (University of Alberta), Adam D. Douglass (University of Utah), Daniel R. Hochbaum (Harvard University), Daan Brinks (Harvard University), Christopher A. Werley (Harvard University), D. Jed Harrison (University of Alberta), Robert E. Campbell (University of Alberta), Adam E. Cohen (Harvard University)
Bright and fast voltage reporters across the visible spectrum via electrochromic FRET (eFRET)
* Denotes equal contribution. For correspondence regarding the library screen: robert.e.campbell@ualberta.ca; For other correspondence: cohen@chemistry.harvard.edu
null
null
null
q-bio.BM physics.bio-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a palette of brightly fluorescent genetically encoded voltage indicators (GEVIs) with excitation and emission peaks spanning the visible spectrum, sensitivities from 6 - 10% Delta F/F per 100 mV, and half-maximal response times from 1 - 7 ms. A fluorescent protein is fused to an Archaerhodopsin-derived voltage sensor. Voltage-induced shifts in the absorption spectrum of the rhodopsin lead to voltage-dependent nonradiative quenching of the appended fluorescent protein. Through a library screen, we identified linkers and fluorescent protein combinations which reported neuronal action potentials in cultured rat hippocampal neurons with a single-trial signal-to-noise ratio from 6.6 to 11.6 in a 1 kHz imaging bandwidth at modest illumination intensity. The freedom to choose a voltage indicator from an array of colors facilitates multicolor voltage imaging, as well as combination with other optical reporters and optogenetic actuators.
[ { "created": "Tue, 18 Mar 2014 22:39:50 GMT", "version": "v1" } ]
2014-03-20
[ [ "Zou", "Peng", "", "Harvard University" ], [ "Zhao", "Yongxin", "", "University of Alberta" ], [ "Douglass", "Adam D.", "", "University of Utah" ], [ "Hochbaum", "Daniel R.", "", "Harvard\n University" ], [ "Brinks", "Daan", "", "Harvard University" ], [ "Werley", "Christopher A.", "", "Harvard\n University" ], [ "Harrison", "D. Jed", "", "University of Alberta" ], [ "Campbell", "Robert E.", "", "University of Alberta" ], [ "Cohen", "Adam E.", "", "Harvard University" ] ]
We present a palette of brightly fluorescent genetically encoded voltage indicators (GEVIs) with excitation and emission peaks spanning the visible spectrum, sensitivities from 6 - 10% Delta F/F per 100 mV, and half-maximal response times from 1 - 7 ms. A fluorescent protein is fused to an Archaerhodopsin-derived voltage sensor. Voltage-induced shifts in the absorption spectrum of the rhodopsin lead to voltage-dependent nonradiative quenching of the appended fluorescent protein. Through a library screen, we identified linkers and fluorescent protein combinations which reported neuronal action potentials in cultured rat hippocampal neurons with a single-trial signal-to-noise ratio from 6.6 to 11.6 in a 1 kHz imaging bandwidth at modest illumination intensity. The freedom to choose a voltage indicator from an array of colors facilitates multicolor voltage imaging, as well as combination with other optical reporters and optogenetic actuators.
1705.03321
Hamid Reza Hassanzadeh
Hamid Reza Hassanzadeh, Pushkar Kolhe, Charles L. Isbell, May D. Wang
MotifMark: Finding Regulatory Motifs in DNA Sequences
null
null
null
null
q-bio.QM cs.LG q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interaction between proteins and DNA is a key driving force in a significant number of biological processes such as transcriptional regulation, repair, recombination, splicing, and DNA modification. The identification of DNA-binding sites and the specificity of target proteins in binding to these regions are two important steps in understanding the mechanisms of these biological activities. A number of high-throughput technologies have recently emerged that try to quantify the affinity between proteins and DNA motifs. Despite their success, these technologies have their own limitations and fall short of precisely characterizing motifs, and, as a result, require further downstream analysis to extract useful and interpretable information from a haystack of noisy and inaccurate data. Here we propose MotifMark, a new algorithm based on graph theory and machine learning, that can find binding sites on candidate probes and rank their specificity with regard to the underlying transcription factor. We developed a pipeline to analyze experimental data derived from compact universal protein binding microarrays and benchmarked it against two of the most accurate motif search methods. Our results indicate that MotifMark can be a viable alternative technique for motif prediction from protein binding microarrays and possibly other related high-throughput techniques.
[ { "created": "Thu, 4 May 2017 14:50:12 GMT", "version": "v1" } ]
2017-05-10
[ [ "Hassanzadeh", "Hamid Reza", "" ], [ "Kolhe", "Pushkar", "" ], [ "Isbell", "Charles L.", "" ], [ "Wang", "May D.", "" ] ]
The interaction between proteins and DNA is a key driving force in a significant number of biological processes such as transcriptional regulation, repair, recombination, splicing, and DNA modification. The identification of DNA-binding sites and the specificity of target proteins in binding to these regions are two important steps in understanding the mechanisms of these biological activities. A number of high-throughput technologies have recently emerged that try to quantify the affinity between proteins and DNA motifs. Despite their success, these technologies have their own limitations and fall short of precisely characterizing motifs, and, as a result, require further downstream analysis to extract useful and interpretable information from a haystack of noisy and inaccurate data. Here we propose MotifMark, a new algorithm based on graph theory and machine learning, that can find binding sites on candidate probes and rank their specificity with regard to the underlying transcription factor. We developed a pipeline to analyze experimental data derived from compact universal protein binding microarrays and benchmarked it against two of the most accurate motif search methods. Our results indicate that MotifMark can be a viable alternative technique for motif prediction from protein binding microarrays and possibly other related high-throughput techniques.
1401.0413
Emilio Hernandez-Garcia
Emilio Hernandez-Garcia, Els Heinsalu, Cristobal Lopez
Spatial patterns of competing random walkers
38 pages, including 6 figures
Ecological Complexity 21, 166-176 (2015)
10.1016/j.ecocom.2014.06.005
null
q-bio.PE cond-mat.stat-mech nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We review recent results obtained from simple individual-based models of biological competition in which the birth and death rates of an organism depend on the presence of other competing organisms close to it. In addition, the individuals perform random walks of different types (Gaussian diffusion and L\'{e}vy flights). We focus on how competition and random motion affect each other, and on the spatial instabilities and extinctions that arise from their interplay. Under suitable conditions, competitive interactions lead to clustering of individuals and periodic pattern formation. Random motion has a homogenizing effect and thus delays this clustering instability. When individuals from species differing in their random walk characteristics are allowed to compete together, the ones with a tendency to form narrower clusters gain a competitive advantage over the others. Mean-field deterministic equations are analyzed and compared with the outcome of the individual-based simulations.
[ { "created": "Thu, 2 Jan 2014 11:08:02 GMT", "version": "v1" }, { "created": "Fri, 23 May 2014 14:06:39 GMT", "version": "v2" } ]
2015-03-03
[ [ "Hernandez-Garcia", "Emilio", "" ], [ "Heinsalu", "Els", "" ], [ "Lopez", "Cristobal", "" ] ]
We review recent results obtained from simple individual-based models of biological competition in which the birth and death rates of an organism depend on the presence of other competing organisms close to it. In addition, the individuals perform random walks of different types (Gaussian diffusion and L\'{e}vy flights). We focus on how competition and random motion affect each other, and on the spatial instabilities and extinctions that arise from their interplay. Under suitable conditions, competitive interactions lead to clustering of individuals and periodic pattern formation. Random motion has a homogenizing effect and thus delays this clustering instability. When individuals from species differing in their random walk characteristics are allowed to compete together, the ones with a tendency to form narrower clusters gain a competitive advantage over the others. Mean-field deterministic equations are analyzed and compared with the outcome of the individual-based simulations.
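A minimal individual-based sketch of the competition ingredient (illustrative parameters, Brownian motion only, not the authors' code): walkers on a periodic one-dimensional domain whose death rate grows with the number of neighbours within a finite radius, the mechanism that can produce clustering.

import numpy as np

rng = np.random.default_rng(4)
L_dom, R = 1.0, 0.05                # periodic domain length, competition range
beta, alpha, comp = 1.0, 0.3, 0.02  # birth, baseline death, competition rates
D, dt, steps = 1e-5, 0.05, 2000     # diffusion constant, time step, step count

x = rng.uniform(0, L_dom, 100)
for _ in range(steps):
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, L_dom - d)                        # periodic distances
    nbr = (d < R).sum(axis=1) - 1                       # neighbours within R
    dead = rng.random(x.size) < dt * (alpha + comp * nbr)
    surv = x[~dead]
    births = surv[rng.random(surv.size) < dt * beta]    # offspring at parent site
    x = np.concatenate([surv, births])
    x = (x + rng.normal(0, np.sqrt(2 * D * dt), x.size)) % L_dom

hist, _ = np.histogram(x, bins=20, range=(0, L_dom))
print("population:", x.size, "| occupancy per bin:", hist)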
2301.01445
Evan Patterson
Rebekah Aduddell, James Fairbanks, Amit Kumar, Pablo S. Ocal, Evan Patterson, Brandon T. Shapiro
A compositional account of motifs, mechanisms, and dynamics in biochemical regulatory networks
Final version published in Compositionality
Compositionality, Volume 6 (2024) (May 13, 2024) compositionality:13637
10.32408/compositionality-6-2
null
q-bio.MN math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Regulatory networks depict promoting or inhibiting interactions between molecules in a biochemical system. We introduce a category-theoretic formalism for regulatory networks, using signed graphs to model the networks and signed functors to describe occurrences of one network in another, especially occurrences of network motifs. With this foundation, we establish functorial mappings between regulatory networks and other mathematical models in biochemistry. We construct a functor from reaction networks, modeled as Petri nets with signed links, to regulatory networks, enabling us to precisely define when a reaction network could be a physical mechanism underlying a regulatory network. Turning to quantitative models, we associate a regulatory network with a Lotka-Volterra system of differential equations, defining a functor from the category of signed graphs to a category of parameterized dynamical systems. We extend this result from closed to open systems, demonstrating that Lotka-Volterra dynamics respects not only inclusions and collapsings of regulatory networks, but also the process of building up complex regulatory networks by gluing together simpler pieces. Formally, we use the theory of structured cospans to produce a lax double functor from the double category of open signed graphs to that of open parameterized dynamical systems. Throughout the paper, we ground the categorical formalism in examples inspired by systems biology.
[ { "created": "Wed, 4 Jan 2023 04:32:08 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 22:35:34 GMT", "version": "v2" }, { "created": "Sun, 5 May 2024 21:26:08 GMT", "version": "v3" } ]
2024-08-07
[ [ "Aduddell", "Rebekah", "" ], [ "Fairbanks", "James", "" ], [ "Kumar", "Amit", "" ], [ "Ocal", "Pablo S.", "" ], [ "Patterson", "Evan", "" ], [ "Shapiro", "Brandon T.", "" ] ]
Regulatory networks depict promoting or inhibiting interactions between molecules in a biochemical system. We introduce a category-theoretic formalism for regulatory networks, using signed graphs to model the networks and signed functors to describe occurrences of one network in another, especially occurrences of network motifs. With this foundation, we establish functorial mappings between regulatory networks and other mathematical models in biochemistry. We construct a functor from reaction networks, modeled as Petri nets with signed links, to regulatory networks, enabling us to precisely define when a reaction network could be a physical mechanism underlying a regulatory network. Turning to quantitative models, we associate a regulatory network with a Lotka-Volterra system of differential equations, defining a functor from the category of signed graphs to a category of parameterized dynamical systems. We extend this result from closed to open systems, demonstrating that Lotka-Volterra dynamics respects not only inclusions and collapsings of regulatory networks, but also the process of building up complex regulatory networks by gluing together simpler pieces. Formally, we use the theory of structured cospans to produce a lax double functor from the double category of open signed graphs to that of open parameterized dynamical systems. Throughout the paper, we ground the categorical formalism in examples inspired by systems biology.
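The signed-graph-to-dynamics step can be illustrated concretely (an editorial sketch with invented weights and rates, ignoring the categorical machinery): each signed edge contributes a term of matching sign to the per-capita growth rate of its target in a Lotka-Volterra system.

import numpy as np
from scipy.integrate import solve_ivp

species = ["A", "B", "C"]
edges = [("A", "B", +1), ("B", "C", +1), ("C", "A", -1)]  # a signed 3-cycle
idx = {s: i for i, s in enumerate(species)}

rho = np.array([0.5, -0.2, -0.2])         # baseline rates (assumed)
M = np.zeros((3, 3))
for src, dst, sign in edges:
    M[idx[dst], idx[src]] = sign * 0.8    # uniform edge weight (assumed)
M -= np.eye(3) * 0.5                      # self-limitation keeps orbits bounded

def lv(t, x):
    return x * (rho + M @ x)              # Lotka-Volterra vector field

sol = solve_ivp(lv, (0, 100), [0.1, 0.1, 0.1], t_eval=[0, 25, 50, 100])
print(np.round(sol.y.T, 3))               # abundances of A, B, C over time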
2305.11198
J Gregory Caporaso
Christopher R. Keefe, Matthew R. Dillon, Chloe Herman, Mary Jewell, Colin V. Wood, Evan Bolyen, J. Gregory Caporaso
Facilitating Bioinformatics Reproducibility
5 pages, 2 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Study reproducibility is essential to corroborate, build on, and learn from the results of scientific research, but it is notoriously challenging in bioinformatics, which often involves large data sets and complex analytic workflows with many different tools. Additionally, many biologists are not trained in how to effectively record their bioinformatics analysis steps to ensure reproducibility, so critical information is often missing. Software tools used in bioinformatics can automate provenance tracking of the results they generate, removing most barriers to bioinformatics reproducibility. Here we present an implementation of that idea, Provenance Replay, a tool for generating new executable code from results generated with the QIIME 2 bioinformatics platform, and discuss considerations for bioinformatics developers who wish to implement similar functionality in their software.
[ { "created": "Thu, 18 May 2023 14:33:23 GMT", "version": "v1" } ]
2023-05-22
[ [ "Keefe", "Christopher R.", "" ], [ "Dillon", "Matthew R.", "" ], [ "Herman", "Chloe", "" ], [ "Jewell", "Mary", "" ], [ "Wood", "Colin V.", "" ], [ "Bolyen", "Evan", "" ], [ "Caporaso", "J. Gregory", "" ] ]
Study reproducibility is essential to corroborate, build on, and learn from the results of scientific research, but it is notoriously challenging in bioinformatics, which often involves large data sets and complex analytic workflows with many different tools. Additionally, many biologists are not trained in how to effectively record their bioinformatics analysis steps to ensure reproducibility, so critical information is often missing. Software tools used in bioinformatics can automate provenance tracking of the results they generate, removing most barriers to bioinformatics reproducibility. Here we present an implementation of that idea, Provenance Replay, a tool for generating new executable code from results generated with the QIIME 2 bioinformatics platform, and discuss considerations for bioinformatics developers who wish to implement similar functionality in their software.
2402.15360
Amanda Navine
Amanda K. Navine, Tom Denton, Matthew J. Weldy, Patrick J. Hart
All Thresholds Barred: Direct Estimation of Call Density in Bioacoustic Data
14 pages, 6 figures, 3 tables; submitted to Frontiers in Bird Science; Our Hawaiian PAM dataset and classifier scores, as well as annotation information for the three study species, can be found on Zenodo at https://doi.org/10.5281/zenodo.10581530. The fully annotated Powdermill dataset assembled by Chronister et al. that was used in this study is available at https://doi.org/10.1002/ecy.3329
null
null
null
q-bio.QM cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Passive acoustic monitoring (PAM) studies generate thousands of hours of audio, which may be used to monitor specific animal populations, conduct broad biodiversity surveys, detect threats such as poachers, and more. Machine learning classifiers for species identification are increasingly being used to process the vast amount of audio generated by bioacoustic surveys, expediting analysis and increasing the utility of PAM as a management tool. In common practice, a threshold is applied to classifier output scores, and scores above the threshold are aggregated into a detection count. The choice of threshold produces biased counts of vocalizations, which are subject to false positive/negative rates that may vary across subsets of the dataset. In this work, we advocate for directly estimating call density: The proportion of detection windows containing the target vocalization, regardless of classifier score. Our approach targets a desirable ecological estimator and provides a more rigorous grounding for identifying the core problems caused by distribution shifts -- when the defining characteristics of the data distribution change -- and designing strategies to mitigate them. We propose a validation scheme for estimating call density in a body of data and obtain, through Bayesian reasoning, probability distributions of confidence scores for both the positive and negative classes. We use these distributions to predict site-level densities, which may be subject to distribution shifts. We test our proposed methods on a real-world study of Hawaiian birds and provide simulation results leveraging existing fully annotated datasets, demonstrating robustness to variations in call density and classifier model quality.
[ { "created": "Fri, 23 Feb 2024 14:52:44 GMT", "version": "v1" } ]
2024-02-26
[ [ "Navine", "Amanda K.", "" ], [ "Denton", "Tom", "" ], [ "Weldy", "Matthew J.", "" ], [ "Hart", "Patrick J.", "" ] ]
Passive acoustic monitoring (PAM) studies generate thousands of hours of audio, which may be used to monitor specific animal populations, conduct broad biodiversity surveys, detect threats such as poachers, and more. Machine learning classifiers for species identification are increasingly being used to process the vast amount of audio generated by bioacoustic surveys, expediting analysis and increasing the utility of PAM as a management tool. In common practice, a threshold is applied to classifier output scores, and scores above the threshold are aggregated into a detection count. The choice of threshold produces biased counts of vocalizations, which are subject to false positive/negative rates that may vary across subsets of the dataset. In this work, we advocate for directly estimating call density: The proportion of detection windows containing the target vocalization, regardless of classifier score. Our approach targets a desirable ecological estimator and provides a more rigorous grounding for identifying the core problems caused by distribution shifts -- when the defining characteristics of the data distribution change -- and designing strategies to mitigate them. We propose a validation scheme for estimating call density in a body of data and obtain, through Bayesian reasoning, probability distributions of confidence scores for both the positive and negative classes. We use these distributions to predict site-level densities, which may be subject to distribution shifts. We test our proposed methods on a real-world study of Hawaiian birds and provide simulation results leveraging existing fully annotated datasets, demonstrating robustness to variations in call density and classifier model quality.
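A simple threshold-free variant of this idea (a maximum-likelihood sketch with synthetic Gaussian score models; the paper itself uses Bayesian reasoning over validated samples) treats a site's classifier scores as a one-parameter mixture of the call and no-call score distributions and estimates the call density d directly.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
d_true, n = 0.15, 5000                         # true call density, window count
is_call = rng.random(n) < d_true
scores = np.where(is_call, rng.normal(1.0, 0.7, n),    # p(s | call)
                           rng.normal(-1.0, 0.9, n))   # p(s | no call)

def neg_log_lik(d):
    # Likelihood of the scores under the mixture d*p(s|call) + (1-d)*p(s|no call)
    mix = d * norm.pdf(scores, 1.0, 0.7) + (1 - d) * norm.pdf(scores, -1.0, 0.9)
    return -np.sum(np.log(mix))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print("estimated call density:", round(res.x, 4), "(true:", d_true, ")")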
1908.02334
Emily Diller
Emily E Diller, Sha Cao, Beth Ey, Robert Lober, Jason G Parker
Predicted disease compositions of human gliomas estimated from multiparametric MRI can predict endothelial proliferation, tumor grade, and overall survival
13 pages, 3 figures, 5 tables
null
null
null
q-bio.QM cs.LG eess.IV physics.med-ph stat.AP stat.ML
http://creativecommons.org/licenses/by/4.0/
Background and Purpose: Biopsy is the main determinant of glioma clinical management, but it requires invasive sampling that can fail to detect relevant features because of tumor heterogeneity. The purpose of this study was to evaluate the accuracy of a voxel-wise, multiparametric MRI radiomic method to predict such features and to develop a minimally invasive method to objectively assess neoplasms. Methods: Multiparametric MRI data were registered to T1-weighted gadolinium contrast-enhanced data using a 12 degree-of-freedom affine model. The retrospectively collected MRI data included T1-weighted, T1-weighted gadolinium contrast-enhanced, T2-weighted, fluid-attenuated inversion recovery, and multi-b-value diffusion-weighted images acquired at 1.5T or 3.0T. Clinical experts provided voxel-wise annotations for five disease states on a subset of patients to establish a training feature vector of 611,930 observations. Then, a k-nearest-neighbor (k-NN) classifier was trained using a 25% hold-out design. The trained k-NN model was applied to 13,018,171 observations from seventeen histologically confirmed glioma patients. Linear regression tested the relationship of overall survival (OS) to predicted disease compositions (PDC) and diagnostic age (alpha = 0.05). Canonical discriminant analysis tested whether PDC and diagnostic age could differentiate clinical, genetic, and microscopic factors (alpha = 0.05). Results: The model predicted voxel annotation class with a Dice similarity coefficient of 94.34% +/- 2.98. Linear combinations of PDCs and diagnostic age predicted OS (p = 0.008), grade (p = 0.014), and endothelial proliferation (p = 0.003), but fell short of predicting gene mutations for TP53BP1 and IDH1. Conclusions: This voxel-wise, multiparametric MRI radiomic strategy holds potential as a non-invasive decision-making aid for clinicians managing patients with glioma.
[ { "created": "Tue, 6 Aug 2019 19:10:32 GMT", "version": "v1" } ]
2019-08-08
[ [ "Diller", "Emily E", "" ], [ "Cao", "Sha", "" ], [ "Ey", "Beth", "" ], [ "Lober", "Robert", "" ], [ "Parker", "Jason G", "" ] ]
Background and Purpose: Biopsy is the main determinant of glioma clinical management, but it requires invasive sampling that can fail to detect relevant features because of tumor heterogeneity. The purpose of this study was to evaluate the accuracy of a voxel-wise, multiparametric MRI radiomic method to predict such features and to develop a minimally invasive method to objectively assess neoplasms. Methods: Multiparametric MRI data were registered to T1-weighted gadolinium contrast-enhanced data using a 12 degree-of-freedom affine model. The retrospectively collected MRI data included T1-weighted, T1-weighted gadolinium contrast-enhanced, T2-weighted, fluid-attenuated inversion recovery, and multi-b-value diffusion-weighted images acquired at 1.5T or 3.0T. Clinical experts provided voxel-wise annotations for five disease states on a subset of patients to establish a training feature vector of 611,930 observations. Then, a k-nearest-neighbor (k-NN) classifier was trained using a 25% hold-out design. The trained k-NN model was applied to 13,018,171 observations from seventeen histologically confirmed glioma patients. Linear regression tested the relationship of overall survival (OS) to predicted disease compositions (PDC) and diagnostic age (alpha = 0.05). Canonical discriminant analysis tested whether PDC and diagnostic age could differentiate clinical, genetic, and microscopic factors (alpha = 0.05). Results: The model predicted voxel annotation class with a Dice similarity coefficient of 94.34% +/- 2.98. Linear combinations of PDCs and diagnostic age predicted OS (p = 0.008), grade (p = 0.014), and endothelial proliferation (p = 0.003), but fell short of predicting gene mutations for TP53BP1 and IDH1. Conclusions: This voxel-wise, multiparametric MRI radiomic strategy holds potential as a non-invasive decision-making aid for clinicians managing patients with glioma.
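Schematically, the classification step looks as follows (synthetic features standing in for the multiparametric MRI channels; the class geometry, k, and sample sizes are invented): a k-NN classifier with a 25% hold-out, scored per class with the Dice similarity coefficient.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_per, n_feat, n_class = 2000, 5, 5     # voxels/class, MRI channels, states
X = np.vstack([rng.normal(c, 1.0, (n_per, n_feat)) for c in range(n_class)])
y = np.repeat(np.arange(n_class), n_per)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
knn = KNeighborsClassifier(n_neighbors=15).fit(Xtr, ytr)
pred = knn.predict(Xte)

for c in range(n_class):                # per-class Dice = 2|A & B| / (|A| + |B|)
    inter = np.sum((pred == c) & (yte == c))
    dice = 2 * inter / (np.sum(pred == c) + np.sum(yte == c))
    print("class", c, "Dice:", round(dice, 3))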
2402.13555
Xiangzhe Kong
Xiangzhe Kong, Yinjun Jia, Wenbing Huang, Yang Liu
Full-Atom Peptide Design with Geometric Latent Diffusion
25 pages
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peptide design plays a pivotal role in therapeutics, opening up new possibilities for targeting binding sites that were previously undruggable. Most existing methods are either inefficient or concerned only with the target-agnostic design of 1D sequences. In this paper, we propose a generative model for full-atom \textbf{Pep}tide design with \textbf{G}eometric \textbf{LA}tent \textbf{D}iffusion (PepGLAD). We first establish a benchmark consisting of both 1D sequences and 3D structures from the Protein Data Bank (PDB) and the literature for systematic evaluation. We then identify two major challenges in leveraging current diffusion-based models for peptide design: the full-atom geometry and the variable binding geometry. To tackle the first challenge, PepGLAD derives a variational autoencoder that first encodes full-atom residues of variable size into fixed-dimensional latent representations, and then decodes back to the residue space after conducting the diffusion process in the latent space. For the second issue, PepGLAD explores a receptor-specific affine transformation to convert the 3D coordinates into a shared standard space, enabling better generalization across different binding shapes. Experimental results show that our method not only significantly improves diversity and binding affinity in the task of sequence-structure co-design, but also excels at recovering reference structures for binding conformation generation.
[ { "created": "Wed, 21 Feb 2024 06:25:35 GMT", "version": "v1" }, { "created": "Wed, 22 May 2024 03:20:40 GMT", "version": "v2" } ]
2024-05-24
[ [ "Kong", "Xiangzhe", "" ], [ "Jia", "Yinjun", "" ], [ "Huang", "Wenbing", "" ], [ "Liu", "Yang", "" ] ]
Peptide design plays a pivotal role in therapeutics, opening up new possibilities for targeting binding sites that were previously undruggable. Most existing methods are either inefficient or concerned only with the target-agnostic design of 1D sequences. In this paper, we propose a generative model for full-atom \textbf{Pep}tide design with \textbf{G}eometric \textbf{LA}tent \textbf{D}iffusion (PepGLAD). We first establish a benchmark consisting of both 1D sequences and 3D structures from the Protein Data Bank (PDB) and the literature for systematic evaluation. We then identify two major challenges in leveraging current diffusion-based models for peptide design: the full-atom geometry and the variable binding geometry. To tackle the first challenge, PepGLAD derives a variational autoencoder that first encodes full-atom residues of variable size into fixed-dimensional latent representations, and then decodes back to the residue space after conducting the diffusion process in the latent space. For the second issue, PepGLAD explores a receptor-specific affine transformation to convert the 3D coordinates into a shared standard space, enabling better generalization across different binding shapes. Experimental results show that our method not only significantly improves diversity and binding affinity in the task of sequence-structure co-design, but also excels at recovering reference structures for binding conformation generation.
2306.01313
Ivana Pajic-Lijakovic Dr.
Ivana Pajic-Lijakovic and Milan Milivojevic
Cell jamming and unjamming in development: physical aspects
18 pages, 4 figures
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Collective cell migration is essential for a wide range of biological processes such as morphogenesis, wound healing, and cancer spreading. However, it is well known that migrating epithelial collectives frequently undergo jamming, stay trapped for some period of time, and then start migrating again. Consequently, only a fraction of the epithelial cells actively contributes to tissue development. In contrast to epithelial cells, migrating mesenchymal collectives successfully avoid jamming. It has been confirmed that epithelial unjamming cannot be treated as the epithelial-to-mesenchymal transition; some other mechanism is responsible for epithelial jamming/unjamming. Despite extensive research devoted to studying cell jamming/unjamming, we still do not understand the origin of this phenomenon. The origin is connected to physical factors such as the accumulation of cell compressive residual stress and the surface characteristics of migrating (unjamming) and resting (jamming) epithelial clusters, which depend primarily on the strength of cell-cell adhesion contacts and on cell contractility. The main goal of this theoretical consideration is to clarify these cause-effect relations.
[ { "created": "Fri, 2 Jun 2023 07:24:18 GMT", "version": "v1" } ]
2023-06-05
[ [ "Pajic-Lijakovic", "Ivana", "" ], [ "Milivojevic", "Milan", "" ] ]
Collective cell migration is essential for a wide range of biological processes such as morphogenesis, wound healing, and cancer spreading. However, it is well known that migrating epithelial collectives frequently undergo jamming, stay trapped for some period of time, and then start migrating again. Consequently, only a fraction of the epithelial cells actively contributes to tissue development. In contrast to epithelial cells, migrating mesenchymal collectives successfully avoid jamming. It has been confirmed that epithelial unjamming cannot be treated as the epithelial-to-mesenchymal transition; some other mechanism is responsible for epithelial jamming/unjamming. Despite extensive research devoted to studying cell jamming/unjamming, we still do not understand the origin of this phenomenon. The origin is connected to physical factors such as the accumulation of cell compressive residual stress and the surface characteristics of migrating (unjamming) and resting (jamming) epithelial clusters, which depend primarily on the strength of cell-cell adhesion contacts and on cell contractility. The main goal of this theoretical consideration is to clarify these cause-effect relations.
2405.20359
Cristina-Maria Valcu
Cristina-Maria Valcu, Richard A. Scheltema, Ralf M. Schweiggert, Mihai Valcu, Kim Teltscher, Dirk M. Walther, Reinhold Carle and Bart Kempenaers
Life history shapes variation in egg composition in the blue tit Cyanistes caeruleus
null
Communications Biology (2019) 2:6
10.1038/s42003-018-0247-8
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Maternal investment directly shapes early developmental conditions and therefore has long-term fitness consequences for the offspring. In oviparous species, prenatal maternal investment is fixed at the time of laying. To ensure the best survival chances for most of their offspring, females must equip their eggs with the resources required to perform well under various circumstances, yet the actual mechanisms remain unknown. Here we describe the blue tit egg albumen and yolk proteomes and evaluate their potential to mediate maternal effects. We show that variation in egg composition (proteins, lipids, carotenoids) primarily depends on laying order and female age. Egg proteomic profiles are mainly driven by laying order, and investment in the egg proteome is functionally biased among eggs. Our results suggest that maternal effects on egg composition result from both passive and active (partly compensatory) mechanisms, and that variation in egg composition creates diverse biochemical environments for embryonic development.
[ { "created": "Thu, 30 May 2024 07:54:25 GMT", "version": "v1" } ]
2024-06-03
[ [ "Valcu", "Cristina-Maria", "" ], [ "Scheltema", "Richard A.", "" ], [ "Schweiggert", "Ralf M.", "" ], [ "Valcu", "Mihai", "" ], [ "Teltscher", "Kim", "" ], [ "Walther", "Dirk M.", "" ], [ "Carle", "Reinhold", "" ], [ "Kempenaers", "Bart", "" ] ]
Maternal investment directly shapes early developmental conditions and therefore has long-term fitness consequences for the offspring. In oviparous species, prenatal maternal investment is fixed at the time of laying. To ensure the best survival chances for most of their offspring, females must equip their eggs with the resources required to perform well under various circumstances, yet the actual mechanisms remain unknown. Here we describe the blue tit egg albumen and yolk proteomes and evaluate their potential to mediate maternal effects. We show that variation in egg composition (proteins, lipids, carotenoids) primarily depends on laying order and female age. Egg proteomic profiles are mainly driven by laying order, and investment in the egg proteome is functionally biased among eggs. Our results suggest that maternal effects on egg composition result from both passive and active (partly compensatory) mechanisms, and that variation in egg composition creates diverse biochemical environments for embryonic development.
2405.06645
Darin Tsui
Darin Tsui, Amirali Aghazadeh
On Recovering Higher-order Interactions from Protein Language Models
null
null
null
null
q-bio.BM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Protein language models leverage evolutionary information to perform state-of-the-art 3D structure and zero-shot variant prediction. Yet, extracting and explaining all the mutational interactions that govern model predictions remains difficult, as it requires querying the entire amino acid space for $n$ sites using $20^n$ sequences, which is computationally expensive even for moderate values of $n$ (e.g., $n\sim10$). Although approaches to lower the sample complexity exist, they often limit the interpretability of the model to just single and pairwise interactions. Recently, computationally scalable algorithms relying on the assumption of sparsity in the Fourier domain have emerged to learn interactions from experimental data. However, extracting interactions from language models poses unique challenges: it is unclear whether sparsity is always present or whether it is the only metric needed to assess the utility of Fourier algorithms. Herein, we develop a framework for a systematic Fourier analysis of the protein language model ESM2 applied to three proteins, green fluorescent protein (GFP), tumor protein P53 (TP53), and G domain B1 (GB1), across various sites for 228 experiments. We demonstrate that ESM2 is dominated by three regions in the sparsity-ruggedness plane, two of which are better suited for sparse Fourier transforms. Validations on two sample proteins demonstrate recovery of all interactions with $R^2=0.72$ in the more sparse region and $R^2=0.66$ in the more dense region, using only 7 million out of $20^{10}\sim10^{13}$ ESM2 samples, reducing the computational time by a factor of 15,000. All code and data are available in our GitHub repository: https://github.com/amirgroup-codes/InteractionRecovery.
[ { "created": "Fri, 15 Mar 2024 16:35:47 GMT", "version": "v1" } ]
2024-05-14
[ [ "Tsui", "Darin", "" ], [ "Aghazadeh", "Amirali", "" ] ]
Protein language models leverage evolutionary information to perform state-of-the-art 3D structure and zero-shot variant prediction. Yet, extracting and explaining all the mutational interactions that govern model predictions remains difficult, as it requires querying the entire amino acid space for $n$ sites using $20^n$ sequences, which is computationally expensive even for moderate values of $n$ (e.g., $n\sim10$). Although approaches to lower the sample complexity exist, they often limit the interpretability of the model to just single and pairwise interactions. Recently, computationally scalable algorithms relying on the assumption of sparsity in the Fourier domain have emerged to learn interactions from experimental data. However, extracting interactions from language models poses unique challenges: it is unclear whether sparsity is always present or whether it is the only metric needed to assess the utility of Fourier algorithms. Herein, we develop a framework for a systematic Fourier analysis of the protein language model ESM2 applied to three proteins, green fluorescent protein (GFP), tumor protein P53 (TP53), and G domain B1 (GB1), across various sites for 228 experiments. We demonstrate that ESM2 is dominated by three regions in the sparsity-ruggedness plane, two of which are better suited for sparse Fourier transforms. Validations on two sample proteins demonstrate recovery of all interactions with $R^2=0.72$ in the more sparse region and $R^2=0.66$ in the more dense region, using only 7 million out of $20^{10}\sim10^{13}$ ESM2 samples, reducing the computational time by a factor of 15,000. All code and data are available in our GitHub repository: https://github.com/amirgroup-codes/InteractionRecovery.
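For intuition about the Fourier (interaction) decomposition of a sequence landscape: the sketch below runs a fast Walsh-Hadamard transform over a toy landscape on $n$ binary sites and counts the nonzero coefficients as a sparsity check. This is a simplification: the paper works over the 20-letter amino acid alphabet and relies on sparse, sample-efficient transforms rather than the dense transform shown here.

```python
import numpy as np

def walsh_hadamard(f):
    """Fast Walsh-Hadamard transform of a landscape over n binary sites.

    f[x] is the model score of sequence x, where bit i of x selects the
    residue at site i. (Binary alphabet here; the paper uses 20 letters.)
    """
    h = np.asarray(f, dtype=float).copy()
    step = 1
    while step < len(h):
        for i in range(0, len(h), 2 * step):
            a = h[i:i + step].copy()
            b = h[i + step:i + 2 * step].copy()
            h[i:i + step] = a + b
            h[i + step:i + 2 * step] = a - b
        step *= 2
    return h / len(h)

# Toy landscape on n = 4 sites with one pairwise (order-2) interaction.
n = 4
x = np.arange(2 ** n)
bit = lambda k: (x >> k) & 1
f = 1.0 * bit(0) + 0.5 * bit(2) + 2.0 * bit(0) * bit(3)
coeffs = walsh_hadamard(f)
print(int(np.sum(np.abs(coeffs) > 1e-9)), "nonzero coefficients of", 2 ** n)
```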
1807.01768
Anna Maltsev
Anna Maltsev, Michael Stern, Victor Maltsev
Mechanisms of Calcium Leak from Cardiac Sarcoplasmic Reticulum Revealed by Statistical Mechanics
20 pages, 6 figures, supplemental material
null
10.1016/j.bpj.2018.11.277
null
q-bio.SC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heart muscle contraction is normally activated by a synchronized Ca release from the sarcoplasmic reticulum (SR), a major intracellular Ca store. However, under abnormal conditions Ca leaks from the SR, decreasing heart contraction amplitude and increasing the risk of life-threatening arrhythmia. The mechanisms and regimes of SR operation generating the abnormal Ca leak remain unclear. Here we employed both numerical and analytical modeling to gain mechanistic insight into the emergent Ca leak phenomenon. Our numerical simulations using a detailed realistic model of a Ca release unit (CRU) reveal sharp transitions resulting in Ca leak. The emergence of leak is closely mapped mathematically to the Ising model from statistical mechanics. The system's steady-state behavior is determined by two aggregate parameters: the analogues of the magnetic field ($h$) and the inverse temperature ($\beta$) in the Ising model, for which we have explicit formulas in terms of SR Ca and release channel opening/closing rates. The classification of leak regimes takes the shape of a phase $\beta$-$h$ diagram, with the regime boundaries occurring at $h=0$ and a critical value of $\beta$ ($\beta^*$) which we estimate using a classical Ising model and mean-field theory. Our theory predicts that a synchronized Ca leak will occur when $h>0$ and $\beta>\beta^*$, and a disordered leak occurs when $\beta<\beta^*$ and $h$ is not too negative. The disordered leak is distinguished from the synchronized leak (in long-lasting sparks) by larger Peierls contour lengths, an output parameter reflecting the degree of disorder. Thus, in addition to our detailed numerical modeling approach, we also offer an instantaneous computational tool that uses analytical formulas from the Ising model for the respective RyR parameters and SR Ca load to describe and classify phase transitions and the emergence of leak.
[ { "created": "Wed, 4 Jul 2018 20:47:43 GMT", "version": "v1" }, { "created": "Sun, 12 May 2019 17:53:52 GMT", "version": "v2" } ]
2023-07-19
[ [ "Maltsev", "Anna", "" ], [ "Stern", "Michael", "" ], [ "Maltsev", "Victor", "" ] ]
Heart muscle contraction is normally activated by a synchronized Ca release from the sarcoplasmic reticulum (SR), a major intracellular Ca store. However, under abnormal conditions Ca leaks from the SR, decreasing heart contraction amplitude and increasing the risk of life-threatening arrhythmia. The mechanisms and regimes of SR operation generating the abnormal Ca leak remain unclear. Here we employed both numerical and analytical modeling to gain mechanistic insight into the emergent Ca leak phenomenon. Our numerical simulations using a detailed realistic model of a Ca release unit (CRU) reveal sharp transitions resulting in Ca leak. The emergence of leak is closely mapped mathematically to the Ising model from statistical mechanics. The system's steady-state behavior is determined by two aggregate parameters: the analogues of the magnetic field ($h$) and the inverse temperature ($\beta$) in the Ising model, for which we have explicit formulas in terms of SR Ca and release channel opening/closing rates. The classification of leak regimes takes the shape of a phase $\beta$-$h$ diagram, with the regime boundaries occurring at $h=0$ and a critical value of $\beta$ ($\beta^*$) which we estimate using a classical Ising model and mean-field theory. Our theory predicts that a synchronized Ca leak will occur when $h>0$ and $\beta>\beta^*$, and a disordered leak occurs when $\beta<\beta^*$ and $h$ is not too negative. The disordered leak is distinguished from the synchronized leak (in long-lasting sparks) by larger Peierls contour lengths, an output parameter reflecting the degree of disorder. Thus, in addition to our detailed numerical modeling approach, we also offer an instantaneous computational tool that uses analytical formulas from the Ising model for the respective RyR parameters and SR Ca load to describe and classify phase transitions and the emergence of leak.
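For readers unfamiliar with the Ising mapping: in mean-field theory the order parameter solves $m=\tanh(\beta(Jzm+h))$, which orders spontaneously above $\beta^*=1/(Jz)$. The sketch below iterates that self-consistency equation; $J$ and $z$ are generic Ising parameters here, not values fitted to RyR channel data.

```python
import numpy as np

def mean_field_m(beta, h, J=1.0, z=4, iters=5000):
    """Solve the mean-field self-consistency m = tanh(beta*(J*z*m + h)).

    In the paper's mapping the 'spins' are open/closed release channels and
    (beta, h) aggregate channel rates and SR Ca load; J and z here are
    generic Ising parameters, not fitted RyR values.
    """
    m = 0.5
    for _ in range(iters):
        m = np.tanh(beta * (J * z * m + h))
    return m

beta_star = 1.0 / (1.0 * 4)   # mean-field critical point beta* = 1/(J*z)
for beta in (0.15, 0.25, 0.35):
    m = mean_field_m(beta, h=1e-6)       # tiny h > 0 breaks the symmetry
    print(f"beta={beta:.2f} (beta*={beta_star:.2f})  m={m:+.3f}")
```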
1610.02543
Richard McMurtrey
Richard J. McMurtrey
Multi-Compartmental Biomaterial Scaffolds for Patterning Neural Tissue Organoids in Models of Neurodevelopment and Tissue Regeneration
null
J. Tissue Engineering 2016; 7:1-8
10.1177/2041731416671926
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biomaterials are becoming an essential tool in stem cell research and its applications. Various types of biomaterials enable three-dimensional (3D) culture of stem cells and, more recently, also enable high-resolution patterning and organization of multicellular architectures. Biomaterials also hold the potential to provide many advantages over cell transplants alone in regenerative medicine. This paper describes novel designs for functionalized biomaterial constructs that guide tissue development toward targeted regional identities and structures. Such designs comprise compartmentalized regions of the biomaterial structure that are functionalized with molecular factors that form concentration gradients through the construct and guide stem cell development, axis patterning, and tissue architecture, including rostral/caudal, ventral/dorsal, or medial/lateral identities of the central nervous system. The ability to recapitulate innate developmental processes in a 3D environment and under specific controlled conditions has vital applications in advanced models of neurodevelopment and in the repair of specific sites of damaged or diseased neural tissue.
[ { "created": "Sat, 8 Oct 2016 15:12:53 GMT", "version": "v1" } ]
2016-10-11
[ [ "McMurtrey", "Richard J.", "" ] ]
Biomaterials are becoming an essential tool in stem cell research and its applications. Various types of biomaterials enable three-dimensional (3D) culture of stem cells and, more recently, also enable high-resolution patterning and organization of multicellular architectures. Biomaterials also hold the potential to provide many advantages over cell transplants alone in regenerative medicine. This paper describes novel designs for functionalized biomaterial constructs that guide tissue development toward targeted regional identities and structures. Such designs comprise compartmentalized regions of the biomaterial structure that are functionalized with molecular factors that form concentration gradients through the construct and guide stem cell development, axis patterning, and tissue architecture, including rostral/caudal, ventral/dorsal, or medial/lateral identities of the central nervous system. The ability to recapitulate innate developmental processes in a 3D environment and under specific controlled conditions has vital applications in advanced models of neurodevelopment and in the repair of specific sites of damaged or diseased neural tissue.
1307.6432
Casey Dunn
Casey W. Dunn, Mark Howison, and Felipe Zapata
Agalma: an automated phylogenomics workflow
17 pages, 4 figures
BMC Bioinformatics 14 (2013) 330
10.1186/1471-2105-14-330
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past decade, transcriptome data have become an important component of many phylogenetic studies. Phylogenetic studies now regularly include genes from newly sequenced transcriptomes, as well as publicly available transcriptomes and genomes. Implementing such a phylogenomic study, however, is computationally intensive, requires the coordinated use of many complex software tools, and includes multiple steps for which no published tools exist. Phylogenomic studies have therefore been manual or semiautomated. In addition to taking considerable user time, this makes phylogenomic analyses difficult to reproduce, compare, and extend. In addition, methodological improvements made in the context of one study often cannot be easily applied and evaluated in the context of other studies. We present Agalma, an automated tool that conducts phylogenomic analyses. The user provides raw Illumina transcriptome data, and Agalma produces annotated assemblies, aligned gene sequence matrices, a preliminary phylogeny, and detailed diagnostics that allow the investigator to make extensive assessments of intermediate analysis steps and the final results. Sequences from other sources, such as externally assembled genomes and transcriptomes, can also be incorporated in the analyses. Agalma tracks provenance, profiles processor and memory use, records diagnostics, manages metadata, and enables rich HTML reports for all stages of the analysis. Agalma includes a test data set and a built-in test analysis of these data. In addition to describing Agalma, we here present a sample analysis of a larger seven-taxon data set. Agalma is available for download at https://bitbucket.org/caseywdunn/agalma. Agalma allows complex phylogenomic analyses to be implemented and described unambiguously as a series of high-level commands. This will enable phylogenomic studies to be readily reproduced, modified, and extended.
[ { "created": "Wed, 24 Jul 2013 14:18:29 GMT", "version": "v1" } ]
2014-01-14
[ [ "Dunn", "Casey W.", "" ], [ "Howison", "Mark", "" ], [ "Zapata", "Felipe", "" ] ]
In the past decade, transcriptome data have become an important component of many phylogenetic studies. Phylogenetic studies now regularly include genes from newly sequenced transcriptomes, as well as publicly available transcriptomes and genomes. Implementing such a phylogenomic study, however, is computationally intensive, requires the coordinated use of many complex software tools, and includes multiple steps for which no published tools exist. Phylogenomic studies have therefore been manual or semiautomated. In addition to taking considerable user time, this makes phylogenomic analyses difficult to reproduce, compare, and extend. In addition, methodological improvements made in the context of one study often cannot be easily applied and evaluated in the context of other studies. We present Agalma, an automated tool that conducts phylogenomic analyses. The user provides raw Illumina transcriptome data, and Agalma produces annotated assemblies, aligned gene sequence matrices, a preliminary phylogeny, and detailed diagnostics that allow the investigator to make extensive assessments of intermediate analysis steps and the final results. Sequences from other sources, such as externally assembled genomes and transcriptomes, can also be incorporated in the analyses. Agalma tracks provenance, profiles processor and memory use, records diagnostics, manages metadata, and enables rich HTML reports for all stages of the analysis. Agalma includes a test data set and a built-in test analysis of these data. In addition to describing Agalma, we here present a sample analysis of a larger seven-taxon data set. Agalma is available for download at https://bitbucket.org/caseywdunn/agalma. Agalma allows complex phylogenomic analyses to be implemented and described unambiguously as a series of high-level commands. This will enable phylogenomic studies to be readily reproduced, modified, and extended.
1607.00998
Marco Alberto Javarone
Marco Alberto Javarone
The Host-Pathogen Game: an evolutionary approach to biological competitions
17 pages, 7 figures
Front. Phys. 6:94 2018
10.3389/fphy.2018.00094
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a model called the Host-Pathogen game for studying biological competitions. Notably, we focus on the invasive dynamics of external agents, like bacteria, within a host organism. The former are mapped to a population of defectors that aim to spread in the extracellular medium of the host. In turn, the latter is composed of cells, mapped to a population of cooperators, that aim to kill pathogens. The cooperative behavior of cells is fundamental for the emergence of the living functions of the whole organism, since each one contributes to a specific set of tasks. So, broadly speaking, their contribution can be viewed as a form of energy. When bacteria are spatially close to a cell, the latter can use a fraction of its energy to remove them. On the other hand, when bacteria survive an attack, they absorb the received energy, becoming stronger and more resistant to further attacks. In addition, since bacteria play as defectors, their sole objective is to increase their own wealth, without supporting their own kind. As in many living organisms, the host temperature plays a relevant role in the host-pathogen equilibrium. For instance, in animals like human beings, a neural mechanism raises the body temperature in order to activate the immune system. Here, cooperators succeed once bacteria are completely removed while, in the opposite scenario, the host undergoes a deep invasive process, like blood poisoning. Results of numerical simulations show that the dynamics of the proposed model allow the system to reach a variety of states. At a very high level of abstraction, some of these states resemble those that can be observed in living systems. Therefore, to conclude, we believe that our model could be exploited for studying further biological phenomena.
[ { "created": "Mon, 4 Jul 2016 19:22:03 GMT", "version": "v1" }, { "created": "Fri, 31 Aug 2018 15:48:17 GMT", "version": "v2" } ]
2018-09-03
[ [ "Javarone", "Marco Alberto", "" ] ]
We introduce a model called the Host-Pathogen game for studying biological competitions. Notably, we focus on the invasive dynamics of external agents, like bacteria, within a host organism. The former are mapped to a population of defectors that aim to spread in the extracellular medium of the host. In turn, the latter is composed of cells, mapped to a population of cooperators, that aim to kill pathogens. The cooperative behavior of cells is fundamental for the emergence of the living functions of the whole organism, since each one contributes to a specific set of tasks. So, broadly speaking, their contribution can be viewed as a form of energy. When bacteria are spatially close to a cell, the latter can use a fraction of its energy to remove them. On the other hand, when bacteria survive an attack, they absorb the received energy, becoming stronger and more resistant to further attacks. In addition, since bacteria play as defectors, their sole objective is to increase their own wealth, without supporting their own kind. As in many living organisms, the host temperature plays a relevant role in the host-pathogen equilibrium. For instance, in animals like human beings, a neural mechanism raises the body temperature in order to activate the immune system. Here, cooperators succeed once bacteria are completely removed while, in the opposite scenario, the host undergoes a deep invasive process, like blood poisoning. Results of numerical simulations show that the dynamics of the proposed model allow the system to reach a variety of states. At a very high level of abstraction, some of these states resemble those that can be observed in living systems. Therefore, to conclude, we believe that our model could be exploited for studying further biological phenomena.
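A minimal agent-based toy of the core rule described above (cells spend energy to attack nearby bacteria; surviving bacteria absorb that energy and become harder to kill). Every numeric parameter and the update scheme below are illustrative assumptions, not the paper's actual model.

```python
import random

SIZE, STEPS = 20, 200
ATTACK_FRACTION = 0.2   # assumed share of a cell's energy spent per attack
TEMPERATURE = 1.0       # assumed scaling of attack effectiveness

random.seed(1)
cell_energy = [[1.0] * SIZE for _ in range(SIZE)]           # cooperators
bacteria = {(random.randrange(SIZE), random.randrange(SIZE)): 0.5
            for _ in range(30)}                              # site -> energy

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(STEPS):
    for site in list(bacteria):
        i, j = site
        spent = ATTACK_FRACTION * cell_energy[i][j] * TEMPERATURE
        cell_energy[i][j] -= spent                 # the cell pays the cost
        if random.random() < spent / (spent + bacteria[site]):
            del bacteria[site]                     # pathogen removed
        else:
            bacteria[site] += spent                # pathogen absorbs energy
            tgt = random.choice(neighbours(i, j))  # ... and tries to spread
            bacteria.setdefault(tgt, 0.5)

print("surviving bacteria:", len(bacteria))
```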
2004.03181
Leo Bouscarrat
L\'eo Bouscarrat (QARMA, TALEP), Antoine Bonnefoy, C\'ecile Capponi (LIF, QARMA), Carlos Ramisch (TALEP)
Multilingual enrichment of disease biomedical ontologies
null
2nd workshop on MultilingualBIO: Multilingual Biomedical Text Processing, May 2020, Marseille, France
null
null
q-bio.QM cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translating biomedical ontologies is an important challenge, but doing it manually requires much time and money. We study the possibility of using open-source knowledge bases to translate biomedical ontologies. We focus on two aspects: coverage and quality. We examine the coverage of two biomedical ontologies focusing on diseases with respect to Wikidata for 9 European languages (Czech, Dutch, English, French, German, Italian, Polish, Portuguese and Spanish) for both ontologies, plus Arabic, Chinese and Russian for the second one. We first use direct links between Wikidata and the studied ontologies and then second-order links obtained by going through other intermediate ontologies. We then compare the quality of the translations obtained through Wikidata with those of a commercial machine translation tool, here Google Cloud Translation.
[ { "created": "Tue, 7 Apr 2020 08:04:21 GMT", "version": "v1" } ]
2020-04-08
[ [ "Bouscarrat", "Léo", "", "QARMA, TALEP" ], [ "Bonnefoy", "Antoine", "", "LIF, QARMA" ], [ "Capponi", "Cécile", "", "LIF, QARMA" ], [ "Ramisch", "Carlos", "", "TALEP" ] ]
Translating biomedical ontologies is an important challenge, but doing it manually requires much time and money. We study the possibility of using open-source knowledge bases to translate biomedical ontologies. We focus on two aspects: coverage and quality. We examine the coverage of two biomedical ontologies focusing on diseases with respect to Wikidata for 9 European languages (Czech, Dutch, English, French, German, Italian, Polish, Portuguese and Spanish) for both ontologies, plus Arabic, Chinese and Russian for the second one. We first use direct links between Wikidata and the studied ontologies and then second-order links obtained by going through other intermediate ontologies. We then compare the quality of the translations obtained through Wikidata with those of a commercial machine translation tool, here Google Cloud Translation.
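The kind of lookup that underlies such a translation pipeline can be sketched against Wikidata's public SPARQL endpoint. The query below pulls multilingual labels for one example item (Q12136, "disease"); linking an ontology concept to the right Wikidata item, the harder step the paper addresses, is assumed already done here.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?lang ?label WHERE {
  wd:Q12136 rdfs:label ?label .          # Q12136 = 'disease' (example item)
  BIND(LANG(?label) AS ?lang)
  FILTER(?lang IN ("en", "fr", "de", "es", "pl"))
}
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "ontology-translation-demo/0.1"},
                    timeout=30)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["lang"]["value"], "->", row["label"]["value"])
```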
2102.00925
Zichao Yan
Zichao Yan, William L. Hamilton and Mathieu Blanchette
Neural representation and generation for RNA secondary structures
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence its cellular activities and functions. The design of large-scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, which represent a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequences in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction-tree hierarchy, integrating strong inductive biases about RNA structural regularity and folding mechanisms, such that high structural validity, stability and diversity of generated RNAs are achieved. We also seek to adequately organize the latent space of RNA molecular embeddings with regard to interactions with proteins, and use targeted optimization to navigate this latent space in search of desired novel RNA molecules.
[ { "created": "Mon, 1 Feb 2021 15:49:25 GMT", "version": "v1" } ]
2021-02-02
[ [ "Yan", "Zichao", "" ], [ "Hamilton", "William L.", "" ], [ "Blanchette", "Mathieu", "" ] ]
Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence its cellular activities and functions. The design of large-scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, which represent a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequences in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction-tree hierarchy, integrating strong inductive biases about RNA structural regularity and folding mechanisms, such that high structural validity, stability and diversity of generated RNAs are achieved. We also seek to adequately organize the latent space of RNA molecular embeddings with regard to interactions with proteins, and use targeted optimization to navigate this latent space in search of desired novel RNA molecules.
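On the input side of such graph-based models, an RNA secondary structure in dot-bracket notation converts to a molecular graph with a simple stack-based parser, as sketched below; this is only the data-representation step, not the generative model itself.

```python
import networkx as nx

def rna_graph(seq, dotbracket):
    """Molecular graph of an RNA secondary structure.

    Nodes are nucleotides; edges are the backbone plus base pairs decoded
    from dot-bracket notation with a stack.
    """
    g = nx.Graph()
    stack = []
    for i, (base, sym) in enumerate(zip(seq, dotbracket)):
        g.add_node(i, base=base)
        if i > 0:
            g.add_edge(i - 1, i, kind="backbone")
        if sym == "(":
            stack.append(i)
        elif sym == ")":
            g.add_edge(stack.pop(), i, kind="pair")
    return g

g = rna_graph("GGGAAACCC", "(((...)))")
pairs = sum(1 for _, _, d in g.edges(data=True) if d["kind"] == "pair")
print(g.number_of_nodes(), "nucleotides,", pairs, "base pairs")
```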
2001.07396
William Ireland
William T. Ireland, Suzannah M. Beeler, Emanuel Flores-Bautista, Nathan M. Belliveau, Michael J. Sweredoski, Annie Moradian, Justin B. Kinney, and Rob Phillips
Deciphering the regulatory genome of $\textit{Escherichia coli}$, one hundred promoters at a time
47 pages, 15 figures
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Advances in DNA sequencing have revolutionized our ability to read genomes. However, even in the most well-studied of organisms, the bacterium ${\it Escherichia coli}$, we remain completely ignorant of the regulation of $\approx$ 65$\%$ of promoters. Until we have cracked this regulatory Rosetta Stone, efforts to read and write genomes will remain haphazard. We introduce a new method (Reg-Seq) linking a massively parallel reporter assay and mass spectrometry to produce a base-pair-resolution dissection of more than 100 promoters in ${\it E. coli}$ in 12 different growth conditions. First, we show that our method recapitulates regulatory information from known sequences. Then, we examine the regulatory architectures of more than 80 promoters in the ${\it E. coli}$ genome which previously had no known regulation. In many cases, we also identify which transcription factors mediate their regulation. The method introduced here clears a path for fully characterizing the regulatory genome of model organisms, with the potential of moving on to an array of other microbes of ecological and medical relevance.
[ { "created": "Tue, 21 Jan 2020 09:07:10 GMT", "version": "v1" } ]
2020-01-22
[ [ "Ireland", "William T.", "" ], [ "Beeler", "Suzannah M.", "" ], [ "Flores-Bautista", "Emanuel", "" ], [ "Belliveau", "Nathan M.", "" ], [ "Sweredoski", "Michael J.", "" ], [ "Moradian", "Annie", "" ], [ "Kinney", "Justin B.", "" ], [ "Phillips", "Rob", "" ] ]
Advances in DNA sequencing have revolutionized our ability to read genomes. However, even in the most well-studied of organisms, the bacterium ${\it Escherichia coli}$, we remain completely ignorant of the regulation of $\approx$ 65$\%$ of promoters. Until we have cracked this regulatory Rosetta Stone, efforts to read and write genomes will remain haphazard. We introduce a new method (Reg-Seq) linking a massively parallel reporter assay and mass spectrometry to produce a base-pair-resolution dissection of more than 100 promoters in ${\it E. coli}$ in 12 different growth conditions. First, we show that our method recapitulates regulatory information from known sequences. Then, we examine the regulatory architectures of more than 80 promoters in the ${\it E. coli}$ genome which previously had no known regulation. In many cases, we also identify which transcription factors mediate their regulation. The method introduced here clears a path for fully characterizing the regulatory genome of model organisms, with the potential of moving on to an array of other microbes of ecological and medical relevance.
1908.05370
Natsuko Rivera Ms
Natsuko Rivera-Yoshida, Alejandra Hernandez-Teran, Ana E. Escalante, Mariana Benitez
Laboratory biases hinder Eco-Evo-Devo integration: hints from the microworld
29 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How specific environmental contexts contribute to the robustness and variation of developmental trajectories and evolutionary transitions is a central point in Eco-Evo-Devo. However, the articulation of ecological, evolutionary and developmental processes into integrative frameworks has been elusive, partly because standard experimental designs neglect or oversimplify ecologically meaningful contexts. Microbial models are useful to expose and discuss two possible sources of bias associated with gene-centered experimental designs: the use of laboratory strains and laboratory environmental conditions. We illustrate our point by showing how contrasting developmental phenotypes in Myxococcus xanthus depend on the joint variation of temperature and substrate stiffness. Microorganismal development can provide key information for better understanding the role of environmental conditions in the evolution of developmental variation, and to overcome some of the limitations associated with current experimental approaches.
[ { "created": "Wed, 14 Aug 2019 23:05:08 GMT", "version": "v1" } ]
2019-08-16
[ [ "Rivera-Yoshida", "Natsuko", "" ], [ "Hernandez-Teran", "Alejandra", "" ], [ "Escalante", "Ana E.", "" ], [ "Benitez", "Mariana", "" ] ]
How specific environmental contexts contribute to the robustness and variation of developmental trajectories and evolutionary transitions is a central point in Eco-Evo-Devo. However, the articulation of ecological, evolutionary and developmental processes into integrative frameworks has been elusive, partly because standard experimental designs neglect or oversimplify ecologically meaningful contexts. Microbial models are useful to expose and discuss two possible sources of bias associated with gene-centered experimental designs: the use of laboratory strains and laboratory environmental conditions. We illustrate our point by showing how contrasting developmental phenotypes in Myxococcus xanthus depend on the joint variation of temperature and substrate stiffness. Microorganismal development can provide key information for better understanding the role of environmental conditions in the evolution of developmental variation, and to overcome some of the limitations associated with current experimental approaches.
1603.04518
Matthew Holden
Matthew Holden and Stephen Ellner
Human judgment vs. theoretical models for the management of ecological resources
Ecological Applications (2016)
Ecological Applications. Volume 26. Issue 5. Pages 1553-1565. (2016)
10.1890/15-1295
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to poor decisions. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and on equally invalid, and often unstated, assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this paper, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game, and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with all underlying parameter values known (a control); (2) the same model, but with unknown parameter values that must be estimated during the game from observed data; (3) models that are structurally different from those used to simulate the population dynamics; and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66 percent of the time.
[ { "created": "Tue, 15 Mar 2016 01:04:24 GMT", "version": "v1" } ]
2017-09-05
[ [ "Holden", "Matthew", "" ], [ "Ellner", "Stephen", "" ] ]
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to poor decisions. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and on equally invalid, and often unstated, assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this paper, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game, and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with all underlying parameter values known (a control); (2) the same model, but with unknown parameter values that must be estimated during the game from observed data; (3) models that are structurally different from those used to simulate the population dynamics; and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66 percent of the time.
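A minimal sketch of the "optimize within a model" idea: for a logistic stock with proportional harvest, grid-searching the constant harvest rate recovers the analytic optimum $h=r/2$. The parameters are illustrative; the game in the paper uses richer, age-structured dynamics.

```python
import numpy as np

r, K = 0.8, 1000.0   # illustrative growth rate and carrying capacity

def long_run_yield(h, years=300):
    """Average annual catch under a constant harvest rate h."""
    n, total = K / 2, 0.0
    for _ in range(years):
        catch = h * n
        n = max(n + r * n * (1 - n / K) - catch, 0.0)
        total += catch
    return total / years

rates = np.linspace(0.0, r, 81)
best = rates[np.argmax([long_run_yield(h) for h in rates])]
print(f"best constant harvest rate ~ {best:.3f} (analytic optimum r/2 = {r/2:.3f})")
```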
2103.04979
Jitka Polechova
Jitka Polechov\'a, Kory D. Johnson, Pavel Payne, Alex Crozier, Mathias Beiglb\"ock, Pavel Plevka, Eva Schernhammer
Evidence suggests that SARS-CoV-2 rapid antigen tests provide benefits for epidemic control -- observations from Austrian schools
We have updated the article with recent data on viral loads in breakthrough infections and more information about testing efficacy, especially in children
null
10.1016/j.jclinepi.2022.01.002
null
q-bio.PE
http://creativecommons.org/licenses/by-sa/4.0/
Rapid antigen tests detect proteins at the surface of virus particles, identifying the disease during its infectious phase. In contrast, PCR tests detect viral genomes; they can thus diagnose COVID-19 before the infectious phase but also react to remnants of the virus genome, even weeks after live virus ceases to be detectable in the respiratory tract. Furthermore, the logistics for administering the tests are different, with rapid antigen tests being much easier to administer at scale. In this article, we discuss the relative advantages of the different testing procedures and summarise evidence showing that using antigen tests 2-3 times per week could become a powerful tool to suppress the COVID-19 pandemic. We also discuss the results of recent large-scale rapid antigen testing in Austrian schools. While our report on testing predates Delta, we have updated the review with recent data on viral loads in breakthrough infections and more information about testing efficacy, especially in children.
[ { "created": "Mon, 8 Mar 2021 18:57:48 GMT", "version": "v1" }, { "created": "Sat, 20 Mar 2021 14:15:28 GMT", "version": "v2" }, { "created": "Wed, 24 Mar 2021 16:45:18 GMT", "version": "v3" }, { "created": "Mon, 29 Mar 2021 15:50:48 GMT", "version": "v4" }, { "created": "Fri, 6 Aug 2021 10:46:28 GMT", "version": "v5" }, { "created": "Tue, 10 Aug 2021 17:51:53 GMT", "version": "v6" }, { "created": "Sun, 5 Dec 2021 12:16:49 GMT", "version": "v7" }, { "created": "Fri, 17 Dec 2021 14:06:25 GMT", "version": "v8" } ]
2022-01-21
[ [ "Polechová", "Jitka", "" ], [ "Johnson", "Kory D.", "" ], [ "Payne", "Pavel", "" ], [ "Crozier", "Alex", "" ], [ "Beiglböck", "Mathias", "" ], [ "Plevka", "Pavel", "" ], [ "Schernhammer", "Eva", "" ] ]
Rapid antigen tests detect proteins at the surface of virus particles, identifying the disease during its infectious phase. In contrast, PCR tests detect viral genomes; they can thus diagnose COVID-19 before the infectious phase but also react to remnants of the virus genome, even weeks after live virus ceases to be detectable in the respiratory tract. Furthermore, the logistics for administering the tests are different, with rapid antigen tests being much easier to administer at scale. In this article, we discuss the relative advantages of the different testing procedures and summarise evidence showing that using antigen tests 2-3 times per week could become a powerful tool to suppress the COVID-19 pandemic. We also discuss the results of recent large-scale rapid antigen testing in Austrian schools. While our report on testing predates Delta, we have updated the review with recent data on viral loads in breakthrough infections and more information about testing efficacy, especially in children.
2003.03232
Alexei Vazquez
Alexei Vazquez
The colon-pile
4 pages, 5 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bacteria populate the colon, where they replicate and migrate in response to nutrient availability. Here I model the colon bacterial population as a sandpile model, the colon-pile. Sand addition mimics bacterial replication, and grain toppling represents bacterial migration coupled to high population density. The numerical simulations reveal a behaviour similar to non-conservative sandpile models, approaching a critical state with system-wide avalanches when the death rate becomes negligible. Estimation of the critical exponents indicates that the colon-pile belongs to a new universality class. This work suggests that the colon microbiome is in a self-organised critical state, where small perturbations can trigger large-scale rearrangements, covering an area comparable to the system size and characterised by a 1/f noise spectrum.
[ { "created": "Fri, 6 Mar 2020 14:18:03 GMT", "version": "v1" } ]
2020-03-09
[ [ "Vazquez", "Alexei", "" ] ]
Bacteria populate the colon, where they replicate and migrate in response to nutrient availability. Here I model the colon bacterial population as a sandpile model, the colon-pile. Sand addition mimics bacterial replication, and grain toppling represents bacterial migration coupled to high population density. The numerical simulations reveal a behaviour similar to non-conservative sandpile models, approaching a critical state with system-wide avalanches when the death rate becomes negligible. Estimation of the critical exponents indicates that the colon-pile belongs to a new universality class. This work suggests that the colon microbiome is in a self-organised critical state, where small perturbations can trigger large-scale rearrangements, covering an area comparable to the system size and characterised by a 1/f noise spectrum.
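A minimal non-conservative sandpile in the spirit described above: grain addition plays the role of replication, toppling the role of density-driven migration, and a small per-grain loss the role of death. The specific dissipation rule below is a generic choice, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
L, THRESH, DEATH = 32, 4, 0.01   # lattice size, toppling threshold, death rate
grid = np.zeros((L, L), dtype=int)
sizes = []

for _ in range(20_000):
    i, j = rng.integers(L, size=2)
    grid[i, j] += 1                                    # a replication event
    size = 0
    while (over := np.argwhere(grid >= THRESH)).size:  # density-driven migration
        for i, j in over:
            grid[i, j] -= THRESH
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                # a migrating grain survives with prob 1 - DEATH and must stay
                # on the lattice (edges dissipate, i.e. open boundaries)
                if 0 <= ni < L and 0 <= nj < L and rng.random() > DEATH:
                    grid[ni, nj] += 1
    sizes.append(size)

s = np.array(sizes[1000:])                             # drop the transient
print("mean avalanche size:", s.mean(), " max:", s.max())
```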
0801.0365
Sanzo Miyazawa
Sanzo Miyazawa and Akira R. Kinjo
Properties of contact matrices induced by pairwise interactions in proteins
Errata in DOI:10.1103/PhysRevE.77.051910 has been corrected in the present version
Physical Review E, 77, 051910, 2008
10.1103/PhysRevE.77.051910
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The total conformational energy is assumed to consist of pairwise interaction energies between atoms or residues, each of which is expressed as a product of a conformation-dependent function (an element of a contact matrix, C-matrix) and a sequence-dependent energy parameter (an element of a contact energy matrix, E-matrix). Such pairwise interactions in proteins force native C-matrices to be in a relationship as if the interactions were a Go-like potential [N. Go, Annu. Rev. Biophys. Bioeng. 12, 183 (1983)] for the native C-matrix, because the lower bound of the total energy function is equal to the total energy of the native conformation interacting in a Go-like pairwise potential. This relationship between C- and E-matrices corresponds to (a) a parallel relationship between the eigenvectors of the C- and E-matrices and a linear relationship between their eigenvalues, and (b) a parallel relationship between a contact number vector and the principal eigenvectors of the C- and E-matrices; the E-matrix is expanded in a series of eigenspaces with an additional constant term, which corresponds to a threshold of contact energy that approximately separates native contacts from non-native ones. These relationships are confirmed in 182 representatives from each family of the SCOP database by examining inner products between the principal eigenvector of the C-matrix, that of the E-matrix evaluated with a statistical contact potential, and a contact number vector. In addition, the spectral representation of C- and E-matrices reveals that pairwise residue-residue interactions, which depend only on the types of interacting amino acids but not on other residues in a protein, are insufficient, and that other interactions, including residue connectivities and steric hindrance, are needed to make native structures the unique lowest-energy conformations.
[ { "created": "Wed, 2 Jan 2008 10:37:16 GMT", "version": "v1" }, { "created": "Tue, 22 Jan 2008 06:43:43 GMT", "version": "v2" }, { "created": "Wed, 31 Aug 2011 07:53:25 GMT", "version": "v3" } ]
2011-09-01
[ [ "Miyazawa", "Sanzo", "" ], [ "Kinjo", "Akira R.", "" ] ]
The total conformational energy is assumed to consist of pairwise interaction energies between atoms or residues, each of which is expressed as a product of a conformation-dependent function (an element of a contact matrix, C-matrix) and a sequence-dependent energy parameter (an element of a contact energy matrix, E-matrix). Such pairwise interactions in proteins force native C-matrices to be in a relationship as if the interactions were a Go-like potential [N. Go, Annu. Rev. Biophys. Bioeng. 12, 183 (1983)] for the native C-matrix, because the lower bound of the total energy function is equal to the total energy of the native conformation interacting in a Go-like pairwise potential. This relationship between C- and E-matrices corresponds to (a) a parallel relationship between the eigenvectors of the C- and E-matrices and a linear relationship between their eigenvalues, and (b) a parallel relationship between a contact number vector and the principal eigenvectors of the C- and E-matrices; the E-matrix is expanded in a series of eigenspaces with an additional constant term, which corresponds to a threshold of contact energy that approximately separates native contacts from non-native ones. These relationships are confirmed in 182 representatives from each family of the SCOP database by examining inner products between the principal eigenvector of the C-matrix, that of the E-matrix evaluated with a statistical contact potential, and a contact number vector. In addition, the spectral representation of C- and E-matrices reveals that pairwise residue-residue interactions, which depend only on the types of interacting amino acids but not on other residues in a protein, are insufficient, and that other interactions, including residue connectivities and steric hindrance, are needed to make native structures the unique lowest-energy conformations.
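One of the stated relationships, the near-parallelism between the principal eigenvector of a contact matrix and the contact number vector, can be checked numerically on synthetic coordinates, as below; the paper of course uses real SCOP structures and a statistical contact potential for the E-matrix.

```python
import numpy as np

rng = np.random.default_rng(7)
n, cutoff = 120, 2.0
xyz = rng.normal(size=(n, 3)) * 2.0         # synthetic 'CA' coordinates
d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
C = ((d < cutoff) & (d > 0)).astype(float)  # contact matrix (C-matrix)

w, v = np.linalg.eigh(C)
principal = np.abs(v[:, -1])                # eigenvector of largest eigenvalue
contact_number = C.sum(axis=1)              # per-residue contact counts

cos = principal @ contact_number / (
    np.linalg.norm(principal) * np.linalg.norm(contact_number))
print(f"cosine(principal eigenvector, contact number vector) = {cos:.3f}")
```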
1101.1892
Michael Yampolsky
Michael Yampolsky, Carolyn M. Salafia, Oleksandr Shlakhter, Danielle Haas, Barbara Eucker, John Thorp
Abnormality of the placental vasculature affects placental thickness
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our empirical modeling suggests that deformation of placental vascular growth is associated with abnormal placental chorionic surface shape. Altered chorionic surface shape is associated with lowered placental functional efficiency. We hypothesize that placentas with deformed chorionic surface vascular trees and reduced functional efficiency also have irregular vascular arborization that is reflected in increased variability of placental thickness and a lower mean thickness. We find that non-centrality of the umbilical cord insertion is strongly and significantly correlated with disk thickness (Spearman's rho=0.128, p=0.002). Deformed shape is strongly and significantly associated with lower overall thickness and higher variability of thickness, with beta between -0.173 and -0.254 (p<0.001). Both lower mean thickness and high variability of thickness are strongly correlated with higher beta (reduced placental efficiency) (p<0.001 and p=0.038, respectively). Greater thickness variability is correlated with higher beta independent of the other placental shape variables (p=0.004).
[ { "created": "Mon, 10 Jan 2011 17:21:26 GMT", "version": "v1" } ]
2011-01-11
[ [ "Yampolsky", "Michael", "" ], [ "Salafia", "Carolyn M.", "" ], [ "Shlakhter", "Oleksandr", "" ], [ "Haas", "Danielle", "" ], [ "Eucker", "Barbara", "" ], [ "Thorp", "John", "" ] ]
Our empirical modeling suggests that deformation of placental vascular growth is associated with abnormal placental chorionic surface shape. Altered chorionic surface shape is associated with lowered placental functional efficiency. We hypothesize that placentas with deformed chorionic surface vascular trees and reduced functional efficiency also have irregular vascular arborization that is reflected in increased variability of placental thickness and a lower mean thickness. We find that non-centrality of the umbilical cord insertion is strongly and significantly correlated with disk thickness (Spearman's rho=0.128, p=0.002). Deformed shape is strongly and significantly associated with lower overall thickness and higher variability of thickness, with beta between -0.173 and -0.254 (p<0.001). Both lower mean thickness and high variability of thickness are strongly correlated with higher beta (reduced placental efficiency) (p<0.001 and p=0.038, respectively). Greater thickness variability is correlated with higher beta independent of the other placental shape variables (p=0.004).
0906.3489
Andrieux David
David Andrieux and Takaaki Monnai
Firing Rate of Noisy Integrate-and-fire Neurons with Synaptic Current Dynamics
null
Physical Review E 80, 021933 (2009)
10.1103/PhysRevE.80.021933
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We derive analytical formulae for the firing rate of integrate-and-fire neurons endowed with realistic synaptic dynamics. In particular, we incorporate the possibility of multiple synaptic inputs as well as the effect of an absolute refractory period into the description.
[ { "created": "Thu, 18 Jun 2009 17:49:20 GMT", "version": "v1" } ]
2009-08-27
[ [ "Andrieux", "David", "" ], [ "Monnai", "Takaaki", "" ] ]
We derive analytical formulae for the firing rate of integrate-and-fire neurons endowed with realistic synaptic dynamics. In particular, we incorporate the possibility of multiple synaptic inputs as well as the effect of an absolute refractory period into the description.
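For reference, the classical diffusion-approximation (Siegert) firing rate for a leaky integrate-and-fire neuron driven by white noise can be evaluated as below; this is the textbook baseline, whereas the paper's contribution is extending such formulae to synaptic current dynamics. The parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def lif_rate(mu, sigma, tau_m=0.02, tau_ref=0.002,
             v_reset=0.010, v_thresh=0.020):
    """Siegert firing rate (Hz) of a white-noise-driven LIF neuron.

    mu, sigma: mean and s.d. of the free membrane potential (volts);
    all parameter values here are placeholders.
    """
    integrand = lambda u: np.exp(u ** 2) * (1.0 + erf(u))
    lo = (v_reset - mu) / sigma
    hi = (v_thresh - mu) / sigma
    integral, _ = quad(integrand, lo, hi)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

# Mean drive just below threshold: firing is entirely noise-driven.
print(f"{lif_rate(mu=0.018, sigma=0.004):.1f} Hz")
```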
1304.4620
Russell Dickson
Russell J. Dickson and Gregory B. Gloor
XORRO: Rapid Paired-End Read Overlapper
6 pages, 2 figures
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Computational analysis of next-generation sequencing data is outpaced by data generation in many cases. In one such case, paired-end reads can be produced by the Illumina sequencing method faster than they can be overlapped by downstream analysis. The advantages in read length and accuracy provided by overlapping paired-end reads demonstrate the need for software that efficiently solves this problem. Results: XORRO is an extremely efficient paired-end read overlapping program. XORRO can overlap millions of short paired-end reads in a few minutes. It uses 64-bit registers with a two-bit alphabet to represent sequences and performs comparisons using low-level logical operations such as XOR, AND, bit shifting and popcount. Conclusions: As of the writing of this manuscript, XORRO provides the fastest solution to the paired-end read overlap problem. XORRO is available for download at: sourceforge.net/projects/xorro-overlap/
[ { "created": "Tue, 16 Apr 2013 20:54:32 GMT", "version": "v1" } ]
2013-04-18
[ [ "Dickson", "Russell J.", "" ], [ "Gloor", "Gregory B.", "" ] ]
Background: Computational analysis of next-generation sequencing data is outpaced by data generation in many cases. In one such case, paired-end reads can be produced by the Illumina sequencing method faster than they can be overlapped by downstream analysis. The advantages in read length and accuracy provided by overlapping paired-end reads demonstrate the need for software that efficiently solves this problem. Results: XORRO is an extremely efficient paired-end read overlapping program. XORRO can overlap millions of short paired-end reads in a few minutes. It uses 64-bit registers with a two-bit alphabet to represent sequences and performs comparisons using low-level logical operations such as XOR, AND, bit shifting and popcount. Conclusions: As of the writing of this manuscript, XORRO provides the fastest solution to the paired-end read overlap problem. XORRO is available for download at: sourceforge.net/projects/xorro-overlap/
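The core trick is easy to reproduce: pack bases into two bits each, XOR the candidate overlap regions, and popcount the differing base pairs. The toy below uses Python integers in place of 64-bit registers and exact matching only; it illustrates the operations named in the abstract, not XORRO's actual implementation.

```python
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}   # two bits per base

def pack(seq):
    bits = 0
    for base in seq:
        bits = (bits << 2) | ENC[base]
    return bits

def mismatches(a, b, overlap):
    """Mismatch count between the tail of read a and the head of read b."""
    x = pack(a[-overlap:]) ^ pack(b[:overlap])  # XOR: differing bases -> nonzero 2-bit pairs
    count = 0
    while x:                                    # popcount over 2-bit groups
        if x & 3:
            count += 1
        x >>= 2
    return count

def best_overlap(a, b, min_len=4):
    """Longest exact tail/head overlap (real tools also allow mismatches)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if mismatches(a, b, k) == 0:
            return k
    return 0

r1 = "ACGTACGGTTAC"
r2 = "GGTTACCAGTAA"
print("overlap length:", best_overlap(r1, r2))   # -> 6 ('GGTTAC')
```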
1806.05013
Xiaochang Leng
Xiaochang Leng, Lindsey Davis, Xiaomin Deng, Tarek Shazly, Michael A. Sutton, Susan M. Lessner
An inverse analysis of cohesive zone model parameter values for human fibrous cap mode I tearing
26 pages, 7 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atherosclerotic plaque failure results from various pathophysiological events, with mode I tearing of the fibrous cap in the arterial wall having the potential to block the aortic lumen and correspondingly induce serious clinical conditions. The aim of this study was to quantify the interfacial strength and critical energy release rate of the fibrous tissue across its thickness. In this study, an inverse analysis method based on a finite element modeling and simulation approach is presented. A cohesive zone model (CZM) was applied to simulate the tearing of the fibrous cap tissue under uniaxial tensile tests along the circumferential direction. A fiber-reinforced hyperelastic model (Holzapfel-Gasser-Ogden, HGO) was implemented to characterize the mechanical properties of the bulk material. With the material parameter values of the HGO model obtained from the inverse analysis as input for the bulk material, the interfacial strength and critical energy release rate along the tearing path or failure zones are obtained through the same method as the material identification process for the HGO model. The results of this study characterize the tearing failure process of fibrous cap tissue.
[ { "created": "Wed, 13 Jun 2018 13:11:26 GMT", "version": "v1" } ]
2018-06-14
[ [ "Leng", "Xiaochang", "" ], [ "Davis", "Lindsey", "" ], [ "Deng", "Xiaomin", "" ], [ "Shazly", "Tarek", "" ], [ "Sutton", "Michael A.", "" ], [ "Lessner", "Susan M.", "" ] ]
Atherosclerotic plaque failure results from various pathophysiological events, with mode I tearing of the fibrous cap in the arterial wall having the potential to block the aortic lumen and correspondingly induce serious clinical conditions. The aim of this study was to quantify the interfacial strength and critical energy release rate of the fibrous tissue across its thickness. In this study, an inverse analysis method based on a finite element modeling and simulation approach is presented. A cohesive zone model (CZM) was applied to simulate the tearing of the fibrous cap tissue under uniaxial tensile tests along the circumferential direction. A fiber-reinforced hyperelastic model (Holzapfel-Gasser-Ogden, HGO) was implemented to characterize the mechanical properties of the bulk material. With the material parameter values of the HGO model obtained from the inverse analysis as input for the bulk material, the interfacial strength and critical energy release rate along the tearing path or failure zones are obtained through the same method as the material identification process for the HGO model. The results of this study characterize the tearing failure process of fibrous cap tissue.
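For concreteness, a common cohesive-zone choice is the bilinear traction-separation law, whose parameters are exactly the quantities fitted here: the interfacial strength and the critical energy release rate (the area under the curve). The sketch below uses placeholder numbers; the paper's specific CZM form may differ.

```python
import numpy as np

def bilinear_traction(delta, sigma_max=1.0, delta_0=0.1, delta_f=1.0):
    """Bilinear mode-I traction-separation law (placeholder parameters).

    Traction rises linearly to the interfacial strength sigma_max at
    separation delta_0, then softens linearly to zero at delta_f.
    """
    delta = np.asarray(delta, dtype=float)
    rising = sigma_max * delta / delta_0
    softening = sigma_max * (delta_f - delta) / (delta_f - delta_0)
    return np.where(delta < delta_0, rising, np.clip(softening, 0.0, None))

d = np.linspace(0.0, 1.2, 400)
t = bilinear_traction(d)
g_c = float(np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(d)))  # area under curve
print(f"critical energy release rate ~ {g_c:.3f} (theory: 0.5*sigma_max*delta_f = 0.5)")
```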
2110.12622
Rishabh Rishabh
Rishabh, Hadi Zadeh-Haghighi, Dennis Salahub, Christoph Simon
Radical pairs may explain reactive oxygen species-mediated effects of hypomagnetic field on neurogenesis
16 pages, 6 figures, 2 tables
null
null
null
q-bio.NC physics.bio-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exposure to a hypomagnetic field can affect biological processes. Recently, it has been observed that hypomagnetic field exposure can adversely affect adult hippocampal neurogenesis and hippocampus-dependent cognition in mice. In the same study, the role of reactive oxygen species (ROS) in hypomagnetic field effects was demonstrated. However, the mechanistic reasons behind this effect are not clear. This study proposes a radical pair mechanism based on a flavin-superoxide radical pair to explain the modulation of ROS production and the attenuation of adult hippocampal neurogenesis in a hypomagnetic field. The results of our calculations favor a singlet-born radical pair over a triplet-born radical pair. Our model predicts hypomagnetic field effects on the triplet/singlet yield of comparable strength to the effects observed in experimental studies on adult hippocampal neurogenesis. Our predictions are also in qualitative agreement with experimental results on superoxide concentration and other observed ROS effects. We also predict the effects of applied magnetic fields and of oxygen isotopic substitution on adult hippocampal neurogenesis. Our findings strengthen the idea that nature might harness quantum resources in the context of the brain.
[ { "created": "Mon, 25 Oct 2021 03:19:22 GMT", "version": "v1" } ]
2021-10-26
[ [ "Rishabh", "", "" ], [ "Zadeh-Haghighi", "Hadi", "" ], [ "Salahub", "Dennis", "" ], [ "Simon", "Christoph", "" ] ]
Exposures to a hypomagnetic field can affect biological processes. Recently, it has been observed that hypomagnetic field exposure can adversely affect adult hippocampal neurogenesis and hippocampus-dependent cognition in mice. In the same study, the role of reactive oxygen species (ROS) in hypomagnetic field effects has been demonstrated. However, the mechanistic reasons behind this effect are not clear. This study proposes a radical pair mechanism based on a flavin-superoxide radical pair to explain the modulation of ROS production and the attenuation of adult hippocampal neurogenesis in a hypomagnetic field. The results of our calculations favor a singlet-born radical pair over a triplet-born radical pair. Our model predicts hypomagnetic field effects on the triplet/singlet yield of comparable strength to the effects observed in experimental studies on adult hippocampal neurogenesis. Our predictions are also in qualitative agreement with experimental results on superoxide concentration and other observed ROS effects. We also predict the effects of applied magnetic fields and of oxygen isotopic substitution on adult hippocampal neurogenesis. Our findings strengthen the idea that nature might harness quantum resources in the context of the brain.
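Editor's note: the "triplet/singlet yield" referred to above has, in standard radical-pair theory, a textbook form that may orient readers; the symbols below follow common spin-chemistry conventions and are our assumptions, not notation taken from the paper.

\[
\Phi_S = k \int_0^\infty \langle P_S(t) \rangle \, e^{-kt}\, dt,
\qquad
\langle P_S(t) \rangle = \mathrm{Tr}\!\left[\hat{P}_S \, e^{-i\hat{H}t}\,\rho(0)\,e^{i\hat{H}t}\right],
\]

where $\hat{H}$ contains the hyperfine couplings and the Zeeman term $g\mu_B \mathbf{B}\cdot(\hat{\mathbf{S}}_1+\hat{\mathbf{S}}_2)$, $\hat{P}_S$ is the singlet projection operator, $\rho(0)$ encodes a singlet- or triplet-born pair, and $k$ is the recombination rate. A hypomagnetic field acts by suppressing the Zeeman term, which changes $\Phi_S$ and hence the predicted ROS balance.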
1712.00813
Simcha Srebnik
Boris Haimov and Simcha Srebnik
The Relation Between {\alpha}-Helical Conformation And Amyloidogenicity
null
null
10.1016/j.bpj.2018.03.019
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Amyloid fibrils are stable aggregates of misfolded proteins and polypeptides that are insoluble and resistant to protease activity. Abnormal formation of amyloid fibrils in vivo may lead to neurodegenerative disorders and other systemic amyloidoses such as Alzheimer's, Parkinson's, and atherosclerosis. Because of their clinical importance, amyloids are under intense scientific research. Amyloidogenic sequences of short polypeptide segments within proteins are responsible for the transformation of correctly folded proteins into parts of larger amyloid fibrils. The {\alpha}-helical secondary structure is believed to host many amyloidogenic sequences and to be a key player in different stages of the amyloidogenesis process. Most studies on amyloids focus on the role of amyloidogenic sequences. The focus of this study is the relation between amyloidogenicity and the structure of the amyloidogenic {\alpha}-helical sequence. We have previously shown that the {\alpha}-helical conformation may be expressed by two parameters ({\theta} and {\rho}) that form orthogonal coordinates based on the Ramachandran dihedrals ({\phi} and {\psi}) and provide an illuminating interpretation of the {\alpha}-helical conformation. By performing statistical analysis on {\alpha}-helical conformations found in the Protein Data Bank, an apparent relation between {\alpha}-helical conformation, as expressed by {\theta} and {\rho}, and amyloidogenicity is revealed. Remarkably, random amino acid sequences, whose helical structure was obtained from the most probable dihedral angles found in the PDB data, revealed the same dependence of amyloidogenicity, suggesting the importance of {\alpha}-helical structure as opposed to sequence.
[ { "created": "Sun, 3 Dec 2017 18:43:15 GMT", "version": "v1" } ]
2018-05-22
[ [ "Haimov", "Boris", "" ], [ "Srebnik", "Simcha", "" ] ]
Amyloid fibrils are stable aggregates of misfolded proteins and polypeptides that are insoluble and resistant to protease activity. Abnormal formation of amyloid fibrils in vivo may lead to neurodegenerative disorders and other systemic amyloidoses such as Alzheimer's, Parkinson's, and atherosclerosis. Because of their clinical importance, amyloids are under intense scientific research. Amyloidogenic sequences of short polypeptide segments within proteins are responsible for the transformation of correctly folded proteins into parts of larger amyloid fibrils. The {\alpha}-helical secondary structure is believed to host many amyloidogenic sequences and to be a key player in different stages of the amyloidogenesis process. Most studies on amyloids focus on the role of amyloidogenic sequences. The focus of this study is the relation between amyloidogenicity and the structure of the amyloidogenic {\alpha}-helical sequence. We have previously shown that the {\alpha}-helical conformation may be expressed by two parameters ({\theta} and {\rho}) that form orthogonal coordinates based on the Ramachandran dihedrals ({\phi} and {\psi}) and provide an illuminating interpretation of the {\alpha}-helical conformation. By performing statistical analysis on {\alpha}-helical conformations found in the Protein Data Bank, an apparent relation between {\alpha}-helical conformation, as expressed by {\theta} and {\rho}, and amyloidogenicity is revealed. Remarkably, random amino acid sequences, whose helical structure was obtained from the most probable dihedral angles found in the PDB data, revealed the same dependence of amyloidogenicity, suggesting the importance of {\alpha}-helical structure as opposed to sequence.
2402.07663
Calina M. Durbac
M.H. Duong, C.M. Durbac, T.A. Han
Cost optimisation of individual-based institutional reward incentives for promoting cooperation in finite populations
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the problem of cost optimisation of individual-based institutional incentives (reward, punishment, and hybrid) for guaranteeing a certain minimal level of cooperative behaviour in a well-mixed, finite population. In this scheme, the individuals in the population interact via cooperation dilemmas (Donation Game or Public Goods Game) in which institutional reward is carried out only if cooperation is not abundant enough (i.e., the number of cooperators is below a threshold $1\leq t\leq N-1$, where $N$ is the population size); similarly, institutional punishment is carried out only when defection is too abundant. We study analytically the case $t=1$ for the reward incentive, under the small-mutation-limit assumption and for two different initial states, showing that the cost function is always non-decreasing. We derive the neutral drift and strong selection limits as the intensity of selection tends to zero and infinity, respectively. We numerically investigate the problem for other values of $t$ and for population dynamics with arbitrary mutation rates.
[ { "created": "Mon, 12 Feb 2024 14:11:28 GMT", "version": "v1" }, { "created": "Sun, 21 Jul 2024 10:29:57 GMT", "version": "v2" }, { "created": "Mon, 29 Jul 2024 10:58:36 GMT", "version": "v3" } ]
2024-07-30
[ [ "Duong", "M. H.", "" ], [ "Durbac", "C. M.", "" ], [ "Han", "T. A.", "" ] ]
In this paper, we study the problem of cost optimisation of individual-based institutional incentives (reward, punishment, and hybrid) for guaranteeing a certain minimal level of cooperative behaviour in a well-mixed, finite population. In this scheme, the individuals in the population interact via cooperation dilemmas (Donation Game or Public Goods Game) in which institutional reward is carried out only if cooperation is not abundant enough (i.e., the number of cooperators is below a threshold $1\leq t\leq N-1$, where $N$ is the population size); similarly, institutional punishment is carried out only when defection is too abundant. We study analytically the case $t=1$ for the reward incentive, under the small-mutation-limit assumption and for two different initial states, showing that the cost function is always non-decreasing. We derive the neutral drift and strong selection limits as the intensity of selection tends to zero and infinity, respectively. We numerically investigate the problem for other values of $t$ and for population dynamics with arbitrary mutation rates.
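Editor's note: the threshold rule described above (pay reward only while the number of cooperators is below t) lends itself to a quick numerical illustration. The sketch below is a deliberately simplified Monte Carlo estimate that treats the cooperator count as i.i.d. across generations rather than evolving it under the paper's Moran-type dynamics; all parameter values are hypothetical.

import random

def expected_reward_cost(N=50, t=25, theta=1.0, p_coop=0.4,
                         generations=10_000, seed=0):
    """Monte Carlo estimate of the mean per-generation reward cost.

    A per-capita reward of cost theta is paid to each cooperator, but only
    in generations where the number of cooperators falls below threshold t,
    mirroring the individual-based institutional incentive scheme above.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(generations):
        k = sum(rng.random() < p_coop for _ in range(N))  # cooperator count
        if k < t:                                         # cooperation too scarce
            total += theta * k                            # reward each cooperator
    return total / generations

print(expected_reward_cost())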
2312.06100
Zachary Kilpatrick PhD
Sage Shaw and Zachary P Kilpatrick
Representing stimulus motion with waves in adaptive neural fields
31 pages, 6 figures
null
null
null
q-bio.NC nlin.PS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli are well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
[ { "created": "Mon, 11 Dec 2023 04:06:27 GMT", "version": "v1" } ]
2023-12-12
[ [ "Shaw", "Sage", "" ], [ "Kilpatrick", "Zachary P", "" ] ]
Traveling waves of neural activity emerge in cortical networks both spontaneously and in response to stimuli. The spatiotemporal structure of waves can indicate the information they encode and the physiological processes that sustain them. Here, we investigate the stimulus-response relationships of traveling waves emerging in adaptive neural fields as a model of visual motion processing. Neural field equations model the activity of cortical tissue as a continuum excitable medium, and adaptive processes provide negative feedback, generating localized activity patterns. Synaptic connectivity in our model is described by an integral kernel that weakens dynamically due to activity-dependent synaptic depression, leading to marginally stable traveling fronts (with attenuated backs) or pulses of a fixed speed. Our analysis quantifies how weak stimuli shift the relative position of these waves over time, characterized by a wave response function we obtain perturbatively. Persistent and continuously visible stimuli model moving visual objects. Intermittent flashes that hop across visual space can produce the experience of smooth apparent visual motion. Entrainment of waves to both kinds of moving stimuli are well characterized by our theory and numerical simulations, providing a mechanistic description of the perception of visual motion.
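Editor's note: a minimal Euler/FFT simulation of a 1D neural field with activity-dependent synaptic depression, the class of model described above, is sketched below. The exponential kernel, Heaviside firing function, and all parameter values are generic assumptions rather than the paper's exact choices.

import numpy as np

# 1D adaptive neural field with synaptic depression (illustrative parameters)
L, n, dt, T = 20.0, 512, 0.01, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
w = 0.5 * np.exp(-np.abs(x))                  # exponential synaptic kernel
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx  # kernel transform for circular convolution
tau_q, beta, theta = 5.0, 2.0, 0.3            # depression timescale/strength, firing threshold
f = lambda u: (u > theta).astype(float)       # Heaviside firing-rate function

u = np.exp(-x ** 2)                           # initial activity bump
q = np.ones(n)                                # synaptic resources (1 = fully recovered)
for _ in range(int(T / dt)):
    drive = np.real(np.fft.ifft(w_hat * np.fft.fft(q * f(u))))  # w * (q f(u))
    u += dt * (-u + drive)                                      # activity dynamics
    q += dt * ((1.0 - q) / tau_q - beta * q * f(u))             # resource depletion/recovery

print("peak activity after T:", u.max())

Depression weakens the connectivity behind the advancing activity, which is what turns stationary bumps into the traveling fronts and pulses the abstract analyzes; a weak moving stimulus would be added as an extra input term in the u equation.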
2005.03093
Marcus Kaiser
Marcus Kaiser
Functional compensation after lesions: Predicting site and extent of recovery
Technical Report
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In some cases, the function of a lesioned area can be compensated for by another area. However, it remains unpredictable whether and by which other area a lesion can be compensated. We assume that similar incoming and outgoing connections are necessary to encode the same function as the damaged region. This similarity can be measured both locally, using the matching index, and at a more global scale, using non-metric multidimensional scaling (NMDS). We tested how well both measures can predict the compensating area for the loss of the visual cortex in kittens. For this case study, the global comparison of connectivity turns out to be the better method for predicting functional compensation. In future studies, the extent of the similarity between the lesioned and compensating regions might be a measure of the extent to which function can be successfully recovered.
[ { "created": "Wed, 6 May 2020 19:29:49 GMT", "version": "v1" } ]
2020-05-08
[ [ "Kaiser", "Marcus", "" ] ]
In some cases, the function of a lesioned area can be compensated for by another area. However, it remains unpredictable whether and by which other area a lesion can be compensated. We assume that similar incoming and outgoing connections are necessary to encode the same function as the damaged region. This similarity can be measured both locally, using the matching index, and at a more global scale, using non-metric multidimensional scaling (NMDS). We tested how well both measures can predict the compensating area for the loss of the visual cortex in kittens. For this case study, the global comparison of connectivity turns out to be the better method for predicting functional compensation. In future studies, the extent of the similarity between the lesioned and compensating regions might be a measure of the extent to which function can be successfully recovered.
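Editor's note: one common definition of the matching index used above, for directed binary connectivity, is sketched below. Normalization conventions vary across the literature, so this is an assumption rather than necessarily the exact variant used in the report.

def matching_index(A, i, j):
    """Matching index of regions i and j in a directed binary adjacency
    matrix A (A[u][v] = 1 if region u projects to region v): the fraction
    of incoming and outgoing connections shared by the two regions,
    excluding their direct connections to each other."""
    n = len(A)
    others = [k for k in range(n) if k not in (i, j)]
    shared = sum((A[i][k] and A[j][k]) + (A[k][i] and A[k][j]) for k in others)
    total = sum((A[i][k] or A[j][k]) + (A[k][i] or A[k][j]) for k in others)
    return shared / total if total else 0.0

# Toy 4-region network with hypothetical connectivity
A = [[0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 0, 1],
     [0, 1, 0, 0]]
print(matching_index(A, 0, 1))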
1902.00249
Mohammed AlQuraishi
Mohammed AlQuraishi
ProteinNet: a standardized data set for machine learning of protein structure
8 pages, 6 figures, 1 table
null
null
null
q-bio.BM cs.LG q-bio.QM stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Rapid progress in deep learning has spurred its application to bioinformatics problems including protein structure prediction and design. In classic machine learning problems like computer vision, progress has been driven by standardized data sets that facilitate fair assessment of new methods and lower the barrier to entry for non-domain experts. While data sets of protein sequence and structure exist, they lack certain components critical for machine learning, including high-quality multiple sequence alignments and insulated training / validation splits that account for deep but only weakly detectable homology across protein space. We have created the ProteinNet series of data sets to provide a standardized mechanism for training and assessing data-driven models of protein sequence-structure relationships. ProteinNet integrates sequence, structure, and evolutionary information in programmatically accessible file formats tailored for machine learning frameworks. Multiple sequence alignments of all structurally characterized proteins were created using substantial high-performance computing resources. Standardized data splits were also generated to emulate the difficulty of past CASP (Critical Assessment of protein Structure Prediction) experiments by resetting protein sequence and structure space to the historical states that preceded six prior CASPs. Utilizing sensitive evolution-based distance metrics to segregate distantly related proteins, we have additionally created validation sets distinct from the official CASP sets that faithfully mimic their difficulty. ProteinNet thus represents a comprehensive and accessible resource for training and assessing machine-learned models of protein structure.
[ { "created": "Fri, 1 Feb 2019 09:43:50 GMT", "version": "v1" } ]
2019-02-04
[ [ "AlQuraishi", "Mohammed", "" ] ]
Rapid progress in deep learning has spurred its application to bioinformatics problems including protein structure prediction and design. In classic machine learning problems like computer vision, progress has been driven by standardized data sets that facilitate fair assessment of new methods and lower the barrier to entry for non-domain experts. While data sets of protein sequence and structure exist, they lack certain components critical for machine learning, including high-quality multiple sequence alignments and insulated training / validation splits that account for deep but only weakly detectable homology across protein space. We have created the ProteinNet series of data sets to provide a standardized mechanism for training and assessing data-driven models of protein sequence-structure relationships. ProteinNet integrates sequence, structure, and evolutionary information in programmatically accessible file formats tailored for machine learning frameworks. Multiple sequence alignments of all structurally characterized proteins were created using substantial high-performance computing resources. Standardized data splits were also generated to emulate the difficulty of past CASP (Critical Assessment of protein Structure Prediction) experiments by resetting protein sequence and structure space to the historical states that preceded six prior CASPs. Utilizing sensitive evolution-based distance metrics to segregate distantly related proteins, we have additionally created validation sets distinct from the official CASP sets that faithfully mimic their difficulty. ProteinNet thus represents a comprehensive and accessible resource for training and assessing machine-learned models of protein structure.
1404.1626
Andrei Zinovyev Dr.
Andrei Zinovyev
Dealing with complexity of biological systems: from data to models
HDR m\'emoire (habilitation thesis) defended on the 04/04/2014
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Four chapters of the synthesis represent four major areas of my research interests: 1) data analysis in molecular biology, 2) mathematical modeling of biological networks, 3) genome evolution, and 4) cancer systems biology. The first chapter is devoted to my work in developing non-linear methods of dimension reduction (the methods of elastic maps and principal trees), which extend the classical method of principal components. I also present applications of matrix factorization techniques to the analysis of cancer data. The second chapter is devoted to the complexity of mathematical models in molecular biology. I describe the basic ideas of the asymptotology of chemical reaction networks, which aims at dissecting and simplifying complex chemical kinetics models. Two applications of this approach are presented: to modeling the NFkB and apoptosis pathways, and to modeling mechanisms of miRNA action on protein translation. The third chapter briefly describes my investigations of genome structure in different organisms (from microbes to human cancer genomes). Unsupervised data analysis approaches are used to investigate the patterns in genomic sequences shaped by genome evolution and influenced by the basic properties of the environment. The fourth chapter summarizes my experience in studying cancer by computational methods (through combining integrative data analysis and mathematical modeling approaches). In particular, I describe ongoing research projects such as mathematical modeling of cell fate decisions and of synthetic lethal interactions in the DNA repair network. The synthesis is concluded by listing major challenges in computational systems biology connected to the topics of this text, i.e., dealing with the complexity of biological systems.
[ { "created": "Sun, 6 Apr 2014 21:46:30 GMT", "version": "v1" } ]
2014-04-08
[ [ "Zinovyev", "Andrei", "" ] ]
Four chapters of the synthesis represent four major areas of my research interests: 1) data analysis in molecular biology, 2) mathematical modeling of biological networks, 3) genome evolution, and 4) cancer systems biology. The first chapter is devoted to my work in developing non-linear methods of dimension reduction (the methods of elastic maps and principal trees), which extend the classical method of principal components. I also present applications of matrix factorization techniques to the analysis of cancer data. The second chapter is devoted to the complexity of mathematical models in molecular biology. I describe the basic ideas of the asymptotology of chemical reaction networks, which aims at dissecting and simplifying complex chemical kinetics models. Two applications of this approach are presented: to modeling the NFkB and apoptosis pathways, and to modeling mechanisms of miRNA action on protein translation. The third chapter briefly describes my investigations of genome structure in different organisms (from microbes to human cancer genomes). Unsupervised data analysis approaches are used to investigate the patterns in genomic sequences shaped by genome evolution and influenced by the basic properties of the environment. The fourth chapter summarizes my experience in studying cancer by computational methods (through combining integrative data analysis and mathematical modeling approaches). In particular, I describe ongoing research projects such as mathematical modeling of cell fate decisions and of synthetic lethal interactions in the DNA repair network. The synthesis is concluded by listing major challenges in computational systems biology connected to the topics of this text, i.e., dealing with the complexity of biological systems.
2206.06862
Jakub Kaczmarzyk
Jakub R. Kaczmarzyk, Tahsin M. Kurc, Shahira Abousamra, Rajarsi Gupta, Joel H. Saltz, Peter K. Koo
Evaluating histopathology transfer learning with ChampKit
Submitted to NeurIPS 2022 Track on Datasets and Benchmarks. Source code available at https://github.com/kaczmarj/champkit
null
10.1016/j.cmpb.2023.107631
null
q-bio.QM cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Histopathology remains the gold standard for diagnosis of various cancers. Recent advances in computer vision, specifically deep learning, have facilitated the analysis of histopathology images for various tasks, including immune cell detection and microsatellite instability classification. The state-of-the-art for each task often employs base architectures that have been pretrained for image classification on ImageNet. The standard approach to develop classifiers in histopathology tends to focus narrowly on optimizing models for a single task, not considering the aspects of modeling innovations that improve generalization across tasks. Here we present ChampKit (Comprehensive Histopathology Assessment of Model Predictions toolKit): an extensible, fully reproducible benchmarking toolkit that consists of a broad collection of patch-level image classification tasks across different cancers. ChampKit enables a way to systematically document the performance impact of proposed improvements in models and methodology. ChampKit source code and data are freely accessible at https://github.com/kaczmarj/champkit .
[ { "created": "Tue, 14 Jun 2022 14:00:17 GMT", "version": "v1" } ]
2023-11-02
[ [ "Kaczmarzyk", "Jakub R.", "" ], [ "Kurc", "Tahsin M.", "" ], [ "Abousamra", "Shahira", "" ], [ "Gupta", "Rajarsi", "" ], [ "Saltz", "Joel H.", "" ], [ "Koo", "Peter K.", "" ] ]
Histopathology remains the gold standard for diagnosis of various cancers. Recent advances in computer vision, specifically deep learning, have facilitated the analysis of histopathology images for various tasks, including immune cell detection and microsatellite instability classification. The state-of-the-art for each task often employs base architectures that have been pretrained for image classification on ImageNet. The standard approach to develop classifiers in histopathology tends to focus narrowly on optimizing models for a single task, not considering the aspects of modeling innovations that improve generalization across tasks. Here we present ChampKit (Comprehensive Histopathology Assessment of Model Predictions toolKit): an extensible, fully reproducible benchmarking toolkit that consists of a broad collection of patch-level image classification tasks across different cancers. ChampKit enables a way to systematically document the performance impact of proposed improvements in models and methodology. ChampKit source code and data are freely accessible at https://github.com/kaczmarj/champkit .
2401.03390
Mansooreh Montazerin
Majd Al Aawar, Srikar Mutnuri, Mansooreh Montazerin, Ajitesh Srivastava
Dynamics-based Feature Augmentation of Graph Neural Networks for Variant Emergence Prediction
null
null
null
null
q-bio.PE cs.LG physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
During the COVID-19 pandemic, a major driver of new surges has been the emergence of new variants. When a new variant emerges in one or more countries, other nations monitor its spread in preparation for its potential arrival. The impact of the new variant and the timings of epidemic peaks in a country highly depend on when the variant arrives. The current methods for predicting the spread of new variants rely on statistical modeling; however, these methods work only when the new variant has already arrived in the region of interest and has a significant prevalence. Can we predict when a variant existing elsewhere will arrive in a given region? To address this question, we propose a variant-dynamics-informed Graph Neural Network (GNN) approach. First, we derive the dynamics of variant prevalence across pairs of regions (countries) that apply to a large class of epidemic models. The dynamics motivate the introduction of certain features in the GNN. We demonstrate that our proposed dynamics-informed GNN outperforms all the baselines, including the currently pervasive framework of Physics-Informed Neural Networks (PINNs). To advance research in this area, we introduce a benchmarking tool to assess a user-defined model's prediction performance across 87 countries and 36 variants.
[ { "created": "Sun, 7 Jan 2024 05:03:30 GMT", "version": "v1" }, { "created": "Wed, 29 May 2024 00:10:30 GMT", "version": "v2" } ]
2024-05-30
[ [ "Aawar", "Majd Al", "" ], [ "Mutnuri", "Srikar", "" ], [ "Montazerin", "Mansooreh", "" ], [ "Srivastava", "Ajitesh", "" ] ]
During the COVID-19 pandemic, a major driver of new surges has been the emergence of new variants. When a new variant emerges in one or more countries, other nations monitor its spread in preparation for its potential arrival. The impact of the new variant and the timings of epidemic peaks in a country highly depend on when the variant arrives. The current methods for predicting the spread of new variants rely on statistical modeling; however, these methods work only when the new variant has already arrived in the region of interest and has a significant prevalence. Can we predict when a variant existing elsewhere will arrive in a given region? To address this question, we propose a variant-dynamics-informed Graph Neural Network (GNN) approach. First, we derive the dynamics of variant prevalence across pairs of regions (countries) that apply to a large class of epidemic models. The dynamics motivate the introduction of certain features in the GNN. We demonstrate that our proposed dynamics-informed GNN outperforms all the baselines, including the currently pervasive framework of Physics-Informed Neural Networks (PINNs). To advance research in this area, we introduce a benchmarking tool to assess a user-defined model's prediction performance across 87 countries and 36 variants.
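Editor's note: the paper's derived pairwise dynamics are not reproduced in the abstract, but a generic selection-plus-migration form for the prevalence $x_i$ of a new variant in region $i$ illustrates the kind of equation that can motivate GNN features:

\[
\dot{x}_i = s\,x_i(1-x_i) + \sum_{j\neq i} m_{ji}\,(x_j - x_i),
\]

where $s$ is the variant's transmissibility advantage and $m_{ji}$ an effective travel coupling from region $j$ to region $i$. This is our illustrative assumption, not the paper's exact derivation; aggregates such as the coupling-weighted upstream prevalences $\sum_j m_{ji} x_j$ are the sort of dynamics-based feature augmentation the title refers to.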
1006.0825
Alex Bladon
Alex J. Bladon, Tobias Galla, Alan J. McKane
Evolutionary dynamics, intrinsic noise and cycles of co-operation
14 pages, 12 figures, accepted for publication by Phys. Rev. E
Phys. Rev. E 81, 066122 (2010)
10.1103/PhysRevE.81.066122
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use analytical techniques based on an expansion in the inverse system size to study the stochastic evolutionary dynamics of finite populations of players interacting in a repeated prisoner's dilemma game. We show that a mechanism of amplification of demographic noise can give rise to coherent oscillations in parameter regimes where deterministic descriptions converge to fixed points with complex eigenvalues. These quasi-cycles between co-operation and defection have previously been observed in computer simulations; here we provide a systematic and comprehensive analytical characterization of their properties. We are able to predict their power spectra as a function of the mutation rate and other model parameters, and to compare the relative magnitude of the cycles induced by different types of underlying microscopic dynamics. We also extend our analysis to the iterated prisoner's dilemma game with a win-stay lose-shift strategy, appropriate in situations where players are subject to errors of the trembling-hand type.
[ { "created": "Fri, 4 Jun 2010 09:47:17 GMT", "version": "v1" } ]
2012-04-20
[ [ "Bladon", "Alex J.", "" ], [ "Galla", "Tobias", "" ], [ "McKane", "Alan J.", "" ] ]
We use analytical techniques based on an expansion in the inverse system size to study the stochastic evolutionary dynamics of finite populations of players interacting in a repeated prisoner's dilemma game. We show that a mechanism of amplification of demographic noise can give rise to coherent oscillations in parameter regimes where deterministic descriptions converge to fixed points with complex eigenvalues. These quasi-cycles between co-operation and defection have previously been observed in computer simulations; here we provide a systematic and comprehensive analytical characterization of their properties. We are able to predict their power spectra as a function of the mutation rate and other model parameters, and to compare the relative magnitude of the cycles induced by different types of underlying microscopic dynamics. We also extend our analysis to the iterated prisoner's dilemma game with a win-stay lose-shift strategy, appropriate in situations where players are subject to errors of the trembling-hand type.
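Editor's note: in the system-size expansion, the power spectrum of fluctuations about a stable fixed point generically takes a damped-driven form that may orient readers:

\[
P(\omega) = \frac{\alpha + \beta\,\omega^{2}}{\left(\omega^{2}-\Omega_{0}^{2}\right)^{2} + \Gamma^{2}\,\omega^{2}},
\]

where $\Omega_0$ and $\Gamma$ derive from the eigenvalues of the Jacobian of the deterministic dynamics and $\alpha,\beta$ from the noise covariance; a sharp peak near $\Omega_0$ is the signature of amplified quasi-cycles. The constants are model-specific, so this is the general template rather than the paper's explicit result for the prisoner's dilemma dynamics.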
2406.12108
Alexander Titus
Samuel A. Donkor, Matthew E. Walsh, and Alexander J. Titus
Computing in the Life Sciences: From Early Algorithms to Modern AI
53 pages, 4 figures, 10 tables
null
null
null
q-bio.OT cs.AI
http://creativecommons.org/licenses/by/4.0/
Computing in the life sciences has undergone a transformative evolution, from early computational models in the 1950s to the applications of artificial intelligence (AI) and machine learning (ML) seen today. This paper highlights key milestones and technological advancements through the historical development of computing in the life sciences. The discussion includes the inception of computational models for biological processes, the advent of bioinformatics tools, and the integration of AI/ML in modern life sciences research. Attention is given to AI-enabled tools used in the life sciences, such as scientific large language models and bio-AI tools, examining their capabilities, limitations, and impact on biological risk. This paper seeks to clarify and establish essential terminology and concepts to ensure informed decision-making and effective communication across disciplines.
[ { "created": "Mon, 17 Jun 2024 21:36:52 GMT", "version": "v1" }, { "created": "Wed, 19 Jun 2024 03:54:28 GMT", "version": "v2" } ]
2024-06-21
[ [ "Donkor", "Samuel A.", "" ], [ "Walsh", "Matthew E.", "" ], [ "Titus", "Alexander J.", "" ] ]
Computing in the life sciences has undergone a transformative evolution, from early computational models in the 1950s to the applications of artificial intelligence (AI) and machine learning (ML) seen today. This paper highlights key milestones and technological advancements through the historical development of computing in the life sciences. The discussion includes the inception of computational models for biological processes, the advent of bioinformatics tools, and the integration of AI/ML in modern life sciences research. Attention is given to AI-enabled tools used in the life sciences, such as scientific large language models and bio-AI tools, examining their capabilities, limitations, and impact on biological risk. This paper seeks to clarify and establish essential terminology and concepts to ensure informed decision-making and effective communication across disciplines.
1608.01473
Changbong Hyeon
Yoonji Lee, Songmi Kim, Sun Choi, Changbong Hyeon
Ultraslow water-mediated transmembrane interactions regulate the activation of A$_{\text{2A}}$ adenosine receptor
21 pages, 14 figures
Biophys. J. (2016) vol. 111, 1180-1191
10.1016/j.bpj.2016.08.002
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Water molecules inside G-protein coupled receptors have recently been spotlighted in a series of crystal structures. To decipher the dynamics and functional roles of internal waters in GPCR activity, we studied the A$_{\text{2A}}$ adenosine receptor using $\mu$sec-molecular dynamics simulations. Our study finds that the amount of water flux across the transmembrane (TM) domain varies depending on the receptor state, and that the water molecules of the TM channel in the active state flow three times slower than those in the inactive state. Depending on the location in the solvent-protein interface as well as the receptor state, the average residence time of water at each residue varies from $\sim\mathcal{O}(10^2)$ psec to $\sim\mathcal{O}(10^2)$ nsec. In particular, water molecules exhibiting ultraslow relaxation ($\sim\mathcal{O}(10^2)$ nsec) in the active state are found around the microswitch residues that are considered activity hotspots for GPCR function. A continuous allosteric network spanning the TM domain, arising from water-mediated contacts, is unique to the active state, underscoring the importance of slow waters in GPCR activation.
[ { "created": "Thu, 4 Aug 2016 09:11:50 GMT", "version": "v1" } ]
2017-01-04
[ [ "Lee", "Yoonji", "" ], [ "Kim", "Songmi", "" ], [ "Choi", "Sun", "" ], [ "Hyeon", "Changbong", "" ] ]
Water molecules inside G-protein coupled receptors have recently been spotlighted in a series of crystal structures. To decipher the dynamics and functional roles of internal waters in GPCR activity, we studied the A$_{\text{2A}}$ adenosine receptor using $\mu$sec-molecular dynamics simulations. Our study finds that the amount of water flux across the transmembrane (TM) domain varies depending on the receptor state, and that the water molecules of the TM channel in the active state flow three times slower than those in the inactive state. Depending on the location in the solvent-protein interface as well as the receptor state, the average residence time of water at each residue varies from $\sim\mathcal{O}(10^2)$ psec to $\sim\mathcal{O}(10^2)$ nsec. In particular, water molecules exhibiting ultraslow relaxation ($\sim\mathcal{O}(10^2)$ nsec) in the active state are found around the microswitch residues that are considered activity hotspots for GPCR function. A continuous allosteric network spanning the TM domain, arising from water-mediated contacts, is unique to the active state, underscoring the importance of slow waters in GPCR activation.
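Editor's note: residence times of the kind quoted above are typically extracted from binary occupancy traces along the MD trajectory; a minimal numpy sketch of that bookkeeping follows. The trace, frame spacing, and site definition below are hypothetical.

import numpy as np

def mean_residence_time(occupancy, dt):
    """Mean residence time from a binary occupancy trace.

    occupancy : 1D array, 1 while a water molecule occupies a site
                near a residue, 0 otherwise (e.g. from an MD trajectory)
    dt        : time between frames
    """
    occ = np.asarray(occupancy, dtype=int)
    edges = np.diff(np.concatenate(([0], occ, [0])))  # +1 at visit start, -1 at end
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    dwell = (ends - starts) * dt                      # duration of each visit
    return dwell.mean() if dwell.size else 0.0

# Toy trace: two visits of 3 and 5 frames, 10 ps between frames
trace = [0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0]
print(mean_residence_time(trace, dt=10.0), "ps")  # -> 40.0 ps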
2005.05295
Arthur Goldberg
Arthur P. Goldberg (1) and David R. Jefferson (2) and John A. P. Sekar (1) and Jonathan R. Karr (1) ((1) Icahn Institute for Data Science and Genomic Technology, and Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, (2) Lawrence Livermore National Laboratory)
Exact Parallelization of the Stochastic Simulation Algorithm for Scalable Simulation of Large Biochemical Networks
21 pages, 4 figures; 2020-05-20 submission: updated authors, affiliations, emails, acknowledgments and layout
null
null
null
q-bio.MN cs.DC cs.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comprehensive simulations of the entire biochemistry of cells have great potential to help physicians treat disease and help engineers design biological machines. But such simulations must model networks of millions of molecular species and reactions. The Stochastic Simulation Algorithm (SSA) is widely used for simulating biochemistry, especially systems with species populations small enough that discreteness and stochasticity play important roles. However, existing serial SSA methods are prohibitively slow for comprehensive networks, and existing parallel SSA methods, which use periodic synchronization, sacrifice accuracy. To enable fast, accurate, and scalable simulations of biochemistry, we present an exact parallel algorithm for SSA that partitions a biochemical network into many SSA processes that simulate in parallel. Our parallel SSA algorithm exactly coordinates the interactions among these SSA processes and the species state they share by structuring the algorithm as a parallel discrete event simulation (DES) application and using an optimistic parallel DES simulator to synchronize the interactions. We anticipate that our method will enable unprecedented biochemical simulations.
[ { "created": "Mon, 11 May 2020 17:56:21 GMT", "version": "v1" }, { "created": "Wed, 20 May 2020 21:27:01 GMT", "version": "v2" } ]
2020-05-22
[ [ "Goldberg", "Arthur P.", "" ], [ "Jefferson", "David R.", "" ], [ "Sekar", "John A. P.", "" ], [ "Karr", "Jonathan R.", "" ] ]
Comprehensive simulations of the entire biochemistry of cells have great potential to help physicians treat disease and help engineers design biological machines. But such simulations must model networks of millions of molecular species and reactions. The Stochastic Simulation Algorithm (SSA) is widely used for simulating biochemistry, especially systems with species populations small enough that discreteness and stochasticity play important roles. However, existing serial SSA methods are prohibitively slow for comprehensive networks, and existing parallel SSA methods, which use periodic synchronization, sacrifice accuracy. To enable fast, accurate, and scalable simulations of biochemistry, we present an exact parallel algorithm for SSA that partitions a biochemical network into many SSA processes that simulate in parallel. Our parallel SSA algorithm exactly coordinates the interactions among these SSA processes and the species state they share by structuring the algorithm as a parallel discrete event simulation (DES) application and using an optimistic parallel DES simulator to synchronize the interactions. We anticipate that our method will enable unprecedented biochemical simulations.
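Editor's note: for readers unfamiliar with the algorithm being parallelized, a minimal serial Gillespie direct-method SSA on a toy production/decay system is sketched below. It shows the baseline algorithm only, not the paper's parallel DES construction, and the reactions and rates are illustrative.

import math, random

def ssa_direct(x, kb, kd, t_end, seed=1):
    """Serial Gillespie direct method for a toy system:
    production  0 -> A   with constant propensity kb
    decay       A -> 0   with mass-action propensity kd * x
    """
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, x)]
    while t < t_end:
        a = [kb, kd * x]                           # reaction propensities
        a0 = a[0] + a[1]
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        if rng.random() * a0 < a[0]:               # choose a reaction proportionally
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

print(ssa_direct(x=10, kb=5.0, kd=0.3, t_end=5.0)[-1])

The paper's contribution is to partition a network of such reactions across many SSA processes and use optimistic parallel discrete event simulation to keep the shared species state exactly synchronized.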
1611.05080
Hugo Gabriel Eyherabide Dr
Hugo Gabriel Eyherabide
Neural stochastic codes, encoding and decoding
The additional material and some of the theorems have been integrated within the main results of the manuscript, and few typos have been corrected
null
null
null
q-bio.NC cs.IT math.IT q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding brain function, constructing computational models and engineering neural prosthetics require assessing two problems, namely encoding and decoding, but their relation remains controversial. For decades, the encoding problem has been shown to provide insight into the decoding problem, for example, by upper bounding the decoded information. However, here we show that this need not be the case when studying response aspects beyond noise correlations, and we trace back the actual causes of this major departure from traditional views. To that end, we reformulate the encoding and decoding problems from the observer or organism perspective. In addition, we study the role of spike-time precision and response discrimination, among other response aspects, using stochastic transformations of the neural responses, here called stochastic codes. Our results show that stochastic codes may cause different information losses when used to describe neural responses and when employed to train optimal decoders. Therefore, we conclude that response aspects beyond noise correlations may play different roles in encoding and decoding. In practice, our results show for the first time that decoders constructed from low-quality descriptions of response aspects may operate optimally on high-quality descriptions and vice versa, thereby potentially yielding experimental and computational savings, as well as new opportunities for simplifying the design of computational brain models and neural prosthetics.
[ { "created": "Tue, 15 Nov 2016 22:26:50 GMT", "version": "v1" }, { "created": "Fri, 13 Jan 2017 11:19:12 GMT", "version": "v2" } ]
2017-01-16
[ [ "Eyherabide", "Hugo Gabriel", "" ] ]
Understanding brain function, constructing computational models and engineering neural prosthetics require assessing two problems, namely encoding and decoding, but their relation remains controversial. For decades, the encoding problem has been shown to provide insight into the decoding problem, for example, by upper bounding the decoded information. However, here we show that this need not be the case when studying response aspects beyond noise correlations, and we trace back the actual causes of this major departure from traditional views. To that end, we reformulate the encoding and decoding problems from the observer or organism perspective. In addition, we study the role of spike-time precision and response discrimination, among other response aspects, using stochastic transformations of the neural responses, here called stochastic codes. Our results show that stochastic codes may cause different information losses when used to describe neural responses and when employed to train optimal decoders. Therefore, we conclude that response aspects beyond noise correlations may play different roles in encoding and decoding. In practice, our results show for the first time that decoders constructed from low-quality descriptions of response aspects may operate optimally on high-quality descriptions and vice versa, thereby potentially yielding experimental and computational savings, as well as new opportunities for simplifying the design of computational brain models and neural prosthetics.
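Editor's note: the classical "upper bounding" relation alluded to above is the data-processing inequality. For stimulus $S$, response $R$, and any decoder output $\hat{S}=g(R)$ forming the Markov chain $S \to R \to \hat{S}$,

\[
I(S;\hat{S}) \;\le\; I(S;R),
\]

so the information encoded in $R$ bounds what any decoder can recover. The abstract's claim is that such encoding-side reasoning need not carry over to decoding once response aspects are transformed stochastically; the inequality is quoted here only as the traditional baseline against which that claim is made.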
0709.1874
Cheong Xin Chan
Cheong Xin Chan, Robert G. Beiko and Mark A. Ragan
A two-phase approach for detecting recombination in nucleotide sequences
5 pages, 3 figures. Chan CX, Beiko RG and Ragan MA (2007). A two-phase approach for detecting recombination in nucleotide sequences. In Hazelhurst S and Ramsay M (Eds) Proceedings of the First Southern African Bioinformatics Workshop, 28-30 January, Johannesburg, 9-15
null
null
null
q-bio.PE
null
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary depending on the evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events, and a Bayesian phylogenetic approach to delineate the breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
[ { "created": "Wed, 12 Sep 2007 14:02:18 GMT", "version": "v1" } ]
2007-09-13
[ [ "Chan", "Cheong Xin", "" ], [ "Beiko", "Robert G.", "" ], [ "Ragan", "Mark A.", "" ] ]
Genetic recombination can produce heterogeneous phylogenetic histories within a set of homologous genes. Delineating recombination events is important in the study of molecular evolution, as inference of such events provides a clearer picture of the phylogenetic relationships among different gene sequences or genomes. Nevertheless, detecting recombination events can be a daunting task, as the performance of different recombination-detecting approaches can vary depending on the evolutionary events that take place after recombination. We recently evaluated the effects of post-recombination events on the prediction accuracy of recombination-detecting approaches using simulated nucleotide sequence data. The main conclusion, supported by other studies, is that one should not depend on a single method when searching for recombination events. In this paper, we introduce a two-phase strategy, applying three statistical measures to detect the occurrence of recombination events, and a Bayesian phylogenetic approach to delineate the breakpoints of such events in nucleotide sequences. We evaluate the performance of these approaches using simulated data, and demonstrate the applicability of this strategy to empirical data. The two-phase strategy proves to be time-efficient when applied to large datasets, and yields high-confidence results.
2212.02402
Anwaar Ulhaq Dr
Sadi Md. Redwan, Md Palash Uddin, Muhammad Imran Sharif, and Anwaar Ulhaq
A Network Theory Investigation into the Altered Resting State Functional Connectivity in Attention-Deficit Hyperactivity Disorder
8 Figures, 14 Pages
null
null
null
q-bio.NC cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
In the last two decades, functional magnetic resonance imaging (fMRI) has emerged as one of the most effective technologies in clinical research of the human brain. fMRI allows researchers to study healthy and pathological brains while they perform various neuropsychological functions. Beyond task-related activations, the human brain has some intrinsic activity at a task-negative (resting) state that surprisingly consumes a lot of energy to support communication among neurons. Recent neuroimaging research has also seen an increase in modeling and analyzing brain activity in terms of a graph or network. Since graph models facilitate a systems-theoretic explanation of the brain, they have become increasingly relevant with advances in network science and the popularization of complex systems theory. The purpose of this study is to look into the abnormalities in resting brain functions in adults with Attention Deficit Hyperactivity Disorder (ADHD). The primary goal is to investigate resting-state functional connectivity (FC), which can be construed as a significant temporal coincidence in blood-oxygen-level dependent (BOLD) signals between functionally related brain regions in the absence of any stimulus or task. When compared to healthy controls, ADHD patients have lower average connectivity in the Supramarginal Gyrus and Superior Parietal Lobule, but higher connectivity in the Lateral Occipital Cortex and Inferior Temporal Gyrus. We also hypothesize that the network organization of default mode and dorsal attention regions is abnormal in ADHD patients.
[ { "created": "Wed, 23 Nov 2022 00:35:16 GMT", "version": "v1" } ]
2022-12-06
[ [ "Redwan", "Sadi Md.", "" ], [ "Uddin", "Md Palash", "" ], [ "Sharif", "Muhammad Imran", "" ], [ "Ulhaq", "Anwaar", "" ] ]
In the last two decades, functional magnetic resonance imaging (fMRI) has emerged as one of the most effective technologies in clinical research of the human brain. fMRI allows researchers to study healthy and pathological brains while they perform various neuropsychological functions. Beyond task-related activations, the human brain has some intrinsic activity at a task-negative (resting) state that surprisingly consumes a lot of energy to support communication among neurons. Recent neuroimaging research has also seen an increase in modeling and analyzing brain activity in terms of a graph or network. Since graph models facilitate a systems-theoretic explanation of the brain, they have become increasingly relevant with advances in network science and the popularization of complex systems theory. The purpose of this study is to look into the abnormalities in resting brain functions in adults with Attention Deficit Hyperactivity Disorder (ADHD). The primary goal is to investigate resting-state functional connectivity (FC), which can be construed as a significant temporal coincidence in blood-oxygen-level dependent (BOLD) signals between functionally related brain regions in the absence of any stimulus or task. When compared to healthy controls, ADHD patients have lower average connectivity in the Supramarginal Gyrus and Superior Parietal Lobule, but higher connectivity in the Lateral Occipital Cortex and Inferior Temporal Gyrus. We also hypothesize that the network organization of default mode and dorsal attention regions is abnormal in ADHD patients.
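Editor's note: resting-state FC of the kind analyzed above is commonly computed as pairwise Pearson correlation between regional BOLD time series, then thresholded into a graph for network-theory measures; a minimal numpy sketch (with synthetic data and an arbitrary threshold) follows.

import numpy as np

def functional_connectivity(bold, threshold=0.3):
    """Resting-state FC from a (timepoints x regions) BOLD array:
    Pearson correlations between regional time series, thresholded
    into a binary undirected graph."""
    fc = np.corrcoef(bold.T)                   # regions x regions correlation matrix
    np.fill_diagonal(fc, 0.0)                  # ignore self-connections
    adjacency = (np.abs(fc) > threshold).astype(int)
    degree = adjacency.sum(axis=1)             # a simple network-theory measure
    return fc, adjacency, degree

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 8))           # toy signal: 200 TRs, 8 regions
fc, adj, deg = functional_connectivity(bold)
print(deg)

Group comparisons like the one described (ADHD vs. controls) would then contrast such connectivity or graph measures region by region across subjects.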
q-bio/0512012
Max Shpak
Max Shpak and Stephen Proulx
The Role of Life Cycle and Migration in Selection for Offspring Variance
null
null
null
null
q-bio.PE
null
For two genotypes that have the same mean number of offspring but differ in the variance in offspring number, natural selection will favor the genotype with the lower variance. The concept of fitness becomes cloudy under these conditions because the outcome of evolution is not deterministic. However, the effect of variance in offspring number on the fixation probability of mutant strategies has been calculated under several scenarios, with the general conclusion that variance in offspring number reduces fitness, but only in proportion to the inverse of the population size (Gillespie 1974, Proulx 2000). This relationship becomes more complicated under a metapopulation scenario where the "effective" population size depends on migration rate, population structure, and life cycle. We show that under hard selection and weak migration, fitness in a metapopulation composed of equal-sized demes is determined by deme size. Conversely, for high migration rates and hard selection, the effective fitness depends on the total size of the metapopulation. Interestingly, under soft selection there is no effect of migration or neighboring population structure on effective fitness, and fitness depends only on deme size. We use individual-based simulations developed in Shpak (2005) to validate our analytical approximations and to investigate deviations from our assumption of equal deme size.
[ { "created": "Mon, 5 Dec 2005 20:54:55 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2005 21:06:00 GMT", "version": "v2" }, { "created": "Mon, 19 Dec 2005 21:12:24 GMT", "version": "v3" } ]
2007-05-23
[ [ "Shpak", "Max", "" ], [ "Proulx", "Stephen", "" ] ]
For two genotypes that have the same mean number of offspring but differ in the variance in offspring number, natural selection will favor the genotype with the lower variance. The concept of fitness becomes cloudy under these conditions because the outcome of evolution is not deterministic. However, the effect of variance in offspring number on the fixation probability of mutant strategies has been calculated under several scenarios, with the general conclusion that variance in offspring number reduces fitness, but only in proportion to the inverse of the population size (Gillespie 1974, Proulx 2000). This relationship becomes more complicated under a metapopulation scenario where the "effective" population size depends on migration rate, population structure, and life cycle. We show that under hard selection and weak migration, fitness in a metapopulation composed of equal-sized demes is determined by deme size. Conversely, for high migration rates and hard selection, the effective fitness depends on the total size of the metapopulation. Interestingly, under soft selection there is no effect of migration or neighboring population structure on effective fitness, and fitness depends only on deme size. We use individual-based simulations developed in Shpak (2005) to validate our analytical approximations and to investigate deviations from our assumption of equal deme size.
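Editor's note: the classical result cited above (Gillespie 1974) can be written, to leading order, as an effective fitness for a genotype with offspring mean $\mu$ and variance $\sigma^2$ in a population of size $N$:

\[
f_{\mathrm{eff}} \;\approx\; \mu - \frac{\sigma^{2}}{N},
\]

so the variance penalty scales as $1/N$. The metapopulation question in the abstract is then which population size, deme size or total metapopulation size, plays the role of $N$ under a given migration rate, life cycle, and selection regime; the formula is quoted here as background, not as the paper's derivation.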
1311.0778
Urs K\"oster
Urs K\"oster, Bruno Olshausen
Testing our conceptual understanding of V1 function
10 pages, 5 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we test our conceptual understanding of V1 function by asking two experimental questions: 1) How do neurons respond to the spatiotemporal structure contained in dynamic, natural scenes? and 2) What is the true range of visual responsiveness and predictability of neural responses obtained in an unbiased sample of neurons across all layers of cortex? We address these questions by recording responses to natural movie stimuli with 32-channel silicon probes. By simultaneously recording from cells in all layers, and taking all recorded cells, we reduce the recording bias that results from "hunting" for neural responses evoked by drifting bars and gratings. A nonparametric model reveals that many cells that are visually responsive do not appear to be captured by standard receptive field models. Using nonlinear Radial Basis Function kernels in a support vector machine, we can explain the responses of some of these cells better than standard linear and phase-invariant complex cell models. This suggests that V1 neurons exhibit more complex and diverse responses than standard models can capture, ranging from simple and complex cells strongly driven by their classical receptive fields, to cells with more nonlinear receptive fields inferred from the nonparametric and RBF models, and cells that are not visually responsive despite robust firing.
[ { "created": "Mon, 4 Nov 2013 17:25:54 GMT", "version": "v1" } ]
2013-11-05
[ [ "Köster", "Urs", "" ], [ "Olshausen", "Bruno", "" ] ]
Here we test our conceptual understanding of V1 function by asking two experimental questions: 1) How do neurons respond to the spatiotemporal structure contained in dynamic, natural scenes? and 2) What is the true range of visual responsiveness and predictability of neural responses obtained in an unbiased sample of neurons across all layers of cortex? We address these questions by recording responses to natural movie stimuli with 32-channel silicon probes. By simultaneously recording from cells in all layers, and taking all recorded cells, we reduce the recording bias that results from "hunting" for neural responses evoked by drifting bars and gratings. A nonparametric model reveals that many cells that are visually responsive do not appear to be captured by standard receptive field models. Using nonlinear Radial Basis Function kernels in a support vector machine, we can explain the responses of some of these cells better than standard linear and phase-invariant complex cell models. This suggests that V1 neurons exhibit more complex and diverse responses than standard models can capture, ranging from simple and complex cells strongly driven by their classical receptive fields, to cells with more nonlinear receptive fields inferred from the nonparametric and RBF models, and cells that are not visually responsive despite robust firing.
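Editor's note: the linear-versus-RBF comparison described above can be illustrated with scikit-learn support vector regression on synthetic data; the feature construction and response function below are invented stand-ins for stimulus features and firing rates, not the paper's recordings.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                 # stand-in stimulus features
y = np.maximum(0.0, np.sin(X[:, 0] * X[:, 1])      # nonlinear toy "response"
               + 0.1 * rng.standard_normal(500))

for kernel in ("linear", "rbf"):                   # compare kernels, as in the abstract
    r2 = cross_val_score(SVR(kernel=kernel, C=1.0), X, y,
                         cv=5, scoring="r2").mean()
    print(kernel, round(r2, 3))

A cell whose response is well predicted by the RBF kernel but not the linear one is, by this logic, a candidate for receptive-field structure beyond the standard simple/complex models.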
1312.1206
Andrey Olypher
Andrey Olypher, Jean Vaillant
On the properties of input-to-output transformations in networks of perceptrons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Information processing in certain neuronal networks in the brain can be considered as a map of binary vectors, where ones (spikes) and zeros (no spikes) of input neurons are transformed into spikes and no spikes of output neurons. A simple but fundamental characteristic of such a map is how it transforms distances between input vectors. In particular, what is the mean distance between output vectors given a certain distance between input vectors? Using a combinatorial approach, we found an exact solution to this problem for networks of perceptrons with binary weights. The resulting formulas allow for a precise analysis of how network connectivity and neuronal excitability affect the transformation of distances between the vectors of neuronal spiking. As an application, we considered a simple network model of information processing in the hippocampus, a brain area critically implicated in learning and memory, and found a combination of parameters for which the output neurons discriminated similar and distinct inputs most effectively. A decrease in the threshold values of the output neurons, which in biological networks may be associated with decreased inhibition, impaired the optimality of discrimination.
[ { "created": "Wed, 4 Dec 2013 15:25:37 GMT", "version": "v1" } ]
2013-12-05
[ [ "Olypher", "Andrey", "" ], [ "Vaillant", "Jean", "" ] ]
Information processing in certain neuronal networks in the brain can be considered as a map of binary vectors, where ones (spikes) and zeros (no spikes) of input neurons are transformed into spikes and no spikes of output neurons. A simple but fundamental characteristic of such a map is how it transforms distances between input vectors. In particular, what is the mean distance between output vectors given a certain distance between input vectors? Using a combinatorial approach, we found an exact solution to this problem for networks of perceptrons with binary weights. The resulting formulas allow for a precise analysis of how network connectivity and neuronal excitability affect the transformation of distances between the vectors of neuronal spiking. As an application, we considered a simple network model of information processing in the hippocampus, a brain area critically implicated in learning and memory, and found a combination of parameters for which the output neurons discriminated similar and distinct inputs most effectively. A decrease in the threshold values of the output neurons, which in biological networks may be associated with decreased inhibition, impaired the optimality of discrimination.
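Editor's note: the quantity solved exactly above, the mean output Hamming distance as a function of input distance, is easy to estimate by Monte Carlo for a binary-weight threshold layer. The sketch below uses a single fixed random weight matrix and i.i.d. inputs as simplifying assumptions, whereas the paper derives exact combinatorial formulas.

import numpy as np

def mean_output_distance(n_in=100, n_out=100, d_in=10, theta=50,
                         trials=2000, seed=0):
    """Monte Carlo estimate of the mean Hamming distance between output
    spike vectors of a binary-weight perceptron layer, given input
    vectors at Hamming distance d_in."""
    rng = np.random.default_rng(seed)
    W = rng.integers(0, 2, size=(n_out, n_in))        # binary weights
    total = 0
    for _ in range(trials):
        x = rng.integers(0, 2, size=n_in)
        y = x.copy()
        flip = rng.choice(n_in, size=d_in, replace=False)
        y[flip] ^= 1                                   # second input at distance d_in
        out_x = (W @ x >= theta).astype(int)           # threshold (perceptron) units
        out_y = (W @ y >= theta).astype(int)
        total += int(np.abs(out_x - out_y).sum())
    return total / trials

print(mean_output_distance())

Sweeping theta in such a simulation reproduces the qualitative effect noted in the abstract: lowering the output threshold changes how input distances map to output distances and can degrade discrimination.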
2012.12583
Pedro Cardoso-Leite
Aur\'elien Defossez, Morteza Ansarinia, Brice Clocher, Emmanuel Schm\"uck, Paul Schrater and Pedro Cardoso-Leite
The structure of behavioral data
12 pages, 1 table, 2 figures
null
null
null
q-bio.NC stat.ME
http://creativecommons.org/licenses/by/4.0/
For more than a century, scientists have been collecting behavioral data--an increasing fraction of which is now being publicly shared so other researchers can reuse them to replicate, integrate or extend past results. Although behavioral data is fundamental to many scientific fields, there is currently no widely adopted standard for formatting, naming, organizing, describing or sharing such data. This lack of standardization is a major bottleneck for scientific progress. Not only does it prevent the effective reuse of data, it also affects how behavioral data in general are processed, as non-standard data calls for custom-made data analysis code and prevents the development of efficient tools. To address this problem, we develop the Behaverse Data Model (BDM), a standard for structuring behavioral data. Here we focus on major concepts in behavioral data, leaving further details and developments to the project's website (https://behaverse.github.io/data-model/).
[ { "created": "Wed, 23 Dec 2020 10:22:00 GMT", "version": "v1" } ]
2020-12-24
[ [ "Defossez", "Aurélien", "" ], [ "Ansarinia", "Morteza", "" ], [ "Clocher", "Brice", "" ], [ "Schmück", "Emmanuel", "" ], [ "Schrater", "Paul", "" ], [ "Cardoso-Leite", "Pedro", "" ] ]
For more than a century, scientists have been collecting behavioral data--an increasing fraction of which is now being publicly shared so other researchers can reuse them to replicate, integrate or extend past results. Although behavioral data is fundamental to many scientific fields, there is currently no widely adopted standard for formatting, naming, organizing, describing or sharing such data. This lack of standardization is a major bottleneck for scientific progress. Not only does it prevent the effective reuse of data, it also affects how behavioral data in general are processed, as non-standard data calls for custom-made data analysis code and prevents the development of efficient tools. To address this problem, we develop the Behaverse Data Model (BDM), a standard for structuring behavioral data. Here we focus on major concepts in behavioral data, leaving further details and developments to the project's website (https://behaverse.github.io/data-model/).
2310.00947
Quratul Ain Dr.
Qurat-ul-Ain Sidra Rafi, Khairullah, Saeedullah, Arshia Arshia, Reaz Uddin, Atia-ul-Wahab, Khalid Mohammed Khan, and M. Iqbal Choudhary
Benzophenone Semicarbazones as Potential alpha-glucosidase and Prolyl Endopeptidase Inhibitor: In-vitro free radical scavenging, enzyme inhibition, mechanistic, and molecular docking studies
1 schematic,3 tables,7 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
$\alpha$-glucosidase and prolylendopeptidase have altered expression and activity patterns in neurological disease and type 2 diabetes, respectively, as well as in several cancers. Here we screened a series of benzophenone semicarbazone derivatives (1-29) for in vitro free radical scavenging, alpha-glucosidase, and prolylendopeptidase inhibition activities. Seven derivatives were identified as potential free radical scavengers, 14 as alpha-glucosidase inhibitors, and 9 as prolylendopeptidase inhibitors. Kinetic studies on the most promising inhibitors were performed. Compounds 23, 27, 25, and 28 were found to be inhibitors of alpha-glucosidase, while compound 26 inhibited both prolylendopeptidase and alpha-glucosidase. The binding modes and binding free energy of the multi-targeted inhibitor 26 were predicted by molecular docking studies. These results provide insights into the prolylendopeptidase and alpha-glucosidase inhibition of compound 26 for further development of therapeutic agents for neoplastic, neurological, and endocrine disorders.
[ { "created": "Mon, 2 Oct 2023 07:37:38 GMT", "version": "v1" } ]
2023-10-03
[ [ "Rafi", "Qurat-ul-Ain Sidra", "" ], [ "Khairullah", "", "" ], [ "Saeedullah", "", "" ], [ "Arshia", "Arshia", "" ], [ "Uddin", "Reaz", "" ], [ "Atia-ul-Wahab", "", "" ], [ "Khan", "Khalid Mohammed", "" ], [ "Choudhary", "M. Iqbal", "" ] ]
$\alpha$-glucosidase and prolylendopeptidase have altered expression and activity patterns in neurological disease and type 2 diabetes, respectively, as well as in several cancers. Here we screened a series of benzophenone semicarbazone derivatives (1-29) for in vitro free radical scavenging, alpha-glucosidase, and prolylendopeptidase inhibition activities. Seven derivatives were identified as potential free radical scavengers, 14 as alpha-glucosidase inhibitors, and 9 as prolylendopeptidase inhibitors. Kinetic studies on the most promising inhibitors were performed. Compounds 23, 27, 25, and 28 were found to be inhibitors of alpha-glucosidase, while compound 26 inhibited both prolylendopeptidase and alpha-glucosidase. The binding modes and binding free energy of the multi-targeted inhibitor 26 were predicted by molecular docking studies. These results provide insights into the prolylendopeptidase and alpha-glucosidase inhibition of compound 26 for further development of therapeutic agents for neoplastic, neurological, and endocrine disorders.
1309.7072
Christopher Ellison
Qi Zhou, Christopher E. Ellison, Vera B. Kaiser, Artyom A. Alekseyenko, Andrey A. Gorchakov, Doris Bachtrog
The epigenome of evolving Drosophila neo-sex chromosomes: dosage compensation and heterochromatin formation
null
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drosophila Y chromosomes are composed entirely of silent heterochromatin, while male X chromosomes have highly accessible chromatin and are hypertranscribed due to dosage compensation. Here, we dissect the molecular mechanisms and functional pressures driving heterochromatin formation and dosage compensation of the recently formed neo-sex chromosomes of Drosophila miranda. We show that the onset of heterochromatin formation on the neo-Y is triggered by an accumulation of repetitive DNA. The neo-X has evolved partial dosage compensation and we find that diverse mutational paths have been utilized to establish several dozen novel binding consensus motifs for the dosage compensation complex on the neo-X, including simple point mutations at pre-binding sites, insertion and deletion mutations, microsatellite expansions, or tandem amplification of weak binding sites. Spreading of these silencing or activating chromatin modifications to adjacent regions results in massive mis-expression of neo-sex linked genes, and little correspondence between functionality of genes and their silencing on the neo-Y or dosage compensation on the neo-X. Intriguingly, the genomic regions being targeted by the dosage compensation complex on the neo-X and those becoming heterochromatic on the neo-Y show little overlap, possibly reflecting different propensities along the ancestral chromosome to adopt active or repressive chromatin configurations. Our findings have broad implications for current models of sex chromosome evolution, and demonstrate how mechanistic constraints can limit evolutionary adaptations. Our study also highlights how evolution can follow predictable genetic trajectories, by repeatedly acquiring the same 21-bp consensus motif for recruitment of the dosage compensation complex, yet utilizing a diverse array of random mutational changes to attain the same phenotypic outcome.
[ { "created": "Thu, 26 Sep 2013 20:58:26 GMT", "version": "v1" } ]
2013-09-30
[ [ "Zhou", "Qi", "" ], [ "Ellison", "Christopher E.", "" ], [ "Kaiser", "Vera B.", "" ], [ "Alekseyenko", "Artyom A.", "" ], [ "Gorchakov", "Andrey A.", "" ], [ "Bachtrog", "Doris", "" ] ]
Drosophila Y chromosomes are composed entirely of silent heterochromatin, while male X chromosomes have highly accessible chromatin and are hypertranscribed due to dosage compensation. Here, we dissect the molecular mechanisms and functional pressures driving heterochromatin formation and dosage compensation of the recently formed neo-sex chromosomes of Drosophila miranda. We show that the onset of heterochromatin formation on the neo-Y is triggered by an accumulation of repetitive DNA. The neo-X has evolved partial dosage compensation and we find that diverse mutational paths have been utilized to establish several dozen novel binding consensus motifs for the dosage compensation complex on the neo-X, including simple point mutations at pre-binding sites, insertion and deletion mutations, microsatellite expansions, or tandem amplification of weak binding sites. Spreading of these silencing or activating chromatin modifications to adjacent regions results in massive mis-expression of neo-sex linked genes, and little correspondence between functionality of genes and their silencing on the neo-Y or dosage compensation on the neo-X. Intriguingly, the genomic regions being targeted by the dosage compensation complex on the neo-X and those becoming heterochromatic on the neo-Y show little overlap, possibly reflecting different propensities along the ancestral chromosome to adopt active or repressive chromatin configurations. Our findings have broad implications for current models of sex chromosome evolution, and demonstrate how mechanistic constraints can limit evolutionary adaptations. Our study also highlights how evolution can follow predictable genetic trajectories, by repeatedly acquiring the same 21-bp consensus motif for recruitment of the dosage compensation complex, yet utilizing a diverse array of random mutational changes to attain the same phenotypic outcome.
1704.01039
Naho Ichikawa
Naho Ichikawa, Giuseppe Lisi, Noriaki Yahata, Go Okada, Masahiro Takamura, Makiko Yamada, Tetsuya Suhara, Ryu-ichiro Hashimoto, Takashi Yamada, Yujiro Yoshihara, Hidehiko Takahashi, Kiyoto Kasai, Nobumasa Kato, Shigeto Yamawaki, Mitsuo Kawato, Jun Morimoto, Yasumasa Okamoto
Identifying melancholic depression biomarker using whole-brain functional connectivity
null
null
null
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
By focusing on melancholic features with biological homogeneity, this study aimed to identify a small number of critical functional connections (FCs) that were specific only to the melancholic type of MDD. Classifiers were developed on resting-state fMRI data to differentiate MDD patients from healthy controls (HCs). The classification accuracy improved from 50% (93 MDD and 93 HCs) to 70% (66 melancholic MDD and 66 HCs) when we specifically focused on melancholic MDD with moderate or more severe levels of depressive symptoms. The classifier showed 65% accuracy in an independent validation cohort. The biomarker score distribution showed improvements with escitalopram treatment and significant correlations with depression symptom scores. The classifier was specific to melancholic MDD and did not generalize to other mental disorders, including autism spectrum disorder (ASD, 54% accuracy) and schizophrenia spectrum disorder (SSD, 45% accuracy). Among the 12 FCs identified from the 9,316 FCs between whole-brain anatomical node pairs, the left DLPFC / IFG region, which has most commonly been targeted for depression treatments, and its functional connections with Precuneus / PCC and with the right DLPFC / SMA areas had the highest contributions. Given the heterogeneity of MDD, focusing on melancholic features is the key to achieving high classification accuracy. The identified FCs specifically predicted melancholic MDD and were associated with subjective depressive symptoms. These results suggest key FCs of melancholic depression and open the door to novel treatments targeting these regions in the future.
[ { "created": "Mon, 3 Apr 2017 15:05:59 GMT", "version": "v1" }, { "created": "Tue, 18 Apr 2017 05:50:40 GMT", "version": "v2" }, { "created": "Sun, 14 May 2017 20:09:58 GMT", "version": "v3" } ]
2017-05-16
[ [ "Ichikawa", "Naho", "" ], [ "Lisi", "Giuseppe", "" ], [ "Yahata", "Noriaki", "" ], [ "Okada", "Go", "" ], [ "Takamura", "Masahiro", "" ], [ "Yamada", "Makiko", "" ], [ "Suhara", "Tetsuya", "" ], [ "Hashimoto", "Ryu-ichiro", "" ], [ "Yamada", "Takashi", "" ], [ "Yoshihara", "Yujiro", "" ], [ "Takahashi", "Hidehiko", "" ], [ "Kasai", "Kiyoto", "" ], [ "Kato", "Nobumasa", "" ], [ "Yamawaki", "Shigeto", "" ], [ "Kawato", "Mitsuo", "" ], [ "Morimoto", "Jun", "" ], [ "Okamoto", "Yasumasa", "" ] ]
By focusing on melancholic features with biological homogeneity, this study aimed to identify a small number of critical functional connections (FCs) that were specific only to the melancholic type of MDD. Classifiers were developed on resting-state fMRI data to differentiate MDD patients from healthy controls (HCs). The classification accuracy improved from 50% (93 MDD and 93 HCs) to 70% (66 melancholic MDD and 66 HCs) when we specifically focused on melancholic MDD with moderate or more severe levels of depressive symptoms. The classifier showed 65% accuracy in an independent validation cohort. The biomarker score distribution showed improvements with escitalopram treatment and significant correlations with depression symptom scores. The classifier was specific to melancholic MDD and did not generalize to other mental disorders, including autism spectrum disorder (ASD, 54% accuracy) and schizophrenia spectrum disorder (SSD, 45% accuracy). Among the 12 FCs identified from the 9,316 FCs between whole-brain anatomical node pairs, the left DLPFC / IFG region, which has most commonly been targeted for depression treatments, and its functional connections with Precuneus / PCC and with the right DLPFC / SMA areas had the highest contributions. Given the heterogeneity of MDD, focusing on melancholic features is the key to achieving high classification accuracy. The identified FCs specifically predicted melancholic MDD and were associated with subjective depressive symptoms. These results suggest key FCs of melancholic depression and open the door to novel treatments targeting these regions in the future.
q-bio/0702011
Steffen Waldherr
Steffen Waldherr, Thomas Eissing, Madalena Chaves, Frank Allgower
Bistability preserving model reduction in apoptosis
6 pages, 5 figures
null
null
null
q-bio.MN
null
Biological systems are typically very complex and need to be reduced before they are amenable to a thorough analysis. Also, they often possess functionally important dynamic features like bistability. In model reduction, it is sometimes more desirable to preserve the dynamic features only than to recover a good quantitative approximation. We present an approach to reduce the order of a bistable dynamical system significantly while preserving bistability and the switching threshold. These properties are important for the operation of the system in the context of a larger network. As an application example, a bistable model for caspase activation in apoptosis is considered.
[ { "created": "Wed, 7 Feb 2007 14:19:25 GMT", "version": "v1" } ]
2007-05-23
[ [ "Waldherr", "Steffen", "" ], [ "Eissing", "Thomas", "" ], [ "Chaves", "Madalena", "" ], [ "Allgower", "Frank", "" ] ]
Biological systems are typically very complex and need to be reduced before they are amenable to a thorough analysis. Also, they often possess functionally important dynamic features like bistability. In model reduction, it is sometimes more desirable to preserve the dynamic features only than to recover a good quantitative approximation. We present an approach to reduce the order of a bistable dynamical system significantly while preserving bistability and the switching threshold. These properties are important for the operation of the system in the context of a larger network. As an application example, a bistable model for caspase activation in apoptosis is considered.
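The property the reduction must preserve can be illustrated with a minimal bistable toy model (not the paper's apoptosis network): a single species with Hill-type positive feedback has two stable steady states separated by a switching threshold, so trajectories started below and above the threshold settle at different levels. Rate constants below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, k_fb=2.0, K=1.0, k_deg=1.0, basal=0.05):
    # Hill-type positive feedback minus first-order degradation, plus basal production
    return k_fb * x**2 / (K**2 + x**2) - k_deg * x + basal

# initial "caspase activity" below / below / above the switching threshold
for x0 in (0.1, 0.5, 1.5):
    sol = solve_ivp(rhs, (0, 50), [x0])
    print(f"x0 = {x0:4.1f} -> steady state ~ {sol.y[0, -1]:.3f}")
```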
2303.09351
Guy Katriel
Guy Katriel
Optimizing antimicrobial treatment schedules: some fundamental analytical results
null
Bulletin of Mathematical Biology 86 (1), 2024
10.1007/s11538-023-01230-8
null
q-bio.PE math.OC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work studies fundamental questions regarding the optimal design of antimicrobial treatment protocols, using standard pharmacodynamic and pharmacokinetic mathematical models. We consider the problem of designing an antimicrobial treatment schedule to achieve eradication of a microbial infection, while minimizing the area under the time-concentration curve (AUC). We first solve this problem under the assumption that an arbitrary antimicrobial concentration profile may be chosen, and prove that the 'ideal' concentration profile consists of a constant concentration over a finite time duration, where explicit expressions for the optimal concentration and the time duration are given in terms of the pharmacodynamic parameters. Since antimicrobial concentration profiles are induced by a dosing schedule and the antimicrobial pharmacokinetics, the ideal concentration profile is not strictly feasible. We therefore also investigate the possibility of achieving outcomes which are close to those provided by the ideal concentration profile, using a bolus+continuous dosing schedule, which consists of a loading dose followed by infusion of the antimicrobial at a constant rate. We explicitly find the optimal bolus+continuous dosing schedule, and show that, for realistic parameter ranges, this schedule achieves results which are nearly as efficient as those attained by the ideal concentration profile. The optimality results obtained here provide a baseline and reference point for comparison and evaluation of antimicrobial treatment plans.
[ { "created": "Thu, 16 Mar 2023 14:32:18 GMT", "version": "v1" }, { "created": "Sat, 23 Sep 2023 18:04:48 GMT", "version": "v2" } ]
2023-11-27
[ [ "Katriel", "Guy", "" ] ]
This work studies fundamental questions regarding the optimal design of antimicrobial treatment protocols, using standard pharmacodynamic and pharmacokinetic mathematical models. We consider the problem of designing an antimicrobial treatment schedule to achieve eradication of a microbial infection, while minimizing the area under the time-concentration curve (AUC). We first solve this problem under the assumption that an arbitrary antimicrobial concentration profile may be chosen, and prove that the 'ideal' concentration profile consists of a constant concentration over a finite time duration, where explicit expressions for the optimal concentration and the time duration are given in terms of the pharmacodynamic parameters. Since antimicrobial concentration profiles are induced by a dosing schedule and the antimicrobial pharmacokinetics, the ideal concentration profile is not strictly feasible. We therefore also investigate the possibility of achieving outcomes which are close to those provided by the ideal concentration profile, using a bolus+continuous dosing schedule, which consists of a loading dose followed by infusion of the antimicrobial at a constant rate. We explicitly find the optimal bolus+continuous dosing schedule, and show that, for realistic parameter ranges, this schedule achieves results which are nearly as efficient as those attained by the ideal concentration profile. The optimality results obtained here provide a baseline and reference point for comparison and evaluation of antimicrobial treatment plans.
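The 'ideal' flat concentration profile connects to the bolus+continuous schedule through a standard one-compartment pharmacokinetic identity: a constant infusion at rate R approaches C_ss = R/(kV), and a loading bolus D = R/k removes the transient entirely, making the concentration flat from t = 0. The sketch below verifies this; parameter values are illustrative, not taken from the paper.

```python
import numpy as np

k, V, R = 0.3, 10.0, 6.0          # elimination rate (1/h), volume (L), infusion rate (mg/h)
C_ss = R / (k * V)                # steady-state concentration of the infusion alone
D = R / k                         # loading dose that cancels the transient

t = np.linspace(0, 24, 7)
C_infusion_only = (R / (k * V)) * (1 - np.exp(-k * t))
C_bolus_plus = C_infusion_only + (D / V) * np.exp(-k * t)

print("target C_ss    :", C_ss)
print("infusion only  :", np.round(C_infusion_only, 3))
print("bolus+infusion :", np.round(C_bolus_plus, 3))   # constant at C_ss
```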
q-bio/0506015
Matthew Berryman
Matthew J. Berryman, Scott W. Coussens, Yvonne Pamula, Declan Kennedy, Kurt Lushington, Cosma Shalizi, Andrew Allison, A. James Martin, David Saint and Derek Abbott
Nonlinear aspects of the EEG during sleep in children
9 pages, 2 figures, 4 tables
Proc. SPIE: Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems, Austin, Texas, USA, May 24-26, 2005, vol. 5841, pp. 40-48
10.1117/12.622380
null
q-bio.NC
null
Electroencephalograph (EEG) analysis enables the neuronal behavior of a section of the brain to be examined. If the behavior is nonlinear then nonlinear tools can be used to glean information on brain behavior, and aid in the diagnosis of sleep abnormalities such as obstructive sleep apnea syndrome (OSAS). In this paper the sleep EEGs of a set of normal and mild OSAS children are evaluated for nonlinear behaviour. We consider how the behaviour of the brain changes with sleep stage and between normal and OSAS children.
[ { "created": "Tue, 14 Jun 2005 01:49:59 GMT", "version": "v1" } ]
2009-11-11
[ [ "Berryman", "Matthew J.", "" ], [ "Coussens", "Scott W.", "" ], [ "Pamula", "Yvonne", "" ], [ "Kennedy", "Declan", "" ], [ "Lushington", "Kurt", "" ], [ "Shalizi", "Cosma", "" ], [ "Allison", "Andrew", "" ], [ "Martin", "A. James", "" ], [ "Saint", "David", "" ], [ "Abbott", "Derek", "" ] ]
Electroencephalograph (EEG) analysis enables the neuronal behavior of a section of the brain to be examined. If the behavior is nonlinear then nonlinear tools can be used to glean information on brain behavior, and aid in the diagnosis of sleep abnormalities such as obstructive sleep apnea syndrome (OSAS). In this paper the sleep EEGs of a set of normal and mild OSAS children are evaluated for nonlinear behaviour. We consider how the behaviour of the brain changes with sleep stage and between normal and OSAS children.
2001.00091
Domenico Gatti
Rosella Scrima, Sabino Fugetto, Nazzareno Capitanio, Domenico L. Gatti
Hemoglobin Non-equilibrium Oxygen Dissociation Curve
null
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abnormal hemoglobins can have major consequences for tissue delivery of oxygen. Correct diagnosis of hemoglobinopathies with altered oxygen affinity requires a determination of the hemoglobin oxygen dissociation curve (ODC), which relates the hemoglobin oxygen saturation to the partial pressure of oxygen in the blood. Determination of the ODC of human hemoglobin is typically carried out under conditions in which hemoglobin is in equilibrium with O2 at each partial pressure. However, in the human body, due to the fast transit of RBCs through tissues, hemoglobin oxygen exchanges occur under non-equilibrium conditions. We describe the determination of the non-equilibrium ODC, and show that under these conditions Hb cooperativity has two apparent components in the Adair, Perutz, and MWC models of Hb. The first component, which we call sequential cooperativity, accounts for ~70% of Hb cooperativity, and emerges from the constraint of sequential binding that is shared by the three models. The second component, which we call conformational cooperativity, accounts for ~30% of Hb cooperativity, and is due either to a conformational equilibrium between low-affinity and high-affinity tetramers (as in the MWC model), or to a conformational change from low to high affinity once two of the tetramer sites are occupied (Perutz model).
[ { "created": "Tue, 31 Dec 2019 22:03:51 GMT", "version": "v1" } ]
2020-01-03
[ [ "Scrima", "Rosella", "" ], [ "Fugetto", "Sabino", "" ], [ "Capitanio", "Nazzareno", "" ], [ "Gatti", "Domenico L.", "" ] ]
Abnormal hemoglobins can have major consequences for tissue delivery of oxygen. Correct diagnosis of hemoglobinopathies with altered oxygen affinity requires a determination of the hemoglobin oxygen dissociation curve (ODC), which relates the hemoglobin oxygen saturation to the partial pressure of oxygen in the blood. Determination of the ODC of human hemoglobin is typically carried out under conditions in which hemoglobin is in equilibrium with O2 at each partial pressure. However, in the human body, due to the fast transit of RBCs through tissues, hemoglobin oxygen exchanges occur under non-equilibrium conditions. We describe the determination of the non-equilibrium ODC, and show that under these conditions Hb cooperativity has two apparent components in the Adair, Perutz, and MWC models of Hb. The first component, which we call sequential cooperativity, accounts for ~70% of Hb cooperativity, and emerges from the constraint of sequential binding that is shared by the three models. The second component, which we call conformational cooperativity, accounts for ~30% of Hb cooperativity, and is due either to a conformational equilibrium between low-affinity and high-affinity tetramers (as in the MWC model), or to a conformational change from low to high affinity once two of the tetramer sites are occupied (Perutz model).
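For reference, the equilibrium saturation curve of the MWC model mentioned above has a standard closed form, sketched below for a four-site tetramer; L is the T/R allosteric constant and c = K_R/K_T. Parameter values are illustrative, not fitted to hemoglobin data.

```python
import numpy as np

def mwc_saturation(pO2, K_R=1.0, c=0.01, L=1e5, n=4):
    # Y = [L c a (1+ca)^(n-1) + a (1+a)^(n-1)] / [L (1+ca)^n + (1+a)^n], a = pO2/K_R
    a = pO2 / K_R
    num = L * c * a * (1 + c * a) ** (n - 1) + a * (1 + a) ** (n - 1)
    den = L * (1 + c * a) ** n + (1 + a) ** n
    return num / den

for p in (0.5, 1, 2, 5, 10, 20):
    print(f"pO2 = {p:5.1f}  Y = {mwc_saturation(p):.3f}")
```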
2403.00842
Hong Zhou
Jianfeng Chen, Jize Xiong, Yixu Wang, Qi Xin, Hong Zhou
Implementation of an AI-based MRD evaluation and prediction model for multiple myeloma
7 pages, 6 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the application of hematopoietic stem cell transplantation and new drugs, the progression-free and overall survival rates of multiple myeloma (MM) have greatly improved, but the disease is still considered incurable. Many patients relapse after complete remission, which is rooted in the presence of minimal residual disease (MRD). Studies have shown that positive MRD is an independent adverse prognostic factor for survival, so MRD detection is an important indicator for judging patient prognosis and guiding clinical treatment. At present, several techniques, such as multiparameter flow cytometry (MFC), polymerase chain reaction (PCR), and positron emission tomography/computed tomography (PET/CT), have been used for MRD detection in multiple myeloma. The four "IFM2013-04" clinical studies confirmed for the first time the synergism and importance of combining proteasome inhibitors (PIs) and immunomodulatory drugs (IMiDs) in the treatment of MM, and the large phase 3 clinical study SWOG S0777, which compared bortezomib plus lenalidomide and dexamethasone (VRD) with lenalidomide and dexamethasone alone, established VRD as first-line treatment for MM; given the good efficacy of CD38 monoclonal antibody in large clinical studies, its combination with VRD has also been recommended as first-line treatment. Here, we explore the clinical value and the problems of applying the artificial intelligence bone marrow cell recognition system Morphogo to the detection of minimal residual disease (MRD) in multiple myeloma.
[ { "created": "Thu, 29 Feb 2024 12:10:53 GMT", "version": "v1" } ]
2024-03-05
[ [ "Chen", "Jianfeng", "" ], [ "Xiong", "Jize", "" ], [ "Wang", "Yixu", "" ], [ "Xin", "Qi", "" ], [ "Zhou", "Hong", "" ] ]
With the application of hematopoietic stem cell transplantation and new drugs, the progression-free and overall survival rates of multiple myeloma (MM) have greatly improved, but the disease is still considered incurable. Many patients relapse after complete remission, which is rooted in the presence of minimal residual disease (MRD). Studies have shown that positive MRD is an independent adverse prognostic factor for survival, so MRD detection is an important indicator for judging patient prognosis and guiding clinical treatment. At present, several techniques, such as multiparameter flow cytometry (MFC), polymerase chain reaction (PCR), and positron emission tomography/computed tomography (PET/CT), have been used for MRD detection in multiple myeloma. The four "IFM2013-04" clinical studies confirmed for the first time the synergism and importance of combining proteasome inhibitors (PIs) and immunomodulatory drugs (IMiDs) in the treatment of MM, and the large phase 3 clinical study SWOG S0777, which compared bortezomib plus lenalidomide and dexamethasone (VRD) with lenalidomide and dexamethasone alone, established VRD as first-line treatment for MM; given the good efficacy of CD38 monoclonal antibody in large clinical studies, its combination with VRD has also been recommended as first-line treatment. Here, we explore the clinical value and the problems of applying the artificial intelligence bone marrow cell recognition system Morphogo to the detection of minimal residual disease (MRD) in multiple myeloma.
1310.0213
Valentina Agoni
Valentina Agoni
G-quadruplexes and mRNA localization
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
G-quadruplexes represent a novelty for molecular biology. Their role inside the cell remains mysterious. We investigate a possible correlation with mRNA localization. In particular, we hypothesize that G-quadruplexes influence fluid dynamics.
[ { "created": "Tue, 1 Oct 2013 09:43:40 GMT", "version": "v1" } ]
2013-10-02
[ [ "Agoni", "Valentina", "" ] ]
G-quadruplexes represent a novelty for molecular biology. Their role inside the cell remains mysterious. We investigate a possible correlation with mRNA localization. In particular, we hypothesize that G-quadruplexes influence fluid dynamics.
0912.3513
Kanaka Rajan
Kanaka Rajan, L F Abbott and Haim Sompolinsky
Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks
12 pages, 3 figures
Physical Review E 82, 011903 (2010)
10.1103/PhysRevE.82.011903
null
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a non-monotonic function of stimulus frequency, revealing a "resonant" frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
[ { "created": "Thu, 17 Dec 2009 20:39:04 GMT", "version": "v1" }, { "created": "Mon, 2 Aug 2010 20:46:19 GMT", "version": "v2" } ]
2010-08-04
[ [ "Rajan", "Kanaka", "" ], [ "Abbott", "L F", "" ], [ "Sompolinsky", "Haim", "" ] ]
Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a non-monotonic function of stimulus frequency, revealing a "resonant" frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
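A minimal sketch of the driven random rate network studied here, using Euler integration and a simple chaos diagnostic: the final separation of two trajectories started a tiny distance apart. In the chaotic regime (g > 1, weak drive) the separation grows to order one; as the input amplitude I increases, the perturbation decays instead, consistent with input-driven suppression of chaos. Network size, time step, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, w, dt, T = 200, 1.5, 1.0, 0.05, 2000     # neurons, gain, drive freq., step, steps
J = rng.standard_normal((N, N)) * g / np.sqrt(N)   # random couplings, variance g^2/N

def divergence(I, seed=3):
    """Final distance between two trajectories started 1e-6 apart."""
    r = np.random.default_rng(seed)
    x1 = r.standard_normal(N) * 0.1
    x2 = x1 + 1e-6 * r.standard_normal(N)
    for step in range(T):
        t = step * dt
        x1 = x1 + dt * (-x1 + J @ np.tanh(x1) + I * np.cos(w * t))
        x2 = x2 + dt * (-x2 + J @ np.tanh(x2) + I * np.cos(w * t))
    return np.linalg.norm(x1 - x2)

for I in (0.0, 0.5, 1.0, 2.0):
    print(f"I = {I:3.1f}  divergence = {divergence(I):.2e}")
```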
2301.02916
Rodrigo Bonazzola
Rodrigo Bonazzola, Enzo Ferrante, Nishant Ravikumar, Yan Xia, Bernard Keavney, Sven Plein, Tanveer Syeda-Mahmood, and Alejandro F Frangi
Unsupervised ensemble-based phenotyping helps enhance the discoverability of genes related to heart morphology
14 pages of main text, 22 pages of supplemental information
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recent genome-wide association studies (GWAS) have been successful in identifying associations between genetic variants and simple cardiac parameters derived from cardiac magnetic resonance (CMR) images. However, the emergence of big databases including genetic data linked to CMR facilitates investigation of more nuanced patterns of shape variability. Here, we propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles (UPE). UPE builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner, using deep learning models trained with different hyperparameters. These phenotypes are then analyzed via GWAS, retaining only highly confident and stable associations across the ensemble. We apply our approach to the UK Biobank database to extract left-ventricular (LV) geometric features from image-derived three-dimensional meshes. We demonstrate that our approach greatly improves the discoverability of genes influencing LV shape, identifying 11 loci with study-wide significance and 8 with suggestive significance. We argue that our approach would enable more extensive discovery of gene associations with image-derived phenotypes for other organs or image modalities.
[ { "created": "Sat, 7 Jan 2023 18:36:44 GMT", "version": "v1" } ]
2023-01-10
[ [ "Bonazzola", "Rodrigo", "" ], [ "Ferrante", "Enzo", "" ], [ "Ravikumar", "Nishant", "" ], [ "Xia", "Yan", "" ], [ "Keavney", "Bernard", "" ], [ "Plein", "Sven", "" ], [ "Syeda-Mahmood", "Tanveer", "" ], [ "Frangi", "Alejandro F", "" ] ]
Recent genome-wide association studies (GWAS) have been successful in identifying associations between genetic variants and simple cardiac parameters derived from cardiac magnetic resonance (CMR) images. However, the emergence of big databases including genetic data linked to CMR facilitates investigation of more nuanced patterns of shape variability. Here, we propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles (UPE). UPE builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner, using deep learning models trained with different hyperparameters. These phenotypes are then analyzed via GWAS, retaining only highly confident and stable associations across the ensemble. We apply our approach to the UK Biobank database to extract left-ventricular (LV) geometric features from image-derived three-dimensional meshes. We demonstrate that our approach greatly improves the discoverability of genes influencing LV shape, identifying 11 loci with study-wide significance and 8 with suggestive significance. We argue that our approach would enable more extensive discovery of gene associations with image-derived phenotypes for other organs or image modalities.
1801.07093
Genki Ichinose
Genki Ichinose, Yoshiki Satotani, Takashi Nagatani
Network flow of mobile agents enhances the evolution of cooperation
7 pages, 5 figures
EPL 121, 28001, 2018
10.1209/0295-5075/121/28001
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the effect of contingent movement on the persistence of cooperation on complex networks with empty nodes. Each agent plays the Prisoner's Dilemma game with its neighbors and then either updates its strategy depending on the payoff difference with neighbors or moves to another empty node if not satisfied with its own payoff. If no neighboring node is empty, the agent stays at the same site. By extensive evolutionary simulations, we show that a medium density of agents enhances cooperation, where the network flow of mobile agents is also medium. Moreover, if the movements of agents are more frequent than the strategy updating, cooperation is further promoted. In scale-free networks, the optimal density for cooperation is lower than in other networks because agents get stuck at hubs. Our study suggests that keeping a smooth network flow is significant for the persistence of cooperation in ever-changing societies.
[ { "created": "Mon, 22 Jan 2018 13:47:39 GMT", "version": "v1" }, { "created": "Tue, 20 Mar 2018 02:10:07 GMT", "version": "v2" } ]
2018-04-18
[ [ "Ichinose", "Genki", "" ], [ "Satotani", "Yoshiki", "" ], [ "Nagatani", "Takashi", "" ] ]
We study the effect of contingent movement on the persistence of cooperation on complex networks with empty nodes. Each agent plays the Prisoner's Dilemma game with its neighbors and then either updates its strategy depending on the payoff difference with neighbors or moves to another empty node if not satisfied with its own payoff. If no neighboring node is empty, the agent stays at the same site. By extensive evolutionary simulations, we show that a medium density of agents enhances cooperation, where the network flow of mobile agents is also medium. Moreover, if the movements of agents are more frequent than the strategy updating, cooperation is further promoted. In scale-free networks, the optimal density for cooperation is lower than in other networks because agents get stuck at hubs. Our study suggests that keeping a smooth network flow is significant for the persistence of cooperation in ever-changing societies.
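A toy version of the model can make the movement rule concrete: agents (cooperators or defectors) on a ring lattice with empty sites play the Prisoner's Dilemma with occupied neighbours, imitate a better-scoring neighbour, and hop to an adjacent empty node when dissatisfied. The payoff values, aspiration threshold, and update order below are illustrative simplifications, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
N, k, density, b = 100, 2, 0.6, 1.3      # nodes, half-degree, occupancy, temptation
EMPTY, C, D = -1, 0, 1
state = np.full(N, EMPTY)
occupied = rng.choice(N, size=int(density * N), replace=False)
state[occupied] = rng.integers(0, 2, size=occupied.size)

def neighbours(i):
    return [(i + d) % N for d in range(-k, k + 1) if d != 0]

def payoff(i):
    # PD payoffs: C vs C -> 1, D vs C -> b, C vs D -> 0, D vs D -> 0
    total = 0.0
    for j in neighbours(i):
        if state[j] == EMPTY:
            continue
        if state[i] == C:
            total += 1.0 if state[j] == C else 0.0
        else:
            total += b if state[j] == C else 0.0
    return total

for step in range(200):
    i = rng.choice(np.flatnonzero(state != EMPTY))
    p_i = payoff(i)
    empties = [j for j in neighbours(i) if state[j] == EMPTY]
    if p_i < 1.0 and empties:            # dissatisfied: move to an empty neighbour
        j = rng.choice(empties)
        state[j], state[i] = state[i], EMPTY
    else:                                # otherwise imitate a better-scoring neighbour
        occ = [j for j in neighbours(i) if state[j] != EMPTY]
        if occ:
            j = max(occ, key=payoff)
            if payoff(j) > p_i:
                state[i] = state[j]

print("fraction of cooperators:", np.mean(state[state != EMPTY] == C))
```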
2205.04670
Mingyu Song
Mingyu Song, Carolyn E. Jones, Marie-H. Monfils, Yael Niv
Explaining the effectiveness of fear extinction through latent-cause inference
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Acquiring fear responses to predictors of aversive outcomes is crucial for survival. At the same time, it is important to be able to modify such associations when they are maladaptive, for instance in treating anxiety and trauma-related disorders. Standard extinction procedures can reduce fear temporarily, but with sufficient delay or with reminders of the aversive experience, fear often returns. The latent-cause inference framework explains the return of fear by presuming that animals learn a rich model of the environment, in which the standard extinction procedure triggers the inference of a new latent cause, preventing the unlearning of the original aversive associations. This computational framework had previously inspired an alternative extinction paradigm -- gradual extinction -- which indeed was shown to be more effective in reducing the return of fear. However, the original framework was not sufficient to explain the pattern of results seen in the experiments. Here, we propose a formal model to explain the effectiveness of gradual extinction in reducing spontaneous recovery and reinstatement effects, in contrast to the ineffectiveness of standard extinction and a gradual reverse control procedure. We demonstrate through quantitative simulation that our model can explain qualitative behavioral differences across different extinction procedures as seen in the empirical study. We verify the necessity of several key assumptions added to the latent-cause framework, which suggest potential general principles of animal learning and provide novel predictions for future experiments.
[ { "created": "Tue, 10 May 2022 04:51:37 GMT", "version": "v1" } ]
2022-05-11
[ [ "Song", "Mingyu", "" ], [ "Jones", "Carolyn E.", "" ], [ "Monfils", "Marie-H.", "" ], [ "Niv", "Yael", "" ] ]
Acquiring fear responses to predictors of aversive outcomes is crucial for survival. At the same time, it is important to be able to modify such associations when they are maladaptive, for instance in treating anxiety and trauma-related disorders. Standard extinction procedures can reduce fear temporarily, but with sufficient delay or with reminders of the aversive experience, fear often returns. The latent-cause inference framework explains the return of fear by presuming that animals learn a rich model of the environment, in which the standard extinction procedure triggers the inference of a new latent cause, preventing the unlearning of the original aversive associations. This computational framework had previously inspired an alternative extinction paradigm -- gradual extinction -- which indeed was shown to be more effective in reducing the return of fear. However, the original framework was not sufficient to explain the pattern of results seen in the experiments. Here, we propose a formal model to explain the effectiveness of gradual extinction in reducing spontaneous recovery and reinstatement effects, in contrast to the ineffectiveness of standard extinction and a gradual reverse control procedure. We demonstrate through quantitative simulation that our model can explain qualitative behavioral differences across different extinction procedures as seen in the empirical study. We verify the necessity of several key assumptions added to the latent-cause framework, which suggest potential general principles of animal learning and provide novel predictions for future experiments.
2203.13650
Eddy Kwessi
Eddy Kwessi
Strong Allee effect synaptic plasticity rule in an unsupervised learning environment
null
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synaptic plasticity, or the ability of a brain to change one or more of its functions or structures, has generated and is still generating a lot of interest from the scientific community, especially neuroscientists. This interest went into high gear after empirical evidence was collected that challenged the established paradigm that human brain structures and functions are set from childhood, with only modest changes expected beyond. Early synaptic plasticity rules or laws in this regard include the basic Hebbian rule, which proposed a mechanism for the strengthening or weakening of synapses (weights) during learning and memory. This rule, however, did not account for the fact that weights must have bounded growth over time. Thereafter, many other rules were proposed to complement the basic Hebbian rule, possessing other desirable properties as well. In particular, a desirable property of a synaptic plasticity rule is that the ambient system accounts for inhibition, which is often achieved if the rule allows for a lower bound on synaptic weights. In this paper, we propose a synaptic plasticity rule inspired by the Allee effect, a phenomenon often observed in population dynamics. We show that properties such as synaptic normalization, competition between weights, de-correlation potential, and dynamic stability are satisfied. We show that, in fact, an Allee effect in synaptic plasticity can be construed as an absence of plasticity.
[ { "created": "Fri, 25 Mar 2022 13:57:19 GMT", "version": "v1" } ]
2022-03-28
[ [ "Kwessi", "Eddy", "" ] ]
Synaptic plasticity, or the ability of a brain to change one or more of its functions or structures, has generated and is still generating a lot of interest from the scientific community, especially neuroscientists. This interest went into high gear after empirical evidence was collected that challenged the established paradigm that human brain structures and functions are set from childhood, with only modest changes expected beyond. Early synaptic plasticity rules or laws in this regard include the basic Hebbian rule, which proposed a mechanism for the strengthening or weakening of synapses (weights) during learning and memory. This rule, however, did not account for the fact that weights must have bounded growth over time. Thereafter, many other rules were proposed to complement the basic Hebbian rule, possessing other desirable properties as well. In particular, a desirable property of a synaptic plasticity rule is that the ambient system accounts for inhibition, which is often achieved if the rule allows for a lower bound on synaptic weights. In this paper, we propose a synaptic plasticity rule inspired by the Allee effect, a phenomenon often observed in population dynamics. We show that properties such as synaptic normalization, competition between weights, de-correlation potential, and dynamic stability are satisfied. We show that, in fact, an Allee effect in synaptic plasticity can be construed as an absence of plasticity.
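A hypothetical weight update of the strong-Allee form can illustrate the qualitative behaviour described above: weights below a threshold A decay to zero (an inhibition-like lower bound), while weights above A grow only up to a carrying capacity K (bounded growth). The specific functional form below is borrowed from population dynamics; the rule actually proposed in the paper may differ.

```python
import numpy as np

def allee_step(w, r=1.0, A=0.2, K=1.0, dt=0.01):
    # strong Allee dynamics: dw/dt = r * w * (w/A - 1) * (1 - w/K)
    return w + dt * r * w * (w / A - 1.0) * (1.0 - w / K)

for w0 in (0.1, 0.3, 0.8):       # below / just above / well above the threshold A
    w = w0
    for _ in range(5000):
        w = allee_step(w)
    print(f"w0 = {w0:.1f} -> w = {w:.3f}")
```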
1510.00471
Ron Nielsen
Ron W. Nielsen
Demographic Transition Theory Contradicted Repeatedly by Data
21 pages, 6 figures, 8796 words
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the absence of convincing evidence, data for Sweden and Mauritius are used in academic publications to illustrate the Demographic Transition Theory. These data are closely examined and found to be in clear contradiction of this theory. Demographic Transition Theory is also contradicted by the best available data for England. Other examples of contradicting evidence are also discussed.
[ { "created": "Fri, 2 Oct 2015 02:21:05 GMT", "version": "v1" }, { "created": "Wed, 20 Jan 2016 10:21:15 GMT", "version": "v2" } ]
2016-01-21
[ [ "Nielsen", "Ron W.", "" ] ]
In the absence of convincing evidence, data for Sweden and Mauritius are used in academic publications to illustrate the Demographic Transition Theory. These data are closely examined and found to be in clear contradiction of this theory. Demographic Transition Theory is also contradicted by the best available data for England. Other examples of contradicting evidence are also discussed.
q-bio/0609016
Sung Min Park
Sung Min Park and Beom Jun Kim
Dynamic behaviors in directed networks
null
Phys. Rev. E 74, 026114 (2006)
10.1103/PhysRevE.74.026114
null
q-bio.QM
null
Motivated by the abundance of directed synaptic couplings in a real biological neuronal network, we investigate the synchronization behavior of the Hodgkin-Huxley model in a directed network. We start from the standard model of the Watts-Strogatz undirected network and then change undirected edges to directed arcs with a given probability, still preserving the connectivity of the network. A generalized clustering coefficient for directed networks is defined and used to investigate the interplay between the synchronization behavior and underlying structural properties of directed networks. We observe that the directedness of complex networks plays an important role in emerging dynamical behaviors, which is also confirmed by a numerical study of the sociological game theoretic voter model on directed networks.
[ { "created": "Mon, 11 Sep 2006 10:52:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Park", "Sung Min", "" ], [ "Kim", "Beom Jun", "" ] ]
Motivated by the abundance of directed synaptic couplings in a real biological neuronal network, we investigate the synchronization behavior of the Hodgkin-Huxley model in a directed network. We start from the standard model of the Watts-Strogatz undirected network and then change undirected edges to directed arcs with a given probability, still preserving the connectivity of the network. A generalized clustering coefficient for directed networks is defined and used to investigate the interplay between the synchronization behavior and underlying structural properties of directed networks. We observe that the directedness of complex networks plays an important role in emerging dynamical behaviors, which is also confirmed by a numerical study of the sociological game theoretic voter model on directed networks.
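The network construction can be sketched as follows: starting from a Watts-Strogatz graph, each undirected edge becomes a one-way arc with probability p_arc and stays bidirectional otherwise. The paper defines its own generalized clustering coefficient and preserves connectivity; the sketch below instead uses networkx's built-in directed (Fagiolo-style) clustering, purely to illustrate how directedness changes clustering, and does not check connectivity.

```python
import random
import networkx as nx

random.seed(5)
p_arc = 0.5
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)

D = nx.DiGraph()
D.add_nodes_from(G)
for u, v in G.edges():
    if random.random() < p_arc:
        # replace the undirected edge by a single arc in a random direction
        D.add_edge(*((u, v) if random.random() < 0.5 else (v, u)))
    else:
        # keep the edge bidirectional
        D.add_edge(u, v)
        D.add_edge(v, u)

mean_cc = sum(nx.clustering(D).values()) / D.number_of_nodes()
print("mean directed clustering:", mean_cc)
```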
1511.01426
Kevin Emmett
Kevin Emmett, Benjamin Schweinhart, Raul Rabadan
Multiscale Topology of Chromatin Folding
4 pages, 7 figures. Accepted for presentation at BICT 2015 Special Track on Topology-driven bio-inspired methods and models for complex systems (TOPDRIM4bio)
null
null
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The three-dimensional structure of DNA in the nucleus (chromatin) plays an important role in many cellular processes. Recent experimental advances have led to high-throughput methods of capturing information about chromatin conformation on genome-wide scales. New models are needed to quantitatively interpret this data at a global scale. Here we introduce the use of tools from topological data analysis to study chromatin conformation. We use persistent homology to identify and characterize conserved loops and voids in contact map data and identify scales of interaction. We demonstrate the utility of the approach on simulated data and then look at data from both a bacterial genome and a human cell line. We identify substantial multiscale topology in these datasets.
[ { "created": "Wed, 4 Nov 2015 18:34:43 GMT", "version": "v1" } ]
2015-11-05
[ [ "Emmett", "Kevin", "" ], [ "Schweinhart", "Benjamin", "" ], [ "Rabadan", "Raul", "" ] ]
The three-dimensional structure of DNA in the nucleus (chromatin) plays an important role in many cellular processes. Recent experimental advances have led to high-throughput methods of capturing information about chromatin conformation on genome-wide scales. New models are needed to quantitatively interpret this data at a global scale. Here we introduce the use of tools from topological data analysis to study chromatin conformation. We use persistent homology to identify and characterize conserved loops and voids in contact map data and identify scales of interaction. We demonstrate the utility of the approach on simulated data and then look at data from both a bacterial genome and a human cell line. We identify substantial multiscale topology in these datasets.
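A minimal sketch of this kind of pipeline: treat a symmetrised contact map as a distance-like matrix (high contact frequency mapped to short distance) and compute persistent homology, whose H1 and H2 features correspond to loops and voids. This uses the ripser package; the random "contact map" here is only a stand-in for real Hi-C data, and the contact-to-distance transform is one illustrative choice among several.

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(6)
contacts = rng.random((60, 60))
contacts = (contacts + contacts.T) / 2          # symmetrise the contact map
distance = 1.0 / (contacts + 1e-3)              # high contact -> small distance
np.fill_diagonal(distance, 0.0)

result = ripser(distance, distance_matrix=True, maxdim=2)
for dim, dgm in enumerate(result["dgms"]):
    print(f"H{dim}: {len(dgm)} features")       # H1 ~ loops, H2 ~ voids
```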
1903.02026
Pingkun Yan
Grant Haskins, Uwe Kruger, Pingkun Yan
Deep Learning in Medical Image Registration: A Survey
Accepted for publication by Machine Vision and Applications on January 8, 2020
null
10.1007/s00138-020-01060-x
null
q-bio.QM cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The establishment of image correspondence through robust image registration is critical to many clinical tasks such as image fusion, organ atlas creation, and tumor growth monitoring, and is a very challenging problem. Since the beginning of the recent deep learning renaissance, the medical imaging research community has developed deep learning based approaches and achieved the state-of-the-art in many applications, including image registration. The rapid adoption of deep learning for image registration applications over the past few years necessitates a comprehensive summary and outlook, which is the main scope of this survey. This requires placing a focus on the different research areas as well as highlighting challenges that practitioners face. This survey, therefore, outlines the evolution of deep learning based medical image registration in the context of both research challenges and relevant innovations in the past few years. Further, this survey highlights future research directions to show how this field may be possibly moved forward to the next level.
[ { "created": "Tue, 5 Mar 2019 19:37:51 GMT", "version": "v1" }, { "created": "Tue, 21 Jan 2020 14:58:06 GMT", "version": "v2" } ]
2020-01-22
[ [ "Haskins", "Grant", "" ], [ "Kruger", "Uwe", "" ], [ "Yan", "Pingkun", "" ] ]
The establishment of image correspondence through robust image registration is critical to many clinical tasks such as image fusion, organ atlas creation, and tumor growth monitoring, and is a very challenging problem. Since the beginning of the recent deep learning renaissance, the medical imaging research community has developed deep learning based approaches and achieved the state-of-the-art in many applications, including image registration. The rapid adoption of deep learning for image registration applications over the past few years necessitates a comprehensive summary and outlook, which is the main scope of this survey. This requires placing a focus on the different research areas as well as highlighting challenges that practitioners face. This survey, therefore, outlines the evolution of deep learning based medical image registration in the context of both research challenges and relevant innovations in the past few years. Further, this survey highlights future research directions to show how this field may be possibly moved forward to the next level.
1407.7518
Ziyue Gao
Ziyue Gao, Darrel Waggoner, Matthew Stephens, Carole Ober and Molly Przeworski
An estimate of the average number of recessive lethal mutations carried by humans
37 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and non-consanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To circumvent this limitation, we focused on a founder population with almost complete Mendelian disease ascertainment and a known pedigree. By considering all recessive lethal diseases reported in the pedigree and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.83]) autosomal, recessive alleles that lead to complete sterility or severe disorders at birth or before reproductive age when homozygous. Comparison to existing estimates of the deleterious effects of all recessive alleles suggests that a substantial fraction of the burden of autosomal, recessive variants is due to single mutations that lead to death between birth and reproductive age. In turn, the comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes.
[ { "created": "Mon, 28 Jul 2014 19:53:29 GMT", "version": "v1" } ]
2014-07-29
[ [ "Gao", "Ziyue", "" ], [ "Waggoner", "Darrel", "" ], [ "Stephens", "Matthew", "" ], [ "Ober", "Carole", "" ], [ "Przeworski", "Molly", "" ] ]
The effects of inbreeding on human health depend critically on the number and severity of recessive, deleterious mutations carried by individuals. In humans, existing estimates of these quantities are based on comparisons between consanguineous and non-consanguineous couples, an approach that confounds socioeconomic and genetic effects of inbreeding. To circumvent this limitation, we focused on a founder population with almost complete Mendelian disease ascertainment and a known pedigree. By considering all recessive lethal diseases reported in the pedigree and simulating allele transmissions, we estimated that each haploid set of human autosomes carries on average 0.29 (95% credible interval [0.10, 0.83]) autosomal, recessive alleles that lead to complete sterility or severe disorders at birth or before reproductive age when homozygous. Comparison to existing estimates of the deleterious effects of all recessive alleles suggests that a substantial fraction of the burden of autosomal, recessive variants is due to single mutations that lead to death between birth and reproductive age. In turn, the comparison to estimates from other eukaryotes points to a surprising constancy of the average number of recessive lethal mutations across organisms with markedly different genome sizes.
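The gene-dropping logic behind such estimates can be illustrated on a toy pedigree: assign a unique allele to one founder, transmit alleles Mendelianly through the pedigree, and count homozygotes in the offspring of a consanguineous mating. For a single first-cousin mating, the expected homozygote rate for a founder allele is 1/64; the simulation below checks that. The pedigree is a deliberately simple stand-in for the real, much larger one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def child(p, q):
    # Mendelian transmission: one random allele from each parent
    return (p[rng.integers(2)], q[rng.integers(2)])

def cousin_mating_homozygote():
    founder = (1, 0)                     # carries one copy of the "lethal" allele
    spouse = (0, 0)
    c1, c2 = child(founder, spouse), child(founder, spouse)   # two siblings
    g1, g2 = child(c1, (0, 0)), child(c2, (0, 0))             # their children
    offspring = child(g1, g2)                                 # first cousins mate
    return offspring == (1, 1)

trials = 200_000
rate = np.mean([cousin_mating_homozygote() for _ in range(trials)])
print(f"homozygote rate: {rate:.4f} (expected 1/64 = {1/64:.4f})")
```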
2010.06063
Laura Kubatko
Andrew Richards and Laura Kubatko
Bayesian Weighted Triplet and Quartet Methods for Species Tree Inference
null
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inference of the evolutionary histories of species, commonly represented by a species tree, is complicated by the divergent evolutionary history of different parts of the genome. Different loci on the genome can have different histories from the underlying species tree (and each other) due to processes such as incomplete lineage sorting (ILS), gene duplication and loss, and horizontal gene transfer. The multispecies coalescent is a commonly used model for performing inference on species and gene trees in the presence of ILS. This paper introduces Lily-T and Lily-Q, two new methods for species tree inference under the multispecies coalescent. We then compare them to two frequently used methods, SVDQuartets and ASTRAL, using simulated and empirical data. Both methods generally showed improvement over SVDQuartets, and Lily-Q was superior to Lily-T for most simulation settings. The comparison to ASTRAL was more mixed - Lily-Q tended to be better than ASTRAL when the length of recombination-free loci was short, when the coalescent population parameter {\theta} was small, or when the internal branch lengths were longer.
[ { "created": "Mon, 12 Oct 2020 22:54:59 GMT", "version": "v1" } ]
2020-10-14
[ [ "Richards", "Andrew", "" ], [ "Kubatko", "Laura", "" ] ]
Inference of the evolutionary histories of species, commonly represented by a species tree, is complicated by the divergent evolutionary history of different parts of the genome. Different loci on the genome can have different histories from the underlying species tree (and each other) due to processes such as incomplete lineage sorting (ILS), gene duplication and loss, and horizontal gene transfer. The multispecies coalescent is a commonly used model for performing inference on species and gene trees in the presence of ILS. This paper introduces Lily-T and Lily-Q, two new methods for species tree inference under the multispecies coalescent. We then compare them to two frequently used methods, SVDQuartets and ASTRAL, using simulated and empirical data. Both methods generally showed improvement over SVDQuartets, and Lily-Q was superior to Lily-T for most simulation settings. The comparison to ASTRAL was more mixed - Lily-Q tended to be better than ASTRAL when the length of recombination-free loci was short, when the coalescent population parameter {\theta} was small, or when the internal branch lengths were longer.
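Triplet and quartet methods of this kind rest on a standard multispecies-coalescent fact: for a rooted three-taxon species tree ((A,B),C) with internal branch length t in coalescent units, the gene tree matches the species tree with probability 1 - (2/3)e^{-t}, and the two discordant topologies are equally likely. The simulation below checks the formula; it is a textbook identity, not the Lily-T/Lily-Q algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(8)

def gene_tree_matches(t):
    # A and B may coalesce within the internal branch (exp(1) waiting time);
    # otherwise all three lineages reach the ancestral population, where the
    # first coalescence picks one of the three pairs uniformly at random.
    if rng.exponential(1.0) < t:
        return True
    return rng.integers(3) == 0

t = 0.5
trials = 100_000
sim = np.mean([gene_tree_matches(t) for _ in range(trials)])
print(f"simulated: {sim:.4f}  theory: {1 - (2/3) * np.exp(-t):.4f}")
```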
2310.13468
Lara Herriott
Lara Herriott, Henriette L. Capel, Isaac Ellmen, Nathan Schofield, Jiayuan Zhu, Ben Lambert, David Gavaghan, Ioana Bouros, Richard Creswell and Kit Gallagher
EpiGeoPop: A Tool for Developing Spatially Accurate Country-level Epidemiological Models
16 pages, 6 figures, 3 supplementary figures
null
null
null
q-bio.PE physics.soc-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models play a crucial role in understanding the spread of infectious disease outbreaks and influencing policy decisions. These models aid pandemic preparedness by predicting outcomes under hypothetical scenarios and identifying weaknesses in existing frameworks. However, their accuracy, utility, and comparability are being scrutinized. Agent-based models (ABMs) have emerged as a valuable tool, capturing population heterogeneity and spatial effects, particularly when assessing intervention strategies. Here we present EpiGeoPop, a user-friendly tool for rapidly preparing spatially accurate population configurations of entire countries. EpiGeoPop helps to address the problem of complex and time-consuming model setup in ABMs, specifically improving the integration of spatial detail. We subsequently demonstrate the importance of accurate spatial detail in ABM simulations of disease outbreaks using Epiabm, an ABM based on Imperial College London's CovidSim with improved modularity, documentation and testing. Our investigation examines the interplay between population density, the implementation of spatial transmission, and realistic interventions in Epiabm.
[ { "created": "Fri, 20 Oct 2023 13:05:03 GMT", "version": "v1" } ]
2023-10-23
[ [ "Herriott", "Lara", "" ], [ "Capel", "Henriette L.", "" ], [ "Ellmen", "Isaac", "" ], [ "Schofield", "Nathan", "" ], [ "Zhu", "Jiayuan", "" ], [ "Lambert", "Ben", "" ], [ "Gavaghan", "David", "" ], [ "Bouros", "Ioana", "" ], [ "Creswell", "Richard", "" ], [ "Gallagher", "Kit", "" ] ]
Mathematical models play a crucial role in understanding the spread of infectious disease outbreaks and influencing policy decisions. These models aid pandemic preparedness by predicting outcomes under hypothetical scenarios and identifying weaknesses in existing frameworks. However, their accuracy, utility, and comparability are being scrutinized. Agent-based models (ABMs) have emerged as a valuable tool, capturing population heterogeneity and spatial effects, particularly when assessing intervention strategies. Here we present EpiGeoPop, a user-friendly tool for rapidly preparing spatially accurate population configurations of entire countries. EpiGeoPop helps to address the problem of complex and time-consuming model setup in ABMs, specifically improving the integration of spatial detail. We subsequently demonstrate the importance of accurate spatial detail in ABM simulations of disease outbreaks using Epiabm, an ABM based on Imperial College London's CovidSim with improved modularity, documentation and testing. Our investigation examines the interplay between population density, the implementation of spatial transmission, and realistic interventions in Epiabm.
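One step any such spatial-preparation tool must perform is placing agents in proportion to observed population density. A sketch under simplifying assumptions (uniform placement within raster cells; the function and variable names are invented, not EpiGeoPop's API):

```python
import numpy as np

def place_agents(density, n_agents, cell_size_km=1.0, rng=None):
    """Sample agent coordinates proportional to a population-density raster.

    density: 2-D array of persons per cell (e.g. rasterised census data).
    Returns (x, y) positions in km, uniform within each sampled cell."""
    rng = rng or np.random.default_rng()
    p = density.ravel() / density.sum()
    cells = rng.choice(density.size, size=n_agents, p=p)
    rows, cols = np.unravel_index(cells, density.shape)
    x = (cols + rng.random(n_agents)) * cell_size_km
    y = (rows + rng.random(n_agents)) * cell_size_km
    return x, y

# Toy raster: one dense 'city' cell surrounded by sparse countryside.
density = np.ones((10, 10))
density[5, 5] = 500.0
x, y = place_agents(density, n_agents=1000)
```

The same sampling logic scales to country-level rasters; only the density array and cell size change.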
2106.03713
Jan Vandenbroucke
Jan P Vandenbroucke, Elizabeth B Brickley, Christina M.J.E. Vandenbroucke-Grauls, Neil Pearce
The evolving usefulness of the Test-Negative Design in studying risk factors for COVID-19 due to changes in testing policy
3 pages
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is a short extension of our previous paper [arXiv:2004.06033] on the use of the Test-Negative design to study risk factors for COVID-19 [see the PubMed and arXiv references below]. The reason for the extension is that the conditions under which people refer themselves for testing have changed greatly: originally, most countries gave priority to testing people with symptoms, but nowadays people without symptoms are also tested for various reasons, e.g., during contact tracing or to be allowed on an (international) flight. Interestingly, this opens new possibilities to investigate risk factors for infection and risk factors for becoming diseased separately. To use this new situation to best effect, one has to think carefully about how to elucidate the different reasons for testing and what analyses one might do with the different groups.
[ { "created": "Mon, 7 Jun 2021 15:24:17 GMT", "version": "v1" } ]
2021-06-08
[ [ "Vandenbroucke", "Jan P", "" ], [ "Brickley", "Elizabeth B", "" ], [ "Vandenbroucke-Grauls", "Christina M. J. E.", "" ], [ "Pearce", "Neil", "" ] ]
This paper is a short extension of our previous paper [arXiv:2004.06033] on the use of the Test-Negative design to study risk factors for COVID-19 [see the PubMed and arXiv references below]. The reason for the extension is that the conditions under which people refer themselves for testing have changed greatly: originally, most countries gave priority to testing people with symptoms, but nowadays people without symptoms are also tested for various reasons, e.g., during contact tracing or to be allowed on an (international) flight. Interestingly, this opens new possibilities to investigate risk factors for infection and risk factors for becoming diseased separately. To use this new situation to best effect, one has to think carefully about how to elucidate the different reasons for testing and what analyses one might do with the different groups.
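As a toy illustration of analysing the testing groups separately, the usual test-negative 2x2 table can be stratified by the reason for testing; all counts below are invented.

```python
def odds_ratio(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Odds ratio for a 2x2 test-negative table:
    rows = exposure status, columns = test result (positive / negative)."""
    return (exposed_pos / exposed_neg) / (unexposed_pos / unexposed_neg)

# Hypothetical counts, stratified by the reason the test was taken.
strata = {
    "symptomatic self-referral": (120, 400, 60, 480),
    "asymptomatic screening":    (30, 900, 15, 950),
}
for reason, table in strata.items():
    print(f"{reason}: OR = {odds_ratio(*table):.2f}")
```

In the symptomatic stratum the odds ratio speaks to risk factors for becoming diseased; in the screening stratum it is closer to a risk factor for infection per se.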
2312.12888
Alain Nogaret
Stephen A. Wells, Joseph D. Taylor, Paul G. Morris and Alain Nogaret
Inferring the dynamics of ionic currents from recursive piecewise data assimilation of approximate neuron models
null
null
null
null
q-bio.QM math-ph math.MP
http://creativecommons.org/licenses/by/4.0/
We construct neuron models from data by transferring information from an observed time series to the state variables and parameters of Hodgkin-Huxley models. When the learning period completes, the model will predict additional observations and its parameters uniquely characterise the complement of ion channels. However, the assimilation of biological data, as opposed to model data, is complicated by the lack of knowledge of the true neuron equations. Reliance on guessed conductance models is plagued with multi-valued parameter solutions. Here, we report on the distributions of parameters and currents predicted with intentionally erroneous models, over-specified models, and an approximate model fitting hippocampal neuron data. We introduce a recursive piecewise data assimilation (RPDA) algorithm that converges with near-perfect reliability when the model is known. When the model is unknown, we show that model error introduces correlations between certain parameters. The ionic currents reconstructed from these parameters are excellent predictors of true currents and carry a higher degree of confidence (>95.5%) than the underlying parameters (>53%). Unexpressed ionic currents are correctly filtered out even in the presence of mild model error. When the model is unknown, the covariance eigenvalues of parameter estimates are found to be a good gauge of model error. Our results suggest that biological information may be retrieved from data by focussing on current estimates rather than parameters.
[ { "created": "Wed, 20 Dec 2023 09:56:54 GMT", "version": "v1" } ]
2023-12-21
[ [ "Wells", "Stephen A.", "" ], [ "Taylor", "Joseph D.", "" ], [ "Morris", "Paul G.", "" ], [ "Nogaret", "Alain", "" ] ]
We construct neuron models from data by transferring information from an observed time series to the state variables and parameters of Hodgkin-Huxley models. When the learning period completes, the model will predict additional observations and its parameters uniquely characterise the complement of ion channels. However, the assimilation of biological data, as opposed to model data, is complicated by the lack of knowledge of the true neuron equations. Reliance on guessed conductance models is plagued with multi-valued parameter solutions. Here, we report on the distributions of parameters and currents predicted with intentionally erroneous models, over-specified models, and an approximate model fitting hippocampal neuron data. We introduce a recursive piecewise data assimilation (RPDA) algorithm that converges with near-perfect reliability when the model is known. When the model is unknown, we show that model error introduces correlations between certain parameters. The ionic currents reconstructed from these parameters are excellent predictors of true currents and carry a higher degree of confidence (>95.5%) than the underlying parameters (>53%). Unexpressed ionic currents are correctly filtered out even in the presence of mild model error. When the model is unknown, the covariance eigenvalues of parameter estimates are found to be a good gauge of model error. Our results suggest that biological information may be retrieved from data by focussing on current estimates rather than parameters.
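The underlying 'twin experiment' logic (generate data from a known conductance model, add noise, and ask whether the parameters can be recovered) can be sketched with a plain least-squares fit of the classic Hodgkin-Huxley conductances. This is not the RPDA algorithm, only the baseline setting it improves on:

```python
import numpy as np
from scipy.optimize import least_squares

C, ENa, EK, EL = 1.0, 50.0, -77.0, -54.4   # uF/cm^2 and mV

def simulate(g, I=10.0, T=50.0, dt=0.01):
    """Euler integration of the classic Hodgkin-Huxley equations.
    g = (gNa, gK, gL) in mS/cm^2; returns the membrane-voltage trace."""
    gNa, gK, gL = g
    n_steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(n_steps)
    for i in range(n_steps):
        am = 0.1*(V+40)/(1-np.exp(-(V+40)/10)); bm = 4*np.exp(-(V+65)/18)
        ah = 0.07*np.exp(-(V+65)/20);           bh = 1/(1+np.exp(-(V+35)/10))
        an = 0.01*(V+55)/(1-np.exp(-(V+55)/10)); bn = 0.125*np.exp(-(V+65)/80)
        m += dt*(am*(1-m) - bm*m)
        h += dt*(ah*(1-h) - bh*h)
        n += dt*(an*(1-n) - bn*n)
        V += dt*(I - gNa*m**3*h*(V-ENa) - gK*n**4*(V-EK) - gL*(V-EL))/C
        trace[i] = V
    return trace

# Twin experiment: data generated with known conductances, then refitted.
true_g = (120.0, 36.0, 0.3)
observed = simulate(true_g) + np.random.default_rng(0).normal(0, 0.5, 5000)
fit = least_squares(lambda g: simulate(g) - observed,
                    x0=(80.0, 20.0, 0.5), bounds=(0, np.inf))
print("estimated conductances:", fit.x)
```

Restarting least_squares from different initial guesses can land in different local optima; that multi-valuedness is exactly the pathology the abstract describes.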
2205.09122
Markus Daniel Herrmann
Chris Gorman, Davide Punzo, Igor Octaviano, Steve Pieper, William J.R. Longabaugh, David A. Clunie, Ron Kikinis, Andrey Y. Fedorov, Markus D. Herrmann
Slim: interoperable slide microscopy viewer and annotation tool for imaging data science and computational pathology
null
null
10.1038/s41467-023-37224-2
null
q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
The exchange of large and complex slide microscopy imaging data in biomedical research and pathology practice is impeded by a lack of data standardization and interoperability, which is detrimental to the reproducibility of scientific findings and clinical integration of technological innovations. Slim is an open-source, web-based slide microscopy viewer that implements the internationally accepted Digital Imaging and Communications in Medicine (DICOM) standard to achieve interoperability with a multitude of existing medical imaging systems. We showcase the capabilities of Slim as the slide microscopy viewer of the NCI Imaging Data Commons and demonstrate how the viewer enables interactive visualization of traditional brightfield microscopy and highly multiplexed immunofluorescence microscopy images from The Cancer Genome Atlas and Human Tissue Atlas Network, respectively, using standard DICOMweb services. We further show how Slim enables the collection of standardized image annotations for the development or validation of machine learning models and the visual interpretation of model inference results in the form of segmentation masks, spatial heat maps, or image-derived measurements.
[ { "created": "Wed, 18 May 2022 17:06:07 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2022 23:27:00 GMT", "version": "v2" } ]
2023-04-26
[ [ "Gorman", "Chris", "" ], [ "Punzo", "Davide", "" ], [ "Octaviano", "Igor", "" ], [ "Pieper", "Steve", "" ], [ "Longabaugh", "William J. R.", "" ], [ "Clunie", "David A.", "" ], [ "Kikinis", "Ron", "" ], [ "Fedorov", "Andrey Y.", "" ], [ "Herrmann", "Markus D.", "" ] ]
The exchange of large and complex slide microscopy imaging data in biomedical research and pathology practice is impeded by a lack of data standardization and interoperability, which is detrimental to the reproducibility of scientific findings and clinical integration of technological innovations. Slim is an open-source, web-based slide microscopy viewer that implements the internationally accepted Digital Imaging and Communications in Medicine (DICOM) standard to achieve interoperability with a multitude of existing medical imaging systems. We showcase the capabilities of Slim as the slide microscopy viewer of the NCI Imaging Data Commons and demonstrate how the viewer enables interactive visualization of traditional brightfield microscopy and highly multiplexed immunofluorescence microscopy images from The Cancer Genome Atlas and Human Tissue Atlas Network, respectively, using standard DICOMweb services. We further show how Slim enables the collection of standardized image annotations for the development or validation of machine learning models and the visual interpretation of model inference results in the form of segmentation masks, spatial heat maps, or image-derived measurements.
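Because Slim speaks standard DICOMweb, any HTTP client can query a compatible archive. A sketch of a QIDO-RS search for slide-microscopy studies (the server URL is a placeholder; the query parameters and DICOM JSON tags are the standard ones):

```python
import requests

# Hypothetical DICOMweb origin server; QIDO-RS search for studies containing
# slide-microscopy ("SM") series, returned as DICOM JSON.
BASE = "https://dicomweb.example.org/dicomweb"   # placeholder URL

resp = requests.get(
    f"{BASE}/studies",
    params={"ModalitiesInStudy": "SM", "limit": 10},
    headers={"Accept": "application/dicom+json"},
)
resp.raise_for_status()
for study in resp.json():
    # DICOM JSON keys are hexadecimal tags; 0020000D is StudyInstanceUID.
    uid = study["0020000D"]["Value"][0]
    print("slide microscopy study:", uid)
```

In practice a wrapper library such as dicomweb-client can issue the same searches without hand-built URLs.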
0807.3287
Johannes Wollbold
Johannes Wollbold, Reinhard Guthke, Bernhard Ganter
Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods
15 pages, 1 figure, LaTeX style llncsdoc.sty
K. Horimoto et al. (Eds.): AB 2008, LNCS 5147. Springer, Heidelberg 2008, pp. 230-244
null
null
q-bio.MN cs.AI math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our aim is to build a set of rules such that reasoning over temporal dependencies within gene regulatory networks is possible. The underlying transitions may be obtained by discretizing observed time series, or they may be generated from existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. The attribute exploration algorithm enables an expert, or a supporting computer program, to decide on the validity of a minimal set of implications and thus to construct a sound and complete knowledge base, from which all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. The formal structures, however, are exhibited in a fully general manner, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and nonbiological areas.
[ { "created": "Mon, 21 Jul 2008 15:46:22 GMT", "version": "v1" } ]
2008-07-22
[ [ "Wollbold", "Johannes", "" ], [ "Guthke", "Reinhard", "" ], [ "Ganter", "Bernhard", "" ] ]
Our aim is to build a set of rules such that reasoning over temporal dependencies within gene regulatory networks is possible. The underlying transitions may be obtained by discretizing observed time series, or they may be generated from existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. The attribute exploration algorithm enables an expert, or a supporting computer program, to decide on the validity of a minimal set of implications and thus to construct a sound and complete knowledge base, from which all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. The formal structures, however, are exhibited in a fully general manner, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and nonbiological areas.
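The two derivation operators of formal concept analysis, and the formal concepts they induce, fit in a few lines for a toy gene-by-property context. The context below is invented, and this is not the attribute-exploration implementation used in the paper:

```python
from itertools import combinations

# Toy formal context: objects (genes) x attributes (observed properties).
context = {
    "geneA": {"expressed_t1", "expressed_t2"},
    "geneB": {"expressed_t1"},
    "geneC": {"expressed_t2", "repressed_t3"},
}
attributes = set().union(*context.values())

def intent(objects):
    """Attributes shared by all given objects (derivation operator ')."""
    if not objects:
        return set(attributes)
    return set.intersection(*(context[o] for o in objects))

def extent(attrs):
    """Objects possessing all given attributes (derivation operator ')."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (extent, intent) closed under both operators;
# enumerating closures of all object subsets yields every concept.
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        B = intent(set(objs))
        A = extent(B)
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "<->", sorted(B))
```

An implication such as {repressed_t3} -> {expressed_t2} is valid in this context exactly when extent({'repressed_t3'}) is contained in extent({'expressed_t2'}); attribute exploration presents such candidate implications to an expert for confirmation or refutation.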