Dataset schema (one record per paper; field, type, and value range):

id              stringlengths   9 – 13
submitter       stringlengths   4 – 48
authors         stringlengths   4 – 9.62k
title           stringlengths   4 – 343
comments        stringlengths   2 – 480
journal-ref     stringlengths   9 – 309
doi             stringlengths   12 – 138
report-no       stringclasses   277 values
categories      stringlengths   8 – 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 – 3.76k
versions        listlengths     1 – 15
update_date     stringlengths   10 – 10
authors_parsed  listlengths     1 – 147
abstract        stringlengths   24 – 3.75k
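To make the schema above concrete, here is a minimal sketch of one record as a plain Python dict, with sample values abridged from the first record in this dump (arXiv 2308.11176; the long abstract fields and most `authors_parsed` entries are omitted for brevity, and the helper function `last_names` is illustrative, not part of the dataset):

```python
# One record under the schema above (values abridged from arXiv 2308.11176).
record = {
    "id": "2308.11176",
    "submitter": "Katharina Huber",
    "authors": "Katharina T. Huber, Leo van Iersel, Vincent Moulton, Guillaume Scholz",
    "title": "Is this network proper forest-based?",
    "comments": None,          # "null" fields in the dump map naturally to None
    "journal-ref": None,
    "doi": None,
    "report-no": None,
    "categories": "q-bio.PE",  # space-separated when a paper has several categories
    "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/",
    "versions": [{"created": "Tue, 22 Aug 2023 04:07:56 GMT", "version": "v1"}],
    "update_date": "2023-08-23",
    # Each author is a [last, first, suffix] triple (first two authors shown).
    "authors_parsed": [["Huber", "Katharina T.", ""], ["van Iersel", "Leo", ""]],
}

def last_names(rec):
    """Extract author last names from the parsed-author triples."""
    return [triple[0] for triple in rec["authors_parsed"]]

print(last_names(record))  # → ['Huber', 'van Iersel']
```

The `categories` field can likewise be split with `record["categories"].split()` to get a list of arXiv category codes.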
2308.11176
Katharina Huber
Katharina T. Huber, Leo van Iersel, Vincent Moulton, Guillaume Scholz
Is this network proper forest-based?
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
In evolutionary biology, networks are increasingly being used to represent evolutionary histories for species that have undergone non-treelike or reticulate evolution. Such networks are essentially directed acyclic graphs with a leaf set that corresponds to a collection of species, and in which non-leaf vertices with indegree 1 correspond to speciation events and vertices with indegree greater than 1 correspond to reticulate events such as gene transfer. Recently, forest-based networks have been introduced, which are essentially (multi-rooted) networks that can be formed by adding some arcs to a collection of phylogenetic trees (or phylogenetic forest), where each arc is added in such a way that its ends always lie in two different trees in the forest. In this paper, we consider the complexity of deciding whether or not a given network is proper forest-based, that is, whether it can be formed by adding arcs to some underlying phylogenetic forest which contains the same number of trees as there are roots in the network. More specifically, we show that it can be decided in polynomial time whether or not a binary, tree-child network with $m \ge 2$ roots is proper forest-based when $m=2$, but that this problem is NP-complete for $m\ge 3$. We also give a fixed parameter tractable (FPT) algorithm for deciding whether or not a network in which every vertex has indegree at most 2 is proper forest-based. A key element in proving our results is a new characterization for when a network with $m$ roots is proper forest-based, which is given in terms of the existence of certain $m$-colorings of the vertices of the network.
[ { "created": "Tue, 22 Aug 2023 04:07:56 GMT", "version": "v1" } ]
2023-08-23
[ [ "Huber", "Katharina T.", "" ], [ "van Iersel", "Leo", "" ], [ "Moulton", "Vincent", "" ], [ "Scholz", "Guillaume", "" ] ]
In evolutionary biology, networks are increasingly being used to represent evolutionary histories for species that have undergone non-treelike or reticulate evolution. Such networks are essentially directed acyclic graphs with a leaf set that corresponds to a collection of species, and in which non-leaf vertices with indegree 1 correspond to speciation events and vertices with indegree greater than 1 correspond to reticulate events such as gene transfer. Recently, forest-based networks have been introduced, which are essentially (multi-rooted) networks that can be formed by adding some arcs to a collection of phylogenetic trees (or phylogenetic forest), where each arc is added in such a way that its ends always lie in two different trees in the forest. In this paper, we consider the complexity of deciding whether or not a given network is proper forest-based, that is, whether it can be formed by adding arcs to some underlying phylogenetic forest which contains the same number of trees as there are roots in the network. More specifically, we show that it can be decided in polynomial time whether or not a binary, tree-child network with $m \ge 2$ roots is proper forest-based when $m=2$, but that this problem is NP-complete for $m\ge 3$. We also give a fixed parameter tractable (FPT) algorithm for deciding whether or not a network in which every vertex has indegree at most 2 is proper forest-based. A key element in proving our results is a new characterization for when a network with $m$ roots is proper forest-based, which is given in terms of the existence of certain $m$-colorings of the vertices of the network.
1208.5095
Michael Courtney
Joshua Courtney, Taylor Klinkmann, Amy Courtney, Joseph Torano, and Michael Courtney
Relative Condition Factors of Fish as Bioindicators One Year after the Deepwater Horizon Oil Spill
null
null
null
null
q-bio.PE physics.ao-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creel surveys were performed over a three-week period in late spring, 2011, in the Lafourche and Calcasieu area estuaries of the Louisiana Gulf Coast. Weights and lengths were measured for black drum (Pogonias cromis), red drum (Sciaenops ocellatus), and spotted seatrout (Cynoscion nebulosus), and relative condition factors were calculated relative to expected weights from the long-term (5-year) Louisiana data for each species. A normal relative condition factor is 1.00. The mean relative condition factors in the Lafourche area were black drum, 0.955 (0.020); red drum, 0.955 (0.011); spotted seatrout, 0.994 (0.009). In the Calcasieu area, the mean relative condition factors were black drum, 0.934 (0.017); red drum, 0.965 (0.014); spotted seatrout, 0.971 (0.010). Uncertainties are in parentheses. Results suggest that the abundance of primary food sources for black drum and red drum in Lafourche, including oysters and crab, was likely affected by the oil spill and continued to be reduced one year later. Increased harvest of oysters and blue crab in the Calcasieu area (in part to make up for the ban in most of Louisiana) also resulted in less food for the black drum and red drum there. Spotted seatrout eat mainly shrimp and small fish and showed no significant reduction in relative condition factor in Lafourche and a slight reduction in Calcasieu one year after the oil spill.
[ { "created": "Sat, 25 Aug 2012 03:02:48 GMT", "version": "v1" } ]
2012-08-28
[ [ "Courtney", "Joshua", "" ], [ "Klinkmann", "Taylor", "" ], [ "Courtney", "Amy", "" ], [ "Torano", "Joseph", "" ], [ "Courtney", "Michael", "" ] ]
Creel surveys were performed over a three-week period in late spring, 2011, in the Lafourche and Calcasieu area estuaries of the Louisiana Gulf Coast. Weights and lengths were measured for black drum (Pogonias cromis), red drum (Sciaenops ocellatus), and spotted seatrout (Cynoscion nebulosus), and relative condition factors were calculated relative to expected weights from the long-term (5-year) Louisiana data for each species. A normal relative condition factor is 1.00. The mean relative condition factors in the Lafourche area were black drum, 0.955 (0.020); red drum, 0.955 (0.011); spotted seatrout, 0.994 (0.009). In the Calcasieu area, the mean relative condition factors were black drum, 0.934 (0.017); red drum, 0.965 (0.014); spotted seatrout, 0.971 (0.010). Uncertainties are in parentheses. Results suggest that the abundance of primary food sources for black drum and red drum in Lafourche, including oysters and crab, was likely affected by the oil spill and continued to be reduced one year later. Increased harvest of oysters and blue crab in the Calcasieu area (in part to make up for the ban in most of Louisiana) also resulted in less food for the black drum and red drum there. Spotted seatrout eat mainly shrimp and small fish and showed no significant reduction in relative condition factor in Lafourche and a slight reduction in Calcasieu one year after the oil spill.
2007.08028
Joseph Bae
Joseph Bae, Saarthak Kapse, Gagandeep Singh, Rishabh Gattu, Syed Ali, Neal Shah, Colin Marshall, Jonathan Pierce, Tej Phatak, Amit Gupta, Jeremy Green, Nikhil Madan, Prateek Prasanna
Predicting Clinical Outcomes in COVID-19 using Radiomics and Deep Learning on Chest Radiographs: A Multi-Institutional Study
Joseph Bae and Saarthak Kapse have contributed equally to this work
null
null
null
q-bio.QM cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients. This two-center, retrospective study analyzed 530 deidentified CXRs from 515 COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020. Deep learning (DL) and machine learning classifiers to predict mechanical ventilation requirement and mortality were trained and evaluated using patient CXRs. A novel radiomic embedding framework was also explored for outcome prediction. All results are compared against radiologist grading of CXRs (zone-wise expert severity scores). Radiomic and DL classification models had mAUCs of 0.78+/-0.02 and 0.81+/-0.04, compared with expert scores mAUCs of 0.75+/-0.02 and 0.79+/-0.05 for mechanical ventilation requirement and mortality prediction, respectively. Combined classifiers using both radiomics and expert severity scores resulted in mAUCs of 0.79+/-0.04 and 0.83+/-0.04 for each prediction task, demonstrating improvement over either artificial intelligence or radiologist interpretation alone. Our results also suggest instances where inclusion of radiomic features in DL improves model predictions, something that might be explored in other pathologies. The models proposed in this study and the prognostic information they provide might aid physician decision making and resource allocation during the COVID-19 pandemic.
[ { "created": "Wed, 15 Jul 2020 22:48:11 GMT", "version": "v1" }, { "created": "Thu, 1 Jul 2021 18:47:22 GMT", "version": "v2" } ]
2021-07-05
[ [ "Bae", "Joseph", "" ], [ "Kapse", "Saarthak", "" ], [ "Singh", "Gagandeep", "" ], [ "Gattu", "Rishabh", "" ], [ "Ali", "Syed", "" ], [ "Shah", "Neal", "" ], [ "Marshall", "Colin", "" ], [ "Pierce", "Jonathan", "" ], [ "Phatak", "Tej", "" ], [ "Gupta", "Amit", "" ], [ "Green", "Jeremy", "" ], [ "Madan", "Nikhil", "" ], [ "Prasanna", "Prateek", "" ] ]
We predict mechanical ventilation requirement and mortality using computational modeling of chest radiographs (CXRs) for coronavirus disease 2019 (COVID-19) patients. This two-center, retrospective study analyzed 530 deidentified CXRs from 515 COVID-19 patients treated at Stony Brook University Hospital and Newark Beth Israel Medical Center between March and August 2020. Deep learning (DL) and machine learning classifiers to predict mechanical ventilation requirement and mortality were trained and evaluated using patient CXRs. A novel radiomic embedding framework was also explored for outcome prediction. All results are compared against radiologist grading of CXRs (zone-wise expert severity scores). Radiomic and DL classification models had mAUCs of 0.78+/-0.02 and 0.81+/-0.04, compared with expert scores mAUCs of 0.75+/-0.02 and 0.79+/-0.05 for mechanical ventilation requirement and mortality prediction, respectively. Combined classifiers using both radiomics and expert severity scores resulted in mAUCs of 0.79+/-0.04 and 0.83+/-0.04 for each prediction task, demonstrating improvement over either artificial intelligence or radiologist interpretation alone. Our results also suggest instances where inclusion of radiomic features in DL improves model predictions, something that might be explored in other pathologies. The models proposed in this study and the prognostic information they provide might aid physician decision making and resource allocation during the COVID-19 pandemic.
2309.16046
Arno Granier
Arno Granier, Mihai A. Petrovici, Walter Senn and Katharina A. Wilmes
Confidence and second-order errors in cortical circuits
null
null
null
null
q-bio.NC cs.NE
http://creativecommons.org/licenses/by/4.0/
Minimization of cortical prediction errors has been considered a key computational goal of the cerebral cortex underlying perception, action and learning. However, it is still unclear how the cortex should form and use information about uncertainty in this process. Here, we formally derive neural dynamics that minimize prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their confidence (inverse expected uncertainty) in their predictions. In the resulting neuronal dynamics, the integration of bottom-up and top-down cortical streams is dynamically modulated based on confidence in accordance with the Bayesian principle. Moreover, the theory predicts the existence of cortical second-order errors, comparing confidence and actual performance. These errors are propagated through the cortical hierarchy alongside classical prediction errors and are used to learn the weights of synapses responsible for formulating confidence. We propose a detailed mapping of the theory to cortical circuitry, discuss entailed functional interpretations and provide potential directions for experimental work.
[ { "created": "Wed, 27 Sep 2023 21:58:18 GMT", "version": "v1" }, { "created": "Fri, 8 Dec 2023 15:06:49 GMT", "version": "v2" }, { "created": "Tue, 26 Mar 2024 20:58:15 GMT", "version": "v3" } ]
2024-03-28
[ [ "Granier", "Arno", "" ], [ "Petrovici", "Mihai A.", "" ], [ "Senn", "Walter", "" ], [ "Wilmes", "Katharina A.", "" ] ]
Minimization of cortical prediction errors has been considered a key computational goal of the cerebral cortex underlying perception, action and learning. However, it is still unclear how the cortex should form and use information about uncertainty in this process. Here, we formally derive neural dynamics that minimize prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams but also jointly project their confidence (inverse expected uncertainty) in their predictions. In the resulting neuronal dynamics, the integration of bottom-up and top-down cortical streams is dynamically modulated based on confidence in accordance with the Bayesian principle. Moreover, the theory predicts the existence of cortical second-order errors, comparing confidence and actual performance. These errors are propagated through the cortical hierarchy alongside classical prediction errors and are used to learn the weights of synapses responsible for formulating confidence. We propose a detailed mapping of the theory to cortical circuitry, discuss entailed functional interpretations and provide potential directions for experimental work.
1302.6423
Ankush Sharma Dr
Ankush Sharma, Susan Costantini and Giovanni Colonna
The protein-protein interaction network of human Sirtuin family
25 pages, 6 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-protein interaction networks are useful for studying human diseases and for exploring possible health-care strategies through a holistic approach. Networks are playing an increasing and important role in the understanding of physiological processes such as homeostasis, signaling, spatial and temporal organization, and pathological conditions. In this article we describe the complex system of interactions determined by the human Sirtuins (Sirt), which are largely involved in many metabolic processes as well as in different diseases. The Sirtuin family consists of seven homologous Sirts with structurally similar cores but different terminal segments, which are rather variable in length and/or intrinsically disordered. Many studies have determined their cellular location as well as their biological functions, although the molecular mechanisms through which they act are still little known. Therefore, the aim of this work was to define, explore and understand the Sirtuin-related human interactome. As a first step, we have integrated the experimentally determined protein-protein interactions of the Sirtuin family as well as their first and second neighbors into a Sirtuin-related sub-interactome. Our data showed that the second-neighbor network of Sirtuins encompasses 25% of the entire human interactome, exhibits a scale-free degree distribution and interconnectedness among top-degree nodes. Moreover, the Sirtuin sub-interactome showed a modular structure around the core comprising mixed functions. Finally, we extracted from the Sirtuin sub-interactome subnets related to cancer, aging and post-translational modifications to obtain information on key nodes and the topological space of the subnets in the Sirt-family network.
[ { "created": "Tue, 26 Feb 2013 13:03:45 GMT", "version": "v1" }, { "created": "Wed, 27 Feb 2013 03:58:46 GMT", "version": "v2" } ]
2013-02-28
[ [ "Sharma", "Ankush", "" ], [ "Costantini", "Susan", "" ], [ "Colonna", "Giovanni", "" ] ]
Protein-protein interaction networks are useful for studying human diseases and for exploring possible health-care strategies through a holistic approach. Networks are playing an increasing and important role in the understanding of physiological processes such as homeostasis, signaling, spatial and temporal organization, and pathological conditions. In this article we describe the complex system of interactions determined by the human Sirtuins (Sirt), which are largely involved in many metabolic processes as well as in different diseases. The Sirtuin family consists of seven homologous Sirts with structurally similar cores but different terminal segments, which are rather variable in length and/or intrinsically disordered. Many studies have determined their cellular location as well as their biological functions, although the molecular mechanisms through which they act are still little known. Therefore, the aim of this work was to define, explore and understand the Sirtuin-related human interactome. As a first step, we have integrated the experimentally determined protein-protein interactions of the Sirtuin family as well as their first and second neighbors into a Sirtuin-related sub-interactome. Our data showed that the second-neighbor network of Sirtuins encompasses 25% of the entire human interactome, exhibits a scale-free degree distribution and interconnectedness among top-degree nodes. Moreover, the Sirtuin sub-interactome showed a modular structure around the core comprising mixed functions. Finally, we extracted from the Sirtuin sub-interactome subnets related to cancer, aging and post-translational modifications to obtain information on key nodes and the topological space of the subnets in the Sirt-family network.
2001.02844
Yutaka Shikano
Masazumi Fujiwara, Simo Sun, Alexander Dohms, Yushi Nishimura, Ken Suto, Yuka Takezawa, Keisuke Oshimi, Li Zhao, Nikola Sadzak, Yumi Umehara, Yoshio Teki, Naoki Komatsu, Oliver Benson, Yutaka Shikano, Eriko Kage-Nakadai
Real-time nanodiamond thermometry probing in-vivo thermogenic responses
9 + 10 pages, 4 + 11 figures, our submission is jointly with the paper arXiv:2001.02664
Science Advances 6, eaba9636 (2020)
10.1126/sciadv.aba9636
null
q-bio.QM cond-mat.mes-hall physics.bio-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time temperature monitoring inside living organisms provides a direct measure of their biological activities, such as homeostatic thermoregulation and energy metabolism. However, it is challenging to reduce the size of bio-compatible thermometers down to submicrometers despite their potential applications for the thermal imaging of subtissue structures with single-cell resolution. Light-emitting nanothermometers that remotely sense temperature via optical signals exhibit considerable potential in such \textit{in-vivo} high-spatial-resolution thermometry. Here, using quantum nanothermometers based on optically accessible electron spins in nanodiamonds (NDs), we demonstrate \textit{in-vivo} real-time temperature monitoring inside \textit{Caenorhabditis elegans} (\textit{C. elegans}) worms. We developed a thermometry system that can measure the temperatures of movable NDs inside live adult worms with a precision of $\pm 0.22^{\circ}{\rm C}$. Using this system, we determined the increase in temperature based on the thermogenic responses of the worms during chemical stimulation with mitochondrial uncouplers. Our technique demonstrates sub-micrometer localization of real-time temperature information in living animals and direct identification of their pharmacological thermogenesis. The results obtained facilitate the development of a method to probe subcellular temperature variation inside living organisms and may allow for quantification of their biological activities based on their energy expenditures.
[ { "created": "Thu, 9 Jan 2020 05:17:14 GMT", "version": "v1" }, { "created": "Thu, 16 Jan 2020 09:04:14 GMT", "version": "v2" } ]
2020-09-29
[ [ "Fujiwara", "Masazumi", "" ], [ "Sun", "Simo", "" ], [ "Dohms", "Alexander", "" ], [ "Nishimura", "Yushi", "" ], [ "Suto", "Ken", "" ], [ "Takezawa", "Yuka", "" ], [ "Oshimi", "Keisuke", "" ], [ "Zhao", "Li", "" ], [ "Sadzak", "Nikola", "" ], [ "Umehara", "Yumi", "" ], [ "Teki", "Yoshio", "" ], [ "Komatsu", "Naoki", "" ], [ "Benson", "Oliver", "" ], [ "Shikano", "Yutaka", "" ], [ "Kage-Nakadai", "Eriko", "" ] ]
Real-time temperature monitoring inside living organisms provides a direct measure of their biological activities, such as homeostatic thermoregulation and energy metabolism. However, it is challenging to reduce the size of bio-compatible thermometers down to submicrometers despite their potential applications for the thermal imaging of subtissue structures with single-cell resolution. Light-emitting nanothermometers that remotely sense temperature via optical signals exhibit considerable potential in such \textit{in-vivo} high-spatial-resolution thermometry. Here, using quantum nanothermometers based on optically accessible electron spins in nanodiamonds (NDs), we demonstrate \textit{in-vivo} real-time temperature monitoring inside \textit{Caenorhabditis elegans} (\textit{C. elegans}) worms. We developed a thermometry system that can measure the temperatures of movable NDs inside live adult worms with a precision of $\pm 0.22^{\circ}{\rm C}$. Using this system, we determined the increase in temperature based on the thermogenic responses of the worms during chemical stimulation with mitochondrial uncouplers. Our technique demonstrates sub-micrometer localization of real-time temperature information in living animals and direct identification of their pharmacological thermogenesis. The results obtained facilitate the development of a method to probe subcellular temperature variation inside living organisms and may allow for quantification of their biological activities based on their energy expenditures.
1801.10195
Gianrocco Lazzari
Gianrocco Lazzari, Yannis Jaquet, Djilani Kebaili, Laura Symul, Marcel Salath\'e
FoodRepo: An Open Food Repository of Barcoded Food Products
13 pages, 3 figures
Frontiers in Nutrition 5 (2018): 57
10.3389/fnut.2018.00057
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past decade, digital technologies have started to profoundly influence healthcare systems. Digital self-tracking has facilitated more precise epidemiological studies, and in the field of nutritional epidemiology, mobile apps have the potential to alleviate a significant part of the journaling burden by, for example, allowing users to record their food intake via a simple scan of packaged products' barcodes. Such studies thus rely on databases of commercialized products, their barcodes, ingredients, and nutritional values, which are not yet openly available with sufficient geographical and product coverage. In this paper, we present FoodRepo (https://www.foodrepo.org), an open food repository of barcoded food items, whose database is programmatically accessible through an application programming interface (API). Furthermore, an open source license gives the appropriate rights to anyone to share and reuse FoodRepo data, including for commercial purposes. With currently more than 21,000 items available on the Swiss market, our database represents a solid starting point for large-scale studies in the field of digital nutrition, with the aim of leading to a better understanding of the intricate connections between diets and health in general, and metabolic disorders in particular.
[ { "created": "Thu, 25 Jan 2018 16:22:33 GMT", "version": "v1" } ]
2018-07-18
[ [ "Lazzari", "Gianrocco", "" ], [ "Jaquet", "Yannis", "" ], [ "Kebaili", "Djilani", "" ], [ "Symul", "Laura", "" ], [ "Salathé", "Marcel", "" ] ]
In the past decade, digital technologies have started to profoundly influence healthcare systems. Digital self-tracking has facilitated more precise epidemiological studies, and in the field of nutritional epidemiology, mobile apps have the potential to alleviate a significant part of the journaling burden by, for example, allowing users to record their food intake via a simple scan of packaged products' barcodes. Such studies thus rely on databases of commercialized products, their barcodes, ingredients, and nutritional values, which are not yet openly available with sufficient geographical and product coverage. In this paper, we present FoodRepo (https://www.foodrepo.org), an open food repository of barcoded food items, whose database is programmatically accessible through an application programming interface (API). Furthermore, an open source license gives the appropriate rights to anyone to share and reuse FoodRepo data, including for commercial purposes. With currently more than 21,000 items available on the Swiss market, our database represents a solid starting point for large-scale studies in the field of digital nutrition, with the aim of leading to a better understanding of the intricate connections between diets and health in general, and metabolic disorders in particular.
1707.07422
Chen Jia
Chen Jia
Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts
20 pages, 4 figures
Phys. Rev. E 96, 032402 (2017)
10.1103/PhysRevE.96.032402
null
q-bio.MN cond-mat.stat-mech q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multi-scale biochemical reaction kinetics of stochastic gene expression.
[ { "created": "Mon, 24 Jul 2017 07:02:46 GMT", "version": "v1" }, { "created": "Sun, 20 Aug 2017 07:59:53 GMT", "version": "v2" } ]
2017-09-13
[ [ "Jia", "Chen", "" ] ]
Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multi-scale biochemical reaction kinetics of stochastic gene expression.
1806.00272
Aryeh Wides
Aryeh Wides, Ron Milo
Understanding the Dynamics and Optimizing the Performance of Chemostat Selection Experiments
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
A chemostat enables long-term, continuous, exponential-phase growth in an environment limited as prescribed by the researcher. It is thus a potent tool for laboratory evolution - selecting for strains with desired phenotypes. However, despite the apparently simple design governed by a limited set of rules, analysis of chemostat dynamics shows that they display counter-intuitive properties. For example, the concentration of limiting substrate in the chemostat is independent of the concentration in the influx and only dependent on the dilution rate and the strain parameters. Moreover, choosing optimal operational parameters (dilution rate, volume size, etc.) can be challenging. There are conflicting requirements in the experimental design, such as a need for relatively fast growth conditions for mutation accumulation on the one hand versus slow dilution for a large fitness advantage for mutants to take over the population quickly on the other. In this study, we provide analytic and computational tools to help understand and predict chemostat dynamics, and choose suitable operational parameters. We refer to five stages of the process: (A) parameter choice and setup, (B) basic steady state growth, (C) mutation, (D) single takeover and (E) successive takeovers. We present a qualitative and quantitative framework to answer the questions confronted in each of these stages. We provide a set of simulations which support the quantitative results, and a graphical user interface to give a hands-on opportunity to experience and visualize the analytic results. We detail conditions that produce ineffectual selection regimes, and find that when avoided, the selection time is relatively robust, and usually varies by less than an order of magnitude. Finally, we suggest rules of thumb to help ensure that the chosen parameters lead to effective selection and minimize the duration of the selection process.
[ { "created": "Fri, 1 Jun 2018 10:27:56 GMT", "version": "v1" } ]
2018-06-04
[ [ "Wides", "Aryeh", "" ], [ "Milo", "Ron", "" ] ]
A chemostat enables long-term, continuous, exponential-phase growth in an environment limited as prescribed by the researcher. It is thus a potent tool for laboratory evolution - selecting for strains with desired phenotypes. However, despite the apparently simple design governed by a limited set of rules, analysis of chemostat dynamics shows that they display counter-intuitive properties. For example, the concentration of limiting substrate in the chemostat is independent of the concentration in the influx and only dependent on the dilution rate and the strain parameters. Moreover, choosing optimal operational parameters (dilution rate, volume size, etc.) can be challenging. There are conflicting requirements in the experimental design, such as a need for relatively fast growth conditions for mutation accumulation on the one hand versus slow dilution for a large fitness advantage for mutants to take over the population quickly on the other. In this study, we provide analytic and computational tools to help understand and predict chemostat dynamics, and choose suitable operational parameters. We refer to five stages of the process: (A) parameter choice and setup, (B) basic steady state growth, (C) mutation, (D) single takeover and (E) successive takeovers. We present a qualitative and quantitative framework to answer the questions confronted in each of these stages. We provide a set of simulations which support the quantitative results, and a graphical user interface to give a hands-on opportunity to experience and visualize the analytic results. We detail conditions that produce ineffectual selection regimes, and find that when avoided, the selection time is relatively robust, and usually varies by less than an order of magnitude. Finally, we suggest rules of thumb to help ensure that the chosen parameters lead to effective selection and minimize the duration of the selection process.
1104.4823
Joshua Goldwyn
Joshua H. Goldwyn and Eric Shea-Brown
The what and where of adding channel noise to the Hodgkin-Huxley equations
14 pages, 3 figures, review article
null
10.1371/journal.pcbi.1002247
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most celebrated successes in computational biology is the Hodgkin-Huxley framework for modeling electrically active cells. This framework, expressed through a set of differential equations, synthesizes the impact of ionic currents on a cell's voltage -- and the highly nonlinear impact of that voltage back on the currents themselves -- into the rapid push and pull of the action potential. Latter studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the Hodgkin-Huxley equations. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic Hodgkin-Huxley equations. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly Matlab simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
[ { "created": "Mon, 25 Apr 2011 23:39:12 GMT", "version": "v1" } ]
2015-05-28
[ [ "Goldwyn", "Joshua H.", "" ], [ "Shea-Brown", "Eric", "" ] ]
One of the most celebrated successes in computational biology is the Hodgkin-Huxley framework for modeling electrically active cells. This framework, expressed through a set of differential equations, synthesizes the impact of ionic currents on a cell's voltage -- and the highly nonlinear impact of that voltage back on the currents themselves -- into the rapid push and pull of the action potential. Latter studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the Hodgkin-Huxley equations. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic Hodgkin-Huxley equations. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly Matlab simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
1210.4044
Benjamin Good
Benjamin H. Good and Michael M. Desai
Fluctuations in fitness distributions and the effects of weak linked selection on sequence evolution
null
null
10.1016/j.tpb.2013.01.005
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary dynamics and patterns of molecular evolution are strongly influenced by selection on linked regions of the genome, but our quantitative understanding of these effects remains incomplete. Recent work has focused on predicting the distribution of fitness within an evolving population, and this forms the basis for several methods that leverage the fitness distribution to predict the patterns of genetic diversity when selection is strong. However, in weakly selected populations random fluctuations due to genetic drift are more severe, and neither the distribution of fitness nor the sequence diversity within the population are well understood. Here, we briefly review the motivations behind the fitness-distribution picture, and summarize the general approaches that have been used to analyze this distribution in the strong-selection regime. We then extend these approaches to the case of weak selection, by outlining a perturbative treatment of selection at a large number of linked sites. This allows us to quantify the stochastic behavior of the fitness distribution and yields exact analytical predictions for the sequence diversity and substitution rate in the limit that selection is weak.
[ { "created": "Mon, 15 Oct 2012 14:18:42 GMT", "version": "v1" } ]
2013-05-29
[ [ "Good", "Benjamin H.", "" ], [ "Desai", "Michael M.", "" ] ]
Evolutionary dynamics and patterns of molecular evolution are strongly influenced by selection on linked regions of the genome, but our quantitative understanding of these effects remains incomplete. Recent work has focused on predicting the distribution of fitness within an evolving population, and this forms the basis for several methods that leverage the fitness distribution to predict the patterns of genetic diversity when selection is strong. However, in weakly selected populations random fluctuations due to genetic drift are more severe, and neither the distribution of fitness nor the sequence diversity within the population are well understood. Here, we briefly review the motivations behind the fitness-distribution picture, and summarize the general approaches that have been used to analyze this distribution in the strong-selection regime. We then extend these approaches to the case of weak selection, by outlining a perturbative treatment of selection at a large number of linked sites. This allows us to quantify the stochastic behavior of the fitness distribution and yields exact analytical predictions for the sequence diversity and substitution rate in the limit that selection is weak.
0807.1276
Thierry Rabilloud
Thierry Rabilloud (BBSI)
Mitochondrial proteomics: analysis of a whole mitochondrial extract with two-dimensional electrophoresis
null
Methods in molecular biology (Clifton, N.J.) 432 (2008) 83-100
10.1007/978-1-59745-028-7_6
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mitochondria are complex organelles, and their proteomics analysis requires a combination of techniques. The emphasis in this chapter is made first on mitochondria preparation from cultured mammalian cells, then on the separation of the mitochondrial proteins with two-dimensional electrophoresis (2DE), showing some adjustment over the classical techniques to improve resolution of the mitochondrial proteins. This covers both the protein solubilization, the electrophoretic part per se, and the protein detection on the gels, which makes the interface with the protein identification part relying on mass spectrometry.
[ { "created": "Tue, 8 Jul 2008 15:28:08 GMT", "version": "v1" } ]
2008-07-09
[ [ "Rabilloud", "Thierry", "", "BBSI" ] ]
Mitochondria are complex organelles, and their proteomics analysis requires a combination of techniques. The emphasis in this chapter is made first on mitochondria preparation from cultured mammalian cells, then on the separation of the mitochondrial proteins with two-dimensional electrophoresis (2DE), showing some adjustment over the classical techniques to improve resolution of the mitochondrial proteins. This covers both the protein solubilization, the electrophoretic part per se, and the protein detection on the gels, which makes the interface with the protein identification part relying on mass spectrometry.
1602.07957
Nathan Baker
Marilyn R. Gunner and Nathan A. Baker
Continuum Electrostatics Approaches to Calculating p$K_a$s and $E_m$s in Proteins
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions are dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as outline directions for future theoretical and computational research.
[ { "created": "Thu, 25 Feb 2016 15:04:45 GMT", "version": "v1" } ]
2016-02-26
[ [ "Gunner", "Marilyn R.", "" ], [ "Baker", "Nathan A.", "" ] ]
Proteins change their charge state through protonation and redox reactions as well as through binding charged ligands. The free energy of these reactions are dominated by solvation and electrostatic energies and modulated by protein conformational relaxation in response to the ionization state changes. Although computational methods for calculating these interactions can provide very powerful tools for predicting protein charge states, they include several critical approximations of which users should be aware. This chapter discusses the strengths, weaknesses, and approximations of popular computational methods for predicting charge states and understanding their underlying electrostatic interactions. The goal of this chapter is to inform users about applications and potential caveats of these methods as well as outline directions for future theoretical and computational research.
1704.08193
Spiros Denaxas
Vaclav Papez, Spiros Denaxas, Harry Hemingway
Evaluating openEHR for storing computable representations of electronic health record phenotyping algorithms
30th IEEE International Symposium on Computer-Based Medical Systems - IEEE CBMS 2017
null
null
null
q-bio.QM cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electronic Health Records (EHR) are data generated during routine clinical care. EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the pace of precision medicine at scale. A main EHR use-case is creating phenotyping algorithms to define disease status, onset and severity. Currently, no common machine-readable standard exists for defining phenotyping algorithms which often are stored in human-readable formats. As a result, the translation of algorithms to implementation code is challenging and sharing across the scientific community is problematic. In this paper, we evaluate openEHR, a formal EHR data specification, for computable representations of EHR phenotyping algorithms.
[ { "created": "Thu, 20 Apr 2017 20:54:19 GMT", "version": "v1" }, { "created": "Thu, 27 Apr 2017 07:18:10 GMT", "version": "v2" } ]
2017-04-28
[ [ "Papez", "Vaclav", "" ], [ "Denaxas", "Spiros", "" ], [ "Hemingway", "Harry", "" ] ]
Electronic Health Records (EHR) are data generated during routine clinical care. EHR offer researchers unprecedented phenotypic breadth and depth and have the potential to accelerate the pace of precision medicine at scale. A main EHR use-case is creating phenotyping algorithms to define disease status, onset and severity. Currently, no common machine-readable standard exists for defining phenotyping algorithms which often are stored in human-readable formats. As a result, the translation of algorithms to implementation code is challenging and sharing across the scientific community is problematic. In this paper, we evaluate openEHR, a formal EHR data specification, for computable representations of EHR phenotyping algorithms.
2307.15095
Artem Muliukov
Artem Muliukov, Laurent Rodriguez, Benoit Miramond
Cortex Inspired Learning to Recover Damaged Signal Modality with ReD-SOM Model
9 pages, 8 images, unofficial version, currently under review
null
null
null
q-bio.NC cs.AI cs.CV math.OC
http://creativecommons.org/licenses/by/4.0/
Recent progress in the fields of AI and cognitive sciences opens up new challenges that were previously inaccessible to study. One of such modern tasks is recovering lost data of one modality by using the data from another one. A similar effect (called the McGurk Effect) has been found in the functioning of the human brain. Observing this effect, one modality of information interferes with another, changing its perception. In this paper, we propose a way to simulate such an effect and use it to reconstruct lost data modalities by combining Variational Auto-Encoders, Self-Organizing Maps, and Hebb connections in a unified ReD-SOM (Reentering Deep Self-organizing Map) model. We are inspired by human's capability to use different zones of the brain in different modalities, in case of having a lack of information in one of the modalities. This new approach not only improves the analysis of ambiguous data but also restores the intended signal! The results obtained on the multimodal dataset demonstrate an increase of quality of the signal reconstruction. The effect is remarkable both visually and quantitatively, specifically in presence of a significant degree of signal's distortion.
[ { "created": "Thu, 27 Jul 2023 09:44:12 GMT", "version": "v1" } ]
2023-07-31
[ [ "Muliukov", "Artem", "" ], [ "Rodriguez", "Laurent", "" ], [ "Miramond", "Benoit", "" ] ]
Recent progress in the fields of AI and cognitive sciences opens up new challenges that were previously inaccessible to study. One of such modern tasks is recovering lost data of one modality by using the data from another one. A similar effect (called the McGurk Effect) has been found in the functioning of the human brain. Observing this effect, one modality of information interferes with another, changing its perception. In this paper, we propose a way to simulate such an effect and use it to reconstruct lost data modalities by combining Variational Auto-Encoders, Self-Organizing Maps, and Hebb connections in a unified ReD-SOM (Reentering Deep Self-organizing Map) model. We are inspired by human's capability to use different zones of the brain in different modalities, in case of having a lack of information in one of the modalities. This new approach not only improves the analysis of ambiguous data but also restores the intended signal! The results obtained on the multimodal dataset demonstrate an increase of quality of the signal reconstruction. The effect is remarkable both visually and quantitatively, specifically in presence of a significant degree of signal's distortion.
1704.07567
David Nguyen
David H. Nguyen
Quantifying Subtle Regions of Order and Disorder in Tumor Architecture by Calculating the Nearest-Neighbor Angular Profile
null
null
null
null
q-bio.TO
http://creativecommons.org/publicdomain/zero/1.0/
Pathologists routinely classify breast tumors according to recurring patterns of nuclear grades, cytoplasmic coloration, and large-scale morphological formations (i.e. streams of spindle cells, adenoid islands, etc.). The fact that there are large-scale morphological formations suggest that tumor cells still possess the genetic programming to arrange themselves in orderly patterns. However, small regions of order or subtle patterns of order are invisible to the human eye. The ability to detect subtle regions of order and correlate them with clinical outcome and resistance to treatment can enhance diagnostic efficacy. By measuring the acute angle that results when the line extending from the longest length within a nucleus intersects with the corresponding line of an adjacent nucleus, the degree of alignment between two adjacent nuclei can be measured. Through a series of systematic transformations, subtle regions of order and disorder within a tumor image can be quantified and visualized in the form of a heat map. This numerical transformation of spatial relationships between nuclei within tumors allows for the detection of subtly ordered regions.
[ { "created": "Tue, 25 Apr 2017 07:31:54 GMT", "version": "v1" } ]
2017-04-26
[ [ "Nguyen", "David H.", "" ] ]
Pathologists routinely classify breast tumors according to recurring patterns of nuclear grades, cytoplasmic coloration, and large-scale morphological formations (i.e. streams of spindle cells, adenoid islands, etc.). The fact that there are large-scale morphological formations suggest that tumor cells still possess the genetic programming to arrange themselves in orderly patterns. However, small regions of order or subtle patterns of order are invisible to the human eye. The ability to detect subtle regions of order and correlate them with clinical outcome and resistance to treatment can enhance diagnostic efficacy. By measuring the acute angle that results when the line extending from the longest length within a nucleus intersects with the corresponding line of an adjacent nucleus, the degree of alignment between two adjacent nuclei can be measured. Through a series of systematic transformations, subtle regions of order and disorder within a tumor image can be quantified and visualized in the form of a heat map. This numerical transformation of spatial relationships between nuclei within tumors allows for the detection of subtly ordered regions.
1804.11226
Ali Haidar T.
Ali T. Haidar, Abbas Al-Hakim, and Zhiyi Zhang
Sample Size for Concurrent Species Detection in a Species-Rich Assemblage
17 pages, 0 figures
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
Monitoring the distribution of microfossils in stratigraphic successions is an essential tool for biostratigraphic, evolutionary and paleoecologic/paleoceanographic studies. To estimate the relative abundance (%) of a given species, it is necessary to estimate in advance the minimum number of specimens to be used in the count (n). This requires an a priori assumption about a specified level of confidence, and about the species population proportion (p). It is common use to apply the binomial distribution to determine n to detect the presence of more than one species in the same sample, although the multinomial distribution should necessarily be used instead. The mathematical theory of sample size computation using the multinomial distribution is adapted to the computation of n for any number of species to be detected together (K) at any level of confidence. Easy-to-use extensive tables show n, for a combination of K and p. These tables indicate a large difference for n between that indicated by the binomial and those by the multinomial distribution when many species are to be detected simultaneously. Counting only 300 specimens (with 95 % confidence level) or 500 (99 %) is not enough to detect more than one taxon. The reconstructed history of the micro-biosphere may therefore, in many instances, need to be largely revised. This revision should affect our understanding of the ecological and evolutionary relationships between the past changes in the biosphere and the other major reservoirs (hydrosphere, geosphere and atmosphere). In biostratigraphy and biochronology, using a much larger sample size, when more than one marker species is to be detected in the neighborhood of the same biozone boundary, may help clarifying the nature of the apparent inconsistencies given by the observed reversals in the ordinal (rank) biostratigraphic data shown as intersections of the correlation lines
[ { "created": "Mon, 30 Apr 2018 14:24:32 GMT", "version": "v1" }, { "created": "Fri, 11 May 2018 03:03:08 GMT", "version": "v2" } ]
2018-05-14
[ [ "Haidar", "Ali T.", "" ], [ "Al-Hakim", "Abbas", "" ], [ "Zhang", "Zhiyi", "" ] ]
Monitoring the distribution of microfossils in stratigraphic successions is an essential tool for biostratigraphic, evolutionary and paleoecologic/paleoceanographic studies. To estimate the relative abundance (%) of a given species, it is necessary to estimate in advance the minimum number of specimens to be used in the count (n). This requires an a priori assumption about a specified level of confidence, and about the species population proportion (p). It is common use to apply the binomial distribution to determine n to detect the presence of more than one species in the same sample, although the multinomial distribution should necessarily be used instead. The mathematical theory of sample size computation using the multinomial distribution is adapted to the computation of n for any number of species to be detected together (K) at any level of confidence. Easy-to-use extensive tables show n, for a combination of K and p. These tables indicate a large difference for n between that indicated by the binomial and those by the multinomial distribution when many species are to be detected simultaneously. Counting only 300 specimens (with 95 % confidence level) or 500 (99 %) is not enough to detect more than one taxon. The reconstructed history of the micro-biosphere may therefore, in many instances, need to be largely revised. This revision should affect our understanding of the ecological and evolutionary relationships between the past changes in the biosphere and the other major reservoirs (hydrosphere, geosphere and atmosphere). In biostratigraphy and biochronology, using a much larger sample size, when more than one marker species is to be detected in the neighborhood of the same biozone boundary, may help clarifying the nature of the apparent inconsistencies given by the observed reversals in the ordinal (rank) biostratigraphic data shown as intersections of the correlation lines
2312.00910
Anna Paola Muntoni
A.P. Muntoni, F. Mazza, A. Braunstein, G. Catania, and L. Dall'Asta
Effectiveness of probabilistic contact tracing in epidemic containment: the role of super-spreaders and transmission paths reconstruction
null
null
null
null
q-bio.PE cond-mat.stat-mech cs.AI cs.LG physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
The recent COVID-19 pandemic underscores the significance of early-stage non-pharmacological intervention strategies. The widespread use of masks and the systematic implementation of contact tracing strategies provide a potentially equally effective and socially less impactful alternative to more conventional approaches, such as large-scale mobility restrictions. However, manual contact tracing faces strong limitations in accessing the network of contacts, and the scalability of currently implemented protocols for smartphone-based digital contact tracing becomes impractical during the rapid expansion phases of the outbreaks, due to the surge in exposure notifications and associated tests. A substantial improvement in digital contact tracing can be obtained through the integration of probabilistic techniques for risk assessment that can more effectively guide the allocation of new diagnostic tests. In this study, we first quantitatively analyze the diagnostic and social costs associated with these containment measures based on contact tracing, employing three state-of-the-art models of SARS-CoV-2 spreading. Our results suggest that probabilistic techniques allow for more effective mitigation at a lower cost. Secondly, our findings reveal a remarkable efficacy of probabilistic contact-tracing techniques in capturing backward propagations and super-spreading events, relevant features of the diffusion of many pathogens, including SARS-CoV-2.
[ { "created": "Fri, 1 Dec 2023 20:19:12 GMT", "version": "v1" } ]
2023-12-05
[ [ "Muntoni", "A. P.", "" ], [ "Mazza", "F.", "" ], [ "Braunstein", "A.", "" ], [ "Catania", "G.", "" ], [ "Dall'Asta", "L.", "" ] ]
The recent COVID-19 pandemic underscores the significance of early-stage non-pharmacological intervention strategies. The widespread use of masks and the systematic implementation of contact tracing strategies provide a potentially equally effective and socially less impactful alternative to more conventional approaches, such as large-scale mobility restrictions. However, manual contact tracing faces strong limitations in accessing the network of contacts, and the scalability of currently implemented protocols for smartphone-based digital contact tracing becomes impractical during the rapid expansion phases of the outbreaks, due to the surge in exposure notifications and associated tests. A substantial improvement in digital contact tracing can be obtained through the integration of probabilistic techniques for risk assessment that can more effectively guide the allocation of new diagnostic tests. In this study, we first quantitatively analyze the diagnostic and social costs associated with these containment measures based on contact tracing, employing three state-of-the-art models of SARS-CoV-2 spreading. Our results suggest that probabilistic techniques allow for more effective mitigation at a lower cost. Secondly, our findings reveal a remarkable efficacy of probabilistic contact-tracing techniques in capturing backward propagations and super-spreading events, relevant features of the diffusion of many pathogens, including SARS-CoV-2.
1501.05836
Luiz Baccal\'a
Luiz A. Baccal\'a, Daniel Y. Takahashi, Koichi Sameshima
Consolidating a Link Centered Neural Connectivity Framework with Directed Transfer Function Asymptotics
12 figures
null
null
null
q-bio.NC math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a unified mathematical derivation of the asymptotic behaviour of three of the main forms of \textit{directed transfer function} (DTF) complementing recent partial directed coherence (PDC) results \cite{Baccala2013}. Based on these results and numerical examples we argue for a new directed `link' centered neural connectivity framework to replace the widespread correlation based effective/functional network concepts so that directed network influences between structures become classified as to whether links are \textit{active} in a \textit{direct} or in an \textit{indirect} way thereby leading to the new notions of \textit{Granger connectivity} and \textit{Granger influenciability} which are more descriptive than speaking of Granger causality alone.
[ { "created": "Fri, 23 Jan 2015 15:39:55 GMT", "version": "v1" } ]
2015-01-26
[ [ "Baccalá", "Luiz A.", "" ], [ "Takahashi", "Daniel Y.", "" ], [ "Sameshima", "Koichi", "" ] ]
We present a unified mathematical derivation of the asymptotic behaviour of three of the main forms of \textit{directed transfer function} (DTF) complementing recent partial directed coherence (PDC) results \cite{Baccala2013}. Based on these results and numerical examples we argue for a new directed `link' centered neural connectivity framework to replace the widespread correlation based effective/functional network concepts so that directed network influences between structures become classified as to whether links are \textit{active} in a \textit{direct} or in an \textit{indirect} way thereby leading to the new notions of \textit{Granger connectivity} and \textit{Granger influenciability} which are more descriptive than speaking of Granger causality alone.
1312.6639
Iosif Lazaridis
Iosif Lazaridis, Nick Patterson, Alissa Mittnik, Gabriel Renaud, Swapan Mallick, Karola Kirsanow, Peter H. Sudmant, Joshua G. Schraiber, Sergi Castellano, Mark Lipson, Bonnie Berger, Christos Economou, Ruth Bollongino, Qiaomei Fu, Kirsten I. Bos, Susanne Nordenfelt, Heng Li, Cesare de Filippo, Kay Pr\"ufer, Susanna Sawyer, Cosimo Posth, Wolfgang Haak, Fredrik Hallgren, Elin Fornander, Nadin Rohland, Dominique Delsate, Michael Francken, Jean-Michel Guinet, Joachim Wahl, George Ayodo, Hamza A. Babiker, Graciela Bailliet, Elena Balanovska, Oleg Balanovsky, Ramiro Barrantes, Gabriel Bedoya, Haim Ben-Ami, Judit Bene, Fouad Berrada, Claudio M. Bravi, Francesca Brisighelli, George Busby, Francesco Cali, Mikhail Churnosov, David E. C. Cole, Daniel Corach, Larissa Damba, George van Driem, Stanislav Dryomov, Jean-Michel Dugoujon, Sardana A. Fedorova, Irene Gallego Romero, Marina Gubina, Michael Hammer, Brenna Henn, Tor Hervig, Ugur Hodoglugil, Aashish R. Jha, Sena Karachanak-Yankova, Rita Khusainova, Elza Khusnutdinova, Rick Kittles, Toomas Kivisild, William Klitz, Vaidutis Ku\v{c}inskas, Alena Kushniarevich, Leila Laredj, Sergey Litvinov, Theologos Loukidis, Robert W. Mahley, B\'ela Melegh, Ene Metspalu, Julio Molina, Joanna Mountain, Klemetti N\"akk\"al\"aj\"arvi, Desislava Nesheva, Thomas Nyambo, Ludmila Osipova, J\"uri Parik, Fedor Platonov, Olga Posukh, Valentino Romano, Francisco Rothhammer, Igor Rudan, Ruslan Ruizbakiev, Hovhannes Sahakyan, Antti Sajantila, Antonio Salas, Elena B. Starikovskaya, Ayele Tarekegn, Draga Toncheva, Shahlo Turdikulova, Ingrida Uktveryte, Olga Utevska, Ren\'e Vasquez, Mercedes Villena, Mikhail Voevoda, Cheryl Winkler, Levon Yepiskoposyan, Pierre Zalloua, Tatijana Zemunik, Alan Cooper, Cristian Capelli, Mark G. Thomas, Andres Ruiz-Linares, Sarah A. Tishkoff, Lalji Singh, Kumarasamy Thangaraj, Richard Villems, David Comas, Rem Sukernik, Mait Metspalu, Matthias Meyer, Evan E. Eichler, Joachim Burger, Montgomery Slatkin, Svante P\"a\"abo, Janet Kelso, David Reich, Johannes Krause
Ancient human genomes suggest three ancestral populations for present-day Europeans
null
null
10.1038/nature13673
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We sequenced genomes from a $\sim$7,000 year old early farmer from Stuttgart in Germany, an $\sim$8,000 year old hunter-gatherer from Luxembourg, and seven $\sim$8,000 year old hunter-gatherers from southern Sweden. We analyzed these data together with other ancient genomes and 2,345 contemporary humans to show that the great majority of present-day Europeans derive from at least three highly differentiated populations: West European Hunter-Gatherers (WHG), who contributed ancestry to all Europeans but not to Near Easterners; Ancient North Eurasians (ANE), who were most closely related to Upper Paleolithic Siberians and contributed to both Europeans and Near Easterners; and Early European Farmers (EEF), who were mainly of Near Eastern origin but also harbored WHG-related ancestry. We model these populations' deep relationships and show that EEF had $\sim$44% ancestry from a "Basal Eurasian" lineage that split prior to the diversification of all other non-African lineages.
[ { "created": "Mon, 23 Dec 2013 18:47:50 GMT", "version": "v1" }, { "created": "Wed, 2 Apr 2014 02:53:33 GMT", "version": "v2" } ]
2015-06-18
[ [ "Lazaridis", "Iosif", "" ], [ "Patterson", "Nick", "" ], [ "Mittnik", "Alissa", "" ], [ "Renaud", "Gabriel", "" ], [ "Mallick", "Swapan", "" ], [ "Kirsanow", "Karola", "" ], [ "Sudmant", "Peter H.", "" ], [ "Schraiber", "Joshua G.", "" ], [ "Castellano", "Sergi", "" ], [ "Lipson", "Mark", "" ], [ "Berger", "Bonnie", "" ], [ "Economou", "Christos", "" ], [ "Bollongino", "Ruth", "" ], [ "Fu", "Qiaomei", "" ], [ "Bos", "Kirsten I.", "" ], [ "Nordenfelt", "Susanne", "" ], [ "Li", "Heng", "" ], [ "de Filippo", "Cesare", "" ], [ "Prüfer", "Kay", "" ], [ "Sawyer", "Susanna", "" ], [ "Posth", "Cosimo", "" ], [ "Haak", "Wolfgang", "" ], [ "Hallgren", "Fredrik", "" ], [ "Fornander", "Elin", "" ], [ "Rohland", "Nadin", "" ], [ "Delsate", "Dominique", "" ], [ "Francken", "Michael", "" ], [ "Guinet", "Jean-Michel", "" ], [ "Wahl", "Joachim", "" ], [ "Ayodo", "George", "" ], [ "Babiker", "Hamza A.", "" ], [ "Bailliet", "Graciela", "" ], [ "Balanovska", "Elena", "" ], [ "Balanovsky", "Oleg", "" ], [ "Barrantes", "Ramiro", "" ], [ "Bedoya", "Gabriel", "" ], [ "Ben-Ami", "Haim", "" ], [ "Bene", "Judit", "" ], [ "Berrada", "Fouad", "" ], [ "Bravi", "Claudio M.", "" ], [ "Brisighelli", "Francesca", "" ], [ "Busby", "George", "" ], [ "Cali", "Francesco", "" ], [ "Churnosov", "Mikhail", "" ], [ "Cole", "David E. C.", "" ],
[ "Corach", "Daniel", "" ], [ "Damba", "Larissa", "" ], [ "van Driem", "George", "" ], [ "Dryomov", "Stanislav", "" ], [ "Dugoujon", "Jean-Michel", "" ], [ "Fedorova", "Sardana A.", "" ], [ "Romero", "Irene Gallego", "" ], [ "Gubina", "Marina", "" ], [ "Hammer", "Michael", "" ], [ "Henn", "Brenna", "" ], [ "Hervig", "Tor", "" ], [ "Hodoglugil", "Ugur", "" ], [ "Jha", "Aashish R.", "" ], [ "Karachanak-Yankova", "Sena", "" ], [ "Khusainova", "Rita", "" ], [ "Khusnutdinova", "Elza", "" ], [ "Kittles", "Rick", "" ], [ "Kivisild", "Toomas", "" ], [ "Klitz", "William", "" ], [ "Kučinskas", "Vaidutis", "" ], [ "Kushniarevich", "Alena", "" ], [ "Laredj", "Leila", "" ], [ "Litvinov", "Sergey", "" ], [ "Loukidis", "Theologos", "" ], [ "Mahley", "Robert W.", "" ], [ "Melegh", "Béla", "" ], [ "Metspalu", "Ene", "" ], [ "Molina", "Julio", "" ], [ "Mountain", "Joanna", "" ], [ "Näkkäläjärvi", "Klemetti", "" ], [ "Nesheva", "Desislava", "" ], [ "Nyambo", "Thomas", "" ], [ "Osipova", "Ludmila", "" ], [ "Parik", "Jüri", "" ], [ "Platonov", "Fedor", "" ], [ "Posukh", "Olga", "" ], [ "Romano", "Valentino", "" ], [ "Rothhammer", "Francisco", "" ], [ "Rudan", "Igor", "" ], [ "Ruizbakiev", "Ruslan", "" ], [ "Sahakyan", "Hovhannes", "" ], [ "Sajantila", "Antti", "" ], [ "Salas", "Antonio", "" ], [ "Starikovskaya", "Elena B.", "" ], [ "Tarekegn", "Ayele", "" ], [ "Toncheva", "Draga", "" ], [ "Turdikulova", "Shahlo", "" ], [ "Uktveryte", "Ingrida", "" ], [ "Utevska", "Olga", "" ], [ "Vasquez", "René", "" ], [ "Villena", "Mercedes", "" ], [ "Voevoda", "Mikhail", "" ], [ "Winkler", "Cheryl", "" ], [ "Yepiskoposyan", "Levon", "" ], [ "Zalloua", "Pierre", "" ], [ "Zemunik", "Tatijana", "" ], [ "Cooper", "Alan", "" ], [ "Capelli", "Cristian", "" ], [ "Thomas", "Mark G.", "" ], [ "Ruiz-Linares", "Andres", "" ], [ "Tishkoff", "Sarah A.", "" ], [ "Singh", "Lalji", "" ], [ "Thangaraj", "Kumarasamy", "" ], [ "Villems", "Richard", "" ], [ "Comas", "David", "" ], [ "Sukernik", "Rem", "" ],
[ "Metspalu", "Mait", "" ], [ "Meyer", "Matthias", "" ], [ "Eichler", "Evan E.", "" ], [ "Burger", "Joachim", "" ], [ "Slatkin", "Montgomery", "" ], [ "Pääbo", "Svante", "" ], [ "Kelso", "Janet", "" ], [ "Reich", "David", "" ], [ "Krause", "Johannes", "" ] ]
1611.09565
Nadav M. Shnerb
Matan Danino, David A. Kessler and Nadav M. Shnerb
Stability of two-species communities: drift, environmental stochasticity, storage effect and selection
null
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of two competing species in a finite-size community is one of the most studied problems in population genetics and community ecology. Stochastic fluctuations lead, inevitably, to the extinction of one of the species, but the relevant timescale depends on the underlying dynamics. The persistence time of the community has been calculated for neutral models, where the only drive of the system is drift (demographic stochasticity), and for models with strong selection. Following recent analyses that stress the importance of environmental stochasticity in empirical systems, we present here a general theory of the persistence time of a two-species community where drift, environmental variations, and a time-independent selective advantage are all taken into account.
[ { "created": "Tue, 29 Nov 2016 10:59:06 GMT", "version": "v1" } ]
2016-11-30
[ [ "Danino", "Matan", "" ], [ "Kessler", "David A.", "" ], [ "Shnerb", "Nadav M.", "" ] ]
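The two-species dynamics summarized in the abstract above can be illustrated with a minimal individual-based simulation. The Moran-like update, the uniform environmental noise, and all parameter names below are illustrative assumptions, not the model analyzed in the paper:

```python
import random

def persistence_time(N=100, s=0.02, sigma=0.3, seed=0):
    """Simulate a two-species community of fixed size N until one species
    goes extinct.  Returns (steps, final abundance of species A).

    s     -- time-independent selective advantage of species A
    sigma -- amplitude of the environmental fitness fluctuations
    Each step changes the abundance n of A by at most +/-1 (Moran-like drift).
    """
    rng = random.Random(seed)
    n, t = N // 2, 0
    while 0 < n < N:
        f = s + sigma * rng.uniform(-1.0, 1.0)   # fluctuating advantage of A
        x = n / N
        p = x * (1.0 + f) / (x * (1.0 + f) + (1.0 - x))  # birth is of type A
        up, down = (1.0 - x) * p, x * (1.0 - p)  # A replaces a B, or vice versa
        r = rng.random()
        if r < up:
            n += 1
        elif r < up + down:
            n -= 1
        t += 1
    return t, n
```

Averaging `t` over many seeds estimates the persistence time; scanning `s` and `sigma` probes the crossover between drift-dominated and selection-dominated extinction.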
1407.1024
Joanna Masel
Joanna Masel
Eco-evolutionary "fitness" in 3 dimensions: absolute growth, absolute efficiency, and relative competitiveness
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Competitions can occur on an absolute scale, to be faster or more efficient, or they can occur on a relative scale, to "beat" one's competitor in a zero-sum game. Ecological models have focused on absolute competitions, in which optima exist. Classic evolutionary models such as the Wright-Fisher model, as well as more recent models of travelling waves, have focused on purely relative competitions, in which fitness continues to increase indefinitely, without actually progressing anywhere. This manuscript proposes a new way to describe both at the same time. It begins with a revised version of r/K-selection theory. r continues to describe maximum reproductive speed, but the new version of K, with a different subscript, now describes parsimoniousness in territory use, a group-selected, anti-tragedy-of-the-commons trait. A third dimension c of fitness is then added to this novel system, one which is unitless and normalized, and hence capable of capturing the population genetics concept w of a strictly relative, genetically-limited competitive race. MacArthur's original version of r/K-selection theory is shown to confound parsimoniousness K with competitive ability c, despite the fact that available data suggests a negative correlation between the two; here they are disentangled. A rotation of the resulting three-dimensional system provides a population genetic underpinning for Grime's universal adaptive strategy theory of ruderals (selected for high r), stress tolerators (selected for a combination of high r and high K), and competitors (selected for a combination of high r and high c).
[ { "created": "Thu, 3 Jul 2014 19:29:52 GMT", "version": "v1" }, { "created": "Tue, 18 Nov 2014 01:29:29 GMT", "version": "v2" }, { "created": "Tue, 17 Feb 2015 18:51:12 GMT", "version": "v3" }, { "created": "Thu, 1 Oct 2015 01:07:54 GMT", "version": "v4" } ]
2015-10-02
[ [ "Masel", "Joanna", "" ] ]
1301.1593
Yandong Huang
Yandong Huang and Jianwei Shuai
Polarization effect of zinc on the region 1-16 of amyloid-beta peptide: a molecular dynamics study
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Zinc is found at saturating levels in the deposited amyloid-beta (AB) peptide plaques in the brains of patients with Alzheimer's disease (AD). Zinc binding to AB promotes aggregation, including the formation of toxic soluble AB species. To date, only the region 1-16 of AB complexed with zinc (AB16-Zn) has been structurally characterized experimentally, so an efficient theoretical method is required to describe the interaction between zinc and the AB peptide. In order to explore the effect of induced polarization on the global conformational fluctuations and the experimentally observed coordination mode of AB16-Zn, in this work we perform all-atom molecular dynamics (MD) simulations of AB16-Zn solvated in implicit water. In our model, a polarization effect that acts on the whole peptide is applied. The induced dipoles are divided into three distinct scales according to their distances from zinc. In addition, the atomistic polarizability of the coordinating sidechains is rescaled to describe the electron-redistribution effect. As a comparison, another model that exactly follows the method of Sakharov and Lim (J. Am. Chem. Soc., 127, 13, 2005) is also discussed. We show that, combined with proper van der Waals (vdW) parameters, our model not only obtains a reasonable coordination configuration of the zinc binding site, but also retains the global stability, especially of the N-terminal region, of AB16-Zn. We suggest that it is the induced polarization effect that promotes reasonable solvent exposure of hydrophobic/hydrophilic residues in zinc-induced AB aggregation.
[ { "created": "Tue, 8 Jan 2013 17:03:05 GMT", "version": "v1" } ]
2013-01-09
[ [ "Huang", "Yandong", "" ], [ "Shuai", "Jianwei", "" ] ]
2102.11750
Kristoffer Rypdal
Kristoffer Rypdal
The tipping effect of delayed interventions on the evolution of COVID-19 incidence
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
We combine infectious disease transmission and the non-pharmaceutical intervention (NPI) response to disease incidence into one closed model consisting of two coupled delay differential equations for the incidence rate and the time-dependent reproduction number. The model contains three free parameters, the initial reproduction number, the intervention strength, and the response delay relative to the time of infection. The NPI response is modeled by assuming that the rate of change of the reproduction number is proportional to the negative deviation of the incidence rate from an intervention threshold. This delay dynamical system exhibits damped oscillations in one part of the parameter space, and growing oscillations in another, and these are separated by a surface where the solution is a strictly periodic nonlinear oscillation. For parameters relevant for the COVID-19 pandemic, the tipping transition from damped to growing oscillations occurs for response delays of the order of one week, and suggests that effective control and mitigation of successive epidemic waves cannot be achieved unless NPIs are implemented in a precautionary manner, rather than merely as a response to the present incidence rate.
[ { "created": "Tue, 23 Feb 2021 15:28:55 GMT", "version": "v1" } ]
2021-02-24
[ [ "Rypdal", "Kristoffer", "" ] ]
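The closed two-equation delay model described in the abstract above can be sketched with a forward-Euler integration. The specific functional forms, parameter values, and the names `Jstar` and `gamma` below are illustrative assumptions, not the paper's exact equations:

```python
def simulate(R0=1.5, k=0.1, tau=2.0, Jstar=1.0, gamma=0.1, dt=0.05, T=150.0):
    """Euler-integrate an illustrative closed incidence/NPI delay model:

        dJ/dt = gamma * (R(t) - 1) * J(t)        # incidence growth
        dR/dt = -k * (J(t - tau) - Jstar)        # delayed NPI response

    R0 is the initial reproduction number, k the intervention strength,
    tau the response delay, and Jstar the intervention threshold.
    Returns the incidence time series J(t)."""
    steps = int(T / dt)
    lag = int(round(tau / dt))
    J = [0.5] * (lag + 1)      # constant pre-history below the threshold
    R = R0
    series = []
    for _ in range(steps):
        J_delayed = J[len(J) - 1 - lag]          # J(t - tau)
        J_next = J[-1] * (1.0 + dt * gamma * (R - 1.0))
        R += dt * (-k) * (J_delayed - Jstar)     # R falls when delayed incidence is high
        J.append(J_next)
        series.append(J_next)
    return series
```

Varying `tau` while holding the other parameters fixed changes the growth rate of the oscillations, loosely mimicking the delay dependence the abstract describes.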
2209.06158
Yingce Xia
Kehan Wu, Yingce Xia, Yang Fan, Pan Deng, Haiguang Liu, Lijun Wu, Shufang Xie, Tong Wang, Tao Qin and Tie-Yan Liu
Tailoring Molecules for Protein Pockets: a Transformer-based Generative Solution for Structure-based Drug Design
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Structure-based drug design is attracting growing attention in computer-aided drug discovery. Compared with the virtual screening approach, in which a pre-defined library of compounds is computationally screened, de novo drug design based on the structure of a target protein can provide novel drug candidates. In this paper, we present a generative solution named TamGent (Target-aware molecule generator with Transformer) that can directly generate candidate drugs from scratch for a given target, overcoming the limits imposed by existing compound libraries. Following the Transformer framework (a state-of-the-art framework in deep learning), we design a variant of the Transformer encoder to process 3D geometric information of targets and pre-train the Transformer decoder on 10 million compounds from PubChem for candidate drug generation. Systematic evaluation of candidate compounds generated for targets from DrugBank shows that both binding affinity and druggability are largely improved. TamGent outperforms previous baselines in terms of both effectiveness and efficiency. The method is further verified by generating candidate compounds for the SARS-CoV-2 main protease and the oncogenic mutant KRAS G12C. The results show that our method not only re-discovers previously verified drug molecules, but also generates novel molecules with better docking scores, expanding the compound pool and potentially leading to the discovery of novel drugs.
[ { "created": "Tue, 30 Aug 2022 09:32:39 GMT", "version": "v1" } ]
2022-09-14
[ [ "Wu", "Kehan", "" ], [ "Xia", "Yingce", "" ], [ "Fan", "Yang", "" ], [ "Deng", "Pan", "" ], [ "Liu", "Haiguang", "" ], [ "Wu", "Lijun", "" ], [ "Xie", "Shufang", "" ], [ "Wang", "Tong", "" ], [ "Qin", "Tao", "" ], [ "Liu", "Tie-Yan", "" ] ]
2002.10936
Matthew Leming
Matthew Leming, John Suckling
Stochastic encoding of graphs in deep learning allows for complex analysis of gender classification in resting-state and task functional brain networks from the UK Biobank
null
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classification of whole-brain functional connectivity MRI data with convolutional neural networks (CNNs) has shown promise, but the complexity of these models impedes understanding of which aspects of brain activity contribute to classification. While visualization techniques have been developed to interpret CNNs, bias inherent in the method of encoding abstract input data, as well as the natural variance of deep learning models, detract from the accuracy of these techniques. We introduce a stochastic encoding method in an ensemble of CNNs to classify functional connectomes by gender. We applied our method to resting-state and task data from the UK Biobank, using two visualization techniques to measure the salience of three brain networks involved in task and resting states, and their interaction. To regress out confounding factors such as head motion, age, and intracranial volume, we introduced a multivariate balancing algorithm to ensure equal distributions of such covariates between classes in our data. We achieved a final AUROC of 0.8459. We found that resting-state data classifies more accurately than task data, with the inner salience network playing the most important role of the three networks overall in the classification of resting-state data, and connections to the central executive network in task data.
[ { "created": "Tue, 25 Feb 2020 15:10:51 GMT", "version": "v1" }, { "created": "Wed, 27 May 2020 16:20:28 GMT", "version": "v2" } ]
2020-05-28
[ [ "Leming", "Matthew", "" ], [ "Suckling", "John", "" ] ]
0807.2008
Jason W. Locasale
Jason W. Locasale, Arup K. Chakraborty
Regulation of signal duration and the statistical dynamics of kinase activation by scaffold proteins
12 pages, 6 figures
PLoS Comput Biol. 2008 Jun 27;4(6):e1000099
10.1371/journal.pcbi.1000099
null
q-bio.SC q-bio.MN
http://creativecommons.org/licenses/by/3.0/
Scaffolding proteins that direct the assembly of multiple kinases into a spatially localized signaling complex are often essential for the maintenance of an appropriate biological response. Although scaffolds are widely believed to have dramatic effects on the dynamics of signal propagation, the mechanisms that underlie these consequences are not well understood. Here, Monte Carlo simulations of a model kinase cascade are used to investigate how the temporal characteristics of signaling cascades can be influenced by the presence of scaffold proteins. Specifically, we examine the effects of spatially localizing kinase components on a scaffold on signaling dynamics. The simulations indicate that a major effect that scaffolds exert on the dynamics of cell signaling is to control how the activation of protein kinases is distributed over time. Scaffolds can influence the timing of kinase activation by allowing for kinases to become activated over a broad range of times, thus allowing for signaling at both early and late times. Scaffold concentrations that result in optimal signal amplitude also result in the broadest distributions of times over which kinases are activated. These calculations provide insights into one mechanism that describes how the duration of a signal can potentially be regulated in a scaffold mediated protein kinase cascade. Our results illustrate another complexity in the broad array of control properties that emerge from the physical effects of spatially localizing components of kinase cascades on scaffold proteins.
[ { "created": "Sun, 13 Jul 2008 01:58:02 GMT", "version": "v1" }, { "created": "Wed, 29 Oct 2008 03:06:08 GMT", "version": "v2" } ]
2015-05-13
[ [ "Locasale", "Jason W.", "" ], [ "Chakraborty", "Arup K.", "" ] ]
q-bio/0404008
Matthew Berryman
Matthew J. Berryman, Andrew Allison and Derek Abbott
Stochastic evolution and multifractal classification of prokaryotes
9 pages, 3 figures
Proc. SPIE 5110, Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems, Ed. Sergey M. Bezrukov, Hans Frauenfelder and Frank Moss, Santa Fe, USA, June 2003, pp192-200
null
null
q-bio.PE
null
We introduce a model for simulating the mutation of prokaryote DNA sequences. Using that model, we then evaluate traditional techniques, such as parsimony and maximum likelihood methods, for computing phylogenetic relationships. We also use the model to mimic large-scale genomic changes, and use this to evaluate multifractal and related information-theoretic techniques that take these large changes into account in determining phylogenetic relationships.
[ { "created": "Wed, 7 Apr 2004 04:58:20 GMT", "version": "v1" } ]
2007-05-23
[ [ "Berryman", "Matthew J.", "" ], [ "Allison", "Andrew", "" ], [ "Abbott", "Derek", "" ] ]
2312.09317
Johannes Textor
Shabaz Sultan, Sapna Devi, Scott N. Mueller, Johannes Textor
A parallelized cellular Potts model that enables simulations at tissue scale
29 pages, 11 figures, 3 tables
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The Cellular Potts Model (CPM) is a widely used simulation paradigm for systems of interacting cells that has been used to study scenarios ranging from plant development to morphogenesis, tumour growth and cell migration. Despite their wide use, CPM simulations are considered too computationally intensive for three-dimensional (3D) models at organ scale. CPMs have been difficult to parallelise because of their inherently sequential update scheme. Here, we present a Graphics Processing Unit (GPU)-based parallelisation scheme that preserves local update statistics and is up to 3-4 orders of magnitude faster than serial implementations. We show several examples where our scheme preserves simulation behaviors that are drastically altered by existing parallelisation methods. We use our framework to construct tissue-scale models of liver and lymph node environments containing millions of cells that are directly based on microscopy-imaged tissue structures. Thus, our GPU-based CPM framework enables in silico studies of multicellular systems of unprecedented scale.
[ { "created": "Thu, 14 Dec 2023 20:01:51 GMT", "version": "v1" } ]
2023-12-18
[ [ "Sultan", "Shabaz", "" ], [ "Devi", "Sapna", "" ], [ "Mueller", "Scott N.", "" ], [ "Textor", "Johannes", "" ] ]
2103.07850
Seung Ki Baek
Sunhee Chae, Nahyeon Lee, Seung Ki Baek, and Hyeong-Chai Jeong
Assortative clustering in a one-dimensional population with replication strategies
7 pages, 6 figures
Physical Review E 103, 032114 (2021)
10.1103/PhysRevE.103.032114
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a geographically distributed population, assortative clustering plays an important role in evolution by modifying local environments. To examine its effects in a linear habitat, we consider a one-dimensional grid of cells, where each cell is either empty or occupied by an organism whose replication strategy is genetically inherited to offspring. The strategy determines whether to have offspring in surrounding cells, as a function of the neighborhood configuration. If more than one offspring compete for a cell, then they can be all exterminated due to the cost of conflict depending on environmental conditions. We find that the system is more densely populated in an unfavorable environment than in a favorable one because only the latter has to pay the cost of conflict. This observation agrees reasonably well with a mean-field analysis which takes assortative clustering of strategies into consideration. Our finding suggests a possibility of intrinsic nonlinearity between environmental conditions and population density when an evolutionary process is involved.
[ { "created": "Sun, 14 Mar 2021 06:01:58 GMT", "version": "v1" } ]
2021-03-16
[ [ "Chae", "Sunhee", "" ], [ "Lee", "Nahyeon", "" ], [ "Baek", "Seung Ki", "" ], [ "Jeong", "Hyeong-Chai", "" ] ]
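A toy version of the one-dimensional lattice dynamics described in the abstract above can be written down directly. The strategy encoding (a lookup table on the parent's left/right occupancy) and the conflict rule below are illustrative guesses at the paper's setup, not its actual specification:

```python
import random

def step(grid, strategies, cost, rng):
    """One synchronous update of a ring of cells.

    grid[i]         -- None (empty) or an integer strategy id
    strategies[sid] -- maps the parent's (left occupied, right occupied)
                       pattern to True/False: try to place offspring?
    cost            -- probability that a contested cell kills all offspring
    """
    N = len(grid)
    claims = {}  # target empty cell -> strategy ids of competing offspring
    for i, sid in enumerate(grid):
        if sid is None:
            continue
        pattern = (grid[i - 1] is not None, grid[(i + 1) % N] is not None)
        if strategies[sid][pattern]:
            for j in (i - 1, (i + 1) % N):
                if grid[j] is None:
                    claims.setdefault(j % N, []).append(sid)
    for j, parents in claims.items():
        if len(parents) > 1 and rng.random() < cost:
            continue  # conflict cost: every competing offspring dies
        grid[j] = rng.choice(parents)  # offspring inherits the parent's strategy
    return grid

# demo: with zero conflict cost, a single founder using an
# "always reproduce" strategy fills the whole ring
grid = [0] + [None] * 19
always = {0: {(a, b): True for a in (False, True) for b in (False, True)}}
rng = random.Random(1)
for _ in range(15):
    step(grid, always, cost=0.0, rng=rng)
```

Raising `cost` above zero lets contested cells stay empty, which is the mechanism by which an unfavorable (high-conflict) environment can end up supporting a different steady-state density.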
1911.10313
Zexuan Sun
Zexuan Sun, Shujun Huang, Peiran Jiang and Pingzhao Hu
DTF: Deep Tensor Factorization for Predicting Anticancer Drug Synergy
Final draft in Bioinformatics, btaa287, https://doi.org/10.1093/bioinformatics/btaa287
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Combination therapies have been widely used to treat cancers. However, it is cost- and time-consuming to experimentally screen synergistic drug pairs due to the enormous number of possible drug combinations. Thus, computational methods have become an important way to predict and prioritize synergistic drug pairs. Results: We proposed a Deep Tensor Factorization (DTF) model, which integrated a tensor factorization method and a deep neural network (DNN), to predict drug synergy. The former extracts latent features from drug synergy information while the latter constructs a binary classifier to predict the drug synergy status. Compared to the tensor-based method, the DTF model performed better in predicting drug synergy. The area under the precision-recall curve (PR AUC) was 0.57 for DTF and 0.24 for the tensor method. We also compared the DTF model with DeepSynergy and logistic regression models and found that the DTF outperformed the logistic regression model and achieved almost the same performance as DeepSynergy using several typical metrics for the classification task. Applying the DTF model to predict missing entries in our drug-cell line tensor, we identified novel synergistic drug combinations for 10 cell lines from the 5 cancer types. A literature survey showed that some of these predicted drug synergies have been identified in vivo or in vitro. Thus, the DTF model could be a valuable in silico tool for prioritizing novel synergistic drug combinations.
[ { "created": "Sat, 23 Nov 2019 04:58:13 GMT", "version": "v1" }, { "created": "Tue, 26 Nov 2019 04:19:36 GMT", "version": "v2" }, { "created": "Wed, 27 Nov 2019 16:25:33 GMT", "version": "v3" }, { "created": "Mon, 2 Dec 2019 08:14:45 GMT", "version": "v4" }, { "created": "Sat, 18 Jan 2020 09:19:34 GMT", "version": "v5" }, { "created": "Fri, 28 Feb 2020 05:53:53 GMT", "version": "v6" }, { "created": "Wed, 16 Sep 2020 11:28:14 GMT", "version": "v7" } ]
2020-09-17
[ [ "Sun", "Zexuan", "" ], [ "Huang", "Shujun", "" ], [ "Jiang", "Peiran", "" ], [ "Hu", "Pingzhao", "" ] ]
Motivation: Combination therapies have been widely used to treat cancers. However, it is cost- and time-consuming to experimentally screen synergistic drug pairs due to the enormous number of possible drug combinations. Thus, computational methods have become an important way to predict and prioritize synergistic drug pairs. Results: We proposed a Deep Tensor Factorization (DTF) model, which integrated a tensor factorization method and a deep neural network (DNN), to predict drug synergy. The former extracts latent features from drug synergy information while the latter constructs a binary classifier to predict the drug synergy status. Compared to the tensor-based method, the DTF model performed better in predicting drug synergy. The area under the precision-recall curve (PR AUC) was 0.57 for DTF and 0.24 for the tensor method. We also compared the DTF model with DeepSynergy and logistic regression models and found that the DTF outperformed the logistic regression model and achieved almost the same performance as DeepSynergy using several typical metrics for the classification task. Applying the DTF model to predict missing entries in our drug-cell line tensor, we identified novel synergistic drug combinations for 10 cell lines from the 5 cancer types. A literature survey showed that some of these predicted drug synergies have been identified in vivo or in vitro. Thus, the DTF model could be a valuable in silico tool for prioritizing novel synergistic drug combinations.
1408.0640
Alexander Lange
Alexander Lange
Reconstruction of disease transmission rates: applications to measles, dengue, and influenza
null
J. Theor. Biol. 400 (2016) 138-153
10.1016/j.jtbi.2016.04.017
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transmission rates are key in understanding the spread of infectious diseases. Using the framework of compartmental models, we introduce a simple method that enables us to reconstruct time series of transmission rates directly from incidence or disease-related mortality data. The reconstruction exploits differential equations, which model the time evolution of infective stages and strains. Being sensitive to initial values, the method produces asymptotically correct solutions. The computations are fast, with time complexity being quadratic. We apply the reconstruction to data of measles (England and Wales, 1948-67), dengue (Thailand, 1982-99), and influenza (U.S., 1910-27). The measles example offers comparison with earlier work. Here we re-investigate reporting corrections, include and exclude demographic information. The dengue example deals with the failure of vector-control measures in reducing dengue hemorrhagic fever (DHF) in Thailand. Two competing mechanisms have been held responsible: strain interaction and demographic transitions. Our reconstruction reveals that both explanations are possible, showing that the increase in DHF cases is consistent with decreasing transmission rates resulting from reduced vector counts. The flu example focuses on the 1918/19 pandemic, examining the transmission rate evolution for an invading strain. Our analysis indicates that the pandemic strain could have circulated in the population for many months before the pandemic was initiated by an event of highly increased transmission.
[ { "created": "Mon, 4 Aug 2014 11:08:41 GMT", "version": "v1" } ]
2016-05-06
[ [ "Lange", "Alexander", "" ] ]
Transmission rates are key in understanding the spread of infectious diseases. Using the framework of compartmental models, we introduce a simple method that enables us to reconstruct time series of transmission rates directly from incidence or disease-related mortality data. The reconstruction exploits differential equations, which model the time evolution of infective stages and strains. Being sensitive to initial values, the method produces asymptotically correct solutions. The computations are fast, with time complexity being quadratic. We apply the reconstruction to data of measles (England and Wales, 1948-67), dengue (Thailand, 1982-99), and influenza (U.S., 1910-27). The measles example offers comparison with earlier work. Here we re-investigate reporting corrections, include and exclude demographic information. The dengue example deals with the failure of vector-control measures in reducing dengue hemorrhagic fever (DHF) in Thailand. Two competing mechanisms have been held responsible: strain interaction and demographic transitions. Our reconstruction reveals that both explanations are possible, showing that the increase in DHF cases is consistent with decreasing transmission rates resulting from reduced vector counts. The flu example focuses on the 1918/19 pandemic, examining the transmission rate evolution for an invading strain. Our analysis indicates that the pandemic strain could have circulated in the population for many months before the pandemic was initiated by an event of highly increased transmission.
2002.11592
Debswapna Bhattacharya
Andrew McGehee, Sutanu Bhattacharya, Rahmatullah Roche, Debswapna Bhattacharya
PolyFold: an interactive visual simulator for distance-based protein folding
19 pages, 3 figures
null
10.1371/journal.pone.0243331
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in distance-based protein folding have led to a paradigm shift in protein structure prediction. Through sufficiently precise estimation of the inter-residue distance matrix for a protein sequence, it is now feasible to predict the correct folds for new proteins much more accurately than ever before. Despite the exciting progress, a dedicated visualization system that can dynamically capture the distance-based folding process is still lacking. Most molecular visualizers typically provide only a static view of a folded protein conformation, but do not capture the folding process. Even among the selected few graphical interfaces that do adopt a dynamic perspective, none of them are distance-based. Here we present PolyFold, an interactive visual simulator for dynamically capturing the distance-based protein folding process through real-time rendering of a distance matrix and its compatible spatial conformation as it folds in an intuitive and easy-to-use interface. PolyFold integrates highly convergent stochastic optimization algorithms with on-demand customizations and interactive manipulations to maximally satisfy the geometric constraints imposed by a distance matrix. PolyFold is capable of simulating the complex process of protein folding even on modest personal computers, thus making it accessible to the general public for fostering citizen science. Open source code of PolyFold is freely available for download at https://github.com/Bhattacharya-Lab/PolyFold. It is implemented in cross-platform Java and binary executables are available for macOS, Linux, and Windows.
[ { "created": "Fri, 14 Feb 2020 17:16:54 GMT", "version": "v1" }, { "created": "Sun, 29 Nov 2020 03:08:27 GMT", "version": "v2" } ]
2021-01-27
[ [ "McGehee", "Andrew", "" ], [ "Bhattacharya", "Sutanu", "" ], [ "Roche", "Rahmatullah", "" ], [ "Bhattacharya", "Debswapna", "" ] ]
Recent advances in distance-based protein folding have led to a paradigm shift in protein structure prediction. Through sufficiently precise estimation of the inter-residue distance matrix for a protein sequence, it is now feasible to predict the correct folds for new proteins much more accurately than ever before. Despite the exciting progress, a dedicated visualization system that can dynamically capture the distance-based folding process is still lacking. Most molecular visualizers typically provide only a static view of a folded protein conformation, but do not capture the folding process. Even among the selected few graphical interfaces that do adopt a dynamic perspective, none of them are distance-based. Here we present PolyFold, an interactive visual simulator for dynamically capturing the distance-based protein folding process through real-time rendering of a distance matrix and its compatible spatial conformation as it folds in an intuitive and easy-to-use interface. PolyFold integrates highly convergent stochastic optimization algorithms with on-demand customizations and interactive manipulations to maximally satisfy the geometric constraints imposed by a distance matrix. PolyFold is capable of simulating the complex process of protein folding even on modest personal computers, thus making it accessible to the general public for fostering citizen science. Open source code of PolyFold is freely available for download at https://github.com/Bhattacharya-Lab/PolyFold. It is implemented in cross-platform Java and binary executables are available for macOS, Linux, and Windows.
1602.08530
Vijay Singh
Vijay Singh, Martin Tchernookov, Ilya Nemenman
Extrinsic and intrinsic correlations in molecular information transmission
10 pages, 2 figures
null
10.1103/PhysRevE.94.022425
null
q-bio.NC physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells measure concentrations of external ligands by capturing ligand molecules with cell surface receptors. The numbers of molecules captured by different receptors co-vary because they depend on the same extrinsic ligand fluctuations. However, these numbers also counter-vary due to the intrinsic stochasticity of chemical processes because a single molecule randomly captured by a receptor cannot be captured by another. Such structure of receptor correlations is generally believed to lead to an increase in information about the external signal compared to the case of independent receptors. We analyze a solvable model of two molecular receptors and show that, contrary to this widespread expectation, the correlations have a small and negative effect on the information about the ligand concentration. Further, we show that measurements that average over multiple receptors are almost as informative as those that track the states of every individual one.
[ { "created": "Fri, 26 Feb 2016 23:46:03 GMT", "version": "v1" } ]
2016-09-21
[ [ "Singh", "Vijay", "" ], [ "Tchernookov", "Martin", "" ], [ "Nemenman", "Ilya", "" ] ]
Cells measure concentrations of external ligands by capturing ligand molecules with cell surface receptors. The numbers of molecules captured by different receptors co-vary because they depend on the same extrinsic ligand fluctuations. However, these numbers also counter-vary due to the intrinsic stochasticity of chemical processes because a single molecule randomly captured by a receptor cannot be captured by another. Such structure of receptor correlations is generally believed to lead to an increase in information about the external signal compared to the case of independent receptors. We analyze a solvable model of two molecular receptors and show that, contrary to this widespread expectation, the correlations have a small and negative effect on the information about the ligand concentration. Further, we show that measurements that average over multiple receptors are almost as informative as those that track the states of every individual one.
1709.01437
Sina Tootoonian
Sina Tootoonian and Peter Latham
Sparse connectivity for MAP inference in linear models using sister mitral cells
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions usually require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose a principled algorithm that provably reaches the MAP inference solution but using sparse connectivity. Our algorithm is inspired by the mouse olfactory bulb, but our approach is general enough to apply to other modalities; in addition, it should be possible to extend it to nonlinear encoding models.
[ { "created": "Tue, 5 Sep 2017 15:02:02 GMT", "version": "v1" } ]
2017-09-06
[ [ "Tootoonian", "Sina", "" ], [ "Latham", "Peter", "" ] ]
Sensory processing is hard because the variables of interest are encoded in spike trains in a relatively complex way. A major goal in sensory processing is to understand how the brain extracts those variables. Here we revisit a common encoding model in which variables are encoded linearly. Although there are typically more variables than neurons, this problem is still solvable because only a small number of variables appear at any one time (sparse prior). However, previous solutions usually require all-to-all connectivity, inconsistent with the sparse connectivity seen in the brain. Here we propose a principled algorithm that provably reaches the MAP inference solution but using sparse connectivity. Our algorithm is inspired by the mouse olfactory bulb, but our approach is general enough to apply to other modalities; in addition, it should be possible to extend it to nonlinear encoding models.
2008.09225
Jiaxi Zhao
Nicholas C. Lammers, Yang Joon Kim, Jiaxi Zhao, Hernan G. Garcia
A matter of time: Using dynamics and theory to uncover mechanisms of transcriptional bursting
41 pages, 4 figures, review article
null
10.1016/j.ceb.2020.08.001
null
q-bio.SC q-bio.CB q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic transcription generally occurs in bursts of activity lasting minutes to hours; however, state-of-the-art measurements have revealed that many of the molecular processes that underlie bursting, such as transcription factor binding to DNA, unfold on timescales of seconds. This temporal disconnect lies at the heart of a broader challenge in physical biology of predicting transcriptional outcomes and cellular decision-making from the dynamics of underlying molecular processes. Here, we review how new dynamical information about the processes underlying transcriptional control can be combined with theoretical models that predict not only averaged transcriptional dynamics, but also their variability, to formulate testable hypotheses about the molecular mechanisms underlying transcriptional bursting and control.
[ { "created": "Thu, 20 Aug 2020 23:17:44 GMT", "version": "v1" }, { "created": "Thu, 17 Dec 2020 21:11:54 GMT", "version": "v2" } ]
2020-12-21
[ [ "Lammers", "Nicholas C.", "" ], [ "Kim", "Yang Joon", "" ], [ "Zhao", "Jiaxi", "" ], [ "Garcia", "Hernan G.", "" ] ]
Eukaryotic transcription generally occurs in bursts of activity lasting minutes to hours; however, state-of-the-art measurements have revealed that many of the molecular processes that underlie bursting, such as transcription factor binding to DNA, unfold on timescales of seconds. This temporal disconnect lies at the heart of a broader challenge in physical biology of predicting transcriptional outcomes and cellular decision-making from the dynamics of underlying molecular processes. Here, we review how new dynamical information about the processes underlying transcriptional control can be combined with theoretical models that predict not only averaged transcriptional dynamics, but also their variability, to formulate testable hypotheses about the molecular mechanisms underlying transcriptional bursting and control.
2311.02704
Aran Nayebi
Aran Nayebi
A Goal-Driven Approach to Systems Neuroscience
230 pages, Stanford University PhD Thesis, March 2022: https://purl.stanford.edu/qk457cr2641
null
null
null
q-bio.NC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Humans and animals exhibit a range of interesting behaviors in dynamic environments, and it is unclear how our brains actively reformat this dense sensory information to enable these behaviors. Experimental neuroscience is undergoing a revolution in its ability to record and manipulate hundreds to thousands of neurons while an animal is performing a complex behavior. As these paradigms enable unprecedented access to the brain, a natural question that arises is how to distill these data into interpretable insights about how neural circuits give rise to intelligent behaviors. The classical approach in systems neuroscience has been to ascribe well-defined operations to individual neurons and provide a description of how these operations combine to produce a circuit-level theory of neural computations. While this approach has had some success for small-scale recordings with simple stimuli, designed to probe a particular circuit computation, oftentimes these ultimately lead to disparate descriptions of the same system across stimuli. Perhaps more strikingly, many response profiles of neurons are difficult to succinctly describe in words, suggesting that new approaches are needed in light of these experimental observations. In this thesis, we offer a different definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits, and describes the evolutionary constraints that give rise to the response properties of the neural population, including those that have previously been difficult to describe individually. We demonstrate the utility of this framework across multiple brain areas and species to study the roles of recurrent processing in the primate ventral visual pathway; mouse visual processing; heterogeneity in rodent medial entorhinal cortex; and facilitating biological learning.
[ { "created": "Sun, 5 Nov 2023 16:37:53 GMT", "version": "v1" } ]
2023-11-07
[ [ "Nayebi", "Aran", "" ] ]
Humans and animals exhibit a range of interesting behaviors in dynamic environments, and it is unclear how our brains actively reformat this dense sensory information to enable these behaviors. Experimental neuroscience is undergoing a revolution in its ability to record and manipulate hundreds to thousands of neurons while an animal is performing a complex behavior. As these paradigms enable unprecedented access to the brain, a natural question that arises is how to distill these data into interpretable insights about how neural circuits give rise to intelligent behaviors. The classical approach in systems neuroscience has been to ascribe well-defined operations to individual neurons and provide a description of how these operations combine to produce a circuit-level theory of neural computations. While this approach has had some success for small-scale recordings with simple stimuli, designed to probe a particular circuit computation, oftentimes these ultimately lead to disparate descriptions of the same system across stimuli. Perhaps more strikingly, many response profiles of neurons are difficult to succinctly describe in words, suggesting that new approaches are needed in light of these experimental observations. In this thesis, we offer a different definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits, and describes the evolutionary constraints that give rise to the response properties of the neural population, including those that have previously been difficult to describe individually. We demonstrate the utility of this framework across multiple brain areas and species to study the roles of recurrent processing in the primate ventral visual pathway; mouse visual processing; heterogeneity in rodent medial entorhinal cortex; and facilitating biological learning.
2003.09865
Sonja Aits
Salma Kazemi Rashed, Rafsan Ahmed, Johan Frid, Sonja Aits
English dictionaries, gold and silver standard corpora for biomedical natural language processing related to SARS-CoV-2 and COVID-19
8 pages, 1 table, 8 supplementary files (available online)
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated information extraction with natural language processing (NLP) tools is required to gain systematic insights from the large number of COVID-19 publications, reports and social media posts, which far exceed human processing capabilities. A key challenge for NLP is the extensive variation in terminology used to describe medical entities, which was especially pronounced for this newly emergent disease. Here we present an NLP toolbox comprising very large English dictionaries of synonyms for SARS-CoV-2 (including variant names) and COVID-19, which can be used with dictionary-based NLP tools. We also present a silver standard corpus generated with the dictionaries, and a gold standard corpus, consisting of PubMed abstracts manually annotated for disease, virus, symptom, protein/gene, cell type, chemical and species terms, which can be used to train and evaluate COVID-19-related NLP tools. Code for annotation, which can be used to expand the silver standard corpus or for text mining, is also included. This toolbox is freely available on GitHub (https://github.com/Aitslab/corona) and zenodo (https://doi.org/10.5281/zenodo.6642275). The toolbox can be used for a variety of text analytics tasks related to the COVID-19 crisis and has already been used to create a COVID-19 knowledge graph, study the variability and evolution of COVID-19-related terminology and develop and benchmark text mining tools.
[ { "created": "Sun, 22 Mar 2020 11:37:58 GMT", "version": "v1" }, { "created": "Fri, 3 Jun 2022 12:38:28 GMT", "version": "v2" }, { "created": "Tue, 14 Jun 2022 14:53:16 GMT", "version": "v3" } ]
2022-06-15
[ [ "Rashed", "Salma Kazemi", "" ], [ "Ahmed", "Rafsan", "" ], [ "Frid", "Johan", "" ], [ "Aits", "Sonja", "" ] ]
Automated information extraction with natural language processing (NLP) tools is required to gain systematic insights from the large number of COVID-19 publications, reports and social media posts, which far exceed human processing capabilities. A key challenge for NLP is the extensive variation in terminology used to describe medical entities, which was especially pronounced for this newly emergent disease. Here we present an NLP toolbox comprising very large English dictionaries of synonyms for SARS-CoV-2 (including variant names) and COVID-19, which can be used with dictionary-based NLP tools. We also present a silver standard corpus generated with the dictionaries, and a gold standard corpus, consisting of PubMed abstracts manually annotated for disease, virus, symptom, protein/gene, cell type, chemical and species terms, which can be used to train and evaluate COVID-19-related NLP tools. Code for annotation, which can be used to expand the silver standard corpus or for text mining, is also included. This toolbox is freely available on GitHub (https://github.com/Aitslab/corona) and zenodo (https://doi.org/10.5281/zenodo.6642275). The toolbox can be used for a variety of text analytics tasks related to the COVID-19 crisis and has already been used to create a COVID-19 knowledge graph, study the variability and evolution of COVID-19-related terminology and develop and benchmark text mining tools.
2104.04235
Disheng Tang
Disheng Tang, Wei Cao, Jiang Bian, Tie-Yan Liu, Zhifeng Gao, Shun Zheng, Jue Liu
Impact of pandemic fatigue on the spread of COVID-19: a mathematical modelling study
null
null
null
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
In late 2020, many countries around the world, including the United Kingdom, Canada, Brazil, and the United States, faced another surge in the number of confirmed cases of COVID-19, which resulted in a large nationwide and even worldwide wave. While there have been indications that precaution fatigue could be a key factor, no scientific evidence has been provided so far. We used a stochastic metapopulation model with a hierarchical structure and fitted the model to the positive cases in the US from the start of the outbreak to the end of 2020. We incorporated non-pharmaceutical interventions (NPIs) into this model by assuming that the precaution strength grows with positive cases, and studied two types of pandemic fatigue. We found that people in most states and in the whole US respond to the outbreak in a sublinear manner (with exponent k=0.5), while only three states (Massachusetts, New York and New Jersey) have a linear reaction (k=1). Case fatigue (decline in people's vigilance to positive cases) is responsible for 58% of cases, while precaution fatigue (decay of the maximal fraction of the vigilant group) accounts for 26% of cases. If there were no pandemic fatigue (no case fatigue and no precaution fatigue), total positive cases would have been reduced by 68% on average. Our study shows that pandemic fatigue is the major cause of the worsening COVID-19 situation in the United States. Reduced vigilance is responsible for most positive cases, and a higher mortality rate tends to push local people to react to the outbreak faster and remain vigilant for a longer time.
[ { "created": "Fri, 9 Apr 2021 08:01:18 GMT", "version": "v1" } ]
2021-04-12
[ [ "Tang", "Disheng", "" ], [ "Cao", "Wei", "" ], [ "Bian", "Jiang", "" ], [ "Liu", "Tie-Yan", "" ], [ "Gao", "Zhifeng", "" ], [ "Zheng", "Shun", "" ], [ "Liu", "Jue", "" ] ]
In late 2020, many countries around the world, including the United Kingdom, Canada, Brazil, and the United States, faced another surge in the number of confirmed cases of COVID-19, which resulted in a large nationwide and even worldwide wave. While there have been indications that precaution fatigue could be a key factor, no scientific evidence has been provided so far. We used a stochastic metapopulation model with a hierarchical structure and fitted the model to the positive cases in the US from the start of the outbreak to the end of 2020. We incorporated non-pharmaceutical interventions (NPIs) into this model by assuming that the precaution strength grows with positive cases, and studied two types of pandemic fatigue. We found that people in most states and in the whole US respond to the outbreak in a sublinear manner (with exponent k=0.5), while only three states (Massachusetts, New York and New Jersey) have a linear reaction (k=1). Case fatigue (decline in people's vigilance to positive cases) is responsible for 58% of cases, while precaution fatigue (decay of the maximal fraction of the vigilant group) accounts for 26% of cases. If there were no pandemic fatigue (no case fatigue and no precaution fatigue), total positive cases would have been reduced by 68% on average. Our study shows that pandemic fatigue is the major cause of the worsening COVID-19 situation in the United States. Reduced vigilance is responsible for most positive cases, and a higher mortality rate tends to push local people to react to the outbreak faster and remain vigilant for a longer time.
2309.13089
Tiphaine SAULNIER
Tiphaine Saulnier (BPH), Margherita Fabbri (CIC 1436, CHU Toulouse), M\'elanie Le Goff, Catherine Helmer (BPH), Anne Pavy-Le Traon, Wassilios G. Meissner (IMN), Olivier Rascol (CIC 1436), C\'ecile Proust-Lima (BPH), Alexandra Foubert-Samier (BPH, IMN)
Patient-perceived progression in multiple system atrophy: natural history of quality of life
null
null
null
null
q-bio.QM stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Health-related quality of life (Hr-QoL) scales provide crucial information on neurodegenerative disease progression, help improve patient care, and constitute a meaningful endpoint for therapeutic research. However, Hr-QoL progression is usually poorly documented, as for multiple system atrophy (MSA), a rare and rapidly progressing alpha-synucleinopathy. This work aimed to describe Hr-QoL progression during the natural course of MSA, explore disparities between patients, and identify informative items using a four-step statistical strategy. We leveraged the data of the French MSA cohort, comprising annual assessments with the MSA-QoL questionnaire for more than 500 patients over up to 11 years. The four-step strategy (1) determined the subdimensions of Hr-QoL in MSA; (2) modelled the subdimension trajectories over time, accounting for the risk of death; (3) mapped the sequence of item impairments with disease stages; and (4) identified the most informative items specific to each disease stage. Among the 536 patients included, 50% were women, and they were aged 65.1 years on average at entry. Among them, 63.1% died during follow-up. Four dimensions were identified. In addition to the original motor, nonmotor, and emotional domains, an oropharyngeal component was highlighted. While the motor and oropharyngeal domains deteriorated rapidly, the nonmotor and emotional aspects were already slightly to moderately impaired at cohort entry and deteriorated slowly over the course of the disease. Impairments were associated with sex, diagnosis subtype, and delay since symptom onset. Except for the emotional domain, each dimension was driven by key identified items. Hr-QoL is a multidimensional concept that deteriorates progressively over the course of MSA and brings essential knowledge for improving patient care. As exemplified with MSA, the thorough description of Hr-QoL using the original four-step analysis can provide new perspectives on the management of neurodegenerative diseases, to ultimately deliver better support focused on the patient's perspective.
[ { "created": "Fri, 22 Sep 2023 07:27:21 GMT", "version": "v1" } ]
2023-09-26
[ [ "Saulnier", "Tiphaine", "", "BPH" ], [ "Fabbri", "Margherita", "", "CIC 1436, CHU Toulouse" ], [ "Goff", "Mélanie Le", "", "BPH" ], [ "Helmer", "Catherine", "", "BPH" ], [ "Traon", "Anne Pavy-Le", "", "IMN" ], [ "Meissner", "Wassilios G.", "", "IMN" ], [ "Rascol", "Olivier", "", "CIC 1436" ], [ "Proust-Lima", "Cécile", "", "BPH" ], [ "Foubert-Samier", "Alexandra", "", "BPH, IMN" ] ]
Health-related quality of life (Hr-QoL) scales provide crucial information on neurodegenerative disease progression, help improve patient care, and constitute a meaningful endpoint for therapeutic research. However, Hr-QoL progression is usually poorly documented, as for multiple system atrophy (MSA), a rare and rapidly progressing alpha-synucleinopathy. This work aimed to describe Hr-QoL progression during the natural course of MSA, explore disparities between patients, and identify informative items using a four-step statistical strategy. We leveraged the data of the French MSA cohort, comprising annual assessments with the MSA-QoL questionnaire for more than 500 patients over up to 11 years. The four-step strategy (1) determined the subdimensions of Hr-QoL in MSA; (2) modelled the subdimension trajectories over time, accounting for the risk of death; (3) mapped the sequence of item impairments with disease stages; and (4) identified the most informative items specific to each disease stage. Among the 536 patients included, 50% were women, and the average age at entry was 65.1 years; 63.1% died during follow-up. Four dimensions were identified. In addition to the original motor, nonmotor, and emotional domains, an oropharyngeal component was highlighted. While the motor and oropharyngeal domains deteriorated rapidly, the nonmotor and emotional aspects were already slightly to moderately impaired at cohort entry and deteriorated slowly over the course of the disease. Impairments were associated with sex, diagnosis subtype, and delay since symptom onset. Except for the emotional domain, each dimension was driven by key identified items. Hr-QoL is a multidimensional concept that deteriorates progressively over the course of MSA, and its description brings essential knowledge for improving patient care.
As exemplified with MSA, a thorough description of Hr-QoL using the original four-step analysis can provide new perspectives on the management of neurodegenerative diseases, to ultimately deliver better support focused on the patient's perspective.
2107.12901
Dixon Vimalajeewa
Dixon Vimalajeewa and Sasitharan Balasubramaniam
Channel Capacity of Starch and Glucose Molecular Communications in the Small Intestine Digestive Tract
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by-nc-sa/4.0/
The emerging field of Molecular Communication (MC) aims to characterize biologically based signaling environments through information encoded into molecules. Since the birth of this field, a number of different applications and biological systems have been characterized using MC theory. This study proposes a new application and direction for MC, focusing on the digestive system, where we characterize and model starch and glucose propagation along the small intestine. Based on advection-diffusion and reaction mechanisms, we define a channel capacity for the small intestine digestive tract that depends on the starch-to-glucose conversion, the velocity of flow within the tract, the viscosity of the digest product, the length of the tract, and the position of the receivers for glucose absorption. The numerical results from the derived channel capacity model show that the small intestine digestive capacity depends both on physiological factors of the digestive system and on the type of food consumed: the digestive capacity is greater for shorter gastric emptying times, lower viscosity of the digest product, and more efficient enzyme activity. We believe that our digital MC model of the digestive tract can lead to a personalized diet for each individual, which can potentially help avoid a number of different diseases (e.g., celiac disease).
[ { "created": "Fri, 16 Jul 2021 09:29:40 GMT", "version": "v1" } ]
2021-07-28
[ [ "Vimalajeewa", "Dixon", "" ], [ "Balasubramaniam", "Sasitharan", "" ] ]
The emerging field of Molecular Communication (MC) aims to characterize biologically based signaling environments through information encoded into molecules. Since the birth of this field, a number of different applications and biological systems have been characterized using MC theory. This study proposes a new application and direction for MC, focusing on the digestive system, where we characterize and model starch and glucose propagation along the small intestine. Based on advection-diffusion and reaction mechanisms, we define a channel capacity for the small intestine digestive tract that depends on the starch-to-glucose conversion, the velocity of flow within the tract, the viscosity of the digest product, the length of the tract, and the position of the receivers for glucose absorption. The numerical results from the derived channel capacity model show that the small intestine digestive capacity depends both on physiological factors of the digestive system and on the type of food consumed: the digestive capacity is greater for shorter gastric emptying times, lower viscosity of the digest product, and more efficient enzyme activity. We believe that our digital MC model of the digestive tract can lead to a personalized diet for each individual, which can potentially help avoid a number of different diseases (e.g., celiac disease).
2201.08443
Ava Hoffman
The Genomic Data Science Community Network, Rosa Alcazar (1), Maria Alvarez (2), Rachel Arnold (3), Mentewab Ayalew (4), Lyle G. Best (5), Michael C. Campbell (6), Kamal Chowdhury (7), Katherine E. L. Cox (8), Christina Daulton (9), Youping Deng (10), Carla Easter (11), Karla Fuller (12), Shazia Tabassum Hakim (13), Ava M. Hoffman (8), Natalie Kucher (14), Andrew Lee (15), Joslynn Lee (16), Jeffrey T. Leek (8), Robert Meller (17), Loyda B. M\'endez (18), Miguel P. M\'endez-Gonz\'alez (19), Stephen Mosher (14), Michele Nishiguchi (20), Siddharth Pratap (21), Tiffany Rolle (9), Sourav Roy (22), Rachel Saidi (23), Michael C. Schatz (14 and 24), Shurjo Sen (9), James Sniezek (25), Edu Suarez Martinez (26), Frederick Tan (27), Jennifer Vessio (14), Karriem Watson (28), Wendy Westbroek (29), Joseph Wilcox (30), Xianfa Xie (31) ((1) Clovis Community College, Fresno, CA, USA, (2) Biology, El Paso Community College, El Paso, TX, USA, (3) US Fish and Wildlife and Northwest Indian College, Onalaska, WI, USA, (4) Biology Department, Spelman College, Atlanta, GA, USA, (5) Turtle Mountain Community College, Belcourt, ND, USA, (6) Department of Biological Sciences, University of Southern California, Los Angeles CA, USA, (7) Biology Department, Claflin University, Orangeburg, SC, USA, (8) Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, (9) National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA, (10) Department of Quantitative Health Sciences, University of Hawaii at Manoa, Honolulu, HI, USA, (11) Smithsonian Institute National Museum of Natural History, Washington, DC, USA, (12) Guttman Community College, New York, NY, USA, (13) Department of Microbiology and Biomedical Sciences, Dine College, Tuba City, AZ, USA, (14) Department of Biology, Johns Hopkins University, Baltimore, MD, USA, (15) Department of Biology, Northern Virginia Community College - Alexandria, Alexandria, VA, USA, (16) 
Department of Chemistry and Biochemistry, Fort Lewis College, Durango, CO, USA, (17) Department of Neurobiology, Morehouse School of Medicine, Atlanta, GA, USA, (18) Science & Technology, Universidad Ana G. M\'endez, Carolina, Carolina, PR, (19) Natural Sciences Department, University of Puerto Rico at Aguadilla, Aguadilla, PR, (20) Department of Molecular and Cell Biology, University of California, Merced, Merced, CA, USA, (21) School of Graduate Studies and Research, Meharry Medical College, Nashville, TN, USA, (22) Department of Biological Sciences and Border Biomedical Research Center, University of Texas at El Paso, El Paso, TX, USA, (23) Department of Math, Statistics, and Data Science, Montgomery College, Rockville, MD, USA, (24) Departments of Computer Science, Johns Hopkins University, Baltimore, MD, USA, (25) Chemical and Biological Sciences, Montgomery College, Germantown, MD, USA, (26) Department of Biology, University of Puerto Rico, Ponce, Ponce, PR, (27) Department of Embryology, Carnegie Institution, Baltimore, MD, USA, (28) National Institutes of Health, Bethesda, MD, USA, (29) Department of Biology, Flathead Valley Community College, Kalispell, MT, USA, (30) Department of Biology, Nevada State College, Henderson, NV, USA, (31) Department of Biology, Virginia State University, Petersburg, VA, USA)
Diversifying the Genomic Data Science Research Community
42 pages, 3 figures
null
null
null
q-bio.OT cs.CY
http://creativecommons.org/licenses/by/4.0/
Over the last 20 years, there has been an explosion of genomic data collected for disease association, functional analyses, and other large-scale discoveries. At the same time, there have been revolutions in cloud computing that enable computational and data science research, while making data accessible to anyone with a web browser and an internet connection. However, students at institutions with limited resources have received relatively little exposure to curricula or professional development opportunities that lead to careers in genomic data science. To broaden participation in genomics research, the scientific community needs to support students, faculty, and administrators at Underserved Institutions (UIs) including Community Colleges, Historically Black Colleges and Universities, Hispanic-Serving Institutions, and Tribal Colleges and Universities in taking advantage of these tools in local educational and research programs. We have formed the Genomic Data Science Community Network (http://www.gdscn.org/) to identify opportunities and support broadening access to cloud-enabled genomic data science. Here, we provide a summary of the priorities for faculty members at UIs, as well as administrators, funders, and R1 researchers to consider as we create a more diverse genomic data science community.
[ { "created": "Thu, 20 Jan 2022 20:36:18 GMT", "version": "v1" }, { "created": "Thu, 9 Jun 2022 13:49:35 GMT", "version": "v2" } ]
2022-06-10
[ [ "Network", "The Genomic Data Science Community", "", "14 and 24" ], [ "Alcazar", "Rosa", "", "14 and 24" ], [ "Alvarez", "Maria", "", "14 and 24" ], [ "Arnold", "Rachel", "", "14 and 24" ], [ "Ayalew", "Mentewab", "", "14 and 24" ], [ "Best", "Lyle G.", "", "14 and 24" ], [ "Campbell", "Michael C.", "", "14 and 24" ], [ "Chowdhury", "Kamal", "", "14 and 24" ], [ "Cox", "Katherine E. L.", "", "14 and 24" ], [ "Daulton", "Christina", "", "14 and 24" ], [ "Deng", "Youping", "", "14 and 24" ], [ "Easter", "Carla", "", "14 and 24" ], [ "Fuller", "Karla", "", "14 and 24" ], [ "Hakim", "Shazia Tabassum", "", "14 and 24" ], [ "Hoffman", "Ava M.", "", "14 and 24" ], [ "Kucher", "Natalie", "", "14 and 24" ], [ "Lee", "Andrew", "", "14 and 24" ], [ "Lee", "Joslynn", "", "14 and 24" ], [ "Leek", "Jeffrey T.", "", "14 and 24" ], [ "Meller", "Robert", "", "14 and 24" ], [ "Méndez", "Loyda B.", "", "14 and 24" ], [ "Méndez-González", "Miguel P.", "", "14 and 24" ], [ "Mosher", "Stephen", "", "14 and 24" ], [ "Nishiguchi", "Michele", "", "14 and 24" ], [ "Pratap", "Siddharth", "", "14 and 24" ], [ "Rolle", "Tiffany", "", "14 and 24" ], [ "Roy", "Sourav", "", "14 and 24" ], [ "Saidi", "Rachel", "", "14 and 24" ], [ "Schatz", "Michael C.", "", "14 and 24" ], [ "Sen", "Shurjo", "" ], [ "Sniezek", "James", "" ], [ "Martinez", "Edu Suarez", "" ], [ "Tan", "Frederick", "" ], [ "Vessio", "Jennifer", "" ], [ "Watson", "Karriem", "" ], [ "Westbroek", "Wendy", "" ], [ "Wilcox", "Joseph", "" ], [ "Xie", "Xianfa", "" ] ]
Over the last 20 years, there has been an explosion of genomic data collected for disease association, functional analyses, and other large-scale discoveries. At the same time, there have been revolutions in cloud computing that enable computational and data science research, while making data accessible to anyone with a web browser and an internet connection. However, students at institutions with limited resources have received relatively little exposure to curricula or professional development opportunities that lead to careers in genomic data science. To broaden participation in genomics research, the scientific community needs to support students, faculty, and administrators at Underserved Institutions (UIs) including Community Colleges, Historically Black Colleges and Universities, Hispanic-Serving Institutions, and Tribal Colleges and Universities in taking advantage of these tools in local educational and research programs. We have formed the Genomic Data Science Community Network (http://www.gdscn.org/) to identify opportunities and support broadening access to cloud-enabled genomic data science. Here, we provide a summary of the priorities for faculty members at UIs, as well as administrators, funders, and R1 researchers to consider as we create a more diverse genomic data science community.
q-bio/0411009
Luciano da Fontoura Costa
Luciano da Fontoura Costa and Regina Celia Coelho
Growth-Driven Percolations: The Dynamics of Community Formation in Neuronal Systems
8 pages, 10 figures
null
10.1140/epjb/e2005-00354-5
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
null
The quintessential property of neuronal systems is their intensive pattern of selective synaptic connections. The current work describes a physics-based approach to neuronal shape modeling and synthesis, and its application to the simulation of neuronal development and the formation of neuronal communities. Starting from images of real neurons, geometrical measurements are obtained and used to construct probabilistic models, which can subsequently be sampled to produce morphologically realistic neuronal cells. Such cells are progressively grown while their connections are monitored over time and analysed in terms of percolation concepts. However, unlike traditional percolation, the critical point is sought along the growth stages, not as a function of the density of cells, which remains constant throughout the neuronal growth dynamics. It is shown, through simulations, that growing beta cells tend to reach percolation sooner than alpha cells with the same diameter. Also, the percolation becomes more abrupt for higher densities of cells, being markedly sharper for the beta cells.
[ { "created": "Tue, 2 Nov 2004 00:59:45 GMT", "version": "v1" } ]
2009-11-10
[ [ "Costa", "Luciano da Fontoura", "" ], [ "Coelho", "Regina Celia", "" ] ]
The quintessential property of neuronal systems is their intensive pattern of selective synaptic connections. The current work describes a physics-based approach to neuronal shape modeling and synthesis, and its application to the simulation of neuronal development and the formation of neuronal communities. Starting from images of real neurons, geometrical measurements are obtained and used to construct probabilistic models, which can subsequently be sampled to produce morphologically realistic neuronal cells. Such cells are progressively grown while their connections are monitored over time and analysed in terms of percolation concepts. However, unlike traditional percolation, the critical point is sought along the growth stages, not as a function of the density of cells, which remains constant throughout the neuronal growth dynamics. It is shown, through simulations, that growing beta cells tend to reach percolation sooner than alpha cells with the same diameter. Also, the percolation becomes more abrupt for higher densities of cells, being markedly sharper for the beta cells.
1610.00763
Stuart Hagler
Stuart Hagler
Patterns of Selection of Human Movements II: Movement Limits, Mechanical Energy, and Very Slow Walking Gaits
20 pages, 5 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The biomechanics of the human body allow humans a range of possible ways of executing movements to attain specific goals. This range of movement is limited by a number of mechanical, biomechanical, or cognitive constraints. Shifts in these limits change the set of possible movements from which a subject can select, and can affect which movements a subject selects. Therefore, by understanding the limits on the range of movement, we can come to a better understanding of declines in movement performance due to disease or aging. In this project, we look at how models for the limits on the range of movement can be derived in a principled manner from a model of the movement. Using the example of normal walking gaits, we develop a lower limit on the avg. walking speed by examining the process by which the body restores mechanical energy lost during walking, and we develop an upper limit on the avg. step length by examining the forces the body can exert doing external mechanical work, in this case, pulling a cart. Making slight changes to the model for normal walking gaits, we develop a model of very slow walking gaits with avg. walking speeds below the lower limit for normal walking gaits, but that also has a lower limit on the avg. walking speed. We note that the lowest avg. walking speeds observed clinically fall into the range of very slow walking gaits so defined, and argue that forms of bipedal locomotion with still lower speeds should be considered distinct from walking gaits.
[ { "created": "Mon, 3 Oct 2016 21:45:18 GMT", "version": "v1" }, { "created": "Thu, 10 Nov 2016 19:47:27 GMT", "version": "v2" }, { "created": "Thu, 24 Nov 2016 22:44:18 GMT", "version": "v3" }, { "created": "Sun, 9 Dec 2018 19:33:34 GMT", "version": "v4" } ]
2018-12-11
[ [ "Hagler", "Stuart", "" ] ]
The biomechanics of the human body allow humans a range of possible ways of executing movements to attain specific goals. This range of movement is limited by a number of mechanical, biomechanical, or cognitive constraints. Shifts in these limits change the set of possible movements from which a subject can select, and can affect which movements a subject selects. Therefore, by understanding the limits on the range of movement, we can come to a better understanding of declines in movement performance due to disease or aging. In this project, we look at how models for the limits on the range of movement can be derived in a principled manner from a model of the movement. Using the example of normal walking gaits, we develop a lower limit on the avg. walking speed by examining the process by which the body restores mechanical energy lost during walking, and we develop an upper limit on the avg. step length by examining the forces the body can exert doing external mechanical work, in this case, pulling a cart. Making slight changes to the model for normal walking gaits, we develop a model of very slow walking gaits with avg. walking speeds below the lower limit for normal walking gaits, but that also has a lower limit on the avg. walking speed. We note that the lowest avg. walking speeds observed clinically fall into the range of very slow walking gaits so defined, and argue that forms of bipedal locomotion with still lower speeds should be considered distinct from walking gaits.
1412.3857
Eric Forgoston
Garrett Nieddu, Lora Billings, Eric Forgoston
Analysis and control of pre-extinction dynamics in stochastic populations
17 pages, 9 figures. Final version to appear in Bulletin of Mathematical Biology
null
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a stochastic population model where the intrinsic or demographic noise causes cycling between states before the population eventually goes extinct. A master equation approach coupled with a WKB (Wentzel-Kramers-Brillouin) approximation is used to construct the optimal path to extinction. In addition, a probabilistic argument is used to understand the pre-extinction dynamics and approximate the mean time to extinction. Analytical results agree well with numerical Monte Carlo simulations. A control method is implemented to decrease the mean time to extinction. Analytical results quantify the effectiveness of the control and agree well with numerical simulations.
[ { "created": "Thu, 11 Dec 2014 23:05:11 GMT", "version": "v1" } ]
2014-12-16
[ [ "Nieddu", "Garrett", "" ], [ "Billings", "Lora", "" ], [ "Forgoston", "Eric", "" ] ]
We consider a stochastic population model where the intrinsic or demographic noise causes cycling between states before the population eventually goes extinct. A master equation approach coupled with a WKB (Wentzel-Kramers-Brillouin) approximation is used to construct the optimal path to extinction. In addition, a probabilistic argument is used to understand the pre-extinction dynamics and approximate the mean time to extinction. Analytical results agree well with numerical Monte Carlo simulations. A control method is implemented to decrease the mean time to extinction. Analytical results quantify the effectiveness of the control and agree well with numerical simulations.
1701.07646
Danielle Bassett
Pranav G. Reddy, Marcelo G. Mattar, Andrew C. Murphy, Nicholas F. Wymbs, Scott T. Grafton, Theodore D. Satterthwaite, Danielle S. Bassett
Brain State Flexibility Accompanies Motor-Skill Acquisition
36 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning requires the traversal of inherently distinct cognitive states to produce behavioral adaptation. Yet, tools to explicitly measure these states with non-invasive imaging -- and to assess their dynamics during learning -- remain limited. Here, we describe an approach based on a novel application of graph theory in which points in time are represented by network nodes, and similarities in brain states between two different time points are represented as network edges. We use a graph-based clustering technique to identify clusters of time points representing canonical brain states, and to assess the manner in which the brain moves from one state to another as learning progresses. We observe the presence of two primary states characterized by either high activation in sensorimotor cortex or high activation in a frontal-subcortical system. Flexible switching among these primary states and other less common states becomes more frequent as learning progresses, and is inversely correlated with individual differences in learning rate. These results are consistent with the notion that the development of automaticity is associated with a greater freedom to use cognitive resources for other processes. Taken together, our work offers new insights into the constrained, low-dimensional nature of brain dynamics characteristic of early learning, which give way to less constrained, high-dimensional dynamics in later learning.
[ { "created": "Thu, 26 Jan 2017 10:44:29 GMT", "version": "v1" } ]
2017-01-27
[ [ "Reddy", "Pranav G.", "" ], [ "Mattar", "Marcelo G.", "" ], [ "Murphy", "Andrew C.", "" ], [ "Wymbs", "Nicholas F.", "" ], [ "Grafton", "Scott T.", "" ], [ "Satterthwaite", "Theodore D.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Learning requires the traversal of inherently distinct cognitive states to produce behavioral adaptation. Yet, tools to explicitly measure these states with non-invasive imaging -- and to assess their dynamics during learning -- remain limited. Here, we describe an approach based on a novel application of graph theory in which points in time are represented by network nodes, and similarities in brain states between two different time points are represented as network edges. We use a graph-based clustering technique to identify clusters of time points representing canonical brain states, and to assess the manner in which the brain moves from one state to another as learning progresses. We observe the presence of two primary states characterized by either high activation in sensorimotor cortex or high activation in a frontal-subcortical system. Flexible switching among these primary states and other less common states becomes more frequent as learning progresses, and is inversely correlated with individual differences in learning rate. These results are consistent with the notion that the development of automaticity is associated with a greater freedom to use cognitive resources for other processes. Taken together, our work offers new insights into the constrained, low-dimensional nature of brain dynamics characteristic of early learning, which give way to less constrained, high-dimensional dynamics in later learning.
0807.1513
Jonathan Tapson
J. Tapson, C. Jin, A. van Schaik, and R. Etienne-Cummings
A First-Order Non-Homogeneous Markov Model for the Response of Spiking Neurons Stimulated by Small Phase-Continuous Signals
Accepted for publication in Neural Computation
Neural Computation Volume 21 Issue 6 Pages 1554-1588 Year 2009
null
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a first-order non-homogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes only the neuron characteristics, and the other of which describes only the signal characteristics. This allows the use of this model to predict the response when the underlying neuron model is not known or well determined. The approximation shows particularly clearly that signal autocorrelations and cross-correlations arise as natural features of the interspike-interval density, and are particularly clear for small signals and moderate noise. We show that this model simplifies the design of spiking neuron cross-correlation systems, and describe a four-neuron mutual inhibition network that generates a cross-correlation output for two input signals.
[ { "created": "Wed, 9 Jul 2008 18:36:48 GMT", "version": "v1" } ]
2012-08-15
[ [ "Tapson", "J.", "" ], [ "Jin", "C.", "" ], [ "van Schaik", "A.", "" ], [ "Etienne-Cummings", "R.", "" ] ]
We present a first-order non-homogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes only the neuron characteristics, and the other of which describes only the signal characteristics. This allows the use of this model to predict the response when the underlying neuron model is not known or well determined. The approximation shows particularly clearly that signal autocorrelations and cross-correlations arise as natural features of the interspike-interval density, and are particularly clear for small signals and moderate noise. We show that this model simplifies the design of spiking neuron cross-correlation systems, and describe a four-neuron mutual inhibition network that generates a cross-correlation output for two input signals.
2004.10671
Zachary Kilpatrick PhD
Zachary P Kilpatrick, Jacob D Davidson, and Ahmed El Hady
Normative theory of patch foraging decisions
28 pages, 10 figures, 1 table
null
null
null
q-bio.NC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Foraging is a fundamental behavior as animals' search for food is crucial for their survival. Patch leaving is a canonical foraging behavior, but classic theoretical conceptions of patch leaving decisions lack some key naturalistic details. Optimal foraging theory provides general rules for when an animal should leave a patch, but does not provide mechanistic insights about how those rules change with the structure of the environment. Such a mechanistic framework would aid in designing quantitative experiments to unravel behavioral and neural underpinnings of foraging. To address these shortcomings, we develop a normative theory of patch foraging decisions. Using a Bayesian approach, we treat patch leaving behavior as a statistical inference problem. We derive the animals' optimal decision strategies in both non-depleting and depleting environments. A majority of these cases can be analyzed explicitly using methods from stochastic processes. Our behavioral predictions are expressed in terms of the optimal patch residence time and the decision rule by which an animal departs a patch. We also extend our theory to a hierarchical model in which the forager learns the environmental food resource distribution. The quantitative framework we develop will therefore help experimenters move from analyzing trial based behavior to continuous behavior without the loss of quantitative rigor. Our theoretical framework both extends optimal foraging theory and motivates a variety of behavioral and neuroscientific experiments investigating patch foraging behavior.
[ { "created": "Wed, 22 Apr 2020 16:09:04 GMT", "version": "v1" } ]
2020-04-23
[ [ "Kilpatrick", "Zachary P", "" ], [ "Davidson", "Jacob D", "" ], [ "Hady", "Ahmed El", "" ] ]
Foraging is a fundamental behavior as animals' search for food is crucial for their survival. Patch leaving is a canonical foraging behavior, but classic theoretical conceptions of patch leaving decisions lack some key naturalistic details. Optimal foraging theory provides general rules for when an animal should leave a patch, but does not provide mechanistic insights about how those rules change with the structure of the environment. Such a mechanistic framework would aid in designing quantitative experiments to unravel behavioral and neural underpinnings of foraging. To address these shortcomings, we develop a normative theory of patch foraging decisions. Using a Bayesian approach, we treat patch leaving behavior as a statistical inference problem. We derive the animals' optimal decision strategies in both non-depleting and depleting environments. A majority of these cases can be analyzed explicitly using methods from stochastic processes. Our behavioral predictions are expressed in terms of the optimal patch residence time and the decision rule by which an animal departs a patch. We also extend our theory to a hierarchical model in which the forager learns the environmental food resource distribution. The quantitative framework we develop will therefore help experimenters move from analyzing trial based behavior to continuous behavior without the loss of quantitative rigor. Our theoretical framework both extends optimal foraging theory and motivates a variety of behavioral and neuroscientific experiments investigating patch foraging behavior.
1905.02459
Emiliano J. Quinto Dr
E.J. Quinto, J.M. Marin, I. Caro, J. Mateo, D.W. Schaffner
Bayesian modeling of two-species bacterial competition growth and decline rates in milk
39 pages, 3 tables, 4 figures, 4 supplement figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shiga toxin-producing Escherichia coli O157:H7 is a food-borne pathogen and the major cause of hemorrhagic colitis. Pseudomonas is the genus of the most frequent psychrotrophic spoilage microorganisms present in milk. Two-species bacterial systems with Escherichia coli O157:H7, non-pathogenic Escherichia coli, and Pseudomonas fluorescens in skimmed milk at 7, 13, 19, or 25 C were studied. Bacterial interactions were modelled using a Bayesian approach. No direct correlation was found between Pseudomonas fluorescens' growth rate and its effect on the maximum population densities of the Escherichia coli species. The results show the complexity of the interactions between the two species in a food model. The use of natural microbiota members to control foodborne pathogens could be useful for improving food safety during the processing and storage of refrigerated foods.
[ { "created": "Tue, 7 May 2019 10:39:10 GMT", "version": "v1" } ]
2019-05-08
[ [ "Quinto", "E. J.", "" ], [ "Marin", "J. M.", "" ], [ "Caro", "I.", "" ], [ "Mateo", "J.", "" ], [ "Schaffner", "D. W.", "" ] ]
Shiga toxin-producing Escherichia coli O157:H7 is a food-borne pathogen and the major cause of hemorrhagic colitis. Pseudomonas is the genus of the most frequent psychrotrophic spoilage microorganisms present in milk. Two-species bacterial systems with Escherichia coli O157:H7, non-pathogenic Escherichia coli, and Pseudomonas fluorescens in skimmed milk at 7, 13, 19, or 25 C were studied. Bacterial interactions were modelled using a Bayesian approach. No direct correlation between Pseudomonas fluorescens' growth rate and its effect on the maximum population densities of the Escherichia coli species was found. The results show the complexity of the interactions between two species in a food model. The use of natural microbiota members to control foodborne pathogens could be useful for improving food safety during the processing and storage of refrigerated foods.
2006.02936
Indrajit Ghosh
Indrajit Ghosh
Within host dynamics of SARS-CoV-2 in humans: Modeling immune responses and antiviral treatments
19 pages, 1 table, 7 figures
null
null
null
q-bio.PE q-bio.CB
http://creativecommons.org/licenses/by/4.0/
In December 2019, the newly discovered SARS-CoV-2 virus emerged in China and propagated worldwide as a pandemic. In the absence of preventive medicine or a ready-to-use vaccine, mathematical models can provide useful scientific insights about transmission patterns and targets for drug development. In this study, we propose a within-host mathematical model of SARS-CoV-2 infection that considers innate and adaptive immune responses. We analyze the equilibrium points of the proposed model and obtain an expression for the basic reproduction number. We then numerically show the existence of a transcritical bifurcation. The proposed model is calibrated to real viral load data from two COVID-19 patients. Using the estimated parameters, we perform a global sensitivity analysis with respect to the peak of the viral load. Finally, we study the efficacy of antiviral drugs and vaccination on the dynamics of SARS-CoV-2 infection. Our results suggest that blocking the production of the virus by infected cells decreases the viral load more than reducing the infection rate of healthy cells. Vaccination is also found to be useful, but during the vaccine development phase, blocking virus production from infected cells can be targeted for antiviral drug development.
[ { "created": "Wed, 3 Jun 2020 04:45:04 GMT", "version": "v1" }, { "created": "Sun, 28 Jun 2020 14:43:38 GMT", "version": "v2" } ]
2020-06-30
[ [ "Ghosh", "Indrajit", "" ] ]
In December 2019, the newly discovered SARS-CoV-2 virus emerged in China and propagated worldwide as a pandemic. In the absence of preventive medicine or a ready-to-use vaccine, mathematical models can provide useful scientific insights about transmission patterns and targets for drug development. In this study, we propose a within-host mathematical model of SARS-CoV-2 infection that considers innate and adaptive immune responses. We analyze the equilibrium points of the proposed model and obtain an expression for the basic reproduction number. We then numerically show the existence of a transcritical bifurcation. The proposed model is calibrated to real viral load data from two COVID-19 patients. Using the estimated parameters, we perform a global sensitivity analysis with respect to the peak of the viral load. Finally, we study the efficacy of antiviral drugs and vaccination on the dynamics of SARS-CoV-2 infection. Our results suggest that blocking the production of the virus by infected cells decreases the viral load more than reducing the infection rate of healthy cells. Vaccination is also found to be useful, but during the vaccine development phase, blocking virus production from infected cells can be targeted for antiviral drug development.
1710.09452
Yen Ting Lin
Yen Ting Lin, Nicolas E. Buchler
Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes
15 pages, 11 figures, 1 table
J. R. Soc. Interface 15: 20170804 (2018)
10.1098/rsif.2017.0804
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. One source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analyzing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying Chemical Master Equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.
[ { "created": "Wed, 25 Oct 2017 20:38:21 GMT", "version": "v1" } ]
2018-02-01
[ [ "Lin", "Yen Ting", "" ], [ "Buchler", "Nicolas E.", "" ] ]
Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. One source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analyzing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying Chemical Master Equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.
1608.06897
Les Hatton
Les Hatton and Gregory Warr
Full Computational Reproducibility in Biological Science: Methods, Software and a Case Study in Protein Biology
17 pages, 1 figure, extends cited PLOS ONE paper demonstrating how to reproduce it precisely
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Independent computational reproducibility of scientific results is rapidly becoming of pivotal importance in scientific progress as computation itself plays a more and more central role in so many branches of science. Historically, reproducibility has followed the familiar Popperian [38] model whereby theory cannot be verified by scientific testing, it can only be falsified. Ultimately, this implies that if an experiment cannot be reproduced independently to some satisfactory level of precision, its value is essentially unquantifiable; put brutally, it is impossible to determine its scientific value. The burgeoning presence of software in most scientific work adds a new and particularly opaque layer of complexity [29]. In spite of much recent interest in many scientific areas, emphasis remains more on procedures, strictures and discussion [12, 14, 16, 29, 30, 37, 41], reflecting the inexperience of most scientific journals when it comes to software, rather than the details of how computational reproducibility is actually achieved, for which there appear to be relatively few guiding examples [6, 10, 17]. After considering basic principles, here we show how full computational reproducibility can be achieved in practice at every stage using a case study of a multi-gigabyte protein study on the open SwissProt protein database, from data download all the way to figure-by-figure reproduction as an exemplar for general scientific computation.
[ { "created": "Wed, 24 Aug 2016 17:00:00 GMT", "version": "v1" } ]
2016-08-25
[ [ "Hatton", "Les", "" ], [ "Warr", "Gregory", "" ] ]
Independent computational reproducibility of scientific results is rapidly becoming of pivotal importance in scientific progress as computation itself plays a more and more central role in so many branches of science. Historically, reproducibility has followed the familiar Popperian [38] model whereby theory cannot be verified by scientific testing, it can only be falsified. Ultimately, this implies that if an experiment cannot be reproduced independently to some satisfactory level of precision, its value is essentially unquantifiable; put brutally, it is impossible to determine its scientific value. The burgeoning presence of software in most scientific work adds a new and particularly opaque layer of complexity [29]. In spite of much recent interest in many scientific areas, emphasis remains more on procedures, strictures and discussion [12, 14, 16, 29, 30, 37, 41], reflecting the inexperience of most scientific journals when it comes to software, rather than the details of how computational reproducibility is actually achieved, for which there appear to be relatively few guiding examples [6, 10, 17]. After considering basic principles, here we show how full computational reproducibility can be achieved in practice at every stage using a case study of a multi-gigabyte protein study on the open SwissProt protein database, from data download all the way to figure-by-figure reproduction as an exemplar for general scientific computation.
1312.5778
Ivani Lopes
Ivani de O. N. Lopes, Alexander Schliep, and Andr\'e P. L. F. de Carvalho
The discriminant power of RNA features for pre-miRNA recognition
Submitted to BMC Bioinformatics on October 25, 2013. The material to reproduce the main results from this paper can be downloaded from http://bioinformatics.rutgers.edu/Static/Software/discriminant.tar.gz
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational discovery of microRNAs (miRNA) is based on pre-determined sets of features from miRNA precursors (pre-miRNA). These feature sets used by current tools for pre-miRNA recognition differ in construction and dimension. Some feature sets are composed of sequence-structure patterns commonly found in pre-miRNAs, while others are a combination of more sophisticated RNA features. Current tools achieve similar predictive performance even though the feature sets used - and their computational cost - differ widely. In this work, we analyze the discriminant power of seven feature sets, which are used in six pre-miRNA prediction tools. The analysis is based on the classification performance achieved with these feature sets for the training algorithms used in these tools. We also evaluate feature discrimination through the F-score and feature importance in the induction of random forests. More diverse feature sets produce classifiers with significantly higher classification performance compared to feature sets composed only of sequence-structure patterns. However, small or non-significant differences were found among the estimated classification performances of classifiers induced using sets with diversification of features, despite the wide differences in their dimension. Based on these results, we applied a feature selection method to reduce the computational cost of computing the feature set, while maintaining discriminant power. We obtained a lower-dimensional feature set, which achieved a sensitivity of 90% and a specificity of 95%. Our feature set achieves a sensitivity and specificity within 0.1% of the maximal values obtained with any feature set while it is 34x faster to compute. Even compared to another feature set, which is the computationally least expensive feature set of those from the literature which perform within 0.1% of the maximal values, it is 34x faster to compute.
[ { "created": "Thu, 19 Dec 2013 23:32:27 GMT", "version": "v1" }, { "created": "Tue, 18 Mar 2014 00:56:57 GMT", "version": "v2" } ]
2014-03-19
[ [ "Lopes", "Ivani de O. N.", "" ], [ "Schliep", "Alexander", "" ], [ "de Carvalho", "André P. L. F.", "" ] ]
Computational discovery of microRNAs (miRNA) is based on pre-determined sets of features from miRNA precursors (pre-miRNA). These feature sets used by current tools for pre-miRNA recognition differ in construction and dimension. Some feature sets are composed of sequence-structure patterns commonly found in pre-miRNAs, while others are a combination of more sophisticated RNA features. Current tools achieve similar predictive performance even though the feature sets used - and their computational cost - differ widely. In this work, we analyze the discriminant power of seven feature sets, which are used in six pre-miRNA prediction tools. The analysis is based on the classification performance achieved with these feature sets for the training algorithms used in these tools. We also evaluate feature discrimination through the F-score and feature importance in the induction of random forests. More diverse feature sets produce classifiers with significantly higher classification performance compared to feature sets composed only of sequence-structure patterns. However, small or non-significant differences were found among the estimated classification performances of classifiers induced using sets with diversification of features, despite the wide differences in their dimension. Based on these results, we applied a feature selection method to reduce the computational cost of computing the feature set, while maintaining discriminant power. We obtained a lower-dimensional feature set, which achieved a sensitivity of 90% and a specificity of 95%. Our feature set achieves a sensitivity and specificity within 0.1% of the maximal values obtained with any feature set while it is 34x faster to compute. Even compared to another feature set, which is the computationally least expensive feature set of those from the literature which perform within 0.1% of the maximal values, it is 34x faster to compute.
0912.3941
Pan-Jun Kim
Pan-Jun Kim, Nathan D. Price
Macroscopic Kinetic Effect of Cell-to-Cell Variation in Biochemical Reactions
null
Phys. Rev. Lett. 104, 148103 (2010)
10.1016/j.bpj.2009.12.2332
null
q-bio.SC physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetically identical cells under the same environmental conditions can show strong variations in protein copy numbers due to inherently stochastic events in individual cells. We here develop a theoretical framework to address how variations in enzyme abundance affect the collective kinetics of metabolic reactions observed within a population of cells. Kinetic parameters measured at the cell population level are shown to deviate systematically from those of single cells, even within populations of homogeneous parameters. Because of these considerations, Michaelis-Menten kinetics can even be inappropriate to apply at the population level. Our findings elucidate a novel origin of the discrepancy between in vivo and in vitro kinetics, and offer potential utility for the analysis of single-cell metabolomic data.
[ { "created": "Sat, 19 Dec 2009 22:02:49 GMT", "version": "v1" }, { "created": "Thu, 8 Apr 2010 15:50:06 GMT", "version": "v2" } ]
2017-08-23
[ [ "Kim", "Pan-Jun", "" ], [ "Price", "Nathan D.", "" ] ]
Genetically identical cells under the same environmental conditions can show strong variations in protein copy numbers due to inherently stochastic events in individual cells. We here develop a theoretical framework to address how variations in enzyme abundance affect the collective kinetics of metabolic reactions observed within a population of cells. Kinetic parameters measured at the cell population level are shown to deviate systematically from those of single cells, even within populations of homogeneous parameters. Because of these considerations, Michaelis-Menten kinetics can even be inappropriate to apply at the population level. Our findings elucidate a novel origin of the discrepancy between in vivo and in vitro kinetics, and offer potential utility for the analysis of single-cell metabolomic data.
1112.2720
Ralph Brinks
Ralph Brinks
A new method for deriving incidence rates from prevalence data and its application to dementia in Germany
10 pages, 1 figure
null
null
null
q-bio.QM q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a new method for deriving incidence rates of a chronic disease from prevalence data. It is based on a new ordinary differential equation, which relates the change in the age-specific prevalence to the age-specific incidence and mortality rates. The method allows the extraction of longitudinal information from cross-sectional studies. Applicability of the method is tested on the prevalence of dementia in Germany. The derived age-specific incidence is in good agreement with published values.
[ { "created": "Mon, 12 Dec 2011 21:15:06 GMT", "version": "v1" } ]
2011-12-14
[ [ "Brinks", "Ralph", "" ] ]
This paper describes a new method for deriving incidence rates of a chronic disease from prevalence data. It is based on a new ordinary differential equation, which relates the change in the age-specific prevalence to the age-specific incidence and mortality rates. The method allows the extraction of longitudinal information from cross-sectional studies. Applicability of the method is tested on the prevalence of dementia in Germany. The derived age-specific incidence is in good agreement with published values.
1711.05995
David Shanafelt
R. T. Melstrom, K. R. Salau, D. W. Shanafelt
The optimal timing of reintroducing captive populations into the wild
Keywords: bioeconomics, captive breeding, endangered species, wildlife conservation
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine a conservation problem in which the recovery of an endangered species depends on a captive breeding and reintroduction program. The model is applied to the case of the black-footed ferret (Mustela nigripes), an endangered species in North America reliant on captive breeding for survival. The timing of reintroduction is an important concern in these programs as there is a tradeoff between the duration (and therefore the cost) of the captive breeding program and the period the population spends in recovery and in the wild. In this paper, we develop a stylized bioeconomic model to determine the optimal reintroduction time, in which the objective is to minimize the cost of reintroduction while providing a viably-sized population in the wild. Our control variable is the timing of reintroduction, which departs from a large body of work in bioeconomics that focuses on adjustable controls that directly affect the target population. Generally, we find it is optimal to reintroduce ferrets early in a reintroduction program, although this result is contingent on species interactions and provisioning services.
[ { "created": "Thu, 16 Nov 2017 09:18:16 GMT", "version": "v1" } ]
2017-11-17
[ [ "Melstrom", "R. T.", "" ], [ "Salau", "K. R.", "" ], [ "Shanafelt", "D. W.", "" ] ]
We examine a conservation problem in which the recovery of an endangered species depends on a captive breeding and reintroduction program. The model is applied to the case of the black-footed ferret (Mustela nigripes), an endangered species in North America reliant on captive breeding for survival. The timing of reintroduction is an important concern in these programs as there is a tradeoff between the duration (and therefore the cost) of the captive breeding program and the period the population spends in recovery and in the wild. In this paper, we develop a stylized bioeconomic model to determine the optimal reintroduction time, in which the objective is to minimize the cost of reintroduction while providing a viably-sized population in the wild. Our control variable is the timing of reintroduction, which departs from a large body of work in bioeconomics that focuses on adjustable controls that directly affect the target population. Generally, we find it is optimal to reintroduce ferrets early in a reintroduction program, although this result is contingent on species interactions and provisioning services.
q-bio/0404018
Udo Erdmann
Udo Erdmann, Werner Ebeling, Lutz Schimansky-Geier, Anke Ordemann, Frank Moss
Active Brownian Particle and Random Walk Theories of the Motions of Zooplankton: Application to Experiments with Swarms of Daphnia
38 pages, 21 figures
null
null
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
null
Active Brownian Particles are self-propelled particles that move in a dissipative medium subject to random forces, or noise. Additionally, they can be confined by an external field and/or they can interact with one another. The external field may actually be an attractive marker, for example a light field (as in the experiment) or an energy potential or a chemical gradient (as in the theory). The potential energy can also be the result of interparticle attractive and/or repulsive forces summed over all particles (a mean-field potential). Four qualitatively different motions of the particles are possible: at small particle density their motions are approximately independent of one another, subject only to the external field and the noise, which results in moving randomly through space or performing rotational motions about a central point. At increasing densities interactions play an important role and individuals form a swarm performing several types of self-organized collective motion, including a vortex. We apply this model to the description of zooplankton Daphnia swarms. In the case of the zooplankton Daphnia (and probably many other aquatic animals that form similar motions as well) this vortex is hydrodynamical but motivated by the self-propelled motion of the individuals. Similar vortex-type motions have been observed for other creatures ranging in size from bacteria to flocks of birds and schools of fish. However, our experiment with Daphnia is unique in that all four motions can be observed under controlled laboratory conditions with the same animal. Moreover, the theory, presented in both continuous differential equation and random walk forms, offers a quantitative, physically based explanation of the four motions.
[ { "created": "Sun, 18 Apr 2004 15:57:58 GMT", "version": "v1" }, { "created": "Tue, 20 Apr 2004 14:59:04 GMT", "version": "v2" } ]
2007-05-23
[ [ "Erdmann", "Udo", "" ], [ "Ebeling", "Werner", "" ], [ "Schimansky-Geier", "Lutz", "" ], [ "Ordemann", "Anke", "" ], [ "Moss", "Frank", "" ] ]
Active Brownian Particles are self-propelled particles that move in a dissipative medium subject to random forces, or noise. Additionally, they can be confined by an external field and/or they can interact with one another. The external field may actually be an attractive marker, for example a light field (as in the experiment) or an energy potential or a chemical gradient (as in the theory). The potential energy can also be the result of interparticle attractive and/or repulsive forces summed over all particles (a mean-field potential). Four qualitatively different motions of the particles are possible: at small particle density their motions are approximately independent of one another, subject only to the external field and the noise, which results in moving randomly through space or performing rotational motions about a central point. At increasing densities interactions play an important role and individuals form a swarm performing several types of self-organized collective motion, including a vortex. We apply this model to the description of zooplankton Daphnia swarms. In the case of the zooplankton Daphnia (and probably many other aquatic animals that form similar motions as well) this vortex is hydrodynamical but motivated by the self-propelled motion of the individuals. Similar vortex-type motions have been observed for other creatures ranging in size from bacteria to flocks of birds and schools of fish. However, our experiment with Daphnia is unique in that all four motions can be observed under controlled laboratory conditions with the same animal. Moreover, the theory, presented in both continuous differential equation and random walk forms, offers a quantitative, physically based explanation of the four motions.
0804.0682
Denis Goldobin
D.S. Goldobin, M. Mishto, K. Textoris-Taube, P.M. Kloetzel, and A. Zaikin
Reverse Engineering of Proteasomal Translocation Rates
4 pages, 3 figures
null
null
null
q-bio.OT q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of proteasomal protein translocation and introduce a new stochastic model of the proteasomal digestion (cleavage) of proteins. In this model we account for the protein translocation and the positioning of cleavage sites of a proteasome from first principles. We show by test examples and by processing experimental data that our model allows reconstruction of the translocation and cleavage rates from mass spectroscopy data on digestion patterns and can be used to investigate the properties of transport in different experimental set-ups. Detailed investigation with this model will enable theoretical quantitative prediction of the proteasomal activity.
[ { "created": "Fri, 4 Apr 2008 15:29:55 GMT", "version": "v1" }, { "created": "Mon, 28 Jul 2008 13:43:38 GMT", "version": "v2" } ]
2008-07-28
[ [ "Goldobin", "D. S.", "" ], [ "Mishto", "M.", "" ], [ "Textoris-Taube", "K.", "" ], [ "Kloetzel", "P. M.", "" ], [ "Zaikin", "A.", "" ] ]
We address the problem of proteasomal protein translocation and introduce a new stochastic model of the proteasomal digestion (cleavage) of proteins. In this model we account for the protein translocation and the positioning of cleavage sites of a proteasome from first principles. We show by test examples and by processing experimental data that our model allows reconstruction of the translocation and cleavage rates from mass spectroscopy data on digestion patterns and can be used to investigate the properties of transport in different experimental set-ups. Detailed investigation with this model will enable theoretical quantitative prediction of the proteasomal activity.
q-bio/0701017
Emmanuel Tannenbaum
Emmanuel Tannenbaum
Speculations on the emergence of self-awareness in big-brained organisms
null
null
null
null
q-bio.NC q-bio.PE
null
This paper argues that self-awareness is a learned behavior that emerges in organisms whose brains have a sufficiently integrated, complex ability for associative learning and memory. Continual sensory input of information related to the organism causes the organism's brain to learn the physical characteristics of the organism, in the sense that neural pathways are produced that are reinforced by, and therefore recognize, various features associated with the organism. This results in the formation of a set of associations that may be termed an organismal self-image, which provides a mechanistic basis for the emergence of various behaviors that are associated with self-awareness, such as self-recognition. In humans, self-awareness includes additional behaviors such as recognition of self-awareness, the concept of I, and various existential and religious questions. This paper shows how associative memory and learning, combined with an organismal self-image and, in the case of humans, language, leads to the emergence of these various behaviors. This paper also discusses various tautologies that invariably emerge when discussing self-awareness, which ultimately prevent an unambiguous resolution to the various existential issues that arise. We continue with various speculations on manipulating self-awareness, and discuss how concepts from set theory and logic may provide a highly useful set of tools in computational neuroscience for understanding the emergence of higher cognitive functions in complex organisms. The existence of other types of awareness, and the role of mirror neurons in the emergence of self-awareness, are also briefly discussed.
[ { "created": "Thu, 11 Jan 2007 19:23:03 GMT", "version": "v1" }, { "created": "Sat, 24 Mar 2007 16:30:24 GMT", "version": "v2" }, { "created": "Sat, 2 Jun 2007 10:54:59 GMT", "version": "v3" } ]
2007-06-13
[ [ "Tannenbaum", "Emmanuel", "" ] ]
This paper argues that self-awareness is a learned behavior that emerges in organisms whose brains have a sufficiently integrated, complex ability for associative learning and memory. Continual sensory input of information related to the organism causes the organism's brain to learn the physical characteristics of the organism, in the sense that neural pathways are produced that are reinforced by, and therefore recognize, various features associated with the organism. This results in the formation of a set of associations that may be termed an organismal self-image, which provides a mechanistic basis for the emergence of various behaviors that are associated with self-awareness, such as self-recognition. In humans, self-awareness includes additional behaviors such as recognition of self-awareness, the concept of I, and various existential and religious questions. This paper shows how associative memory and learning, combined with an organismal self-image and, in the case of humans, language, leads to the emergence of these various behaviors. This paper also discusses various tautologies that invariably emerge when discussing self-awareness, which ultimately prevent an unambiguous resolution to the various existential issues that arise. We continue with various speculations on manipulating self-awareness, and discuss how concepts from set theory and logic may provide a highly useful set of tools in computational neuroscience for understanding the emergence of higher cognitive functions in complex organisms. The existence of other types of awareness, and the role of mirror neurons in the emergence of self-awareness, are also briefly discussed.
1803.07850
Wendong Ge
Wendong Ge, Hee Yeun Kim, Sonali Desai, Leonid Perlovsky, Alexander Turchin
Contribution of Data Categories to Readmission Prediction Accuracy
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification of patients at high risk for readmission could help reduce morbidity and mortality as well as healthcare costs. Most of the existing studies on readmission prediction did not compare the contribution of data categories. In this study we analyzed the relative contribution of 90,101 variables across 398,884 admission records corresponding to 163,468 patients, including patient demographics, historical hospitalization information, discharge disposition, diagnoses, procedures, medications and laboratory test results. We established an interpretable readmission prediction model based on logistic regression in scikit-learn, and added the available variables to the model one by one in order to analyze the influence of individual data categories on readmission prediction accuracy. Diagnosis related groups (c-statistic increment of 0.0933) and discharge disposition (c-statistic increment of 0.0269) were the strongest contributors to model accuracy. Additionally, we also identified the top ten contributing variables in every data category.
[ { "created": "Wed, 21 Mar 2018 10:54:38 GMT", "version": "v1" }, { "created": "Thu, 22 Mar 2018 15:36:52 GMT", "version": "v2" } ]
2018-03-23
[ [ "Ge", "Wendong", "" ], [ "Kim", "Hee Yeun", "" ], [ "Desai", "Sonali", "" ], [ "Perlovsky", "Leonid", "" ], [ "Turchin", "Alexander", "" ] ]
Identification of patients at high risk for readmission could help reduce morbidity and mortality as well as healthcare costs. Most of the existing studies on readmission prediction did not compare the contribution of data categories. In this study we analyzed the relative contribution of 90,101 variables across 398,884 admission records corresponding to 163,468 patients, including patient demographics, historical hospitalization information, discharge disposition, diagnoses, procedures, medications and laboratory test results. We established an interpretable readmission prediction model based on Logistic Regression in scikit-learn, and added the available variables to the model one by one in order to analyze the influence of individual data categories on readmission prediction accuracy. Diagnosis related groups (c-statistic increment of 0.0933) and discharge disposition (c-statistic increment of 0.0269) were the strongest contributors to model accuracy. Additionally, we also identified the top ten contributing variables in every data category.
1701.06776
Naoki Osada Dr.
Yasuaki Takada, Ryutaro Miyagi, Aya Takahashi, Toshinori Endo, and Naoki Osada
A generalized linear model for decomposing cis-regulatory, parent-of-origin, and maternal effects on allele-specific gene expression
27 pages, 3 figures, 2 table
null
10.1534/g3.117.042895
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Joint quantification of genetic and epigenetic effects on gene expression is important for understanding the establishment of complex gene regulation systems in living organisms. In particular, genomic imprinting and maternal effects play important roles in the developmental process of mammals and flowering plants. However, the influence of these effects on gene expression is difficult to quantify because they act simultaneously with cis-regulatory mutations. Here we propose a simple method to decompose cis-regulatory (i.e., allelic genotype, AG), genomic imprinting (i.e., parent-of-origin, PO), and maternal (i.e., maternal genotype, MG) effects on allele-specific gene expression using RNA-seq data obtained from reciprocal crosses. We evaluated the efficiency of the method using a simulated dataset and applied the method to whole-body Drosophila and mouse trophoblast stem cell (TSC) and liver RNA-seq data. Consistent with previous studies, we found little evidence of PO and MG effects in adult Drosophila samples. In contrast, we identified dozens and hundreds of mouse genes with significant PO and MG effects, respectively. Interestingly, a similar number of genes with significant PO effect were detected in mouse TSCs and livers, whereas more genes with significant MG effect were observed in livers. Further application of this method will clarify how these three effects influence gene expression levels in different tissues and developmental stages, and provide novel insight into the evolution of gene expression regulation.
[ { "created": "Tue, 24 Jan 2017 09:15:04 GMT", "version": "v1" }, { "created": "Wed, 15 Feb 2017 11:07:19 GMT", "version": "v2" }, { "created": "Thu, 16 Feb 2017 02:01:17 GMT", "version": "v3" }, { "created": "Tue, 2 May 2017 06:23:06 GMT", "version": "v4" } ]
2017-05-24
[ [ "Takada", "Yasuaki", "" ], [ "Miyagi", "Ryutaro", "" ], [ "Takahashi", "Aya", "" ], [ "Endo", "Toshinori", "" ], [ "Osada", "Naoki", "" ] ]
Joint quantification of genetic and epigenetic effects on gene expression is important for understanding the establishment of complex gene regulation systems in living organisms. In particular, genomic imprinting and maternal effects play important roles in the developmental process of mammals and flowering plants. However, the influence of these effects on gene expression is difficult to quantify because they act simultaneously with cis-regulatory mutations. Here we propose a simple method to decompose cis-regulatory (i.e., allelic genotype, AG), genomic imprinting (i.e., parent-of-origin, PO), and maternal (i.e., maternal genotype, MG) effects on allele-specific gene expression using RNA-seq data obtained from reciprocal crosses. We evaluated the efficiency of the method using a simulated dataset and applied the method to whole-body Drosophila and mouse trophoblast stem cell (TSC) and liver RNA-seq data. Consistent with previous studies, we found little evidence of PO and MG effects in adult Drosophila samples. In contrast, we identified dozens and hundreds of mouse genes with significant PO and MG effects, respectively. Interestingly, a similar number of genes with significant PO effect were detected in mouse TSCs and livers, whereas more genes with significant MG effect were observed in livers. Further application of this method will clarify how these three effects influence gene expression levels in different tissues and developmental stages, and provide novel insight into the evolution of gene expression regulation.
1212.6820
Peter Waddell
Peter J. Waddell and Xi Tan
New g%AIC, g%AICc, g%BIC, and Power Divergence Fit Statistics Expose Mating between Modern Humans, Neanderthals and other Archaics
null
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The purpose of this article is to look at how information criteria, such as AIC and BIC, relate to the g%SD fit criterion derived in Waddell et al. (2007, 2010a). The g%SD criterion measures the fit of data to model based on a normalized weighted root mean square percentage deviation between the observed data and model estimates of the data, with g%SD = 0 being a perfectly fitting model. However, this criterion may not adjust comprehensively for the number of parameters in the model. Thus, its relationship to more traditional measures for maximizing useful information in a model, including AIC and BIC, is examined. This results in an extended set of fit criteria including g%AIC and g%BIC. Further, a broader range of asymptotically most powerful fit criteria of the power divergence family, which includes maximum likelihood (or minimum G^2) and minimum X^2 modeling as special cases, are used to replace the sum of squares fit criterion within the g%SD criterion. Results are illustrated with a set of genetic distances looking particularly at a range of Jewish populations, plus a genomic data set that looks at how Neanderthals and Denisovans are related to each other and modern humans. Evidence that Homo erectus may have left a significant fraction of its genome within the Denisovan is shown to persist with the new modeling criteria.
[ { "created": "Mon, 31 Dec 2012 04:55:48 GMT", "version": "v1" } ]
2013-01-01
[ [ "Waddell", "Peter J.", "" ], [ "Tan", "Xi", "" ] ]
The purpose of this article is to look at how information criteria, such as AIC and BIC, relate to the g%SD fit criterion derived in Waddell et al. (2007, 2010a). The g%SD criterion measures the fit of data to model based on a normalized weighted root mean square percentage deviation between the observed data and model estimates of the data, with g%SD = 0 being a perfectly fitting model. However, this criterion may not adjust comprehensively for the number of parameters in the model. Thus, its relationship to more traditional measures for maximizing useful information in a model, including AIC and BIC, is examined. This results in an extended set of fit criteria including g%AIC and g%BIC. Further, a broader range of asymptotically most powerful fit criteria of the power divergence family, which includes maximum likelihood (or minimum G^2) and minimum X^2 modeling as special cases, are used to replace the sum of squares fit criterion within the g%SD criterion. Results are illustrated with a set of genetic distances looking particularly at a range of Jewish populations, plus a genomic data set that looks at how Neanderthals and Denisovans are related to each other and modern humans. Evidence that Homo erectus may have left a significant fraction of its genome within the Denisovan is shown to persist with the new modeling criteria.
2312.14703
Lucas D. Valdez
L. D. Valdez
Explosive epidemic transitions induced by quarantine fatigue
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quarantine measures are one of the first lines of defense against the spread of infectious diseases. However, maintaining these measures over extended periods can be challenging due to a phenomenon known as quarantine fatigue. In this paper, we investigate the impact of quarantine fatigue on the spread of infectious diseases by using an epidemic model on random networks with cliques. In our model, susceptible individuals can be quarantined up to $n$ times, after which they stop complying with quarantine orders due to fatigue. Our results show that quarantine fatigue may induce a regime in which increasing the probability of detecting and isolating infected individuals (along with their close contacts) could subsequently increase the expected number of cases at the end of an outbreak. Moreover, we observe that quarantine fatigue can trigger an abrupt phase transition at the critical reproduction number $R_0=1$. Finally, we explore a scenario where a non-negligible number of individuals are infected at the beginning of an epidemic, and our results show that, depending on the value of $n$, an abrupt transition between a controlled epidemic and a large epidemic event can occur for $R_0<1$.
[ { "created": "Fri, 22 Dec 2023 14:04:47 GMT", "version": "v1" } ]
2023-12-25
[ [ "Valdez", "L. D.", "" ] ]
Quarantine measures are one of the first lines of defense against the spread of infectious diseases. However, maintaining these measures over extended periods can be challenging due to a phenomenon known as quarantine fatigue. In this paper, we investigate the impact of quarantine fatigue on the spread of infectious diseases by using an epidemic model on random networks with cliques. In our model, susceptible individuals can be quarantined up to $n$ times, after which they stop complying with quarantine orders due to fatigue. Our results show that quarantine fatigue may induce a regime in which increasing the probability of detecting and isolating infected individuals (along with their close contacts) could subsequently increase the expected number of cases at the end of an outbreak. Moreover, we observe that quarantine fatigue can trigger an abrupt phase transition at the critical reproduction number $R_0=1$. Finally, we explore a scenario where a non-negligible number of individuals are infected at the beginning of an epidemic, and our results show that, depending on the value of $n$, an abrupt transition between a controlled epidemic and a large epidemic event can occur for $R_0<1$.
0902.1025
R. C. Penner
R. C. Penner, Michael Knudsen, Carsten Wiuf, Joergen Ellegaard Andersen
Fatgraph Models of Proteins
32 pages, 12 figures
null
null
null
q-bio.BM math.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new model of proteins, which extends and enhances the traditional graphical representation by associating a combinatorial object called a fatgraph to any protein based upon its intrinsic geometry. Fatgraphs can easily be stored and manipulated as triples of permutations, and these methods are therefore amenable to fast computer implementation. Applications include the refinement of structural protein classifications and the prediction of geometric and other properties of proteins from their chemical structures.
[ { "created": "Fri, 6 Feb 2009 08:02:59 GMT", "version": "v1" }, { "created": "Sat, 30 May 2009 10:08:40 GMT", "version": "v2" } ]
2009-05-30
[ [ "Penner", "R. C.", "" ], [ "Knudsen", "Michael", "" ], [ "Wiuf", "Carsten", "" ], [ "Andersen", "Joergen Ellegaard", "" ] ]
We introduce a new model of proteins, which extends and enhances the traditional graphical representation by associating a combinatorial object called a fatgraph to any protein based upon its intrinsic geometry. Fatgraphs can easily be stored and manipulated as triples of permutations, and these methods are therefore amenable to fast computer implementation. Applications include the refinement of structural protein classifications and the prediction of geometric and other properties of proteins from their chemical structures.
2104.14954
Hamdan Awan
Hamdan Awan, Andreani Odysseos, Niovi Nicolaou and Sasitharan Balasubramaniam
Analysis of Molecular Communications on the Growth Structure of Glioblastoma Multiforme
7 pages, 10 Figures- Submitted for possible publication in IEEE Conference
null
null
null
q-bio.NC cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we consider the influence of intercellular communication on the development and progression of Glioblastoma Multiforme (GBM), a grade IV malignant glioma which is defined by an interplay between the Grow (i.e., self-renewal) and Go (i.e., invasiveness) potential of multiple malignant glioma stem cells. Firstly, we performed wet lab experiments with U87 malignant glioma cells to study the node-stem growth pattern of GBM. Next we develop a model accounting for the structural influence of multiple transmitter and receiver glioma stem cells resulting in the node-stem growth structure of the GBM tumour. By using information theory we study different properties associated with this communication model to show that the growth of GBM in a particular direction (node to stem) is related to an increase in mutual information. We further show that information flow between glioblastoma cells for different levels of invasiveness varies at different points between node and stem. These findings are expected to contribute significantly to the design of future therapeutic mechanisms for GBM.
[ { "created": "Fri, 30 Apr 2021 12:41:48 GMT", "version": "v1" } ]
2021-05-03
[ [ "Awan", "Hamdan", "" ], [ "Odysseos", "Andreani", "" ], [ "Nicolaou", "Niovi", "" ], [ "Balasubramaniam", "Sasitharan", "" ] ]
In this paper we consider the influence of intercellular communication on the development and progression of Glioblastoma Multiforme (GBM), a grade IV malignant glioma which is defined by an interplay between the Grow (i.e., self-renewal) and Go (i.e., invasiveness) potential of multiple malignant glioma stem cells. Firstly, we performed wet lab experiments with U87 malignant glioma cells to study the node-stem growth pattern of GBM. Next we develop a model accounting for the structural influence of multiple transmitter and receiver glioma stem cells resulting in the node-stem growth structure of the GBM tumour. By using information theory we study different properties associated with this communication model to show that the growth of GBM in a particular direction (node to stem) is related to an increase in mutual information. We further show that information flow between glioblastoma cells for different levels of invasiveness varies at different points between node and stem. These findings are expected to contribute significantly to the design of future therapeutic mechanisms for GBM.
2001.11641
Chetan Gadgil
Dimpal A Nyayanit and Chetan J Gadgil
Mathematical model for autoregulated miRNA biogenesis
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MicroRNAs are small non-coding nucleotide sequences that regulate target protein expression at post-transcriptional levels. Biogenesis of microRNA is a highly regulated multi-step pathway. Regulation of miRNA biogenesis can be caused directly by the components of the biogenesis pathway or indirectly by other regulators. In this study, we have built a detailed mathematical model of microRNA biogenesis to investigate the regulatory role of biogenesis pathway components. We extended a previous model to incorporate Microprocessor regulation of DGCR8 synthesis, exportin-mediated transport to the cytoplasm, and positive auto-regulation catalysed by mature miRNA translocation into the nucleus. Our simulation results lead to three hypotheses (i) Biogenesis is robust to Dicer protein levels at higher Exportin protein levels; (ii) Higher miRNA transcript formation may lead to lower RISC levels: an optimal level of both precursor miRNA and Dicer is required for optimal miRNA formation at lower levels of Exportin protein; and (iii) The positive auto-regulation by mature miRNA translocation into the nucleus can decrease the net functional cytoplasmic miRNA. Wherever possible, we compare these results to experimental observations not used in the model construction or calibration.
[ { "created": "Fri, 31 Jan 2020 03:09:39 GMT", "version": "v1" } ]
2020-02-03
[ [ "Nyayanit", "Dimpal A", "" ], [ "Gadgil", "Chetan J", "" ] ]
MicroRNAs are small non-coding nucleotide sequences that regulate target protein expression at post-transcriptional levels. Biogenesis of microRNA is a highly regulated multi-step pathway. Regulation of miRNA biogenesis can be caused directly by the components of the biogenesis pathway or indirectly by other regulators. In this study, we have built a detailed mathematical model of microRNA biogenesis to investigate the regulatory role of biogenesis pathway components. We extended a previous model to incorporate Microprocessor regulation of DGCR8 synthesis, exportin-mediated transport to the cytoplasm, and positive auto-regulation catalysed by mature miRNA translocation into the nucleus. Our simulation results lead to three hypotheses (i) Biogenesis is robust to Dicer protein levels at higher Exportin protein levels; (ii) Higher miRNA transcript formation may lead to lower RISC levels: an optimal level of both precursor miRNA and Dicer is required for optimal miRNA formation at lower levels of Exportin protein; and (iii) The positive auto-regulation by mature miRNA translocation into the nucleus can decrease the net functional cytoplasmic miRNA. Wherever possible, we compare these results to experimental observations not used in the model construction or calibration.
2110.09521
Zhongqi Tian
Zhong-qi K. Tian, Kai Chen, Songting Li, David W. McLaughlin, and Douglas Zhou
Quantitative relations among causality measures with applications to nonlinear pulse-output network reconstruction
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The causal connectivity of a network is often inferred to understand the network function. It is arguably acknowledged that the inferred causal connectivity relies on the causality measure one applies, and it may differ from the network's underlying structural connectivity. However, the interpretation of causal connectivity remains to be fully clarified, in particular, how causal connectivity depends on causality measures and how causal connectivity relates to structural connectivity. Here, we focus on nonlinear networks with pulse signals as measured output, $e.g.$, neural networks with spike output, and address the above issues based on four intensively utilized causality measures, $i.e.$, time-delayed correlation, time-delayed mutual information, Granger causality, and transfer entropy. We theoretically show how these causality measures are related to one another when applied to pulse signals. Taking the simulated Hodgkin-Huxley neural network and the real mouse brain network as two illustrative examples, we further verify the quantitative relations among the four causality measures and demonstrate that the causal connectivity inferred by any of the four coincides well with the underlying network structural connectivity, therefore establishing a direct link between the causal and structural connectivity. We stress that the structural connectivity of networks can be reconstructed pairwisely without conditioning on the global information of all other nodes in a network, thus circumventing the curse of dimensionality. Our framework provides a practical and effective approach for pulse-output network reconstruction.
[ { "created": "Sun, 17 Oct 2021 16:30:29 GMT", "version": "v1" } ]
2021-10-20
[ [ "Tian", "Zhong-qi K.", "" ], [ "Chen", "Kai", "" ], [ "Li", "Songting", "" ], [ "McLaughlin", "David W.", "" ], [ "Zhou", "Douglas", "" ] ]
The causal connectivity of a network is often inferred to understand the network function. It is arguably acknowledged that the inferred causal connectivity relies on the causality measure one applies, and it may differ from the network's underlying structural connectivity. However, the interpretation of causal connectivity remains to be fully clarified, in particular, how causal connectivity depends on causality measures and how causal connectivity relates to structural connectivity. Here, we focus on nonlinear networks with pulse signals as measured output, $e.g.$, neural networks with spike output, and address the above issues based on four intensively utilized causality measures, $i.e.$, time-delayed correlation, time-delayed mutual information, Granger causality, and transfer entropy. We theoretically show how these causality measures are related to one another when applied to pulse signals. Taking the simulated Hodgkin-Huxley neural network and the real mouse brain network as two illustrative examples, we further verify the quantitative relations among the four causality measures and demonstrate that the causal connectivity inferred by any of the four coincides well with the underlying network structural connectivity, therefore establishing a direct link between the causal and structural connectivity. We stress that the structural connectivity of networks can be reconstructed pairwisely without conditioning on the global information of all other nodes in a network, thus circumventing the curse of dimensionality. Our framework provides a practical and effective approach for pulse-output network reconstruction.
1710.08916
William Softky Ph.D.
William Softky, Criscillia Benford
Sensory Metrics of Neuromechanical Trust
59 pages, 14 figures
Softky, W. & Benford C. (2017). "Sensory Metrics of Neuromechanical Trust." Neural Computation 29, 2293-2351
10.1162/NECO_a_00988
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today digital sources supply an unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet Addiction Disorder and Internet Gaming Disorder. This paper offers a mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction. We quantify human informational needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Using these metrics, we define the trust humans experience as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. This approach allows us to model addiction in general as the result of homeostatic regulation gone awry in novel environments and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and that individuals can improve sensory and sociosensory resolution through deliberate sensory reintegration practices. We conclude that we humans are the victims of our own success, our hands so skilled they fill the world with captivating things, our eyes so innocent they follow eagerly.
[ { "created": "Thu, 19 Oct 2017 17:13:42 GMT", "version": "v1" } ]
2017-10-26
[ [ "Softky", "William", "" ], [ "Benford", "Criscillia", "" ] ]
Today digital sources supply an unprecedented component of human sensorimotor data, the consumption of which is correlated with poorly understood maladies such as Internet Addiction Disorder and Internet Gaming Disorder. This paper offers a mathematical understanding of human sensorimotor processing as multiscale, continuous-time vibratory interaction. We quantify human informational needs using the signal processing metrics of entropy, noise, dimensionality, continuity, latency, and bandwidth. Using these metrics, we define the trust humans experience as a primitive statistical algorithm processing finely grained sensorimotor data from neuromechanical interaction. This definition of neuromechanical trust implies that artificial sensorimotor inputs and interactions that attract low-level attention through frequent discontinuities and enhanced coherence will decalibrate a brain's representation of its world over the long term by violating the implicit statistical contract for which self-calibration evolved. This approach allows us to model addiction in general as the result of homeostatic regulation gone awry in novel environments and digital dependency as a sub-case in which the decalibration caused by digital sensorimotor data spurs yet more consumption of them. We predict that institutions can use these sensorimotor metrics to quantify media richness to improve employee well-being; that dyads and family-size groups will bond and heal best through low-latency, high-resolution multisensory interaction such as shared meals and reciprocated touch; and that individuals can improve sensory and sociosensory resolution through deliberate sensory reintegration practices. We conclude that we humans are the victims of our own success, our hands so skilled they fill the world with captivating things, our eyes so innocent they follow eagerly.
1406.7777
Andr\'e Henrion
Cristian G. Arsene, J\"urgen Kratzsch, Andr\'e Henrion
Mass Spectrometry-An Alternative in Growth Hormone Measurement
10 pages, 8 figures; submitted for publication to Bioanalysis
Bioanalysis 6 (2014) 2391-2402
10.4155/BIO.14.196
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Growth hormone (GH) constitutes a set of closely related protein isoforms. In clinical practice, the disagreement of test results between commercially available ligand-binding assays is still an ongoing issue, and incomplete knowledge about the particular function of the different forms leaves an uncertainty of what should be the appropriate measurand. Mass spectrometry is promising to be a way forward. Not only is it capable of providing SI-traceable reference values for the calibration of current GH-tests, but it also offers an independent approach to highly reliable mass-selective quantification of individual GH-isoforms. This capability may add to reliability in doping control too. The article points out why and how.
[ { "created": "Mon, 30 Jun 2014 15:40:37 GMT", "version": "v1" } ]
2014-11-12
[ [ "Arsene", "Cristian G.", "" ], [ "Kratzsch", "Jürgen", "" ], [ "Henrion", "André", "" ] ]
Growth hormone (GH) constitutes a set of closely related protein isoforms. In clinical practice, the disagreement of test results between commercially available ligand-binding assays is still an ongoing issue, and incomplete knowledge about the particular function of the different forms leaves an uncertainty of what should be the appropriate measurand. Mass spectrometry is promising to be a way forward. Not only is it capable of providing SI-traceable reference values for the calibration of current GH-tests, but it also offers an independent approach to highly reliable mass-selective quantification of individual GH-isoforms. This capability may add to reliability in doping control too. The article points out why and how.
2007.02062
Manuel Beiran
Manuel Beiran, Alexis Dubreuil, Adrian Valente, Francesca Mastrogiuseppe, Srdjan Ostojic
Shaping dynamics with multiple populations in low-rank recurrent networks
29 pages, 7 figures
null
10.1162/neco_a_01381
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An emerging paradigm proposes that neural computations can be understood at the level of dynamical systems that govern low-dimensional trajectories of collective neural activity. How the connectivity structure of a network determines the emergent dynamical system however remains to be clarified. Here we consider a novel class of models, Gaussian-mixture low-rank recurrent networks, in which the rank of the connectivity matrix and the number of statistically-defined populations are independent hyper-parameters. We show that the resulting collective dynamics form a dynamical system, where the rank sets the dimensionality and the population structure shapes the dynamics. In particular, the collective dynamics can be described in terms of a simplified effective circuit of interacting latent variables. While having a single, global population strongly restricts the possible dynamics, we demonstrate that if the number of populations is large enough, a rank-R network can approximate any R-dimensional dynamical system.
[ { "created": "Sat, 4 Jul 2020 10:13:04 GMT", "version": "v1" }, { "created": "Tue, 17 Nov 2020 08:40:09 GMT", "version": "v2" } ]
2021-05-28
[ [ "Beiran", "Manuel", "" ], [ "Dubreuil", "Alexis", "" ], [ "Valente", "Adrian", "" ], [ "Mastrogiuseppe", "Francesca", "" ], [ "Ostojic", "Srdjan", "" ] ]
An emerging paradigm proposes that neural computations can be understood at the level of dynamical systems that govern low-dimensional trajectories of collective neural activity. How the connectivity structure of a network determines the emergent dynamical system however remains to be clarified. Here we consider a novel class of models, Gaussian-mixture low-rank recurrent networks, in which the rank of the connectivity matrix and the number of statistically-defined populations are independent hyper-parameters. We show that the resulting collective dynamics form a dynamical system, where the rank sets the dimensionality and the population structure shapes the dynamics. In particular, the collective dynamics can be described in terms of a simplified effective circuit of interacting latent variables. While having a single, global population strongly restricts the possible dynamics, we demonstrate that if the number of populations is large enough, a rank-R network can approximate any R-dimensional dynamical system.
1706.07220
Rembrandt Bakker
Paul Tiesinga, Rembrandt Bakker, Sean Hill, and Jan G. Bjaalie
Feeding the human brain model
Figures are reprints from other publications, we are verifying whether we can include them in this submission
null
10.1016/j.conb.2015.02.003
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of the Human Brain Project is to develop during the next decade an infrastructure necessary for running a simulation of the entire human brain constrained by current experimental data. One of the key issues is therefore to integrate and make accessible the experimental data necessary to constrain and fully specify this model. The required data cover many different spatial scales, ranging from the molecular scale to the whole brain, and these data are obtained using a variety of techniques whose measurements may not be directly comparable. Furthermore, these data are incomplete, and will remain so at least for the coming decade. Here we review new neuroinformatics techniques that need to be developed and applied to address these issues.
[ { "created": "Thu, 22 Jun 2017 09:24:41 GMT", "version": "v1" } ]
2017-06-23
[ [ "Tiesinga", "Paul", "" ], [ "Bakker", "Rembrandt", "" ], [ "Hill", "Sean", "" ], [ "Bjaalie", "Jan G.", "" ] ]
The goal of the Human Brain Project is to develop during the next decade an infrastructure necessary for running a simulation of the entire human brain constrained by current experimental data. One of the key issues is therefore to integrate and make accessible the experimental data necessary to constrain and fully specify this model. The required data cover many different spatial scales, ranging from the molecular scale to the whole brain, and these data are obtained using a variety of techniques whose measurements may not be directly comparable. Furthermore, these data are incomplete, and will remain so at least for the coming decade. Here we review new neuroinformatics techniques that need to be developed and applied to address these issues.
2307.10113
Navneet Roshan
Navneet Roshan and Rahul Pandit
Multiscale studies of delayed afterdepolarizations II: Calcium-overload-induced ventricular arrhythmias
This manuscript contains 17 pages, 14 figures and 79 citations
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Disturbances in calcium homeostasis in a cardiac myocyte can lead to calcium-overload conditions and abnormal calcium releases, which occur primarily in the following two phases of the action potential (AP): (a) triggered or late calcium release (LCR) during the plateau phase; (b) spontaneous calcium release (SCR) during the diastolic interval (DI). Experimental and numerical studies of LCRs and SCRs have suggested that these abnormal calcium releases can lead to triggered excitations and, thence, to life-threatening ventricular arrhythmias. We explore this suggestion in detail by building on our work in the previous accompanying Paper I, where we have studied abnormal calcium releases and delayed afterdepolarizations (DADs) in two state-of-the-art mathematical models for human ventricular myocytes. Here, we carry out a detailed \textit{in-silico} study of one of these models, namely, the ten Tusscher-Panfilov TP06~\cite{ten2006alternans} model. We increase the L-type Ca-channel current $I_{\rm{CaL}}$ to trigger LCRs, and calcium leak through the ryanodine receptor (RyR) to trigger SCRs, in the myocyte. We then perform multiscale simulations of coupled TP06-model myocytes in tissue in one-, two-, and three-dimensional (1D, 2D, and 3D) domains, with clumps of DAD-capable myocytes, to demonstrate how these clumps precipitate premature ventricular complexes (PVCs) that lead, in turn, to fibrillatory excitations like spiral and scroll waves. We examine possible pharmacological implications of our study for the class of ventricular arrhythmias that result from Ca\textsuperscript{2+} overload.
[ { "created": "Fri, 14 Jul 2023 18:10:09 GMT", "version": "v1" } ]
2023-07-20
[ [ "Roshan", "Navneet", "" ], [ "Pandit", "Rahul", "" ] ]
Disturbances in calcium homeostasis in a cardiac myocyte can lead to calcium-overload conditions and abnormal calcium releases, which occur primarily in the following two phases of the action potential (AP): (a) triggered or late calcium release (LCR) during the plateau phase; (b) spontaneous calcium release (SCR) during the diastolic interval (DI). Experimental and numerical studies of LCRs and SCRs have suggested that these abnormal calcium releases can lead to triggered excitations and, thence, to life-threatening ventricular arrhythmias. We explore this suggestion in detail by building on our work in the previous accompanying Paper I, where we have studied abnormal calcium releases and delayed afterdepolarizations (DADs) in two state-of-the-art mathematical models for human ventricular myocytes. Here, we carry out a detailed \textit{in-silico} study of one of these models, namely, the ten Tusscher-Panfilov TP06~\cite{ten2006alternans} model. We increase the L-type Ca-channel current $I_{\rm{CaL}}$ to trigger LCRs, and calcium leak through the ryanodine receptor (RyR) to trigger SCRs, in the myocyte. We then perform multiscale simulations of coupled TP06-model myocytes in tissue in one-, two-, and three-dimensional (1D, 2D, and 3D) domains, with clumps of DAD-capable myocytes, to demonstrate how these clumps precipitate premature ventricular complexes (PVCs) that lead, in turn, to fibrillatory excitations like spiral and scroll waves. We examine possible pharmacological implications of our study for the class of ventricular arrhythmias that result from Ca\textsuperscript{2+} overload.
1703.08583
Harikrishnan Jayamohan
Harikrishnan Jayamohan, Valentin Romanov, Huizhong Li, Jiyoung Son, Raheel Samuel, John Nelson, Bruce Gale
Advances in Microfluidics and Lab-on-a-Chip Technologies
null
Molecular Diagnostics (3rd Ed.), Academic Press, 2017, pp 197-217, ISBN 9780128029718
10.1016/B978-0-12-802971-8.00011-0
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advances in molecular biology are enabling rapid and efficient analyses for effective intervention in domains such as biology research, infectious disease management, food safety, and biodefense. The emergence of microfluidics and nanotechnologies has enabled both new capabilities and instrument sizes practical for point-of-care. It has also introduced new functionality, enhanced sensitivity, and reduced the time and cost involved in conventional molecular diagnostic techniques. This chapter reviews the application of microfluidics for molecular diagnostics methods such as nucleic acid amplification, next-generation sequencing, high resolution melting analysis, cytogenetics, protein detection and analysis, and cell sorting. We also review microfluidic sample preparation platforms applied to molecular diagnostics and targeted to sample-in, answer-out capabilities.
[ { "created": "Fri, 24 Mar 2017 19:57:41 GMT", "version": "v1" } ]
2017-03-28
[ [ "Jayamohan", "Harikrishnan", "" ], [ "Romanov", "Valentin", "" ], [ "Li", "Huizhong", "" ], [ "Son", "Jiyoung", "" ], [ "Samuel", "Raheel", "" ], [ "Nelson", "John", "" ], [ "Gale", "Bruce", "" ] ]
Advances in molecular biology are enabling rapid and efficient analyses for effective intervention in domains such as biology research, infectious disease management, food safety, and biodefense. The emergence of microfluidics and nanotechnologies has enabled both new capabilities and instrument sizes practical for point-of-care. It has also introduced new functionality, enhanced sensitivity, and reduced the time and cost involved in conventional molecular diagnostic techniques. This chapter reviews the application of microfluidics for molecular diagnostics methods such as nucleic acid amplification, next-generation sequencing, high resolution melting analysis, cytogenetics, protein detection and analysis, and cell sorting. We also review microfluidic sample preparation platforms applied to molecular diagnostics and targeted to sample-in, answer-out capabilities.
0711.4344
Anne Taormina
N. E. Grayson (York U.), A. Taormina (Durham U.) and R. Twarock (York U.)
DNA duplex cage structures with icosahedral symmetry
13 pages, LaTex, 9 figures; focus on vertex junctions that are experimentally realizable, some figures upgraded, some removed
Theor. Comp. Sci. 410:15 (2009) 1440-1447
10.1016/j.tcs.2008.12.005
null
q-bio.BM
null
A construction method for duplex cage structures with icosahedral symmetry made out of single-stranded DNA molecules is presented and applied to an icosidodecahedral cage. It is shown via a mixture of analytic and computer techniques that there exist realisations of this graph in terms of two circular DNA molecules. These blueprints for the organisation of a cage structure with a noncrystallographic symmetry may assist in the design of containers made from DNA for applications in nanotechnology.
[ { "created": "Tue, 27 Nov 2007 20:34:17 GMT", "version": "v1" }, { "created": "Thu, 15 May 2008 15:39:14 GMT", "version": "v2" } ]
2013-04-09
[ [ "Grayson", "N. E.", "", "York U." ], [ "Taormina", "A.", "", "Durham U." ], [ "Twarock", "R.", "", "York\n U." ] ]
A construction method for duplex cage structures with icosahedral symmetry made out of single-stranded DNA molecules is presented and applied to an icosidodecahedral cage. It is shown via a mixture of analytic and computer techniques that there exist realisations of this graph in terms of two circular DNA molecules. These blueprints for the organisation of a cage structure with a noncrystallographic symmetry may assist in the design of containers made from DNA for applications in nanotechnology.
1810.09161
Mariya Ptashnyk
Henry R. Allen and Mariya Ptashnyk
Mathematical Modelling of Auxin Transport in Plant Tissues: Flux meets Signalling and Growth
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The plant hormone auxin has critical roles in plant growth, dependent on its heterogeneous distribution in plant tissues. Exactly how auxin transport and developmental processes such as growth coordinate to achieve the precise patterns of auxin observed experimentally is not well understood. Here we use mathematical modelling to examine the interplay between auxin dynamics and growth and their contribution to formation of patterns in auxin distribution in plant tissues. Mathematical models describing the auxin-related signalling pathway, PIN and AUX1 dynamics, auxin transport, and cell growth in plant tissues are derived. A key assumption of our models is the regulation of PIN proteins by the auxin-responsive ARF-Aux/IAA signalling pathway, with upregulation of PIN biosynthesis by ARFs. Models are analysed and solved numerically to examine the long-time behaviour and auxin distribution. Changes in auxin-related signalling processes are shown to be able to trigger transition between passage and spot type patterns in auxin distribution. The model was also shown to be able to generate isolated cells with oscillatory dynamics in levels of components of the auxin signalling pathway, which could explain oscillations in levels of ARF targets that have been observed experimentally. Cell growth was shown to have influence on PIN polarisation and determination of auxin distribution patterns. Numerical simulation results indicate that auxin-related signalling processes can explain the different patterns in auxin distributions observed in plant tissues, whereas the interplay between auxin transport and growth can explain the `reverse-fountain' pattern in auxin distribution observed at plant root tips.
[ { "created": "Mon, 22 Oct 2018 10:05:55 GMT", "version": "v1" }, { "created": "Wed, 13 Nov 2019 10:54:07 GMT", "version": "v2" } ]
2019-11-14
[ [ "Allen", "Henry R.", "" ], [ "Ptashnyk", "Mariya", "" ] ]
The plant hormone auxin has critical roles in plant growth, dependent on its heterogeneous distribution in plant tissues. Exactly how auxin transport and developmental processes such as growth coordinate to achieve the precise patterns of auxin observed experimentally is not well understood. Here we use mathematical modelling to examine the interplay between auxin dynamics and growth and their contribution to formation of patterns in auxin distribution in plant tissues. Mathematical models describing the auxin-related signalling pathway, PIN and AUX1 dynamics, auxin transport, and cell growth in plant tissues are derived. A key assumption of our models is the regulation of PIN proteins by the auxin-responsive ARF-Aux/IAA signalling pathway, with upregulation of PIN biosynthesis by ARFs. Models are analysed and solved numerically to examine the long-time behaviour and auxin distribution. Changes in auxin-related signalling processes are shown to be able to trigger transition between passage and spot type patterns in auxin distribution. The model was also shown to be able to generate isolated cells with oscillatory dynamics in levels of components of the auxin signalling pathway, which could explain oscillations in levels of ARF targets that have been observed experimentally. Cell growth was shown to have influence on PIN polarisation and determination of auxin distribution patterns. Numerical simulation results indicate that auxin-related signalling processes can explain the different patterns in auxin distributions observed in plant tissues, whereas the interplay between auxin transport and growth can explain the `reverse-fountain' pattern in auxin distribution observed at plant root tips.
1209.3170
Philip Gerlee Dr
P. Gerlee
The model muddle: in search of tumour growth laws
null
null
10.1158/0008-5472.CAN-12-4355
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we shall trace the historical development of tumour growth laws, which in a quantitative fashion describe the increase in tumour mass/volume over time. These models are usually formulated in terms of differential equations that relate the growth rate of the tumour to its current state, and range from the simple one-parameter exponential growth model, to more advanced models that contain a large number of parameters. Understanding the assumptions and consequences of such models is important, since they often underpin more complex models of tumour growth. The conclusion of this brief survey is that although much improvement has occurred over the last century, more effort and new models are required if we are to understand the intricacies of tumour growth.
[ { "created": "Fri, 14 Sep 2012 12:48:55 GMT", "version": "v1" }, { "created": "Fri, 8 Feb 2013 09:25:15 GMT", "version": "v2" } ]
2013-02-11
[ [ "Gerlee", "P.", "" ] ]
In this article we shall trace the historical development of tumour growth laws, which in a quantitative fashion describe the increase in tumour mass/volume over time. These models are usually formulated in terms of differential equations that relate the growth rate of the tumour to its current state, and range from the simple one-parameter exponential growth model, to more advanced models that contain a large number of parameters. Understanding the assumptions and consequences of such models is important, since they often underpin more complex models of tumour growth. The conclusion of this brief survey is that although much improvement has occurred over the last century, more effort and new models are required if we are to understand the intricacies of tumour growth.
q-bio/0409004
Pablo Balenzuela
Pablo Balenzuela and Jordi Garcia-Ojalvo
A neural mechanism for binaural pitch perception via ghost stochastic resonance
7 pages, 5 figures
null
10.1063/1.1871612
null
q-bio.NC physics.bio-ph
null
We present a physiologically plausible binaural mechanism for the perception of the pitch of complex sounds via ghost stochastic resonance. In this scheme, two neurons are each driven by noise and a different periodic signal (with frequencies f1=kf0 and f2=(k+1)f0, where k>1), and their outputs (plus noise) are applied synaptically to a third neuron. Our numerical results, using the Morris-Lecar neuron model with chemical synapses explicitly considered, show that intermediate noise levels enhance the response of the third neuron at frequencies close to f0, as in the cases previously described of ghost resonance. For the case of inharmonic combinations of inputs (both frequencies shifted by the same amount Df), noise is also seen to enhance the response of the third neuron at a frequency fr which also shifts linearly with Df. In addition, we show that similar resonances can be observed as a function of the synaptic time constant. The suggested ghost-resonance-based stochastic mechanism can thus arise either at the peripheral level or at a higher level of neural processing in the perception of pitch.
[ { "created": "Wed, 1 Sep 2004 15:41:09 GMT", "version": "v1" }, { "created": "Thu, 2 Sep 2004 15:43:57 GMT", "version": "v2" } ]
2009-11-10
[ [ "Balenzuela", "Pablo", "" ], [ "Garcia-Ojalvo", "Jordi", "" ] ]
We present a physiologically plausible binaural mechanism for the perception of the pitch of complex sounds via ghost stochastic resonance. In this scheme, two neurons are each driven by noise and a different periodic signal (with frequencies f1=kf0 and f2=(k+1)f0, where k>1), and their outputs (plus noise) are applied synaptically to a third neuron. Our numerical results, using the Morris-Lecar neuron model with chemical synapses explicitly considered, show that intermediate noise levels enhance the response of the third neuron at frequencies close to f0, as in the cases previously described of ghost resonance. For the case of inharmonic combinations of inputs (both frequencies shifted by the same amount Df), noise is also seen to enhance the response of the third neuron at a frequency fr which also shifts linearly with Df. In addition, we show that similar resonances can be observed as a function of the synaptic time constant. The suggested ghost-resonance-based stochastic mechanism can thus arise either at the peripheral level or at a higher level of neural processing in the perception of pitch.
2210.05712
Kristen Windoloski
Kristen A. Windoloski, Elisabeth O. Bansgaard, Atanaska Dobreva, Johnny T. Ottesen, and Mette S. Olufsen
A unified model for the human response to lipopolysaccharide-induced inflammation
40 pages, 12 figures
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
This study develops a unified model predicting the whole-body response to endotoxin. We simulate dynamics using differential equations examining the response to a lipopolysaccharide (LPS) injection. The model tracks pro- and anti-inflammatory cytokines (TNF-$\alpha$, IL-6, IL-10), concentrations of corticotropin-releasing hormone (CRH), adrenocorticotropic hormone (ACTH), and cortisol in the hypothalamic-pituitary-adrenal (HPA) axis. Daily hormonal variations are integrated into the model by including circadian oscillations when tracking CRH. Additionally, the model tracks heart rate, blood pressure, body temperature, and pain perception. Studied quantities function on timescales ranging from minutes to days. To understand how endotoxin impacts the body over this vast span of timescales, we examine the response to variations in LPS administration methods (single dose, repeated dose, and continuous dose) as well as the timing of the administration and the amount of endotoxin released into the system. We calibrate the model to literature data for a 2 ng/kg LPS bolus injection. Results show that LPS administration during early morning or late evening generates a more pronounced hormonal response. Most of the LPS effects are eliminated from the body 24 hours after administration, the main impact of inflammation remains in the system for 48 hours, and repeated dose simulations show that residual effects remain more than 10 days after the initial injection. We also show that if the LPS administration method or total dosage is increased, the system response is amplified, posing a greater risk of hypotension and pyrexia.
[ { "created": "Fri, 7 Oct 2022 18:18:36 GMT", "version": "v1" }, { "created": "Wed, 9 Nov 2022 19:04:08 GMT", "version": "v2" }, { "created": "Fri, 13 Jan 2023 00:20:54 GMT", "version": "v3" } ]
2023-01-16
[ [ "Windoloski", "Kristen A.", "" ], [ "Bansgaard", "Elisabeth O.", "" ], [ "Dobreva", "Atanaska", "" ], [ "Ottesen", "Johnny T.", "" ], [ "Olufsen", "Mette S.", "" ] ]
This study develops a unified model predicting the whole-body response to endotoxin. We simulate dynamics using differential equations examining the response to a lipopolysaccharide (LPS) injection. The model tracks pro- and anti-inflammatory cytokines (TNF-$\alpha$, IL-6, IL-10), concentrations of corticotropin-releasing hormone (CRH), adrenocorticotropic hormone (ACTH), and cortisol in the hypothalamic-pituitary-adrenal (HPA) axis. Daily hormonal variations are integrated into the model by including circadian oscillations when tracking CRH. Additionally, the model tracks heart rate, blood pressure, body temperature, and pain perception. Studied quantities function on timescales ranging from minutes to days. To understand how endotoxin impacts the body over this vast span of timescales, we examine the response to variations in LPS administration methods (single dose, repeated dose, and continuous dose) as well as the timing of the administration and the amount of endotoxin released into the system. We calibrate the model to literature data for a 2 ng/kg LPS bolus injection. Results show that LPS administration during early morning or late evening generates a more pronounced hormonal response. Most of the LPS effects are eliminated from the body 24 hours after administration, the main impact of inflammation remains in the system for 48 hours, and repeated dose simulations show that residual effects remain more than 10 days after the initial injection. We also show that if the LPS administration method or total dosage is increased, the system response is amplified, posing a greater risk of hypotension and pyrexia.
1301.3422
Albert Erives
Albert Erives and Jan Fassler
Metabolic and Chaperone Gene Loss Marks the Origin of Animals: Evidence for Hsp104 and Hsp78 Sharing Mitochondrial Clients
This is a reformatted version from the recent official publication in PLoS ONE (2015). This version differs substantially from first three arXiV versions. This version uses a fixed-width font for DNA sequences as was done in the earlier arXiv versions but which is missing in the official PLoS ONE publication. The title has also been shortened slightly from the official publication
PLoS ONE 10(2): e0117192 (2015)
10.1371/journal.pone.0117192
null
q-bio.GN q-bio.BM q-bio.PE q-bio.TO
http://creativecommons.org/licenses/by/3.0/
The evolution of animals involved acquisition of an emergent gene repertoire for gastrulation. Whether loss of genes also co-evolved with this developmental reprogramming has not yet been addressed. Here, we identify twenty-four genetic functions that are retained in fungi and choanoflagellates but undetectable in animals. These lost genes encode: (i) sixteen distinct biosynthetic functions; (ii) the two ancestral eukaryotic ClpB disaggregases, Hsp78 and Hsp104, which function in the mitochondria and cytosol, respectively; and (iii) six other assorted functions. We present computational and experimental data that are consistent with a joint function for the differentially localized ClpB disaggregases, and with the possibility of a shared client/chaperone relationship between the mitochondrial Fe/S homoaconitase encoded by the lost LYS4 gene and the two ClpBs. Our analyses lead to the hypothesis that the evolution of gastrulation-based multicellularity in animals led to efficient extraction of nutrients from dietary sources, loss of natural selection for maintenance of energetically expensive biosynthetic pathways, and subsequent loss of their attendant ClpB chaperones.
[ { "created": "Tue, 15 Jan 2013 17:22:23 GMT", "version": "v1" }, { "created": "Wed, 6 Feb 2013 18:07:25 GMT", "version": "v2" }, { "created": "Sun, 6 Oct 2013 22:56:18 GMT", "version": "v3" }, { "created": "Sat, 28 Feb 2015 01:12:01 GMT", "version": "v4" } ]
2015-03-03
[ [ "Erives", "Albert", "" ], [ "Fassler", "Jan", "" ] ]
The evolution of animals involved acquisition of an emergent gene repertoire for gastrulation. Whether loss of genes also co-evolved with this developmental reprogramming has not yet been addressed. Here, we identify twenty-four genetic functions that are retained in fungi and choanoflagellates but undetectable in animals. These lost genes encode: (i) sixteen distinct biosynthetic functions; (ii) the two ancestral eukaryotic ClpB disaggregases, Hsp78 and Hsp104, which function in the mitochondria and cytosol, respectively; and (iii) six other assorted functions. We present computational and experimental data that are consistent with a joint function for the differentially localized ClpB disaggregases, and with the possibility of a shared client/chaperone relationship between the mitochondrial Fe/S homoaconitase encoded by the lost LYS4 gene and the two ClpBs. Our analyses lead to the hypothesis that the evolution of gastrulation-based multicellularity in animals led to efficient extraction of nutrients from dietary sources, loss of natural selection for maintenance of energetically expensive biosynthetic pathways, and subsequent loss of their attendant ClpB chaperones.
0802.2059
Artem Novozhilov S
Artem S Novozhilov
On the spread of epidemics in a closed heterogeneous population
23 pages, 2 figures
Mathematical Biosciences. 2008, 215, 177-185
10.1016/j.mbs.2008.07.010
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Heterogeneity is an important property of any population experiencing a disease. Here we apply general methods of the theory of heterogeneous populations to the simplest mathematical models in epidemiology. In particular, an SIR (susceptible-infective-removed) model is formulated and analyzed for different sources of heterogeneity. It is shown that a heterogeneous model can be reduced to a homogeneous model with a nonlinear transmission function, which is given in explicit form. The widely used power transmission function is deduced from a heterogeneous model with the initial gamma-distribution of the disease parameters. Therefore, a mechanistic derivation of the phenomenological model, which mimics reality very well, is provided. The equation for the final size of an epidemic for an arbitrary initial distribution is found. The implications of population heterogeneity are discussed, in particular, it is pointed out that usual moment-closure methods can lead to erroneous conclusions if applied for the study of the long-term behavior of the model.
[ { "created": "Thu, 14 Feb 2008 17:17:15 GMT", "version": "v1" } ]
2012-02-28
[ [ "Novozhilov", "Artem S", "" ] ]
Heterogeneity is an important property of any population experiencing a disease. Here we apply general methods of the theory of heterogeneous populations to the simplest mathematical models in epidemiology. In particular, an SIR (susceptible-infective-removed) model is formulated and analyzed for different sources of heterogeneity. It is shown that a heterogeneous model can be reduced to a homogeneous model with a nonlinear transmission function, which is given in explicit form. The widely used power transmission function is deduced from a heterogeneous model with the initial gamma-distribution of the disease parameters. Therefore, a mechanistic derivation of the phenomenological model, which mimics reality very well, is provided. The equation for the final size of an epidemic for an arbitrary initial distribution is found. The implications of population heterogeneity are discussed, in particular, it is pointed out that usual moment-closure methods can lead to erroneous conclusions if applied for the study of the long-term behavior of the model.
1604.04484
Sean Simmons
Sean Simmons, Cenk Sahinalp, and Bonnie Berger
Enabling Privacy-Preserving GWAS in Heterogeneous Human Populations
To be presented at RECOMB 2016
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The projected increase of genotyping in the clinic and the rise of large genomic databases has led to the possibility of using patient medical data to perform genomewide association studies (GWAS) on a larger scale and at a lower cost than ever before. Due to privacy concerns, however, access to this data is limited to a few trusted individuals, greatly reducing its impact on biomedical research. Privacy preserving methods have been suggested as a way of allowing more people access to this precious data while protecting patients. In particular, there has been growing interest in applying the concept of differential privacy to GWAS results. Unfortunately, previous approaches for performing differentially private GWAS are based on rather simple statistics that have some major limitations. In particular, they do not correct for population stratification, a major issue when dealing with the genetically diverse populations present in modern GWAS. To address this concern we introduce a novel computational framework for performing GWAS that tailors ideas from differential privacy to protect private phenotype information, while at the same time correcting for population stratification. This framework allows us to produce privacy preserving GWAS results based on two of the most commonly used GWAS statistics: EIGENSTRAT and linear mixed model (LMM) based statistics. We test our differentially private statistics, PrivSTRAT and PrivLMM, on both simulated and real GWAS datasets and find that they are able to protect privacy while returning meaningful GWAS results.
[ { "created": "Fri, 15 Apr 2016 13:06:47 GMT", "version": "v1" } ]
2016-04-18
[ [ "Simmons", "Sean", "" ], [ "Sahinalp", "Cenk", "" ], [ "Berger", "Bonnie", "" ] ]
The projected increase of genotyping in the clinic and the rise of large genomic databases has led to the possibility of using patient medical data to perform genomewide association studies (GWAS) on a larger scale and at a lower cost than ever before. Due to privacy concerns, however, access to this data is limited to a few trusted individuals, greatly reducing its impact on biomedical research. Privacy preserving methods have been suggested as a way of allowing more people access to this precious data while protecting patients. In particular, there has been growing interest in applying the concept of differential privacy to GWAS results. Unfortunately, previous approaches for performing differentially private GWAS are based on rather simple statistics that have some major limitations. In particular, they do not correct for population stratification, a major issue when dealing with the genetically diverse populations present in modern GWAS. To address this concern we introduce a novel computational framework for performing GWAS that tailors ideas from differential privacy to protect private phenotype information, while at the same time correcting for population stratification. This framework allows us to produce privacy preserving GWAS results based on two of the most commonly used GWAS statistics: EIGENSTRAT and linear mixed model (LMM) based statistics. We test our differentially private statistics, PrivSTRAT and PrivLMM, on both simulated and real GWAS datasets and find that they are able to protect privacy while returning meaningful GWAS results.
q-bio/0607035
Javier Macia Santamaria
J. Macia, R.V. Sole
Synthetic Turing protocells: vesicle self-reproduction through symmetry-breaking instabilities
null
null
null
null
q-bio.CB
null
The reproduction of a living cell requires a repeatable set of chemical events to be properly coordinated. Such events define a replication cycle, coupling the growth and shape change of the cell membrane with internal metabolic reactions. Although the logic of such a process is determined by potentially simple physico-chemical laws, the modeling of a full, self-maintained cell cycle is not trivial. Here we present a novel approach to the problem which makes use of so-called symmetry-breaking instabilities as the engine of cell growth and division. It is shown that the process occurs as a consequence of the breaking of spatial symmetry and provides a reliable mechanism of vesicle growth and reproduction. Our model opens the possibility of a synthetic protocell lacking information but displaying self-reproduction under a very simple set of chemical reactions.
[ { "created": "Fri, 21 Jul 2006 17:06:21 GMT", "version": "v1" } ]
2007-05-23
[ [ "Macia", "J.", "" ], [ "Sole", "R. V.", "" ] ]
The reproduction of a living cell requires a repeatable set of chemical events to be properly coordinated. Such events define a replication cycle, coupling the growth and shape change of the cell membrane with internal metabolic reactions. Although the logic of such a process is determined by potentially simple physico-chemical laws, the modeling of a full, self-maintained cell cycle is not trivial. Here we present a novel approach to the problem which makes use of so-called symmetry-breaking instabilities as the engine of cell growth and division. It is shown that the process occurs as a consequence of the breaking of spatial symmetry and provides a reliable mechanism of vesicle growth and reproduction. Our model opens the possibility of a synthetic protocell lacking information but displaying self-reproduction under a very simple set of chemical reactions.
2305.17183
Kai San Chan
Kai San Chan, Huimiao Chen, Chenyu Jin, Yuxuan Tian, Dingchang Lin
ProGroTrack: Deep Learning-Assisted Tracking of Intracellular Protein Growth Dynamics
null
null
null
null
q-bio.QM cs.AI eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate tracking of cellular and subcellular structures, along with their dynamics, plays a pivotal role in understanding the underlying mechanisms of biological systems. This paper presents a novel approach, ProGroTrack, that combines the You Only Look Once (YOLO) and ByteTrack algorithms within the detection-based tracking (DBT) framework to track intracellular protein nanostructures. Focusing on iPAK4 protein fibers as a representative case study, we conducted a comprehensive evaluation of YOLOv5 and YOLOv8 models, revealing the superior performance of YOLOv5 on our dataset. Notably, YOLOv5x achieved an impressive mAP50 of 0.839 and F-score of 0.819. To further optimize detection capabilities, we incorporated semi-supervised learning for model improvement, resulting in enhanced performance on all metrics. Subsequently, we successfully applied our approach to track the growth behavior of iPAK4 protein fibers, revealing their two distinct growth phases consistent with a previously reported kinetic model. This research showcases the promising potential of our approach, extending beyond iPAK4 fibers. It also offers a significant advancement in the precise tracking of dynamic processes in live cells, fostering new avenues for biomedical research.
[ { "created": "Fri, 26 May 2023 18:15:38 GMT", "version": "v1" } ]
2023-05-30
[ [ "Chan", "Kai San", "" ], [ "Chen", "Huimiao", "" ], [ "Jin", "Chenyu", "" ], [ "Tian", "Yuxuan", "" ], [ "Lin", "Dingchang", "" ] ]
Accurate tracking of cellular and subcellular structures, along with their dynamics, plays a pivotal role in understanding the underlying mechanisms of biological systems. This paper presents a novel approach, ProGroTrack, that combines the You Only Look Once (YOLO) and ByteTrack algorithms within the detection-based tracking (DBT) framework to track intracellular protein nanostructures. Focusing on iPAK4 protein fibers as a representative case study, we conducted a comprehensive evaluation of YOLOv5 and YOLOv8 models, revealing the superior performance of YOLOv5 on our dataset. Notably, YOLOv5x achieved an impressive mAP50 of 0.839 and F-score of 0.819. To further optimize detection capabilities, we incorporated semi-supervised learning for model improvement, resulting in enhanced performance on all metrics. Subsequently, we successfully applied our approach to track the growth behavior of iPAK4 protein fibers, revealing their two distinct growth phases consistent with a previously reported kinetic model. This research showcases the promising potential of our approach, extending beyond iPAK4 fibers. It also offers a significant advancement in the precise tracking of dynamic processes in live cells, fostering new avenues for biomedical research.
1903.05381
Jonathan Potts
Jonathan R. Potts and Mark A. Lewis
Spatial memory and taxis-driven pattern formation in model ecosystems
20 pages; 7 figures
Bulletin of Mathematical Biology, 2019
10.1007/s11538-019-00626-9
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical models of spatial population dynamics typically focus on the interplay between dispersal events and birth/death processes. However, for many animal communities, significant arrangement in space can occur on shorter timescales, where births and deaths are negligible. This phenomenon is particularly prevalent in populations of larger, vertebrate animals who often reproduce only once per year or less. To understand spatial arrangements of animal communities on such timescales, we use a class of diffusion-taxis equations for modelling inter-population movement responses between $N \geq 2$ populations. These systems of equations incorporate the effect on animal movement of both the current presence of other populations and the memory of past presence encoded either in the environment or in the minds of animals. We give general criteria for the spontaneous formation of both stationary and oscillatory patterns, via linear pattern formation analysis. For $N=2$, we classify completely the pattern formation properties using a combination of linear analysis and non-linear energy functionals. In this case, the only patterns that can occur asymptotically in time are stationary. However, for $N \geq 3$, oscillatory patterns can occur asymptotically, giving rise to a sequence of period-doubling bifurcations leading to patterns with no obvious regularity, a hallmark of chaos. Our study highlights the importance of understanding between-population animal movement for understanding spatial species distributions, something that is typically ignored in species distribution modelling, and so develops a new paradigm for spatial population dynamics.
[ { "created": "Wed, 13 Mar 2019 09:51:09 GMT", "version": "v1" }, { "created": "Tue, 28 May 2019 18:43:34 GMT", "version": "v2" } ]
2019-06-06
[ [ "Potts", "Jonathan R.", "" ], [ "Lewis", "Mark A.", "" ] ]
Mathematical models of spatial population dynamics typically focus on the interplay between dispersal events and birth/death processes. However, for many animal communities, significant arrangement in space can occur on shorter timescales, where births and deaths are negligible. This phenomenon is particularly prevalent in populations of larger, vertebrate animals who often reproduce only once per year or less. To understand spatial arrangements of animal communities on such timescales, we use a class of diffusion-taxis equations for modelling inter-population movement responses between $N \geq 2$ populations. These systems of equations incorporate the effect on animal movement of both the current presence of other populations and the memory of past presence encoded either in the environment or in the minds of animals. We give general criteria for the spontaneous formation of both stationary and oscillatory patterns, via linear pattern formation analysis. For $N=2$, we classify completely the pattern formation properties using a combination of linear analysis and non-linear energy functionals. In this case, the only patterns that can occur asymptotically in time are stationary. However, for $N \geq 3$, oscillatory patterns can occur asymptotically, giving rise to a sequence of period-doubling bifurcations leading to patterns with no obvious regularity, a hallmark of chaos. Our study highlights the importance of understanding between-population animal movement for understanding spatial species distributions, something that is typically ignored in species distribution modelling, and so develops a new paradigm for spatial population dynamics.
1811.07269
Kong Hyeok
Un-Hyang Ho, Hye-Ok Kong
Prediction of Signal Sequences in Abiotic Stress Inducible Genes from Main Crops by Association Rule Mining
null
null
null
null
q-bio.GN cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is important to study genes that affect the growing environment of main crops. In particular, the promoter-region recognition problem, i.e., predicting whether a DNA sequence contains a promoter region or not, is a prerequisite for finding abiotic stress-inducible genes. Prediction of promoter sequences in DNA has been studied using traditional pattern-matching methods and machine learning methods in biology and computer science.
[ { "created": "Sun, 18 Nov 2018 04:21:18 GMT", "version": "v1" } ]
2018-11-20
[ [ "Ho", "Un-Hyang", "" ], [ "Kong", "Hye-Ok", "" ] ]
It is important to study genes that affect the growing environment of main crops. In particular, the promoter-region recognition problem, i.e., predicting whether a DNA sequence contains a promoter region or not, is a prerequisite for finding abiotic stress-inducible genes. Prediction of promoter sequences in DNA has been studied using traditional pattern-matching methods and machine learning methods in biology and computer science.
2106.10851
Jian Zhai
Jian Zhai, Chaojun Yu, You Zhai
Witten-type topological field theory of self-organized criticality for stochastic neural networks
4 figures
null
null
null
q-bio.NC quant-ph
http://creativecommons.org/licenses/by/4.0/
We study the Witten-type topological field theory (W-TFT) of self-organized criticality (SOC) for stochastic neural networks. We obtain the Parisi-Sourlas-Wu quantization of general stochastic differential equations (SDEs) for neural networks, the Becchi-Rouet-Stora-Tyutin (BRST) symmetry of the diffusion system, the relation between spontaneous breaking and instantons connecting steady states of the SDEs, as well as the sufficient and necessary condition for pseudo-supersymmetric stochastic neural networks. Suppose that neuronal avalanches are a mechanism of cortical information processing and storage \cite{Beggs}\cite{Plenz1}\cite{Plenz2}, that the model of stochastic neural networks \cite{Dayan} is correct, and that the SOC system can be regarded as a W-TFT with spontaneously broken BRST symmetry. Then we should recover the neuronal avalanches and spontaneously broken BRST symmetry from the model of stochastic neural networks. We find that, provided the divergence of the drift coefficients is small and non-constant, the model of stochastic neural networks is BRST symmetric. That is, if the SOC of brain neural networks can be regarded as a W-TFT with spontaneously broken BRST symmetry, then the general model of stochastic neural networks extensively used in neuroscience \cite{Dayan} is not enough to describe the SOC. On the other hand, using the Fokker-Planck equation, we give a sufficient condition on the diffusion so that there exists a steady-state probability distribution for the stochastic neural networks. Rhythms of the firing rates of the neuronal networks arise from this process, while some biological laws are conserved.
[ { "created": "Mon, 21 Jun 2021 04:37:29 GMT", "version": "v1" }, { "created": "Fri, 28 Jan 2022 06:58:43 GMT", "version": "v2" } ]
2022-01-31
[ [ "Zhai", "Jian", "" ], [ "Yu", "Chaojun", "" ], [ "Zhai", "You", "" ] ]
We study the Witten-type topological field theory (W-TFT) of self-organized criticality (SOC) for stochastic neural networks. We obtain the Parisi-Sourlas-Wu quantization of general stochastic differential equations (SDEs) for neural networks, the Becchi-Rouet-Stora-Tyutin (BRST) symmetry of the diffusion system, the relation between spontaneous breaking and instantons connecting steady states of the SDEs, as well as the sufficient and necessary condition for pseudo-supersymmetric stochastic neural networks. Suppose that neuronal avalanches are a mechanism of cortical information processing and storage \cite{Beggs}\cite{Plenz1}\cite{Plenz2}, that the model of stochastic neural networks \cite{Dayan} is correct, and that the SOC system can be regarded as a W-TFT with spontaneously broken BRST symmetry. Then we should recover the neuronal avalanches and spontaneously broken BRST symmetry from the model of stochastic neural networks. We find that, provided the divergence of the drift coefficients is small and non-constant, the model of stochastic neural networks is BRST symmetric. That is, if the SOC of brain neural networks can be regarded as a W-TFT with spontaneously broken BRST symmetry, then the general model of stochastic neural networks extensively used in neuroscience \cite{Dayan} is not enough to describe the SOC. On the other hand, using the Fokker-Planck equation, we give a sufficient condition on the diffusion so that there exists a steady-state probability distribution for the stochastic neural networks. Rhythms of the firing rates of the neuronal networks arise from this process, while some biological laws are conserved.
1210.7165
Farzad Farkhooi
Farzad Farkhooi, Anja Froese, Eilif Muller, Randolf Menzel, Martin P. Nawrot
Cellular Adaptation Accounts for the Sparse and Reliable Sensory Stimulus Representation
17 pages, 4 figures
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus representation in the later stages of cortical sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in a network with adapting neurons. We find that cellular adaptation plays a critical role in the transient reduction of the trial-by-trial variability of cortical spiking, providing an explanation for a wide-spread and hitherto unexplained phenomenon by a simple mechanism. In insect olfaction, cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body, independent of inhibitory mechanisms. Our results reveal a computational principle that relates neuronal firing rate adaptation to temporal sparse coding and variability suppression in nervous systems with a sequential processing architecture.
[ { "created": "Fri, 26 Oct 2012 15:15:08 GMT", "version": "v1" } ]
2012-10-29
[ [ "Farkhooi", "Farzad", "" ], [ "Froese", "Anja", "" ], [ "Muller", "Eilif", "" ], [ "Menzel", "Randolf", "" ], [ "Nawrot", "Martin P.", "" ] ]
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus representation in the later stages of cortical sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in a network with adapting neurons. We find that cellular adaptation plays a critical role in the transient reduction of the trial-by-trial variability of cortical spiking, providing an explanation for a wide-spread and hitherto unexplained phenomenon by a simple mechanism. In insect olfaction, cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body, independent of inhibitory mechanisms. Our results reveal a computational principle that relates neuronal firing rate adaptation to temporal sparse coding and variability suppression in nervous systems with a sequential processing architecture.
1303.7116
Johannes Knebel
Johannes Knebel, Torben Kr\"uger, Markus F. Weber, Erwin Frey
Coexistence and Survival in Conservative Lotka-Volterra Networks
5 pages, 3 figures
Phys. Rev. Lett. 110, 168106 (2013)
10.1103/PhysRevLett.110.168106
LMU-ASC 63/12
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analyzing coexistence and survival scenarios of Lotka-Volterra (LV) networks in which the total biomass is conserved is of vital importance for the characterization of long-term dynamics of ecological communities. Here, we introduce a classification scheme for coexistence scenarios in these conservative LV models and quantify the extinction process by employing the Pfaffian of the network's interaction matrix. We illustrate our findings on global stability properties for general systems of four and five species and find a generalized scaling law for the extinction time.
[ { "created": "Thu, 28 Mar 2013 12:57:46 GMT", "version": "v1" } ]
2013-04-26
[ [ "Knebel", "Johannes", "" ], [ "Krüger", "Torben", "" ], [ "Weber", "Markus F.", "" ], [ "Frey", "Erwin", "" ] ]
Analyzing coexistence and survival scenarios of Lotka-Volterra (LV) networks in which the total biomass is conserved is of vital importance for the characterization of long-term dynamics of ecological communities. Here, we introduce a classification scheme for coexistence scenarios in these conservative LV models and quantify the extinction process by employing the Pfaffian of the network's interaction matrix. We illustrate our findings on global stability properties for general systems of four and five species and find a generalized scaling law for the extinction time.
2308.10831
Emmanuel Calvet
Emmanuel Calvet, Jean Rouat, Bertrand Reulet
Excitatory/Inhibitory Balance Emerges as a Key Factor for RBN Performance, Overriding Attractor Dynamics
22 pages, 6 figures
Front. Comput. Neurosci. Volume 17 - 2023
10.3389/fncom.2023.1223258
null
q-bio.NC cs.LG cs.NE stat.CO
http://creativecommons.org/licenses/by-sa/4.0/
Reservoir computing provides a time and cost-efficient alternative to traditional learning methods. Critical regimes, known as the "edge of chaos," have been found to optimize computational performance in binary neural networks. However, little attention has been devoted to studying reservoir-to-reservoir variability when investigating the link between connectivity, dynamics, and performance. As physical reservoir computers become more prevalent, developing a systematic approach to network design is crucial. In this article, we examine Random Boolean Networks (RBNs) and demonstrate that specific distribution parameters can lead to diverse dynamics near critical points. We identify distinct dynamical attractors and quantify their statistics, revealing that most reservoirs possess a dominant attractor. We then evaluate performance in two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance. In comparison, a negative inhibitory balance delivers another critical point with better prediction performance. Interestingly, we show that the intrinsic attractor dynamics have little influence on performance in either case.
[ { "created": "Wed, 2 Aug 2023 17:41:58 GMT", "version": "v1" } ]
2023-08-22
[ [ "Calvet", "Emmanuel", "" ], [ "Rouat", "Jean", "" ], [ "Reulet", "Bertrand", "" ] ]
Reservoir computing provides a time and cost-efficient alternative to traditional learning methods. Critical regimes, known as the "edge of chaos," have been found to optimize computational performance in binary neural networks. However, little attention has been devoted to studying reservoir-to-reservoir variability when investigating the link between connectivity, dynamics, and performance. As physical reservoir computers become more prevalent, developing a systematic approach to network design is crucial. In this article, we examine Random Boolean Networks (RBNs) and demonstrate that specific distribution parameters can lead to diverse dynamics near critical points. We identify distinct dynamical attractors and quantify their statistics, revealing that most reservoirs possess a dominant attractor. We then evaluate performance in two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance. In comparison, a negative inhibitory balance delivers another critical point with better prediction performance. Interestingly, we show that the intrinsic attractor dynamics have little influence on performance in either case.
1902.08058
Abdulhakim Abdi
A. M. Abdi, N. Boke-Olen, H. Jin, L. Eklundh, T. Tagesson, V. Lehsten, J. Ardo
First assessment of the plant phenology index (PPI) for estimating gross primary productivity in African semi-arid ecosystems
Accepted manuscript; 12 pages, 4 tables, 9 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The importance of semi-arid ecosystems in the global carbon cycle as sinks for CO2 emissions has recently been highlighted. Africa is a carbon sink and nearly half its area comprises arid and semi-arid ecosystems. However, there are uncertainties regarding CO2 fluxes for semi-arid ecosystems in Africa, particularly savannas and dry tropical woodlands. In order to improve on existing remote-sensing based methods for estimating carbon uptake across semi-arid Africa we applied and tested the recently developed plant phenology index (PPI). We developed a PPI-based model estimating gross primary productivity (GPP) that accounts for canopy water stress, and compared it against three other Earth observation-based GPP models: the temperature and greenness model, the greenness and radiation model and a light use efficiency model. The models were evaluated against in situ data from four semi-arid sites in Africa with varying tree canopy cover (3 to 65 percent). Evaluation results from the four GPP models showed reasonable agreement with in situ GPP measured from eddy covariance flux towers (EC GPP) based on coefficient of variation, root-mean-square error, and Bayesian information criterion. The PPI-based GPP model was able to capture the magnitude of EC GPP better than the other tested models. The results of this study show that a PPI-based GPP model is a promising tool for the estimation of GPP in the semi-arid ecosystems of Africa.
[ { "created": "Thu, 21 Feb 2019 14:04:21 GMT", "version": "v1" } ]
2019-02-22
[ [ "Abdi", "A. M.", "" ], [ "Boke-Olen", "N.", "" ], [ "Jin", "H.", "" ], [ "Eklundh", "L.", "" ], [ "Tagesson", "T.", "" ], [ "Lehsten", "V.", "" ], [ "Ardo", "J.", "" ] ]
The importance of semi-arid ecosystems in the global carbon cycle as sinks for CO2 emissions has recently been highlighted. Africa is a carbon sink and nearly half its area comprises arid and semi-arid ecosystems. However, there are uncertainties regarding CO2 fluxes for semi-arid ecosystems in Africa, particularly savannas and dry tropical woodlands. In order to improve on existing remote-sensing based methods for estimating carbon uptake across semi-arid Africa we applied and tested the recently developed plant phenology index (PPI). We developed a PPI-based model estimating gross primary productivity (GPP) that accounts for canopy water stress, and compared it against three other Earth observation-based GPP models: the temperature and greenness model, the greenness and radiation model and a light use efficiency model. The models were evaluated against in situ data from four semi-arid sites in Africa with varying tree canopy cover (3 to 65 percent). Evaluation results from the four GPP models showed reasonable agreement with in situ GPP measured from eddy covariance flux towers (EC GPP) based on coefficient of variation, root-mean-square error, and Bayesian information criterion. The PPI-based GPP model was able to capture the magnitude of EC GPP better than the other tested models. The results of this study show that a PPI-based GPP model is a promising tool for the estimation of GPP in the semi-arid ecosystems of Africa.
2011.11710
Mihai Alexandru Petrovici
Elena Kreutzer, Walter M. Senn, Mihai A. Petrovici
Natural-gradient learning for spiking neurons
Joint senior authorship: Walter M. Senn and Mihai A. Petrovici
null
null
null
q-bio.NC cs.NE math.DG stat.CO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
[ { "created": "Mon, 23 Nov 2020 20:26:37 GMT", "version": "v1" }, { "created": "Wed, 23 Feb 2022 19:29:15 GMT", "version": "v2" } ]
2022-02-25
[ [ "Kreutzer", "Elena", "" ], [ "Senn", "Walter M.", "" ], [ "Petrovici", "Mihai A.", "" ] ]
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural gradient descent.
2005.03004
Wengong Jin
Wengong Jin, Regina Barzilay, Tommi Jaakkola
Adaptive Invariance for Molecule Property Prediction
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective property prediction methods can help accelerate the search for COVID-19 antivirals either through accurate in-silico screens or by effectively guiding on-going at-scale experimental efforts. However, existing prediction tools have limited ability to accommodate the scarce or fragmented training data currently available. In this paper, we introduce a novel approach to learn predictors that can generalize or extrapolate beyond the heterogeneous data. Our method builds on and extends recently proposed invariant risk minimization, adaptively forcing the predictor to avoid nuisance variation. We achieve this by continually exercising and manipulating latent representations of molecules to highlight undesirable variation to the predictor. To test the method we use a combination of three data sources: SARS-CoV-2 antiviral screening data, molecular fragments that bind to the SARS-CoV-2 main protease, and large screening data for SARS-CoV-1. Our predictor outperforms state-of-the-art transfer learning methods by a significant margin. We also report the top 20 predictions of our model on the Broad drug repurposing hub.
[ { "created": "Tue, 5 May 2020 19:47:20 GMT", "version": "v1" } ]
2020-05-08
[ [ "Jin", "Wengong", "" ], [ "Barzilay", "Regina", "" ], [ "Jaakkola", "Tommi", "" ] ]
Effective property prediction methods can help accelerate the search for COVID-19 antivirals either through accurate in-silico screens or by effectively guiding on-going at-scale experimental efforts. However, existing prediction tools have limited ability to accommodate the scarce or fragmented training data currently available. In this paper, we introduce a novel approach to learn predictors that can generalize or extrapolate beyond the heterogeneous data. Our method builds on and extends recently proposed invariant risk minimization, adaptively forcing the predictor to avoid nuisance variation. We achieve this by continually exercising and manipulating latent representations of molecules to highlight undesirable variation to the predictor. To test the method we use a combination of three data sources: SARS-CoV-2 antiviral screening data, molecular fragments that bind to the SARS-CoV-2 main protease, and large screening data for SARS-CoV-1. Our predictor outperforms state-of-the-art transfer learning methods by a significant margin. We also report the top 20 predictions of our model on the Broad drug repurposing hub.
0902.3919
Wojciech Borkowski
Wojciech Borkowski
Cellular Automata Model of Macroevolution
8 pages, 3 figures, Fourteenth National Conference on Application of Mathematics in Biology and Medicine, Leszno 2008 (POLAND)
Proceedings of the Fourteenth National Conference on Application of Mathematics in Biology and Medicine (pp. 18-25), Uniwersytet Warszawski, QPrint Warszawa 2008, ISBN:83-903893-4-7
null
null
q-bio.PE q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper I describe a cellular automaton model of a multi-species ecosystem, suitable for the study of emergent properties of macroevolution. Unlike the majority of ecological models, the number of coexisting species is not fixed. Starting from one common ancestor, species appear by "mutations" of existing species, and then survive or go extinct depending on the balance of local ecological interactions. Monte-Carlo numerical simulations show that this model is able to qualitatively reproduce phenomena that have been observed in other models and in nature.
[ { "created": "Mon, 23 Feb 2009 15:33:18 GMT", "version": "v1" }, { "created": "Wed, 23 Sep 2009 12:31:50 GMT", "version": "v2" } ]
2009-09-23
[ [ "Borkowski", "Wojciech", "" ] ]
In this paper I describe a cellular automaton model of a multi-species ecosystem, suitable for the study of emergent properties of macroevolution. Unlike the majority of ecological models, the number of coexisting species is not fixed. Starting from one common ancestor, species appear by "mutations" of existing species, and then survive or go extinct depending on the balance of local ecological interactions. Monte-Carlo numerical simulations show that this model is able to qualitatively reproduce phenomena that have been observed in other models and in nature.
2312.17540
Alexander Spirov
Alexander Spirov
Co-evolution of replicators and their parasites
14 pages; in Russian
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The problem of evolutionary complexification of life is considered one of the fundamental aspects in contemporary evolutionary theory. Parasitism is ubiquitous, inevitable, and arises as soon as the first replicators appear, even during the prebiotic stages of evolution. Both in theoretical approaches (computer modeling and analysis) and in real experiments (replication of biological macromolecules), parasitic processes emerge almost immediately. An effective way to avoid the elimination of the host-parasite system is through compartmentalization. In both theory and experiments, the pressure of parasitism leads to the complexification of the host-parasite system into a network of cooperative replicators and their parasites. Parasites have the ability to create niches for new replicators. The co-evolutionary arms race between defense systems and counter-defense mechanisms among parasites and hosts can progress for a considerable duration, involving multiple stages, if not indefinitely.
[ { "created": "Fri, 29 Dec 2023 10:07:47 GMT", "version": "v1" } ]
2024-01-01
[ [ "Spirov", "Alexander", "" ] ]
The problem of evolutionary complexification of life is considered one of the fundamental aspects in contemporary evolutionary theory. Parasitism is ubiquitous, inevitable, and arises as soon as the first replicators appear, even during the prebiotic stages of evolution. Both in theoretical approaches (computer modeling and analysis) and in real experiments (replication of biological macromolecules), parasitic processes emerge almost immediately. An effective way to avoid the elimination of the host-parasite system is through compartmentalization. In both theory and experiments, the pressure of parasitism leads to the complexification of the host-parasite system into a network of cooperative replicators and their parasites. Parasites have the ability to create niches for new replicators. The co-evolutionary arms race between defense systems and counter-defense mechanisms among parasites and hosts can progress for a considerable duration, involving multiple stages, if not indefinitely.
0808.0321
Patrick Warren
Patrick B. Warren, Silvio M. Duarte Queiros, Janette L. Jones
Flux networks in metabolic graphs
9 pages, 4 figures, RevTeX 4.0, supplementary data available (excel)
Phys. Biol. v6, 046006 (2009)
10.1088/1478-3975/6/4/046006
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A metabolic model can be represented as a bipartite graph comprising linked reaction and metabolite nodes. Here it is shown how a network of conserved fluxes can be assigned to the edges of such a graph by combining the reaction fluxes with a conserved metabolite property such as molecular weight. A similar flux network can be constructed by combining the primal and dual solutions to the linear programming problem that typically arises in constraint-based modelling. Such constructions may help with the visualisation of flux distributions in complex metabolic networks. The analysis also explains the strong correlation observed between metabolite shadow prices (the dual linear programming variables) and conserved metabolite properties. The methods were applied to recent metabolic models for Escherichia coli, Saccharomyces cerevisiae, and Methanosarcina barkeri. Detailed results are reported for E. coli; similar results were found for the other organisms.
[ { "created": "Sun, 3 Aug 2008 14:29:51 GMT", "version": "v1" }, { "created": "Mon, 28 Sep 2009 16:29:22 GMT", "version": "v2" } ]
2015-05-13
[ [ "Warren", "Patrick B.", "" ], [ "Queiros", "Silvio M. Duarte", "" ], [ "Jones", "Janette L.", "" ] ]
A metabolic model can be represented as a bipartite graph comprising linked reaction and metabolite nodes. Here it is shown how a network of conserved fluxes can be assigned to the edges of such a graph by combining the reaction fluxes with a conserved metabolite property such as molecular weight. A similar flux network can be constructed by combining the primal and dual solutions to the linear programming problem that typically arises in constraint-based modelling. Such constructions may help with the visualisation of flux distributions in complex metabolic networks. The analysis also explains the strong correlation observed between metabolite shadow prices (the dual linear programming variables) and conserved metabolite properties. The methods were applied to recent metabolic models for Escherichia coli, Saccharomyces cerevisiae, and Methanosarcina barkeri. Detailed results are reported for E. coli; similar results were found for the other organisms.
2406.08601
Johann Summhammer
Johann Summhammer
Mental intervention in quantum scattering of ions without violating conservation laws
14 pages, 5 figures
null
null
null
q-bio.NC quant-ph
http://creativecommons.org/licenses/by/4.0/
There have been several proposals in the past that mind might influence matter by exploiting the randomness of quantum events. Here, calculations are presented showing how mental selection of quantum mechanical scattering directions of ions in the axon hillock of neuronal cells could influence diffusion and initiate an action potential. Only a few thousand ions would need to be affected. No conservation laws are violated, but a momentary and very small local decrease of temperature should occur, consistent with a quantum mechanically possible but extremely improbable evolution. An estimate of the concurrent violation of the second law of thermodynamics is presented. Some thoughts are given to how this hypothesized mental intervention could be tested.
[ { "created": "Wed, 12 Jun 2024 19:10:05 GMT", "version": "v1" } ]
2024-06-14
[ [ "Summhammer", "Johann", "" ] ]
There have been several proposals in the past that mind might influence matter by exploiting the randomness of quantum events. Here, calculations are presented showing how mental selection of quantum mechanical scattering directions of ions in the axon hillock of neuronal cells could influence diffusion and initiate an action potential. Only a few thousand ions would need to be affected. No conservation laws are violated, but a momentary and very small local decrease of temperature should occur, consistent with a quantum mechanically possible but extremely improbable evolution. An estimate of the concurrent violation of the second law of thermodynamics is presented. Some thoughts are given to how this hypothesized mental intervention could be tested.
1908.09144
Jon Bohlin
Jon Bohlin, Brittany Rose, Ola Brynildsrud and Birgitte Freiesleben De Blasio
A simple stochastic model to describe the evolution over time of core genome SNP GC content in prokaryotes
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genomes in living organisms consist of the nucleotides adenine (A), guanine (G), cytosine (C) and thymine (T). All prokaryotes have genomes consisting of double-stranded DNA, where the A's and G's (purines) of one strand bind respectively to the T's and C's (pyrimidines) of the other. As such, the number of A's on one strand nearly equals the number of T's on the other, and the same is true of one strand's G's and the other's C's. Globally, this relationship is formalized as Chargaff's first parity rule; its strandwise equivalent is Chargaff's second parity rule. Therefore, the GC content of any double-stranded DNA genome can be expressed as %GC=100%-%AT. Variation in prokaryotic GC content can be substantial between taxa but is generally small within microbial genomes. This variation has been found to correlate with both phylogeny and environmental factors. Since novel single-nucleotide polymorphisms (SNPs) within genomes are at least partially linked to the environment, SNP GC content can be considered a compound measure of an organism's environmental influences, lifestyle and phylogeny. We present a mathematical model that describes how SNP GC content in microbial genomes evolves over time as a function of the AT->GC and GC->AT mutation rates with Gaussian white noise disturbances. The model suggests that, in non-recombining bacteria, mutations can first accumulate unnoticeably and then abruptly fluctuate out of control. Thus, minuscule variations in mutation rates can suddenly become unsustainable, ultimately driving a species to extinction if not counteracted early enough. This model, which is suited specifically to symbiotic prokaryotes, conforms to scenarios predicted by Muller's ratchet and may suggest that this is not always a gradual, degrading process.
[ { "created": "Sat, 24 Aug 2019 14:51:34 GMT", "version": "v1" } ]
2019-08-27
[ [ "Bohlin", "Jon", "" ], [ "Rose", "Brittany", "" ], [ "Brynildsrud", "Ola", "" ], [ "De Blasio", "Birgitte Freiesleben", "" ] ]
Genomes in living organisms consist of the nucleotides adenine (A), guanine (G), cytosine (C) and thymine (T). All prokaryotes have genomes consisting of double-stranded DNA, where the A's and G's (purines) of one strand bind respectively to the T's and C's (pyrimidines) of the other. As such, the number of A's on one strand nearly equals the number of T's on the other, and the same is true of one strand's G's and the other's C's. Globally, this relationship is formalized as Chargaff's first parity rule; its strandwise equivalent is Chargaff's second parity rule. Therefore, the GC content of any double-stranded DNA genome can be expressed as %GC=100%-%AT. Variation in prokaryotic GC content can be substantial between taxa but is generally small within microbial genomes. This variation has been found to correlate with both phylogeny and environmental factors. Since novel single-nucleotide polymorphisms (SNPs) within genomes are at least partially linked to the environment, SNP GC content can be considered a compound measure of an organism's environmental influences, lifestyle and phylogeny. We present a mathematical model that describes how SNP GC content in microbial genomes evolves over time as a function of the AT->GC and GC->AT mutation rates with Gaussian white noise disturbances. The model suggests that, in non-recombining bacteria, mutations can first accumulate unnoticeably and then abruptly fluctuate out of control. Thus, minuscule variations in mutation rates can suddenly become unsustainable, ultimately driving a species to extinction if not counteracted early enough. This model, which is suited specifically to symbiotic prokaryotes, conforms to scenarios predicted by Muller's ratchet and may suggest that this is not always a gradual, degrading process.
2404.14799
Cristian Axenie
Cristian Axenie
Antifragile control systems in neuronal processing: A sensorimotor perspective
null
null
null
null
q-bio.NC cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
The stability--robustness--resilience--adaptiveness continuum in neuronal processing follows a hierarchical structure that explains interactions and information processing among the different time scales. Interestingly, using "canonical" neuronal computational circuits, such as Homeostatic Activity Regulation, Winner-Take-All, and Hebbian Temporal Correlation Learning, one can extend the behaviour spectrum towards antifragility. Cast already in both probability theory and dynamical systems, antifragility can explain and define the interesting interplay among neural circuits, found, for instance, in sensorimotor control in the face of uncertainty and volatility. This perspective proposes a new framework to analyse and describe closed-loop neuronal processing using principles of antifragility, targeting sensorimotor control. Our objective is two-fold. First, we introduce antifragile control as a conceptual framework to quantify closed-loop neuronal network behaviours that gain from uncertainty and volatility. Second, we introduce neuronal network design principles, opening the path to neuromorphic implementations and transfer to technical systems.
[ { "created": "Tue, 23 Apr 2024 07:25:57 GMT", "version": "v1" } ]
2024-04-24
[ [ "Axenie", "Cristian", "" ] ]
The stability--robustness--resilience--adaptiveness continuum in neuronal processing follows a hierarchical structure that explains interactions and information processing among the different time scales. Interestingly, using "canonical" neuronal computational circuits, such as Homeostatic Activity Regulation, Winner-Take-All, and Hebbian Temporal Correlation Learning, one can extend the behaviour spectrum towards antifragility. Cast already in both probability theory and dynamical systems, antifragility can explain and define the interesting interplay among neural circuits, found, for instance, in sensorimotor control in the face of uncertainty and volatility. This perspective proposes a new framework to analyse and describe closed-loop neuronal processing using principles of antifragility, targeting sensorimotor control. Our objective is two-fold. First, we introduce antifragile control as a conceptual framework to quantify closed-loop neuronal network behaviours that gain from uncertainty and volatility. Second, we introduce neuronal network design principles, opening the path to neuromorphic implementations and transfer to technical systems.
2102.06676
Mitchel Colebank
Mitchel J. Colebank, M. Umar Qureshi, Sudarshan Rajagopal, Richard A. Krasuski, and Mette S. Olufsen
A multiscale model of vascular function in chronic thromboembolic pulmonary hypertension
41 pages, 9 figures, 4 tables
null
10.1152/ajpheart.00086.2021
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Chronic thromboembolic pulmonary hypertension (CTEPH) is caused by recurrent or unresolved pulmonary thromboemboli, leading to perfusion defects and increased arterial wave reflections. CTEPH treatment aims to reduce pulmonary arterial pressure and reestablish adequate lung perfusion, yet patients with distal lesions are inoperable by standard surgical intervention. Instead, these patients undergo balloon pulmonary angioplasty (BPA), a multi-session, minimally invasive surgery that disrupts the thromboembolic material within the vessel lumen using a catheter balloon. However, an integrative, holistic tool for identifying optimal target lesions for treatment is still lacking. To address this insufficiency, we simulate CTEPH hemodynamics and BPA therapy using a multiscale fluid dynamics model. The large pulmonary arterial geometry is derived from a computed tomography (CT) image, whereas a fractal tree represents the small vessels. We model ring- and web-like lesions, common in CTEPH, and simulate normotensive conditions and four CTEPH disease scenarios; the latter includes both large artery lesions and vascular remodeling. BPA therapy is simulated by simultaneously reducing lesion severity in three locations. Our predictions mimic severe CTEPH, manifested by an increase in mean proximal pulmonary arterial pressure above 20 mmHg and prominent wave reflections. Both flow and pressure decrease in vessels distal to the lesions and increase in unobstructed vascular regions. We use the main pulmonary artery (MPA) pressure, a wave reflection index, and a measure of flow heterogeneity to select optimal target lesions for BPA. In summary, this study provides a multiscale, image-to-hemodynamics pipeline for BPA therapy planning for inoperable CTEPH patients.
[ { "created": "Fri, 12 Feb 2021 18:28:31 GMT", "version": "v1" }, { "created": "Tue, 1 Jun 2021 13:43:17 GMT", "version": "v2" } ]
2021-08-16
[ [ "Colebank", "Mitchel J.", "" ], [ "Qureshi", "M. Umar", "" ], [ "Rajagopal", "Sudarshan", "" ], [ "Krasuski", "Richard A.", "" ], [ "Olufsen", "Mette S.", "" ] ]
Chronic thromboembolic pulmonary hypertension (CTEPH) is caused by recurrent or unresolved pulmonary thromboemboli, leading to perfusion defects and increased arterial wave reflections. CTEPH treatment aims to reduce pulmonary arterial pressure and reestablish adequate lung perfusion, yet patients with distal lesions are inoperable by standard surgical intervention. Instead, these patients undergo balloon pulmonary angioplasty (BPA), a multi-session, minimally invasive surgery that disrupts the thromboembolic material within the vessel lumen using a catheter balloon. However, an integrative, holistic tool for identifying optimal target lesions for treatment is still lacking. To address this insufficiency, we simulate CTEPH hemodynamics and BPA therapy using a multiscale fluid dynamics model. The large pulmonary arterial geometry is derived from a computed tomography (CT) image, whereas a fractal tree represents the small vessels. We model ring- and web-like lesions, common in CTEPH, and simulate normotensive conditions and four CTEPH disease scenarios; the latter includes both large artery lesions and vascular remodeling. BPA therapy is simulated by simultaneously reducing lesion severity in three locations. Our predictions mimic severe CTEPH, manifested by an increase in mean proximal pulmonary arterial pressure above 20 mmHg and prominent wave reflections. Both flow and pressure decrease in vessels distal to the lesions and increase in unobstructed vascular regions. We use the main pulmonary artery (MPA) pressure, a wave reflection index, and a measure of flow heterogeneity to select optimal target lesions for BPA. In summary, this study provides a multiscale, image-to-hemodynamics pipeline for BPA therapy planning for inoperable CTEPH patients.
1202.0187
Jose A. Cuesta
Jelena Gruji\'c, Jos\'e A. Cuesta and Angel S\'anchez
On the coexistence of cooperators, defectors and conditional cooperators in the multiplayer iterated Prisoner's Dilemma
12 pages, 10 figures, uses elsart.cls
Journal of Theoretical Biology 300, 299-308 (2012)
10.1016/j.jtbi.2012.02.003
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experimental evidence [Gruji\'c et al., PLoS ONE 5, e13749 (2010)] on the spatial Prisoner's Dilemma suggests that players choosing to cooperate or not on the basis of their previous action and the actions of their neighbors coexist with steady defectors and cooperators. We here study the coexistence of these three strategies in the multiplayer iterated Prisoner's Dilemma by means of the replicator dynamics. We consider groups with n = 2, 3, 4 and 5 players and compute the payoffs to every type of player as the limit of a Markov chain where the transition probabilities between actions are found from the corresponding strategies. We show that for group sizes up to n = 4 there exists an interior point in which the three strategies coexist, the corresponding basin of attraction decreasing with increasing number of players, whereas we have not been able to locate such a point for n = 5. We analytically show that in the infinite n limit no interior points can arise. We conclude by discussing the implications of this theoretical approach on the behavior observed in experiments.
[ { "created": "Wed, 1 Feb 2012 15:22:21 GMT", "version": "v1" } ]
2015-02-18
[ [ "Grujić", "Jelena", "" ], [ "Cuesta", "José A.", "" ], [ "Sánchez", "Angel", "" ] ]
Recent experimental evidence [Gruji\'c et al., PLoS ONE 5, e13749 (2010)] on the spatial Prisoner's Dilemma suggests that players choosing to cooperate or not on the basis of their previous action and the actions of their neighbors coexist with steady defectors and cooperators. We here study the coexistence of these three strategies in the multiplayer iterated Prisoner's Dilemma by means of the replicator dynamics. We consider groups with n = 2, 3, 4 and 5 players and compute the payoffs to every type of player as the limit of a Markov chain where the transition probabilities between actions are found from the corresponding strategies. We show that for group sizes up to n = 4 there exists an interior point in which the three strategies coexist, the corresponding basin of attraction decreasing with increasing number of players, whereas we have not been able to locate such a point for n = 5. We analytically show that in the infinite n limit no interior points can arise. We conclude by discussing the implications of this theoretical approach on the behavior observed in experiments.