Dataset schema (per-column type and min/max length, or number of distinct classes, as reported by the dataset viewer):

field            type            min     max
id               stringlengths   9       13
submitter        stringlengths   4       48
authors          stringlengths   4       9.62k
title            stringlengths   4       343
comments         stringlengths   2       480
journal-ref      stringlengths   9       309
doi              stringlengths   12      138
report-no        stringclasses   277 values
categories       stringlengths   8       87
license          stringclasses   9 values
orig_abstract    stringlengths   27      3.76k
versions         listlengths     1       15
update_date      stringlengths   10      10
authors_parsed   listlengths     1       147
abstract         stringlengths   24      3.75k
2406.04386
Julia Bicker
Julia Bicker, Ren\'e Schmieding, Michael Meyer-Hermann, Martin J. K\"uhn
Hybrid metapopulation agent-based epidemiological models for efficient insight on the individual scale: a contribution to green computing
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging infectious diseases and climate change are two of the major challenges in 21st century. Although over the past decades, highly-resolved mathematical models have contributed in understanding dynamics of infectious diseases and are of great aid when it comes to finding suitable intervention measures, they may need substantial computational effort and produce significant $CO_2$ emissions. Two popular modeling approaches for mitigating infectious disease dynamics are agent-based and differential equation-based models. Agent-based models (ABMs) offer an arbitrary level of detail and are thus able to capture heterogeneous human contact behavior and mobility patterns. However, insights on individual-level dynamics come with high computational effort that scales with the number of agents. On the other hand, (differential) equation-based models (EBMs) are computationally efficient even for large populations due to their complexity being independent of the population size. Yet, equation-based models are restricted in their granularity as they assume a well-mixed population. To manage the trade-off between complexity and detail, we propose spatial- and temporal-hybrid models that use agent-based models only in an area or time frame of interest. To account for relevant influences to disease dynamics, we use EBMs, only adding moderate computational costs. Our hybridization approach demonstrates significant reduction in computational effort by up to 98 % -- without losing the required depth in information in the focus frame. The hybrid models used in our numerical simulations are based on two recently proposed models, however, any suitable combination of ABM-EBM could be used, too. Concluding, hybrid epidemiological models can provide insights on the individual scale where necessary, using aggregated models where possible, thereby making an important contribution to green computing.
[ { "created": "Thu, 6 Jun 2024 09:09:42 GMT", "version": "v1" } ]
2024-06-10
[ [ "Bicker", "Julia", "" ], [ "Schmieding", "René", "" ], [ "Meyer-Hermann", "Michael", "" ], [ "Kühn", "Martin J.", "" ] ]
Emerging infectious diseases and climate change are two of the major challenges of the 21st century. Although over the past decades highly-resolved mathematical models have contributed to understanding the dynamics of infectious diseases and are of great aid in finding suitable intervention measures, they may need substantial computational effort and produce significant $CO_2$ emissions. Two popular modeling approaches for mitigating infectious disease dynamics are agent-based and differential equation-based models. Agent-based models (ABMs) offer an arbitrary level of detail and are thus able to capture heterogeneous human contact behavior and mobility patterns. However, insights on individual-level dynamics come with high computational effort that scales with the number of agents. On the other hand, (differential) equation-based models (EBMs) are computationally efficient even for large populations because their complexity is independent of the population size. Yet, equation-based models are restricted in their granularity as they assume a well-mixed population. To manage the trade-off between complexity and detail, we propose spatial- and temporal-hybrid models that use agent-based models only in an area or time frame of interest. To account for relevant influences on disease dynamics elsewhere, we use EBMs, adding only moderate computational costs. Our hybridization approach achieves a significant reduction in computational effort -- by up to 98% -- without losing the required depth of information in the focus frame. The hybrid models used in our numerical simulations are based on two recently proposed models; however, any suitable ABM-EBM combination could be used. In conclusion, hybrid epidemiological models can provide insights on the individual scale where necessary while using aggregated models where possible, thereby making an important contribution to green computing.
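To make the spatial hybridization idea in the abstract above concrete, here is a minimal sketch (not the paper's actual implementation): a stochastic agent-based SIR model for a small focus region coupled to a cheap ODE SIR model for the surrounding population. All parameter values and the simple one-way coupling term (imported infection pressure from the EBM region) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, gamma, dt = 0.3, 0.1, 0.1   # transmission rate, recovery rate, time step
mix = 0.01                        # assumed mobility coupling between regions

# EBM region: aggregated S/I/R compartments (population 100,000)
S_e, I_e, R_e = 99_990.0, 10.0, 0.0

# ABM "focus" region: one state per agent (0=S, 1=I, 2=R; population 1,000)
agents = np.zeros(1_000, dtype=int)
agents[:5] = 1

for _ in range(1_000):
    N_e = S_e + I_e + R_e
    # EBM update: explicit Euler step on the SIR ODEs
    new_inf = beta * S_e * I_e / N_e * dt
    new_rec = gamma * I_e * dt
    S_e, I_e, R_e = S_e - new_inf, I_e + new_inf - new_rec, R_e + new_rec

    # ABM update: infection pressure = local prevalence + imported EBM pressure
    was_inf = agents == 1
    pressure = was_inf.mean() + mix * I_e / N_e
    p_inf = 1.0 - np.exp(-beta * pressure * dt)
    sus = agents == 0
    agents[sus & (rng.random(agents.size) < p_inf)] = 1
    agents[was_inf & (rng.random(agents.size) < 1 - np.exp(-gamma * dt))] = 2

print("EBM final recovered:", round(R_e),
      "| ABM attack rate:", (agents == 2).mean())
```

The cost saving in the paper comes from exactly this asymmetry: the ODE block's cost is constant in population size, while the agent loop runs only over the small focus region.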
1601.07637
Ke-Fei Cao
Huan Wang, Jing-Bo Hu, Chuan-Yun Xu, De-Hai Zhang, Qian Yan, Ming Xu, Ke-Fei Cao, Xu-Sheng Zhang
A pathway-based network analysis of hypertension-related genes
14 pages, 7 figures, 5 tables; a revised version of an article published in Physica A: Statistical Mechanics and its Applications, Physica A 444 (2016) 928--939; Corrigendum: Physica A 447 (2016) 569--570. arXiv admin note: text overlap with arXiv:1601.07192
Physica A 447 (2016) 569--570
10.1016/j.physa.2015.10.048
null
q-bio.MN physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex network approach has become an effective way to describe interrelationships among large amounts of biological data, which is especially useful in finding core functions and global behavior of biological systems. Hypertension is a complex disease caused by many reasons including genetic, physiological, psychological and even social factors. In this paper, based on the information of biological pathways, we construct a network model of hypertension-related genes of the salt-sensitive rat to explore the interrelationship between genes. Statistical and topological characteristics show that the network has the small-world but not scale-free property, and exhibits a modular structure, revealing compact and complex connections among these genes. By the threshold of integrated centrality larger than 0.71, seven key hub genes are found: Jun, Rps6kb1, Cycs, Creb3l2, Cdk4, Actg1 and RT1-Da. These genes should play an important role in hypertension, suggesting that the treatment of hypertension should focus on the combination of drugs on multiple genes.
[ { "created": "Thu, 28 Jan 2016 04:17:13 GMT", "version": "v1" } ]
2016-01-29
[ [ "Wang", "Huan", "" ], [ "Hu", "Jing-Bo", "" ], [ "Xu", "Chuan-Yun", "" ], [ "Zhang", "De-Hai", "" ], [ "Yan", "Qian", "" ], [ "Xu", "Ming", "" ], [ "Cao", "Ke-Fei", "" ], [ "Zhang", "Xu-Sheng", ...
The complex network approach has become an effective way to describe interrelationships among large amounts of biological data, and is especially useful for finding core functions and global behavior of biological systems. Hypertension is a complex disease with many causes, including genetic, physiological, psychological and even social factors. In this paper, based on the information of biological pathways, we construct a network model of hypertension-related genes of the salt-sensitive rat to explore the interrelationships between genes. Statistical and topological characteristics show that the network has the small-world but not the scale-free property, and exhibits a modular structure, revealing compact and complex connections among these genes. Using a threshold of integrated centrality greater than 0.71, seven key hub genes are identified: Jun, Rps6kb1, Cycs, Creb3l2, Cdk4, Actg1 and RT1-Da. These genes should play an important role in hypertension, suggesting that its treatment should focus on combinations of drugs acting on multiple genes.
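A sketch of this style of analysis on a toy gene network, assuming networkx; the edge list below and the plain average of three standard centralities (standing in for the paper's "integrated centrality") are illustrative assumptions only.

```python
import networkx as nx

# Toy interaction network over a few of the genes named in the abstract above
edges = [("Jun", "Cycs"), ("Jun", "Cdk4"), ("Cycs", "Cdk4"),
         ("Rps6kb1", "Jun"), ("Creb3l2", "Jun"), ("Actg1", "Cycs")]
G = nx.Graph(edges)

# Small-world diagnostics: high clustering, short average path length
print("clustering:", nx.average_clustering(G))
print("avg shortest path:", nx.average_shortest_path_length(G))

# Combine degree, betweenness, and closeness into one normalized score
deg = nx.degree_centrality(G)
bet = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
integrated = {g: (deg[g] + bet[g] + clo[g]) / 3 for g in G}

hubs = [g for g, c in integrated.items() if c > 0.71]  # threshold from the abstract
print(sorted(integrated.items(), key=lambda kv: -kv[1]))
print("hubs above 0.71:", hubs)
```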
2003.13071
J\"orn Sesterhenn
J\"orn Lothar Sesterhenn
Adjoint-based Data Assimilation of an Epidemiology Model for the Covid-19 Pandemic in 2020
null
null
10.5281/zenodo.3732292
null
q-bio.PE cs.NA math.NA math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data assimilation is used to optimally fit a classical epidemiology model to the Johns Hopkins data of the Covid-19 pandemic. The optimisation is based on the confirmed cases and confirmed deaths. This is the only data available with reasonable accuracy. Infection and recovery rates can be infered from the model as well as the model parameters. The parameters can be linked with government actions or events like the end of the holiday season. Based on this numbers predictions for the future can be made and control targets specified. With other words: We look for a solution to a given model which fits the given data in an optimal sense. Having that solution, we have all parameters.
[ { "created": "Sun, 29 Mar 2020 16:49:39 GMT", "version": "v1" } ]
2020-03-31
[ [ "Sesterhenn", "Jörn Lothar", "" ] ]
Data assimilation is used to optimally fit a classical epidemiology model to the Johns Hopkins data of the Covid-19 pandemic. The optimisation is based on the confirmed cases and confirmed deaths, the only data available with reasonable accuracy. Infection and recovery rates can be inferred from the model, as can the model parameters. The parameters can be linked with government actions or events like the end of the holiday season. Based on these numbers, predictions for the future can be made and control targets specified. In other words: we look for a solution to a given model which fits the given data in an optimal sense. Having that solution, we have all parameters.
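The core idea above, finding model parameters so that the model trajectory matches observed case counts, can be sketched with a plain least-squares fit of an SIR model. The synthetic "data" and the direct least-squares objective are assumptions for illustration; the paper uses an adjoint-based variational method rather than this black-box optimiser.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

# Synthetic cumulative confirmed cases (I + R) with observation noise
t_obs = np.arange(0, 60, 1.0)
truth = solve_ivp(sir, (0, 60), [0.999, 0.001, 0.0], t_eval=t_obs,
                  args=(0.35, 0.1)).y
noise = 0.05 * np.random.default_rng(1).standard_normal(t_obs.size)
cases = (truth[1] + truth[2]) * (1 + noise)

def residuals(p):
    beta, gamma = p
    sol = solve_ivp(sir, (0, 60), [0.999, 0.001, 0.0], t_eval=t_obs,
                    args=(beta, gamma))
    return sol.y[1] + sol.y[2] - cases  # model cumulative infections vs data

fit = least_squares(residuals, x0=[0.5, 0.2], bounds=([0, 0], [2, 1]))
print("estimated beta, gamma:", fit.x)
```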
1808.03166
Jos\'e Halloy
Leo Cazenille, Nicolas Bredeche and Jos\'e Halloy
Evolutionary optimisation of neural network models for fish collective behaviours in mixed groups of robots and zebrafish
10 pages, 4 figures
Lecture Notes in Computer Science, vol 10928. Springer, Cham, 2018
10.1007/978-3-319-95972-6_10
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animal and robot social interactions are interesting both for ethological studies and robotics. On the one hand, the robots can be tools and models to analyse animal collective behaviours, on the other hand, the robots and their artificial intelligence are directly confronted and compared to the natural animal collective intelligence. The first step is to design robots and their behavioural controllers that are capable of socially interact with animals. Designing such behavioural bio-mimetic controllers remains an important challenge as they have to reproduce the animal behaviours and have to be calibrated on experimental data. Most animal collective behavioural models are designed by modellers based on experimental data. This process is long and costly because it is difficult to identify the relevant behavioural features that are then used as a priori knowledge in model building. Here, we want to model the fish individual and collective behaviours in order to develop robot controllers. We explore the use of optimised black-box models based on artificial neural networks (ANN) to model fish behaviours. While the ANN may not be biomimetic but rather bio-inspired, they can be used to link perception to motor responses. These models are designed to be implementable as robot controllers to form mixed-groups of fish and robots, using few a priori knowledge of the fish behaviours. We present a methodology with multilayer perceptron or echo state networks that are optimised through evolutionary algorithms to model accurately the fish individual and collective behaviours in a bounded rectangular arena. We assess the biomimetism of the generated models and compare them to the fish experimental behaviours.
[ { "created": "Thu, 9 Aug 2018 13:54:23 GMT", "version": "v1" } ]
2019-02-12
[ [ "Cazenille", "Leo", "" ], [ "Bredeche", "Nicolas", "" ], [ "Halloy", "José", "" ] ]
Animal and robot social interactions are interesting both for ethological studies and robotics. On the one hand, robots can be tools and models to analyse animal collective behaviours; on the other hand, robots and their artificial intelligence are directly confronted with and compared to natural animal collective intelligence. The first step is to design robots and behavioural controllers that are capable of socially interacting with animals. Designing such bio-mimetic behavioural controllers remains an important challenge, as they must reproduce the animal behaviours and be calibrated on experimental data. Most animal collective behavioural models are designed by modellers based on experimental data. This process is long and costly because it is difficult to identify the relevant behavioural features that are then used as a priori knowledge in model building. Here, we model fish individual and collective behaviours in order to develop robot controllers. We explore the use of optimised black-box models based on artificial neural networks (ANNs) to model fish behaviours. While ANNs may be bio-inspired rather than biomimetic, they can be used to link perception to motor responses. These models are designed to be implementable as robot controllers to form mixed groups of fish and robots, using little a priori knowledge of the fish behaviours. We present a methodology with multilayer perceptrons or echo state networks that are optimised through evolutionary algorithms to accurately model fish individual and collective behaviours in a bounded rectangular arena. We assess the biomimetism of the generated models and compare them to the fish experimental behaviours.
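The optimisation loop described above can be illustrated with a tiny (mu+lambda) evolution strategy that evolves the weights of a small feed-forward network so its outputs match reference input-output data. The network size, fitness function, and stand-in data are toy assumptions; the paper optimises much richer MLP and echo state network controllers against real behavioural recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
n_w = n_in * n_hid + n_hid * n_out     # total genome length

def forward(w, x):
    # Decode a flat genome into a one-hidden-layer network and evaluate it
    W1 = w[: n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return np.tanh(x @ W1) @ W2

X = rng.standard_normal((100, n_in))                       # stand-in perception inputs
Y = np.tanh(X @ rng.standard_normal((n_in, n_out)))        # stand-in motor responses

def fitness(w):                                            # negative mean squared error
    return -np.mean((forward(w, X) - Y) ** 2)

pop = rng.standard_normal((50, n_w))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]                  # keep the 10 best genomes
    children = elite[rng.integers(0, 10, 40)] \
        + 0.1 * rng.standard_normal((40, n_w))             # mutate copies of the elite
    pop = np.vstack([elite, children])

print("best fitness:", max(fitness(w) for w in pop))
```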
1103.2605
Jesus M. Cortes
J.M. Cortes and D. Marinazzo and P. Series and M.W. Oram and T.J. Sejnowski and M.C.W. van Rossum
The effect of neural adaptation of population coding accuracy
35 pages, 8 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most neurons in the primary visual cortex initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. The functional consequences of adaptation are unclear. Typically a reduction of firing rate would reduce single neuron accuracy as less spikes are available for decoding, but it has been suggested that on the population level, adaptation increases coding accuracy. This question requires careful analysis as adaptation not only changes the firing rates of neurons, but also the neural variability and correlations between neurons, which affect coding accuracy as well. We calculate the coding accuracy using a computational model that implements two forms of adaptation: spike frequency adaptation and synaptic adaptation in the form of short-term synaptic plasticity. We find that the net effect of adaptation is subtle and heterogeneous. Depending on adaptation mechanism and test stimulus, adaptation can either increase or decrease coding accuracy. We discuss the neurophysiological and psychophysical implications of the findings and relate it to published experimental data.
[ { "created": "Mon, 14 Mar 2011 09:02:42 GMT", "version": "v1" } ]
2011-03-15
[ [ "Cortes", "J. M.", "" ], [ "Marinazzo", "D.", "" ], [ "Series", "P.", "" ], [ "Oram", "M. W.", "" ], [ "Sejnowski", "T. J.", "" ], [ "van Rossum", "M. C. W.", "" ] ]
Most neurons in the primary visual cortex initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. The functional consequences of adaptation are unclear. Typically, a reduction of firing rate would reduce single-neuron accuracy as fewer spikes are available for decoding, but it has been suggested that at the population level, adaptation increases coding accuracy. This question requires careful analysis, as adaptation not only changes the firing rates of neurons but also the neural variability and the correlations between neurons, which affect coding accuracy as well. We calculate the coding accuracy using a computational model that implements two forms of adaptation: spike frequency adaptation and synaptic adaptation in the form of short-term synaptic plasticity. We find that the net effect of adaptation is subtle and heterogeneous. Depending on the adaptation mechanism and the test stimulus, adaptation can either increase or decrease coding accuracy. We discuss the neurophysiological and psychophysical implications of the findings and relate them to published experimental data.
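One standard way to quantify "population coding accuracy" in this setting is the linear Fisher information J = f'(theta)^T Sigma^{-1} f'(theta), where f are the tuning curves and Sigma the noise covariance; adaptation then shows up as changes to both f and Sigma. The tuning shape, independent Poisson-like noise, and the crude gain-reduction model of adaptation below are all illustrative assumptions, not the paper's mechanistic model.

```python
import numpy as np

theta = np.pi / 4                           # test stimulus (orientation)
prefs = np.linspace(0, np.pi, 32, endpoint=False)
gain, kappa = 20.0, 2.0

def tuning(th):                             # von-Mises-like orientation tuning
    return gain * np.exp(kappa * (np.cos(2 * (th - prefs)) - 1))

# Derivative of the tuning curves at the test stimulus (finite differences)
eps = 1e-4
fprime = (tuning(theta + eps) - tuning(theta - eps)) / (2 * eps)

rates = tuning(theta)
Sigma = np.diag(rates)                      # assume independent Poisson-like noise
J = fprime @ np.linalg.solve(Sigma, fprime)
print("linear Fisher information:", J)

# Crude adaptation model: uniform 40% gain reduction scales f, f', and Sigma
adapted = 0.6 * rates
J_adapt = (0.6 * fprime) @ np.linalg.solve(np.diag(adapted), 0.6 * fprime)
print("after adaptation:", J_adapt)         # reduced accuracy under pure gain loss
```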
1309.2398
Andrea De Martino
A. De Martino, D. De Martino, R. Mulet, A. Pagnani
Identifying all irreducible conserved metabolite pools in genome-scale metabolic networks: a general method and the case of Escherichia coli
10 pages, 4 tables, one figure
null
null
null
q-bio.MN cond-mat.dis-nn physics.bio-ph physics.chem-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The stoichiometry of metabolic networks usually gives rise to a family of conservation laws for the aggregate concentration of specific pools of metabolites, which not only constrain the dynamics of the network, but also provide key insight into a cell's production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all integer solutions to an NP-hard programming problem. Here we propose a novel and efficient computational strategy that combines Monte Carlo, message passing, and relaxation algorithms to compute the complete set of irreducible integer conservation laws of a given stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. The method is deployed for the analysis of the complete set of irreducible integer pools of two large-scale reconstructions of the metabolism of the bacterium Escherichia coli in different growth media. In addition, we uncover a scaling relation that links the size of the irreducible pool basis to the number of metabolites, for which we present an analytical explanation.
[ { "created": "Tue, 10 Sep 2013 07:46:22 GMT", "version": "v1" } ]
2013-09-11
[ [ "De Martino", "A.", "" ], [ "De Martino", "D.", "" ], [ "Mulet", "R.", "" ], [ "Pagnani", "A.", "" ] ]
The stoichiometry of metabolic networks usually gives rise to a family of conservation laws for the aggregate concentration of specific pools of metabolites, which not only constrain the dynamics of the network, but also provide key insight into a cell's production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all integer solutions to an NP-hard programming problem. Here we propose a novel and efficient computational strategy that combines Monte Carlo, message passing, and relaxation algorithms to compute the complete set of irreducible integer conservation laws of a given stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. The method is deployed for the analysis of the complete set of irreducible integer pools of two large-scale reconstructions of the metabolism of the bacterium Escherichia coli in different growth media. In addition, we uncover a scaling relation that links the size of the irreducible pool basis to the number of metabolites, for which we present an analytical explanation.
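The conserved pools discussed in the abstract above are integer vectors y >= 0 with y^T S = 0, where S is the stoichiometric matrix. For a tiny network the left null space can be enumerated directly with exact arithmetic; the 3-metabolite toy matrix below is an assumption for illustration, and this direct approach sidesteps the NP-hard nonnegativity search that the paper's Monte Carlo / message-passing machinery makes tractable at genome scale.

```python
from sympy import Matrix

# Reactions: A + B -> C and C -> A + B (rows = metabolites A, B, C; columns = reactions)
S = Matrix([[-1,  1],    # A
            [-1,  1],    # B
            [ 1, -1]])   # C

for v in S.T.nullspace():        # basis of the left null space {y : y^T S = 0}
    print(list(v))
# Nonnegative integer combinations of these basis vectors, e.g. A+C and B+C,
# are the conserved moiety pools; finding all *irreducible nonnegative* ones
# is the hard step the paper's algorithm addresses.
```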
1105.0695
Stanley Lazic
Stanley E. Lazic
Modelling hippocampal neurogenesis across the lifespan in seven species
In press at Neurobiology of Aging
null
10.1016/j.neurobiolaging.2011.03.008
null
q-bio.NC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The aim of this study was to estimate the number of new cells and neurons added to the dentate gyrus across the lifespan, and to compare the rate of age-associated decline in neurogenesis across species. Data from mice (Mus musculus), rats (Rattus norvegicus), lesser hedgehog tenrecs (Echinops telfairi), macaques (Macaca mulatta), marmosets (Callithrix jacchus), tree shrews (Tupaia belangeri), and humans (Homo sapiens) were extracted from twenty one data sets published in fourteen different papers. ANOVA, exponential, Weibull, and power models were fit to the data to determine which best described the relationship between age and neurogenesis. Exponential models provided a suitable fit and were used to estimate the relevant parameters. The rate of decrease of neurogenesis correlated with species longevity r = 0.769, p = 0.043), but not body mass or basal metabolic rate. Of all the cells added postnatally to the mouse dentate gyrus, only 8.5% (95% CI = 1.0% to 14.7%) of these will be added after middle age. In addition, only 5.7% (95% CI = 0.7% to 9.9%) of the existing cell population turns over from middle age onwards. Thus, relatively few new cells are added for much of an animal's life, and only a proportion of these will mature into functional neurons.
[ { "created": "Tue, 3 May 2011 21:30:46 GMT", "version": "v1" } ]
2011-05-05
[ [ "Lazic", "Stanley E.", "" ] ]
The aim of this study was to estimate the number of new cells and neurons added to the dentate gyrus across the lifespan, and to compare the rate of age-associated decline in neurogenesis across species. Data from mice (Mus musculus), rats (Rattus norvegicus), lesser hedgehog tenrecs (Echinops telfairi), macaques (Macaca mulatta), marmosets (Callithrix jacchus), tree shrews (Tupaia belangeri), and humans (Homo sapiens) were extracted from twenty-one data sets published in fourteen different papers. ANOVA, exponential, Weibull, and power models were fit to the data to determine which best described the relationship between age and neurogenesis. Exponential models provided a suitable fit and were used to estimate the relevant parameters. The rate of decrease of neurogenesis correlated with species longevity (r = 0.769, p = 0.043), but not with body mass or basal metabolic rate. Of all the cells added postnatally to the mouse dentate gyrus, only 8.5% (95% CI = 1.0% to 14.7%) will be added after middle age. In addition, only 5.7% (95% CI = 0.7% to 9.9%) of the existing cell population turns over from middle age onwards. Thus, relatively few new cells are added for much of an animal's life, and only a proportion of these will mature into functional neurons.
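The model-comparison step described above can be sketched as fitting an exponential decline y(age) = a * exp(-b * age) against a power-law alternative and comparing them by AIC. The synthetic, mouse-like data points and the AIC-based comparison are assumptions for illustration; the study fits published counts from seven species.

```python
import numpy as np
from scipy.optimize import curve_fit

age = np.array([1, 2, 4, 8, 12, 18, 24], dtype=float)   # months (toy data)
y = 1000 * np.exp(-0.25 * age) \
    * (1 + 0.1 * np.random.default_rng(2).standard_normal(age.size))

def expo(t, a, b):
    return a * np.exp(-b * t)

def power(t, a, b):
    return a * t ** (-b)

def aic(model, params):
    # AIC up to an additive constant: n*log(RSS/n) + 2k
    rss = np.sum((y - model(age, *params)) ** 2)
    return age.size * np.log(rss / age.size) + 2 * len(params)

p_exp, _ = curve_fit(expo, age, y, p0=(1000, 0.2))
p_pow, _ = curve_fit(power, age, y, p0=(1000, 1.0))
print("exponential AIC:", aic(expo, p_exp), " power AIC:", aic(power, p_pow))
print("fitted decline rate b:", p_exp[1])
```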
1505.00774
Vishal Sahni Dr
Dayal Pyari Srivastava, Vishal Sahni, Prem Saran Satsangi
Modelling Microtubules in the Brain as n-qudit Quantum Hopfield Network and Beyond
null
International Journal of General Systems, Vol. 45, Issue 1, 2016
10.1080/03081079.2015.1076405
null
q-bio.NC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.
[ { "created": "Sat, 2 May 2015 11:54:31 GMT", "version": "v1" } ]
2017-03-28
[ [ "Srivastava", "Dayal Pyari", "" ], [ "Sahni", "Vishal", "" ], [ "Satsangi", "Prem Saran", "" ] ]
The scientific approach to understanding the nature of consciousness revolves around the study of the human brain. Neurobiological studies that compare the nervous systems of different species have accorded the highest place to humans on account of various factors, including a highly developed cortical area comprising approximately 100 billion neurons that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction, and the Penrose-Hameroff Orch-OR theory is one of the most promising ones. Inspired by the Penrose-Hameroff Orch-OR theory, Behrman et al. (Behrman, 2006) simulated a quantum Hopfield neural network with the structure of a microtubule. They used an extremely simplified model of the tubulin dimers, with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits, presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.
1611.10003
Tom Anderson
Tom A. F. Anderson, C.-H. Ruan
Vocabulary and the Brain: Evidence from Neuroimaging Studies
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In summary of the research findings presented in this paper, various brain regions are correlated with vocabulary and vocabulary acquisition. Semantic associations for vocabulary seem to be located near brain areas that vary according to the type of vocabulary, e.g. ventral temporal regions important for words for things that can be seen. Semantic processing is believed to be strongly associated with the ANG. Phonological ability has been closely related to the anterior surfaces of the SMG. Pathways through the posterior SMG are thought to link the anterior SMG and the ANG. In vocabulary tasks, mediotemporal structures may be related to long-term memory processing, with left hippocampal and parahippocampal regions related to long-term and working memory, respectively. Precentral structures are associated with phonological retrieval. Furthermore, many more regions of the brain are of interest in vocabulary tasks, particularly in areas important for visual and auditory processing. Furthermore, differences between brain anatomies can be attributed to vocabulary demands of different languages.
[ { "created": "Wed, 30 Nov 2016 05:17:11 GMT", "version": "v1" } ]
2016-12-01
[ [ "Anderson", "Tom A. F.", "" ], [ "Ruan", "C. -H.", "" ] ]
In summary of the research findings presented in this paper, various brain regions are correlated with vocabulary and vocabulary acquisition. Semantic associations for vocabulary seem to be located near brain areas that vary according to the type of vocabulary, e.g. ventral temporal regions are important for words for things that can be seen. Semantic processing is believed to be strongly associated with the ANG. Phonological ability has been closely related to the anterior surfaces of the SMG. Pathways through the posterior SMG are thought to link the anterior SMG and the ANG. In vocabulary tasks, mediotemporal structures may be related to long-term memory processing, with left hippocampal and parahippocampal regions related to long-term and working memory, respectively. Precentral structures are associated with phonological retrieval. Furthermore, many more regions of the brain are of interest in vocabulary tasks, particularly areas important for visual and auditory processing. Finally, differences between brain anatomies can be attributed to the vocabulary demands of different languages.
2008.11883
Lin Yang
Jiacheng Li, Xiaoliang Ma, Shuai Guo, Chengyu Hou, Liping Shi, Hongchi Zhang, Bing Zheng, Chencheng Liao, Lin Yang, Lin Ye, Xiaodong He
A hydrophobic-interaction-based mechanism trigger docking between the SARS CoV 2 spike and angiotensin-converting enzyme 2
null
null
null
null
q-bio.BM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent experimental study found that the binding affinity between the cellular receptor human angiotensin converting enzyme 2 (ACE2) and receptor-binding domain (RBD) in spike (S) protein of novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is more than 10-fold higher than that of the original severe acute respiratory syndrome coronavirus (SARS-CoV). However, main-chain structures of the SARS-CoV-2 RBD are almost the same with that of the SARS-CoV RBD. Understanding physical mechanism responsible for the outstanding affinity between the SARS-CoV-2 S and ACE2 is the "urgent challenge" for developing blockers, vaccines and therapeutic antibodies against the coronavirus disease 2019 (COVID-19) pandemic. Considering the mechanisms of hydrophobic interaction, hydration shell, surface tension, and the shielding effect of water molecules, this study reveals a hydrophobic-interaction-based mechanism by means of which SARS-CoV-2 S and ACE2 bind together in an aqueous environment. The hydrophobic interaction between the SARS-CoV-2 S and ACE2 protein is found to be significantly greater than that between SARS-CoV S and ACE2. At the docking site, the hydrophobic portions of the hydrophilic side chains of SARS-CoV-2 S are found to be involved in the hydrophobic interaction between SARS-CoV-2 S and ACE2. We propose a method to design live attenuated viruses by mutating several key amino acid residues of the spike protein to decrease the hydrophobic surface areas at the docking site. Mutation of a small amount of residues can greatly reduce the hydrophobic binding of the coronavirus to the receptor, which may be significant reduce infectivity and transmissibility of the virus.
[ { "created": "Thu, 27 Aug 2020 01:54:20 GMT", "version": "v1" } ]
2020-08-28
[ [ "Li", "Jiacheng", "" ], [ "Ma", "Xiaoliang", "" ], [ "Guo", "Shuai", "" ], [ "Hou", "Chengyu", "" ], [ "Shi", "Liping", "" ], [ "Zhang", "Hongchi", "" ], [ "Zheng", "Bing", "" ], [ "Liao", "Chen...
A recent experimental study found that the binding affinity between the cellular receptor human angiotensin-converting enzyme 2 (ACE2) and the receptor-binding domain (RBD) of the spike (S) protein of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is more than 10-fold higher than that of the original severe acute respiratory syndrome coronavirus (SARS-CoV). However, the main-chain structure of the SARS-CoV-2 RBD is almost the same as that of the SARS-CoV RBD. Understanding the physical mechanism responsible for the outstanding affinity between the SARS-CoV-2 S and ACE2 is the "urgent challenge" for developing blockers, vaccines and therapeutic antibodies against the coronavirus disease 2019 (COVID-19) pandemic. Considering the mechanisms of hydrophobic interaction, hydration shell, surface tension, and the shielding effect of water molecules, this study reveals a hydrophobic-interaction-based mechanism by means of which SARS-CoV-2 S and ACE2 bind together in an aqueous environment. The hydrophobic interaction between the SARS-CoV-2 S and ACE2 protein is found to be significantly greater than that between SARS-CoV S and ACE2. At the docking site, the hydrophobic portions of the hydrophilic side chains of SARS-CoV-2 S are found to be involved in the hydrophobic interaction between SARS-CoV-2 S and ACE2. We propose a method to design live attenuated viruses by mutating several key amino acid residues of the spike protein to decrease the hydrophobic surface areas at the docking site. Mutation of a small number of residues can greatly reduce the hydrophobic binding of the coronavirus to the receptor, which may significantly reduce the infectivity and transmissibility of the virus.
0904.1627
Hidetsugu Sakaguchi
Hidetsugu Sakaguchi
Ratio control in a cascade model of cell differentiation
13 pages, 7 figures
null
10.1103/PhysRevE.79.051916
null
q-bio.CB cond-mat.stat-mech q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a kind of reaction-diffusion equations for cell differentiation, which exhibits the Turing instability. If the diffusivity of some variables is set to be infinity, we get coupled competitive reaction-diffusion equations with a global feedback term. The size ratio of each cell type is controlled by a system parameter in the model. Finally, we extend the model to a cascade model of cell differentiation. A hierarchical spatial structure appears as a result of the cell differentiation. The size ratio of each cell type is also controlled by the system parameter.
[ { "created": "Fri, 10 Apr 2009 02:33:47 GMT", "version": "v1" } ]
2015-05-13
[ [ "Sakaguchi", "Hidetsugu", "" ] ]
We propose a class of reaction-diffusion equations for cell differentiation that exhibits the Turing instability. If the diffusivity of some variables is set to infinity, we obtain coupled competitive reaction-diffusion equations with a global feedback term. The size ratio of each cell type is controlled by a system parameter in the model. Finally, we extend the model to a cascade model of cell differentiation. A hierarchical spatial structure appears as a result of the cell differentiation, and the size ratio of each cell type is again controlled by the system parameter.
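The ratio-control mechanism above, a fast (effectively infinitely diffusing) inhibitor reduced to a global feedback term, can be sketched in one spatial dimension. The bistable logistic-type kinetics below are an illustrative choice, not the paper's equations; the point is that the global term shifts the bistability threshold until the fraction of "differentiated" (high-u) sites settles near the target set by the system parameter.

```python
import numpy as np

n, D, dt, dx = 200, 0.1, 0.05, 1.0
target = 0.3                              # desired fraction of high-u cells
rng = np.random.default_rng(3)
u = rng.random(n)                         # random initial condition in [0, 1]

for step in range(20_000):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # periodic 1-D Laplacian
    feedback = u.mean() - target                   # global inhibitory feedback
    # Bistable kinetics u(1-u)(u-a); the feedback moves the threshold a so
    # that fronts expand or shrink the high-u domains toward the target ratio
    u += dt * (D * lap / dx**2 + u * (1 - u) * (u - 0.5 - feedback))
    u = np.clip(u, 0.0, 1.0)

# Should settle near the target fraction (0.3) under these assumptions
print("fraction differentiated:", (u > 0.5).mean())
```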
2102.07309
Michael Lingzhi Li
Dimitris Bertsimas, Vassilis Digalakis Jr., Alexander Jacquillat, Michael Lingzhi Li, Alessandro Previero
Where to locate COVID-19 mass vaccination facilities?
null
null
10.1002/nav.22007
null
q-bio.PE math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outbreak of COVID-19 led to a record-breaking race to develop a vaccine. However, the limited vaccine capacity creates another massive challenge: how to distribute vaccines to mitigate the near-end impact of the pandemic? In the United States in particular, the new Biden administration is launching mass vaccination sites across the country, raising the obvious question of where to locate these clinics to maximize the effectiveness of the vaccination campaign. This paper tackles this question with a novel data-driven approach to optimize COVID-19 vaccine distribution. We first augment a state-of-the-art epidemiological model, called DELPHI, to capture the effects of vaccinations and the variability in mortality rates across age groups. We then integrate this predictive model into a prescriptive model to optimize the location of vaccination sites and subsequent vaccine allocation. The model is formulated as a bilinear, non-convex optimization model. To solve it, we propose a coordinate descent algorithm that iterates between optimizing vaccine distribution and simulating the dynamics of the pandemic. As compared to benchmarks based on demographic and epidemiological information, the proposed optimization approach increases the effectiveness of the vaccination campaign by an estimated $20\%$, saving an extra $4000$ extra lives in the United States over a three-month period. The proposed solution achieves critical fairness objectives -- by reducing the death toll of the pandemic in several states without hurting others -- and is highly robust to uncertainties and forecast errors -- by achieving similar benefits under a vast range of perturbations.
[ { "created": "Mon, 15 Feb 2021 02:32:17 GMT", "version": "v1" }, { "created": "Sun, 18 Jul 2021 07:03:40 GMT", "version": "v2" } ]
2021-07-20
[ [ "Bertsimas", "Dimitris", "" ], [ "Digalakis", "Vassilis", "Jr." ], [ "Jacquillat", "Alexander", "" ], [ "Li", "Michael Lingzhi", "" ], [ "Previero", "Alessandro", "" ] ]
The outbreak of COVID-19 led to a record-breaking race to develop a vaccine. However, the limited vaccine capacity creates another massive challenge: how to distribute vaccines to mitigate the near-end impact of the pandemic? In the United States in particular, the new Biden administration is launching mass vaccination sites across the country, raising the obvious question of where to locate these clinics to maximize the effectiveness of the vaccination campaign. This paper tackles this question with a novel data-driven approach to optimize COVID-19 vaccine distribution. We first augment a state-of-the-art epidemiological model, called DELPHI, to capture the effects of vaccinations and the variability in mortality rates across age groups. We then integrate this predictive model into a prescriptive model to optimize the location of vaccination sites and subsequent vaccine allocation. The model is formulated as a bilinear, non-convex optimization model. To solve it, we propose a coordinate descent algorithm that iterates between optimizing vaccine distribution and simulating the dynamics of the pandemic. As compared to benchmarks based on demographic and epidemiological information, the proposed optimization approach increases the effectiveness of the vaccination campaign by an estimated $20\%$, saving an extra $4000$ lives in the United States over a three-month period. The proposed solution achieves critical fairness objectives -- by reducing the death toll of the pandemic in several states without hurting others -- and is highly robust to uncertainties and forecast errors -- by achieving similar benefits under a vast range of perturbations.
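The alternation the abstract describes, optimize the allocation, then re-simulate the epidemic under it, can be caricatured with a two-region SIR stand-in and a finite-difference descent step on the allocation. The region parameters, fatality rate, and step rule are all toy assumptions; the paper works with the DELPHI model and a bilinear optimization formulation rather than this heuristic.

```python
import numpy as np

beta = np.array([0.30, 0.22])           # per-region transmission rates (toy)
gamma, budget, days = 0.1, 0.002, 120   # recovery rate, daily vaccination share

def simulate(alloc):
    """Deaths per region under a fixed daily vaccination rate per region."""
    S = np.array([0.99, 0.99]); I = np.array([0.01, 0.01])
    deaths = np.zeros(2)
    for _ in range(days):
        new = beta * S * I
        S = np.clip(S - new - alloc, 0.0, None)
        I += new - gamma * I
        deaths += 0.01 * gamma * I          # assumed 1% infection fatality
    return deaths

alloc = np.full(2, budget / 2)
eps = budget / 10
for it in range(10):
    # Optimization step: estimate marginal death reduction per region by
    # finite differences on the simulator, then shift budget accordingly
    base = simulate(alloc).sum()
    grad = np.array([(simulate(alloc + eps * e).sum() - base) / eps
                     for e in np.eye(2)])
    shift = 0.2 * budget * np.sign(grad[1] - grad[0])
    alloc = np.clip(alloc + np.array([shift, -shift]), 0.0, budget)
    alloc *= budget / alloc.sum()           # re-normalize to the budget
print("allocation:", alloc, "total deaths:", simulate(alloc).sum())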
1710.00317
Andres Angel
Alejandro Feged-Rivadeneira, Andr\'es Angel, Felipe Gonz\'alez-Casabianca, Camilo Rivera
Malaria intensity in Colombia by regions and populations
null
null
10.1371/journal.pone.0203673
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the distribution of disease prevalence among heterogeneous populations at the national scale is fundamental for epidemiology and public health. Here, we use a combination of methods (spatial scan statistic, topological data analysis, epidemic profile) to study measurable differences in malaria intensity by regions and populations of Colombia. This study explores three main questions: What are the regions of Colombia where malaria is epidemic? What are the regions and populations in Colombia where malaria is endemic? What associations exist between epidemic outbreaks between regions in Colombia? \textit{Plasmodium falciparum} is most prevalent in the Pacific Coast, some regions of the Amazon Basin, and some regions of the Magdalena Basin. \textit{Plasmodium vivax} is the most prevalent parasite in Colombia, particularly in the Northern Amazon Basin, the Caribbean, and municipalities of Sucre, Antioquia and Cordoba. Malaria has been reported to be most common among 15-45 year old men. We find that the age-class suffering high risk of malaria infection ranges from 20 to 30 with an acute peak at 25 years of age. Second, this pattern was not found to be generalizable across Colombian populations, Indigenous and Afrocolombian populations experience endemic malaria (with household transmission). Third, clusters of epidemic malaria for \textit{Plasmodium vivax} were detected across Southern Colombia including the Amazon Basin and the Southern Pacific region. \textit{Plasmodium falciparum}, was is epidemic in 13 of the 1,123 municipalities (1.2\%). Some key locations act as bridges between epidemic and endemic regions. Finally, we generate a regional classification based on intensity and synchrony, dividing the country into epidemic areas and bridge areas.
[ { "created": "Sun, 1 Oct 2017 08:42:37 GMT", "version": "v1" } ]
2018-11-21
[ [ "Feged-Rivadeneira", "Alejandro", "" ], [ "Angel", "Andrés", "" ], [ "González-Casabianca", "Felipe", "" ], [ "Rivera", "Camilo", "" ] ]
Determining the distribution of disease prevalence among heterogeneous populations at the national scale is fundamental for epidemiology and public health. Here, we use a combination of methods (spatial scan statistic, topological data analysis, epidemic profile) to study measurable differences in malaria intensity by regions and populations of Colombia. This study explores three main questions: What are the regions of Colombia where malaria is epidemic? What are the regions and populations in Colombia where malaria is endemic? What associations exist between epidemic outbreaks in different regions of Colombia? \textit{Plasmodium falciparum} is most prevalent in the Pacific Coast, some regions of the Amazon Basin, and some regions of the Magdalena Basin. \textit{Plasmodium vivax} is the most prevalent parasite in Colombia, particularly in the Northern Amazon Basin, the Caribbean, and municipalities of Sucre, Antioquia and Cordoba. Malaria has been reported to be most common among 15-45 year old men. First, we find that the age class at highest risk of malaria infection ranges from 20 to 30, with an acute peak at 25 years of age. Second, this pattern was not found to be generalizable across Colombian populations: Indigenous and Afrocolombian populations experience endemic malaria (with household transmission). Third, clusters of epidemic malaria for \textit{Plasmodium vivax} were detected across Southern Colombia, including the Amazon Basin and the Southern Pacific region. \textit{Plasmodium falciparum} is epidemic in 13 of the 1,123 municipalities (1.2\%). Some key locations act as bridges between epidemic and endemic regions. Finally, we generate a regional classification based on intensity and synchrony, dividing the country into epidemic areas and bridge areas.
2407.09128
Francesco Avanzini
Francesco Avanzini, Timur Aslyamov, \'Etienne Fodor, Massimiliano Esposito
Nonequilibrium Thermodynamics of Non-Ideal Reaction-Diffusion Systems: Implications for Active Self-Organization
null
null
null
null
q-bio.MN cond-mat.stat-mech
http://creativecommons.org/licenses/by/4.0/
We develop a framework describing the dynamics and thermodynamics of open non-ideal reaction-diffusion systems, which embodies Flory-Huggins theories of mixtures and chemical reaction network theories. Our theory elucidates the mechanisms underpinning the emergence of self-organized dissipative structures in these systems. It evaluates the dissipation needed to sustain and control them, discriminating the contributions from each reaction and diffusion process with spatial resolution. It also reveals the role of the reaction network in powering and shaping these structures. We identify particular classes of networks in which diffusion processes always equilibrate within the structures, while dissipation occurs solely due to chemical reactions. The spatial configurations resulting from these processes can be derived by minimizing a kinetic potential, contrasting with the minimization of the thermodynamic free energy in passive systems. This framework opens the way to investigating the energetic cost of phenomena such as liquid-liquid phase separation, coacervation, and the formation of biomolecular condensates.
[ { "created": "Fri, 12 Jul 2024 09:48:46 GMT", "version": "v1" } ]
2024-07-15
[ [ "Avanzini", "Francesco", "" ], [ "Aslyamov", "Timur", "" ], [ "Fodor", "Étienne", "" ], [ "Esposito", "Massimiliano", "" ] ]
We develop a framework describing the dynamics and thermodynamics of open non-ideal reaction-diffusion systems, which embodies Flory-Huggins theories of mixtures and chemical reaction network theories. Our theory elucidates the mechanisms underpinning the emergence of self-organized dissipative structures in these systems. It evaluates the dissipation needed to sustain and control them, discriminating the contributions from each reaction and diffusion process with spatial resolution. It also reveals the role of the reaction network in powering and shaping these structures. We identify particular classes of networks in which diffusion processes always equilibrate within the structures, while dissipation occurs solely due to chemical reactions. The spatial configurations resulting from these processes can be derived by minimizing a kinetic potential, contrasting with the minimization of the thermodynamic free energy in passive systems. This framework opens the way to investigating the energetic cost of phenomena such as liquid-liquid phase separation, coacervation, and the formation of biomolecular condensates.
1710.06486
Felix Barber
Felix Barber, Po-Yi Ho, Andrew W. Murray, Ariel Amir
Details Matter: noise and model structure set the relationship between cell size and cell cycle timing
24 pages, 6 figures
Frontiers in Cell and Developmental Biology, 2017, vol. 5, pp. 92
10.3389/fcell.2017.00092
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted "molecular" models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria like Escherichia coli, that grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with average cell size and standard deviation of cell size increasing over time and saturating up to 100-fold higher than those values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both inhibitor dilution or initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This adder behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C + D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C + D period, as reported previously [1]. In bacteria, division into two equally sized cells does not broaden the size distribution.
[ { "created": "Tue, 17 Oct 2017 20:01:57 GMT", "version": "v1" }, { "created": "Mon, 30 Oct 2017 18:53:04 GMT", "version": "v2" } ]
2017-11-07
[ [ "Barber", "Felix", "" ], [ "Ho", "Po-Yi", "" ], [ "Murray", "Andrew W.", "" ], [ "Amir", "Ariel", "" ] ]
Organisms across all domains of life regulate the size of their cells. However, the means by which this is done is poorly understood. We study two abstracted "molecular" models for size regulation: inhibitor dilution and initiator accumulation. We apply the models to two settings: bacteria like Escherichia coli, which grow fully before they set a division plane and divide into two equally sized cells, and cells that form a bud early in the cell division cycle, confine new growth to that bud, and divide at the connection between that bud and the mother cell, like the budding yeast Saccharomyces cerevisiae. In budding cells, delaying cell division until buds reach the same size as their mother leads to very weak size control, with the average cell size and the standard deviation of cell size increasing over time and saturating up to 100-fold higher than the values for cells that divide when the bud is still substantially smaller than its mother. In budding yeast, both inhibitor dilution and initiator accumulation models are consistent with the observation that the daughters of diploid cells add a constant volume before they divide. This adder behavior has also been observed in bacteria. We find that in bacteria an inhibitor dilution model produces adder correlations that are not robust to noise in the timing of DNA replication initiation or in the timing from initiation of DNA replication to cell division (the C + D period). In contrast, in bacteria an initiator accumulation model yields robust adder correlations in the regime where noise in the timing of DNA replication initiation is much greater than noise in the C + D period, as reported previously [1]. In bacteria, division into two equally sized cells does not broaden the size distribution.
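The "adder" phenomenology at the heart of the abstract above is easy to simulate: if a cell divides once it has added a roughly constant volume since birth, the added volume is uncorrelated with size at birth and the birth-size distribution is self-stabilizing. The noise magnitude below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n_gen, delta = 10_000, 1.0        # generations; mean added volume per cycle
v_birth = np.empty(n_gen)
v_added = np.empty(n_gen)

v = 1.0                           # arbitrary starting birth volume
for i in range(n_gen):
    v_birth[i] = v
    v_added[i] = delta * (1 + 0.15 * rng.standard_normal())  # noisy adder rule
    v = 0.5 * (v + v_added[i])    # symmetric division into two equal halves

# Adder signature: added volume is independent of birth size (correlation ~ 0),
# and mean birth size converges to delta regardless of the starting volume
print("corr(birth size, added volume):", np.corrcoef(v_birth, v_added)[0, 1])
print("mean birth size:", v_birth.mean())
```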
2104.07793
Jorge Velasco-Hernandez
Jorge X. Velasco-Hernandez
Modelling epidemics: a perspective on mathematical models and their use in the present SARS-CoV-2 epidemic
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this work we look at several mathematical models that have been constructed during the present pandemic to address dfferent issues of importance to public health policies about epidemic scenarios and thier causes. We start by briefly reviewing the most basic properties of the classic Kermack-McKendrick models and then proceed to look at some generalizations and their applications to contact structures, cocirculation of viral infections, growth patterns of epidemic curves, characterization of probability distributions and passage times necessary to parametrize the compartments that are added to the basic Kermack-McKendrick model. All of these examples revolve around the idea that a system of differential equations constructed for an specifc epidemiological problem, has as central and main theoretical and conceptual support the epidemiological, medical and biological context that motivates its construction and analysis. Fitting differential models to data without a sound perspective on the biological feasibility of the mathematical model will provide an statistical significant fit but will also fall short of providing useful information for the management, mitigation and eventual control of the epidemic of SARS-CoV-2.
[ { "created": "Thu, 15 Apr 2021 21:56:33 GMT", "version": "v1" } ]
2021-04-19
[ [ "Velasco-Hernandez", "Jorge X.", "" ] ]
In this work we look at several mathematical models that have been constructed during the present pandemic to address different issues of importance to public health policies regarding epidemic scenarios and their causes. We start by briefly reviewing the most basic properties of the classic Kermack-McKendrick models and then proceed to look at some generalizations and their applications to contact structures, cocirculation of viral infections, growth patterns of epidemic curves, and the characterization of probability distributions and passage times necessary to parametrize the compartments that are added to the basic Kermack-McKendrick model. All of these examples revolve around the idea that a system of differential equations constructed for a specific epidemiological problem has, as its central theoretical and conceptual support, the epidemiological, medical and biological context that motivates its construction and analysis. Fitting differential models to data without a sound perspective on the biological feasibility of the mathematical model will provide a statistically significant fit, but will also fall short of providing useful information for the management, mitigation and eventual control of the SARS-CoV-2 epidemic.
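For reference, the classic Kermack-McKendrick model that the abstract takes as its starting point is, in its simplest (SIR) form:

```latex
% beta is the transmission rate, gamma the recovery rate, and
% R_0 = beta/gamma the basic reproduction number.
\begin{align*}
  \frac{dS}{dt} &= -\beta S I, &
  \frac{dI}{dt} &= \beta S I - \gamma I, &
  \frac{dR}{dt} &= \gamma I, &
  R_0 &= \frac{\beta}{\gamma}.
\end{align*}
```

The generalizations the abstract surveys add compartments (exposed, hospitalized, etc.) to this skeleton, which is exactly where the passage-time distributions it mentions enter as parametrization choices.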
0809.0228
Florian Hartig
Florian Hartig and Martin Drechsler
Smart spatial incentives for market-based conservation
10 pages, 8 figures, only improved figure quality as compared to v3
Biological Conservation, 2009, 142, 779-788
10.1016/j.biocon.2008.12.014
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Market-based instruments such as payments, auctions or tradable permits have been proposed as flexible and cost-effective instruments for biodiversity conservation on private lands. Trading the service of conservation requires one to define a metric that determines the extent to which a conserved site adds to the regional conservation objective. Yet, while markets for conservation are widely discussed and increasingly applied, little research has been conducted on explicitly accounting for spatial ecological processes in the trading. In this paper, we use a coupled ecological economic simulation model to examine how spatial connectivity may be considered in the financial incentives created by a market-based conservation scheme. Land use decisions, driven by changing conservation costs and the conservation market, are simulated by an agent-based model of land users. On top of that, a metapopulation model evaluates the conservational success of the market. We find that optimal spatial incentives for agents correlate with species characteristics such as the dispersal distance, but they also depend on the spatio-temporal distribution of conservation costs. We conclude that a combined analysis of ecological and socio-economic conditions should be applied when designing market instruments to protect biodiversity.
[ { "created": "Mon, 1 Sep 2008 13:22:22 GMT", "version": "v1" }, { "created": "Mon, 1 Dec 2008 18:28:00 GMT", "version": "v2" }, { "created": "Mon, 23 Feb 2009 02:57:28 GMT", "version": "v3" }, { "created": "Fri, 19 Nov 2010 14:11:48 GMT", "version": "v4" } ]
2010-11-22
[ [ "Hartig", "Florian", "" ], [ "Drechsler", "Martin", "" ] ]
Market-based instruments such as payments, auctions or tradable permits have been proposed as flexible and cost-effective instruments for biodiversity conservation on private lands. Trading the service of conservation requires one to define a metric that determines the extent to which a conserved site adds to the regional conservation objective. Yet, while markets for conservation are widely discussed and increasingly applied, little research has been conducted on explicitly accounting for spatial ecological processes in the trading. In this paper, we use a coupled ecological economic simulation model to examine how spatial connectivity may be considered in the financial incentives created by a market-based conservation scheme. Land use decisions, driven by changing conservation costs and the conservation market, are simulated by an agent-based model of land users. On top of that, a metapopulation model evaluates the conservational success of the market. We find that optimal spatial incentives for agents correlate with species characteristics such as the dispersal distance, but they also depend on the spatio-temporal distribution of conservation costs. We conclude that a combined analysis of ecological and socio-economic conditions should be applied when designing market instruments to protect biodiversity.
2306.05635
Yang Tian
Yang Tian, Zeren Tan, Hedong Hou, Guoqi Li, Aohua Cheng, Yike Qiu, Kangyu Weng, Chun Chen, Pei Sun
Theoretical foundations of studying criticality in the brain
null
Theoretical foundations of studying criticality in the brain. Network Neuroscience 2022; 6 (4): 1148-1185
10.1162/netn_a_00269
null
q-bio.NC cond-mat.dis-nn cond-mat.other cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Criticality is hypothesized as a physical mechanism underlying efficient transitions between cortical states and remarkable information processing capacities in the brain. While considerable evidence generally supports this hypothesis, non-negligible controversies persist regarding the ubiquity of criticality in neural dynamics and its role in information processing. Validity issues frequently arise during identifying potential brain criticality from empirical data. Moreover, the functional benefits implied by brain criticality are frequently misconceived or unduly generalized. These problems stem from the non-triviality and immaturity of the physical theories that analytically derive brain criticality and the statistic techniques that estimate brain criticality from empirical data. To help solve these problems, we present a systematic review and reformulate the foundations of studying brain criticality, i.e., ordinary criticality (OC), quasi-criticality (qC), self-organized criticality (SOC), and self-organized quasi-criticality (SOqC), using the terminology of neuroscience. We offer accessible explanations of the physical theories and statistic techniques of brain criticality, providing step-by-step derivations to characterize neural dynamics as a physical system with avalanches. We summarize error-prone details and existing limitations in brain criticality analysis and suggest possible solutions. Moreover, we present a forward-looking perspective on how optimizing the foundations of studying brain criticality can deepen our understanding of various neuroscience questions.
[ { "created": "Fri, 9 Jun 2023 02:41:37 GMT", "version": "v1" } ]
2023-06-12
[ [ "Tian", "Yang", "" ], [ "Tan", "Zeren", "" ], [ "Hou", "Hedong", "" ], [ "Li", "Guoqi", "" ], [ "Cheng", "Aohua", "" ], [ "Qiu", "Yike", "" ], [ "Weng", "Kangyu", "" ], [ "Chen", "Chun", "" ...
Criticality is hypothesized as a physical mechanism underlying efficient transitions between cortical states and the remarkable information processing capacities of the brain. While considerable evidence generally supports this hypothesis, non-negligible controversies persist regarding the ubiquity of criticality in neural dynamics and its role in information processing. Validity issues frequently arise when identifying potential brain criticality from empirical data. Moreover, the functional benefits implied by brain criticality are frequently misconceived or unduly generalized. These problems stem from the non-triviality and immaturity of the physical theories that analytically derive brain criticality and the statistical techniques that estimate brain criticality from empirical data. To help solve these problems, we present a systematic review and reformulate the foundations of studying brain criticality, i.e., ordinary criticality (OC), quasi-criticality (qC), self-organized criticality (SOC), and self-organized quasi-criticality (SOqC), using the terminology of neuroscience. We offer accessible explanations of the physical theories and statistical techniques of brain criticality, providing step-by-step derivations to characterize neural dynamics as a physical system with avalanches. We summarize error-prone details and existing limitations in brain criticality analysis and suggest possible solutions. Moreover, we present a forward-looking perspective on how optimizing the foundations of studying brain criticality can deepen our understanding of various neuroscience questions.
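One of the most basic statistics in this literature is the branching ratio sigma of neuronal avalanches (close to 1 at criticality). A sketch under stated assumptions: the synthetic spike-count series below comes from a driven Poisson branching process, and the naive consecutive-bin ratio estimator shown is exactly the kind of estimator whose biases (here, from the external drive) the review's "error-prone details" warn about.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_true, T = 0.95, 100_000
a = np.zeros(T, dtype=int)
a[0] = 10
for t in range(1, T):
    drive = rng.poisson(0.05)                    # weak external input
    a[t] = rng.poisson(sigma_true * a[t - 1]) + drive

# Naive estimator: average ratio of activity in consecutive nonzero bins
prev, nxt = a[:-1], a[1:]
mask = prev > 0
sigma_hat = np.mean(nxt[mask] / prev[mask])
print("estimated branching ratio:", sigma_hat)   # near, but biased away from, 0.95
```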
q-bio/0409020
Paolo Tieri
Paolo Tieri, Silvana Valensin, Vito Latora, Gastone C. Castellani, Massimo Marchiori, Daniel Remondini, Claudio Franceschi
Quantifying the relevance of different mediators in the human immune cell network
10 pages, 3 figures
Bioinformatics. 2005 Apr 15;21(8):1639-43. Epub 2004 Dec 21.
null
null
q-bio.MN
null
Immune cells coordinate their efforts for the correct and efficient functioning of the immune system (IS). Each cell type plays a distinct role and communicates with other cell types through mediators such as cytokines, chemokines and hormones, among others, that are crucial for the functioning of the IS and its fine-tuning. Nevertheless, a quantitative analysis of the topological properties of an immunological network involving this complex interchange of mediators among immune cells is still lacking. Here we present a method for quantifying the relevance of different mediators in the immune network, which exploits a definition of centrality based on the concept of efficient communication. The analysis, applied to the human immune system, indicates that its mediators significantly differ in their network relevance. We found that cytokines involved in innate immunity and inflammation and some hormones rank highest in the network, revealing that the most prominent mediators of the IS are molecules involved in these ancestral types of defence mechanisms, highly integrated with the adaptive immune response and located at the interface of the nervous, endocrine and immune systems.
[ { "created": "Fri, 17 Sep 2004 10:39:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Tieri", "Paolo", "" ], [ "Valensin", "Silvana", "" ], [ "Latora", "Vito", "" ], [ "Castellani", "Gastone C.", "" ], [ "Marchiori", "Massimo", "" ], [ "Remondini", "Daniel", "" ], [ "Franceschi", "Claudio", ...
Immune cells coordinate their efforts for the correct and efficient functioning of the immune system (IS). Each cell type plays a distinct role and communicates with other cell types through mediators such as cytokines, chemokines and hormones, among others, that are crucial for the functioning of the IS and its fine-tuning. Nevertheless, a quantitative analysis of the topological properties of an immunological network involving this complex interchange of mediators among immune cells is still lacking. Here we present a method for quantifying the relevance of different mediators in the immune network, which exploits a definition of centrality based on the concept of efficient communication. The analysis, applied to the human immune system, indicates that its mediators significantly differ in their network relevance. We found that cytokines involved in innate immunity and inflammation and some hormones rank highest in the network, revealing that the most prominent mediators of the IS are molecules involved in these ancestral types of defence mechanisms, highly integrated with the adaptive immune response and located at the interface of the nervous, endocrine and immune systems.
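A minimal sketch of the centrality idea the abstract describes: "delta centrality" measured as the relative drop in global network efficiency (in the sense of Latora and Marchiori) when a mediator is removed. The toy network, node names, and edges below are invented for illustration and are not the curated immune network from the paper.

```python
import networkx as nx

# Toy immune network: cell types connected through mediators
# (edges and names are illustrative, not the paper's data).
G = nx.Graph()
G.add_edges_from([
    ("Tcell", "IL2"), ("IL2", "Bcell"), ("Macrophage", "IL6"),
    ("IL6", "Bcell"), ("Macrophage", "TNF"), ("TNF", "Tcell"),
    ("Dendritic", "IL12"), ("IL12", "Tcell"), ("Dendritic", "IL6"),
])

def delta_efficiency(G, node):
    """Relative drop in global efficiency when a node is removed:
    the 'delta centrality' idea of Latora and Marchiori."""
    e_full = nx.global_efficiency(G)
    H = G.copy()
    H.remove_node(node)
    return (e_full - nx.global_efficiency(H)) / e_full

mediators = ["IL2", "IL6", "TNF", "IL12"]
for m in sorted(mediators, key=lambda m: -delta_efficiency(G, m)):
    print(f"{m}: {delta_efficiency(G, m):.3f}")
```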
1908.07446
Alex Gomez-Marin
Regina Zaghi-Lara, Miguel \'Angel Gea, Jordi Cam\'i, Luis M. Mart\'inez, Alex Gomez-Marin
Playing magic tricks to deep neural networks untangles human deception
null
null
null
null
q-bio.NC cs.AI cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Magic is the art of producing in the spectator an illusion of impossibility. Although the scientific study of magic is in its infancy, the recent advent of tracking algorithms based on deep learning now allows the skills of the magician to be quantified in naturalistic conditions at unprecedented resolution and robustness. In this study, we deconstructed stage magic into purely motor maneuvers and trained an artificial neural network (DeepLabCut) to follow coins as a professional magician made them appear and disappear in a series of tricks. Rather than using AI as a mere tracking tool, we conceived it as an "artificial spectator". When the coins were not visible, the algorithm was trained to infer their location as a human spectator would (i.e., in the left fist). This created situations where the human was fooled while the AI (as seen by a human) was not, and vice versa. Magic from the perspective of the machine reveals our own cognitive biases.
[ { "created": "Tue, 20 Aug 2019 15:50:11 GMT", "version": "v1" } ]
2019-08-21
[ [ "Zaghi-Lara", "Regina", "" ], [ "Gea", "Miguel Ángel", "" ], [ "Camí", "Jordi", "" ], [ "Martínez", "Luis M.", "" ], [ "Gomez-Marin", "Alex", "" ] ]
Magic is the art of producing in the spectator an illusion of impossibility. Although the scientific study of magic is in its infancy, the recent advent of tracking algorithms based on deep learning now allows the skills of the magician to be quantified in naturalistic conditions at unprecedented resolution and robustness. In this study, we deconstructed stage magic into purely motor maneuvers and trained an artificial neural network (DeepLabCut) to follow coins as a professional magician made them appear and disappear in a series of tricks. Rather than using AI as a mere tracking tool, we conceived it as an "artificial spectator". When the coins were not visible, the algorithm was trained to infer their location as a human spectator would (i.e., in the left fist). This created situations where the human was fooled while the AI (as seen by a human) was not, and vice versa. Magic from the perspective of the machine reveals our own cognitive biases.
2010.06995
Brandon Gallas
Sarah N Dudgeon (1), Si Wen (1), Matthew G Hanna (2), Rajarsi Gupta (3), Mohamed Amgad (4), Manasi Sheth (5), Hetal Marble (6), Richard Huang (6), Markus D Herrmann (7), Clifford H. Szu (8), Darick Tong (8), Bruce Werness (8), Evan Szu (8), Denis Larsimont (9), Anant Madabhushi (10), Evangelos Hytopoulos (11), Weijie Chen (1), Rajendra Singh (12), Steven N. Hart (13), Joel Saltz (3), Roberto Salgado (14), Brandon D Gallas (1) ((1) United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics & Software Reliability, White Oak, MD, (2) Memorial Sloan Kettering Cancer Center, New York, NY, (3) Stony Brook Medicine Dept of Biomedical Informatics, Stony Brook, NY, (4) Department of Pathology, Northwestern University, Rubloff Building, Chicago Illinois, (5) United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Product Quality and Evaluation, Office of Clinical Evidence and Analysis, Division of Biostatistics, White Oak, MD, (6) Massachusetts General Hospital/Harvard Medical School, Boston, MA, (7) Computational Pathology, Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, (8) Arrive Origin, San Francisco, CA, (9) Department of Pathology, Institut Jules Bordet, Brussels, Belgium, (10) Case Western Reserve University, Cleveland, OH, (11) iRhythm Technologies Inc., San Francisco, CA, (12) Northwell health and Zucker School of Medicine, New York, NY, (13) Department of Health Sciences Research, Mayo Clinic, Rochester MN, (14) Division of Research, Peter Mac Callum Cancer Centre, Melbourne, Australia, Department of Pathology, GZA-ZNA Hospitals, Antwerp, Belgium)
A Pathologist-Annotated Dataset for Validating Artificial Intelligence: A Project Description and Pilot Study
26 pages, 4 figures, 2 tables Submitted to the Journal of Pathology Informatics Project web page: https://ncihub.org/groups/eedapstudies
null
10.4103/jpi.jpi_83_20
null
q-bio.QM eess.IV
http://creativecommons.org/publicdomain/zero/1.0/
Purpose: In this work, we present a collaboration to create a validation dataset of pathologist annotations for algorithms that process whole slide images (WSIs). We focus on data collection and evaluation of algorithm performance in the context of estimating the density of stromal tumor infiltrating lymphocytes (sTILs) in breast cancer. Methods: We digitized 64 glass slides of hematoxylin- and eosin-stained ductal carcinoma core biopsies prepared at a single clinical site. We created training materials and workflows to crowdsource pathologist image annotations in two modes: an optical microscope and two digital platforms. The workflows collect the ROI type, a decision on whether the ROI is appropriate for estimating the density of sTILs, and, if appropriate, the sTIL density value for that ROI. Results: The pilot study yielded an abundance of cases with nominal sTIL infiltration. Furthermore, we found that sTIL densities are correlated within a case and that there is notable pathologist variability. Consequently, we outline plans to improve our ROI and case sampling methods. We also outline statistical methods to account for ROI correlations within a case and pathologist variability when validating an algorithm. Conclusion: We have built workflows for efficient data collection and tested them in a pilot study. As we prepare for pivotal studies, we will consider what it will take for the dataset to be fit for a regulatory purpose: study size, patient population, and pathologist training and qualifications. To this end, we will elicit feedback from the FDA via the Medical Device Development Tool program and from the broader digital pathology and AI community. Ultimately, we intend to share the dataset, statistical methods, and lessons learned.
[ { "created": "Wed, 14 Oct 2020 12:16:07 GMT", "version": "v1" } ]
2021-11-18
[ [ "Dudgeon", "Sarah N", "" ], [ "Wen", "Si", "" ], [ "Hanna", "Matthew G", "" ], [ "Gupta", "Rajarsi", "" ], [ "Amgad", "Mohamed", "" ], [ "Sheth", "Manasi", "" ], [ "Marble", "Hetal", "" ], [ "Huang"...
Purpose: In this work, we present a collaboration to create a validation dataset of pathologist annotations for algorithms that process whole slide images (WSIs). We focus on data collection and evaluation of algorithm performance in the context of estimating the density of stromal tumor infiltrating lymphocytes (sTILs) in breast cancer. Methods: We digitized 64 glass slides of hematoxylin- and eosin-stained ductal carcinoma core biopsies prepared at a single clinical site. We created training materials and workflows to crowdsource pathologist image annotations in two modes: an optical microscope and two digital platforms. The workflows collect the ROI type, a decision on whether the ROI is appropriate for estimating the density of sTILs, and, if appropriate, the sTIL density value for that ROI. Results: The pilot study yielded an abundance of cases with nominal sTIL infiltration. Furthermore, we found that sTIL densities are correlated within a case and that there is notable pathologist variability. Consequently, we outline plans to improve our ROI and case sampling methods. We also outline statistical methods to account for ROI correlations within a case and pathologist variability when validating an algorithm. Conclusion: We have built workflows for efficient data collection and tested them in a pilot study. As we prepare for pivotal studies, we will consider what it will take for the dataset to be fit for a regulatory purpose: study size, patient population, and pathologist training and qualifications. To this end, we will elicit feedback from the FDA via the Medical Device Development Tool program and from the broader digital pathology and AI community. Ultimately, we intend to share the dataset, statistical methods, and lessons learned.
1902.10597
Peter Zeidman
Peter Zeidman, Amirhossein Jafarian, Nad\`ege Corbin, Mohamed L. Seghier, Adeel Razi, Cathy J. Price, Karl J. Friston
A tutorial on group effective connectivity analysis, part 1: first level analysis with DCM for fMRI
null
null
10.1016/j.neuroimage.2019.06.031
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic Causal Modelling (DCM) is the predominant method for inferring effective connectivity from neuroimaging data. In the 15 years since its introduction, the neural models and statistical routines in DCM have developed in parallel, driven by the needs of researchers in cognitive and clinical neuroscience. In this tutorial, we step through an exemplar fMRI analysis in detail, reviewing the current implementation of DCM and demonstrating recent developments in group-level connectivity analysis. In the first part of the tutorial (current paper), we focus on issues specific to DCM for fMRI, unpacking the relevant theory and highlighting practical considerations. In particular, we clarify the assumptions (i.e., priors) used in DCM for fMRI and how to interpret the model parameters. This tutorial is accompanied by all the necessary data and instructions to reproduce the analyses using the SPM software. In the second part (in a companion paper), we move from subject-level to group-level modelling using the Parametric Empirical Bayes framework, and illustrate how to test for commonalities and differences in effective connectivity across subjects, based on imaging data from any modality.
[ { "created": "Wed, 27 Feb 2019 15:39:41 GMT", "version": "v1" } ]
2019-07-15
[ [ "Zeidman", "Peter", "" ], [ "Jafarian", "Amirhossein", "" ], [ "Corbin", "Nadège", "" ], [ "Seghier", "Mohamed L.", "" ], [ "Razi", "Adeel", "" ], [ "Price", "Cathy J.", "" ], [ "Friston", "Karl J.", "" ]...
Dynamic Causal Modelling (DCM) is the predominant method for inferring effective connectivity from neuroimaging data. In the 15 years since its introduction, the neural models and statistical routines in DCM have developed in parallel, driven by the needs of researchers in cognitive and clinical neuroscience. In this tutorial, we step through an exemplar fMRI analysis in detail, reviewing the current implementation of DCM and demonstrating recent developments in group-level connectivity analysis. In the first part of the tutorial (current paper), we focus on issues specific to DCM for fMRI, unpacking the relevant theory and highlighting practical considerations. In particular, we clarify the assumptions (i.e., priors) used in DCM for fMRI and how to interpret the model parameters. This tutorial is accompanied by all the necessary data and instructions to reproduce the analyses using the SPM software. In the second part (in a companion paper), we move from subject-level to group-level modelling using the Parametric Empirical Bayes framework, and illustrate how to test for commonalities and differences in effective connectivity across subjects, based on imaging data from any modality.
2208.03325
Sara Ghazanfari
Sara Ghazanfari, Ali Rasteh, Seyed Abolfazl Motahari, Mahdieh Soleymani Baghshah
Isoform Function Prediction Using a Deep Neural Network
It needs a final review from co-authors
null
null
null
q-bio.GN cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Isoforms are mRNAs produced from the same gene site in the phenomenon called alternative splicing. Studies have shown that more than 95% of human multi-exon genes have undergone alternative splicing. Although isoforms may differ only slightly in mRNA sequence, they may have a systematic effect on cell function and regulation. It is widely reported that isoforms of a gene have distinct or even contrasting functions. Most studies have shown that alternative splicing plays a significant role in human health and disease. Despite the wide range of gene function studies, there is little information about the functionality of isoforms. Recently, some computational methods based on Multiple Instance Learning have been proposed to predict isoform function using gene function and gene expression profiles. However, their performance is not satisfactory due to the lack of labeled training data. In addition, probabilistic models such as Conditional Random Fields (CRFs) have been used to model the relations between isoforms. This project uses all of these data and sources of information, such as isoform sequences, expression profiles, and gene ontology graphs, and proposes a comprehensive model based on deep neural networks. The UniProt Gene Ontology (GO) database is used as a standard reference for gene functions. The NCBI RefSeq database is used for extracting gene and isoform sequences, and the NCBI SRA database is used for expression profile data. Metrics such as the Receiver Operating Characteristic Area Under the Curve (ROC AUC) and the Precision-Recall Area Under the Curve (PR AUC) are used to measure prediction accuracy.
[ { "created": "Fri, 5 Aug 2022 09:31:25 GMT", "version": "v1" }, { "created": "Mon, 22 Aug 2022 06:18:18 GMT", "version": "v2" }, { "created": "Tue, 25 Apr 2023 17:04:58 GMT", "version": "v3" } ]
2023-04-26
[ [ "Ghazanfari", "Sara", "" ], [ "Rasteh", "Ali", "" ], [ "Motahari", "Seyed Abolfazl", "" ], [ "Baghshah", "Mahdieh Soleymani", "" ] ]
Isoforms are mRNAs produced from the same gene site in the phenomenon called alternative splicing. Studies have shown that more than 95% of human multi-exon genes have undergone alternative splicing. Although isoforms may differ only slightly in mRNA sequence, they may have a systematic effect on cell function and regulation. It is widely reported that isoforms of a gene have distinct or even contrasting functions. Most studies have shown that alternative splicing plays a significant role in human health and disease. Despite the wide range of gene function studies, there is little information about the functionality of isoforms. Recently, some computational methods based on Multiple Instance Learning have been proposed to predict isoform function using gene function and gene expression profiles. However, their performance is not satisfactory due to the lack of labeled training data. In addition, probabilistic models such as Conditional Random Fields (CRFs) have been used to model the relations between isoforms. This project uses all of these data and sources of information, such as isoform sequences, expression profiles, and gene ontology graphs, and proposes a comprehensive model based on deep neural networks. The UniProt Gene Ontology (GO) database is used as a standard reference for gene functions. The NCBI RefSeq database is used for extracting gene and isoform sequences, and the NCBI SRA database is used for expression profile data. Metrics such as the Receiver Operating Characteristic Area Under the Curve (ROC AUC) and the Precision-Recall Area Under the Curve (PR AUC) are used to measure prediction accuracy.
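The two evaluation metrics named in the abstract are standard and easy to compute; the sketch below does so with scikit-learn on synthetic labels and scores (the arrays are placeholders, not model outputs from the paper). PR AUC is generally preferred when positives are rare, as with sparse GO annotations.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(1)

# Illustrative stand-in for per-isoform GO-term predictions: binary labels
# and predicted scores (a real run would use the model's outputs).
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=1000), 0, 1)

# ROC AUC: probability that a random positive outranks a random negative.
print("ROC AUC:", round(roc_auc_score(y_true, y_score), 3))
# PR AUC (average precision): more informative under class imbalance.
print("PR AUC:", round(average_precision_score(y_true, y_score), 3))
```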
2009.11288
Liaofu Luo
Liaofu Luo and Yongchun Zuo
Spike conformation transition in SARS-CoV-2 infection
14 pages, 1 figure
null
null
null
q-bio.BM q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A theory of the conformation transition of the SARS-CoV-2 spike protein (S) is proposed. The conformational equilibrium between the open (up) and closed (down) conformations of the receptor binding domain (RBD) of the spike is studied from first principles. The conformational state populations are deduced from the free energy change in the conformation transition of the S protein. We demonstrate that the free energy includes two parts, one from the multiple minima of the conformational potential and another from the variation of structural elasticity. Both factors depend on amino acid mutations. The former is related to the change of affinity of the RBD for ACE2 due to mutations in the RBM (receptor binding motif) subdomain of the RBD. The latter is caused by the change of elastic energy of the S protein. Only when the affinity has increased significantly and/or the elastic energy has been reduced substantially is the equilibrium biased toward the open conformation, and only then can the virus infection process continue. Possible new SARS-CoV-2 variants arising from amino acid mutations at 5-9 sites on the RBD interface are predicted. The elastic energy variation needed for the conformation transition is estimated quantitatively; taking the elastic-structural change into account, more virus variants are possible. The D614G mutation, the K986P mutation, and the new 501Y variants in the current SARS-CoV-2 pandemic can all be interpreted within the presented theory. The infectivity of SARS-CoV-2 is compared with that of SARS-CoV-1 from the standpoint of conformational equilibrium. The theory also explains why virus entry is favored at lower temperature and higher humidity. The influence of electromagnetic fields on the conformational transition is also discussed briefly.
[ { "created": "Wed, 23 Sep 2020 03:06:23 GMT", "version": "v1" }, { "created": "Fri, 29 Jan 2021 03:41:28 GMT", "version": "v2" } ]
2021-02-01
[ [ "Luo", "Liaofu", "" ], [ "Zuo", "Yongchun", "" ] ]
A theory of the conformation transition of the SARS-CoV-2 spike protein (S) is proposed. The conformational equilibrium between the open (up) and closed (down) conformations of the receptor binding domain (RBD) of the spike is studied from first principles. The conformational state populations are deduced from the free energy change in the conformation transition of the S protein. We demonstrate that the free energy includes two parts, one from the multiple minima of the conformational potential and another from the variation of structural elasticity. Both factors depend on amino acid mutations. The former is related to the change of affinity of the RBD for ACE2 due to mutations in the RBM (receptor binding motif) subdomain of the RBD. The latter is caused by the change of elastic energy of the S protein. Only when the affinity has increased significantly and/or the elastic energy has been reduced substantially is the equilibrium biased toward the open conformation, and only then can the virus infection process continue. Possible new SARS-CoV-2 variants arising from amino acid mutations at 5-9 sites on the RBD interface are predicted. The elastic energy variation needed for the conformation transition is estimated quantitatively; taking the elastic-structural change into account, more virus variants are possible. The D614G mutation, the K986P mutation, and the new 501Y variants in the current SARS-CoV-2 pandemic can all be interpreted within the presented theory. The infectivity of SARS-CoV-2 is compared with that of SARS-CoV-1 from the standpoint of conformational equilibrium. The theory also explains why virus entry is favored at lower temperature and higher humidity. The influence of electromagnetic fields on the conformational transition is also discussed briefly.
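The population statement in the abstract (conformational state populations deduced from a free-energy change) reduces, for a two-state open/closed system, to a Boltzmann relation; the sketch below evaluates it for assumed free-energy differences. The dG values and temperatures are illustrative only, not the paper's estimates.

```python
import numpy as np

def open_fraction(dG_kcal, T=298.15):
    """Fraction of spikes in the open (up) RBD conformation for a two-state
    equilibrium with free-energy difference dG = G_open - G_closed (kcal/mol)."""
    R = 1.987204e-3  # gas constant in kcal/(mol*K)
    return 1.0 / (1.0 + np.exp(dG_kcal / (R * T)))

# Illustrative dG values (kcal/mol); a mutation that lowers dG (by raising
# ACE2 affinity or softening the elastic penalty) shifts weight to 'open'.
for dG in (2.0, 1.0, 0.0, -1.0):
    print(f"dG = {dG:+.1f} kcal/mol -> open fraction {open_fraction(dG):.2f}")

# The same relation makes the temperature dependence explicit:
print(f"dG = +1.0 kcal/mol at 278 K -> {open_fraction(1.0, T=278.15):.2f}")
```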
2404.13934
Peter Carstensen
Peter Emil Carstensen, Jacob Bendsen, Laura Hjort Blicher, Kim Kristensen, John Bagterp J{\o}rgensen
A whole-body mathematical model of cholesterol metabolism and transport
6 pages, 10 figures, submitted to be presented at a conference
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiovascular diseases are the leading cause of death. Increased levels of plasma cholesterol are consistently associated with an increased risk of cardiovascular disease. As a result, it is imperative that studies are conducted to determine the best course of action to reduce whole-body cholesterol levels. A whole-body mathematical model for cholesterol metabolism and transport is proposed. The model can simulate the effects of lipid-lowering drugs like statins and anti-PCSK9. The model is based on ordinary differential equations and kinetic functions. It has been validated against literature data. It offers a versatile platform for designing personalized interventions for cardiovascular health management.
[ { "created": "Mon, 22 Apr 2024 07:23:18 GMT", "version": "v1" } ]
2024-04-23
[ [ "Carstensen", "Peter Emil", "" ], [ "Bendsen", "Jacob", "" ], [ "Blicher", "Laura Hjort", "" ], [ "Kristensen", "Kim", "" ], [ "Jørgensen", "John Bagterp", "" ] ]
Cardiovascular diseases are the leading cause of death. Increased levels of plasma cholesterol are consistently associated with an increased risk of cardiovascular disease. As a result, it is imperative that studies are conducted to determine the best course of action to reduce whole-body cholesterol levels. A whole-body mathematical model for cholesterol metabolism and transport is proposed. The model can simulate the effects of lipid-lowering drugs like statins and anti-PCSK9. The model is based on ordinary differential equations and kinetic functions. It has been validated against literature data. It offers a versatile platform for designing personalized interventions for cardiovascular health management.
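The abstract does not reproduce the model equations, but the machinery it describes (ordinary differential equations with kinetic rate functions and a drug effect) can be sketched minimally. The two-compartment structure, rate constants, and statin scaling below are invented placeholders, not the paper's whole-body model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cholesterol_rhs(t, y, k_prod, k_exch, k_clear, statin_effect):
    """Toy two-compartment model: hepatic (y[0]) and plasma (y[1]) cholesterol.
    All rates are illustrative placeholders, not the paper's parameters."""
    hepatic, plasma = y
    clearance = k_clear * (1.0 + statin_effect)  # statin raises clearance
    dh = k_prod - k_exch * hepatic
    dp = k_exch * hepatic - clearance * plasma
    return [dh, dp]

params = dict(k_prod=1.0, k_exch=0.3, k_clear=0.2)
for statin in (0.0, 1.5):
    sol = solve_ivp(cholesterol_rhs, (0, 100), [3.0, 5.0],
                    args=(params["k_prod"], params["k_exch"],
                          params["k_clear"], statin), rtol=1e-8)
    print(f"statin effect {statin}: steady plasma ~ {sol.y[1, -1]:.2f}")
```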
1404.7766
Ya Hu
Ya Hu, Yi Wang, Qiliang Ding, Yungang He, Minxian Wang, Jiucun Wang, Shuhua Xu, Li Jin
Genome-wide Scan of Archaic Hominin Introgressions in Eurasians Reveals Complex Admixture History
42 Pages, 1 Table, 4 Figures, 1 Supplementary Table, and 10 Supplementary Figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Introgressions from Neanderthals and Denisovans have been detected in modern humans. Introgressions from other archaic hominins have also been implicated; however, identifying them poses a great technical challenge. Here, we introduce an approach for identifying introgressions from all possible archaic hominins in Eurasian genomes without referring to archaic hominin sequences. We focused on mutations that emerged in archaic hominins after their divergence from modern humans (denoted archaic-specific mutations) and identified introgressive segments showing significant enrichment of archaic-specific mutations over the rest of the genome. Furthermore, the boundaries of introgressions were identified using a dynamic programming approach that partitions the whole genome into segments containing different levels of archaic-specific mutations. We found that the detected introgressions shared more archaic-specific mutations with the Altai Neanderthal than with the Denisovan, and 60.3% of archaic hominin introgressions were from Neanderthals. Furthermore, we detected additional introgressions from two unknown archaic hominins who diverged from modern humans approximately 859 and 3,464 thousand years ago. The latter unknown archaic hominin contributed to the genomes of the common ancestors of modern humans and Neanderthals. In total, archaic hominin introgressions comprise 2.4% of Eurasian genomes. These results suggest a complex admixture history among hominins. The proposed approach could also facilitate admixture research across species.
[ { "created": "Wed, 30 Apr 2014 15:50:01 GMT", "version": "v1" } ]
2014-05-01
[ [ "Hu", "Ya", "" ], [ "Wang", "Yi", "" ], [ "Ding", "Qiliang", "" ], [ "He", "Yungang", "" ], [ "Wang", "Minxian", "" ], [ "Wang", "Jiucun", "" ], [ "Xu", "Shuhua", "" ], [ "Jin", "Li", "" ]...
Introgressions from Neanderthals and Denisovans have been detected in modern humans. Introgressions from other archaic hominins have also been implicated; however, identifying them poses a great technical challenge. Here, we introduce an approach for identifying introgressions from all possible archaic hominins in Eurasian genomes without referring to archaic hominin sequences. We focused on mutations that emerged in archaic hominins after their divergence from modern humans (denoted archaic-specific mutations) and identified introgressive segments showing significant enrichment of archaic-specific mutations over the rest of the genome. Furthermore, the boundaries of introgressions were identified using a dynamic programming approach that partitions the whole genome into segments containing different levels of archaic-specific mutations. We found that the detected introgressions shared more archaic-specific mutations with the Altai Neanderthal than with the Denisovan, and 60.3% of archaic hominin introgressions were from Neanderthals. Furthermore, we detected additional introgressions from two unknown archaic hominins who diverged from modern humans approximately 859 and 3,464 thousand years ago. The latter unknown archaic hominin contributed to the genomes of the common ancestors of modern humans and Neanderthals. In total, archaic hominin introgressions comprise 2.4% of Eurasian genomes. These results suggest a complex admixture history among hominins. The proposed approach could also facilitate admixture research across species.
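A minimal sketch of the enrichment idea: scan windows of the genome for an excess of archaic-specific mutations over the genome-wide rate. The paper's dynamic-programming segmentation is more principled; the window size, mutation rates, and binomial test here are assumptions for illustration.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)

# Illustrative genome: archaic-specific mutations at a background rate,
# with one region (40-60 kb) enriched to mimic an introgressed segment.
background_rate = 0.002            # mutations per bp (invented)
genome_len, win = 100_000, 5_000
muts = rng.random(genome_len) < background_rate
muts[40_000:60_000] |= rng.random(20_000) < 0.006

# Genome-wide rate as a rough null (it includes the enriched segment,
# which is fine for a sketch: enriched windows still stand out).
rate = muts.mean()
for start in range(0, genome_len, win):
    k = int(muts[start:start + win].sum())
    p = binomtest(k, win, rate, alternative="greater").pvalue
    flag = " <-- enriched" if p < 1e-4 else ""
    print(f"{start:>6}-{start + win:>6}: {k:3d} mutations, p={p:.1e}{flag}")
```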
2303.15928
M.C. Vera
M. Marv\'a, E. Venturino, M.C. Vera
A minimal model coupling communicable and non-communicable diseases
19 pages, 5 figures
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by-sa/4.0/
This work presents a model combining the simplest communicable and non-communicable disease models. The latter diseases are, by far, the leading cause of sickness and death in the world, and they introduce basal heterogeneity in the populations where communicable diseases evolve. The model can be interpreted as a risk-structured model, another way of accounting for population heterogeneity. Our results show that considering the non-communicable disease (in the end, heterogeneous populations) allows the communicable disease to become endemic even if the basic reproduction number is less than $1$. This feature is known as a subcritical bifurcation. Furthermore, ignoring the non-communicable disease dynamics results in overestimating the reproduction number and, thus, giving wrong information about the actual number of infected individuals. We calculate sensitivity indices and derive interesting epidemic-control information.
[ { "created": "Tue, 28 Mar 2023 12:34:43 GMT", "version": "v1" } ]
2023-03-29
[ [ "Marvá", "M.", "" ], [ "Venturino", "E.", "" ], [ "Vera", "M. C.", "" ] ]
This work presents a model combining the simplest communicable and non-communicable disease models. The latter diseases are, by far, the leading cause of sickness and death in the world, and they introduce basal heterogeneity in the populations where communicable diseases evolve. The model can be interpreted as a risk-structured model, another way of accounting for population heterogeneity. Our results show that considering the non-communicable disease (in the end, heterogeneous populations) allows the communicable disease to become endemic even if the basic reproduction number is less than $1$. This feature is known as a subcritical bifurcation. Furthermore, ignoring the non-communicable disease dynamics results in overestimating the reproduction number and, thus, giving wrong information about the actual number of infected individuals. We calculate sensitivity indices and derive interesting epidemic-control information.
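For context, the threshold behavior the abstract contrasts against can be sketched with the classical SIR model, where endemicity requires a basic reproduction number above one; the paper's point is that coupling to non-communicable-disease heterogeneity breaks this threshold. Parameters below are illustrative, and the coupled model itself is not reproduced.

```python
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Classical SIR model; the paper couples a model of this kind
    to non-communicable-disease dynamics (not reproduced here)."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

gamma = 0.1
for beta in (0.08, 0.25):            # illustrative transmission rates
    R0 = beta / gamma                # basic reproduction number for SIR
    sol = solve_ivp(sir, (0, 400), [0.99, 0.01, 0.0],
                    args=(beta, gamma), rtol=1e-8)
    print(f"R0 = {R0:.1f}: final epidemic size = {sol.y[2, -1]:.3f}")
```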
2010.00060
Cheng-Shang Chang
Yi-Jheng Lin, Che-Hao Yu, Tzu-Hsuan Liu, Cheng-Shang Chang, Wen-Tsuen Chen
Constructions and Comparisons of Pooling Matrices for Pooled Testing of COVID-19
null
null
null
null
q-bio.PE cs.DM cs.IT math.IT stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In comparison with individual testing, group testing (also known as pooled testing) is more efficient in reducing the number of tests, potentially leading to tremendous cost reductions. As indicated in a recent article posted on the US FDA website, the group testing approach for COVID-19 has received a lot of interest lately. There are two key elements in a group testing technique: (i) the pooling matrix that directs samples to be pooled into groups, and (ii) the decoding algorithm that uses the group test results to reconstruct the status of each sample. In this paper, we propose a new family of pooling matrices from packing the pencil of lines (PPoL) in a finite projective plane. We compare their performance with various pooling matrices proposed in the literature, including 2D-pooling, P-BEST, and Tapestry, using the two-stage definite defectives (DD) decoding algorithm. By conducting extensive simulations for a range of prevalence rates up to 5%, our numerical results show that no single pooling matrix achieves the lowest relative cost over the whole range of prevalence rates. To optimize performance, one should choose the right pooling matrix depending on the prevalence rate. The family of PPoL matrices can dynamically adjust their column weights according to the prevalence rate and could be a better alternative to using a fixed pooling matrix.
[ { "created": "Wed, 30 Sep 2020 19:05:23 GMT", "version": "v1" }, { "created": "Tue, 15 Jun 2021 13:51:59 GMT", "version": "v2" } ]
2021-06-16
[ [ "Lin", "Yi-Jheng", "" ], [ "Yu", "Che-Hao", "" ], [ "Liu", "Tzu-Hsuan", "" ], [ "Chang", "Cheng-Shang", "" ], [ "Chen", "Wen-Tsuen", "" ] ]
In comparison with individual testing, group testing (also known as pooled testing) is more efficient in reducing the number of tests, potentially leading to tremendous cost reductions. As indicated in a recent article posted on the US FDA website, the group testing approach for COVID-19 has received a lot of interest lately. There are two key elements in a group testing technique: (i) the pooling matrix that directs samples to be pooled into groups, and (ii) the decoding algorithm that uses the group test results to reconstruct the status of each sample. In this paper, we propose a new family of pooling matrices from packing the pencil of lines (PPoL) in a finite projective plane. We compare their performance with various pooling matrices proposed in the literature, including 2D-pooling, P-BEST, and Tapestry, using the two-stage definite defectives (DD) decoding algorithm. By conducting extensive simulations for a range of prevalence rates up to 5%, our numerical results show that no single pooling matrix achieves the lowest relative cost over the whole range of prevalence rates. To optimize performance, one should choose the right pooling matrix depending on the prevalence rate. The family of PPoL matrices can dynamically adjust their column weights according to the prevalence rate and could be a better alternative to using a fixed pooling matrix.
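A minimal sketch of the definite defectives (DD) decoder named in the abstract, run on a random pooling matrix rather than a PPoL construction: items appearing in any negative test are ruled out, and a remaining item is declared defective when some positive test contains it alone. The matrix density and defective count are invented; note that DD never produces false positives, only possible misses.

```python
import numpy as np

rng = np.random.default_rng(3)

n_items, n_tests = 60, 20
pool = rng.random((n_tests, n_items)) < 0.15     # random pooling matrix
defective = np.zeros(n_items, bool)
defective[rng.choice(n_items, 3, replace=False)] = True
# Noiseless OR-channel tests: positive iff the pool holds a defective.
result = (pool.astype(int) @ defective.astype(int)) > 0

# DD step 1: any item appearing in a negative test is non-defective.
possible = ~((pool & ~result[:, None]).any(axis=0))
# DD step 2: a remaining item is definitely defective if some positive
# test contains it and no other remaining item.
declared = np.zeros(n_items, bool)
for t in np.flatnonzero(result):
    cand = np.flatnonzero(pool[t] & possible)
    if cand.size == 1:
        declared[cand[0]] = True

print("true defectives:    ", np.flatnonzero(defective))
print("declared defectives:", np.flatnonzero(declared))
```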
2003.11277
Soumyabrata Bhattacharjee
Soumyabrata Bhattacharjee (Royal School of Engineering & Technology, Guwahati, Assam, India)
Statistical investigation of relationship between spread of coronavirus disease (COVID-19) and environmental factors based on study of four mostly affected places of China and five mostly affected places of Italy
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
COVID-19 is a new coronavirus disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It originated in China in December 2019 and quickly started to spread within the country. On 31 December 2019, it was first reported to the World Health Organization (WHO) country office in China. Since then, it has spread to most countries around the globe. However, a belief has recently been gaining ground that the disease would subside during the summer, and this has not yet been properly investigated. In this paper, the relationship of the daily number of confirmed COVID-19 cases with three environmental factors, viz. maximum relative humidity (RH_max), maximum temperature (T_max) and highest wind speed (WS_max), is investigated statistically, taking the incubation period into account, for four of the most affected places in China, viz. Beijing, Chongqing, Shanghai, Wuhan, and five of the most affected places in Italy, viz. Bergamo, Cremona, Lodi, Milano. It is found that the relationships with maximum relative humidity and highest wind speed are mostly negligible, whereas the relationship with maximum temperature ranges from negligible to moderate.
[ { "created": "Wed, 25 Mar 2020 08:49:33 GMT", "version": "v1" } ]
2020-03-26
[ [ "Bhattacharjee", "Soumyabrata", "", "Royal School of Engineering & Technology,\n Guwahati, Assam, India" ] ]
COVID-19 is a new coronavirus disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It originated in China in December 2019 and quickly started to spread within the country. On 31 December 2019, it was first reported to the World Health Organization (WHO) country office in China. Since then, it has spread to most countries around the globe. However, a belief has recently been gaining ground that the disease would subside during the summer, and this has not yet been properly investigated. In this paper, the relationship of the daily number of confirmed COVID-19 cases with three environmental factors, viz. maximum relative humidity (RH_max), maximum temperature (T_max) and highest wind speed (WS_max), is investigated statistically, taking the incubation period into account, for four of the most affected places in China, viz. Beijing, Chongqing, Shanghai, Wuhan, and five of the most affected places in Italy, viz. Bergamo, Cremona, Lodi, Milano. It is found that the relationships with maximum relative humidity and highest wind speed are mostly negligible, whereas the relationship with maximum temperature ranges from negligible to moderate.
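A minimal sketch of the statistical comparison the abstract describes: correlate a daily weather series with confirmed-case counts shifted by an assumed incubation delay. The synthetic series, the 5-day lag, and the use of Pearson correlation are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# Illustrative daily series (60 days): maximum temperature and confirmed
# cases; values are synthetic, not the Chinese or Italian data.
t_max = 15 + 8 * rng.random(60)
cases = np.maximum(0, 50 - 1.5 * t_max + rng.normal(0, 5, 60)).round()

# Offset the case counts by an assumed incubation period so that today's
# weather is compared with cases reported `lag` days later.
lag = 5                                # assumed incubation delay (days)
r, p = pearsonr(t_max[:-lag], cases[lag:])
print(f"lagged Pearson r = {r:.2f} (p = {p:.3f})")
```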
2311.02232
Navid Mohammad Mirzaei
Navid Mohammad Mirzaei and Leili Shahriyari
Modeling Cancer Progression: An Integrated Workflow Extending Data-Driven Kinetic Models to Bio-Mechanical PDE Models
16 pages, 5 figures, Submitted to Physical Biology
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Computational modeling of cancer can help unveil dynamics and interactions that are hard to replicate experimentally. Thanks to the advancement in cancer databases and data analysis technologies, these models have become more robust than ever. There are many mathematical models which investigate cancer through different approaches, from sub-cellular to tissue scale, and from treatment to diagnostic points of view. In this study, we lay out a step-by-step methodology for a data-driven mechanistic model of the tumor microenvironment. We discuss data acquisition strategies, data preparation, parameter estimation, and sensitivity analysis techniques. Furthermore, we propose a possible approach to extend mechanistic ODE models to PDE models coupled with mechanical growth. The workflow discussed in this article can help understand the complex temporal and spatial interactions between cells and cytokines in the tumor microenvironment and their effect on tumor growth.
[ { "created": "Fri, 3 Nov 2023 20:42:34 GMT", "version": "v1" } ]
2023-11-07
[ [ "Mirzaei", "Navid Mohammad", "" ], [ "Shahriyari", "Leili", "" ] ]
Computational modeling of cancer can help unveil dynamics and interactions that are hard to replicate experimentally. Thanks to the advancement in cancer databases and data analysis technologies, these models have become more robust than ever. There are many mathematical models which investigate cancer through different approaches, from sub-cellular to tissue scale, and from treatment to diagnostic points of view. In this study, we lay out a step-by-step methodology for a data-driven mechanistic model of the tumor microenvironment. We discuss data acquisition strategies, data preparation, parameter estimation, and sensitivity analysis techniques. Furthermore, we propose a possible approach to extend mechanistic ODE models to PDE models coupled with mechanical growth. The workflow discussed in this article can help understand the complex temporal and spatial interactions between cells and cytokines in the tumor microenvironment and their effect on tumor growth.
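One step of the described workflow (parameter estimation for a mechanistic ODE against data) can be sketched on a deliberately small stand-in model; the logistic growth law, noise level, and initial guesses below are assumptions, not the paper's tumor-microenvironment system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def logistic_rhs(t, y, r, K):
    """Logistic tumor growth: a stand-in for the paper's larger
    data-driven ODE system of cells and cytokines."""
    return r * y * (1.0 - y / K)

def simulate(params, t_obs):
    r, K = params
    sol = solve_ivp(logistic_rhs, (t_obs[0], t_obs[-1]), [0.1],
                    args=(r, K), t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

# Synthetic 'measurements' generated from known parameters plus noise.
rng = np.random.default_rng(5)
t_obs = np.linspace(0, 20, 15)
data = simulate((0.5, 4.0), t_obs) + rng.normal(0, 0.05, t_obs.size)

# Parameter-estimation step of the workflow: nonlinear least squares.
fit = least_squares(lambda p: simulate(p, t_obs) - data, x0=[0.2, 2.0],
                    bounds=([0, 0], [5, 20]))
print("estimated (r, K):", np.round(fit.x, 2), " true: [0.5 4.0]")
```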
1807.08220
Liane Gabora
Liane Gabora
Creativity: Linchpin in the Quest for a Viable Theory of Cultural Evolution
13 pages; 2 tables; 1 figure; Accepted for publication in Current Opinion in Behavioral Sciences
Current Opinion in Behavioral Sciences, 27, 77-83 (2019)
null
null
q-bio.NC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper outlines the implications of neural-level accounts of insight, and models of the conceptual interactions that underlie creativity, for a theory of cultural evolution. Since elements of human culture exhibit cumulative, adaptive, open-ended change, it seems reasonable to view culture as an evolutionary process, one fueled by creativity. Associative memory models of creativity and mathematical models of how concepts combine and transform through interaction with a context, support a view of creativity that is incompatible with a Darwinian (selectionist) framework for cultural evolution, but compatible with a non-Darwinian (Self-Other Reorganization) framework. A theory of cultural evolution in which creativity is centre stage could provide the kind of integrative framework for the behavioral sciences that Darwin provided for the life sciences.
[ { "created": "Sun, 22 Jul 2018 01:14:57 GMT", "version": "v1" }, { "created": "Tue, 18 Sep 2018 03:25:47 GMT", "version": "v2" }, { "created": "Wed, 13 Mar 2019 21:16:43 GMT", "version": "v3" } ]
2019-03-15
[ [ "Gabora", "Liane", "" ] ]
This paper outlines the implications of neural-level accounts of insight, and models of the conceptual interactions that underlie creativity, for a theory of cultural evolution. Since elements of human culture exhibit cumulative, adaptive, open-ended change, it seems reasonable to view culture as an evolutionary process, one fueled by creativity. Associative memory models of creativity and mathematical models of how concepts combine and transform through interaction with a context, support a view of creativity that is incompatible with a Darwinian (selectionist) framework for cultural evolution, but compatible with a non-Darwinian (Self-Other Reorganization) framework. A theory of cultural evolution in which creativity is centre stage could provide the kind of integrative framework for the behavioral sciences that Darwin provided for the life sciences.
1309.7157
Veit Schw\"ammle
Veit Schw\"ammle, Ole N{\o}rregaard Jensen
A computational model for histone mark propagation reproduces the distribution of heterochromatin in different human cell types
24 pages,9 figures, 1 table + supplementary material
PLoS ONE 8(9): e73818
10.1371/journal.pone.0073818
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes, and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 Lysine 4 (H3K4me) and acetylation of several lysines, which are related to euchromatin and active genes. We show that the related histone modifications form antagonized domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effects of changes in the nucleation site distribution and in propagation rates were studied. The former occur mainly with the aim of (de-)activating single genes or gene groups, whereas the latter have the power to control the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
[ { "created": "Fri, 27 Sep 2013 08:55:55 GMT", "version": "v1" } ]
2013-09-30
[ [ "Schwämmle", "Veit", "" ], [ "Jensen", "Ole Nørregaard", "" ] ]
Chromatin is a highly compact and dynamic nuclear structure that consists of DNA and associated proteins. The main organizational unit is the nucleosome, which consists of a histone octamer with DNA wrapped around it. Histone proteins are implicated in the regulation of eukaryote genes, and they carry numerous reversible post-translational modifications that control DNA-protein interactions and the recruitment of chromatin binding proteins. Heterochromatin, the transcriptionally inactive part of the genome, is densely packed and contains histone H3 methylated at Lys 9 (H3K9me). The propagation of H3K9me in nucleosomes along the DNA in chromatin is antagonized by methylation of H3 Lysine 4 (H3K4me) and acetylation of several lysines, which are related to euchromatin and active genes. We show that the related histone modifications form antagonized domains on a coarse scale. These histone marks are assumed to be initiated within distinct nucleation sites in the DNA and to propagate bi-directionally. We propose a simple computer model that simulates the distribution of heterochromatin in human chromosomes. The simulations are in agreement with previously reported experimental observations from two different human cell lines. We reproduced different types of barriers between heterochromatin and euchromatin, providing a unified model for their function. The effects of changes in the nucleation site distribution and in propagation rates were studied. The former occur mainly with the aim of (de-)activating single genes or gene groups, whereas the latter have the power to control the transcriptional programs of entire chromosomes. Generally, the regulatory program of gene transcription is controlled by the distribution of nucleation sites along the DNA string.
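A minimal sketch of the model class the abstract describes: marks seeded at nucleation sites spread bi-directionally along a 1D nucleosome lattice, blocked by an antagonizing region and subject to turnover. Lattice size, site positions, and rates are invented; the paper's model and its comparison to cell-line data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1D lattice of nucleosomes: 0 = unmarked, 1 = H3K9me (silencing mark).
n, steps = 200, 2_000
state = np.zeros(n, dtype=int)
nucleation = [50, 140]          # assumed nucleation-site positions
antagonist = range(90, 110)     # assumed H3K4me/acetylation barrier region
p_spread, p_loss = 0.10, 0.02   # illustrative propagation/turnover rates

for _ in range(steps):
    state[nucleation] = 1                      # marks re-seed at nucleation sites
    i = rng.integers(1, n - 1)
    if state[i] == 1 and rng.random() < p_spread:
        j = i + rng.choice((-1, 1))            # bidirectional spreading
        if j not in antagonist:                # antagonizing marks block it
            state[j] = 1
    if rng.random() < p_loss:                  # stochastic mark turnover
        state[rng.integers(n)] = 0

print("".join("#" if s else "." for s in state))
# Heterochromatin domains ('#') grow around the nucleation sites.
```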
1704.07635
Onerva Korhonen
Onerva Korhonen (1,2), Heini Saarim\"aki (1), Enrico Glerean (1), Mikko Sams (1), Jari Saram\"aki (2) ((1) Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland, (2) Department of Computer Science, School of Science, Aalto University, Espoo, Finland)
Consistency of Regions of Interest as nodes of functional brain networks measured by fMRI
28 + 19 pages, 7 + 14 figures. Accepted for publication in Network Neuroscience
Network Neuroscience 1(3) (2017) 254-274
10.1162/NETN_a_00013
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The functional network approach, where fMRI BOLD time series are mapped to networks depicting functional relationships between brain areas, has opened new insights into the function of the human brain. In this approach, the choice of network nodes is of crucial importance. One option is to consider fMRI voxels as nodes. This results in a large number of nodes, making network analysis and interpretation of results challenging. A common alternative is to use pre-defined clusters of anatomically close voxels, Regions of Interest (ROIs). This approach assumes that voxels within ROIs are functionally similar. Because these two approaches result in different network structures, it is crucial to understand what happens to network connectivity when moving from the voxel level to the ROI level. We show that the consistency of ROIs, defined as the mean Pearson correlation coefficient between the time series of their voxels, varies widely in resting-state experimental data. Therefore the assumption of similar voxel dynamics within each ROI does not generally hold. Further, the time series of low-consistency ROIs may be highly correlated, resulting in spurious links in ROI-level networks. Based on these results, we recommend that averaging BOLD signals over anatomically defined ROIs should be carefully considered.
[ { "created": "Tue, 25 Apr 2017 11:23:12 GMT", "version": "v1" } ]
2017-11-10
[ [ "Korhonen", "Onerva", "" ], [ "Saarimäki", "Heini", "" ], [ "Glerean", "Enrico", "" ], [ "Sams", "Mikko", "" ], [ "Saramäki", "Jari", "" ] ]
The functional network approach, where fMRI BOLD time series are mapped to networks depicting functional relationships between brain areas, has opened new insights into the function of the human brain. In this approach, the choice of network nodes is of crucial importance. One option is to consider fMRI voxels as nodes. This results in a large number of nodes, making network analysis and interpretation of results challenging. A common alternative is to use pre-defined clusters of anatomically close voxels, Regions of Interest (ROIs). This approach assumes that voxels within ROIs are functionally similar. Because these two approaches result in different network structures, it is crucial to understand what happens to network connectivity when moving from the voxel level to the ROI level. We show that the consistency of ROIs, defined as the mean Pearson correlation coefficient between the time series of their voxels, varies widely in resting-state experimental data. Therefore the assumption of similar voxel dynamics within each ROI does not generally hold. Further, the time series of low-consistency ROIs may be highly correlated, resulting in spurious links in ROI-level networks. Based on these results, we recommend that averaging BOLD signals over anatomically defined ROIs should be carefully considered.
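The consistency measure is defined explicitly in the abstract, so it can be sketched directly: the mean Pearson correlation over all voxel pairs within an ROI. The synthetic "ROIs" below are placeholders standing in for high- and low-consistency cases.

```python
import numpy as np

rng = np.random.default_rng(7)

def roi_consistency(voxel_ts):
    """Consistency as defined in the paper: mean Pearson correlation over
    all pairs of voxel time series within an ROI (voxels x time)."""
    c = np.corrcoef(voxel_ts)
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()

# Two synthetic ROIs (20 voxels, 150 time points): one driven by a shared
# signal, one pure noise, standing in for high- and low-consistency ROIs.
t = rng.normal(size=150)
coherent = 0.8 * t + 0.6 * rng.normal(size=(20, 150))
noisy = rng.normal(size=(20, 150))

print(f"coherent ROI consistency: {roi_consistency(coherent):.2f}")
print(f"noisy ROI consistency:    {roi_consistency(noisy):.2f}")
```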
2101.03907
Gilles Bouchet
Antoine Giovanni, Thomas Radulesco, Gilles Bouchet (IUSTI), Alexia Mattei, Joana R\'evis, Estelle Bogdanski, Justin Michel
Transmission of droplet-conveyed infectious agents such as SARS-CoV-2 by speech and vocal exercises during speech therapy: preliminary experiment concerning airflow velocity
European Archives of Oto-Rhino-Laryngology, Springer Verlag, 2020
null
10.1007/s00405-020-06200-7
null
q-bio.QM physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: Infectious agents, such as SARS-CoV-2, can be carried by droplets expelled during breathing. The spatial dissemination of droplets varies according to their initial velocity. After a short literature review, our goal was to determine the velocity of the exhaled air during vocal exercises. Methods: A propylene glycol cloud produced by two e-cigarette users allowed visualization of the exhaled air emitted during vocal exercises. Airflow velocities were measured during the first 200 ms of a long exhalation, a sustained vowel /a/ and varied vocal exercises. For the long exhalation and the sustained vowel /a/, the decrease of airflow velocity was measured for up to 3 s. Results were compared with a Computational Fluid Dynamics (CFD) study using boundary conditions consistent with our experimental study. Results: Regarding the production of vowels, higher velocities were found in loud and whispered voices than in normal voice. Voiced consonants like /ʒ/ or /v/ generated higher velocities than vowels. Some voiceless consonants, e.g., /t/, generated high velocities, but the long exhalation had the highest velocities. Semi-occluded vocal tract exercises generated faster airflow velocities than loud speech, with a decreased velocity during voicing. The initial velocity decreased quickly, as was shown for a long exhalation and a sustained vowel /a/. Velocities were consistent with the CFD data. Conclusion: The initial velocity of the exhaled air is a key factor influencing droplet trajectories. Our study revealed that vocal exercises produce a slower airflow than a long exhalation. Speech therapy should, therefore, not be associated with an increased risk of contamination when standard recommendations are implemented.
[ { "created": "Wed, 6 Jan 2021 08:38:49 GMT", "version": "v1" } ]
2021-01-12
[ [ "Giovanni", "Antoine", "", "IUSTI" ], [ "Radulesco", "Thomas", "", "IUSTI" ], [ "Bouchet", "Gilles", "", "IUSTI" ], [ "Mattei", "Alexia", "" ], [ "Révis", "Joana", "" ], [ "Bogdanski", "Estelle", "" ], [ ...
Purpose: Infectious agents, such as SARS-CoV-2, can be carried by droplets expelled during breathing. The spatial dissemination of droplets varies according to their initial velocity. After a short literature review, our goal was to determine the velocity of the exhaled air during vocal exercises. Methods: A propylene glycol cloud produced by two e-cigarette users allowed visualization of the exhaled air emitted during vocal exercises. Airflow velocities were measured during the first 200 ms of a long exhalation, a sustained vowel /a/ and varied vocal exercises. For the long exhalation and the sustained vowel /a/, the decrease of airflow velocity was measured for up to 3 s. Results were compared with a Computational Fluid Dynamics (CFD) study using boundary conditions consistent with our experimental study. Results: Regarding the production of vowels, higher velocities were found in loud and whispered voices than in normal voice. Voiced consonants like /ʒ/ or /v/ generated higher velocities than vowels. Some voiceless consonants, e.g., /t/, generated high velocities, but the long exhalation had the highest velocities. Semi-occluded vocal tract exercises generated faster airflow velocities than loud speech, with a decreased velocity during voicing. The initial velocity decreased quickly, as was shown for a long exhalation and a sustained vowel /a/. Velocities were consistent with the CFD data. Conclusion: The initial velocity of the exhaled air is a key factor influencing droplet trajectories. Our study revealed that vocal exercises produce a slower airflow than a long exhalation. Speech therapy should, therefore, not be associated with an increased risk of contamination when standard recommendations are implemented.
q-bio/0409012
Peter Ashwin
Peter Ashwin and Jon Borresen
How to compute using globally coupled oscillators
null
null
null
null
q-bio.NC
null
Synchronization is known to play a vital role within many highly connected neural systems such as the olfactory systems of fish and insects. In this paper we show how one can robustly and effectively perform practical computations using small perturbations to a very simple globally coupled network of coupled oscillators. Computations are performed by exploiting the spatio-temporal dynamics of a robust attracting heteroclinic network (also referred to as `winnerless competition' dynamics). We use different cluster synchronization states to encode memory states and use this to design a simple multi-base counter. The simulations indicate that this gives a robust computational system exploiting the natural dynamics of the system.
[ { "created": "Thu, 9 Sep 2004 10:35:24 GMT", "version": "v1" } ]
2007-05-23
[ [ "Ashwin", "Peter", "" ], [ "Borresen", "Jon", "" ] ]
Synchronization is known to play a vital role within many highly connected neural systems such as the olfactory systems of fish and insects. In this paper we show how one can robustly and effectively perform practical computations using small perturbations to a very simple globally coupled network of coupled oscillators. Computations are performed by exploiting the spatio-temporal dynamics of a robust attracting heteroclinic network (also referred to as `winnerless competition' dynamics). We use different cluster synchronization states to encode memory states and use this to design a simple multi-base counter. The simulations indicate that this gives a robust computational system exploiting the natural dynamics of the system.
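For flavor, a sketch of the substrate the paper builds on, a globally coupled network of identical phase oscillators; note that plain Kuramoto coupling, as below, only produces full synchronization, while the cluster states and heteroclinic switching exploited in the paper require a more elaborate coupling function. All parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kuramoto(t, theta, K, omega):
    """Globally (all-to-all) coupled phase oscillators:
    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]
    return omega + (K / theta.size) * np.sin(diff).sum(axis=1)

rng = np.random.default_rng(8)
n = 5
theta0 = rng.uniform(0, 2 * np.pi, n)
omega = np.ones(n)                       # identical oscillators
sol = solve_ivp(kuramoto, (0, 200), theta0, args=(1.0, omega), rtol=1e-8)

final = np.mod(sol.y[:, -1], 2 * np.pi)
print("final phases:", np.round(final, 2))
# Identical frequencies with attractive coupling synchronize fully;
# cluster states need a different coupling function, as in the paper.
```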
1012.5882
Peter Waddell
Peter J. Waddell, Xi Tan and Ishita Khan
What use are Exponential Weights for flexi-Weighted Least Squares Phylogenetic Trees?
16 pages, 7 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The method of flexi-Weighted Least Squares on evolutionary trees uses simple polynomial or exponential functions of the evolutionary distance in place of model-based variances. This has the advantage that unexpected deviations from additivity can be modeled in a more flexible way. At present, only polynomial weights have been used. However, a general family of exponential weights is desirable to compare with polynomial weights and to potentially exploit recent insights into fast least squares edge length estimation on trees. Here we describe families of weights that are multiplicative on trees, along with measures of fit of data to tree. It is shown that polynomial, but also multiplicative, weights can approximate the model-based variance of evolutionary distances well. Both models are fitted to evolutionary data from yeast genomes, and while the polynomial weights model fits better, the exponential weights model can fit much better than ordinary least squares. Iterated least squares is evaluated and is seen to converge quickly and with minimal change in the fit statistics when the data are in the range expected for useful evolutionary distances and simple Markov models of character change. In summary, both polynomial and exponential weighted least squares work well and justify further investment in developing the fastest possible algorithms for evaluating evolutionary trees.
[ { "created": "Wed, 29 Dec 2010 07:43:13 GMT", "version": "v1" } ]
2010-12-30
[ [ "Waddell", "Peter J.", "" ], [ "Tan", "Xi", "" ], [ "Khan", "Ishita", "" ] ]
The method of flexi-Weighted Least Squares on evolutionary trees uses simple polynomial or exponential functions of the evolutionary distance in place of model-based variances. This has the advantage that unexpected deviations from additivity can be modeled in a more flexible way. At present, only polynomial weights have been used. However, a general family of exponential weights is desirable to compare with polynomial weights and to potentially exploit recent insights into fast least squares edge length estimation on trees. Here we describe families of weights that are multiplicative on trees, along with measures of fit of data to tree. It is shown that polynomial, but also multiplicative, weights can approximate the model-based variance of evolutionary distances well. Both models are fitted to evolutionary data from yeast genomes, and while the polynomial weights model fits better, the exponential weights model can fit much better than ordinary least squares. Iterated least squares is evaluated and is seen to converge quickly and with minimal change in the fit statistics when the data are in the range expected for useful evolutionary distances and simple Markov models of character change. In summary, both polynomial and exponential weighted least squares work well and justify further investment in developing the fastest possible algorithms for evaluating evolutionary trees.
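A minimal sketch of weighted least squares on a fixed tree, using a quartet whose path structure gives the design matrix: edge lengths solve the weighted normal equations, and the weighting families compared in the paper enter through the variance model. The distances, the exponent P, and the rate lambda below are invented for illustration.

```python
import numpy as np

# Quartet tree ((a,b),(c,d)): edges [ea, eb, ec, ed, e_internal].
# Rows: pairs ab, ac, ad, bc, bd, cd; entries mark edges on each path.
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 1],
              [1, 0, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 1],
              [0, 0, 1, 1, 1]], float)
d = np.array([0.30, 0.52, 0.55, 0.50, 0.53, 0.25])  # distances (made up)

def wls_edges(A, d, weights):
    """Weighted least-squares edge lengths: solve (A^T W A) x = A^T W d,
    where W is diagonal with entries 1/variance."""
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ d)

# Polynomial weighting (variance ~ d^P) vs exponential weighting
# (variance ~ exp(lambda*d)); P and lambda are illustrative choices.
for name, w in [("OLS       ", np.ones_like(d)),
                ("poly P=2  ", d ** -2.0),
                ("exp l=5.0 ", np.exp(-5.0 * d))]:
    print(name, np.round(wls_edges(A, d, w), 3))
```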
2402.15888
Caglar Koca
Caglar Koca and Ozgur B. Akan
Modelling 1D Partially Absorbing Boundaries for Brownian Molecular Communication Channels
null
null
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Molecular Communication (MC) architectures suffer from molecular build-up in the channel if they do not have appropriate reuptake mechanisms. The molecular build-up either leads to intersymbol interference (ISI) or reduces the transmission rate. To measure the molecular build-up, we derive analytic expressions for the incidence rate and absorption rate for one-dimensional MC channels where molecular dispersion obeys Brownian motion. We verify each of our key results with Monte Carlo simulations. Our results contribute to the development of more complicated models and analytic expressions to measure the molecular build-up and the impact of ISI in MC.
[ { "created": "Sat, 24 Feb 2024 19:45:05 GMT", "version": "v1" } ]
2024-02-27
[ [ "Koca", "Caglar", "" ], [ "Akan", "Ozgur B.", "" ] ]
Molecular Communication (MC) architectures suffer from molecular build-up in the channel if they do not have appropriate reuptake mechanisms. The molecular build-up either leads to intersymbol interference (ISI) or reduces the transmission rate. To measure the molecular build-up, we derive analytic expressions for the incidence rate and absorption rate for one-dimensional MC channels where molecular dispersion obeys Brownian motion. We verify each of our key results with Monte Carlo simulations. Our results contribute to the development of more complicated models and analytic expressions to measure the molecular build-up and the impact of ISI in MC.
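A hedged Monte Carlo sketch of the setup described above: one-dimensional Brownian particles released at x0, with a wall at x = 0 that absorbs each arriving particle with some probability and reflects it otherwise. The per-incidence absorption probability, time step, and all other values are assumptions for illustration, not the paper's discretisation of the boundary condition.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=20_000, steps=2_000, dt=1e-3, D=1.0, x0=1.0, p_absorb=0.5):
    """1D Brownian particles with a partially absorbing wall at x = 0.
    Each wall incidence absorbs with probability p_absorb, else reflects."""
    x = np.full(n, x0)
    alive = np.ones(n, dtype=bool)
    t_abs = np.full(n, np.inf)
    sigma = np.sqrt(2.0 * D * dt)
    for k in range(steps):
        x[alive] += sigma * rng.standard_normal(alive.sum())
        hit = alive & (x <= 0.0)
        absorbed = hit & (rng.random(n) < p_absorb)
        t_abs[absorbed] = k * dt
        alive &= ~absorbed
        x[hit & alive] *= -1.0       # reflect the surviving incidences
    return t_abs

t = simulate()
print("fraction absorbed:", np.isfinite(t).mean())
```

Histogramming t then yields an empirical absorption-rate curve that analytic expressions of the kind derived in the paper can be checked against.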
0709.1189
Natalia Kudryavtseva
N. P. Bondar, I. L. Kovalenko, D. F. Avgustinovich, A. G. Khamoyan, N. N. Kudryavtseva
Effect of THz-radiation on Behavior of Male Mice
6 pages, 1 figure, 2 tables
null
null
null
q-bio.OT
null
The effect of terahertz radiation (3.6 THz, 81.5 µm, 15 mV) on some behavioral patterns of intact mice has been investigated. In the home cage, mice demonstrated avoidance of the laser beam and enhanced replacement activity in free behavior. Animals irradiated for 30 minutes manifested an increased level of anxiety, which was evaluated in the plus-maze test on the day following irradiation.
[ { "created": "Sat, 8 Sep 2007 07:13:00 GMT", "version": "v1" } ]
2007-09-11
[ [ "Bondar", "N. P.", "" ], [ "Kovalenko", "I. L.", "" ], [ "Avgustinovich", "D. F.", "" ], [ "Khamoyan", "A. G.", "" ], [ "Kudryavtseva", "N. N.", "" ] ]
The effect of terahertz radiation (3.6 THz, 81.5 µm, 15 mV) on some behavioral patterns of intact mice has been investigated. In the home cage, mice demonstrated avoidance of the laser beam and enhanced replacement activity in free behavior. Animals irradiated for 30 minutes manifested an increased level of anxiety, which was evaluated in the plus-maze test on the day following irradiation.
1606.02737
Cameron Mura
Charles E. McAnany, Cameron Mura
Claws, Disorder, and Conformational Dynamics of the C-terminal Region of Human Desmoplakin
68 pages (47 pp main text + 21 pp of Supporting Information ); 6 figures and 1 table in the main text; in press
The Journal of Physical Chemistry B (2016)
10.1021/acs.jpcb.6b03261
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multicellular organisms consist of cells that interact via elaborate adhesion complexes. Desmosomes are membrane-associated adhesion complexes that mechanically tether the cytoskeletal intermediate filaments (IFs) between two adjacent cells, creating a network of tough connections in tissues such as skin and heart. Desmoplakin (DP) is the key desmosomal protein that binds IFs, and the DP-IF association poses a quandary: desmoplakin must stably and tightly bind IFs to maintain the structural integrity of the desmosome. Yet, newly synthesized DP must traffic along the cytoskeleton to the site of nascent desmosome assembly without 'sticking' to the IF network, implying weak or transient DP--IF contacts. Recent work reveals that these contacts are modulated by post-translational modifications (PTMs) in DP's C-terminal tail. Using molecular dynamics simulations, we have elucidated the structural basis of these PTM-induced effects. Our simulations, nearing 2 microseconds in aggregate, indicate that phosphorylation of S2849 induces an 'arginine claw' in desmoplakin's C-terminal tail (DPCTT). If a key arginine, R2834, is methylated, the DPCTT preferentially samples conformations that are geometrically well-suited as substrates for processive phosphorylation by the cognate kinase GSK3. We suggest that DPCTT is a molecular switch that modulates, via its conformational dynamics, DP's efficacy as a substrate for GSK3. Finally, we show that the fluctuating DPCTT can contact other parts of DP, suggesting a competitive binding mechanism for the modulation of DP--IF interactions.
[ { "created": "Wed, 8 Jun 2016 20:17:04 GMT", "version": "v1" } ]
2018-10-01
[ [ "McAnany", "Charles E.", "" ], [ "Mura", "Cameron", "" ] ]
Multicellular organisms consist of cells that interact via elaborate adhesion complexes. Desmosomes are membrane-associated adhesion complexes that mechanically tether the cytoskeletal intermediate filaments (IFs) between two adjacent cells, creating a network of tough connections in tissues such as skin and heart. Desmoplakin (DP) is the key desmosomal protein that binds IFs, and the DP-IF association poses a quandary: desmoplakin must stably and tightly bind IFs to maintain the structural integrity of the desmosome. Yet, newly synthesized DP must traffic along the cytoskeleton to the site of nascent desmosome assembly without 'sticking' to the IF network, implying weak or transient DP--IF contacts. Recent work reveals that these contacts are modulated by post-translational modifications (PTMs) in DP's C-terminal tail. Using molecular dynamics simulations, we have elucidated the structural basis of these PTM-induced effects. Our simulations, nearing 2 microseconds in aggregate, indicate that phosphorylation of S2849 induces an 'arginine claw' in desmoplakin's C-terminal tail (DPCTT). If a key arginine, R2834, is methylated, the DPCTT preferentially samples conformations that are geometrically well-suited as substrates for processive phosphorylation by the cognate kinase GSK3. We suggest that DPCTT is a molecular switch that modulates, via its conformational dynamics, DP's efficacy as a substrate for GSK3. Finally, we show that the fluctuating DPCTT can contact other parts of DP, suggesting a competitive binding mechanism for the modulation of DP--IF interactions.
1702.00632
Luca Ciandrini
Lucas D. Fernandes, Alessandro P.S. de Moura and Luca Ciandrini
Gene length as a regulator for ribosome recruitment and protein synthesis: theoretical insights
null
Scientific Reports 7, Article number: 17409 (2017)
10.1038/s41598-017-17618-1
null
q-bio.SC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein synthesis rates are determined, at the translational level, by properties of the transcript's sequence. The efficiency of an mRNA can be tuned by varying the ribosome binding sites controlling the recruitment of the ribosomes, or the codon usage establishing the speed of protein elongation. In this work we propose transcript length as a further key determinant of translation efficiency. Based on a physical model that considers the kinetics of ribosomes advancing on the mRNA and diffusing in its surroundings, as well as mRNA circularisation and ribosome drop-off, we explain how the transcript length may play a central role in establishing ribosome recruitment and the overall translation rate of an mRNA. According to our results, the proximity of the 3' end to the ribosomal recruitment site of the mRNA could induce a feedback in the translation process that would favour the recycling of ribosomes. We also demonstrate how this process may be involved in shaping the experimental ribosome density-gene length dependence. Finally, we argue that cells could exploit this mechanism to adjust and balance the usage of their ribosomal resources.
[ { "created": "Thu, 2 Feb 2017 11:57:32 GMT", "version": "v1" }, { "created": "Mon, 3 Jul 2017 09:39:44 GMT", "version": "v2" }, { "created": "Tue, 12 Dec 2017 15:47:08 GMT", "version": "v3" } ]
2017-12-14
[ [ "Fernandes", "Lucas D.", "" ], [ "de Moura", "Alessandro P. S.", "" ], [ "Ciandrini", "Luca", "" ] ]
Protein synthesis rates are determined, at the translational level, by properties of the transcript's sequence. The efficiency of an mRNA can be tuned by varying the ribosome binding sites controlling the recruitment of the ribosomes, or the codon usage establishing the speed of protein elongation. In this work we propose transcript length as a further key determinant of translation efficiency. Based on a physical model that considers the kinetics of ribosomes advancing on the mRNA and diffusing in its surroundings, as well as mRNA circularisation and ribosome drop-off, we explain how the transcript length may play a central role in establishing ribosome recruitment and the overall translation rate of an mRNA. According to our results, the proximity of the 3' end to the ribosomal recruitment site of the mRNA could induce a feedback in the translation process that would favour the recycling of ribosomes. We also demonstrate how this process may be involved in shaping the experimental ribosome density-gene length dependence. Finally, we argue that cells could exploit this mechanism to adjust and balance the usage of their ribosomal resources.
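One way to see the proposed feedback in a toy model: a TASEP-style lattice in which each termination event transiently boosts the initiation rate, a crude stand-in for 3'-to-5' ribosome recycling on a circularised mRNA. All rates, the boost size, and its decay are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_density(L=60, alpha=0.10, beta=1.0, recycle=0.3, steps=300_000):
    """TASEP with a crude recycling feedback: termination at the 3' end
    transiently raises initiation at the 5' end (all rates assumed)."""
    lattice = np.zeros(L, dtype=bool)
    boost, occ = 0.0, np.zeros(L)
    for _ in range(steps):
        i = rng.integers(-1, L)            # -1 encodes an initiation attempt
        if i == -1:
            if not lattice[0] and rng.random() < min(1.0, alpha + boost):
                lattice[0] = True
        elif i == L - 1:
            if lattice[-1] and rng.random() < beta:
                lattice[-1] = False
                boost += recycle           # a recycled ribosome aids initiation
        elif lattice[i] and not lattice[i + 1]:
            lattice[i], lattice[i + 1] = False, True
        boost *= 0.999                     # the boost decays between events
        occ += lattice
    return occ / steps

print("mean density, first 10 codons:", np.round(mean_density()[:10], 3))
```

Shortening L in this toy strengthens the effective per-ribosome feedback, which is the qualitative length dependence the abstract argues for.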
q-bio/0601041
Eivind Almaas
E. Almaas, Z.N. Oltvai, and A.-L. Barabasi
The Activity Reaction Core and Plasticity of Metabolic Networks
21 pages, 4 figures, supp. mat. available at http://www.nd.edu/~networks
PLoS Comput Biol 1, e68 (2005)
10.1371/journal.pcbi.0010068
null
q-bio.MN cond-mat.dis-nn q-bio.CB q-bio.QM
null
Understanding the system-level adaptive changes taking place in an organism in response to variations in the environment is a key issue of contemporary biology. Current modeling approaches such as constraint-based flux balance analysis (FBA) have proved highly successful in analyzing the capabilities of cellular metabolism, including its capacity to predict deletion phenotypes, the ability to calculate the relative flux values of metabolic reactions, and the properties of alternate optimal growth states. Here, we use FBA to thoroughly assess the activity of the metabolism of Escherichia coli, Helicobacter pylori, and Saccharomyces cerevisiae in 30,000 diverse simulated environments. We identify a set of metabolic reactions forming a connected metabolic core that carry non-zero fluxes under all growth conditions, and whose flux variations are highly correlated. Furthermore, we find that the enzymes catalyzing the core reactions display a considerably higher fraction of phenotypic essentiality and evolutionary conservation than those catalyzing non-core reactions. Cellular metabolism is characterized by a large number of species-specific conditionally-active reactions organized around an evolutionarily conserved, always-active metabolic core. Finally, we find that most current antibiotics interfering with bacterial metabolism target the core enzymes, indicating that our findings may have important implications for antimicrobial drug target discovery.
[ { "created": "Tue, 24 Jan 2006 07:40:07 GMT", "version": "v1" } ]
2007-05-23
[ [ "Almaas", "E.", "" ], [ "Oltvai", "Z. N.", "" ], [ "Barabasi", "A. -L.", "" ] ]
Understanding the system-level adaptive changes taking place in an organism in response to variations in the environment is a key issue of contemporary biology. Current modeling approaches such as constraint-based flux balance analysis (FBA) have proved highly successful in analyzing the capabilities of cellular metabolism, including its capacity to predict deletion phenotypes, the ability to calculate the relative flux values of metabolic reactions, and the properties of alternate optimal growth states. Here, we use FBA to thoroughly assess the activity of the metabolism of Escherichia coli, Helicobacter pylori, and Saccharomyces cerevisiae in 30,000 diverse simulated environments. We identify a set of metabolic reactions forming a connected metabolic core that carry non-zero fluxes under all growth conditions, and whose flux variations are highly correlated. Furthermore, we find that the enzymes catalyzing the core reactions display a considerably higher fraction of phenotypic essentiality and evolutionary conservation than those catalyzing non-core reactions. Cellular metabolism is characterized by a large number of species-specific conditionally-active reactions organized around an evolutionarily conserved, always-active metabolic core. Finally, we find that most current antibiotics interfering with bacterial metabolism target the core enzymes, indicating that our findings may have important implications for antimicrobial drug target discovery.
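The FBA computation itself is a linear program: maximize a biomass flux subject to the steady-state stoichiometric constraint S v = 0 and flux bounds that encode the environment. A self-contained toy with a two-metabolite, three-reaction network (entirely made up):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1: -> A,  R2: A -> B,  R3: B -> biomass.
# Rows of S are metabolites A and B; columns are reactions R1..R3.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
c = np.array([0.0, 0.0, -1.0])          # linprog minimizes, so negate R3
bounds = [(0, 10), (0, 5), (0, None)]   # the "environment" = uptake bounds

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun, " fluxes:", res.x)
```

Screening many environments, as in the study, amounts to re-solving this program with resampled bounds; reactions whose optimal flux is non-zero in every environment would constitute the "core" in this toy.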
1601.02748
Gregory Giecold
Gregory Giecold, Eugenio Marco, Lorenzo Trippa and Guo-Cheng Yuan
Robust Lineage Reconstruction from High-Dimensional Single-Cell Data
22 pages
null
null
null
q-bio.QM stat.AP stat.CO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell gene expression data provide invaluable resources for the systematic characterization of cellular hierarchy in multi-cellular organisms. However, cell lineage reconstruction is still often associated with significant uncertainty due to technological constraints. Such uncertainties have not been taken into account in current methods. We present ECLAIR, a novel computational method for the statistical inference of cell lineage relationships from single-cell gene expression data. ECLAIR uses an ensemble approach to improve the robustness of lineage predictions, and provides a quantitative estimate of the uncertainty of lineage branchings. We show that the application of ECLAIR to published datasets successfully reconstructs known lineage relationships and significantly improves the robustness of predictions. In conclusion, ECLAIR is a powerful bioinformatics tool for single-cell data analysis. It can be used for robust lineage reconstruction with a quantitative estimate of prediction accuracy.
[ { "created": "Tue, 12 Jan 2016 07:01:55 GMT", "version": "v1" } ]
2016-01-13
[ [ "Giecold", "Gregory", "" ], [ "Marco", "Eugenio", "" ], [ "Trippa", "Lorenzo", "" ], [ "Yuan", "Guo-Cheng", "" ] ]
Single-cell gene expression data provide invaluable resources for the systematic characterization of cellular hierarchy in multi-cellular organisms. However, cell lineage reconstruction is still often associated with significant uncertainty due to technological constraints. Such uncertainties have not been taken into account in current methods. We present ECLAIR, a novel computational method for the statistical inference of cell lineage relationships from single-cell gene expression data. ECLAIR uses an ensemble approach to improve the robustness of lineage predictions, and provides a quantitative estimate of the uncertainty of lineage branchings. We show that the application of ECLAIR to published datasets successfully reconstructs known lineage relationships and significantly improves the robustness of predictions. In conclusion, ECLAIR is a powerful bioinformatics tool for single-cell data analysis. It can be used for robust lineage reconstruction with a quantitative estimate of prediction accuracy.
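The ensemble idea at the heart of such methods can be illustrated with a co-association matrix: cluster many perturbed subsamples, count how often each pair of cells is grouped together, and read off both a consensus grouping and an uncertainty estimate. A sketch with made-up data, not ECLAIR's actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy "cells": two Gaussian populations in a 5-D expression space
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(4.0, 1.0, (50, 5))])
n = len(X)

coassoc, counts = np.zeros((n, n)), np.zeros((n, n))
for _ in range(50):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)  # perturb by subsampling
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
    same = (labels[:, None] == labels[None, :]).astype(float)
    coassoc[np.ix_(idx, idx)] += same
    counts[np.ix_(idx, idx)] += 1.0
freq = np.divide(coassoc, counts, out=np.zeros_like(coassoc), where=counts > 0)

# entries near 0 or 1 are confident; intermediate values flag uncertain pairs
print("fraction of confidently assigned pairs:", (np.abs(freq - 0.5) > 0.4).mean())
```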
1411.0940
Nico Franz
Nico M. Franz, Mingmin Chen, Shizhuo Yu, Parisa Kianmajd, Shawn Bowers, Bertram Ludaescher
Reasoning over Taxonomic Change: Exploring Alignments for the Perelleschus Use Case
30 pages, 16 figures
null
10.1371/journal.pone.0118247
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations.
[ { "created": "Mon, 3 Nov 2014 20:54:28 GMT", "version": "v1" } ]
2015-06-23
[ [ "Franz", "Nico M.", "" ], [ "Chen", "Mingmin", "" ], [ "Yu", "Shizhuo", "" ], [ "Kianmajd", "Parisa", "" ], [ "Bowers", "Shawn", "" ], [ "Ludaescher", "Bertram", "" ] ]
Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An Open Source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations.
1710.09499
Maria Kleshnina
Maria Kleshnina, Jerzy A. Filar, Vladimir Ejov, and Jody C. McKerral
Evolutionary games under incompetence
17 pages, 3 figures
null
null
null
q-bio.PE cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adaptation process of a species to a new environment is a significant area of study in biology. As part of natural selection, adaptation is a mutation process which improves the survival skills and reproductive functions of a species. Here, we investigate this process by combining the idea of incompetence with evolutionary game theory. In the sense of evolution, incompetence and training can be interpreted as a special learning process. Focusing on the social side of the problem, we analyze the influence of incompetence on the behavior of species. We introduce an incompetence parameter into a learning function in a single-population game and analyze its effect on the outcome of the replicator dynamics. Incompetence can change the outcome of the game and its dynamics, indicating its significance within what are inherently imperfect natural systems.
[ { "created": "Thu, 26 Oct 2017 00:30:18 GMT", "version": "v1" } ]
2017-10-27
[ [ "Kleshnina", "Maria", "" ], [ "Filar", "Jerzy A.", "" ], [ "Ejov", "Vladimir", "" ], [ "McKerral", "Jody C.", "" ] ]
The adaptation process of a species to a new environment is a significant area of study in biology. As part of natural selection, adaptation is a mutation process which improves the survival skills and reproductive functions of a species. Here, we investigate this process by combining the idea of incompetence with evolutionary game theory. In the sense of evolution, incompetence and training can be interpreted as a special learning process. Focusing on the social side of the problem, we analyze the influence of incompetence on the behavior of species. We introduce an incompetence parameter into a learning function in a single-population game and analyze its effect on the outcome of the replicator dynamics. Incompetence can change the outcome of the game and its dynamics, indicating its significance within what are inherently imperfect natural systems.
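A minimal numerical sketch, assuming the common formulation in which incompetence enters as an execution-error matrix Q(lam) and the effective payoff matrix becomes Q A Q'; the game below and the form of Q are illustrative choices, not the paper's examples.

```python
import numpy as np

def replicator(x, A, dt=0.01, steps=50_000):
    """Euler integration of dx_i/dt = x_i ((A x)_i - x' A x)."""
    for _ in range(steps):
        f = A @ x
        x = x + dt * x * (f - x @ f)
        x = np.clip(x, 0.0, None)
        x /= x.sum()
    return x

A = np.array([[0.0, 2.0],
              [1.0, 0.0]])        # a toy two-strategy game (made up)

def Q(lam):
    # Execution-error ("incompetence") matrix: with probability lam the
    # intended strategy is replaced by a uniformly random one.
    return (1.0 - lam) * np.eye(2) + lam * np.full((2, 2), 0.5)

x0 = np.array([0.3, 0.7])
for lam in (0.0, 0.5, 0.9):
    print(lam, replicator(x0.copy(), Q(lam) @ A @ Q(lam).T))
```

As lam grows, the effective payoffs flatten and the replicator equilibrium shifts, and for strong incompetence the interior equilibrium can disappear entirely, which is the kind of outcome change the abstract describes.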
2010.12644
David Lipshutz
David Lipshutz, Charlie Windolf, Siavash Golkar, Dmitri B. Chklovskii
A biologically plausible neural network for Slow Feature Analysis
17 pages, 7 figures
null
null
null
q-bio.NC cs.LG cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.
[ { "created": "Fri, 23 Oct 2020 20:09:03 GMT", "version": "v1" } ]
2020-10-27
[ [ "Lipshutz", "David", "" ], [ "Windolf", "Charlie", "" ], [ "Golkar", "Siavash", "" ], [ "Chklovskii", "Dmitri B.", "" ] ]
Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation. We validate Bio-SFA on naturalistic stimuli.
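For reference, the batch (offline) solution that an online algorithm like Bio-SFA approximates can be written in a few lines: whiten the inputs, then take the directions in which the time derivative has the least variance. The toy data below are an illustrative assumption.

```python
import numpy as np

def sfa(X, n_out=1):
    """Batch SFA sketch: X has shape (T, d); returns the n_out slowest features."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    Z = (X @ evecs) / np.sqrt(evals)       # whitened signals
    dZ = np.diff(Z, axis=0)                # discrete-time derivative
    _, U = np.linalg.eigh(np.cov(dZ.T))    # ascending eigenvalues: slowest first
    return Z @ U[:, :n_out]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0 * np.pi, 4000)
slow = np.sin(0.25 * t)                    # hidden slow source
src = np.c_[slow, rng.standard_normal(t.size)]
X = src @ rng.standard_normal((2, 3)) + 0.05 * rng.standard_normal((t.size, 3))
y = sfa(X)[:, 0]
print("|corr| with hidden slow source:", abs(np.corrcoef(y, slow)[0, 1]))
```

A biologically plausible version replaces these batch eigendecompositions with online updates that a network with local synaptic plasticity can realize.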
q-bio/0601046
Thomas R. Weikl
Lothar Reich and Thomas R. Weikl
Substructural cooperativity and parallel versus sequential events during protein unfolding
8 pages, 7 figures (1 figure in color)
null
null
null
q-bio.BM cond-mat.soft
null
According to the 'old view', proteins fold along well-defined sequential pathways, whereas the 'new view' sees protein folding as a highly parallel stochastic process on funnel-shaped energy landscapes. We have analyzed parallel and sequential processes on a large number of Molecular Dynamics unfolding trajectories of the protein CI2 at high temperatures. Using rigorous statistical measures, we quantify the degree of sequentiality on two structural levels. The unfolding process is highly parallel on the microstructural level of individual contacts. On a coarser, macrostructural level of contact clusters, characteristic parallel and sequential events emerge. These characteristic events can be understood from loop-closure dependencies between the contact clusters. A correlation analysis of the unfolding times of the contacts reveals a high degree of substructural cooperativity within the contact clusters.
[ { "created": "Sat, 28 Jan 2006 17:26:18 GMT", "version": "v1" } ]
2007-05-23
[ [ "Reich", "Lothar", "" ], [ "Weikl", "Thomas R.", "" ] ]
According to the 'old view', proteins fold along well-defined sequential pathways, whereas the 'new view' sees protein folding as a highly parallel stochastic process on funnel-shaped energy landscapes. We have analyzed parallel and sequential processes on a large number of Molecular Dynamics unfolding trajectories of the protein CI2 at high temperatures. Using rigorous statistical measures, we quantify the degree of sequentiality on two structural levels. The unfolding process is highly parallel on the microstructural level of individual contacts. On a coarser, macrostructural level of contact clusters, characteristic parallel and sequential events emerge. These characteristic events can be understood from loop-closure dependencies between the contact clusters. A correlation analysis of the unfolding times of the contacts reveals a high degree of substructural cooperativity within the contact clusters.
1410.7004
Steven Kelk
Leo van Iersel, Celine Scornavacca, Steven Kelk
Exact reconciliation of undated trees
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconciliation methods aim at recovering macro-evolutionary events and at localizing them in the species history, by observing discrepancies between gene family trees and species trees. In this article we introduce an Integer Linear Programming (ILP) approach for the NP-hard problem of computing a most parsimonious time-consistent reconciliation of a gene tree with a species tree when dating information on speciations is not available. The ILP formulation, which builds upon the DTL model, returns a most parsimonious reconciliation ranging over all possible datings of the nodes of the species tree. By studying its performance on plausible simulated data, we conclude that the ILP approach is significantly faster than a brute-force search through the space of all possible species tree datings. Although the ILP formulation is currently limited to small trees, we believe that it is an important proof of concept which opens the door to the possibility of developing an exact, parsimony-based approach to dating species trees. The software (ILPEACE) is freely available for download.
[ { "created": "Sun, 26 Oct 2014 09:17:49 GMT", "version": "v1" } ]
2014-10-28
[ [ "van Iersel", "Leo", "" ], [ "Scornavacca", "Celine", "" ], [ "Kelk", "Steven", "" ] ]
Reconciliation methods aim at recovering macro-evolutionary events and at localizing them in the species history, by observing discrepancies between gene family trees and species trees. In this article we introduce an Integer Linear Programming (ILP) approach for the NP-hard problem of computing a most parsimonious time-consistent reconciliation of a gene tree with a species tree when dating information on speciations is not available. The ILP formulation, which builds upon the DTL model, returns a most parsimonious reconciliation ranging over all possible datings of the nodes of the species tree. By studying its performance on plausible simulated data, we conclude that the ILP approach is significantly faster than a brute-force search through the space of all possible species tree datings. Although the ILP formulation is currently limited to small trees, we believe that it is an important proof of concept which opens the door to the possibility of developing an exact, parsimony-based approach to dating species trees. The software (ILPEACE) is freely available for download.
q-bio/0510010
Cristian Degli Esposti Boschi
E. Louis, C. Degli Esposti Boschi, G. J. Ortega, E. Fernandez
Role of transport performance on neuron cell morphology
9 pages with 3 figures, submitted to Neuroscience Letters
FASEB Journal 21, 866 (2007)
10.1096/fj.06-5977com
null
q-bio.CB physics.bio-ph q-bio.SC
null
The compartmental model is a basic tool for studying signal propagation in neurons, and, if the model parameters are adequately defined, it can also be of help in the study of electrical or fluid transport. Here we show that the input resistance, in different networks which simulate the passive properties of neurons, is the result of an interplay between the relevant conductances, morphology and size. These results suggest that neurons must grow in such a way that facilitates the current flow. We propose that power consumption is an important factor by which neurons attain their final morphological appearance.
[ { "created": "Wed, 5 Oct 2005 11:06:59 GMT", "version": "v1" } ]
2007-05-23
[ [ "Fernandez", "E. Louis C. Degli Esposti Boschi G. J. Ortega E.", "" ] ]
The compartmental model is a basic tool for studying signal propagation in neurons, and, if the model parameters are adequately defined, it can also be of help in the study of electrical or fluid transport. Here we show that the input resistance, in different networks which simulate the passive properties of neurons, is the result of an interplay between the relevant conductances, morphology and size. These results suggest that neurons must grow in such a way that facilitates the current flow. We propose that power consumption is an important factor by which neurons attain their final morphological appearance.
1510.04612
Thomas Hopf
Thomas A. Hopf, John B. Ingraham, Frank J. Poelwijk, Michael Springer, Chris Sander, Debora S. Marks
Quantification of the effect of mutations using a global probability model of natural sequence variation
null
null
10.1038/nbt.3769
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern biomedicine is challenged to predict the effects of genetic variation. Systematic functional assays of point mutants of proteins have provided valuable empirical information, but vast regions of sequence space remain unexplored. Fortunately, the mutation-selection process of natural evolution has recorded rich information in the diversity of natural protein sequences. Here, building on probabilistic models for correlated amino-acid substitutions that have been successfully applied to determine the three-dimensional structures of proteins, we present a statistical approach for quantifying the contribution of residues and their interactions to protein function, using a statistical energy, the evolutionary Hamiltonian. We find that these probability models predict the experimental effects of mutations with reasonable accuracy for a number of proteins, especially where the selective pressure, such as that exerted by antibiotics, is similar to the evolutionary pressure on the protein.
[ { "created": "Thu, 15 Oct 2015 16:34:12 GMT", "version": "v1" } ]
2017-01-18
[ [ "Hopf", "Thomas A.", "" ], [ "Ingraham", "John B.", "" ], [ "Poelwijk", "Frank J.", "" ], [ "Springer", "Michael", "" ], [ "Sander", "Chris", "" ], [ "Marks", "Debora S.", "" ] ]
Modern biomedicine is challenged to predict the effects of genetic variation. Systematic functional assays of point mutants of proteins have provided valuable empirical information, but vast regions of sequence space remain unexplored. Fortunately, the mutation-selection process of natural evolution has recorded rich information in the diversity of natural protein sequences. Here, building on probabilistic models for correlated amino-acid substitutions that have been successfully applied to determine the three-dimensional structures of proteins, we present a statistical approach for quantifying the contribution of residues and their interactions to protein function, using a statistical energy, the evolutionary Hamiltonian. We find that these probability models predict the experimental effects of mutations with reasonable accuracy for a number of proteins, especially where the selective pressure, such as that exerted by antibiotics, is similar to the evolutionary pressure on the protein.
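Statistical energies of this kind typically take the Potts form E(s) = sum_i h_i(s_i) + sum_{i&lt;j} J_ij(s_i, s_j), and a mutation effect is just the energy difference between mutant and wild type. The sketch below uses random fields and couplings purely for illustration; in practice h and J are inferred from a multiple sequence alignment.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q = 10, 21                       # toy sequence length, alphabet (20 aa + gap)
h = rng.normal(size=(L, q))         # fields (random here; fit to an MSA in practice)
J = rng.normal(scale=0.1, size=(L, L, q, q))
J = 0.5 * (J + J.transpose(1, 0, 3, 2))   # symmetrise the couplings

def energy(seq):
    """Potts energy E = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j)."""
    e = sum(h[i, a] for i, a in enumerate(seq))
    e += sum(J[i, j, seq[i], seq[j]] for i in range(L) for j in range(i + 1, L))
    return e

wt = rng.integers(q, size=L)        # "wild-type" sequence (random toy)
mut = wt.copy()
mut[3] = (wt[3] + 1) % q            # a point mutation at site 3
print("predicted mutation effect, Delta E:", energy(mut) - energy(wt))
```

Ranking candidate mutations by Delta E is then compared against experimental fitness measurements, which is the kind of evaluation the abstract reports.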
2007.08383
Wenhao Gao
Wenhao Gao, Sai Pooja Mahajan, Jeremias Sulam, and Jeffrey J. Gray
Deep Learning in Protein Structural Modeling and Design
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep learning is catalyzing a scientific revolution fueled by big data, accessible toolkits, and powerful computational resources, impacting many fields including protein structural modeling. Protein structural modeling, such as predicting structure from amino acid sequence and evolutionary information, designing proteins toward desirable functionality, or predicting properties or behavior of a protein, is critical to understand and engineer biological systems at the molecular level. In this review, we summarize the recent advances in applying deep learning techniques to tackle problems in protein structural modeling and design. We dissect the emerging approaches using deep learning techniques for protein structural modeling, and discuss advances and challenges that must be addressed. We argue for the central importance of structure, following the "sequence -> structure -> function" paradigm. This review is directed to help both computational biologists to gain familiarity with the deep learning methods applied in protein modeling, and computer scientists to gain perspective on the biologically meaningful problems that may benefit from deep learning techniques.
[ { "created": "Thu, 16 Jul 2020 14:59:38 GMT", "version": "v1" } ]
2020-07-17
[ [ "Gao", "Wenhao", "" ], [ "Mahajan", "Sai Pooja", "" ], [ "Sulam", "Jeremias", "" ], [ "Gray", "Jeffrey J.", "" ] ]
Deep learning is catalyzing a scientific revolution fueled by big data, accessible toolkits, and powerful computational resources, impacting many fields including protein structural modeling. Protein structural modeling, such as predicting structure from amino acid sequence and evolutionary information, designing proteins toward desirable functionality, or predicting properties or behavior of a protein, is critical to understand and engineer biological systems at the molecular level. In this review, we summarize the recent advances in applying deep learning techniques to tackle problems in protein structural modeling and design. We dissect the emerging approaches using deep learning techniques for protein structural modeling, and discuss advances and challenges that must be addressed. We argue for the central importance of structure, following the "sequence -> structure -> function" paradigm. This review is directed to help both computational biologists to gain familiarity with the deep learning methods applied in protein modeling, and computer scientists to gain perspective on the biologically meaningful problems that may benefit from deep learning techniques.
2404.01183
Xin Li
Xin Li
Positioning is All You Need
null
null
null
null
q-bio.NC nlin.AO
http://creativecommons.org/publicdomain/zero/1.0/
One can drive safely with a GPS without memorizing a world map (not to mention the dark regions that humans have never explored). Such a locality-based attention mechanism has a profound implication on our understanding of how the brain works. This paper refines the existing embodied cognition framework by turning the locality from a constraint to an advantage. Analogous to GPS-based navigation, positioning represents a computationally more efficient solution to flexible behaviors than reconstruction. This simple intuition implies that {\em positioning is all you need} to understand cognitive functions generated by hippocampal-neocortical systems. That is, the neocortex generates thousands of local maps whose indexing is maintained by the hippocampus. Geometrically, we present a simple manifold positioning framework to explain the principle of localized embodied cognition. The positioning operation implemented by the attention mechanism can be interpreted as a nonlinear projection linking the discovery of local subspace structure by the neocortex (a sensorimotor machine interacting with the world locally) for the navigation task in mind without discovering global manifold topology.
[ { "created": "Mon, 1 Apr 2024 15:35:41 GMT", "version": "v1" }, { "created": "Wed, 3 Apr 2024 17:32:32 GMT", "version": "v2" } ]
2024-04-04
[ [ "Li", "Xin", "" ] ]
One can drive safely with a GPS without memorizing a world map (not to mention the dark regions that humans have never explored). Such a locality-based attention mechanism has a profound implication on our understanding of how the brain works. This paper refines the existing embodied cognition framework by turning the locality from a constraint to an advantage. Analogous to GPS-based navigation, positioning represents a computationally more efficient solution to flexible behaviors than reconstruction. This simple intuition implies that {\em positioning is all you need} to understand cognitive functions generated by hippocampal-neocortical systems. That is, the neocortex generates thousands of local maps whose indexing is maintained by the hippocampus. Geometrically, we present a simple manifold positioning framework to explain the principle of localized embodied cognition. The positioning operation implemented by the attention mechanism can be interpreted as a nonlinear projection linking the discovery of local subspace structure by the neocortex (a sensorimotor machine interacting with the world locally) for the navigation task in mind without discovering global manifold topology.
2003.06343
Jesus Garrido
Sergio E. Galindo, Pablo Toharia, Oscar D. Robles, Eduardo Ros, Luis Pastor, Jes\'us A. Garrido
Simulation, visualization and analysis tools for pattern recognition assessment with spiking neuronal networks
null
null
10.1016/j.neucom.2020.02.114
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computational modeling is becoming a widely used methodology in modern neuroscience. However, as the complexity of the phenomena under study increases, the analysis of the results emerging from the simulations concomitantly becomes more challenging. In particular, the configuration and validation of brain circuits involving learning often require the processing of large numbers of action potentials and their comparison to the stimulation being presented to the input of the system. In this study we present a systematic workflow for the configuration of spiking-neuronal-network-based learning systems, including evolutionary algorithms for information transmission optimization, advanced visualization tools for the validation of the most suitable configuration, and customized scripts for the final quantitative evaluation of the learning capabilities. By integrating both grouped action potential information and stimulation-related events, the proposed visualization framework provides a qualitative assessment of the evolution of the learning process in the simulation under study. The proposed workflow has been used to study how receptive fields emerge in a network of inhibitory interneurons with excitatory and inhibitory spike-timing-dependent plasticity when it is exposed to repetitive and partially overlapping stimulation patterns. According to our results, the output population reliably detected the presence of the stimulation patterns, even when the fan-in ratio of the interneurons was considerably restricted.
[ { "created": "Fri, 13 Mar 2020 15:34:32 GMT", "version": "v1" } ]
2020-03-16
[ [ "Galindo", "Sergio E.", "" ], [ "Toharia", "Pablo", "" ], [ "Robles", "Oscar D.", "" ], [ "Ros", "Eduardo", "" ], [ "Pastor", "Luis", "" ], [ "Garrido", "Jesús A.", "" ] ]
Computational modeling is becoming a widely used methodology in modern neuroscience. However, as the complexity of the phenomena under study increases, the analysis of the results emerging from the simulations concomitantly becomes more challenging. In particular, the configuration and validation of brain circuits involving learning often require the processing of large numbers of action potentials and their comparison to the stimulation being presented to the input of the system. In this study we present a systematic workflow for the configuration of spiking-neuronal-network-based learning systems, including evolutionary algorithms for information transmission optimization, advanced visualization tools for the validation of the most suitable configuration, and customized scripts for the final quantitative evaluation of the learning capabilities. By integrating both grouped action potential information and stimulation-related events, the proposed visualization framework provides a qualitative assessment of the evolution of the learning process in the simulation under study. The proposed workflow has been used to study how receptive fields emerge in a network of inhibitory interneurons with excitatory and inhibitory spike-timing-dependent plasticity when it is exposed to repetitive and partially overlapping stimulation patterns. According to our results, the output population reliably detected the presence of the stimulation patterns, even when the fan-in ratio of the interneurons was considerably restricted.
2306.12360
Nathan Frey
Nathan C. Frey, Daniel Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi
Protein Discovery with Discrete Walk-Jump Sampling
ICLR 2024 oral presentation, top 1.2% of submissions; {ICLR 2023 Physics for Machine Learning, NeurIPS 2023 GenBio, MLCB 2023} Spotlight
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by/4.0/
We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising. Our Discrete Walk-Jump Sampling formalism combines the contrastive divergence training of an energy-based model and improved sample quality of a score-based model, while simplifying training and sampling by requiring only a single noise level. We evaluate the robustness of our approach on generative modeling of antibody proteins and introduce the distributional conformity score to benchmark protein generative models. By optimizing and sampling from our models for the proposed distributional conformity score, 97-100% of generated samples are successfully expressed and purified and 70% of functional designs show equal or improved binding affinity compared to known functional antibodies on the first attempt in a single round of laboratory experiments. We also report the first demonstration of long-run fast-mixing MCMC chains where diverse antibody protein classes are visited in a single MCMC chain.
[ { "created": "Thu, 8 Jun 2023 17:03:46 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 19:16:01 GMT", "version": "v2" } ]
2024-03-19
[ [ "Frey", "Nathan C.", "" ], [ "Berenberg", "Daniel", "" ], [ "Zadorozhny", "Karina", "" ], [ "Kleinhenz", "Joseph", "" ], [ "Lafrance-Vanasse", "Julien", "" ], [ "Hotzel", "Isidro", "" ], [ "Wu", "Yan", "" ...
We resolve difficulties in training and sampling from a discrete generative model by learning a smoothed energy function, sampling from the smoothed data manifold with Langevin Markov chain Monte Carlo (MCMC), and projecting back to the true data manifold with one-step denoising. Our Discrete Walk-Jump Sampling formalism combines the contrastive divergence training of an energy-based model and improved sample quality of a score-based model, while simplifying training and sampling by requiring only a single noise level. We evaluate the robustness of our approach on generative modeling of antibody proteins and introduce the distributional conformity score to benchmark protein generative models. By optimizing and sampling from our models for the proposed distributional conformity score, 97-100% of generated samples are successfully expressed and purified and 70% of functional designs show equal or improved binding affinity compared to known functional antibodies on the first attempt in a single round of laboratory experiments. We also report the first demonstration of long-run fast-mixing MCMC chains where diverse antibody protein classes are visited in a single MCMC chain.
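The walk-jump idea can be demonstrated in one dimension, where the score of the smoothed density is available in closed form for a Gaussian kernel around a tiny "dataset": walk with Langevin MCMC in the smoothed space, then jump back with one step of Tweedie denoising, x_hat = y + sigma^2 * score(y). Everything below (data, sigma, step size) is an illustrative assumption; the paper trains a network to supply the score for discrete protein sequences.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([-2.0, 2.0])        # toy "discrete" data (illustrative)
sigma, eps = 0.8, 0.05              # smoothing noise level, Langevin step size

def score(y):
    """Exact score of the sigma-smoothed empirical density (Gaussian KDE)."""
    w = np.exp(-(y - data) ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()
    return float(np.sum(w * (data - y))) / sigma ** 2

y, jumps = rng.normal(), []
for step in range(5001):
    # walk: Langevin MCMC on the smoothed density
    y += eps * score(y) + np.sqrt(2.0 * eps) * rng.normal()
    if step % 1000 == 0:
        # jump: one-step denoising via Tweedie's formula
        jumps.append(y + sigma ** 2 * score(y))
print(np.round(jumps, 2))
```

A single noise level sigma is all the sampler needs, which is the simplification the abstract emphasizes.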
2208.12305
Peter Thompson
Peter R. Thompson, Melodie Kunegel-Lion, Mark A. Lewis
Simulating how animals learn: a new modelling framework applied to the process of optimal foraging
6 figures, 1 table, 2 figures in the supplementary material
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Animal learning has interested ecologists and psychologists for over a century. Mathematical models that explain how animals store and recall information have gained attention recently. Central to this work is statistical decision theory (SDT), which relates information uptake in animals to Bayesian inference. SDT effectively explains many learning tasks in animals, but extending this theory to predict how animals will learn in changing environments still poses a challenge for ecologists. We addressed this shortcoming with a novel implementation of Bayesian Markov Chain Monte Carlo (MCMC) sampling to simulate how animals sample environmental information and learn as a result. We applied our framework to an individual-based model simulating complex foraging tasks encountered by wild animals. Simulated "animals" learned behavioral strategies that optimized foraging returns simply by following the principles of an MCMC sampler. In these simulations, behavioral plasticity was most conducive to efficient foraging in unpredictable and uncertain environments. Our model suggests that animals prioritize highly concentrated resources even when these resources are less available overall, in line with existing knowledge on optimal foraging and ideal free distribution theory. Our innovative computational modelling framework can be applied more widely to simulate the learning of many other tasks in animals and humans.
[ { "created": "Thu, 25 Aug 2022 19:02:18 GMT", "version": "v1" } ]
2022-08-29
[ [ "Thompson", "Peter R.", "" ], [ "Kunegel-Lion", "Melodie", "" ], [ "Lewis", "Mark A.", "" ] ]
Animal learning has interested ecologists and psychologists for over a century. Mathematical models that explain how animals store and recall information have gained attention recently. Central to this work is statistical decision theory (SDT), which relates information uptake in animals to Bayesian inference. SDT effectively explains many learning tasks in animals, but extending this theory to predict how animals will learn in changing environments still poses a challenge for ecologists. We addressed this shortcoming with a novel implementation of Bayesian Markov Chain Monte Carlo (MCMC) sampling to simulate how animals sample environmental information and learn as a result. We applied our framework to an individual-based model simulating complex foraging tasks encountered by wild animals. Simulated "animals" learned behavioral strategies that optimized foraging returns simply by following the principles of an MCMC sampler. In these simulations, behavioral plasticity was most conducive to efficient foraging in unpredictable and uncertain environments. Our model suggests that animals prioritize highly concentrated resources even when these resources are less available overall, in line with existing knowledge on optimal foraging and ideal free distribution theory. Our innovative computational modelling framework can be applied more widely to simulate the learning of many other tasks in animals and humans.
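In spirit, the "animal as MCMC sampler" idea reduces to a Metropolis rule over behavioral strategies, with moves accepted in proportion to foraging returns. A hedged one-dimensional sketch with a made-up return landscape, not the paper's individual-based model:

```python
import numpy as np

rng = np.random.default_rng(2)

def foraging_return(theta):
    # hypothetical returns over a 1-D strategy axis: a broad rich patch near
    # theta = 1.5 and a narrow mediocre one near theta = -1.0
    return np.exp(-(theta - 1.5) ** 2) + 0.5 * np.exp(-(theta + 1.0) ** 2 / 0.1)

theta, beta, step = 0.0, 4.0, 0.5   # beta sets how sharply returns drive choice
trace = []
for _ in range(20_000):
    prop = theta + step * rng.normal()            # try a perturbed strategy
    # Metropolis acceptance: better strategies are kept; worse ones are
    # still sometimes explored (behavioral plasticity)
    if rng.random() < np.exp(beta * (foraging_return(prop) - foraging_return(theta))):
        theta = prop
    trace.append(theta)
print("mean learned strategy:", round(float(np.mean(trace[5_000:])), 3))
```

Larger step sizes and smaller beta correspond to more plastic behavior, which pays off when the return landscape itself is changing.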
2207.09264
Jerome Charmet
Jing Yan, Jie Wang, Robert Dallmann, Renquan Lu, J\'er\^ome Charmet
Flow Rate Independent Multiscale Liquid Biopsy for Precision Oncology
19 pages, 5 figures (+ supplementary materials: 16 pages, 10 figures)
null
10.1021/acssensors.2c02577
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Immunoaffinity-based liquid biopsies of circulating tumor cells (CTCs) hold great promise for cancer management, but typically suffer from low throughput, relative complexity and post-processing limitations. Here we address these issues simultaneously by decoupling and independently optimizing the nano-, micro- and macro-scales of an enrichment device that is simple to fabricate and operate. Unlike other affinity-based devices, our scalable mesh approach enables optimum capture conditions at any flow rate, as demonstrated by constant capture efficiencies above 75% between 50 and 200 µL/min. The device achieved 96% sensitivity and 100% specificity when used to detect CTCs in the blood of 79 cancer patients and 20 healthy controls. We demonstrate its post-processing capacity with the identification of potential responders to immune checkpoint inhibition therapy and the detection of HER2-positive breast cancer. The results compare well with other assays, including clinical standards. This suggests that our approach, which overcomes major limitations associated with affinity-based liquid biopsies, could help improve cancer management.
[ { "created": "Tue, 19 Jul 2022 13:23:27 GMT", "version": "v1" }, { "created": "Mon, 14 Nov 2022 07:52:51 GMT", "version": "v2" } ]
2023-02-23
[ [ "Yan", "Jing", "" ], [ "Wang", "Jie", "" ], [ "Dallmann", "Robert", "" ], [ "Lu", "Renquan", "" ], [ "Charmet", "Jérôme", "" ] ]
Immunoaffinity-based liquid biopsies of circulating tumor cells (CTCs) hold great promise for cancer management, but typically suffer from low throughput, relative complexity and post-processing limitations. Here we address these issues simultaneously by decoupling and independently optimizing the nano-, micro- and macro-scales of an enrichment device that is simple to fabricate and operate. Unlike other affinity-based devices, our scalable mesh approach enables optimum capture conditions at any flow rate, as demonstrated by constant capture efficiencies above 75% between 50 and 200 µL/min. The device achieved 96% sensitivity and 100% specificity when used to detect CTCs in the blood of 79 cancer patients and 20 healthy controls. We demonstrate its post-processing capacity with the identification of potential responders to immune checkpoint inhibition therapy and the detection of HER2-positive breast cancer. The results compare well with other assays, including clinical standards. This suggests that our approach, which overcomes major limitations associated with affinity-based liquid biopsies, could help improve cancer management.
1111.6164
Pleuni Pennings
Pleuni S. Pennings
Standing genetic variation and the evolution of drug resistance in HIV
33 pages 6 figures
null
10.1371/journal.pcbi.1002527
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug resistance remains a major problem for the treatment of HIV. Resistance can occur due to mutations that were present before treatment starts or due to mutations that occur during treatment. The relative importance of these two sources is unknown. We study three different situations in which HIV drug resistance may evolve: starting triple-drug therapy, treatment with a single dose of nevirapine, and interruption of treatment. For each of these three cases good data are available from the literature, which allows us to estimate the probability that resistance evolves from standing genetic variation. Depending on the treatment, we find probabilities of the evolution of drug resistance due to standing genetic variation between 0 and 39%. For patients who start triple-drug combination therapy, we find that drug resistance evolves from standing genetic variation in approximately 6% of the patients. We use a population-dynamic and population-genetic model to understand the observations and to estimate important evolutionary parameters. We find that both the effective population size of the virus before treatment and the fitness of the resistant mutant during treatment are key parameters that determine the probability that resistance evolves from standing genetic variation. Importantly, clinical data indicate that both of these parameters can be manipulated by the kind of treatment that is used.
[ { "created": "Sat, 26 Nov 2011 15:10:10 GMT", "version": "v1" } ]
2015-06-03
[ [ "Pennings", "Pleuni S.", "" ] ]
Drug resistance remains a major problem for the treatment of HIV. Resistance can occur due to mutations that were present before treatment starts or due to mutations that occur during treatment. The relative importance of these two sources is unknown. We study three different situations in which HIV drug resistance may evolve: starting triple-drug therapy, treatment with a single dose of nevirapine, and interruption of treatment. For each of these three cases good data are available from the literature, which allows us to estimate the probability that resistance evolves from standing genetic variation. Depending on the treatment, we find probabilities of the evolution of drug resistance due to standing genetic variation between 0 and 39%. For patients who start triple-drug combination therapy, we find that drug resistance evolves from standing genetic variation in approximately 6% of the patients. We use a population-dynamic and population-genetic model to understand the observations and to estimate important evolutionary parameters. We find that both the effective population size of the virus before treatment and the fitness of the resistant mutant during treatment are key parameters that determine the probability that resistance evolves from standing genetic variation. Importantly, clinical data indicate that both of these parameters can be manipulated by the kind of treatment that is used.
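The order of magnitude of such probabilities follows from a standard back-of-the-envelope calculation: the expected number of resistant copies at mutation-selection balance, times the establishment probability of each copy under treatment. All parameter values below are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

# Illustrative parameter values (assumed, not taken from the paper):
Ne = 1e4         # effective viral population size before treatment
mu = 1e-5        # mutation rate to the resistance allele
cost = 0.05      # fitness cost of resistance before treatment
s_treat = 0.10   # selective advantage of the mutant during treatment

freq = mu / cost                    # mutation-selection balance frequency
n_copies = Ne * freq                # expected resistant copies at treatment start
p_establish = 2.0 * s_treat         # classic branching-process approximation
p_resistance = 1.0 - np.exp(-n_copies * p_establish)
print(f"P(resistance from standing variation) ~ {p_resistance:.2f}")
```

The formula makes the abstract's point directly: treatments that lower Ne before therapy or reduce the mutant's fitness during therapy shrink this probability.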
1802.00810
Tianwei Yue
Tianwei Yue, Yuanxin Wang, Longxiang Zhang, Chunming Gu, Haoru Xue, Wenping Wang, Qi Lyu, Yujie Dun
Deep Learning for Genomics: A Concise Overview
null
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Advancements in genomic research such as high-throughput sequencing techniques have driven modern genomic studies into "big data" disciplines. This data explosion is constantly challenging conventional methods used in genomics. In parallel with the urgent demand for robust algorithms, deep learning has succeeded in a variety of fields such as vision, speech, and text processing. Yet genomics entails unique challenges for deep learning, since we expect deep learning to provide a superhuman intelligence that explores beyond our knowledge to interpret the genome. A powerful deep learning model should rely on insightful utilization of task-specific knowledge. In this paper, we briefly discuss the strengths of different deep learning models from a genomic perspective so as to fit each particular task with a proper deep architecture, and remark on practical considerations of developing modern deep learning architectures for genomics. We also provide a concise review of deep learning applications in various aspects of genomic research, and point out potential opportunities and obstacles for future genomics applications.
[ { "created": "Fri, 2 Feb 2018 12:50:25 GMT", "version": "v1" }, { "created": "Tue, 8 May 2018 15:23:01 GMT", "version": "v2" }, { "created": "Mon, 3 Jul 2023 21:24:57 GMT", "version": "v3" }, { "created": "Wed, 4 Oct 2023 20:26:48 GMT", "version": "v4" } ]
2023-10-06
[ [ "Yue", "Tianwei", "" ], [ "Wang", "Yuanxin", "" ], [ "Zhang", "Longxiang", "" ], [ "Gu", "Chunming", "" ], [ "Xue", "Haoru", "" ], [ "Wang", "Wenping", "" ], [ "Lyu", "Qi", "" ], [ "Dun", "Yujie...
Advancements in genomic research such as high-throughput sequencing techniques have driven modern genomic studies into "big data" disciplines. This data explosion is constantly challenging conventional methods used in genomics. In parallel with the urgent demand for robust algorithms, deep learning has succeeded in a variety of fields such as vision, speech, and text processing. Yet genomics entails unique challenges to deep learning since we are expecting from deep learning a superhuman intelligence that explores beyond our knowledge to interpret the genome. A powerful deep learning model should rely on insightful utilization of task-specific knowledge. In this paper, we briefly discuss the strengths of different deep learning models from a genomic perspective so as to fit each particular task with a proper deep architecture, and remark on practical considerations of developing modern deep learning architectures for genomics. We also provide a concise review of deep learning applications in various aspects of genomic research, as well as pointing out potential opportunities and obstacles for future genomics applications.
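As a concrete companion to this overview, the usual entry point for deep learning on genomes is one-hot encoding of the nucleotide sequence. The minimal sketch below shows that representation only; it assumes nothing about any specific architecture discussed in the paper.

```python
import numpy as np

def one_hot_encode(seq):
    """Map a DNA string to a (len(seq), 4) one-hot matrix, the standard
    input representation for convolutional models of regulatory genomics."""
    idx = {base: i for i, base in enumerate("ACGT")}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in idx:                  # ambiguous bases (e.g. N) stay all-zero
            mat[pos, idx[base]] = 1.0
    return mat

x = one_hot_encode("ACGTN")
print(x)
```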
1210.4616
Bo Hu
Bo Hu, David A. Kessler, Wouter-Jan Rappel, and Herbert Levine
How input fluctuations reshape the dynamics of a biological switching system
7 pages, 4 figures, submitted to Physical Review E
null
10.1103/PhysRevE.86.061910
null
q-bio.MN physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a nonnegative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this new setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
[ { "created": "Wed, 17 Oct 2012 02:52:03 GMT", "version": "v1" } ]
2015-06-11
[ [ "Hu", "Bo", "" ], [ "Kessler", "David A.", "" ], [ "Rappel", "Wouter-Jan", "" ], [ "Levine", "Herbert", "" ] ]
An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a nonnegative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this new setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
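A rough simulation sketch of the setup this abstract describes, a two-state switch whose on-rate tracks a nonnegative mean-reverting input, is given below. The CIR-style discretization and all rate constants are assumptions for illustration, not the paper's parameters or analytical treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama for a nonnegative mean-reverting input s(t)
# (CIR-type form: reversion rate k, mean m, noise amplitude sigma)
k, m, sigma = 1.0, 2.0, 0.5
dt, T = 1e-3, 50.0
n = int(T / dt)

s = np.empty(n); s[0] = m
state = np.empty(n, dtype=int); state[0] = 0    # 0 = off, 1 = on
k_off = 1.0                                     # input-independent off-rate

for i in range(1, n):
    ds = k * (m - s[i-1]) * dt + sigma * np.sqrt(max(s[i-1], 0.0) * dt) * rng.normal()
    s[i] = max(s[i-1] + ds, 0.0)
    if state[i-1] == 0:
        # the on-rate is directly regulated by the instantaneous input
        state[i] = 1 if rng.random() < s[i-1] * dt else 0
    else:
        state[i] = 0 if rng.random() < k_off * dt else 1

print("fraction of time on:", state.mean())
```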
2304.02891
Prakash Chourasia
Sarwan Ali, Prakash Chourasia, Zahra Tayebi, Babatunde Bello, Murray Patterson
ViralVectors: Compact and Scalable Alignment-free Virome Feature Generation
24 pages, 5 figures, accepted to Springer Medical & Biological Engineering & Computing
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
The amount of sequencing data for SARS-CoV-2 is several orders of magnitude larger than that for any other virus. This will continue to grow geometrically for SARS-CoV-2, and other viruses, as many countries heavily finance genomic surveillance efforts. Hence, we need methods for processing large amounts of sequence data to allow for effective yet timely decision-making. Such data will come from heterogeneous sources: aligned, unaligned, or even unassembled raw nucleotide or amino acid sequencing reads pertaining to the whole genome or regions (e.g., spike) of interest. In this work, we propose \emph{ViralVectors}, a compact feature vector generation from virome sequencing data that allows effective downstream analysis. Such generation is based on \emph{minimizers}, a type of lightweight "signature" of a sequence, used traditionally in assembly and read mapping -- to our knowledge, the first use of minimizers in this way. We validate our approach on different types of sequencing data: (a) 2.5M SARS-CoV-2 spike sequences (to show scalability); (b) 3K Coronaviridae spike sequences (to show robustness to more genomic variability); and (c) 4K raw WGS reads sets taken from nasal-swab PCR tests (to show the ability to process unassembled reads). Our results show that ViralVectors outperforms current benchmarks in most classification and clustering tasks.
[ { "created": "Thu, 6 Apr 2023 06:46:17 GMT", "version": "v1" }, { "created": "Fri, 7 Apr 2023 11:58:23 GMT", "version": "v2" } ]
2023-04-10
[ [ "Ali", "Sarwan", "" ], [ "Chourasia", "Prakash", "" ], [ "Tayebi", "Zahra", "" ], [ "Bello", "Babatunde", "" ], [ "Patterson", "Murray", "" ] ]
The amount of sequencing data for SARS-CoV-2 is several orders of magnitude larger than that for any other virus. This will continue to grow geometrically for SARS-CoV-2, and other viruses, as many countries heavily finance genomic surveillance efforts. Hence, we need methods for processing large amounts of sequence data to allow for effective yet timely decision-making. Such data will come from heterogeneous sources: aligned, unaligned, or even unassembled raw nucleotide or amino acid sequencing reads pertaining to the whole genome or regions (e.g., spike) of interest. In this work, we propose \emph{ViralVectors}, a compact feature vector generation from virome sequencing data that allows effective downstream analysis. Such generation is based on \emph{minimizers}, a type of lightweight "signature" of a sequence, used traditionally in assembly and read mapping -- to our knowledge, the first use of minimizers in this way. We validate our approach on different types of sequencing data: (a) 2.5M SARS-CoV-2 spike sequences (to show scalability); (b) 3K Coronaviridae spike sequences (to show robustness to more genomic variability); and (c) 4K raw WGS reads sets taken from nasal-swab PCR tests (to show the ability to process unassembled reads). Our results show that ViralVectors outperforms current benchmarks in most classification and clustering tasks.
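The features above are built on minimizers; the textbook (w,k)-minimizer definition can be sketched in a few lines. How ViralVectors turns the selected k-mers into fixed-length vectors is not shown here, and the window size and k below are arbitrary choices.

```python
def minimizers(seq, k=5, w=8):
    """Return the set of (w,k)-minimizers of seq: in every window of w
    consecutive k-mers, keep the lexicographically smallest k-mer."""
    kmers = [seq[i:i+k] for i in range(len(seq) - k + 1)]
    chosen = set()
    for start in range(len(kmers) - w + 1):
        chosen.add(min(kmers[start:start+w]))
    return chosen

# the minimizer set is a compact "signature" of the full sequence
sig = minimizers("ACGTACGGTACCACGTTTGACGTA", k=5, w=8)
print(sorted(sig))
```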
0712.2068
Lalit Ponnala
Lalit Ponnala, Donald Bitzer, Anne Stomp, Mladen Vouk
A mechanistic model for +1 frameshifts in eubacteria
31 pages, 52 figures
null
null
null
q-bio.GN
null
This work applies the methods of signal processing and the concepts of control system design to model the maintenance and modulation of reading frame in the process of protein synthesis. The model shows how translational speed can modulate translational accuracy to accomplish programmed +1 frameshifts and could have implications for the regulation of translational efficiency. A series of free energy estimates were calculated from the ribosome's interaction with mRNA sequences during the process of translation elongation in eubacteria. A sinusoidal pattern of roughly constant phase was detected in these free energy signals. Signal phase was identified as a useful parameter for locating programmed +1 frameshifts encoded in bacterial genes for release factor 2. A displacement model was developed that captures the mechanism of frameshift based on the information content of the signal parameters and the relative abundance of tRNA in the bacterial cell. Results are presented using experimentally verified frameshift genes across eubacteria.
[ { "created": "Thu, 13 Dec 2007 01:02:37 GMT", "version": "v1" } ]
2007-12-14
[ [ "Ponnala", "Lalit", "" ], [ "Bitzer", "Donald", "" ], [ "Stomp", "Anne", "" ], [ "Vouk", "Mladen", "" ] ]
This work applies the methods of signal processing and the concepts of control system design to model the maintenance and modulation of reading frame in the process of protein synthesis. The model shows how translational speed can modulate translational accuracy to accomplish programmed +1 frameshifts and could have implications for the regulation of translational efficiency. A series of free energy estimates were calculated from the ribosome's interaction with mRNA sequences during the process of translation elongation in eubacteria. A sinusoidal pattern of roughly constant phase was detected in these free energy signals. Signal phase was identified as a useful parameter for locating programmed +1 frameshifts encoded in bacterial genes for release factor 2. A displacement model was developed that captures the mechanism of frameshift based on the information content of the signal parameters and the relative abundance of tRNA in the bacterial cell. Results are presented using experimentally verified frameshift genes across eubacteria.
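Since the method hinges on the phase of a roughly period-3 (codon-scale) sinusoid in the free-energy signal, a single-bin DFT at frequency 1/3 is one simple way to extract such a phase. This is an illustrative reading of the approach, not the authors' exact estimator.

```python
import cmath

def period3_phase(signal):
    """Phase (radians) of the period-3 component of a signal, via a
    single-bin discrete Fourier transform at frequency f = 1/3."""
    n = len(signal)
    bin_ = sum(x * cmath.exp(-2j * cmath.pi * i / 3) for i, x in enumerate(signal))
    return cmath.phase(bin_ / n)

# toy signal: a clean period-3 oscillation on top of a constant offset
sig = [1.0 + 0.5 * (i % 3 == 0) for i in range(60)]
print(period3_phase(sig))
```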
1207.4375
Bernard Ycart
Bernard Ycart
Fluctuation analysis with cell deaths
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classical Luria-Delbr\"uck model for fluctuation analysis is extended to the case where cells can either divide or die at the end of their generation time. This leads to a family of probability distributions generalizing the Luria-Delbr\"uck family, and depending on three parameters: the expected number of mutations, the relative fitness of normal cells compared to mutants, and the death probability of mutants. The probabilistic treatment is similar to that of the classical case; simulation and computing algorithms are provided. The estimation problem is discussed: if the death probability is known, the two other parameters can be reliably estimated. If the death probability is unknown, the model can be identified only for large samples.
[ { "created": "Wed, 18 Jul 2012 13:35:18 GMT", "version": "v1" }, { "created": "Wed, 29 May 2013 07:30:16 GMT", "version": "v2" } ]
2013-05-30
[ [ "Ycart", "Bernard", "" ] ]
The classical Luria-Delbr\"uck model for fluctuation analysis is extended to the case where cells can either divide or die at the end of their generation time. This leads to a family of probability distributions generalizing the Luria-Delbr\"uck family, and depending on three parameters: the expected number of mutations, the relative fitness of normal cells compared to mutants, and the death probability of mutants. The probabilistic treatment is similar to that of the classical case; simulation and computing algorithms are provided. The estimation problem is discussed: if the death probability is known, the two other parameters can be reliably estimated. If the death probability is unknown, the model can be identified only for large samples.
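A stripped-down, discrete-generation caricature of a fluctuation experiment with mutant cell death may make the extension concrete. The paper's model is richer (continuous generation times, relative fitness of normal cells versus mutants), and all values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def fluctuation_with_death(n_gen=25, mu=1e-8, delta=0.2):
    """One simulated culture. Normal cells double each generation; each
    division mutates with probability mu (Poisson approximation); each
    mutant cell then either divides (prob. 1-delta) or dies (prob. delta)."""
    normals, mutants = 1, 0
    for _ in range(n_gen):
        mutants = 2 * rng.binomial(mutants, 1.0 - delta)   # divide or die
        mutants += rng.poisson(mu * normals)               # new mutations
        normals *= 2
    return mutants

counts = np.array([fluctuation_with_death() for _ in range(1000)])
print("mean mutants:", counts.mean(), " P(no mutants):", (counts == 0).mean())
```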
2205.14939
Jinkui Zhao
Jinkui Zhao
Metabolic scaling is governed by Murray's network in animals and by hydraulic conductance and photosynthesis in plants
null
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
The prevailing theory for metabolic scaling is based on area-preserved, space-filling fractal vascular networks. However, it's known both theoretically and experimentally that animals' vascular systems obey Murray's cubic branching law. Area-preserved branching conflicts with energy minimization and hence the least-work principle. Additionally, while Kleiber's law is the dominant rule for both animals and plants, small animals are observed to follow the 2/3-power law, large animals have larger than 3/4 scaling exponents, and small plants have near-linear scaling behaviors. No known theory explains all the observations. Here, I show that animals' metabolism is determined by their Murray's vascular systems. For plants, the scaling is determined by the trunks' hydraulic conductance and the leaves' photosynthesis. Both analyses agree with data of various body sizes. Animals' scaling has a concave curvature while plants have a convex one. The empirical power laws are approximations within selected mass ranges. Generally, the 3/4-power law applies to animals of ~15 g to 10,000 kg and the 2/3-power law to those of ~1 g to 10 kg. For plants, the scaling exponent is 1 for small plants and decreases to 3/4 for those greater than ~10 kg.
[ { "created": "Mon, 30 May 2022 09:04:11 GMT", "version": "v1" } ]
2022-05-31
[ [ "Zhao", "Jinkui", "" ] ]
The prevailing theory for metabolic scaling is based on area-preserved, space-filling fractal vascular networks. However, it's known both theoretically and experimentally that animals' vascular systems obey Murray's cubic branching law. Area-preserved branching conflicts with energy minimization and hence the least-work principle. Additionally, while Kleiber's law is the dominant rule for both animals and plants, small animals are observed to follow the 2/3-power law, large animals have larger than 3/4 scaling exponents, and small plants have near-linear scaling behaviors. No known theory explains all the observations. Here, I show that animals' metabolism is determined by their Murray's vascular systems. For plants, the scaling is determined by the trunks' hydraulic conductance and the leaves' photosynthesis. Both analyses agree with data of various body sizes. Animals' scaling has a concave curvature while plants have a convex one. The empirical power laws are approximations within selected mass ranges. Generally, the 3/4-power law applies to animals of ~15 g to 10,000 kg and the 2/3-power law to those of ~1 g to 10 kg. For plants, the scaling exponent is 1 for small plants and decreases to 3/4 for those greater than ~10 kg.
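For reference, Murray's cubic branching law invoked in this abstract can be stated compactly (a standard result, quoted here rather than derived):

```latex
% Murray's cubic branching law for an optimal (least-work) vascular tree:
% at each branching node the cube of the parent vessel radius equals the
% sum of the cubes of the daughter radii.
\[
  r_p^{3} \;=\; \sum_{i} r_{d,i}^{3},
  \qquad \text{e.g. } r_0^{3} = r_1^{3} + r_2^{3} \text{ at a bifurcation.}
\]
```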
1707.04194
Peter Ashcroft
Peter Ashcroft and Markus G. Manz and Sebastian Bonhoeffer
Clonal Dominance and Transplantation Dynamics in Hematopoietic Stem Cell Compartments
46 pages, 11 figures (inclusive of SI)
PLoS Comput Biol 13(10): e1005803 (2017)
10.1371/journal.pcbi.1005803
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hematopoietic stem cells in mammals are known to reside mostly in the bone marrow, but also transiently passage in small numbers in the blood. Experimental findings have suggested that they exist in a dynamic equilibrium, continuously migrating between these two compartments. Here we construct an individual-based mathematical model of this process, which is parametrised using existing empirical findings from mice. This approach allows us to quantify the amount of migration between the bone marrow niches and the peripheral blood. We use this model to investigate clonal hematopoiesis, which is a significant risk factor for hematologic cancers. We also analyse the engraftment of donor stem cells into non-conditioned and conditioned hosts, quantifying the impact of different treatment scenarios. The simplicity of the model permits a thorough mathematical analysis, providing deeper insights into the dynamics of both the model and of the real-world system. We predict the time taken for mutant clones to expand within a host, as well as chimerism levels that can be expected following transplantation therapy, and the probability that a preconditioned host is reconstituted by donor cells.
[ { "created": "Thu, 13 Jul 2017 16:10:51 GMT", "version": "v1" }, { "created": "Wed, 11 Oct 2017 17:44:19 GMT", "version": "v2" } ]
2017-10-12
[ [ "Ashcroft", "Peter", "" ], [ "Manz", "Markus G.", "" ], [ "Bonhoeffer", "Sebastian", "" ] ]
Hematopoietic stem cells in mammals are known to reside mostly in the bone marrow, but also transiently passage in small numbers in the blood. Experimental findings have suggested that they exist in a dynamic equilibrium, continuously migrating between these two compartments. Here we construct an individual-based mathematical model of this process, which is parametrised using existing empirical findings from mice. This approach allows us to quantify the amount of migration between the bone marrow niches and the peripheral blood. We use this model to investigate clonal hematopoiesis, which is a significant risk factor for hematologic cancers. We also analyse the engraftment of donor stem cells into non-conditioned and conditioned hosts, quantifying the impact of different treatment scenarios. The simplicity of the model permits a thorough mathematical analysis, providing deeper insights into the dynamics of both the model and of the real-world system. We predict the time taken for mutant clones to expand within a host, as well as chimerism levels that can be expected following transplantation therapy, and the probability that a preconditioned host is reconstituted by donor cells.
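The core of such a model is stochastic hopping of stem cells between niches and blood; a minimal Gillespie sketch of that two-compartment exchange follows, with illustrative rates rather than the mouse-derived parameters the paper fits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal two-compartment sketch: HSCs hop between bone-marrow niches (N)
# and peripheral blood (B). Rates are illustrative placeholders.
k_out, k_in = 0.01, 10.0    # niche -> blood, blood -> niche (per cell per day)
N, B, t, t_end = 10_000, 1, 0.0, 30.0

while t < t_end:
    rates = np.array([k_out * N, k_in * B])
    total = rates.sum()
    t += rng.exponential(1.0 / total)      # Gillespie waiting time
    if rng.random() < rates[0] / total:
        N, B = N - 1, B + 1                # egress into the blood
    else:
        N, B = N + 1, B - 1                # re-engraftment into a niche

print("cells in blood at t_end:", B)
```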
0812.4467
Geoffrey Hoffmann PhD
Geoffrey W. Hoffmann
Recent Developments in Immune Network Theory including a concept for an HIV Vaccine
Predictions of the theory for experiments in mice and macaque monkeys have been added
null
null
null
q-bio.PE q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The symmetrical network theory is a framework for understanding the immune system that dates back to the mid-1970s. The symmetrical network theory is based on symmetrical stimulatory, inhibitory and killing interactions between clones that are specific for each other. Previous papers described roles for helper and suppressor T cells in regulating immune responses and a model for HIV pathogenesis. This paper extends the theory to account for regulatory T cells that include three types of suppressor cells called Ts1, Ts2 and Ts3, and two types of helper cells called Th1 and Th2. The theory leads to a concept for an HIV vaccine, namely a reagent commonly known as IVIG, to be administered in small amounts in an immunogenic form via an immunogenic route. Predictions are made for experiments in mice and macaque monkeys.
[ { "created": "Wed, 24 Dec 2008 01:10:21 GMT", "version": "v1" }, { "created": "Wed, 31 Dec 2008 16:59:10 GMT", "version": "v2" } ]
2008-12-31
[ [ "Hoffmann", "Geoffrey W.", "" ] ]
The symmetrical network theory is a framework for understanding the immune system that dates back to the mid-1970s. The symmetrical network theory is based on symmetrical stimulatory, inhibitory and killing interactions between clones that are specific for each other. Previous papers described roles for helper and suppressor T cells in regulating immune responses and a model for HIV pathogenesis. This paper extends the theory to account for regulatory T cells that include three types of suppressor cells called Ts1, Ts2 and Ts3, and two types of helper cells called Th1 and Th2. The theory leads to a concept for an HIV vaccine, namely a reagent commonly known as IVIG, to be administered in small amounts in an immunogenic form via an immunogenic route. Predictions are made for experiments in mice and macaque monkeys.
1701.08038
Qing Zhang
Qing Zhang, Federico Bassetti, Marco Gherardi, Marco Cosentino Lagomarsino
Cell-to-cell variability and robustness in S-phase duration from genome replication kinetics
null
Nucleic Acids Research (2017) 45 (14): 8190-8198
10.1093/nar/gkx556
null
q-bio.GN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genome replication, a key process for a cell, relies on stochastic initiation by replication origins, causing a variability of replication timing from cell to cell. While stochastic models of eukaryotic replication are widely available, the link between the key parameters and overall replication timing has not been addressed systematically. We use a combined analytical and computational approach to calculate how the positions and strengths of many origins lead to a given cell-to-cell variability of the total duration of the replication of a large region, a chromosome or the entire genome. Specifically, the total replication timing can be framed as an extreme-value problem, since it is due to the last region that replicates in each cell. Our calculations identify two regimes based on the spread between the characteristic completion times of all inter-origin regions of a genome. For widely different completion times, timing is set by the single specific region that is typically the last to replicate in all cells. Conversely, when the completion times of all regions are comparable, an extreme-value estimate shows that the cell-to-cell variability of genome replication timing has universal properties. Comparison with available data shows that the replication program of three yeast species falls in this extreme-value regime.
[ { "created": "Fri, 27 Jan 2017 12:47:59 GMT", "version": "v1" }, { "created": "Thu, 25 May 2017 03:10:35 GMT", "version": "v2" } ]
2017-08-23
[ [ "Zhang", "Qing", "" ], [ "Bassetti", "Federico", "" ], [ "Gherardi", "Marco", "" ], [ "Lagomarsino", "Marco Cosentino", "" ] ]
Genome replication, a key process for a cell, relies on stochastic initiation by replication origins, causing a variability of replication timing from cell to cell. While stochastic models of eukaryotic replication are widely available, the link between the key parameters and overall replication timing has not been addressed systematically. We use a combined analytical and computational approach to calculate how the positions and strengths of many origins lead to a given cell-to-cell variability of the total duration of the replication of a large region, a chromosome or the entire genome. Specifically, the total replication timing can be framed as an extreme-value problem, since it is due to the last region that replicates in each cell. Our calculations identify two regimes based on the spread between the characteristic completion times of all inter-origin regions of a genome. For widely different completion times, timing is set by the single specific region that is typically the last to replicate in all cells. Conversely, when the completion times of all regions are comparable, an extreme-value estimate shows that the cell-to-cell variability of genome replication timing has universal properties. Comparison with available data shows that the replication program of three yeast species falls in this extreme-value regime.
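The extreme-value framing is easy to reproduce numerically: draw a completion time per inter-origin region, take the per-cell maximum, and look at its spread across cells. The exponential completion-time law used below is a placeholder assumption, not the paper's per-region distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def total_replication_time(region_means, n_cells=10_000):
    """Per cell, each inter-origin region finishes at a random time around
    its characteristic mean (exponential placeholder); the genome is done
    when the *last* region finishes, i.e. an extreme-value problem."""
    times = rng.exponential(region_means, size=(n_cells, len(region_means)))
    return times.max(axis=1)

# comparable completion times across regions -> extreme-value regime
T = total_replication_time(np.full(200, 30.0))
print("mean completion:", T.mean(), " cell-to-cell sd:", T.std())
```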
1301.6854
Leonard Harris
Justin S. Hogg, Leonard A. Harris, Lori J. Stover, Niketh S. Nair, and James R. Faeder
Exact hybrid particle/population simulation of rule-based models of biochemical systems
Version accepted for publication. 19 pages, 11 figures, 2 tables, 1 suppl Dataset (provided as 4 PDF docs), 1 suppl Figure, 17 suppl Text files (1 main text, 16 model files), 1 suppl README. Additional material added to "Related work"
PLoS Comput. Biol. 10, e1003544 (2014)
10.1371/journal.pcbi.1003544
null
q-bio.QM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim.
[ { "created": "Tue, 29 Jan 2013 07:50:56 GMT", "version": "v1" }, { "created": "Fri, 8 Feb 2013 19:10:26 GMT", "version": "v2" }, { "created": "Fri, 8 Mar 2013 07:55:36 GMT", "version": "v3" }, { "created": "Sat, 9 Nov 2013 02:19:42 GMT", "version": "v4" }, { "crea...
2014-05-20
[ [ "Hogg", "Justin S.", "" ], [ "Harris", "Leonard A.", "" ], [ "Stover", "Lori J.", "" ], [ "Nair", "Niketh S.", "" ], [ "Faeder", "James R.", "" ] ]
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim.
2205.05285
Michael Watson
M. G. Watson, K. L. Chambers, M. R. Myerscough
A Lipid-Structured Model of Atherosclerotic Plaque Macrophages with Lipid-Dependent Kinetics
null
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Atherosclerotic plaques are fatty growths in artery walls that cause heart attacks and strokes. Plaque formation is orchestrated by macrophages that are recruited to the artery wall to consume and remove blood-derived lipids, such as low-density lipoprotein (LDL). Ineffective lipid removal, due to macrophage death and other factors, leads to the accumulation of lipid-loaded macrophages and formation of a necrotic core. Experimental observations suggest that macrophage functionality varies with the extent of lipid loading. However, little is known about the resultant influence on plaque fate. Extending work by Ford et al. (2019) and Chambers et al. (2022), we develop a plaque model in which macrophages are classified by their ingested lipid content and behave in a lipid-dependent manner. The model, a system of partial integro-differential equations, considers several macrophage behaviours. These include: recruitment to the artery wall; proliferation and apoptosis; ingestion of LDL, apoptotic cells and necrotic lipid; emigration from the artery wall; and necrosis of apoptotic cells. Here, we consider apoptosis, emigration and proliferation to be lipid-dependent. We model lipid-dependence in these behaviours with experimentally-informed functions of the internalised lipid load. Our results demonstrate that lipid-dependent macrophage behaviour can substantially alter plaque fate by changing both the total quantity of lipid in the plaque and the distribution of lipid between the live cells, dead cells and necrotic core. For lipid-dependent apoptosis and lipid-dependent emigration simulations, we find significant differences in outcomes for cases that ultimately converge on the same net rate of apoptosis or emigration.
[ { "created": "Wed, 11 May 2022 06:16:44 GMT", "version": "v1" } ]
2022-05-12
[ [ "Watson", "M. G.", "" ], [ "Chambers", "K. L.", "" ], [ "Myerscough", "M. R.", "" ] ]
Atherosclerotic plaques are fatty growths in artery walls that cause heart attacks and strokes. Plaque formation is orchestrated by macrophages that are recruited to the artery wall to consume and remove blood-derived lipids, such as low-density lipoprotein (LDL). Ineffective lipid removal, due to macrophage death and other factors, leads to the accumulation of lipid-loaded macrophages and formation of a necrotic core. Experimental observations suggest that macrophage functionality varies with the extent of lipid loading. However, little is known about the resultant influence on plaque fate. Extending work by Ford et al. (2019) and Chambers et al. (2022), we develop a plaque model in which macrophages are classified by their ingested lipid content and behave in a lipid-dependent manner. The model, a system of partial integro-differential equations, considers several macrophage behaviours. These include: recruitment to the artery wall; proliferation and apoptosis; ingestion of LDL, apoptotic cells and necrotic lipid; emigration from the artery wall; and necrosis of apoptotic cells. Here, we consider apoptosis, emigration and proliferation to be lipid-dependent. We model lipid-dependence in these behaviours with experimentally-informed functions of the internalised lipid load. Our results demonstrate that lipid-dependent macrophage behaviour can substantially alter plaque fate by changing both the total quantity of lipid in the plaque and the distribution of lipid between the live cells, dead cells and necrotic core. For lipid-dependent apoptosis and lipid-dependent emigration simulations, we find significant differences in outcomes for cases that ultimately converge on the same net rate of apoptosis or emigration.
1204.6015
Michael Deem
Keyao Pan and Michael W. Deem
A Multi-Scale Model for Correlation in B Cell VDJ Usage of Zebrafish
29 pages, 10 figures, 1 table
Phys. Biol. 8 (2011) 055006
10.1088/1478-3975/8/5/055006
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The zebrafish (\emph{Danio rerio}) is one of the model animals for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to those in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a "microscopic" random energy model. This generalized $NK$ model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principles calculation of the probability, $p$, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability $p$ increases with the B cell population size and the B cell selection intensity. The probability $p$ decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those from experiment.
[ { "created": "Thu, 26 Apr 2012 19:09:22 GMT", "version": "v1" } ]
2015-06-04
[ [ "Pan", "Keyao", "" ], [ "Deem", "Michael W.", "" ] ]
The zebrafish (\emph{Danio rerio}) is one of the model animals for the study of immunology because the dynamics in the adaptive immune system of zebrafish are similar to those in higher animals. In this work, we built a multi-scale model to simulate the dynamics of B cells in the primary and secondary immune responses of zebrafish. We use this model to explain the reported correlation between VDJ usage of B cell repertoires in individual zebrafish. We use a delay ordinary differential equation (ODE) system to model the immune responses in the 6-month lifespan of a zebrafish. This mean field theory gives the number of high affinity B cells as a function of time during an infection. The sequences of those B cells are then taken from a distribution calculated by a "microscopic" random energy model. This generalized $NK$ model shows that mature B cells specific to one antigen largely possess a single VDJ recombination. The model allows first-principles calculation of the probability, $p$, that two zebrafish responding to the same antigen will select the same VDJ recombination. This probability $p$ increases with the B cell population size and the B cell selection intensity. The probability $p$ decreases with the B cell hypermutation rate. The multi-scale model predicts correlations in the immune system of the zebrafish that are highly similar to those from experiment.
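The quantity $p$ defined here, the chance that two fish pick the same VDJ recombination, reduces in the simplest reading (independent draws from a shared selection distribution q, an assumption for illustration) to the collision probability of q:

```python
import numpy as np

def match_probability(q):
    """Probability that two independent draws from the same selection
    distribution q pick the same VDJ recombination: sum_i q_i^2."""
    q = np.asarray(q, dtype=float)
    q = q / q.sum()
    return float((q ** 2).sum())

# stronger selection concentrates q on few recombinations and raises p
print(match_probability([0.7, 0.1, 0.1, 0.1]))      # 0.52
print(match_probability([0.25, 0.25, 0.25, 0.25]))  # 0.25
```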
1208.0248
Henry Arellano
Adela V\'asquez, Henry Arellano
Structure, aerial biomass and stored carbon in the forests of the South and Northwest of C\'ordoba
null
Colombia Diversidad Bi\'otica XII, La regi\'on Caribe de Colombia (Instituto de Ciencias Naturales, Universidad Nacional de Colombia, Bogot\'a, 2012) 923-961
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We estimated the aerial biomass and stored carbon in twelve types of forests in the department of C\'ordoba with annual rainfall ranging from 3000 mm (super-humid climates) to 1300 mm (semi-humid climate). Biomass was estimated based on structural aspects of the vegetation (diameter at breast height, total height, and wood specific weight). We tested nine allometric equations for tropical forests available in the literature and selected those proposed by Chave et al (2005) that are specific for humid and dry forests. Carbon content in trees was measured in four tissues (stem, branch, bark, and leaves) through an automated dry combustion method, which estimates the percentage of carbon in a sample of known weight.
[ { "created": "Tue, 31 Jul 2012 18:02:46 GMT", "version": "v1" } ]
2012-08-03
[ [ "Vásquez", "Adela", "" ], [ "Arellano", "Henry", "" ] ]
We estimated the aerial biomass and stored carbon in twelve types of forests in the department of C\'ordoba with annual rainfall ranging from 3000 mm (super-humid climates) to 1300 mm (semi-humid climate). Biomass was estimated based on structural aspects of the vegetation (diameter at breast height, total height, and wood specific weight). We tested nine allometric equations for tropical forests available in the literature and selected those proposed by Chave et al (2005) that are specific for humid and dry forests. Carbon content in trees was measured in four tissues (stem, branch, bark, and leaves) through an automated dry combustion method, which estimates the percentage of carbon in a sample of known weight.
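One of the Chave et al. (2005) allometries, the moist-forest model using diameter, height and wood density, can be applied as below. Treat the coefficient and the 50% carbon fraction as commonly quoted defaults to be verified against the original source, not as the exact choices of this study.

```python
def chave2005_moist_agb(dbh_cm, height_m, wood_density):
    """Above-ground biomass (kg) from a Chave et al. (2005) moist-forest
    allometry using DBH D (cm), total height H (m) and wood specific
    gravity rho (g/cm^3): AGB = 0.0509 * rho * D**2 * H. Coefficient
    quoted from the literature; check against the original before reuse."""
    return 0.0509 * wood_density * dbh_cm ** 2 * height_m

agb_kg = chave2005_moist_agb(dbh_cm=35.0, height_m=22.0, wood_density=0.6)
carbon_kg = 0.5 * agb_kg   # ~50% carbon fraction, a common default assumption
print(f"AGB ~ {agb_kg:.0f} kg, stored carbon ~ {carbon_kg:.0f} kg")
```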
1302.0784
Anna Carbone
Anna Carbone
Information Measures for Long-Range Correlated Sequences: the Case of the 24 Human Chromosome Sequences
Scientific Reports (2013)
Scientific Reports vol. 3, Article number: 2721 (2013)
10.1038/srep02721
null
q-bio.GN physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new approach to estimate the Shannon entropy of a long-range correlated sequence is proposed. The entropy is written as the sum of two terms corresponding respectively to power-law (\emph{ordered}) and exponentially (\emph{disordered}) distributed blocks (clusters). The approach is illustrated on the 24 human chromosome sequences by taking the nucleotide composition as the relevant information to be encoded/decoded. Interestingly, the nucleotide composition of the \emph{ordered} clusters is found, on the average, comparable to the one of the whole analyzed sequence, while that of the \emph{disordered} clusters fluctuates. From the information theory standpoint, this means that the power-law correlated clusters carry the same information of the whole analysed sequence. Furthermore, the fluctuations of the nucleotide composition of the disordered clusters are linked to relevant biological properties, such as segmental duplications and gene density.
[ { "created": "Mon, 4 Feb 2013 18:21:27 GMT", "version": "v1" }, { "created": "Thu, 5 Sep 2013 16:03:37 GMT", "version": "v2" } ]
2013-10-30
[ [ "Carbone", "Anna", "" ] ]
A new approach to estimate the Shannon entropy of a long-range correlated sequence is proposed. The entropy is written as the sum of two terms corresponding respectively to power-law (\emph{ordered}) and exponentially (\emph{disordered}) distributed blocks (clusters). The approach is illustrated on the 24 human chromosome sequences by taking the nucleotide composition as the relevant information to be encoded/decoded. Interestingly, the nucleotide composition of the \emph{ordered} clusters is found, on the average, comparable to the one of the whole analyzed sequence, while that of the \emph{disordered} clusters fluctuates. From the information theory standpoint, this means that the power-law correlated clusters carry the same information of the whole analysed sequence. Furthermore, the fluctuations of the nucleotide composition of the disordered clusters are linked to relevant biological properties, such as segmental duplications and gene density.
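The elementary building block of such an estimator, the Shannon entropy of a block's nucleotide composition, is shown below. The paper's actual decomposition into power-law and exponentially distributed clusters is not reproduced here.

```python
from collections import Counter
from math import log2

def composition_entropy(seq):
    """Shannon entropy (bits per nucleotide) of the base composition."""
    counts = Counter(seq)
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

whole = "ACGT" * 250 + "AAAA" * 50          # toy sequence with an AT-rich tail
print(composition_entropy(whole))           # entropy of the whole sequence
print(composition_entropy(whole[:1000]))    # entropy of one block/cluster
```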
2203.14967
Martin Frasch
Mingju Cao, Shikha Kuthiala, Keven Jason Jean, Hai Lun Liu, Marc Courchesne, Karen Nygard, Patrick Burns, Andr\'e Desrochers, Gilles Fecteau, Christophe Faure, and Martin G. Frasch
The vagus nerve regulates immunometabolic homeostasis in the ovine fetus near term: impact on terminal ileum
null
null
null
null
q-bio.TO q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
The contribution of the vagus nerve to inflammation and glucosensing in the fetus is not understood. We hypothesized that vagotomy (Vx) will trigger a rise in systemic glucose levels and that this will be enhanced during systemic and organ-specific inflammation. Efferent vagus nerve stimulation (VNS) should reverse this phenotype. Near-term fetal sheep (n=57) were surgically prepared with vascular catheters and ECG electrodes as control and treatment groups (lipopolysaccharide (LPS), Vx+LPS, Vx+LPS+selective efferent VNS). Fetal arterial blood samples were drawn for 7 days to profile inflammation (IL-6), insulin, blood gas and metabolism (glucose). At 54 h, a necropsy was performed; in terminal ileum macrophages, CD11c (M1 phenotype) immunofluorescence was quantified to detect inflammation. Across the treatment groups, blood gas and cardiovascular changes indicated mild septicemia. At 3 h, IL-6 peaked in the LPS group; that peak was decreased in the Vx+LPS400 group and doubled in the Vx+LPS800 group; the efferent VNS sped up the reduction of the inflammatory response profile over 54 h. M1 macrophage activity was increased in the LPS and Vx+LPS800 groups only. Glucose and insulin levels in the Vx+LPS group were respectively 1.3-fold and 2.3-fold higher vs. control at 3 h, and the efferent VNS normalized glucose levels. Complete withdrawal of vagal innervation results in a 72 h delayed onset of a sustained increase in glucose levels lasting at least 54 h, and intermittent hyperinsulinemia. Under conditions of moderate fetal inflammation, this is related to higher levels of gut inflammation; the efferent VNS reduces the systemic inflammatory response as well as restoring both glucose levels and terminal ileum inflammation, but not insulin levels. Our findings reveal a novel regulatory, hormetic, role of the vagus nerve in the immunometabolic response to endotoxin in near-term fetuses.
[ { "created": "Sun, 27 Mar 2022 19:54:08 GMT", "version": "v1" }, { "created": "Sun, 1 May 2022 20:27:58 GMT", "version": "v2" }, { "created": "Thu, 23 Jun 2022 05:24:41 GMT", "version": "v3" } ]
2022-06-24
[ [ "Cao", "Mingju", "" ], [ "Kuthiala", "Shikha", "" ], [ "Jean", "Keven Jason", "" ], [ "Liu", "Hai Lun", "" ], [ "Courchesne", "Marc", "" ], [ "Nygard", "Karen", "" ], [ "Burns", "Patrick", "" ], [ "Desrochers", "André", "" ], [ "Fecteau", "Gilles", "" ], [ "Faure", "Christophe", "" ], [ "Frasch", "Martin G.", "" ] ]
The contribution of the vagus nerve to inflammation and glucosensing in the fetus is not understood. We hypothesized that vagotomy (Vx) will trigger a rise in systemic glucose levels and that this will be enhanced during systemic and organ-specific inflammation. Efferent vagus nerve stimulation (VNS) should reverse this phenotype. Near-term fetal sheep (n=57) were surgically prepared with vascular catheters and ECG electrodes as control and treatment groups (lipopolysaccharide (LPS), Vx+LPS, Vx+LPS+selective efferent VNS). Fetal arterial blood samples were drawn for 7 days to profile inflammation (IL-6), insulin, blood gas and metabolism (glucose). At 54 h, a necropsy was performed; in terminal ileum macrophages, CD11c (M1 phenotype) immunofluorescence was quantified to detect inflammation. Across the treatment groups, blood gas and cardiovascular changes indicated mild septicemia. At 3 h, IL-6 peaked in the LPS group; that peak was decreased in the Vx+LPS400 group and doubled in the Vx+LPS800 group; the efferent VNS sped up the reduction of the inflammatory response profile over 54 h. M1 macrophage activity was increased in the LPS and Vx+LPS800 groups only. Glucose and insulin levels in the Vx+LPS group were respectively 1.3-fold and 2.3-fold higher vs. control at 3 h, and the efferent VNS normalized glucose levels. Complete withdrawal of vagal innervation results in a 72 h delayed onset of a sustained increase in glucose levels lasting at least 54 h, and intermittent hyperinsulinemia. Under conditions of moderate fetal inflammation, this is related to higher levels of gut inflammation; the efferent VNS reduces the systemic inflammatory response as well as restoring both glucose levels and terminal ileum inflammation, but not insulin levels. Our findings reveal a novel regulatory, hormetic, role of the vagus nerve in the immunometabolic response to endotoxin in near-term fetuses.
1906.01464
Dr. Biplab Chattopadhyay
Nirmalendu Hui and Biplab Chattopadhyay
Studies on Bone-mass Formation within a Theoretical Model
14 pages, 11 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:1307.5833
IOSR Journal of Pharmacy and Biological Sciences, Volume 9, Issue 4 Ver. II (Jul - Aug 2014), PP 07-20
10.9790/3008-09420720
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bone-mass formation in humans is examined to understand the underlying dynamics, with an eye on healing of bone fractures and non-unions via non-invasive pathways. Three biological cell types, osteoblasts, osteoclasts and osteocytes, are important players in creating new bone or osseous matter, a process in which quite a few hormones, proteins and minerals play an indispensable supporting role. Taking the populations of the three mentioned cell types as variables, we frame a theoretical model represented as a set of time-differential equations. These equations imitate the dynamic process of bone-matter creation. A high osteocyte value together with moderate values of osteoblasts and osteoclasts, all at the asymptotic scale, implies creation of new bone matter in our model. The model is studied both analytically and numerically. Some important results are highlighted and relevant predictions are made that could be put to future experimental test.
[ { "created": "Mon, 3 Jun 2019 15:49:45 GMT", "version": "v1" } ]
2019-06-05
[ [ "Hui", "Nirmalendu", "" ], [ "Chattopadhyay", "Biplab", "" ] ]
Bone-mass formation in humans is examined to understand the underlying dynamics, with an eye on healing of bone fractures and non-unions via non-invasive pathways. Three biological cell types, osteoblasts, osteoclasts and osteocytes, are important players in creating new bone or osseous matter, a process in which quite a few hormones, proteins and minerals play an indispensable supporting role. Taking the populations of the three mentioned cell types as variables, we frame a theoretical model represented as a set of time-differential equations. These equations imitate the dynamic process of bone-matter creation. A high osteocyte value together with moderate values of osteoblasts and osteoclasts, all at the asymptotic scale, implies creation of new bone matter in our model. The model is studied both analytically and numerically. Some important results are highlighted and relevant predictions are made that could be put to future experimental test.
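A generic three-population ODE sketch in the spirit of this abstract (osteoblasts, osteoclasts, osteocytes) follows. The interaction terms and parameters are invented placeholders, since the paper's specific equations are not reproduced here.

```python
from scipy.integrate import solve_ivp

# Illustrative three-variable system: osteoblasts b, osteoclasts c,
# osteocytes y. Terms and rates are placeholder assumptions.
def rhs(t, u, a1=0.5, a2=0.4, a3=0.3, d1=0.2, d2=0.2, d3=0.05):
    b, c, y = u
    db = a1 * b * (1 - b) - d1 * b * c    # osteoblast growth, loss to clasts
    dc = a2 * c * (1 - c) - d2 * b * c    # osteoclast growth, inhibition
    dy = a3 * b - d3 * y                  # osteocytes differentiate from blasts
    return [db, dc, dy]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.1, 0.0])
# high asymptotic y with moderate b, c would signal bone-matter creation
print("asymptotic (b, c, y):", sol.y[:, -1])
```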
q-bio/0404021
Keiji Miura
Keiji Miura and Masato Okada
Pulse-coupled resonate-and-fire models
15 pages, 8 figures
null
10.1103/PhysRevE.70.021914
null
q-bio.NC
null
We analyze two pulse-coupled resonate-and-fire neurons. Numerical simulation reveals that an anti-phase state is an attractor of this model. We can analytically explain the stability of anti-phase states by means of a return map of firing times, which we propose in this paper. The resultant stability condition turns out to be quite simple. The phase diagram based on our theory shows that there are two types of anti-phase states. One of these cannot be seen in coupled integrate-and-fire models and is peculiar to resonate-and-fire models. The results of our theory coincide with those of numerical simulations.
[ { "created": "Mon, 19 Apr 2004 19:06:39 GMT", "version": "v1" } ]
2009-11-10
[ [ "Miura", "Keiji", "" ], [ "Okada", "Masato", "" ] ]
We analyze two pulse-coupled resonate-and-fire neurons. Numerical simulation reveals that an anti-phase state is an attractor of this model. We can analytically explain the stability of anti-phase states by means of a return map of firing times, which we propose in this paper. The resultant stability condition turns out to be quite simple. The phase diagram based on our theory shows that there are two types of anti-phase states. One of these cannot be seen in coupled integrate-and-fire models and is peculiar to resonate-and-fire models. The results of our theory coincide with those of numerical simulations.
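A direct simulation sketch of two pulse-coupled resonate-and-fire units, using the standard complex-variable form dz/dt = (b + iw)z with a threshold on Im(z), is given below. The coupling, reset and parameter values are illustrative, and the paper's return-map analysis is not reproduced.

```python
import numpy as np

# Resonate-and-fire unit: complex state z obeys dz/dt = (b + i*w) z;
# a spike is emitted when Im(z) crosses a threshold, then z is reset.
# Coupling strength, reset value and parameters are illustrative.
b, w, thresh, reset, eps = 0.05, 2.0, 1.0, 0.2j, 0.4
dt, T = 1e-3, 100.0
z = np.array([0.9j, 0.3j], dtype=complex)   # two units, different phases
spikes = [[], []]

for step in range(int(T / dt)):
    z += dt * (b + 1j * w) * z
    for i in range(2):
        if z[i].imag > thresh:
            spikes[i].append(step * dt)
            z[i] = reset
            z[1 - i] += eps * 1j            # pulse kicks the partner

# inspect the late firing times to see whether an anti-phase lock emerges
print("last spikes:", spikes[0][-3:], spikes[1][-3:])
```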
2201.10958
Louxin Zhang
Gary Goh, Michael Fuchs, Louxin Zhang
Two Results about the Sackin and Colless Indices for Phylogenetic Trees and Their Shapes
10 pages, 1 figure
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The Sackin and Colless indices are two widely-used metrics for measuring the balance of trees and for testing evolutionary models in phylogenetics. This short paper contributes two results about the Sackin and Colless indices of trees. One result is the asymptotic analysis of the expected Sackin and Colless indices of a tree shape (which are full binary rooted unlabelled trees) under the uniform model where tree shapes are sampled with equal probability. Another is a short elementary proof of the closed formula for the expected Sackin index of phylogenetic trees (which are full binary rooted trees with leaves being labelled with taxa) under the uniform model.
[ { "created": "Wed, 26 Jan 2022 14:21:56 GMT", "version": "v1" }, { "created": "Tue, 19 Jul 2022 02:33:19 GMT", "version": "v2" } ]
2022-07-20
[ [ "Goh", "Gary", "" ], [ "Fuchs", "Michael", "" ], [ "Zhang", "Louxin", "" ] ]
The Sackin and Colless indices are two widely-used metrics for measuring the balance of trees and for testing evolutionary models in phylogenetics. This short paper contributes two results about the Sackin and Colless indices of trees. One result is the asymptotic analysis of the expected Sackin and Colless indices of a tree shape (which are full binary rooted unlabelled trees) under the uniform model where tree shapes are sampled with equal probability. Another is a short elementary proof of the closed formula for the expected Sackin index of phylogenetic trees (which are full binary rooted trees with leaves being labelled with taxa) under the uniform model.
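Both indices are quick to compute from a tree; a minimal sketch with trees as nested tuples follows (the definitions are standard: Sackin sums leaf depths, Colless sums subtree-size imbalances over internal nodes).

```python
# A rooted binary tree as nested tuples: a leaf is a string, an internal
# node is a pair (left, right).
def sackin(tree, depth=0):
    if isinstance(tree, str):
        return depth
    return sackin(tree[0], depth + 1) + sackin(tree[1], depth + 1)

def leaves(tree):
    return 1 if isinstance(tree, str) else leaves(tree[0]) + leaves(tree[1])

def colless(tree):
    if isinstance(tree, str):
        return 0
    return (abs(leaves(tree[0]) - leaves(tree[1]))
            + colless(tree[0]) + colless(tree[1]))

caterpillar = ((("a", "b"), "c"), "d")    # maximally unbalanced on 4 leaves
balanced = (("a", "b"), ("c", "d"))
print(sackin(caterpillar), colless(caterpillar))   # 9 3
print(sackin(balanced), colless(balanced))         # 8 0
```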
2303.14238
Praful Gagrani
Praful Gagrani, Victor Blanco, Eric Smith, David Baum
Polyhedral geometry and combinatorics of an autocatalytic ecosystem
36 pages, 17 figures, 7 tables
null
null
null
q-bio.MN math.DS
http://creativecommons.org/licenses/by/4.0/
Developing a mathematical understanding of autocatalysis in reaction networks has both theoretical and practical implications. We review definitions of autocatalytic networks and prove some properties for minimal autocatalytic subnetworks (MASs). We show that it is possible to classify MASs in equivalence classes, and develop mathematical results about their behavior. We also provide linear-programming algorithms to exhaustively enumerate them and a scheme to visualize their polyhedral geometry and combinatorics. We then define cluster chemical reaction networks, a framework for coarse-graining real chemical reactions with positive integer conservation laws. We find that the size of the list of minimal autocatalytic subnetworks in a maximally connected cluster chemical reaction network with one conservation law grows exponentially in the number of species. We end our discussion with open questions concerning an ecosystem of autocatalytic subnetworks and multidisciplinary opportunities for future investigation.
[ { "created": "Fri, 24 Mar 2023 19:00:19 GMT", "version": "v1" }, { "created": "Tue, 28 Mar 2023 00:34:58 GMT", "version": "v2" }, { "created": "Wed, 27 Sep 2023 15:03:21 GMT", "version": "v3" }, { "created": "Fri, 10 Nov 2023 21:42:23 GMT", "version": "v4" } ]
2023-11-14
[ [ "Gagrani", "Praful", "" ], [ "Blanco", "Victor", "" ], [ "Smith", "Eric", "" ], [ "Baum", "David", "" ] ]
Developing a mathematical understanding of autocatalysis in reaction networks has both theoretical and practical implications. We review definitions of autocatalytic networks and prove some properties for minimal autocatalytic subnetworks (MASs). We show that it is possible to classify MASs in equivalence classes, and develop mathematical results about their behavior. We also provide linear-programming algorithms to exhaustively enumerate them and a scheme to visualize their polyhedral geometry and combinatorics. We then define cluster chemical reaction networks, a framework for coarse-graining real chemical reactions with positive integer conservation laws. We find that the size of the list of minimal autocatalytic subnetworks in a maximally connected cluster chemical reaction network with one conservation law grows exponentially in the number of species. We end our discussion with open questions concerning an ecosystem of autocatalytic subnetworks and multidisciplinary opportunities for future investigation.
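One common LP-style criterion in this area asks whether some nonnegative flux strictly net-produces every species of a candidate subnetwork. The sketch below implements that check with scipy; it is a simplification of, not a substitute for, the paper's definitions of minimal autocatalytic subnetworks.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_producible(S):
    """LP sketch: is there a flux v >= 0 with (S v)_i >= 1 for every
    species? (Scaling makes '>= 1' equivalent to strict positivity.)
    S is the stoichiometric matrix, species x reactions."""
    n_species, n_reactions = S.shape
    res = linprog(c=np.ones(n_reactions),       # any feasible v will do
                  A_ub=-S, b_ub=-np.ones(n_species),
                  bounds=[(0, None)] * n_reactions,
                  method="highs")
    return res.success, (res.x if res.success else None)

# toy autocatalytic loop: A + B -> 2B and food -> A (food row omitted)
S = np.array([[-1, 1],
              [ 1, 0]])
print(strictly_producible(S))
```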
q-bio/0605011
Chao Tang
Yuping Zhang, Minping Qian, Qi Ouyang, Minghua Deng, Fangting Li, Chao Tang
Stochastic Model of Yeast Cell Cycle Network
14 pages, 4 figures
Physica D 219 (2006) 35-39
10.1016/j.physd.2006.05.009
null
q-bio.MN
null
Biological functions in living cells are controlled by protein interaction and genetic networks. These molecular networks should be dynamically stable against various fluctuations which are inevitable in the living world. In this paper, we propose and study a stochastic model for the network regulating the cell cycle of the budding yeast. The stochasticity in the model is controlled by a temperature-like parameter $\beta$. Our simulation results show that both the biological stationary state and the biological pathway are stable for a wide range of "temperature". There is, however, a sharp transition-like behavior at $\beta_c$, below which the dynamics is dominated by noise. We also define a pseudo energy landscape for the system in which the biological pathway can be seen as a deep valley.
[ { "created": "Sun, 7 May 2006 03:42:09 GMT", "version": "v1" } ]
2009-11-13
[ [ "Zhang", "Yuping", "" ], [ "Qian", "Minping", "" ], [ "Ouyang", "Qi", "" ], [ "Deng", "Minghua", "" ], [ "Li", "Fangting", "" ], [ "Tang", "Chao", "" ] ]
Biological functions in living cells are controlled by protein interaction and genetic networks. These molecular networks should be dynamically stable against various fluctuations which are inevitable in the living world. In this paper, we propose and study a stochastic model for the network regulating the cell cycle of the budding yeast. The stochasticity in the model is controlled by a temperature-like parameter $\beta$. Our simulation results show that both the biological stationary state and the biological pathway are stable for a wide range of "temperature". There is, however, a sharp transition-like behavior at $\beta_c$, below which the dynamics is dominated by noise. We also define a pseudo energy landscape for the system in which the biological pathway can be seen as a deep valley.
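A Glauber-type stochastic Boolean update with inverse temperature beta captures the flavor of such a model: deterministic threshold dynamics as beta grows, noise-dominated dynamics for small beta. The wiring below is a toy matrix, not the yeast network, and the handling of zero input (here, a fair coin) is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def update(state, A, beta):
    """One synchronous stochastic update of a Boolean network: node i
    turns on with probability 1/(1 + exp(-2*beta*h_i)), h = A @ state.
    beta -> infinity recovers deterministic threshold dynamics."""
    h = A @ state
    p_on = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    return (rng.random(len(state)) < p_on).astype(int)

# toy 3-node network (illustrative couplings, not the yeast wiring)
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])
s = np.array([1, 0, 0])
for _ in range(10):
    s = update(s, A, beta=5.0)
print("final state:", s)
```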
1402.0749
Katy Rubin
Katy J. Rubin, Katherine Lawler, Peter Sollich, Tony Ng
Memory effects in biochemical networks as the natural counterpart of extrinsic noise
null
null
null
null
q-bio.QM physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that in the generic situation where a biological network, e.g. a protein interaction network, is in fact a subnetwork embedded in a larger "bulk" network, the presence of the bulk causes not just extrinsic noise but also memory effects. This means that the dynamics of the subnetwork will depend not only on its present state, but also its past. We use projection techniques to get explicit expressions for the memory functions that encode such memory effects, for generic protein interaction networks involving binary and unary reactions such as complex formation and phosphorylation, respectively. Remarkably, in the limit of low intrinsic copy-number noise such expressions can be obtained even for nonlinear dependences on the past. We illustrate the method with examples from a protein interaction network around epidermal growth factor receptor (EGFR), which is relevant to cancer signalling. These examples demonstrate that inclusion of memory terms is not only important conceptually but also leads to substantially higher quantitative accuracy in the predicted subnetwork dynamics.
[ { "created": "Tue, 4 Feb 2014 14:42:51 GMT", "version": "v1" }, { "created": "Mon, 16 Jun 2014 12:44:40 GMT", "version": "v2" } ]
2014-06-17
[ [ "Rubin", "Katy J.", "" ], [ "Lawler", "Katherine", "" ], [ "Sollich", "Peter", "" ], [ "Ng", "Tony", "" ] ]
We show that in the generic situation where a biological network, e.g. a protein interaction network, is in fact a subnetwork embedded in a larger "bulk" network, the presence of the bulk causes not just extrinsic noise but also memory effects. This means that the dynamics of the subnetwork will depend not only on its present state, but also its past. We use projection techniques to get explicit expressions for the memory functions that encode such memory effects, for generic protein interaction networks involving binary and unary reactions such as complex formation and phosphorylation, respectively. Remarkably, in the limit of low intrinsic copy-number noise such expressions can be obtained even for nonlinear dependences on the past. We illustrate the method with examples from a protein interaction network around epidermal growth factor receptor (EGFR), which is relevant to cancer signalling. These examples demonstrate that inclusion of memory terms is not only important conceptually but also leads to substantially higher quantitative accuracy in the predicted subnetwork dynamics.
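Schematically, projection methods of this kind close the subnetwork dynamics with a memory integral over the subnetwork's own past; a linearized sketch of that structure (notation assumed for illustration; the paper also derives nonlinear memory terms) reads:

```latex
% Linearized sketch of projected subnetwork dynamics: the bulk is
% eliminated in favor of a memory integral (x^s: subnetwork variables;
% M^ss: memory-function matrix; r(t): random-force / extrinsic-noise term).
\[
  \dot{x}^{s}(t) \;=\; L^{ss} x^{s}(t)
  \;+\; \int_{0}^{t} M^{ss}(t-t')\, x^{s}(t')\,\mathrm{d}t'
  \;+\; r(t).
\]
```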
1611.05805
Hector Banos
Hector Ba\~nos, Nathaniel Bushek, Ruth Davidson, Elizabeth Gross, Pamela E. Harris, Robert Krone, Colby Long, Allen Stewart, and Robert Walker
Phylogenetic trees
null
J. Softw. Alg. Geom. 11 (2021) 1-7
10.2140/jsag.2021.11.1
null
q-bio.PE math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.
[ { "created": "Thu, 17 Nov 2016 18:09:22 GMT", "version": "v1" } ]
2021-01-27
[ [ "Baños", "Hector", "" ], [ "Bushek", "Nathaniel", "" ], [ "Davidson", "Ruth", "" ], [ "Gross", "Elizabeth", "" ], [ "Harris", "Pamela E.", "" ], [ "Krone", "Robert", "" ], [ "Long", "Colby", "" ], [ ...
We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.
1105.2767
Brian Williams Dr
Brian G. Williams
How important is the acute phase in HIV epidemiology?
One pdf file. The units on the x-axis of Figure 1 were 'years' but should have been 'days'. This has been corrected
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At present, the best hope for eliminating HIV transmission and bringing the epidemic of HIV to an end lies in the use of anti-retroviral therapy for prevention, a strategy referred to variously as Test and Treat (T&T), Treatment as Prevention (TasP) or Treatment centred Prevention (TcP). One of the key objections to the use of T&T to stop transmission concerns the role of the acute phase in HIV transmission. The acute phase of infection lasts for one to three months after HIV-seroconversion, during which time the risk of transmission may be ten to twenty times higher, per sexual encounter, than it is during the chronic phase, which lasts for the next ten years. Regular testing for HIV is more likely to miss people who are in the acute phase than in the chronic phase, and it is essential to determine the extent to which this might compromise the impact of T&T on HIV transmission. Here we show that 1) provided the initial epidemic doubling time is about 1.0 to 1.5 years, as observed in South Africa, random testing with an average test interval of one year will still bring the epidemic close to elimination even if the acute phase lasts for 3 months during which time transmission is 26 times higher than in the chronic phase; 2) testing people regularly at yearly intervals is significantly more effective than testing them randomly; 3) testing people regularly at six-monthly intervals and starting them on ART immediately will almost certainly guarantee elimination. In general, elevated transmission during the acute phase seems unlikely to change predictions of the impact of treatment on transmission significantly. Other factors, in particular age structure, the structure of sexual networks and variation in set-point viral load, are likely to be more important and should be given priority in further analyses.
[ { "created": "Fri, 13 May 2011 16:19:49 GMT", "version": "v1" }, { "created": "Thu, 5 Feb 2015 17:44:04 GMT", "version": "v2" } ]
2015-02-06
[ [ "Williams", "Brian G.", "" ] ]
At present, the best hope for eliminating HIV transmission and bringing the epidemic of HIV to an end lies in the use of anti-retroviral therapy for prevention, a strategy referred to variously as Test and Treat (T&T), Treatment as Prevention (TasP) or Treatment centred Prevention (TcP). One of the key objections to the use of T&T to stop transmission concerns the role of the acute phase in HIV transmission. The acute phase of infection lasts for one to three months after HIV-seroconversion, during which time the risk of transmission may be ten to twenty times higher, per sexual encounter, than it is during the chronic phase, which lasts for the next ten years. Regular testing for HIV is more likely to miss people who are in the acute phase than in the chronic phase, and it is essential to determine the extent to which this might compromise the impact of T&T on HIV transmission. Here we show that 1) provided the initial epidemic doubling time is about 1.0 to 1.5 years, as observed in South Africa, random testing with an average test interval of one year will still bring the epidemic close to elimination even if the acute phase lasts for 3 months during which time transmission is 26 times higher than in the chronic phase; 2) testing people regularly at yearly intervals is significantly more effective than testing them randomly; 3) testing people regularly at six-monthly intervals and starting them on ART immediately will almost certainly guarantee elimination. In general, elevated transmission during the acute phase seems unlikely to change predictions of the impact of treatment on transmission significantly. Other factors, in particular age structure, the structure of sexual networks and variation in set-point viral load, are likely to be more important and should be given priority in further analyses.
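The abstract's own numbers (an acute phase of about 3 months at 26 times the chronic per-time transmission rate, against a roughly 10-year chronic phase) imply a simple static estimate of the acute-phase share of lifetime transmission. A back-of-envelope sketch; it deliberately ignores testing, treatment and network structure.

# Fraction of lifetime transmission attributable to the acute phase,
# using the illustrative durations and relative infectiousness quoted above.
d_acute, d_chronic = 0.25, 10.0   # durations in years
rel_acute = 26.0                  # per-time transmission relative to chronic

acute = rel_acute * d_acute       # 6.5 "chronic-equivalent" years
chronic = 1.0 * d_chronic
frac = acute / (acute + chronic)
print(f"acute-phase share of transmission: {frac:.1%}")   # about 39%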
1711.01423
I\'nigo Arandia-Romero
I\~nigo Arandia-Romero, Ramon Nogueira, Gabriela Mochol, Rub\'en Moreno-Bote
What can neuronal populations tell us about cognition?
21 pages, 4 figures
Current Opinion in Neurobiology, Volume 46, October 2017, Pages 48-57
10.1016/j.conb.2017.07.008
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, it is possible to record the activity of hundreds of cells at the same time in behaving animals. However, these data are often treated and analyzed as if they consisted of many independently recorded neurons. How can neuronal populations be uniquely used to learn about cognition? We describe recent work that shows that populations of simultaneously recorded neurons are fundamental to understanding the basis of decision-making, including processes such as ongoing deliberations and decision confidence, which generally fall outside the reach of single-cell analysis. Thus, neuronal population data allow addressing novel questions, but they also come with challenges that remain unsolved.
[ { "created": "Sat, 4 Nov 2017 10:45:14 GMT", "version": "v1" } ]
2017-11-07
[ [ "Arandia-Romero", "Iñigo", "" ], [ "Nogueira", "Ramon", "" ], [ "Mochol", "Gabriela", "" ], [ "Moreno-Bote", "Rubén", "" ] ]
Nowadays, it is possible to record the activity of hundreds of cells at the same time in behaving animals. However, these data are often treated and analyzed as if they consisted of many independently recorded neurons. How can neuronal populations be uniquely used to learn about cognition? We describe recent work that shows that populations of simultaneously recorded neurons are fundamental to understanding the basis of decision-making, including processes such as ongoing deliberations and decision confidence, which generally fall outside the reach of single-cell analysis. Thus, neuronal population data allow addressing novel questions, but they also come with challenges that remain unsolved.
1503.02032
Sebastian Kmiecik
Mateusz Kurcinski, Michal Jamroz, Maciej Blaszczyk, Andrzej Kolinski and Sebastian Kmiecik
CABS-dock web server for the flexible docking of peptides to proteins without prior knowledge of the binding site
null
Nucleic Acids Research, 43 (W1): W419-W424, 2015
10.1093/nar/gkv456
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-peptide interactions play a key role in cell functions. Their structural characterization, though challenging, is important for the discovery of new drugs. The CABS-dock web server provides an interface for modeling protein-peptide interactions using a highly efficient protocol for the flexible docking of peptides to proteins. While other docking algorithms require pre-defined localization of the binding site, CABS-dock does not require such knowledge. Given a protein receptor structure and a peptide sequence (and starting from random conformations and positions of the peptide), CABS-dock performs a simulation search for the binding site, allowing full flexibility of the peptide and small fluctuations of the receptor backbone. This protocol was extensively tested on the largest dataset of non-redundant protein-peptide interactions available to date (including bound and unbound docking cases). For over 80% of bound and unbound data set cases, we obtained models with high or medium accuracy (sufficient for practical applications). Additionally, as optional features, CABS-dock can exclude user-selected binding modes from the docking search or increase the level of flexibility for chosen receptor fragments. CABS-dock is freely available as a web server at http://biocomp.chem.uw.edu.pl/CABSdock
[ { "created": "Fri, 6 Mar 2015 18:40:11 GMT", "version": "v1" } ]
2015-07-08
[ [ "Kurcinski", "Mateusz", "" ], [ "Jamroz", "Michal", "" ], [ "Blaszczyk", "Maciej", "" ], [ "Kolinski", "Andrzej", "" ], [ "Kmiecik", "Sebastian", "" ] ]
Protein-peptide interactions play a key role in cell functions. Their structural characterization, though challenging, is important for the discovery of new drugs. The CABS-dock web server provides an interface for modeling protein-peptide interactions using a highly efficient protocol for the flexible docking of peptides to proteins. While other docking algorithms require pre-defined localization of the binding site, CABS-dock does not require such knowledge. Given a protein receptor structure and a peptide sequence (and starting from random conformations and positions of the peptide), CABS-dock performs a simulation search for the binding site, allowing full flexibility of the peptide and small fluctuations of the receptor backbone. This protocol was extensively tested on the largest dataset of non-redundant protein-peptide interactions available to date (including bound and unbound docking cases). For over 80% of bound and unbound data set cases, we obtained models with high or medium accuracy (sufficient for practical applications). Additionally, as optional features, CABS-dock can exclude user-selected binding modes from the docking search or increase the level of flexibility for chosen receptor fragments. CABS-dock is freely available as a web server at http://biocomp.chem.uw.edu.pl/CABSdock
0811.0203
Max Souza
Fabio A. C. C. Chalub, Max O. Souza
From discrete to continuous evolution models: a unifying approach to drift-diffusion and replicator dynamics
18 pages, 3 figures
Theor. Pop. Biol., 76 (4), 268--277 (2009)
10.1016/j.tpb.2009.08.006
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the large population limit of the Moran process, assuming weak selection, and for different scalings. Depending on the particular choice of scalings, we obtain a continuous model that may highlight genetic drift (neutral evolution) or natural selection; for one precise scaling, both effects are present. For the scalings that take genetic drift into account, the continuous model is given by a singular diffusion equation, together with two conservation laws that are already present at the discrete level. For scalings that take into account only natural selection, we obtain a hyperbolic singular equation that embeds the Replicator Dynamics and satisfies only one conservation law. The derivation is made in two steps: a formal one, where the candidate limit model is obtained, and a rigorous one, where convergence of the probability density is proved. Additional results on the fixation probabilities are also presented.
[ { "created": "Mon, 3 Nov 2008 01:13:15 GMT", "version": "v1" } ]
2013-01-21
[ [ "Chalub", "Fabio A. C. C.", "" ], [ "Souza", "Max O.", "" ] ]
We study the large population limit of the Moran process, assuming weak selection, and for different scalings. Depending on the particular choice of scalings, we obtain a continuous model that may highlight genetic drift (neutral evolution) or natural selection; for one precise scaling, both effects are present. For the scalings that take genetic drift into account, the continuous model is given by a singular diffusion equation, together with two conservation laws that are already present at the discrete level. For scalings that take into account only natural selection, we obtain a hyperbolic singular equation that embeds the Replicator Dynamics and satisfies only one conservation law. The derivation is made in two steps: a formal one, where the candidate limit model is obtained, and a rigorous one, where convergence of the probability density is proved. Additional results on the fixation probabilities are also presented.
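A minimal sketch of the discrete process whose large-population limit is studied above: the Moran process with weak selection, with the simulated fixation probability of a single mutant checked against the standard closed-form expression. Population size, selection coefficient and trial count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)

def moran_fixation(N=50, s=0.05, trials=4000):
    """Estimate the fixation probability of a single mutant of relative
    fitness 1 + s under the Moran birth-death process."""
    fixed = 0
    for _ in range(trials):
        i = 1                                   # current number of mutants
        while 0 < i < N:
            p_rep_mut = (1 + s) * i / ((1 + s) * i + (N - i))
            rep_mut = rng.random() < p_rep_mut  # reproducer is a mutant?
            die_mut = rng.random() < i / N      # uniformly chosen death is a mutant?
            if rep_mut and not die_mut:
                i += 1
            elif die_mut and not rep_mut:
                i -= 1
        fixed += i == N
    return fixed / trials

r, N = 1.05, 50
analytic = (1 - 1 / r) / (1 - r ** -N)          # exact Moran fixation probability
print(f"simulated {moran_fixation():.4f} vs exact {analytic:.4f}")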
2405.10488
Logan Thrasher Collins
Logan Thrasher Collins, Randal Koene
Comparative prospects of imaging methods for whole-brain mammalian connectomics
See page 10 after references for Supplemental Information
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by-sa/4.0/
Mammalian whole-brain connectomes are a crucial ingredient for holistic understanding of brain function. Imaging these connectomes at sufficient resolution to densely reconstruct cellular morphology and synapses represents a longstanding goal in neuroscience. Although the technologies needed to reconstruct whole-brain connectomes have not yet reached full maturity, they are advancing rapidly enough that the mouse brain might be within reach in the near future. Human connectomes remain a more distant goal. Here, we quantitatively compare existing and emerging imaging technologies that have potential to enable whole-brain mammalian connectomics. We perform calculations on electron microscopy (EM) techniques and expansion microscopy coupled with light-sheet fluorescence microscopy (ExLSFM) methods. We consider techniques from the literature that have sufficiently high resolution to identify all synapses and sufficiently high speed to be relevant for whole mammalian brains. Each imaging modality comes with benefits and drawbacks, so we suggest that attacking the problem through multiple approaches could yield the best outcomes. We offer this analysis as a resource for those considering how to organize efforts towards imaging whole-brain mammalian connectomes.
[ { "created": "Fri, 17 May 2024 01:26:22 GMT", "version": "v1" } ]
2024-05-20
[ [ "Collins", "Logan Thrasher", "" ], [ "Koene", "Randal", "" ] ]
Mammalian whole-brain connectomes are a crucial ingredient for holistic understanding of brain function. Imaging these connectomes at sufficient resolution to densely reconstruct cellular morphology and synapses represents a longstanding goal in neuroscience. Although the technologies needed to reconstruct whole-brain connectomes have not yet reached full maturity, they are advancing rapidly enough that the mouse brain might be within reach in the near future. Human connectomes remain a more distant goal. Here, we quantitatively compare existing and emerging imaging technologies that have potential to enable whole-brain mammalian connectomics. We perform calculations on electron microscopy (EM) techniques and expansion microscopy coupled with light-sheet fluorescence microscopy (ExLSFM) methods. We consider techniques from the literature that have sufficiently high resolution to identify all synapses and sufficiently high speed to be relevant for whole mammalian brains. Each imaging modality comes with benefits and drawbacks, so we suggest that attacking the problem through multiple approaches could yield the best outcomes. We offer this analysis as a resource for those considering how to organize efforts towards imaging whole-brain mammalian connectomes.
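A back-of-envelope imaging-time estimate of the kind the paper's calculations perform. All numbers below are hypothetical placeholders rather than the paper's values.

# Rough time to image a mouse-brain-sized volume at synaptic resolution.
mouse_brain_mm3 = 500.0                         # approximate volume
voxel_nm = 10.0                                 # isotropic voxel size
voxel_rate_hz = 1e8                             # voxels imaged per second
voxels = mouse_brain_mm3 * 1e18 / voxel_nm**3   # 1 mm^3 = 1e18 nm^3
seconds = voxels / voxel_rate_hz
print(f"{voxels:.2e} voxels -> {seconds / (86400 * 365):.0f} years on one system")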
2007.12688
Peter Green
Peter J. Green, Julia Mortera and Lourdes Prieto
Casework applications of probabilistic genotyping methods for DNA mixtures that allow relationships between contributors
12 pages, 11 tables; new version has additional analysis of first case study, and appendix about methodology
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In both criminal cases and civil cases there is an increasing demand for the analysis of DNA mixtures involving relationships. The goal might be, for example, to identify the contributors to a DNA mixture where the donors may be related, or to infer the relationship between individuals based on a DNA mixture. This paper applies a recent approach to modelling and computation for DNA mixtures involving contributors with arbitrarily complex relationships to two real cases from the Spanish Forensic Police.
[ { "created": "Fri, 24 Jul 2020 10:46:57 GMT", "version": "v1" }, { "created": "Sun, 10 Jan 2021 18:25:46 GMT", "version": "v2" } ]
2021-01-12
[ [ "Green", "Peter J.", "" ], [ "Mortera", "Julia", "" ], [ "Prieto", "Lourdes", "" ] ]
In both criminal cases and civil cases there is an increasing demand for the analysis of DNA mixtures involving relationships. The goal might be, for example, to identify the contributors to a DNA mixture where the donors may be related, or to infer the relationship between individuals based on a DNA mixture. This paper applies a recent approach to modelling and computation for DNA mixtures involving contributors with arbitrarily complex relationships to two real cases from the Spanish Forensic Police.
2212.13951
Alina Glaubitz
Alina Glaubitz, Feng Fu
Population heterogeneity in vaccine coverage impacts epidemic thresholds and bifurcation dynamics
14 pages, 8 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population heterogeneity, especially in individuals' contact networks, plays an important role in transmission dynamics of infectious diseases. For vaccine-preventable diseases, outstanding issues like vaccine hesitancy and availability of vaccines further lead to nonuniform coverage among groups, not to mention the efficacy of vaccines and the mixing pattern varying from one group to another. As the ongoing COVID-19 pandemic transitions to endemicity, it is of interest and significance to understand the impact of the aforementioned population heterogeneity on the emergence and persistence of epidemics. Here we analyze epidemic thresholds and characterize bifurcation dynamics by accounting for heterogeneity caused by group-dependent characteristics, including vaccination rate and efficacy as well as disease transmissibility. Our analysis shows that increases in the difference in vaccination coverage among groups can allow multiple equilibria of disease burden to exist even if the overall basic reproductive ratio is below one (also known as backward bifurcation). The presence of other heterogeneity factors such as differences in vaccine efficacy, transmission, mixing pattern, and group size can each exhibit subtle impacts on bifurcation. We find that heterogeneity in vaccine efficacy can undermine the condition for backward bifurcations whereas homophily tends to aggravate disease endemicity. Our results have practical implications for improving public health efforts by addressing the role of population heterogeneity in the spread and control of diseases.
[ { "created": "Wed, 28 Dec 2022 16:53:26 GMT", "version": "v1" } ]
2022-12-29
[ [ "Glaubitz", "Alina", "" ], [ "Fu", "Feng", "" ] ]
Population heterogeneity, especially in individuals' contact networks, plays an important role in transmission dynamics of infectious diseases. For vaccine-preventable diseases, outstanding issues like vaccine hesitancy and availability of vaccines further lead to nonuniform coverage among groups, not to mention the efficacy of vaccines and the mixing pattern varying from one group to another. As the ongoing COVID-19 pandemic transitions to endemicity, it is of interest and significance to understand the impact of the aforementioned population heterogeneity on the emergence and persistence of epidemics. Here we analyze epidemic thresholds and characterize bifurcation dynamics by accounting for heterogeneity caused by group-dependent characteristics, including vaccination rate and efficacy as well as disease transmissibility. Our analysis shows that increases in the difference in vaccination coverage among groups can allow multiple equilibria of disease burden to exist even if the overall basic reproductive ratio is below one (also known as backward bifurcation). The presence of other heterogeneity factors such as differences in vaccine efficacy, transmission, mixing pattern, and group size can each exhibit subtle impacts on bifurcation. We find that heterogeneity in vaccine efficacy can undermine the condition for backward bifurcations whereas homophily tends to aggravate disease endemicity. Our results have practical implications for improving public health efforts by addressing the role of population heterogeneity in the spread and control of diseases.
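A sketch of how group-dependent vaccination enters an epidemic threshold, assuming a simple two-group SIR structure with leaky vaccines: the effective reproduction number is the spectral radius of the next-generation matrix. The contact matrix and all rates below are hypothetical.

import numpy as np

beta, gamma = 0.3, 0.2         # transmission and recovery rates (hypothetical)
C = np.array([[8.0, 2.0],      # C[i, j]: daily contacts of a group-i member
              [2.0, 6.0]])     # with group j (hypothetical)
n = np.array([0.5, 0.5])       # group population fractions
v = np.array([0.9, 0.3])       # heterogeneous vaccine coverage
eff = 0.8                      # vaccine efficacy (leaky)

s = n * (1.0 - eff * v)        # effective susceptible fraction per group
# Next-generation matrix: K[i, j] = infections in group i caused by one
# infectious individual in group j over its infectious period.
K = (beta / gamma) * s[:, None] * C / n[None, :]
R_eff = max(abs(np.linalg.eigvals(K)))
print(f"effective reproduction number: {R_eff:.2f}")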
q-bio/0607019
Carolyn Berger
Carolyn M. Berger, Xiaopeng Zhao, David G. Schaeffer, Hana M. Dobrovolny, Wanda Krassowska and Daniel J. Gauthier
Evidence for an unfolded border-collision bifurcation in paced cardiac tissue
null
null
null
null
q-bio.TO
null
We investigate, both experimentally and theoretically, the bifurcation to alternans in heart tissue. Previously, this phenomenon has been modeled either as a smooth or as a border-collision period-doubling bifurcation. Using a new experimental technique, we find a hybrid behavior: very close to the bifurcation point the dynamics are smooth-like, whereas further away they are border-collision-like. This behavior is captured by a new type of model, called an unfolded border-collision bifurcation.
[ { "created": "Thu, 13 Jul 2006 22:20:04 GMT", "version": "v1" } ]
2007-05-23
[ [ "Berger", "Carolyn M.", "" ], [ "Zhao", "Xiaopeng", "" ], [ "Schaeffer", "David G.", "" ], [ "Dobrovolny", "Hana M.", "" ], [ "Krassowska", "Wanda", "" ], [ "Gauthier", "Daniel J.", "" ] ]
We investigate, both experimentally and theoretically, the bifurcation to alternans in heart tissue. Previously, this phenomenon has been modeled either as a smooth or as a border-collision period-doubling bifurcation. Using a new experimental technique, we find a hybrid behavior: very close to the bifurcation point the dynamics are smooth-like, whereas further away they are border-collision-like. This behavior is captured by a new type of model, called an unfolded border-collision bifurcation.
2211.09058
David Beers
David Beers, Heather A. Harrington, and Alain Goriely
Stability of topological descriptors for neuronal morphology
11 pages, 4 figures
null
null
null
q-bio.NC math.AT
http://creativecommons.org/licenses/by/4.0/
The topological morphology descriptor of a neuron is a multiset of intervals associated to the shape of the neuron represented as a tree. In practice, topological morphology descriptors are vectorized using persistence images, which can help classify and characterize the morphology of broad groups of neurons. We study the stability of topological morphology descriptors under small changes to neuronal morphology. We show that the persistence diagram arising from the topological morphology descriptor of a neuron is stable, with respect to the 1-Wasserstein distance, against a range of perturbations to the tree. These results guarantee that persistence images of topological morphology descriptors are stable against the same set of perturbations and are therefore reliable.
[ { "created": "Wed, 16 Nov 2022 17:20:48 GMT", "version": "v1" } ]
2022-11-17
[ [ "Beers", "David", "" ], [ "Harrington", "Heather A.", "" ], [ "Goriely", "Alain", "" ] ]
The topological morphology descriptor of a neuron is a multiset of intervals associated to the shape of the neuron represented as a tree. In practice, topological morphology descriptors are vectorized using persistence images, which can help classify and characterize the morphology of broad groups of neurons. We study the stability of topological morphology descriptors under small changes to neuronal morphology. We show that the persistence diagram arising from the topological morphology descriptor of a neuron is stable, with respect to the 1-Wasserstein distance, against a range of perturbations to the tree. These results guarantee that persistence images of topological morphology descriptors are stable against the same set of perturbations and are therefore reliable.
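The 1-Wasserstein distance between persistence diagrams, with matching to the diagonal allowed, reduces to a balanced assignment problem. A self-contained sketch using SciPy's Hungarian solver; the L-infinity ground metric and the toy diagrams are illustrative choices, not the paper's conventions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein1(D1, D2):
    """1-Wasserstein distance between two persistence diagrams (arrays of
    (birth, death) pairs), allowing matches to the diagonal, via the
    standard reduction to a balanced assignment problem."""
    D1 = np.atleast_2d(np.asarray(D1, dtype=float))
    D2 = np.atleast_2d(np.asarray(D2, dtype=float))
    n1, n2 = len(D1), len(D2)
    C = np.zeros((n1 + n2, n1 + n2))
    # point-to-point costs (L-infinity ground metric)
    C[:n1, :n2] = np.abs(D1[:, None, :] - D2[None, :, :]).max(axis=2)
    # point-to-diagonal costs: half the persistence of the point
    C[:n1, n2:] = ((D1[:, 1] - D1[:, 0]) / 2)[:, None]
    C[n1:, :n2] = ((D2[:, 1] - D2[:, 0]) / 2)[None, :]
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

A = [(0.0, 1.0), (0.2, 0.5)]
B = [(0.0, 1.1)]
print(wasserstein1(A, B))   # 0.1 for the matched pair + 0.15 to the diagonal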
1704.05412
Oleksandr Holovachov
Oleksandr Holovachov, Quiterie Haenel, Sarah J. Bourlat and Ulf Jondelius
Taxonomy assignment approach determines the efficiency of identification of metabarcodes in marine nematodes
24 pages, 3 figures, 3 supplementary figures, 9 supplementary tables, 1 supplementary data
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Precision and reliability of barcode-based biodiversity assessment can be affected at several steps during acquisition and analysis of the data. Identification of barcodes is one of the crucial steps in the process and can be accomplished using several different approaches, namely, alignment-based, probabilistic, tree-based and phylogeny-based. The number of identified sequences in the reference databases affects the precision of identification. This paper compares the identification of marine nematode barcodes using alignment-based, tree-based and phylogeny-based approaches. Because the nematode reference dataset is limited in its taxonomic scope, barcodes can only be assigned to higher taxonomic categories, families. The phylogeny-based approach, using the Evolutionary Placement Algorithm, provided the largest number of positively assigned metabarcodes and was least affected by erroneous sequences and limitations of the reference data, compared with the alignment-based and tree-based approaches.
[ { "created": "Tue, 18 Apr 2017 16:29:34 GMT", "version": "v1" } ]
2017-04-19
[ [ "Holovachov", "Oleksandr", "" ], [ "Haenel", "Quiterie", "" ], [ "Bourlat", "Sarah J.", "" ], [ "Jondelius", "Ulf", "" ] ]
Precision and reliability of barcode-based biodiversity assessment can be affected at several steps during acquisition and analysis of the data. Identification of barcodes is one of the crucial steps in the process and can be accomplished using several different approaches, namely, alignment-based, probabilistic, tree-based and phylogeny-based. The number of identified sequences in the reference databases affects the precision of identification. This paper compares the identification of marine nematode barcodes using alignment-based, tree-based and phylogeny-based approaches. Because the nematode reference dataset is limited in its taxonomic scope, barcodes can only be assigned to higher taxonomic categories, families. The phylogeny-based approach, using the Evolutionary Placement Algorithm, provided the largest number of positively assigned metabarcodes and was least affected by erroneous sequences and limitations of the reference data, compared with the alignment-based and tree-based approaches.
1503.06245
Carl Veller
Carl Veller and Laura K. Hayward
Finite-population evolution with rare mutations in asymmetric games
Journal of Economic Theory (2015)
null
10.1016/j.jet.2015.12.005
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model evolution according to an asymmetric game as occurring in multiple finite populations, one for each role in the game, and study the effect of subjecting individuals to stochastic strategy mutations. We show that, when these mutations occur sufficiently infrequently, the dynamics over all population states simplify to an ergodic Markov chain over just the pure population states (where each population is monomorphic). This makes calculation of the stationary distribution computationally feasible. The transition probabilities of this embedded Markov chain involve fixation probabilities of mutants in single populations. The asymmetry of the underlying game leads to fixation probabilities that are derived from frequency-independent selection, in contrast to the analogous single-population symmetric-game case (Fudenberg and Imhof 2006). This frequency independence is useful in that it allows us to employ results from the population genetics literature to calculate the stationary distribution of the evolutionary process, giving sharper, and sometimes even analytic, results. We demonstrate the utility of this approach by applying it to a battle-of-the-sexes game, a Crawford-Sobel signalling game, and the beer-quiche game of Cho and Kreps (1987).
[ { "created": "Fri, 20 Mar 2015 22:57:08 GMT", "version": "v1" }, { "created": "Mon, 21 Dec 2015 10:44:07 GMT", "version": "v2" } ]
2015-12-22
[ [ "Veller", "Carl", "" ], [ "Hayward", "Laura K.", "" ] ]
We model evolution according to an asymmetric game as occurring in multiple finite populations, one for each role in the game, and study the effect of subjecting individuals to stochastic strategy mutations. We show that, when these mutations occur sufficiently infrequently, the dynamics over all population states simplify to an ergodic Markov chain over just the pure population states (where each population is monomorphic). This makes calculation of the stationary distribution computationally feasible. The transition probabilities of this embedded Markov chain involve fixation probabilities of mutants in single populations. The asymmetry of the underlying game leads to fixation probabilities that are derived from frequency-independent selection, in contrast to the analogous single-population symmetric-game case (Fudenberg and Imhof 2006). This frequency independence is useful in that it allows us to employ results from the population genetics literature to calculate the stationary distribution of the evolutionary process, giving sharper, and sometimes even analytic, results. We demonstrate the utility of this approach by applying it to a battle-of-the-sexes game, a Crawford-Sobel signalling game, and the beer-quiche game of Cho and Kreps (1987).
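A sketch of the construction described above for a 2x2 asymmetric game: because fixation is frequency-independent, each transition probability of the embedded chain over the four pure population states is a standard Moran fixation probability. Payoff matrices, selection intensity and population size are hypothetical.

import numpy as np

N, w = 50, 0.1                 # population size and selection intensity (hypothetical)
A = np.array([[2.0, 0.0],      # row-player payoffs (hypothetical)
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],      # column-player payoffs (hypothetical)
              [0.0, 2.0]])

def rho(payoff_mut, payoff_res):
    """Moran fixation probability of one mutant whose relative fitness
    r = exp(w * payoff difference) is frequency-independent."""
    r = np.exp(w * (payoff_mut - payoff_res))
    if np.isclose(r, 1.0):
        return 1.0 / N
    return (1 - 1 / r) / (1 - 1 / r**N)

states = [(i, j) for i in range(2) for j in range(2)]   # pure population states
T = np.zeros((4, 4))
for a, (i, j) in enumerate(states):
    for b, (k, l) in enumerate(states):
        if k != i and l == j:              # rare mutation in population 1
            T[a, b] = 0.5 * rho(A[k, j], A[i, j])
        elif k == i and l != j:            # rare mutation in population 2
            T[a, b] = 0.5 * rho(B[i, l], B[i, j])
    T[a, a] = 1.0 - T[a].sum()

vals, vecs = np.linalg.eig(T.T)            # stationary distribution of the chain
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 3))))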
1707.08242
Thierry Dufour
S. Zhang, A. Rousseau, T. Dufour
Promoting lentil germination and stem growth by plasma activated tap water, demineralized water and liquid fertilizer
8 pages, 9 figures, article with peer review
RSC Adv., 2017,7, 31244-31251
10.1039/C7RA04663D
null
q-bio.OT physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Tap water, demineralized water and liquid fertilizer have been activated using an atmospheric pressure plasma jet (APPJ) to investigate their benefits for the germination rate and stem elongation rate of lentils from Puy-en-Velay (France). By plasma-activating tap water, we have obtained germination rates as high as 80% (instead of 30% with tap water). Also, higher stem elongation rates and final stem lengths were obtained using activated tap water compared with commercial fertilizer. We show that these rates of germination and stem growth strongly depend on the combination of two radicals generated in the liquids by the plasma: hydrogen peroxide and nitrate. This synergy appears to be a condition for releasing seed dormancy through the endogenous production of NO radicals.
[ { "created": "Tue, 25 Jul 2017 22:47:17 GMT", "version": "v1" } ]
2017-08-08
[ [ "Zhang", "S.", "" ], [ "Rousseau", "A.", "" ], [ "Dufour", "T.", "" ] ]
Tap water, demineralized water and liquid fertilizer have been activated using an atmospheric pressure plasma jet (APPJ) to investigate their benefits for the germination rate and stem elongation rate of lentils from Puy-en-Velay (France). By plasma-activating tap water, we have obtained germination rates as high as 80% (instead of 30% with tap water). Also, higher stem elongation rates and final stem lengths were obtained using activated tap water compared with commercial fertilizer. We show that these rates of germination and stem growth strongly depend on the combination of two radicals generated in the liquids by the plasma: hydrogen peroxide and nitrate. This synergy appears to be a condition for releasing seed dormancy through the endogenous production of NO radicals.
1006.2420
Alexander Sch\"onhuth
Alexander Sch\"onhuth, Raheleh Salari, S. Cenk Sahinalp
Pair HMM based gap statistics for re-evaluation of indels in alignments with affine gap penalties: Extended Version
17 pages, 7 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although computationally aligning sequences is a crucial step in the vast majority of comparative genomics studies, our understanding of alignment biases still needs to be improved. To infer true structural or homologous regions, computational alignments need further evaluation. It has been shown that the accuracy of aligned positions can drop substantially, in particular around gaps. Here we focus on re-evaluation of score-based alignments with affine gap penalty costs. We exploit their relationships with pair hidden Markov models and develop efficient algorithms by which to identify gaps which are significant in terms of length and multiplicity. We evaluate our statistics with respect to the well-established structural alignments from SABmark and find that indel reliability substantially increases with their significance, in particular in worst-case twilight zone alignments. This indicates that our statistics can reliably complement other methods which mostly focus on the reliability of match positions.
[ { "created": "Fri, 11 Jun 2010 23:56:14 GMT", "version": "v1" } ]
2010-06-15
[ [ "Schönhuth", "Alexander", "" ], [ "Salari", "Raheleh", "" ], [ "Sahinalp", "S. Cenk", "" ] ]
Although computationally aligning sequences is a crucial step in the vast majority of comparative genomics studies, our understanding of alignment biases still needs to be improved. To infer true structural or homologous regions, computational alignments need further evaluation. It has been shown that the accuracy of aligned positions can drop substantially, in particular around gaps. Here we focus on re-evaluation of score-based alignments with affine gap penalty costs. We exploit their relationships with pair hidden Markov models and develop efficient algorithms by which to identify gaps which are significant in terms of length and multiplicity. We evaluate our statistics with respect to the well-established structural alignments from SABmark and find that indel reliability substantially increases with their significance, in particular in worst-case twilight zone alignments. This indicates that our statistics can reliably complement other methods which mostly focus on the reliability of match positions.
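Under an affine gap penalty (equivalently, a pair HMM with a gap-extension probability p), gap lengths are geometrically distributed, so tail probabilities for gap length plus a crude multiplicity correction are one-liners. A simplified sketch; the paper's statistics are more refined, and the numbers below are hypothetical.

# Under an affine gap model, gap lengths are geometric:
# P(L = k) = (1 - p) * p**(k - 1), with p the gap-extension probability.
p_extend = 0.7        # hypothetical extension probability

def gap_tail(k, p=p_extend):
    """P(L >= k) for the geometric gap-length distribution."""
    return p ** (k - 1)

# Crude significance of the longest of m observed gaps
# (Bonferroni-style upper bound on the p-value).
m, longest = 12, 9
p_value = min(1.0, m * gap_tail(longest))
print(f"P(single gap >= {longest}) = {gap_tail(longest):.4f}; "
      f"corrected for {m} gaps: {p_value:.3f}")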
2312.15387
Zhaoning Yu
Zhaoning Yu and Hongyang Gao
MotifPiece: A Data-Driven Approach for Effective Motif Extraction and Molecular Representation Learning
null
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motif extraction is an important task in motif-based molecular representation learning. Previous machine learning approaches employed either rule-based or string-based techniques to extract motifs. Rule-based approaches may extract motifs that are not frequent or prevalent within the molecular data, which can lead to an incomplete understanding of essential structural patterns in molecules. String-based methods often lose the topological information inherent in molecules. This can be a significant drawback because topology plays a vital role in defining the spatial arrangement and connectivity of atoms within a molecule, which can be critical for understanding its properties and behavior. In this paper, we develop a data-driven motif extraction technique known as MotifPiece, which employs statistical measures to define motifs. To comprehensively evaluate the effectiveness of MotifPiece, we introduce a heterogeneous learning module. Our model shows an improvement compared to previously reported models. Additionally, we demonstrate that its performance can be further enhanced in two ways: first, by incorporating more data to aid in generating a richer motif vocabulary, and second, by merging multiple datasets that share enough motifs, allowing for cross-dataset learning.
[ { "created": "Sun, 24 Dec 2023 02:20:15 GMT", "version": "v1" } ]
2023-12-27
[ [ "Yu", "Zhaoning", "" ], [ "Gao", "Hongyang", "" ] ]
Motif extraction is an important task in motif-based molecular representation learning. Previous machine learning approaches employed either rule-based or string-based techniques to extract motifs. Rule-based approaches may extract motifs that are not frequent or prevalent within the molecular data, which can lead to an incomplete understanding of essential structural patterns in molecules. String-based methods often lose the topological information inherent in molecules. This can be a significant drawback because topology plays a vital role in defining the spatial arrangement and connectivity of atoms within a molecule, which can be critical for understanding its properties and behavior. In this paper, we develop a data-driven motif extraction technique known as MotifPiece, which employs statistical measures to define motifs. To comprehensively evaluate the effectiveness of MotifPiece, we introduce a heterogeneous learning module. Our model shows an improvement compared to previously reported models. Additionally, we demonstrate that its performance can be further enhanced in two ways: first, by incorporating more data to aid in generating a richer motif vocabulary, and second, by merging multiple datasets that share enough motifs, allowing for cross-dataset learning.
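A drastically simplified illustration of the data-driven idea: repeatedly merge the statistically most frequent adjacent pair of tokens to grow a motif vocabulary. The sketch operates on character-tokenized SMILES strings for brevity, whereas MotifPiece itself works on molecular graphs; the corpus below is hypothetical.

from collections import Counter

corpus = [list("CCOC(=O)C"), list("CCOCC"), list("CC(=O)OC")]

def merge_step(seqs):
    """One BPE-style step: find the most frequent adjacent token pair
    across the corpus and fuse every occurrence into a single token."""
    pairs = Counter()
    for s in seqs:
        pairs.update(zip(s, s[1:]))
    if not pairs:
        return seqs, None
    (a, b), _ = pairs.most_common(1)[0]
    out = []
    for s in seqs:
        merged, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) == (a, b):
                merged.append(a + b); i += 2
            else:
                merged.append(s[i]); i += 1
        out.append(merged)
    return out, a + b

for _ in range(4):
    corpus, motif = merge_step(corpus)
    print("merged motif:", motif)
print(corpus)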
q-bio/0701024
David A. Kessler
David A. Kessler, Nadav M. Shnerb
Solution of an infection model near threshold
null
null
10.1103/PhysRevE.76.010901
null
q-bio.PE
null
We study the Susceptible-Infected-Recovered model of epidemics in the vicinity of the threshold infectivity. We derive the distribution of total outbreak size in the limit of large population size $N$. This is accomplished by mapping the problem to the first passage time of a random walker subject to a drift that increases linearly with time. We recover the scaling results of Ben-Naim and Krapivsky that the effective maximal size of the outbreak scales as $N^{2/3}$, with the average scaling as $N^{1/3}$, with an explicit form for the scaling function.
[ { "created": "Wed, 17 Jan 2007 15:58:32 GMT", "version": "v1" } ]
2009-11-13
[ [ "Kessler", "David A.", "" ], [ "Shnerb", "Nadav M.", "" ] ]
We study the Susceptible-Infected-Recovered model of epidemics in the vicinity of the threshold infectivity. We derive the distribution of total outbreak size in the limit of large population size $N$. This is accomplished by mapping the problem to the first passage time of a random walker subject to a drift that increases linearly with time. We recover the scaling results of Ben-Naim and Krapivsky that the effective maximal size of the outbreak scales as $N^{2/3}$, with the average scaling as $N^{1/3}$, with an explicit form for the scaling function.
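A sketch checking the quoted scaling numerically: the embedded jump chain of the stochastic SIR model at threshold (beta = gamma), with the mean outbreak size compared against N^{1/3}. Run counts are small, so expect noisy agreement up to a constant factor.

import numpy as np

rng = np.random.default_rng(3)

def outbreak_size(N):
    """Total size of one stochastic SIR outbreak at threshold (beta = gamma):
    the next event is an infection with probability
    (beta S I / N) / (beta S I / N + gamma I) = S / (S + N)."""
    S, I, R = N - 1, 1, 0
    while I > 0:
        if rng.random() < S / (S + N):
            S -= 1; I += 1
        else:
            I -= 1; R += 1
    return R

for N in (1000, 8000, 64000):
    sizes = [outbreak_size(N) for _ in range(200)]
    print(f"N={N}: mean size {np.mean(sizes):7.1f}   N^(1/3) = {N ** (1/3):6.1f}")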
q-bio/0409001
Michael C. Mackey
Michael C. Mackey and Moises Santillan
Mathematics, Biology, and Physics: Interactions and Interdependence
12 pages
null
null
null
q-bio.OT
null
This paper traces the seminal roles that physicists and mathematicians have played in the conceptual development of the biological sciences in the past, and especially in the 19th and 20th centuries.
[ { "created": "Wed, 1 Sep 2004 00:03:48 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mackey", "Michael C.", "" ], [ "Santillan", "Moises", "" ] ]
This paper traces the seminal roles that physicists and mathematicians have played in the conceptual development of the biological sciences in the past, and especially in the 19th and 20th centuries.
0711.4262
Thibault Lagache
T. Lagache, E. Dauty, D. Holcman
Toward a quantitative analysis of virus and plasmid trafficking in cells
10 pages, 6 figures
null
null
null
q-bio.QM q-bio.SC
null
Intracellular transport of DNA carriers is a fundamental step of gene delivery. We present here a theoretical approach for studying, generically, the trafficking of a single virus or DNA particle in a cell cytoplasm. Cellular trafficking has been studied experimentally mostly at the macroscopic level, but very little has been done so far at the microscopic level. We present here a physical model to account for certain aspects of cellular organization, starting with the observation that a viral particle trajectory consists of epochs of pure diffusion and epochs of active transport along microtubules. We define a general degradation rate to describe the limitations of the delivery of plasmid or viral particles to the nucleus imposed by various types of direct and indirect hydrolysis activity inside the cytoplasm. Following a homogenization procedure, which consists of replacing the switching dynamics by a single steady state stochastic description, not only can we study the spatio-temporal dynamics of moving objects in the cytosol, but we can also estimate the probability and the mean time to go from the cell membrane to a nuclear pore. Computational simulations confirm that our model can be used to analyze and interpret viral trajectories and estimate quantitatively the success of nuclear delivery.
[ { "created": "Tue, 27 Nov 2007 14:22:35 GMT", "version": "v1" } ]
2007-11-28
[ [ "Lagache", "T.", "" ], [ "Dauty", "E.", "" ], [ "Holcman", "D.", "" ] ]
Intracellular transport of DNA carriers is a fundamental step of gene delivery. We present here a theoretical approach for studying, generically, the trafficking of a single virus or DNA particle in a cell cytoplasm. Cellular trafficking has been studied experimentally mostly at the macroscopic level, but very little has been done so far at the microscopic level. We present here a physical model to account for certain aspects of cellular organization, starting with the observation that a viral particle trajectory consists of epochs of pure diffusion and epochs of active transport along microtubules. We define a general degradation rate to describe the limitations of the delivery of plasmid or viral particles to the nucleus imposed by various types of direct and indirect hydrolysis activity inside the cytoplasm. Following a homogenization procedure, which consists of replacing the switching dynamics by a single steady state stochastic description, not only can we study the spatio-temporal dynamics of moving objects in the cytosol, but we can also estimate the probability and the mean time to go from the cell membrane to a nuclear pore. Computational simulations confirm that our model can be used to analyze and interpret viral trajectories and estimate quantitatively the success of nuclear delivery.
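A one-dimensional caricature of the switching dynamics described above: a particle alternates between free diffusion and ballistic transport along a microtubule and may be degraded before reaching the nucleus. All parameter values are hypothetical, and real trafficking is three-dimensional.

import numpy as np

rng = np.random.default_rng(4)

D, v = 0.05, 1.0          # diffusion coefficient (um^2/s), transport speed (um/s)
k_on, k_off = 1.0, 2.0    # microtubule binding/unbinding rates (1/s)
k_deg = 0.02              # degradation rate (1/s)
L_nuc, dt = 10.0, 0.01    # membrane-to-nucleus distance (um), time step (s)

def arrives():
    """One trajectory: True if the particle reaches x = L_nuc, False if degraded."""
    x, bound = 0.0, False
    while x < L_nuc:
        if rng.random() < k_deg * dt:
            return False                     # degraded en route
        if bound:
            x += v * dt                      # ballistic transport epoch
            if rng.random() < k_off * dt:
                bound = False
        else:
            x = abs(x + np.sqrt(2 * D * dt) * rng.normal())   # reflecting membrane
            if rng.random() < k_on * dt:
                bound = True
    return True

trials = 300
print("P(reach nucleus) ~", sum(arrives() for _ in range(trials)) / trials)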
q-bio/0404002
J. S. van Zon
Jeroen S. van Zon and Pieter Rein ten Wolde
Green's Function Reaction Dynamics: a new approach to simulate biochemical networks at the particle level and in time and space
13 pages, 6 figures
null
null
null
q-bio.MN
null
Biochemical networks are the analog computers of life. They allow living cells to control a large number of biological processes, such as gene expression and cell signalling. In biochemical networks, the concentrations of the components are often low. This means that the discrete nature of the reactants and the stochastic character of their interactions have to be taken into account. Moreover, the spatial distribution of the components can be of crucial importance. However, the current numerical techniques for simulating biochemical networks either ignore the particulate nature of matter or treat the spatial fluctuations in a mean-field manner. We have developed a new technique, called Green's Function Reaction Dynamics (GFRD), that makes it possible to simulate biochemical networks at the particle level and in both time and space. In this scheme, a maximum time step is chosen such that only single particles or pairs of particles have to be considered. For these particles, the Smoluchowski equation can be solved analytically using Green's functions. The main idea of GFRD is to exploit the exact solution of the Smoluchowski equation to set up an event-driven algorithm. This allows GFRD to make large jumps in time when the particles are far apart from each other. Here, we apply the technique to a simple model of gene expression. The simulations reveal that the scheme is highly efficient. Under biologically relevant conditions, GFRD is up to six orders of magnitude faster than conventional particle-based techniques for simulating biochemical networks in time and space.
[ { "created": "Fri, 2 Apr 2004 16:23:28 GMT", "version": "v1" }, { "created": "Fri, 11 Jun 2004 10:51:29 GMT", "version": "v2" } ]
2007-05-23
[ [ "van Zon", "Jeroen S.", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Biochemical networks are the analog computers of life. They allow living cells to control a large number of biological processes, such as gene expression and cell signalling. In biochemical networks, the concentrations of the components are often low. This means that the discrete nature of the reactants and the stochastic character of their interactions have to be taken into account. Moreover, the spatial distribution of the components can be of crucial importance. However, the current numerical techniques for simulating biochemical networks either ignore the particulate nature of matter or treat the spatial fluctuations in a mean-field manner. We have developed a new technique, called Green's Function Reaction Dynamics (GFRD), that makes it possible to simulate biochemical networks at the particle level and in both time and space. In this scheme, a maximum time step is chosen such that only single particles or pairs of particles have to be considered. For these particles, the Smoluchowski equation can be solved analytically using Green's functions. The main idea of GFRD is to exploit the exact solution of the Smoluchowski equation to set up an event-driven algorithm. This allows GFRD to make large jumps in time when the particles are far apart from each other. Here, we apply the technique to a simple model of gene expression. The simulations reveal that the scheme is highly efficient. Under biologically relevant conditions, GFRD is up to six orders of magnitude faster than conventional particle-based techniques for simulating biochemical networks in time and space.
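A sketch of the step-size rule at the heart of the scheme: choose the largest time step such that particles are unlikely to travel more than a set fraction of the nearest inter-particle distance, so that single-body and two-body Green's functions stay exact. Sampling from those Green's functions is omitted; the configuration and safety factor below are arbitrary.

import numpy as np

rng = np.random.default_rng(5)

D = 1.0                                   # common diffusion coefficient (assumption)
H = 3.0                                   # safety factor
pos = rng.uniform(0.0, 10.0, size=(20, 3))

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
d_min = d.min()
dt_max = (d_min / H) ** 2 / (6.0 * D)     # require H * sqrt(6 D dt) <= d_min
print(f"nearest-pair distance {d_min:.3f} -> maximum time step {dt_max:.5f}")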
2103.03348
Andrew Holbrook
Andrew J. Holbrook, Xiang Ji, Marc A. Suchard
From viral evolution to spatial contagion: a biologically modulated Hawkes model
null
null
null
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mutations sometimes increase contagiousness for evolving pathogens. During an epidemic, scientists use viral genome data to infer a shared evolutionary history and connect this history to geographic spread. We propose a model that directly relates a pathogen's evolution to its spatial contagion dynamics -- effectively combining the two epidemiological paradigms of phylogenetic inference and self-exciting process modeling -- and apply this \emph{phylogenetic Hawkes process} to a Bayesian analysis of 23,422 viral cases from the 2014-2016 Ebola outbreak in West Africa. The proposed model is able to detect individual viruses with significantly elevated rates of spatiotemporal propagation for a subset of 1,610 samples that provide genome data. Finally, to facilitate model application in big data settings, we develop massively parallel implementations for the gradient and Hessian of the log-likelihood and apply our high performance computing framework within an adaptively preconditioned Hamiltonian Monte Carlo routine.
[ { "created": "Wed, 3 Mar 2021 02:36:55 GMT", "version": "v1" }, { "created": "Mon, 22 Mar 2021 18:54:00 GMT", "version": "v2" }, { "created": "Fri, 10 Sep 2021 20:11:22 GMT", "version": "v3" } ]
2021-09-14
[ [ "Holbrook", "Andrew J.", "" ], [ "Ji", "Xiang", "" ], [ "Suchard", "Marc A.", "" ] ]
Mutations sometimes increase contagiousness for evolving pathogens. During an epidemic, scientists use viral genome data to infer a shared evolutionary history and connect this history to geographic spread. We propose a model that directly relates a pathogen's evolution to its spatial contagion dynamics -- effectively combining the two epidemiological paradigms of phylogenetic inference and self-exciting process modeling -- and apply this \emph{phylogenetic Hawkes process} to a Bayesian analysis of 23,422 viral cases from the 2014-2016 Ebola outbreak in West Africa. The proposed model is able to detect individual viruses with significantly elevated rates of spatiotemporal propagation for a subset of 1,610 samples that provide genome data. Finally, to facilitate model application in big data settings, we develop massively parallel implementations for the gradient and Hessian of the log-likelihood and apply our high performance computing framework within an adaptively preconditioned Hamiltonian Monte Carlo routine.
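A sketch of the purely temporal ingredient of such models: the log-likelihood of a self-exciting Hawkes process with an exponential kernel, computed in O(n) with the usual recursion. The paper's phylogenetic Hawkes process adds spatial and genome-informed terms not reproduced here; the parameters and event times below are toy values.

import numpy as np

def hawkes_loglik(times, mu, alpha, omega, T):
    """Log-likelihood of a temporal Hawkes process on [0, T] with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * omega * exp(-omega (t - t_i))."""
    times = np.sort(np.asarray(times, dtype=float))
    ll, excite, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            excite = (excite + alpha * omega) * np.exp(-omega * (t - prev))
        ll += np.log(mu + excite)
        prev = t
    # compensator: integral of lambda(t) over [0, T]
    ll -= mu * T + alpha * np.sum(1.0 - np.exp(-omega * (T - times)))
    return ll

events = [0.5, 1.2, 1.3, 3.0, 3.1, 3.15]
print(hawkes_loglik(events, mu=0.5, alpha=0.4, omega=2.0, T=5.0))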
1112.3529
Marta Casanellas
Marta Casanellas and Anna Kedzierska
Generating Markov evolutionary matrices for a given branch length
22 pages
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Under a Markovian evolutionary process, the expected number of substitutions per site (also called the branch length) that have occurred when a sequence has evolved from another according to a transition matrix $P$ can be approximated by $-\frac{1}{4}\log\det P$. When the Markov process is assumed to be continuous in time, i.e. $P=\exp(Qt)$, it is easy to simulate this evolutionary process for a given branch length (this amounts to requiring that $Q$ have a certain trace). For the more general case (what we call discrete-time models), it is not trivial to generate a substitution matrix $P$ of given determinant (i.e. corresponding to a process of given branch length). In this paper we solve this problem for the most well-known discrete-time models JC*, K80*, K81*, SSM and GMM. These models lie in the class of nonhomogeneous evolutionary models. For any of these models we provide concise algorithms to generate matrices $P$ of given determinant. Moreover, in the first four models, our results prove that any of these matrices can be generated in this way. Our techniques are mainly based on algebraic tools.
[ { "created": "Thu, 15 Dec 2011 14:58:34 GMT", "version": "v1" } ]
2011-12-16
[ [ "Casanellas", "Marta", "" ], [ "Kedzierska", "Anna", "" ] ]
Under a Markovian evolutionary process, the expected number of substitutions per site (also called the branch length) that have occurred when a sequence has evolved from another according to a transition matrix $P$ can be approximated by $-\frac{1}{4}\log\det P$. When the Markov process is assumed to be continuous in time, i.e. $P=\exp(Qt)$, it is easy to simulate this evolutionary process for a given branch length (this amounts to requiring that $Q$ have a certain trace). For the more general case (what we call discrete-time models), it is not trivial to generate a substitution matrix $P$ of given determinant (i.e. corresponding to a process of given branch length). In this paper we solve this problem for the most well-known discrete-time models JC*, K80*, K81*, SSM and GMM. These models lie in the class of nonhomogeneous evolutionary models. For any of these models we provide concise algorithms to generate matrices $P$ of given determinant. Moreover, in the first four models, our results prove that any of these matrices can be generated in this way. Our techniques are mainly based on algebraic tools.
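In the continuous-time case mentioned above the problem is easy because $\det(\exp(Qt)) = \exp(t\,\mathrm{tr}\,Q)$, so the target branch length fixes the time scaling $t = -4\ell/\mathrm{tr}\,Q$. A sketch with a K80-type rate matrix; kappa and the target branch length are arbitrary choices.

import numpy as np
from scipy.linalg import expm

kappa = 2.0
# Hypothetical K80-type rate matrix in the order (A, G, C, T):
# transitions at rate kappa, transversions at rate 1.
Q = np.array([[-(kappa + 2), kappa, 1, 1],
              [kappa, -(kappa + 2), 1, 1],
              [1, 1, -(kappa + 2), kappa],
              [1, 1, kappa, -(kappa + 2)]], dtype=float)

ell = 0.3                                 # target branch length
t = -4.0 * ell / np.trace(Q)              # det(exp(Qt)) = exp(t * trace(Q))
P = expm(Q * t)
print("recovered branch length:", -0.25 * np.log(np.linalg.det(P)))   # 0.3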
q-bio/0310006
Paulo Campos
Viviane M. de Oliveira and Paulo R. A. Campos
Dynamics of Fixation of Advantageous Mutations
13 pages
null
10.1016/j.physa.2004.02.007
null
q-bio.PE
null
We investigate the process of fixation of advantageous mutations in an asexual population. We assume that the effect of each beneficial mutation is exponentially distributed with mean value $\omega_{med}=1/\beta$. The model also considers that the effect of each new deleterious mutation reduces the fitness of the organism independently of the previous number of mutations. We use the branching process formulation and also extensive simulations to study the model. The agreement between the analytical predictions and the simulational data is quite satisfactory. Surprisingly, we observe that the dependence of the probability of fixation $P_{fix}$ on the parameter $\omega_{med}$ is precisely described by a power-law relation, $P_{fix} \sim \omega_{med}^{\gamma}$. The exponent $\gamma$ is an increasing function of the rate of deleterious mutations $U$, whereas the probability $P_{fix}$ is a decreasing function of $U$. The mean value $\omega_{fix}$ of the beneficial mutations which reach ultimate fixation depends on $U$ and $\omega_{med}$. The ratio $\omega_{fix}/\omega_{med}$ increases as we consider higher values of the mutation rate $U$ in the region of intermediate to large values of $\omega_{med}$, whereas for low $\omega_{med}$ we observe the opposite behavior.
[ { "created": "Wed, 8 Oct 2003 20:03:35 GMT", "version": "v1" } ]
2009-11-10
[ [ "de Oliveira", "Viviane M.", "" ], [ "Campos", "Paulo R. A.", "" ] ]
We investigate the process of fixation of advantageous mutations in an asexual population. We assume that the effect of each beneficial mutation is exponentially distributed with mean value $\omega_{med}=1/\beta$. The model also considers that the effect of each new deleterious mutation reduces the fitness of the organism independently of the previous number of mutations. We use the branching process formulation and also extensive simulations to study the model. The agreement between the analytical predictions and the simulational data is quite satisfactory. Surprisingly, we observe that the dependence of the probability of fixation $P_{fix}$ on the parameter $\omega_{med}$ is precisely described by a power-law relation, $P_{fix} \sim \omega_{med}^{\gamma}$. The exponent $\gamma$ is an increasing function of the rate of deleterious mutations $U$, whereas the probability $P_{fix}$ is a decreasing function of $U$. The mean value $\omega_{fix}$ of the beneficial mutations which reach ultimate fixation depends on $U$ and $\omega_{med}$. The ratio $\omega_{fix}/\omega_{med}$ increases as we consider higher values of the mutation rate $U$ in the region of intermediate to large values of $\omega_{med}$, whereas for low $\omega_{med}$ we observe the opposite behavior.
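A crude sketch of the branching-process picture: draw the selective effect from an exponential distribution with mean omega_med and ask whether a Poisson branching lineage escapes extinction. Deleterious mutations (the paper's $U$) are ignored here, so this only illustrates the machinery, not the reported power law; all numerical choices are arbitrary.

import numpy as np

rng = np.random.default_rng(6)

def establishes(s, max_gen=200, cap=500):
    """Does a lineage with Poisson offspring of mean 1 + s escape extinction?
    Reaching `cap` copies is treated as certain establishment."""
    n = 1
    for _ in range(max_gen):
        n = rng.poisson((1 + s) * n)
        if n == 0:
            return False
        if n >= cap:
            return True
    return n >= cap

def p_fix(omega_med, trials=5000):
    """Fixation probability when s ~ Exponential(mean = omega_med)."""
    return np.mean([establishes(rng.exponential(omega_med)) for _ in range(trials)])

for w in (0.01, 0.02, 0.05, 0.1):
    print(w, p_fix(w))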
q-bio/0512016
Guilherme Innocentini
Guilherme da C.P. Innocentini and Jose E. M. Hornos
Modeling stochastic gene expression under repression
20 pages, 20 figures
null
null
null
q-bio.OT
null
Intrinsic transcriptional noise induced by operator fluctuations is investigated with a simple spin-like stochastic model. The effects of transcriptional fluctuations on protein synthesis are probed by coupling transcription and translation through an amplificative interaction. In the presence of repression a new term contributes to the noise which depends on the rate of mRNA production. If the switching time is small compared with the mRNA lifetime, the noise is also small. In general the damping of protein production by a repressive agent occurs linearly, but the fluctuations can show a maximum at intermediate repression. The discrepancy between the switching time, the mRNA degradation and the protein degradation is crucial for the repressive control in translation without large fluctuations. The noise profiles obtained here are in quantitative agreement with recent experiments.
[ { "created": "Wed, 7 Dec 2005 22:01:56 GMT", "version": "v1" } ]
2007-05-23
[ [ "Innocentini", "Guilherme da C. P.", "" ], [ "Hornos", "Jose E. M.", "" ] ]
Intrinsic transcriptional noise induced by operator fluctuations is investigated with a simple spin-like stochastic model. The effects of transcriptional fluctuations on protein synthesis are probed by coupling transcription and translation through an amplificative interaction. In the presence of repression a new term contributes to the noise which depends on the rate of mRNA production. If the switching time is small compared with the mRNA lifetime, the noise is also small. In general the damping of protein production by a repressive agent occurs linearly, but the fluctuations can show a maximum at intermediate repression. The discrepancy between the switching time, the mRNA degradation and the protein degradation is crucial for the repressive control in translation without large fluctuations. The noise profiles obtained here are in quantitative agreement with recent experiments.
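A sketch of the two-state (telegraph) operator model underlying such spin-like descriptions, simulated with the Gillespie algorithm; the printed Fano factor exceeds 1 when operator switching is slow relative to mRNA turnover. All rates are hypothetical, and sampling at event times makes the moments approximate.

import numpy as np

rng = np.random.default_rng(7)

k_on, k_off = 0.5, 0.5      # operator switching rates (hypothetical)
k_m, g_m = 10.0, 1.0        # mRNA production (on state only) and degradation

def simulate(T=2000.0):
    """Gillespie simulation of the telegraph gene model; states are sampled
    at event times, so the reported moments are approximate."""
    t, on, m, samples = 0.0, 1, 0, []
    while t < T:
        rates = [k_on * (1 - on), k_off * on, k_m * on, g_m * m]
        total = sum(rates)
        t += rng.exponential(1.0 / total)
        r = rng.random() * total
        if r < rates[0]:
            on = 1
        elif r < rates[0] + rates[1]:
            on = 0
        elif r < rates[0] + rates[1] + rates[2]:
            m += 1
        else:
            m -= 1
        samples.append(m)
    return np.array(samples)

m = simulate()
print(f"mean mRNA {m.mean():.2f}, Fano factor {m.var() / m.mean():.2f}")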