Dataset schema (column, feature type, min/max length or number of classes):

column          type            min    max
id              stringlengths   9      13
submitter       stringlengths   4      48
authors         stringlengths   4      9.62k
title           stringlengths   4      343
comments        stringlengths   2      480
journal-ref     stringlengths   9      309
doi             stringlengths   12     138
report-no       stringclasses   277 values
categories      stringlengths   8      87
license         stringclasses   9 values
orig_abstract   stringlengths   27     3.76k
versions        listlengths     1      15
update_date     stringlengths   10     10
authors_parsed  listlengths     1      147
abstract        stringlengths   24     3.75k
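Each record below lists its field values flat, one per line, in the schema order above, with the literal string `null` marking missing entries. A minimal stdlib-only sketch (the field list is taken from the schema; the `null` convention from the records) that regroups such a flat dump into one dict per record:

```python
# Field order of each record, taken from the schema above.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def rows_to_records(values):
    """Group a flat list of field values into one dict per record.

    Assumes values appear in schema order and that the literal string
    'null' marks a missing entry, as in the dump below.
    """
    if len(values) % len(FIELDS) != 0:
        raise ValueError("value count is not a multiple of the field count")
    records = []
    for i in range(0, len(values), len(FIELDS)):
        chunk = values[i:i + len(FIELDS)]
        records.append({f: (None if v == "null" else v)
                        for f, v in zip(FIELDS, chunk)})
    return records
```

Applied to the 15 lines of any single record below, this yields a dict keyed by column name, with `None` in place of each `null`.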
2307.07794
Anna Levina
Sahel Azizpour, Viola Priesemann, Johannes Zierenberg, Anna Levina
Available observation time regulates optimal balance between sensitivity and confidence
6 pages, 3 figures; methods: 4 pages, 1 figure; supplementary. Comments and suggestions are welcome
null
null
null
q-bio.NC cond-mat.dis-nn
http://creativecommons.org/licenses/by/4.0/
Tasks that require information about the world imply a trade-off between the time spent on observation and the variance of the response. In particular, fast decisions need to rely on uncertain information. However, standard estimates of information processing capabilities, such as the dynamic range, are defined based on mean values that assume infinite observation times. Here, we show that limiting the observation time results in distributions of responses whose variance increases with the temporal correlations in a system and, importantly, affects a system's confidence in distinguishing inputs and thereby making decisions. To quantify the ability to distinguish features of an input, we propose several measures and demonstrate them on the prime example of a recurrent neural network that represents an input rate by a response firing averaged over a finite observation time. We show analytically and in simulations that the optimal tuning of the network depends on the available observation time, implying that tasks require a ``useful'' rather than maximal sensitivity. Interestingly, this shifts the optimal dynamic regime from critical to subcritical for finite observation times and highlights the importance of incorporating the finite observation times concept in future studies of information processing capabilities in a principled manner.
[ { "created": "Sat, 15 Jul 2023 12:52:27 GMT", "version": "v1" } ]
2023-07-18
[ [ "Azizpour", "Sahel", "" ], [ "Priesemann", "Viola", "" ], [ "Zierenberg", "Johannes", "" ], [ "Levina", "Anna", "" ] ]
Tasks that require information about the world imply a trade-off between the time spent on observation and the variance of the response. In particular, fast decisions need to rely on uncertain information. However, standard estimates of information processing capabilities, such as the dynamic range, are defined based on mean values that assume infinite observation times. Here, we show that limiting the observation time results in distributions of responses whose variance increases with the temporal correlations in a system and, importantly, affects a system's confidence in distinguishing inputs and thereby making decisions. To quantify the ability to distinguish features of an input, we propose several measures and demonstrate them on the prime example of a recurrent neural network that represents an input rate by a response firing averaged over a finite observation time. We show analytically and in simulations that the optimal tuning of the network depends on the available observation time, implying that tasks require a ``useful'' rather than maximal sensitivity. Interestingly, this shifts the optimal dynamic regime from critical to subcritical for finite observation times and highlights the importance of incorporating the finite observation times concept in future studies of information processing capabilities in a principled manner.
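The `versions` entries in these records are serialized JSON with RFC 2822 timestamps. A sketch parsing the entry from the record above with the standard library:

```python
import json
from email.utils import parsedate_to_datetime

# The versions field of the record above, verbatim.
raw = '[ { "created": "Sat, 15 Jul 2023 12:52:27 GMT", "version": "v1" } ]'

versions = json.loads(raw)
# parsedate_to_datetime handles the RFC 2822 date format used by arXiv.
created = parsedate_to_datetime(versions[0]["created"])
print(versions[0]["version"], created.date())
# prints "v1 2023-07-15"
```

Note that `update_date` (the plain `YYYY-MM-DD` line) is a separate field and need not match the creation date of any version.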
1804.06835
Jose A. Cuesta
Jacobo Aguirre, Pablo Catal\'an, Jos\'e A. Cuesta, and Susanna Manrubia
On the networked architecture of genotype spaces and its critical effects on molecular evolution
48 pages, 4 figures
Open Biology 8, 180069 (2018)
10.1098/rsob.180069
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary dynamics is often viewed as a subtle process of change accumulation that causes a divergence among organisms and their genomes. However, this interpretation is an inheritance of a gradualistic view that has been challenged at the macroevolutionary, ecological, and molecular level. Actually, when the complex architecture of genotype spaces is taken into account, the evolutionary dynamics of molecular populations becomes intrinsically non-uniform, sharing deep qualitative and quantitative similarities with slowly driven physical systems: non-linear responses analogous to critical transitions, sudden state changes, or hysteresis, among others. Furthermore, the phenotypic plasticity inherent to genotypes transforms classical fitness landscapes into multiscapes where adaptation in response to an environmental change may be very fast. The quantitative nature of adaptive molecular processes is deeply dependent on a networks-of-networks multilayered structure of the map from genotype to function that we begin to unveil.
[ { "created": "Wed, 18 Apr 2018 17:49:47 GMT", "version": "v1" } ]
2018-07-27
[ [ "Aguirre", "Jacobo", "" ], [ "Catalán", "Pablo", "" ], [ "Cuesta", "José A.", "" ], [ "Manrubia", "Susanna", "" ] ]
Evolutionary dynamics is often viewed as a subtle process of change accumulation that causes a divergence among organisms and their genomes. However, this interpretation is an inheritance of a gradualistic view that has been challenged at the macroevolutionary, ecological, and molecular level. Actually, when the complex architecture of genotype spaces is taken into account, the evolutionary dynamics of molecular populations becomes intrinsically non-uniform, sharing deep qualitative and quantitative similarities with slowly driven physical systems: non-linear responses analogous to critical transitions, sudden state changes, or hysteresis, among others. Furthermore, the phenotypic plasticity inherent to genotypes transforms classical fitness landscapes into multiscapes where adaptation in response to an environmental change may be very fast. The quantitative nature of adaptive molecular processes is deeply dependent on a networks-of-networks multilayered structure of the map from genotype to function that we begin to unveil.
1809.05336
Richa Tripathi
Richa Tripathi, Dyutiman Mukhopadhyay, Chakresh Kumar Singh, Krishna Prasad Miyapuram and Shivakumar Jolad
Characterizing functional brain networks and emotional centers based on Rasa theory of Indian aesthetics
13 pages, 7 figures
null
10.1007/978-3-030-36683-4_68
Complex Networks and Their Applications VIII pp 854-867
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
In the Indian history of arts, Rasas are the aesthetics associated with any auditory, visual, literary or musical piece of art that evokes highly orchestrated emotional states. In this work, we study the functional response of the brain to movie clips meant to evoke the Rasas through network analysis. We extract functional brain networks using coherence measures on EEG recordings of film clips from popular Indian Bollywood movies representing the nine Rasas of the Indian Natyasastra. Structural and functional network measures were computed for these brain networks, averaging over a range of significant edge weights, in different brainwave frequency bands. We identify the segregation of neuronal wiring in the brain into modules using a community detection algorithm. Further, using a mutual information measure, we compare and contrast the modular organizations of the brain networks corresponding to different Rasas. Hubs identified using centrality measures reveal the network nodes that are central to information propagation across all Rasas. We also observe that functional connectivity is suppressed when high-frequency waves such as beta and gamma are dominant in the brain. The significant links causing differences between Rasa pairs are extracted statistically.
[ { "created": "Fri, 14 Sep 2018 10:15:54 GMT", "version": "v1" } ]
2020-05-05
[ [ "Tripathi", "Richa", "" ], [ "Mukhopadhyay", "Dyutiman", "" ], [ "Singh", "Chakresh Kumar", "" ], [ "Miyapuram", "Krishna Prasad", "" ], [ "Jolad", "Shivakumar", "" ] ]
In the Indian history of arts, Rasas are the aesthetics associated with any auditory, visual, literary or musical piece of art that evokes highly orchestrated emotional states. In this work, we study the functional response of the brain to movie clips meant to evoke the Rasas through network analysis. We extract functional brain networks using coherence measures on EEG recordings of film clips from popular Indian Bollywood movies representing the nine Rasas of the Indian Natyasastra. Structural and functional network measures were computed for these brain networks, averaging over a range of significant edge weights, in different brainwave frequency bands. We identify the segregation of neuronal wiring in the brain into modules using a community detection algorithm. Further, using a mutual information measure, we compare and contrast the modular organizations of the brain networks corresponding to different Rasas. Hubs identified using centrality measures reveal the network nodes that are central to information propagation across all Rasas. We also observe that functional connectivity is suppressed when high-frequency waves such as beta and gamma are dominant in the brain. The significant links causing differences between Rasa pairs are extracted statistically.
1011.0244
Ascelin Gordon Dr
Ascelin Gordon, William T. Langford, Matt D. White, James A. Todd, Lucy Bastin
Modelling trade offs between public and private conservation policies
20 pages, 5 figures
null
10.1016/j.biocon.2010.10.011
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To reduce global biodiversity loss, there is an urgent need to determine the most efficient allocation of conservation resources. Recently, there has been a growing trend for many governments to supplement public ownership and management of reserves with incentive programs for conservation on private land. At the same time, policies to promote conservation on private land are rarely evaluated in terms of their ecological consequences. This raises important questions, such as the extent to which private land conservation can improve conservation outcomes, and how it should be mixed with more traditional public land conservation. We address these questions, using a general framework for modelling environmental policies and a case study examining the conservation of endangered native grasslands to the west of Melbourne, Australia. Specifically, we examine three policies that involve: i) spending all resources on creating public conservation areas; ii) spending all resources on an ongoing incentive program where private landholders are paid to manage vegetation on their property with 5-year contracts; and iii) splitting resources between these two approaches. The performance of each strategy is quantified with a vegetation condition change model that predicts future changes in grassland quality. Of the policies tested, no one policy was always best and policy performance depended on the objectives of those enacting the policy. This work demonstrates a general method for evaluating environmental policies and highlights the utility of a model which combines ecological and socioeconomic processes.
[ { "created": "Mon, 1 Nov 2010 04:04:47 GMT", "version": "v1" } ]
2010-11-02
[ [ "Gordon", "Ascelin", "" ], [ "Langford", "William T.", "" ], [ "White", "Matt D.", "" ], [ "Todd", "James A.", "" ], [ "Bastin", "Lucy", "" ] ]
To reduce global biodiversity loss, there is an urgent need to determine the most efficient allocation of conservation resources. Recently, there has been a growing trend for many governments to supplement public ownership and management of reserves with incentive programs for conservation on private land. At the same time, policies to promote conservation on private land are rarely evaluated in terms of their ecological consequences. This raises important questions, such as the extent to which private land conservation can improve conservation outcomes, and how it should be mixed with more traditional public land conservation. We address these questions, using a general framework for modelling environmental policies and a case study examining the conservation of endangered native grasslands to the west of Melbourne, Australia. Specifically, we examine three policies that involve: i) spending all resources on creating public conservation areas; ii) spending all resources on an ongoing incentive program where private landholders are paid to manage vegetation on their property with 5-year contracts; and iii) splitting resources between these two approaches. The performance of each strategy is quantified with a vegetation condition change model that predicts future changes in grassland quality. Of the policies tested, no one policy was always best and policy performance depended on the objectives of those enacting the policy. This work demonstrates a general method for evaluating environmental policies and highlights the utility of a model which combines ecological and socioeconomic processes.
2007.01436
Vikram Sundar
Vikram Sundar (1) and Lucy Colwell (1 and 2) ((1) Google Research, (2) Department of Chemistry, University of Cambridge)
Attribution Methods Reveal Flaws in Fingerprint-Based Virtual Screening
4 pages, 5 figures. In proceedings for the 2020 ICML workshop on Machine Learning Interpretability for Scientific Discovery
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fingerprint-based models for protein-ligand binding have demonstrated outstanding success on benchmark datasets; however, these models may not learn the correct binding rules. To assess this concern, we use in silico datasets with known binding rules to develop a general framework for evaluating model attribution. This framework identifies fragments that a model considers necessary to achieve a particular score, sidestepping the need for a model to be differentiable. Our results confirm that high-performing models may not learn the correct binding rule, and suggest concrete steps that can remedy this situation. We show that adding fragment-matched inactive molecules (decoys) to the data reduces attribution false negatives, while attribution false positives largely arise from the background correlation structure of molecular data. Normalizing for these background correlations helps to reveal the true binding logic. Our work highlights the danger of trusting attributions from high-performing models and suggests that a closer examination of fingerprint correlation structure and better decoy selection may help reduce misattributions.
[ { "created": "Thu, 2 Jul 2020 23:23:47 GMT", "version": "v1" }, { "created": "Wed, 8 Jul 2020 22:34:00 GMT", "version": "v2" } ]
2020-07-10
[ [ "Sundar", "Vikram", "", "1 and 2" ], [ "Colwell", "Lucy", "", "1 and 2" ] ]
Fingerprint-based models for protein-ligand binding have demonstrated outstanding success on benchmark datasets; however, these models may not learn the correct binding rules. To assess this concern, we use in silico datasets with known binding rules to develop a general framework for evaluating model attribution. This framework identifies fragments that a model considers necessary to achieve a particular score, sidestepping the need for a model to be differentiable. Our results confirm that high-performing models may not learn the correct binding rule, and suggest concrete steps that can remedy this situation. We show that adding fragment-matched inactive molecules (decoys) to the data reduces attribution false negatives, while attribution false positives largely arise from the background correlation structure of molecular data. Normalizing for these background correlations helps to reveal the true binding logic. Our work highlights the danger of trusting attributions from high-performing models and suggests that a closer examination of fingerprint correlation structure and better decoy selection may help reduce misattributions.
1612.09330
Jason Merritt
Jason Merritt and Seppe Kuehn
Frequency and amplitude dependent population dynamics during cycles of feast and famine
New sets of experiments; substantially revised model. Supplemental Material included in ancillary files
Phys. Rev. Lett. 121, 098101 (2018)
10.1103/PhysRevLett.121.098101
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In nature microbial populations are subject to fluctuating nutrient levels. Nutrient fluctuations are important for evolutionary and ecological dynamics in microbial communities since they impact growth rates, population sizes and biofilm formation. Here we use automated continuous-culture devices and high-throughput imaging to show that when populations of Escherichia coli are subjected to cycles of nutrient excess (feasts) and scarcity (famine) their abundance dynamics during famines depend on the frequency and amplitude of feasts. We show that frequency and amplitude dependent dynamics in planktonic populations arise from nutrient and history dependent rates of aggregation and dispersal. A phenomenological model recapitulates our experimental observations. Our results show that the statistical properties of environmental fluctuations have substantial impacts on spatial structure in bacterial populations driving large changes in abundance dynamics.
[ { "created": "Thu, 29 Dec 2016 22:05:43 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2018 05:43:16 GMT", "version": "v2" } ]
2018-09-12
[ [ "Merritt", "Jason", "" ], [ "Kuehn", "Seppe", "" ] ]
In nature microbial populations are subject to fluctuating nutrient levels. Nutrient fluctuations are important for evolutionary and ecological dynamics in microbial communities since they impact growth rates, population sizes and biofilm formation. Here we use automated continuous-culture devices and high-throughput imaging to show that when populations of Escherichia coli are subjected to cycles of nutrient excess (feasts) and scarcity (famine) their abundance dynamics during famines depend on the frequency and amplitude of feasts. We show that frequency and amplitude dependent dynamics in planktonic populations arise from nutrient and history dependent rates of aggregation and dispersal. A phenomenological model recapitulates our experimental observations. Our results show that the statistical properties of environmental fluctuations have substantial impacts on spatial structure in bacterial populations driving large changes in abundance dynamics.
2306.05555
Tung D. Nguyen
Tung D. Nguyen and Yixiang Wu and Tingting Tang and Amy Veprauskas and Ying Zhou and Behzad Djafari Rouhani and Zhisheng Shuai
Impact of resource distributions on the competition of species in stream environment
32 pages
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our earlier work in \cite{nguyen2022population} shows that concentrating the resources on the upstream end tends to maximize the total biomass in a metapopulation model for a stream species. In this paper, we continue this research direction by further considering a Lotka-Volterra competition patch model for two stream species. We show that the species whose resource allocation maximizes the total biomass has a competitive advantage.
[ { "created": "Thu, 8 Jun 2023 20:54:31 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2023 18:25:12 GMT", "version": "v2" } ]
2023-07-31
[ [ "Nguyen", "Tung D.", "" ], [ "Wu", "Yixiang", "" ], [ "Tang", "Tingting", "" ], [ "Veprauskas", "Amy", "" ], [ "Zhou", "Ying", "" ], [ "Rouhani", "Behzad Djafari", "" ], [ "Shuai", "Zhisheng", "" ] ]
Our earlier work in \cite{nguyen2022population} shows that concentrating the resources on the upstream end tends to maximize the total biomass in a metapopulation model for a stream species. In this paper, we continue this research direction by further considering a Lotka-Volterra competition patch model for two stream species. We show that the species whose resource allocation maximizes the total biomass has a competitive advantage.
1803.01639
Tuomas Rajala
T. Rajala and S. Olhede and D.J. Murrell
When do we have the power to detect biological interactions in spatial point patterns?
Main text 18 pages on 12pt font, 4 figures. Appendix 7 pages
null
null
null
q-bio.PE stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the relative importance of environmental factors, biotic interactions and stochasticity in assembling and maintaining species-rich communities remains a major challenge in ecology. In plant communities, interactions between individuals of different species are expected to leave a spatial signature in the form of positive or negative spatial correlations over distances relating to the spatial scale of interaction. Most studies using spatial point process tools have found relatively little evidence for interactions between pairs of species. More interactions tend to be detected in communities with fewer species. However, there is currently no understanding of how the power to detect spatial interactions may change with sample size, or the scale and intensity of interactions. We use a simple 2-species model where the scale and intensity of interactions are controlled to simulate point pattern data. In combination with an approximation to the variance of the spatial summary statistics that we sample, we investigate the power of current spatial point pattern methods to correctly reject the null model of bivariate species independence. We show that the power to detect interactions is positively related to the abundances of the species tested, and the intensity and scale of interactions. Increasing imbalance in abundances has a negative effect on the power to detect interactions. At population sizes typically found in currently available datasets for species-rich plant communities we find only a very low power to detect interactions. Differences in power may explain the increased frequency of interactions in communities with fewer species. Furthermore, the community-wide frequency of detected interactions is very sensitive to a minimum abundance criterion for including species in the analyses.
[ { "created": "Mon, 5 Mar 2018 12:50:19 GMT", "version": "v1" }, { "created": "Sat, 17 Mar 2018 10:35:54 GMT", "version": "v2" } ]
2018-03-20
[ [ "Rajala", "T.", "" ], [ "Olhede", "S.", "" ], [ "Murrell", "D. J.", "" ] ]
Determining the relative importance of environmental factors, biotic interactions and stochasticity in assembling and maintaining species-rich communities remains a major challenge in ecology. In plant communities, interactions between individuals of different species are expected to leave a spatial signature in the form of positive or negative spatial correlations over distances relating to the spatial scale of interaction. Most studies using spatial point process tools have found relatively little evidence for interactions between pairs of species. More interactions tend to be detected in communities with fewer species. However, there is currently no understanding of how the power to detect spatial interactions may change with sample size, or the scale and intensity of interactions. We use a simple 2-species model where the scale and intensity of interactions are controlled to simulate point pattern data. In combination with an approximation to the variance of the spatial summary statistics that we sample, we investigate the power of current spatial point pattern methods to correctly reject the null model of bivariate species independence. We show that the power to detect interactions is positively related to the abundances of the species tested, and the intensity and scale of interactions. Increasing imbalance in abundances has a negative effect on the power to detect interactions. At population sizes typically found in currently available datasets for species-rich plant communities we find only a very low power to detect interactions. Differences in power may explain the increased frequency of interactions in communities with fewer species. Furthermore, the community-wide frequency of detected interactions is very sensitive to a minimum abundance criterion for including species in the analyses.
1611.05741
Shivakumar Jolad
Murali Krishna Enduri and Shivakumar Jolad
Estimation of reproduction number and non stationary spectral analysis of Dengue epidemic
15 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we analyze post-monsoon Dengue outbreaks by studying the transient and long-term dynamics of Dengue incidence and its environmental correlates in Ahmedabad city in western India from 2005-2012. We calculate the reproduction number $R_p$ using the growth rate of post-monsoon Dengue outbreaks and biological parameters such as host and vector incubation periods and vector mortality rate; its uncertainties are estimated through Monte Carlo simulations by sampling parameters from their respective probability distributions. The reduction in female Aedes mosquito density required for effective prevention of Dengue outbreaks is also calculated. The non-stationary pattern of Dengue incidence and its climatic correlates, rainfall and temperature, is analyzed through wavelet-based methods. We find that the mean time lag between the peaks of monsoon and Dengue is 9 weeks. Monsoon and Dengue cases are phase-locked from 2008-2012 in the 16-32 week band. The duration of the post-monsoon outbreak has been increasing every year, especially after 2008, even though the intensity and duration of the monsoon have been decreasing. Temperature and Dengue incidence show correlations in the same band, but the phase lock is not stationary.
[ { "created": "Wed, 16 Nov 2016 18:00:26 GMT", "version": "v1" } ]
2016-11-18
[ [ "Enduri", "Murali Krishna", "" ], [ "Jolad", "Shivakumar", "" ] ]
In this work we analyze post-monsoon Dengue outbreaks by studying the transient and long-term dynamics of Dengue incidence and its environmental correlates in Ahmedabad city in western India from 2005-2012. We calculate the reproduction number $R_p$ using the growth rate of post-monsoon Dengue outbreaks and biological parameters such as host and vector incubation periods and vector mortality rate; its uncertainties are estimated through Monte Carlo simulations by sampling parameters from their respective probability distributions. The reduction in female Aedes mosquito density required for effective prevention of Dengue outbreaks is also calculated. The non-stationary pattern of Dengue incidence and its climatic correlates, rainfall and temperature, is analyzed through wavelet-based methods. We find that the mean time lag between the peaks of monsoon and Dengue is 9 weeks. Monsoon and Dengue cases are phase-locked from 2008-2012 in the 16-32 week band. The duration of the post-monsoon outbreak has been increasing every year, especially after 2008, even though the intensity and duration of the monsoon have been decreasing. Temperature and Dengue incidence show correlations in the same band, but the phase lock is not stationary.
2301.02149
Wilfred Ndifon
Abdoelnaser M Degoot and Wilfred Ndifon
Stochastics of DNA Quantification
49 pages, 4 figures
null
null
null
q-bio.QM math.CO stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
A common approach to quantifying DNA involves repeated cycles of DNA amplification. This approach, employed by the polymerase chain reaction (PCR), produces outputs that are corrupted by amplification noise, making it challenging to accurately back-calculate the amount of input DNA. Standard mathematical solutions to this back-calculation problem do not take adequate account of such noise and are error-prone. Here, we develop a parsimonious mathematical model of the stochastic mapping of input DNA onto experimental outputs that accounts, in a natural way, for amplification noise. We use the model to derive the probability density of the quantification cycle, a frequently reported experimental output, which can be fit to data to estimate input DNA. Strikingly, the model predicts that a sample with only one input DNA molecule has a $<$4% chance of testing positive, which is $>$25-fold lower than assumed by a standard method of interpreting PCR data. We provide formulae for calculating both the limit of detection and the limit of quantification, two important operating characteristics of DNA quantification methods that are frequently assessed by using ad-hoc mathematical techniques. Our results provide a mathematical foundation for the rigorous analysis of DNA quantification.
[ { "created": "Thu, 5 Jan 2023 16:49:43 GMT", "version": "v1" } ]
2023-01-06
[ [ "Degoot", "Abdoelnaser M", "" ], [ "Ndifon", "Wilfred", "" ] ]
A common approach to quantifying DNA involves repeated cycles of DNA amplification. This approach, employed by the polymerase chain reaction (PCR), produces outputs that are corrupted by amplification noise, making it challenging to accurately back-calculate the amount of input DNA. Standard mathematical solutions to this back-calculation problem do not take adequate account of such noise and are error-prone. Here, we develop a parsimonious mathematical model of the stochastic mapping of input DNA onto experimental outputs that accounts, in a natural way, for amplification noise. We use the model to derive the probability density of the quantification cycle, a frequently reported experimental output, which can be fit to data to estimate input DNA. Strikingly, the model predicts that a sample with only one input DNA molecule has a $<$4% chance of testing positive, which is $>$25-fold lower than assumed by a standard method of interpreting PCR data. We provide formulae for calculating both the limit of detection and the limit of quantification, two important operating characteristics of DNA quantification methods that are frequently assessed by using ad-hoc mathematical techniques. Our results provide a mathematical foundation for the rigorous analysis of DNA quantification.
0707.2011
Tidjani Negadi
Tidjani Negadi
The genetic code multiplet structure, in one number
9 pages
Symmetry: Culture and Science, Volume 18, Numbers 2-3, pages 149-160 (2007)
null
null
q-bio.OT
null
The standard genetic code multiplet structure as well as the correct degeneracies, class by class, are all extracted from the (unique) number 23!, the order of the permutation group of 23 objects.
[ { "created": "Fri, 13 Jul 2007 13:58:01 GMT", "version": "v1" } ]
2008-09-01
[ [ "Negadi", "Tidjani", "" ] ]
The standard genetic code multiplet structure as well as the correct degeneracies, class by class, are all extracted from the (unique) number 23!, the order of the permutation group of 23 objects.
q-bio/0506005
Ana Carpio
A. Carpio
Asymptotic construction of pulses in the Hodgkin Huxley model for myelinated nerves
to appear in Phys. Rev. E
null
10.1103/PhysRevE.72.011905
null
q-bio.NC q-bio.QM
null
A quantitative description of pulses and wave trains in the spatially discrete Hodgkin-Huxley model for myelinated nerves is given. Predictions of the shape and speed of the waves and the thresholds for propagation failure are obtained. Our asymptotic predictions agree quite well with numerical solutions of the model and describe wave patterns generated by repeated firing at a boundary.
[ { "created": "Tue, 7 Jun 2005 13:53:06 GMT", "version": "v1" } ]
2009-11-11
[ [ "Carpio", "A.", "" ] ]
A quantitative description of pulses and wave trains in the spatially discrete Hodgkin-Huxley model for myelinated nerves is given. Predictions of the shape and speed of the waves and the thresholds for propagation failure are obtained. Our asymptotic predictions agree quite well with numerical solutions of the model and describe wave patterns generated by repeated firing at a boundary.
1909.12138
Noemi Castelletti
Noemi Castelletti, Maria Vittoria Barbarossa
A mathematical view on head lice infestations
null
null
null
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Commonly known as head lice, Pediculus humanus capitis are human ectoparasites which cause infestations in children worldwide. Understanding the life cycle of head lice is an important step in knowing how to treat lice infestations, as the parasite behavior depends considerably on its age and gender. In this work we propose a mathematical model for head lice population dynamics in hosts who may or may not be quarantined and treated. Considering a lice population structured by age and gender, we formulate the model as a system of hyperbolic PDEs, which can be reduced to compartmental systems of delay or ordinary differential equations. Besides studying fundamental properties of the model, such as existence, uniqueness and nonnegativity of solutions, we show the existence of (in certain cases multiple) equilibria at which the infestation persists on the host's head. Aiming to assess the performance of treatments against head lice infestations, by means of computer experiments and numerical simulations we investigate four possible treatment strategies. Our main results can be summarized as follows: (i) early detection is crucial for quick and efficient eradication of lice infestations; (ii) dimeticone-based products applied every 4 days effectively remove lice in at most three applications even in case of severe infestations and (iii) minimization of the reinfection risk, e.g. by means of synchronized treatments in families/classrooms, is recommended.
[ { "created": "Thu, 26 Sep 2019 14:20:10 GMT", "version": "v1" }, { "created": "Fri, 27 Sep 2019 12:46:30 GMT", "version": "v2" } ]
2019-09-30
[ [ "Castelletti", "Noemi", "" ], [ "Barbarossa", "Maria Vittoria", "" ] ]
Commonly known as head lice, Pediculus humanus capitis are human ectoparasites which cause infestations in children worldwide. Understanding the life cycle of head lice is an important step in knowing how to treat lice infestations, as the parasite behavior depends considerably on its age and gender. In this work we propose a mathematical model for head lice population dynamics in hosts who may or may not be quarantined and treated. Considering a lice population structured by age and gender, we formulate the model as a system of hyperbolic PDEs, which can be reduced to compartmental systems of delay or ordinary differential equations. Besides studying fundamental properties of the model, such as existence, uniqueness and nonnegativity of solutions, we show the existence of (in certain cases multiple) equilibria at which the infestation persists on the host's head. Aiming to assess the performance of treatments against head lice infestations, by means of computer experiments and numerical simulations we investigate four possible treatment strategies. Our main results can be summarized as follows: (i) early detection is crucial for quick and efficient eradication of lice infestations; (ii) dimeticone-based products applied every 4 days effectively remove lice in at most three applications even in case of severe infestations and (iii) minimization of the reinfection risk, e.g. by means of synchronized treatments in families/classrooms, is recommended.
1509.06810
Thorsten Pr\"ustel
Thorsten Pr\"ustel and Martin Meier-Schellersheim
Exact propagation without analytical solutions
14 pages, 3 figures
null
null
null
q-bio.QM
http://creativecommons.org/publicdomain/zero/1.0/
We present a simulation algorithm that accurately propagates a molecule pair using large time steps without the need to invoke the full exact analytical solutions of the Smoluchowski diffusion equation. Because the proposed method only uses uniform and Gaussian random numbers, it allows for position updates that are two to three orders of magnitude faster than those of a corresponding scheme based on full solutions, while maintaining the same degree of accuracy. Neither simplifying nor ad hoc assumptions that are foreign to the underlying Smoluchowski theory are employed; instead, the algorithm faithfully incorporates the individual elements of the theoretical model. The method is flexible and applicable in 1, 2 and 3 dimensions, suggesting that it may find broad usage in various stochastic simulation algorithms. We demonstrate the algorithm for the case of a non-reactive, irreversible and reversible reacting molecule pair.
[ { "created": "Tue, 22 Sep 2015 23:28:16 GMT", "version": "v1" } ]
2015-09-24
[ [ "Prüstel", "Thorsten", "" ], [ "Meier-Schellersheim", "Martin", "" ] ]
We present a simulation algorithm that accurately propagates a molecule pair using large time steps without the need to invoke the full exact analytical solutions of the Smoluchowski diffusion equation. Because the proposed method only uses uniform and Gaussian random numbers, it allows for position updates that are two to three orders of magnitude faster than those of a corresponding scheme based on full solutions, while maintaining the same degree of accuracy. Neither simplifying nor ad hoc assumptions that are foreign to the underlying Smoluchowski theory are employed; instead, the algorithm faithfully incorporates the individual elements of the theoretical model. The method is flexible and applicable in 1, 2 and 3 dimensions, suggesting that it may find broad usage in various stochastic simulation algorithms. We demonstrate the algorithm for the case of a non-reactive, irreversible and reversible reacting molecule pair.
1411.2847
Benjamin Amor
B. Amor, S. N. Yaliraki, R. Woscholski, and M. Barahona
Uncovering allosteric pathways in caspase-1 with Markov transient analysis and multiscale community detection
14 pages, 8 figures
Mol Biosyst. 2014 Aug;10(8):2247-58
10.1039/c4mb00088a
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Allosteric regulation at distant sites is central to many cellular processes. In particular, allosteric sites in proteins are a major target to increase the range and selectivity of new drugs, and there is a need for methods capable of identifying intra-molecular signalling pathways leading to allosteric effects. Here, we use an atomistic graph-theoretical approach that exploits Markov transients to extract such pathways and exemplify our results in an important allosteric protein, caspase-1. Firstly, we use Markov Stability community detection to perform a multiscale analysis of the structure of caspase-1 which reveals that the active conformation has a weaker, less compartmentalised large-scale structure as compared to the inactive conformation, resulting in greater intra-protein coherence and signal propagation. We also carry out a full computational point mutagenesis and identify that only a few residues are critical to such structural coherence. Secondly, we characterise explicitly the transients of random walks originating at the active site and predict the location of a known allosteric site in this protein, quantifying the contribution of individual bonds to the communication pathway between the active and allosteric sites. Several of the bonds we find have been shown experimentally to be functionally critical, but we also predict a number of as yet unidentified bonds which may contribute to the pathway. Our approach offers a computationally inexpensive method for the identification of allosteric sites and communication pathways in proteins using a fully atomistic description.
[ { "created": "Tue, 11 Nov 2014 15:14:35 GMT", "version": "v1" } ]
2014-11-12
[ [ "Amor", "B.", "" ], [ "Yaliraki", "S. N.", "" ], [ "Woscholski", "R.", "" ], [ "Barahona", "M.", "" ] ]
Allosteric regulation at distant sites is central to many cellular processes. In particular, allosteric sites in proteins are a major target to increase the range and selectivity of new drugs, and there is a need for methods capable of identifying intra-molecular signalling pathways leading to allosteric effects. Here, we use an atomistic graph-theoretical approach that exploits Markov transients to extract such pathways and exemplify our results in an important allosteric protein, caspase-1. Firstly, we use Markov Stability community detection to perform a multiscale analysis of the structure of caspase-1 which reveals that the active conformation has a weaker, less compartmentalised large-scale structure as compared to the inactive conformation, resulting in greater intra-protein coherence and signal propagation. We also carry out a full computational point mutagenesis and identify that only a few residues are critical to such structural coherence. Secondly, we characterise explicitly the transients of random walks originating at the active site and predict the location of a known allosteric site in this protein, quantifying the contribution of individual bonds to the communication pathway between the active and allosteric sites. Several of the bonds we find have been shown experimentally to be functionally critical, but we also predict a number of as yet unidentified bonds which may contribute to the pathway. Our approach offers a computationally inexpensive method for the identification of allosteric sites and communication pathways in proteins using a fully atomistic description.
2205.07045
Yuhao Huang
Yuhao Huang, Corey Keller
How can I investigate causal brain networks with iEEG?
Forthcoming chapter in "Intracranial EEG for Cognitive Neuroscience"
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While many human imaging methodologies probe the structural and functional connectivity of the brain, techniques to investigate cortical networks in a causal and directional manner are critical but limited. The use of iEEG enables several approaches to directly characterize brain regions that are functionally connected and in some cases also establish directionality of these connections. In this chapter we focus on the basis, method and application of the cortico-cortical evoked potential (CCEP), whereby electrical pulses applied to one set of intracranial electrodes yield an electrically-induced brain response at local and remote regions. In this chapter, CCEPs are first contextualized within common brain connectivity methods used to define cortical networks and how CCEP adds unique information. Second, the practical and analytical considerations when using CCEP are discussed. Third, we review the neurophysiology underlying CCEPs and the applications of CCEPs including exploring functional and pathological brain networks and probing brain plasticity. Finally, we end with a discussion of limitations, caveats, and directions to improve CCEP utilization in the future.
[ { "created": "Sat, 14 May 2022 12:01:35 GMT", "version": "v1" } ]
2022-05-17
[ [ "Huang", "Yuhao", "" ], [ "Keller", "Corey", "" ] ]
While many human imaging methodologies probe the structural and functional connectivity of the brain, techniques to investigate cortical networks in a causal and directional manner are critical but limited. The use of iEEG enables several approaches to directly characterize brain regions that are functionally connected and in some cases also establish directionality of these connections. In this chapter we focus on the basis, method and application of the cortico-cortical evoked potential (CCEP), whereby electrical pulses applied to one set of intracranial electrodes yield an electrically-induced brain response at local and remote regions. In this chapter, CCEPs are first contextualized within common brain connectivity methods used to define cortical networks and how CCEP adds unique information. Second, the practical and analytical considerations when using CCEP are discussed. Third, we review the neurophysiology underlying CCEPs and the applications of CCEPs including exploring functional and pathological brain networks and probing brain plasticity. Finally, we end with a discussion of limitations, caveats, and directions to improve CCEP utilization in the future.
0710.3258
Steven Kelk
Jaroslaw Byrka, Pawel Gawrychowski, Katharina T. Huber, Steven Kelk
Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks
A new version with heavily optimized derandomization running time, and a very fast triplet-consistency checking algorithm as a subroutine
null
null
null
q-bio.PE
null
This article concerns the following question arising in computational evolutionary biology. For a given subclass of phylogenetic networks, what is the maximum value of 0 <= p <= 1 such that for every input set T of rooted triplets, there exists some network N(T) from the subclass such that at least p|T| of the triplets are consistent with N(T)? Here we prove that the set containing all triplets (the full triplet set) in some sense defines p, and moreover that any network N achieving fraction p' for the full triplet set can be converted in polynomial time into an isomorphic network N'(T) achieving >= p' for an arbitrary triplet set T. We demonstrate the power of this result for the field of phylogenetics by giving worst-case optimal algorithms for level-1 phylogenetic networks (a much-studied extension of phylogenetic trees), improving considerably upon the 5/12 fraction obtained recently by Jansson, Nguyen and Sung. For level-2 phylogenetic networks we show that p >= 0.61. We note that all the results in this article also apply to weighted triplet sets.
[ { "created": "Wed, 17 Oct 2007 10:17:17 GMT", "version": "v1" }, { "created": "Thu, 15 Nov 2007 13:33:28 GMT", "version": "v2" }, { "created": "Wed, 13 Feb 2008 10:30:19 GMT", "version": "v3" } ]
2008-02-13
[ [ "Byrka", "Jaroslaw", "" ], [ "Gawrychowski", "Pawel", "" ], [ "Huber", "Katharina T.", "" ], [ "Kelk", "Steven", "" ] ]
This article concerns the following question arising in computational evolutionary biology. For a given subclass of phylogenetic networks, what is the maximum value of 0 <= p <= 1 such that for every input set T of rooted triplets, there exists some network N(T) from the subclass such that at least p|T| of the triplets are consistent with N(T)? Here we prove that the set containing all triplets (the full triplet set) in some sense defines p, and moreover that any network N achieving fraction p' for the full triplet set can be converted in polynomial time into an isomorphic network N'(T) achieving >= p' for an arbitrary triplet set T. We demonstrate the power of this result for the field of phylogenetics by giving worst-case optimal algorithms for level-1 phylogenetic networks (a much-studied extension of phylogenetic trees), improving considerably upon the 5/12 fraction obtained recently by Jansson, Nguyen and Sung. For level-2 phylogenetic networks we show that p >= 0.61. We note that all the results in this article also apply to weighted triplet sets.
2004.01248
Samuel Heroy
Samuel Heroy
Metropolitan-scale COVID-19 outbreaks: how similar are they?
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we use US county-level COVID-19 case data from January 21-March 25, 2020 to study the exponential behavior of case growth at the metropolitan scale. In particular, we assume that all localized outbreaks are in an early stage (either undergoing exponential growth in the number of cases, or are effectively contained) and compare the explanatory performance of different simple exponential and linear growth models for different metropolitan areas. While we find no relationship between city size and exponential growth rate (directly related to $R_0$, which denotes the average number of cases an infected individual infects), we do find that larger cities seem to begin exponential spreading earlier and are thus in a more advanced stage of the pandemic at the time of submission. We also use more recent data to compute prediction errors given our models, and find that in many cities, exponential growth models trained on data before March 26 are poor predictors for case numbers in this more recent period (March 26-30), likely indicating a reduction in the number of new cases facilitated through social distancing.
[ { "created": "Thu, 2 Apr 2020 20:25:56 GMT", "version": "v1" }, { "created": "Mon, 6 Apr 2020 09:23:41 GMT", "version": "v2" } ]
2020-04-07
[ [ "Heroy", "Samuel", "" ] ]
In this study, we use US county-level COVID-19 case data from January 21-March 25, 2020 to study the exponential behavior of case growth at the metropolitan scale. In particular, we assume that all localized outbreaks are in an early stage (either undergoing exponential growth in the number of cases, or are effectively contained) and compare the explanatory performance of different simple exponential and linear growth models for different metropolitan areas. While we find no relationship between city size and exponential growth rate (directly related to $R_0$, which denotes the average number of cases an infected individual infects), we do find that larger cities seem to begin exponential spreading earlier and are thus in a more advanced stage of the pandemic at the time of submission. We also use more recent data to compute prediction errors given our models, and find that in many cities, exponential growth models trained on data before March 26 are poor predictors for case numbers in this more recent period (March 26-30), likely indicating a reduction in the number of new cases facilitated through social distancing.
1304.3266
Jes\'us Requena Carri\'on
Jes\'us Requena-Carri\'on, Ferney A. Beltr\'an-Molina, Antonio G. Marques
Relating the spectrum of cardiac signals to the spatiotemporal dynamics of cardiac sources
28 pages, 3 figures
null
null
null
q-bio.QM physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of studies use the spectrum of cardiac signals for analyzing the spatiotemporal dynamics of complex cardiac arrhythmias. However, the relationship between the spectrum of cardiac signals and the spatiotemporal dynamics of the underlying cardiac sources remains unclear to date. In this paper, we derive a mathematical expression relating the spectrum of cardiac signals to the spatiotemporal dynamics of cardiac sources and the measurement characteristics of the lead systems. Then, by using analytical methods and computer simulations we analyze the spectrum of cardiac signals measured by idealized lead systems during correlated and uncorrelated spatiotemporal dynamics. Our results show that lead systems can have distorting effects on the spectral envelope of cardiac signals, which depend on the spatial resolution of the lead systems and on the degree of spatiotemporal correlation of the underlying cardiac sources. In addition to this, our results indicate that the spectral features that do not depend on the spectral envelope, such as the dominant frequency, behave robustly against different choices of lead systems.
[ { "created": "Thu, 11 Apr 2013 11:58:50 GMT", "version": "v1" } ]
2013-04-12
[ [ "Requena-Carrión", "Jesús", "" ], [ "Beltrán-Molina", "Ferney A.", "" ], [ "Marques", "Antonio G.", "" ] ]
An increasing number of studies use the spectrum of cardiac signals for analyzing the spatiotemporal dynamics of complex cardiac arrhythmias. However, the relationship between the spectrum of cardiac signals and the spatiotemporal dynamics of the underlying cardiac sources remains unclear to date. In this paper, we derive a mathematical expression relating the spectrum of cardiac signals to the spatiotemporal dynamics of cardiac sources and the measurement characteristics of the lead systems. Then, by using analytical methods and computer simulations we analyze the spectrum of cardiac signals measured by idealized lead systems during correlated and uncorrelated spatiotemporal dynamics. Our results show that lead systems can have distorting effects on the spectral envelope of cardiac signals, which depend on the spatial resolution of the lead systems and on the degree of spatiotemporal correlation of the underlying cardiac sources. In addition to this, our results indicate that the spectral features that do not depend on the spectral envelope, such as the dominant frequency, behave robustly against different choices of lead systems.
2309.04741
Roozbeh H. Pazuki
Roozbeh H. Pazuki, Robert G. Endres
Upper limits on the robustness of Turing models and other multiparametric dynamical systems
null
null
null
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional linear stability analysis based on matrix diagonalization is a computationally intensive $O(n^3)$ process for $n$-dimensional systems of differential equations, posing substantial limitations for the exploration of Turing systems of pattern formation where an additional wave-number parameter needs to be investigated. In this study, we introduce an efficient $O(n)$ technique that leverages Gershgorin's theorem to determine upper limits on regions of parameter space and the wave number beyond which Turing instabilities cannot occur. This method offers a streamlined avenue for exploring the phase diagrams of other complex multiparametric models, such as those found in systems biology.
[ { "created": "Sat, 9 Sep 2023 09:59:33 GMT", "version": "v1" } ]
2023-09-12
[ [ "Pazuki", "Roozbeh H.", "" ], [ "Endres", "Robert G.", "" ] ]
Traditional linear stability analysis based on matrix diagonalization is a computationally intensive $O(n^3)$ process for $n$-dimensional systems of differential equations, posing substantial limitations for the exploration of Turing systems of pattern formation where an additional wave-number parameter needs to be investigated. In this study, we introduce an efficient $O(n)$ technique that leverages Gershgorin's theorem to determine upper limits on regions of parameter space and the wave number beyond which Turing instabilities cannot occur. This method offers a streamlined avenue for exploring the phase diagrams of other complex multiparametric models, such as those found in systems biology.
1508.04174
Hyunju Kim
Hyunju Kim and Paul Davies and Sara Imari Walker
New Scaling Relation for Information Transfer in Biological Networks
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Living systems are often described utilizing informational analogies. An important open question is whether information is merely a useful conceptual metaphor, or intrinsic to the operation of biological systems. To address this question, we provide a rigorous case study of the informational architecture of two representative biological networks: the Boolean network model for the cell-cycle regulatory network of the fission yeast S. pombe and that of the budding yeast S. cerevisiae. We compare our results for these biological networks to the same analysis performed on ensembles of two different types of random networks. We show that both biological networks share features in common that are not shared by either ensemble. In particular, the biological networks in our study, on average, process more information than the random networks. They also exhibit a scaling relation in information transferred between nodes that distinguishes them from either ensemble, even when compared to the ensemble of random networks that shares important topological properties, such as a scale-free structure. We show that the most biologically distinct regime of this scaling relation is associated with the dynamics and function of the biological networks. Information processing in biological networks is therefore interpreted as an emergent property of topology (causal structure) and dynamics (function). These results demonstrate quantitatively how the informational architecture of biologically evolved networks can distinguish them from other classes of network architecture that do not share the same informational properties.
[ { "created": "Mon, 17 Aug 2015 23:06:43 GMT", "version": "v1" } ]
2015-08-19
[ [ "Kim", "Hyunju", "" ], [ "Davies", "Paul", "" ], [ "Walker", "Sara Imari", "" ] ]
Living systems are often described utilizing informational analogies. An important open question is whether information is merely a useful conceptual metaphor, or intrinsic to the operation of biological systems. To address this question, we provide a rigorous case study of the informational architecture of two representative biological networks: the Boolean network model for the cell-cycle regulatory network of the fission yeast S. pombe and that of the budding yeast S. cerevisiae. We compare our results for these biological networks to the same analysis performed on ensembles of two different types of random networks. We show that both biological networks share features in common that are not shared by either ensemble. In particular, the biological networks in our study, on average, process more information than the random networks. They also exhibit a scaling relation in information transferred between nodes that distinguishes them from either ensemble, even when compared to the ensemble of random networks that shares important topological properties, such as a scale-free structure. We show that the most biologically distinct regime of this scaling relation is associated with the dynamics and function of the biological networks. Information processing in biological networks is therefore interpreted as an emergent property of topology (causal structure) and dynamics (function). These results demonstrate quantitatively how the informational architecture of biologically evolved networks can distinguish them from other classes of network architecture that do not share the same informational properties.
1705.02867
Alberto Sorrentino
Alberto Sorrentino and Michele Piana
Inverse Modeling for MEG/EEG data
15 pages, 1 figure
null
null
null
q-bio.QM math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide an overview of the state-of-the-art for mathematical methods that are used to reconstruct brain activity from neurophysiological data. After a brief introduction on the mathematics of the forward problem, we discuss standard and recently proposed regularization methods, as well as Monte Carlo techniques for Bayesian inference. We classify the inverse methods based on the underlying source model, and discuss advantages and disadvantages. Finally we describe an application to the pre-surgical evaluation of epileptic patients.
[ { "created": "Mon, 8 May 2017 13:37:23 GMT", "version": "v1" } ]
2017-05-09
[ [ "Sorrentino", "Alberto", "" ], [ "Piana", "Michele", "" ] ]
We provide an overview of the state-of-the-art for mathematical methods that are used to reconstruct brain activity from neurophysiological data. After a brief introduction on the mathematics of the forward problem, we discuss standard and recently proposed regularization methods, as well as Monte Carlo techniques for Bayesian inference. We classify the inverse methods based on the underlying source model, and discuss advantages and disadvantages. Finally we describe an application to the pre-surgical evaluation of epileptic patients.
1301.5527
David A. Kessler
Shlomit Weisman, David A. Kessler
Coexistence in an inhomogeneous environment
null
null
10.1371/journal.pone.0062699
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the two-dimensional extension of the model of Kessler and Sander of competition between two species identical except for dispersion rates. In this class of models, the spatial inhomogeneity of reproduction rates gives rise to an implicit cost of dispersal, due to the tendency to leave favorable locations. Then, as in the Hamilton-May model with its explicit dispersal cost, the tradeoff between dispersal cost and the beneficial role of dispersal in limiting fluctuations leads to an advantage of one dispersal rate over another, and the eventual extinction of the disadvantaged species. In two dimensions we find that while the competition leads to the elimination of one species at high and low population density, at intermediate densities the two species can coexist essentially indefinitely. This is a new phenomenon present neither in the one-dimensional form of the Kessler-Sander model nor in the totally connected Hamilton-May model, and it points to the importance of geometry in the question of dispersal.
[ { "created": "Wed, 23 Jan 2013 15:09:33 GMT", "version": "v1" } ]
2015-06-12
[ [ "Weisman", "Shlomit", "" ], [ "Kessler", "David A.", "" ] ]
We examine the two-dimensional extension of the model of Kessler and Sander of competition between two species identical except for dispersion rates. In this class of models, the spatial inhomogeneity of reproduction rates gives rise to an implicit cost of dispersal, due to the tendency to leave favorable locations. Then, as in the Hamilton-May model with its explicit dispersal cost, the tradeoff between dispersal cost and the beneficial role of dispersal in limiting fluctuations leads to an advantage of one dispersal rate over another, and the eventual extinction of the disadvantaged species. In two dimensions we find that while the competition leads to the elimination of one species at high and low population density, at intermediate densities the two species can coexist essentially indefinitely. This is a new phenomenon present neither in the one-dimensional form of the Kessler-Sander model nor in the totally connected Hamilton-May model, and it points to the importance of geometry in the question of dispersal.
2306.05984
Anyou Wang
Anyou Wang
Noncoding RNAs evolutionarily extend animal lifespan
13 pages and 4 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The mechanisms underlying lifespan evolution in organisms have long been mysterious. However, recent studies have demonstrated that organisms evolutionarily gain noncoding RNAs (ncRNAs) that carry profound endogenous functions in higher organisms, including lifespan. This study unveils ncRNAs as crucial drivers of animal lifespan evolution. Species in the animal kingdom evolutionarily increase the ncRNA length in their genomes, coinciding with a trimming of mitochondrial genome length. This leads to lower energy consumption and ultimately lifespan extension. Notably, during lifespan extension, species exhibit a gradual acquisition of long-life ncRNA motifs while concurrently losing short-life motifs. These longevity-associated ncRNA motifs, such as GGTGCG, are particularly active in key tissues, including the endometrium, ovary, testis, and cerebral cortex. The activation of ncRNAs in the ovary and endometrium offers insights into why women generally exhibit longer lifespans than men. This discovery reveals the pivotal role of ncRNAs in driving lifespan evolution and provides a fundamental foundation for the study of longevity and aging.
[ { "created": "Fri, 9 Jun 2023 15:52:45 GMT", "version": "v1" } ]
2023-06-12
[ [ "Wang", "Anyou", "" ] ]
The mechanisms underlying lifespan evolution in organisms have long been mysterious. However, recent studies have demonstrated that organisms evolutionarily gain noncoding RNAs (ncRNAs) that carry profound endogenous functions in higher organisms, including lifespan. This study unveils ncRNAs as crucial drivers of animal lifespan evolution. Species in the animal kingdom evolutionarily increase the ncRNA length in their genomes, coinciding with a trimming of mitochondrial genome length. This leads to lower energy consumption and ultimately lifespan extension. Notably, during lifespan extension, species exhibit a gradual acquisition of long-life ncRNA motifs while concurrently losing short-life motifs. These longevity-associated ncRNA motifs, such as GGTGCG, are particularly active in key tissues, including the endometrium, ovary, testis, and cerebral cortex. The activation of ncRNAs in the ovary and endometrium offers insights into why women generally exhibit longer lifespans than men. This discovery reveals the pivotal role of ncRNAs in driving lifespan evolution and provides a fundamental foundation for the study of longevity and aging.
0905.2843
Wolfgang Keil
Wolfgang Keil, Karl-Friedrich Schmidt, Siegrid Loewel, Matthias Kaschube
Reorganization of columnar architecture in the growing visual cortex
8+13 pages, 4+8 figures, paper + supplementary material
PNAS July 6, 2010 vol. 107 no. 27 12293-12298
10.1073/pnas.0913020107
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many cortical areas increase in size considerably during postnatal development, progressively displacing neuronal cell bodies from each other. At present, little is known about how cortical growth affects the development of neuronal circuits. Here, in acute and chronic experiments, we study the layout of ocular dominance (OD) columns in cat primary visual cortex (V1) during a period of substantial postnatal growth. We find that despite a considerable size increase of V1, the spacing between columns is largely preserved. In contrast, their spatial arrangement changes systematically over this period. While in young animals columns are more band-like, layouts become more isotropic in mature animals. We propose a novel mechanism of growth-induced reorganization that is based on the `zigzag instability', a dynamical instability observed in several inanimate pattern forming systems. We argue that this mechanism is inherent to a wide class of models for the activity-dependent formation of OD columns. Analyzing one member of this class, the Elastic Network model, we show that this mechanism can account for the preservation of column spacing and the specific mode of reorganization of OD columns that we observe. We conclude that neurons systematically shift their selectivities during normal development and that this reorganization is induced by the cortical expansion during growth. Our work suggests that cortical circuits remain plastic for an extended period in development in order to facilitate the modification of neuronal circuits to adjust for cortical growth.
[ { "created": "Mon, 18 May 2009 11:51:34 GMT", "version": "v1" }, { "created": "Mon, 11 Apr 2011 15:04:59 GMT", "version": "v2" } ]
2011-04-12
[ [ "Keil", "Wolfgang", "" ], [ "Schmidt", "Karl-Friedrich", "" ], [ "Loewel", "Siegrid", "" ], [ "Kaschube", "Matthias", "" ] ]
Many cortical areas increase in size considerably during postnatal development, progressively displacing neuronal cell bodies from each other. At present, little is known about how cortical growth affects the development of neuronal circuits. Here, in acute and chronic experiments, we study the layout of ocular dominance (OD) columns in cat primary visual cortex (V1) during a period of substantial postnatal growth. We find that despite a considerable size increase of V1, the spacing between columns is largely preserved. In contrast, their spatial arrangement changes systematically over this period. While in young animals columns are more band-like, layouts become more isotropic in mature animals. We propose a novel mechanism of growth-induced reorganization that is based on the `zigzag instability', a dynamical instability observed in several inanimate pattern forming systems. We argue that this mechanism is inherent to a wide class of models for the activity-dependent formation of OD columns. Analyzing one member of this class, the Elastic Network model, we show that this mechanism can account for the preservation of column spacing and the specific mode of reorganization of OD columns that we observe. We conclude that neurons systematically shift their selectivities during normal development and that this reorganization is induced by the cortical expansion during growth. Our work suggests that cortical circuits remain plastic for an extended period in development in order to facilitate the modification of neuronal circuits to adjust for cortical growth.
1303.6993
Alfred Bennun
Alfred Bennun
The coupling of thermodynamics with the organizational water-protein intra-dynamics driven by the H-bonds dissipative potential of cluster water
9 pages, 2 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The red cell-Hb-CSF system functions as a sensor, adapting its response to Hb heterotropic equilibria. At the lungs, O2 and Mg2+, each increasing the affinity for the other, stabilize the relaxed (R) form [(O2)4Hb(Mg)2].(H2O)R. At the tissue level, the inclusion of H+ and 2,3-DPG excludes O2 and Mg2+ to stabilize the tense (T) form 2,3-DPG-deoxyHb-(H2O)T. Both senses are integrated into a cycle, T into R and R into T, without involving a direct reversal. The dissipative potential of the water cluster (H2O)n interacts with the hydrophilic asymmetries of Hb to restrict the randomness of the kinetic sense, implicated in a single peak of activation energy (Ea). The hydration shells could split an enhanced Ea into several peaks, sequentially activating transition states. Hence, changes in dipole state, sliding, pKa, n-H-bonds, etc., could become concatenated for vectoriality. (H2O)n, by the loss of H-bonds, couples to the hydration turnover of proteins and ions, resulting in an incomplete water cluster (H2O)n* with a lower n. (H2O)n* becomes a carrier of heat/entropy into the cerebrospinal fluid (CSF), which has to be replaced 3.7 times per day. OxyHb formation involves a sliding-down of the alpha vs. beta chains, shifting alpha1 and alpha2 Pro 44 to allow the entrance of a fully hydrated [Mg.(H2O)6](H2O)12-14(2+) (or Zn2+) into the hydrophilic beta2-alpha1 and beta1-alpha2 interfaces. The oxyHb pKa of 6.4 leads to H+ dissociation, increasing the negative charge of R-groups. This, at beta2-alpha1, sequences two tetradentate chelates, first an Mg2+ bonding with beta2 His 92 and a second Mg2+ with alpha1 His 87, to cooperatively release hindrance. The interconversion of oxy- to deoxyHb, pKa = 8, leads the amphoteric imidazole to become positively charged; the proximal histidines return to the hindrance position, releasing the incompletely hydrated Mg.(H2O)inc(2+) and O2 into the CSF.
[ { "created": "Wed, 27 Mar 2013 22:14:45 GMT", "version": "v1" } ]
2013-03-29
[ [ "Bennun", "Alfred", "" ] ]
The red cell-Hb-CSF system functions as a sensor, adapting its response to Hb heterotropic equilibria. At the lungs, O2 and Mg2+, each increasing the affinity for the other, stabilize the relaxed (R) form [(O2)4Hb(Mg)2].(H2O)R. At the tissue level, the inclusion of H+ and 2,3-DPG excludes O2 and Mg2+ to stabilize the tense (T) form 2,3-DPG-deoxyHb-(H2O)T. Both senses are integrated into a cycle, T into R and R into T, without involving a direct reversal. The dissipative potential of the water cluster (H2O)n interacts with the hydrophilic asymmetries of Hb to restrict the randomness of the kinetic sense, implicated in a single peak of activation energy (Ea). The hydration shells could split an enhanced Ea into several peaks, sequentially activating transition states. Hence, changes in dipole state, sliding, pKa, n-H-bonds, etc., could become concatenated for vectoriality. (H2O)n, by the loss of H-bonds, couples to the hydration turnover of proteins and ions, resulting in an incomplete water cluster (H2O)n* with a lower n. (H2O)n* becomes a carrier of heat/entropy into the cerebrospinal fluid (CSF), which has to be replaced 3.7 times per day. OxyHb formation involves a sliding-down of the alpha vs. beta chains, shifting alpha1 and alpha2 Pro 44 to allow the entrance of a fully hydrated [Mg.(H2O)6](H2O)12-14(2+) (or Zn2+) into the hydrophilic beta2-alpha1 and beta1-alpha2 interfaces. The oxyHb pKa of 6.4 leads to H+ dissociation, increasing the negative charge of R-groups. This, at beta2-alpha1, sequences two tetradentate chelates, first an Mg2+ bonding with beta2 His 92 and a second Mg2+ with alpha1 His 87, to cooperatively release hindrance. The interconversion of oxy- to deoxyHb, pKa = 8, leads the amphoteric imidazole to become positively charged; the proximal histidines return to the hindrance position, releasing the incompletely hydrated Mg.(H2O)inc(2+) and O2 into the CSF.
2111.14421
Ivana Pajic-Lijakovic Dr.
Ivana Pajic-Lijakovic and Milan Milivojevic
Marangoni effect and cell spreading
7837 words, 3 figures, 1 table, 64 references
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Cells are very sensitive to shear stress (SS). However, undesirable SS is generated during physiological processes such as collective cell migration (CCM) and influences biological processes such as morphogenesis, wound healing and cancer invasion. Despite extensive research devoted to studying the stress generation caused by CCM, we still do not fully understand the main cause of SS generation. An attempt is made here to offer some answers to these questions by considering the rearrangement of cell monolayers. SS generation is a consequence of natural and forced convection. While forced convection depends on cell speed, natural convection is induced by the gradient of tissue surface tension. This phenomenon is known as the Marangoni effect. The gradient of tissue surface tension induces directed cell spreading from regions of lower tissue surface tension to regions of higher tissue surface tension. This directed cell migration is described by the Marangoni flux. The phenomenon has been recognized during the rearrangement of (1) epithelial cell monolayers and (2) mixed cell monolayers made of epithelial and mesenchymal cells. A consequence of the Marangoni effect is the intensive spreading of cancer cells through an epithelium. In this work, a review of the existing literature on SS generation caused by CCM is given, along with an assortment of published experimental findings, in order to invite experimentalists to test the given theoretical considerations in multicellular systems.
[ { "created": "Mon, 29 Nov 2021 10:06:05 GMT", "version": "v1" } ]
2021-11-30
[ [ "Pajic-Lijakovic", "Ivana", "" ], [ "Milivojevic", "Milan", "" ] ]
Cells are very sensitive to shear stress (SS). However, undesirable SS is generated during physiological processes such as collective cell migration (CCM) and influences biological processes such as morphogenesis, wound healing and cancer invasion. Despite extensive research devoted to studying the stress generation caused by CCM, we still do not fully understand the main cause of SS generation. An attempt is made here to offer some answers to these questions by considering the rearrangement of cell monolayers. SS generation is a consequence of natural and forced convection. While forced convection depends on cell speed, natural convection is induced by the gradient of tissue surface tension. This phenomenon is known as the Marangoni effect. The gradient of tissue surface tension induces directed cell spreading from regions of lower tissue surface tension to regions of higher tissue surface tension. This directed cell migration is described by the Marangoni flux. The phenomenon has been recognized during the rearrangement of (1) epithelial cell monolayers and (2) mixed cell monolayers made of epithelial and mesenchymal cells. A consequence of the Marangoni effect is the intensive spreading of cancer cells through an epithelium. In this work, a review of the existing literature on SS generation caused by CCM is given, along with an assortment of published experimental findings, in order to invite experimentalists to test the given theoretical considerations in multicellular systems.
2108.08358
Sean Lawley
Elijah D Counterman, Sean D Lawley
Designing drug regimens that mitigate nonadherence
38 pages, 7 figures
null
null
null
q-bio.QM math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medication adherence is a well-known problem for pharmaceutical treatment of chronic diseases. Understanding how nonadherence affects treatment efficacy is made difficult by the ethics of clinical trials that force patients to skip doses of the medication being tested, the unpredictable timing of missed doses by actual patients, and the many competing variables that can either mitigate or magnify the deleterious effects of nonadherence, such as pharmacokinetic absorption and elimination rates, dosing intervals, dose sizes, adherence rates, etc. In this paper, we formulate and analyze a mathematical model of the drug concentration in an imperfectly adherent patient. Our model takes the form of the standard single compartment pharmacokinetic model with first order absorption and elimination, except that the patient takes medication only at a given proportion of the prescribed dosing times. Doses are missed randomly, and we use stochastic analysis to study the resulting random drug level in the body. We then use our mathematical results to propose principles for designing drug regimens that are robust to nonadherence. In particular, we quantify the resilience of extended release drugs to nonadherence, which is quite significant in some circumstances, and we show the benefit of taking a double dose following a missed dose if the drug absorption or elimination rate is slow compared to the dosing interval. We further use our results to compare some antiepileptic and antipsychotic drug regimens.
[ { "created": "Wed, 18 Aug 2021 19:27:31 GMT", "version": "v1" }, { "created": "Mon, 27 Dec 2021 22:20:31 GMT", "version": "v2" } ]
2021-12-30
[ [ "Counterman", "Elijah D", "" ], [ "Lawley", "Sean D", "" ] ]
Medication adherence is a well-known problem for pharmaceutical treatment of chronic diseases. Understanding how nonadherence affects treatment efficacy is made difficult by the ethics of clinical trials that force patients to skip doses of the medication being tested, the unpredictable timing of missed doses by actual patients, and the many competing variables that can either mitigate or magnify the deleterious effects of nonadherence, such as pharmacokinetic absorption and elimination rates, dosing intervals, dose sizes, adherence rates, etc. In this paper, we formulate and analyze a mathematical model of the drug concentration in an imperfectly adherent patient. Our model takes the form of the standard single compartment pharmacokinetic model with first order absorption and elimination, except that the patient takes medication only at a given proportion of the prescribed dosing times. Doses are missed randomly, and we use stochastic analysis to study the resulting random drug level in the body. We then use our mathematical results to propose principles for designing drug regimens that are robust to nonadherence. In particular, we quantify the resilience of extended release drugs to nonadherence, which is quite significant in some circumstances, and we show the benefit of taking a double dose following a missed dose if the drug absorption or elimination rate is slow compared to the dosing interval. We further use our results to compare some antiepileptic and antipsychotic drug regimens.
1308.1985
Hunter Fraser
Hunter B. Fraser
Cell-cycle regulated transcription associates with DNA replication timing in yeast and human
null
null
null
null
q-bio.GN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eukaryotic DNA replication follows a specific temporal program, with some genomic regions consistently replicating earlier than others, yet what determines this program is largely unknown. Highly transcribed regions have been observed to replicate in early S-phase in all plant and animal species studied to date, but this relationship is thought to be absent from both budding yeast and fission yeast. No association between cell-cycle regulated transcription and replication timing has been reported for any species. Here I show that in budding yeast, fission yeast, and human, the genes most highly transcribed during S-phase replicate early, whereas those repressed in S-phase replicate late. Transcription during other cell-cycle phases shows either the opposite correlation with replication timing, or no relation. The relationship is strongest near late-firing origins of replication, which is not consistent with a previously proposed model -- that replication timing may affect transcription -- and instead suggests a potential mechanism involving the recruitment of limiting replication initiation factors during S-phase. These results suggest that S-phase transcription may be an important determinant of DNA replication timing across eukaryotes, which may explain the well-established association between transcription and replication timing.
[ { "created": "Thu, 8 Aug 2013 21:43:33 GMT", "version": "v1" } ]
2013-08-12
[ [ "Fraser", "Hunter B.", "" ] ]
Eukaryotic DNA replication follows a specific temporal program, with some genomic regions consistently replicating earlier than others, yet what determines this program is largely unknown. Highly transcribed regions have been observed to replicate in early S-phase in all plant and animal species studied to date, but this relationship is thought to be absent from both budding yeast and fission yeast. No association between cell-cycle regulated transcription and replication timing has been reported for any species. Here I show that in budding yeast, fission yeast, and human, the genes most highly transcribed during S-phase replicate early, whereas those repressed in S-phase replicate late. Transcription during other cell-cycle phases shows either the opposite correlation with replication timing, or no relation. The relationship is strongest near late-firing origins of replication, which is not consistent with a previously proposed model -- that replication timing may affect transcription -- and instead suggests a potential mechanism involving the recruitment of limiting replication initiation factors during S-phase. These results suggest that S-phase transcription may be an important determinant of DNA replication timing across eukaryotes, which may explain the well-established association between transcription and replication timing.
2107.11770
Niklas Kolbe
Niklas Kolbe, Lorenz Hexemer, Lukas-Malte Bammert, Alexander Loewer, M\'aria Luk\'a\v{c}ov\'a-Medvi\v{d}ov\'a and Stefan Legewie
Data-based stochastic modeling reveals sources of activity bursts in single-cell TGF-$\beta$ signaling
27 pages, 7 figures, 5 tables
null
10.1371/journal.pcbi.1010266
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells sense their surroundings by employing intracellular signaling pathways that transmit hormonal signals from the cell membrane to the nucleus. TGF-$\beta$/SMAD signaling encodes various cell fates, controls tissue homeostasis and is deregulated in diseases such as cancer. The pathway shows strong heterogeneity at the single-cell level, but quantitative insights into mechanisms underlying fluctuations at various time scales are still missing, partly due to inefficiency in the calibration of stochastic models that mechanistically describe signaling processes. In this work we analyze single-cell TGF-$\beta$/SMAD signaling and show that it exhibits temporal stochastic bursts which are dose-dependent and whose number and magnitude correlate with cell migration. We propose a stochastic modeling approach to mechanistically describe these pathway fluctuations with high computational efficiency. Employing high-order numerical integration and fitting to burst statistics, we enable efficient quantitative parameter estimation and discriminate models that assume noise in different reactions at the receptor level. This modeling approach suggests that stochasticity in the internalization of TGF-$\beta$ receptors into endosomes plays a key role in the observed temporal bursting. Further, the model predicts the single-cell dynamics of TGF-$\beta$/SMAD signaling in untested conditions, e.g., it successfully reflects memory effects of signaling noise and cellular sensitivity towards repeated stimulation. Taken together, our computational framework based on burst analysis, noise modeling and a path computation scheme is a suitable tool for the data-based modeling of complex signaling pathways, capable of identifying the source of temporal noise.
[ { "created": "Sun, 25 Jul 2021 09:49:42 GMT", "version": "v1" }, { "created": "Tue, 25 Jan 2022 21:13:21 GMT", "version": "v2" } ]
2022-10-12
[ [ "Kolbe", "Niklas", "" ], [ "Hexemer", "Lorenz", "" ], [ "Bammert", "Lukas-Malte", "" ], [ "Loewer", "Alexander", "" ], [ "Lukáčová-Medviďová", "Mária", "" ], [ "Legewie", "Stefan", "" ] ]
Cells sense their surroundings by employing intracellular signaling pathways that transmit hormonal signals from the cell membrane to the nucleus. TGF-$\beta$/SMAD signaling encodes various cell fates, controls tissue homeostasis and is deregulated in diseases such as cancer. The pathway shows strong heterogeneity at the single-cell level, but quantitative insights into mechanisms underlying fluctuations at various time scales are still missing, partly due to inefficiency in the calibration of stochastic models that mechanistically describe signaling processes. In this work we analyze single-cell TGF-$\beta$/SMAD signaling and show that it exhibits temporal stochastic bursts which are dose-dependent and whose number and magnitude correlate with cell migration. We propose a stochastic modeling approach to mechanistically describe these pathway fluctuations with high computational efficiency. Employing high-order numerical integration and fitting to burst statistics, we enable efficient quantitative parameter estimation and discriminate models that assume noise in different reactions at the receptor level. This modeling approach suggests that stochasticity in the internalization of TGF-$\beta$ receptors into endosomes plays a key role in the observed temporal bursting. Further, the model predicts the single-cell dynamics of TGF-$\beta$/SMAD signaling in untested conditions, e.g., it successfully reflects memory effects of signaling noise and cellular sensitivity towards repeated stimulation. Taken together, our computational framework based on burst analysis, noise modeling and a path computation scheme is a suitable tool for the data-based modeling of complex signaling pathways, capable of identifying the source of temporal noise.
1807.11935
Danielle Bassett
Danielle S. Bassett, Perry Zurn, Joshua I. Gold
Network models in neuroscience
Under consideration as a book chapter in Cerebral Cortex 3.0, MIT Press
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From interacting cellular components to networks of neurons and neural systems, interconnected units comprise a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from the development of new mathematical tools that can relate a system's architecture to its dynamics and function. These tools, which are known collectively as network science, have been used with increasing success to build models of neural systems across spatial scales and species. Here we discuss the nature of network models in neuroscience. We begin with a review of model theory from a philosophical perspective to inform our view of networks as models of complex systems in general, and of the brain in particular. We then summarize the types of models that are frequently studied in network neuroscience along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. We then consider ways to validate these models, focusing on approaches that perturb a system to probe its function. We close with a description of important frontiers in the construction of network models and their relevance for understanding increasingly complex functions of neural systems.
[ { "created": "Tue, 31 Jul 2018 17:51:16 GMT", "version": "v1" } ]
2018-08-01
[ [ "Bassett", "Danielle S.", "" ], [ "Zurn", "Perry", "" ], [ "Gold", "Joshua I.", "" ] ]
From interacting cellular components to networks of neurons and neural systems, interconnected units comprise a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from the development of new mathematical tools that can relate a system's architecture to its dynamics and function. These tools, which are known collectively as network science, have been used with increasing success to build models of neural systems across spatial scales and species. Here we discuss the nature of network models in neuroscience. We begin with a review of model theory from a philosophical perspective to inform our view of networks as models of complex systems in general, and of the brain in particular. We then summarize the types of models that are frequently studied in network neuroscience along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. We then consider ways to validate these models, focusing on approaches that perturb a system to probe its function. We close with a description of important frontiers in the construction of network models and their relevance for understanding increasingly complex functions of neural systems.
2108.04240
Charalambos Chrysostomou
Charalambos Chrysostomou, Floris Alexandrou, Mihalis A. Nicolaou and Huseyin Seker
Classification of Influenza Hemagglutinin Protein Sequences using Convolutional Neural Networks
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
The Influenza virus can be considered one of the most severe viruses that can infect multiple species, with often fatal consequences to the hosts. The Hemagglutinin (HA) gene of the virus can be a target for antiviral drug development, realised through accurate identification of its sub-types and possibly of the targeted hosts. This paper focuses on accurately predicting whether an Influenza type A virus can infect specific hosts, more specifically Human, Avian and Swine hosts, using only the protein sequence of the HA gene. In more detail, we propose encoding the protein sequences into numerical signals using the Hydrophobicity Index and subsequently utilising a Convolutional Neural Network-based predictive model. The Influenza HA protein sequences used in the proposed work are obtained from the Influenza Research Database (IRD). Specifically, complete and unique HA protein sequences were used for avian, human and swine hosts. The data obtained for this work comprised 17999 human-host proteins, 17667 avian-host proteins and 9278 swine-host proteins. Given this set of collected proteins, the proposed method yields as much as 10% higher accuracy for an individual class (namely, Avian) and 5% higher overall accuracy than in an earlier study. It is also observed that the accuracy for each class in this work is more balanced than that presented in the earlier study. As the results show, the proposed model can distinguish HA protein sequences with high accuracy whenever the virus under investigation can infect Human, Avian or Swine hosts.
[ { "created": "Mon, 9 Aug 2021 10:42:26 GMT", "version": "v1" } ]
2021-08-11
[ [ "Chrysostomou", "Charalambos", "" ], [ "Alexandrou", "Floris", "" ], [ "Nicolaou", "Mihalis A.", "" ], [ "Seker", "Huseyin", "" ] ]
The Influenza virus can be considered one of the most severe viruses that can infect multiple species, with often fatal consequences to the hosts. The Hemagglutinin (HA) gene of the virus can be a target for antiviral drug development, realised through accurate identification of its sub-types and possibly of the targeted hosts. This paper focuses on accurately predicting whether an Influenza type A virus can infect specific hosts, more specifically Human, Avian and Swine hosts, using only the protein sequence of the HA gene. In more detail, we propose encoding the protein sequences into numerical signals using the Hydrophobicity Index and subsequently utilising a Convolutional Neural Network-based predictive model. The Influenza HA protein sequences used in the proposed work are obtained from the Influenza Research Database (IRD). Specifically, complete and unique HA protein sequences were used for avian, human and swine hosts. The data obtained for this work comprised 17999 human-host proteins, 17667 avian-host proteins and 9278 swine-host proteins. Given this set of collected proteins, the proposed method yields as much as 10% higher accuracy for an individual class (namely, Avian) and 5% higher overall accuracy than in an earlier study. It is also observed that the accuracy for each class in this work is more balanced than that presented in the earlier study. As the results show, the proposed model can distinguish HA protein sequences with high accuracy whenever the virus under investigation can infect Human, Avian or Swine hosts.
2309.03928
Mandana Mirbakhsh
Mandana Mirbakhsh, Zahra Zahed
Enhancing Phosphorus Uptake in Sugarcane: A Critical Evaluation of Humic Acid and Phosphorus Fertilizers Effectiveness
null
null
null
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Our research, conducted in an area characterized by alkaline, lime-abundant soils, investigated the potential of phosphorus fertilizer and humic acid to enhance phosphorus absorption in sugarcane crops. The results indicated that the application of phosphorus fertilizer significantly increased the total and bioavailable phosphorus in the rhizospheric soil, despite an observed decrease in phosphatase enzyme activity. An important observation was the considerable increase in active carbon, a crucial soil health indicator, under the humic acid treatments. The findings also demonstrated an enhancement of phosphorus absorption by sugarcane due to the synergistic application of humic acid and phosphorus fertilizer at both harvest periods. Interestingly, humic acid treatments applied through immersion were found to be more effective than soil applications, implying a greater impact on root absorption processes. The findings underline the potential of integrating humic acid into sugarcane cultivation for better phosphorus absorption. Our study offers valuable insights for improved soil management strategies and could potentially pave the way towards more sustainable agricultural practices. However, we also recommend further investigation into alternative methods of humic acid application and its usage at different stages of plant growth. Such exploration could provide a comprehensive understanding of the potential benefits and most effective utilization of humic acid in agriculture, especially in regions with soil characteristics similar to those of West Azarbaijan, Iran.
[ { "created": "Thu, 7 Sep 2023 12:57:09 GMT", "version": "v1" } ]
2023-09-11
[ [ "Mirbakhsh", "Mandana", "" ], [ "Zahed", "Zahra", "" ] ]
Our research, conducted in an area characterized by alkaline, lime-abundant soils, investigated the potential of phosphorus fertilizer and humic acid to enhance phosphorus absorption in sugarcane crops. The results indicated that the application of phosphorus fertilizer significantly increased the total and bioavailable phosphorus in the rhizospheric soil, despite an observed decrease in phosphatase enzyme activity. An important observation was the considerable increase in active carbon, a crucial soil health indicator, under the humic acid treatments. The findings also demonstrated an enhancement of phosphorus absorption by sugarcane due to the synergistic application of humic acid and phosphorus fertilizer at both harvest periods. Interestingly, humic acid treatments applied through immersion were found to be more effective than soil applications, implying a greater impact on root absorption processes. The findings underline the potential of integrating humic acid into sugarcane cultivation for better phosphorus absorption. Our study offers valuable insights for improved soil management strategies and could potentially pave the way towards more sustainable agricultural practices. However, we also recommend further investigation into alternative methods of humic acid application and its usage at different stages of plant growth. Such exploration could provide a comprehensive understanding of the potential benefits and most effective utilization of humic acid in agriculture, especially in regions with soil characteristics similar to those of West Azarbaijan, Iran.
1512.00695
Marta Tyran-Kaminska
Michael C. Mackey, Marta Tyran-Kaminska
The Limiting Dynamics of a Bistable Molecular Switch With and Without Noise
27 pages, 10 figures
Journal of Mathematical Biology 73 (2016), 367-395
10.1007/s00285-015-0949-1
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the dynamics of a population of organisms containing two mutually inhibitory gene regulatory networks, that can result in a bistable switch-like behaviour. We completely characterize their local and global dynamics in the absence of any noise, and then go on to consider the effects of either noise coming from bursting (transcription or translation), or Gaussian noise in molecular degradation rates when there is a dominant slow variable in the system. We show analytically how the steady state distribution in the population can range from a single unimodal distribution through a bimodal distribution and give the explicit analytic form for the invariant stationary density which is globally asymptotically stable. Rather remarkably, the behaviour of the stationary density with respect to the parameters characterizing the molecular behaviour of the bistable switch is qualitatively identical in the presence of noise coming from bursting as well as in the presence of Gaussian noise in the degradation rate. This implies that one cannot distinguish between either the dominant source or nature of noise based on the stationary molecular distribution in a population of cells. We finally show that the switch model with bursting but two dominant slow genes has an asymptotically stable stationary density.
[ { "created": "Wed, 2 Dec 2015 14:01:54 GMT", "version": "v1" } ]
2016-10-07
[ [ "Mackey", "Michael C.", "" ], [ "Tyran-Kaminska", "Marta", "" ] ]
We consider the dynamics of a population of organisms containing two mutually inhibitory gene regulatory networks, that can result in a bistable switch-like behaviour. We completely characterize their local and global dynamics in the absence of any noise, and then go on to consider the effects of either noise coming from bursting (transcription or translation), or Gaussian noise in molecular degradation rates when there is a dominant slow variable in the system. We show analytically how the steady state distribution in the population can range from a single unimodal distribution through a bimodal distribution and give the explicit analytic form for the invariant stationary density which is globally asymptotically stable. Rather remarkably, the behaviour of the stationary density with respect to the parameters characterizing the molecular behaviour of the bistable switch is qualitatively identical in the presence of noise coming from bursting as well as in the presence of Gaussian noise in the degradation rate. This implies that one cannot distinguish between either the dominant source or nature of noise based on the stationary molecular distribution in a population of cells. We finally show that the switch model with bursting but two dominant slow genes has an asymptotically stable stationary density.
q-bio/0611035
Sylvain Hanneton
Sylvain Hanneton (NPSM), Claudia Munoz (NPSM)
Action for perception : influence of handedness in visuo-auditory sensory substitution
null
Enactive 2006 : Enaction & Complexity, France (20/11/2006) 73-74
null
null
q-bio.NC
null
In this preliminary study we address the question of the influence of handedness on the localization of targets perceived through a visuo-auditory substitution device. Participants hold the device in one hand in order to explore the environment and to perceive the target. They point to the estimated location of the target with the other hand. These preliminary results support our hypothesis that pointing is more accurate when the device is held in the dominant right hand. Dexterity has to be attributed to the active part of the perceptive system. This study obviously has to be completed, but it shows how the concept of enaction is important and how it can be experimentally addressed in the field of sensory substitution.
[ { "created": "Thu, 9 Nov 2006 13:27:54 GMT", "version": "v1" } ]
2019-04-22
[ [ "Hanneton", "Sylvain", "", "NPSM" ], [ "Munoz", "Claudia", "", "NPSM" ] ]
In this preliminary study we address the question of the influence of handedness on the localization of targets perceived through a visuo-auditory substitution device. Participants hold the device in one hand in order to explore the environment and to perceive the target. They point to the estimated location of the target with the other hand. These preliminary results support our hypothesis that pointing is more accurate when the device is held in the dominant right hand. Dexterity has to be attributed to the active part of the perceptive system. This study obviously has to be completed, but it shows how the concept of enaction is important and how it can be experimentally addressed in the field of sensory substitution.
q-bio/0602027
Charles Epstein
Charles L. Epstein
Anderson Localization, Non-linearity and Stable Genetic Diversity
25 pages, 8 Figures
null
10.1007/s10955-006-9149-0
null
q-bio.PE cond-mat.stat-mech math.DS math.SP nlin.AO q-bio.QM
null
In many models of genotypic evolution, the vector of genotype populations satisfies a system of linear ordinary differential equations. This system of equations models a competition between differential replication rates (fitness) and mutation. Mutation operates as a generalized diffusion process on genotype space. In the large time asymptotics, the replication term tends to produce a single dominant quasispecies, unless the mutation rate is too high, in which case the populations of different genotypes become de-localized. We introduce a more macroscopic picture of genotypic evolution wherein a random replication term in the linear model displays features analogous to Anderson localization. When coupled with non-linearities that limit the population of any given genotype, we obtain a model whose large time asymptotics display stable genotypic diversity.
[ { "created": "Tue, 28 Feb 2006 14:52:25 GMT", "version": "v1" } ]
2009-11-13
[ [ "Epstein", "Charles L.", "" ] ]
In many models of genotypic evolution, the vector of genotype populations satisfies a system of linear ordinary differential equations. This system of equations models a competition between differential replication rates (fitness) and mutation. Mutation operates as a generalized diffusion process on genotype space. In the large time asymptotics, the replication term tends to produce a single dominant quasispecies, unless the mutation rate is too high, in which case the populations of different genotypes become de-localized. We introduce a more macroscopic picture of genotypic evolution wherein a random replication term in the linear model displays features analogous to Anderson localization. When coupled with non-linearities that limit the population of any given genotype, we obtain a model whose large time asymptotics display stable genotypic diversity.
q-bio/0508020
Michael Deem
Jun Sun, David J. Earl, and Michael W. Deem
Glassy Dynamics in the Adaptive Immune Response Prevents Autoimmune Disease
5 pages, 3 figures, to appear in Phys. Rev. Lett
null
10.1103/PhysRevLett.95.148104
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
null
The immune system normally protects the human host against death by infection. However, when an immune response is mistakenly directed at self antigens, autoimmune disease can occur. We describe a model of protein evolution to simulate the dynamics of the adaptive immune response to antigens. Computer simulations of the dynamics of antibody evolution show that different evolutionary mechanisms, namely gene segment swapping and point mutation, lead to different evolved antibody binding affinities. Although a combination of gene segment swapping and point mutation can yield a greater affinity to a specific antigen than point mutation alone, the antibodies so evolved are highly cross-reactive and would cause autoimmune disease, and this is not the chosen dynamics of the immune system. We suggest that in the immune system a balance has evolved between binding affinity and specificity in the mechanism for searching the amino acid sequence space of antibodies.
[ { "created": "Wed, 17 Aug 2005 08:08:26 GMT", "version": "v1" } ]
2009-11-11
[ [ "Sun", "Jun", "" ], [ "Earl", "David J.", "" ], [ "Deem", "Michael W.", "" ] ]
The immune system normally protects the human host against death by infection. However, when an immune response is mistakenly directed at self antigens, autoimmune disease can occur. We describe a model of protein evolution to simulate the dynamics of the adaptive immune response to antigens. Computer simulations of the dynamics of antibody evolution show that different evolutionary mechanisms, namely gene segment swapping and point mutation, lead to different evolved antibody binding affinities. Although a combination of gene segment swapping and point mutation can yield a greater affinity to a specific antigen than point mutation alone, the antibodies so evolved are highly cross-reactive and would cause autoimmune disease, and this is not the chosen dynamics of the immune system. We suggest that in the immune system a balance has evolved between binding affinity and specificity in the mechanism for searching the amino acid sequence space of antibodies.
2404.05865
Qi Dai
Qi Dai, Ryan Davis, Houlin Hong, and Ying Gu
Effectiveness of Self-Assessment Software to Evaluate Preclinical Operative Procedures
null
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: To assess the effectiveness of digital scanning techniques for self-assessment of preparations and restorations in preclinical dental education when compared to traditional faculty grading. Methods: Forty-four separate Class I (#30-O) and Class II (#30-MO) preparations, and Class II amalgam restorations (#31-MO), were generated under a preclinical assessment setting. Calibrated faculty evaluated the preparations and restorations using a standard rubric from the preclinical operative class. The same teeth were scanned using the Planmeca PlanScan intraoral scanner and graded using the Romexis E4D Compare Software. Each tooth was compared against a corresponding gold-standard tooth with tolerance intervals ranging from 100{\mu}m to 500{\mu}m. These scores were compared to traditional faculty grades using a linear mixed model to estimate the mean differences at a 95% confidence interval for each tolerance level. Results: The average Compare Software grade of the Class I preparation at 300{\mu}m tolerance had the smallest mean difference of 1.64 points on a 100-point scale compared to the average faculty grade. The Class II preparation at 400{\mu}m tolerance had the smallest mean difference of 0.41 points. Finally, the Class II restoration at 300{\mu}m tolerance had the smallest mean difference of 0.20 points. Conclusion: In this study, tolerance levels that best correlated the Compare Software grades with the faculty grades were determined for three operative procedures: Class I preparation, Class II preparation, and Class II restoration. The Compare Software can be used as a useful adjunct method for more objective grading. It can also be used by students as a valuable self-assessment tool.
[ { "created": "Mon, 8 Apr 2024 20:54:58 GMT", "version": "v1" } ]
2024-04-10
[ [ "Dai", "Qi", "" ], [ "Davis", "Ryan", "" ], [ "Hong", "Houlin", "" ], [ "Gu", "Ying", "" ] ]
Objectives: To assess the effectiveness of digital scanning techniques for self-assessment of preparations and restorations in preclinical dental education when compared to traditional faculty grading. Methods: Forty-four separate Class I (#30-O) and Class II (#30-MO) preparations, and Class II amalgam restorations (#31-MO), were generated under a preclinical assessment setting. Calibrated faculty evaluated the preparations and restorations using a standard rubric from the preclinical operative class. The same teeth were scanned using the Planmeca PlanScan intraoral scanner and graded using the Romexis E4D Compare Software. Each tooth was compared against a corresponding gold-standard tooth with tolerance intervals ranging from 100{\mu}m to 500{\mu}m. These scores were compared to traditional faculty grades using a linear mixed model to estimate the mean differences at a 95% confidence interval for each tolerance level. Results: The average Compare Software grade of the Class I preparation at 300{\mu}m tolerance had the smallest mean difference of 1.64 points on a 100-point scale compared to the average faculty grade. The Class II preparation at 400{\mu}m tolerance had the smallest mean difference of 0.41 points. Finally, the Class II restoration at 300{\mu}m tolerance had the smallest mean difference of 0.20 points. Conclusion: In this study, tolerance levels that best correlated the Compare Software grades with the faculty grades were determined for three operative procedures: Class I preparation, Class II preparation, and Class II restoration. The Compare Software can be used as a useful adjunct method for more objective grading. It can also be used by students as a valuable self-assessment tool.
2010.10129
Thomas Sturm
Niclas Kruff, Christoph L\"uders, Ovidiu Radulescu, Thomas Sturm, Sebastian Walcher
Algorithmic Reduction of Biological Networks With Multiple Time Scales
null
null
null
null
q-bio.MN cs.LO cs.SC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a symbolic algorithmic approach that allows computing invariant manifolds and corresponding reduced systems for differential equations modeling biological networks, which comprise chemical reaction networks for cellular biochemistry and compartmental models for pharmacology, epidemiology, and ecology. Multiple time scales of a given network are obtained by scaling, based on tropical geometry. Our reduction is mathematically justified within a singular perturbation setting. The existence of invariant manifolds is subject to hyperbolicity conditions, for which we propose an algorithmic test based on Hurwitz criteria. We finally obtain a sequence of nested invariant manifolds and respective reduced systems on those manifolds. Our theoretical results are generally accompanied by rigorous algorithmic descriptions suitable for direct implementation based on existing off-the-shelf software systems, specifically symbolic computation libraries and Satisfiability Modulo Theories solvers. We present computational examples taken from the well-known BioModels database using our own prototypical implementations.
[ { "created": "Tue, 20 Oct 2020 08:48:09 GMT", "version": "v1" }, { "created": "Tue, 2 Mar 2021 17:47:20 GMT", "version": "v2" } ]
2021-03-03
[ [ "Kruff", "Niclas", "" ], [ "Lüders", "Christoph", "" ], [ "Radulescu", "Ovidiu", "" ], [ "Sturm", "Thomas", "" ], [ "Walcher", "Sebastian", "" ] ]
We present a symbolic algorithmic approach that allows computing invariant manifolds and corresponding reduced systems for differential equations modeling biological networks, which comprise chemical reaction networks for cellular biochemistry and compartmental models for pharmacology, epidemiology, and ecology. Multiple time scales of a given network are obtained by scaling, based on tropical geometry. Our reduction is mathematically justified within a singular perturbation setting. The existence of invariant manifolds is subject to hyperbolicity conditions, for which we propose an algorithmic test based on Hurwitz criteria. We finally obtain a sequence of nested invariant manifolds and respective reduced systems on those manifolds. Our theoretical results are generally accompanied by rigorous algorithmic descriptions suitable for direct implementation based on existing off-the-shelf software systems, specifically symbolic computation libraries and Satisfiability Modulo Theories solvers. We present computational examples taken from the well-known BioModels database using our own prototypical implementations.
1911.07711
Cole Butler
Cole Butler, Jinjin Cheng, Lorena Correa, Maria Preciado-Rivas, Andres Rios-Gutierrez, Cesar Montalvo, and Christopher Kribs
Comparison of screening for methicillin-resistant Staphylococcus aureus (MRSA) at hospital admission and discharge
20 pages, 7 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methicillin-resistant Staphylococcus aureus (MRSA) is a significant contributor to the growing concern of antibiotic-resistant bacteria, especially given its stubborn persistence in hospitals and other health care facility settings. In combination with this characteristic of S. aureus (colloquially referred to as staph), MRSA presents an additional barrier to treatment and is now believed to have colonized two of every 100 people worldwide. According to the CDC, MRSA prevalence sits as high as 25-50% in countries such as the United Kingdom and the United States. Given the resistant nature of staph as well as its capability of evolving to compensate for antibiotic treatment, controlling MRSA levels is more a matter of precautionary and defensive measures. This study examines the method of "search and isolation," which seeks to isolate MRSA-positive patients in a hospital so as to decrease infection potential. Although this strategy is straightforward, the question of just whom to screen is of practical importance. We compare screening at admission to screening at discharge. To do this, we develop a mathematical model and use simulations to determine MRSA endemic levels in a hospital with either control measure implemented. We found that screening at discharge was the more effective method in controlling MRSA endemicity, but at the cost of a greater number of isolated patients.
[ { "created": "Mon, 18 Nov 2019 15:40:59 GMT", "version": "v1" } ]
2019-11-19
[ [ "Butler", "Cole", "" ], [ "Cheng", "Jinjin", "" ], [ "Correa", "Lorena", "" ], [ "Preciado-Rivas", "Maria", "" ], [ "Rios-Gutierrez", "Andres", "" ], [ "Montalvo", "Cesar", "" ], [ "Kribs", "Christopher", "" ] ]
Methicillin-resistant Staphylococcus aureus (MRSA) is a significant contributor to the growing concern of antibiotic-resistant bacteria, especially given its stubborn persistence in hospitals and other health care facility settings. In combination with this characteristic of S. aureus (colloquially referred to as staph), MRSA presents an additional barrier to treatment and is now believed to have colonized two of every 100 people worldwide. According to the CDC, MRSA prevalence sits as high as 25-50% in countries such as the United Kingdom and the United States. Given the resistant nature of staph as well as its capability of evolving to compensate for antibiotic treatment, controlling MRSA levels is more a matter of precautionary and defensive measures. This study examines the method of "search and isolation," which seeks to isolate MRSA-positive patients in a hospital so as to decrease infection potential. Although this strategy is straightforward, the question of just whom to screen is of practical importance. We compare screening at admission to screening at discharge. To do this, we develop a mathematical model and use simulations to determine MRSA endemic levels in a hospital with either control measure implemented. We found that screening at discharge was the more effective method in controlling MRSA endemicity, but at the cost of a greater number of isolated patients.
1602.01889
Furong Huang
Furong Huang, Animashree Anandkumar, Christian Borgs, Jennifer Chayes, Ernest Fraenkel, Michael Hawrylycz, Ed Lein, Alessandro Ingrosso, Srinivas Turaga
Discovering Neuronal Cell Types and Their Gene Expression Profiles Using a Spatial Point Process Mixture Model
null
null
null
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cataloging the neuronal cell types that comprise circuitry of individual brain regions is a major goal of modern neuroscience and the BRAIN initiative. Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons based on their gene expression profiles. While the single-cell techniques are extremely powerful and hold great promise, they are currently still labor intensive, have a high cost per cell, and, most importantly, do not provide information on spatial distribution of cell types in specific regions of the brain. We propose a complementary approach that uses computational methods to infer the cell types and their gene expression profiles through analysis of brain-wide single-cell resolution in situ hybridization (ISH) imagery contained in the Allen Brain Atlas (ABA). We measure the spatial distribution of neurons labeled in the ISH image for each gene and model it as a spatial point process mixture, whose mixture weights are given by the cell types which express that gene. By fitting a point process mixture model jointly to the ISH images, we infer both the spatial point process distribution for each cell type and their gene expression profile. We validate our predictions of cell type-specific gene expression profiles using single cell RNA sequencing data, recently published for the mouse somatosensory cortex. Jointly with the gene expression profiles, cell features such as cell size, orientation, intensity and local density level are inferred per cell type.
[ { "created": "Thu, 4 Feb 2016 23:52:18 GMT", "version": "v1" }, { "created": "Sat, 11 Jun 2016 01:45:12 GMT", "version": "v2" } ]
2016-06-14
[ [ "Huang", "Furong", "" ], [ "Anandkumar", "Animashree", "" ], [ "Borgs", "Christian", "" ], [ "Chayes", "Jennifer", "" ], [ "Fraenkel", "Ernest", "" ], [ "Hawrylycz", "Michael", "" ], [ "Lein", "Ed", "" ], [ "Ingrosso", "Alessandro", "" ], [ "Turaga", "Srinivas", "" ] ]
Cataloging the neuronal cell types that comprise circuitry of individual brain regions is a major goal of modern neuroscience and the BRAIN initiative. Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons based on their gene expression profiles. While the single-cell techniques are extremely powerful and hold great promise, they are currently still labor intensive, have a high cost per cell, and, most importantly, do not provide information on spatial distribution of cell types in specific regions of the brain. We propose a complementary approach that uses computational methods to infer the cell types and their gene expression profiles through analysis of brain-wide single-cell resolution in situ hybridization (ISH) imagery contained in the Allen Brain Atlas (ABA). We measure the spatial distribution of neurons labeled in the ISH image for each gene and model it as a spatial point process mixture, whose mixture weights are given by the cell types which express that gene. By fitting a point process mixture model jointly to the ISH images, we infer both the spatial point process distribution for each cell type and their gene expression profile. We validate our predictions of cell type-specific gene expression profiles using single cell RNA sequencing data, recently published for the mouse somatosensory cortex. Jointly with the gene expression profiles, cell features such as cell size, orientation, intensity and local density level are inferred per cell type.
0904.1063
Miguel Navascues
M. Navascu\'es (BIO), B. C. Emerson (BIO)
Chloroplast microsatellites: measures of genetic diversity and the effect of homoplasy
null
Molecular Ecology 14, 5 (2005) 1333-41
10.1111/j.1365-294X.2005.02504.x
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Chloroplast microsatellites have been widely used in population genetic studies of conifers in recent years. However, their haplotype configurations suggest that they could have high levels of homoplasy, thus limiting the power of these molecular markers. A coalescent-based computer simulation was used to explore the influence of homoplasy on measures of genetic diversity based on chloroplast microsatellites. The conditions of the simulation were defined to fit isolated populations originating from the colonization of one single haplotype into an area left available after a glacial retreat. Simulated data were compared with empirical data available from the literature for a species of Pinus that has expanded north after the Last Glacial Maximum. In the evaluation of genetic diversity, homoplasy was found to have little influence on Nei's unbiased haplotype diversity (H(E)) while Goldstein's genetic distance estimates (D2sh) were much more affected. The effect of the number of chloroplast microsatellite loci for evaluation of genetic diversity is also discussed.
[ { "created": "Tue, 7 Apr 2009 06:46:24 GMT", "version": "v1" } ]
2009-04-08
[ [ "Navascués", "M.", "", "BIO" ], [ "Emerson", "B. C.", "", "BIO" ] ]
Chloroplast microsatellites have been widely used in population genetic studies of conifers in recent years. However, their haplotype configurations suggest that they could have high levels of homoplasy, thus limiting the power of these molecular markers. A coalescent-based computer simulation was used to explore the influence of homoplasy on measures of genetic diversity based on chloroplast microsatellites. The conditions of the simulation were defined to fit isolated populations originating from the colonization of one single haplotype into an area left available after a glacial retreat. Simulated data were compared with empirical data available from the literature for a species of Pinus that has expanded north after the Last Glacial Maximum. In the evaluation of genetic diversity, homoplasy was found to have little influence on Nei's unbiased haplotype diversity (H(E)) while Goldstein's genetic distance estimates (D2sh) were much more affected. The effect of the number of chloroplast microsatellite loci for evaluation of genetic diversity is also discussed.
2403.03688
Valerie Carabetta
Liya Popova and Valerie J. Carabetta
The use of next-generation sequencing in personalized medicine
37 pages, 3 figures, 1 table
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The revolutionary progress in the development of next-generation sequencing (NGS) technologies has made it possible to deliver accurate genomic information in a timely manner. Over the past several years, NGS has transformed biomedical and clinical research and found its application in the field of personalized medicine. Here we discuss the rise of personalized medicine and the history of NGS. We discuss current applications and uses of NGS in medicine, including infectious diseases, oncology, genomic medicine, and dermatology. We provide a brief discussion of selected studies where NGS was used to respond to a wide variety of questions in biomedical research and clinical medicine. Finally, we discuss the challenges of implementing NGS into routine clinical use.
[ { "created": "Wed, 6 Mar 2024 13:14:25 GMT", "version": "v1" } ]
2024-03-07
[ [ "Popova", "Liya", "" ], [ "Carabetta", "Valerie J.", "" ] ]
The revolutionary progress in the development of next-generation sequencing (NGS) technologies has made it possible to deliver accurate genomic information in a timely manner. Over the past several years, NGS has transformed biomedical and clinical research and found its application in the field of personalized medicine. Here we discuss the rise of personalized medicine and the history of NGS. We discuss current applications and uses of NGS in medicine, including infectious diseases, oncology, genomic medicine, and dermatology. We provide a brief discussion of selected studies where NGS was used to respond to a wide variety of questions in biomedical research and clinical medicine. Finally, we discuss the challenges of implementing NGS into routine clinical use.
1906.10819
David Murrugarra
Devin Willmott and David Murrugarra and Qiang Ye
Improving RNA secondary structure prediction via state inference with deep recurrent neural networks
15 pages, 3 figures, and 5 tables
Computational and Mathematical Biophysics, 8(1), 36-50, 2020
10.1515/cmb-2020-0002
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of determining which nucleotides of an RNA sequence are paired or unpaired in the secondary structure of an RNA, which we call RNA state inference, can be studied by different machine learning techniques. Successful state inference of RNA sequences can be used to generate auxiliary information for data-directed RNA secondary structure prediction. Bidirectional long short-term memory (LSTM) neural networks have emerged as a powerful tool that can model global nonlinear sequence dependencies and have achieved state-of-the-art performances on many different classification problems. This paper presents a practical approach to RNA secondary structure inference centered around a deep learning method for state inference. State predictions from a deep bidirectional LSTM are used to generate synthetic SHAPE data that can be incorporated into RNA secondary structure prediction via the Nearest Neighbor Thermodynamic Model (NNTM). This method produces predicted secondary structures for a diverse test set of 16S ribosomal RNA that are, on average, 25 percentage points more accurate than undirected MFE structures. These improvements range from several percentage points for some sequences to nearly 50 percentage points for others. Accuracy is highly dependent on the success of our state inference method, and investigating the global features of our state predictions reveals that accuracy of both our state inference and structure inference methods are highly dependent on the similarity of the sequence to the dataset. This paper presents a deep learning state inference tool, trained and tested on 16S ribosomal RNA. Converting these state predictions into synthetic SHAPE data with which to direct NNTM can result in large improvements in secondary structure prediction accuracy, as shown on a test set of 16S rRNA.
[ { "created": "Wed, 26 Jun 2019 02:47:36 GMT", "version": "v1" }, { "created": "Sun, 23 Feb 2020 15:40:21 GMT", "version": "v2" } ]
2024-07-09
[ [ "Willmott", "Devin", "" ], [ "Murrugarra", "David", "" ], [ "Ye", "Qiang", "" ] ]
The problem of determining which nucleotides of an RNA sequence are paired or unpaired in the secondary structure of an RNA, which we call RNA state inference, can be studied by different machine learning techniques. Successful state inference of RNA sequences can be used to generate auxiliary information for data-directed RNA secondary structure prediction. Bidirectional long short-term memory (LSTM) neural networks have emerged as a powerful tool that can model global nonlinear sequence dependencies and have achieved state-of-the-art performances on many different classification problems. This paper presents a practical approach to RNA secondary structure inference centered around a deep learning method for state inference. State predictions from a deep bidirectional LSTM are used to generate synthetic SHAPE data that can be incorporated into RNA secondary structure prediction via the Nearest Neighbor Thermodynamic Model (NNTM). This method produces predicted secondary structures for a diverse test set of 16S ribosomal RNA that are, on average, 25 percentage points more accurate than undirected MFE structures. These improvements range from several percentage points for some sequences to nearly 50 percentage points for others. Accuracy is highly dependent on the success of our state inference method, and investigating the global features of our state predictions reveals that accuracy of both our state inference and structure inference methods are highly dependent on the similarity of the sequence to the dataset. This paper presents a deep learning state inference tool, trained and tested on 16S ribosomal RNA. Converting these state predictions into synthetic SHAPE data with which to direct NNTM can result in large improvements in secondary structure prediction accuracy, as shown on a test set of 16S rRNA.
2101.02557
Julia Ann Jose
Julia Ann Jose, Trae Waggoner, Sudarsan Manikandan
Continuous Glucose Monitoring Prediction
null
null
null
null
q-bio.OT cs.LG
http://creativecommons.org/licenses/by/4.0/
Diabetes is one of the deadliest diseases in the world and affects nearly 10 percent of the global adult population. Fortunately, powerful new technologies allow for a consistent and reliable treatment plan for people with diabetes. One major development is a system called continuous blood glucose monitoring (CGM). In this review, we look at three different continuous meal detection algorithms that were developed using given CGM data from patients with diabetes. From this analysis, an initial meal prediction algorithm was also developed utilizing these methods.
[ { "created": "Mon, 4 Jan 2021 21:32:20 GMT", "version": "v1" } ]
2021-01-08
[ [ "Jose", "Julia Ann", "" ], [ "Waggoner", "Trae", "" ], [ "Manikandan", "Sudarsan", "" ] ]
Diabetes is one of the deadliest diseases in the world and affects nearly 10 percent of the global adult population. Fortunately, powerful new technologies allow for a consistent and reliable treatment plan for people with diabetes. One major development is a system called continuous blood glucose monitoring (CGM). In this review, we look at three different continuous meal detection algorithms that were developed using given CGM data from patients with diabetes. From this analysis, an initial meal prediction algorithm was also developed utilizing these methods.
2401.05282
Ricardo Henriques Prof
Leonor Morgado, Estibaliz G\'omez-de-Mariscal, Hannah S. Heil and Ricardo Henriques
The Rise of Data-Driven Microscopy powered by Machine Learning
7 pages, 4 figures, review
null
10.1111/jmi.13282
null
q-bio.QM physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Optical microscopy is an indispensable tool in life sciences research, but conventional techniques require compromises between imaging parameters like speed, resolution, field-of-view, and phototoxicity. To overcome these limitations, data-driven microscopes incorporate feedback loops between data acquisition and analysis. This review overviews how machine learning enables automated image analysis to optimise microscopy in real-time. We first introduce key data-driven microscopy concepts and machine learning methods relevant to microscopy image analysis. Subsequently, we highlight pioneering works and recent advances in integrating machine learning into microscopy acquisition workflows, including optimising illumination, switching modalities and acquisition rates, and triggering targeted experiments. We then discuss the remaining challenges and future outlook. Overall, intelligent microscopes that can sense, analyse, and adapt promise to transform optical imaging by opening new experimental possibilities.
[ { "created": "Wed, 10 Jan 2024 17:28:17 GMT", "version": "v1" } ]
2024-04-01
[ [ "Morgado", "Leonor", "" ], [ "Gómez-de-Mariscal", "Estibaliz", "" ], [ "Heil", "Hannah S.", "" ], [ "Henriques", "Ricardo", "" ] ]
Optical microscopy is an indispensable tool in life sciences research, but conventional techniques require compromises between imaging parameters like speed, resolution, field-of-view, and phototoxicity. To overcome these limitations, data-driven microscopes incorporate feedback loops between data acquisition and analysis. This review overviews how machine learning enables automated image analysis to optimise microscopy in real-time. We first introduce key data-driven microscopy concepts and machine learning methods relevant to microscopy image analysis. Subsequently, we highlight pioneering works and recent advances in integrating machine learning into microscopy acquisition workflows, including optimising illumination, switching modalities and acquisition rates, and triggering targeted experiments. We then discuss the remaining challenges and future outlook. Overall, intelligent microscopes that can sense, analyse, and adapt promise to transform optical imaging by opening new experimental possibilities.
1501.01677
Norichika Ogata
Norichika Ogata, Toshinori Kozaki, Takeshi Yokoyama, Tamako Hata and Kikuo Iwabuchi
Comparison between the amount of environmental change and the amount of transcriptome change
null
null
10.1371/journal.pone.0144822
null
q-bio.CB q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells must coordinate adjustments in genome expression to accommodate changes in their environment. We hypothesized that the amount of transcriptome change is proportional to the amount of environmental change. To capture the effects of environmental changes on the transcriptome, we compared transcriptome diversities (defined as the Shannon entropy of frequency distribution) of silkworm fat-body tissues cultured with several concentrations of phenobarbital. Although there was no proportional relationship, we did identify a drug concentration tipping point between 0.25 and 1.0 mM. Cells cultured in media containing lower drug concentrations than the tipping point showed uniformly high transcriptome diversities, while those cultured at higher drug concentrations than the tipping point showed uniformly low transcriptome diversities. The plasticity of transcriptome diversity was corroborated by cultivations of fat bodies in MGM-450 insect medium without phenobarbital and in 0.25 mM phenobarbital-supplemented MGM-450 insect medium after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). Interestingly, the transcriptome diversities of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium) were different from those of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital). This hysteretic phenomenon of transcriptome diversities indicates multi-stability of the genome expression system.
[ { "created": "Wed, 7 Jan 2015 22:30:28 GMT", "version": "v1" } ]
2016-02-17
[ [ "Ogata", "Norichika", "" ], [ "Kozaki", "Toshinori", "" ], [ "Yokoyama", "Takeshi", "" ], [ "Hata", "Tamako", "" ], [ "Iwabuchi", "Kikuo", "" ] ]
Cells must coordinate adjustments in genome expression to accommodate changes in their environment. We hypothesized that the amount of transcriptome change is proportional to the amount of environmental change. To capture the effects of environmental changes on the transcriptome, we compared transcriptome diversities (defined as the Shannon entropy of frequency distribution) of silkworm fat-body tissues cultured with several concentrations of phenobarbital. Although there was no proportional relationship, we did identify a drug concentration tipping point between 0.25 and 1.0 mM. Cells cultured in media containing lower drug concentrations than the tipping point showed uniformly high transcriptome diversities, while those cultured at higher drug concentrations than the tipping point showed uniformly low transcriptome diversities. The plasticity of transcriptome diversity was corroborated by cultivations of fat bodies in MGM-450 insect medium without phenobarbital and in 0.25 mM phenobarbital-supplemented MGM-450 insect medium after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium). Interestingly, the transcriptome diversities of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital, followed by cultivation for 10 hours in 1.0 mM phenobarbital-supplemented MGM-450 insect medium) were different from those of cells cultured in media containing 0.25 mM phenobarbital after previous cultivation (cultivation for 80 hours in MGM-450 insect medium without phenobarbital). This hysteretic phenomenon of transcriptome diversities indicates multi-stability of the genome expression system.
2202.04991
Delfim F. M. Torres
M\'arcia Lemos-Silva, Delfim F. M. Torres
A note on a prey-predator model with constant-effort harvesting
This is a preprint whose final form is published by Springer Nature Switzerland AG in the book 'Dynamic Control and Optimization'. Submitted 30/Nov/2021; Accepted 10/Feb/2022
null
10.1007/978-3-031-17558-9_11
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study a prey-predator model based on the classical Lotka-Volterra system with Leslie-Gower and Holling IV schemes and a constant-effort harvesting. Our goal is twofold: to present the model proposed by Cheng and Zhang in 2021, pointing out some inconsistencies; to analyse the number and type of equilibrium points of the model. We end by proving the stability of the meaningful equilibrium point, according to the distribution of the eigenvalues.
[ { "created": "Thu, 10 Feb 2022 12:39:28 GMT", "version": "v1" } ]
2022-10-03
[ [ "Lemos-Silva", "Márcia", "" ], [ "Torres", "Delfim F. M.", "" ] ]
We study a prey-predator model based on the classical Lotka-Volterra system with Leslie-Gower and Holling IV schemes and a constant-effort harvesting. Our goal is twofold: to present the model proposed by Cheng and Zhang in 2021, pointing out some inconsistencies; to analyse the number and type of equilibrium points of the model. We end by proving the stability of the meaningful equilibrium point, according to the distribution of the eigenvalues.
1612.01605
Ricard Sole
Bernat Corominas-Murtra, Lu\'is Seoane and Ricard Sol\'e
Zipf's law, unbounded complexity and open-ended evolution
16 pages, 4 figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major problem for evolutionary theory is understanding the so-called {\em open-ended} nature of evolutionary change, from its definition to its origins. Open-ended evolution (OEE) refers to the unbounded increase in complexity that seems to characterise evolution on multiple scales. This property seems to be a characteristic feature of biological and technological evolution and is strongly tied to the generative potential associated with combinatorics, which allows systems to grow and expand their available state spaces. Interestingly, many complex systems presumably displaying OEE, from language to proteins, share a common statistical property: the presence of Zipf's law. Given an inventory of basic items (such as words or protein domains) required to build more complex structures (sentences or proteins) Zipf's law tells us that most of these elements are rare whereas a few of them are extremely common. Using Algorithmic Information Theory, in this paper we provide a fundamental definition for open-endedness, which can be understood as {\em postulates}. Its statistical counterpart, based on standard Shannon Information theory, has the structure of a variational problem which is shown to lead to Zipf's law as the expected consequence of an evolutionary process displaying OEE. We further explore the problem of information conservation through an OEE process and we conclude that statistical information (standard Shannon information) is not conserved, resulting in the paradoxical situation in which the increase of information content has the effect of erasing itself. We prove that this paradox is solved if we consider non-statistical forms of information. This last result implies that standard information theory may not be a suitable theoretical framework to explore the persistence and increase of the information content in OEE systems.
[ { "created": "Tue, 6 Dec 2016 00:43:51 GMT", "version": "v1" }, { "created": "Tue, 12 Jun 2018 11:04:16 GMT", "version": "v2" }, { "created": "Tue, 7 Aug 2018 14:33:51 GMT", "version": "v3" } ]
2018-08-08
[ [ "Corominas-Murtra", "Bernat", "" ], [ "Seoane", "Luís", "" ], [ "Solé", "Ricard", "" ] ]
A major problem for evolutionary theory is understanding the so-called {\em open-ended} nature of evolutionary change, from its definition to its origins. Open-ended evolution (OEE) refers to the unbounded increase in complexity that seems to characterise evolution on multiple scales. This property seems to be a characteristic feature of biological and technological evolution and is strongly tied to the generative potential associated with combinatorics, which allows systems to grow and expand their available state spaces. Interestingly, many complex systems presumably displaying OEE, from language to proteins, share a common statistical property: the presence of Zipf's law. Given an inventory of basic items (such as words or protein domains) required to build more complex structures (sentences or proteins) Zipf's law tells us that most of these elements are rare whereas a few of them are extremely common. Using Algorithmic Information Theory, in this paper we provide a fundamental definition for open-endedness, which can be understood as {\em postulates}. Its statistical counterpart, based on standard Shannon Information theory, has the structure of a variational problem which is shown to lead to Zipf's law as the expected consequence of an evolutionary process displaying OEE. We further explore the problem of information conservation through an OEE process and we conclude that statistical information (standard Shannon information) is not conserved, resulting in the paradoxical situation in which the increase of information content has the effect of erasing itself. We prove that this paradox is solved if we consider non-statistical forms of information. This last result implies that standard information theory may not be a suitable theoretical framework to explore the persistence and increase of the information content in OEE systems.
2308.04988
Colin Bredenberg
Colin Bredenberg, Cristina Savin
Desiderata for normative models of synaptic plasticity
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models -- REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
[ { "created": "Wed, 9 Aug 2023 14:42:10 GMT", "version": "v1" } ]
2023-08-10
[ [ "Bredenberg", "Colin", "" ], [ "Savin", "Cristina", "" ] ]
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models -- REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
1612.06357
Jonas Haslbeck
Jonas M B Haslbeck and Eiko I Fried
How Predictable are Symptoms in Psychopathological Networks? A Reanalysis of 18 Published Datasets
24 pages, 1 table, 4 figures
null
null
null
q-bio.NC physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background Network analyses on psychopathological data focus on the network structure and its derivatives such as node centrality. One conclusion one can draw from centrality measures is that the node with the highest centrality is likely to be the node that is determined most by its neighboring nodes. However, centrality is a relative measure: knowing that a node is highly central gives no information about the extent to which it is determined by its neighbors. Here we provide an absolute measure of determination (or controllability) of a node - its predictability. We introduce predictability, estimate the predictability of all nodes in 18 prior empirical network papers on psychopathology, and statistically relate it to centrality. Methods We carried out a literature review and collected 25 datasets from 18 published papers in the field (several mood and anxiety disorders, substance abuse, psychosis, autism, and transdiagnostic data). We fit state-of-the-art network models to all datasets, and computed the predictability of all nodes. Results Predictability was unrelated to sample size, moderately high in most symptom networks, and differed considerably both within and between datasets. Predictability was higher in community than clinical samples, highest for mood and anxiety disorders, and lowest for psychosis. Conclusions Predictability is an important additional characterization of symptom networks because it gives an absolute measure of the controllability of each node. It allows conclusions about how self-determined a symptom network is, and may help to inform intervention strategies. Limitations of predictability along with future directions are discussed.
[ { "created": "Mon, 28 Nov 2016 23:05:24 GMT", "version": "v1" }, { "created": "Tue, 20 Jun 2017 08:32:15 GMT", "version": "v2" } ]
2017-06-21
[ [ "Haslbeck", "Jonas M B", "" ], [ "Fried", "Eiko I", "" ] ]
Background Network analyses on psychopathological data focus on the network structure and its derivatives such as node centrality. One conclusion one can draw from centrality measures is that the node with the highest centrality is likely to be the node that is determined most by its neighboring nodes. However, centrality is a relative measure: knowing that a node is highly central gives no information about the extent to which it is determined by its neighbors. Here we provide an absolute measure of determination (or controllability) of a node - its predictability. We introduce predictability, estimate the predictability of all nodes in 18 prior empirical network papers on psychopathology, and statistically relate it to centrality. Methods We carried out a literature review and collected 25 datasets from 18 published papers in the field (several mood and anxiety disorders, substance abuse, psychosis, autism, and transdiagnostic data). We fit state-of-the-art network models to all datasets, and computed the predictability of all nodes. Results Predictability was unrelated to sample size, moderately high in most symptom networks, and differed considerably both within and between datasets. Predictability was higher in community than clinical samples, highest for mood and anxiety disorders, and lowest for psychosis. Conclusions Predictability is an important additional characterization of symptom networks because it gives an absolute measure of the controllability of each node. It allows conclusions about how self-determined a symptom network is, and may help to inform intervention strategies. Limitations of predictability along with future directions are discussed.
2204.04587
Karina Laneri
Javier Armando Gutierrez and Karina Laneri and Juan Pablo Aparicio and Gustavo Javier Sibona
Meteorological indicators of dengue epidemics in non-endemic Northwest Argentina
10 pages, 17 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the last two decades dengue cases increased significantly throughout the world. In several regions dengue re-emerged, particularly in Latin America, where dengue cases not only increased but also occurred more frequently. It is therefore necessary to understand the mechanisms that drive epidemic outbreaks in non-endemic regions, to help in the design of control strategies. We develop a stochastic model that includes climate variables, social structure, and mobility between a non-endemic city and an endemic area. We choose as a case study the non-endemic city of San Ram{\'o}n de la Nueva Or{\'a}n, located in Northwest Argentina. Human mobility is intense through the border with Bolivia, where dengue transmission is sustained during the whole year. City population was modelled as a meta-population taking into account households and population data for each patch. Climate variability was considered by including rainfall, relative humidity and temperature time series into the models. Those climatic variables were input of a mosquito population ecological model, which in turn is coupled to an epidemiological model. Different hypotheses regarding people's mobility between an endemic and non-endemic area are tested, taking into account the local climatic variation, typical of the non-endemic city. Simulations are qualitatively consistent with weekly clinical data reported from 2009 to 2016. Our model results allow us to explain the observed pattern of outbreaks, which alternates large dengue epidemics with several years of smaller outbreaks. We found that the number of vectors per host and an effective reproductive number are proxies for large epidemics, both related with climate variability such as rainfall and temperature, opening the possibility to test these meteorological variables for forecast purposes.
[ { "created": "Sun, 10 Apr 2022 03:26:34 GMT", "version": "v1" } ]
2022-04-12
[ [ "Gutierrez", "Javier Armando", "" ], [ "Laneri", "Karina", "" ], [ "Aparicio", "Juan Pablo", "" ], [ "Sibona", "Gustavo Javier", "" ] ]
In the last two decades dengue cases increased significantly throughout the world. In several regions dengue re-emerged, particularly in Latin America, where dengue cases not only increased but also occurred more frequently. It is therefore necessary to understand the mechanisms that drive epidemic outbreaks in non-endemic regions, to help in the design of control strategies. We develop a stochastic model that includes climate variables, social structure, and mobility between a non-endemic city and an endemic area. We choose as a case study the non-endemic city of San Ram{\'o}n de la Nueva Or{\'a}n, located in Northwest Argentina. Human mobility is intense through the border with Bolivia, where dengue transmission is sustained during the whole year. City population was modelled as a meta-population taking into account households and population data for each patch. Climate variability was considered by including rainfall, relative humidity and temperature time series into the models. Those climatic variables were input of a mosquito population ecological model, which in turn is coupled to an epidemiological model. Different hypotheses regarding people's mobility between an endemic and non-endemic area are tested, taking into account the local climatic variation, typical of the non-endemic city. Simulations are qualitatively consistent with weekly clinical data reported from 2009 to 2016. Our model results allow us to explain the observed pattern of outbreaks, which alternates large dengue epidemics with several years of smaller outbreaks. We found that the number of vectors per host and an effective reproductive number are proxies for large epidemics, both related with climate variability such as rainfall and temperature, opening the possibility to test these meteorological variables for forecast purposes.
0911.1720
Carlos P. Roca
Carlos P. Roca, Jos\'e A. Cuesta and Angel S\'anchez
Evolutionary game theory: Temporal and spatial effects beyond replicator dynamics
Review, 48 pages, 26 figures
Physics of Life Reviews 6, 208-249 (2009)
10.1016/j.plrev.2009.08.001
null
q-bio.PE cs.GT physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, that describes mathematically the idea that those individuals performing better have more offspring and thus their frequency in the population grows. While very many interesting results have been obtained with this equation in the three decades elapsed since it was first proposed, it is important to realize the limits of its applicability. One particularly relevant issue in this respect is that of non-mean-field effects, that may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects focusing on the non-trivial modifications they induce when compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the rule for strategy update is also analyzed. The discussion is presented in terms of the emergence of cooperation, as one of the current key problems in Biology and in other disciplines.
[ { "created": "Mon, 9 Nov 2009 16:29:47 GMT", "version": "v1" } ]
2009-11-14
[ [ "Roca", "Carlos P.", "" ], [ "Cuesta", "José A.", "" ], [ "Sánchez", "Angel", "" ] ]
Evolutionary game dynamics is one of the most fruitful frameworks for studying evolution in different disciplines, from Biology to Economics. Within this context, the approach of choice for many researchers is the so-called replicator equation, that describes mathematically the idea that those individuals performing better have more offspring and thus their frequency in the population grows. While very many interesting results have been obtained with this equation in the three decades elapsed since it was first proposed, it is important to realize the limits of its applicability. One particularly relevant issue in this respect is that of non-mean-field effects, that may arise from temporal fluctuations or from spatial correlations, both neglected in the replicator equation. This review discusses these temporal and spatial effects focusing on the non-trivial modifications they induce when compared to the outcome of replicator dynamics. Alongside this question, the hypothesis of linearity and its relation to the choice of the rule for strategy update is also analyzed. The discussion is presented in terms of the emergence of cooperation, as one of the current key problems in Biology and in other disciplines.
2405.16861
Wonho Zhung
Joongwon Lee, Wonho Zhung, Woo Youn Kim
NCIDiff: Non-covalent Interaction-generative Diffusion Model for Improving Reliability of 3D Molecule Generation Inside Protein Pocket
null
null
null
null
q-bio.BM cs.LG physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Advancements in deep generative modeling have changed the paradigm of drug discovery. Among such approaches, target-aware methods that exploit 3D structures of protein pockets were spotlighted for generating ligand molecules with their plausible binding modes. While docking scores superficially assess the quality of generated ligands, closer inspection of the binding structures reveals the inconsistency in local interactions between a pocket and generated ligands. Here, we address the issue by explicitly generating non-covalent interactions (NCIs), which are universal patterns throughout protein-ligand complexes. Our proposed model, NCIDiff, simultaneously denoises NCI types of protein-ligand edges along with a 3D graph of a ligand molecule during the sampling. With the NCI-generating strategy, our model generates ligands with more reliable NCIs, especially outperforming the baseline diffusion-based models. We further adopted inpainting techniques on NCIs to further improve the quality of the generated molecules. Finally, we showcase the applicability of NCIDiff on drug design tasks for real-world settings with specialized objectives by guiding the generation process with desired NCI patterns.
[ { "created": "Mon, 27 May 2024 06:26:55 GMT", "version": "v1" } ]
2024-05-28
[ [ "Lee", "Joongwon", "" ], [ "Zhung", "Wonho", "" ], [ "Kim", "Woo Youn", "" ] ]
Advancements in deep generative modeling have changed the paradigm of drug discovery. Among such approaches, target-aware methods that exploit 3D structures of protein pockets were spotlighted for generating ligand molecules with their plausible binding modes. While docking scores superficially assess the quality of generated ligands, closer inspection of the binding structures reveals the inconsistency in local interactions between a pocket and generated ligands. Here, we address the issue by explicitly generating non-covalent interactions (NCIs), which are universal patterns throughout protein-ligand complexes. Our proposed model, NCIDiff, simultaneously denoises NCI types of protein-ligand edges along with a 3D graph of a ligand molecule during the sampling. With the NCI-generating strategy, our model generates ligands with more reliable NCIs, especially outperforming the baseline diffusion-based models. We further adopted inpainting techniques on NCIs to further improve the quality of the generated molecules. Finally, we showcase the applicability of NCIDiff on drug design tasks for real-world settings with specialized objectives by guiding the generation process with desired NCI patterns.
1711.07387
Sam Kriegman
Sam Kriegman, Nick Cheney, Josh Bongard
How morphological development can guide evolution
null
null
10.1038/s41598-018-31868-7
null
q-bio.PE cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Organisms result from adaptive processes interacting across different time scales. One such interaction is that between development and evolution. Models have shown that development sweeps over several traits in a single agent, sometimes exposing promising static traits. Subsequent evolution can then canalize these rare traits. Thus, development can, under the right conditions, increase evolvability. Here, we report on a previously unknown phenomenon when embodied agents are allowed to develop and evolve: Evolution discovers body plans robust to control changes, these body plans become genetically assimilated, yet controllers for these agents are not assimilated. This allows evolution to continue climbing fitness gradients by tinkering with the developmental programs for controllers within these permissive body plans. This exposes a previously unknown detail about the Baldwin effect: instead of all useful traits becoming genetically assimilated, only traits that render the agent robust to changes in other traits become assimilated. We refer to this as differential canalization. This finding also has implications for the evolutionary design of artificial and embodied agents such as robots: robots robust to internal changes in their controllers may also be robust to external changes in their environment, such as transferal from simulation to reality or deployment in novel environments.
[ { "created": "Mon, 20 Nov 2017 15:51:34 GMT", "version": "v1" }, { "created": "Fri, 15 Dec 2017 00:46:09 GMT", "version": "v2" }, { "created": "Wed, 2 May 2018 01:42:03 GMT", "version": "v3" }, { "created": "Fri, 3 Aug 2018 23:55:21 GMT", "version": "v4" }, { "created": "Fri, 7 Sep 2018 21:12:14 GMT", "version": "v5" } ]
2018-09-19
[ [ "Kriegman", "Sam", "" ], [ "Cheney", "Nick", "" ], [ "Bongard", "Josh", "" ] ]
Organisms result from adaptive processes interacting across different time scales. One such interaction is that between development and evolution. Models have shown that development sweeps over several traits in a single agent, sometimes exposing promising static traits. Subsequent evolution can then canalize these rare traits. Thus, development can, under the right conditions, increase evolvability. Here, we report on a previously unknown phenomenon when embodied agents are allowed to develop and evolve: Evolution discovers body plans robust to control changes, these body plans become genetically assimilated, yet controllers for these agents are not assimilated. This allows evolution to continue climbing fitness gradients by tinkering with the developmental programs for controllers within these permissive body plans. This exposes a previously unknown detail about the Baldwin effect: instead of all useful traits becoming genetically assimilated, only traits that render the agent robust to changes in other traits become assimilated. We refer to this as differential canalization. This finding also has implications for the evolutionary design of artificial and embodied agents such as robots: robots robust to internal changes in their controllers may also be robust to external changes in their environment, such as transferal from simulation to reality or deployment in novel environments.
2109.06405
Fang Chen
Fang Chen, Te Wu and Long Wang
Evolutionary dynamics of zero-determinant strategies in repeated multiplayer games
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since Press and Dyson's ingenious discovery of the ZD (zero-determinant) strategy in the repeated Prisoner's Dilemma game, several studies have confirmed the existence of ZD strategies in repeated multiplayer social dilemmas. However, few studies have examined the evolutionary performance of multiplayer ZD strategies, especially from a theoretical perspective. Here, we use a newly proposed state-clustering method to theoretically analyze the evolutionary dynamics of two representative ZD strategies: generous ZD strategies and extortionate ZD strategies. Apart from the competitions between the two strategies and some classical strategies, we consider two new settings for multiplayer ZD strategies: competitions in the whole ZD strategy space and competitions in the space of all memory-1 strategies. In addition, we investigate the influence of the level of generosity and extortion on the evolutionary dynamics of generous and extortionate ZD strategies, which was commonly ignored in previous studies. Theoretical results show that players with limited generosity hold an advantage and that extortioners who extort more severely hold their ground more readily. Our results may provide new insights into the evolutionary dynamics of ZD strategies in repeated multiplayer games.
[ { "created": "Tue, 14 Sep 2021 02:41:02 GMT", "version": "v1" } ]
2021-09-15
[ [ "Chen", "Fang", "" ], [ "Wu", "Te", "" ], [ "Wang", "Long", "" ] ]
Since Press and Dyson's ingenious discovery of the ZD (zero-determinant) strategy in the repeated Prisoner's Dilemma game, several studies have confirmed the existence of ZD strategies in repeated multiplayer social dilemmas. However, few studies have examined the evolutionary performance of multiplayer ZD strategies, especially from a theoretical perspective. Here, we use a newly proposed state-clustering method to theoretically analyze the evolutionary dynamics of two representative ZD strategies: generous ZD strategies and extortionate ZD strategies. Apart from the competitions between the two strategies and some classical strategies, we consider two new settings for multiplayer ZD strategies: competitions in the whole ZD strategy space and competitions in the space of all memory-1 strategies. In addition, we investigate the influence of the level of generosity and extortion on the evolutionary dynamics of generous and extortionate ZD strategies, which was commonly ignored in previous studies. Theoretical results show that players with limited generosity hold an advantage and that extortioners who extort more severely hold their ground more readily. Our results may provide new insights into the evolutionary dynamics of ZD strategies in repeated multiplayer games.
2405.20203
Chiara Villa
Federica Padovano, Chiara Villa
The development of drug resistance in metastatic tumours under chemotherapy: an evolutionary perspective
35 pages, 10 Figures, 1 Supplementary material
null
null
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a mathematical model of the evolutionary dynamics of a metastatic tumour under chemotherapy, comprising non-local partial differential equations for the phenotype-structured cell populations in the primary tumour and its metastasis. These equations are coupled with a physiologically-based pharmacokinetic model of drug delivery, implementing a realistic delivery schedule. The model is carefully calibrated from the literature, focusing on BRAF-mutated melanoma treated with Dabrafenib as a case study. By means of long-time asymptotic analysis, global sensitivity analysis and numerical simulations, we explore the impact of cell migration from the primary to the metastatic site, physiological aspects of the tumour sites and drug dose on the development of drug resistance and treatment efficacy. Our findings provide a possible explanation for empirical evidence indicating that chemotherapy may foster metastatic spread and that metastatic sites may be less impacted by chemotherapy.
[ { "created": "Thu, 30 May 2024 16:05:37 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 07:32:33 GMT", "version": "v2" } ]
2024-06-21
[ [ "Padovano", "Federica", "" ], [ "Villa", "Chiara", "" ] ]
We present a mathematical model of the evolutionary dynamics of a metastatic tumour under chemotherapy, comprising non-local partial differential equations for the phenotype-structured cell populations in the primary tumour and its metastasis. These equations are coupled with a physiologically-based pharmacokinetic model of drug delivery, implementing a realistic delivery schedule. The model is carefully calibrated from the literature, focusing on BRAF-mutated melanoma treated with Dabrafenib as a case study. By means of long-time asymptotic analysis, global sensitivity analysis and numerical simulations, we explore the impact of cell migration from the primary to the metastatic site, physiological aspects of the tumour sites and drug dose on the development of drug resistance and treatment efficacy. Our findings provide a possible explanation for empirical evidence indicating that chemotherapy may foster metastatic spread and that metastatic sites may be less impacted by chemotherapy.
1112.5604
Pau Ru\'e
Pau Ru\'e, N\'uria Domedel-Puig, Jordi Garcia-Ojalvo, Antonio J. Pons
Integration of cellular signals in chattering environments
9 pages, 6 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells are constantly exposed to fluctuating environmental conditions. External signals are sensed, processed and integrated by cellular signal transduction networks, which translate input signals into specific cellular responses by means of biochemical reactions. These networks have a complex nature, and we are still far from having a complete characterization of the process through which they integrate information, especially given the noisy environment in which that information is embedded. Guided by the many instances of constructive influences of noise that have been reported in the physical sciences in the last decades, here we explore how multiple signals are integrated in a eukaryotic cell in the presence of background noise, or chatter. To that end, we use a Boolean model of a typical human signal transduction network. Despite its complexity, we find that the network is able to display simple patterns of signal integration. Furthermore, our computational analysis shows that these integration patterns depend on the levels of fluctuating background activity carried by other cell inputs. Taken together, our results indicate that signal integration is sensitive to environmental fluctuations, and that this background noise effectively determines the information integration capabilities of the cell.
[ { "created": "Fri, 23 Dec 2011 15:37:01 GMT", "version": "v1" } ]
2012-05-29
[ [ "Rué", "Pau", "" ], [ "Domedel-Puig", "Núria", "" ], [ "Garcia-Ojalvo", "Jordi", "" ], [ "Pons", "Antonio J.", "" ] ]
Cells are constantly exposed to fluctuating environmental conditions. External signals are sensed, processed and integrated by cellular signal transduction networks, which translate input signals into specific cellular responses by means of biochemical reactions. These networks have a complex nature, and we are still far from having a complete characterization of the process through which they integrate information, especially given the noisy environment in which that information is embedded. Guided by the many instances of constructive influences of noise that have been reported in the physical sciences in the last decades, here we explore how multiple signals are integrated in a eukaryotic cell in the presence of background noise, or chatter. To that end, we use a Boolean model of a typical human signal transduction network. Despite its complexity, we find that the network is able to display simple patterns of signal integration. Furthermore, our computational analysis shows that these integration patterns depend on the levels of fluctuating background activity carried by other cell inputs. Taken together, our results indicate that signal integration is sensitive to environmental fluctuations, and that this background noise effectively determines the information integration capabilities of the cell.
1310.5091
Leandro Nadaletti
Leandro P. Nadaletti, Beatriz S. L. P. de Lima, and Solange Guimar\~aes
Synchronization as a unifying mechanism for protein folding
6 pages, 5 figures; Reference added
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different models, such as diffusion-collision and nucleation-condensation, have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provides an accurate description of the kinetics and thermodynamics of such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the peptide-plane oscillatory movements throughout the backbone must also play a key role in the folding mechanism. Based on that, we draw a parallel between the folding process and the dynamics of a network of coupled oscillators described by the Kuramoto model. The amino acid coupling may explain the mean-field character of the force that propels an amino acid sequence into a structure through self-organization. Thus, the pattern of synchronized cluster formation and growth helps to solve Levinthal's paradox. Synchronization may also help us to understand the success of homology structural modeling, the allosteric effect, and the mechanism responsible for the recognition of odorants by olfactory receptors.
[ { "created": "Fri, 18 Oct 2013 16:50:59 GMT", "version": "v1" }, { "created": "Wed, 13 Nov 2013 00:56:42 GMT", "version": "v2" }, { "created": "Mon, 23 Jun 2014 18:26:19 GMT", "version": "v3" }, { "created": "Tue, 14 Oct 2014 00:12:09 GMT", "version": "v4" } ]
2014-10-15
[ [ "Nadaletti", "Leandro P.", "" ], [ "de Lima", "Beatriz S. L. P.", "" ], [ "Guimarães", "Solange", "" ] ]
Different models, such as diffusion-collision and nucleation-condensation, have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provides an accurate description of the kinetics and thermodynamics of such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the peptide-plane oscillatory movements throughout the backbone must also play a key role in the folding mechanism. Based on that, we draw a parallel between the folding process and the dynamics of a network of coupled oscillators described by the Kuramoto model. The amino acid coupling may explain the mean-field character of the force that propels an amino acid sequence into a structure through self-organization. Thus, the pattern of synchronized cluster formation and growth helps to solve Levinthal's paradox. Synchronization may also help us to understand the success of homology structural modeling, the allosteric effect, and the mechanism responsible for the recognition of odorants by olfactory receptors.
2307.11325
Shreya Ghosh
Matthew Hines, Gregory Glatzer, Shreya Ghosh, Prasenjit Mitra
Analysis of Elephant Movement in Sub-Saharan Africa: Ecological, Climatic, and Conservation Perspectives
11 pages, 17 figures, Accepted in ACM SIGCAS SIGCHI Conference on Computing and Sustainable Societies (COMPASS 2023)
null
null
null
q-bio.PE cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
The interaction between elephants and their environment has profound implications for both ecology and conservation strategies. This study presents an analytical approach to decipher the intricate patterns of elephant movement in Sub-Saharan Africa, concentrating on key ecological drivers such as seasonal variations and rainfall patterns. Despite the complexities surrounding these influential factors, our analysis provides a holistic view of elephant migratory behavior in the context of the dynamic African landscape. Our comprehensive approach enables us to predict the potential impact of these ecological determinants on elephant migration, a critical step in establishing informed conservation strategies. This projection is particularly crucial given the impacts of global climate change on seasonal and rainfall patterns, which could substantially influence elephant movements in the future. The findings of our work aim to not only advance the understanding of movement ecology but also foster a sustainable coexistence of humans and elephants in Sub-Saharan Africa. By predicting potential elephant routes, our work can inform strategies to minimize human-elephant conflict, effectively manage land use, and enhance anti-poaching efforts. This research underscores the importance of integrating movement ecology and climatic variables for effective wildlife management and conservation planning.
[ { "created": "Fri, 21 Jul 2023 03:23:17 GMT", "version": "v1" } ]
2023-07-24
[ [ "Hines", "Matthew", "" ], [ "Glatzer", "Gregory", "" ], [ "Ghosh", "Shreya", "" ], [ "Mitra", "Prasenjit", "" ] ]
The interaction between elephants and their environment has profound implications for both ecology and conservation strategies. This study presents an analytical approach to decipher the intricate patterns of elephant movement in Sub-Saharan Africa, concentrating on key ecological drivers such as seasonal variations and rainfall patterns. Despite the complexities surrounding these influential factors, our analysis provides a holistic view of elephant migratory behavior in the context of the dynamic African landscape. Our comprehensive approach enables us to predict the potential impact of these ecological determinants on elephant migration, a critical step in establishing informed conservation strategies. This projection is particularly crucial given the impacts of global climate change on seasonal and rainfall patterns, which could substantially influence elephant movements in the future. The findings of our work aim to not only advance the understanding of movement ecology but also foster a sustainable coexistence of humans and elephants in Sub-Saharan Africa. By predicting potential elephant routes, our work can inform strategies to minimize human-elephant conflict, effectively manage land use, and enhance anti-poaching efforts. This research underscores the importance of integrating movement ecology and climatic variables for effective wildlife management and conservation planning.
1210.2850
Barbara Rakitsch
Barbara Rakitsch, Christoph Lippert, Hande Topa, Karsten Borgwardt, Antti Honkela, Oliver Stegle
A mixed model approach for joint genetic analysis of alternatively spliced transcript isoforms using RNA-Seq data
null
null
null
null
q-bio.GN q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA-Seq technology allows for studying the transcriptional state of the cell at an unprecedented level of detail. Beyond quantification of whole-gene expression, it is now possible to disentangle the abundance of individual alternatively spliced transcript isoforms of a gene. A central question is to understand the regulatory processes that lead to differences in relative abundance variation due to external and genetic factors. Here, we present a mixed model approach that allows for (i) joint analysis and genetic mapping of multiple transcript isoforms and (ii) mapping of isoform-specific effects. Central to our approach is to comprehensively model the causes of variation and correlation between transcript isoforms, including the genomic background and technical quantification uncertainty. As a result, our method allows accurate testing for shared as well as transcript-specific genetic regulation of transcript isoforms and achieves substantially improved calibration of these statistical tests. Experiments on genotype and RNA-Seq data from 126 human HapMap individuals demonstrate that our model can help to obtain a more fine-grained picture of the genetic basis of gene expression variation.
[ { "created": "Wed, 10 Oct 2012 09:56:06 GMT", "version": "v1" } ]
2012-10-11
[ [ "Rakitsch", "Barbara", "" ], [ "Lippert", "Christoph", "" ], [ "Topa", "Hande", "" ], [ "Borgwardt", "Karsten", "" ], [ "Honkela", "Antti", "" ], [ "Stegle", "Oliver", "" ] ]
RNA-Seq technology allows for studying the transcriptional state of the cell at an unprecedented level of detail. Beyond quantification of whole-gene expression, it is now possible to disentangle the abundance of individual alternatively spliced transcript isoforms of a gene. A central question is to understand the regulatory processes that lead to differences in relative abundance variation due to external and genetic factors. Here, we present a mixed model approach that allows for (i) joint analysis and genetic mapping of multiple transcript isoforms and (ii) mapping of isoform-specific effects. Central to our approach is to comprehensively model the causes of variation and correlation between transcript isoforms, including the genomic background and technical quantification uncertainty. As a result, our method allows accurate testing for shared as well as transcript-specific genetic regulation of transcript isoforms and achieves substantially improved calibration of these statistical tests. Experiments on genotype and RNA-Seq data from 126 human HapMap individuals demonstrate that our model can help to obtain a more fine-grained picture of the genetic basis of gene expression variation.
1610.06127
Steven Frank
Steven A. Frank
Puzzles in modern biology. III. Two kinds of causality in age-related disease
null
F1000Research 5:2533 (2016)
10.12688/f1000research.9789.1
null
q-bio.PE q-bio.MN q-bio.TO
http://creativecommons.org/licenses/by/4.0/
The two primary causal dimensions of age-related disease are rate and function. Change in rate of disease development shifts the age of onset. Change in physiological function provides necessary steps in disease progression. A causal factor may alter the rate of physiological change, but that causal factor itself may have no direct physiological role. Alternatively, a causal factor may provide a necessary physiological function, but that causal factor itself may not alter the rate of disease onset. The rate-function duality provides the basis for solving puzzles of age-related disease. Causal factors of cancer illustrate the duality between rate processes of discovery, such as somatic mutation, and necessary physiological functions, such as invasive penetration across tissue barriers. Examples from cancer suggest general principles of age-related disease.
[ { "created": "Wed, 19 Oct 2016 17:49:45 GMT", "version": "v1" } ]
2016-10-20
[ [ "Frank", "Steven A.", "" ] ]
The two primary causal dimensions of age-related disease are rate and function. Change in rate of disease development shifts the age of onset. Change in physiological function provides necessary steps in disease progression. A causal factor may alter the rate of physiological change, but that causal factor itself may have no direct physiological role. Alternatively, a causal factor may provide a necessary physiological function, but that causal factor itself may not alter the rate of disease onset. The rate-function duality provides the basis for solving puzzles of age-related disease. Causal factors of cancer illustrate the duality between rate processes of discovery, such as somatic mutation, and necessary physiological functions, such as invasive penetration across tissue barriers. Examples from cancer suggest general principles of age-related disease.
q-bio/0512018
Otger Camp\`as
O. Campas, Y. Kafri, K.B. Zeldovich, J. Casademunt, J.-F. Joanny
Collective dynamics of molecular motors pulling on fluid membranes
5 pages, 5 figures
Phys. Rev. Lett. 97, 038101 (2006)
10.1103/PhysRevLett.97.038101
null
q-bio.SC physics.bio-ph
null
The collective dynamics of $N$ weakly coupled processive molecular motors are considered theoretically. We show, using a discrete lattice model, that the velocity-force curves strongly depend on the effective dynamic interactions between motors and differ significantly from a simple mean field prediction. They become essentially independent of $N$ if it is large enough. For strongly biased motors such as kinesin this occurs if $N\gtrsim 5$. The study of a two-state model shows that the existence of internal states can induce effective interactions.
[ { "created": "Thu, 8 Dec 2005 20:43:54 GMT", "version": "v1" } ]
2009-11-11
[ [ "Campas", "O.", "" ], [ "Kafri", "Y.", "" ], [ "Zeldovich", "K. B.", "" ], [ "Casademunt", "J.", "" ], [ "Joanny", "J. -F.", "" ] ]
The collective dynamics of $N$ weakly coupled processive molecular motors are considered theoretically. We show, using a discrete lattice model, that the velocity-force curves strongly depend on the effective dynamic interactions between motors and differ significantly from a simple mean field prediction. They become essentially independent of $N$ if it is large enough. For strongly biased motors such as kinesin this occurs if $N\gtrsim 5$. The study of a two-state model shows that the existence of internal states can induce effective interactions.
1801.01385
Sang-Yoon Kim
Sang-Yoon Kim, Woochang Lim
Effect of Inhibitory Spike-Timing-Dependent Plasticity on Fast Sparsely Synchronized Rhythms in A Small-World Neuronal Network
arXiv admin note: text overlap with arXiv:1704.03150
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Watts-Strogatz small-world network (SWN) consisting of inhibitory fast spiking Izhikevich interneurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by the inhibitory spike-timing-dependent plasticity (iSTDP). In previous works without iSTDP, fast sparsely synchronized rhythms, associated with diverse cognitive functions, were found to appear in a range of large noise intensities for fixed strong synaptic inhibition strengths. Here, we investigate the effect of iSTDP on fast sparse synchronization (FSS) by varying the noise intensity $D$. We employ an asymmetric anti-Hebbian time window for the iSTDP update rule [which is in contrast to the Hebbian time window for the excitatory STDP (eSTDP)]. Depending on values of $D$, population-averaged values of saturated synaptic inhibition strengths are potentiated [long-term potentiation (LTP)] or depressed [long-term depression (LTD)] in comparison with the initial mean value, and dispersions from the mean values of LTP/LTD are much increased when compared with the initial dispersion, independently of $D$. In most cases of LTD where the effect of mean LTD is dominant in comparison with the effect of dispersion, good FSS (with higher spiking measure) is found to get better via LTD, while bad FSS (with lower spiking measure) is found to get worse via LTP. This kind of Matthew effect in inhibitory synaptic plasticity is in contrast to that in excitatory synaptic plasticity where good (bad) synchronization gets better (worse) via LTP (LTD). Emergences of LTD and LTP of synaptic inhibition strengths are intensively investigated via a microscopic method based on the distributions of time delays between the pre- and the post-synaptic spike times. Furthermore, we also investigate the effects of network architecture on FSS by changing the rewiring probability $p$ of the SWN in the presence of iSTDP.
[ { "created": "Wed, 3 Jan 2018 01:29:34 GMT", "version": "v1" }, { "created": "Mon, 15 Jan 2018 09:46:39 GMT", "version": "v2" }, { "created": "Fri, 11 May 2018 17:05:43 GMT", "version": "v3" }, { "created": "Mon, 14 May 2018 04:59:47 GMT", "version": "v4" } ]
2018-05-15
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We consider the Watts-Strogatz small-world network (SWN) consisting of inhibitory fast spiking Izhikevich interneurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by the inhibitory spike-timing-dependent plasticity (iSTDP). In previous works without iSTDP, fast sparsely synchronized rhythms, associated with diverse cognitive functions, were found to appear in a range of large noise intensities for fixed strong synaptic inhibition strengths. Here, we investigate the effect of iSTDP on fast sparse synchronization (FSS) by varying the noise intensity $D$. We employ an asymmetric anti-Hebbian time window for the iSTDP update rule [which is in contrast to the Hebbian time window for the excitatory STDP (eSTDP)]. Depending on values of $D$, population-averaged values of saturated synaptic inhibition strengths are potentiated [long-term potentiation (LTP)] or depressed [long-term depression (LTD)] in comparison with the initial mean value, and dispersions from the mean values of LTP/LTD are much increased when compared with the initial dispersion, independently of $D$. In most cases of LTD where the effect of mean LTD is dominant in comparison with the effect of dispersion, good FSS (with higher spiking measure) is found to get better via LTD, while bad FSS (with lower spiking measure) is found to get worse via LTP. This kind of Matthew effect in inhibitory synaptic plasticity is in contrast to that in excitatory synaptic plasticity where good (bad) synchronization gets better (worse) via LTP (LTD). Emergences of LTD and LTP of synaptic inhibition strengths are intensively investigated via a microscopic method based on the distributions of time delays between the pre- and the post-synaptic spike times. Furthermore, we also investigate the effects of network architecture on FSS by changing the rewiring probability $p$ of the SWN in the presence of iSTDP.
q-bio/0607032
Graziano Vernizzi
Michael Bon, Graziano Vernizzi, Henri Orland, A. Zee
Topological classification of RNA structures
17 pages, 3 tables, 13 figures (high quality figures available on request)
null
null
null
q-bio.BM cond-mat.soft q-bio.SC
null
We present a novel topological classification of RNA secondary structures with pseudoknots. It is based on the topological genus of the circular diagram associated to the RNA base-pair structure. The genus is a positive integer number, whose value quantifies the topological complexity of the folded RNA structure. In such a representation, planar diagrams correspond to pure RNA secondary structures and have zero genus, whereas non-planar diagrams correspond to pseudoknotted structures and have higher genus. We analyze real RNA structures from the databases wwPDB and Pseudobase, and classify them according to their topological genus. We compare the results of our statistical survey with existing theoretical and numerical models. We also discuss possible applications of this classification and show how it can be used for identifying new RNA structural motifs.
[ { "created": "Fri, 21 Jul 2006 14:26:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bon", "Michael", "" ], [ "Vernizzi", "Graziano", "" ], [ "Orland", "Henri", "" ], [ "Zee", "A.", "" ] ]
We present a novel topological classification of RNA secondary structures with pseudoknots. It is based on the topological genus of the circular diagram associated to the RNA base-pair structure. The genus is a positive integer number, whose value quantifies the topological complexity of the folded RNA structure. In such a representation, planar diagrams correspond to pure RNA secondary structures and have zero genus, whereas non-planar diagrams correspond to pseudoknotted structures and have higher genus. We analyze real RNA structures from the databases wwPDB and Pseudobase, and classify them according to their topological genus. We compare the results of our statistical survey with existing theoretical and numerical models. We also discuss possible applications of this classification and show how it can be used for identifying new RNA structural motifs.
2012.11889
Jiancheng Xu
Dongyang Xing, Suyan Tian, Yukun Chen, Jinmei Wang, Xuejuan Sun, Shanji Li, Jiancheng Xu
Establishment of a diagnostic model to distinguish coronavirus disease 2019 from influenza A based on laboratory findings
26 pages,3 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Coronavirus disease 2019 (COVID-19) and influenza A are common diseases caused by viral infection. The clinical symptoms and transmission routes of the two diseases are similar. However, there are no relevant studies on laboratory diagnostic models to discriminate between COVID-19 and influenza A. This study aims at establishing a signature of laboratory findings to reliably distinguish patients with COVID-19 from those with influenza A. Materials: In this study, 56 COVID-19 patients and 54 influenza A patients were included. Laboratory findings, epidemiological characteristics and demographic data were obtained from electronic medical record databases. Elastic network models, followed by a stepwise logistic regression model, were implemented to identify indicators capable of discriminating COVID-19 and influenza A. A nomogram is diagrammed to show the resulting discriminative model. Results: The majority of hematological and biochemical parameters in COVID-19 patients were significantly different from those in influenza A patients. In the final model, albumin/globulin (A/G), total bilirubin (TBIL) and erythrocyte specific volume (HCT) were selected as predictors. Using an external dataset, the model was validated to perform well. Conclusion: A diagnostic model of laboratory findings was established, in which A/G, TBIL and HCT were included as highly relevant indicators for the discrimination of COVID-19 and influenza A, providing a complementary means for the precise diagnosis of these two diseases.
[ { "created": "Tue, 22 Dec 2020 09:13:08 GMT", "version": "v1" } ]
2020-12-23
[ [ "Xing", "Dongyang", "" ], [ "Tian", "Suyan", "" ], [ "Chen", "Yukun", "" ], [ "Wang", "Jinmei", "" ], [ "Sun", "Xuejuan", "" ], [ "Li", "Shanji", "" ], [ "Xu", "Jiancheng", "" ] ]
Background: Coronavirus disease 2019 (COVID-19) and influenza A are common diseases caused by viral infection. The clinical symptoms and transmission routes of the two diseases are similar. However, there are no relevant studies on laboratory diagnostic models to discriminate between COVID-19 and influenza A. This study aims at establishing a signature of laboratory findings to reliably distinguish patients with COVID-19 from those with influenza A. Materials: In this study, 56 COVID-19 patients and 54 influenza A patients were included. Laboratory findings, epidemiological characteristics and demographic data were obtained from electronic medical record databases. Elastic network models, followed by a stepwise logistic regression model, were implemented to identify indicators capable of discriminating COVID-19 and influenza A. A nomogram is diagrammed to show the resulting discriminative model. Results: The majority of hematological and biochemical parameters in COVID-19 patients were significantly different from those in influenza A patients. In the final model, albumin/globulin (A/G), total bilirubin (TBIL) and erythrocyte specific volume (HCT) were selected as predictors. Using an external dataset, the model was validated to perform well. Conclusion: A diagnostic model of laboratory findings was established, in which A/G, TBIL and HCT were included as highly relevant indicators for the discrimination of COVID-19 and influenza A, providing a complementary means for the precise diagnosis of these two diseases.
2210.06099
Priya Chakraborty
Priya Chakraborty, Ushasi Roy, Mohit K. Jolly, Sayantari Ghosh
Spatio-temporal Pattern Formation due to Host-Circuit Interplay in Gene Expression Dynamics
33 pages, 11 figures
null
10.1016/j.chaos.2022.112995
null
q-bio.QM nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological systems depend heavily on bistability to exhibit nongenetic heterogeneity in cellular morphology and physiology. Spatial patterns of phenotypically heterogeneous cells, arising from underlying bistability, may play a significant role in phenomena such as biofilm development, adaptation and cell motility. While nonlinear positive feedback regulation, such as cooperative heterodimer formation, is the usual cause of bistability, similar dynamics can also occur as a consequence of host-circuit interaction. In this paper, we investigate pattern formation by a motif with non-cooperative positive feedback that imposes a metabolic burden on its host through its expression. In a cellular array embedded in a diffusible environment, we study spatio-temporal diffusion in one and two dimensions under various initial conditions. Moreover, the number of cells exhibiting the same steady state, as well as their spatial distribution, is quantified by connected-component analysis. The effect of varying the diffusion coefficient is studied in terms of the stability of the related states and the time evolution of the patterns.
[ { "created": "Wed, 12 Oct 2022 11:23:30 GMT", "version": "v1" } ]
2023-01-25
[ [ "Chakraborty", "Priya", "" ], [ "Roy", "Ushasi", "" ], [ "Jolly", "Mohit K.", "" ], [ "Ghosh", "Sayantari", "" ] ]
Biological systems depend heavily on bistability to exhibit nongenetic heterogeneity in cellular morphology and physiology. Spatial patterns of phenotypically heterogeneous cells, arising from underlying bistability, may play a significant role in phenomena such as biofilm development, adaptation and cell motility. While nonlinear positive feedback regulation, such as cooperative heterodimer formation, is the usual cause of bistability, similar dynamics can also occur as a consequence of host-circuit interaction. In this paper, we investigate pattern formation by a motif with non-cooperative positive feedback that imposes a metabolic burden on its host through its expression. In a cellular array embedded in a diffusible environment, we study spatio-temporal diffusion in one and two dimensions under various initial conditions. Moreover, the number of cells exhibiting the same steady state, as well as their spatial distribution, is quantified by connected-component analysis. The effect of varying the diffusion coefficient is studied in terms of the stability of the related states and the time evolution of the patterns.
1910.10002
Eric Sun
Eric D. Sun, Thomas C.T. Michaels, L. Mahadevan
Optimal control of aging in complex networks
null
null
10.1073/pnas.2006375117
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many complex systems experience damage accumulation which leads to aging, manifest as an increasing probability of system collapse with time. This naturally raises the question of how to maximize health and longevity in an aging system at minimal cost of maintenance and intervention. Here, we pose this question in the context of a simple interdependent network model of aging in complex systems, and use both optimal control theory and reinforcement learning alongside a combination of analysis and simulation to determine optimal maintenance protocols. These protocols may motivate the rational design of strategies for promoting longevity in aging complex systems with potential applications in therapeutic schedules and engineered system maintenance.
[ { "created": "Tue, 22 Oct 2019 14:22:06 GMT", "version": "v1" } ]
2020-12-02
[ [ "Sun", "Eric D.", "" ], [ "Michaels", "Thomas C. T.", "" ], [ "Mahadevan", "L.", "" ] ]
Many complex systems experience damage accumulation which leads to aging, manifest as an increasing probability of system collapse with time. This naturally raises the question of how to maximize health and longevity in an aging system at minimal cost of maintenance and intervention. Here, we pose this question in the context of a simple interdependent network model of aging in complex systems, and use both optimal control theory and reinforcement learning alongside a combination of analysis and simulation to determine optimal maintenance protocols. These protocols may motivate the rational design of strategies for promoting longevity in aging complex systems with potential applications in therapeutic schedules and engineered system maintenance.
1008.2591
Thierry Rabilloud
Thierry Rabilloud (BBSI), Mireille Chevallet (BBSI), Sylvie Luche (BBSI), C\'ecile Lelong (BBSI)
Two-dimensional gel electrophoresis in proteomics: past, present and future
null
Journal of proteomics (2010) epub ahead of print
10.1016/j.jprot.2010.05.016
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two-dimensional gel electrophoresis has been instrumental in the birth and development of proteomics, although it is no longer the exclusive separation tool used in the field. This review takes a historical perspective, starting from the days when two-dimensional gels were used and the word proteomics did not even exist. The events that led to the birth of proteomics are also recalled, ending with a description of the now well-known limitations of two-dimensional gels in proteomics. However, the often-underestimated advantages of two-dimensional gels are also underlined, leading to a description of how and when to use two-dimensional gels to best effect in a proteomics approach. Building on these advantages (robustness, resolution, and the ability to separate entire, intact proteins), possible future applications of this technique in proteomics are also mentioned.
[ { "created": "Mon, 16 Aug 2010 08:16:47 GMT", "version": "v1" } ]
2010-08-17
[ [ "Rabilloud", "Thierry", "", "BBSI" ], [ "Chevallet", "Mireille", "", "BBSI" ], [ "Luche", "Sylvie", "", "BBSI" ], [ "Lelong", "Cécile", "", "BBSI" ] ]
Two-dimensional gel electrophoresis has been instrumental in the birth and development of proteomics, although it is no longer the exclusive separation tool used in the field. This review takes a historical perspective, starting from the days when two-dimensional gels were used and the word proteomics did not even exist. The events that led to the birth of proteomics are also recalled, ending with a description of the now well-known limitations of two-dimensional gels in proteomics. However, the often-underestimated advantages of two-dimensional gels are also underlined, leading to a description of how and when to use two-dimensional gels to best effect in a proteomics approach. Building on these advantages (robustness, resolution, and the ability to separate entire, intact proteins), possible future applications of this technique in proteomics are also mentioned.
q-bio/0703057
Jacek Miekisz
Jacek Miekisz and Tadeusz Platkowski
Population dynamics with a stable efficient equilibrium
12 pages
J. Theor. Biol. 237: 363 - 368 (2005)
null
null
q-bio.PE
null
We propose a game-theoretic dynamics of a population of replicating individuals. It consists of two parts: the standard replicator dynamics and migration between two different habitats. We consider symmetric two-player games with two evolutionarily stable strategies: the efficient one, in which the population is in a state with a maximal payoff, and the risk-dominant one, where players are averse to risk. We show that for a large range of parameters of our dynamics, even if the initial conditions in both habitats are in the basin of attraction of the risk-dominant equilibrium (with respect to the standard replicator dynamics without migration), in the long run most individuals play the efficient strategy.
[ { "created": "Tue, 27 Mar 2007 10:12:51 GMT", "version": "v1" } ]
2007-05-23
[ [ "Miekisz", "Jacek", "" ], [ "Platkowski", "Tadeusz", "" ] ]
We propose a game-theoretic dynamics of a population of replicating individuals. It consists of two parts: the standard replicator dynamics and migration between two different habitats. We consider symmetric two-player games with two evolutionarily stable strategies: the efficient one, in which the population is in a state with a maximal payoff, and the risk-dominant one, where players are averse to risk. We show that for a large range of parameters of our dynamics, even if the initial conditions in both habitats are in the basin of attraction of the risk-dominant equilibrium (with respect to the standard replicator dynamics without migration), in the long run most individuals play the efficient strategy.
1801.08626
Yunda Huang
Lily Zhang, Peter B. Gilbert, Edmund Capparelli, Yunda Huang
Pharmacokinetics Simulations for Studying Correlates of Prevention Efficacy of Passive HIV-1 Antibody Prophylaxis in the Antibody Mediated Prevention (AMP) Study
null
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A key objective in two phase 2b AMP clinical trials of VRC01 is to evaluate whether drug concentration over time, as estimated by non-linear mixed effects pharmacokinetics (PK) models, is associated with HIV infection rate. We conducted a simulation study of marker sampling designs, and evaluated the effect of study adherence and sub-cohort sample size on PK model estimates in multiple-dose studies. With m=120, even under low adherence (about half of study visits missing per participant), reasonably unbiased and consistent estimates of most fixed and random effect terms were obtained. Coarsened marker sampling schedules were also studied.
[ { "created": "Thu, 25 Jan 2018 22:43:57 GMT", "version": "v1" } ]
2018-01-29
[ [ "Zhang", "Lily", "" ], [ "Gilbert", "Peter B.", "" ], [ "Capparelli", "Edmund", "" ], [ "Huang", "Yunda", "" ] ]
A key objective in two phase 2b AMP clinical trials of VRC01 is to evaluate whether drug concentration over time, as estimated by non-linear mixed effects pharmacokinetics (PK) models, is associated with HIV infection rate. We conducted a simulation study of marker sampling designs, and evaluated the effect of study adherence and sub-cohort sample size on PK model estimates in multiple-dose studies. With m=120, even under low adherence (about half of study visits missing per participant), reasonably unbiased and consistent estimates of most fixed and random effect terms were obtained. Coarsened marker sampling schedules were also studied.
2103.04636
Fares Al-Shargie
Fares Al-Shargie
Prefrontal cortex functional connectivity based on simultaneous record of electrical and hemodynamic responses associated with mental stress
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
This paper investigates prefrontal cortex (PFC) functional connectivity based on synchronized electrical and hemodynamic responses associated with mental stress. The electrical response was based on the alpha rhythm of electroencephalography (EEG) signals, and the hemodynamic responses were based on the mean concentrations of oxygenated and deoxygenated hemoglobin measured using functional near-infrared spectroscopy (fNIRS). The aim is to explore the effects of stress on inter- and intra-hemispheric PFC functional connectivity at narrow and wide frequency bands of 8-13 Hz in the EEG signals and 0.009-0.1 Hz in the fNIRS signals. The results demonstrated a significant reduction in functional connectivity on the dorsolateral PFC within the inter- and intra-hemispheric PFC areas, based on the EEG alpha rhythm and the fNIRS oxygenated and deoxygenated hemoglobin. The statistical analysis further demonstrated right dorsolateral dominance under mental stress.
[ { "created": "Mon, 8 Mar 2021 09:48:03 GMT", "version": "v1" } ]
2021-03-09
[ [ "Al-Shargie", "Fares", "" ] ]
This paper investigates prefrontal cortex (PFC) functional connectivity based on synchronized electrical and hemodynamic responses associated with mental stress. The electrical response was based on the alpha rhythm of electroencephalography (EEG) signals, and the hemodynamic responses were based on the mean concentrations of oxygenated and deoxygenated hemoglobin measured using functional near-infrared spectroscopy (fNIRS). The aim is to explore the effects of stress on inter- and intra-hemispheric PFC functional connectivity at narrow and wide frequency bands of 8-13 Hz in the EEG signals and 0.009-0.1 Hz in the fNIRS signals. The results demonstrated a significant reduction in functional connectivity on the dorsolateral PFC within the inter- and intra-hemispheric PFC areas, based on the EEG alpha rhythm and the fNIRS oxygenated and deoxygenated hemoglobin. The statistical analysis further demonstrated right dorsolateral dominance under mental stress.
0904.0376
Philip von Doetinchem
S. Reibe, Ph. von Doetinchem, B. Madea
A new simulation-based model for calculating post-mortem intervals using developmental data for Lucilia sericata (Dipt.: Calliphoridae)
14 pages, 5 figures, 1 table
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homicide investigations often depend on the determination of a minimum post-mortem interval (PMI$_{min}$) by forensic entomologists. The age of the most developed insect larvae (mostly blow fly larvae) gives reasonably reliable information about the minimum time a person has been dead. Methods such as isomegalen diagrams or ADH calculations can have problems in their reliability, so in this study we established a new growth model to calculate the larval age of \textit{Lucilia sericata} (Meigen 1826). This is based on the actual non-linear development of the blow fly and is designed to include uncertainties, e.g. for temperature values from the crime scene. We used published data for the development of \textit{L. sericata} to estimate non-linear functions describing the temperature-dependent behavior of each developmental state. For the new model it is most important to determine the progress within one developmental state as correctly as possible, since this affects the accuracy of the PMI estimation by up to 75%. We found that PMI calculations based on one mean temperature value differ by up to 65% from PMIs based on a 12-hourly time-temperature profile. Differences of 2\degree C in the estimate of the crime scene temperature result in a deviation in the PMI calculation of 15 - 30%.
[ { "created": "Thu, 2 Apr 2009 12:49:19 GMT", "version": "v1" }, { "created": "Tue, 15 Sep 2009 08:58:01 GMT", "version": "v2" }, { "created": "Tue, 30 Mar 2010 18:46:26 GMT", "version": "v3" } ]
2010-03-31
[ [ "Reibe", "S.", "" ], [ "von Doetinchem", "Ph.", "" ], [ "Madea", "B.", "" ] ]
Homicide investigations often depend on the determination of a minimum post-mortem interval (PMI$_{min}$) by forensic entomologists. The age of the most developed insect larvae (mostly blow fly larvae) gives reasonably reliable information about the minimum time a person has been dead. Methods such as isomegalen diagrams or ADH calculations can have problems in their reliability, so in this study we established a new growth model to calculate the larval age of \textit{Lucilia sericata} (Meigen 1826). This is based on the actual non-linear development of the blow fly and is designed to include uncertainties, e.g. for temperature values from the crime scene. We used published data for the development of \textit{L. sericata} to estimate non-linear functions describing the temperature-dependent behavior of each developmental state. For the new model it is most important to determine the progress within one developmental state as correctly as possible, since this affects the accuracy of the PMI estimation by up to 75%. We found that PMI calculations based on one mean temperature value differ by up to 65% from PMIs based on a 12-hourly time-temperature profile. Differences of 2\degree C in the estimate of the crime scene temperature result in a deviation in the PMI calculation of 15 - 30%.
1603.08397
Augusto Gonzalez
Augusto Gonzalez
Estimating the number of tissue resident macrophages
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I provide a simple estimation for the number of macrophages in a tissue, arising from the hypothesis that they should keep infections below a certain threshold, above which neutrophils are recruited from blood circulation. The estimation reads Nm=a Ncel^{\alpha}/Nmax, where a is a numerical coefficient, the exponent {\alpha} is near 2/3, and Nmax is the maximal number of pathogens a macrophage may engulf in the time interval, tr, between pathogen replications.
[ { "created": "Mon, 28 Mar 2016 15:04:22 GMT", "version": "v1" } ]
2016-03-29
[ [ "Gonzalez", "Augusto", "" ] ]
I provide a simple estimation for the number of macrophages in a tissue, arising from the hypothesis that they should keep infections below a certain threshold, above which neutrophils are recruited from blood circulation. The estimation reads Nm=a Ncel^{\alpha}/Nmax, where a is a numerical coefficient, the exponent {\alpha} is near 2/3, and Nmax is the maximal number of pathogens a macrophage may engulf in the time interval, tr, between pathogen replications.
q-bio/0606037
Richard A. Blythe
R A Blythe
The propagation of a cultural or biological trait by neutral genetic drift in a subdivided population
17 pages, 8 figures, requires elsart5p.cls; substantially revised and improved version; accepted for publication in Theoretical Population Biology
Theoretical Population Biology (2007) 71 454
10.1016/j.tpb.2007.01.006
null
q-bio.PE
null
We study fixation probabilities and times as a consequence of neutral genetic drift in subdivided populations, motivated by a model of the cultural evolutionary process of language change that is described by the same mathematics as the biological process. We focus on the growth of fixation times with the number of subpopulations, and variation of fixation probabilities and times with initial distributions of mutants. A general formula for the fixation probability for arbitrary initial condition is derived by extending a duality relation between forwards- and backwards-time properties of the model from a panmictic to a subdivided population. From this we obtain new formulae, formally exact in the limit of extremely weak migration, for the mean fixation time from an arbitrary initial condition for Wright's island model, presenting two cases as examples. For more general models of population subdivision, formulae are introduced for an arbitrary number of mutants that are randomly located, and a single mutant whose position is known. These formulae contain parameters that typically have to be obtained numerically, a procedure we follow for two contrasting clustered models. These data suggest that variation of fixation time with the initial condition is slight, but depends strongly on the nature of subdivision. In particular, we demonstrate conditions under which the fixation time remains finite even in the limit of an infinite number of demes. In many cases - except this last where fixation in a finite time is seen - the time to fixation is shown to be in precise agreement with predictions from formulae for the asymptotic effective population size.
[ { "created": "Mon, 26 Jun 2006 16:50:11 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2007 09:49:39 GMT", "version": "v2" } ]
2015-05-26
[ [ "Blythe", "R A", "" ] ]
We study fixation probabilities and times as a consequence of neutral genetic drift in subdivided populations, motivated by a model of the cultural evolutionary process of language change that is described by the same mathematics as the biological process. We focus on the growth of fixation times with the number of subpopulations, and variation of fixation probabilities and times with initial distributions of mutants. A general formula for the fixation probability for arbitrary initial condition is derived by extending a duality relation between forwards- and backwards-time properties of the model from a panmictic to a subdivided population. From this we obtain new formulae, formally exact in the limit of extremely weak migration, for the mean fixation time from an arbitrary initial condition for Wright's island model, presenting two cases as examples. For more general models of population subdivision, formulae are introduced for an arbitrary number of mutants that are randomly located, and a single mutant whose position is known. These formulae contain parameters that typically have to be obtained numerically, a procedure we follow for two contrasting clustered models. These data suggest that variation of fixation time with the initial condition is slight, but depends strongly on the nature of subdivision. In particular, we demonstrate conditions under which the fixation time remains finite even in the limit of an infinite number of demes. In many cases - except this last where fixation in a finite time is seen - the time to fixation is shown to be in precise agreement with predictions from formulae for the asymptotic effective population size.
1511.07768
Yang-Yu Liu
Arunachalam Vinayagam, Travis E. Gibson, Ho-Joon Lee, Bahar Yilmazel, Charles Roesel, Yanhui Hu, Young Kwon, Amitabh Sharma, Yang-Yu Liu, Norbert Perrimon and Albert-L\'aszl\'o Barab\'asi
Controllability analysis of the directed human protein interaction network identifies disease genes and drug targets
31 pages, 4 figures
null
10.1073/pnas.1603992113
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The protein-protein interaction (PPI) network is crucial for cellular information processing and decision-making. With suitable inputs, PPI networks drive the cells to diverse functional outcomes such as cell proliferation or cell death. Here we characterize the structural controllability of a large directed human PPI network comprising 6,339 proteins and 34,813 interactions. This allows us to classify proteins as "indispensable", "neutral" or "dispensable", according to whether removal of that protein increases, has no effect on, or decreases the number of driver nodes in the network. We find that 21% of the proteins in the PPI network are indispensable. Interestingly, these indispensable proteins are the primary targets of disease-causing mutations, human viruses, and drugs, suggesting that altering a network's control property is critical for the transition between healthy and disease states. Furthermore, analyzing copy number alteration data from 1,547 cancer patients reveals that 56 genes that are frequently amplified or deleted in nine different cancers are indispensable. Among these 56 genes, 46 have not been previously associated with cancer. This suggests that controllability analysis is very useful in identifying novel disease genes and potential drug targets.
[ { "created": "Tue, 24 Nov 2015 15:55:33 GMT", "version": "v1" } ]
2016-09-28
[ [ "Vinayagam", "Arunachalam", "" ], [ "Gibson", "Travis E.", "" ], [ "Lee", "Ho-Joon", "" ], [ "Yilmazel", "Bahar", "" ], [ "Roesel", "Charles", "" ], [ "Hu", "Yanhui", "" ], [ "Kwon", "Young", "" ], [ "Sharma", "Amitabh", "" ], [ "Liu", "Yang-Yu", "" ], [ "Perrimon", "Norbert", "" ], [ "Barabási", "Albert-László", "" ] ]
The protein-protein interaction (PPI) network is crucial for cellular information processing and decision-making. With suitable inputs, PPI networks drive the cells to diverse functional outcomes such as cell proliferation or cell death. Here we characterize the structural controllability of a large directed human PPI network comprising 6,339 proteins and 34,813 interactions. This allows us to classify proteins as "indispensable", "neutral" or "dispensable", according to whether removal of that protein increases, has no effect on, or decreases the number of driver nodes in the network. We find that 21% of the proteins in the PPI network are indispensable. Interestingly, these indispensable proteins are the primary targets of disease-causing mutations, human viruses, and drugs, suggesting that altering a network's control property is critical for the transition between healthy and disease states. Furthermore, analyzing copy number alteration data from 1,547 cancer patients reveals that 56 genes that are frequently amplified or deleted in nine different cancers are indispensable. Among these 56 genes, 46 have not been previously associated with cancer. This suggests that controllability analysis is very useful in identifying novel disease genes and potential drug targets.
1206.0616
Daniel Gamermann Dr.
R. Reyes, D. Gamermann, A. Montagud, D. Fuente, J. Triana, J. F. Urchueg\'ia, P. Fern\'andez de C\'ordoba
Automation on the generation of genome scale metabolic models
24 pages, 2 figures, 2 tables
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Nowadays, the reconstruction of genome-scale metabolic models is a non-automated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic and physiological information. Currently, there is no optimal algorithm that allows one to go automatically through all this information and generate the models taking into account the probabilistic criteria of uniqueness and completeness that a biologist would consider. Results: This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, {\it Synechocystis sp. PCC6803}. The steps of the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. Conclusions: To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
[ { "created": "Mon, 4 Jun 2012 13:41:38 GMT", "version": "v1" } ]
2012-06-05
[ [ "Reyes", "R.", "" ], [ "Gamermann", "D.", "" ], [ "Montagud", "A.", "" ], [ "Fuente", "D.", "" ], [ "Triana", "J.", "" ], [ "Urchuegía", "J. F.", "" ], [ "de Córdoba", "P. Fernández", "" ] ]
Background: Nowadays, the reconstruction of genome-scale metabolic models is a non-automated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic and physiological information. Currently, there is no optimal algorithm that allows one to go automatically through all this information and generate the models taking into account the probabilistic criteria of uniqueness and completeness that a biologist would consider. Results: This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, {\it Synechocystis sp. PCC6803}. The steps of the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. Conclusions: To validate the robustness of the developed algorithm, the metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path length, have been compared and analyzed.
1410.7350
Logan Brooks
Logan C. Brooks, David C. Farrow, Sangwon Hyun, Ryan J. Tibshirani, Roni Rosenfeld
Flexible Modeling of Epidemics with an Empirical Bayes Framework
52 pages
null
10.1371/journal.pcbi.1004382
null
q-bio.PE stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic's behavior, policy makers can design and implement more effective countermeasures. We developed a semiparametric empirical Bayes framework for in-season forecasts of epidemics, and applied it to predict the weekly percentage of outpatient doctor visits for influenza-like illness, as well as the season onset, duration, peak time, and peak height, with and without additional data from Google Flu Trends, as part of the CDC's 2013--2014 "Predict the Influenza Season Challenge". Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, these models may not accurately capture the range of possible behaviors that we may see in the future. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to other diseases as well. Another important advantage of this method is that it produces a complete posterior distribution for any desired forecasting target, rather than mere point predictions. We report prospective influenza-like-illness forecasts that were made for the 2013--2014 U.S. influenza season, and compare the framework's cross-validated prediction error on historical data to that of a variety of simpler baseline predictors.
[ { "created": "Mon, 27 Oct 2014 18:41:42 GMT", "version": "v1" } ]
2016-02-17
[ [ "Brooks", "Logan C.", "" ], [ "Farrow", "David C.", "" ], [ "Hyun", "Sangwon", "" ], [ "Tibshirani", "Ryan J.", "" ], [ "Rosenfeld", "Roni", "" ] ]
Seasonal influenza epidemics cause consistent, considerable, widespread loss annually in terms of economic burden, morbidity, and mortality. With access to accurate and reliable forecasts of a current or upcoming influenza epidemic's behavior, policy makers can design and implement more effective countermeasures. We developed a semiparametric empirical Bayes framework for in-season forecasts of epidemics, and applied it to predict the weekly percentage of outpatient doctor visits for influenza-like illness, as well as the season onset, duration, peak time, and peak height, with and without additional data from Google Flu Trends, as part of the CDC's 2013--2014 "Predict the Influenza Season Challenge". Previous work on epidemic modeling has focused on developing mechanistic models of disease behavior and applying time series tools to explain historical data. However, these models may not accurately capture the range of possible behaviors that we may see in the future. Our approach instead produces possibilities for the epidemic curve of the season of interest using modified versions of data from previous seasons, allowing for reasonable variations in the timing, pace, and intensity of the seasonal epidemics, as well as noise in observations. Since the framework does not make strict domain-specific assumptions, it can easily be applied to other diseases as well. Another important advantage of this method is that it produces a complete posterior distribution for any desired forecasting target, rather than mere point predictions. We report prospective influenza-like-illness forecasts that were made for the 2013--2014 U.S. influenza season, and compare the framework's cross-validated prediction error on historical data to that of a variety of simpler baseline predictors.
2008.09109
Indrajit Ghosh
Indrajit Ghosh and Maia Martcheva
Modeling the effects of prosocial awareness on COVID-19 dynamics: A case study on Colombia
null
null
10.1007/s11071-021-06489-x
null
q-bio.PE physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
The ongoing COVID-19 pandemic has affected most of the countries on Earth. It has become a pandemic outbreak with more than 24 million confirmed infections and above 840 thousand deaths worldwide. In this study, we consider a mathematical model on COVID-19 transmission with the prosocial awareness effect. The proposed model can have four equilibrium states based on different parametric conditions. The local and global stability conditions for the awareness-free, disease-free equilibrium are studied. Using Lyapunov function theory and LaSalle Invariance Principle, the disease-free equilibrium is shown globally asymptotically stable under some parametric constraints. The existence of a unique awareness-free, endemic equilibrium and a unique endemic equilibrium is presented. We calibrate our proposed model parameters to fit daily cases and deaths from Colombia. Sensitivity analysis indicates that the transmission rate and learning factor related to awareness of susceptibles are very crucial for reduction in disease related deaths. Finally, we assess the impact of prosocial awareness during the outbreak and compare this strategy with popular control measures. Results indicate that prosocial awareness has competitive potential to flatten the curve.
[ { "created": "Thu, 20 Aug 2020 17:53:00 GMT", "version": "v1" }, { "created": "Sun, 30 Aug 2020 14:27:28 GMT", "version": "v2" } ]
2021-05-21
[ [ "Ghosh", "Indrajit", "" ], [ "Martcheva", "Maia", "" ] ]
The ongoing COVID-19 pandemic has affected most of the countries on Earth. It has become a pandemic outbreak with more than 24 million confirmed infections and above 840 thousand deaths worldwide. In this study, we consider a mathematical model on COVID-19 transmission with the prosocial awareness effect. The proposed model can have four equilibrium states based on different parametric conditions. The local and global stability conditions for the awareness-free, disease-free equilibrium are studied. Using Lyapunov function theory and LaSalle Invariance Principle, the disease-free equilibrium is shown globally asymptotically stable under some parametric constraints. The existence of a unique awareness-free, endemic equilibrium and a unique endemic equilibrium is presented. We calibrate our proposed model parameters to fit daily cases and deaths from Colombia. Sensitivity analysis indicates that the transmission rate and learning factor related to awareness of susceptibles are very crucial for reduction in disease related deaths. Finally, we assess the impact of prosocial awareness during the outbreak and compare this strategy with popular control measures. Results indicate that prosocial awareness has competitive potential to flatten the curve.
1506.06581
Ernest Montbrio
Ernest Montbri\'o, Diego Paz\'o, Alex Roxin
Macroscopic description for networks of spiking neurons
null
Phys. Rev. X 5, 021028 (2015)
10.1103/PhysRevX.5.021028
null
q-bio.NC cond-mat.dis-nn nlin.AO nlin.CD
http://creativecommons.org/licenses/by/3.0/
A major goal of neuroscience, statistical physics and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally we show that the firing rate description is related, via a conformal map, with a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
[ { "created": "Mon, 22 Jun 2015 12:58:47 GMT", "version": "v1" } ]
2015-06-23
[ [ "Montbrió", "Ernest", "" ], [ "Pazó", "Diego", "" ], [ "Roxin", "Alex", "" ] ]
A major goal of neuroscience, statistical physics and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally we show that the firing rate description is related, via a conformal map, with a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
2401.04478
Maximilian Schuh
Maximilian G. Schuh, Davide Boldini, Stephan A. Sieber
TwinBooster: Synergising Large Language Models with Barlow Twins and Gradient Boosting for Enhanced Molecular Property Prediction
13(+9) pages(+appendix), 5 figures, 11 tables
null
null
null
q-bio.BM cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
The success of drug discovery and development relies on the precise prediction of molecular activities and properties. While in silico molecular property prediction has shown remarkable potential, its use has been limited so far to assays for which large amounts of data are available. In this study, we use a fine-tuned large language model to integrate biological assays based on their textual information, coupled with Barlow Twins, a Siamese neural network using a novel self-supervised learning approach. This architecture uses both assay information and molecular fingerprints to extract the true molecular information. TwinBooster enables the prediction of properties of unseen bioassays and molecules by providing state-of-the-art zero-shot learning tasks. Remarkably, our artificial intelligence pipeline shows excellent performance on the FS-Mol benchmark. This breakthrough demonstrates the application of deep learning to critical property prediction tasks where data is typically scarce. By accelerating the early identification of active molecules in drug discovery and development, this method has the potential to help streamline the identification of novel therapeutics.
[ { "created": "Tue, 9 Jan 2024 10:36:20 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2024 09:29:47 GMT", "version": "v2" } ]
2024-01-31
[ [ "Schuh", "Maximilian G.", "" ], [ "Boldini", "Davide", "" ], [ "Sieber", "Stephan A.", "" ] ]
The success of drug discovery and development relies on the precise prediction of molecular activities and properties. While in silico molecular property prediction has shown remarkable potential, its use has been limited so far to assays for which large amounts of data are available. In this study, we use a fine-tuned large language model to integrate biological assays based on their textual information, coupled with Barlow Twins, a Siamese neural network using a novel self-supervised learning approach. This architecture uses both assay information and molecular fingerprints to extract the true molecular information. TwinBooster enables the prediction of properties of unseen bioassays and molecules by providing state-of-the-art zero-shot learning tasks. Remarkably, our artificial intelligence pipeline shows excellent performance on the FS-Mol benchmark. This breakthrough demonstrates the application of deep learning to critical property prediction tasks where data is typically scarce. By accelerating the early identification of active molecules in drug discovery and development, this method has the potential to help streamline the identification of novel therapeutics.
1605.01039
John Rhodes
Mark Layer and John A. Rhodes
Phylogenetic trees and Euclidean embeddings
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It was recently observed by de Vienne et al. that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
[ { "created": "Tue, 3 May 2016 19:33:45 GMT", "version": "v1" } ]
2016-05-04
[ [ "Layer", "Mark", "" ], [ "Rhodes", "John A.", "" ] ]
It was recently observed by de Vienne et al. that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
1209.4469
Martin Hediger
Martin R. Hediger, Luca De Vico, Julie B. Rannes, Christian J\"ackel, Werner Besenmatter, Allan Svendsen, Jan H. Jensen
In silico screening of 393 mutants facilitates enzyme engineering of amidase activity in CalB
null
null
10.7717/peerj.145
null
q-bio.BM physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our previously presented method for high throughput computational screening of mutant activity (Hediger et al., arXiv:1203.2950) is benchmarked against experimentally measured amidase activity for 22 mutants of Candida antarctica lipase B (CalB). Using an appropriate cutoff criterion for the computed barriers, the qualitative activity of 15 out of 22 mutants is correctly predicted. The method identifies four of the six most active mutants with >=3-fold wild type activity and seven out of the eight least active mutants with <=0.5-fold wild type activity. The method is further used to screen all sterically possible (386) double-, triple- and quadruple-mutants constructed from the most active single mutants. Based on the benchmark test at least 20 new promising mutants are identified.
[ { "created": "Thu, 20 Sep 2012 09:21:16 GMT", "version": "v1" } ]
2013-09-16
[ [ "Hediger", "Martin R.", "" ], [ "De Vico", "Luca", "" ], [ "Rannes", "Julie B.", "" ], [ "Jäckel", "Christian", "" ], [ "Besenmatter", "Werner", "" ], [ "Svendsen", "Allan", "" ], [ "Jensen", "Jan H.", "" ] ]
Our previously presented method for high throughput computational screening of mutant activity (Hediger et al., arXiv:1203.2950) is benchmarked against experimentally measured amidase activity for 22 mutants of Candida antarctica lipase B (CalB). Using an appropriate cutoff criterion for the computed barriers, the qualitative activity of 15 out of 22 mutants is correctly predicted. The method identifies four of the six most active mutants with >=3-fold wild type activity and seven out of the eight least active mutants with <=0.5-fold wild type activity. The method is further used to screen all sterically possible (386) double-, triple- and quadruple-mutants constructed from the most active single mutants. Based on the benchmark test at least 20 new promising mutants are identified.
2403.16290
Wayne Getz
Wayne M Getz
An Information Theory Treatment of Animal Movement Tracks
21 pages, 2 tables, 1 figure
null
null
null
q-bio.PE cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
Position recordings of the two-dimensional tracks of animals moving over landscapes has progressed over the past three decades from hourly to second-by-second locations. Track segmentation methods for analyzing the behavioral information in such relocation data has lagged somewhat behind, with scales of analysis currently at the sub-hourly to minute level. A new approach is needed to bring segmentation analysis down to a second-by-second level. Here, a fine-scale approach is presented that rests heavily on concepts from Shannon's Information Theory. In this paper, we first briefly review and update concepts relating to movement path segmentation. We then discuss how cluster analysis can be used to organize the smallest viable statistical movement elements (StaMEs), which are $\mu$ steps long, and to code the next level of movement elements called ``words'' that are $m \mu$ steps long. Centroids of these word clusters are identified as canonical activity modes (CAMs). Unlike current behavioral change point analysis and hidden Markov model segmentation schemes, the approach presented here allows us to provide entropy measures for movement paths, compute the coding efficiencies of derived StaMEs and CAMs, and to assess error rates in the allocation of strings of $m$ StaMEs to CAM types. In addition our approach allows us to employ the Jensen-Shannon divergence measure to assess and compare the best choices for the various parameters (number of steps in a StaME, number of StaME types, number of StaMEs in a word, number of CAM types), as well as the best clustering methods for generating segments that can then be used to interpret and predict sequences of higher order segments. The theory presented here provides another tool in our toolbox for dealing with the effects of global change on the movement and redistribution of animals across altered landscapes.
[ { "created": "Sun, 24 Mar 2024 20:41:19 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2024 16:20:44 GMT", "version": "v2" }, { "created": "Tue, 16 Apr 2024 17:52:34 GMT", "version": "v3" }, { "created": "Fri, 31 May 2024 15:59:44 GMT", "version": "v4" } ]
2024-06-03
[ [ "Getz", "Wayne M", "" ] ]
Position recordings of the two-dimensional tracks of animals moving over landscapes has progressed over the past three decades from hourly to second-by-second locations. Track segmentation methods for analyzing the behavioral information in such relocation data has lagged somewhat behind, with scales of analysis currently at the sub-hourly to minute level. A new approach is needed to bring segmentation analysis down to a second-by-second level. Here, a fine-scale approach is presented that rests heavily on concepts from Shannon's Information Theory. In this paper, we first briefly review and update concepts relating to movement path segmentation. We then discuss how cluster analysis can be used to organize the smallest viable statistical movement elements (StaMEs), which are $\mu$ steps long, and to code the next level of movement elements called ``words'' that are $m \mu$ steps long. Centroids of these word clusters are identified as canonical activity modes (CAMs). Unlike current behavioral change point analysis and hidden Markov model segmentation schemes, the approach presented here allows us to provide entropy measures for movement paths, compute the coding efficiencies of derived StaMEs and CAMs, and to assess error rates in the allocation of strings of $m$ StaMEs to CAM types. In addition our approach allows us to employ the Jensen-Shannon divergence measure to assess and compare the best choices for the various parameters (number of steps in a StaME, number of StaME types, number of StaMEs in a word, number of CAM types), as well as the best clustering methods for generating segments that can then be used to interpret and predict sequences of higher order segments. The theory presented here provides another tool in our toolbox for dealing with the effects of global change on the movement and redistribution of animals across altered landscapes.
1209.6046
Joshua Schraiber
Joshua G. Schraiber and Stephannie Shih and Montgomery Slatkin
Genomic tests of variation in inbreeding among individuals and among chromosomes
18 pages, 2 figures
Genetics December 1, 2012 vol. 192 no. 4 1477-1482
10.1534/genetics.112.145367
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the distribution of heterozygous sites in nine European and nine Yoruban individuals whose genomic sequences were made publicly available by Complete Genomics. We show that it is possible to obtain detailed information about inbreeding when a relatively small set of whole-genome sequences is available. Rather than focus on testing for deviations from Hardy-Weinberg genotype frequencies at each site, we analyze the entire distribution of heterozygotes conditioned on the number of copies of the derived (non-chimpanzee) allele. Using Levene's exact test, we reject Hardy-Weinberg in both populations. We generalized Levene's distribution to obtain the exact distribution of the number of heterozygous individuals given that every individual has the same inbreeding coefficient, F. We estimated F to be 0.0026 in Europeans and 0.0005 in Yorubans, but we could also reject the hypothesis that F was the same in each individual. We used a composite likelihood method to estimate F in each individual and within each chromosome. Variation in F across chromosomes within individuals was too large to be consistent with sampling effects alone. Furthermore, estimates of F for each chromosome in different populations were not correlated. Our results show how detailed comparisons of population genomic data can be made to theoretical predictions. The application of methods to the Complete Genomics data set shows that the extent of apparent inbreeding varies across chromosomes and across individuals, and estimates of inbreeding coefficients are subject to unexpected levels of variation which might be partly accounted for by selection.
[ { "created": "Wed, 26 Sep 2012 19:48:19 GMT", "version": "v1" } ]
2012-12-14
[ [ "Schraiber", "Joshua G.", "" ], [ "Shih", "Stephannie", "" ], [ "Slatkin", "Montgomery", "" ] ]
We examine the distribution of heterozygous sites in nine European and nine Yoruban individuals whose genomic sequences were made publicly available by Complete Genomics. We show that it is possible to obtain detailed information about inbreeding when a relatively small set of whole-genome sequences is available. Rather than focus on testing for deviations from Hardy-Weinberg genotype frequencies at each site, we analyze the entire distribution of heterozygotes conditioned on the number of copies of the derived (non-chimpanzee) allele. Using Levene's exact test, we reject Hardy-Weinberg in both populations. We generalized Levene's distribution to obtain the exact distribution of the number of heterozygous individuals given that every individual has the same inbreeding coefficient, F. We estimated F to be 0.0026 in Europeans and 0.0005 in Yorubans, but we could also reject the hypothesis that F was the same in each individual. We used a composite likelihood method to estimate F in each individual and within each chromosome. Variation in F across chromosomes within individuals was too large to be consistent with sampling effects alone. Furthermore, estimates of F for each chromosome in different populations were not correlated. Our results show how detailed comparisons of population genomic data can be made to theoretical predictions. The application of methods to the Complete Genomics data set shows that the extent of apparent inbreeding varies across chromosomes and across individuals, and estimates of inbreeding coefficients are subject to unexpected levels of variation which might be partly accounted for by selection.
1712.03346
Sam Sinai
Sam Sinai, Eric Kelsic, George M. Church, Martin A. Nowak
Variational auto-encoding of protein sequences
Abstract for oral presentation at NIPS 2017 Workshop on Machine Learning in Computational Biology
null
null
null
q-bio.QM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are responsible for the most diverse set of functions in biology. The ability to extract information from protein sequences and to predict the effects of mutations is extremely valuable in many domains of biology and medicine. However the mapping between protein sequence and function is complex and poorly understood. Here we present an embedding of natural protein sequences using a Variational Auto-Encoder and use it to predict how mutations affect protein function. We use this unsupervised approach to cluster natural variants and learn interactions between sets of positions within a protein. This approach generally performs better than baseline methods that consider no interactions within sequences, and in some cases better than the state-of-the-art approaches that use the inverse-Potts model. This generative model can be used to computationally guide exploration of protein sequence space and to better inform rational and automatic protein design.
[ { "created": "Sat, 9 Dec 2017 06:36:17 GMT", "version": "v1" }, { "created": "Fri, 22 Dec 2017 02:43:57 GMT", "version": "v2" }, { "created": "Wed, 3 Jan 2018 17:39:14 GMT", "version": "v3" } ]
2018-01-04
[ [ "Sinai", "Sam", "" ], [ "Kelsic", "Eric", "" ], [ "Church", "George M.", "" ], [ "Nowak", "Martin A.", "" ] ]
Proteins are responsible for the most diverse set of functions in biology. The ability to extract information from protein sequences and to predict the effects of mutations is extremely valuable in many domains of biology and medicine. However the mapping between protein sequence and function is complex and poorly understood. Here we present an embedding of natural protein sequences using a Variational Auto-Encoder and use it to predict how mutations affect protein function. We use this unsupervised approach to cluster natural variants and learn interactions between sets of positions within a protein. This approach generally performs better than baseline methods that consider no interactions within sequences, and in some cases better than the state-of-the-art approaches that use the inverse-Potts model. This generative model can be used to computationally guide exploration of protein sequence space and to better inform rational and automatic protein design.
2312.06159
Miriam Shiffman
Miriam Shiffman, Ryan Giordano, Tamara Broderick
Could dropping a few cells change the takeaways from differential expression?
null
null
null
null
q-bio.QM stat.CO stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Differential expression (DE) plays a fundamental role toward illuminating the molecular mechanisms driving a difference between groups (e.g., due to treatment or disease). While any analysis is run on particular cells/samples, the intent is to generalize to future occurrences of the treatment or disease. Implicitly, this step is justified by assuming that present and future samples are independent and identically distributed from the same population. Though this assumption is always false, we hope that any deviation from the assumption is small enough that A) conclusions of the analysis still hold and B) standard tools like standard error, significance, and power still reflect generalizability. Conversely, we might worry about these deviations, and reliance on standard tools, if conclusions could be substantively changed by dropping a very small fraction of data. While checking every small fraction is computationally intractable, recent work develops an approximation to identify when such an influential subset exists. Building on this work, we develop a metric for dropping-data robustness of DE; namely, we cast the analysis in a form suitable to the approximation, extend the approximation to models with data-dependent hyperparameters, and extend the notion of a data point from a single cell to a pseudobulk observation. We then overcome the inherent non-differentiability of gene set enrichment analysis to develop an additional approximation for the robustness of top gene sets. We assess robustness of DE for published single-cell RNA-seq data and discover that 1000s of genes can have their results flipped by dropping <1% of the data, including 100s that are sensitive to dropping a single cell (0.07%). Surprisingly, this non-robustness extends to high-level takeaways; half of the top 10 gene sets can be changed by dropping 1-2% of cells, and 2/10 can be changed by dropping a single cell.
[ { "created": "Mon, 11 Dec 2023 06:51:57 GMT", "version": "v1" } ]
2023-12-12
[ [ "Shiffman", "Miriam", "" ], [ "Giordano", "Ryan", "" ], [ "Broderick", "Tamara", "" ] ]
Differential expression (DE) plays a fundamental role toward illuminating the molecular mechanisms driving a difference between groups (e.g., due to treatment or disease). While any analysis is run on particular cells/samples, the intent is to generalize to future occurrences of the treatment or disease. Implicitly, this step is justified by assuming that present and future samples are independent and identically distributed from the same population. Though this assumption is always false, we hope that any deviation from the assumption is small enough that A) conclusions of the analysis still hold and B) standard tools like standard error, significance, and power still reflect generalizability. Conversely, we might worry about these deviations, and reliance on standard tools, if conclusions could be substantively changed by dropping a very small fraction of data. While checking every small fraction is computationally intractable, recent work develops an approximation to identify when such an influential subset exists. Building on this work, we develop a metric for dropping-data robustness of DE; namely, we cast the analysis in a form suitable to the approximation, extend the approximation to models with data-dependent hyperparameters, and extend the notion of a data point from a single cell to a pseudobulk observation. We then overcome the inherent non-differentiability of gene set enrichment analysis to develop an additional approximation for the robustness of top gene sets. We assess robustness of DE for published single-cell RNA-seq data and discover that 1000s of genes can have their results flipped by dropping <1% of the data, including 100s that are sensitive to dropping a single cell (0.07%). Surprisingly, this non-robustness extends to high-level takeaways; half of the top 10 gene sets can be changed by dropping 1-2% of cells, and 2/10 can be changed by dropping a single cell.
q-bio/0509044
Ana Nunes
M. M. Telo da Gama and A. Nunes
Epidemics in small world networks
null
null
10.1140/epjb/e2006-00099-7
null
q-bio.PE
null
For many infectious diseases, a small-world network on an underlying regular lattice is a suitable simplified model for the contact structure of the host population. It is well known that the contact network, described in this setting by a single parameter, the small-world parameter $p$, plays an important role both in the short term and in the long term dynamics of epidemic spread. We have studied the effect of the network structure on models of immune for life diseases and found that in addition to the reduction of the effective transmission rate, through the screening of infectives, spatial correlations may strongly enhance the stochastic fluctuations. As a consequence, time series of unforced Susceptible-Exposed-Infected-Recovered (SEIR) models provide patterns of recurrent epidemics with realistic amplitudes, suggesting that these models together with complex networks of contacts are the key ingredients to describe the prevaccination dynamical patterns of diseases such as measles and pertussis. We have also studied the role of the host contact structure in pathogen antigenic variation, through its effect on the final outcome of an invasion by a viral strain of a population where a very similar virus is endemic. Similar viral strains are modelled by the same infection and reinfection parameters, and by a given degree of cross immunity that represents the antigenic distance between the competing strains. We have found, somewhat surprisingly, that clustering on the network decreases the potential to sustain pathogen diversity.
[ { "created": "Fri, 30 Sep 2005 17:35:17 GMT", "version": "v1" }, { "created": "Wed, 26 Oct 2005 22:41:36 GMT", "version": "v2" } ]
2009-11-11
[ [ "da Gama", "M. M. Telo", "" ], [ "Nunes", "A.", "" ] ]
For many infectious diseases, a small-world network on an underlying regular lattice is a suitable simplified model for the contact structure of the host population. It is well known that the contact network, described in this setting by a single parameter, the small-world parameter $p$, plays an important role both in the short term and in the long term dynamics of epidemic spread. We have studied the effect of the network structure on models of immune for life diseases and found that in addition to the reduction of the effective transmission rate, through the screening of infectives, spatial correlations may strongly enhance the stochastic fluctuations. As a consequence, time series of unforced Susceptible-Exposed-Infected-Recovered (SEIR) models provide patterns of recurrent epidemics with realistic amplitudes, suggesting that these models together with complex networks of contacts are the key ingredients to describe the prevaccination dynamical patterns of diseases such as measles and pertussis. We have also studied the role of the host contact structure in pathogen antigenic variation, through its effect on the final outcome of an invasion by a viral strain of a population where a very similar virus is endemic. Similar viral strains are modelled by the same infection and reinfection parameters, and by a given degree of cross immunity that represents the antigenic distance between the competing strains. We have found, somewhat surprisingly, that clustering on the network decreases the potential to sustain pathogen diversity.
q-bio/0503027
Isaac Hubner
Isaac A. Hubner, Katherine A. Edmonds, and Eugene I. Shakhnovich
Nucleation and the transition state of the SH3 domain
In press at the Journal of Molecular Biology
null
null
null
q-bio.BM q-bio.OT
null
We present a verified computational model of the SH3 domain transition state (TS) ensemble. This model was built for three separate SH3 domains using experimental $\phi$-values in all-atom protein folding simulations. While averaging over all conformations incorrectly considers non-TS conformations as transition states, classifying structures as pre-TS, TS, and post-TS by measurement of their transmission coefficient (pfold, or probability to fold) allows for rigorous conclusions regarding the structure of the folding nucleus and a full mechanistic analysis of the folding process. Through analysis of the TS, we observe a highly polarized nucleus in which many residues are solvent-exposed. Mechanistic analysis suggests the hydrophobic core forms largely after an early nucleation step. SH3 presents an ideal system for studying the nucleation-condensation mechanism and highlights the synergistic relationship between experiment and simulation in the study of protein folding.
[ { "created": "Thu, 17 Mar 2005 17:12:48 GMT", "version": "v1" } ]
2007-05-23
[ [ "Hubner", "Isaac A.", "" ], [ "Edmonds", "Katherine A.", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
We present a verified computational model of the SH3 domain transition state (TS) ensemble. This model was built for three separate SH3 domains using experimental $\phi$-values in all-atom protein folding simulations. While averaging over all conformations incorrectly considers non-TS conformations as transition states, classifying structures as pre-TS, TS, and post-TS by measurement of their transmission coefficient (pfold, or probability to fold) allows for rigorous conclusions regarding the structure of the folding nucleus and a full mechanistic analysis of the folding process. Through analysis of the TS, we observe a highly polarized nucleus in which many residues are solvent-exposed. Mechanistic analysis suggests the hydrophobic core forms largely after an early nucleation step. SH3 presents an ideal system for studying the nucleation-condensation mechanism and highlights the synergistic relationship between experiment and simulation in the study of protein folding.
2304.02656
Bingxin Zhou
Xinye Xiong, Bingxin Zhou, Yu Guang Wang
Graph Representation Learning for Interactive Biomolecule Systems
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Advances in deep learning models have revolutionized the study of biomolecule systems and their mechanisms. Graph representation learning, in particular, is important for accurately capturing the geometric information of biomolecules at different levels. This paper presents a comprehensive review of the methodologies used to represent biological molecules and systems as computer-recognizable objects, such as sequences, graphs, and surfaces. Moreover, it examines how geometric deep learning models, with an emphasis on graph-based techniques, can analyze biomolecule data to enable drug discovery, protein characterization, and biological system analysis. The study concludes with an overview of the current state of the field, highlighting the challenges that exist and the potential future research directions.
[ { "created": "Wed, 5 Apr 2023 08:00:50 GMT", "version": "v1" } ]
2023-04-07
[ [ "Xiong", "Xinye", "" ], [ "Zhou", "Bingxin", "" ], [ "Wang", "Yu Guang", "" ] ]
Advances in deep learning models have revolutionized the study of biomolecule systems and their mechanisms. Graph representation learning, in particular, is important for accurately capturing the geometric information of biomolecules at different levels. This paper presents a comprehensive review of the methodologies used to represent biological molecules and systems as computer-recognizable objects, such as sequences, graphs, and surfaces. Moreover, it examines how geometric deep learning models, with an emphasis on graph-based techniques, can analyze biomolecule data to enable drug discovery, protein characterization, and biological system analysis. The study concludes with an overview of the current state of the field, highlighting the challenges that exist and the potential future research directions.
2005.00049
Jean Dolbeault
Jean Dolbeault and Gabriel Turinici
Heterogeneous social interactions and the COVID-19 lockdown outcome in a multi-group SEIR model
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study variants of the SEIR model for interpreting some qualitative features of the statistics of the Covid-19 epidemic in France. Standard SEIR models distinguish essentially two regimes: either the disease is controlled and the number of infected people rapidly decreases, or the disease spreads and contaminates a significant fraction of the population until herd immunity is achieved. After lockdown, at first sight it seems that social distancing is not enough to control the outbreak. We discuss here a possible explanation, namely that the lockdown is creating social heterogeneity: even if a large majority of the population complies with the lockdown rules, a small fraction of the population still has to maintain a normal or high level of social interactions, such as health workers, providers of essential services, etc. This results in an apparent high level of epidemic propagation as measured through re-estimations of the basic reproduction ratio. However, these measures are limited to averages, while variance inside the population plays an essential role on the peak and the size of the epidemic outbreak and tends to lower these two indicators. We provide theoretical and numerical results to sustain such a view.
[ { "created": "Thu, 30 Apr 2020 18:43:22 GMT", "version": "v1" }, { "created": "Tue, 23 Jun 2020 19:56:54 GMT", "version": "v2" } ]
2020-06-25
[ [ "Dolbeault", "Jean", "" ], [ "Turinici", "Gabriel", "" ] ]
We study variants of the SEIR model for interpreting some qualitative features of the statistics of the Covid-19 epidemic in France. Standard SEIR models distinguish essentially two regimes: either the disease is controlled and the number of infected people rapidly decreases, or the disease spreads and contaminates a significant fraction of the population until herd immunity is achieved. After lockdown, at first sight it seems that social distancing is not enough to control the outbreak. We discuss here a possible explanation, namely that the lockdown is creating social heterogeneity: even if a large majority of the population complies with the lockdown rules, a small fraction of the population still has to maintain a normal or high level of social interactions, such as health workers, providers of essential services, etc. This results in an apparent high level of epidemic propagation as measured through re-estimations of the basic reproduction ratio. However, these measures are limited to averages, while variance inside the population plays an essential role on the peak and the size of the epidemic outbreak and tends to lower these two indicators. We provide theoretical and numerical results to sustain such a view.
2404.05501
Vivek Kumar Agarwal
Vivek Agarwal, Joshua Harvey, Dmitry Rinberg and Vasant Dhar
Data Science In Olfaction
20 pages, 10 Figures, 2 Appendix, 1 Table
null
null
null
q-bio.NC cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Advances in neural sensing technology are making it possible to observe the olfactory process in great detail. In this paper, we conceptualize smell from a Data Science and AI perspective that relates the properties of odorants to how they are sensed and analyzed in the olfactory system from the nose to the brain. Drawing distinctions to color vision, we argue that smell presents unique measurement challenges, including the complexity of stimuli, the high dimensionality of the sensory apparatus, as well as what constitutes ground truth. In the face of these challenges, we argue for the centrality of odorant-receptor interactions in developing a theory of olfaction. Such a theory is likely to find widespread industrial applications, and enhance our understanding of smell and, in the longer term, how it relates to other senses and language. As an initial use case of the data, we present results using machine learning-based classification of neural responses to odors as they are recorded in the mouse olfactory bulb with calcium imaging.
[ { "created": "Mon, 8 Apr 2024 13:25:02 GMT", "version": "v1" } ]
2024-04-09
[ [ "Agarwal", "Vivek", "" ], [ "Harvey", "Joshua", "" ], [ "Rinberg", "Dmitry", "" ], [ "Dhar", "Vasant", "" ] ]
Advances in neural sensing technology are making it possible to observe the olfactory process in great detail. In this paper, we conceptualize smell from a Data Science and AI perspective that relates the properties of odorants to how they are sensed and analyzed in the olfactory system from the nose to the brain. Drawing distinctions to color vision, we argue that smell presents unique measurement challenges, including the complexity of stimuli, the high dimensionality of the sensory apparatus, as well as what constitutes ground truth. In the face of these challenges, we argue for the centrality of odorant-receptor interactions in developing a theory of olfaction. Such a theory is likely to find widespread industrial applications, and enhance our understanding of smell and, in the longer term, how it relates to other senses and language. As an initial use case of the data, we present results using machine learning-based classification of neural responses to odors as they are recorded in the mouse olfactory bulb with calcium imaging.
1306.2772
Ellen Baake
Sandra Kluth, Ellen Baake
The Moran model with selection: Fixation probabilities, ancestral lines, and an alternative particle representation
21 pages, 8 figures
Theor. Pop. Biol. 90 (2013), 104-112
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We reconsider the Moran model in continuous time with population size $N$, two allelic types, and selection. We introduce a new particle representation, which we call the labelled Moran model, and which has the same distribution of type frequencies as the original Moran model, provided the initial values are chosen appropriately. In the new model, individuals are labelled $1,2, \dots, N$; neutral resampling events may take place between arbitrary labels, whereas selective events only occur in the direction of increasing labels. With the help of elementary methods only, we not only recover fixation probabilities, but also obtain detailed insight into the number and nature of the selective events that play a role in the fixation process forward in time.
[ { "created": "Wed, 12 Jun 2013 10:02:23 GMT", "version": "v1" }, { "created": "Fri, 6 Dec 2013 13:29:01 GMT", "version": "v2" } ]
2013-12-09
[ [ "Kluth", "Sandra", "" ], [ "Baake", "Ellen", "" ] ]
We reconsider the Moran model in continuous time with population size $N$, two allelic types, and selection. We introduce a new particle representation, which we call the labelled Moran model, and which has the same distribution of type frequencies as the original Moran model, provided the initial values are chosen appropriately. In the new model, individuals are labelled $1,2, \dots, N$; neutral resampling events may take place between arbitrary labels, whereas selective events only occur in the direction of increasing labels. With the help of elementary methods only, we not only recover fixation probabilities, but also obtain detailed insight into the number and nature of the selective events that play a role in the fixation process forward in time.
1904.10845
Coralie Picoche
Coralie Picoche and Frederic Barraquand
How self-regulation, the storage effect and their interaction contribute to coexistence in stochastic and seasonal environments
27 pages, 9 figures, Theor Ecol (2019)
null
10.1007/s12080-019-0420-9
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining coexistence in species-rich communities of primary producers remains a challenge for ecologists because of their likely competition for shared resources. Following Hutchinson's seminal suggestion, many theoreticians have tried to create diversity through a fluctuating environment, which impairs or slows down competitive exclusion. However, fluctuating-environment models often only produce a dozen coexisting species at best. Here, we investigate how to create richer communities in fluctuating environments, using an empirically parameterized model. Building on the forced Lotka-Volterra model of Scranton and Vasseur (Theor Ecol 9(3):353-363, 2016), inspired by phytoplankton communities, we have investigated the effect of two coexistence mechanisms, namely the storage effect and higher intra- than interspecific competition strengths (i.e., strong self-regulation). We tuned the intra/inter competition ratio based on empirical analyses, in which self-regulation dominates interspecific interactions. Although strong self-regulation maintained more species (50%) than the storage effect (25%), we show that neither of the two coexistence mechanisms considered could ensure the coexistence of all species alone. Realistic seasonal environments only aggravated that picture, as they decreased persistence relative to a random environment. However, strong self-regulation and the storage effect combined superadditively so that all species could persist with both mechanisms at work. Our results suggest that combining different coexistence mechanisms into community models might be more fruitful than trying to find which mechanism best explains diversity. We additionally highlight that while biomass-trait distributions provide some clues regarding coexistence mechanisms, they cannot indicate unequivocally which mechanisms are at play.
[ { "created": "Wed, 24 Apr 2019 14:36:13 GMT", "version": "v1" } ]
2019-04-25
[ [ "Picoche", "Coralie", "" ], [ "Barraquand", "Frederic", "" ] ]
Explaining coexistence in species-rich communities of primary producers remains a challenge for ecologists because of their likely competition for shared resources. Following Hutchinson's seminal suggestion, many theoreticians have tried to create diversity through a fluctuating environment, which impairs or slows down competitive exclusion. However, fluctuating-environment models often only produce a dozen coexisting species at best. Here, we investigate how to create richer communities in fluctuating environments, using an empirically parameterized model. Building on the forced Lotka-Volterra model of Scranton and Vasseur (Theor Ecol 9(3):353-363, 2016), inspired by phytoplankton communities, we have investigated the effect of two coexistence mechanisms, namely the storage effect and higher intra- than interspecific competition strengths (i.e., strong self-regulation). We tuned the intra/inter competition ratio based on empirical analyses, in which self-regulation dominates interspecific interactions. Although strong self-regulation maintained more species (50%) than the storage effect (25%), we show that neither of the two coexistence mechanisms considered could ensure the coexistence of all species alone. Realistic seasonal environments only aggravated that picture, as they decreased persistence relative to a random environment. However, strong self-regulation and the storage effect combined superadditively so that all species could persist with both mechanisms at work. Our results suggest that combining different coexistence mechanisms into community models might be more fruitful than trying to find which mechanism best explains diversity. We additionally highlight that while biomass-trait distributions provide some clues regarding coexistence mechanisms, they cannot indicate unequivocally which mechanisms are at play.
1610.04201
Joseph Griffis
Joseph C. Griffis, Rodolphe Nenert, Jane B. Allendorfer, Jerzy P. Szaflarski
Parallel ICA reveals linked patterns of structural damage and fMRI language task activation in chronic post-stroke aphasia
47 pages; 4 figures; 3 Tables; 3 Supplementary Figures; 2 Supplementary Tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structural and functional MRI studies of patients with post-stroke language deficits have contributed substantially to our understanding of how cognitive-behavioral impairments relate to the location of structural damage and to the activation of surviving brain regions during language processing, respectively. However, very little is known about how inter-patient variability in language task activation relates to variability in the structures affected by stroke. Here, we used parallel independent component analysis (pICA) to characterize links between patterns of structural damage and patterns of functional MRI activation during semantic decisions. The pICA analysis revealed a significant association between a lesion component featuring damage to left posterior temporo-parietal cortex and the underlying deep white matter and an fMRI component featuring (1) heightened activation in a primarily right hemispheric network of frontal, temporal, and parietal regions, and (2) reduced activation in areas associated with the semantic network activated by healthy controls. Stronger loading parameters on both the lesion and fMRI activation components were associated with poorer language test performance. Fiber tracking suggests that lesions affecting the left posterior temporo-parietal cortex and deep white matter may lead to the simultaneous disruption of multiple long-range structural pathways connecting distal language areas. Damage to the left posterior temporo-parietal cortex and underlying white matter may (1) impede the language task-driven recruitment of canonical left hemispheric language and other areas (e.g. the right anterior temporal lobe and default mode regions) that likely support residual language function after stroke, and (2) lead to the compensatory recruitment of right hemispheric fronto-temporo-parietal networks for tasks requiring semantic processing.
[ { "created": "Thu, 13 Oct 2016 19:02:53 GMT", "version": "v1" } ]
2016-10-14
[ [ "Griffis", "Joseph C.", "" ], [ "Nenert", "Rodolphe", "" ], [ "Allendorfer", "Jane B.", "" ], [ "Szaflarski", "Jerzy P.", "" ] ]
Structural and functional MRI studies of patients with post-stroke language deficits have contributed substantially to our understanding of how cognitive-behavioral impairments relate to the location of structural damage and to the activation of surviving brain regions during language processing, respectively. However, very little is known about how inter-patient variability in language task activation relates to variability in the structures affected by stroke. Here, we used parallel independent component analysis (pICA) to characterize links between patterns of structural damage and patterns of functional MRI activation during semantic decisions. The pICA analysis revealed a significant association between a lesion component featuring damage to left posterior temporo-parietal cortex and the underlying deep white matter and an fMRI component featuring (1) heightened activation in a primarily right hemispheric network of frontal, temporal, and parietal regions, and (2) reduced activation in areas associated with the semantic network activated by healthy controls. Stronger loading parameters on both the lesion and fMRI activation components were associated with poorer language test performance. Fiber tracking suggests that lesions affecting the left posterior temporo-parietal cortex and deep white matter may lead to the simultaneous disruption of multiple long-range structural pathways connecting distal language areas. Damage to the left posterior temporo-parietal cortex and underlying white matter may (1) impede the language task-driven recruitment of canonical left hemispheric language and other areas (e.g. the right anterior temporal lobe and default mode regions) that likely support residual language function after stroke, and (2) lead to the compensatory recruitment of right hemispheric fronto-temporo-parietal networks for tasks requiring semantic processing.
2405.19587
Simone Linz
Janosch D\"ocker, Simone Linz, Kristina Wicke
Bounding the softwired parsimony score of a phylogenetic network
null
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In comparison to phylogenetic trees, phylogenetic networks are more suitable to represent complex evolutionary histories of species whose past includes reticulation such as hybridisation or lateral gene transfer. However, the reconstruction of phylogenetic networks remains challenging and computationally expensive due to their intricate structural properties. For example, the small parsimony problem, which is solvable in polynomial time for phylogenetic trees, becomes NP-hard on phylogenetic networks under softwired and parental parsimony, even for a single binary character and structurally constrained networks. To calculate the parsimony score of a phylogenetic network $N$, these two parsimony notions consider different exponential-size sets of phylogenetic trees that can be extracted from $N$ and infer the minimum parsimony score over all trees in the set. In this paper, we ask: What is the maximum difference between the parsimony score of any phylogenetic tree that is contained in the set of considered trees and a phylogenetic tree whose parsimony score equates to the parsimony score of $N$? Given a gap-free sequence alignment of multi-state characters and a rooted binary level-$k$ phylogenetic network, we use the novel concept of an informative blob to show that this difference is bounded by $k+1$ times the softwired parsimony score of $N$. In particular, the difference is independent of the alignment length and the number of character states. We show that an analogous bound can be obtained for the softwired parsimony score of semi-directed networks, while under parental parsimony on the other hand, such a bound does not hold.
[ { "created": "Thu, 30 May 2024 00:39:40 GMT", "version": "v1" } ]
2024-05-31
[ [ "Döcker", "Janosch", "" ], [ "Linz", "Simone", "" ], [ "Wicke", "Kristina", "" ] ]
In comparison to phylogenetic trees, phylogenetic networks are more suitable to represent complex evolutionary histories of species whose past includes reticulation such as hybridisation or lateral gene transfer. However, the reconstruction of phylogenetic networks remains challenging and computationally expensive due to their intricate structural properties. For example, the small parsimony problem, which is solvable in polynomial time for phylogenetic trees, becomes NP-hard on phylogenetic networks under softwired and parental parsimony, even for a single binary character and structurally constrained networks. To calculate the parsimony score of a phylogenetic network $N$, these two parsimony notions consider different exponential-size sets of phylogenetic trees that can be extracted from $N$ and infer the minimum parsimony score over all trees in the set. In this paper, we ask: What is the maximum difference between the parsimony score of any phylogenetic tree that is contained in the set of considered trees and a phylogenetic tree whose parsimony score equates to the parsimony score of $N$? Given a gap-free sequence alignment of multi-state characters and a rooted binary level-$k$ phylogenetic network, we use the novel concept of an informative blob to show that this difference is bounded by $k+1$ times the softwired parsimony score of $N$. In particular, the difference is independent of the alignment length and the number of character states. We show that an analogous bound can be obtained for the softwired parsimony score of semi-directed networks, while under parental parsimony on the other hand, such a bound does not hold.
1809.01878
Juvid Aryaman
Juvid Aryaman, Iain G. Johnston, Nick S. Jones
Mitochondrial heterogeneity
null
null
10.3389/fgene.2018.00718
null
q-bio.SC
http://creativecommons.org/licenses/by-sa/4.0/
Cell-to-cell heterogeneity drives a range of (patho)physiologically important phenomena, such as cell fate and chemotherapeutic resistance. The role of metabolism, and particularly mitochondria, is increasingly being recognised as an important explanatory factor in cell-to-cell heterogeneity. Most eukaryotic cells possess a population of mitochondria, in the sense that mitochondrial DNA (mtDNA) is held in multiple copies per cell, where the sequence of each molecule can vary. Hence intra-cellular mitochondrial heterogeneity is possible, which can induce inter-cellular mitochondrial heterogeneity, and may drive aspects of cellular noise. In this review, we discuss sources of mitochondrial heterogeneity (variations between mitochondria in the same cell, and mitochondrial variations between supposedly identical cells) from both genetic and non-genetic perspectives, and mitochondrial genotype-phenotype links. We discuss the apparent homeostasis of mtDNA copy number, the observation of pervasive intra-cellular mtDNA mutation (we term `microheteroplasmy') and developments in the understanding of inter-cellular mtDNA mutation (`macroheteroplasmy'). We point to the relationship between mitochondrial supercomplexes, cristal structure, pH and cardiolipin as a potential amplifier of the mitochondrial genotype-phenotype link. We also discuss mitochondrial membrane potential and networks as sources of mitochondrial heterogeneity, and their influence upon the mitochondrial genome. Finally, we revisit the idea of mitochondrial complementation as a means of dampening mitochondrial genotype-phenotype links in light of recent experimental developments. The diverse sources of mitochondrial heterogeneity, as well as their increasingly recognised role in contributing to cellular heterogeneity, highlight the need for future single-cell mitochondrial measurements in the context of cellular noise studies.
[ { "created": "Thu, 6 Sep 2018 08:33:11 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2018 09:22:45 GMT", "version": "v2" } ]
2019-02-11
[ [ "Aryaman", "Juvid", "" ], [ "Johnston", "Iain G.", "" ], [ "Jones", "Nick S.", "" ] ]
Cell-to-cell heterogeneity drives a range of (patho)physiologically important phenomena, such as cell fate and chemotherapeutic resistance. The role of metabolism, and particularly mitochondria, is increasingly being recognised as an important explanatory factor in cell-to-cell heterogeneity. Most eukaryotic cells possess a population of mitochondria, in the sense that mitochondrial DNA (mtDNA) is held in multiple copies per cell, where the sequence of each molecule can vary. Hence intra-cellular mitochondrial heterogeneity is possible, which can induce inter-cellular mitochondrial heterogeneity, and may drive aspects of cellular noise. In this review, we discuss sources of mitochondrial heterogeneity (variations between mitochondria in the same cell, and mitochondrial variations between supposedly identical cells) from both genetic and non-genetic perspectives, and mitochondrial genotype-phenotype links. We discuss the apparent homeostasis of mtDNA copy number, the observation of pervasive intra-cellular mtDNA mutation (we term `microheteroplasmy') and developments in the understanding of inter-cellular mtDNA mutation (`macroheteroplasmy'). We point to the relationship between mitochondrial supercomplexes, cristal structure, pH and cardiolipin as a potential amplifier of the mitochondrial genotype-phenotype link. We also discuss mitochondrial membrane potential and networks as sources of mitochondrial heterogeneity, and their influence upon the mitochondrial genome. Finally, we revisit the idea of mitochondrial complementation as a means of dampening mitochondrial genotype-phenotype links in light of recent experimental developments. The diverse sources of mitochondrial heterogeneity, as well as their increasingly recognised role in contributing to cellular heterogeneity, highlight the need for future single-cell mitochondrial measurements in the context of cellular noise studies.
2105.00267
Tiago Rodrigues
Kuan Lee, Ann Yang, Yen-Chu Lin, Daniel Reker, Goncalo J. L. Bernardes and Tiago Rodrigues
Combating small molecule aggregation with machine learning
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biological screens are plagued by false positive hits resulting from aggregation. Thus, methods to triage small colloidally aggregating molecules (SCAMs) are in high demand. Herein, we disclose a bespoke machine-learning tool to confidently and intelligibly flag such entities. Our data demonstrate an unprecedented utility of machine learning for predicting SCAMs, achieving 80% correct predictions in a challenging out-of-sample validation. The tool outperformed a panel of expert chemists, who correctly predicted 61 +/- 7% of the same test molecules in a Turing-like test. Further, the computational routine provided insight into molecular features governing aggregation that had remained hidden to expert intuition. Leveraging our tool, we quantify that up to 15-20% of ligands in publicly available chemogenomic databases have a high potential to aggregate at typical screening concentrations, imposing caution in systems biology and drug design programs. Our approach provides a means to augment human intuition, mitigate attrition and a pathway to accelerate future molecular medicine.
[ { "created": "Sat, 1 May 2021 14:41:01 GMT", "version": "v1" } ]
2021-05-04
[ [ "Lee", "Kuan", "" ], [ "Yang", "Ann", "" ], [ "Lin", "Yen-Chu", "" ], [ "Reker", "Daniel", "" ], [ "Bernardes", "Goncalo J. L.", "" ], [ "Rodrigues", "Tiago", "" ] ]
Biological screens are plagued by false positive hits resulting from aggregation. Thus, methods to triage small colloidally aggregating molecules (SCAMs) are in high demand. Herein, we disclose a bespoke machine-learning tool to confidently and intelligibly flag such entities. Our data demonstrate an unprecedented utility of machine learning for predicting SCAMs, achieving 80% correct predictions in a challenging out-of-sample validation. The tool outperformed a panel of expert chemists, who correctly predicted 61 +/- 7% of the same test molecules in a Turing-like test. Further, the computational routine provided insight into molecular features governing aggregation that had remained hidden to expert intuition. Leveraging our tool, we quantify that up to 15-20% of ligands in publicly available chemogenomic databases have a high potential to aggregate at typical screening concentrations, imposing caution in systems biology and drug design programs. Our approach provides a means to augment human intuition, mitigate attrition and a pathway to accelerate future molecular medicine.
1912.10262
Lyudmila Kushnir
Lyudmila Kushnir, Sophie Den\`eve
Learning temporal structure of the input with a network of integrate-and-fire neurons
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of the brain is to look for structure in the external input. We study a network of integrate-and-fire neurons with several types of recurrent connections that learns the structure of its time-varying feedforward input by attempting to efficiently represent this input with spikes. The efficiency of the representation arises from incorporating the structure of the input into the decoder, which is implicit in the learned synaptic connectivity of the network. While in the original work of [Boerlin, Machens, Den\`eve 2013] and [Brendel et al., 2017] the structure learned by the network to make the representation efficient was the low-dimensionality of the feedforward input, in the present work it is its temporal dynamics. The network achieves the efficiency by adjusting its synaptic weights in such a way that, for any neuron in the network, the recurrent input cancels the feedforward input for most of the time. We show that if the only temporal structure that the input possesses is that it changes slowly on the time scale of neuronal integration, the dimensionality of the network dynamics is equal to the dimensionality of the input. However, if the input follows a linear differential equation of the first order, the efficiency of the representation can be increased by increasing the dimensionality of the network dynamics in comparison to the dimensionality of the input. If there is only one type of slow synaptic current in the network, the increase is two-fold, while if there are two types of slow synaptic currents that decay with different rates and whose amplitudes can be adjusted separately, it is advantageous to make the increase three-fold. We numerically simulate the network with synaptic weights that imply the most efficient input representation in the above cases. We also propose a learning rule by means of which the corresponding synaptic weights can be learned.
[ { "created": "Sat, 21 Dec 2019 13:04:30 GMT", "version": "v1" }, { "created": "Sat, 28 Dec 2019 23:11:50 GMT", "version": "v2" }, { "created": "Tue, 6 Oct 2020 19:50:24 GMT", "version": "v3" }, { "created": "Sun, 11 Oct 2020 18:34:22 GMT", "version": "v4" } ]
2020-10-13
[ [ "Kushnir", "Lyudmila", "" ], [ "Denève", "Sophie", "" ] ]
The task of the brain is to look for structure in the external input. We study a network of integrate-and-fire neurons with several types of recurrent connections that learns the structure of its time-varying feedforward input by attempting to efficiently represent this input with spikes. The efficiency of the representation arises from incorporating the structure of the input into the decoder, which is implicit in the learned synaptic connectivity of the network. While in the original work of [Boerlin, Machens, Den\`eve 2013] and [Brendel et al., 2017] the structure learned by the network to make the representation efficient was the low-dimensionality of the feedforward input, in the present work it is its temporal dynamics. The network achieves the efficiency by adjusting its synaptic weights in such a way that, for any neuron in the network, the recurrent input cancels the feedforward input most of the time. We show that if the only temporal structure that the input possesses is that it changes slowly on the time scale of neuronal integration, the dimensionality of the network dynamics is equal to the dimensionality of the input. However, if the input follows a first-order linear differential equation, the efficiency of the representation can be increased by increasing the dimensionality of the network dynamics relative to the dimensionality of the input. If there is only one type of slow synaptic current in the network, the increase is two-fold, while if there are two types of slow synaptic currents that decay at different rates and whose amplitudes can be adjusted separately, it is advantageous to make the increase three-fold. We numerically simulate the network with synaptic weights that imply the most efficient input representation in the above cases. We also propose a learning rule by means of which the corresponding synaptic weights can be learned.
0802.0522
Jeremy England
Jeremy L. England and Vijay S. Pande
Potential for modulation of the hydrophobic effect inside chaperonins
null
null
10.1529/biophysj.108.131037
null
q-bio.BM
null
Despite the spontaneity of some in vitro protein folding reactions, native folding in vivo often requires the participation of barrel-shaped multimeric complexes known as chaperonins. Although it has long been known that chaperonin substrates fold upon sequestration inside the chaperonin barrel, the precise mechanism by which confinement within this space facilitates folding remains unknown. In this study, we examine the possibility that the chaperonin mediates a favorable reorganization of the solvent for the folding reaction. We begin by discussing the effect of electrostatic charge on solvent-mediated hydrophobic forces in an aqueous environment. Based on these initial physical arguments, we construct a simple, phenomenological theory for the thermodynamics of density and hydrogen bond order fluctuations in liquid water. Within the framework of this model, we investigate the effect of confinement within a chaperonin-like cavity on the configurational free energy of water by calculating solvent free energies for cavities corresponding to the different conformational states in the ATP-driven catalytic cycle of the prokaryotic chaperonin GroEL. Our findings suggest that one function of chaperonins may be to trap unfolded proteins and subsequently expose them to a micro-environment in which the hydrophobic effect, a crucial thermodynamic driving force for folding, is enhanced.
[ { "created": "Tue, 5 Feb 2008 00:10:39 GMT", "version": "v1" } ]
2009-11-13
[ [ "England", "Jeremy L.", "" ], [ "Pande", "Vijay S.", "" ] ]
Despite the spontaneity of some in vitro protein folding reactions, native folding in vivo often requires the participation of barrel-shaped multimeric complexes known as chaperonins. Although it has long been known that chaperonin substrates fold upon sequestration inside the chaperonin barrel, the precise mechanism by which confinement within this space facilitates folding remains unknown. In this study, we examine the possibility that the chaperonin mediates a favorable reorganization of the solvent for the folding reaction. We begin by discussing the effect of electrostatic charge on solvent-mediated hydrophobic forces in an aqueous environment. Based on these initial physical arguments, we construct a simple, phenomenological theory for the thermodynamics of density and hydrogen bond order fluctuations in liquid water. Within the framework of this model, we investigate the effect of confinement within a chaperonin-like cavity on the configurational free energy of water by calculating solvent free energies for cavities corresponding to the different conformational states in the ATP-driven catalytic cycle of the prokaryotic chaperonin GroEL. Our findings suggest that one function of chaperonins may be to trap unfolded proteins and subsequently expose them to a micro-environment in which the hydrophobic effect, a crucial thermodynamic driving force for folding, is enhanced.