Dataset schema (per-column type and length/class statistics):

  id              string (length 9 to 13)
  submitter       string (length 4 to 48)
  authors         string (length 4 to 9.62k)
  title           string (length 4 to 343)
  comments        string (length 2 to 480)
  journal-ref     string (length 9 to 309)
  doi             string (length 12 to 138)
  report-no       string (277 classes)
  categories      string (length 8 to 87)
  license         string (9 classes)
  orig_abstract   string (length 27 to 3.76k)
  versions        list (length 1 to 15)
  update_date     string (length 10)
  authors_parsed  list (length 1 to 147)
  abstract        string (length 24 to 3.75k)
1712.02965
Steven Kelk
Janosch D\"ocker, Leo van Iersel, Steven Kelk, Simone Linz
Deciding the existence of a cherry-picking sequence is hard on two trees
Fixed some tiny things. Accepted for journal publication
Discrete Applied Mathematics, 260:131-143, 2019
10.1016/j.dam.2019.01.031
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we show that deciding whether two rooted binary phylogenetic trees on the same set of taxa permit a cherry-picking sequence, a special type of elimination order on the taxa, is NP-complete. This improves on an earlier result which proved hardness for eight or more trees. Via a known equivalence between cherry-picking sequences and temporal phylogenetic networks, our result proves that it is NP-complete to determine the existence of a temporal phylogenetic network that contains topological embeddings of both trees. The hardness result also greatly strengthens previous inapproximability results for the minimum temporal-hybridization number problem. This is the optimization version of the problem where we wish to construct a temporal phylogenetic network that topologically embeds two given rooted binary phylogenetic trees and that has a minimum number of indegree-2 nodes, which represent events such as hybridization and horizontal gene transfer. We end on a positive note, pointing out that fixed-parameter tractability results in this area are likely to ensure the continued relevance of the temporal phylogenetic network model.
[ { "created": "Fri, 8 Dec 2017 06:53:22 GMT", "version": "v1" }, { "created": "Fri, 25 Jan 2019 17:02:50 GMT", "version": "v2" } ]
2021-04-13
[ [ "Döcker", "Janosch", "" ], [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ], [ "Linz", "Simone", "" ] ]
Here we show that deciding whether two rooted binary phylogenetic trees on the same set of taxa permit a cherry-picking sequence, a special type of elimination order on the taxa, is NP-complete. This improves on an earlier result which proved hardness for eight or more trees. Via a known equivalence between cherry-picking sequences and temporal phylogenetic networks, our result proves that it is NP-complete to determine the existence of a temporal phylogenetic network that contains topological embeddings of both trees. The hardness result also greatly strengthens previous inapproximability results for the minimum temporal-hybridization number problem. This is the optimization version of the problem where we wish to construct a temporal phylogenetic network that topologically embeds two given rooted binary phylogenetic trees and that has a minimum number of indegree-2 nodes, which represent events such as hybridization and horizontal gene transfer. We end on a positive note, pointing out that fixed-parameter tractability results in this area are likely to ensure the continued relevance of the temporal phylogenetic network model.
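A cherry-picking sequence is an ordering of the taxa in which each taxon, at the moment it is picked, forms a cherry (is adjacent to another leaf) in every tree; the picked leaf is then deleted and its degree-2 parent suppressed. Since the paper shows the decision problem is NP-complete already for two trees, brute force over all orderings is a natural baseline on toy inputs. The sketch below is our own illustration (tuple-encoded rooted binary trees, hypothetical helper names), not the authors' construction.

```python
from itertools import permutations

def leaves(tree):
    """Leaf set of a tuple-encoded rooted binary tree (leaves are strings)."""
    return {tree} if isinstance(tree, str) else leaves(tree[0]) | leaves(tree[1])

def in_cherry(tree, leaf):
    """True if `leaf` is the single remaining leaf or has a leaf as sibling."""
    if isinstance(tree, str):
        return tree == leaf
    left, right = tree
    if leaf in (left, right):
        sibling = right if left == leaf else left
        return isinstance(sibling, str)
    return in_cherry(left, leaf) or in_cherry(right, leaf)

def delete_leaf(tree, leaf):
    """Delete `leaf`; the degree-2 parent is suppressed automatically."""
    if isinstance(tree, str):
        return None if tree == leaf else tree
    left, right = delete_leaf(tree[0], leaf), delete_leaf(tree[1], leaf)
    return right if left is None else left if right is None else (left, right)

def cherry_picking_sequence(t1, t2):
    """Return an order picking each taxon from a cherry in both trees, or None."""
    for order in permutations(sorted(leaves(t1))):
        a, b, ok = t1, t2, True
        for x in order[:-1]:                 # the last remaining leaf is always pickable
            if not (in_cherry(a, x) and in_cherry(b, x)):
                ok = False
                break
            a, b = delete_leaf(a, x), delete_leaf(b, x)
        if ok:
            return order
    return None

t1 = ((("a", "b"), "c"), "d")
t2 = (("a", "b"), ("c", "d"))
print(cherry_picking_sequence(t1, t2))       # ('a', 'c', 'b', 'd')
```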
2006.16525
Nishchal Sapkota
Nishchal Sapkota, Rimsha Bhatta, Phillip Dabney, Zhifu Xie
Hunting Co-operation in the Middle Predator in Three Species Food Chain Model
null
2019 Proceedings of LA/MS Section of Mathematical Association of America
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a three-species food chain model with hunting cooperation in the middle predator. In this model, the third species preys on the middle species, and the middle species preys on the first. Hunting cooperation in the middle predator has interesting effects on the numbers of both the predators and the prey. We examine the linear stability of the model theoretically and numerically, and we conduct a two-parameter numerical analysis to check the long-term behavior and the change in the number of each species with respect to hunting cooperation. Our findings support the postulates from the two-species food chain model with hunting cooperation.
[ { "created": "Tue, 30 Jun 2020 04:42:56 GMT", "version": "v1" } ]
2020-07-01
[ [ "Sapkota", "Nishchal", "" ], [ "Bhatta", "Rimsha", "" ], [ "Dabney", "Phillip", "" ], [ "Xie", "Zhifu", "" ] ]
We propose a three-species food chain model with hunting cooperation in the middle predator. In this model, the third species preys on the middle species, and the middle species preys on the first. Hunting cooperation in the middle predator has interesting effects on the numbers of both the predators and the prey. We examine the linear stability of the model theoretically and numerically, and we conduct a two-parameter numerical analysis to check the long-term behavior and the change in the number of each species with respect to hunting cooperation. Our findings support the postulates from the two-species food chain model with hunting cooperation.
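For readers who want to experiment, here is a minimal sketch of one plausible formulation: logistic prey x, a middle predator y whose attack rate grows with its own density via the cooperation term (lam + a*y)*x*y (in the spirit of two-species hunting-cooperation models), and a top predator z. The equations and parameter values are our assumptions for illustration, not necessarily the authors' exact model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chain(t, s, r=1.0, K=1.0, lam=0.5, a=0.6, e1=0.6, m1=0.2, b=0.4, e2=0.5, m2=0.15):
    x, y, z = s
    capture = (lam + a * y) * x * y      # cooperative hunting by the middle predator
    dx = r * x * (1 - x / K) - capture
    dy = e1 * capture - m1 * y - b * y * z
    dz = e2 * b * y * z - m2 * z
    return [dx, dy, dz]

sol = solve_ivp(chain, (0, 500), [0.8, 0.2, 0.1], rtol=1e-8)
print(sol.y[:, -1])   # long-term state; vary `a` to probe the cooperation effect
```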
2008.07488
Rui Wang
Rui Wang, Yuta Hozumi, Yong-Hui Zheng, Changchuan Yin and Guo-Wei Wei
Host immune response driving SARS-CoV-2 evolution
22 pages, 15 figures
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The transmission and evolution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are of paramount importance for controlling and combating the coronavirus disease 2019 (COVID-19) pandemic. Currently, nearly 15,000 SARS-CoV-2 single mutations have been recorded, with great ramifications for the development of diagnostics, vaccines, antibody therapies, and drugs. However, little is known about SARS-CoV-2 evolutionary characteristics and general trends. In this work, we present a comprehensive genotyping analysis of existing SARS-CoV-2 mutations. We reveal that host immune response via APOBEC and ADAR gene editing gives rise to nearly 65\% of the recorded mutations. Additionally, we show that children under age five and the elderly may be at high risk from COVID-19 because they overreact to the viral infection. Moreover, we uncover that populations in Oceania and Africa react significantly more intensively to SARS-CoV-2 infection than those in Europe and Asia, which may explain why African Americans were shown to be at increased risk of dying from COVID-19, in addition to their high risk of getting sick from COVID-19 caused by systemic health and social inequities. Finally, our study indicates that, for two viral genome sequences of the same origin, their evolutionary order may be determined from the ratio of mutation type C$>$T over T$>$C.
[ { "created": "Mon, 17 Aug 2020 17:31:20 GMT", "version": "v1" }, { "created": "Thu, 20 Aug 2020 16:18:58 GMT", "version": "v2" } ]
2020-08-21
[ [ "Wang", "Rui", "" ], [ "Hozumi", "Yuta", "" ], [ "Zheng", "Yong-Hui", "" ], [ "Yin", "Changchuan", "" ], [ "Wei", "Guo-Wei", "" ] ]
The transmission and evolution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are of paramount importance for controlling and combating the coronavirus disease 2019 (COVID-19) pandemic. Currently, nearly 15,000 SARS-CoV-2 single mutations have been recorded, with great ramifications for the development of diagnostics, vaccines, antibody therapies, and drugs. However, little is known about SARS-CoV-2 evolutionary characteristics and general trends. In this work, we present a comprehensive genotyping analysis of existing SARS-CoV-2 mutations. We reveal that host immune response via APOBEC and ADAR gene editing gives rise to nearly 65\% of the recorded mutations. Additionally, we show that children under age five and the elderly may be at high risk from COVID-19 because they overreact to the viral infection. Moreover, we uncover that populations in Oceania and Africa react significantly more intensively to SARS-CoV-2 infection than those in Europe and Asia, which may explain why African Americans were shown to be at increased risk of dying from COVID-19, in addition to their high risk of getting sick from COVID-19 caused by systemic health and social inequities. Finally, our study indicates that, for two viral genome sequences of the same origin, their evolutionary order may be determined from the ratio of mutation type C$>$T over T$>$C.
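The final claim suggests a simple diagnostic: against a common reference, count C>T and T>C substitutions in each genome and compare the ratios. The sketch below (function names and toy sequences are ours) illustrates only the counting step, not the paper's full genotyping analysis.

```python
from collections import Counter

def substitution_counts(ref, seq):
    """Count substitution types between an aligned reference and a genome."""
    counts = Counter()
    for r, s in zip(ref, seq):
        if r != s and r in "ACGT" and s in "ACGT":
            counts[f"{r}>{s}"] += 1
    return counts

def ct_over_tc(ref, seq):
    counts = substitution_counts(ref, seq)
    return counts["C>T"] / max(counts["T>C"], 1)   # guard against division by zero

ref = "CCCCTTTT"
early, late = "TCCCTTTT", "TTCCTTTT"               # toy genomes of the same origin
print(ct_over_tc(ref, early), ct_over_tc(ref, late))   # 1.0 then 2.0
```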
1202.4647
Manuela Capello
M. Capello, M. Soria, P. Cotel, G. Potin, L. Dagorn and P. Fr\'eon
The heterogeneous spatial and temporal patterns of behavior of small pelagic fish in an array of Fish Aggregating Devices (FADs)
21 pages, 9 figures + 2 supplementary figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying spatial and temporal patterns can reveal the driving factors that govern the behavior of fish in their environment. In this study, we characterized the spatial and temporal occupation of 37 acoustically tagged bigeye scads (Selar crumenophthalmus) in an array of shallow Fish Aggregating Devices (FADs) to clarify the mechanism that leads fish to associate with FADs. A comparison of the number of visits and residence times exhibited by the fish at different FADs revealed strong variability over the array, with the emergence of a leading FAD, which recorded the majority of visits and retained the fish for longer periods of time. We found diel variability in the residence times, with fish associating with FADs during the daytime and exploring the array at nighttime. We demonstrated that this diel temporal pattern was amplified at the leading FAD. We identified a 24-hour periodicity for a subset of individuals aggregated at the leading FAD, suggesting that those fish were able to find this FAD again after night excursions. Modeling fish movements based on a Monte Carlo sampling of inter-FAD transitions revealed that the observed spatial heterogeneity in the number of visits could not be explained through simple array-connectivity arguments. Similarly, we demonstrated that the high residence times recorded at the leading FAD were not due to the spatial arrangement of individual fish having different associative characters. We discuss the relationships between these patterns of association with the FADs, the exploration of the FAD array, and the possible effects of social interactions and environmental factors.
[ { "created": "Mon, 20 Feb 2012 10:28:58 GMT", "version": "v1" } ]
2012-02-22
[ [ "Capello", "M.", "" ], [ "Soria", "M.", "" ], [ "Cotel", "P.", "" ], [ "Potin", "G.", "" ], [ "Dagorn", "L.", "" ], [ "Fréon", "P.", "" ] ]
Identifying spatial and temporal patterns can reveal the driving factors that govern the behavior of fish in their environment. In this study, we characterized the spatial and temporal occupation of 37 acoustically tagged bigeye scads (Selar crumenophthalmus) in an array of shallow Fish Aggregating Devices (FADs) to clarify the mechanism that leads fish to associate with FADs. A comparison of the number of visits and residence times exhibited by the fish at different FADs revealed strong variability over the array, with the emergence of a leading FAD, which recorded the majority of visits and retained the fish for longer periods of time. We found diel variability in the residence times, with fish associating with FADs during the daytime and exploring the array at nighttime. We demonstrated that this diel temporal pattern was amplified at the leading FAD. We identified a 24-hour periodicity for a subset of individuals aggregated at the leading FAD, suggesting that those fish were able to find this FAD again after night excursions. Modeling fish movements based on a Monte Carlo sampling of inter-FAD transitions revealed that the observed spatial heterogeneity in the number of visits could not be explained through simple array-connectivity arguments. Similarly, we demonstrated that the high residence times recorded at the leading FAD were not due to the spatial arrangement of individual fish having different associative characters. We discuss the relationships between these patterns of association with the FADs, the exploration of the FAD array, and the possible effects of social interactions and environmental factors.
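As a rough illustration of such a null model, the FADs can be treated as states of a Markov chain and visits simulated by Monte Carlo; under connectivity alone, visit counts follow the chain's stationary distribution rather than concentrating on one leading FAD. The transition matrix below is invented for illustration, whereas the paper samples the transitions observed in the telemetry data.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.0, 0.5, 0.5],    # row i: probabilities of moving from FAD i to FAD j
              [0.6, 0.0, 0.4],
              [0.7, 0.3, 0.0]])

def simulate_visits(P, n_steps=10_000, start=0):
    visits = np.zeros(len(P), dtype=int)
    state = start
    for _ in range(n_steps):
        state = rng.choice(len(P), p=P[state])
        visits[state] += 1
    return visits

print(simulate_visits(P))   # compare with the observed visit counts per FAD
```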
1602.05189
Krzysztof Bartoszek
Krzysztof Bartoszek
A Central Limit Theorem for Punctuated Equilibrium
null
Stochastic Models 36:3, 473-517, 2020
10.1080/15326349.2020.1752242
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current evolutionary biology models usually assume that a phenotype undergoes gradual change. This is in stark contrast to biological intuition, which indicates that change can also be punctuated: the phenotype can jump. Such a jump can occur in particular at speciation, i.e., a dramatic change that drives the species apart. Here we derive a Central Limit Theorem for punctuated equilibrium. We show that, if adaptation is fast, then for weak convergence to normality to hold, the variability in the occurrence of change has to disappear with time.
[ { "created": "Tue, 16 Feb 2016 17:51:50 GMT", "version": "v1" }, { "created": "Wed, 11 May 2016 14:49:33 GMT", "version": "v2" }, { "created": "Thu, 16 Feb 2017 10:48:21 GMT", "version": "v3" }, { "created": "Wed, 2 Aug 2017 11:04:29 GMT", "version": "v4" }, { "created": "Sun, 24 Nov 2019 21:12:43 GMT", "version": "v5" }, { "created": "Thu, 2 Apr 2020 20:50:45 GMT", "version": "v6" } ]
2020-08-07
[ [ "Bartoszek", "Krzysztof", "" ] ]
Current evolutionary biology models usually assume that a phenotype undergoes gradual change. This is in stark contrast to biological intuition, which indicates that change can also be punctuated: the phenotype can jump. Such a jump can occur in particular at speciation, i.e., a dramatic change that drives the species apart. Here we derive a Central Limit Theorem for punctuated equilibrium. We show that, if adaptation is fast, then for weak convergence to normality to hold, the variability in the occurrence of change has to disappear with time.
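For orientation, punctuated models of this kind are typically formalized as a diffusion between speciation events with a jump applied at speciation; one hedged rendering (notation ours, not necessarily the paper's exact setup) is

\[
\mathrm{d}X_t = -\alpha\,(X_t - \theta)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t,
\qquad
X_{\tau^+} = X_{\tau^-} + \mathbf{1}_{\{\text{jump at }\tau\}}\, Z_\tau ,
\]

where \(\tau\) runs over the speciation times of the tree and the random indicator is the "variability in the occurrence of change" that, per the theorem, must vanish with time for the normalized average of tip values to converge weakly to a normal limit.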
2307.01289
Mrinal Pandey
Mrinal Pandey, Young Joon Suh, Minha Kim, Hannah Jane Davis, Jeffrey E Segall, and Mingming Wu
Mechanical compression regulates tumor spheroid invasion into a 3D collagen matrix
10 pages, 5 figures and 3 supplementary figures. Phys. Biol. 2024
null
10.1088/1478-3975/ad3ac5
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Uncontrolled growth of tumor cells in confined spaces leads to the accumulation of compressive stress within the tumor. Although the effects of tension within 3D extracellular matrices on tumor growth and invasion are well established, the role of compression in tumor mechanics and invasion is largely unexplored. In this study, we modified a Transwell assay such that it provides constant compressive loads to spheroids embedded within a collagen matrix. We used microscopic imaging to follow the single-cell dynamics of the cells within the spheroids, as well as their invasion into the 3D extracellular matrices (ECMs). Our experimental results showed that malignant breast tumor (MDA-MB-231) and non-tumorigenic epithelial (MCF10A) spheroids responded differently to a constant compression. Cells within the malignant spheroids became more motile and invaded more into the ECM under compression, whereas cells within non-tumorigenic MCF10A spheroids became less motile within the spheroids and did not display apparent detachment from the spheroids under compression. These findings suggest that compression may play differential roles in healthy and pathogenic epithelial tissues and highlight the importance of mechanics in tumor invasion.
[ { "created": "Mon, 3 Jul 2023 18:33:28 GMT", "version": "v1" } ]
2024-04-08
[ [ "Pandey", "Mrinal", "" ], [ "Suh", "Young Joon", "" ], [ "Kim", "Minha", "" ], [ "Davis", "Hannah Jane", "" ], [ "Segall", "Jeffrey E", "" ], [ "Wu", "Mingming", "" ] ]
Uncontrolled growth of tumor cells in confined spaces leads to the accumulation of compressive stress within the tumor. Although the effects of tension within 3D extracellular matrices on tumor growth and invasion are well established, the role of compression in tumor mechanics and invasion is largely unexplored. In this study, we modified a Transwell assay such that it provides constant compressive loads to spheroids embedded within a collagen matrix. We used microscopic imaging to follow the single-cell dynamics of the cells within the spheroids, as well as their invasion into the 3D extracellular matrices (ECMs). Our experimental results showed that malignant breast tumor (MDA-MB-231) and non-tumorigenic epithelial (MCF10A) spheroids responded differently to a constant compression. Cells within the malignant spheroids became more motile and invaded more into the ECM under compression, whereas cells within non-tumorigenic MCF10A spheroids became less motile within the spheroids and did not display apparent detachment from the spheroids under compression. These findings suggest that compression may play differential roles in healthy and pathogenic epithelial tissues and highlight the importance of mechanics in tumor invasion.
1402.0468
Michael Deem
Michael W. Deem
Evolution: Life has Evolved to Evolve
3 pages
Phys. Life Rev. 10 (2013) 333-335
10.1016/j.plrev.2013.07.010
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Jim Shapiro synthesizes a great many observations about the mechanisms of evolution to reach the remarkable conclusion that large-scale modification, exchange, and rearrangement of the genome are common and should be viewed as fundamental features of life. In other words, the genome should be viewed not as mostly read-only with a few rare mutations, but rather as a fully-fledged read-write library of genetic functions under continuous revision. Revision of the genome occurs during cellular replication, during multicellular development, and during evolution of a population of individuals. DNA formatting controls the timing and location of genetic rearrangements, gene expression, and genetic repair. Each of these events is under the control of precise cellular circuits. Shapiro reviews the toolbox of natural genetic engineering that provides the functionalities necessary for efficient long-term genome restructuring.
[ { "created": "Thu, 30 Jan 2014 03:11:32 GMT", "version": "v1" } ]
2015-06-18
[ [ "Deem", "Michael W.", "" ] ]
Jim Shapiro synthesizes a great many observations about the mechanisms of evolution to reach the remarkable conclusion that large-scale modification, exchange, and rearrangement of the genome are common and should be viewed as fundamental features of life. In other words, the genome should be viewed not as mostly read-only with a few rare mutations, but rather as a fully-fledged read-write library of genetic functions under continuous revision. Revision of the genome occurs during cellular replication, during multicellular development, and during evolution of a population of individuals. DNA formatting controls the timing and location of genetic rearrangements, gene expression, and genetic repair. Each of these events is under the control of precise cellular circuits. Shapiro reviews the toolbox of natural genetic engineering that provides the functionalities necessary for efficient long-term genome restructuring.
2112.03238
Hins Shaheen
Hina Shaheen, Roderick Melnik
Deep brain stimulation with a computational model for the cortex-thalamus-basal-ganglia system and network dynamics of neurological disorders
15 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep brain stimulation (DBS) can alleviate movement disorders such as Parkinson's disease (PD). Indeed, it is known that aberrant beta (13-30 Hz) oscillations and the loss of dopaminergic neurons in the basal ganglia-thalamus (BGTH) and cortex characterize the akinesia symptoms of PD. However, the relevant biophysical mechanism behind this process remains unclear. Building on a prior striatal inhibitory model, we propose an extended BGTH model incorporating medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) along with the effect of DBS. We focus in this paper on an open-loop DBS mode, where the stimulation parameters stay constant independent of variations in the disease state, and modifications of parameters rely mainly on the trial and error of medical experts. Additionally, we propose a novel combined model of the cerebellar-basal-ganglia thalamocortical network, MSNs, and FSIs, and present new results indicating that Parkinsonian oscillations in the beta-band frequency range emerge from the dynamics of such a network. Our model predicts that DBS can be used to suppress beta oscillations in globus pallidus pars interna (GPi) neurons. This research will improve our understanding of the changes in brain activity caused by DBS, providing new insight for studying PD in the future.
[ { "created": "Mon, 6 Dec 2021 18:46:42 GMT", "version": "v1" } ]
2021-12-07
[ [ "Shaheen", "Hina", "" ], [ "Melnik", "Roderick", "" ] ]
Deep brain stimulation (DBS) can alleviate movement disorders such as Parkinson's disease (PD). Indeed, it is known that aberrant beta (13-30 Hz) oscillations and the loss of dopaminergic neurons in the basal ganglia-thalamus (BGTH) and cortex characterize the akinesia symptoms of PD. However, the relevant biophysical mechanism behind this process remains unclear. Building on a prior striatal inhibitory model, we propose an extended BGTH model incorporating medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) along with the effect of DBS. We focus in this paper on an open-loop DBS mode, where the stimulation parameters stay constant independent of variations in the disease state, and modifications of parameters rely mainly on the trial and error of medical experts. Additionally, we propose a novel combined model of the cerebellar-basal-ganglia thalamocortical network, MSNs, and FSIs, and present new results indicating that Parkinsonian oscillations in the beta-band frequency range emerge from the dynamics of such a network. Our model predicts that DBS can be used to suppress beta oscillations in globus pallidus pars interna (GPi) neurons. This research will improve our understanding of the changes in brain activity caused by DBS, providing new insight for studying PD in the future.
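Concretely, "open-loop" means the stimulus waveform is fixed in advance. A minimal sketch of such a DBS input, as a periodic rectangular pulse train whose amplitude, frequency, and pulse width are set a priori (the numbers below are illustrative, not the paper's values):

```python
import numpy as np

def dbs_current(t, amplitude=300.0, freq_hz=130.0, width_s=6e-5):
    """Open-loop I_DBS(t): fixed-parameter rectangular pulses (e.g. 130 Hz)."""
    return amplitude if (t % (1.0 / freq_hz)) < width_s else 0.0

t = np.arange(0.0, 0.1, 1e-5)                        # 100 ms at 10 us resolution
i_dbs = np.array([dbs_current(ti) for ti in t])
print(int((np.diff(i_dbs) > 0).sum()), "pulses in 100 ms")   # ~13 at 130 Hz
```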
1208.2238
Sriram Sankararaman
Sriram Sankararaman, Nick Patterson, Heng Li, Svante P\"a\"abo, David Reich
The date of interbreeding between Neandertals and modern humans
null
null
null
null
q-bio.PE stat.AP
http://creativecommons.org/licenses/by/3.0/
Comparisons of DNA sequences between Neandertals and present-day humans have shown that Neandertals share more genetic variants with non-Africans than with Africans. This could be due to interbreeding between Neandertals and modern humans when the two groups met subsequent to the emergence of modern humans outside Africa. However, it could also be due to population structure that antedates the origin of Neandertal ancestors in Africa. We measure the extent of linkage disequilibrium (LD) in the genomes of present-day Europeans and find that the last gene flow from Neandertals (or their relatives) into Europeans likely occurred 37,000-86,000 years before the present (BP), and most likely 47,000-65,000 years ago. This supports the recent interbreeding hypothesis, and suggests that interbreeding may have occurred when modern humans carrying Upper Paleolithic technologies encountered Neandertals as they expanded out of Africa.
[ { "created": "Fri, 10 Aug 2012 18:15:01 GMT", "version": "v1" } ]
2012-08-13
[ [ "Sankararaman", "Sriram", "" ], [ "Patterson", "Nick", "" ], [ "Li", "Heng", "" ], [ "Pääbo", "Svante", "" ], [ "Reich", "David", "" ] ]
Comparisons of DNA sequences between Neandertals and present-day humans have shown that Neandertals share more genetic variants with non-Africans than with Africans. This could be due to interbreeding between Neandertals and modern humans when the two groups met subsequent to the emergence of modern humans outside Africa. However, it could also be due to population structure that antedates the origin of Neandertal ancestors in Africa. We measure the extent of linkage disequilibrium (LD) in the genomes of present-day Europeans and find that the last gene flow from Neandertals (or their relatives) into Europeans likely occurred 37,000-86,000 years before the present (BP), and most likely 47,000-65,000 years ago. This supports the recent interbreeding hypothesis, and suggests that interbreeding may have occurred when modern humans carrying Upper Paleolithic technologies encountered Neandertals as they expanded out of Africa.
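The dating logic can be sketched in a few lines: admixture-induced LD between marker pairs decays approximately as exp(-n d), with d the genetic distance in Morgans and n the number of generations since gene flow, so fitting an exponential to LD versus distance yields a date. The data below are synthetic stand-ins, not the study's statistics.

```python
import numpy as np
from scipy.optimize import curve_fit

def ld_decay(d, amplitude, n_generations, baseline):
    return amplitude * np.exp(-n_generations * d) + baseline

d = np.linspace(0.0001, 0.01, 200)           # genetic distance (Morgans)
true_n = 2000                                 # e.g. ~58 kyr at 29 yr/generation
ld = ld_decay(d, 1.0, true_n, 0.01) + np.random.default_rng(1).normal(0, 0.01, d.size)

params, _ = curve_fit(ld_decay, d, ld, p0=(1.0, 500, 0.0))
print(f"estimated generations since gene flow: {params[1]:.0f}")
```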
1504.00621
Jinzhi Lei JL
Wenjun Xia, Jinzhi Lei
aa-tRNA competition is crucial for the effective translation efficiency
8 pages, 10 figures
null
10.3934/mbe.2018023
null
q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Translation is a central biological process by which proteins are synthesized from the genetic information contained within mRNAs. Here we study the kinetics of translation at the molecular level through a stochastic simulation model. The model explicitly includes RNA sequences, ribosome dynamics, the tRNA pool, and the biochemical reactions involved in translation elongation. The results show that translation efficiency is mainly limited by the available ribosome number, translation initiation, and the translation elongation time. The elongation time follows a log-normal distribution, with mean and variance determined by both the codon saturation and the process of aa-tRNA selection at each codon binding. Moreover, our simulations show that translation accuracy decreases exponentially with the sequence length. These results suggest that aa-tRNA competition is crucial for translation elongation, translation efficiency, and accuracy, which in turn determine the effective production rate of correct proteins. Our results refine the dynamical equation of protein production with a delay differential equation that depends on sequence information through both the effective production rate and the distribution of elongation times.
[ { "created": "Tue, 31 Mar 2015 01:46:12 GMT", "version": "v1" } ]
2020-02-18
[ [ "Xia", "Wenjun", "" ], [ "Lei", "Jinzhi", "" ] ]
Translation is a central biological process by which proteins are synthesized from the genetic information contained within mRNAs. Here we study the kinetics of translation at the molecular level through a stochastic simulation model. The model explicitly includes RNA sequences, ribosome dynamics, the tRNA pool, and the biochemical reactions involved in translation elongation. The results show that translation efficiency is mainly limited by the available ribosome number, translation initiation, and the translation elongation time. The elongation time follows a log-normal distribution, with mean and variance determined by both the codon saturation and the process of aa-tRNA selection at each codon binding. Moreover, our simulations show that translation accuracy decreases exponentially with the sequence length. These results suggest that aa-tRNA competition is crucial for translation elongation, translation efficiency, and accuracy, which in turn determine the effective production rate of correct proteins. Our results refine the dynamical equation of protein production with a delay differential equation that depends on sequence information through both the effective production rate and the distribution of elongation times.
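To convey the competition mechanism, here is a toy stochastic sketch (ours, with illustrative rates): at each codon, cognate and near-cognate ternary complexes arrive in proportion to their abundances, and only cognate arrivals elongate, so near-cognate traffic stretches and skews the per-codon dwell time and hence the total elongation time.

```python
import numpy as np

rng = np.random.default_rng(0)

def codon_dwell_time(cognate, near, k_bind=0.05):
    """Waiting time until a cognate aa-tRNA is accepted at one codon."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / (k_bind * (cognate + near)))   # next arrival
        if rng.random() < cognate / (cognate + near):
            return t              # cognate accepted; near-cognates are rejected

orf_times = [sum(codon_dwell_time(10.0, 40.0) for _ in range(300))
             for _ in range(500)]  # elongation times for a 300-codon ORF
print(np.mean(orf_times), np.std(orf_times))
```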
2205.11243
Zhuoyan Xu
Zhuoyan Xu, Kris Sankaran
Spatial Transcriptomics Dimensionality Reduction using Wavelet Bases
null
null
null
null
q-bio.GN cs.LG stat.AP
http://creativecommons.org/licenses/by/4.0/
Spatially resolved transcriptomics (ST) measures gene expression along with the spatial coordinates of the measurements. The analysis of ST data involves significant computational complexity. In this work, we propose a gene expression dimensionality reduction algorithm that retains spatial structure. We combine the wavelet transformation with matrix factorization to select spatially varying genes, and we extract a low-dimensional representation of these genes. We consider an empirical Bayes setting, imposing regularization through the prior distribution of factor genes. Additionally, we provide visualizations of the extracted representation genes, capturing the global spatial pattern. We illustrate the performance of our method through spatial structure recovery and gene expression reconstruction in simulation. In real data experiments, our method identifies the spatial structure of gene factors and outperforms regular decomposition in terms of reconstruction error. We draw a connection between the fluctuation of gene patterns and the wavelet technique, providing smoother visualizations. We develop a package and share a workflow generating reproducible quantitative results and gene visualizations. The package is available at https://github.com/OliverXUZY/waveST.
[ { "created": "Thu, 19 May 2022 04:13:51 GMT", "version": "v1" } ]
2022-05-24
[ [ "Xu", "Zhuoyan", "" ], [ "Sankaran", "Kris", "" ] ]
Spatially resolved transcriptomics (ST) measures gene expression along with the spatial coordinates of the measurements. The analysis of ST data involves significant computational complexity. In this work, we propose a gene expression dimensionality reduction algorithm that retains spatial structure. We combine the wavelet transformation with matrix factorization to select spatially varying genes, and we extract a low-dimensional representation of these genes. We consider an empirical Bayes setting, imposing regularization through the prior distribution of factor genes. Additionally, we provide visualizations of the extracted representation genes, capturing the global spatial pattern. We illustrate the performance of our method through spatial structure recovery and gene expression reconstruction in simulation. In real data experiments, our method identifies the spatial structure of gene factors and outperforms regular decomposition in terms of reconstruction error. We draw a connection between the fluctuation of gene patterns and the wavelet technique, providing smoother visualizations. We develop a package and share a workflow generating reproducible quantitative results and gene visualizations. The package is available at https://github.com/OliverXUZY/waveST.
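A minimal Python sketch of the same pipeline shape follows (the real implementation is the package linked above; the names and the plain truncated SVD standing in for the empirical-Bayes-regularized factorization are our simplifications): rasterize each gene's expression onto a grid, take a 2D wavelet transform, stack the coefficient vectors, and factorize.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
n_genes, grid = 50, 32
images = rng.poisson(2.0, size=(n_genes, grid, grid)).astype(float)  # stand-in for ST counts

coeff_rows = []
for img in images:
    coeffs = pywt.wavedec2(img, "haar", level=3)
    flat, _ = pywt.coeffs_to_array(coeffs)        # flatten the coefficient pyramid
    coeff_rows.append(flat.ravel())
X = np.stack(coeff_rows)                          # genes x wavelet coefficients

# Low-rank factorization in the wavelet domain.
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
embedding = U[:, :5] * s[:5]                      # 5-D representation per gene
print(embedding.shape)
```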
0907.5021
Anatoly Ruvinsky
Anatoly M. Ruvinsky and Ilya A. Vakser
Sequence composition and environment effects on residue fluctuations in protein structures
8 pages, 4 figures
null
10.1063/1.3498743
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The spectrum and scale of fluctuations in protein structures affect a range of cell phenomena, including the stability of protein structures or their fragments, allosteric transitions, and energy transfer. This study presents a statistical-thermodynamic analysis of the relationship between the sequence composition and the distribution of residue fluctuations in protein-protein complexes. A one-node-per-residue elastic network model is developed that accounts for the nonhomogeneous protein mass distribution and for inter-atomic interactions through a renormalized inter-residue potential. Two factors, the protein mass distribution and the residue environment, were found to determine the scale of residue fluctuations. Surface residues undergo larger fluctuations than core residues, in agreement with experimental observations. Ranking residues on the normalized scale of fluctuations yields a distinct classification of amino acids into three groups. Structural instability in proteins possibly relates to a high content of highly fluctuating residues and a deficiency of weakly fluctuating residues in irregular secondary structure elements (loops), chameleon sequences, and disordered proteins. A strong correlation between residue fluctuations and the sequence composition of protein loops supports this hypothesis. Comparing fluctuations of binding-site residues (interface residues) with those of other surface residues shows that, on average, the interface is more rigid than the rest of the protein surface, and that Gly, Ala, Ser, Cys, Leu, and Trp have a propensity to form more stable docking patches on the interface. The findings have broad implications for understanding mechanisms of protein association and the stability of protein structures.
[ { "created": "Tue, 28 Jul 2009 23:31:35 GMT", "version": "v1" } ]
2015-05-13
[ [ "Ruvinsky", "Anatoly M.", "" ], [ "Vakser", "Ilya A.", "" ] ]
The spectrum and scale of fluctuations in protein structures affect a range of cell phenomena, including the stability of protein structures or their fragments, allosteric transitions, and energy transfer. This study presents a statistical-thermodynamic analysis of the relationship between the sequence composition and the distribution of residue fluctuations in protein-protein complexes. A one-node-per-residue elastic network model is developed that accounts for the nonhomogeneous protein mass distribution and for inter-atomic interactions through a renormalized inter-residue potential. Two factors, the protein mass distribution and the residue environment, were found to determine the scale of residue fluctuations. Surface residues undergo larger fluctuations than core residues, in agreement with experimental observations. Ranking residues on the normalized scale of fluctuations yields a distinct classification of amino acids into three groups. Structural instability in proteins possibly relates to a high content of highly fluctuating residues and a deficiency of weakly fluctuating residues in irregular secondary structure elements (loops), chameleon sequences, and disordered proteins. A strong correlation between residue fluctuations and the sequence composition of protein loops supports this hypothesis. Comparing fluctuations of binding-site residues (interface residues) with those of other surface residues shows that, on average, the interface is more rigid than the rest of the protein surface, and that Gly, Ala, Ser, Cys, Leu, and Trp have a propensity to form more stable docking patches on the interface. The findings have broad implications for understanding mechanisms of protein association and the stability of protein structures.
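For context, the simplest one-node-per-residue elastic network is the Gaussian network model, where residue mean-square fluctuations come from the pseudoinverse of a contact (Kirchhoff) matrix. The sketch below shows that standard baseline, whereas the paper's model additionally renormalizes the potential and accounts for the nonhomogeneous mass distribution.

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """coords: (N, 3) C-alpha positions; returns per-residue MSF (arbitrary units)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)        # -1 for residue pairs in contact
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # diagonal = contact degree
    pinv = np.linalg.pinv(kirchhoff)               # pseudoinverse drops the zero mode
    return np.diag(pinv)                           # MSF_i is proportional to Gamma^-1_ii

coords = np.random.default_rng(0).normal(size=(100, 3)) * 10   # stand-in for a PDB chain
msf = gnm_fluctuations(coords)
print(msf.argmax())   # most mobile "residue"; surface residues score high in real proteins
```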
2405.20863
Fr\'ed\'eric Dreyer
Henry Kenlay, Fr\'ed\'eric A. Dreyer, Daniel Cutting, Daniel Nissley, Charlotte M. Deane
ABodyBuilder3: Improved and scalable antibody structure predictions
8 pages, 3 figures, 3 tables, code available at https://github.com/Exscientia/ABodyBuilder3, weights and data available at https://zenodo.org/records/11354577
null
null
null
q-bio.BM cs.AI
http://creativecommons.org/licenses/by/4.0/
Accurate prediction of antibody structure is a central task in the design and development of monoclonal antibodies, notably to understand both their developability and their binding properties. In this article, we introduce ABodyBuilder3, an improved and scalable antibody structure prediction model based on ImmuneBuilder. We achieve a new state-of-the-art accuracy in the modelling of CDR loops by leveraging language model embeddings, and show how predicted structures can be further improved through careful relaxation strategies. Finally, we incorporate a predicted Local Distance Difference Test into the model output to allow for a more accurate estimation of uncertainties.
[ { "created": "Fri, 31 May 2024 14:45:11 GMT", "version": "v1" } ]
2024-06-03
[ [ "Kenlay", "Henry", "" ], [ "Dreyer", "Frédéric A.", "" ], [ "Cutting", "Daniel", "" ], [ "Nissley", "Daniel", "" ], [ "Deane", "Charlotte M.", "" ] ]
Accurate prediction of antibody structure is a central task in the design and development of monoclonal antibodies, notably to understand both their developability and their binding properties. In this article, we introduce ABodyBuilder3, an improved and scalable antibody structure prediction model based on ImmuneBuilder. We achieve a new state-of-the-art accuracy in the modelling of CDR loops by leveraging language model embeddings, and show how predicted structures can be further improved through careful relaxation strategies. Finally, we incorporate a predicted Local Distance Difference Test into the model output to allow for a more accurate estimation of uncertainties.
1608.00431
Danielle Bassett
Laura Wiles, Shi Gu, Fabio Pasqualetti, Danielle S. Bassett, David F. Meaney
Autaptic Connections Shift Network Excitability and Bursting
31 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network architecture forms a critical constraint on neuronal function. Here we examine the role of structural autapses, where a neuron synapses onto itself, in driving network-wide bursting behavior. Using a simple spiking model of neuronal activity, we study how autaptic connections affect activity patterns, and evaluate whether neuronal degree or controllability are significant factors that affect changes in bursting from these autaptic connections. We observed that adding increasing numbers of autaptic connections to excitatory neurons increased the number of spiking events in the network and the number of network-wide bursts, particularly in the portion of the phase space in which excitatory synapses were stronger contributors to bursting behavior than inhibitory synapses. In comparison, autaptic connections to excitatory neurons with high average controllability led to higher burst frequencies than adding the same number of self-looping connections to neurons with high modal controllability. The number of autaptic connections required to induce bursting behavior could be lowered by selectively adding autapses to high-degree excitatory neurons. These results suggest a role for autaptic connections in controlling network-wide bursts in diverse cortical and subcortical regions of the mammalian brain. Moreover, they open up new avenues for the study of dynamic neurophysiological correlates of structural controllability.
[ { "created": "Mon, 1 Aug 2016 13:58:50 GMT", "version": "v1" } ]
2016-08-02
[ [ "Wiles", "Laura", "" ], [ "Gu", "Shi", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Bassett", "Danielle S.", "" ], [ "Meaney", "David F.", "" ] ]
Network architecture forms a critical constraint on neuronal function. Here we examine the role of structural autapses, where a neuron synapses onto itself, in driving network-wide bursting behavior. Using a simple spiking model of neuronal activity, we study how autaptic connections affect activity patterns, and evaluate whether neuronal degree or controllability are significant factors that affect changes in bursting from these autaptic connections. We observed that adding increasing numbers of autaptic connections to excitatory neurons increased the number of spiking events in the network and the number of network-wide bursts, particularly in the portion of the phase space in which excitatory synapses were stronger contributors to bursting behavior than inhibitory synapses. In comparison, autaptic connections to excitatory neurons with high average controllability led to higher burst frequencies than adding the same number of self-looping connections to neurons with high modal controllability. The number of autaptic connections required to induce bursting behavior could be lowered by selectively adding autapses to high-degree excitatory neurons. These results suggest a role for autaptic connections in controlling network-wide bursts in diverse cortical and subcortical regions of the mammalian brain. Moreover, they open up new avenues for the study of dynamic neurophysiological correlates of structural controllability.
1608.06471
Khalil Cherifi
Khalil Cherifi (LBVNR)
Evidence of natural hybridization and introgression between Medicago ciliaris and Medicago intertexta
null
International Journal of Environmental & Agriculture Research 2 (2016) 129-135
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present study investigated reproductive and fertility parameters in several wild populations originating from northern Tunisia (4 populations of Medicago ciliaris and 3 populations of Medicago intertexta). Previous findings revealed that these species are genetically distinct and easily recognized by the number of flowers per inflorescence and by pod dimensions. However, the intermediacy of biometrical traits and isozyme patterns between these two species had suggested the existence of a potential spontaneous interspecific hybrid originating from the Sedjnane locality in Tunisia. Indeed, the present work shows a significant decrease in pollen fertility and seed production for this population when compared to the others (pollen viability 75%, pollen germinability 8%, and pod production 9%). These results suggest a possible natural interspecific hybrid and confirm the possibility of introgressive hybridization between M. intertexta and M. ciliaris.
[ { "created": "Tue, 23 Aug 2016 11:34:57 GMT", "version": "v1" } ]
2016-10-05
[ [ "Cherifi", "Khalil", "", "LBVNR" ] ]
The present study investigated reproductive and fertility parameters in several wild populations originating from northern Tunisia (4 populations of Medicago ciliaris and 3 populations of Medicago intertexta). Previous findings revealed that these species are genetically distinct and easily recognized by the number of flowers per inflorescence and by pod dimensions. However, the intermediacy of biometrical traits and isozyme patterns between these two species had suggested the existence of a potential spontaneous interspecific hybrid originating from the Sedjnane locality in Tunisia. Indeed, the present work shows a significant decrease in pollen fertility and seed production for this population when compared to the others (pollen viability 75%, pollen germinability 8%, and pod production 9%). These results suggest a possible natural interspecific hybrid and confirm the possibility of introgressive hybridization between M. intertexta and M. ciliaris.
1404.6520
Tom Leinster
Richard Reeve, Tom Leinster, Christina A. Cobbold, Jill Thompson, Neil Brummitt, Sonia N. Mitchell, Louise Matthews
How to partition diversity
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diversity measurement underpins the study of biological systems, but measures used vary across disciplines. Despite their common use and broad utility, no unified framework has emerged for measuring, comparing and partitioning diversity. The introduction of information theory into diversity measurement has laid the foundations, but the framework is incomplete without the ability to partition diversity, which is central to fundamental questions across the life sciences: How do we prioritise communities for conservation? How do we identify reservoirs and sources of pathogenic organisms? How do we measure ecological disturbance arising from climate change? The lack of a common framework means that diversity measures from different fields have conflicting fundamental properties, allowing conclusions reached to depend on the measure chosen. This conflict is unnecessary and unhelpful. A mathematically consistent framework would transform disparate fields by delivering scientific insights in a common language. It would also allow the transfer of theoretical and practical developments between fields. We meet this need, providing a versatile unified framework for partitioning biological diversity. It encompasses any kind of similarity between individuals, from functional to genetic, allowing comparisons between qualitatively different kinds of diversity. Where existing partitioning measures aggregate information across the whole population, our approach permits the direct comparison of subcommunities, allowing us to pinpoint distinct, diverse or representative subcommunities and investigate population substructure. The framework is provided as a ready-to-use R package to easily test our approach.
[ { "created": "Fri, 25 Apr 2014 19:58:04 GMT", "version": "v1" }, { "created": "Wed, 20 Aug 2014 19:58:38 GMT", "version": "v2" }, { "created": "Thu, 8 Dec 2016 20:38:57 GMT", "version": "v3" } ]
2016-12-09
[ [ "Reeve", "Richard", "" ], [ "Leinster", "Tom", "" ], [ "Cobbold", "Christina A.", "" ], [ "Thompson", "Jill", "" ], [ "Brummitt", "Neil", "" ], [ "Mitchell", "Sonia N.", "" ], [ "Matthews", "Louise", "" ] ]
Diversity measurement underpins the study of biological systems, but measures used vary across disciplines. Despite their common use and broad utility, no unified framework has emerged for measuring, comparing and partitioning diversity. The introduction of information theory into diversity measurement has laid the foundations, but the framework is incomplete without the ability to partition diversity, which is central to fundamental questions across the life sciences: How do we prioritise communities for conservation? How do we identify reservoirs and sources of pathogenic organisms? How do we measure ecological disturbance arising from climate change? The lack of a common framework means that diversity measures from different fields have conflicting fundamental properties, allowing conclusions reached to depend on the measure chosen. This conflict is unnecessary and unhelpful. A mathematically consistent framework would transform disparate fields by delivering scientific insights in a common language. It would also allow the transfer of theoretical and practical developments between fields. We meet this need, providing a versatile unified framework for partitioning biological diversity. It encompasses any kind of similarity between individuals, from functional to genetic, allowing comparisons between qualitatively different kinds of diversity. Where existing partitioning measures aggregate information across the whole population, our approach permits the direct comparison of subcommunities, allowing us to pinpoint distinct, diverse or representative subcommunities and investigate population substructure. The framework is provided as a ready-to-use R package to easily test our approach.
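The natural starting point for such a framework is the similarity-sensitive diversity of Leinster and Cobbold, which the partitioning then decomposes across subcommunities; in standard notation (ours, for orientation), for relative abundances \(\mathbf p\) and similarity matrix \(Z\),

\[
{}^{q}\!D^{Z}(\mathbf p) \;=\; \Bigl(\sum_{i\,:\,p_i>0} p_i \bigl((Z\mathbf p)_i\bigr)^{\,q-1}\Bigr)^{1/(1-q)},
\qquad q \neq 1,
\]

with the \(q = 1\) case defined by continuity. Here \((Z\mathbf p)_i\) is the mean similarity of the community to species \(i\), and \(Z = I\) (no inter-species similarity) recovers the classical Hill numbers.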
2109.08981
Gissell Estrada-Rodriguez
Gissell Estrada-Rodriguez and Benoit Perthame
Motility switching and front-back synchronisation in polarized cells
null
null
null
null
q-bio.CB
http://creativecommons.org/publicdomain/zero/1.0/
The combination of protrusions and retractions in the movement of polarized cells motivates the study of possible synchronisation between the two ends of the cell. This synchronisation, in turn, can lead to different dynamics, such as normal and fractional diffusion. Departing from a stochastic single-cell trajectory, in which a memory effect induces persistent movement, we derive a kinetic-renewal system at the mesoscopic scale. We investigate various scenarios with different levels of complexity, where the two ends of the cell move either independently or with partial or full synchronisation. We study the relevant macroscopic limits, where we obtain diffusion, drift-diffusion or fractional diffusion, depending on the initial system. This article clarifies the form of the relevant macroscopic equations that describe the possible effects of synchronised movement in cells, and sheds light on the switching between normal and fractional diffusion.
[ { "created": "Sat, 18 Sep 2021 18:28:52 GMT", "version": "v1" } ]
2021-09-21
[ [ "Estrada-Rodriguez", "Gissell", "" ], [ "Perthame", "Benoit", "" ] ]
The combination of protrusions and retractions in the movement of polarized cells motivates the study of possible synchronisation between the two ends of the cell. This synchronisation, in turn, can lead to different dynamics, such as normal and fractional diffusion. Departing from a stochastic single-cell trajectory, in which a memory effect induces persistent movement, we derive a kinetic-renewal system at the mesoscopic scale. We investigate various scenarios with different levels of complexity, where the two ends of the cell move either independently or with partial or full synchronisation. We study the relevant macroscopic limits, where we obtain diffusion, drift-diffusion or fractional diffusion, depending on the initial system. This article clarifies the form of the relevant macroscopic equations that describe the possible effects of synchronised movement in cells, and sheds light on the switching between normal and fractional diffusion.
2101.11277
Jessie Renton
Jessie Renton and Karen M. Page
Cooperative success in epithelial public goods games
40 Pages, 15 Figures. Accepted version
Journal of Theoretical Biology, 528:110838, 2021
10.1016/j.jtbi.2021.110838
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cancer cells can acquire mutations that rely on the production of diffusible growth factors to confer a fitness benefit. Such mutations can be considered cooperative, and studied as public goods games within the framework of evolutionary game theory. The population structure, benefit function and update rule all influence the evolutionary success of cooperators. We model the evolution of cooperation in epithelial cells using the Voronoi tessellation model. Unlike traditional evolutionary graph theory, this allows us to implement global updating, for which birth and death events are spatially decoupled. We compare, for a sigmoid benefit function, the conditions under which cooperation is favoured and/or beneficial in well-mixed and structured populations. We find that when population structure is combined with global updating, cooperation is more successful than with local updating or in a well-mixed population. Interestingly, the qualitative behaviour of the well-mixed population and the Voronoi tessellation model is remarkably similar, but the latter requires significantly lower incentives to ensure cooperation.
[ { "created": "Wed, 27 Jan 2021 09:15:48 GMT", "version": "v1" }, { "created": "Fri, 3 Sep 2021 12:51:18 GMT", "version": "v2" } ]
2021-09-06
[ [ "Renton", "Jessie", "" ], [ "Page", "Karen M.", "" ] ]
Cancer cells can acquire mutations that rely on the production of diffusible growth factors to confer a fitness benefit. Such mutations can be considered cooperative, and studied as public goods games within the framework of evolutionary game theory. The population structure, benefit function and update rule all influence the evolutionary success of cooperators. We model the evolution of cooperation in epithelial cells using the Voronoi tessellation model. Unlike traditional evolutionary graph theory, this allows us to implement global updating, for which birth and death events are spatially decoupled. We compare, for a sigmoid benefit function, the conditions under which cooperation is favoured and/or beneficial in well-mixed and structured populations. We find that when population structure is combined with global updating, cooperation is more successful than with local updating or in a well-mixed population. Interestingly, the qualitative behaviour of the well-mixed population and the Voronoi tessellation model is remarkably similar, but the latter requires significantly lower incentives to ensure cooperation.
2310.18760
JunJie Wee
JunJie Wee, Jiahui Chen, Kelin Xia, Guo-Wei Wei
Integration of persistent Laplacian and pre-trained transformer for protein solubility changes upon mutation
null
null
null
null
q-bio.BM math.AT
http://creativecommons.org/licenses/by/4.0/
Protein mutations can significantly influence protein solubility, resulting in altered protein function and leading to various diseases. Despite tremendous effort, machine learning prediction of protein solubility changes upon mutation remains a challenging task, as indicated by the poor scores of the normalized Correct Prediction Ratio (CPR). Part of the challenge stems from the fact that there are no three-dimensional (3D) structures for the wild-type and mutant proteins. This work integrates persistent Laplacians and a pre-trained Transformer for the task. The Transformer, pre-trained with hundreds of millions of protein sequences, embeds wild-type and mutant sequences, while persistent Laplacians track the topological invariant changes and homotopic shape evolution induced by mutations in 3D protein structures, which are rendered from AlphaFold2. The resulting machine learning model was trained on an extensive data set labeled with three solubility types. Our model outperforms all existing predictive methods and improves the state of the art by up to 15%.
[ { "created": "Sat, 28 Oct 2023 17:13:47 GMT", "version": "v1" }, { "created": "Thu, 2 Nov 2023 20:19:28 GMT", "version": "v2" } ]
2023-11-06
[ [ "Wee", "JunJie", "" ], [ "Chen", "Jiahui", "" ], [ "Xia", "Kelin", "" ], [ "Wei", "Guo-Wei", "" ] ]
Protein mutations can significantly influence protein solubility, resulting in altered protein function and leading to various diseases. Despite tremendous effort, machine learning prediction of protein solubility changes upon mutation remains a challenging task, as indicated by the poor scores of the normalized Correct Prediction Ratio (CPR). Part of the challenge stems from the fact that there are no three-dimensional (3D) structures for the wild-type and mutant proteins. This work integrates persistent Laplacians and a pre-trained Transformer for the task. The Transformer, pre-trained with hundreds of millions of protein sequences, embeds wild-type and mutant sequences, while persistent Laplacians track the topological invariant changes and homotopic shape evolution induced by mutations in 3D protein structures, which are rendered from AlphaFold2. The resulting machine learning model was trained on an extensive data set labeled with three solubility types. Our model outperforms all existing predictive methods and improves the state of the art by up to 15%.
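As a toy version of the topological side, one can sweep a distance filtration over a point cloud, form the graph Laplacian at each scale, and read Betti-0 off the kernel while the nonzero spectrum tracks shape evolution. The sketch below is this 0-dimensional caricature (our illustration), not the full persistent Laplacian on simplicial complexes used in the paper.

```python
import numpy as np

def laplacian_spectra(points, radii):
    """Graph-Laplacian spectrum of the distance graph at each filtration radius."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    spectra = []
    for r in radii:
        adj = ((d > 0) & (d <= r)).astype(float)
        lap = np.diag(adj.sum(1)) - adj
        spectra.append(np.sort(np.linalg.eigvalsh(lap)))
    return spectra

pts = np.random.default_rng(1).normal(size=(30, 3))    # stand-in for C-alpha atoms
radii = (0.5, 1.0, 2.0)
for r, spec in zip(radii, laplacian_spectra(pts, radii)):
    beta0 = int(np.sum(spec < 1e-9))    # multiplicity of eigenvalue 0 = component count
    print(r, beta0, spec[-1])           # nonzero spectrum tracks shape evolution
```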
q-bio/0505008
Julien Mayor
Julien Mayor and Wulfram Gerstner
Noise-enhanced computation in a model of a cortical column
null
null
null
null
q-bio.NC
null
Various sensory systems use noise to enhance the detection of weak signals. It has been conjectured in the literature that this effect, known as stochastic resonance, may also take place in central cognitive processes such as the memory retrieval of arithmetical multiplication. We show, in a simplified model of cortical tissue, that complex arithmetical calculations can be carried out and are enhanced in the presence of a stochastic background. The performance is shown to be positively correlated with the susceptibility of the network, defined as its sensitivity to a variation of the mean of its inputs. For nontrivial arithmetic tasks such as multiplication, stochastic resonance is an emergent property of the microcircuitry of the model network.
[ { "created": "Wed, 4 May 2005 15:54:43 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mayor", "Julien", "" ], [ "Gerstner", "Wulfram", "" ] ]
Various sensory systems use noise to enhance the detection of weak signals. It has been conjectured in the literature that this effect, known as stochastic resonance, may also take place in central cognitive processes such as the memory retrieval of arithmetical multiplication. We show, in a simplified model of cortical tissue, that complex arithmetical calculations can be carried out and are enhanced in the presence of a stochastic background. The performance is shown to be positively correlated with the susceptibility of the network, defined as its sensitivity to a variation of the mean of its inputs. For nontrivial arithmetic tasks such as multiplication, stochastic resonance is an emergent property of the microcircuitry of the model network.
1711.08988
Atsushi Kamimura
Atsushi Kamimura and Kunihiko Kaneko
Exponential growth for self-reproduction in a catalytic reaction network: relevance of a minority molecular species and crowdedness
19 pages, submitted for publication
null
10.1088/1367-2630/aaaf37
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary to acquire evolvability at the primitive stage of life.
[ { "created": "Fri, 24 Nov 2017 14:36:05 GMT", "version": "v1" } ]
2018-04-18
[ [ "Kamimura", "Atsushi", "" ], [ "Kaneko", "Kunihiko", "" ] ]
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary to acquire evolvability at the primitive stage of life.
1709.09645
Moo K. Chung
Moo K. Chung
Statistical Challenges of Big Brain Network Data
8 pages, 2 figure
null
null
null
q-bio.NC stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the main characteristics of big brain network data that pose unique statistical challenges. Brain networks are biologically expected to be both sparse and hierarchical. Such unique characteristics place specific topological constraints on the statistical approaches and models that can be used effectively. We explore the limitations of the models currently used in the field, offer alternative approaches, and explain new challenges.
[ { "created": "Wed, 27 Sep 2017 17:30:10 GMT", "version": "v1" }, { "created": "Sat, 23 Dec 2017 07:31:43 GMT", "version": "v2" } ]
2017-12-27
[ [ "Chung", "Moo K.", "" ] ]
We explore the main characteristics of big brain network data that pose unique statistical challenges. Brain networks are biologically expected to be both sparse and hierarchical. Such unique characteristics place specific topological constraints on the statistical approaches and models that can be used effectively. We explore the limitations of the models currently used in the field, offer alternative approaches, and explain new challenges.
2311.18574
Jiaxian Yan
Jiaxian Yan, Zaixi Zhang, Kai Zhang, and Qi Liu
Multi-scale Iterative Refinement towards Robust and Versatile Molecular Docking
13 pages, 8 figures
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular docking is a key computational tool utilized to predict the binding conformations of small molecules to protein targets, which is fundamental in the design of novel drugs. Despite recent advancements in geometric deep learning-based approaches leading to improvements in blind docking efficiency, these methods have encountered notable challenges, such as limited generalization performance on unseen proteins, the inability to concurrently address the settings of blind docking and site-specific docking, and the frequent occurrence of physical implausibilities such as inter-molecular steric clashes. In this study, we introduce DeltaDock, a robust and versatile framework designed for efficient molecular docking that overcomes these challenges. DeltaDock operates in a two-step process: rapid sampling of initial complex structures followed by multi-scale iterative refinement of those structures. In the initial stage, to sample accurate structures with high efficiency, we develop a ligand-dependent binding site prediction model founded on large protein models and graph neural networks. This model is then paired with GPU-accelerated sampling algorithms. In the following stage, the sampled structures are updated by a multi-scale iterative refinement module that captures both protein-ligand atom-atom interactions and residue-atom interactions. Distinct from previous geometric deep learning methods that are conditioned on the blind docking setting, DeltaDock demonstrates superior performance in both blind docking and site-specific docking settings. Comprehensive experimental results reveal that DeltaDock consistently surpasses baseline methods in terms of docking accuracy. Furthermore, it displays remarkable generalization capabilities and proficiency in predicting physically valid structures, thereby attesting to its robustness and reliability in various scenarios.
[ { "created": "Thu, 30 Nov 2023 14:09:20 GMT", "version": "v1" } ]
2023-12-01
[ [ "Yan", "Jiaxian", "" ], [ "Zhang", "Zaixi", "" ], [ "Zhang", "Kai", "" ], [ "Liu", "Qi", "" ] ]
Molecular docking is a key computational tool utilized to predict the binding conformations of small molecules to protein targets, which is fundamental in the design of novel drugs. Despite recent advancements in geometric deep learning-based approaches leading to improvements in blind docking efficiency, these methods have encountered notable challenges, such as limited generalization performance on unseen proteins, the inability to concurrently address the settings of blind docking and site-specific docking, and the frequent occurrence of physical implausibilities such as inter-molecular steric clashes. In this study, we introduce DeltaDock, a robust and versatile framework designed for efficient molecular docking that overcomes these challenges. DeltaDock operates in a two-step process: rapid sampling of initial complex structures followed by multi-scale iterative refinement of those structures. In the initial stage, to sample accurate structures with high efficiency, we develop a ligand-dependent binding site prediction model founded on large protein models and graph neural networks. This model is then paired with GPU-accelerated sampling algorithms. In the following stage, the sampled structures are updated by a multi-scale iterative refinement module that captures both protein-ligand atom-atom interactions and residue-atom interactions. Distinct from previous geometric deep learning methods that are conditioned on the blind docking setting, DeltaDock demonstrates superior performance in both blind docking and site-specific docking settings. Comprehensive experimental results reveal that DeltaDock consistently surpasses baseline methods in terms of docking accuracy. Furthermore, it displays remarkable generalization capabilities and proficiency in predicting physically valid structures, thereby attesting to its robustness and reliability in various scenarios.
1503.05043
Ricard Sole
Ricard Sol\'e, Salva Duran-Nebreda and Raul Monta\~nez
Synthetic circuit designs for Earth terraformation
8 pages, 3 figures
null
null
null
q-bio.QM nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mounting evidence indicates that our planet might experience runaway effects associated with rising temperatures and ecosystem overexploitation, leading to catastrophic shifts on short time scales. Remediation scenarios capable of counterbalancing these effects involve geoengineering, sustainable practices and carbon sequestration, among others. None of these scenarios seems powerful enough to achieve the desired restoration of safe boundaries. We hypothesise that synthetic organisms with the appropriate engineering design could be used to safely prevent declines in some stressed ecosystems and to help improve carbon sequestration. Such schemes would include engineering mutualistic dependencies that prevent undesired evolutionary processes. We hypothesise that some particular design principles introduce inescapable constraints on the engineered organisms that act as effective firewalls. Testing these designed organisms can be achieved by using controlled bioreactor models and accurate computational models spanning different scales (from genetic constructs and metabolic pathways to population dynamics). Our hypothesis points towards a future anthropogenic action that would effectively act as a terraforming agent. It also implies a major challenge for existing biosafety policies, since we suggest the release of modified organisms as a potentially necessary strategy for success.
[ { "created": "Tue, 17 Mar 2015 13:43:21 GMT", "version": "v1" } ]
2015-03-18
[ [ "Solé", "Ricard", "" ], [ "Duran-Nebreda", "Salva", "" ], [ "Montañez", "Raul", "" ] ]
Mounting evidence indicates that our planet might experience runaway effects associated with rising temperatures and ecosystem overexploitation, leading to catastrophic shifts on short time scales. Remediation scenarios capable of counterbalancing these effects involve geoengineering, sustainable practices and carbon sequestration, among others. None of these scenarios seems powerful enough to achieve the desired restoration of safe boundaries. We hypothesise that synthetic organisms with the appropriate engineering design could be used to safely prevent declines in some stressed ecosystems and to help improve carbon sequestration. Such schemes would include engineering mutualistic dependencies that prevent undesired evolutionary processes. We hypothesise that some particular design principles introduce inescapable constraints on the engineered organisms that act as effective firewalls. Testing these designed organisms can be achieved by using controlled bioreactor models and accurate computational models spanning different scales (from genetic constructs and metabolic pathways to population dynamics). Our hypothesis points towards a future anthropogenic action that would effectively act as a terraforming agent. It also implies a major challenge for existing biosafety policies, since we suggest the release of modified organisms as a potentially necessary strategy for success.
q-bio/0503037
Gianluca Lattanzi
Francesco Pampaloni, Gianluca Lattanzi, Alexandr Jon\'a\v{s}, Thomas Surrey, Erwin Frey and Ernst-Ludwig Florin
Elastic properties of grafted microtubules
9 pages, 3 figures
PNAS 103, 10248 (2006)
10.1073/pnas.0603931103
LMU-ASC 26/05
q-bio.BM
null
We use single-particle tracking to study the elastic properties of single microtubules grafted to a substrate. Thermal fluctuations of the free microtubule's end are recorded, in order to measure position distribution functions from which we calculate the persistence length of microtubules with contour lengths between 2.6 and 48 micrometers. We find the persistence length to vary by more than a factor of 20 over the total range of contour lengths. Our results support the hypothesis that shearing between protofilaments contributes significantly to the mechanics of microtubules.
[ { "created": "Thu, 24 Mar 2005 10:47:55 GMT", "version": "v1" } ]
2007-05-23
[ [ "Pampaloni", "Francesco", "" ], [ "Lattanzi", "Gianluca", "" ], [ "Jonáš", "Alexandr", "" ], [ "Surrey", "Thomas", "" ], [ "Frey", "Erwin", "" ], [ "Florin", "Ernst-Ludwig", "" ] ]
We use single-particle tracking to study the elastic properties of single microtubules grafted to a substrate. Thermal fluctuations of the free microtubule's end are recorded, in order to measure position distribution functions from which we calculate the persistence length of microtubules with contour lengths between 2.6 and 48 micrometers. We find the persistence length to vary by more than a factor of 20 over the total range of contour lengths. Our results support the hypothesis that shearing between protofilaments contributes significantly to the mechanics of microtubules.
2101.11399
Giacomo Cacciapaglia
Giacomo Cacciapaglia, Corentin Cot, Michele Della Morte, Stefan Hohenegger, Francesco Sannino, Shahram Vatani
The field theoretical ABC of epidemic dynamics
57 pages, 40 figures. Article expanded into a review. Prepared for submission to Physics Reports
null
null
null
q-bio.PE hep-lat hep-th physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Infectious diseases are a threat to human health with tremendous impact on our society at large. The recent COVID-19 pandemic, caused by SARS-CoV-2, is the latest example of a highly infectious disease that has been ravaging the world since late 2019. It is therefore imperative to develop efficient mathematical models able to substantially curb the damage of a pandemic by unveiling disease spreading dynamics and symmetries. This will help inform (non-)pharmaceutical prevention strategies. For the reasons above we wrote this report, which goes to the heart of mathematical modelling of infectious disease diffusion by simultaneously investigating the underlying microscopic dynamics in terms of percolation models, the effective description via compartmental models, and the employment of temporal symmetries naturally encoded in the mathematical language of critical phenomena. Our report reviews these approaches and determines their common denominators, relevant for theoretical epidemiology and its link to important concepts in theoretical physics. We show that the different frameworks exhibit common features such as criticality and self-similarity under time rescaling. These features are naturally encoded within the unifying field theoretical approach. The latter leads to an efficient description of the time evolution of the disease via a framework in which (near) time-dilation invariance is explicitly realised. As an important test of the relevance of symmetries, we show how to mathematically account for observed phenomena such as multi-wave dynamics. The models presented here are of immediate relevance for different realms of scientific enquiry, from medical applications to the understanding of human behaviour. Our review offers novel perspectives on how to model, capture, organise and understand epidemiological data and disease dynamics for modelling real-world phenomena.
[ { "created": "Mon, 25 Jan 2021 15:19:39 GMT", "version": "v1" }, { "created": "Thu, 16 Sep 2021 08:40:33 GMT", "version": "v2" } ]
2021-09-17
[ [ "Cacciapaglia", "Giacomo", "" ], [ "Cot", "Corentin", "" ], [ "Della Morte", "Michele", "" ], [ "Hohenegger", "Stefan", "" ], [ "Sannino", "Francesco", "" ], [ "Vatani", "Shahram", "" ] ]
Infectious diseases are a threat to human health with tremendous impact on our society at large. The recent COVID-19 pandemic, caused by SARS-CoV-2, is the latest example of a highly infectious disease that has been ravaging the world since late 2019. It is therefore imperative to develop efficient mathematical models able to substantially curb the damage of a pandemic by unveiling disease spreading dynamics and symmetries. This will help inform (non-)pharmaceutical prevention strategies. For the reasons above we wrote this report, which goes to the heart of mathematical modelling of infectious disease diffusion by simultaneously investigating the underlying microscopic dynamics in terms of percolation models, the effective description via compartmental models, and the employment of temporal symmetries naturally encoded in the mathematical language of critical phenomena. Our report reviews these approaches and determines their common denominators, relevant for theoretical epidemiology and its link to important concepts in theoretical physics. We show that the different frameworks exhibit common features such as criticality and self-similarity under time rescaling. These features are naturally encoded within the unifying field theoretical approach. The latter leads to an efficient description of the time evolution of the disease via a framework in which (near) time-dilation invariance is explicitly realised. As an important test of the relevance of symmetries, we show how to mathematically account for observed phenomena such as multi-wave dynamics. The models presented here are of immediate relevance for different realms of scientific enquiry, from medical applications to the understanding of human behaviour. Our review offers novel perspectives on how to model, capture, organise and understand epidemiological data and disease dynamics for modelling real-world phenomena.
1804.10093
Katherine Medina
Katherine Medina
A GPP algorithm for hippocampal interneuron characterization
11 pages; 2 figures; 2 tables
null
null
null
q-bio.NC
http://creativecommons.org/publicdomain/zero/1.0/
Correctly identifying neuronal subsets is critical to multiple downstream methods in several areas of neuroscience research. Hippocampal interneuron characterization technology has developed rapidly in recent years. However, capturing true neuronal features for accurate interneuron characterization and segmentation has remained elusive. In the current study, a novel global preserving estimate algorithm is used to capture the non-linearity in the features of hippocampal interneurons after the factor algorithm. Our results provide evidence for the effective integration of the original linear and nonlinear neuronal features, and the method achieves better characterization performance on multiple hippocampal interneuron databases through array matching.
[ { "created": "Thu, 26 Apr 2018 14:42:03 GMT", "version": "v1" } ]
2018-04-27
[ [ "Medina", "Katherine", "" ] ]
Correctly identifying neuronal subsets is critical to multiple downstream methods in several areas of neuroscience research. Hippocampal interneuron characterization technology has developed rapidly in recent years. However, capturing true neuronal features for accurate interneuron characterization and segmentation has remained elusive. In the current study, a novel global preserving estimate algorithm is used to capture the non-linearity in the features of hippocampal interneurons after the factor algorithm. Our results provide evidence for the effective integration of the original linear and nonlinear neuronal features, and the method achieves better characterization performance on multiple hippocampal interneuron databases through array matching.
1203.1595
Pablo Moisset de Espan\'es
Pablo Moisset de Espan\'es and Axel Osses and Iv\'an Rapaport
Fixed-points in Random Boolean Networks: The impact of parallelism in the scale-free topology case
13 pages
null
null
null
q-bio.CB q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fixed points are fundamental states in any dynamical system. In the case of gene regulatory networks (GRNs) they correspond to stable gene profiles associated with the various cell types. We use Kauffman's approach to model GRNs with random Boolean networks (RBNs). We start this paper by proving that, if we fix the values of the source nodes (nodes with in-degree 0), the expected number of fixed points of any RBN is one (independent of the topology we choose). For finding such fixed points we use the {\alpha}-asynchronous dynamics (where every node is updated independently with probability 0 < {\alpha} < 1). In fact, it is well known that asynchrony avoids the cycle attractors into which parallel dynamics tends to fall. We perform simulations and we show the remarkable property that, if for a given RBN with scale-free topology and {\alpha}-asynchronous dynamics an initial configuration reaches a fixed point, then every configuration also reaches a fixed point. By contrast, in the parallel regime, the percentage of initial configurations reaching a fixed point (for the same networks) is dramatically smaller. We contrast the results of the simulations on scale-free networks with the classical Erdos-Renyi model of random networks. Everything indicates that scale-free networks are extremely robust. Finally, we study the mean and maximum time/work needed to reach a fixed point when starting from randomly chosen initial configurations.
[ { "created": "Wed, 7 Mar 2012 20:27:36 GMT", "version": "v1" } ]
2012-03-08
[ [ "de Espanés", "Pablo Moisset", "" ], [ "Osses", "Axel", "" ], [ "Rapaport", "Iván", "" ] ]
Fixed points are fundamental states in any dynamical system. In the case of gene regulatory networks (GRNs) they correspond to stable gene profiles associated with the various cell types. We use Kauffman's approach to model GRNs with random Boolean networks (RBNs). We start this paper by proving that, if we fix the values of the source nodes (nodes with in-degree 0), the expected number of fixed points of any RBN is one (independent of the topology we choose). For finding such fixed points we use the {\alpha}-asynchronous dynamics (where every node is updated independently with probability 0 < {\alpha} < 1). In fact, it is well known that asynchrony avoids the cycle attractors into which parallel dynamics tends to fall. We perform simulations and we show the remarkable property that, if for a given RBN with scale-free topology and {\alpha}-asynchronous dynamics an initial configuration reaches a fixed point, then every configuration also reaches a fixed point. By contrast, in the parallel regime, the percentage of initial configurations reaching a fixed point (for the same networks) is dramatically smaller. We contrast the results of the simulations on scale-free networks with the classical Erdos-Renyi model of random networks. Everything indicates that scale-free networks are extremely robust. Finally, we study the mean and maximum time/work needed to reach a fixed point when starting from randomly chosen initial configurations.
2404.00110
S\'ilvia Sequeira
S\'ilvia O. Sequeira, Ekaterina Pasnak, Carla Viegas, Bianca Gomes, Marta Dias, Renata Cervantes, Pedro Pena, Magdalena Twaru\.zek, Robert Kosicki, Susana Viegas, Liliana Aranha Caetano, Maria Jo\~ao Penetra, In\^es Santos, Ana Teresa Caldeira, Catarina Pinheiro
Microbial assessment in a rare Norwegian book collection: a One Health approach to cultural heritage
17 pages, 7 figures, 2 tables
null
10.3390/microorganisms12061215
null
q-bio.PE q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Microbial contamination poses a threat to both the preservation of library and archival collections and the health of staff and users. This study investigated the microbial communities and potential health risks associated with the UNESCO-classified Norwegian Sea Trade Archive (NSTA) collection, which exhibited visible microbial colonization and had raised staff health concerns. Dust samples from book surfaces and the storage environment were analysed using culturing methods, qPCR, Next Generation Sequencing, and mycotoxin, cytotoxicity and azole resistance assays. Penicillium sp., Aspergillus sp., and Cladosporium sp. were the most common fungi identified, along with some potentially toxic species such as Stachybotrys sp., Toxicladosporium sp. and Aspergillus section Fumigati. Fungal resistance to azoles was not detected. Only one mycotoxin, sterigmatocystin, was found in a heavily contaminated book. Dust extracts from books exhibited moderate to high cytotoxicity on human lung cells, suggesting a potential respiratory risk. The collection had higher contamination levels than the storage environment, likely due to improved storage conditions. Even so, overall contamination levels were low, although they might be underestimated due to the presence of salt (from cod preservation) that could have interfered with the analyses. This study underlines the importance of monitoring microbial communities and implementing proper storage measures to safeguard cultural heritage and staff well-being.
[ { "created": "Fri, 29 Mar 2024 18:53:22 GMT", "version": "v1" } ]
2024-06-24
[ [ "Sequeira", "Sílvia O.", "" ], [ "Pasnak", "Ekaterina", "" ], [ "Viegas", "Carla", "" ], [ "Gomes", "Bianca", "" ], [ "Dias", "Marta", "" ], [ "Cervantes", "Renata", "" ], [ "Pena", "Pedro", "" ], [ "Twarużek", "Magdalena", "" ], [ "Kosicki", "Robert", "" ], [ "Viegas", "Susana", "" ], [ "Caetano", "Liliana Aranha", "" ], [ "Penetra", "Maria João", "" ], [ "Santos", "Inês", "" ], [ "Caldeira", "Ana Teresa", "" ], [ "Pinheiro", "Catarina", "" ] ]
Microbial contamination poses a threat to both the preservation of library and archival collections and the health of staff and users. This study investigated the microbial communities and potential health risks associated with the UNESCO-classified Norwegian Sea Trade Archive (NSTA) collection, which exhibited visible microbial colonization and had raised staff health concerns. Dust samples from book surfaces and the storage environment were analysed using culturing methods, qPCR, Next Generation Sequencing, and mycotoxin, cytotoxicity and azole resistance assays. Penicillium sp., Aspergillus sp., and Cladosporium sp. were the most common fungi identified, along with some potentially toxic species such as Stachybotrys sp., Toxicladosporium sp. and Aspergillus section Fumigati. Fungal resistance to azoles was not detected. Only one mycotoxin, sterigmatocystin, was found in a heavily contaminated book. Dust extracts from books exhibited moderate to high cytotoxicity on human lung cells, suggesting a potential respiratory risk. The collection had higher contamination levels than the storage environment, likely due to improved storage conditions. Even so, overall contamination levels were low, although they might be underestimated due to the presence of salt (from cod preservation) that could have interfered with the analyses. This study underlines the importance of monitoring microbial communities and implementing proper storage measures to safeguard cultural heritage and staff well-being.
1603.04007
Bertrand Roehner
Sylvie Berrut, Violette Pouillard, Peter Richmond, Bertrand M. Roehner
Deciphering infant mortality. Part 1: empirical evidence
46 pages, 14 figures, 4 tables
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is not (or at least not only) about human infant mortality. In line with reliability theory, "infant" will refer here to the time interval following birth during which the mortality (or failure) rate decreases. This definition provides a systems science perspective in which birth constitutes a sudden transition that falls within the field of application of the "Transient Shock" (TS) conjecture put forward in Richmond et al. (2016c). This conjecture provides predictions about the timing and shape of the death rate peak. (i) It says that there will be a death rate spike whenever external conditions change abruptly and drastically. (ii) It predicts that after a steep rise there will be a much longer hyperbolic relaxation process. These predictions can be tested by considering living organisms for which birth is a multi-step process. Thus, for fish there are three states: egg, yolk-sac phase, young adult. The TS conjecture predicts a mortality spike at the end of the yolk-sac phase, and this timing is indeed confirmed by observation. Secondly, the hyperbolic nature of the relaxation process can be tested using high-accuracy Swiss statistics, which give postnatal death rates from one hour after birth up to the age of 10 years. It turns out that, since the 19th century, despite a great overall reduction in infant mortality, the shape of the age-specific death rate has remained basically unchanged. This hyperbolic pattern is not specific to humans. It can also be found in small primates, as recorded in the archives of zoological gardens. Our ultimate objective is to set up a chain of cases which starts from simple systems and then moves up step by step to more complex organisms. The cases discussed here can be seen as initial landmarks.
[ { "created": "Sun, 13 Mar 2016 08:17:11 GMT", "version": "v1" } ]
2016-03-15
[ [ "Berrut", "Sylvie", "" ], [ "Pouillard", "Violette", "" ], [ "Richmond", "Peter", "" ], [ "Roehner", "Bertrand M.", "" ] ]
This paper is not (or at least not only) about human infant mortality. In line with reliability theory, "infant" will refer here to the time interval following birth during which the mortality (or failure) rate decreases. This definition provides a systems science perspective in which birth constitutes a sudden transition that falls within the field of application of the "Transient Shock" (TS) conjecture put forward in Richmond et al. (2016c). This conjecture provides predictions about the timing and shape of the death rate peak. (i) It says that there will be a death rate spike whenever external conditions change abruptly and drastically. (ii) It predicts that after a steep rise there will be a much longer hyperbolic relaxation process. These predictions can be tested by considering living organisms for which birth is a multi-step process. Thus, for fish there are three states: egg, yolk-sac phase, young adult. The TS conjecture predicts a mortality spike at the end of the yolk-sac phase, and this timing is indeed confirmed by observation. Secondly, the hyperbolic nature of the relaxation process can be tested using high-accuracy Swiss statistics, which give postnatal death rates from one hour after birth up to the age of 10 years. It turns out that, since the 19th century, despite a great overall reduction in infant mortality, the shape of the age-specific death rate has remained basically unchanged. This hyperbolic pattern is not specific to humans. It can also be found in small primates, as recorded in the archives of zoological gardens. Our ultimate objective is to set up a chain of cases which starts from simple systems and then moves up step by step to more complex organisms. The cases discussed here can be seen as initial landmarks.
2206.00668
Lovro Vrcek
Lovro Vr\v{c}ek, Xavier Bresson, Thomas Laurent, Martin Schmitz, Mile \v{S}iki\'c
Learning to Untangle Genome Assembly with Graph Convolutional Networks
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
A quest to determine the complete sequence of a human DNA from telomere to telomere started three decades ago and was finally completed in 2021. This accomplishment was the result of a tremendous effort by numerous experts who engineered various tools and performed laborious manual inspection to achieve the first gapless genome sequence. However, such a method can hardly be used as a general approach to assemble different genomes, especially when the assembly speed is critical given the large amount of data. In this work, we explore a different approach to the central part of the genome assembly task, which consists of untangling a large assembly graph from which a genomic sequence needs to be reconstructed. Our main motivation is to reduce human-engineered heuristics and use deep learning to develop more generalizable reconstruction techniques. Precisely, we introduce a new learning framework to train a graph convolutional network to resolve assembly graphs by finding a correct path through them. The training is supervised with a dataset generated from the resolved CHM13 human sequence and tested on assembly graphs built using real human PacBio HiFi reads. Experimental results show that a model, trained on simulated graphs generated solely from a single chromosome, is able to remarkably resolve all other chromosomes. Moreover, the model outperforms hand-crafted heuristics from a state-of-the-art \textit{de novo} assembler on the same graphs. Chromosomes reconstructed with graph networks are more accurate at the nucleotide level and report a lower number of contigs, a higher reconstructed genome fraction, and better NG50/NGA50 assessment metrics.
[ { "created": "Wed, 1 Jun 2022 04:14:25 GMT", "version": "v1" } ]
2022-06-03
[ [ "Vrček", "Lovro", "" ], [ "Bresson", "Xavier", "" ], [ "Laurent", "Thomas", "" ], [ "Schmitz", "Martin", "" ], [ "Šikić", "Mile", "" ] ]
A quest to determine the complete sequence of a human DNA from telomere to telomere started three decades ago and was finally completed in 2021. This accomplishment was the result of a tremendous effort by numerous experts who engineered various tools and performed laborious manual inspection to achieve the first gapless genome sequence. However, such a method can hardly be used as a general approach to assemble different genomes, especially when the assembly speed is critical given the large amount of data. In this work, we explore a different approach to the central part of the genome assembly task, which consists of untangling a large assembly graph from which a genomic sequence needs to be reconstructed. Our main motivation is to reduce human-engineered heuristics and use deep learning to develop more generalizable reconstruction techniques. Precisely, we introduce a new learning framework to train a graph convolutional network to resolve assembly graphs by finding a correct path through them. The training is supervised with a dataset generated from the resolved CHM13 human sequence and tested on assembly graphs built using real human PacBio HiFi reads. Experimental results show that a model, trained on simulated graphs generated solely from a single chromosome, is able to remarkably resolve all other chromosomes. Moreover, the model outperforms hand-crafted heuristics from a state-of-the-art \textit{de novo} assembler on the same graphs. Chromosomes reconstructed with graph networks are more accurate at the nucleotide level and report a lower number of contigs, a higher reconstructed genome fraction, and better NG50/NGA50 assessment metrics.
1908.00077
Jennifer Stiso
Jennifer Stiso, Marie-Constance Corsi, Jean M. Vettel, Javier O. Garcia, Fabio Pasqualetti, Fabrizio De Vico Fallani, Timothy H. Lucas, Danielle S. Bassett
Learning in brain-computer interface control evidenced by joint decomposition of brain and behavior
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Previous research suggests that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using MEG. Specifically, we employ a minimally constrained matrix decomposition method (non-negative matrix factorization) to simultaneously identify regularized, covarying subgraphs of functional connectivity, to assess their similarity to task performance, and to detect their time-varying expression. Individuals also displayed marked variation in the spatial properties of subgraphs, such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs, such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in regions important for sustaining attention. To test this model, we use tools that stipulate regional dynamics on a networked system (network control theory), and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention.
[ { "created": "Wed, 31 Jul 2019 20:17:07 GMT", "version": "v1" }, { "created": "Fri, 2 Aug 2019 13:42:25 GMT", "version": "v2" } ]
2019-08-05
[ [ "Stiso", "Jennifer", "" ], [ "Corsi", "Marie-Constance", "" ], [ "Vettel", "Jean M.", "" ], [ "Garcia", "Javier O.", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Fallani", "Fabrizio De Vico", "" ], [ "Lucas", "Timothy H.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Motor imagery-based brain-computer interfaces (BCIs) use an individual's ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Previous research suggests that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using MEG. Specifically, we employ a minimally constrained matrix decomposition method (non-negative matrix factorization) to simultaneously identify regularized, covarying subgraphs of functional connectivity, to assess their similarity to task performance, and to detect their time-varying expression. Individuals also displayed marked variation in the spatial properties of subgraphs, such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs, such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in regions important for sustaining attention. To test this model, we use tools that stipulate regional dynamics on a networked system (network control theory), and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention.
0804.0216
Emmanuel Tannenbaum
Amit Kama and Emmanuel Tannenbaum
The effect of the SOS response on the mean fitness of unicellular populations: A quasispecies approach
9 pages, 3 figures
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper develops a quasispecies model that incorporates the SOS response. We consider a unicellular, asexually replicating population of organisms, whose genomes consist of a single, double-stranded DNA molecule, i.e. one chromosome. We assume that repair of post-replication mismatched base-pairs occurs with probability $ \lambda $, and that the SOS response is triggered when the total number of mismatched base-pairs exceeds $ l_S $. We further assume that the per-mismatch SOS elimination rate is characterized by a first-order rate constant $ \kappa_{SOS} $. For a single fitness peak landscape where the master genome can sustain up to $ l $ mismatches and remain viable, this model is analytically solvable in the limit of infinite sequence length. The results, which are confirmed by stochastic simulations, indicate that the SOS response does indeed confer a fitness advantage to a population, provided that it is only activated when DNA damage is so extensive that a cell will die if it does not attempt to repair its DNA.
[ { "created": "Tue, 1 Apr 2008 17:53:17 GMT", "version": "v1" } ]
2008-04-02
[ [ "Kama", "Amit", "" ], [ "Tannenbaum", "Emmanuel", "" ] ]
This paper develops a quasispecies model that incorporates the SOS response. We consider a unicellular, asexually replicating population of organisms, whose genomes consist of a single, double-stranded DNA molecule, i.e. one chromosome. We assume that repair of post-replication mismatched base-pairs occurs with probability $ \lambda $, and that the SOS response is triggered when the total number of mismatched base-pairs exceeds $ l_S $. We further assume that the per-mismatch SOS elimination rate is characterized by a first-order rate constant $ \kappa_{SOS} $. For a single fitness peak landscape where the master genome can sustain up to $ l $ mismatches and remain viable, this model is analytically solvable in the limit of infinite sequence length. The results, which are confirmed by stochastic simulations, indicate that the SOS response does indeed confer a fitness advantage to a population, provided that it is only activated when DNA damage is so extensive that a cell will die if it does not attempt to repair its DNA.
1611.07801
Daniel Jacob
Daniel Jacob, Catherine Deborde, Marie Lefebvre, Mickael Maucourt, Anick Moing
NMRProcFlow: A graphical and interactive tool dedicated to 1D spectra processing for NMR-based metabolomics
null
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Concerning NMR-based metabolomics, 1D spectra processing often requires an expert eye for disentangling the intertwined peaks, and so far the best way is to proceed interactively with a spectra viewer. NMRProcFlow is a graphical and interactive 1D NMR (1H \& 13C) spectra processing tool dedicated to metabolic fingerprinting and targeted metabolomics, covering all spectra processing steps including baseline correction, chemical shift calibration, and alignment. It does not require programming skills. Biologists and NMR spectroscopists can easily interact and develop synergies by visualizing the NMR spectra along with their corresponding experimental-factor levels, thus setting a bridge between experimental design and subsequent statistical analyses.
[ { "created": "Wed, 23 Nov 2016 13:59:57 GMT", "version": "v1" } ]
2016-11-24
[ [ "Jacob", "Daniel", "" ], [ "Deborde", "Catherine", "" ], [ "Lefebvre", "Marie", "" ], [ "Maucourt", "Mickael", "" ], [ "Moing", "Anick", "" ] ]
Concerning NMR-based metabolomics, 1D spectra processing often requires an expert eye for disentangling the intertwined peaks, and so far the best way is to proceed interactively with a spectra viewer. NMRProcFlow is a graphical and interactive 1D NMR (1H \& 13C) spectra processing tool dedicated to metabolic fingerprinting and targeted metabolomics, covering all spectra processing steps including baseline correction, chemical shift calibration, and alignment. It does not require programming skills. Biologists and NMR spectroscopists can easily interact and develop synergies by visualizing the NMR spectra along with their corresponding experimental-factor levels, thus setting a bridge between experimental design and subsequent statistical analyses.
2402.17182
Danny Miller
Miranda PG Zalusky, Danny E Miller
Methylation Operation Wizard (MeOW): Identification of differentially methylated regions in long-read sequencing data
7 pages, 1 figure
null
null
null
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Long-read sequencing (LRS) is able to simultaneously capture information about both DNA sequence and modifications, such as CpG methylation, in a single sequencing experiment. Here we present Methylation Operation Wizard (MeOW), a program to identify and prioritize differentially methylated regions (DMRs) genome-wide using LRS data. MeOW can be run using either a file containing counts of per-nucleotide methylated CpG sites or a BAM file containing modified base tags.
[ { "created": "Tue, 27 Feb 2024 03:36:50 GMT", "version": "v1" } ]
2024-02-28
[ [ "Zalusky", "Miranda PG", "" ], [ "Miller", "Danny E", "" ] ]
Long-read sequencing (LRS) is able to simultaneously capture information about both DNA sequence and modifications, such as CpG methylation, in a single sequencing experiment. Here we present Methylation Operation Wizard (MeOW), a program to identify and prioritize differentially methylated regions (DMRs) genome-wide using LRS data. MeOW can be run using either a file containing counts of per-nucleotide methylated CpG sites or a BAM file containing modified base tags.
1803.07256
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Burst Synchronization in A Scale-Free Neuronal Network with Inhibitory Spike-Timing-Dependent Plasticity
arXiv admin note: substantial text overlap with arXiv:1708.04543, arXiv:1801.01385
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are concerned with burst synchronization (BS), related to neural information processing in health and disease, in the Barab\'{a}si-Albert scale-free network (SFN) composed of inhibitory bursting Hindmarsh-Rose neurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by inhibitory spike-timing-dependent plasticity (iSTDP). In previous works that did not consider iSTDP, BS was found to appear in a range of noise intensities for fixed synaptic inhibition strengths. In contrast, in the present work, we take iSTDP into consideration and investigate its effect on BS by varying the noise intensity. Our main new result is the occurrence of a Matthew effect in inhibitory synaptic plasticity: good BS gets better via LTD, while bad BS gets worse via LTP. This kind of Matthew effect in inhibitory synaptic plasticity is in contrast to that in excitatory synaptic plasticity, where good (bad) synchronization gets better (worse) via LTP (LTD). We note that, due to inhibition, the roles of LTD and LTP in inhibitory synaptic plasticity are reversed in comparison with those in excitatory synaptic plasticity. Moreover, the emergence of LTD and LTP of synaptic inhibition strengths is intensively investigated via a microscopic method based on the distributions of time delays between the pre- and post-synaptic burst onset times. Finally, in the presence of iSTDP, we investigate the effects of network architecture on BS by varying the symmetric attachment degree $l^*$ and the asymmetry parameter $\Delta l$ in the SFN.
[ { "created": "Tue, 20 Mar 2018 04:52:01 GMT", "version": "v1" }, { "created": "Wed, 21 Mar 2018 02:16:06 GMT", "version": "v2" }, { "created": "Fri, 6 Apr 2018 05:23:12 GMT", "version": "v3" }, { "created": "Mon, 20 Aug 2018 07:23:05 GMT", "version": "v4" } ]
2018-08-21
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We are concerned with burst synchronization (BS), related to neural information processing in health and disease, in the Barab\'{a}si-Albert scale-free network (SFN) composed of inhibitory bursting Hindmarsh-Rose neurons. This inhibitory neuronal population has adaptive dynamic synaptic strengths governed by inhibitory spike-timing-dependent plasticity (iSTDP). In previous works that did not consider iSTDP, BS was found to appear in a range of noise intensities for fixed synaptic inhibition strengths. In contrast, in the present work, we take iSTDP into consideration and investigate its effect on BS by varying the noise intensity. Our main new result is the occurrence of a Matthew effect in inhibitory synaptic plasticity: good BS gets better via LTD, while bad BS gets worse via LTP. This kind of Matthew effect in inhibitory synaptic plasticity is in contrast to that in excitatory synaptic plasticity, where good (bad) synchronization gets better (worse) via LTP (LTD). We note that, due to inhibition, the roles of LTD and LTP in inhibitory synaptic plasticity are reversed in comparison with those in excitatory synaptic plasticity. Moreover, the emergence of LTD and LTP of synaptic inhibition strengths is intensively investigated via a microscopic method based on the distributions of time delays between the pre- and post-synaptic burst onset times. Finally, in the presence of iSTDP, we investigate the effects of network architecture on BS by varying the symmetric attachment degree $l^*$ and the asymmetry parameter $\Delta l$ in the SFN.
1805.01608
Matthieu Vignes
Alex White and Matthieu Vignes
Causal Queries from Observational Data in Biological Systems via Bayesian Networks: An Empirical Study in Small Networks
This chapter will appear in the forthcoming book "Gene Regulatory Networks: Methods and Protocols", published by Springer Nature
null
null
null
q-bio.QM q-bio.MN stat.AP stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological networks are a very convenient modelling and visualisation tool for discovering knowledge from modern high-throughput genomics and postgenomics data sets. Indeed, biological entities are not isolated, but are components of complex multi-level systems. We go one step further and advocate for the consideration of causal representations of the interactions in living systems. We present the causal formalism and bring it out in the context of biological networks, when the data are observational. We also discuss its ability to decipher the causal information flow as observed in gene expression. Finally, we illustrate our exploration with experiments on small simulated networks as well as on a real biological data set.
[ { "created": "Fri, 4 May 2018 05:09:48 GMT", "version": "v1" } ]
2018-05-07
[ [ "White", "Alex", "" ], [ "Vignes", "Matthieu", "" ] ]
Biological networks are a very convenient modelling and visualisation tool for discovering knowledge from modern high-throughput genomics and postgenomics data sets. Indeed, biological entities are not isolated, but are components of complex multi-level systems. We go one step further and advocate for the consideration of causal representations of the interactions in living systems. We present the causal formalism and bring it out in the context of biological networks, when the data are observational. We also discuss its ability to decipher the causal information flow as observed in gene expression. Finally, we illustrate our exploration with experiments on small simulated networks as well as on a real biological data set.
q-bio/0609048
Eugene Shakhnovich
D. B. Lukatsky, B. E. Shakhnovich, J. Mintseris, E. I. Shakhnovich
Structural similarity enhances interaction propensity of proteins
null
null
null
null
q-bio.BM
null
We study statistical properties of interacting protein-like surfaces and predict two strong, related effects: (i) statistically enhanced self-attraction of proteins; (ii) statistically enhanced attraction of proteins with similar structures. The effects originate in the fact that the probability of finding a pattern self-match between two identical, even randomly organized, interacting protein surfaces is always higher than the probability of a pattern match between two different, promiscuous protein surfaces. This theoretical finding explains the statistical prevalence of homodimers in protein-protein interaction networks reported earlier. Further, our findings are confirmed by the analysis of a curated database of protein complexes, which showed a highly statistically significant overrepresentation of dimers formed by structurally similar proteins with highly divergent sequences (superfamily heterodimers). We predict that a significant fraction of heterodimers evolved from homodimers under negative-design evolutionary pressure applied against promiscuous homodimer formation. This is achieved through the formation of highly specific contacts formed by charged residues, as demonstrated both in model and real superfamily heterodimers.
[ { "created": "Wed, 27 Sep 2006 01:26:50 GMT", "version": "v1" } ]
2007-05-23
[ [ "Lukatsky", "D. B.", "" ], [ "Shakhnovich", "B. E.", "" ], [ "Mintseris", "J.", "" ], [ "Shakhnovich", "E. I.", "" ] ]
We study statistical properties of interacting protein-like surfaces and predict two strong, related effects: (i) statistically enhanced self-attraction of proteins; (ii) statistically enhanced attraction of proteins with similar structures. The effects originate in the fact that the probability of finding a pattern self-match between two identical, even randomly organized, interacting protein surfaces is always higher than the probability of a pattern match between two different, promiscuous protein surfaces. This theoretical finding explains the statistical prevalence of homodimers in protein-protein interaction networks reported earlier. Further, our findings are confirmed by the analysis of a curated database of protein complexes, which showed a highly statistically significant overrepresentation of dimers formed by structurally similar proteins with highly divergent sequences (superfamily heterodimers). We predict that a significant fraction of heterodimers evolved from homodimers under negative-design evolutionary pressure applied against promiscuous homodimer formation. This is achieved through the formation of highly specific contacts formed by charged residues, as demonstrated both in model and real superfamily heterodimers.
2309.07146
Santiago Rosa
Santiago Rosa, Manuel Pulido, Juan Ruiz, Tadeo Cocucci
Transmission matrix parameter estimation of COVID-19 evolution with age compartments using ensemble-based data assimilation
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic and its multiple outbreaks have challenged governments around the world. Much of the epidemiological modeling was based on pre-pandemic contact information of the population, which changed drastically due to governmental health measures, the so-called non-pharmaceutical interventions made to reduce transmission of the virus, such as social distancing and complete lockdown. In this work, we evaluate an ensemble-based data assimilation framework applied to a meta-population model to infer the transmission of the disease between different population age groups. We perform a set of idealized twin experiments to investigate the performance of different possible parameterizations of the transmission matrix. These experiments show that it is not possible to unambiguously estimate all the independent parameters of the transmission matrix. However, under certain parameterizations, the transmission matrix in an age-compartmental model can be estimated. These estimated parameters lead to an increase in forecast accuracy in age-group compartments when assimilating age-dependent accumulated cases and deaths observed in Argentina, compared to a single-compartment model, and to reliable estimations of the effective reproduction number. The age-dependent data assimilation and forecasting of virus transmission may be important for accurate prediction and diagnosis of health care demand.
[ { "created": "Wed, 6 Sep 2023 19:42:57 GMT", "version": "v1" } ]
2023-09-15
[ [ "Rosa", "Santiago", "" ], [ "Pulido", "Manuel", "" ], [ "Ruiz", "Juan", "" ], [ "Cocucci", "Tadeo", "" ] ]
The COVID-19 pandemic and its multiple outbreaks have challenged governments around the world. Much of the epidemiological modeling was based on pre-pandemic contact information of the population, which changed drastically due to governmental health measures, the so-called non-pharmaceutical interventions made to reduce transmission of the virus, such as social distancing and complete lockdown. In this work, we evaluate an ensemble-based data assimilation framework applied to a meta-population model to infer the transmission of the disease between different population age groups. We perform a set of idealized twin experiments to investigate the performance of different possible parameterizations of the transmission matrix. These experiments show that it is not possible to unambiguously estimate all the independent parameters of the transmission matrix. However, under certain parameterizations, the transmission matrix in an age-compartmental model can be estimated. These estimated parameters lead to an increase in forecast accuracy in age-group compartments when assimilating age-dependent accumulated cases and deaths observed in Argentina, compared to a single-compartment model, and to reliable estimations of the effective reproduction number. The age-dependent data assimilation and forecasting of virus transmission may be important for accurate prediction and diagnosis of health care demand.
2103.13464
Egor Alimpiev
Egor Alimpiev, Noah A Rosenberg
Enumeration of coalescent histories for caterpillar species trees and $p$-pseudocaterpillar gene trees
null
null
10.1016/j.aam.2021.102265
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by/4.0/
For a fixed set $X$ containing $n$ taxon labels, an ordered pair consisting of a gene tree topology $G$ and a species tree $S$ bijectively labeled with the labels of $X$ possesses a set of coalescent histories -- mappings from the set of internal nodes of $G$ to the set of edges of $S$ describing possible lists of edges in $S$ on which the coalescences in $G$ take place. Enumerations of coalescent histories for gene trees and species trees have produced suggestive results regarding the pairs $(G,S)$ that, for a fixed $n$, have the largest number of coalescent histories. We define a class of 2-cherry binary tree topologies that we term $p$-pseudocaterpillars, examining coalescent histories for non-matching pairs $(G,S)$, in the case in which $S$ has a caterpillar shape and $G$ has a $p$-pseudocaterpillar shape. Using a construction that associates coalescent histories for $(G,S)$ with a class of "roadblocked" monotonic paths, we identify the $p$-pseudocaterpillar labeled gene tree topology that, for a fixed caterpillar labeled species tree topology, gives rise to the largest number of coalescent histories. The shape that maximizes the number of coalescent histories places the "second" cherry of the $p$-pseudocaterpillar equidistantly from the root of the "first" cherry and from the tree root. A symmetry in the numbers of coalescent histories for $p$-pseudocaterpillar gene trees and caterpillar species trees is seen to exist around the maximizing value of the parameter $p$. The results provide insight into the factors that influence the number of coalescent histories possible for a given gene tree and species tree.
[ { "created": "Wed, 24 Mar 2021 19:48:36 GMT", "version": "v1" } ]
2022-05-24
[ [ "Alimpiev", "Egor", "" ], [ "Rosenberg", "Noah A", "" ] ]
For a fixed set $X$ containing $n$ taxon labels, an ordered pair consisting of a gene tree topology $G$ and a species tree $S$ bijectively labeled with the labels of $X$ possesses a set of coalescent histories -- mappings from the set of internal nodes of $G$ to the set of edges of $S$ describing possible lists of edges in $S$ on which the coalescences in $G$ take place. Enumerations of coalescent histories for gene trees and species trees have produced suggestive results regarding the pairs $(G,S)$ that, for a fixed $n$, have the largest number of coalescent histories. We define a class of 2-cherry binary tree topologies that we term $p$-pseudocaterpillars, examining coalescent histories for non-matching pairs $(G,S)$, in the case in which $S$ has a caterpillar shape and $G$ has a $p$-pseudocaterpillar shape. Using a construction that associates coalescent histories for $(G,S)$ with a class of "roadblocked" monotonic paths, we identify the $p$-pseudocaterpillar labeled gene tree topology that, for a fixed caterpillar labeled species tree topology, gives rise to the largest number of coalescent histories. The shape that maximizes the number of coalescent histories places the "second" cherry of the $p$-pseudocaterpillar equidistantly from the root of the "first" cherry and from the tree root. A symmetry in the numbers of coalescent histories for $p$-pseudocaterpillar gene trees and caterpillar species trees is seen to exist around the maximizing value of the parameter $p$. The results provide insight into the factors that influence the number of coalescent histories possible for a given gene tree and species tree.
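A hedged illustration of the path-counting idea invoked above: for the simplest matching case, a caterpillar gene tree paired with the identically labeled caterpillar species tree on n taxa, the number of coalescent histories is classically the Catalan number C_{n-1}, obtainable by counting monotonic lattice paths that stay weakly below the diagonal (the paper's "roadblocked" paths refine this construction for the non-matching p-pseudocaterpillar case). The dynamic program below checks this correspondence.

```python
# Hedged illustration: for a caterpillar gene tree matching its species
# tree on n taxa, the number of coalescent histories is the Catalan
# number C_{n-1}, countable as monotonic lattice paths staying weakly
# below the diagonal (the paper's "roadblocked" paths refine this).
from math import comb

def catalan(m):
    return comb(2 * m, m) // (m + 1)

def monotonic_paths(m):
    """Paths (0,0) -> (m,m) by unit right/up steps, keeping j <= i."""
    ways = [[0] * (m + 1) for _ in range(m + 1)]
    ways[0][0] = 1
    for i in range(m + 1):
        for j in range(i + 1):                # j <= i: stay below diagonal
            if i > 0:
                ways[i][j] += ways[i - 1][j]
            if j > 0:
                ways[i][j] += ways[i][j - 1]
    return ways[m][m]

for n in range(2, 9):
    assert monotonic_paths(n - 1) == catalan(n - 1)
print([monotonic_paths(n - 1) for n in range(2, 9)])  # 1, 2, 5, 14, 42, 132, 429
```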
1910.04918
Okyaz Eminaga
Okyaz Eminaga, Yuri Tolkach, Christian Kunder, Mahmood Abbas, Ryan Han, Rosalie Nolley, Axel Semjonow, Martin Boegemann, Sebastian Huss, Andreas Loening, Robert West, Geoffrey Sonn, Richard Fan, Olaf Bettendorf, James Brook and Daniel Rubin
Deep Learning for Prostate Pathology
null
null
null
null
q-bio.TO cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current study detects different morphologies related to prostate pathology using deep learning models; these models were evaluated on 2,121 hematoxylin and eosin (H&E) stained histology images captured using bright-field microscopy, which spanned a variety of image qualities, origins (whole slide, tissue microarray, whole mount, Internet), scanning machines, timestamps, H&E staining protocols, and institutions. As a use case, these models were applied to the annotation tasks in clinician-oriented pathology reports for prostatectomy specimens. The true positive rate (TPR) for slides with prostate cancer was 99.7% at a false positive rate of 0.785%. The F1-scores of Gleason patterns reported in pathology reports ranged from 0.795 to 1.0 at the case level. The TPR was 93.6% for the cribriform morphology and 72.6% for the ductal morphology. The correlation between the ground truth and the prediction for the relative tumor volume was 0.987. Our models cover the major components of prostate pathology and successfully accomplish the annotation tasks.
[ { "created": "Fri, 11 Oct 2019 00:10:59 GMT", "version": "v1" }, { "created": "Mon, 14 Oct 2019 07:34:27 GMT", "version": "v2" }, { "created": "Wed, 16 Oct 2019 00:14:28 GMT", "version": "v3" } ]
2019-10-17
[ [ "Eminaga", "Okyaz", "" ], [ "Tolkach", "Yuri", "" ], [ "Kunder", "Christian", "" ], [ "Abbas", "Mahmood", "" ], [ "Han", "Ryan", "" ], [ "Nolley", "Rosalie", "" ], [ "Semjonow", "Axel", "" ], [ "Boegemann", "Martin", "" ], [ "Huss", "Sebastian", "" ], [ "Loening", "Andreas", "" ], [ "West", "Robert", "" ], [ "Sonn", "Geoffrey", "" ], [ "Fan", "Richard", "" ], [ "Bettendorf", "Olaf", "" ], [ "Brook", "James", "" ], [ "Rubin", "Daniel", "" ] ]
The current study detects different morphologies related to prostate pathology using deep learning models; these models were evaluated on 2,121 hematoxylin and eosin (H&E) stained histology images captured using bright-field microscopy, which spanned a variety of image qualities, origins (whole slide, tissue microarray, whole mount, Internet), scanning machines, timestamps, H&E staining protocols, and institutions. As a use case, these models were applied to the annotation tasks in clinician-oriented pathology reports for prostatectomy specimens. The true positive rate (TPR) for slides with prostate cancer was 99.7% at a false positive rate of 0.785%. The F1-scores of Gleason patterns reported in pathology reports ranged from 0.795 to 1.0 at the case level. The TPR was 93.6% for the cribriform morphology and 72.6% for the ductal morphology. The correlation between the ground truth and the prediction for the relative tumor volume was 0.987. Our models cover the major components of prostate pathology and successfully accomplish the annotation tasks.
q-bio/0511048
Jonathan Coe
J. B. Coe and Y. Mao
Gompertz mortality law and scaling behaviour of the Penna model
5 pages, 3 figures
Physical Review E 72, 051925, (2005)
10.1103/PhysRevE.72.051925
null
q-bio.PE
null
The Penna model is a model of evolutionary ageing through mutation accumulation, in which time and the age of an organism are traditionally treated as discrete variables and an organism's genome is represented by a binary bit string. We reformulate the asexual Penna model and show that a universal scale invariance emerges as we increase the number of discrete genome bits to the limit of a continuum. The continuum model, introduced by Almeida and Thomas in [Int. J. Mod. Phys. C, 11, 1209 (2000)], can be recovered from the discrete model in the limit of infinite bits coupled with a vanishing mutation rate per bit. Finally, we show that scale-invariant properties may lead to the ubiquitous Gompertz law for mortality rates at early ages, a law that is generally regarded as being empirical.
[ { "created": "Tue, 29 Nov 2005 14:13:53 GMT", "version": "v1" } ]
2007-05-23
[ [ "Coe", "J. B.", "" ], [ "Mao", "Y.", "" ] ]
The Penna model is a model of evolutionary ageing through mutation accumulation, in which time and the age of an organism are traditionally treated as discrete variables and an organism's genome is represented by a binary bit string. We reformulate the asexual Penna model and show that a universal scale invariance emerges as we increase the number of discrete genome bits to the limit of a continuum. The continuum model, introduced by Almeida and Thomas in [Int. J. Mod. Phys. C, 11, 1209 (2000)], can be recovered from the discrete model in the limit of infinite bits coupled with a vanishing mutation rate per bit. Finally, we show that scale-invariant properties may lead to the ubiquitous Gompertz law for mortality rates at early ages, a law that is generally regarded as being empirical.
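For reference, the Gompertz law mentioned in the closing sentence states that the mortality (hazard) rate grows exponentially with age; survivorship follows by integrating the hazard. In the notation below, A and beta are empirical constants.

```latex
% Gompertz mortality law: the hazard rate mu(x) grows exponentially with
% age x (A, beta empirical constants); survivorship S(x) follows by
% integrating the hazard.
\[
  \mu(x) = A\,e^{\beta x},
  \qquad
  S(x) = \exp\!\left[-\int_0^x \mu(t)\,dt\right]
       = \exp\!\left[\frac{A}{\beta}\left(1 - e^{\beta x}\right)\right].
\]
```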
2311.18219
Yuan Liu
Yuan Liu and Hong-Bin Shen
FoldExplorer: Fast and Accurate Protein Structure Search with Sequence-Enhanced Graph Embedding
14 pages, 8 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The advent of highly accurate protein structure prediction methods has fueled an exponential expansion of the protein structure database. Consequently, there is a rising demand for rapid and precise structural homolog search. Traditional alignment-based methods are dedicated to precise pairwise comparisons and exhibit high accuracy. However, their sluggish processing speed is no longer adequate for managing the current massive volume of data. In response to this challenge, we propose a novel deep-learning approach, FoldExplorer. It harnesses the powerful capabilities of graph attention neural networks and protein large language models to process protein structure and sequence data and generate embeddings for protein structures. These structural embeddings can be used for fast and accurate protein search, and they also provide insights into the protein space. FoldExplorer demonstrates a substantial performance improvement of 5% to 8% over the current state-of-the-art algorithm on the benchmark datasets. Meanwhile, FoldExplorer does not compromise on search speed and excels particularly when searching large-scale datasets.
[ { "created": "Thu, 30 Nov 2023 03:29:20 GMT", "version": "v1" } ]
2023-12-01
[ [ "Liu", "Yuan", "" ], [ "Shen", "Hong-Bin", "" ] ]
The advent of highly accurate protein structure prediction methods has fueled an exponential expansion of the protein structure database. Consequently, there is a rising demand for rapid and precise structural homolog search. Traditional alignment-based methods are dedicated to precise pairwise comparisons and exhibit high accuracy. However, their sluggish processing speed is no longer adequate for managing the current massive volume of data. In response to this challenge, we propose a novel deep-learning approach, FoldExplorer. It harnesses the powerful capabilities of graph attention neural networks and protein large language models to process protein structure and sequence data and generate embeddings for protein structures. These structural embeddings can be used for fast and accurate protein search, and they also provide insights into the protein space. FoldExplorer demonstrates a substantial performance improvement of 5% to 8% over the current state-of-the-art algorithm on the benchmark datasets. Meanwhile, FoldExplorer does not compromise on search speed and excels particularly when searching large-scale datasets.
1306.5261
Kirk Lohmueller
Kirk E. Lohmueller
The impact of population demography and selection on the genetic architecture of complex traits
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population genetic studies have found evidence for dramatic population growth in recent human history. It is unclear how this recent population growth, combined with the effects of negative natural selection, has affected patterns of deleterious variation, as well as the number, frequencies, and effect sizes of mutations that contribute risk to complex traits. Here I use simulations under population genetic models where a proportion of the heritability of the trait is accounted for by mutations in a subset of the exome. I show that recent population growth increases the proportion of nonsynonymous variants segregating in the population, but does not affect the genetic load relative to that in a population that did not expand. Under a model where a mutation's effect on a trait is correlated with its effect on fitness, rare variants explain a greater portion of the additive genetic variance of the trait in a population that has recently expanded than in a population that did not recently expand. Further, when using a single-marker test, for a given false-positive rate and sample size, recent population growth decreases the expected number of significant associations with the trait relative to the number detected in a population that did not expand. However, in a model where there is no correlation between a mutation's effect on fitness and its effect on the trait, common variants account for much of the additive genetic variance, regardless of demography. Moreover, in this case demography does not affect the number of significant associations detected. These findings suggest that recent population history may be an important factor influencing the power of association tests in accounting for the missing heritability of certain complex traits.
[ { "created": "Fri, 21 Jun 2013 21:53:31 GMT", "version": "v1" }, { "created": "Sun, 9 Feb 2014 06:29:25 GMT", "version": "v2" } ]
2014-02-11
[ [ "Lohmueller", "Kirk E.", "" ] ]
Population genetic studies have found evidence for dramatic population growth in recent human history. It is unclear how this recent population growth, combined with the effects of negative natural selection, has affected patterns of deleterious variation, as well as the number, frequencies, and effect sizes of mutations that contribute risk to complex traits. Here I use simulations under population genetic models where a proportion of the heritability of the trait is accounted for by mutations in a subset of the exome. I show that recent population growth increases the proportion of nonsynonymous variants segregating in the population, but does not affect the genetic load relative to that in a population that did not expand. Under a model where a mutation's effect on a trait is correlated with its effect on fitness, rare variants explain a greater portion of the additive genetic variance of the trait in a population that has recently expanded than in a population that did not recently expand. Further, when using a single-marker test, for a given false-positive rate and sample size, recent population growth decreases the expected number of significant associations with the trait relative to the number detected in a population that did not expand. However, in a model where there is no correlation between a mutation's effect on fitness and its effect on the trait, common variants account for much of the additive genetic variance, regardless of demography. Moreover, in this case demography does not affect the number of significant associations detected. These findings suggest that recent population history may be an important factor influencing the power of association tests in accounting for the missing heritability of certain complex traits.
1401.2331
Helene Loevenbruck
Lucile Rapin (GIPSA-lab), Marion Dohen (GIPSA-lab), Mircea Polosan (GIN), Pascal Perrier (GIPSA-lab), H\'el\`ene Loevenbruck (GIPSA-lab, LPNC)
An EMG study of the lip muscles during covert auditory verbal hallucinations in schizophrenia
null
Journal of Speech, Language, and Hearing Research 56 (2013) S1882-S1893
10.1044/1092-4388(2013/12-0210)
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: Auditory verbal hallucinations (AVHs) are speech perceptions in the absence of external stimulation. An influential theoretical account of AVHs in schizophrenia claims that a deficit in inner speech monitoring would cause the verbal thoughts of the patient to be perceived as external voices. The account is based on a predictive control model in which verbal self-monitoring is implemented. The aim of this study was to examine lip muscle activity during AVHs in schizophrenia patients, in order to check whether inner speech occurred. Methods: Lip muscle activity was recorded during covert AVHs (without articulation) and rest. Surface electromyography (EMG) was used on eleven schizophrenia patients. Results: Our results show an increase in EMG activity in the orbicularis oris inferior muscle during covert AVHs relative to rest. This increase is not due to general muscular tension, since there was no increase in muscular activity in the forearm muscle. Conclusion: This evidence that AVHs might be self-generated inner speech is discussed in the framework of a predictive control model. Further work is needed to better describe how the inner speech monitoring dysfunction occurs and how inner speech is controlled and monitored. This will help in better understanding how AVHs occur.
[ { "created": "Fri, 10 Jan 2014 14:01:56 GMT", "version": "v1" } ]
2014-01-13
[ [ "Rapin", "Lucile", "", "GIPSA-lab" ], [ "Dohen", "Marion", "", "GIPSA-lab" ], [ "Polosan", "Mircea", "", "GIN" ], [ "Perrier", "Pascal", "", "GIPSA-lab" ], [ "Loevenbruck", "Hélène", "", "GIPSA-lab, LPNC" ] ]
Purpose: Auditory verbal hallucinations (AVHs) are speech perceptions in the absence of external stimulation. An influential theoretical account of AVHs in schizophrenia claims that a deficit in inner speech monitoring would cause the verbal thoughts of the patient to be perceived as external voices. The account is based on a predictive control model in which verbal self-monitoring is implemented. The aim of this study was to examine lip muscle activity during AVHs in schizophrenia patients, in order to check whether inner speech occurred. Methods: Lip muscle activity was recorded during covert AVHs (without articulation) and rest. Surface electromyography (EMG) was used on eleven schizophrenia patients. Results: Our results show an increase in EMG activity in the orbicularis oris inferior muscle during covert AVHs relative to rest. This increase is not due to general muscular tension, since there was no increase in muscular activity in the forearm muscle. Conclusion: This evidence that AVHs might be self-generated inner speech is discussed in the framework of a predictive control model. Further work is needed to better describe how the inner speech monitoring dysfunction occurs and how inner speech is controlled and monitored. This will help in better understanding how AVHs occur.
1604.04913
Kevin Leder
Qie He, Junfeng Zhu, David Dingli, Jasmine Foo, Kevin Leder
Optimized Treatment Schedules for Chronic Myeloid Leukemia
26 pages, 7 figures
null
10.1371/journal.pcbi.1005129
null
q-bio.TO math.OC q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past decade, several targeted therapies (e.g. imatinib, dasatinib, nilotinib) have been developed to treat Chronic Myeloid Leukemia (CML). Despite an initial response to therapy, drug resistance remains a problem for some CML patients. Recent studies have shown that resistance mutations that preexist treatment can be detected in a substantial number of patients, and that this may be associated with eventual treatment failure. One proposed method to extend treatment efficacy is to use a combination of multiple targeted therapies. However, the design of such combination therapies (timing, sequence, etc.) remains an open challenge. In this work we mathematically model the dynamics of CML response to combination therapy and analyze the impact of combination treatment schedules on treatment efficacy in patients with preexisting resistance. We then propose an optimization problem to find the best schedule of multiple therapies based on the evolution of CML according to our ordinary differential equation model. The resulting optimization problem is nontrivial due to the presence of ordinary differential equation constraints and integer variables. Our model also incorporates realistic drug toxicity constraints by tracking the dynamics of patient neutrophil counts in response to therapy. Using realistic parameter estimates, we determine optimal combination strategies that maximize time until treatment failure.
[ { "created": "Sun, 17 Apr 2016 19:16:17 GMT", "version": "v1" } ]
2017-02-08
[ [ "He", "Qie", "" ], [ "Zhu", "Junfeng", "" ], [ "Dingli", "David", "" ], [ "Foo", "Jasmine", "" ], [ "Leder", "Kevin", "" ] ]
Over the past decade, several targeted therapies (e.g. imatinib, dasatinib, nilotinib) have been developed to treat Chronic Myeloid Leukemia (CML). Despite an initial response to therapy, drug resistance remains a problem for some CML patients. Recent studies have shown that resistance mutations that preexist treatment can be detected in a substantial number of patients, and that this may be associated with eventual treatment failure. One proposed method to extend treatment efficacy is to use a combination of multiple targeted therapies. However, the design of such combination therapies (timing, sequence, etc.) remains an open challenge. In this work we mathematically model the dynamics of CML response to combination therapy and analyze the impact of combination treatment schedules on treatment efficacy in patients with preexisting resistance. We then propose an optimization problem to find the best schedule of multiple therapies based on the evolution of CML according to our ordinary differential equation model. The resulting optimization problem is nontrivial due to the presence of ordinary differential equation constraints and integer variables. Our model also incorporates realistic drug toxicity constraints by tracking the dynamics of patient neutrophil counts in response to therapy. Using realistic parameter estimates, we determine optimal combination strategies that maximize time until treatment failure.
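As a hedged sketch of the kind of schedule optimization described above (not the authors' model, which also enforces toxicity constraints via neutrophil counts), the toy problem below chooses one of two drugs per treatment period to minimize the final burden of a sensitive clone plus a resistant clone that only the second drug affects; all rates are illustrative assumptions, and brute-force enumeration stands in for the paper's integer programming.

```python
# Hedged toy version of the scheduling problem (not the authors' model,
# which also enforces neutrophil-count toxicity constraints). Choose one
# of two drugs per period to minimize the final burden of a sensitive
# clone s and a resistant clone r; all rates are illustrative.
import itertools

GROWTH = 0.5
KILL = {1: (1.5, 0.0),   # drug 1: strong on s, useless against r
        2: (0.2, 0.8)}   # drug 2: weak on s, effective against r

def simulate(schedule, dt=0.1, period=30.0):
    s, r = 1.0, 0.01
    for drug in schedule:
        ks, kr = KILL[drug]
        for _ in range(int(period / dt)):    # forward-Euler integration
            s += (GROWTH - ks) * s * dt
            r += (GROWTH - kr) * r * dt
    return s + r

best = min(itertools.product([1, 2], repeat=6), key=simulate)
print("best 6-period schedule:", best, "final burden:", simulate(best))
# In this linear toy the per-period factors commute, so only the count of
# each drug matters; the search settles on two periods of drug 1 and four
# of drug 2, balancing control of the two clones.
```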
1504.03940
Sergei Kozyrev
S.V. Kozyrev
Model of protein fragments and statistical potentials
17 pages, some discussion is added or improved
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We discuss a model of protein conformations in which conformations are combinations of short fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to the smallness of the epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence, and discuss a model of the free energy of a protein in which the free energy is equal to a sum of statistical potentials of the fragments. We also consider the contribution of contacts between fragments to the energy of a protein conformation, and the contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss applications of this model to the problem of predicting the native conformation of a protein from its primary structure and to the description of protein dynamics. A modification of structural alignment that takes into account statistical potentials for protein fragments is considered, and an application to the threading procedure for proteins is discussed.
[ { "created": "Wed, 15 Apr 2015 15:16:49 GMT", "version": "v1" }, { "created": "Sat, 29 Aug 2015 11:41:21 GMT", "version": "v2" }, { "created": "Tue, 5 Jan 2016 09:28:32 GMT", "version": "v3" }, { "created": "Sun, 3 Jul 2016 18:05:52 GMT", "version": "v4" } ]
2016-07-05
[ [ "Kozyrev", "S. V.", "" ] ]
We discuss a model of protein conformations in which conformations are combinations of short fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to the smallness of the epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence, and discuss a model of the free energy of a protein in which the free energy is equal to a sum of statistical potentials of the fragments. We also consider the contribution of contacts between fragments to the energy of a protein conformation, and the contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss applications of this model to the problem of predicting the native conformation of a protein from its primary structure and to the description of protein dynamics. A modification of structural alignment that takes into account statistical potentials for protein fragments is considered, and an application to the threading procedure for proteins is discussed.
1812.09137
Florian Hartig
Florian Hartig
Simulation Modeling
Chapter for Oxford Bibliographies in Ecology, Editor David Gibson
Oxford Bibliographies in Ecology, 2017
10.1093/OBO/9780199830060-0189
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rise of computers, simulation models have emerged beside the more traditional statistical and mathematical models as a third pillar for ecological analysis. Broadly speaking, a simulation model is an algorithm, typically implemented as a computer program, which propagates the states of a system forward. Unlike in a mathematical model, however, this propagation does not employ the methods of calculus but rather a set of rules or formulae that directly prescribe the next state. Such an algorithmic model specification is particularly suited for describing systems that are difficult to capture or analyze with differential equations such as: (a) systems that are highly nonlinear or chaotic; (b) discrete systems, for example networks or groups of distinct individuals; (c) systems that are stochastic; and (d) systems that are too complex to be successfully treated with classical calculus. As these situations are frequently encountered in ecology, simulation models are now widely applied across the discipline. They have been instrumental in developing new insights into classical questions of species' coexistence, community assembly, population dynamics, biogeography, and many more. The methods for this relatively young field are still being actively developed, and practical work with simulation models requires ecologists to learn new skills such as coding, sensitivity analysis, calibration, validation, and forecasting uncertainties. Moreover, scientific inquiry with complex systems has led to subtle changes to the philosophical and epistemological views regarding simplicity, reductionism, and the relationship between prediction and understanding.
[ { "created": "Fri, 21 Dec 2018 14:17:52 GMT", "version": "v1" } ]
2018-12-24
[ [ "Hartig", "Florian", "" ] ]
With the rise of computers, simulation models have emerged beside the more traditional statistical and mathematical models as a third pillar for ecological analysis. Broadly speaking, a simulation model is an algorithm, typically implemented as a computer program, which propagates the states of a system forward. Unlike in a mathematical model, however, this propagation does not employ the methods of calculus but rather a set of rules or formulae that directly prescribe the next state. Such an algorithmic model specification is particularly suited for describing systems that are difficult to capture or analyze with differential equations such as: (a) systems that are highly nonlinear or chaotic; (b) discrete systems, for example networks or groups of distinct individuals; (c) systems that are stochastic; and (d) systems that are too complex to be successfully treated with classical calculus. As these situations are frequently encountered in ecology, simulation models are now widely applied across the discipline. They have been instrumental in developing new insights into classical questions of species' coexistence, community assembly, population dynamics, biogeography, and many more. The methods for this relatively young field are still being actively developed, and practical work with simulation models requires ecologists to learn new skills such as coding, sensitivity analysis, calibration, validation, and forecasting uncertainties. Moreover, scientific inquiry with complex systems has led to subtle changes to the philosophical and epistemological views regarding simplicity, reductionism, and the relationship between prediction and understanding.
1609.02959
Steven Frank
Steven A. Frank
Puzzles in modern biology. II. Language, cancer and the recursive processes of evolutionary innovation
null
F1000Research 5:2089 (2016)
10.12688/f1000research.9568.1
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Human language emerged abruptly. Diverse body forms evolved suddenly. Seed-bearing plants spread rapidly. How do complex evolutionary innovations arise so quickly? Resolving alternative claims remains difficult. The great events of the past happened a long time ago. Cancer provides a model to study evolutionary innovation. A tumor must evolve many novel traits to become an aggressive cancer. I use what we know or could study about cancer to describe the key processes of innovation. In general, evolutionary systems form a hierarchy of recursive processes. Those recursive processes determine the rates at which innovations are generated, spread and transmitted. I relate the recursive processes to abrupt evolutionary innovation.
[ { "created": "Fri, 9 Sep 2016 22:16:22 GMT", "version": "v1" } ]
2016-09-13
[ [ "Frank", "Steven A.", "" ] ]
Human language emerged abruptly. Diverse body forms evolved suddenly. Seed-bearing plants spread rapidly. How do complex evolutionary innovations arise so quickly? Resolving alternative claims remains difficult. The great events of the past happened a long time ago. Cancer provides a model to study evolutionary innovation. A tumor must evolve many novel traits to become an aggressive cancer. I use what we know or could study about cancer to describe the key processes of innovation. In general, evolutionary systems form a hierarchy of recursive processes. Those recursive processes determine the rates at which innovations are generated, spread and transmitted. I relate the recursive processes to abrupt evolutionary innovation.
1910.04932
Ali Sadeghian
Hyun Choi, Ali Sadeghian, Sergio Marconi, Ethan White, Daisy Zhe Wang
Measuring Impact of Climate Change on Tree Species: analysis of JSDM on FIA data
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trees, one of our most vital resources, are among the first living things affected by changes in the climate. In this study, tree species interactions and the response to climate in different ecological environments are observed by applying a joint species distribution model to different ecological domains in the United States. Joint species distribution models are useful for learning inter-species relationships and species responses to the environment. The climate's impact on tree species is measured through species abundance in an area. We compare the model's performance across all ecological domains and study the sensitivity of the climate variables. With the prediction of abundances, tree species populations can be projected into the future, allowing the impact of climate change on tree populations to be measured.
[ { "created": "Fri, 11 Oct 2019 01:35:13 GMT", "version": "v1" } ]
2019-10-14
[ [ "Choi", "Hyun", "" ], [ "Sadeghian", "Ali", "" ], [ "Marconi", "Sergio", "" ], [ "White", "Ethan", "" ], [ "Wang", "Daisy Zhe", "" ] ]
Trees, one of our most vital resources, are among the first living things affected by changes in the climate. In this study, tree species interactions and the response to climate in different ecological environments are observed by applying a joint species distribution model to different ecological domains in the United States. Joint species distribution models are useful for learning inter-species relationships and species responses to the environment. The climate's impact on tree species is measured through species abundance in an area. We compare the model's performance across all ecological domains and study the sensitivity of the climate variables. With the prediction of abundances, tree species populations can be projected into the future, allowing the impact of climate change on tree populations to be measured.
q-bio/0508034
B. J. Powell
Paul Meredith, B. J. Powell, Jennifer Riesz, Stephen Nighswander-Rempel, Mark R. Pederson, and Evan Moore
Towards Structure-Property-Function Relationships for Eumelanin
19 pages, 8 figures, Invited highlight article for Soft Matter
Soft Matter, 2006, 2(1), 37 - 44
10.1039/b511922g
null
q-bio.BM q-bio.TO
null
We discuss recent progress towards the establishment of important structure-property-function relationships in eumelanins - key functional bio-macromolecular systems responsible for photo-protection and immune response in humans, and implicated in the development of melanoma skin cancer. We focus on the link between eumelanin's secondary structure and optical properties such as broadband UV-visible absorption and strong non-radiative relaxation, both key features of the photo-protective function. We emphasise the insights gained through a holistic approach combining optical spectroscopy with first principles quantum chemical calculations, and advance the hypothesis that the robust functionality characteristic of eumelanin is related to extreme chemical and structural disorder at the secondary level. This inherent disorder is a low cost natural resource, and it is interesting to speculate as to whether it may play a role in other functional bio-macromolecular systems.
[ { "created": "Wed, 24 Aug 2005 02:26:02 GMT", "version": "v1" } ]
2007-05-23
[ [ "Meredith", "Paul", "" ], [ "Powell", "B. J.", "" ], [ "Riesz", "Jennifer", "" ], [ "Nighswander-Rempel", "Stephen", "" ], [ "Pederson", "Mark R.", "" ], [ "Moore", "Evan", "" ] ]
We discuss recent progress towards the establishment of important structure-property-function relationships in eumelanins - key functional bio-macromolecular systems responsible for photo-protection and immune response in humans, and implicated in the development of melanoma skin cancer. We focus on the link between eumelanin's secondary structure and optical properties such as broadband UV-visible absorption and strong non-radiative relaxation, both key features of the photo-protective function. We emphasise the insights gained through a holistic approach combining optical spectroscopy with first principles quantum chemical calculations, and advance the hypothesis that the robust functionality characteristic of eumelanin is related to extreme chemical and structural disorder at the secondary level. This inherent disorder is a low cost natural resource, and it is interesting to speculate as to whether it may play a role in other functional bio-macromolecular systems.
1507.02562
Thomas R. Sokolowski
Thomas R. Sokolowski, Aleksandra M. Walczak, William Bialek and Ga\v{s}per Tka\v{c}ik
Extending the dynamic range of transcription factor action by translational regulation
14 pages, 5 figures
Phys. Rev. E 93, 022404 (2016)
10.1103/PhysRevE.93.022404
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A crucial step in the regulation of gene expression is binding of transcription factor (TF) proteins to regulatory sites along the DNA. But transcription factors act at nanomolar concentrations, and noise due to random arrival of these molecules at their binding sites can severely limit the precision of regulation. Recent work on the optimization of information flow through regulatory networks indicates that the lower end of the dynamic range of concentrations is simply inaccessible, overwhelmed by the impact of this noise. Motivated by the behavior of homeodomain proteins, such as the maternal morphogen Bicoid in the fruit fly embryo, we suggest a scheme in which transcription factors also act as indirect translational regulators, binding to the mRNA of other transcription factors. Intuitively, each mRNA molecule acts as an independent sensor of the TF concentration, and averaging over these multiple sensors reduces the noise. We analyze information flow through this new scheme and identify conditions under which it outperforms direct transcriptional regulation. Our results suggest that the dual role of homeodomain proteins is not just a historical accident, but a solution to a crucial physics problem in the regulation of gene expression.
[ { "created": "Thu, 9 Jul 2015 15:37:13 GMT", "version": "v1" } ]
2016-02-10
[ [ "Sokolowski", "Thomas R.", "" ], [ "Walczak", "Aleksandra M.", "" ], [ "Bialek", "William", "" ], [ "Tkačik", "Gašper", "" ] ]
A crucial step in the regulation of gene expression is binding of transcription factor (TF) proteins to regulatory sites along the DNA. But transcription factors act at nanomolar concentrations, and noise due to random arrival of these molecules at their binding sites can severely limit the precision of regulation. Recent work on the optimization of information flow through regulatory networks indicates that the lower end of the dynamic range of concentrations is simply inaccessible, overwhelmed by the impact of this noise. Motivated by the behavior of homeodomain proteins, such as the maternal morphogen Bicoid in the fruit fly embryo, we suggest a scheme in which transcription factors also act as indirect translational regulators, binding to the mRNA of other transcription factors. Intuitively, each mRNA molecule acts as an independent sensor of the TF concentration, and averaging over these multiple sensors reduces the noise. We analyze information flow through this new scheme and identify conditions under which it outperforms direct transcriptional regulation. Our results suggest that the dual role of homeodomain proteins is not just a historical accident, but a solution to a crucial physics problem in the regulation of gene expression.
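The "averaging over multiple sensors" intuition in the abstract above is the standard variance reduction for independent estimates: if each of N mRNA molecules reports the transcription-factor concentration with independent error of variance sigma^2, the averaged readout has variance sigma^2/N. In symbols:

```latex
% Averaging N independent sensors of the TF concentration c: each mRNA
% reports c_i = c + eta_i with Var(eta_i) = sigma^2, so the mean has
% variance sigma^2 / N and the noise amplitude shrinks as 1/sqrt(N).
\[
  \hat{c} \;=\; \frac{1}{N}\sum_{i=1}^{N} c_i,
  \qquad
  \operatorname{Var}(\hat{c}) \;=\; \frac{\sigma^{2}}{N}.
\]
```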
1411.7916
Thomas Pfeil
Thomas Pfeil, Jakob Jordan, Tom Tetzlaff, Andreas Gr\"ubl, Johannes Schemmel, Markus Diesmann, Karlheinz Meier
The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study
20 pages, 10 figures, supplements
Phys. Rev. X 6, 021023 (2016)
10.1103/PhysRevX.6.021023
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-level brain function such as memory, classification or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
[ { "created": "Fri, 28 Nov 2014 15:51:59 GMT", "version": "v1" }, { "created": "Fri, 3 Jul 2015 12:38:05 GMT", "version": "v2" }, { "created": "Fri, 19 Feb 2016 20:30:02 GMT", "version": "v3" }, { "created": "Thu, 9 Jun 2016 14:23:52 GMT", "version": "v4" } ]
2016-06-10
[ [ "Pfeil", "Thomas", "" ], [ "Jordan", "Jakob", "" ], [ "Tetzlaff", "Tom", "" ], [ "Grübl", "Andreas", "" ], [ "Schemmel", "Johannes", "" ], [ "Diesmann", "Markus", "" ], [ "Meier", "Karlheinz", "" ] ]
High-level brain function such as memory, classification or reasoning can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear sub-threshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with non-linear, conductance-based synapses. Emulations of these networks on the analog neuromorphic hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm ...
0706.0117
Toby Johnson
Toby Johnson
Reciprocal best hits are not a logically sufficient condition for orthology
null
null
null
null
q-bio.GN
null
It is common to use reciprocal best hits, also known as a boomerang criterion, for determining orthology between sequences. The best hits may be found by BLAST, or by other more recently developed algorithms. Previous work seems to have assumed that reciprocal best hits is a sufficient but not necessary condition for orthology. In this article, I explain why reciprocal best hits cannot logically be a sufficient condition for orthology. If reciprocal best hits is neither sufficient nor necessary for orthology, it would seem worthwhile to examine further the logical foundations of some unsupervised algorithms that are used to identify orthologs.
[ { "created": "Fri, 1 Jun 2007 10:19:11 GMT", "version": "v1" } ]
2007-06-04
[ [ "Johnson", "Toby", "" ] ]
It is common to use reciprocal best hits, also known as a boomerang criterion, for determining orthology between sequences. The best hits may be found by BLAST, or by other more recently developed algorithms. Previous work seems to have assumed that reciprocal best hits is a sufficient but not necessary condition for orthology. In this article, I explain why reciprocal best hits cannot logically be a sufficient condition for orthology. If reciprocal best hits is neither sufficient nor necessary for orthology, it would seem worthwhile to examine further the logical foundations of some unsupervised algorithms that are used to identify orthologs.
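For readers unfamiliar with the criterion under discussion, the sketch below computes reciprocal best hits from a toy similarity matrix (random numbers standing in for BLAST bit scores; all names are hypothetical). Passing this filter is exactly the "boomerang" condition, and, as the article argues, it is not by itself logically sufficient for orthology.

```python
# Toy reciprocal-best-hit computation; random scores stand in for BLAST
# bit scores, and all names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((5, 7))        # genome A genes (rows) vs genome B (cols)

best_in_B = scores.argmax(axis=1)  # best hit in B for each gene of A
best_in_A = scores.argmax(axis=0)  # best hit in A for each gene of B

rbh = [(i, j) for i, j in enumerate(best_in_B) if best_in_A[j] == i]
print("reciprocal best hits (A_gene, B_gene):", rbh)
# Each gene appears in at most one pair; satisfying this criterion does
# not by itself establish orthology.
```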
0710.0383
Mauro Mobilia Dr
Tobias Reichenbach, Mauro Mobilia, Erwin Frey
Noise and Correlations in a Spatial Population Model with Cyclic Competition
4 pages of main text, 3 color figures + 2 pages of supplementary material (EPAPS Document). Final version for Physical Review Letters
Phys. Rev. Lett. 99, 238105 (2007)
10.1103/PhysRevLett.99.238105
LMU-ASC 66/07
q-bio.PE cond-mat.stat-mech physics.bio-ph q-bio.QM
null
Noise and spatial degrees of freedom characterize most ecosystems. Some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments [e.g. B. Kerr et al., Nature {\bf 418}, 171 (2002)]. To reach a better theoretical understanding of these phenomena, we consider a paradigmatic spatial model where three species exhibit cyclic dominance. Using an individual-based description, as well as stochastic partial differential and deterministic reaction-diffusion equations, we account for stochastic fluctuations and spatial diffusion at different levels, and show how fascinating patterns of entangled spirals emerge. We rationalize our analysis by computing the spatio-temporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves.
[ { "created": "Tue, 2 Oct 2007 12:43:41 GMT", "version": "v1" }, { "created": "Sat, 8 Dec 2007 18:41:20 GMT", "version": "v2" } ]
2007-12-08
[ [ "Reichenbach", "Tobias", "" ], [ "Mobilia", "Mauro", "" ], [ "Frey", "Erwin", "" ] ]
Noise and spatial degrees of freedom characterize most ecosystems. Some aspects of their influence on the coevolution of populations with cyclic interspecies competition have been demonstrated in recent experiments [e.g. B. Kerr et al., Nature {\bf 418}, 171 (2002)]. To reach a better theoretical understanding of these phenomena, we consider a paradigmatic spatial model where three species exhibit cyclic dominance. Using an individual-based description, as well as stochastic partial differential and deterministic reaction-diffusion equations, we account for stochastic fluctuations and spatial diffusion at different levels, and show how fascinating patterns of entangled spirals emerge. We rationalize our analysis by computing the spatio-temporal correlation functions and provide analytical expressions for the front velocity and the wavelength of the propagating spiral waves.
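A minimal deterministic sketch of the rate-equation (well-mixed, noise-free) limit of such three-species cyclic dynamics is given below; the reproduction and selection rates are illustrative assumptions, and the spatial diffusion and stochastic fluctuations that produce the paper's spiral patterns are deliberately omitted.

```python
# Deterministic rate-equation limit of three-species cyclic dominance
# (A beats B, B beats C, C beats A); mu, sigma, and the initial densities
# are illustrative, and space/noise are omitted.
a, b, c = 0.3, 0.3, 0.4
mu, sigma, dt = 1.0, 1.0, 0.01
for _ in range(2000):                         # integrate to t = 20
    rho = a + b + c
    da = a * (mu * (1.0 - rho) - sigma * c)   # A is displaced by C
    db = b * (mu * (1.0 - rho) - sigma * a)   # B is displaced by A
    dc = c * (mu * (1.0 - rho) - sigma * b)   # C is displaced by B
    a, b, c = a + da * dt, b + db * dt, c + dc * dt
print(round(a, 4), round(b, 4), round(c, 4))  # densities oscillate cyclically
```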
1012.3437
Raul Isea
Gustavo Rivera, Fernando Gonz\'alez-Nilo, Tom\'as Perez-Acle, Raul Isea and David S. Holmes
The Virtual Institute for Integrative Biology (VIIB)
10 pages, ISBN 978-84-7834-565-6
Proceedings of the Third Conference of the EELA Project (2007). R. Gavela, B. Marechal, R. Barbera et al. (Eds.) pp. 111-120
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/3.0/
The Virtual Institute for Integrative Biology (VIIB) is a Latin American initiative for achieving global collaborative e-Science in the areas of bioinformatics, genome biology, systems biology, metagenomics, medical applications and nanobiotechnology. The scientific agenda of VIIB includes: construction of databases for comparative genomics, the AlterORF database for alternate open reading frame discovery in genomes, bioinformatics services, and protein simulations for biotechnological and medical applications. Human resource development has been promoted through co-sponsored students and shared teaching and seminars via video conferencing. E-Science challenges include interoperability and connectivity concerns, high performance computing limitations, and the development of customized computational frameworks and flexible workflows to efficiently exploit shared resources without causing impediments to the user. Outreach programs include training workshops and classes for high school teachers and students and the new Adopt-a-Gene initiative. The VIIB has proved an effective way for small teams to transcend the critical mass problem, to overcome geographic limitations, to harness the power of large-scale, collaborative science and to improve the visibility of Latin American science. It may provide a useful paradigm for developing further e-Science initiatives in Latin America and other emerging regions.
[ { "created": "Wed, 15 Dec 2010 19:41:49 GMT", "version": "v1" } ]
2010-12-16
[ [ "Rivera", "Gustavo", "" ], [ "González-Nilo", "Fernando", "" ], [ "Perez-Acle", "Tomás", "" ], [ "Isea", "Raul", "" ], [ "Holmes", "David S.", "" ] ]
The Virtual Institute for Integrative Biology (VIIB) is a Latin American initiative for achieving global collaborative e-Science in the areas of bioinformatics, genome biology, systems biology, metagenomics, medical applications and nanobiotechnology. The scientific agenda of VIIB includes: construction of databases for comparative genomics, the AlterORF database for alternate open reading frame discovery in genomes, bioinformatics services, and protein simulations for biotechnological and medical applications. Human resource development has been promoted through co-sponsored students and shared teaching and seminars via video conferencing. E-Science challenges include interoperability and connectivity concerns, high performance computing limitations, and the development of customized computational frameworks and flexible workflows to efficiently exploit shared resources without causing impediments to the user. Outreach programs include training workshops and classes for high school teachers and students and the new Adopt-a-Gene initiative. The VIIB has proved an effective way for small teams to transcend the critical mass problem, to overcome geographic limitations, to harness the power of large-scale, collaborative science and to improve the visibility of Latin American science. It may provide a useful paradigm for developing further e-Science initiatives in Latin America and other emerging regions.
1605.03005
Yoram Burak
Neta Ravid Tannenbaum and Yoram Burak
Shaping neural circuits by high order synaptic interactions
Version 2 contains minor revisions. 33 pages, 10 figures, and 5 supporting figures. Accepted to PLoS Computational Biology; An earlier version of this work appeared in abstract form (Program No. 260.23. 2015 Neuroscience Meeting Planner. Chicago, IL: Society for Neuroscience, 2015. Online.)
(2016). PLoS Comput Biol 12(8): e1005056
10.1371/journal.pcbi.1005056
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Spike timing dependent plasticity (STDP) is believed to play an important role in shaping the structure of neural circuits. Here we show that STDP generates effective interactions between synapses of different neurons, which were neglected in previous theoretical treatments, and can be described as a sum over contributions from structural motifs. These interactions can have a pivotal influence on the connectivity patterns that emerge under the influence of STDP. In particular, we consider two highly ordered forms of structure: wide synfire chains, in which groups of neurons project to each other sequentially, and self-connected assemblies. We show that high-order synaptic interactions can enable the formation of both structures, depending on the form of the STDP function and the time course of synaptic currents. Furthermore, within a certain regime of biophysical parameters, emergence of the ordered connectivity occurs robustly and autonomously in a stochastic network of spiking neurons, without a need to expose the neural network to structured inputs during learning.
[ { "created": "Tue, 10 May 2016 13:31:58 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2016 08:26:55 GMT", "version": "v2" } ]
2016-08-25
[ [ "Tannenbaum", "Neta Ravid", "" ], [ "Burak", "Yoram", "" ] ]
Spike timing dependent plasticity (STDP) is believed to play an important role in shaping the structure of neural circuits. Here we show that STDP generates effective interactions between synapses of different neurons, which were neglected in previous theoretical treatments, and can be described as a sum over contributions from structural motifs. These interactions can have a pivotal influence on the connectivity patterns that emerge under the influence of STDP. In particular, we consider two highly ordered forms of structure: wide synfire chains, in which groups of neurons project to each other sequentially, and self-connected assemblies. We show that high-order synaptic interactions can enable the formation of both structures, depending on the form of the STDP function and the time course of synaptic currents. Furthermore, within a certain regime of biophysical parameters, emergence of the ordered connectivity occurs robustly and autonomously in a stochastic network of spiking neurons, without a need to expose the neural network to structured inputs during learning.
2105.13570
Jiayu Shang
Jiayu Shang and Yanni Sun
Predicting the hosts of prokaryotic viruses using GCN-based semi-supervised learning
16 pages, 14 figures
BMC Biol 19, 250 (2021)
10.1186/s12915-021-01180-4
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Background: Prokaryotic viruses, which infect bacteria and archaea, are the most abundant and diverse biological entities in the biosphere. To understand their regulatory roles in various ecosystems and to harness the potential of bacteriophages for use in therapy, more knowledge of viral-host relationships is required. High-throughput sequencing and its application to the microbiome have offered new opportunities for computational approaches for predicting which hosts particular viruses can infect. However, there are two main challenges for computational host prediction. First, the empirically known virus-host relationships are very limited. Second, although sequence similarity between viruses and their prokaryote hosts has been used as a major feature for host prediction, the alignment is either missing or ambiguous in many cases. Thus, there is still a need to improve the accuracy of host prediction. Results: In this work, we present a semi-supervised learning model, named HostG, to conduct host prediction for novel viruses. We construct a knowledge graph by utilizing both virus-virus protein similarity and virus-host DNA sequence similarity. Then a graph convolutional network (GCN) is adopted to exploit viruses with or without known hosts in training to enhance the learning ability. During the GCN training, we minimize the expected calibration error (ECE) to ensure the confidence of the predictions. We tested HostG on both simulated and real sequencing data and compared its performance with other state-of-the-art methods specifically designed for virus host classification (VHM-net, WIsH, PHP, HoPhage, RaFAH, vHULK, and VPF-Class). Conclusion: HostG outperforms other popular methods, demonstrating the efficacy of using a GCN-based semi-supervised learning approach. A particular advantage of HostG is its ability to predict hosts from new taxa.
[ { "created": "Fri, 28 May 2021 03:29:31 GMT", "version": "v1" }, { "created": "Wed, 10 Nov 2021 04:13:28 GMT", "version": "v2" }, { "created": "Thu, 2 Dec 2021 15:28:01 GMT", "version": "v3" } ]
2021-12-03
[ [ "Shang", "Jiayu", "" ], [ "Sun", "Yanni", "" ] ]
Background: Prokaryotic viruses, which infect bacteria and archaea, are the most abundant and diverse biological entities in the biosphere. To understand their regulatory roles in various ecosystems and to harness the potential of bacteriophages for use in therapy, more knowledge of viral-host relationships is required. High-throughput sequencing and its application to the microbiome have offered new opportunities for computational approaches for predicting which hosts particular viruses can infect. However, there are two main challenges for computational host prediction. First, the empirically known virus-host relationships are very limited. Second, although sequence similarity between viruses and their prokaryote hosts has been used as a major feature for host prediction, the alignment is either missing or ambiguous in many cases. Thus, there is still a need to improve the accuracy of host prediction. Results: In this work, we present a semi-supervised learning model, named HostG, to conduct host prediction for novel viruses. We construct a knowledge graph by utilizing both virus-virus protein similarity and virus-host DNA sequence similarity. Then a graph convolutional network (GCN) is adopted to exploit viruses with or without known hosts in training to enhance the learning ability. During the GCN training, we minimize the expected calibration error (ECE) to ensure the confidence of the predictions. We tested HostG on both simulated and real sequencing data and compared its performance with other state-of-the-art methods specifically designed for virus host classification (VHM-net, WIsH, PHP, HoPhage, RaFAH, vHULK, and VPF-Class). Conclusion: HostG outperforms other popular methods, demonstrating the efficacy of using a GCN-based semi-supervised learning approach. A particular advantage of HostG is its ability to predict hosts from new taxa.
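As a hedged illustration of the propagation step underlying GCN-based models like the one described above, the numpy sketch below applies one Kipf-Welling-style graph-convolution layer to a random graph; the adjacency, features, and weights are stand-ins, not the paper's virus-host knowledge graph.

```python
# One Kipf-Welling-style graph-convolution layer on a random stand-in
# graph: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W). Not the paper's
# knowledge graph; adjacency, features and weights are placeholders.
import numpy as np

rng = np.random.default_rng(2)
n, d_in, d_out = 6, 4, 3

A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                   # symmetrize: undirected edges
A_hat = A + np.eye(n)                    # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

X = rng.normal(size=(n, d_in))           # node features
W = rng.normal(size=(d_in, d_out))       # learnable weights (random here)

H = np.maximum(0.0, A_norm @ X @ W)      # ReLU activation
print(H.shape)                           # (6, 3): one embedding per node
```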
1608.08314
Sebastian Schreiber
Robert Stephen Cantrell and Chris Cosner and Yuan Lou and Sebastian J. Schreiber
Evolution of natal dispersal in spatially heterogenous environments
Revision correcting several minor errors
Mathematical biosciences 283 (2017): 136-144
10.1016/j.mbs.2016.11.003
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the evolution of dispersal is an important issue in evolutionary ecology. For continuous-time models in which individuals disperse throughout their lifetime, it has been shown that a balanced dispersal strategy, which results in an ideal free distribution, is evolutionarily stable in spatially varying but temporally constant environments. Many species, however, primarily disperse prior to reproduction (natal dispersal) and less commonly between reproductive events (breeding dispersal). As demographic and dispersal terms combine in a multiplicative way for models of natal dispersal, rather than the additive way for the previously studied models, we develop new mathematical methods to study the evolution of natal dispersal for continuous-time and discrete-time models. A fundamental ecological dichotomy is identified for the non-trivial equilibrium of these models: (i) the per-capita growth rates for individuals in all patches are equal to zero, or (ii) individuals in some patches experience negative per-capita growth rates, while individuals in other patches experience positive per-capita growth rates. The first possibility corresponds to an ideal free distribution, while the second possibility corresponds to a "source-sink" spatial structure. We prove that populations with a dispersal strategy leading to an ideal free distribution displace populations with a dispersal strategy leading to a source-sink spatial structure. When there are patches which cannot sustain a population, ideal free strategies can be achieved by sedentary populations, and we show that these populations can displace populations with any irreducible dispersal strategy. Collectively, these results support the view that evolution selects for natal or breeding dispersal strategies which lead to ideal free distributions in spatially heterogeneous, but temporally homogeneous, environments.
[ { "created": "Tue, 30 Aug 2016 03:36:48 GMT", "version": "v1" }, { "created": "Thu, 3 Nov 2016 20:52:56 GMT", "version": "v2" } ]
2019-02-12
[ [ "Cantrell", "Robert Stephen", "" ], [ "Cosner", "Chris", "" ], [ "Lou", "Yuan", "" ], [ "Schreiber", "Sebastian J.", "" ] ]
Understanding the evolution of dispersal is an important issue in evolutionary ecology. For continuous-time models in which individuals disperse throughout their lifetime, it has been shown that a balanced dispersal strategy, which results in an ideal free distribution, is evolutionarily stable in spatially varying but temporally constant environments. Many species, however, primarily disperse prior to reproduction (natal dispersal) and less commonly between reproductive events (breeding dispersal). As demographic and dispersal terms combine in a multiplicative way for models of natal dispersal, rather than the additive way for the previously studied models, we develop new mathematical methods to study the evolution of natal dispersal for continuous-time and discrete-time models. A fundamental ecological dichotomy is identified for the non-trivial equilibrium of these models: (i) the per-capita growth rates for individuals in all patches are equal to zero, or (ii) individuals in some patches experience negative per-capita growth rates, while individuals in other patches experience positive per-capita growth rates. The first possibility corresponds to an ideal-free distribution, while the second corresponds to a "source-sink" spatial structure. We prove that populations with a dispersal strategy leading to an ideal-free distribution displace populations with a dispersal strategy leading to a source-sink spatial structure. When there are patches which cannot sustain a population, ideal-free strategies can be achieved by sedentary populations, and we show that these populations can displace populations with any irreducible dispersal strategy. Collectively, these results support the view that evolution selects for natal or breeding dispersal strategies which lead to ideal-free distributions in spatially heterogeneous, but temporally homogeneous, environments.
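A hedged numerical illustration of the ideal-free vs source-sink dichotomy described above, using the classic two-patch logistic model with passive lifetime dispersal rather than the paper's natal-dispersal formulation; the rates r, capacities K, and dispersal rates d12, d21 below are invented. Balanced dispersal (d21*K2 = d12*K1) equilibrates with zero local per-capita growth in both patches, while unbalanced dispersal settles into a source-sink structure.

```python
import numpy as np

r, K = np.array([1.0, 1.0]), np.array([2.0, 1.0])   # per-patch growth, capacity

def equilibrate(d12, d21, dt=0.01, steps=100_000):
    """Euler-integrate dn_i/dt = n_i r_i (1 - n_i/K_i) + immigration - emigration."""
    n = np.array([0.5, 0.5])
    for _ in range(steps):
        g = r * (1.0 - n / K)                        # local per-capita growth
        flux = np.array([d21 * n[1] - d12 * n[0],
                         d12 * n[0] - d21 * n[1]])
        n = n + dt * (g * n + flux)
    return n, r * (1.0 - n / K)

# balanced: d21*K2 == d12*K1, so net dispersal vanishes at the ideal free distribution
for label, (d12, d21) in {"balanced": (0.1, 0.2), "unbalanced": (0.1, 0.1)}.items():
    n, g = equilibrate(d12, d21)
    print(label, "densities:", n.round(3), "local per-capita growth:", g.round(3))
```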
0908.2885
Mike Steel Prof.
Andreas Dress, Vincent Moulton, Mike Steel, Taoyang Wu
Species, Clusters and the 'Tree of Life': A graph-theoretic perspective
19 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A hierarchical structure describing the inter-relationships of species has long been a fundamental concept in systematic biology, from Linnean classification through to the more recent quest for a 'Tree of Life.' In this paper we use an approach based on discrete mathematics to address a basic question: Could one delineate this hierarchical structure in nature purely by reference to the 'genealogy' of present-day individuals, which describes how they are related to one another by ancestry through a continuous line of descent? We describe several mathematically precise ways by which one can naturally define collections of subsets of present-day individuals so that these subsets are nested (and so form a tree) based purely on the directed graph that describes the ancestry of these individuals. We also explore the relationship between these and related clustering constructions.
[ { "created": "Thu, 20 Aug 2009 09:26:30 GMT", "version": "v1" } ]
2009-08-21
[ [ "Dress", "Andreas", "" ], [ "Moulton", "Vincent", "" ], [ "Steel", "Mike", "" ], [ "Wu", "Taoyang", "" ] ]
A hierarchical structure describing the inter-relationships of species has long been a fundamental concept in systematic biology, from Linnean classification through to the more recent quest for a 'Tree of Life.' In this paper we use an approach based on discrete mathematics to address a basic question: Could one delineate this hierarchical structure in nature purely by reference to the 'genealogy' of present-day individuals, which describes how they are related to one another by ancestry through a continuous line of descent? We describe several mathematically precise ways by which one can naturally define collections of subsets of present-day individuals so that these subsets are nested (and so form a tree) based purely on the directed graph that describes the ancestry of these individuals. We also explore the relationship between these and related clustering constructions.
0810.2358
Kevin E. Cahill
Kevin Cahill
Molecular Electroporation and the Transduction of Oligoarginines
15 pages, 5 figures
Phys. Biol. 7 (2010) 016001 (14pp)
10.1088/1478-3975/7/1/016001
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Certain short polycations, such as TAT and polyarginine, rapidly pass through the plasma membranes of mammalian cells by an unknown mechanism called transduction as well as by endocytosis and macropinocytosis. These cell-penetrating peptides (CPPs) promise to be medically useful when fused to biologically active peptides. I offer a simple model in which one or more CPPs and the phosphatidylserines of the inner leaflet form a kind of capacitor with a voltage in excess of 180 mV, high enough to create a molecular electropore. The model is consistent with an empirical upper limit on the cargo peptide of 40--60 amino acids and with experimental data on how the transduction of a polyarginine-fluorophore into mouse C2C12 myoblasts depends on the number of arginines in the CPP and on the CPP concentration. The model makes three testable predictions.
[ { "created": "Tue, 14 Oct 2008 04:56:23 GMT", "version": "v1" }, { "created": "Mon, 17 Nov 2008 05:02:35 GMT", "version": "v2" }, { "created": "Fri, 30 Oct 2009 00:37:23 GMT", "version": "v3" } ]
2010-09-21
[ [ "Cahill", "Kevin", "" ] ]
Certain short polycations, such as TAT and polyarginine, rapidly pass through the plasma membranes of mammalian cells by an unknown mechanism called transduction as well as by endocytosis and macropinocytosis. These cell-penetrating peptides (CPPs) promise to be medically useful when fused to biologically active peptides. I offer a simple model in which one or more CPPs and the phosphatidylserines of the inner leaflet form a kind of capacitor with a voltage in excess of 180 mV, high enough to create a molecular electropore. The model is consistent with an empirical upper limit on the cargo peptide of 40--60 amino acids and with experimental data on how the transduction of a polyarginine-fluorophore into mouse C2C12 myoblasts depends on the number of arginines in the CPP and on the CPP concentration. The model makes three testable predictions.
2210.16993
Chunyu Liu
Chunyu Liu and Jiacai Zhang
STN: a new tensor network method to identify stimulus category from brain activity pattern
12 pages
null
null
EFI-94-11
q-bio.NC cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural decoding remains a challenging and active topic in neurocomputing science. Recently, many studies have shown that brain network patterns contain rich spatial and temporal structure information, which represents the activation of the brain under external stimuli. Traditional methods extract brain network features directly with common machine learning techniques, feed these features into a classifier, and thereby decode the external stimuli. However, such methods cannot effectively extract the multi-dimensional structural information that is hidden in the brain network. Research on tensors has shown that tensor decomposition models can fully mine the unique spatio-temporal structural characteristics of multi-dimensional data. This work proposes a stimulus-constrained tensor brain model (STN) that combines the tensor decomposition idea with stimulus category constraint information. The model was verified on real neuroimaging data sets (MEG and fMRI). The experimental results show that the STN model outperforms other methods by more than 11.06% and 18.46% in accuracy on the two modal data sets. These results demonstrate the STN model's superior ability to extract discriminative characteristics, especially for decoding object stimuli with semantic information.
[ { "created": "Mon, 31 Oct 2022 00:42:48 GMT", "version": "v1" }, { "created": "Thu, 3 Nov 2022 01:35:43 GMT", "version": "v2" }, { "created": "Wed, 23 Nov 2022 00:18:32 GMT", "version": "v3" } ]
2022-11-24
[ [ "Liu", "Chunyu", "" ], [ "Zhang", "Jiacai", "" ] ]
Neural decoding remains a challenging and active topic in neurocomputing science. Recently, many studies have shown that brain network patterns contain rich spatial and temporal structure information, which represents the activation of the brain under external stimuli. Traditional methods extract brain network features directly with common machine learning techniques, feed these features into a classifier, and thereby decode the external stimuli. However, such methods cannot effectively extract the multi-dimensional structural information that is hidden in the brain network. Research on tensors has shown that tensor decomposition models can fully mine the unique spatio-temporal structural characteristics of multi-dimensional data. This work proposes a stimulus-constrained tensor brain model (STN) that combines the tensor decomposition idea with stimulus category constraint information. The model was verified on real neuroimaging data sets (MEG and fMRI). The experimental results show that the STN model outperforms other methods by more than 11.06% and 18.46% in accuracy on the two modal data sets. These results demonstrate the STN model's superior ability to extract discriminative characteristics, especially for decoding object stimuli with semantic information.
2202.02245
Zijin Gu
Zijin Gu, Keith Jamison, Mert Sabuncu, and Amy Kuceyeski
Personalized visual encoding model construction with small data
null
null
null
null
q-bio.QM cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Encoding models that predict brain response patterns to stimuli are one way to capture the relationship between variability in bottom-up neural systems and an individual's behavior or pathological state. However, they generally need a large amount of training data to achieve optimal accuracy. Here, we propose and test an alternative personalized ensemble encoding model approach that utilizes existing encoding models to create encoding models for novel individuals with relatively little stimulus-response data. We show that these personalized ensemble encoding models trained with small amounts of data for a specific individual, i.e. ~300 image-response pairs, achieve accuracy not different from models trained on ~20,000 image-response pairs for the same individual. Importantly, the personalized ensemble encoding models preserve patterns of inter-individual variability in the image-response relationship. Additionally, we show the proposed approach is robust against domain shift by validating on a prospectively collected set of image-response data in novel individuals with a different scanner and experimental setup. Finally, we use our personalized ensemble encoding model within the recently developed NeuroGen framework to generate optimal stimuli designed to maximize specific regions' activations for a specific individual. We show that the inter-individual differences in face area responses to images of animal vs human faces observed previously are replicated using NeuroGen with the ensemble encoding model. Our approach shows the potential to use previously collected, deeply sampled data to efficiently create accurate, personalized encoding models and, subsequently, personalized optimal synthetic images for new individuals scanned under different experimental conditions.
[ { "created": "Fri, 4 Feb 2022 17:24:50 GMT", "version": "v1" }, { "created": "Sun, 15 May 2022 02:52:47 GMT", "version": "v2" } ]
2022-05-17
[ [ "Gu", "Zijin", "" ], [ "Jamison", "Keith", "" ], [ "Sabuncu", "Mert", "" ], [ "Kuceyeski", "Amy", "" ] ]
Encoding models that predict brain response patterns to stimuli are one way to capture the relationship between variability in bottom-up neural systems and an individual's behavior or pathological state. However, they generally need a large amount of training data to achieve optimal accuracy. Here, we propose and test an alternative personalized ensemble encoding model approach that utilizes existing encoding models to create encoding models for novel individuals with relatively little stimulus-response data. We show that these personalized ensemble encoding models trained with small amounts of data for a specific individual, i.e. ~300 image-response pairs, achieve accuracy not different from models trained on ~20,000 image-response pairs for the same individual. Importantly, the personalized ensemble encoding models preserve patterns of inter-individual variability in the image-response relationship. Additionally, we show the proposed approach is robust against domain shift by validating on a prospectively collected set of image-response data in novel individuals with a different scanner and experimental setup. Finally, we use our personalized ensemble encoding model within the recently developed NeuroGen framework to generate optimal stimuli designed to maximize specific regions' activations for a specific individual. We show that the inter-individual differences in face area responses to images of animal vs human faces observed previously are replicated using NeuroGen with the ensemble encoding model. Our approach shows the potential to use previously collected, deeply sampled data to efficiently create accurate, personalized encoding models and, subsequently, personalized optimal synthetic images for new individuals scanned under different experimental conditions.
1308.0317
William Bialek
Mikhail Tikhonov and William Bialek
Complexity in genetic networks: topology vs. strength of interactions
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic regulatory networks are defined by their topology and by a multitude of continuously adjustable parameters. Here we present a class of simple models within which the relative importance of topology vs. interaction strengths becomes a well-posed problem. We find that complexity - the ability of the network to adopt multiple stable states - is dominated by the adjustable parameters. We comment on the implications for real networks and their evolution.
[ { "created": "Thu, 1 Aug 2013 19:48:34 GMT", "version": "v1" } ]
2013-08-02
[ [ "Tikhonov", "Mikhail", "" ], [ "Bialek", "William", "" ] ]
Genetic regulatory networks are defined by their topology and by a multitude of continuously adjustable parameters. Here we present a class of simple models within which the relative importance of topology vs. interaction strengths becomes a well-posed problem. We find that complexity - the ability of the network to adopt multiple stable states - is dominated by the adjustable parameters. We comment on the implications for real networks and their evolution.
2207.12173
Arunabha Majumdar
Arunabha Majumdar and Bogdan Pasaniuc
A Bayesian method for estimating gene-level polygenicity under the framework of transcriptome-wide association study
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
Polygenicity refers to the phenomenon that multiple genetic variants have a non-zero effect on a complex trait. It is defined as the proportion of genetic variants that have a non-zero effect on the trait. Evaluation of polygenicity can provide valuable insights into the genetic architecture of the trait. Several recent works have attempted to estimate polygenicity at the SNP level. However, evaluating polygenicity at the gene level can be biologically more meaningful. We propose the notion of gene-level polygenicity, defined as the proportion of genes having a non-zero effect on the trait, under the framework of a transcriptome-wide association study. We introduce a Bayesian approach, polygene, to estimate this quantity for a trait. The method is based on a spike-and-slab prior and simultaneously provides an optimal subset of non-null genes. Our simulation study shows that polygene efficiently estimates gene-level polygenicity. The method produces a downward bias for small choices of trait heritability due to a non-null gene, which diminishes rapidly with an increase in the GWAS sample size. While identifying the optimal subset of non-null genes, polygene offers a high level of specificity and an overall good level of sensitivity -- the sensitivity increases as the sample sizes of the reference panel expression and GWAS data increase. We applied the method to seven phenotypes in the UK Biobank, integrating expression data. We find height to be the most polygenic and asthma the least polygenic. Our analysis suggests that both HDL and triglycerides are more polygenic than LDL.
[ { "created": "Mon, 25 Jul 2022 13:13:29 GMT", "version": "v1" } ]
2022-07-26
[ [ "Majumdar", "Arunabha", "" ], [ "Pasaniuc", "Bogdan", "" ] ]
Polygenicity refers to the phenomenon that multiple genetic variants have a non-zero effect on a complex trait. It is defined as the proportion of genetic variants that have a non-zero effect on the trait. Evaluation of polygenicity can provide valuable insights into the genetic architecture of the trait. Several recent works have attempted to estimate polygenicity at the SNP level. However, evaluating polygenicity at the gene level can be biologically more meaningful. We propose the notion of gene-level polygenicity, defined as the proportion of genes having a non-zero effect on the trait, under the framework of a transcriptome-wide association study. We introduce a Bayesian approach, polygene, to estimate this quantity for a trait. The method is based on a spike-and-slab prior and simultaneously provides an optimal subset of non-null genes. Our simulation study shows that polygene efficiently estimates gene-level polygenicity. The method produces a downward bias for small choices of trait heritability due to a non-null gene, which diminishes rapidly with an increase in the GWAS sample size. While identifying the optimal subset of non-null genes, polygene offers a high level of specificity and an overall good level of sensitivity -- the sensitivity increases as the sample sizes of the reference panel expression and GWAS data increase. We applied the method to seven phenotypes in the UK Biobank, integrating expression data. We find height to be the most polygenic and asthma the least polygenic. Our analysis suggests that both HDL and triglycerides are more polygenic than LDL.
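Not the authors' polygene sampler, but the textbook spike-and-slab calculation it builds on: the posterior probability that a gene's effect is non-zero given an estimated effect and its standard error, whose average over genes estimates polygenicity. All parameter values (pi0, tau, noise level, simulated effects) are illustrative.

```python
import numpy as np

def norm_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def inclusion_prob(b_hat, s, pi0=0.9, tau=0.2):
    """Posterior P(beta != 0 | b_hat) under beta ~ pi0*delta_0 + (1-pi0)*N(0, tau^2)."""
    spike = pi0 * norm_pdf(b_hat, s)                        # marginal if beta == 0
    slab = (1 - pi0) * norm_pdf(b_hat, np.sqrt(s**2 + tau**2))
    return slab / (spike + slab)

rng = np.random.default_rng(5)
m, s = 5000, 0.05
nonzero = rng.random(m) < 0.1                   # true gene-level polygenicity = 0.1
beta = np.where(nonzero, rng.normal(0.0, 0.2, m), 0.0)
b_hat = beta + rng.normal(0.0, s, m)            # noisy effect estimates
pip = inclusion_prob(b_hat, s)
print("mean posterior inclusion probability ~", round(float(pip.mean()), 3))
```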
2110.14232
Shixiang Wang
Shixiang Wang, Xue-Song Liu, Jianfeng Li and Qi Zhao
ezcox: An R/CRAN Package for Cox Model Batch Processing and Visualization
6 pages, 3 figures
null
null
null
q-bio.QM q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cox analysis is a common clinical data analysis technique for linking valuable variables to clinical outcomes, including death and relapse. In the omics era, Cox model batch processing is a basic strategy for screening clinically relevant variables, biomarker discovery, and gene signature identification. However, such analyses have typically been implemented with homebrew code in the research community, and thus lack transparency and reproducibility. Here, we present ezcox, the first R/CRAN package for Cox model batch processing and visualization. ezcox is an open-source R package under the GPL-3 license and is freely available at https://github.com/ShixiangWang/ezcox and https://cran.r-project.org/package=ezcox.
[ { "created": "Wed, 27 Oct 2021 07:35:53 GMT", "version": "v1" } ]
2021-10-28
[ [ "Wang", "Shixiang", "" ], [ "Liu", "Xue-Song", "" ], [ "Li", "Jianfeng", "" ], [ "Zhao", "Qi", "" ] ]
Cox analysis is a common clinical data analysis technique for linking valuable variables to clinical outcomes, including death and relapse. In the omics era, Cox model batch processing is a basic strategy for screening clinically relevant variables, biomarker discovery, and gene signature identification. However, such analyses have typically been implemented with homebrew code in the research community, and thus lack transparency and reproducibility. Here, we present ezcox, the first R/CRAN package for Cox model batch processing and visualization. ezcox is an open-source R package under the GPL-3 license and is freely available at https://github.com/ShixiangWang/ezcox and https://cran.r-project.org/package=ezcox.
1311.7328
Weiqun Peng
Ji-Eun Lee, Chaochen Wang, Shiliyang Xu, Young-Wook Cho, Lifeng Wang, Xuesong Feng, Vittorio Sartorelli, Anne Baldridge, Weiqun Peng, and Kai Ge
H3K4 mono- and di-methyltransferase MLL4 is required for enhancer activation during cell differentiation
eLife 2013
null
null
null
q-bio.GN
http://creativecommons.org/licenses/publicdomain/
Enhancers play a central role in cell-type-specific gene expression and are marked by H3K4me1/2. Active enhancers are further marked by H3K27ac. However, the methyltransferases responsible for H3K4me1/2 on enhancers remain elusive. Furthermore, how these enzymes function on enhancers to regulate cell-type-specific gene expression is unclear. Here we identify MLL4 (KMT2D) as a major mammalian H3K4 mono- and di-methyltransferase with partial functional redundancy with MLL3 (KMT2C). Using adipogenesis and myogenesis as model systems, we show that MLL4 exhibits cell-type- and differentiation-stage-specific genomic binding and is predominantly localized on enhancers. MLL4 co-localizes with lineage-determining transcription factors (TFs) on active enhancers during differentiation. Deletion of MLL4 markedly decreases H3K4me1/2, H3K27ac, Polymerase II and Mediator levels on enhancers and leads to severe defects in cell-type-specific gene expression and cell differentiation. Together, these findings identify MLL4 as a major mammalian H3K4 mono- and di-methyltransferase essential for enhancer activation during cell differentiation.
[ { "created": "Thu, 28 Nov 2013 14:25:50 GMT", "version": "v1" } ]
2013-12-02
[ [ "Lee", "Ji-Eun", "" ], [ "Wang", "Chaochen", "" ], [ "Xu", "Shiliyang", "" ], [ "Cho", "Young-Wook", "" ], [ "Wang", "Lifeng", "" ], [ "Feng", "Xuesong", "" ], [ "Sartorelli", "Vittorio", "" ], [ "Baldridge", "Anne", "" ], [ "Peng", "Weiqun", "" ], [ "Ge", "Kai", "" ] ]
Enhancers play a central role in cell-type-specific gene expression and are marked by H3K4me1/2. Active enhancers are further marked by H3K27ac. However, the methyltransferases responsible for H3K4me1/2 on enhancers remain elusive. Furthermore, how these enzymes function on enhancers to regulate cell-type-specific gene expression is unclear. Here we identify MLL4 (KMT2D) as a major mammalian H3K4 mono- and di-methyltransferase with partial functional redundancy with MLL3 (KMT2C). Using adipogenesis and myogenesis as model systems, we show that MLL4 exhibits cell-type- and differentiation-stage-specific genomic binding and is predominantly localized on enhancers. MLL4 co-localizes with lineage-determining transcription factors (TFs) on active enhancers during differentiation. Deletion of MLL4 markedly decreases H3K4me1/2, H3K27ac, Polymerase II and Mediator levels on enhancers and leads to severe defects in cell-type-specific gene expression and cell differentiation. Together, these findings identify MLL4 as a major mammalian H3K4 mono- and di-methyltransferase essential for enhancer activation during cell differentiation.
1505.06249
Alexandre Drouin
Alexandre Drouin, S\'ebastien Gigu\`ere, Maxime D\'eraspe, Fran\c{c}ois Laviolette, Mario Marchand, Jacques Corbeil
Greedy Biomarker Discovery in the Genome with Applications to Antimicrobial Resistance
Peer-reviewed and accepted for an oral presentation in the Greed is Great workshop at the International Conference on Machine Learning, Lille, France, 2015
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Set Covering Machine (SCM) is a greedy learning algorithm that produces sparse classifiers. We extend the SCM for datasets that contain a huge number of features. The whole genetic material of living organisms is an example of such a case, where the number of features exceeds 10^7. Three human pathogens were used to evaluate the performance of the SCM at predicting antimicrobial resistance. Our results show that the SCM compares favorably in terms of sparsity and accuracy against L1- and L2-regularized Support Vector Machines and CART decision trees. Moreover, the SCM was the only algorithm that could consider the full feature space. For all other algorithms, the latter had to be filtered as a preprocessing step.
[ { "created": "Fri, 22 May 2015 23:29:40 GMT", "version": "v1" } ]
2015-05-26
[ [ "Drouin", "Alexandre", "" ], [ "Giguère", "Sébastien", "" ], [ "Déraspe", "Maxime", "" ], [ "Laviolette", "François", "" ], [ "Marchand", "Mario", "" ], [ "Corbeil", "Jacques", "" ] ]
The Set Covering Machine (SCM) is a greedy learning algorithm that produces sparse classifiers. We extend the SCM for datasets that contain a huge number of features. The whole genetic material of living organisms is an example of such a case, where the number of features exceeds 10^7. Three human pathogens were used to evaluate the performance of the SCM at predicting antimicrobial resistance. Our results show that the SCM compares favorably in terms of sparsity and accuracy against L1- and L2-regularized Support Vector Machines and CART decision trees. Moreover, the SCM was the only algorithm that could consider the full feature space. For all other algorithms, the latter had to be filtered as a preprocessing step.
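For the flavor of the algorithm: below is a hedged toy reconstruction of the SCM's greedy conjunction learner for binary features (e.g. k-mer presence/absence), not the authors' code. The penalty p, the dataset, and the planted two-feature rule are invented for illustration.

```python
import numpy as np

def scm_conjunction(X, y, p=1.0, max_rules=5):
    """Greedily pick features whose value must be 1 for a positive prediction."""
    X, y = np.asarray(X, dtype=bool), np.asarray(y, dtype=bool)
    chosen = []
    neg_alive = ~y              # negatives not yet excluded by a chosen rule
    pos_alive = y.copy()        # positives still predicted positive
    for _ in range(max_rules):
        covers = (~X) & neg_alive[:, None]   # rule "x_j == 1" excludes x_j == 0
        errors = (~X) & pos_alive[:, None]   # ...but also loses these positives
        scores = covers.sum(0) - p * errors.sum(0)
        j = int(np.argmax(scores))
        if scores[j] <= 0 or not neg_alive.any():
            break
        chosen.append(j)
        neg_alive &= X[:, j]                 # negatives with x_j == 1 survive
        pos_alive &= X[:, j]
    return chosen

def predict(X, chosen):
    X = np.asarray(X, dtype=bool)
    return X[:, chosen].all(axis=1) if chosen else np.ones(len(X), dtype=bool)

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 50))
y = X[:, 3].astype(bool) & X[:, 17].astype(bool)   # planted sparse rule
rules = scm_conjunction(X, y)
print("selected features:", rules, " accuracy:", (predict(X, rules) == y).mean())
```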
2111.09414
Elcin Huseyn
Elcin Huseyn
Developmental Status and Perspectives for Tissue Engineering in Urology
null
null
null
null
q-bio.TO physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Tissue engineering technology and tissue cell-based stem cell research have made great strides in treating tissue and organ damage, correcting tissue and organ dysfunction, and reducing surgical complications. In the past, traditional methods used biological substitutes as tissue repair materials, while tissue engineering technology focuses on combining seed cells with biological materials to form biological tissues with the same structure and function as the body's own tissues. The advantage is that tissue engineering technology can overcome restrictions on donor material procurement and effectively reduce complications. The aim of studying tissue engineering technology is to find seed cells and suitable biological materials that can replace the original biological functions of tissues and to establish a suitable in vivo microenvironment. This article mainly describes the current developments of tissue engineering in various fields of urology and discusses the future trends of tissue engineering technology in the treatment of complex diseases of the urinary system. The findings indicate that, while current clinical studies are relatively few, the good results from existing animal model studies point to good prospects for tissue engineering technology in the treatment of various urinary tract diseases in the future.
[ { "created": "Sun, 14 Nov 2021 16:02:02 GMT", "version": "v1" } ]
2021-11-19
[ [ "Huseyn", "Elcin", "" ] ]
Tissue engineering technology and tissue cell-based stem cell research have made great strides in treating tissue and organ damage, correcting tissue and organ dysfunction, and reducing surgical complications. In the past, traditional methods used biological substitutes as tissue repair materials, while tissue engineering technology focuses on combining seed cells with biological materials to form biological tissues with the same structure and function as the body's own tissues. The advantage is that tissue engineering technology can overcome restrictions on donor material procurement and effectively reduce complications. The aim of studying tissue engineering technology is to find seed cells and suitable biological materials that can replace the original biological functions of tissues and to establish a suitable in vivo microenvironment. This article mainly describes the current developments of tissue engineering in various fields of urology and discusses the future trends of tissue engineering technology in the treatment of complex diseases of the urinary system. The findings indicate that, while current clinical studies are relatively few, the good results from existing animal model studies point to good prospects for tissue engineering technology in the treatment of various urinary tract diseases in the future.
q-bio/0506035
Sreepurna Malakar
Sukanto Bhattacharya and Sreepurna Malakar
Monte Carlo modeling of the effect of extreme events on the extinction dynamics of animal species with 2-year life cycles
12 pages, 6 tables, 6 figures
null
null
null
q-bio.QM q-bio.PE
null
Our paper computationally explores the extinction dynamics of an animal species affected by a sudden spike in mortality due to an extreme event. In our study, the animal species has a 2-year life cycle and is endowed with a high survival probability under normal circumstances. Our proposed approach does not involve any restraining assumptions concerning environmental variables or predator-prey relationships. Rather, it is based on the simple premise that, if observed on a year-to-year basis, the population size will be noted to either have gone up or come down compared to the previous year. The conceptualization is borrowed from the theory of asset pricing in stochastic finance. Our results indicate that an extreme event with a maximum shock size (i.e. the maximum number of immediate mortalities that may be caused by an extreme event) exceeding two-thirds the size of the pristine population can potentially drive any animal species with a 2-year life cycle to extinction for any fecundity level.
[ { "created": "Wed, 22 Jun 2005 23:42:29 GMT", "version": "v1" } ]
2007-05-23
[ [ "Bhattacharya", "Sukanto", "" ], [ "Malakar", "Sreepurna", "" ] ]
Our paper computationally explores the extinction dynamics of an animal species affected by a sudden spike in mortality due to an extreme event. In our study, the animal species has a 2-year life cycle and is endowed with a high survival probability under normal circumstances. Our proposed approach does not involve any restraining assumptions concerning environmental variables or predator-prey relationships. Rather, it is based on the simple premise that, if observed on a year-to-year basis, the population size will be noted to either have gone up or come down compared to the previous year. The conceptualization is borrowed from the theory of asset pricing in stochastic finance. Our results indicate that an extreme event with a maximum shock size (i.e. the maximum number of immediate mortalities that may be caused by an extreme event) exceeding two-thirds the size of the pristine population can potentially drive any animal species with a 2-year life cycle to extinction for any fecundity level.
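A hedged toy version of such a Monte Carlo experiment: a juvenile/adult two-stage population with binomial annual survival, breeding only in the second year, and a one-off mortality shock. Every parameter value below (pop0, survival, fecundity, shock size and timing) is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def goes_extinct(pop0=300, survival=0.9, fecundity=1.25, shock=200,
                 shock_year=5, horizon=50):
    juveniles, adults = pop0 // 2, pop0 - pop0 // 2
    for year in range(horizon):
        if year == shock_year:                        # the extreme event
            kill = min(shock, juveniles + adults)
            if kill:
                k_j = rng.hypergeometric(juveniles, adults, kill)
                juveniles, adults = juveniles - k_j, adults - (kill - k_j)
        births = rng.poisson(fecundity * adults)      # only 2nd-year animals breed
        adults = rng.binomial(juveniles, survival)    # survivors age into adults
        juveniles = rng.binomial(births, survival)    # newborns surviving year 1
        if juveniles + adults == 0:
            return True
    return False

runs = 2000
print("extinction probability ~", sum(goes_extinct() for _ in range(runs)) / runs)
```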
2407.09564
Maxime Lenormand
Marie Soret, Sylvain Moulherat, Maxime Lenormand, Sandra Luque
Implication of modelling choices on connectivity estimation: A comparative analysis
34 pages, 9 figures + Appendix
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We focus on connectivity methods used to understand and predict how landscapes and habitats facilitate or impede the movement and dispersal of species. Our objective is to compare the implication of methodological choices at three stages of the modelling framework: landscape characterisation, connectivity estimation, and connectivity assessment. What are the convergences and divergences of different modelling approaches? What are the implications of their combined results for landscape planning? We implemented two landscape characterisation approaches: expert opinion and species distribution model (SDM); four connectivity estimation models: Euclidean distance, least-cost paths (LCP), circuit theory, and stochastic movement simulation (SMS); and two connectivity indices: flux and area-weighted flux (dPCflux). We compared outcomes such as movement maps and habitat prioritisation for a rural landscape in southwestern France. Landscape characterisation is the main factor influencing connectivity assessment. The movement maps reflect the models' assumptions: LCP produced narrow beams reflecting the optimal pathways, whereas circuit theory and SMS produced wider estimates reflecting movement stochasticity, with SMS integrating behavioural drivers. The indices highlighted different aspects: dPCflux the surface of suitable habitats and flux their proximity. We recommend focusing on landscape characterisation before engaging further in the modelling framework. We emphasise the importance of stochasticity and behavioural drivers in connectivity, which can be reflected using circuit theory, SMS or other stochastic individual-based models. We stress the importance of using multiple indices to capture the multi-factorial aspect of connectivity.
[ { "created": "Fri, 5 Jul 2024 06:01:51 GMT", "version": "v1" } ]
2024-07-16
[ [ "Soret", "Marie", "" ], [ "Moulherat", "Sylvain", "" ], [ "Lenormand", "Maxime", "" ], [ "Luque", "Sandra", "" ] ]
We focus on connectivity methods used to understand and predict how landscapes and habitats facilitate or impede the movement and dispersal of species. Our objective is to compare the implication of methodological choices at three stages of the modelling framework: landscape characterisation, connectivity estimation, and connectivity assessment. What are the convergences and divergences of different modelling approaches? What are the implications of their combined results for landscape planning? We implemented two landscape characterisation approaches: expert opinion and species distribution model (SDM); four connectivity estimation models: Euclidean distance, least-cost paths (LCP), circuit theory, and stochastic movement simulation (SMS); and two connectivity indices: flux and area-weighted flux (dPCflux). We compared outcomes such as movement maps and habitat prioritisation for a rural landscape in southwestern France. Landscape characterisation is the main factor influencing connectivity assessment. The movement maps reflect the models' assumptions: LCP produced narrow beams reflecting the optimal pathways, whereas circuit theory and SMS produced wider estimates reflecting movement stochasticity, with SMS integrating behavioural drivers. The indices highlighted different aspects: dPCflux the surface of suitable habitats and flux their proximity. We recommend focusing on landscape characterisation before engaging further in the modelling framework. We emphasise the importance of stochasticity and behavioural drivers in connectivity, which can be reflected using circuit theory, SMS or other stochastic individual-based models. We stress the importance of using multiple indices to capture the multi-factorial aspect of connectivity.
1403.1011
Petter Holme
Petter Holme
Model versions and fast algorithms for network epidemiology
This write-up covers some details about simulating epidemiological models on networks. Feedback is appreciated
Journal of Logistical Engineering University 30, 1-7 (2014)
10.3969/j.issn.1672-7843.2014.03.001
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. Network epidemiology represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions - one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of such people), the other where the duration of the disease is constant. We find that, for most practical purposes, these versions are qualitatively the same.
[ { "created": "Wed, 5 Mar 2014 05:52:09 GMT", "version": "v1" } ]
2014-06-10
[ [ "Holme", "Petter", "" ] ]
Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. Network epidemiology represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions - one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of such people), the other where the duration of the disease is constant. We find that, for most practical purposes, these versions are qualitatively the same.
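The two model versions differ only in how the infectious period is drawn. The event-driven SIR sketch below (not the author's code; the ring network, transmission rate, and mean infectious period are placeholders) runs the same transmission process with either an exponentially distributed or a fixed duration.

```python
import heapq, random

def sir(neighbors, beta=1.0, tau=1.0, fixed_duration=False, seed=0):
    rng = random.Random(seed)
    state = {v: "S" for v in neighbors}
    events = []                                   # heap of (time, kind, node)

    def infect(t, v):
        state[v] = "I"
        dur = tau if fixed_duration else rng.expovariate(1.0 / tau)
        heapq.heappush(events, (t + dur, "R", v))
        # schedule the first transmission attempt along each edge within [t, t+dur]
        for u in neighbors[v]:
            dt = rng.expovariate(beta)
            if dt < dur:
                heapq.heappush(events, (t + dt, "I", u))

    infect(0.0, 0)
    recovered = 0
    while events:
        t, kind, v = heapq.heappop(events)
        if kind == "I" and state[v] == "S":       # ignore hits on non-susceptibles
            infect(t, v)
        elif kind == "R":
            state[v] = "R"
            recovered += 1
    return recovered                              # final outbreak size

ring = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}
print("exponential duration:", sir(ring), " fixed duration:", sir(ring, fixed_duration=True))
```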
1210.2993
Elisenda Feliu
Heather A. Harrington, Elisenda Feliu, Carsten Wiuf, Michael M. P. Stumpf
Cellular compartments cause multistability in biochemical reaction networks and allow cells to process more information
null
null
null
null
q-bio.MN q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many biological, physical, and social interactions have a particular dependence on where they take place. In living cells, protein movement between the nucleus and cytoplasm affects cellular response (i.e., proteins must be present in the nucleus to regulate their target genes). Here we use recent developments from dynamical systems and chemical reaction network theory to identify and characterize the key role of the spatial organization of eukaryotic cells in cellular information processing. In particular, the existence of distinct compartments plays a pivotal role in whether a system is capable of multistationarity (multiple response states), and is thus directly linked to the amount of information that the signaling molecules can represent in the nucleus. Multistationarity provides a mechanism for switching between different response states in cell signaling systems and enables multiple outcomes for cellular decision-making. We find that introducing species localization can alter the capacity for multistationarity and mathematically demonstrate that shuttling confers flexibility for and greater control of the emergence of an all-or-none response.
[ { "created": "Wed, 10 Oct 2012 17:53:30 GMT", "version": "v1" } ]
2012-10-11
[ [ "Harrington", "Heather A.", "" ], [ "Feliu", "Elisenda", "" ], [ "Wiuf", "Carsten", "" ], [ "Stumpf", "Michael M. P.", "" ] ]
Many biological, physical, and social interactions have a particular dependence on where they take place. In living cells, protein movement between the nucleus and cytoplasm affects cellular response (i.e., proteins must be present in the nucleus to regulate their target genes). Here we use recent developments from dynamical systems and chemical reaction network theory to identify and characterize the key role of the spatial organization of eukaryotic cells in cellular information processing. In particular, the existence of distinct compartments plays a pivotal role in whether a system is capable of multistationarity (multiple response states), and is thus directly linked to the amount of information that the signaling molecules can represent in the nucleus. Multistationarity provides a mechanism for switching between different response states in cell signaling systems and enables multiple outcomes for cellular decision-making. We find that introducing species localization can alter the capacity for multistationarity and mathematically demonstrate that shuttling confers flexibility for and greater control of the emergence of an all-or-none response.
1606.08585
Abhay Sharma
Abhay Sharma
Reanalyzing variable directionality of gene expression in transgenerational epigenetic inheritance
12 pages, 1 figure, 1 table
null
null
null
q-bio.GN
http://creativecommons.org/publicdomain/zero/1.0/
A previous report claimed no evidence of transgenerational epigenetic inheritance in a mouse model of in utero environmental exposure, based on the observation that gene expression changes observed in the germ cells of G1 and G2 male fetuses were not in the same direction. A subsequent data reanalysis, however, showed a statistically significant overlap between G1 and G2 genes irrespective of direction, leading to the suggestion that, as phenotypic variability in epigenetic transmission has been observed in several other examples as well, the above report provided evidence in favor of, not against, transgenerational inheritance. This criticism has recently been questioned. Here, it is shown that the questions raised are based not only on incorrect statistical calculations but also on the wrong premise that gene expression changes do not constitute a phenotype.
[ { "created": "Tue, 28 Jun 2016 07:30:01 GMT", "version": "v1" } ]
2016-06-29
[ [ "Sharma", "Abhay", "" ] ]
A previous report claimed no evidence of transgenerational epigenetic inheritance in a mouse model of in utero environmental exposure, based on the observation that gene expression changes observed in the germ cells of G1 and G2 male fetuses were not in the same direction. A subsequent data reanalysis, however, showed a statistically significant overlap between G1 and G2 genes irrespective of direction, leading to the suggestion that, as phenotypic variability in epigenetic transmission has been observed in several other examples as well, the above report provided evidence in favor of, not against, transgenerational inheritance. This criticism has recently been questioned. Here, it is shown that the questions raised are based not only on incorrect statistical calculations but also on the wrong premise that gene expression changes do not constitute a phenotype.
1805.07768
Jorge Fernandez-de-Cossio-Diaz
Jorge Fernandez-de-Cossio-Diaz, Roberto Mulet, Alexei Vazquez
Cell population heterogeneity driven by stochastic partition and growth optimality
null
null
10.1038/s41598-019-45882-w
null
q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A fundamental question in biology is how cell populations evolve into different subtypes based on homogeneous processes at the single-cell level. Here we show that population bimodality can emerge even when biological processes are homogeneous at the cell level and the environment is kept constant. Our model is based on the stochastic partitioning of a cell component with an optimal copy number. We show that the existence of unimodal or bimodal distributions depends on the variance of partition errors and the growth rate tolerance around the optimal copy number. In particular, our theory provides a consistent explanation for the maintenance of aneuploid states in a population. The proposed model can also be relevant for other cell components such as mitochondria and plasmids, whose abundances affect the growth rate and are subject to stochastic partition at cell division.
[ { "created": "Sun, 20 May 2018 14:16:20 GMT", "version": "v1" }, { "created": "Sun, 29 Jul 2018 10:14:42 GMT", "version": "v2" }, { "created": "Wed, 19 Jun 2019 13:04:34 GMT", "version": "v3" } ]
2019-07-01
[ [ "Fernandez-de-Cossio-Diaz", "Jorge", "" ], [ "Mulet", "Roberto", "" ], [ "Vazquez", "Alexei", "" ] ]
A fundamental question in biology is how cell populations evolve into different subtypes based on homogeneous processes at the single-cell level. Here we show that population bimodality can emerge even when biological processes are homogeneous at the cell level and the environment is kept constant. Our model is based on the stochastic partitioning of a cell component with an optimal copy number. We show that the existence of unimodal or bimodal distributions depends on the variance of partition errors and the growth rate tolerance around the optimal copy number. In particular, our theory provides a consistent explanation for the maintenance of aneuploid states in a population. The proposed model can also be relevant for other cell components such as mitochondria and plasmids, whose abundances affect the growth rate and are subject to stochastic partition at cell division.
1809.09798
E. Song
Euijun Song
Energy landscape analysis of cardiac fibrillation wave dynamics using pairwise maximum entropy model
11 pages, 3 figures, Presented at the 62nd Biophysical Society Annual Meeting, San Francisco, CA, 2018
null
null
null
q-bio.TO nlin.PS physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Cardiac fibrillation is characterized by chaotic and disintegrated spiral wave dynamics patterns, whereas sinus rhythm shows synchronized excitation patterns. To determine functional correlations among cardiomyocytes during complex fibrillation states, we applied a pairwise maximum entropy model (MEM) to the 2D numerical simulation data of human atrial fibrillation. We then constructed an energy landscape and estimated a hierarchical structure among the different local minima (attractors) to explain the dynamic properties of cardiac fibrillation. The MEM could describe the wave dynamics of sinus rhythm, single stable rotor, and single rotor with wavebreak (both accuracy and reliability>0.9), but not the multiple random wavelet case. The energy landscapes exhibited unique profiles of local minima and energy barriers, characterizing the spatiotemporal patterns of cardiac fibrillation dynamics. The pairwise MEM-based energy landscape analysis provides reliable explanations of complex nonlinear dynamics of cardiac fibrillation, which might be determined by the presence of a 'driver' such as a sinus node or rotor.
[ { "created": "Wed, 26 Sep 2018 04:19:12 GMT", "version": "v1" }, { "created": "Tue, 9 Oct 2018 05:10:41 GMT", "version": "v2" }, { "created": "Tue, 9 Aug 2022 03:48:23 GMT", "version": "v3" }, { "created": "Wed, 19 Oct 2022 12:37:11 GMT", "version": "v4" } ]
2022-10-20
[ [ "Song", "Euijun", "" ] ]
Cardiac fibrillation is characterized by chaotic and disintegrated spiral wave dynamics patterns, whereas sinus rhythm shows synchronized excitation patterns. To determine functional correlations among cardiomyocytes during complex fibrillation states, we applied a pairwise maximum entropy model (MEM) to the 2D numerical simulation data of human atrial fibrillation. We then constructed an energy landscape and estimated a hierarchical structure among the different local minima (attractors) to explain the dynamic properties of cardiac fibrillation. The MEM could describe the wave dynamics of sinus rhythm, single stable rotor, and single rotor with wavebreak (both accuracy and reliability>0.9), but not the multiple random wavelet case. The energy landscapes exhibited unique profiles of local minima and energy barriers, characterizing the spatiotemporal patterns of cardiac fibrillation dynamics. The pairwise MEM-based energy landscape analysis provides reliable explanations of complex nonlinear dynamics of cardiac fibrillation, which might be determined by the presence of a 'driver' such as a sinus node or rotor.
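As a hedged illustration of the analysis pipeline in this abstract (not the author's code; the fields h and couplings J below are random rather than fitted to binarized activity data), one can compute pairwise-MEM energies E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j and assign each pattern to a local minimum ("attractor") by greedy single-flip descent.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 6                                   # number of binarized channels
h = rng.normal(0.0, 0.3, n)             # fields (would be fitted to data)
J = rng.normal(0.0, 0.6, (n, n))
J = np.triu(J, 1); J = J + J.T          # symmetric couplings, zero diagonal

def energy(s):
    s = np.asarray(s, dtype=float)
    return float(-h @ s - 0.5 * s @ J @ s)   # 0.5 factor gives the sum over i < j

def descend(s):
    """Greedy single-spin-flip descent to the nearest local minimum."""
    s = np.array(s, dtype=float)
    e, improved = energy(s), True
    while improved:
        improved = False
        for i in range(n):
            s[i] *= -1
            e_new = energy(s)
            if e_new < e:
                e, improved = e_new, True
            else:
                s[i] *= -1              # revert the flip
    return tuple(int(x) for x in s)

minima = {descend(s) for s in product([-1, 1], repeat=n)}
for m in sorted(minima, key=energy):
    print(m, "E =", round(energy(m), 3))
```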
1601.03559
Michele Monti
Michele Monti and Pieter Rein ten Wolde
The accuracy of telling time via oscillatory signals
null
null
10.1088/1478-3975/13/3/035005
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Circadian clocks are the central timekeepers of life, allowing cells to anticipate changes between day and night. Experiments in recent years have revealed that circadian clocks can be highly stable, raising the question of how reliably they can be read out. Here, we combine mathematical modeling with information theory to address the question of how accurately a cell can infer the time from an ensemble of protein oscillations, which are driven by a circadian clock. We show that the precision increases with the number of oscillations and their amplitude relative to their noise. Our analysis also reveals that there exists an optimal phase relation that minimizes the error in the estimate of time, which depends on the relative noise levels of the protein oscillations. Lastly, our work shows that cross-correlations in the noise of the protein oscillations can enhance the mutual information, which suggests that cross-regulatory interactions between the proteins that read out the clock can be beneficial for temporal information transmission.
[ { "created": "Thu, 14 Jan 2016 11:32:50 GMT", "version": "v1" } ]
2016-06-22
[ [ "Monti", "Michele", "" ], [ "Wolde", "Pieter Rein ten", "" ] ]
Circadian clocks are the central timekeepers of life, allowing cells to anticipate changes between day and night. Experiments in recent years have revealed that circadian clocks can be highly stable, raising the question of how reliably they can be read out. Here, we combine mathematical modeling with information theory to address the question of how accurately a cell can infer the time from an ensemble of protein oscillations, which are driven by a circadian clock. We show that the precision increases with the number of oscillations and their amplitude relative to their noise. Our analysis also reveals that there exists an optimal phase relation that minimizes the error in the estimate of time, which depends on the relative noise levels of the protein oscillations. Lastly, our work shows that cross-correlations in the noise of the protein oscillations can enhance the mutual information, which suggests that cross-regulatory interactions between the proteins that read out the clock can be beneficial for temporal information transmission.
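A toy version of the readout problem (not the paper's model): estimate the time from K noisy protein oscillations by grid-search maximum likelihood. The 24 h period, amplitude, noise level, and evenly spread phase relation are placeholders, but the output illustrates the abstract's point that precision grows with the number of oscillations and depends on their phase relation.

```python
import numpy as np

rng = np.random.default_rng(7)
w, A, sigma = 2 * np.pi / 24.0, 1.0, 0.3        # 24 h period, unit amplitude

def estimate_time(true_t, phases):
    """Maximum-likelihood time estimate under iid Gaussian readout noise."""
    obs = A * np.cos(w * true_t + phases) + rng.normal(0.0, sigma, len(phases))
    grid = np.linspace(0.0, 24.0, 2400, endpoint=False)
    pred = A * np.cos(w * grid[:, None] + phases[None, :])
    return grid[np.argmin(((pred - obs) ** 2).sum(axis=1))]

def circ_err(t_hat, t):                          # error on the 24 h circle
    d = abs(t_hat - t) % 24
    return min(d, 24 - d)

for K in (1, 2, 4, 8):
    phases = 2 * np.pi * np.arange(K) / K        # evenly spread phase relation
    errs = [circ_err(estimate_time(t, phases), t) for t in rng.uniform(0, 24, 500)]
    print(K, "oscillation(s) -> mean abs error", round(float(np.mean(errs)), 3), "h")
```

With K = 1 the cosine leaves a reflection ambiguity in the time, so the error is large; adding oscillations with staggered phases resolves it, mirroring the abstract's conclusion.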
2103.09150
Margaret Cheung
Andrei G. Gasic, Atrayee Sarkar, and Margaret S. Cheung
Understanding Protein-Complex Assembly through Grand Canonical Maximum Entropy Modeling
8 figures
Phys. Rev. Research 3, 033220 (2021)
10.1103/PhysRevResearch.3.033220
null
q-bio.BM q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Inside a cell, heterotypic proteins assemble in inhomogeneous, crowded systems where the abundance of these proteins varies with cell type. While some protein complexes form putative structures that can be visualized with imaging, there are far more protein complexes that are yet to be solved because of their dynamic associations with one another. Yet, it is possible to infer these protein complexes through a physical model. However, it is often not clear to physicists what kind of data from biology is necessary for such a modeling endeavor. Here, we aim to model these clusters of coarse-grained protein assemblies from multiple subunits through the constraints of interactions among the subunits and the chemical potential of each subunit. We obtained the constraints on the interactions among subunits from the known protein structures. We inferred the chemical potential, which dictates the particle number distribution of each protein subunit, from knowledge of protein abundance in experimental data. Guided by the maximum entropy principle, we formulated an inverse statistical mechanical method to infer the distribution of particle numbers from protein abundance data as chemical potentials for a grand canonical multi-component mixture. Using grand canonical Monte Carlo simulations, we captured a distribution of high-order clusters in a protein complex of Succinate Dehydrogenase (SDH) with four known subunits. The complexity of hierarchical clusters varies with the relative protein abundance of each subunit in distinctive cell types such as lung, heart, and brain. When the crowding content increases, we observed that crowding stabilizes emergent clusters that do not exist in dilute conditions. We therefore propose a testable hypothesis that the hierarchical complexity of protein clusters on a molecular scale is a plausible biomarker for predicting the phenotypes of a cell.
[ { "created": "Tue, 16 Mar 2021 15:47:30 GMT", "version": "v1" }, { "created": "Thu, 18 Mar 2021 03:40:00 GMT", "version": "v2" }, { "created": "Tue, 7 Sep 2021 17:48:40 GMT", "version": "v3" } ]
2021-09-15
[ [ "Gasic", "Andrei G.", "" ], [ "Sarkar", "Atrayee", "" ], [ "Cheung", "Margaret S.", "" ] ]
Inside a cell, heterotypic proteins assemble in inhomogeneous, crowded systems where the abundance of these proteins varies with cell type. While some protein complexes form putative structures that can be visualized with imaging, there are far more protein complexes that are yet to be solved because of their dynamic associations with one another. Yet, it is possible to infer these protein complexes through a physical model. However, it is often not clear to physicists what kind of data from biology is necessary for such a modeling endeavor. Here, we aim to model these clusters of coarse-grained protein assemblies from multiple subunits through the constraints of interactions among the subunits and the chemical potential of each subunit. We obtained the constraints on the interactions among subunits from the known protein structures. We inferred the chemical potential, which dictates the particle number distribution of each protein subunit, from knowledge of protein abundance in experimental data. Guided by the maximum entropy principle, we formulated an inverse statistical mechanical method to infer the distribution of particle numbers from protein abundance data as chemical potentials for a grand canonical multi-component mixture. Using grand canonical Monte Carlo simulations, we captured a distribution of high-order clusters in a protein complex of Succinate Dehydrogenase (SDH) with four known subunits. The complexity of hierarchical clusters varies with the relative protein abundance of each subunit in distinctive cell types such as lung, heart, and brain. When the crowding content increases, we observed that crowding stabilizes emergent clusters that do not exist in dilute conditions. We therefore propose a testable hypothesis that the hierarchical complexity of protein clusters on a molecular scale is a plausible biomarker for predicting the phenotypes of a cell.
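A bare-bones grand canonical Monte Carlo sketch in the same spirit, not the authors' model: a two-species lattice gas where chemical potentials mu_A, mu_B set subunit abundances and an attractive A-B contact energy drives complex formation. The lattice size, temperature, and all energies below are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
L, beta = 20, 1.0
mu = {1: -1.0, 2: -1.5}         # chemical potentials of subunit types A(1), B(2)
eps = 2.0                       # attractive A-B contact energy

grid = np.zeros((L, L), dtype=int)

def site_energy(g, i, j):
    """Contact energy of the particle at (i, j) with its four neighbours."""
    s = g[i, j]
    if s == 0:
        return 0.0
    contacts = 0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        t = g[(i + di) % L, (j + dj) % L]
        contacts += (t != 0 and t != s)      # only unlike (A-B) pairs attract
    return -eps * contacts

for _ in range(200_000):
    i, j = rng.integers(L, size=2)
    old, new = int(grid[i, j]), int(rng.integers(0, 3))
    if new == old:
        continue
    e_old = site_energy(grid, i, j)
    grid[i, j] = new
    dE = site_energy(grid, i, j) - e_old
    d_mu = mu.get(new, 0.0) - mu.get(old, 0.0)   # empty sites carry mu = 0
    # Metropolis acceptance for the grand potential change dE - d_mu
    if rng.random() >= np.exp(min(0.0, -beta * dE + beta * d_mu)):
        grid[i, j] = old                          # reject the move
print("N_A =", int((grid == 1).sum()), " N_B =", int((grid == 2).sum()))
```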
2012.02089
George Lamb
George Lamb, Brooks Paige
Bayesian Graph Neural Networks for Molecular Property Prediction
Presented at NeurIPS 2020 Machine Learning for Molecules workshop
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks for molecular property prediction are frequently underspecified by data and fail to generalise to new scaffolds at test time. A potential solution is Bayesian learning, which can capture our uncertainty in the model parameters. This study benchmarks a set of Bayesian methods applied to a directed MPNN, using the QM9 regression dataset. We find that capturing uncertainty in both readout and message passing parameters yields enhanced predictive accuracy, calibration, and performance on a downstream molecular search task.
[ { "created": "Wed, 25 Nov 2020 22:32:54 GMT", "version": "v1" } ]
2020-12-04
[ [ "Lamb", "George", "" ], [ "Paige", "Brooks", "" ] ]
Graph neural networks for molecular property prediction are frequently underspecified by data and fail to generalise to new scaffolds at test time. A potential solution is Bayesian learning, which can capture our uncertainty in the model parameters. This study benchmarks a set of Bayesian methods applied to a directed MPNN, using the QM9 regression dataset. We find that capturing uncertainty in both readout and message passing parameters yields enhanced predictive accuracy, calibration, and performance on a downstream molecular search task.
1311.1992
Daniela Delneri
Sarah K. Hewitt, Ian Donaldson, Simon C. Lovell and Daniela Delneri
Sequencing and characterisation of rearrangements in three S. pastorianus strains reveals the presence of chimeric genes and gives evidence of breakpoint reuse
null
null
10.1371/journal.pone.0092203
null
q-bio.GN
http://creativecommons.org/licenses/by/3.0/
Gross chromosomal rearrangements have the potential to be evolutionarily advantageous to an adapting organism. The generation of a hybrid species increases opportunity for recombination by bringing together two homologous genomes. We sought to define the location of genomic rearrangements in three strains of Saccharomyces pastorianus, a natural lager-brewing yeast hybrid of Saccharomyces cerevisiae and Saccharomyces eubayanus, using whole genome shotgun sequencing. Each strain of S. pastorianus has lost species-specific portions of its genome and has undergone extensive recombination, producing chimeric chromosomes. We predicted 30 breakpoints that we confirmed at the single nucleotide level by designing species-specific primers that flank each breakpoint, and then sequencing the PCR product. These rearrangements are the result of recombination between areas of homology between the two subgenomes, rather than repetitive elements such as transposons or tRNAs. Interestingly, 28/30 S. cerevisiae-S. eubayanus recombination breakpoints are located within genic regions, generating chimeric genes. Furthermore we show evidence for the reuse of two breakpoints, located in HSP82 and KEM1, in strains of proposed independent origin.
[ { "created": "Fri, 8 Nov 2013 14:54:07 GMT", "version": "v1" } ]
2015-06-17
[ [ "Hewitt", "Sarah K.", "" ], [ "Donaldson", "Ian", "" ], [ "Lovell", "Simon C.", "" ], [ "Delneri", "Daniela", "" ] ]
Gross chromosomal rearrangements have the potential to be evolutionarily advantageous to an adapting organism. The generation of a hybrid species increases the opportunity for recombination by bringing together two homologous genomes. We sought to define the location of genomic rearrangements in three strains of Saccharomyces pastorianus, a natural lager-brewing yeast hybrid of Saccharomyces cerevisiae and Saccharomyces eubayanus, using whole genome shotgun sequencing. Each strain of S. pastorianus has lost species-specific portions of its genome and has undergone extensive recombination, producing chimeric chromosomes. We predicted 30 breakpoints that we confirmed at the single nucleotide level by designing species-specific primers that flank each breakpoint, and then sequencing the PCR product. These rearrangements are the result of recombination between areas of homology between the two subgenomes, rather than repetitive elements such as transposons or tRNAs. Interestingly, 28/30 S. cerevisiae-S. eubayanus recombination breakpoints are located within genic regions, generating chimeric genes. Furthermore, we show evidence for the reuse of two breakpoints, located in HSP82 and KEM1, in strains of proposed independent origin.
1105.5014
Franz Baumdicker
Franz Baumdicker and Peter Pfaffelhuber
Evolution of bacterial genomes under horizontal gene transfer
Published at the 58th World Statistics Congress of the International Statistical Institute (ISI) in Dublin
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unraveling the evolutionary forces shaping bacterial diversity can today be tackled using a growing amount of genomic data. While the genome of eukaryotes is highly stable, bacterial genomes from cells of the same species vary greatly in gene content. This huge variation in gene content led to the concepts of the distributed genome of bacteria and their pangenome (Tettelin et al., 2005; Ehrlich et al., 2005). We present a population genetic model for gene content evolution which accounts for several mechanisms. Gene uptake from the environment is modeled by events of gene gain along the genealogical tree relating the population. Pseudogenization may lead to deletion of genes and is incorporated as gene loss. These two mechanisms were studied by Huson and Steel (2004) using a fixed phylogenetic tree. Taking the random genealogy given by the coalescent (Kingman, 1982; Hudson, 1983), we already studied the resulting genomic diversity in Baumdicker et al. (2010). In the present paper, we extend the model in order to incorporate events of interspecies horizontal gene transfer. Within this model, we derive expectations for the gene frequency spectrum and other quantities of interest.
[ { "created": "Wed, 25 May 2011 13:11:56 GMT", "version": "v1" } ]
2015-03-19
[ [ "Baumdicker", "Franz", "" ], [ "Pfaffelhuber", "Peter", "" ] ]
Unraveling the evolutionary forces shaping bacterial diversity can today be tackled using a growing amount of genomic data. While the genome of eukaryotes is highly stable, bacterial genomes from cells of the same species vary greatly in gene content. This huge variation in gene content led to the concepts of the distributed genome of bacteria and their pangenome (Tettelin et al., 2005; Ehrlich et al., 2005). We present a population genetic model for gene content evolution which accounts for several mechanisms. Gene uptake from the environment is modeled by events of gene gain along the genealogical tree relating the population. Pseudogenization may lead to deletion of genes and is incorporated as gene loss. These two mechanisms were studied by Huson and Steel (2004) using a fixed phylogenetic tree. Taking the random genealogy given by the coalescent (Kingman, 1982; Hudson, 1983), we already studied the resulting genomic diversity in Baumdicker et al. (2010). In the present paper, we extend the model in order to incorporate events of interspecies horizontal gene transfer. Within this model, we derive expectations for the gene frequency spectrum and other quantities of interest.
0712.0391
Jose Albornoz
Jos\'e M. Albornoz, Antonio Parravano
Modeling a simple enzyme reaction with delay and discretization
13 pages, 7 figures. Submitted to Journal of Theoretical Biology
null
null
null
q-bio.MN q-bio.QM
null
A comparison is made between conventional Michaelis-Menten kinetics and two models that take into account the duration of the conformational changes that take place at the molecular level during the catalytic cycle of a monomer. The models consider the time that elapses from the moment an enzyme-substrate complex forms until the moment a product molecule is released, as well as the recovery time needed to reset the conformational change that took place. In the first model the dynamics is described by a set of delayed differential equations, instead of the ordinary differential equations associated with Michaelis-Menten kinetics. In the second model the delay, the discretization inherent to enzyme reactions, and the stochastic binding of substrates to enzymes at the molecular level are considered. All three models agree at equilibrium, as expected; however, out-of-equilibrium dynamics can differ substantially. In particular, both delayed models show oscillations at low values of the Michaelis constant which are not reproduced by the Michaelis-Menten model. Additionally, in certain cases, the dynamics shown by the continuous delayed model differs from the dynamics of the discrete delayed model when some reactant becomes scarce.
[ { "created": "Mon, 3 Dec 2007 21:15:01 GMT", "version": "v1" } ]
2007-12-05
[ [ "Albornoz", "José M.", "" ], [ "Parravano", "Antonio", "" ] ]
A comparison is made between conventional Michaelis-Menten kinetics and two models that take into account the duration of the conformational changes that take place at the molecular level during the catalytic cycle of a monomer. The models consider the time that elapses from the moment an enzyme-substrate complex forms until the moment a product molecule is released, as well as the recovery time needed to reset the conformational change that took place. In the first model the dynamics is described by a set of delayed differential equations, instead of the ordinary differential equations associated with Michaelis-Menten kinetics. In the second model the delay, the discretization inherent to enzyme reactions, and the stochastic binding of substrates to enzymes at the molecular level are considered. All three models agree at equilibrium, as expected; however, out-of-equilibrium dynamics can differ substantially. In particular, both delayed models show oscillations at low values of the Michaelis constant which are not reproduced by the Michaelis-Menten model. Additionally, in certain cases, the dynamics shown by the continuous delayed model differs from the dynamics of the discrete delayed model when some reactant becomes scarce.
2201.02990
Ana Carpio
Ana Carpio, Rafael Gonzalez-Albaladejo
Immersed boundary approach to biofilm spread on surfaces
null
Commun. Comput. Phys. Vol. 31, No. 1, pp. 257-292, 2022
10.4208/cicp.OA-2021-0039
null
q-bio.CB physics.bio-ph physics.comp-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a computational framework to study the growth and spread of bacterial biofilms on interfaces, as well as the action of antibiotics on them. Bacterial membranes are represented by boundaries immersed in a fluid matrix and subject to interaction forces. Growth, division and death of bacterial cells follow dynamic energy budget rules, in response to variations in environmental concentrations of nutrients, toxicants and substances released by the cells. In this way, we create, destroy and enlarge boundaries, either spherical or rod-like. Appropriate forces represent details of the interaction between cells, and the interaction with the environment. Numerical simulations illustrate the evolution of top views and diametral slices of small biofilm seeds, as well as the action of antibiotics. We show that cocktails of antibiotics targeting active and dormant cells can entirely eradicate a biofilm.
[ { "created": "Sun, 9 Jan 2022 11:47:25 GMT", "version": "v1" } ]
2022-01-11
[ [ "Carpio", "Ana", "" ], [ "Gonzalez-Albaladejo", "Rafael", "" ] ]
We propose a computational framework to study the growth and spread of bacterial biofilms on interfaces, as well as the action of antibiotics on them. Bacterial membranes are represented by boundaries immersed in a fluid matrix and subject to interaction forces. Growth, division and death of bacterial cells follow dynamic energy budget rules, in response to variations in environmental concentrations of nutrients, toxicants and substances released by the cells. In this way, we create, destroy and enlarge boundaries, either spherical or rod-like. Appropriate forces represent details of the interaction between cells, and the interaction with the environment. Numerical simulations illustrate the evolution of top views and diametral slices of small biofilm seeds, as well as the action of antibiotics. We show that cocktails of antibiotics targeting active and dormant cells can entirely eradicate a biofilm.
1409.5062
Keri Ighwela
Keri Alhadi Ighwela, Aziz Bin Ahmad and A.B. Abol-Munafi
Water Stability and Nutrient Leaching of Different Levels of Maltose Formulated Fish Pellets
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/3.0/
The effects of different levels of maltose on feed pellet water stability and nutrient leaching were studied. Five treatments (0.0, 20, 25, 30 and 35% maltose), including a control, were set up with three replicates each. Pellet leaching rates were used to indicate pellet water stability. The results show that the presence of maltose in the diets significantly improved pellet water stability (p<0.05), but the leaching rates of the feed with 35% maltose were higher than those of the other feeds. Increasing the maltose level resulted in a corresponding decrease in pellet stability. The protein leaching rates of the control feed and the feed with 20% maltose were significantly (p<0.05) lower than the rates of the other diets. The lipid leaching rate of the control feed was lower than the rates of the other diets, while the feed with 35% maltose showed the highest leaching rate. Improved feed water stability is one important reason why maltose enhances fish growth.
[ { "created": "Wed, 17 Sep 2014 16:45:00 GMT", "version": "v1" } ]
2014-09-18
[ [ "Ighwela", "Keri Alhadi", "" ], [ "Ahmad", "Aziz Bin", "" ], [ "Abol-Munafi", "A. B.", "" ] ]
The effects of different levels of maltose on feed pellet water stability and nutrient leaching were studied. Five treatments (0.0, 20, 25, 30 and 35% maltose), including a control, were set up with three replicates each. Pellet leaching rates were used to indicate pellet water stability. The results show that the presence of maltose in the diets significantly improved pellet water stability (p<0.05), but the leaching rates of the feed with 35% maltose were higher than those of the other feeds. Increasing the maltose level resulted in a corresponding decrease in pellet stability. The protein leaching rates of the control feed and the feed with 20% maltose were significantly (p<0.05) lower than the rates of the other diets. The lipid leaching rate of the control feed was lower than the rates of the other diets, while the feed with 35% maltose showed the highest leaching rate. Improved feed water stability is one important reason why maltose enhances fish growth.
1605.01213
Alexander L\"uck
Alexander L\"uck, Verena Wolf
Generalized Method of Moments for Estimating Parameters of Stochastic Reaction Networks
18 pages, 5 figures, minor changes concerning measurement errors
null
10.1186/s12918-016-0342-8
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete-state stochastic models have become a well-established approach to describe biochemical reaction networks that are influenced by the inherent randomness of cellular events. In recent years, several methods for accurately approximating the statistical moments of such models have become very popular since they allow an efficient analysis of complex networks. We propose a generalized method of moments approach for inferring the parameters of reaction networks based on a sophisticated matching of the statistical moments of the corresponding stochastic model and the sample moments of population snapshot data. The proposed parameter estimation method exploits recently developed moment-based approximations and provides estimators with desirable statistical properties when a large number of samples is available. We demonstrate the usefulness and efficiency of the inference method on two case studies. The generalized method of moments provides accurate and fast estimates of unknown parameters of reaction networks. The accuracy increases when moments of order higher than two are also considered. In addition, the variance of the estimator decreases when more samples are given or when higher-order moments are included.
[ { "created": "Wed, 4 May 2016 10:44:13 GMT", "version": "v1" }, { "created": "Wed, 24 Aug 2016 08:39:56 GMT", "version": "v2" }, { "created": "Fri, 7 Oct 2016 11:51:53 GMT", "version": "v3" } ]
2017-07-03
[ [ "Lück", "Alexander", "" ], [ "Wolf", "Verena", "" ] ]
Discrete-state stochastic models have become a well-established approach to describe biochemical reaction networks that are influenced by the inherent randomness of cellular events. In recent years, several methods for accurately approximating the statistical moments of such models have become very popular since they allow an efficient analysis of complex networks. We propose a generalized method of moments approach for inferring the parameters of reaction networks based on a sophisticated matching of the statistical moments of the corresponding stochastic model and the sample moments of population snapshot data. The proposed parameter estimation method exploits recently developed moment-based approximations and provides estimators with desirable statistical properties when a large number of samples is available. We demonstrate the usefulness and efficiency of the inference method on two case studies. The generalized method of moments provides accurate and fast estimates of unknown parameters of reaction networks. The accuracy increases when moments of order higher than two are also considered. In addition, the variance of the estimator decreases when more samples are given or when higher-order moments are included.
1907.08659
Walter Veit
Walter Veit
Modeling Morality
Preprint: Model-Based Reasoning in Science and Technology
null
null
null
q-bio.PE econ.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unlike any other field, the science of morality has drawn attention from an extraordinarily diverse set of disciplines. An interdisciplinary research program has formed in which economists, biologists, neuroscientists, psychologists, and even philosophers have been eager to provide answers to puzzling questions raised by the existence of human morality. Models and simulations, for a variety of reasons, have played various important roles in this endeavor. Their use, however, has sometimes been deemed useless, trivial, and inadequate. The role of models in the science of morality has been vastly underappreciated. This omission shall be remedied here, offering a much more positive picture of the contributions modelers have made to our understanding of morality.
[ { "created": "Fri, 19 Jul 2019 19:29:12 GMT", "version": "v1" } ]
2019-07-23
[ [ "Veit", "Walter", "" ] ]
Unlike any other field, the science of morality has drawn attention from an extraordinarily diverse set of disciplines. An interdisciplinary research program has formed in which economists, biologists, neuroscientists, psychologists, and even philosophers have been eager to provide answers to puzzling questions raised by the existence of human morality. Models and simulations, for a variety of reasons, have played various important roles in this endeavor. Their use, however, has sometimes been deemed useless, trivial, and inadequate. The role of models in the science of morality has been vastly underappreciated. This omission shall be remedied here, offering a much more positive picture of the contributions modelers have made to our understanding of morality.
2005.02809
Adejoke Obajuluwa Dr
Adejoke Olukayode Obajuluwa, Pius Abimbola Okiki, Tiwalola Madoc Obajuluwa, Olakunle Bamikole Afolabi
In-silico nucleotide and protein analyses of S-gene region in selected zoonotic coronaviruses reveal conserved domains and evolutionary emergence with trajectory course of viral entry from SARS-CoV2 genomic data
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The recent zoonotic coronavirus outbreak of a novel type [COVID-19] has necessitated an adequate understanding of the evolutionary pathway of zoonotic viruses, which adversely affect human populations, for therapeutic constructs to combat the pandemic now and in the future. We analyzed conserved domains of the severe acute respiratory coronavirus 2 [SARS-CoV2] for possible targets of viral entry inhibition in host cells, the evolutionary relationship of human coronavirus [229E] and zoonotic coronaviruses with SARS-CoV2, as well as the evolutionary relationship between selected SARS-CoV2 genomic data. Conserved domains with antagonistic action on host innate antiviral cellular mechanisms in SARS-CoV2 include nsp11, nsp13, etc. Also, multiple sequence alignments of the spike [S] gene protein of selected candidate zoonotic coronaviruses alongside the S gene protein of SARS-CoV2 revealed the closest evolutionary relationship [95.6%] with the pangolin coronavirus S gene. Clades formed between Wuhan SARS-CoV2 phylogeny data and five others suggest a viral entry trajectory while revealing SARS-CoV2 genomic and protein data from the Philippines as early ancestors. Therefore, the phylogeny of SARS-CoV2 genomic data suggests profiling in diverse populations with and without the outbreak, alongside migration history and racial background, for mutation tracking and dating of viral subtype divergence, which is essential for effective management of present and future zoonotic coronavirus outbreaks.
[ { "created": "Wed, 6 May 2020 13:32:55 GMT", "version": "v1" } ]
2020-05-07
[ [ "Obajuluwa", "Adejoke Olukayode", "" ], [ "Okiki", "Pius Abimbola", "" ], [ "Obajuluwa", "Tiwalola Madoc", "" ], [ "Afolabi", "Olakunle Bamikole", "" ] ]
The recent zoonotic coronavirus outbreak of a novel type [COVID-19] has necessitated an adequate understanding of the evolutionary pathway of zoonotic viruses, which adversely affect human populations, for therapeutic constructs to combat the pandemic now and in the future. We analyzed conserved domains of the severe acute respiratory coronavirus 2 [SARS-CoV2] for possible targets of viral entry inhibition in host cells, the evolutionary relationship of human coronavirus [229E] and zoonotic coronaviruses with SARS-CoV2, as well as the evolutionary relationship between selected SARS-CoV2 genomic data. Conserved domains with antagonistic action on host innate antiviral cellular mechanisms in SARS-CoV2 include nsp11, nsp13, etc. Also, multiple sequence alignments of the spike [S] gene protein of selected candidate zoonotic coronaviruses alongside the S gene protein of SARS-CoV2 revealed the closest evolutionary relationship [95.6%] with the pangolin coronavirus S gene. Clades formed between Wuhan SARS-CoV2 phylogeny data and five others suggest a viral entry trajectory while revealing SARS-CoV2 genomic and protein data from the Philippines as early ancestors. Therefore, the phylogeny of SARS-CoV2 genomic data suggests profiling in diverse populations with and without the outbreak, alongside migration history and racial background, for mutation tracking and dating of viral subtype divergence, which is essential for effective management of present and future zoonotic coronavirus outbreaks.
1904.06326
Lucas Barberis
L. Ben\'itez, L. Barberis, C. A. Condat
Modeling tumorspheres reveals cancer stem cell niche building and plasticity
18 pg
null
10.1016/j.physa.2019.121906
null
q-bio.CB cond-mat.other
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer stem cells have been shown to be critical to the development of a variety of solid cancers. The precise mechanisms of interplay between cancer stem cells and the rest of a tissue have still not been elucidated. To shed light on the interactions between stem and non-stem cancer cell populations, we develop a two-population mathematical model, which is suitable to describe tumorsphere growth. Both interspecific and intraspecific interactions, mediated by the microenvironment, are included. We show that there is a tipping point, characterized by a transcritical bifurcation, where a purely non-stem cell attractor is replaced by a new attractor that contains both stem and differentiated cancer cells. The model is then applied to describe the outcome of a recent experiment. This description reveals that, while the intraspecific interactions are inhibitory, the interspecific interactions stimulate growth. This can be understood in terms of stem cells needing differentiated cells to reinforce their niches, and phenotypic plasticity favoring the de-differentiation of differentiated cells into cancer stem cells. We posit that this is a consequence of the deregulation of the quorum sensing that maintains homeostasis in healthy tissues.
[ { "created": "Tue, 9 Apr 2019 19:13:15 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 16:55:56 GMT", "version": "v2" } ]
2019-09-04
[ [ "Benítez", "L.", "" ], [ "Barberis", "L.", "" ], [ "Condat", "C. A.", "" ] ]
Cancer stem cells have been shown to be critical to the development of a variety of solid cancers. The precise mechanisms of interplay between cancer stem cells and the rest of a tissue have still not been elucidated. To shed light on the interactions between stem and non-stem cancer cell populations, we develop a two-population mathematical model, which is suitable to describe tumorsphere growth. Both interspecific and intraspecific interactions, mediated by the microenvironment, are included. We show that there is a tipping point, characterized by a transcritical bifurcation, where a purely non-stem cell attractor is replaced by a new attractor that contains both stem and differentiated cancer cells. The model is then applied to describe the outcome of a recent experiment. This description reveals that, while the intraspecific interactions are inhibitory, the interspecific interactions stimulate growth. This can be understood in terms of stem cells needing differentiated cells to reinforce their niches, and phenotypic plasticity favoring the de-differentiation of differentiated cells into cancer stem cells. We posit that this is a consequence of the deregulation of the quorum sensing that maintains homeostasis in healthy tissues.
1211.0413
Sudip Kundu
Saurav Mallik and Sudip Kundu
The Lipid-RNA World
no figures and 7 pages
null
null
null
q-bio.BM physics.bio-ph q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simplest possible beginning of abiogenesis has been a riddle since the last century, which is most successfully solved by the Lipid World hypothesis. However, the origin of the next stages of evolution starting from lipids is still in the dark. We propose a 'Lipid-RNA World Scenario' based on the assumption that modern stable lipid-RNA interactions are molecular fossils of an ancient stage of evolution when the RNA World originated from the Lipid World. In accordance with the faint young sun conditions, we present an 'ice-covered hydrothermal vent' model of the Hadean Ocean. Our hypothetical model suggests that the faint young sun condition probably provided suitable physical conditions for an evolutionary route from the Lipid World to the Protein-RNA World, through an intermediate Lipid-RNA World. Ancient ribozymes were 'protected' by lipids, assuring their survival in the prebiotic ocean. The origin of natural selection ensures the transition of the Lipid-RNA World to the Protein-RNA World after the origin of the ribosome. Assuming the modern peptidyltransferase as the proto-ribosome structure, we have presented a hypothetical translation mechanism: the proto-ribosome randomly polymerized amino acids attached to the inner layer of a lipid vesicle, using only the physical energies available in our Hadean Ocean model. In accordance with the strategy of chemical evolution, we have also described the possible evolutionary behavior of this proto-ribosome, which explains the contemporary three-dimensional structure of the 50S subunit and supports the predictions regarding its ancient regions. It also explains the origin of the membrane-free 'minimal ribosome' in the time of LUCA.
[ { "created": "Fri, 2 Nov 2012 10:37:48 GMT", "version": "v1" }, { "created": "Mon, 25 Feb 2013 10:23:35 GMT", "version": "v2" } ]
2013-02-26
[ [ "Mallik", "Saurav", "" ], [ "Kundu", "Sudip", "" ] ]
The simplest possible beginning of abiogenesis has been a riddle since the last century, which is most successfully solved by the Lipid World hypothesis. However, the origin of the next stages of evolution starting from lipids is still in the dark. We propose a 'Lipid-RNA World Scenario' based on the assumption that modern stable lipid-RNA interactions are molecular fossils of an ancient stage of evolution when the RNA World originated from the Lipid World. In accordance with the faint young sun conditions, we present an 'ice-covered hydrothermal vent' model of the Hadean Ocean. Our hypothetical model suggests that the faint young sun condition probably provided suitable physical conditions for an evolutionary route from the Lipid World to the Protein-RNA World, through an intermediate Lipid-RNA World. Ancient ribozymes were 'protected' by lipids, assuring their survival in the prebiotic ocean. The origin of natural selection ensures the transition of the Lipid-RNA World to the Protein-RNA World after the origin of the ribosome. Assuming the modern peptidyltransferase as the proto-ribosome structure, we have presented a hypothetical translation mechanism: the proto-ribosome randomly polymerized amino acids attached to the inner layer of a lipid vesicle, using only the physical energies available in our Hadean Ocean model. In accordance with the strategy of chemical evolution, we have also described the possible evolutionary behavior of this proto-ribosome, which explains the contemporary three-dimensional structure of the 50S subunit and supports the predictions regarding its ancient regions. It also explains the origin of the membrane-free 'minimal ribosome' in the time of LUCA.
0912.4726
Eugene Shakhnovich
Muyoung Heo and Eugene Shakhnovich
Interplay between pleiotropy and secondary selection determines rise and fall of mutators in stress response
null
null
10.1371/journal.pcbi.1000710
null
q-bio.BM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A dramatic rise of mutators has been found to accompany adaptation of bacteria in response to many kinds of stress. Two views on the evolutionary origin of this phenomenon emerged: the pleiotropic hypothesis, positing that it is a byproduct of environmental stress or other specific stress response mechanisms, and second-order selection, which states that mutators hitchhike to fixation with unrelated beneficial alleles. Conventional population genetics models could not fully resolve this controversy because they are based on certain assumptions about the fitness landscape. Here we address this problem using a microscopic multiscale model, which couples physically realistic molecular descriptions of proteins and their interactions with the population genetics of carrier organisms without assuming any a priori fitness landscape. We found that both pleiotropy and second-order selection play a crucial role at different stages of adaptation: the supply of mutators is provided through destabilization of error correction complexes or fluctuations of production levels of prototypic mismatch repair proteins (pleiotropic effects), while the rise and fixation of mutators occur when there is a sufficient supply of beneficial mutations in replication-controlling genes. This general mechanism assures a robust and reliable adaptation of organisms to unforeseen challenges. This study highlights the physical principles underlying biological mechanisms of stress response and adaptation.
[ { "created": "Wed, 23 Dec 2009 20:43:19 GMT", "version": "v1" } ]
2015-05-14
[ [ "Heo", "Muyoung", "" ], [ "Shakhnovich", "Eugene", "" ] ]
A dramatic rise of mutators has been found to accompany adaptation of bacteria in response to many kinds of stress. Two views on the evolutionary origin of this phenomenon emerged: the pleiotropic hypothesis, positing that it is a byproduct of environmental stress or other specific stress response mechanisms, and second-order selection, which states that mutators hitchhike to fixation with unrelated beneficial alleles. Conventional population genetics models could not fully resolve this controversy because they are based on certain assumptions about the fitness landscape. Here we address this problem using a microscopic multiscale model, which couples physically realistic molecular descriptions of proteins and their interactions with the population genetics of carrier organisms without assuming any a priori fitness landscape. We found that both pleiotropy and second-order selection play a crucial role at different stages of adaptation: the supply of mutators is provided through destabilization of error correction complexes or fluctuations of production levels of prototypic mismatch repair proteins (pleiotropic effects), while the rise and fixation of mutators occur when there is a sufficient supply of beneficial mutations in replication-controlling genes. This general mechanism assures a robust and reliable adaptation of organisms to unforeseen challenges. This study highlights the physical principles underlying biological mechanisms of stress response and adaptation.
1508.06579
Yuri A. Dabaghian
Yuri Dabaghian
Geometry of Spatial Memory Replay
15 pages, 5 figures, Neural Computation, 2016
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally, during both sleep and awake states, reactivating the place cells that fired during its exploration in the same sequence they were originally activated. Although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. We propose a model that can account for this phenomenon and suggests that a spatially consistent replay requires a number of constraints on the hippocampal network that affect its synaptic architecture and the statistics of synaptic connection strengths.
[ { "created": "Wed, 26 Aug 2015 17:28:48 GMT", "version": "v1" }, { "created": "Sun, 20 Mar 2016 18:43:16 GMT", "version": "v2" } ]
2016-03-22
[ [ "Dabaghian", "Yuri", "" ] ]
Place cells in the rat hippocampus play a key role in creating the animal's internal representation of the world. During active navigation, these cells spike only in discrete locations, together encoding a map of the environment. Electrophysiological recordings have shown that the animal can revisit this map mentally, during both sleep and awake states, reactivating the place cells that fired during its exploration in the same sequence they were originally activated. Although consistency of place cell activity during active navigation is arguably enforced by sensory and proprioceptive inputs, it remains unclear how a consistent representation of space can be maintained during spontaneous replay. We propose a model that can account for this phenomenon and suggests that a spatially consistent replay requires a number of constraints on the hippocampal network that affect its synaptic architecture and the statistics of synaptic connection strengths.
q-bio/0507033
Dietrich Stauffer
Dietrich Stauffer and Klaus Rohde
Simulation of Rapoport's rule for latitudinal species spread
14 pages including 6 figures
null
null
null
q-bio.PE
null
Rapoport's rule claims that latitudinal ranges of plant and animal species are generally smaller at low than at high latitudes. However, doubts as to the generality of the rule have been expressed, because studies providing evidence against the rule are more numerous than those in support of it. In groups for which support has been provided, the trend of increasing latitudinal ranges with latitude is restricted to, or at least most distinct at, high latitudes, suggesting that the effect may be a local phenomenon, for example the result of glaciations. Here we test the rule using two models, a simple one-dimensional one with a fixed number of animals expanding in a northerly or southerly direction only, and the evolutionary/ecological Chowdhury model using birth, ageing, death, mutation, speciation, prey-predator relations and food levels. Simulations with both models gave results contradicting Rapoport's rule. In the first, latitudinal ranges were roughly independent of latitude; in the second, latitudinal ranges were greatest at low latitudes, as also shown empirically for some well-studied groups of animals.
[ { "created": "Thu, 21 Jul 2005 14:49:33 GMT", "version": "v1" } ]
2007-05-23
[ [ "Stauffer", "Dietrich", "" ], [ "Rohde", "Klaus", "" ] ]
Rapoport's rule claims that latitudinal ranges of plant and animal species are generally smaller at low than at high latitudes. However, doubts as to the generality of the rule have been expressed, because studies providing evidence against the rule are more numerous than those in support of it. In groups for which support has been provided, the trend of increasing latitudinal ranges with latitude is restricted to, or at least most distinct at, high latitudes, suggesting that the effect may be a local phenomenon, for example the result of glaciations. Here we test the rule using two models, a simple one-dimensional one with a fixed number of animals expanding in a northerly or southerly direction only, and the evolutionary/ecological Chowdhury model using birth, ageing, death, mutation, speciation, prey-predator relations and food levels. Simulations with both models gave results contradicting Rapoport's rule. In the first, latitudinal ranges were roughly independent of latitude; in the second, latitudinal ranges were greatest at low latitudes, as also shown empirically for some well-studied groups of animals.
2101.10582
Sourav Kumar Sasmal
Sourav Kumar Sasmal and Yasuhiro Takeuchi
Modeling the Allee effects induced by cost of predation fear and its carry-over effects
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predation-driven Allee effects play an important role in the dynamics of small populations; however, such predation-driven Allee effects cannot occur in a model with a type I functional response. They generally occur when a generalist predator targets some specific prey. However, apart from the lethal effects of predation, there are some non-lethal effects in the presence of a predator. Due to the fear of predation, positive density-dependent growth may be observed at low population density, because of reduced foraging activities. Moreover, this non-lethal effect can be carried over generations. In the present manuscript, we investigate the role of predation fear and its carry-over effects in a prey-predator model. First, we study the single-species model from a global perspective. We show that, depending on the birth rate, our single-species model describes three types of growth dynamics, namely, strong Allee dynamics, weak Allee dynamics, and logistic dynamics. Then we consider the explicit dynamics of the predator, with a type I functional response. Basic dynamical properties, as well as the global stability of each equilibrium, are discussed. From our analysis, we observe that both the fear and its carry-over effects have a significant role in the stability of the coexistence equilibrium, even for the model with a type I functional response. The paradox of enrichment can be observed in our model, which cannot be observed in the classical prey-predator model with a type I functional response. However, we show that this phenomenon can be ruled out by choosing suitable non-lethal effect parameters. Therefore, our study shows how non-lethal effects change the dynamics of a prey-predator model and has important biological insights, especially for understanding the dynamics of small populations.
[ { "created": "Tue, 26 Jan 2021 06:21:22 GMT", "version": "v1" } ]
2021-01-27
[ [ "Sasmal", "Sourav Kumar", "" ], [ "Takeuchi", "Yasuhiro", "" ] ]
Predation-driven Allee effects play an important role in the dynamics of small populations; however, such predation-driven Allee effects cannot occur in a model with a type I functional response. They generally occur when a generalist predator targets some specific prey. However, apart from the lethal effects of predation, there are some non-lethal effects in the presence of a predator. Due to the fear of predation, positive density-dependent growth may be observed at low population density, because of reduced foraging activities. Moreover, this non-lethal effect can be carried over generations. In the present manuscript, we investigate the role of predation fear and its carry-over effects in a prey-predator model. First, we study the single-species model from a global perspective. We show that, depending on the birth rate, our single-species model describes three types of growth dynamics, namely, strong Allee dynamics, weak Allee dynamics, and logistic dynamics. Then we consider the explicit dynamics of the predator, with a type I functional response. Basic dynamical properties, as well as the global stability of each equilibrium, are discussed. From our analysis, we observe that both the fear and its carry-over effects have a significant role in the stability of the coexistence equilibrium, even for the model with a type I functional response. The paradox of enrichment can be observed in our model, which cannot be observed in the classical prey-predator model with a type I functional response. However, we show that this phenomenon can be ruled out by choosing suitable non-lethal effect parameters. Therefore, our study shows how non-lethal effects change the dynamics of a prey-predator model and has important biological insights, especially for understanding the dynamics of small populations.
2106.02441
Francesco Zamponi
Matteo Bisardi, Juan Rodriguez-Rivas, Francesco Zamponi, Martin Weigt
Modeling sequence-space exploration and emergence of epistatic signals in protein evolution
16 pages, 14 figures
Molecular Biology and Evolution 39, msab321 (2022)
10.1093/molbev/msab321
null
q-bio.BM cond-mat.dis-nn q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During their evolution, proteins explore sequence space via an interplay between random mutations and phenotypic selection. Here we build upon recent progress in reconstructing data-driven fitness landscapes for families of homologous proteins, to propose stochastic models of experimental protein evolution. These models predict quantitatively important features of experimentally evolved sequence libraries, like fitness distributions and position-specific mutational spectra. They also allow us to efficiently simulate sequence libraries for a vast array of combinations of experimental parameters like sequence divergence, selection strength and library size. We showcase the potential of the approach in re-analyzing two recent experiments to determine protein structure from signals of epistasis emerging in experimental sequence libraries. To be detectable, these signals require sufficiently large and sufficiently diverged libraries. Our modeling framework offers a quantitative explanation for the variable success of recently published experiments. Furthermore, we can forecast the outcome of time- and resource-intensive evolution experiments, opening thereby a way to computationally optimize experimental protocols.
[ { "created": "Fri, 4 Jun 2021 12:39:52 GMT", "version": "v1" }, { "created": "Thu, 27 Jan 2022 10:55:14 GMT", "version": "v2" } ]
2022-01-28
[ [ "Bisardi", "Matteo", "" ], [ "Rodriguez-Rivas", "Juan", "" ], [ "Zamponi", "Francesco", "" ], [ "Weigt", "Martin", "" ] ]
During their evolution, proteins explore sequence space via an interplay between random mutations and phenotypic selection. Here we build upon recent progress in reconstructing data-driven fitness landscapes for families of homologous proteins, to propose stochastic models of experimental protein evolution. These models predict quantitatively important features of experimentally evolved sequence libraries, like fitness distributions and position-specific mutational spectra. They also allow us to efficiently simulate sequence libraries for a vast array of combinations of experimental parameters like sequence divergence, selection strength and library size. We showcase the potential of the approach in re-analyzing two recent experiments to determine protein structure from signals of epistasis emerging in experimental sequence libraries. To be detectable, these signals require sufficiently large and sufficiently diverged libraries. Our modeling framework offers a quantitative explanation for the variable success of recently published experiments. Furthermore, we can forecast the outcome of time- and resource-intensive evolution experiments, opening thereby a way to computationally optimize experimental protocols.
1309.2588
Kyung Hyuk Kim
Kyung Hyuk Kim, Hong Qian, Herbert M. Sauro
Nonlinear Biochemical Signal Processing via Noise Propagation
23 pages, 5 figures, Accepted by Journal of Chemical Physics
null
10.1063/1.4822103
null
q-bio.QM q-bio.MN q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell studies often show significant phenotypic variability due to the stochastic nature of intra-cellular biochemical reactions. When the numbers of molecules, e.g., transcription factors and regulatory enzymes, are in low abundance, fluctuations in biochemical activities become significant, and such "noise" can propagate through regulatory cascades of biochemical reaction networks. Here we develop an intuitive, yet fully quantitative, method for analyzing how noise affects cellular phenotypes based on identifying a system's nonlinearities and noise propagations. We observe that such noise can simultaneously enhance sensitivities in one behavioral region while reducing sensitivities in another. Employing this novel phenomenon, we designed three biochemical signal processing modules: (a) a gene regulatory network that acts as a concentration detector with both enhanced amplitude and sensitivity; (b) a non-cooperative positive feedback system, with a graded dose-response in the deterministic case, that serves as a bistable switch due to noise-induced bimodality; (c) a noise-induced linear amplifier for gene regulation that requires no feedback. The methods developed in the present work allow one to understand and engineer nonlinear biochemical signal processors based on fluctuation-induced phenotypes.
[ { "created": "Tue, 10 Sep 2013 17:46:40 GMT", "version": "v1" } ]
2015-06-17
[ [ "Kim", "Kyung Hyuk", "" ], [ "Qian", "Hong", "" ], [ "Sauro", "Herbert M.", "" ] ]
Single-cell studies often show significant phenotypic variability due to the stochastic nature of intra-cellular biochemical reactions. When the numbers of molecules, e.g., transcription factors and regulatory enzymes, are in low abundance, fluctuations in biochemical activities become significant, and such "noise" can propagate through regulatory cascades of biochemical reaction networks. Here we develop an intuitive, yet fully quantitative, method for analyzing how noise affects cellular phenotypes based on identifying a system's nonlinearities and noise propagations. We observe that such noise can simultaneously enhance sensitivities in one behavioral region while reducing sensitivities in another. Employing this novel phenomenon, we designed three biochemical signal processing modules: (a) a gene regulatory network that acts as a concentration detector with both enhanced amplitude and sensitivity; (b) a non-cooperative positive feedback system, with a graded dose-response in the deterministic case, that serves as a bistable switch due to noise-induced bimodality; (c) a noise-induced linear amplifier for gene regulation that requires no feedback. The methods developed in the present work allow one to understand and engineer nonlinear biochemical signal processors based on fluctuation-induced phenotypes.
1608.08166
Thiparat Chotibut
Thiparat Chotibut, David R. Nelson
Population Genetics with Fluctuating Population Sizes
Submitted to Journal of Statistical Physics, Special Issue: Dedicated to the Memory of Leo Kadanoff. arXiv admin note: text overlap with arXiv:1412.6688
null
10.1007/s10955-017-1741-y
null
q-bio.PE cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Standard neutral population genetics theory with a strictly fixed population size has important limitations. An alternative model that allows independently fluctuating population sizes and reproduces standard neutral evolution is reviewed. We then study a situation in which the competing species are neutral at the equilibrium population size but population size fluctuations nevertheless favor fixation of one species over the other. In this case, a separation of timescales emerges naturally and allows adiabatic elimination of a fast population size variable to deduce the fluctuation-induced selection dynamics near the equilibrium population size. The results highlight the incompleteness of standard population genetics with a strictly fixed population size.
[ { "created": "Mon, 29 Aug 2016 18:19:55 GMT", "version": "v1" } ]
2017-03-08
[ [ "Chotibut", "Thiparat", "" ], [ "Nelson", "David R.", "" ] ]
Standard neutral population genetics theory with a strictly fixed population size has important limitations. An alternative model that allows independently fluctuating population sizes and reproduces standard neutral evolution is reviewed. We then study a situation in which the competing species are neutral at the equilibrium population size but population size fluctuations nevertheless favor fixation of one species over the other. In this case, a separation of timescales emerges naturally and allows adiabatic elimination of a fast population size variable to deduce the fluctuation-induced selection dynamics near the equilibrium population size. The results highlight the incompleteness of standard population genetics with a strictly fixed population size.
1801.04008
Eslam Abbas
Eslam Abbas
Comorbid CAD and Ventricular Hypertrophy Compromise The Perfusion of Myocardial Tissue at Subcritical Stenosis of Epicardial Coronaries
10 pages and 2 figures
null
10.1186/s43044-019-0003-5
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BACKGROUND: Most studies of CAD revascularization have been based on and reported according to angiographic criteria, which do not consider the relation between the resulting effective flow distal to the stenosis and the demand of a hypertrophied myocardial tissue. MODEL: A mathematical model of myocardial perfusion in comorbid CAD and ventricular hypertrophy using Poiseuille's law. The analysis shows that the curve, which represents the relation between perfusion and the severity of CAD according to angiographic and/or angiophysiologic criteria, is shifted to the right by the effect of myocardial tissue hypertrophy. The right shift of said curve, which is directly proportional to the degree of ventricular hypertrophy, indicates that the perfusion of the corresponding myocardial tissue is compromised at angiographically and/or angiophysiologically subsignificant stenosis of the supplying epicardial vessel. RESULTS: Patients with comorbid CAD and left ventricular hypertrophy are more sensitive to CAD-related hemodynamic changes. They are more prone to develop ischemic complications than their peers with isolated CAD regarding the same degree of coronary stenosis. CONCLUSION: Patients with comorbid CAD and ventricular hypertrophy suffer from myocardial hypoperfusion at angiographically and/or angiophysiologically subcritical epicardial stenosis. Accordingly, the comorbidity of both diseases should be considered when designing the treatment regimen.
[ { "created": "Thu, 11 Jan 2018 22:46:55 GMT", "version": "v1" } ]
2019-11-01
[ [ "Abbas", "Eslam", "" ] ]
BACKGROUND: Most studies of CAD revascularization have been based on and reported according to angiographic criteria, which do not consider the relation between the resulting effective flow distal to the stenosis and the demand of a hypertrophied myocardial tissue. MODEL: A mathematical model of myocardial perfusion in comorbid CAD and ventricular hypertrophy using Poiseuille's law. The analysis shows that the curve, which represents the relation between perfusion and the severity of CAD according to angiographic and/or angiophysiologic criteria, is shifted to the right by the effect of myocardial tissue hypertrophy. The right shift of said curve, which is directly proportional to the degree of ventricular hypertrophy, indicates that the perfusion of the corresponding myocardial tissue is compromised at angiographically and/or angiophysiologically subsignificant stenosis of the supplying epicardial vessel. RESULTS: Patients with comorbid CAD and left ventricular hypertrophy are more sensitive to CAD-related hemodynamic changes. They are more prone to develop ischemic complications than their peers with isolated CAD regarding the same degree of coronary stenosis. CONCLUSION: Patients with comorbid CAD and ventricular hypertrophy suffer from myocardial hypoperfusion at angiographically and/or angiophysiologically subcritical epicardial stenosis. Accordingly, the comorbidity of both diseases should be considered when designing the treatment regimen.
2101.00819
Babak Nouri-Moghaddam
Babak Nouri-Moghaddam, Mehdi Ghazanfari, Mohammad Fathian
A Novel Bio-Inspired Hybrid Multi-Filter Wrapper Gene Selection Method with Ensemble Classifier for Microarray Data
22 pages, 10 figures
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with challenges such as small sample size, a significant number of genes, imbalanced data, etc., that make classification models inefficient. Thus, a new hybrid solution based on a multi-filter and an adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct an ensemble classifier. In the proposed solution, to reduce the dataset's dimensions, a multi-filter model uses a combination of five filter methods to remove redundant and irrelevant genes. Then, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. AC-MOFOA, as a wrapper method, aims to simultaneously reduce dataset dimensions, optimize KELM, and increase classification accuracy. Next, in this method, an ensemble classifier model is presented using the AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared, in terms of the number of selected genes, classification efficiency, execution time, time complexity, and the hypervolume indicator criterion, with five hybrid multi-objective methods. According to the results, the proposed hybrid method could increase the accuracy of the KELM in most datasets by reducing the dataset's dimensions, and achieved similar or superior performance compared to other multi-objective methods. Furthermore, the proposed ensemble classifier model could provide better classification accuracy and generalizability on microarray data compared to conventional ensemble methods.
[ { "created": "Mon, 4 Jan 2021 07:57:35 GMT", "version": "v1" } ]
2021-01-05
[ [ "Nouri-Moghaddam", "Babak", "" ], [ "Ghazanfari", "Mehdi", "" ], [ "Fathian", "Mohammad", "" ] ]
Microarray technology is known as one of the most important tools for collecting DNA expression data. This technology allows researchers to investigate and examine types of diseases and their origins. However, microarray data are often associated with challenges such as small sample size, a significant number of genes, imbalanced data, etc., that make classification models inefficient. Thus, a new hybrid solution based on a multi-filter and an adaptive chaotic multi-objective forest optimization algorithm (AC-MOFOA) is presented to solve the gene selection problem and construct an ensemble classifier. In the proposed solution, to reduce the dataset's dimensions, a multi-filter model uses a combination of five filter methods to remove redundant and irrelevant genes. Then, an AC-MOFOA based on the concepts of non-dominated sorting, crowding distance, chaos theory, and adaptive operators is presented. AC-MOFOA, as a wrapper method, aims to simultaneously reduce dataset dimensions, optimize KELM, and increase classification accuracy. Next, in this method, an ensemble classifier model is presented using the AC-MOFOA results to classify microarray data. The performance of the proposed algorithm was evaluated on nine public microarray datasets, and its results were compared, in terms of the number of selected genes, classification efficiency, execution time, time complexity, and the hypervolume indicator criterion, with five hybrid multi-objective methods. According to the results, the proposed hybrid method could increase the accuracy of the KELM in most datasets by reducing the dataset's dimensions, and achieved similar or superior performance compared to other multi-objective methods. Furthermore, the proposed ensemble classifier model could provide better classification accuracy and generalizability on microarray data compared to conventional ensemble methods.
2206.06951
Alfonso Nieto-Castanon
Alfonso Nieto-Castanon
Brain-wide connectome inferences using functional connectivity MultiVariate Pattern Analyses (fc-MVPA)
null
null
10.1371/journal.pcbi.1010634
null
q-bio.QM q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Current functional Magnetic Resonance Imaging technology is able to resolve billions of individual functional connections characterizing the human connectome. Classical statistical inferential procedures attempting to make valid inferences across this many measures from a reduced set of observations and from a limited number of subjects can be severely underpowered for any but the largest effect sizes. This manuscript discusses fc-MVPA (functional connectivity Multivariate Pattern Analysis), a novel application of multivariate pattern analysis techniques in the context of brain-wide connectome inferences. The theory behind fc-MVPA is presented, and several of its key concepts are illustrated through examples from a publicly available resting state dataset, including an example analysis evaluating gender differences across the entire functional connectome. Lastly, Monte Carlo simulations are used to demonstrate this method's validity and sensitivity. In addition to offering powerful whole-brain inferences, fc-MVPA also provides a meaningful characterization of the heterogeneity in functional connectivity across subjects.
[ { "created": "Tue, 14 Jun 2022 16:14:11 GMT", "version": "v1" } ]
2023-01-11
[ [ "Nieto-Castanon", "Alfonso", "" ] ]
Current functional Magnetic Resonance Imaging technology is able to resolve billions of individual functional connections characterizing the human connectome. Classical statistical inferential procedures attempting to make valid inferences across this many measures from a reduced set of observations and from a limited number of subjects can be severely underpowered for any but the largest effect sizes. This manuscript discusses fc-MVPA (functional connectivity Multivariate Pattern Analysis), a novel application of multivariate pattern analysis techniques in the context of brain-wide connectome inferences. The theory behind fc-MVPA is presented, and several of its key concepts are illustrated through examples from a publicly available resting state dataset, including an example analysis evaluating gender differences across the entire functional connectome. Lastly, Monte Carlo simulations are used to demonstrate this method's validity and sensitivity. In addition to offering powerful whole-brain inferences, fc-MVPA also provides a meaningful characterization of the heterogeneity in functional connectivity across subjects.
1406.3016
Chandrajit Basu
Devesh Singh, Chandrajit Basu, Merve Meinhardt-Wollweber, and Bernhard Roth
LEDs for Energy Efficient Greenhouse Lighting
22 pages, 7 figures
null
null
DS_CB_June2014
q-bio.OT physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Light energy is an important factor for plant growth. In regions where the natural light source, i.e. solar radiation, is not sufficient for growth optimization, additional light sources are being used. Traditional light sources such as high pressure sodium lamps and other metal halide lamps are not very efficient and generate high radiant heat. Therefore, new sustainable solutions should be developed for energy efficient greenhouse lighting. Recent developments in the field of light source technologies have opened up new perspectives for sustainable and highly efficient light sources in the form of light-emitting diodes, i.e. LEDs, for greenhouse lighting. This review focuses on the potential of LEDs to replace traditional light sources in the greenhouse. In a comparative economic analysis of traditional vs. LED lighting, we show that the introduction of LEDs allows the production cost of vegetables to be reduced over the long run of several years, due to the high energy efficiency, low maintenance cost, and longevity of LEDs. In order to evaluate LEDs as a true alternative to current lighting sources, species-specific plant responses to different wavelengths are discussed in a comparative study. However, more detailed scientific studies are necessary to understand the effect of different LED spectra on plant physiology. Technical innovations are required to design and realize an energy efficient light source with a spectrum tailored for optimal plant growth in specific plant species.
[ { "created": "Wed, 11 Jun 2014 09:33:13 GMT", "version": "v1" } ]
2014-06-13
[ [ "Singh", "Devesh", "" ], [ "Basu", "Chandrajit", "" ], [ "Meinhardt-Wollweber", "Merve", "" ], [ "Roth", "Bernhard", "" ] ]
Light energy is an important factor for plant growth. In regions where the natural light source, i.e. solar radiation, is not sufficient for growth optimization, additional light sources are being used. Traditional light sources such as high pressure sodium lamps and other metal halide lamps are not very efficient and generate high radiant heat. Therefore, new sustainable solutions should be developed for energy efficient greenhouse lighting. Recent developments in the field of light source technologies have opened up new perspectives for sustainable and highly efficient light sources in the form of light-emitting diodes, i.e. LEDs, for greenhouse lighting. This review focuses on the potential of LEDs to replace traditional light sources in the greenhouse. In a comparative economic analysis of traditional vs. LED lighting, we show that the introduction of LEDs allows the production cost of vegetables to be reduced over the long run of several years, due to the high energy efficiency, low maintenance cost, and longevity of LEDs. In order to evaluate LEDs as a true alternative to current lighting sources, species-specific plant responses to different wavelengths are discussed in a comparative study. However, more detailed scientific studies are necessary to understand the effect of different LED spectra on plant physiology. Technical innovations are required to design and realize an energy efficient light source with a spectrum tailored for optimal plant growth in specific plant species.