Dataset schema (column name, type, observed size range):

  id              string   length 9 to 13
  submitter       string   length 4 to 48
  authors         string   length 4 to 9.62k
  title           string   length 4 to 343
  comments        string   length 2 to 480
  journal-ref     string   length 9 to 309
  doi             string   length 12 to 138
  report-no       string   277 distinct values
  categories      string   length 8 to 87
  license         string   9 distinct values
  orig_abstract   string   length 27 to 3.76k
  versions        list     length 1 to 15
  update_date     string   length 10 to 10
  authors_parsed  list     length 1 to 147
  abstract        string   length 24 to 3.75k

Sample records follow, one field per line in schema order; null marks an empty field.
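Given the schema above, here is a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face `datasets` library. The dataset id is a placeholder (an assumption, not the real name); the field accesses simply follow the column list and the record structure visible below.

```python
# Minimal sketch: loading and inspecting a dataset with this schema.
# "user/arxiv-qbio-metadata" is a hypothetical id, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-qbio-metadata", split="train")  # hypothetical id

print(ds.column_names)            # id, submitter, authors, title, ...
record = ds[0]
print(record["id"], "-", record["title"])

# 'versions' is a list of {"created", "version"} dicts; the first entry
# carries the initial submission date.
print(record["versions"][0]["created"])

# 'authors_parsed' entries are [last, first, ...] with an optional trailing
# affiliation string.
for last, first, *rest in record["authors_parsed"]:
    affil = rest[-1] if rest and rest[-1] else ""
    print(f"{first} {last}" + (f" ({affil})" if affil else ""))
```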
1210.7495
Eduardo Mizraji
Eduardo Mizraji
Illustrating a neural model of logic computations: The case of Sherlock Holmes' old maxim
Corrected version with new references
THEORIA 31/1 (2016): 7-25
10.1387/theoria.13959
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural languages can express some logical propositions that humans are able to understand. We illustrate this fact with a famous text that Conan Doyle attributed to Holmes: 'It is an old maxim of mine that when you have excluded the impossible, whatever remains, however improbable, must be the truth'. This is a subtle logical statement usually felt as an evident truth. The problem we are trying to solve is the cognitive reason for such a feeling. We postulate here that we accept Holmes' maxim as true because our adult brains are equipped with neural modules that naturally perform modal logical computations.
[ { "created": "Sun, 28 Oct 2012 19:37:33 GMT", "version": "v1" }, { "created": "Fri, 2 Nov 2012 12:00:10 GMT", "version": "v2" }, { "created": "Sat, 27 Feb 2016 14:45:52 GMT", "version": "v3" } ]
2016-03-01
[ [ "Mizraji", "Eduardo", "" ] ]
Natural languages can express some logical propositions that humans are able to understand. We illustrate this fact with a famous text that Conan Doyle attributed to Holmes: 'It is an old maxim of mine that when you have excluded the impossible, whatever remains, however improbable, must be the truth'. This is a subtle logical statement usually felt as an evident truth. The problem we are trying to solve is the cognitive reason for such a feeling. We postulate here that we accept Holmes' maxim as true because our adult brains are equipped with neural modules that naturally perform modal logical computations.
1411.3917
Andrew Magyar
Andrew Magyar, John Collins
Two-population model for MTL neurons: The vast majority are almost silent
null
Phys. Rev. E 92, 012712 (2015)
10.1103/PhysRevE.92.012712
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recordings in the human medial temporal lobe have found many neurons that respond to pictures (and related stimuli) of just one particular person out of those presented. It has been proposed that these are concept cells, responding to just a single concept. However, a direct experimental test of the concept cell idea appears impossible, because it would need the measurement of the response of each cell to enormous numbers of other stimuli. Here we propose a new statistical method for analysis of the data that gives a more powerful way to analyze how close data are to the concept-cell idea. It exploits the large number of sampled neurons to give sensitivity to situations where the average response sparsity is much less than one response for the number of presented stimuli. We show that a conventional model, where a single sparsity is postulated for all neurons, gives an extremely poor fit to the data. In contrast, a model with two dramatically different populations gives an excellent fit to data from the hippocampus and entorhinal cortex. In the hippocampus, one population comprises 7% of the cells, with a 2.6% sparsity, while a much larger fraction, 93%, responds to only 0.1% of the stimuli. This results in an extreme bias in the reported responsiveness of neurons compared with that of a typical neuron. Finally, we show how to allow for the fact that some of the reported identified units correspond to multiple neurons, and find that our conclusions at the neural level are quantitatively changed but strengthened, with an even stronger difference between the two populations.
[ { "created": "Fri, 14 Nov 2014 14:21:39 GMT", "version": "v1" } ]
2015-07-22
[ [ "Magyar", "Andrew", "" ], [ "Collins", "John", "" ] ]
Recordings in the human medial temporal lobe have found many neurons that respond to pictures (and related stimuli) of just one particular person out of those presented. It has been proposed that these are concept cells, responding to just a single concept. However, a direct experimental test of the concept cell idea appears impossible, because it would need the measurement of the response of each cell to enormous numbers of other stimuli. Here we propose a new statistical method for analysis of the data that gives a more powerful way to analyze how close data are to the concept-cell idea. It exploits the large number of sampled neurons to give sensitivity to situations where the average response sparsity is much less than one response for the number of presented stimuli. We show that a conventional model, where a single sparsity is postulated for all neurons, gives an extremely poor fit to the data. In contrast, a model with two dramatically different populations gives an excellent fit to data from the hippocampus and entorhinal cortex. In the hippocampus, one population comprises 7% of the cells, with a 2.6% sparsity, while a much larger fraction, 93%, responds to only 0.1% of the stimuli. This results in an extreme bias in the reported responsiveness of neurons compared with that of a typical neuron. Finally, we show how to allow for the fact that some of the reported identified units correspond to multiple neurons, and find that our conclusions at the neural level are quantitatively changed but strengthened, with an even stronger difference between the two populations.
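To make the two-population idea above concrete, here is a minimal sketch that fits a mixture of two binomial response populations to synthetic per-neuron response counts. The generating values echo the hippocampal numbers quoted in the abstract (7%, 2.6%, 0.1%), but the stimulus count, sample size, and fitting choices are illustrative assumptions, not the paper's actual analysis.

```python
# Two-population sparsity sketch: each neuron responds to k of S presented
# stimuli, with k drawn from a mixture of two binomials with very different
# sparsities.  All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(0)
S, n = 100, 2000                          # stimuli per neuron, neurons sampled
true_w, true_a1, true_a2 = 0.07, 0.026, 0.001
pop = rng.random(n) < true_w              # membership in the sparse-but-responsive population
k = np.where(pop, rng.binomial(S, true_a1, n), rng.binomial(S, true_a2, n))

def neg_log_lik(theta):
    w, a1, a2 = theta
    mix = w * binom.pmf(k, S, a1) + (1 - w) * binom.pmf(k, S, a2)
    return -np.sum(np.log(mix + 1e-300))

res = minimize(neg_log_lik, x0=[0.5, 0.05, 0.005],
               bounds=[(1e-4, 1 - 1e-4), (1e-6, 0.5), (1e-6, 0.5)])
w, a1, a2 = res.x
print(f"fraction={w:.3f}, sparsity1={a1:.4f}, sparsity2={a2:.4f}")
```

A single-population model corresponds to fixing w = 1; comparing the two maximized likelihoods reproduces, in miniature, the model comparison described in the abstract.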
1809.04953
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Cluster Burst Synchronization in A Scale-Free Network of Inhibitory Bursting Neurons
arXiv admin note: text overlap with arXiv:1803.07256, arXiv:1708.04543
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a scale-free network of inhibitory Hindmarsh-Rose (HR) bursting neurons, and investigate coupling-induced cluster burst synchronization by varying the average coupling strength $J_0$. For sufficiently small $J_0$, non-cluster desynchronized states exist. However, when passing a critical point $J^*_c~(\simeq 0.16)$, the whole population is segregated into 3 clusters via a constructive role of synaptic inhibition that stimulates dynamical clustering between individual burstings, and thus 3-cluster desynchronized states appear. As $J_0$ is further increased and passes a lower threshold $J^*_l~(\simeq 0.78)$, a transition to 3-cluster burst synchronization occurs due to another constructive role of synaptic inhibition that favors population synchronization. In this case, HR neurons in each cluster exhibit burst synchronization. However, as $J_0$ passes an intermediate threshold $J^*_m~(\simeq 5.2)$, HR neurons begin to make intermittent hoppings between the 3 clusters. Due to the intermittent intercluster hoppings, the 3 clusters are integrated into a single one. In spite of the break-up of the 3 clusters, (non-cluster) burst synchronization persists in the whole population, which is well visualized in the raster plot of burst onset times, where bursting stripes (composed of burst onset times and indicating burst synchronization) appear successively. With further increase in $J_0$, intercluster hoppings are intensified, and the bursting stripes become more and more smeared due to a destructive role of synaptic inhibition that spoils the burst synchronization. Eventually, when passing a higher threshold $J^*_h~(\simeq 17.8)$, a transition to desynchronization occurs via complete overlap between the bursting stripes. Finally, we also investigate the effects of stochastic noise on both 3-cluster burst synchronization and intercluster hoppings.
[ { "created": "Wed, 12 Sep 2018 00:40:18 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2019 13:51:14 GMT", "version": "v2" } ]
2019-04-02
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We consider a scale-free network of inhibitory Hindmarsh-Rose (HR) bursting neurons, and investigate coupling-induced cluster burst synchronization by varying the average coupling strength $J_0$. For sufficiently small $J_0$, non-cluster desynchronized states exist. However, when passing a critical point $J^*_c~(\simeq 0.16)$, the whole population is segregated into 3 clusters via a constructive role of synaptic inhibition that stimulates dynamical clustering between individual burstings, and thus 3-cluster desynchronized states appear. As $J_0$ is further increased and passes a lower threshold $J^*_l~(\simeq 0.78)$, a transition to 3-cluster burst synchronization occurs due to another constructive role of synaptic inhibition that favors population synchronization. In this case, HR neurons in each cluster exhibit burst synchronization. However, as $J_0$ passes an intermediate threshold $J^*_m~(\simeq 5.2)$, HR neurons begin to make intermittent hoppings between the 3 clusters. Due to the intermittent intercluster hoppings, the 3 clusters are integrated into a single one. In spite of the break-up of the 3 clusters, (non-cluster) burst synchronization persists in the whole population, which is well visualized in the raster plot of burst onset times, where bursting stripes (composed of burst onset times and indicating burst synchronization) appear successively. With further increase in $J_0$, intercluster hoppings are intensified, and the bursting stripes become more and more smeared due to a destructive role of synaptic inhibition that spoils the burst synchronization. Eventually, when passing a higher threshold $J^*_h~(\simeq 17.8)$, a transition to desynchronization occurs via complete overlap between the bursting stripes. Finally, we also investigate the effects of stochastic noise on both 3-cluster burst synchronization and intercluster hoppings.
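For reference, here is a minimal sketch of a single Hindmarsh-Rose neuron, the unit model of the network in the abstract above, integrated with SciPy. The parameters are standard textbook values for a bursting regime; the scale-free topology, inhibitory synaptic coupling, and population analysis of the paper are not reproduced.

```python
# Single Hindmarsh-Rose (HR) bursting neuron, standard three-variable form.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_R, I = 0.005, 4.0, -1.6, 3.25     # slow timescale + applied current

def hr(t, u):
    x, y, z = u
    return [y - a * x**3 + b * x**2 - z + I,   # membrane potential
            c - d * x**2 - y,                  # fast recovery variable
            r * (s * (x - x_R) - z)]           # slow adaptation -> bursting

sol = solve_ivp(hr, (0, 2000), [-1.6, 0.0, 1.0], max_step=0.1)

# Spike onset times: upward crossings of x through a threshold, the raw
# material from which raster plots of burst onset times are built.
x = sol.y[0]
onsets = sol.t[1:][(x[:-1] < 0.0) & (x[1:] >= 0.0)]
print(f"{len(onsets)} spike onsets in {sol.t[-1]:.0f} time units")
```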
0808.3873
Gareth Hughes
Gareth Hughes
Notes on the UK Non-Native Organism Risk Assessment Scheme
New version updates the URL of the UK Non-Native Organism Risk Assessment Scheme
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In 2004, the UK Government's Department for Environment, Food and Rural Affairs commissioned research with the aim of developing a scheme for assessing the risks posed to species, habitats and ecosystems in the UK by non-native organisms. The outcome was the UK Non-Native Organism Risk Assessment Scheme. Unfortunately, the mathematical basis of the procedure for summarising risks deployed in the Risk Assessment Scheme, as outlined in Baker et al. (2008) and described in more detail in the Risk Assessment Scheme's User Manual, contains several analytical errors. These errors are outlined in the notes that follow.
[ { "created": "Thu, 28 Aug 2008 10:15:24 GMT", "version": "v1" }, { "created": "Fri, 5 Jun 2009 08:10:46 GMT", "version": "v2" } ]
2009-06-05
[ [ "Hughes", "Gareth", "" ] ]
In 2004, the UK Government's Department for Environment, Food and Rural Affairs commissioned research with the aim of developing a scheme for assessing the risks posed to species, habitats and ecosystems in the UK by non-native organisms. The outcome was the UK Non-Native Organism Risk Assessment Scheme. Unfortunately, the mathematical basis of the procedure for summarising risks deployed in the Risk Assessment Scheme, as outlined in Baker et al. (2008) and described in more detail in the Risk Assessment Scheme's User Manual, contains several analytical errors. These errors are outlined in the notes that follow.
1708.04020
Natalia Bielczyk Ms
Natalia Z. Bielczyk, Sebo Uithol, Tim van Mourik, Paul Anderson, Jeffrey C. Glennon, Jan K. Buitelaar
Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating some recommendations for future directions in this area.
[ { "created": "Mon, 14 Aug 2017 07:13:17 GMT", "version": "v1" }, { "created": "Mon, 21 Aug 2017 21:56:26 GMT", "version": "v2" }, { "created": "Wed, 30 May 2018 11:38:13 GMT", "version": "v3" }, { "created": "Thu, 30 May 2019 09:03:45 GMT", "version": "v4" } ]
2019-05-31
[ [ "Bielczyk", "Natalia Z.", "" ], [ "Uithol", "Sebo", "" ], [ "van Mourik", "Tim", "" ], [ "Anderson", "Paul", "" ], [ "Glennon", "Jeffrey C.", "" ], [ "Buitelaar", "Jan K.", "" ] ]
In the past two decades, functional Magnetic Resonance Imaging has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to approach this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight ways to infer causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We finish by formulating some recommendations for future directions in this area.
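As a concrete taste of one of the eight approaches listed above, here is a minimal pairwise Granger-causality sketch on synthetic time series. Applying it to real fMRI would additionally require care with lag order, hemodynamics, and multiple comparisons, none of which are handled in this illustration.

```python
# Pairwise Granger causality: does adding lags of x improve prediction of y?
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
T, p = 500, 2                                 # samples, model order
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):                         # y is driven by past x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def rss(target, predictors):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    res = target - predictors @ beta
    return float(res @ res)

Y = y[p:]
own = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])        # restricted
full = np.column_stack([own] + [x[p - k:-k] for k in range(1, p + 1)])  # + x lags
rss_r, rss_f = rss(Y, own), rss(Y, full)

n, k_f = len(Y), full.shape[1]
F = ((rss_r - rss_f) / p) / (rss_f / (n - k_f))   # F-test with p restrictions
p_val = f_dist.sf(F, p, n - k_f)
print(f"F = {F:.2f}, p = {p_val:.2g}  (does x Granger-cause y?)")
```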
1104.3889
Lior Pachter
Lior Pachter
Models for transcript quantification from RNA-Seq
null
null
null
null
q-bio.GN stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA-Seq is rapidly becoming the standard technology for transcriptome analysis. Fundamental to many of the applications of RNA-Seq is the quantification problem, which is the accurate measurement of relative transcript abundances from the sequenced reads. We focus on this problem, and review many recently published models that are used to estimate the relative abundances. In addition to describing the models and the different approaches to inference, we also explain how methods are related to each other. A key result is that we show how inference with many of the models results in identical estimates of relative abundances, even though model formulations can be very different. In fact, we are able to show how a single general model captures many of the elements of previously published methods. We also review the applications of RNA-Seq models to differential analysis, and explain why accurate relative transcript abundance estimates are crucial for downstream analyses.
[ { "created": "Tue, 19 Apr 2011 21:46:46 GMT", "version": "v1" }, { "created": "Fri, 13 May 2011 00:18:18 GMT", "version": "v2" } ]
2011-05-16
[ [ "Pachter", "Lior", "" ] ]
RNA-Seq is rapidly becoming the standard technology for transcriptome analysis. Fundamental to many of the applications of RNA-Seq is the quantification problem, which is the accurate measurement of relative transcript abundances from the sequenced reads. We focus on this problem, and review many recently published models that are used to estimate the relative abundances. In addition to describing the models and the different approaches to inference, we also explain how methods are related to each other. A key result is that we show how inference with many of the models results in identical estimates of relative abundances, even though model formulations can be very different. In fact, we are able to show how a single general model captures many of the elements of previously published methods. We also review the applications of RNA-Seq models to differential analysis, and explain why accurate relative transcript abundance estimates are crucial for downstream analyses.
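The core of many of the models this review covers is an EM iteration that assigns multi-mapping reads fractionally to compatible transcripts and re-estimates relative abundances; a minimal sketch follows, with a toy compatibility matrix and effective lengths that are purely illustrative.

```python
# Minimal EM for the basic RNA-Seq quantification model class.
import numpy as np

# comp[r, t] = 1 if read r aligns to transcript t (6 reads, 3 transcripts)
comp = np.array([[1, 0, 0],
                 [1, 1, 0],
                 [0, 1, 1],
                 [0, 1, 0],
                 [1, 0, 1],
                 [0, 0, 1]], dtype=float)
eff_len = np.array([1000.0, 1500.0, 800.0])   # effective transcript lengths

theta = np.full(3, 1 / 3)                     # P(a read comes from transcript t)
for _ in range(200):
    # E-step: fractional assignment of each read to its compatible transcripts
    w = comp * theta
    w /= w.sum(axis=1, keepdims=True)
    # M-step: expected counts -> new read-generating proportions
    theta = w.sum(axis=0) / w.sum()

rho = theta / eff_len
rho /= rho.sum()                              # relative transcript abundances
print("theta:", theta.round(3), " rho:", rho.round(3))
```

The length correction in the last step reflects the fact that longer transcripts generate proportionally more reads at equal molar abundance, which is the relationship shared across the models the review unifies.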
1602.06504
Gennadi Glinsky
Gennadi Glinsky
Malignant field signature analysis in biopsy samples at diagnosis identifies lethal disease in patients with localized Gleason 6 and 7 prostate cancer
16 pages, 6 figures, 4 tables
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Overtreatment of early-stage low-risk prostate cancer (PC) patients represents a significant problem in disease management and has socio-economic implications. Development of genetic and molecular markers of clinically significant disease in patients diagnosed with low-grade localized PC would have a major impact on disease management. A gene expression signature (GES) is reported for lethal PC in biopsy specimens obtained at the time of diagnosis from patients with Gleason 6 and Gleason 7 tumors in a Swedish watchful-waiting cohort with up to 30 years of follow-up. A 98-gene GES identified 89 and 100 percent of all death events 4 years after diagnosis in G7 and G6 patients, respectively; at 6 years of follow-up, 83 and 100 percent of all death events were captured. Remarkably, the 98-gene GES appears to perform successfully in patient stratification with as little as 2% of cancer cells in a specimen, strongly indicating that it captures a malignant field effect in prostates harboring cancer cells of different degrees of aggressiveness. In G6 and G7 tumors from PC patients of age 65 or younger, the GES identified 86 percent of all death events during the entire follow-up period. In G6 and G7 tumors from PC patients of age 70 or younger, the GES identified 90 percent of all death events 6 years after diagnosis. The classification performance of the 98-gene GES of lethal PC reported in this study appeared suitable to meet the design and feasibility requirements of a prospective 4- to 6-year clinical trial, which is essential for regulatory approval of diagnostic and prognostic tests in the clinical setting. A prospectively validated GES of lethal PC in biopsy specimens of G6 and G7 tumors will help physicians to identify, at the time of diagnosis, patients who should be considered for exclusion from active surveillance programs and who would most likely benefit from immediate curative interventions.
[ { "created": "Sun, 21 Feb 2016 06:32:08 GMT", "version": "v1" } ]
2016-02-24
[ [ "Glinsky", "Gennadi", "" ] ]
Overtreatment of early-stage low-risk prostate cancer (PC) patients represents a significant problem in disease management and has socio-economic implications. Development of genetic and molecular markers of clinically significant disease in patients diagnosed with low-grade localized PC would have a major impact on disease management. A gene expression signature (GES) is reported for lethal PC in biopsy specimens obtained at the time of diagnosis from patients with Gleason 6 and Gleason 7 tumors in a Swedish watchful-waiting cohort with up to 30 years of follow-up. A 98-gene GES identified 89 and 100 percent of all death events 4 years after diagnosis in G7 and G6 patients, respectively; at 6 years of follow-up, 83 and 100 percent of all death events were captured. Remarkably, the 98-gene GES appears to perform successfully in patient stratification with as little as 2% of cancer cells in a specimen, strongly indicating that it captures a malignant field effect in prostates harboring cancer cells of different degrees of aggressiveness. In G6 and G7 tumors from PC patients of age 65 or younger, the GES identified 86 percent of all death events during the entire follow-up period. In G6 and G7 tumors from PC patients of age 70 or younger, the GES identified 90 percent of all death events 6 years after diagnosis. The classification performance of the 98-gene GES of lethal PC reported in this study appeared suitable to meet the design and feasibility requirements of a prospective 4- to 6-year clinical trial, which is essential for regulatory approval of diagnostic and prognostic tests in the clinical setting. A prospectively validated GES of lethal PC in biopsy specimens of G6 and G7 tumors will help physicians to identify, at the time of diagnosis, patients who should be considered for exclusion from active surveillance programs and who would most likely benefit from immediate curative interventions.
1207.3454
Tidjani Negadi
Tidjani Negadi
The irregular (integer) tetrahedron as a warehouse of biological information
to be published in 2012; Symmetry: Culture and Science, 2012
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is devoted to a new classification of the twenty amino acids based on the Heronian (integer) tetrahedron.
[ { "created": "Sat, 14 Jul 2012 19:45:48 GMT", "version": "v1" } ]
2012-07-17
[ [ "Negadi", "Tidjani", "" ] ]
This paper is devoted to a new classification of the twenty amino acids based on the Heronian (integer) tetrahedron.
2401.00077
Erik Johnson
Erik C. Johnson, Thinh T. Nguyen, Benjamin K. Dichter, Frank Zappulla, Montgomery Kosma, Kabilar Gunalan, Yaroslav O. Halchenko, Shay Q. Neufeld, Michael Schirner, Petra Ritter, Maryann E. Martone, Brock Wester, Franco Pestilli, Dimitri Yatsenko
A Maturity Model for Operations in Neuroscience Research
10 pages, one figure
null
null
null
q-bio.NC cs.CY
http://creativecommons.org/licenses/by/4.0/
Scientists are adopting new approaches to scale up their activities and goals. Progress in neurotechnologies, artificial intelligence, automation, and tools for collaboration promises new bursts of discoveries. However, compared to other disciplines and industry, neuroscience laboratories have been slow to adopt key technologies to support collaboration, reproducibility, and automation. Drawing on progress in other fields, we define a roadmap for implementing automated research workflows for diverse research teams. We propose establishing a five-level capability maturity model for operations in neuroscience research. Achieving higher levels of operational maturity requires new technology-enabled methodologies, which we describe as "SciOps". The maturity model provides guidelines for evaluating and upgrading operations in multidisciplinary neuroscience teams.
[ { "created": "Fri, 29 Dec 2023 21:37:22 GMT", "version": "v1" } ]
2024-01-02
[ [ "Johnson", "Erik C.", "" ], [ "Nguyen", "Thinh T.", "" ], [ "Dichter", "Benjamin K.", "" ], [ "Zappulla", "Frank", "" ], [ "Kosma", "Montgomery", "" ], [ "Gunalan", "Kabilar", "" ], [ "Halchenko", "Yaroslav O.", "" ], [ "Neufeld", "Shay Q.", "" ], [ "Schirner", "Michael", "" ], [ "Ritter", "Petra", "" ], [ "Martone", "Maryann E.", "" ], [ "Wester", "Brock", "" ], [ "Pestilli", "Franco", "" ], [ "Yatsenko", "Dimitri", "" ] ]
Scientists are adopting new approaches to scale up their activities and goals. Progress in neurotechnologies, artificial intelligence, automation, and tools for collaboration promises new bursts of discoveries. However, compared to other disciplines and industry, neuroscience laboratories have been slow to adopt key technologies to support collaboration, reproducibility, and automation. Drawing on progress in other fields, we define a roadmap for implementing automated research workflows for diverse research teams. We propose establishing a five-level capability maturity model for operations in neuroscience research. Achieving higher levels of operational maturity requires new technology-enabled methodologies, which we describe as "SciOps". The maturity model provides guidelines for evaluating and upgrading operations in multidisciplinary neuroscience teams.
1910.03529
Emanuele Massaro Ph.D.
Emanuele Massaro and Daniel Kondor and Carlo Ratti
Assessing the interplay between human mobility and mosquito borne diseases in urban environments
null
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Urbanization exposes the epidemiology of infectious diseases to many threats and new challenges. In this research, we study the interplay between human mobility and dengue outbreaks in the complex urban environment of the city-state of Singapore. We integrate both stylized and mobile-phone-data-driven mobility patterns in an agent-based transmission model in which humans and mosquitoes are represented as agents that go through the epidemic states of dengue. We use numerical simulations to monitor the system-level response to the epidemic, comparing our results with the observed cases reported during the 2013 and 2014 outbreaks. Our results show that human mobility is a major factor in the spread of vector-borne diseases such as dengue, even on the short scale corresponding to intra-city distances. We finally discuss the advantages and limits of mobile phone data, and potential alternatives for assessing valuable mobility patterns for modeling vector-borne disease outbreaks in cities.
[ { "created": "Tue, 8 Oct 2019 16:36:43 GMT", "version": "v1" } ]
2019-10-09
[ [ "Massaro", "Emanuele", "" ], [ "Kondor", "Daniel", "" ], [ "Ratti", "Carlo", "" ] ]
Urbanization exposes the epidemiology of infectious diseases to many threats and new challenges. In this research, we study the interplay between human mobility and dengue outbreaks in the complex urban environment of the city-state of Singapore. We integrate both stylized and mobile-phone-data-driven mobility patterns in an agent-based transmission model in which humans and mosquitoes are represented as agents that go through the epidemic states of dengue. We use numerical simulations to monitor the system-level response to the epidemic, comparing our results with the observed cases reported during the 2013 and 2014 outbreaks. Our results show that human mobility is a major factor in the spread of vector-borne diseases such as dengue, even on the short scale corresponding to intra-city distances. We finally discuss the advantages and limits of mobile phone data, and potential alternatives for assessing valuable mobility patterns for modeling vector-borne disease outbreaks in cities.
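To illustrate how mobility couples epidemics across locations, as in the abstract above, here is a minimal two-patch SIR sketch in which a fraction of residents mixes in the other patch each day. The paper's agent-based host-mosquito model is far richer; all parameters here are illustrative assumptions.

```python
# Two-patch discrete-time SIR with a commuting/mixing matrix.
import numpy as np

beta, gamma, days = 0.3, 0.1, 200
N = np.array([1e5, 1e5])
S = N - np.array([10.0, 0.0])     # all infections start in patch 0
I = np.array([10.0, 0.0])
R = np.zeros(2)
mix = np.array([[0.9, 0.1],       # row p: where residents of p spend time
                [0.1, 0.9]])

for _ in range(days):
    pop_at = mix.T @ N            # people present at each location
    inf_at = mix.T @ I            # infecteds present at each location
    force = beta * (mix @ (inf_at / pop_at))   # infection force per resident
    new_inf = force * S
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print("final attack rates:", (R / N).round(3))
```

Setting the off-diagonal mixing to zero keeps the outbreak confined to patch 0, which is the qualitative effect of removing human mobility from the model.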
2208.02344
Vincent Zaballa
Vincent D. Zaballa and Elliot E. Hui
An Optimal Likelihood Free Method for Biological Model Selection
2022 International Conference on Machine Learning Workshop on Computational Biology
null
null
null
q-bio.QM stat.ML
http://creativecommons.org/licenses/by/4.0/
Systems biology seeks to create mathematical models of biological systems to reduce inherent biological complexity and provide predictions for applications such as therapeutic development. However, it remains a challenge to determine which mathematical model is correct and how to arrive at the answer optimally. We present an algorithm for automated biological model selection using mathematical models of systems biology and likelihood-free inference methods. Our algorithm shows improved performance in arriving at correct models without a priori information over conventional heuristics used in experimental biology and over random search. This method shows promise for accelerating biological basic science and drug discovery.
[ { "created": "Wed, 3 Aug 2022 21:05:20 GMT", "version": "v1" } ]
2022-08-05
[ [ "Zaballa", "Vincent D.", "" ], [ "Hui", "Elliot E.", "" ] ]
Systems biology seeks to create mathematical models of biological systems to reduce inherent biological complexity and provide predictions for applications such as therapeutic development. However, it remains a challenge to determine which mathematical model is correct and how to arrive at the answer optimally. We present an algorithm for automated biological model selection using mathematical models of systems biology and likelihood-free inference methods. Our algorithm shows improved performance in arriving at correct models without a priori information over conventional heuristics used in experimental biology and over random search. This method shows promise for accelerating biological basic science and drug discovery.
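Here is a minimal sketch of likelihood-free model selection in the rejection-ABC style, the general family the paper draws on; the two toy candidate models, summary statistics, and tolerance are illustrative and not the paper's algorithm.

```python
# Rejection-ABC model selection: simulate from each candidate model, accept
# runs whose summary statistics land near the observed data, and read off
# posterior model probabilities from the acceptance counts.
import numpy as np

rng = np.random.default_rng(2)
observed = rng.exponential(2.0, 50)           # "experimental" data (model 1)

def simulate(model, rng):
    if model == 0:
        return rng.normal(2.0, 1.0, 50)       # candidate 0: Gaussian
    return rng.exponential(2.0, 50)           # candidate 1: exponential

def summary(x):                               # summary statistics
    return np.array([x.mean(), x.std(), np.median(x)])

s_obs, eps = summary(observed), 0.5
counts = np.zeros(2)
for _ in range(20000):
    m = rng.integers(2)                       # uniform prior over models
    if np.linalg.norm(summary(simulate(m, rng)) - s_obs) < eps:
        counts[m] += 1

print("posterior model probabilities ~", counts / counts.sum())
```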
2312.07026
Sarah Brueningk
John Metzcar, Catherine R. Jutzeler, Paul Macklin, Alvaro K\"ohn-Luque, Sarah C. Br\"uningk
A review of mechanistic learning in mathematical oncology
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mechanistic learning, the synergistic combination of knowledge-driven and data-driven modeling, is an emerging field. In particular, the use of mechanistic learning is growing in mathematical oncology, the application of mathematical modeling to cancer biology and oncology. This review aims to capture the current state of the field and provide a perspective on how mechanistic learning may progress further in mathematical oncology. We highlight the synergistic potential of knowledge-driven mechanistic mathematical modeling and data-driven modeling, such as machine and deep learning. We point out similarities and differences regarding model complexity, data requirements, outputs generated, and interpretability of the algorithms and their results. Then, organizing combinations of knowledge- and data-driven modeling into four categories (sequential, parallel, intrinsic, and extrinsic mechanistic learning), we summarize a variety of approaches at the interface between purely data- and knowledge-driven models. Using examples predominantly from oncology, we discuss a range of techniques including physics-informed neural networks, surrogate model learning, and digital twins. We see that mechanistic learning, with its intentional leveraging of the strengths of both knowledge- and data-driven modeling, can greatly impact the complex problems of oncology. Given the increasing ubiquity and impact of machine learning, it is critical to incorporate it into the study of mathematical oncology, with mechanistic learning providing a path to that end. As the field of mechanistic learning advances, we aim for this review and proposed categorization framework to foster additional collaboration between the data- and knowledge-driven modeling fields. Further collaboration will help address difficult issues in oncology such as limited data availability, requirements of model transparency, and complex input data.
[ { "created": "Tue, 12 Dec 2023 07:24:43 GMT", "version": "v1" } ]
2023-12-13
[ [ "Metzcar", "John", "" ], [ "Jutzeler", "Catherine R.", "" ], [ "Macklin", "Paul", "" ], [ "Köhn-Luque", "Alvaro", "" ], [ "Brüningk", "Sarah C.", "" ] ]
Mechanistic learning, the synergistic combination of knowledge-driven and data-driven modeling, is an emerging field. In particular, the use of mechanistic learning is growing in mathematical oncology, the application of mathematical modeling to cancer biology and oncology. This review aims to capture the current state of the field and provide a perspective on how mechanistic learning may progress further in mathematical oncology. We highlight the synergistic potential of knowledge-driven mechanistic mathematical modeling and data-driven modeling, such as machine and deep learning. We point out similarities and differences regarding model complexity, data requirements, outputs generated, and interpretability of the algorithms and their results. Then, organizing combinations of knowledge- and data-driven modeling into four categories (sequential, parallel, intrinsic, and extrinsic mechanistic learning), we summarize a variety of approaches at the interface between purely data- and knowledge-driven models. Using examples predominantly from oncology, we discuss a range of techniques including physics-informed neural networks, surrogate model learning, and digital twins. We see that mechanistic learning, with its intentional leveraging of the strengths of both knowledge- and data-driven modeling, can greatly impact the complex problems of oncology. Given the increasing ubiquity and impact of machine learning, it is critical to incorporate it into the study of mathematical oncology, with mechanistic learning providing a path to that end. As the field of mechanistic learning advances, we aim for this review and proposed categorization framework to foster additional collaboration between the data- and knowledge-driven modeling fields. Further collaboration will help address difficult issues in oncology such as limited data availability, requirements of model transparency, and complex input data.
1503.01843
Brooks Emerick
Brooks Emerick, Abhyudai Singh
Host-feeding enhances stability of discrete-time host-parasitoid population dynamic models
18 pages, 4 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete-time models are the traditional approach for capturing population dynamics of a host-parasitoid system. Recent work has introduced a semi-discrete framework for obtaining model update functions that connect host and parasitoid population levels from year to year. In particular, this framework uses differential equations to describe the host-parasitoid interaction during the time of year when they come in contact, allowing specific behaviors to be mechanistically incorporated into the model. We use the semi-discrete approach to study the effects of host-feeding, which occurs when a parasitoid consumes a potential host larva without ovipositing. Our results show that host-feeding by itself cannot stabilize the system, and both the host and parasitoid populations exhibit diverging oscillations similar to the Nicholson-Bailey model. However, when combined with other stabilizing mechanisms such as density-dependent host mortality or a density-dependent parasitoid attack rate, host-feeding expands the region of parameter space that allows for a stable host-parasitoid equilibrium. Finally, our results show that host-feeding causes inefficiency in the parasitoid population, which yields a higher population of hosts per generation. This suggests that host-feeding may have limited long-term impact in terms of suppressing host levels for biological control applications.
[ { "created": "Fri, 6 Mar 2015 03:52:14 GMT", "version": "v1" } ]
2015-03-09
[ [ "Emerick", "Brooks", "" ], [ "Singh", "Abhyudai", "" ] ]
Discrete-time models are the traditional approach for capturing population dynamics of a host-parasitoid system. Recent work has introduced a semi-discrete framework for obtaining model update functions that connect host and parasitoid population levels from year to year. In particular, this framework uses differential equations to describe the host-parasitoid interaction during the time of year when they come in contact, allowing specific behaviors to be mechanistically incorporated into the model. We use the semi-discrete approach to study the effects of host-feeding, which occurs when a parasitoid consumes a potential host larva without ovipositing. Our results show that host-feeding by itself cannot stabilize the system, and both the host and parasitoid populations exhibit diverging oscillations similar to the Nicholson-Bailey model. However, when combined with other stabilizing mechanisms such as density-dependent host mortality or a density-dependent parasitoid attack rate, host-feeding expands the region of parameter space that allows for a stable host-parasitoid equilibrium. Finally, our results show that host-feeding causes inefficiency in the parasitoid population, which yields a higher population of hosts per generation. This suggests that host-feeding may have limited long-term impact in terms of suppressing host levels for biological control applications.
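For context, the classic Nicholson-Bailey map referenced above, whose diverging oscillations the host-feeding-only model is said to mirror, can be iterated in a few lines; the parameter values below are illustrative.

```python
# Nicholson-Bailey host-parasitoid map:
#   H_{t+1} = R * H_t * exp(-a * P_t)
#   P_{t+1} = c * H_t * (1 - exp(-a * P_t))
# Its unique positive equilibrium is unstable, producing growing oscillations.
import numpy as np

R, a, c = 2.0, 0.05, 1.0          # host growth, attack rate, conversion
H, P = 20.0, 5.0                  # initial host / parasitoid densities

for t in range(30):
    f = np.exp(-a * P)            # fraction of hosts escaping parasitism
    H, P = R * H * f, c * H * (1 - f)
    print(f"t={t + 1:2d}  H={H:12.2f}  P={P:12.2f}")
```

The semi-discrete framework in the paper replaces the fixed escape fraction `f` with the outcome of within-season differential equations, which is where mechanisms like host-feeding enter.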
2002.08802
Norichika Ogata
Tomoko Matsuda, Hikoyu Suzuki, Norichika Ogata
Phylogenetic analyses of the severe acute respiratory syndrome coronavirus 2 reflected the several routes of introduction to Taiwan, the United States, and Japan
9 pages, 4 figures and 4 tables
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The worldwide spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is disrupting the economy and heightening public anxiety. The public anxiety has increased the psychological burden on government and healthcare professionals, resulting in the suicide of a government worker in Japan. Frightened people are asking the government for border measures. However, are border measures possible for this virus? By analyzing 48 almost complete virus genome sequences, we found that the viruses that invaded Taiwan, the United States, and Japan were introduced independently. We identified thirteen parsimony-informative sites and three groups (CTC, TCC, and TCT). Viruses found outside China did not form a monophyletic clade, in contrast to a previous study. These results suggest the difficulty of implementing effective border measures against this virus.
[ { "created": "Thu, 20 Feb 2020 15:29:37 GMT", "version": "v1" }, { "created": "Fri, 28 Feb 2020 16:08:35 GMT", "version": "v2" } ]
2020-03-02
[ [ "Matsuda", "Tomoko", "" ], [ "Suzuki", "Hikoyu", "" ], [ "Ogata", "Norichika", "" ] ]
The worldwide spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is disrupting the economy and heightening public anxiety. The public anxiety has increased the psychological burden on government and healthcare professionals, resulting in the suicide of a government worker in Japan. Frightened people are asking the government for border measures. However, are border measures possible for this virus? By analyzing 48 almost complete virus genome sequences, we found that the viruses that invaded Taiwan, the United States, and Japan were introduced independently. We identified thirteen parsimony-informative sites and three groups (CTC, TCC, and TCT). Viruses found outside China did not form a monophyletic clade, in contrast to a previous study. These results suggest the difficulty of implementing effective border measures against this virus.
q-bio/0611021
Reidun Twarock Dr
N. Jonoska and R. Twarock
A Note on Genome Organisation in RNA Viruses with Icosahedral Symmetry
8 pages, 8 figures
null
null
null
q-bio.BM
null
The structural organisation of the viral genome within its protein container, called the viral capsid, is an important aspect of virus architecture. Many single-stranded (ss) RNA viruses organise a significant part of their genome in a dodecahedral cage as an RNA duplex structure that mirrors the symmetry of the capsid. Bruinsma and Rudnick have suggested a model for the structural organisation of the RNA in these cages. It is the purpose of this paper to further develop their approach based on results from the areas of graph theory and DNA network engineering. We start by suggesting a scenario for pariacoto virus, a representative of this class of viruses, that is energetically more favorable than those derived previously. We then show that it is a representative of a whole family of cage structures that abide by the same construction principle, and we derive the energetically optimal configuration for a second family of cage structures along similar lines. Finally, we give reasons for the conjecture that these two families are more likely to occur in nature than other scenarios.
[ { "created": "Mon, 6 Nov 2006 18:50:41 GMT", "version": "v1" } ]
2007-05-23
[ [ "Jonoska", "N.", "" ], [ "Twarock", "R.", "" ] ]
The structural organisation of the viral genome within its protein container, called the viral capsid, is an important aspect of virus architecture. Many single-stranded (ss) RNA viruses organise a significant part of their genome in a dodecahedral cage as an RNA duplex structure that mirrors the symmetry of the capsid. Bruinsma and Rudnick have suggested a model for the structural organisation of the RNA in these cages. It is the purpose of this paper to further develop their approach based on results from the areas of graph theory and DNA network engineering. We start by suggesting a scenario for pariacoto virus, a representative of this class of viruses, that is energetically more favorable than those derived previously. We then show that it is a representative of a whole family of cage structures that abide by the same construction principle, and we derive the energetically optimal configuration for a second family of cage structures along similar lines. Finally, we give reasons for the conjecture that these two families are more likely to occur in nature than other scenarios.
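As a flavor of the graph-theoretic toolkit invoked above (without reproducing the paper's actual construction), the dodecahedral cage geometry naturally raises path and cycle questions on the dodecahedral graph; the sketch below counts its Hamiltonian cycles by backtracking.

```python
# Backtracking count of Hamiltonian cycles on the dodecahedral graph
# (20 vertices, 30 edges, 3-regular).
import networkx as nx

G = nx.dodecahedral_graph()

def count_hamiltonian_cycles(G, start=0):
    n, count = G.number_of_nodes(), 0
    def extend(path, visited):
        nonlocal count
        if len(path) == n:
            if start in G[path[-1]]:  # closing edge back to the start
                count += 1
            return
        for nb in G[path[-1]]:
            if nb not in visited:
                visited.add(nb)
                extend(path + [nb], visited)
                visited.remove(nb)
    extend([start], {start})
    return count // 2                 # each cycle is found in both directions

print(count_hamiltonian_cycles(G), "Hamiltonian cycles")  # 30 for this graph
```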
0911.0406
Per Arne Rikvold
Per Arne Rikvold (Florida State University)
Degree Correlations in a Dynamically Generated Model Food Web
4 pages
In Proceedings of CSP09, edited by D.P. Landau, S.P. Lewis, and H.-B. Sch\"uttler, Physics Procedia 3, 1487-1492 (2010).
10.1016/j.phpro.2010.01.210
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore aspects of the community structures generated by a simple predator-prey model of biological coevolution, using large-scale kinetic Monte Carlo simulations. The model accounts for interspecies and intraspecies competition for resources, as well as adaptive foraging behavior. It produces a metastable low-diversity phase and a stable high-diversity phase. The structures and joint indegree-outdegree distributions of the food webs generated in the latter phase are discussed.
[ { "created": "Mon, 2 Nov 2009 23:24:45 GMT", "version": "v1" } ]
2010-02-18
[ [ "Rikvold", "Per Arne", "", "Florida State University" ] ]
We explore aspects of the community structures generated by a simple predator-prey model of biological coevolution, using large-scale kinetic Monte Carlo simulations. The model accounts for interspecies and intraspecies competition for resources, as well as adaptive foraging behavior. It produces a metastable low-diversity phase and a stable high-diversity phase. The structures and joint indegree-outdegree distributions of the food webs generated in the latter phase are discussed.
1405.3902
Benjamin Good
Benjamin H Good and Michael M Desai
Deleterious passengers in adapting populations
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most new mutations are deleterious and are eventually eliminated by natural selection. But in an adapting population, the rapid amplification of beneficial mutations can hinder the removal of deleterious variants in nearby regions of the genome, altering the patterns of sequence evolution. Here, we analyze the interactions between beneficial "driver" mutations and linked deleterious "passengers" during the course of adaptation. We derive analytical expressions for the substitution rate of a deleterious mutation as a function of its fitness cost, as well as the reduction in the beneficial substitution rate due to the genetic load of the passengers. We find that the fate of each deleterious mutation varies dramatically with the rate and spectrum of beneficial mutations, with a non-monotonic dependence on both the population size and the rate of adaptation. By quantifying this dependence, our results allow us to estimate which deleterious mutations will be likely to fix, and how many of these mutations must arise before the progress of adaptation is significantly reduced.
[ { "created": "Thu, 15 May 2014 16:29:05 GMT", "version": "v1" } ]
2014-05-16
[ [ "Good", "Benjamin H", "" ], [ "Desai", "Michael M", "" ] ]
Most new mutations are deleterious and are eventually eliminated by natural selection. But in an adapting population, the rapid amplification of beneficial mutations can hinder the removal of deleterious variants in nearby regions of the genome, altering the patterns of sequence evolution. Here, we analyze the interactions between beneficial "driver" mutations and linked deleterious "passengers" during the course of adaptation. We derive analytical expressions for the substitution rate of a deleterious mutation as a function of its fitness cost, as well as the reduction in the beneficial substitution rate due to the genetic load of the passengers. We find that the fate of each deleterious mutation varies dramatically with the rate and spectrum of beneficial mutations, with a non-monotonic dependence on both the population size and the rate of adaptation. By quantifying this dependence, our results allow us to estimate which deleterious mutations will be likely to fix, and how many of these mutations must arise before the progress of adaptation is significantly reduced.
2311.08546
Yanying Wu
Yanying Wu
A Category of Genes
13 pages, 6 figures, 1 table
null
null
null
q-bio.OT math.CT
http://creativecommons.org/licenses/by-nc-sa/4.0/
Understanding how genes interact and relate to each other is a fundamental question in biology. However, current practices for describing these relationships, such as drawing diagrams or graphs in a somewhat arbitrary manner, limit our ability to integrate various aspects of gene function and view the genome holistically. To overcome these limitations, we need a more appropriate way to describe the intricate relationships between genes. Interestingly, category theory, an abstract field of mathematics seemingly unrelated to biology, has emerged as a powerful language for describing relations in general. We propose that category theory could provide a framework for unifying our knowledge of genes and their relationships. As a starting point, we construct a category of genes, with its morphisms abstracting various aspects of the relationships between genes. These relationships include, but are not limited to, the order of genes on the chromosomes, the physical or genetic interactions, the signalling pathways, the gene ontology causal activity models (GO-CAM), and gene groups. Previously, they were encoded by miscellaneous networks or graphs, whereas our work unifies them in a consistent manner as a category. By doing so, we hope to view the relationships between genes systematically. In the long run, this paves a promising way for us to understand the fundamental principles that govern gene regulation and function.
[ { "created": "Tue, 14 Nov 2023 21:19:14 GMT", "version": "v1" } ]
2023-11-16
[ [ "Wu", "Yanying", "" ] ]
Understanding how genes interact and relate to each other is a fundamental question in biology. However, current practices for describing these relationships, such as drawing diagrams or graphs in a somewhat arbitrary manner, limit our ability to integrate various aspects of gene function and view the genome holistically. To overcome these limitations, we need a more appropriate way to describe the intricate relationships between genes. Interestingly, category theory, an abstract field of mathematics seemingly unrelated to biology, has emerged as a powerful language for describing relations in general. We propose that category theory could provide a framework for unifying our knowledge of genes and their relationships. As a starting point, we construct a category of genes, with its morphisms abstracting various aspects of the relationships between genes. These relationships include, but are not limited to, the order of genes on the chromosomes, the physical or genetic interactions, the signalling pathways, the gene ontology causal activity models (GO-CAM), and gene groups. Previously, they were encoded by miscellaneous networks or graphs, whereas our work unifies them in a consistent manner as a category. By doing so, we hope to view the relationships between genes systematically. In the long run, this paves a promising way for us to understand the fundamental principles that govern gene regulation and function.
2304.10736
Zitong Lu
Zitong Lu and Julie D. Golomb
Generate your neural signals from mine: individual-to-individual EEG converters
Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023)
null
null
null
q-bio.NC cs.CV cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most models in cognitive and computational neuroscience trained on one subject do not generalize to other subjects due to individual differences. An ideal individual-to-individual neural converter is expected to generate real neural signals of one subject from those of another, which can overcome the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models corresponding to the 72 subject pairs across 9 subjects. Our results demonstrate that EEG2EEG is able to effectively learn the mapping of neural representations in EEG signals from one subject to another and achieve high conversion performance. Additionally, the generated EEG signals contain clearer representations of visual information than can be obtained from real data. This method establishes a novel and state-of-the-art framework for neural conversion of EEG signals, which can realize a flexible and high-performance mapping from individual to individual and provides insight for both neural engineering and cognitive neuroscience.
[ { "created": "Fri, 21 Apr 2023 04:13:16 GMT", "version": "v1" } ]
2023-04-24
[ [ "Lu", "Zitong", "" ], [ "Golomb", "Julie D.", "" ] ]
Most models in cognitive and computational neuroscience trained on one subject do not generalize to other subjects due to individual differences. An ideal individual-to-individual neural converter is expected to generate real neural signals of one subject from those of another, which can overcome the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models corresponding to the 72 subject pairs across 9 subjects. Our results demonstrate that EEG2EEG is able to effectively learn the mapping of neural representations in EEG signals from one subject to another and achieve high conversion performance. Additionally, the generated EEG signals contain clearer representations of visual information than can be obtained from real data. This method establishes a novel and state-of-the-art framework for neural conversion of EEG signals, which can realize a flexible and high-performance mapping from individual to individual and provides insight for both neural engineering and cognitive neuroscience.
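The EEG2EEG architecture itself is not specified in this abstract; as a heavily simplified stand-in, a subject-to-subject converter can be sketched as a ridge-regression mapping between response matrices, fitted here on synthetic data.

```python
# Linear subject-to-subject converter baseline (NOT the paper's model):
# learn W mapping subject-A feature vectors to subject-B feature vectors.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_feat = 400, 16 * 20           # e.g. 16 channels x 20 time points
X = rng.standard_normal((n_trials, n_feat))                  # subject A
A_map = rng.standard_normal((n_feat, n_feat)) / np.sqrt(n_feat)
Y = X @ A_map + 0.1 * rng.standard_normal((n_trials, n_feat))  # subject B

# Ridge regression: W = (X^T X + lam I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Held-out conversion quality
X_te = rng.standard_normal((100, n_feat))
Y_te = X_te @ A_map + 0.1 * rng.standard_normal((100, n_feat))
r = np.corrcoef((X_te @ W).ravel(), Y_te.ravel())[0, 1]
print(f"held-out correlation: {r:.3f}")
```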
1802.07338
Chandre Dharma-wardana
M.W.C. Dharma-wardana
Fertilizer usage and cadmium in soils, crops and food
14 pages, two figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phosphate fertilizers were first implicated by Schroeder and Balassa in 1963 for increasing the Cd concentration in cultivated soils and crops. This suggestion has become a part of the accepted paradigm on soil toxicity. Consequently, stringent fertilizer control programs to monitor Cd have been launched. Attempts to link Cd toxicity and fertilizers to chronic diseases are common. A re-assessment of this "accepted" paradigm is timely, given the larger body of data available today. The data show that both the input and output of Cd per hectare from fertilizers are negligibly small compared to the total amount of Cd/hectare usually present in the soil itself. Calculations based on current agricultural practices are used to show that it will take about 18 centuries to double the ambient soil-cadmium level, and about 8 centuries to double the soil-fluoride level, even after neglecting leaching and other removal effects. Hence the concern of long-term agriculture should be the depletion of available phosphate fertilizers, rather than the contamination of the soil by trace metals or fluoride. This conclusion is confirmed by showing that the claimed correlations between fertilizer input and cadmium accumulation in crops are not robust. Alternative scenarios that explain the data are examined. Thus soil acidulation on fertilizer loading, and the effect of magnesium, zinc, and fluoride ions contained in fertilizers are considered using recent Cd$^{2+}$, Mg$^{2+}$ and F$^-$ ion-association theories. The protective role of ions like Zn, Se, Fe, etc., is emphasized, and the question of cadmium toxicity in the presence of other ions is considered. These help to clarify and rectify difficulties found in the standard point of view. This analysis does not modify the accepted views on Cd contamination by airborne delivery, smoking, and industrial activity, or P-contamination causing algal blooms.
[ { "created": "Wed, 21 Feb 2018 18:40:07 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2018 20:16:33 GMT", "version": "v2" }, { "created": "Tue, 5 Jun 2018 23:48:49 GMT", "version": "v3" } ]
2018-06-07
[ [ "Dharma-wardana", "M. W. C.", "" ] ]
Phosphate fertilizers were first implicated by Schroeder and Balassa in 1963 for increasing the Cd concentration in cultivated soils and crops. This suggestion has become a part of the accepted paradigm on soil toxicity. Consequently, stringent fertilizer control programs to monitor Cd have been launched. Attempts to link Cd toxicity and fertilizers to chronic diseases are common. A re-assessment of this "accepted" paradigm is timely, given the larger body of data available today. The data show that both the input and output of Cd per hectare from fertilizers are negligibly small compared to the total amount of Cd/hectare usually present in the soil itself. Calculations based on current agricultural practices are used to show that it will take about 18 centuries to double the ambient soil-cadmium level, and about 8 centuries to double the soil-fluoride level, even after neglecting leaching and other removal effects. Hence the concern of long-term agriculture should be the depletion of available phosphate fertilizers, rather than the contamination of the soil by trace metals or fluoride. This conclusion is confirmed by showing that the claimed correlations between fertilizer input and cadmium accumulation in crops are not robust. Alternative scenarios that explain the data are examined. Thus soil acidulation on fertilizer loading, and the effect of magnesium, zinc, and fluoride ions contained in fertilizers are considered using recent Cd$^{2+}$, Mg$^{2+}$ and F$^-$ ion-association theories. The protective role of ions like Zn, Se, Fe, etc., is emphasized, and the question of cadmium toxicity in the presence of other ions is considered. These help to clarify and rectify difficulties found in the standard point of view. This analysis does not modify the accepted views on Cd contamination by airborne delivery, smoking, and industrial activity, or P-contamination causing algal blooms.
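The century-scale doubling times quoted above follow from simple stock-versus-flux arithmetic; the sketch below uses assumed typical values (plough-layer depth and density, ambient soil Cd, fertilizer Cd input) chosen purely for illustration, not the paper's actual inputs.

```python
# Back-of-envelope: years for fertilizer Cd input to double the soil Cd stock.
# All input values below are assumed typical figures, for illustration only.
soil_depth_m = 0.20            # plough layer
soil_density = 1300            # kg of soil per m^3
cd_in_soil   = 0.3e-6          # kg Cd per kg soil (0.3 mg/kg ambient)
cd_input_yr  = 0.45e-3         # kg Cd per hectare per year from fertilizer

soil_mass_ha = soil_depth_m * 10_000 * soil_density   # kg soil per hectare
cd_stock_ha  = soil_mass_ha * cd_in_soil              # kg Cd per hectare

years_to_double = cd_stock_ha / cd_input_yr           # ignoring leaching
print(f"soil Cd stock: {cd_stock_ha:.2f} kg/ha")
print(f"doubling time: {years_to_double:.0f} years "
      f"(~{years_to_double / 100:.0f} centuries)")
```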
2203.04671
Tim Esser
Paul Fremdling, Tim K. Esser, Bodhisattwa Saha, Alexander Makarov, Kyle Fort, Maria Reinhardt-Szyba, Joseph Gault, and Stephan Rauschenbach
A preparative mass spectrometer to deposit intact large native protein complexes
null
null
10.1021/acsnano.2c04831
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Electrospray ion-beam deposition (ES-IBD) is a versatile tool to study the structure and reactivity of molecules from small metal clusters to large protein assemblies. It brings molecules gently into the gas phase, where they can be accurately manipulated and purified, followed by controlled deposition onto various substrates. In combination with imaging techniques, direct structural information on well-defined molecules can be obtained, which is essential to test and interpret results from indirect mass spectrometry techniques. To date, ion-beam deposition experiments are limited to a small number of custom instruments worldwide, and there are no commercial alternatives. Here we present a module that adds ion-beam deposition capabilities to a popular commercial MS platform (Thermo Scientific$^{\mathrm{TM}}$ Q Exactive$^{\mathrm{TM}}$ UHMR). This combination significantly reduces the overhead associated with custom instruments, while benefiting from established high performance and reliability. We present current performance characteristics including beam intensity, landing-energy control, and deposition spot size for a broad range of molecules. In combination with atomic force microscopy (AFM) and transmission electron microscopy (TEM), we distinguish near-native from unfolded proteins and show retention of the native shape of protein assemblies after dehydration and deposition. Further, we use an enzymatic assay to quantify the activity of a non-covalent protein complex after deposition on a dry surface. Together, these results indicate the great potential of ES-IBD for applications in structural biology, but also outline the challenges that need to be solved for it to reach its full potential.
[ { "created": "Wed, 9 Mar 2022 12:24:23 GMT", "version": "v1" }, { "created": "Thu, 10 Mar 2022 11:19:42 GMT", "version": "v2" }, { "created": "Mon, 21 Mar 2022 16:41:45 GMT", "version": "v3" } ]
2022-09-14
[ [ "Fremdling", "Paul", "" ], [ "Esser", "Tim K.", "" ], [ "Saha", "Bodhisattwa", "" ], [ "Makarov", "Alexander", "" ], [ "Fort", "Kyle", "" ], [ "Reinhardt-Szyba", "Maria", "" ], [ "Gault", "Joseph", "" ], [ "Rauschenbach", "Stephan", "" ] ]
Electrospray ion-beam deposition (ES-IBD) is a versatile tool to study structure and reactivity of molecules from small metal clusters to large protein assemblies. It brings molecules gently into the gas phase where they can be accurately manipulated and purified, followed by controlled deposition onto various substrates. In combination with imaging techniques, direct structural information of well-defined molecules can be obtained, which is essential to test and interpret results from indirect mass spectrometry techniques. To date, ion-beam deposition experiments are limited to a small number of custom instruments worldwide, and there are no commercial alternatives. Here we present a module that adds ion-beam deposition capabilities to a popular commercial MS platform (Thermo Scientific$^{\mathrm{TM}}$ Q Exactive$^{\mathrm{TM}}$ UHMR). This combination significantly reduces the overhead associated with custom instruments, while benefiting from established high performance and reliability. We present current performance characteristics including beam intensity, landing-energy control, and deposition spot size for a broad range of molecules. In combination with atomic force microscopy (AFM) and transmission electron microscopy (TEM), we distinguish near-native from unfolded proteins and show retention of native shape of protein assemblies after dehydration and deposition. Further, we use an enzymatic assay to quantify the activity of a non-covalent protein complex after deposition on a dry surface. Together, these results indicate the great potential of ES-IBD for applications in structural biology, but also outline the challenges that need to be solved for it to reach its full potential.
q-bio/0610009
Tao Hu
Tao Hu, Rui Zhang, B. I. Shklovskii
Electrostatic theory of viral self-assembly: a toy model
4 pages, 2 figures
Physica A 387, 3059 (2008)
10.1016/j.physa.2008.01.010
null
q-bio.BM cond-mat.soft
null
Viruses self-assemble from identical capsid proteins and their genome, consisting, for example, of a long single-stranded (ss) RNA. For a large class of T = 3 viruses, capsid proteins have long positive N-terminal tails. We explore the role played by the Coulomb interaction between the brush of positive N-terminal tails rooted at the inner surface of the capsid and the negative ss RNA molecule. We show that viruses are most stable when the total contour length of ss RNA is close to the total length of the tails. For such a structure the absolute value of the total RNA charge is approximately twice as large as the charge of the capsid. This conclusion agrees with structural data.
[ { "created": "Tue, 3 Oct 2006 23:17:54 GMT", "version": "v1" }, { "created": "Tue, 10 Oct 2006 16:00:08 GMT", "version": "v2" }, { "created": "Thu, 28 Dec 2006 19:16:44 GMT", "version": "v3" }, { "created": "Fri, 2 Feb 2007 17:14:53 GMT", "version": "v4" } ]
2015-06-26
[ [ "Hu", "Tao", "" ], [ "Zhang", "Rui", "" ], [ "Shklovskii", "B. I.", "" ] ]
Viruses self-assemble from identical capsid proteins and their genome, consisting, for example, of a long single-stranded (ss) RNA. For a large class of T = 3 viruses, capsid proteins have long positive N-terminal tails. We explore the role played by the Coulomb interaction between the brush of positive N-terminal tails rooted at the inner surface of the capsid and the negative ss RNA molecule. We show that viruses are most stable when the total contour length of ss RNA is close to the total length of the tails. For such a structure the absolute value of the total RNA charge is approximately twice as large as the charge of the capsid. This conclusion agrees with structural data.
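The factor-of-two charge ratio follows from simple bookkeeping once linear charge densities are fixed. The sketch below uses hypothetical values (tail length and charge spacings) and assumes the RNA carries about twice the linear charge density of the tails; the paper's actual result comes from an electrostatic model, not this arithmetic.

```python
# Toy charge bookkeeping for a T = 3 capsid; every value is an assumption.
n_tails = 180             # 180 protein subunits, one N-terminal tail each
tail_length_nm = 5.0      # contour length per tail (assumed)
tail_charge_per_nm = 1.0  # positive charges per nm of tail (assumed)
rna_charge_per_nm = 2.0   # ~1 negative charge per base at ~0.5 nm spacing (assumed)

# Most stable structure: RNA contour length matches the total tail length.
rna_length = n_tails * tail_length_nm
rna_charge = rna_length * rna_charge_per_nm
capsid_charge = n_tails * tail_length_nm * tail_charge_per_nm
print("RNA/capsid charge ratio:", rna_charge / capsid_charge)  # -> 2.0
```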
1511.00262
Konrad Kording
Konrad Paul Kording
The geometry of Tempotronlike problems
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the discrete Tempotron learning problem a neuron receives time-varying inputs; for one set of input sequences (the $\mathcal S_-$ set) the neuron must remain sub-threshold at all times, while for another set (the $\mathcal S_+$ set) the neuron must be supra-threshold for at least one time. Here we present a graphical treatment of a slight reformulation of the Tempotron problem. We show that the problem's general form is equivalent to the question of whether a polytope, specified by a set of inequalities, is contained in the union of a set of similarly defined polytopes. Using recent results from computational geometry, we show that the problem is W[1]-hard. This phrasing gives some new insights into the nature of gradient-based learning algorithms. A sampling-based approach can, under certain circumstances, provide an approximation in polynomial time. Other problems, related to hierarchical neural networks, may share some of this topological structure.
[ { "created": "Sun, 1 Nov 2015 15:49:46 GMT", "version": "v1" } ]
2015-11-03
[ [ "Kording", "Konrad Paul", "" ] ]
In the discrete Tempotron learning problem a neuron receives time-varying inputs; for one set of input sequences (the $\mathcal S_-$ set) the neuron must remain sub-threshold at all times, while for another set (the $\mathcal S_+$ set) the neuron must be supra-threshold for at least one time. Here we present a graphical treatment of a slight reformulation of the Tempotron problem. We show that the problem's general form is equivalent to the question of whether a polytope, specified by a set of inequalities, is contained in the union of a set of similarly defined polytopes. Using recent results from computational geometry, we show that the problem is W[1]-hard. This phrasing gives some new insights into the nature of gradient-based learning algorithms. A sampling-based approach can, under certain circumstances, provide an approximation in polynomial time. Other problems, related to hierarchical neural networks, may share some of this topological structure.
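The polytope-containment formulation can be probed numerically. The sketch below is a naive rejection-sampling check on a made-up 2-D instance: it cannot certify containment (consistent with the hardness result), but a single sampled point inside P and outside every polytope in the union certifies non-containment.

```python
import numpy as np

rng = np.random.default_rng(4)

def contains(A, b, w):
    """True if w satisfies A w <= b."""
    return np.all(A @ w <= b)

def sample_counterexample(A, b, union, box=2.0, n_samples=100_000, dim=2):
    """Search for a point in P = {w : A w <= b} missed by every polytope in `union`."""
    for _ in range(n_samples):
        w = rng.uniform(-box, box, dim)
        if contains(A, b, w) and not any(contains(C, d, w) for C, d in union):
            return w            # probabilistic certificate of non-containment
    return None                 # inconclusive: no counterexample found

# P is the square [-1, 1]^2; the union covers only its x <= 0.5 and y <= 0.5 slabs,
# so the corner region (0.5, 1] x (0.5, 1] is uncovered.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)
b = np.ones(4)
union = [(A, np.array([0.5, 1.0, 1.0, 1.0])),
         (A, np.array([1.0, 1.0, 0.5, 1.0]))]
print("counterexample:", sample_counterexample(A, b, union))
```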
2402.00484
Satori Tsuzuki Ph.D
Satori Tsuzuki
Extreme value statistics of nerve transmission delay
null
null
null
null
q-bio.NC math-ph math.MP stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nerve transmission delay is an important topic in neuroscience. Spike signals received at the dendrites of a neuron travel along the axon to the presynaptic terminal. There the spike triggers a chemical reaction at the synapse, wherein the presynaptic cell transfers neurotransmitters to the postsynaptic cell, which regenerates the electrical signal through ion-channel processes and transmits it to neighboring neurons. Describing this complex physiological reaction process as a stochastic process, this study aimed to show that the distribution of the maximum time interval between spike signals follows extreme value statistics. By allowing statistical variance in the time constant of the Leaky Integrate-and-Fire model, which is a deterministic time-evolution model of spike signals, we introduce randomness into the time intervals of spike signals. When the time constant follows an exponential distribution function, the time interval of the spike signal also follows an exponential distribution. In this case, our theory and simulations confirmed that the histogram of the maximum time interval follows the Gumbel distribution, which is one of the three types of extreme value statistics. We also confirmed that the histogram of the maximum time interval follows a Fr\'{e}chet distribution when the time interval of the spike signal follows a Pareto distribution. These findings confirm that nerve transmission delay can be described using extreme value statistics and could, therefore, be used as a new indicator of transmission delay.
[ { "created": "Thu, 1 Feb 2024 10:40:42 GMT", "version": "v1" }, { "created": "Tue, 21 May 2024 19:55:50 GMT", "version": "v2" } ]
2024-05-24
[ [ "Tsuzuki", "Satori", "" ] ]
Nerve transmission delay is an important topic in neuroscience. Spike signals received at the dendrites of a neuron travel along the axon to the presynaptic terminal. There the spike triggers a chemical reaction at the synapse, wherein the presynaptic cell transfers neurotransmitters to the postsynaptic cell, which regenerates the electrical signal through ion-channel processes and transmits it to neighboring neurons. Describing this complex physiological reaction process as a stochastic process, this study aimed to show that the distribution of the maximum time interval between spike signals follows extreme value statistics. By allowing statistical variance in the time constant of the Leaky Integrate-and-Fire model, which is a deterministic time-evolution model of spike signals, we introduce randomness into the time intervals of spike signals. When the time constant follows an exponential distribution function, the time interval of the spike signal also follows an exponential distribution. In this case, our theory and simulations confirmed that the histogram of the maximum time interval follows the Gumbel distribution, which is one of the three types of extreme value statistics. We also confirmed that the histogram of the maximum time interval follows a Fr\'{e}chet distribution when the time interval of the spike signal follows a Pareto distribution. These findings confirm that nerve transmission delay can be described using extreme value statistics and could, therefore, be used as a new indicator of transmission delay.
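The Gumbel claim is easy to check numerically for the exponential case. With hypothetical rate and block-size parameters, the sketch below draws blocks of exponential inter-spike intervals, applies the classical normalization for exponential maxima, and compares the first two moments with the standard Gumbel values (mean ~0.577, variance ~1.645).

```python
import numpy as np

rng = np.random.default_rng(5)
rate = 50.0          # spikes per second (assumed)
n_intervals = 1000   # intervals per observation block (assumed)
n_blocks = 5000

intervals = rng.exponential(1.0 / rate, (n_blocks, n_intervals))
block_max = intervals.max(axis=1)

# Classical normalization for maxima of exponentials:
# (max - ln(n)/rate) * rate is approximately standard Gumbel.
z = (block_max - np.log(n_intervals) / rate) * rate
print("mean (Gumbel: ~0.577):", z.mean())
print("variance (Gumbel: ~1.645):", z.var())
```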
1409.1946
Stephan Schiffels
Stephan Schiffels, Michael L\"assig, and Ville Mustonen
Rate and cost of adaptation in the Drosophila Genome
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have consistently inferred high rates of adaptive molecular evolution between Drosophila species. At the same time, the Drosophila genome evolves under different rates of recombination, which results in partial genetic linkage between alleles at neighboring genomic loci. Here we analyze how linkage correlations affect adaptive evolution. We develop a new inference method for adaptation that takes into account the effect on an allele at a focal site caused by neighboring deleterious alleles (background selection) and by neighboring adaptive substitutions (hitchhiking). Using complete genome sequence data and fine-scale recombination maps, we infer a highly heterogeneous scenario of adaptation in Drosophila. In high-recombining regions, about 50% of all amino acid substitutions are adaptive, together with about 20% of all substitutions in proximal intergenic regions. In low-recombining regions, only a small fraction of the amino acid substitutions are adaptive, while hitchhiking accounts for the majority of these changes. Hitchhiking of deleterious alleles generates a substantial collateral cost of adaptation, leading to a fitness decline of about 30/2N per gene and per million years in the lowest-recombining regions. Our results show how recombination shapes rate and efficacy of the adaptive dynamics in eukaryotic genomes.
[ { "created": "Fri, 5 Sep 2014 21:09:48 GMT", "version": "v1" } ]
2014-09-09
[ [ "Schiffels", "Stephan", "" ], [ "Lässig", "Michael", "" ], [ "Mustonen", "Ville", "" ] ]
Recent studies have consistently inferred high rates of adaptive molecular evolution between Drosophila species. At the same time, the Drosophila genome evolves under different rates of recombination, which results in partial genetic linkage between alleles at neighboring genomic loci. Here we analyze how linkage correlations affect adaptive evolution. We develop a new inference method for adaptation that takes into account the effect on an allele at a focal site caused by neighboring deleterious alleles (background selection) and by neighboring adaptive substitutions (hitchhiking). Using complete genome sequence data and fine-scale recombination maps, we infer a highly heterogeneous scenario of adaptation in Drosophila. In high-recombining regions, about 50% of all amino acid substitutions are adaptive, together with about 20% of all substitutions in proximal intergenic regions. In low-recombining regions, only a small fraction of the amino acid substitutions are adaptive, while hitchhiking accounts for the majority of these changes. Hitchhiking of deleterious alleles generates a substantial collateral cost of adaptation, leading to a fitness decline of about 30/2N per gene and per million years in the lowest-recombining regions. Our results show how recombination shapes rate and efficacy of the adaptive dynamics in eukaryotic genomes.
1804.07406
Soham De
Soham De, Dana S. Nau, Xinyue Pan, Michele J. Gelfand
Tipping Points for Norm Change in Human Cultures
SBP-BRiMS 2018
null
10.1007/978-3-319-93372-6_7
null
q-bio.PE cs.CY cs.GT cs.MA physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans interact with each other on a daily basis by developing and maintaining various social norms, and it is critical to form a deeper understanding of how such norms develop, how they change, and how fast they change. In this work, we develop an evolutionary game-theoretic model, based on research in cultural psychology showing that humans in various cultures differ in their tendencies to conform with those around them. Using this model, we analyze the evolutionary relationship between the tendency to conform and how quickly a population reacts when conditions make a change in norm desirable. Our analysis identifies conditions under which a tipping point is reached in a population, causing norms to change rapidly.
[ { "created": "Thu, 19 Apr 2018 23:43:28 GMT", "version": "v1" }, { "created": "Mon, 2 Jul 2018 01:05:42 GMT", "version": "v2" } ]
2018-07-03
[ [ "De", "Soham", "" ], [ "Nau", "Dana S.", "" ], [ "Pan", "Xinyue", "" ], [ "Gelfand", "Michele J.", "" ] ]
Humans interact with each other on a daily basis by developing and maintaining various social norms, and it is critical to form a deeper understanding of how such norms develop, how they change, and how fast they change. In this work, we develop an evolutionary game-theoretic model, based on research in cultural psychology showing that humans in various cultures differ in their tendencies to conform with those around them. Using this model, we analyze the evolutionary relationship between the tendency to conform and how quickly a population reacts when conditions make a change in norm desirable. Our analysis identifies conditions under which a tipping point is reached in a population, causing norms to change rapidly.
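A toy replicator-style dynamic with an explicit conformity term illustrates the kind of tipping behavior analyzed; this is a deliberately simplified stand-in with hypothetical parameters, not the authors' model. With weak conformity the population abandons the old norm once the new norm gains a payoff advantage; past a critical conformity strength the old norm stays locked in.

```python
# Toy conformity dynamics: x is the share following the old norm; the new norm
# carries an intrinsic payoff advantage s, and conformity of strength kappa
# rewards whichever behavior the majority currently follows. All parameters
# are illustrative assumptions.
def simulate(kappa, s=0.2, x0=0.95, steps=400, dt=0.1):
    x = x0
    for _ in range(steps):
        payoff_old = kappa * x                 # conformity payoff only
        payoff_new = s + kappa * (1.0 - x)     # intrinsic advantage plus conformity
        x += dt * x * (1.0 - x) * (payoff_old - payoff_new)
        x = min(max(x, 0.0), 1.0)
    return x

for kappa in (0.0, 0.1, 0.3, 0.5):
    print(f"kappa={kappa:.1f} -> old-norm share after dynamics: {simulate(kappa):.2f}")
```

In this toy, the switch from rapid norm change to lock-in happens between kappa = 0.1 and kappa = 0.3, the qualitative tipping-point behavior the abstract describes.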
1906.08881
Vijay Singh
Vijay Singh and Ilya Nemenman
Universal properties of concentration sensing in large ligand-receptor networks
5 pages, 3 figures, 2 supplementary figures
Phys. Rev. Lett. 124, 028101 (2020)
10.1103/PhysRevLett.124.028101
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cells estimate concentrations of chemical ligands in their environment using a limited set of receptors. Recent work has shown that the temporal sequence of binding and unbinding events on just a single receptor can be used to estimate the concentrations of multiple ligands. Here, for a network of many ligands and many receptors, we show that such temporal sequences can be used to estimate the concentration of a few times as many ligand species as there are receptors. Crucially, we show that the spectrum of the inverse covariance matrix of these estimates has several universal properties, which we trace to properties of Vandermonde matrices. We argue that this can be used by cells in realistic biochemical decoding networks.
[ { "created": "Thu, 20 Jun 2019 22:29:17 GMT", "version": "v1" } ]
2020-01-22
[ [ "Singh", "Vijay", "" ], [ "Nemenman", "Ilya", "" ] ]
Cells estimate concentrations of chemical ligands in their environment using a limited set of receptors. Recent work has shown that the temporal sequence of binding and unbinding events on just a single receptor can be used to estimate the concentrations of multiple ligands. Here, for a network of many ligands and many receptors, we show that such temporal sequences can be used to estimate the concentration of a few times as many ligand species as there are receptors. Crucially, we show that the spectrum of the inverse covariance matrix of these estimates has several universal properties, which we trace to properties of Vandermonde matrices. We argue that this can be used by cells in realistic biochemical decoding networks.
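The Vandermonde structure can be glimpsed numerically: Gram matrices built from Vandermonde factors have sharply graded spectra. The node values below are hypothetical stand-ins for ligand parameters; the point is only the qualitative shape of the spectrum, not the paper's quantitative results.

```python
import numpy as np

x = np.linspace(0.1, 1.0, 8)         # hypothetical ligand parameters (nodes)
V = np.vander(x, increasing=True)    # V[i, j] = x_i ** j
M = V @ V.T                          # Gram matrix with Vandermonde factors
eig = np.sort(np.linalg.eigvalsh(M))[::-1]
print("eigenvalues (note the strong grading):", eig)
print("largest/smallest ratio:", eig[0] / eig[-1])
```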
2009.02156
Andreas Kamilaris
P. Papademas, E. Kamilari, M. Aspri, D. A. Anagnostopoulos, P. Mousikos, A. Kamilaris and D. Tsaltas
Investigation of the Cyprus donkey milk bacterial diversity by 16SrDNA high-throughput sequencing in a Cyprus donkey farm
Accepted for publication at Journal of Dairy Science
null
null
null
q-bio.QM cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The interest in milk originating from donkeys is growing worldwide due to its claimed functional and nutritional properties, especially for sensitive population groups, such as infants with cow milk protein allergy. The current study aimed to assess the microbiological quality of donkey milk produced in a donkey farm in Cyprus using culture-based and high-throughput sequencing (HTS) techniques. The culture-based microbiological analysis showed very low microbial counts, while important food-borne pathogens were not detected in any sample. In addition, HTS was applied to characterize the bacterial communities of donkey milk samples. Donkey milk was composed mostly of: Gram-negative Proteobacteria, including Sphingomonas, Pseudomonas, Mesorhizobium and Acinetobacter; lactic acid bacteria, including Lactobacillus and Streptococcus; the endospore-forming Clostridium; and the environmental genera Flavobacterium and Ralstonia, detected in lower relative abundances. The results of the study support existing findings that donkey milk contains mostly Gram-negative bacteria. Moreover, they raise questions regarding the contribution a) of antimicrobial agents (i.e. lysozyme, peptides) in shaping the microbial communities and b) of the bacterial microbiota to the functional value of donkey milk.
[ { "created": "Fri, 4 Sep 2020 12:42:54 GMT", "version": "v1" } ]
2020-09-07
[ [ "Papademas", "P.", "" ], [ "Kamilari", "E.", "" ], [ "Aspri", "M.", "" ], [ "Anagnostopoulos", "D. A", "" ], [ "Mousikos", "P.", "" ], [ "Kamilaris", "A.", "" ], [ "Tsaltas", "D.", "" ] ]
The interest in milk originating from donkeys is growing worldwide due to its claimed functional and nutritional properties, especially for sensitive population groups, such as infants with cow milk protein allergy. The current study aimed to assess the microbiological quality of donkey milk produced in a donkey farm in Cyprus using culture-based and high-throughput sequencing (HTS) techniques. The culture-based microbiological analysis showed very low microbial counts, while important food-borne pathogens were not detected in any sample. In addition, HTS was applied to characterize the bacterial communities of donkey milk samples. Donkey milk was composed mostly of: Gram-negative Proteobacteria, including Sphingomonas, Pseudomonas, Mesorhizobium and Acinetobacter; lactic acid bacteria, including Lactobacillus and Streptococcus; the endospore-forming Clostridium; and the environmental genera Flavobacterium and Ralstonia, detected in lower relative abundances. The results of the study support existing findings that donkey milk contains mostly Gram-negative bacteria. Moreover, they raise questions regarding the contribution a) of antimicrobial agents (i.e. lysozyme, peptides) in shaping the microbial communities and b) of the bacterial microbiota to the functional value of donkey milk.
2404.09947
Katherine Meyer
Benjamin Hafner and Katherine Meyer
Bounding seed loss from isolated habitat patches
25 pages, 10 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dispersal of propagules (seeds, spores) from a geographically isolated population into an uninhabitable matrix can threaten population persistence if it prevents new growth from keeping pace with mortality. Quantifying propagule loss can thus inform restoration and conservation of vulnerable populations in fragmented landscapes. To model propagule loss in detail, one can integrate dispersal kernels (probabilistic descriptions of dispersal) and plant densities. However, one might lack the detailed spatial information and computational tools needed by such integral models. Here we derive two upper bounds on the probability of propagule loss--one assuming rotational symmetry of dispersal and the other not--that require only habitat area, habitat perimeter, and the mean dispersal distance of a propagule. We compare the bounds to simulations of integral models for the population of Asclepias syriaca (common milkweed) at McKnight Prairie--a 13.7 hectare reserve surrounded by agricultural fields in Goodhue County, Minnesota--and identify conditions under which the bounds closely estimate propagule loss.
[ { "created": "Mon, 15 Apr 2024 17:21:23 GMT", "version": "v1" } ]
2024-04-16
[ [ "Hafner", "Benjamin", "" ], [ "Meyer", "Katherine", "" ] ]
Dispersal of propagules (seeds, spores) from a geographically isolated population into an uninhabitable matrix can threaten population persistence if it prevents new growth from keeping pace with mortality. Quantifying propagule loss can thus inform restoration and conservation of vulnerable populations in fragmented landscapes. To model propagule loss in detail, one can integrate dispersal kernels (probabilistic descriptions of dispersal) and plant densities. However, one might lack the detailed spatial information and computational tools needed by such integral models. Here we derive two upper bounds on the probability of propagule loss--one assuming rotational symmetry of dispersal and the other not--that require only habitat area, habitat perimeter, and the mean dispersal distance of a propagule. We compare the bounds to simulations of integral models for the population of Asclepias syriaca (common milkweed) at McKnight Prairie--a 13.7 hectare reserve surrounded by agricultural fields in Goodhue County, Minnesota--and identify conditions under which the bounds closely estimate propagule loss.
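A direct Monte Carlo estimate for the simplest geometry shows the quantity the integral models compute and the bounds approximate. The disk patch and Gaussian kernel below are hypothetical choices, not the McKnight Prairie configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
R = 100.0      # patch radius in meters (assumed)
sigma = 30.0   # kernel scale; mean dispersal distance = sigma * sqrt(pi/2)
n = 200_000

# Parent plants uniformly distributed inside the disk.
r = R * np.sqrt(rng.random(n))
phi = rng.uniform(0.0, 2.0 * np.pi, n)
x, y = r * np.cos(phi), r * np.sin(phi)

# One seed per parent with a rotationally symmetric Gaussian displacement.
dx, dy = rng.normal(0.0, sigma, n), rng.normal(0.0, sigma, n)
lost = (x + dx) ** 2 + (y + dy) ** 2 > R ** 2
print(f"estimated propagule loss: {lost.mean():.3f}")
```

For this geometry the ingredients of the bounds are immediate: area pi R^2, perimeter 2 pi R, and mean dispersal distance sigma * sqrt(pi / 2).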
2305.01059
Jeferson J. Arenzon
Jeferson J. Arenzon and Luca Peliti
Emergent cooperative behavior in transient compartments
7 pages, 5 figures
Phys. Rev. E 108 (2023) 034409
10.1103/PhysRevE.108.034409
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a minimal model of multilevel selection on structured populations, considering the interplay between game theory and population dynamics. Through a bottleneck process, finite groups are formed with cooperators and defectors sampled from an infinite pool. After the fragmentation, these transient compartments grow until the carrying capacity is attained. Eventually, all compartments are merged and well mixed, and the whole process is repeated. We show that cooperators, even if interacting only through mean-field intra-group interactions that favor defectors, may perform well because of the inter-group competition and the size diversity among the compartments. These cycles of isolation and coalescence may therefore be important in maintaining diversity among different species or strategies, and may help to understand the underlying mechanisms of the scaffolding processes in the transition to multicellularity.
[ { "created": "Mon, 1 May 2023 19:45:33 GMT", "version": "v1" }, { "created": "Fri, 29 Sep 2023 00:35:20 GMT", "version": "v2" } ]
2023-10-02
[ [ "Arenzon", "Jeferson J.", "" ], [ "Peliti", "Luca", "" ] ]
We introduce a minimal model of multilevel selection on structured populations, considering the interplay between game theory and population dynamics. Through a bottleneck process, finite groups are formed with cooperators and defectors sampled from an infinite pool. After the fragmentation, these transient compartments grow until the carrying capacity is attained. Eventually, all compartments are merged and well mixed, and the whole process is repeated. We show that cooperators, even if interacting only through mean-field intra-group interactions that favor defectors, may perform well because of the inter-group competition and the size diversity among the compartments. These cycles of isolation and coalescence may therefore be important in maintaining diversity among different species or strategies, and may help to understand the underlying mechanisms of the scaffolding processes in the transition to multicellularity.
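A heavily simplified numerical version of the cycle (fragmentation, within-group growth, merging) shows how inter-group competition can sustain cooperators despite a within-group disadvantage. The growth rule and every parameter below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_cycle(x, n_groups=200, mean_size=5, adv=1.2, benefit=3.0):
    """One fragmentation-growth-merge cycle; x is the pool's cooperator fraction."""
    sizes = rng.poisson(mean_size, n_groups) + 1        # transient compartments
    coop = rng.binomial(sizes, x)                       # founders drawn from the pool
    defect = sizes - coop
    K = sizes * (1.0 + benefit * coop / sizes)          # cooperative groups grow larger
    weight = coop + adv * defect                        # defectors win within groups
    coop_out = K * coop / weight
    defect_out = K * adv * defect / weight
    return coop_out.sum() / (coop_out.sum() + defect_out.sum())

x = 0.5
for _ in range(50):
    x = one_cycle(x)
print(f"cooperator fraction after 50 cycles: {x:.3f}")
```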
1910.04048
Mohsen Annabestani
Mohsen Annabestani, Sina Azizmohseni, Pouria Esmaeili-Dokht, Nahal Bagheri, Afarin Aghassizadeh and Mahdi Fardmanesh
Multiphysics analysis and practical implementation of an ionic soft actuator-based microfluidic device toward the design of a POCT compatible active micromixer
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroactive polymers (EAPs) are among the most promising soft materials for active microfluidics, and ionic EAPs (i-EAPs) are particularly well suited for use in active microfluidic devices. Here, as a case study, we have designed and fabricated a microfluidic micromixer using an i-EAP named Ionic Polymer-Metal Composite (IPMC). In microfluidics, active devices offer greater functionality, but the external facilities they require make them less suitable for point-of-care tests (POCTs). Resolving this paradox calls for active components that need minimal supporting facilities. The IPMC can be one such component; hence, by integrating an IPMC actuator into a microfluidic channel, a micromixer chip was designed and subjected to both simulation and experimental tests. The results showed that the proposed micromixer is able to mix microscale fluids properly and that the IPMC actuator has adequate potential to serve as an active component for POCT-compatible microfluidic chips.
[ { "created": "Tue, 8 Oct 2019 11:57:20 GMT", "version": "v1" } ]
2019-10-10
[ [ "Annabestani", "Mohsen", "" ], [ "Azizmohseni", "Sina", "" ], [ "Esmaeili-Dokht", "Pouria", "" ], [ "Bagheri", "Nahal", "" ], [ "and", "Afarin Aghassizadeh", "" ], [ "Fardmanesh", "Mahdi", "" ] ]
Electroactive polymers (EAPs) are among the most promising soft materials for active microfluidics, and ionic EAPs (i-EAPs) are particularly well suited for use in active microfluidic devices. Here, as a case study, we have designed and fabricated a microfluidic micromixer using an i-EAP named Ionic Polymer-Metal Composite (IPMC). In microfluidics, active devices offer greater functionality, but the external facilities they require make them less suitable for point-of-care tests (POCTs). Resolving this paradox calls for active components that need minimal supporting facilities. The IPMC can be one such component; hence, by integrating an IPMC actuator into a microfluidic channel, a micromixer chip was designed and subjected to both simulation and experimental tests. The results showed that the proposed micromixer is able to mix microscale fluids properly and that the IPMC actuator has adequate potential to serve as an active component for POCT-compatible microfluidic chips.
1411.4108
Li Zhang
Li Zhang, Xuejun Liu, Songcan Chen
Detecting Differential Expression from RNA-seq Data with Expression Measurement Uncertainty
20 pages, 9 figures
null
null
null
q-bio.GN cs.CE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput RNA sequencing (RNA-seq) has emerged as a revolutionary and powerful technology for expression profiling. Most proposed methods for detecting differentially expressed (DE) genes from RNA-seq are based on statistics that compare normalized read counts between conditions. However, few methods incorporate expression measurement uncertainty into DE detection. Moreover, most methods are only capable of detecting DE genes, and few methods are available for detecting DE isoforms. In this paper, a Bayesian framework (BDSeq) is proposed to detect DE genes and isoforms with consideration of expression measurement uncertainty. This expression measurement uncertainty provides useful information which can help to improve the performance of DE detection. Three real RNA-seq data sets are used to evaluate the performance of BDSeq, and results show that the inclusion of expression measurement uncertainty improves accuracy in the detection of DE genes and isoforms. Finally, we develop a GamSeq-BDSeq RNA-seq analysis pipeline for the convenience of users, which is freely available at the website http://parnec.nuaa.edu.cn/liux/GSBD/GamSeq-BDSeq.html.
[ { "created": "Sat, 15 Nov 2014 03:43:01 GMT", "version": "v1" } ]
2014-11-18
[ [ "Zhang", "Li", "" ], [ "Liu", "Xuejun", "" ], [ "Chen", "Songcan", "" ] ]
High-throughput RNA sequencing (RNA-seq) has emerged as a revolutionary and powerful technology for expression profiling. Most proposed methods for detecting differentially expressed (DE) genes from RNA-seq are based on statistics that compare normalized read counts between conditions. However, few methods incorporate expression measurement uncertainty into DE detection. Moreover, most methods are only capable of detecting DE genes, and few methods are available for detecting DE isoforms. In this paper, a Bayesian framework (BDSeq) is proposed to detect DE genes and isoforms with consideration of expression measurement uncertainty. This expression measurement uncertainty provides useful information which can help to improve the performance of DE detection. Three real RNA-seq data sets are used to evaluate the performance of BDSeq, and results show that the inclusion of expression measurement uncertainty improves accuracy in the detection of DE genes and isoforms. Finally, we develop a GamSeq-BDSeq RNA-seq analysis pipeline for the convenience of users, which is freely available at the website http://parnec.nuaa.edu.cn/liux/GSBD/GamSeq-BDSeq.html.
2309.01663
Brandon Legried
Brandon Legried
Anomaly zones for uniformly sampled gene trees under the gene duplication and loss model
32 pages, 3 pages of references, 8 figures, Appendix with 8 pages
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Recently, there has been interest in extending long-known results about the multispecies coalescent tree to other models of gene trees. Results about the gene duplication and loss (GDL) tree have mathematical proofs, including species tree identifiability, estimability, and sample complexity of popular algorithms like ASTRAL. Here, this work is continued by characterizing the anomaly zones of uniformly sampled gene trees. The anomaly zone for species trees is the set of parameters where some discordant gene tree occurs with the maximal probability. The detection of anomalous gene trees is an important problem in phylogenomics, as their presence renders otherwise effective estimation methods positively misleading. Under the multispecies coalescent, anomaly zones are known to exist for rooted species trees with as few as four species. The gene duplication and loss process is a generalization of the generalized linear birth-death process to the rooted species tree, where each edge is treated as a single timeline with exponential-rate duplication and loss. The methods and results come from a detailed probabilistic analysis of trajectories observed from this stochastic process. It is shown that anomaly zones do not exist for rooted balanced GDL trees on four species, but do exist for rooted caterpillar trees, as with the multispecies coalescent.
[ { "created": "Mon, 4 Sep 2023 15:30:52 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 02:16:32 GMT", "version": "v2" }, { "created": "Thu, 28 Mar 2024 23:00:08 GMT", "version": "v3" } ]
2024-04-01
[ [ "Legried", "Brandon", "" ] ]
Recently, there has been interest in extending long-known results about the multispecies coalescent tree to other models of gene trees. Results about the gene duplication and loss (GDL) tree have mathematical proofs, including species tree identifiability, estimability, and sample complexity of popular algorithms like ASTRAL. Here, this work is continued by characterizing the anomaly zones of uniformly sampled gene trees. The anomaly zone for species trees is the set of parameters where some discordant gene tree occurs with the maximal probability. The detection of anomalous gene trees is an important problem in phylogenomics, as their presence renders otherwise effective estimation methods positively misleading. Under the multispecies coalescent, anomaly zones are known to exist for rooted species trees with as few as four species. The gene duplication and loss process is a generalization of the generalized linear birth-death process to the rooted species tree, where each edge is treated as a single timeline with exponential-rate duplication and loss. The methods and results come from a detailed probabilistic analysis of trajectories observed from this stochastic process. It is shown that anomaly zones do not exist for rooted balanced GDL trees on four species, but do exist for rooted caterpillar trees, as with the multispecies coalescent.
1502.03135
M\'at\'e Lengyel
M\'at\'e Lengyel, \'Ad\'am Koblinger, Marjena Popovi\'c, J\'ozsef Fiser
On the role of time in perceptual decision making
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to the dominant view, time in perceptual decision making is used for integrating new sensory evidence. Based on a probabilistic framework, we investigated the alternative hypothesis that time is used for gradually refining an internal estimate of uncertainty, that is, for obtaining an increasingly accurate approximation of the posterior distribution by collecting samples from it. In the context of a simple orientation estimation task, we analytically derived predictions of how humans should behave under the two hypotheses, and identified the across-trial correlation between error and subjective uncertainty as a proper assay to distinguish between them. Next, we developed a novel experimental paradigm that could be used to reliably measure these quantities, and tested the predictions derived from the two hypotheses. We found that in our task, humans show clear evidence that they use time mostly for probabilistic sampling and not for evidence integration. These results provide the first empirical support for iteratively improving probabilistic representations in perceptual decision making, and open the way to reinterpreting the role of time in the cortical processing of complex sensory information.
[ { "created": "Tue, 10 Feb 2015 22:09:41 GMT", "version": "v1" } ]
2015-02-12
[ [ "Lengyel", "Máté", "" ], [ "Koblinger", "Ádám", "" ], [ "Popović", "Marjena", "" ], [ "Fiser", "József", "" ] ]
According to the dominant view, time in perceptual decision making is used for integrating new sensory evidence. Based on a probabilistic framework, we investigated the alternative hypothesis that time is used for gradually refining an internal estimate of uncertainty, that is, for obtaining an increasingly accurate approximation of the posterior distribution by collecting samples from it. In the context of a simple orientation estimation task, we analytically derived predictions of how humans should behave under the two hypotheses, and identified the across-trial correlation between error and subjective uncertainty as a proper assay to distinguish between them. Next, we developed a novel experimental paradigm that could be used to reliably measure these quantities, and tested the predictions derived from the two hypotheses. We found that in our task, humans show clear evidence that they use time mostly for probabilistic sampling and not for evidence integration. These results provide the first empirical support for iteratively improving probabilistic representations in perceptual decision making, and open the way to reinterpreting the role of time in the cortical processing of complex sensory information.
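The proposed assay, the across-trial correlation between error and subjective uncertainty, is easy to illustrate under the sampling hypothesis. Everything below is a hypothetical toy (Gaussian posteriors whose width varies across trials, a handful of samples per trial), not the authors' experimental analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, k = 5000, 5                        # k posterior samples per trial (assumed)
theta = rng.uniform(0.0, 180.0, n_trials)    # true orientations in degrees
width = rng.uniform(5.0, 20.0, n_trials)     # posterior width varies across trials
samples = theta[:, None] + width[:, None] * rng.standard_normal((n_trials, k))

estimate = samples.mean(axis=1)              # report the mean of the samples
error = np.abs(estimate - theta)
uncertainty = samples.std(axis=1, ddof=1)    # subjective uncertainty from the samples
print("across-trial error-uncertainty correlation:",
      np.corrcoef(error, uncertainty)[0, 1])  # clearly positive under sampling
```

When both error and reported uncertainty derive from the same small set of samples drawn from a posterior whose width varies across trials, the two quantities co-vary; with trial-invariant noise and an exact posterior read-out the correlation vanishes, which is what makes it a useful assay.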
2012.15647
Franz Franchetti
Yoko Franchetti, Thomas D. Nolin, Franz Franchetti
Indirect Measurement of Hepatic Drug Clearance by Fitting Dynamical Models
This preprint is based on Chapter 2 of the PhD dissertation of Y Franchetti. The dissertation thesis is available at http://d-scholarship.pitt.edu/id/eprint/39885
null
null
University of Pittsburgh ETD 39885
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an indirect signal processing-based measurement method for biological quantities in humans that cannot be directly measured. We develop the method by focusing on estimating hepatic enzyme and drug transporter activity through breath-biopsy samples clinically obtained via the erythromycin breath test (EBT): a small dose of radio-labeled drug is injected and the subsequent content of radio-labeled CO$_2$ is measured repeatedly in exhaled breath; the resulting time series is analyzed. To model EBT we developed a 14-variable non-linear reduced-order dynamical model that describes the behavior of the drug and its metabolites in the human body well enough to capture all biological phenomena of interest. Based on this system of coupled non-linear ordinary differential equations (ODEs) we treat the measurement problem as an inverse problem: we estimate the ODE parameters of individual patients from the measured EBT time series. These estimates then provide a measurement of the liver activity of interest. The parameters are hard to estimate as the ODEs are stiff and the problem needs to be regularized to ensure stable convergence. We develop a formal operator framework to capture and treat the specific non-linearities present, and perform perturbation analysis to establish properties of the estimation procedure and its solution. Development of the method required 150,000 CPU hours at a supercomputing center, and a single production run takes 24 CPU hours. We introduce and analyze the method in the context of future precision dosing of drugs for vulnerable patients (e.g., oncology, nephrology, or pediatrics) to eventually ensure efficacy and avoid toxicity.
[ { "created": "Thu, 31 Dec 2020 15:09:21 GMT", "version": "v1" } ]
2021-01-01
[ [ "Franchetti", "Yoko", "" ], [ "Nolin", "Thomas D.", "" ], [ "Franchetti", "Franz", "" ] ]
We present an indirect signal processing-based measurement method for biological quantities in humans that cannot be directly measured. We develop the method by focusing on estimating hepatic enzyme and drug transporter activity through breath-biopsy samples clinically obtained via the erythromycin breath test (EBT): a small dose of radio-labeled drug is injected and the subsequent content of radio-labeled CO$_2$ is measured repeatedly in exhaled breath; the resulting time series is analyzed. To model EBT we developed a 14-variable non-linear reduced-order dynamical model that describes the behavior of the drug and its metabolites in the human body well enough to capture all biological phenomena of interest. Based on this system of coupled non-linear ordinary differential equations (ODEs) we treat the measurement problem as an inverse problem: we estimate the ODE parameters of individual patients from the measured EBT time series. These estimates then provide a measurement of the liver activity of interest. The parameters are hard to estimate as the ODEs are stiff and the problem needs to be regularized to ensure stable convergence. We develop a formal operator framework to capture and treat the specific non-linearities present, and perform perturbation analysis to establish properties of the estimation procedure and its solution. Development of the method required 150,000 CPU hours at a supercomputing center, and a single production run takes 24 CPU hours. We introduce and analyze the method in the context of future precision dosing of drugs for vulnerable patients (e.g., oncology, nephrology, or pediatrics) to eventually ensure efficacy and avoid toxicity.
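The inverse-problem workflow described (simulate a compartment ODE, compare with the measured breath curve, adjust parameters) can be sketched with a hypothetical two-compartment stand-in for the paper's 14-variable model; the box bounds in the fit act as a crude regularizer.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical two-compartment toy: labeled drug converts to labeled CO2,
# which is then exhaled. Rates k1, k2 are the unknowns.
def rhs(t, y, k1, k2):
    drug, co2 = y
    return [-k1 * drug, k1 * drug - k2 * co2]

def breath_curve(params, t):
    sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t, args=tuple(params))
    return sol.y[1]                       # observable: labeled-CO2 signal

t = np.linspace(0.0, 10.0, 30)
true_rates = (0.8, 0.3)
rng = np.random.default_rng(6)
data = breath_curve(true_rates, t) + rng.normal(0.0, 0.01, t.size)

fit = least_squares(lambda p: breath_curve(p, t) - data, x0=[0.3, 0.1],
                    bounds=([1e-3, 1e-3], [5.0, 5.0]))
print("recovered rates:", fit.x)          # should approach (0.8, 0.3)
```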
0902.0980
Ping Ao
P Ao
Global View of Bionetwork Dynamics: Adaptive Landscape
16 pages
Journal of Genetics and Genomics 36, 63-73 (2009)
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantifying the adaptive landscape of a given dynamical process has been one of the most important goals in theoretical biology. It can have immediate implications for many dynamical properties, such as robustness and plasticity. Based on recent work, here I give a nontechnical brief review of this powerful quantitative concept in biology. This concept was initially proposed by S Wright 70 years ago, and re-introduced by one of the founders of molecular biology and by others in different biological contexts. It was apparently forgotten by mainstream modern biologists for many years. Currently, this concept has found an increasingly important role in the development of systems biology and the modeling of bionetwork dynamics, from the phage lambda genetic switch to the endogenous network of cancer genesis and progression. It is an ideal quantity for describing the robustness and stability of bionetworks. I will first introduce five landmark proposals in biology on this concept, to demonstrate the important common thread in its development within theoretical biology. Then I will discuss a few recent results, focusing on work showing the logical consistency of the adaptive landscape. From the perspective of a working scientist and of what is needed for a dynamical theory when confronting empirical data, the adaptive landscape is useful both metaphorically and quantitatively and has captured an essential aspect of biological dynamical processes. Still, many important open problems remain to be solved. With this important problem under control, we may expect that we are on the right road to quantitatively formulating the evolutionary dynamics discovered by Darwin and Wallace.
[ { "created": "Thu, 5 Feb 2009 21:25:11 GMT", "version": "v1" } ]
2009-02-09
[ [ "Ao", "P", "" ] ]
Quantifying the adaptive landscape of a given dynamical process has been one of the most important goals in theoretical biology. It can have immediate implications for many dynamical properties, such as robustness and plasticity. Based on recent work, here I give a nontechnical brief review of this powerful quantitative concept in biology. This concept was initially proposed by S Wright 70 years ago, and re-introduced by one of the founders of molecular biology and by others in different biological contexts. It was apparently forgotten by mainstream modern biologists for many years. Currently, this concept has found an increasingly important role in the development of systems biology and the modeling of bionetwork dynamics, from the phage lambda genetic switch to the endogenous network of cancer genesis and progression. It is an ideal quantity for describing the robustness and stability of bionetworks. I will first introduce five landmark proposals in biology on this concept, to demonstrate the important common thread in its development within theoretical biology. Then I will discuss a few recent results, focusing on work showing the logical consistency of the adaptive landscape. From the perspective of a working scientist and of what is needed for a dynamical theory when confronting empirical data, the adaptive landscape is useful both metaphorically and quantitatively and has captured an essential aspect of biological dynamical processes. Still, many important open problems remain to be solved. With this important problem under control, we may expect that we are on the right road to quantitatively formulating the evolutionary dynamics discovered by Darwin and Wallace.
2101.00304
Shahabeddin Sotudian
Shahabeddin Sotudian and Mohammad Hossein Fazel Zarandi
Interval Type-2 Enhanced Possibilistic Fuzzy C-Means Clustering for Gene Expression Data Analysis
null
null
null
null
q-bio.GN cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Both FCM and PCM clustering methods have been widely applied to pattern recognition and data clustering. Nevertheless, FCM is sensitive to noise and PCM occasionally generates coincident clusters. PFCM is an extension of the PCM model by combining FCM and PCM, but this method still suffers from the weaknesses of PCM and FCM. In the current paper, the weaknesses of the PFCM algorithm are corrected and the enhanced possibilistic fuzzy c-means (EPFCM) clustering algorithm is presented. EPFCM can still be sensitive to noise. Therefore, we propose an interval type-2 enhanced possibilistic fuzzy c-means (IT2EPFCM) clustering method by utilizing two fuzzifiers $(m_1, m_2)$ for fuzzy memberships and two fuzzifiers $({\theta}_1, {\theta}_2)$ for possibilistic typicalities. Our computational results show the superiority of the proposed approaches compared with several state-of-the-art techniques in the literature. Finally, the proposed methods are implemented for analyzing microarray gene expression data.
[ { "created": "Fri, 1 Jan 2021 19:29:24 GMT", "version": "v1" }, { "created": "Wed, 24 Nov 2021 06:52:01 GMT", "version": "v2" } ]
2021-11-25
[ [ "Sotudian", "Shahabeddin", "" ], [ "Zarandi", "Mohammad Hossein Fazel", "" ] ]
Both FCM and PCM clustering methods have been widely applied to pattern recognition and data clustering. Nevertheless, FCM is sensitive to noise and PCM occasionally generates coincident clusters. PFCM is an extension of the PCM model by combining FCM and PCM, but this method still suffers from the weaknesses of PCM and FCM. In the current paper, the weaknesses of the PFCM algorithm are corrected and the enhanced possibilistic fuzzy c-means (EPFCM) clustering algorithm is presented. EPFCM can still be sensitive to noise. Therefore, we propose an interval type-2 enhanced possibilistic fuzzy c-means (IT2EPFCM) clustering method by utilizing two fuzzifiers $(m_1, m_2)$ for fuzzy memberships and two fuzzifiers $({\theta}_1, {\theta}_2)$ for possibilistic typicalities. Our computational results show the superiority of the proposed approaches compared with several state-of-the-art techniques in the literature. Finally, the proposed methods are implemented for analyzing microarray gene expression data.
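For reference, the baseline all of these variants extend is plain fuzzy c-means; a compact implementation of its standard updates is below. This is the textbook FCM, not the proposed EPFCM or IT2EPFCM methods.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # membership columns sum to one
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)) * np.sum(d ** (-2.0 / (m - 1.0)), axis=0))
    return centers, U

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in (0.0, 2.0, 4.0)])
centers, U = fcm(X)
print(np.round(centers, 2))   # three centers near (0,0), (2,2), (4,4)
```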
1701.08043
Magne Aldrin
Magne Aldrin, Ragnar Bang Huseby, Audun Stien, Randi Nygaard Gr{\o}ntvedt, Hildegunn Viljugrein and Peder Andreas Jansen
A stage-structured Bayesian hierarchical model for salmon lice populations at individual salmon farms - Estimated from multiple farm data sets
null
Ecological Modelling, 2017
10.1016/j.ecolmodel.2017.05.019
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Salmon farming has become a prosperous international industry over the last decades. Along with growth in the production of farmed salmon, however, an increasing threat from pathogens has emerged. Of special concern is the propagation and spread of the salmon louse, Lepeophtheirus salmonis. In order to gain insight into this parasite's population dynamics in large-scale salmon farming systems, we present a fully mechanistic stage-structured population model for the salmon louse, also allowing for complexities involved in the hierarchical structure of full-scale salmon farming. The model estimates parameters controlling a wide range of processes, including temperature-dependent demographic rates, fish size and abundance effects on louse transmission rates, effect sizes of various salmon louse control measures, and distance-based between-farm transmission rates. Model parameters were estimated from data from 32 salmon farms, except for the last production months of five farms, which were used to evaluate model predictions. We used a Bayesian estimation approach, combining the prior distributions and the data likelihood into a joint posterior distribution for all model parameters. The model generated expected values that fitted the observed infection levels of the chalimus, adult female and other mobile stages of salmon lice reasonably well. Predictions for the time periods not used for fitting the model were also consistent with the observational data. We argue that the present model for the population dynamics of the salmon louse in aquaculture farm systems may contribute to resolving the complexity of processes that drive this host-parasite relationship, and hence may improve strategies to control the parasite in this production system.
[ { "created": "Fri, 27 Jan 2017 13:07:49 GMT", "version": "v1" } ]
2018-08-22
[ [ "Aldrin", "Magne", "" ], [ "Huseby", "Ragnar Bang", "" ], [ "Stien", "Audun", "" ], [ "Grøntvedt", "Randi Nygaard", "" ], [ "Viljugrein", "Hildegunn", "" ], [ "Jansen", "Peder Andreas", "" ] ]
Salmon farming has become a prosperous international industry over the last decades. Along with growth in the production of farmed salmon, however, an increasing threat from pathogens has emerged. Of special concern is the propagation and spread of the salmon louse, Lepeophtheirus salmonis. In order to gain insight into this parasite's population dynamics in large-scale salmon farming systems, we present a fully mechanistic stage-structured population model for the salmon louse, also allowing for complexities involved in the hierarchical structure of full-scale salmon farming. The model estimates parameters controlling a wide range of processes, including temperature-dependent demographic rates, fish size and abundance effects on louse transmission rates, effect sizes of various salmon louse control measures, and distance-based between-farm transmission rates. Model parameters were estimated from data from 32 salmon farms, except for the last production months of five farms, which were used to evaluate model predictions. We used a Bayesian estimation approach, combining the prior distributions and the data likelihood into a joint posterior distribution for all model parameters. The model generated expected values that fitted the observed infection levels of the chalimus, adult female and other mobile stages of salmon lice reasonably well. Predictions for the time periods not used for fitting the model were also consistent with the observational data. We argue that the present model for the population dynamics of the salmon louse in aquaculture farm systems may contribute to resolving the complexity of processes that drive this host-parasite relationship, and hence may improve strategies to control the parasite in this production system.
1311.3236
Erik Wijnker
Erik Wijnker, Geo Velikkakam James, Jia Ding, Frank Becker, Jonas R. Klasen, Vimal Rawat, Beth A. Rowan, Daniel F. de Jong, C. Bastiaan de Snoo, Luis Zapata, Bruno Huettel, Hans de Jong, Stephan Ossowski, Detlef Weigel, Maarten Koornneef, Joost J.B. Keurentjes and Korbinian Schneeberger
The genomic landscape of meiotic crossovers and gene conversions in Arabidopsis thaliana
44 pages, 5 figures with figure supplements
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Knowledge of the exact distribution of meiotic crossovers (COs) and gene conversions (GCs) is essential for understanding many aspects of population genetics and evolution, from haplotype structure and long-distance genetic linkage to the generation of new allelic variants of genes. To this end, we resequenced the four products of 13 meiotic tetrads along with 10 doubled haploids derived from Arabidopsis thaliana hybrids. GC detection through short reads has previously been confounded by genomic rearrangements. Stringent filtering of misaligned reads allowed GC identification at high accuracy and revealed an ~80-kb transposition, which undergoes copy-number changes mediated by meiotic recombination. Non-crossover-associated GCs were extremely rare, most likely due to their short average length of ~25-50 bp, which is significantly shorter than the length of CO-associated GCs. Overall, recombination preferentially targeted non-methylated nucleosome-free regions at gene promoters, which showed significant enrichment of two sequence motifs.
[ { "created": "Wed, 13 Nov 2013 18:12:04 GMT", "version": "v1" } ]
2013-11-14
[ [ "Wijnker", "Erik", "" ], [ "James", "Geo Velikkakam", "" ], [ "Ding", "Jia", "" ], [ "Becker", "Frank", "" ], [ "Klasen", "Jonas R.", "" ], [ "Rawat", "Vimal", "" ], [ "Rowan", "Beth A.", "" ], [ "de Jong", "Daniel F.", "" ], [ "de Snoo", "C. Bastiaan", "" ], [ "Zapata", "Luis", "" ], [ "Huettel", "Bruno", "" ], [ "de Jong", "Hans", "" ], [ "Ossowski", "Stephan", "" ], [ "Weigel", "Detlef", "" ], [ "Koornneef", "Maarten", "" ], [ "Keurentjes", "Joost J. B.", "" ], [ "Schneeberger", "Korbinian", "" ] ]
Knowledge of the exact distribution of meiotic crossovers (COs) and gene conversions (GCs) is essential for understanding many aspects of population genetics and evolution, from haplotype structure and long-distance genetic linkage to the generation of new allelic variants of genes. To this end, we resequenced the four products of 13 meiotic tetrads along with 10 doubled haploids derived from Arabidopsis thaliana hybrids. GC detection through short reads has previously been confounded by genomic rearrangements. Stringent filtering of misaligned reads allowed GC identification at high accuracy and revealed an ~80-kb transposition, which undergoes copy-number changes mediated by meiotic recombination. Non-crossover-associated GCs were extremely rare, most likely due to their short average length of ~25-50 bp, which is significantly shorter than the length of CO-associated GCs. Overall, recombination preferentially targeted non-methylated nucleosome-free regions at gene promoters, which showed significant enrichment of two sequence motifs.
2204.03354
Patrick Krauss
Achim Schilling, William Sedley, Richard Gerum, Claus Metzner, Konstantin Tziridis, Andreas Maier, Holger Schulze, Fan-Gang Zeng, Karl J. Friston, Patrick Krauss
Predictive coding and stochastic resonance as fundamental principles of auditory perception
arXiv admin note: substantial text overlap with arXiv:2010.01914
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
How is information processed in the brain during perception? Mechanistic insight is achieved only when experiments are employed to test formal or computational models. In analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying auditory perception. With a special focus on tinnitus -- as the prime example of auditory phantom perception -- we review recent work at the intersection of artificial intelligence, psychology, and neuroscience. In particular, we discuss why everyone with tinnitus suffers from hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that the increase of sensory precision due to Bayesian inference could be caused by intrinsic neural noise and lead to a prediction error in the cerebral cortex. Hence, two fundamental processing principles - being ubiquitous in the brain - provide the most explanatory power for the emergence of tinnitus: predictive coding as a top-down, and stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles play a crucial role in healthy auditory perception.
[ { "created": "Thu, 7 Apr 2022 10:47:58 GMT", "version": "v1" }, { "created": "Mon, 23 May 2022 09:14:52 GMT", "version": "v2" } ]
2022-05-24
[ [ "Schilling", "Achim", "" ], [ "Sedley", "William", "" ], [ "Gerum", "Richard", "" ], [ "Metzner", "Claus", "" ], [ "Tziridis", "Konstantin", "" ], [ "Maier", "Andreas", "" ], [ "Schulze", "Holger", "" ], [ "Zeng", "Fan-Gang", "" ], [ "Friston", "Karl J.", "" ], [ "Krauss", "Patrick", "" ] ]
How is information processed in the brain during perception? Mechanistic insight is achieved only when experiments are employed to test formal or computational models. In analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying auditory perception. With a special focus on tinnitus -- as the prime example of auditory phantom perception -- we review recent work at the intersection of artificial intelligence, psychology, and neuroscience. In particular, we discuss why everyone with tinnitus suffers from hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that the increase of sensory precision due to Bayesian inference could be caused by intrinsic neural noise and lead to a prediction error in the cerebral cortex. Hence, two fundamental processing principles - being ubiquitous in the brain - provide the most explanatory power for the emergence of tinnitus: predictive coding as a top-down, and stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles play a crucial role in healthy auditory perception.
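Of the two principles, stochastic resonance is the easier to demonstrate in a few lines: a subthreshold signal becomes detectable by a threshold unit only at intermediate noise levels. The sketch below is the generic textbook effect with hypothetical numbers, not the authors' tinnitus model.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0.0, 10.0, 10_000)
signal = 0.8 * np.sin(2.0 * np.pi * t)   # subthreshold: peak 0.8 < threshold 1.0
threshold = 1.0

for noise_sd in (0.1, 0.4, 3.0):
    crossings = (signal + rng.normal(0.0, noise_sd, t.size)) > threshold
    # Agreement between crossings and the signal peaks at intermediate noise:
    # too little noise yields almost no crossings, too much yields random ones.
    corr = np.corrcoef(crossings.astype(float), signal)[0, 1]
    print(f"noise_sd={noise_sd:4.1f}: crossing-signal correlation = {corr:.3f}")
```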
2012.02361
Storm Slivkoff
Storm Slivkoff, Jack L. Gallant
Design of Complex Experiments Using Mixed Integer Linear Programming
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article we demonstrate how this design process can be greatly assisted using an optimization tool known as Mixed Integer Linear Programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroimaging experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
[ { "created": "Fri, 4 Dec 2020 01:49:43 GMT", "version": "v1" } ]
2020-12-07
[ [ "Slivkoff", "Storm", "" ], [ "Gallant", "Jack L.", "" ] ]
Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article we demonstrate how this design process can be greatly assisted using an optimization tool known as Mixed Integer Linear Programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroimaging experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
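As a concrete illustration of the kind of constraint satisfaction described here, the following sketch formulates a hypothetical stimulus-selection problem as a MILP with the PuLP library; the objective, the run-length cap, and the category-balance constraints are invented for illustration and are not the paper's case studies:

```python
import random
import pulp

random.seed(0)
n = 60
duration = [random.uniform(2, 8) for _ in range(n)]    # seconds per candidate stimulus
category = [i % 3 for i in range(n)]                   # three stimulus categories
score = [random.random() for _ in range(n)]            # desirability of each stimulus

prob = pulp.LpProblem("stimulus_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]

prob += pulp.lpSum(score[i] * x[i] for i in range(n))            # maximise total score
prob += pulp.lpSum(duration[i] * x[i] for i in range(n)) <= 90   # total run length cap
for c in range(3):                                               # balanced design
    prob += pulp.lpSum(x[i] for i in range(n) if category[i] == c) == 5

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected stimuli:", [i for i in range(n) if x[i].value() == 1])
```

Additional real-world constraints (counterbalanced orderings, minimum spacing between related stimuli, per-run budgets) would each become one more linear constraint in the same problem.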
1607.08694
Alma Dal Co
Alma Dal Co, Marco Cosentino Lagomarsino, Michele Caselle, Matteo Osella
Stochastic timing in gene expression for simple regulatory strategies
10 pages, 5 figures
Nucleic Acids Res 45 (3): 1069-1078 (2017)
10.1093/nar/gkw1235
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Timing is essential for many cellular processes, from cellular responses to external stimuli to the cell cycle and circadian clocks. Many of these processes are based on gene expression. For example, an activated gene may be required to reach, within a precise time, a threshold level of expression that triggers a specific downstream process. However, gene expression is subject to stochastic fluctuations, naturally inducing an uncertainty in this threshold-crossing time with potential consequences on biological functions and phenotypes. Here, we consider such "timing fluctuations", and we ask how they can be controlled. Our analytical estimates and simulations show that, for an induced gene, timing variability is minimal if the threshold level of expression is approximately half of the steady-state level. Timing fluctuations can be reduced by increasing the transcription rate, while they are insensitive to the translation rate. In the presence of self-regulatory strategies, we show that self-repression reduces timing noise for threshold levels that have to be reached quickly, while self-activation is optimal at long times. These results lay a framework for understanding stochasticity of endogenous systems such as the cell cycle, as well as for the design of synthetic trigger circuits.
[ { "created": "Fri, 29 Jul 2016 06:35:05 GMT", "version": "v1" }, { "created": "Thu, 23 Feb 2017 07:52:32 GMT", "version": "v2" } ]
2017-02-24
[ [ "Co", "Alma Dal", "" ], [ "Lagomarsino", "Marco Cosentino", "" ], [ "Caselle", "Michele", "" ], [ "Osella", "Matteo", "" ] ]
Timing is essential for many cellular processes, from cellular responses to external stimuli to the cell cycle and circadian clocks. Many of these processes are based on gene expression. For example, an activated gene may be required to reach, within a precise time, a threshold level of expression that triggers a specific downstream process. However, gene expression is subject to stochastic fluctuations, naturally inducing an uncertainty in this threshold-crossing time with potential consequences on biological functions and phenotypes. Here, we consider such "timing fluctuations", and we ask how they can be controlled. Our analytical estimates and simulations show that, for an induced gene, timing variability is minimal if the threshold level of expression is approximately half of the steady-state level. Timing fluctuations can be reduced by increasing the transcription rate, while they are insensitive to the translation rate. In the presence of self-regulatory strategies, we show that self-repression reduces timing noise for threshold levels that have to be reached quickly, while self-activation is optimal at long times. These results lay a framework for understanding stochasticity of endogenous systems such as the cell cycle, as well as for the design of synthetic trigger circuits.
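The threshold-crossing experiment behind these claims can be sketched with a birth-death (Gillespie) simulation of an induced gene; the rates below are illustrative and, per the abstract's prediction, the relative timing spread should be smallest for thresholds near half the steady-state level:

```python
import numpy as np

rng = np.random.default_rng(1)
k, g = 50.0, 1.0                    # production and degradation; steady state = k/g = 50

def first_passage_time(threshold):
    """Time for a birth-death protein count starting at 0 to first hit threshold."""
    t, x = 0.0, 0
    while x < threshold:
        total = k + g * x           # total event rate
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < k / total else -1
    return t

for thr in (10, 25, 40, 48):        # thresholds as fractions of steady state
    times = [first_passage_time(thr) for _ in range(300)]
    cv = np.std(times) / np.mean(times)
    print(f"threshold={thr:2d} ({thr / 50:.0%} of steady state)  CV={cv:.3f}")
```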
1910.03934
Anyou Wang
Anyou Wang, Hai Rong
Noncoding RNAs serve as the deadliest regulators for cancer
null
null
null
null
q-bio.GN q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cancer is one of the leading causes of human death. Many efforts have been made to understand its mechanism, and many proteins and DNA sequence variations have been identified as suspected targets for therapy. However, drugs targeting these targets have low success rates, suggesting that the basic mechanism still remains unclear. Here, we develop computational software combining a Cox proportional-hazards model and stability selection to unearth the overlooked, yet most important, cancer drivers hidden in massive data from The Cancer Genome Atlas (TCGA), including 11,574 RNAseq samples and clinical data. Generally, noncoding RNAs primarily regulate cancer deaths and work as the deadliest cancer inducers and repressors, in contrast to proteins as conventionally thought. In particular, processed pseudogenes serve as the primary cancer inducers, while lincRNAs and antisense RNAs dominate the repressors. Strikingly, noncoding RNAs serve as the strongest universal regulators for all cancer types, although personal clinical variables such as alcohol and smoking significantly alter the cancer genome. Furthermore, noncoding RNAs also work as central hubs in the cancer regulatory network and as biomarkers to discriminate cancer types. Therefore, noncoding RNAs overall serve as the deadliest cancer regulators, which refreshes the basic concept of cancer mechanism and builds a novel basis for cancer research and therapy. Biological functions of pseudogenes have rarely been recognized. Here we reveal them, from big data, as the most important cancer drivers for all cancer types, breaking a wall to explore their biological potentials.
[ { "created": "Tue, 8 Oct 2019 05:42:36 GMT", "version": "v1" } ]
2019-10-10
[ [ "Wang", "Anyou", "" ], [ "Rong", "Hai", "" ] ]
Cancer is one of the leading causes of human death. Many efforts have been made to understand its mechanism, and many proteins and DNA sequence variations have been identified as suspected targets for therapy. However, drugs targeting these targets have low success rates, suggesting that the basic mechanism still remains unclear. Here, we develop computational software combining a Cox proportional-hazards model and stability selection to unearth the overlooked, yet most important, cancer drivers hidden in massive data from The Cancer Genome Atlas (TCGA), including 11,574 RNAseq samples and clinical data. Generally, noncoding RNAs primarily regulate cancer deaths and work as the deadliest cancer inducers and repressors, in contrast to proteins as conventionally thought. In particular, processed pseudogenes serve as the primary cancer inducers, while lincRNAs and antisense RNAs dominate the repressors. Strikingly, noncoding RNAs serve as the strongest universal regulators for all cancer types, although personal clinical variables such as alcohol and smoking significantly alter the cancer genome. Furthermore, noncoding RNAs also work as central hubs in the cancer regulatory network and as biomarkers to discriminate cancer types. Therefore, noncoding RNAs overall serve as the deadliest cancer regulators, which refreshes the basic concept of cancer mechanism and builds a novel basis for cancer research and therapy. Biological functions of pseudogenes have rarely been recognized. Here we reveal them, from big data, as the most important cancer drivers for all cancer types, breaking a wall to explore their biological potentials.
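A minimal sketch of the Cox-plus-stability-selection idea, using the lifelines library on synthetic survival data; the feature names, penalty strength, and selection cutoff below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 300, 10
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"rna{j}" for j in range(p)])
hazard = np.exp(1.2 * X["rna0"] - 0.8 * X["rna1"])       # two true driver "RNAs"
X["duration"] = rng.exponential(1.0 / hazard)
X["event"] = (rng.random(n) < 0.8).astype(int)           # some censoring

counts = np.zeros(p)
for b in range(50):                                      # stability-selection loop
    sub = X.sample(frac=0.5, random_state=b)             # random half of the cohort
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)       # lasso-penalised Cox
    cph.fit(sub, duration_col="duration", event_col="event")
    counts += (cph.params_.abs() > 0.05).to_numpy()      # count surviving features

for j in np.argsort(-counts):
    print(f"rna{j}: selected in {counts[j]:.0f}/50 subsamples")
```

Features selected in a large fraction of subsamples (here rna0 and rna1) are the "stable" survival regulators; on TCGA-scale data the same loop would simply run over far more features and samples.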
2109.15308
Michael Arcidiacono
Michael Arcidiacono and David Ryan Koes
MOLUCINATE: A Generative Model for Molecules in 3D Space
Camera-ready submission to NeurIPS 2020 MLSB workshop. 6 pages and 2 figures
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Recent advances in machine learning have enabled generative models for both optimization and de novo generation of drug candidates with desired properties. Previous generative models have focused on producing SMILES strings or 2D molecular graphs, while attempts at producing molecules in 3D have focused on reinforcement learning (RL), distance matrices, and pure atom density grids. Here we present MOLUCINATE (MOLecUlar ConvolutIoNal generATive modEl), a novel architecture that simultaneously generates topological and 3D atom position information. We demonstrate the utility of this method by using it to optimize molecules for a desired radius of gyration. In the future, this model can be used for more practically useful optimization tasks, such as binding affinity to a protein target.
[ { "created": "Thu, 30 Sep 2021 17:51:50 GMT", "version": "v1" }, { "created": "Wed, 24 Nov 2021 03:56:07 GMT", "version": "v2" } ]
2021-11-25
[ [ "Arcidiacono", "Michael", "" ], [ "Koes", "David Ryan", "" ] ]
Recent advances in machine learning have enabled generative models for both optimization and de novo generation of drug candidates with desired properties. Previous generative models have focused on producing SMILES strings or 2D molecular graphs, while attempts at producing molecules in 3D have focused on reinforcement learning (RL), distance matrices, and pure atom density grids. Here we present MOLUCINATE (MOLecUlar ConvolutIoNal generATive modEl), a novel architecture that simultaneously generates topological and 3D atom position information. We demonstrate the utility of this method by using it to optimize molecules for a desired radius of gyration. In the future, this model can be used for more practically useful optimization tasks, such as binding affinity to a protein target.
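The demonstration objective used here, radius of gyration, is straightforward to compute from generated atom coordinates; a small sketch (the generative architecture itself is not reproduced):

```python
import numpy as np

def radius_of_gyration(coords, masses=None):
    """Rg = sqrt( sum_i m_i |r_i - r_com|^2 / sum_i m_i )."""
    coords = np.asarray(coords, dtype=float)
    masses = np.ones(len(coords)) if masses is None else np.asarray(masses, dtype=float)
    com = np.average(coords, axis=0, weights=masses)          # centre of mass
    sq_dist = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

# toy "molecule": four atoms of equal mass
print(radius_of_gyration([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]))
```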
0902.4152
Sergei Mukhin I
Sergei I. Mukhin, Boris B. Kheyfets
Analytical derivation of thermodynamic properties of bilayer membrane with interdigitation
20 pages, 12 figures, 1 table, Biophysical Society Annual Meeting 2009, Boston, USA
null
null
null
q-bio.QM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a model of a bilayer lipid membrane with interdigitation, in which the lipid tails of the opposite monolayers interpenetrate. The interdigitation is modeled, as a first approximation, by linking the tails of the hydrophobic chains in the opposite monolayers of the bilayer. A number of thermodynamic characteristics are calculated analytically and compared with those of a regular membrane without interdigitation. A striking difference between the lateral pressure profiles at the layers' interface for the linked and regular bilayer models is found. In the linked case, the lateral pressure mid-plane peak disappears, while the free energy per chain increases. Within our model we found that, in the case of elongation of the chains inside a nucleus of e.g. a liquid-condensed phase, homogeneous interdigitation would be more costly for the membrane's free energy than the energy of the hydrophobic mismatch between the elongated chains and the liquid-expanded surrounding. Nonetheless, an inhomogeneous interdigitation along the nucleus boundary may occur inside a ``belt'' whose width varies approximately with the hydrophobic mismatch amplitude.
[ { "created": "Tue, 24 Feb 2009 13:59:31 GMT", "version": "v1" } ]
2009-02-25
[ [ "Mukhin", "Sergei I.", "" ], [ "Kheyfets", "Boris B.", "" ] ]
We consider a model of a bilayer lipid membrane with interdigitation, in which the lipid tails of the opposite monolayers interpenetrate. The interdigitation is modeled, as a first approximation, by linking the tails of the hydrophobic chains in the opposite monolayers of the bilayer. A number of thermodynamic characteristics are calculated analytically and compared with those of a regular membrane without interdigitation. A striking difference between the lateral pressure profiles at the layers' interface for the linked and regular bilayer models is found. In the linked case, the lateral pressure mid-plane peak disappears, while the free energy per chain increases. Within our model we found that, in the case of elongation of the chains inside a nucleus of e.g. a liquid-condensed phase, homogeneous interdigitation would be more costly for the membrane's free energy than the energy of the hydrophobic mismatch between the elongated chains and the liquid-expanded surrounding. Nonetheless, an inhomogeneous interdigitation along the nucleus boundary may occur inside a ``belt'' whose width varies approximately with the hydrophobic mismatch amplitude.
q-bio/0507001
Luciano da Fontoura Costa
Luciano da Fontoura Costa, Fernando Rocha and Silene Araujo de Lima
Characterizing Polygonality in Biological Structures
14 pages, 11 figures
Phys. Rev. E 73, 011913 (2006)
10.1103/PhysRevE.73.011913
null
q-bio.TO cond-mat.dis-nn
null
Several systems involve spatial arrangements of elements such as molecules or cells, the characterization of which bears important implications for biological and physical investigations. Traditional approaches to quantify spatial order and regularity have relied on nearest-neighbor distances or the number of sides of cells. The current work shows that superior features can be achieved by considering angular regularity. Voronoi tessellations are obtained for each basic element, and the angular regularity is then estimated from the differences between the angles defined by adjacent cells and a reference angle. In case this angle is 60 degrees, the measurement quantifies the hexagonality of the system. Other reference angles can be considered in order to quantify other types of spatial symmetries. The performance of the angular regularity is compared with other measurements, including the conformity ratio (based on nearest-neighbor distances) and the number of sides of the cells, confirming its improved sensitivity and discrimination power. The superior performance of the hexagonality measurement is also illustrated with respect to a real application concerning the characterization of retinal mosaics.
[ { "created": "Thu, 30 Jun 2005 22:05:47 GMT", "version": "v1" }, { "created": "Mon, 17 Oct 2005 17:41:55 GMT", "version": "v2" } ]
2007-09-19
[ [ "Costa", "Luciano da Fontoura", "" ], [ "Rocha", "Fernando", "" ], [ "de Lima", "Silene Araujo", "" ] ]
Several systems involve spatial arrangements of elements such as molecules or cells, the characterization of which bears important implications for biological and physical investigations. Traditional approaches to quantify spatial order and regularity have relied on nearest-neighbor distances or the number of sides of cells. The current work shows that superior features can be achieved by considering angular regularity. Voronoi tessellations are obtained for each basic element, and the angular regularity is then estimated from the differences between the angles defined by adjacent cells and a reference angle. In case this angle is 60 degrees, the measurement quantifies the hexagonality of the system. Other reference angles can be considered in order to quantify other types of spatial symmetries. The performance of the angular regularity is compared with other measurements, including the conformity ratio (based on nearest-neighbor distances) and the number of sides of the cells, confirming its improved sensitivity and discrimination power. The superior performance of the hexagonality measurement is also illustrated with respect to a real application concerning the characterization of retinal mosaics.
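The angular-regularity estimator can be sketched as follows: for each element, take the directions to its Voronoi-adjacent (Delaunay) neighbours, sort them, and measure how far consecutive angular gaps deviate from the 60-degree reference. Details of the published estimator may differ; this is one plausible reading of the abstract:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                       # random points: low angular regularity
tri = Delaunay(pts)

indptr, indices = tri.vertex_neighbor_vertices   # Voronoi adjacency via Delaunay
deviations = []
for i in range(len(pts)):
    nbrs = indices[indptr[i]:indptr[i + 1]]
    vecs = pts[nbrs] - pts[i]
    angles = np.sort(np.arctan2(vecs[:, 1], vecs[:, 0]))
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))   # consecutive angular gaps
    deviations.extend(np.abs(np.degrees(gaps) - 60.0))

print(f"mean |gap - 60 deg| = {np.mean(deviations):.1f} degrees "
      f"(a perfect hexagonal mosaic would give 0)")
```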
2106.07713
Jason Zwicker
Jason Zwicker, Francois Rivest
Interval Timing: Modeling the break-run-break pattern using start/stop threshold-less drift-diffusion model
null
null
10.1016/j.jmp.2022.102663
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animal interval timing is often studied through the peak interval (PI) procedure. In this procedure, the animal is rewarded for the first response after a fixed delay from the stimulus onset, but on some trials, the stimulus remains and no reward is given. The common methods and models to analyse the response pattern describe it as break-run-break: a period of low-rate responding, followed by rapid responding, followed by a return to low-rate responding. The study of this pattern has found correlations between the start, stop, and duration of the run period that hold across species and experiments. It is commonly assumed that in order to achieve these statistics with a pacemaker-accumulator model it is necessary to have start and stop thresholds. In this paper we develop a new model that varies the response rate in relation to the likelihood of event occurrence, as opposed to using a threshold for changing the response rate. The new model reproduced the start and stop statistics that have been observed in 14 different PI experiments from 3 different papers. The developed model is also compared to the Time-adaptive Drift-diffusion Model (TDDM), the latest accumulator model subsuming scalar expectancy theory (SET), on all 14 datasets. The results show that it is unnecessary to have explicit start and stop thresholds, or an internal equivalent of break-run-break states, to reproduce the individual-trial statistics and population behaviour and to obtain the same break-run-break analysis results. The new model also produces more realistic individual trials compared to TDDM.
[ { "created": "Mon, 14 Jun 2021 19:07:21 GMT", "version": "v1" }, { "created": "Mon, 4 Apr 2022 16:05:49 GMT", "version": "v2" } ]
2022-04-05
[ [ "Zwicker", "Jason", "" ], [ "Rivest", "Francois", "" ] ]
Animal interval timing is often studied through the peak interval (PI) procedure. In this procedure, the animal is rewarded for the first response after a fixed delay from the stimulus onset, but on some trials, the stimulus remains and no reward is given. The common methods and models to analyse the response pattern describe it as break-run-break: a period of low-rate responding, followed by rapid responding, followed by a return to low-rate responding. The study of this pattern has found correlations between the start, stop, and duration of the run period that hold across species and experiments. It is commonly assumed that in order to achieve these statistics with a pacemaker-accumulator model it is necessary to have start and stop thresholds. In this paper we develop a new model that varies the response rate in relation to the likelihood of event occurrence, as opposed to using a threshold for changing the response rate. The new model reproduced the start and stop statistics that have been observed in 14 different PI experiments from 3 different papers. The developed model is also compared to the Time-adaptive Drift-diffusion Model (TDDM), the latest accumulator model subsuming scalar expectancy theory (SET), on all 14 datasets. The results show that it is unnecessary to have explicit start and stop thresholds, or an internal equivalent of break-run-break states, to reproduce the individual-trial statistics and population behaviour and to obtain the same break-run-break analysis results. The new model also produces more realistic individual trials compared to TDDM.
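A threshold-less model of this kind can be sketched by letting the instantaneous response rate track a (scalar-timing) likelihood that reward is due and emitting responses as an inhomogeneous Poisson process; the Gaussian likelihood below is an illustrative stand-in for the paper's drift-diffusion formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, weber, peak_rate, dt = 20.0, 0.15, 3.0, 0.05   # interval (s), timing noise, max rate
t = np.arange(0.0, 60.0, dt)

# subjective likelihood that reward is due now, with scalar (Weber-like) timing noise
likelihood = np.exp(-0.5 * ((t - T) / (weber * T)) ** 2)
rate = peak_rate * likelihood                     # responses per second
responses = rng.random(t.size) < rate * dt        # thinned inhomogeneous Poisson process

resp_times = t[responses]
print(f"{resp_times.size} responses between {resp_times.min():.1f}s and "
      f"{resp_times.max():.1f}s around the {T:.0f}s criterion")
```

Even with no start/stop thresholds, each simulated trial shows the familiar low-high-low pattern, because responding is simply rare where the likelihood is small.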
1810.09935
Markus D Schirmer
Markus D. Schirmer and Ai Wern Chung and P. Ellen Grant and Natalia S. Rost
Network Structural Dependency in the Human Connectome Across the Life-Span
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Principles of network topology have been widely studied in the human connectome. Of particular interest is the modularity of the human brain, where the connectome is divided into subnetworks in which changes with development, aging or disease are subsequently investigated. We present a weighted network measure, the Network Dependency Index (NDI), to identify an individual region's importance to the global functioning of the network. Importantly, we utilize NDI to differentiate four subnetworks (Tiers) in the human connectome following Gaussian Mixture Model fitting. We analyze the topological aspects of each subnetwork with respect to age and compare them to rich-club-based subnetworks (rich-club, feeder and seeder). Our results first demonstrate the efficacy of NDI in identifying more consistent, central nodes of the connectome across age groups when compared to the rich-club framework. Stratifying the connectome by NDI led to consistent subnetworks across the life-span, revealing distinct patterns associated with age where, e.g., the key relay nuclei and cortical regions are contained in the subnetwork with the highest NDI. The divisions of the human connectome derived from our data-driven NDI framework have the potential to reveal topological alterations described by network measures through the life-span.
[ { "created": "Tue, 23 Oct 2018 16:01:23 GMT", "version": "v1" }, { "created": "Fri, 26 Oct 2018 17:44:37 GMT", "version": "v2" }, { "created": "Sat, 19 Jan 2019 01:40:14 GMT", "version": "v3" }, { "created": "Sat, 2 Feb 2019 20:38:24 GMT", "version": "v4" } ]
2019-02-05
[ [ "Schirmer", "Markus D.", "" ], [ "Chung", "Ai Wern", "" ], [ "Grant", "P. Ellen", "" ], [ "Rost", "Natalia S.", "" ] ]
Principles of network topology have been widely studied in the human connectome. Of particular interest is the modularity of the human brain, where the connectome is divided into subnetworks in which changes with development, aging or disease are subsequently investigated. We present a weighted network measure, the Network Dependency Index (NDI), to identify an individual region's importance to the global functioning of the network. Importantly, we utilize NDI to differentiate four subnetworks (Tiers) in the human connectome following Gaussian Mixture Model fitting. We analyze the topological aspects of each subnetwork with respect to age and compare them to rich-club-based subnetworks (rich-club, feeder and seeder). Our results first demonstrate the efficacy of NDI in identifying more consistent, central nodes of the connectome across age groups when compared to the rich-club framework. Stratifying the connectome by NDI led to consistent subnetworks across the life-span, revealing distinct patterns associated with age where, e.g., the key relay nuclei and cortical regions are contained in the subnetwork with the highest NDI. The divisions of the human connectome derived from our data-driven NDI framework have the potential to reveal topological alterations described by network measures through the life-span.
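The exact NDI definition is given in the paper; as a hedged stand-in, the sketch below scores each node by the drop in global efficiency when it is removed and then splits the scores into four tiers with a Gaussian mixture model, mirroring the GMM-based stratification described above:

```python
import networkx as nx
import numpy as np
from sklearn.mixture import GaussianMixture

G = nx.watts_strogatz_graph(60, 6, 0.2, seed=1)      # toy stand-in for a connectome
base = nx.global_efficiency(G)

scores = []                                          # per-node "dependency" score
for v in G.nodes():
    H = G.copy()
    H.remove_node(v)
    scores.append(base - nx.global_efficiency(H))    # efficiency lost without v
scores = np.array(scores).reshape(-1, 1)

gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
tiers = gmm.predict(scores)
for k in range(4):
    members = scores[tiers == k]
    if members.size:
        print(f"tier {k}: {members.size} nodes, mean score {members.mean():.4f}")
```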
1504.03622
Alexander Huth
Alexander G. Huth, Thomas L. Griffiths, Frederic E. Theunissen, Jack L. Gallant
PrAGMATiC: a Probabilistic and Generative Model of Areas Tiling the Cortex
null
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the human cortex seems to be organized into topographic cortical maps. Yet few quantitative methods exist for characterizing these maps. To address this issue we developed a modeling framework that can reveal group-level cortical maps based on neuroimaging data. PrAGMATiC, a probabilistic and generative model of areas tiling the cortex, is a hierarchical Bayesian generative model of cortical maps. This model assumes that the cortical map in each individual subject is a sample from a single underlying probability distribution. Learning the parameters of this distribution reveals the properties of a cortical map that are common across a group of subjects while avoiding the potentially lossy step of co-registering each subject into a group anatomical space. In this report we give a mathematical description of PrAGMATiC, describe approximations that make it practical to use, show preliminary results from its application to a real dataset, and describe a number of possible future extensions.
[ { "created": "Tue, 14 Apr 2015 16:52:31 GMT", "version": "v1" } ]
2015-04-15
[ [ "Huth", "Alexander G.", "" ], [ "Griffiths", "Thomas L.", "" ], [ "Theunissen", "Frederic E.", "" ], [ "Gallant", "Jack L.", "" ] ]
Much of the human cortex seems to be organized into topographic cortical maps. Yet few quantitative methods exist for characterizing these maps. To address this issue we developed a modeling framework that can reveal group-level cortical maps based on neuroimaging data. PrAGMATiC, a probabilistic and generative model of areas tiling the cortex, is a hierarchical Bayesian generative model of cortical maps. This model assumes that the cortical map in each individual subject is a sample from a single underlying probability distribution. Learning the parameters of this distribution reveals the properties of a cortical map that are common across a group of subjects while avoiding the potentially lossy step of co-registering each subject into a group anatomical space. In this report we give a mathematical description of PrAGMATiC, describe approximations that make it practical to use, show preliminary results from its application to a real dataset, and describe a number of possible future extensions.
1906.02710
Gao-De Li Dr
Gao-De Li
Flexible Cancer-Associated Chromatin Configuration (CACC) Might Be the Fundamental Reason Why Cancer Is So Difficult to Cure
8 pages
null
null
null
q-bio.SC q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We once proposed that cell-type-associated chromatin configurations determine cell types and that the cancer cell type is determined by a cancer-associated chromatin configuration (CACC). In this paper, we hypothesize that a flexible cell-type-associated chromatin configuration is associated with cell potency and has an advantage over an inflexible one in regulating genome-related activities, such as DNA replication, DNA transcription, DNA repair, and DNA mutagenesis. The reason why cancer is so difficult to treat is that CACC is flexible, which enables cancer cells not only to produce heterogeneous subclones through limited cell differentiation, but also to maximally and efficiently use genome-related resources to survive environmental changes. Therefore, to beat cancer, more efforts should be made to restrict the flexibility of CACC or to change CACC so that cancer cells can be turned back to normal or become less malignant.
[ { "created": "Thu, 6 Jun 2019 17:24:32 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2019 08:34:44 GMT", "version": "v2" }, { "created": "Sun, 16 Jun 2019 13:01:21 GMT", "version": "v3" }, { "created": "Fri, 28 Jun 2019 16:42:23 GMT", "version": "v4" } ]
2019-07-01
[ [ "Li", "Gao-De", "" ] ]
We once proposed that cell-type-associated chromatin configurations determine cell types and that the cancer cell type is determined by a cancer-associated chromatin configuration (CACC). In this paper, we hypothesize that a flexible cell-type-associated chromatin configuration is associated with cell potency and has an advantage over an inflexible one in regulating genome-related activities, such as DNA replication, DNA transcription, DNA repair, and DNA mutagenesis. The reason why cancer is so difficult to treat is that CACC is flexible, which enables cancer cells not only to produce heterogeneous subclones through limited cell differentiation, but also to maximally and efficiently use genome-related resources to survive environmental changes. Therefore, to beat cancer, more efforts should be made to restrict the flexibility of CACC or to change CACC so that cancer cells can be turned back to normal or become less malignant.
1512.01088
Hector Zenil
Narsis A. Kiani, Hector Zenil, Jakub Olczak and Jesper Tegn\'er
Evaluating Network Inference Methods in Terms of Their Ability to Preserve the Topology and Complexity of Genetic Networks
main part: 18 pages. 21 pages with Sup Inf. Forthcoming in the journal of Seminars in Cell and Developmental Biology
null
null
null
q-bio.MN cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network inference is a rapidly advancing field, with new methods being proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against \textit{gold standard} (true values) purpose-designed synthetic and real-world (validated) biological networks. In this paper we aim to assess the impact of taking into consideration aspects of topological and information content in the evaluation of the final accuracy of an inference procedure. Specifically, we will compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to turn in a performance that is superior to random guesswork. Therefore special care should be taken to suit the method to the purpose at hand. Finally, we show that evaluations from data generated using different underlying topologies have different signatures that can be used to better choose a network reconstruction method.
[ { "created": "Thu, 3 Dec 2015 14:25:04 GMT", "version": "v1" }, { "created": "Fri, 11 Dec 2015 18:54:09 GMT", "version": "v2" }, { "created": "Wed, 14 Sep 2016 18:18:09 GMT", "version": "v3" } ]
2016-09-15
[ [ "Kiani", "Narsis A.", "" ], [ "Zenil", "Hector", "" ], [ "Olczak", "Jakub", "" ], [ "Tegnér", "Jesper", "" ] ]
Network inference is a rapidly advancing field, with new methods being proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against \textit{gold standard} (true values) purpose-designed synthetic and real-world (validated) biological networks. In this paper we aim to assess the impact of taking into consideration aspects of topological and information content in the evaluation of the final accuracy of an inference procedure. Specifically, we will compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to turn in a performance that is superior to random guesswork. Therefore special care should be taken to suit the method to the purpose at hand. Finally, we show that evaluations from data generated using different underlying topologies have different signatures that can be used to better choose a network reconstruction method.
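The evaluation idea — scoring an inferred network both edge-by-edge and by how well it preserves a topological property — can be sketched as follows; the degree-distribution comparison via a Kolmogorov-Smirnov statistic is one simple stand-in for the paper's much richer battery of graph- and information-theoretic measures:

```python
import networkx as nx
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
true_g = nx.barabasi_albert_graph(100, 2, seed=0)   # "gold standard" network

# crude stand-in for inference error: drop 30 true edges, add random ones back
inferred = true_g.copy()
edges = list(inferred.edges())
inferred.remove_edges_from([edges[k] for k in rng.choice(len(edges), 30, replace=False)])
while inferred.number_of_edges() < true_g.number_of_edges():
    u, v = rng.choice(100, size=2, replace=False)
    inferred.add_edge(int(u), int(v))

t = {frozenset(e) for e in true_g.edges()}
i = {frozenset(e) for e in inferred.edges()}
ks = ks_2samp([d for _, d in true_g.degree()], [d for _, d in inferred.degree()])
print(f"edge precision={len(t & i) / len(i):.2f}  recall={len(t & i) / len(t):.2f}  "
      f"degree-distribution KS={ks.statistic:.3f}")
```

Two reconstructions with identical edge precision can differ sharply on the topological score, which is the abstract's point about evaluating preservation of structure, not just individual edges.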
0912.5315
Peter Waddell
Peter J. Waddell and Timothy Herston
Expectation-Maximization (EM) Algorithms for Mapping Short Reads Illustrated with FAIRE data and the TP53-WRAP53 Gene Region
17 pages, 3 figures, 1 table all incorporated in one pdf
null
null
null
q-bio.GN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Huge numbers of short reads are being generated for mapping back to the genome to discover the frequency of transcripts, miRNAs, DNase hypersensitive sites, FAIRE regions, nucleosome occupancy, etc. Since these reads are typically short (e.g., 36 base pairs), and since many eukaryotic genomes, including the human genome, have highly repetitive sequences, many of these reads map to two or more locations in the genome. Current mapping of these reads, grading them according to 0, 1 or 2 mismatches, wastes a great deal of information. These short sequences are typically mapped with no account taken of the accuracy of the sequence, even in vendor software where per-base error rates are reported by another part of the machine. Further, multiply-mapping reads are frequently discarded altogether, or allocated with no regard to where other reads are accumulating. Here we show how to combine probabilistic mapping of reads with an EM algorithm to iteratively improve the empirical likelihood of the allocation of short reads. Mapping using LAST takes into account the per-base accuracy of the read, plus insertions and deletions, plus anticipated occasional errors or SNPs with respect to the parent genome. The probabilistic EM algorithm iteratively allocates reads based on the proportion of reads mapping within windows on the previous cycle, along with any prior information on where the read best maps. The methods are illustrated with FAIRE ENCODE data looking at the very important head-to-head gene combination of TP53 and WRAP53.
[ { "created": "Tue, 29 Dec 2009 15:27:24 GMT", "version": "v1" } ]
2009-12-31
[ [ "Waddell", "Peter J.", "" ], [ "Herston", "Timothy", "" ] ]
Huge numbers of short reads are being generated for mapping back to the genome to discover the frequency of transcripts, miRNAs, DNase hypersensitive sites, FAIRE regions, nucleosome occupancy, etc. Since these reads are typically short (e.g., 36 base pairs), and since many eukaryotic genomes, including the human genome, have highly repetitive sequences, many of these reads map to two or more locations in the genome. Current mapping of these reads, grading them according to 0, 1 or 2 mismatches, wastes a great deal of information. These short sequences are typically mapped with no account taken of the accuracy of the sequence, even in vendor software where per-base error rates are reported by another part of the machine. Further, multiply-mapping reads are frequently discarded altogether, or allocated with no regard to where other reads are accumulating. Here we show how to combine probabilistic mapping of reads with an EM algorithm to iteratively improve the empirical likelihood of the allocation of short reads. Mapping using LAST takes into account the per-base accuracy of the read, plus insertions and deletions, plus anticipated occasional errors or SNPs with respect to the parent genome. The probabilistic EM algorithm iteratively allocates reads based on the proportion of reads mapping within windows on the previous cycle, along with any prior information on where the read best maps. The methods are illustrated with FAIRE ENCODE data looking at the very important head-to-head gene combination of TP53 and WRAP53.
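The EM allocation step can be sketched on toy data: the E-step spreads each read across its candidate windows in proportion to the current window weights times its alignment probability, and the M-step re-estimates window weights from those fractional allocations. This is the generic rescue of multi-mapping reads, not the LAST pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_windows, n_reads = 5, 2000
true_weights = np.array([0.5, 0.25, 0.15, 0.07, 0.03])

reads = []                                          # per read: {window: alignment prob}
for _ in range(n_reads):
    w_true = int(rng.choice(n_windows, p=true_weights))
    decoys = rng.choice(n_windows, size=int(rng.integers(0, 3)), replace=False)
    cands = sorted({w_true, *(int(d) for d in decoys)})
    reads.append({w: (0.9 if w == w_true else 0.6) for w in cands})

weights = np.full(n_windows, 1.0 / n_windows)       # flat starting window weights
for _ in range(30):                                 # EM iterations
    alloc = np.zeros(n_windows)
    for align_p in reads:                           # E-step: fractional allocation
        ws = np.fromiter(align_p, dtype=int)
        resp = weights[ws] * np.fromiter(align_p.values(), dtype=float)
        alloc[ws] += resp / resp.sum()
    weights = alloc / alloc.sum()                   # M-step: re-estimate weights

print("true  :", true_weights)
print("EM fit:", np.round(weights, 3))
```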
1909.02297
Pedro Mediano
Pedro A.M. Mediano, Fernando Rosas, Robin L. Carhart-Harris, Anil K. Seth, Adam B. Barrett
Beyond integrated information: A taxonomy of information dynamics phenomena
null
null
null
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most information dynamics and statistical causal analysis frameworks rely on the common intuition that causal interactions are intrinsically pairwise -- every 'cause' variable has an associated 'effect' variable, so that a 'causal arrow' can be drawn between them. However, analyses that depict interdependencies as directed graphs fail to discriminate the rich variety of modes of information flow that can coexist within a system. This, in turn, creates problems with attempts to operationalise the concepts of 'dynamical complexity' or 'integrated information'. To address this shortcoming, we combine concepts of partial information decomposition and integrated information, and obtain what we call Integrated Information Decomposition, or $\Phi$ID. We show how $\Phi$ID paves the way for more detailed analyses of interdependencies in multivariate time series, and sheds light on collective modes of information dynamics that have not been reported before. Additionally, $\Phi$ID reveals that what is typically referred to as 'integration' is actually an aggregate of several heterogeneous phenomena. Furthermore, $\Phi$ID can be used to formulate new, tailored measures of integrated information, as well as to understand and alleviate the limitations of existing measures.
[ { "created": "Thu, 5 Sep 2019 10:11:00 GMT", "version": "v1" } ]
2019-09-06
[ [ "Mediano", "Pedro A. M.", "" ], [ "Rosas", "Fernando", "" ], [ "Carhart-Harris", "Robin L.", "" ], [ "Seth", "Anil K.", "" ], [ "Barrett", "Adam B.", "" ] ]
Most information dynamics and statistical causal analysis frameworks rely on the common intuition that causal interactions are intrinsically pairwise -- every 'cause' variable has an associated 'effect' variable, so that a 'causal arrow' can be drawn between them. However, analyses that depict interdependencies as directed graphs fail to discriminate the rich variety of modes of information flow that can coexist within a system. This, in turn, creates problems with attempts to operationalise the concepts of 'dynamical complexity' or 'integrated information'. To address this shortcoming, we combine concepts of partial information decomposition and integrated information, and obtain what we call Integrated Information Decomposition, or $\Phi$ID. We show how $\Phi$ID paves the way for more detailed analyses of interdependencies in multivariate time series, and sheds light on collective modes of information dynamics that have not been reported before. Additionally, $\Phi$ID reveals that what is typically referred to as 'integration' is actually an aggregate of several heterogeneous phenomena. Furthermore, $\Phi$ID can be used to formulate new, tailored measures of integrated information, as well as to understand and alleviate the limitations of existing measures.
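$\Phi$ID itself decomposes the time-delayed mutual information (TDMI) of a system into finer atoms; as a hedged warm-up, this sketch computes only the coarse whole-minus-sum quantity that such decompositions refine, for a Gaussian VAR(1) process where mutual information has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.4, 0.3], [0.3, 0.4]])            # VAR(1) coupling matrix
x = np.zeros(2)
traj = np.empty((50_000, 2))
for s in range(50_000):
    x = A @ x + rng.normal(size=2)
    traj[s] = x

past, future = traj[:-1], traj[1:]

def gaussian_mi(a, b):
    """I(a;b) in nats for jointly Gaussian columns, via log-determinants."""
    det = lambda m: np.linalg.det(np.atleast_2d(m))
    ca, cb = np.cov(a.T), np.cov(b.T)
    cab = np.cov(np.hstack([a, b]).T)
    return 0.5 * np.log(det(ca) * det(cb) / det(cab))

whole = gaussian_mi(past, future)                 # joint time-delayed MI
parts = sum(gaussian_mi(past[:, [i]], future[:, [i]]) for i in range(2))
print(f"TDMI={whole:.3f}  sum of marginal TDMIs={parts:.3f}  "
      f"whole-minus-sum={whole - parts:+.3f}")
```

The whole-minus-sum residue lumps together the heterogeneous phenomena (synergy, transfer, redundancy) that the abstract says $\Phi$ID separates into distinct atoms.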
2101.06294
Halim Maaroufi Hal
Halim Maaroufi
Interactions of SARS-CoV-2 spike protein and transient receptor potential (TRP) cation channels could explain smell, taste, and/or chemesthesis disorders
46 pages, 2 tables, 4 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
A significant subset of patients infected by SARS-CoV-2 presents olfactory, taste, and/or chemesthesis (OTC) disorders (OTCD). These patients recover rapidly, ruling out damage to sensory nerves. Having discovered that the S protein contains two ankyrin repeat binding motifs (S-ARBMs) and that some TRP cation channels implicated in OTC have ankyrin repeat domains (TRPs-ARDs), I hypothesized that interaction of S-ARBMs and TRPs-ARDs could dysregulate the function of the latter and thus explain OTCD. Of note, some TRPs-ARDs are expressed in the olfactory epithelium, taste buds, trigeminal neurons in the oronasal cavity and vagal neurons in the trachea/lungs. Furthermore, this hypothesis is supported by studies that have shown: (i) respiratory viruses interact with TRPA1 and TRPV1 on sensory nerves and epithelial cells in the airways, (ii) the respiratory pathophysiology in COVID-19 patients is similar to lung injuries produced by the sensitization of TRPV1 and TRPV4, and (iii) resolvins D1 and D2, shown to reduce SARS-CoV-2-induced inflammation, directly inhibit TRPA1, TRPV1, TRPV3 and TRPV4. Herein, results of blind dockings of S-ARBMs, 408-RQIAPG-413 (in the RBD but distal from the ACE-2 binding region) and 905-RFNGIG-910 (in HR1), into TRPA1, TRPV1 and TRPV4 suggest that S-ARBMs interact with ankyrin repeat 6 of TRPA1 near an active site, and with ankyrin repeats 3-4 of TRPV1 near cysteine 258, which is thought to be implicated in the formation of an inter-subunit disulfide bond. These findings suggest that S-ARBMs affect TRPA1, TRPV1 and TRPV4 function by interfering with channel assembly and trafficking. After experimental confirmation of these interactions, pharmacological manipulation (probably inhibition) of TRPs-ARDs could be considered among possible preventive treatments against COVID-19, to control or mitigate the sustained pro-inflammatory response.
[ { "created": "Fri, 15 Jan 2021 20:29:30 GMT", "version": "v1" } ]
2021-01-19
[ [ "Maaroufi", "Halim", "" ] ]
A significant subset of patients infected by SARS-CoV-2 presents olfactory, taste, and/or chemesthesis (OTC) disorders (OTCD). These patients recover rapidly, ruling out damage to sensory nerves. Having discovered that the S protein contains two ankyrin repeat binding motifs (S-ARBMs) and that some TRP cation channels implicated in OTC have ankyrin repeat domains (TRPs-ARDs), I hypothesized that interaction of S-ARBMs and TRPs-ARDs could dysregulate the function of the latter and thus explain OTCD. Of note, some TRPs-ARDs are expressed in the olfactory epithelium, taste buds, trigeminal neurons in the oronasal cavity and vagal neurons in the trachea/lungs. Furthermore, this hypothesis is supported by studies that have shown: (i) respiratory viruses interact with TRPA1 and TRPV1 on sensory nerves and epithelial cells in the airways, (ii) the respiratory pathophysiology in COVID-19 patients is similar to lung injuries produced by the sensitization of TRPV1 and TRPV4, and (iii) resolvins D1 and D2, shown to reduce SARS-CoV-2-induced inflammation, directly inhibit TRPA1, TRPV1, TRPV3 and TRPV4. Herein, results of blind dockings of S-ARBMs, 408-RQIAPG-413 (in the RBD but distal from the ACE-2 binding region) and 905-RFNGIG-910 (in HR1), into TRPA1, TRPV1 and TRPV4 suggest that S-ARBMs interact with ankyrin repeat 6 of TRPA1 near an active site, and with ankyrin repeats 3-4 of TRPV1 near cysteine 258, which is thought to be implicated in the formation of an inter-subunit disulfide bond. These findings suggest that S-ARBMs affect TRPA1, TRPV1 and TRPV4 function by interfering with channel assembly and trafficking. After experimental confirmation of these interactions, pharmacological manipulation (probably inhibition) of TRPs-ARDs could be considered among possible preventive treatments against COVID-19, to control or mitigate the sustained pro-inflammatory response.
1203.0072
Michael Woodhams
Michael Woodhams, Dorothy A. Steane, Rebecca C. Jones, Dean Nicolle, Vincent Moulton, Barbara R. Holland
Novel Distances for Dollo Data
null
null
null
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate distances on binary (presence/absence) data in the context of a Dollo process, where a trait can only arise once on a phylogenetic tree but may be lost many times. We introduce a novel distance, the Additive Dollo Distance (ADD), which is consistent for data generated under a Dollo model, and show that it has some useful theoretical properties including an intriguing link to the LogDet distance. Simulations of Dollo data are used to compare a number of binary distances including ADD, LogDet, Nei Li and some simple, but to our knowledge previously unstudied, variations on common binary distances. The simulations suggest that ADD outperforms other distances on Dollo data. Interestingly, we found that the LogDet distance performs poorly in the context of a Dollo process, which may have implications for its use in connection with conditioned genome reconstruction. We apply the ADD to two Diversity Arrays Technology (DArT) datasets, one that broadly covers Eucalyptus species and one that focuses on the Eucalyptus series Adnataria. We also reanalyse gene family presence/absence data on bacteria from the COG database and compare the results to previous phylogenies estimated using the conditioned genome reconstruction approach.
[ { "created": "Thu, 1 Mar 2012 01:50:59 GMT", "version": "v1" } ]
2012-03-02
[ [ "Woodhams", "Michael", "" ], [ "Steane", "Dorothy A.", "" ], [ "Jones", "Rebecca C.", "" ], [ "Nicolle", "Dean", "" ], [ "Moulton", "Vincent", "" ], [ "Holland", "Barbara R.", "" ] ]
We investigate distances on binary (presence/absence) data in the context of a Dollo process, where a trait can only arise once on a phylogenetic tree but may be lost many times. We introduce a novel distance, the Additive Dollo Distance (ADD), which is consistent for data generated under a Dollo model, and show that it has some useful theoretical properties including an intriguing link to the LogDet distance. Simulations of Dollo data are used to compare a number of binary distances including ADD, LogDet, Nei Li and some simple, but to our knowledge previously unstudied, variations on common binary distances. The simulations suggest that ADD outperforms other distances on Dollo data. Interestingly, we found that the LogDet distance performs poorly in the context of a Dollo process, which may have implications for its use in connection with conditioned genome reconstruction. We apply the ADD to two Diversity Arrays Technology (DArT) datasets, one that broadly covers Eucalyptus species and one that focuses on the Eucalyptus series Adnataria. We also reanalyse gene family presence/absence data on bacteria from the COG database and compare the results to previous phylogenies estimated using the conditioned genome reconstruction approach.
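ADD's definition lives in the paper; the LogDet (paralinear) distance it is benchmarked against has a standard closed form, sketched here for binary presence/absence characters between two taxa:

```python
import numpy as np

def logdet_distance(x, y):
    """Paralinear/LogDet distance for two binary (0/1) character vectors."""
    x, y = np.asarray(x), np.asarray(y)
    J = np.zeros((2, 2))                         # joint state-frequency matrix
    for a in (0, 1):
        for b in (0, 1):
            J[a, b] = np.mean((x == a) & (y == b))
    f, g = J.sum(axis=1), J.sum(axis=0)          # marginal state frequencies
    return -0.5 * (np.log(np.linalg.det(J)) - 0.5 * np.log(np.prod(f) * np.prod(g)))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 500)
y = np.where(rng.random(500) < 0.1, 1 - x, x)    # y = x with 10% of states flipped
print(f"LogDet distance = {logdet_distance(x, y):.3f}")
```

Identical sequences give a distance of zero; under a Dollo process, where losses accumulate asymmetrically, this distance can behave poorly — the paper's motivation for ADD.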
1411.2820
Sharon Lee
Sharon X. Lee, Geoffrey J. McLachlan, Saumyadipta Pyne
Supervised Classification of Flow Cytometric Samples via the Joint Clustering and Matching (JCM) Procedure
null
null
null
null
q-bio.QM stat.ME stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the use of the Joint Clustering and Matching (JCM) procedure for the supervised classification of a flow cytometric sample with respect to a number of predefined classes of such samples. The JCM procedure has been proposed as a method for the unsupervised classification of cells within a sample into a number of clusters and in the case of multiple samples, the matching of these clusters across the samples. The two tasks of clustering and matching of the clusters are performed simultaneously within the JCM framework. In this paper, we consider the case where there is a number of distinct classes of samples whose class of origin is known, and the problem is to classify a new sample of unknown class of origin to one of these predefined classes. For example, the different classes might correspond to the types of a particular disease or to the various health outcomes of a patient subsequent to a course of treatment. We show and demonstrate on some real datasets how the JCM procedure can be used to carry out this supervised classification task. A mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample with the components in the mixture model corresponding to the various populations of cells in the composition of the sample. For each class of samples, a class template is formed by the adoption of random-effects terms to model the inter-sample variation within a class. The classification of a new unclassified sample is undertaken by assigning the unclassified sample to the class that minimizes the Kullback-Leibler distance between its fitted mixture density and each class density provided by the class templates.
[ { "created": "Tue, 11 Nov 2014 14:22:32 GMT", "version": "v1" } ]
2014-11-12
[ [ "Lee", "Sharon X.", "" ], [ "McLachlan", "Geoffrey J.", "" ], [ "Pyne", "Saumyadipta", "" ] ]
We consider the use of the Joint Clustering and Matching (JCM) procedure for the supervised classification of a flow cytometric sample with respect to a number of predefined classes of such samples. The JCM procedure has been proposed as a method for the unsupervised classification of cells within a sample into a number of clusters and in the case of multiple samples, the matching of these clusters across the samples. The two tasks of clustering and matching of the clusters are performed simultaneously within the JCM framework. In this paper, we consider the case where there is a number of distinct classes of samples whose class of origin is known, and the problem is to classify a new sample of unknown class of origin to one of these predefined classes. For example, the different classes might correspond to the types of a particular disease or to the various health outcomes of a patient subsequent to a course of treatment. We show and demonstrate on some real datasets how the JCM procedure can be used to carry out this supervised classification task. A mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample with the components in the mixture model corresponding to the various populations of cells in the composition of the sample. For each class of samples, a class template is formed by the adoption of random-effects terms to model the inter-sample variation within a class. The classification of a new unclassified sample is undertaken by assigning the unclassified sample to the class that minimizes the Kullback-Leibler distance between its fitted mixture density and each class density provided by the class templates.
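The classification step can be sketched with plain Gaussian mixtures standing in for JCM's skew mixtures with random effects: fit one mixture per class template, fit one to the new sample, and assign the class minimizing a Monte-Carlo estimate of the Kullback-Leibler distance:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
def cells(center, n=500):                 # stand-in for one cell population
    return rng.normal(center, 1.0, size=(n, 3))

templates = {
    "classA": GaussianMixture(2, random_state=0).fit(np.vstack([cells(0.0), cells(3.0)])),
    "classB": GaussianMixture(2, random_state=0).fit(np.vstack([cells(1.0), cells(5.0)])),
}

new_sample = np.vstack([cells(0.2), cells(3.1)])   # unknown sample, actually class-A-like
p = GaussianMixture(2, random_state=0).fit(new_sample)

draws, _ = p.sample(5000)                          # Monte-Carlo KL(p || template)
logp = p.score_samples(draws)
for name, q in templates.items():
    kl = np.mean(logp - q.score_samples(draws))
    print(f"KL(new || {name}) = {kl:.3f}")
```

The smaller KL value (here for classA) determines the assigned class; JCM's skew components and random-effects shifts refine exactly this comparison.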
1311.6950
Sanjiv Dwivedi
Sarika Jalan and Sanjiv K. Dwivedi
Balanced condition in networks leads to Weibull statistics
7 pages, 10 figures
Phys. Rev. E 89, 062718 (2014)
10.1103/PhysRevE.89.062718
null
q-bio.NC cond-mat.dis-nn nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The importance of the balance in inhibitory and excitatory couplings in the brain has increasingly been realized. Despite the key role played by inhibitory-excitatory couplings in the functioning of brain networks, the impact of a balanced condition on the stability properties of underlying networks remains largely unknown. We investigate properties of the largest eigenvalues of networks having such couplings, and find that they follow completely different statistics when in the balanced situation. Based on numerical simulations, we demonstrate that the transition from Weibull to Fr\'echet via the Gumbel distribution can be controlled by the variance of the column sum of the adjacency matrix, which depends monotonically on the denseness of the underlying network. As a balanced condition is imposed, the largest real part of the eigenvalue emulates a transition to the generalized extreme value statistics, independent of the inhibitory connection probability. Furthermore, the transition to the Weibull statistics and the small-world transition occur at the same rewiring probability, reflecting a more stable system.
[ { "created": "Wed, 27 Nov 2013 12:23:57 GMT", "version": "v1" } ]
2015-06-18
[ [ "Jalan", "Sarika", "" ], [ "Dwivedi", "Sanjiv K.", "" ] ]
The importance of the balance in inhibitory and excitatory couplings in the brain has increasingly been realized. Despite the key role played by inhibitory-excitatory couplings in the functioning of brain networks, the impact of a balanced condition on the stability properties of underlying networks remains largely unknown. We investigate properties of the largest eigenvalues of networks having such couplings, and find that they follow completely different statistics when in the balanced situation. Based on numerical simulations, we demonstrate that the transition from Weibull to Fr\'echet via the Gumbel distribution can be controlled by the variance of the column sum of the adjacency matrix, which depends monotonically on the denseness of the underlying network. As a balanced condition is imposed, the largest real part of the eigenvalue emulates a transition to the generalized extreme value statistics, independent of the inhibitory connection probability. Furthermore, the transition to the Weibull statistics and the small-world transition occur at the same rewiring probability, reflecting a more stable system.
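The ensemble experiment can be sketched as follows; how the balanced condition is imposed in the paper is approximated here by forcing column sums of the coupling matrix to zero, and fitting the extreme-value families (Weibull/Gumbel/Frechet) is left to e.g. scipy.stats.genextreme:

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_real_eig(n=100, p=0.1, balanced=False):
    """Largest real part over eigenvalues of a random +/-1 coupling matrix."""
    A = (rng.random((n, n)) < p) * np.where(rng.random((n, n)) < 0.5, 1.0, -1.0)
    if balanced:
        A -= A.mean(axis=0, keepdims=True)        # force zero column sums
    return np.linalg.eigvals(A).real.max()

for balanced in (False, True):
    vals = [largest_real_eig(balanced=balanced) for _ in range(200)]
    print(f"balanced={balanced}: mean={np.mean(vals):.2f}  var={np.var(vals):.3f}")
```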
q-bio/0411039
Albert Diaz-Guilera
Luis A.N. Amaral (1), Albert Diaz-Guilera (1,2,3), Andre A. Moreira (1), Ary L. Goldberger (2), Lewis A. Lipsitz (4) ((1) Department of Chemical and Biological Engineering, Northwestern University (2) Cardiovascular Division, Beth Israel Deaconess Medical Center, Harvard Medical School (3) Dept. Fisica Fonamental, Universitat de Barcelona (4) Hebrew Rehabilitation Center for the Aged, Harvard Medical School)
Emergence of Complex Dynamics in a Simple Model of Signaling Networks
null
Proc. Nat. Acad. Sci. USA 101 (2004) 15551-15555
10.1073/pnas.0404843101
null
q-bio.OT cond-mat.soft physics.bio-ph
null
A variety of physical, social and biological systems generate complex fluctuations with correlations across multiple time scales. In physiologic systems, these long-range correlations are altered with disease and aging. Such correlated fluctuations in living systems have been attributed to the interaction of multiple control systems; however, the mechanisms underlying this behavior remain unknown. Here, we show that a number of distinct classes of dynamical behaviors, including correlated fluctuations characterized by $1/f$-scaling of their power spectra, can emerge in networks of simple signaling units. We find that under general conditions, complex dynamics can be generated by systems fulfilling two requirements: i) a ``small-world'' topology and ii) the presence of noise. Our findings support two notable conclusions: first, complex physiologic-like signals can be modeled with a minimal set of components; and second, systems fulfilling conditions (i) and (ii) are robust to some degree of degradation, i.e., they will still be able to generate $1/f$-dynamics.
[ { "created": "Fri, 19 Nov 2004 09:35:08 GMT", "version": "v1" } ]
2009-11-10
[ [ "Amaral", "Luis A. N.", "" ], [ "Diaz-Guilera", "Albert", "" ], [ "Moreira", "Andre A.", "" ], [ "Goldberger", "Ary L.", "" ], [ "Lipsitz", "Lewis A.", "" ] ]
A variety of physical, social and biological systems generate complex fluctuations with correlations across multiple time scales. In physiologic systems, these long-range correlations are altered with disease and aging. Such correlated fluctuations in living systems have been attributed to the interaction of multiple control systems; however, the mechanisms underlying this behavior remain unknown. Here, we show that a number of distinct classes of dynamical behaviors, including correlated fluctuations characterized by $1/f$-scaling of their power spectra, can emerge in networks of simple signaling units. We find that under general conditions, complex dynamics can be generated by systems fulfilling two requirements: i) a ``small-world'' topology and ii) the presence of noise. Our findings support two notable conclusions: first, complex physiologic-like signals can be modeled with a minimal set of components; and second, systems fulfilling conditions (i) and (ii) are robust to some degree of degradation, i.e., they will still be able to generate $1/f$-dynamics.
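A sketch in the spirit of the model: binary signaling units on a Watts-Strogatz small-world graph, each noisily copying the majority state of its neighbours, with the power spectrum of the global activity then examined for 1/f-like scaling. The update rule and parameters are illustrative guesses, not the paper's exact model:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_units, n_steps, noise = 200, 2048, 0.1
G = nx.watts_strogatz_graph(n_units, 4, 0.1, seed=0)   # small-world topology
nbrs = [list(G.neighbors(v)) for v in G.nodes()]

state = rng.integers(0, 2, n_units).astype(float)
activity = np.empty(n_steps)
for step in range(n_steps):
    new = np.empty(n_units)
    for v in range(n_units):
        maj = 1.0 if state[nbrs[v]].mean() > 0.5 else 0.0    # copy local majority...
        new[v] = maj if rng.random() > noise else 1.0 - maj  # ...unless noise flips it
    state = new
    activity[step] = state.mean()                            # global signal

power = np.abs(np.fft.rfft(activity - activity.mean())) ** 2
freq = np.fft.rfftfreq(n_steps)
slope = np.polyfit(np.log(freq[1:]), np.log(power[1:]), 1)[0]
print(f"log-log spectral slope ~ {slope:.2f} (a 1/f spectrum would give -1)")
```

Removing either ingredient — rewiring the graph to a regular ring, or setting the noise to zero — flattens or trivializes the spectrum, matching the abstract's two requirements.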
2405.07772
Samuel Gornard-Laidet
Samuel Gornard (EGCE), Pascaline Venon, Florian Lasfont, Thierry Balliau, Laure Marie-Paule Kaiser-Arnauld, Florence Mougel
Characterizing virulence differences in a parasitoid wasp through comparative transcriptomic and proteomic
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Two strains of the endoparasitoid Cotesia typhae present differential parasitism success on the host, Sesamia nonagrioides. One is virulent on both permissive and resistant host populations, and the other only on the permissive host. This interaction provides a very interesting framework for studying virulence factors. Here, we used a combination of comparative transcriptomic and proteomic analyses to unravel the molecular basis underlying virulence differences between the strains. Results: First, we report that virulence genes are mostly expressed during the nymphal stage of the parasitoid. In particular, proviral genes are broadly up-regulated at this stage, although their expression is only expected in the host. Parasitoid gene expression in the host increases with time, indicating the production of more virulence factors. Secondly, comparison between strains reveals differences in venom composition, with 12 proteins showing differential abundance. Proviral expression in the host displays strong temporal variability, along with differential patterns between strains. Notably, a subset of proviral genes, including protein-tyrosine phosphatases, is specifically over-expressed in the resistant host parasitized by the less virulent strain, 24 hours after parasitism. This result particularly hints at host modulation of proviral expression. Conclusions: This study sheds light on the temporal expression of virulence factors of Cotesia typhae, both in the host and in the parasitoid. It also identifies potential molecular candidates driving differences in parasitism success between two strains. Together, those findings provide a path for further exploration of virulence mechanisms in parasitoid wasps, and offer insights into host-parasitoid coevolution.
[ { "created": "Mon, 13 May 2024 14:17:35 GMT", "version": "v1" } ]
2024-05-14
[ [ "Gornard", "Samuel", "", "EGCE" ], [ "Venon", "Pascaline", "" ], [ "Lasfont", "Florian", "" ], [ "Balliau", "Thierry", "" ], [ "Kaiser-Arnauld", "Laure Marie-Paule", "" ], [ "Mougel", "Florence", "" ] ]
Background: Two strains of the endoparasitoid Cotesia typhae present a differential parasitism success on the host, Sesamia nonagrioides. One is virulent on both permissive and resistant host populations, and the other only on the permissive host. This interaction provides a very interesting framework for studying virulence factors. Here, we used a combination of comparative transcriptomic and proteomic analyses to unravel the molecular basis underlying virulence differences between the strains. Results: First, we report that virulence genes are mostly expressed during the nymphal stage of the parasitoid. In particular, proviral genes are broadly up-regulated at this stage, while their expression is only expected in the host. Parasitoid gene expression in the host increases with time, indicating the production of more virulence factors. Secondly, comparison between strains reveals differences in venom composition, with 12 proteins showing differential abundance. Proviral expression in the host displays strong temporal variability, along with differential patterns between strains. Notably, a subset of proviral genes, including protein-tyrosine phosphatases, is specifically over-expressed in the resistant host parasitized by the less virulent strain, 24 hours after parasitism. This result particularly hints at host modulation of proviral expression. Conclusions: This study sheds light on the temporal expression of virulence factors of Cotesia typhae, both in the host and in the parasitoid. It also identifies potential molecular candidates driving differences in parasitism success between two strains. Together, these findings provide a path for further exploration of virulence mechanisms in parasitoid wasps and offer insights into host-parasitoid coevolution.
2305.10472
Vincent Miele
Marine Desprez, Vincent Miele and Olivier Gimenez
Nine tips for ecologists using machine learning
null
null
null
null
q-bio.PE cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Due to their high predictive performance and flexibility, machine learning models are an appropriate and efficient tool for ecologists. However, implementing a machine learning model is not yet a trivial task and may seem intimidating to ecologists with no previous experience in this area. Here we provide a series of tips to help ecologists in implementing machine learning models. We focus on classification problems as many ecological studies aim to assign data into predefined classes such as ecological states or biological entities. Each of the nine tips identifies a common error, trap or challenge in developing machine learning models and provides recommendations to facilitate their use in ecological studies.
[ { "created": "Wed, 17 May 2023 15:41:08 GMT", "version": "v1" }, { "created": "Fri, 26 May 2023 07:38:55 GMT", "version": "v2" } ]
2023-05-29
[ [ "Desprez", "Marine", "" ], [ "Miele", "Vincent", "" ], [ "Gimenez", "Olivier", "" ] ]
Due to their high predictive performance and flexibility, machine learning models are an appropriate and efficient tool for ecologists. However, implementing a machine learning model is not yet a trivial task and may seem intimidating to ecologists with no previous experience in this area. Here we provide a series of tips to help ecologists in implementing machine learning models. We focus on classification problems as many ecological studies aim to assign data into predefined classes such as ecological states or biological entities. Each of the nine tips identifies a common error, trap or challenge in developing machine learning models and provides recommendations to facilitate their use in ecological studies.
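As a concrete illustration of the kind of recommendation such papers collect, the hedged sketch below evaluates an imbalanced ecological classifier with stratified cross-validation rather than training accuracy. The data are synthetic placeholders, and the specific estimator and scoring choices are assumptions, not quoted from the paper.

```python
# Minimal sketch of one recurring recommendation for ecological
# classification problems: report cross-validated, class-balance-aware
# performance. Data here are synthetic, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=8, weights=[0.8, 0.2],
                           random_state=0)            # imbalanced classes
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```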
1405.1260
Xiang-Ping Jia
Xiang-Ping Jia and Hong Sun
Recognizing Cancer via Somatic and Organic Evolution
28 pages, 5 figures
null
null
null
q-bio.PE q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fitness of the somatic cells of a metazoan, i.e., their ability to proliferate and survive, depends on the microenvironment. In somatic evolution, a mutated cell in a tissue clonally expands abnormally because its fitness is higher than that of normal cells in the corresponding microenvironment. In this study, we propose the cancer cell hypothesis that cancer cells are mutated cells with two characteristics: clonal expansion and damage to the microenvironment through behaviours such as producing more metabolic toxins than normal cells. This model provides an explanation for the nature of invasion and metastasis, which are still controversial. In addition, we theoretically reason that normal cells have almost the highest fitness in healthy microenvironments as a result of long-term organic evolution. This inspires a new kind of cancer therapy: improving the microenvironment so that cancer cells become lower in fitness than normal cells, thereby halting the growth of tumours. This general therapy relies on a mechanism differing from chemotherapy and targeted therapy.
[ { "created": "Tue, 6 May 2014 13:22:26 GMT", "version": "v1" }, { "created": "Fri, 4 Sep 2015 11:24:05 GMT", "version": "v2" } ]
2015-09-07
[ [ "Jia", "Xiang-Ping", "" ], [ "Sun", "Hong", "" ] ]
The fitness of the somatic cells of a metazoan, i.e., their ability to proliferate and survive, depends on the microenvironment. In somatic evolution, a mutated cell in a tissue clonally expands abnormally because its fitness is higher than that of normal cells in the corresponding microenvironment. In this study, we propose the cancer cell hypothesis that cancer cells are mutated cells with two characteristics: clonal expansion and damage to the microenvironment through behaviours such as producing more metabolic toxins than normal cells. This model provides an explanation for the nature of invasion and metastasis, which are still controversial. In addition, we theoretically reason that normal cells have almost the highest fitness in healthy microenvironments as a result of long-term organic evolution. This inspires a new kind of cancer therapy: improving the microenvironment so that cancer cells become lower in fitness than normal cells, thereby halting the growth of tumours. This general therapy relies on a mechanism differing from chemotherapy and targeted therapy.
2405.10486
Patrick Vincent Lubenia
Patrick Vincent N. Lubenia, Eduardo R. Mendoza, Angelyn R. Lao
Comparison of reaction networks of insulin signaling
18 pages, 0 figure
null
null
null
q-bio.MN
http://creativecommons.org/publicdomain/zero/1.0/
Understanding the insulin signaling cascade provides insights on the underlying mechanisms of biological phenomena such as insulin resistance, diabetes, Alzheimer's disease, and cancer. For this reason, previous studies utilized chemical reaction network theory to perform comparative analyses of reaction networks of insulin signaling in healthy (INSMS: INSulin Metabolic Signaling) and diabetic cells (INRES: INsulin RESistance). This study extends these analyses using various methods which give further insights regarding insulin signaling. Using embedded networks, we discuss evidence of the presence of a structural "bifurcation" in the signaling process between INSMS and INRES. Concordance profiles of INSMS and INRES show that both have a high propensity to remain monostationary. Moreover, the concordance properties allow us to present heuristic evidence that INRES has a higher level of stability beyond its monostationarity. Finally, we discuss a new way of analyzing reaction networks through network translation. This method gives rise to three new insights: (i) each stoichiometric class of INSMS and INRES contains a unique positive equilibrium; (ii) any positive equilibrium of INSMS is exponentially stable and is a global attractor in its stoichiometric class; and (iii) any positive equilibrium of INRES is locally asymptotically stable. These results open up opportunities for collaboration with experimental biologists to understand insulin signaling better.
[ { "created": "Fri, 17 May 2024 01:19:51 GMT", "version": "v1" } ]
2024-05-20
[ [ "Lubenia", "Patrick Vincent N.", "" ], [ "Mendoza", "Eduardo R.", "" ], [ "Lao", "Angelyn R.", "" ] ]
Understanding the insulin signaling cascade provides insights on the underlying mechanisms of biological phenomena such as insulin resistance, diabetes, Alzheimer's disease, and cancer. For this reason, previous studies utilized chemical reaction network theory to perform comparative analyses of reaction networks of insulin signaling in healthy (INSMS: INSulin Metabolic Signaling) and diabetic cells (INRES: INsulin RESistance). This study extends these analyses using various methods which give further insights regarding insulin signaling. Using embedded networks, we discuss evidence of the presence of a structural "bifurcation" in the signaling process between INSMS and INRES. Concordance profiles of INSMS and INRES show that both have a high propensity to remain monostationary. Moreover, the concordance properties allow us to present heuristic evidence that INRES has a higher level of stability beyond its monostationarity. Finally, we discuss a new way of analyzing reaction networks through network translation. This method gives rise to three new insights: (i) each stoichiometric class of INSMS and INRES contains a unique positive equilibrium; (ii) any positive equilibrium of INSMS is exponentially stable and is a global attractor in its stoichiometric class; and (iii) any positive equilibrium of INRES is locally asymptotically stable. These results open up opportunities for collaboration with experimental biologists to understand insulin signaling better.
1008.0523
Tsvi Tlusty
Dror Sagi, Tsvi Tlusty and Joel Stavans
High fidelity of RecA-catalyzed recombination: a watchdog of genetic diversity
http://www.weizmann.ac.il/complex/tlusty/papers/NuclAcidRes2006.pdf http://nar.oxfordjournals.org/cgi/content/short/34/18/5021
Nucleic Acids Research, 2006, Vol. 34, No. 18 5021-5031
10.1093/nar/gkl586
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homologous recombination plays a key role in generating genetic diversity, while maintaining protein functionality. The mechanisms by which RecA enables a single-stranded segment of DNA to recognize a homologous tract within a whole genome are poorly understood. The scale by which homology recognition takes place is of a few tens of base pairs, after which the quest for homology is over. To study the mechanism of homology recognition, RecA-promoted homologous recombination between short DNA oligomers with different degrees of heterology was studied in vitro, using fluorescence resonant energy transfer. RecA can detect single mismatches at the initial stages of recombination, and the efficiency of recombination is strongly dependent on the location and distribution of mismatches. Mismatches near the 5' end of the incoming strand have a minute effect, whereas mismatches near the 3' end hinder strand exchange dramatically. There is a characteristic DNA length above which the sensitivity to heterology decreases sharply. Experiments with competitor sequences with varying degrees of homology yield information about the process of homology search and synapse lifetime. The exquisite sensitivity to mismatches and the directionality in the exchange process support a mechanism for homology recognition that can be modeled as a kinetic proofreading cascade.
[ { "created": "Tue, 3 Aug 2010 11:24:16 GMT", "version": "v1" } ]
2010-08-04
[ [ "Sagi", "Dror", "" ], [ "Tlusty", "Tsvi", "" ], [ "Stavans", "Joel", "" ] ]
Homologous recombination plays a key role in generating genetic diversity, while maintaining protein functionality. The mechanisms by which RecA enables a single-stranded segment of DNA to recognize a homologous tract within a whole genome are poorly understood. The scale by which homology recognition takes place is of a few tens of base pairs, after which the quest for homology is over. To study the mechanism of homology recognition, RecA-promoted homologous recombination between short DNA oligomers with different degrees of heterology was studied in vitro, using fluorescence resonant energy transfer. RecA can detect single mismatches at the initial stages of recombination, and the efficiency of recombination is strongly dependent on the location and distribution of mismatches. Mismatches near the 5' end of the incoming strand have a minute effect, whereas mismatches near the 3' end hinder strand exchange dramatically. There is a characteristic DNA length above which the sensitivity to heterology decreases sharply. Experiments with competitor sequences with varying degrees of homology yield information about the process of homology search and synapse lifetime. The exquisite sensitivity to mismatches and the directionality in the exchange process support a mechanism for homology recognition that can be modeled as a kinetic proofreading cascade.
2401.02989
Robin Zbinden
Robin Zbinden, Nina van Tiel, Benjamin Kellenberger, Lloyd Hughes, Devis Tuia
On the selection and effectiveness of pseudo-absences for species distribution modeling with deep learning
null
Ecological Informatics, Volume 81, 2024, 102623
10.1016/j.ecoinf.2024.102623
null
q-bio.QM cs.LG q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Species distribution modeling is a highly versatile tool for understanding the intricate relationship between environmental conditions and species occurrences. However, the available data often lack information on confirmed species absence and are limited to opportunistically sampled, presence-only observations. To overcome this limitation, a common approach is to employ pseudo-absences, which are specific geographic locations designated as negative samples. While pseudo-absences are well-established for single-species distribution models, their application in the context of multi-species neural networks remains underexplored. Notably, the significant class imbalance between species presences and pseudo-absences is often left unaddressed. Moreover, the existence of different types of pseudo-absences (e.g., random and target-group background points) adds complexity to the selection process. Determining the optimal combination of pseudo-absence types is difficult and depends on the characteristics of the data, particularly considering that certain types of pseudo-absences can be used to mitigate geographic biases. In this paper, we demonstrate that these challenges can be effectively tackled by integrating pseudo-absences in the training of multi-species neural networks through modifications to the loss function. This adjustment involves assigning different weights to the distinct terms of the loss function, thereby addressing both the class imbalance and the choice of pseudo-absence types. Additionally, we propose a strategy to set these loss weights using spatial block cross-validation with presence-only data. We evaluate our approach using a benchmark dataset containing independent presence-absence data from six different regions and report improved results when compared to competing approaches.
[ { "created": "Wed, 3 Jan 2024 16:06:30 GMT", "version": "v1" } ]
2024-06-18
[ [ "Zbinden", "Robin", "" ], [ "van Tiel", "Nina", "" ], [ "Kellenberger", "Benjamin", "" ], [ "Hughes", "Lloyd", "" ], [ "Tuia", "Devis", "" ] ]
Species distribution modeling is a highly versatile tool for understanding the intricate relationship between environmental conditions and species occurrences. However, the available data often lack information on confirmed species absence and are limited to opportunistically sampled, presence-only observations. To overcome this limitation, a common approach is to employ pseudo-absences, which are specific geographic locations designated as negative samples. While pseudo-absences are well-established for single-species distribution models, their application in the context of multi-species neural networks remains underexplored. Notably, the significant class imbalance between species presences and pseudo-absences is often left unaddressed. Moreover, the existence of different types of pseudo-absences (e.g., random and target-group background points) adds complexity to the selection process. Determining the optimal combination of pseudo-absence types is difficult and depends on the characteristics of the data, particularly considering that certain types of pseudo-absences can be used to mitigate geographic biases. In this paper, we demonstrate that these challenges can be effectively tackled by integrating pseudo-absences in the training of multi-species neural networks through modifications to the loss function. This adjustment involves assigning different weights to the distinct terms of the loss function, thereby addressing both the class imbalance and the choice of pseudo-absence types. Additionally, we propose a strategy to set these loss weights using spatial block cross-validation with presence-only data. We evaluate our approach using a benchmark dataset containing independent presence-absence data from six different regions and report improved results when compared to competing approaches.
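A hedged sketch of the general idea, not the paper's exact loss: a multi-label binary cross-entropy in which presences, random pseudo-absences and target-group pseudo-absences receive separate weights. The weight values and array shapes are illustrative assumptions.

```python
# Weighted multi-label BCE over presences and two pseudo-absence types.
import numpy as np

def weighted_bce(probs, labels, sample_type,
                 w_presence=1.0, w_random=0.5, w_target_group=0.5):
    """probs, labels: (n_samples, n_species); sample_type: length-n_samples
    sequence with entries 'presence', 'random' or 'target_group'."""
    eps = 1e-7
    probs = np.clip(probs, eps, 1 - eps)
    bce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    w = {"presence": w_presence, "random": w_random,
         "target_group": w_target_group}
    weights = np.array([w[t] for t in sample_type])[:, None]
    return float((weights * bce).mean())

rng = np.random.default_rng(0)
loss = weighted_bce(rng.random((4, 3)), rng.integers(0, 2, (4, 3)),
                    ["presence", "random", "target_group", "presence"])
print(loss)
```

In practice the three weights would be tuned, e.g. by the spatial block cross-validation strategy the abstract mentions.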
2011.06124
Matthew Zalesak
Matthew Zalesak and Samitha Samaranayake (Cornell University)
SEIR-Campus: Modeling Infectious Diseases on University Campuses
18 pages, 10 figures
null
null
null
q-bio.PE cs.MA cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a Python package for modeling and studying the spread of infectious diseases using an agent-based SEIR style epidemiological model with a focus on university campuses. This document explains the epidemiological model used in the package and gives examples highlighting the ways that the package can be used.
[ { "created": "Wed, 11 Nov 2020 23:50:32 GMT", "version": "v1" } ]
2020-11-13
[ [ "Zalesak", "Matthew", "", "Cornell University" ], [ "Samaranayake", "Samitha", "", "Cornell University" ] ]
We introduce a Python package for modeling and studying the spread of infectious diseases using an agent-based SEIR style epidemiological model with a focus on university campuses. This document explains the epidemiological model used in the package and gives examples highlighting the ways that the package can be used.
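Since the package itself is not quoted here, the following is a minimal agent-based SEIR sketch in the spirit of the described model, not the package's actual API (see its documentation for that). The contact structure, rates and durations are illustrative assumptions.

```python
# Discrete-time agent-based SEIR on random daily contacts.
import numpy as np

rng = np.random.default_rng(0)
n, days, beta = 200, 60, 0.05
sigma, gamma = 1 / 3, 1 / 7            # daily E->I and I->R probabilities
state = np.zeros(n, dtype=int)         # 0=S, 1=E, 2=I, 3=R
state[rng.choice(n, 3, replace=False)] = 2   # seed infections

for day in range(days):
    contacts = rng.integers(0, n, size=(n, 10))   # 10 random contacts each
    infectious = state == 2
    n_inf_contacts = infectious[contacts].sum(axis=1)
    p_exposure = 1 - (1 - beta) ** n_inf_contacts
    exposed_today = (state == 0) & (rng.random(n) < p_exposure)
    state[(state == 1) & (rng.random(n) < sigma)] = 2   # E -> I
    state[(state == 2) & (rng.random(n) < gamma)] = 3   # I -> R
    state[exposed_today] = 1                            # S -> E

print("final recovered fraction:", (state == 3).mean())
```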
q-bio/0611066
Tao Hu
Tao Hu, B. I. Shklovskii
Kinetics of viral self-assembly: the role of ss RNA antenna
4 pages, 3 figures, several experiments are proposed, a new idea of experiment is added
Phys. Rev. E 75, 051901 (2007)
10.1103/PhysRevE.75.051901
null
q-bio.BM cond-mat.soft
null
A large class of viruses self-assembles from a large number of identical capsid proteins with long, flexible N-terminal tails and ss RNA. We study the role of the strong Coulomb interaction of the positive N-terminal tails with ss RNA in the kinetics of in vitro virus self-assembly. Capsid proteins stick to the unassembled chain of ss RNA (which we call the "antenna") and slide along it towards the assembly site. We show that at an excess of capsid proteins, such one-dimensional diffusion accelerates self-assembly more than tenfold. On the other hand, at an excess of ss RNA, the antenna slows self-assembly down. Several experiments are proposed to verify the role of the ss RNA antenna.
[ { "created": "Mon, 20 Nov 2006 20:27:14 GMT", "version": "v1" }, { "created": "Tue, 5 Dec 2006 19:32:34 GMT", "version": "v2" }, { "created": "Fri, 15 Dec 2006 22:43:07 GMT", "version": "v3" }, { "created": "Fri, 2 Feb 2007 20:32:22 GMT", "version": "v4" } ]
2009-11-13
[ [ "Hu", "Tao", "" ], [ "Shklovskii", "B. I.", "" ] ]
A large class of viruses self-assembles from a large number of identical capsid proteins with long, flexible N-terminal tails and ss RNA. We study the role of the strong Coulomb interaction of the positive N-terminal tails with ss RNA in the kinetics of in vitro virus self-assembly. Capsid proteins stick to the unassembled chain of ss RNA (which we call the "antenna") and slide along it towards the assembly site. We show that at an excess of capsid proteins, such one-dimensional diffusion accelerates self-assembly more than tenfold. On the other hand, at an excess of ss RNA, the antenna slows self-assembly down. Several experiments are proposed to verify the role of the ss RNA antenna.
0707.2573
F\`elix Campelo
F. Campelo and A. Hernandez-Machado
Model for curvature-driven pearling instability in membranes
Accepted for publication in Phys. Rev. Lett
Phys. Rev. Lett. 99, 088101 (2007)
10.1103/PhysRevLett.99.088101
null
q-bio.QM cond-mat.soft q-bio.CB
null
A phase-field model for dealing with dynamic instabilities in membranes is presented. We use it to study curvature-driven pearling instability in vesicles induced by the anchorage of amphiphilic polymers on the membrane. Within this model, we obtain the morphological changes reported in recent experiments. The formation of a homogeneous pearled structure is achieved by consequent pearling of an initial cylindrical tube from the tip. For high enough concentration of anchors, we show theoretically that the homogeneous pearled shape is energetically less favorable than an inhomogeneous one, with a large sphere connected to an array of smaller spheres.
[ { "created": "Tue, 17 Jul 2007 17:22:16 GMT", "version": "v1" } ]
2011-11-10
[ [ "Campelo", "F.", "" ], [ "Hernandez-Machado", "A.", "" ] ]
A phase-field model for dealing with dynamic instabilities in membranes is presented. We use it to study curvature-driven pearling instability in vesicles induced by the anchorage of amphiphilic polymers on the membrane. Within this model, we obtain the morphological changes reported in recent experiments. The formation of a homogeneous pearled structure is achieved by consequent pearling of an initial cylindrical tube from the tip. For high enough concentration of anchors, we show theoretically that the homogeneous pearled shape is energetically less favorable than an inhomogeneous one, with a large sphere connected to an array of smaller spheres.
2008.10246
Trinh Xuan Hoang
Phuong Thuy Bui and Trinh Xuan Hoang
Protein escape at the ribosomal exit tunnel: Effect of the tunnel shape
12 pages, 11 figures, with supplementary materials
J. Chem. Phys. 153, 045105 (2020)
10.1063/5.0008292
null
q-bio.BM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the post-translational escape of nascent proteins at the ribosomal exit tunnel, taking into consideration a realistically shaped atomistic tunnel based on the Protein Data Bank (PDB) structure of the large ribosome subunit of the archaeon Haloarcula marismortui. Molecular dynamics simulations employing the Go-like model for the proteins show that at intermediate and high temperatures, including a presumable physiological temperature, the protein escape process at the atomistic tunnel is quantitatively similar to that at a cylindrical tunnel of length L = 72 {\AA} and diameter d = 16 {\AA}. At low temperatures, however, the atomistic tunnel yields an increased probability of protein trapping inside the tunnel, while the cylindrical tunnel does not cause trapping. All-$\beta$ proteins tend to escape faster than all-$\alpha$ proteins, but this difference is blurred on increasing the protein's chain length. A 29-residue zinc-finger domain is shown to be severely trapped inside the tunnel. Most of the single-domain proteins considered, however, can escape efficiently at the physiological temperature, with the escape time distribution following the diffusion model proposed in our previous works. An extrapolation of the simulation data to a realistic value of the friction coefficient for amino acids indicates that the escape times of globular proteins are at the sub-millisecond scale. It is argued that this time scale is short enough for the smooth functioning of the ribosome, as it does not allow nascent proteins to jam the ribosome tunnel.
[ { "created": "Mon, 24 Aug 2020 08:18:50 GMT", "version": "v1" } ]
2020-08-25
[ [ "Bui", "Phuong Thuy", "" ], [ "Hoang", "Trinh Xuan", "" ] ]
We study the post-translational escape of nascent proteins at the ribosomal exit tunnel, taking into consideration a realistically shaped atomistic tunnel based on the Protein Data Bank (PDB) structure of the large ribosome subunit of the archaeon Haloarcula marismortui. Molecular dynamics simulations employing the Go-like model for the proteins show that at intermediate and high temperatures, including a presumable physiological temperature, the protein escape process at the atomistic tunnel is quantitatively similar to that at a cylindrical tunnel of length L = 72 {\AA} and diameter d = 16 {\AA}. At low temperatures, however, the atomistic tunnel yields an increased probability of protein trapping inside the tunnel, while the cylindrical tunnel does not cause trapping. All-$\beta$ proteins tend to escape faster than all-$\alpha$ proteins, but this difference is blurred on increasing the protein's chain length. A 29-residue zinc-finger domain is shown to be severely trapped inside the tunnel. Most of the single-domain proteins considered, however, can escape efficiently at the physiological temperature, with the escape time distribution following the diffusion model proposed in our previous works. An extrapolation of the simulation data to a realistic value of the friction coefficient for amino acids indicates that the escape times of globular proteins are at the sub-millisecond scale. It is argued that this time scale is short enough for the smooth functioning of the ribosome, as it does not allow nascent proteins to jam the ribosome tunnel.
1908.03332
Nicolas Blondeau
Tauskela Joseph S., Blondeau Nicolas (IPMC)
Requirement for preclinical prioritization of neuroprotective strategies in stroke: Incorporation of preconditioning
null
Conditioning Medicine, 2018
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acute neuroprotection in numerous human clinical trials has been an abject failure. Major systemic- and procedural-based issues have subsequently been identified in both clinical trials and preclinical animal model experimentation. As well, issues related to the neuroprotective moiety itself have contributed to clinical trial failures, including late delivery, mono-targeting, low potency and poor tolerability. Conditioning (pre- or post-) strategies can potentially address these issues and are therefore gaining increasing attention as approaches to protect the brain from cerebral ischemia. In principle, conditioning can address concerns of timing (preconditioning could be pre-emptively applied in high-risk patients, and post-conditioning after patients experience an unannounced brain infarction) and signaling (multi-modal). However, acute neuroprotection and conditioning strategies face a common translational issue: a myriad of possibilities exist, but with no strategy to select optimal candidates. In this review, we argue that what is required is a neuroprotective framework to identify the "best" agent(s), at the earliest investigational stage possible. This may require switching mindsets from identifying how neuroprotection can be achieved to determining how neuroprotection can fail, for the vast majority of candidates. Understanding the basis for failure can in turn guide supplementary treatment, thereby forming an evidence-based rationale for selecting combinations of therapies. An appropriately designed in vitro (neuron culture, brain slices) approach, based on increasing the harshness of the ischemic-like insult, can be useful in identifying the "best" conditioner or acute neuroprotective therapy, as well as how the two modalities can be combined to overcome individual limitations. This would serve as a base from which to launch further investigation into therapies required to protect the neurovascular unit in in vivo animal models of cerebral ischemia. Based on these respective approaches, our laboratories suggest that there is merit in examining synaptic activity- and nutraceutical-based preconditioning/acute neuroprotection.
[ { "created": "Fri, 9 Aug 2019 06:46:19 GMT", "version": "v1" } ]
2019-08-12
[ [ "S.", "Tauskela Joseph", "", "IPMC" ], [ "Nicolas", "Blondeau", "", "IPMC" ] ]
Acute neuroprotection in numerous human clinical trials has been an abject failure. Major systemic- and procedural-based issues have subsequently been identified in both clinical trials and preclinical animal model experimentation. As well, issues related to the neuroprotective moiety itself have contributed to clinical trial failures, including late delivery, mono-targeting, low potency and poor tolerability. Conditioning (pre- or post-) strategies can potentially address these issues and are therefore gaining increasing attention as approaches to protect the brain from cerebral ischemia. In principle, conditioning can address concerns of timing (preconditioning could be pre-emptively applied in high-risk patients, and post-conditioning after patients experience an unannounced brain infarction) and signaling (multi-modal). However, acute neuroprotection and conditioning strategies face a common translational issue: a myriad of possibilities exist, but with no strategy to select optimal candidates. In this review, we argue that what is required is a neuroprotective framework to identify the "best" agent(s), at the earliest investigational stage possible. This may require switching mindsets from identifying how neuroprotection can be achieved to determining how neuroprotection can fail, for the vast majority of candidates. Understanding the basis for failure can in turn guide supplementary treatment, thereby forming an evidence-based rationale for selecting combinations of therapies. An appropriately designed in vitro (neuron culture, brain slices) approach, based on increasing the harshness of the ischemic-like insult, can be useful in identifying the "best" conditioner or acute neuroprotective therapy, as well as how the two modalities can be combined to overcome individual limitations. This would serve as a base from which to launch further investigation into therapies required to protect the neurovascular unit in in vivo animal models of cerebral ischemia. Based on these respective approaches, our laboratories suggest that there is merit in examining synaptic activity- and nutraceutical-based preconditioning/acute neuroprotection.
1406.2447
Peter Gawthrop
Peter J. Gawthrop and Edmund J. Crampin
Energy-based Analysis of Biochemical Cycles using Bond Graphs
null
Proc. R. Soc. A 2014 470 20140459
10.1098/rspa.2014.0459
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Thermodynamic aspects of chemical reactions have a long history in the Physical Chemistry literature. In particular, biochemical cycles - the building-blocks of biochemical systems - require a source of energy to function. However, although fundamental, the role of chemical potential and Gibbs free energy in the analysis of biochemical systems is often overlooked, leading to models which are physically impossible. The bond graph approach was developed for modelling engineering systems where energy generation, storage and transmission are fundamental. The method focuses on how power flows between components and how energy is stored, transmitted or dissipated within components. Based on early ideas of network thermodynamics, we have applied this approach to biochemical systems to generate models which automatically obey the laws of thermodynamics. We illustrate the method with examples of biochemical cycles. We have found that thermodynamically compliant models of simple biochemical cycles can easily be developed using this approach. In particular, both stoichiometric information and simulation models can be developed directly from the bond graph. Furthermore, model reduction and approximation while retaining structural and thermodynamic properties is facilitated. Because the bond graph approach is also modular and scalable, we believe that it provides a secure foundation for building thermodynamically compliant models of large biochemical networks.
[ { "created": "Tue, 10 Jun 2014 07:25:10 GMT", "version": "v1" }, { "created": "Thu, 11 Sep 2014 05:34:40 GMT", "version": "v2" } ]
2018-08-14
[ [ "Gawthrop", "Peter J.", "" ], [ "Crampin", "Edmund J.", "" ] ]
Thermodynamic aspects of chemical reactions have a long history in the Physical Chemistry literature. In particular, biochemical cycles - the building-blocks of biochemical systems - require a source of energy to function. However, although fundamental, the role of chemical potential and Gibbs free energy in the analysis of biochemical systems is often overlooked, leading to models which are physically impossible. The bond graph approach was developed for modelling engineering systems where energy generation, storage and transmission are fundamental. The method focuses on how power flows between components and how energy is stored, transmitted or dissipated within components. Based on early ideas of network thermodynamics, we have applied this approach to biochemical systems to generate models which automatically obey the laws of thermodynamics. We illustrate the method with examples of biochemical cycles. We have found that thermodynamically compliant models of simple biochemical cycles can easily be developed using this approach. In particular, both stoichiometric information and simulation models can be developed directly from the bond graph. Furthermore, model reduction and approximation while retaining structural and thermodynamic properties is facilitated. Because the bond graph approach is also modular and scalable, we believe that it provides a secure foundation for building thermodynamically compliant models of large biochemical networks.
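For readers unfamiliar with energy-based kinetics, the sketch below simulates a three-species cycle with fluxes of the Marcelin-de Donder form used in bond-graph models of biochemistry, v = kappa*(exp(A_f/RT) - exp(A_r/RT)), and checks that the closed cycle relaxes to an equilibrium satisfying detailed balance. The species constants and rate parameters are illustrative assumptions, not taken from the paper.

```python
# Energy-based mass-action cycle A -> B -> C -> A with Marcelin-de Donder
# fluxes; at equilibrium K_i * x_i is equal for all species (detailed balance).
import numpy as np
from scipy.integrate import solve_ivp

K = np.array([1.0, 2.0, 0.5])        # species thermodynamic constants (assumed)
kappa = np.array([1.0, 1.0, 1.0])    # reaction rate constants (assumed)
N = np.array([[-1, 0, 1],            # stoichiometry: rows species A,B,C;
              [1, -1, 0],            # columns reactions A->B, B->C, C->A
              [0, 1, -1]])

def flux(x):
    mu = np.log(K * x)                               # chemical potential / RT
    return kappa * (np.exp(mu) - np.exp(mu[[1, 2, 0]]))

sol = solve_ivp(lambda t, x: N @ flux(x), [0, 50], [1.0, 0.1, 0.1])
x_eq = sol.y[:, -1]
print("K*x at equilibrium (equal => detailed balance):", K * x_eq)
```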
1610.06421
Hao Dong
Hao Dong, Akara Supratak, Wei Pan, Chao Wu, Paul M. Matthews and Yike Guo
Mixed Neural Network Approach for Temporal Sleep Stage Classification
This article has been published in IEEE Transactions on Neural Systems and Rehabilitation Engineering
null
10.1109/TNSRE.2017.2733220
null
q-bio.NC cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a practical approach to addressing the limitations posed by the use of single active electrodes in applications for sleep stage classification. Electroencephalography (EEG)-based characterizations of sleep stage progression contribute to the diagnosis and monitoring of the many pathologies of sleep. Several prior reports have explored ways of automating the analysis of sleep EEG and of reducing the complexity of the data needed for reliable discrimination of sleep stages, in order to make it possible to perform sleep studies at lower cost in the home (rather than only in specialized clinical facilities). However, these reports have involved recordings from electrodes placed on the cranial vertex or occiput, which can be uncomfortable or difficult for subjects to position. Those that have utilized single EEG channels, which contain less sleep information, have shown poor classification performance. We have taken advantage of a rectifier neural network for feature detection and a Long Short-Term Memory (LSTM) network for sequential data learning to optimize classification performance with single-electrode recordings. After exploring alternative electrode placements, we found a comfortable configuration of a single-channel EEG on the forehead and have shown that it can be integrated with additional electrodes for simultaneous recording of the electrooculogram (EOG). Evaluation of data from 62 people (with 494 hours of sleep) demonstrated better performance of our analytical algorithm for automated sleep classification than existing approaches using vertex or occipital electrode placements. Use of this recording configuration with neural network deconvolution promises to make clinically indicated home sleep studies practical.
[ { "created": "Sat, 15 Oct 2016 18:48:00 GMT", "version": "v1" }, { "created": "Wed, 26 Jul 2017 17:39:53 GMT", "version": "v2" }, { "created": "Thu, 3 Aug 2017 15:00:48 GMT", "version": "v3" } ]
2017-08-04
[ [ "Dong", "Hao", "" ], [ "Supratak", "Akara", "" ], [ "Pan", "Wei", "" ], [ "Wu", "Chao", "" ], [ "Matthews", "Paul M.", "" ], [ "Guo", "Yike", "" ] ]
This paper proposes a practical approach to addressing the limitations posed by the use of single active electrodes in applications for sleep stage classification. Electroencephalography (EEG)-based characterizations of sleep stage progression contribute to the diagnosis and monitoring of the many pathologies of sleep. Several prior reports have explored ways of automating the analysis of sleep EEG and of reducing the complexity of the data needed for reliable discrimination of sleep stages, in order to make it possible to perform sleep studies at lower cost in the home (rather than only in specialized clinical facilities). However, these reports have involved recordings from electrodes placed on the cranial vertex or occiput, which can be uncomfortable or difficult for subjects to position. Those that have utilized single EEG channels, which contain less sleep information, have shown poor classification performance. We have taken advantage of a rectifier neural network for feature detection and a Long Short-Term Memory (LSTM) network for sequential data learning to optimize classification performance with single-electrode recordings. After exploring alternative electrode placements, we found a comfortable configuration of a single-channel EEG on the forehead and have shown that it can be integrated with additional electrodes for simultaneous recording of the electrooculogram (EOG). Evaluation of data from 62 people (with 494 hours of sleep) demonstrated better performance of our analytical algorithm for automated sleep classification than existing approaches using vertex or occipital electrode placements. Use of this recording configuration with neural network deconvolution promises to make clinically indicated home sleep studies practical.
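A hedged sketch of the mixed architecture the abstract describes, a per-epoch feature detector followed by an LSTM over successive sleep epochs, written in PyTorch. The layer sizes, kernel widths and epoch length are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TinySleepNet(nn.Module):
    """Illustrative feature-detector + LSTM sleep stager (not the paper's)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(          # per-epoch EEG features
            nn.Conv1d(1, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)  # temporal context
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, epochs, samples)
        b, s, t = x.shape
        f = self.features(x.reshape(b * s, 1, t)).squeeze(-1)
        out, _ = self.lstm(f.reshape(b, s, 32))
        return self.head(out)                   # per-epoch stage logits

logits = TinySleepNet()(torch.randn(2, 10, 3000))   # 10 epochs, 3000 samples
print(logits.shape)                                 # torch.Size([2, 10, 5])
```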
2202.00507
George A Kevrekidis
G.A. Kevrekidis, Z. Rapti, Y. Drossinos, P.G. Kevrekidis, M.A. Barmann, Q.Y. Chen, J. Cuevas-Maraver
Backcasting COVID-19: A Physics-Informed Estimate for Early Case Incidence
null
null
null
null
q-bio.QM physics.soc-ph q-bio.PE
http://creativecommons.org/licenses/by/4.0/
It is widely accepted that the number of reported cases during the first stages of the COVID-19 pandemic severely underestimates the number of actual cases. We leverage delay embedding theorems of Whitney and Takens and use Gaussian Process regression to estimate the number of cases during the first 2020 wave based on the second wave of the epidemic in several European countries, South Korea, and Brazil. We assume that the second wave was more accurately monitored and hence that it can be trusted. We then construct a manifold diffeomorphic to that of the implied original dynamical system, using fatalities or hospitalizations only. Finally, we restrict the diffeomorphism to the reported cases coordinate of the dynamical system. Our main finding is that in the European countries studied, the actual cases are under-reported by as much as 50\%. On the other hand, in South Korea -- which had an exemplary and proactive mitigation approach -- a far smaller discrepancy between the actual and reported cases is predicted, with an approximately 17\% predicted under-estimation. We believe that our backcasting framework is applicable to other epidemic outbreaks where (due to limited or poor quality data) there is uncertainty around the actual cases.
[ { "created": "Mon, 31 Jan 2022 04:19:58 GMT", "version": "v1" } ]
2022-02-02
[ [ "Kevrekidis", "G. A.", "" ], [ "Rapti", "Z.", "" ], [ "Drossinos", "Y.", "" ], [ "Kevrekidis", "P. G.", "" ], [ "Barmann", "M. A.", "" ], [ "Chen", "Q. Y.", "" ], [ "Cuevas-Maraver", "J.", "" ] ]
It is widely accepted that the number of reported cases during the first stages of the COVID-19 pandemic severely underestimates the number of actual cases. We leverage delay embedding theorems of Whitney and Takens and use Gaussian Process regression to estimate the number of cases during the first 2020 wave based on the second wave of the epidemic in several European countries, South Korea, and Brazil. We assume that the second wave was more accurately monitored and hence that it can be trusted. We then construct a manifold diffeomorphic to that of the implied original dynamical system, using fatalities or hospitalizations only. Finally, we restrict the diffeomorphism to the reported cases coordinate of the dynamical system. Our main finding is that in the European countries studied, the actual cases are under-reported by as much as 50\%. On the other hand, in South Korea -- which had an exemplary and proactive mitigation approach -- a far smaller discrepancy between the actual and reported cases is predicted, with an approximately 17\% predicted under-estimation. We believe that our backcasting framework is applicable to other epidemic outbreaks where (due to limited or poor quality data) there is uncertainty around the actual cases.
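The two ingredients named in the abstract can be sketched compactly: a Takens delay embedding of a trusted observable (e.g., fatalities) and Gaussian Process regression from that embedding to reported cases. The series below are synthetic, and the embedding dimension and delay are illustrative assumptions.

```python
# Delay embedding + GP regression, trained on a trusted wave; the fitted map
# would then be applied to first-wave embeddings to backcast cases.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def delay_embed(x, dim=3, tau=7):
    """Rows are [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
                            for k in range(dim)])

t = np.arange(300)
fatalities = 50 + 40 * np.sin(2 * np.pi * t / 90)        # stand-in series
cases = 20 * fatalities + np.random.default_rng(0).normal(0, 50, t.size)

E = delay_embed(fatalities)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0),
                              normalize_y=True).fit(E, cases[-len(E):])
backcast = gp.predict(E)      # in the paper's setting: predict hidden cases
```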
2302.11378
The Tien Mai
T. Tien Mai, Gerry Tonkin-Hill, John A. Lees, Jukka Corander
Quantifying the common genetic variability of bacterial traits
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
The study of common heritability, or co-heritability, among multiple traits has been widely established in quantitative and molecular genetics. However, in bacteria, genome-based estimation of heritability has only been considered very recently and no methods are currently available for considering co-heritability. Here we introduce such a method and demonstrate its usefulness by multi-trait analyses of the three major human pathogens \textit{Escherichia coli}, \textit{Neisseria gonorrhoeae} and \textit{Streptococcus pneumoniae}. We anticipate that the increased availability of high-throughput genomic and phenotypic screens of bacterial populations will spawn ample future opportunities to understand the common molecular basis of different traits in bacteria.
[ { "created": "Wed, 22 Feb 2023 13:52:40 GMT", "version": "v1" } ]
2023-02-23
[ [ "Mai", "T. Tien", "" ], [ "Tonkin-Hill", "Gerry", "" ], [ "Lees", "John A.", "" ], [ "Corander", "Jukka", "" ] ]
The study of common heritability, or co-heritability, among multiple traits has been widely established in quantitative and molecular genetics. However, in bacteria, genome-based estimation of heritability has only been considered very recently and no methods are currently available for considering co-heritability. Here we introduce such a method and demonstrate its usefulness by multi-trait analyses of the three major human pathogens \textit{Escherichia coli}, \textit{Neisseria gonorrhoeae} and \textit{Streptococcus pneumoniae}. We anticipate that the increased availability of high-throughput genomic and phenotypic screens of bacterial populations will spawn ample future opportunities to understand the common molecular basis of different traits in bacteria.
1911.05531
Iddo Drori
Iddo Drori, Darshan Thaker, Arjun Srivatsa, Daniel Jeong, Yueqi Wang, Linyong Nan, Fan Wu, Dimitri Leggas, Jinhao Lei, Weiyi Lu, Weilong Fu, Yuan Gao, Sashank Karri, Anand Kannan, Antonio Moretti, Mohammed AlQuraishi, Chen Keasar, Itsik Pe'er
Accurate Protein Structure Prediction by Embeddings and Deep Learning Representations
null
Machine Learning in Computational Biology, 2019
null
null
q-bio.BM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are the major building blocks of life, and actuators of almost all chemical and biophysical events in living organisms. Their native structures in turn enable their biological functions which have a fundamental role in drug design. This motivates predicting the structure of a protein from its sequence of amino acids, a fundamental problem in computational biology. In this work, we demonstrate state-of-the-art protein structure prediction (PSP) results using embeddings and deep learning models for prediction of backbone atom distance matrices and torsion angles. We recover 3D coordinates of backbone atoms and reconstruct full atom protein by optimization. We create a new gold standard dataset of proteins which is comprehensive and easy to use. Our dataset consists of amino acid sequences, Q8 secondary structures, position specific scoring matrices, multiple sequence alignment co-evolutionary features, backbone atom distance matrices, torsion angles, and 3D coordinates. We evaluate the quality of our structure prediction by RMSD on the latest Critical Assessment of Techniques for Protein Structure Prediction (CASP) test data and demonstrate competitive results with the winning teams and AlphaFold in CASP13 and supersede the results of the winning teams in CASP12. We make our data, models, and code publicly available.
[ { "created": "Sat, 9 Nov 2019 00:21:17 GMT", "version": "v1" } ]
2019-11-14
[ [ "Drori", "Iddo", "" ], [ "Thaker", "Darshan", "" ], [ "Srivatsa", "Arjun", "" ], [ "Jeong", "Daniel", "" ], [ "Wang", "Yueqi", "" ], [ "Nan", "Linyong", "" ], [ "Wu", "Fan", "" ], [ "Leggas", "Dimitri", "" ], [ "Lei", "Jinhao", "" ], [ "Lu", "Weiyi", "" ], [ "Fu", "Weilong", "" ], [ "Gao", "Yuan", "" ], [ "Karri", "Sashank", "" ], [ "Kannan", "Anand", "" ], [ "Moretti", "Antonio", "" ], [ "AlQuraishi", "Mohammed", "" ], [ "Keasar", "Chen", "" ], [ "Pe'er", "Itsik", "" ] ]
Proteins are the major building blocks of life, and actuators of almost all chemical and biophysical events in living organisms. Their native structures in turn enable their biological functions which have a fundamental role in drug design. This motivates predicting the structure of a protein from its sequence of amino acids, a fundamental problem in computational biology. In this work, we demonstrate state-of-the-art protein structure prediction (PSP) results using embeddings and deep learning models for prediction of backbone atom distance matrices and torsion angles. We recover 3D coordinates of backbone atoms and reconstruct full atom protein by optimization. We create a new gold standard dataset of proteins which is comprehensive and easy to use. Our dataset consists of amino acid sequences, Q8 secondary structures, position specific scoring matrices, multiple sequence alignment co-evolutionary features, backbone atom distance matrices, torsion angles, and 3D coordinates. We evaluate the quality of our structure prediction by RMSD on the latest Critical Assessment of Techniques for Protein Structure Prediction (CASP) test data and demonstrate competitive results with the winning teams and AlphaFold in CASP13 and supersede the results of the winning teams in CASP12. We make our data, models, and code publicly available.
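As a stand-in for the coordinate-recovery step described in the abstract, the sketch below recovers 3D coordinates from a pairwise distance matrix by classical multidimensional scaling; the paper's own optimization procedure may differ, and the distance matrix here is synthetic.

```python
# Classical MDS: coordinates (up to rigid motion) from Euclidean distances.
import numpy as np

def coords_from_distances(D, dim=3):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                # Gram matrix
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:dim]         # top-dim eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                   # ground-truth test points
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
X_rec = coords_from_distances(D)               # recovered up to rigid motion
D_rec = np.linalg.norm(X_rec[:, None] - X_rec[None, :], axis=-1)
print(np.allclose(D_rec, D))                   # True: distances reproduced
```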
1805.11851
Simone Carlo Surace
Simone Carlo Surace, Jean-Pascal Pfister, Wulfram Gerstner, Johanni Brea
On the choice of metric in gradient-based theories of brain function
Revised version; 14 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Since a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Since the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature and propose ways forward to constrain the metric.
[ { "created": "Wed, 30 May 2018 08:21:41 GMT", "version": "v1" }, { "created": "Fri, 1 Jun 2018 21:35:49 GMT", "version": "v2" }, { "created": "Fri, 21 Dec 2018 14:12:10 GMT", "version": "v3" } ]
2018-12-24
[ [ "Surace", "Simone Carlo", "" ], [ "Pfister", "Jean-Pascal", "" ], [ "Gerstner", "Wulfram", "" ], [ "Brea", "Johanni", "" ] ]
The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Since a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often consider the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Since the choice of the metric involves a large number of degrees of freedom, the predictive power of models that are based on gradient descent must be called into question, unless there are strong constraints on the choice of the metric. Here we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature and propose ways forward to constrain the metric.
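The abstract's central point admits a short numerical illustration: under a metric with matrix M, the steepest-descent direction is -M^{-1} times the vector of partial derivatives, so different metrics give different descent directions for the same cost. The cost function and metric below are arbitrary examples.

```python
# Steepest descent depends on the metric, not just the partial derivatives.
import numpy as np

def partials(w):                     # cost f(w) = w0**2 + 10*w1**2
    return np.array([2 * w[0], 20 * w[1]])

w = np.array([1.0, 1.0])
euclidean_step = -partials(w)                    # metric M = identity
M = np.diag([1.0, 10.0])                         # an alternative metric
metric_step = -np.linalg.solve(M, partials(w))   # -M^{-1} grad

print(euclidean_step)   # [ -2. -20.]
print(metric_step)      # [ -2.  -2.]  -- a different descent direction
```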
2112.01002
Sammy Sambu
Sammy Sambu
Attaining scalable storage-expansion dualism for bioartificial tissues
10 pages, 5 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
The untenable dependence of cryopreservation on cytotoxic cryoprotectants has motivated the mining of biochemical libraries to identify molecular features marking cryoprotectants that may prevent ice crystal growth. It is hypothesized that such molecules may be useful across all temperatures, eliminating cellular destruction due to equilibration cytotoxicity before and after cryopreservation. By probing the biochemical space using solvation-associated molecular topology and partition-distribution measures, we developed an analytic formalism for ice crystal inhibition. By probing the union between a heat-shock protein cluster and an anti-freeze glycoprotein cluster, the model development process generated distinct regions of interest for anti-freeze glycoproteins and proved robust across different classes of proteins. These results confirm that there is a chemical space within which efficacious Ice Crystal Inhibitor molecular libraries can be constructed. These spaces contain latent projections drawn from solvent accessibility, hydrogen bonding capacity and molecular geometry. They also showed that molecular design can be a useful tool in the development of new-age cryoactive matrices to enable a smooth transition across culture, preservation and expansion. These chemical spaces provide low cytotoxicity, since such amphipathic molecules occupy a continuum between solubilizing and membrane-stabilizing regions, as shown by free energy of translocation calculations. These biochemical design spaces are fundamentally the solution to efficient scale-up and deployment of cell therapies. Consequently, this article proposes the use of a molecular knowledge-mining approach in the development of a class of non-cytotoxic cryoprotective agents, Ice Crystal Inhibitors, compatible with continuous cryothermic and normothermic cell storage and expansion.
[ { "created": "Thu, 2 Dec 2021 06:28:23 GMT", "version": "v1" } ]
2021-12-03
[ [ "Sambu", "Sammy", "" ] ]
The untenable dependence of cryopreservation on cytotoxic cryoprotectants has motivated the mining of biochemical libraries to identify molecular features marking cryoprotectants that may prevent ice crystal growth. It is hypothesized that such molecules may be useful across all temperatures, eliminating cellular destruction due to equilibration cytotoxicity before and after cryopreservation. By probing the biochemical space using solvation-associated molecular topology and partition-distribution measures, we developed an analytic formalism for ice crystal inhibition. By probing the union between a heat-shock protein cluster and an anti-freeze glycoprotein cluster, the model development process generated distinct regions of interest for anti-freeze glycoproteins and proved robust across different classes of proteins. These results confirm that there is a chemical space within which efficacious Ice Crystal Inhibitor molecular libraries can be constructed. These spaces contain latent projections drawn from solvent accessibility, hydrogen bonding capacity and molecular geometry. They also showed that molecular design can be a useful tool in the development of new-age cryoactive matrices to enable a smooth transition across culture, preservation and expansion. These chemical spaces provide low cytotoxicity, since such amphipathic molecules occupy a continuum between solubilizing and membrane-stabilizing regions, as shown by free energy of translocation calculations. These biochemical design spaces are fundamentally the solution to efficient scale-up and deployment of cell therapies. Consequently, this article proposes the use of a molecular knowledge-mining approach in the development of a class of non-cytotoxic cryoprotective agents, Ice Crystal Inhibitors, compatible with continuous cryothermic and normothermic cell storage and expansion.
2404.01514
Nicholas Williams
Nicholas Williams and Kara E. Rudolph
A drug classification pipeline for Medicaid claims using RxNorm
null
null
null
null
q-bio.QM cs.DB
http://creativecommons.org/licenses/by/4.0/
Objective: Freely preprocess drug codes recorded in electronic health records and insurance claims to drug classes that may then be used in biomedical research. Materials and Methods: We developed a drug classification pipeline for linking National Drug Codes to the World Health Organization Anatomical Therapeutic Chemical classification. To implement our solution, we created an R package interface to the National Library of Medicine's RxNorm API. Results: Using the classification pipeline, 59.4% of all unique NDC were linked to an ATC, resulting in 95.5% of all claims being successfully linked to a drug classification. We identified 12,004 unique NDC codes that were classified as being an opioid or non-opioid prescription for treating pain. Discussion: Our proposed pipeline performed similarly well to other NDC classification routines using commercial databases. A check of a small, random sample of non-active NDC found the pipeline to be accurate for classifying these codes. Conclusion: The RxNorm NDC classification pipeline is a practical and reliable tool for categorizing drugs in large-scale administrative claims data.
[ { "created": "Mon, 1 Apr 2024 22:39:18 GMT", "version": "v1" } ]
2024-04-03
[ [ "Williams", "Nicholas", "" ], [ "Rudolph", "Kara E.", "" ] ]
Objective: Freely preprocess drug codes recorded in electronic health records and insurance claims to drug classes that may then be used in biomedical research. Materials and Methods: We developed a drug classification pipeline for linking National Drug Codes to the World Health Organization Anatomical Therapeutic Chemical classification. To implement our solution, we created an R package interface to the National Library of Medicine's RxNorm API. Results: Using the classification pipeline, 59.4% of all unique NDC were linked to an ATC, resulting in 95.5% of all claims being successfully linked to a drug classification. We identified 12,004 unique NDC codes that were classified as being an opioid or non-opioid prescription for treating pain. Discussion: Our proposed pipeline performed similarly well to other NDC classification routines using commercial databases. A check of a small, random sample of non-active NDC found the pipeline to be accurate for classifying these codes. Conclusion: The RxNorm NDC classification pipeline is a practical and reliable tool for categorizing drugs in large-scale administrative claims data.
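The paper's pipeline is an R interface to the NLM RxNorm API; below is a rough Python analogue of the two lookups such a pipeline needs (NDC to RxCUI, then RxCUI to ATC class). The RxNav endpoint paths and response fields are quoted from memory and should be verified against the RxNav documentation before use.

```python
# Hypothetical NDC -> ATC lookup via the RxNav REST API (paths unverified).
import requests

BASE = "https://rxnav.nlm.nih.gov/REST"

def ndc_to_rxcui(ndc):
    r = requests.get(f"{BASE}/rxcui.json", params={"idtype": "NDC", "id": ndc})
    r.raise_for_status()
    return (r.json().get("idGroup", {}).get("rxnormId") or [None])[0]

def rxcui_to_atc(rxcui):
    r = requests.get(f"{BASE}/rxclass/class/byRxcui.json",
                     params={"rxcui": rxcui, "relaSource": "ATC"})
    r.raise_for_status()
    infos = (r.json().get("rxclassDrugInfoList", {})
             .get("rxclassDrugInfo", []))
    return sorted({i["rxclassMinConceptItem"]["classId"] for i in infos})
```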
2104.10957
Chen-Gia Tsai
Chia-Wei Li and Chen-Gia Tsai
Differential brain connectivity patterns while listening to breakup and rebellious songs: A functional magnetic resonance imaging study
14 pages, 4 figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Song appreciation involves a broad range of mental processes, and different neural networks may be activated by different song types. The aim of the present study was to show differential functional connectivity of the prefrontal cortices while listening to breakup and rebellious songs. Breakup songs describe romance and longing, whereas rebellious songs convey criticism of conventional ideas or socio-cultural norms. We hypothesized that the medial and lateral prefrontal cortices may interact with different brain regions in response to these two song types. Functional magnetic resonance imaging data of fifteen participants were collected while they were listening to two complete breakup songs and two complete rebellious songs currently popular in Taiwan. The results showed that listening to the breakup songs, compared to the rebellious songs, enhanced the coupling between the medial prefrontal cortex and several emotion-related regions, including the thalamus, caudate, amygdala, hippocampus, middle orbitofrontal cortex, and right inferior frontal gyrus. This coupling may reflect the neural processes of pain empathy, reward processing, compassion, and reappraisal in response to longing and sorrow expressed by the breakup songs. Compared to the breakup songs, listening to the rebellious songs was associated with enhanced coupling between subregions in the prefrontal and orbitofrontal cortices. These areas might work in concert to support re-evaluation of conventional ideas or socio-cultural norms as suggested by the rebellious songs. This study advanced our understanding of the integration of brain functions while processing complex information.
[ { "created": "Thu, 22 Apr 2021 09:37:07 GMT", "version": "v1" }, { "created": "Fri, 4 Jun 2021 07:19:19 GMT", "version": "v2" } ]
2021-06-07
[ [ "Li", "Chia-Wei", "" ], [ "Tsai", "Chen-Gia", "" ] ]
Song appreciation involves a broad range of mental processes, and different neural networks may be activated by different song types. The aim of the present study was to show differential functional connectivity of the prefrontal cortices while listening to breakup and rebellious songs. Breakup songs describe romance and longing, whereas rebellious songs convey criticism of conventional ideas or socio-cultural norms. We hypothesized that the medial and lateral prefrontal cortices may interact with different brain regions in response to these two song types. Functional magnetic resonance imaging data of fifteen participants were collected while they were listening to two complete breakup songs and two complete rebellious songs currently popular in Taiwan. The results showed that listening to the breakup songs, compared to the rebellious songs, enhanced the coupling between the medial prefrontal cortex and several emotion-related regions, including the thalamus, caudate, amygdala, hippocampus, middle orbitofrontal cortex, and right inferior frontal gyrus. This coupling may reflect the neural processes of pain empathy, reward processing, compassion, and reappraisal in response to longing and sorrow expressed by the breakup songs. Compared to the breakup songs, listening to the rebellious songs was associated with enhanced coupling between subregions in the prefrontal and orbitofrontal cortices. These areas might work in concert to support re-evaluation of conventional ideas or socio-cultural norms as suggested by the rebellious songs. This study advanced our understanding of the integration of brain functions while processing complex information.
1503.05927
Guo-Wei Wei
Kelin Xia, Kristopher Opron and Guo-Wei Wei
Correlation function based Gaussian network models
4 figures
null
null
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gaussian network model (GNM) is one of the most accurate and efficient methods for biomolecular flexibility analysis. However, the systematic generalization of the GNM has been elusive. We show that the GNM Kirchhoff matrix can be built from the ideal low-pass filter, which is a special case of a wide class of correlation functions underpinning the linear scaling flexibility-rigidity index (FRI) method. Based on the mathematical structure of correlation functions, we propose a unified framework to construct generalized Kirchhoff matrices whose matrix inverse leads to correlation function based GNMs, whereas the direct inverse of the diagonal elements gives rise to the FRI method. We illustrate that correlation function based GNMs outperform the original GNM in the B-factor prediction of a set of 364 proteins. We demonstrate that for any given correlation function, the FRI and GNM methods provide essentially identical B-factor predictions when the scale value in the correlation function is sufficiently large.
[ { "created": "Mon, 2 Mar 2015 00:56:56 GMT", "version": "v1" } ]
2015-03-23
[ [ "Xia", "Kelin", "" ], [ "Opron", "Kristopher", "" ], [ "Wei", "Guo-Wei", "" ] ]
Gaussian network model (GNM) is one of the most accurate and efficient methods for biomolecular flexibility analysis. However, the systematic generalization of the GNM has been elusive. We show that the GNM Kirchhoff matrix can be built from the ideal low-pass filter, which is a special case of a wide class of correlation functions underpinning the linear scaling flexibility-rigidity index (FRI) method. Based on the mathematical structure of correlation functions, we propose a unified framework to construct generalized Kirchhoff matrices whose matrix inverse leads to correlation function based GNMs, whereas the direct inverse of the diagonal elements gives rise to the FRI method. We illustrate that correlation function based GNMs outperform the original GNM in the B-factor prediction of a set of 364 proteins. We demonstrate that for any given correlation function, the FRI and GNM methods provide essentially identical B-factor predictions when the scale value in the correlation function is sufficiently large.
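A compact numerical sketch of the framework in this record: a generalized Kirchhoff matrix built from a kernel (here an exponential correlation function, one admissible choice), GNM B-factors from the diagonal of the matrix pseudoinverse, and the FRI flexibility from direct inversion of kernel sums. The parameter values and toy coordinates are placeholders, not the paper's settings.

```python
# Correlation-function-based GNM vs. FRI, assuming an exponential kernel.
import numpy as np

def kirchhoff(coords: np.ndarray, eta: float = 3.0, kappa: float = 1.0) -> np.ndarray:
    """Generalized Kirchhoff matrix from a correlation (kernel) function."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    phi = np.exp(-((d / eta) ** kappa))      # correlation function Phi(d)
    gamma = -phi
    np.fill_diagonal(gamma, 0.0)
    np.fill_diagonal(gamma, -gamma.sum(axis=1))  # rows sum to zero
    return gamma

def gnm_bfactors(coords: np.ndarray) -> np.ndarray:
    """Predicted B-factors ~ diagonal of the Moore-Penrose pseudoinverse."""
    return np.diag(np.linalg.pinv(kirchhoff(coords)))

def fri_flexibility(coords: np.ndarray, eta: float = 3.0) -> np.ndarray:
    """FRI flexibility index: reciprocal of the kernel-weighted rigidity."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    phi = np.exp(-d / eta)
    np.fill_diagonal(phi, 0.0)
    return 1.0 / phi.sum(axis=1)

coords = np.random.default_rng(0).uniform(0, 20, size=(50, 3))  # toy C-alpha coords
print(gnm_bfactors(coords)[:5], fri_flexibility(coords)[:5])
```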
2010.14366
Maria Soledad Aronna
Felipe J.P. Antunes and M. Soledad Aronna and Cl\'audia T. Code\c{c}o
Modeling and control of malaria dynamics in fish farming regions
To appear in SIAM Journal of Applied Dynamical Systems
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose a model that represents the relation between fish ponds, the mosquito population and the transmission of malaria. It has been observed that in the Amazonian region of Acre, in the north of Brazil, fish farming is correlated with the transmission of malaria when carried out in artificial ponds that become breeding sites. Evidence has been found indicating that cleaning the vegetation from the edges of the crop tanks helps to control the size of the mosquito population. We use our model to determine the effective contribution of fish farming practices to malaria transmission dynamics. The model consists of a nonlinear system of ordinary differential equations with jumps at the cleaning times, which act as impulsive controls. We study the asymptotic behaviour of the system as a function of the intensity and periodicity of the cleaning and of the values of the parameters. In particular, we state sufficient conditions under which the mosquito population is eliminated or persists, and under which malaria is eliminated or becomes endemic. We prove our conditions by applying results for cooperative systems with concave nonlinearities.
[ { "created": "Tue, 27 Oct 2020 15:22:39 GMT", "version": "v1" }, { "created": "Thu, 24 Nov 2022 13:56:55 GMT", "version": "v2" }, { "created": "Wed, 29 Mar 2023 15:32:56 GMT", "version": "v3" }, { "created": "Wed, 26 Apr 2023 13:47:37 GMT", "version": "v4" } ]
2023-04-27
[ [ "Antunes", "Felipe J. P.", "" ], [ "Aronna", "M. Soledad", "" ], [ "Codeço", "Cláudia T.", "" ] ]
In this work we propose a model that represents the relation between fish ponds, the mosquito population and the transmission of malaria. It has been observed that in the Amazonian region of Acre, in the north of Brazil, fish farming is correlated with the transmission of malaria when carried out in artificial ponds that become breeding sites. Evidence has been found indicating that cleaning the vegetation from the edges of the crop tanks helps to control the size of the mosquito population. We use our model to determine the effective contribution of fish farming practices to malaria transmission dynamics. The model consists of a nonlinear system of ordinary differential equations with jumps at the cleaning times, which act as impulsive controls. We study the asymptotic behaviour of the system as a function of the intensity and periodicity of the cleaning and of the values of the parameters. In particular, we state sufficient conditions under which the mosquito population is eliminated or persists, and under which malaria is eliminated or becomes endemic. We prove our conditions by applying results for cooperative systems with concave nonlinearities.
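The structural ingredient of this record, an ODE with impulsive "cleaning" controls, can be illustrated in a few lines: integrate smoothly between cleanings, then apply a jump. The one-state logistic dynamics and all parameters below are hypothetical stand-ins; the paper's model has several compartments and rigorous persistence conditions.

```python
# Toy impulsive-control loop: smooth ODE between cleanings, jump at cleanings.
import numpy as np
from scipy.integrate import solve_ivp

def mosquito_ode(t, m, r=0.3, K=100.0):
    return r * m * (1.0 - m / K)              # logistic growth between cleanings

def simulate(m0=10.0, period=30.0, kill_frac=0.7, n_cycles=6):
    t_all, m_all, m = [], [], m0
    for k in range(n_cycles):
        sol = solve_ivp(mosquito_ode, (k * period, (k + 1) * period), [m],
                        max_step=0.5)
        t_all.append(sol.t); m_all.append(sol.y[0])
        m = sol.y[0, -1] * (1.0 - kill_frac)   # impulsive jump at cleaning time
    return np.concatenate(t_all), np.concatenate(m_all)

t, m = simulate()
print(f"final mosquito density after 6 cleaning cycles: {m[-1]:.2f}")
```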
2107.03475
Pantea Moghimi
Pantea Moghimi, Anh The Dang, Theoden I. Netoff, Kelvin O. Lim, Gowtham Atluri
A Review on MR Based Human Brain Parcellation Methods
31 pages, 3 figures, 2 tables
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Brain parcellations play a ubiquitous role in the analysis of magnetic resonance imaging (MRI) datasets. Over 100 years of research has been conducted in pursuit of an ideal brain parcellation. Different methods have been developed and studied for constructing brain parcellations using different imaging modalities. More recently, several data-driven parcellation methods have been adopted from data mining, machine learning, and statistics communities. With contributions from different scientific fields, there is a rich body of literature that needs to be examined to appreciate the breadth of existing research and the gaps that need to be investigated. In this work, we review the large body of in vivo brain parcellation research spanning different neuroimaging modalities and methods. A key contribution of this work is a semantic organization of this large body of work into different taxonomies, making it easy to understand the breadth and depth of the brain parcellation literature. Specifically, we categorized the existing parcellations into three groups: Anatomical parcellations, functional parcellations, and structural parcellations which are constructed using T1-weighted MRI, functional MRI (fMRI), and diffusion-weighted imaging (DWI) datasets, respectively. We provide a multi-level taxonomy of different methods studied in each of these categories, compare their relative strengths and weaknesses, and highlight the challenges currently faced for the development of brain parcellations.
[ { "created": "Wed, 7 Jul 2021 20:55:51 GMT", "version": "v1" } ]
2021-07-09
[ [ "Moghimi", "Pantea", "" ], [ "Dang", "Anh The", "" ], [ "Netoff", "Theoden I.", "" ], [ "Lim", "Kelvin O.", "" ], [ "Atluri", "Gowtham", "" ] ]
Brain parcellations play a ubiquitous role in the analysis of magnetic resonance imaging (MRI) datasets. Over 100 years of research has been conducted in pursuit of an ideal brain parcellation. Different methods have been developed and studied for constructing brain parcellations using different imaging modalities. More recently, several data-driven parcellation methods have been adopted from data mining, machine learning, and statistics communities. With contributions from different scientific fields, there is a rich body of literature that needs to be examined to appreciate the breadth of existing research and the gaps that need to be investigated. In this work, we review the large body of in vivo brain parcellation research spanning different neuroimaging modalities and methods. A key contribution of this work is a semantic organization of this large body of work into different taxonomies, making it easy to understand the breadth and depth of the brain parcellation literature. Specifically, we categorized the existing parcellations into three groups: Anatomical parcellations, functional parcellations, and structural parcellations which are constructed using T1-weighted MRI, functional MRI (fMRI), and diffusion-weighted imaging (DWI) datasets, respectively. We provide a multi-level taxonomy of different methods studied in each of these categories, compare their relative strengths and weaknesses, and highlight the challenges currently faced for the development of brain parcellations.
2406.01627
Zicheng Liu
Zicheng Liu, Jiahui Li, Siyuan Li, Zelin Zang, Cheng Tan, Yufei Huang, Yajing Bai, and Stan Z. Li
GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models
null
null
null
null
q-bio.GN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Genomic Foundation Model (GFM) paradigm is expected to facilitate the extraction of generalizable representations from massive genomic data, thereby enabling their application across a spectrum of downstream tasks. Despite advancements, the lack of an evaluation framework makes it difficult to ensure equitable assessment, owing to differing experimental settings, model intricacy, benchmark datasets, and reproducibility challenges. In the absence of standardization, comparative analyses risk becoming biased and unreliable. To surmount this impasse, we introduce GenBench, a comprehensive benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models. GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies. We conduct systematic evaluations on datasets spanning diverse biological domains, with particular emphasis on both short-range and long-range genomic tasks, including three key classes of DNA tasks covering the coding region, the non-coding region, and genome structure. Moreover, we provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance. Our findings reveal an interesting observation: independent of the number of parameters, the discernible difference in preference between attention-based and convolution-based models on short- and long-range tasks may provide insights into the future design of GFMs.
[ { "created": "Sat, 1 Jun 2024 08:01:05 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 10:51:22 GMT", "version": "v2" } ]
2024-06-06
[ [ "Liu", "Zicheng", "" ], [ "Li", "Jiahui", "" ], [ "Li", "Siyuan", "" ], [ "Zang", "Zelin", "" ], [ "Tan", "Cheng", "" ], [ "Huang", "Yufei", "" ], [ "Bai", "Yajing", "" ], [ "Li", "Stan Z.", "" ] ]
The Genomic Foundation Model (GFM) paradigm is expected to facilitate the extraction of generalizable representations from massive genomic data, thereby enabling their application across a spectrum of downstream tasks. Despite advancements, the lack of an evaluation framework makes it difficult to ensure equitable assessment, owing to differing experimental settings, model intricacy, benchmark datasets, and reproducibility challenges. In the absence of standardization, comparative analyses risk becoming biased and unreliable. To surmount this impasse, we introduce GenBench, a comprehensive benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models. GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies. We conduct systematic evaluations on datasets spanning diverse biological domains, with particular emphasis on both short-range and long-range genomic tasks, including three key classes of DNA tasks covering the coding region, the non-coding region, and genome structure. Moreover, we provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance. Our findings reveal an interesting observation: independent of the number of parameters, the discernible difference in preference between attention-based and convolution-based models on short- and long-range tasks may provide insights into the future design of GFMs.
1602.05877
Ivo Siekmann
Ivo Siekmann, Mark Fackrell, Edmund J. Crampin and Peter Taylor
Modelling modal gating of ion channels with hierarchical Markov models
28 pages, 8 figures, 3 tables
Proc. R. Soc. A 2016 472 20160122
10.1098/rspa.2016.0122
null
q-bio.QM math.PR q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many ion channels spontaneously switch between different levels of activity. Although this behaviour, known as modal gating, has been observed for a long time, it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, enables us to combine these components so that both mode switching and the kinetics within modes are appropriately represented in the integrated model. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parameterised using experimental data and show that it provides a better representation than a previous model of the same data set. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models.
[ { "created": "Thu, 18 Feb 2016 16:58:42 GMT", "version": "v1" } ]
2018-08-14
[ [ "Siekmann", "Ivo", "" ], [ "Fackrell", "Mark", "" ], [ "Crampin", "Edmund J.", "" ], [ "Taylor", "Peter", "" ] ]
Many ion channels spontaneously switch between different levels of activity. Although this behaviour, known as modal gating, has been observed for a long time, it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, enables us to combine these components so that both mode switching and the kinetics within modes are appropriately represented in the integrated model. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parameterised using experimental data and show that it provides a better representation than a previous model of the same data set. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models.
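The two-level structure this record describes can be made concrete with a minimal Gillespie-style simulation: a slow mode process selects which fast open/close rates apply. All rates below are invented for illustration and are not the parameters fitted in the paper.

```python
# Two-level (hierarchical) Markov channel sketch: slow modes, fast gating.
import numpy as np

rng = np.random.default_rng(1)
mode_rates = {("high", "low"): 0.02, ("low", "high"): 0.01}   # slow switching
gate_rates = {"high": (5.0, 1.0), "low": (0.2, 2.0)}          # (open, close) per mode

def simulate(t_end=200.0):
    t, mode, state, trace = 0.0, "high", 0, []  # state: 0 closed, 1 open
    while t < t_end:
        k_open, k_close = gate_rates[mode]
        k_gate = k_close if state else k_open
        other = "low" if mode == "high" else "high"
        k_mode = mode_rates[(mode, other)]
        total = k_gate + k_mode
        t += rng.exponential(1.0 / total)        # Gillespie waiting time
        if rng.random() < k_gate / total:
            state = 1 - state                    # fast open <-> closed event
        else:
            mode = other                         # slow mode switch
        trace.append((t, mode, state))
    return trace

trace = simulate()
print(f"{len(trace)} transitions; open fraction ~ "
      f"{np.mean([s for _, _, s in trace]):.2f}")
```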
2304.09239
John Vandermeer
John Vandermeer, Ivette Perfecto
The ghost of ecology in chaos, combining intransitive and higher order effects
29 pages, 15 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Historically, musings about the structure of ecological communities have revolved around the structure of pairwise interactions: competition, predation, mutualism, and so on. Recently, a growing literature acknowledges that the pair of species is not necessarily the metaphorical molecule of community ecology, and that certain structures containing three or more species may not be usefully divisible into pairwise components. Two examples are intransitive competition (species A dominates species B, which dominates species C, which dominates species A) and nonlinear higher-order effects. While these two processes have been discussed extensively, an explicit analysis of how they behave when simultaneously part of the same dynamic system has not yet appeared in the literature. A concrete situation exists on coffee farms in Puerto Rico, where three ant species, at least on some farms, form an intransitive competitive triplet, and that triplet is strongly influenced, nonlinearly, by a fly parasitoid that modifies the competitive ability of one of the species in the triplet. Using this arrangement as a template, we explore the dynamical consequences with a simple ODE model. Results are complicated and include alternative periodic and chaotic attractors. The qualitative structures of those complications, however, may be retrieved easily from a reflection on the basic natural history of the system.
[ { "created": "Tue, 18 Apr 2023 18:57:03 GMT", "version": "v1" } ]
2023-04-20
[ [ "Vandermeer", "John", "" ], [ "Perfecto", "Ivette", "" ] ]
Historically, musings about the structure of ecological communities have revolved around the structure of pairwise interactions: competition, predation, mutualism, and so on. Recently, a growing literature acknowledges that the pair of species is not necessarily the metaphorical molecule of community ecology, and that certain structures containing three or more species may not be usefully divisible into pairwise components. Two examples are intransitive competition (species A dominates species B, which dominates species C, which dominates species A) and nonlinear higher-order effects. While these two processes have been discussed extensively, an explicit analysis of how they behave when simultaneously part of the same dynamic system has not yet appeared in the literature. A concrete situation exists on coffee farms in Puerto Rico, where three ant species, at least on some farms, form an intransitive competitive triplet, and that triplet is strongly influenced, nonlinearly, by a fly parasitoid that modifies the competitive ability of one of the species in the triplet. Using this arrangement as a template, we explore the dynamical consequences with a simple ODE model. Results are complicated and include alternative periodic and chaotic attractors. The qualitative structures of those complications, however, may be retrieved easily from a reflection on the basic natural history of the system.
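A sketch of the qualitative setup this record combines: three Lotka-Volterra competitors in an intransitive (rock-paper-scissors) loop, plus a fourth variable standing in for the parasitoid, which nonlinearly weakens one species' competitive effect. All coefficients and functional forms are illustrative, not the paper's model.

```python
# Intransitive 3-species competition with a parasitoid higher-order effect.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 0.6, 1.4],    # cyclic dominance among the three species
              [1.4, 1.0, 0.6],
              [0.6, 1.4, 1.0]])

def rhs(t, y, b=0.5, d=0.2, h=2.0):
    x, p = y[:3], y[3]
    Aeff = A.copy()
    Aeff[1, 0] /= (1.0 + h * p)   # parasitoid weakens species 1's effect on 2
    dx = x * (1.0 - Aeff @ x)     # Lotka-Volterra competition
    dp = p * (b * x[0] - d)       # parasitoid grows on its host species
    return np.concatenate([dx, [dp]])

sol = solve_ivp(rhs, (0, 500), [0.3, 0.3, 0.3, 0.1], max_step=0.1)
print("final densities:", np.round(sol.y[:, -1], 3))
```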
2205.04877
Samrat Mondal
Joy Das Bairagya, Samrat Sohel Mondal, Debashish Chowdhury, Sagar Chakraborty
Eco-evolutionary games for harvesting self-renewing common resource: Effect of growing harvester population
10 pages, 3 figures
null
null
null
q-bio.PE nlin.AO physics.bio-ph physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The tragedy of the commons (TOC) is a ubiquitous social dilemma witnessed in interactions between a population of living entities and shared resources available to them: The individuals in the population tend to selfishly overexploit a common resource as it is arguably the rational choice, or in case of non-human beings, it may be an evolutionarily uninvadable action. How to avert the TOC is a significant problem related to the conservation of resources. It is not hard to envisage situations where the resource could be self-renewing and the size of the population may be dependent on the state of the resource through the fractions of the population employing different exploitation rates. If the self-renewal rate of the resource lies between the maximum and the minimum exploitation rates, it is not a priori obvious under what conditions the TOC can be averted. In this paper, we address this question analytically and numerically using the setup of an evolutionary game theoretical replicator equation that models the Darwinian tenet of natural selection. Through the replicator equation, while we investigate how a population of replicators exploit the shared resource, the latter's dynamical feedback on the former is also not ignored. We also present a transparent bottom-up derivation of the game-resource feedback model to facilitate future studies on the stochastic effects on the findings presented herein.
[ { "created": "Tue, 10 May 2022 13:23:20 GMT", "version": "v1" } ]
2022-05-11
[ [ "Bairagya", "Joy Das", "" ], [ "Mondal", "Samrat Sohel", "" ], [ "Chowdhury", "Debashish", "" ], [ "Chakraborty", "Sagar", "" ] ]
The tragedy of the commons (TOC) is a ubiquitous social dilemma witnessed in interactions between a population of living entities and shared resources available to them: The individuals in the population tend to selfishly overexploit a common resource as it is arguably the rational choice, or in case of non-human beings, it may be an evolutionarily uninvadable action. How to avert the TOC is a significant problem related to the conservation of resources. It is not hard to envisage situations where the resource could be self-renewing and the size of the population may be dependent on the state of the resource through the fractions of the population employing different exploitation rates. If the self-renewal rate of the resource lies between the maximum and the minimum exploitation rates, it is not a priori obvious under what conditions the TOC can be averted. In this paper, we address this question analytically and numerically using the setup of an evolutionary game theoretical replicator equation that models the Darwinian tenet of natural selection. Through the replicator equation, while we investigate how a population of replicators exploit the shared resource, the latter's dynamical feedback on the former is also not ignored. We also present a transparent bottom-up derivation of the game-resource feedback model to facilitate future studies on the stochastic effects on the findings presented herein.
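The game-resource feedback in this record has a simple skeleton worth writing out: a replicator equation for the fraction of low-effort harvesters, coupled to a logistically renewing resource depleted by the average harvest. The payoff and harvest functions below are illustrative stand-ins for the paper's bottom-up derivation.

```python
# Replicator dynamics coupled to a self-renewing common resource (toy form).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=1.0, eL=0.4, eH=1.6):
    x, R = y                                   # x: fraction of low-effort types
    fL, fH = eL * R, eH * R                    # per-capita payoffs from harvest
    fbar = x * fL + (1 - x) * fH               # population-average payoff/harvest
    dx = x * (fL - fbar)                       # replicator equation
    dR = r * R * (1 - R / K) - fbar            # logistic renewal minus harvest
    return [dx, dR]

sol = solve_ivp(rhs, (0, 200), [0.5, 0.8], max_step=0.1)
print(f"low-effort fraction -> {sol.y[0, -1]:.3f}, resource -> {sol.y[1, -1]:.3f}")
```

In this toy form the high-effort type always earns more, so low effort is driven out and the resource is run down, which is the tragedy-of-the-commons baseline the paper's analysis refines.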
2207.09671
Aminur Rahman
Aminur Rahman, Angela Peace, Ramesh Kesawan, Souparno Ghosh
Spatio-temporal models of infectious disease with high rates of asymptomatic transmission
8 figures
null
null
null
q-bio.PE math.DS physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The surprisingly mercurial Covid-19 pandemic has highlighted the need not only to accelerate research on infectious disease, but also to study it using novel techniques and perspectives. A major contributor to the difficulty of containing the current pandemic is the highly asymptomatic nature of the disease. In this investigation, we develop a modeling framework to study the spatio-temporal evolution of diseases with high rates of asymptomatic transmission, and we apply this framework to a hypothetical country with mathematically tractable geography; namely, square counties uniformly organized into a rectangle. We first derive a model for the temporal dynamics of susceptible, infected, and recovered populations, which is applied at the county level. Next we use likelihood-based parameter estimation to derive temporally varying disease transmission parameters on the state-wide level. While these two methods give us some spatial structure and show the effects of behavioral and policy changes, they miss the evolution of hot zones that have caused significant difficulties in resource allocation during the current pandemic. It is evident that the distribution of cases will not remain statically tied to population density, as with many other diseases, but will continuously evolve. We model this as a diffusive process where the diffusivity is spatially varying based on the population distribution, and temporally varying based on the current number of simulated asymptomatic cases. With this final addition coupled to the SIR model with temporally varying transmission parameters, we capture the evolution of "hot zones" in our hypothetical setup.
[ { "created": "Wed, 20 Jul 2022 06:02:02 GMT", "version": "v1" } ]
2022-07-21
[ [ "Rahman", "Aminur", "" ], [ "Peace", "Angela", "" ], [ "Kesawan", "Ramesh", "" ], [ "Ghosh", "Souparno", "" ] ]
The surprisingly mercurial Covid-19 pandemic has highlighted the need not only to accelerate research on infectious disease, but also to study it using novel techniques and perspectives. A major contributor to the difficulty of containing the current pandemic is the highly asymptomatic nature of the disease. In this investigation, we develop a modeling framework to study the spatio-temporal evolution of diseases with high rates of asymptomatic transmission, and we apply this framework to a hypothetical country with mathematically tractable geography; namely, square counties uniformly organized into a rectangle. We first derive a model for the temporal dynamics of susceptible, infected, and recovered populations, which is applied at the county level. Next we use likelihood-based parameter estimation to derive temporally varying disease transmission parameters on the state-wide level. While these two methods give us some spatial structure and show the effects of behavioral and policy changes, they miss the evolution of hot zones that have caused significant difficulties in resource allocation during the current pandemic. It is evident that the distribution of cases will not remain statically tied to population density, as with many other diseases, but will continuously evolve. We model this as a diffusive process where the diffusivity is spatially varying based on the population distribution, and temporally varying based on the current number of simulated asymptomatic cases. With this final addition coupled to the SIR model with temporally varying transmission parameters, we capture the evolution of "hot zones" in our hypothetical setup.
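A small grid sketch of the record's idea: county-level SIR updates followed by a diffusion step whose local diffusivity scales with population density and the current infectious load. All coefficients are illustrative, and the product of a spatially varying diffusivity with a plain Laplacian is only a crude discretization of the paper's diffusive process.

```python
# County-grid SIR with a state-dependent diffusion step for "hot zones".
import numpy as np

def laplacian(F):
    return (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
            np.roll(F, 1, 1) + np.roll(F, -1, 1) - 4 * F)

def step(S, I, R, pop, beta=0.3, gamma=0.1, d0=0.05, dt=1.0):
    new_inf = beta * S * I / pop
    S, I, R = S - dt * new_inf, I + dt * (new_inf - gamma * I), R + dt * gamma * I
    D = d0 * (pop / pop.max()) * (1.0 + I / pop)   # state-dependent diffusivity
    I = I + dt * D * laplacian(I)                  # infection spreads spatially
    return S, I, R

n = 20
pop = np.random.default_rng(2).uniform(1e3, 1e5, (n, n))
S, I, R = pop.copy(), np.zeros((n, n)), np.zeros((n, n))
I[n // 2, n // 2], S[n // 2, n // 2] = 100.0, S[n // 2, n // 2] - 100.0
for _ in range(100):
    S, I, R = step(S, I, R, pop)
print(f"total infected after 100 steps: {I.sum():.0f}")
```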
2310.17896
Ayan Paul
Arunava Patra, Supratim Sengupta, Ayan Paul, Sagar Chakraborty
Inferring to C or not to C: Evolutionary games with Bayesian inferential strategies
13 pages, 9 figures
null
null
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strategies for sustaining cooperation and preventing exploitation by selfish agents in repeated games have mostly been restricted to Markovian strategies where the response of an agent depends on the actions in the previous round. Such strategies are characterized by lack of learning. However, learning from accumulated evidence over time and using the evidence to dynamically update our response is a key feature of living organisms. Bayesian inference provides a framework for such evidence-based learning mechanisms. It is therefore imperative to understand how strategies based on Bayesian learning fare in repeated games with Markovian strategies. Here, we consider a scenario where the Bayesian player uses the accumulated evidence of the opponent's actions over several rounds to continuously update her belief about the reactive opponent's strategy. The Bayesian player can then act on her inferred belief in different ways. By studying repeated Prisoner's dilemma games with such Bayesian inferential strategies, both in infinite and finite populations, we identify the conditions under which such strategies can be evolutionarily stable. We find that a Bayesian strategy that is less altruistic than the inferred belief about the opponent's strategy can outperform a larger set of reactive strategies, whereas one that is more generous than the inferred belief is more successful when the benefit-to-cost ratio of mutual cooperation is high. Our analysis reveals how learning the opponent's strategy through Bayesian inference, as opposed to utility maximization, can be beneficial in the long run, in preventing exploitation and eventual invasion by reactive strategies.
[ { "created": "Fri, 27 Oct 2023 05:06:34 GMT", "version": "v1" } ]
2023-10-30
[ [ "Patra", "Arunava", "" ], [ "Sengupta", "Supratim", "" ], [ "Paul", "Ayan", "" ], [ "Chakraborty", "Sagar", "" ] ]
Strategies for sustaining cooperation and preventing exploitation by selfish agents in repeated games have mostly been restricted to Markovian strategies where the response of an agent depends on the actions in the previous round. Such strategies are characterized by lack of learning. However, learning from accumulated evidence over time and using the evidence to dynamically update our response is a key feature of living organisms. Bayesian inference provides a framework for such evidence-based learning mechanisms. It is therefore imperative to understand how strategies based on Bayesian learning fare in repeated games with Markovian strategies. Here, we consider a scenario where the Bayesian player uses the accumulated evidence of the opponent's actions over several rounds to continuously update her belief about the reactive opponent's strategy. The Bayesian player can then act on her inferred belief in different ways. By studying repeated Prisoner's dilemma games with such Bayesian inferential strategies, both in infinite and finite populations, we identify the conditions under which such strategies can be evolutionarily stable. We find that a Bayesian strategy that is less altruistic than the inferred belief about the opponent's strategy can outperform a larger set of reactive strategies, whereas one that is more generous than the inferred belief is more successful when the benefit-to-cost ratio of mutual cooperation is high. Our analysis reveals how learning the opponent's strategy through Bayesian inference, as opposed to utility maximization, can be beneficial in the long run, in preventing exploitation and eventual invasion by reactive strategies.
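The Bayesian ingredient of this record is easy to sketch: maintain a Beta posterior over a reactive opponent's cooperation probability and act on the posterior mean, optionally shifted toward more or less generosity (the abstract's "more generous than the inferred belief"). The class below is a purely illustrative stand-in for the strategies analyzed in the paper.

```python
# Beta-posterior belief about the opponent, with a generosity shift.
import numpy as np

class BayesianPlayer:
    def __init__(self, shift=0.0):
        self.a, self.b = 1.0, 1.0       # Beta(1,1) prior on Pr(opponent cooperates)
        self.shift = shift              # >0: more generous than the inferred belief

    def observe(self, opponent_cooperated: bool):
        if opponent_cooperated:
            self.a += 1.0               # conjugate Bayesian update
        else:
            self.b += 1.0

    def act(self, rng) -> bool:
        belief = self.a / (self.a + self.b)
        p_coop = float(np.clip(belief + self.shift, 0.0, 1.0))
        return rng.random() < p_coop    # cooperate with belief-derived probability

rng = np.random.default_rng(3)
player, p_opponent = BayesianPlayer(shift=0.1), 0.7
for _ in range(200):
    player.observe(rng.random() < p_opponent)
print(f"inferred cooperation rate: {player.a / (player.a + player.b):.2f}")
```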
2304.04713
Masahito Ohue
Kairi Furui, Masahito Ohue
Faster Lead Optimization Mapper Algorithm for Large-Scale Relative Free Energy Perturbation
null
null
null
null
q-bio.BM cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, free energy perturbation (FEP) calculations have garnered increasing attention as tools to support drug discovery. The lead optimization mapper (Lomap) was proposed as an algorithm to calculate the relative free energy between ligands efficiently. However, Lomap requires checking whether each edge in the FEP graph is removable, which necessitates checking the constraints for all edges. Consequently, conventional Lomap requires significant computation time, at least several hours for cases involving hundreds of compounds, and is impractical for cases with more than tens of thousands of edges. In this study, we aimed to reduce the computational cost of Lomap to enable the construction of FEP graphs for hundreds of compounds. We can reduce the overall number of constraint checks required from an amount dependent on the number of edges to one dependent on the number of nodes by using the chunk check process to check the constraints for as many edges as possible simultaneously. Moreover, the output graph is equivalent to that obtained using conventional Lomap, enabling direct replacement of the original Lomap with our method. With our improvement, the execution was tens to hundreds of times faster than that of the original Lomap. https://github.com/ohuelab/FastLomap
[ { "created": "Mon, 10 Apr 2023 17:14:19 GMT", "version": "v1" } ]
2023-04-11
[ [ "Furui", "Kairi", "" ], [ "Ohue", "Masahito", "" ] ]
In recent years, free energy perturbation (FEP) calculations have garnered increasing attention as tools to support drug discovery. The lead optimization mapper (Lomap) was proposed as an algorithm to calculate the relative free energy between ligands efficiently. However, Lomap requires checking whether each edge in the FEP graph is removable, which necessitates checking the constraints for all edges. Consequently, conventional Lomap requires significant computation time, at least several hours for cases involving hundreds of compounds, and is impractical for cases with more than tens of thousands of edges. In this study, we aimed to reduce the computational cost of Lomap to enable the construction of FEP graphs for hundreds of compounds. We can reduce the overall number of constraint checks required from an amount dependent on the number of edges to one dependent on the number of nodes by using the chunk check process to check the constraints for as many edges as possible simultaneously. Moreover, the output graph is equivalent to that obtained using conventional Lomap, enabling direct replacement of the original Lomap with our method. With our improvement, the execution was tens to hundreds of times faster than that of the original Lomap. https://github.com/ohuelab/FastLomap
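The "chunk check" idea in this record can be caricatured as follows: tentatively remove a whole chunk of candidate edges, keep the removal if the graph constraints still hold, and bisect only on failure. Note that this greedy bisection variant does not reproduce the exact output equivalence the authors report for their algorithm; it only illustrates how constraint checks can be amortized over many edges, and the real Lomap constraints are abstracted into a single callback.

```python
# Chunked edge pruning with bisection fallback (illustrative, not Lomap itself).
import networkx as nx

def prune_edges(G, candidates, satisfies_constraints):
    def try_chunk(chunk):
        if not chunk:
            return
        G.remove_edges_from(chunk)
        if satisfies_constraints(G):
            return                        # whole chunk removable in one check
        G.add_edges_from(chunk)           # undo, then recurse on halves
        if len(chunk) == 1:
            return
        mid = len(chunk) // 2
        try_chunk(chunk[:mid]); try_chunk(chunk[mid:])
    try_chunk(list(candidates))
    return G

G = nx.complete_graph(8)
pruned = prune_edges(G, sorted(G.edges()),
                     lambda H: nx.is_connected(H)
                     and min(d for _, d in H.degree()) >= 2)
print(f"kept {pruned.number_of_edges()} of 28 edges")
```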
1405.2780
Michael Sadovsky
Michael G. Sadovsky, Maria Yu. Senashova
Reflexive spatial behaviour does not guarantee evolution advantage in prey--predator communities
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a model of a spatially distributed population consisting of two species with a "\textsl{predator\,--\,prey}" interaction; each species occupies two stations. Transfer of individuals between the stations (migration) is not random and maximizes the net reproduction of each species. In addition, each species implements a reflexive behaviour strategy to determine the optimal migration flow.
[ { "created": "Mon, 12 May 2014 14:35:30 GMT", "version": "v1" } ]
2014-05-13
[ [ "Sadovsky", "Michael G.", "" ], [ "Senashova", "Maria Yu.", "" ] ]
We consider a model of a spatially distributed population consisting of two species with a "\textsl{predator\,--\,prey}" interaction; each species occupies two stations. Transfer of individuals between the stations (migration) is not random and maximizes the net reproduction of each species. In addition, each species implements a reflexive behaviour strategy to determine the optimal migration flow.
2402.07949
Risto Miikkulainen
Ashok Khanna, Olivier Francon, Risto Miikkulainen
Optimizing the Design of an Artificial Pancreas to Improve Diabetes Management
null
null
null
null
q-bio.QM cs.AI cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Diabetes, a chronic condition that impairs how the body turns food into energy, i.e. blood glucose, affects 38 million people in the US alone. The standard treatment is to supplement carbohydrate intake with an artificial pancreas, i.e. a continuous insulin pump (basal shots), as well as occasional insulin injections (bolus shots). The goal of the treatment is to keep blood glucose at the center of an acceptable range, as measured through a continuous glucose meter. A secondary goal is to minimize injections, which are unpleasant and difficult for some patients to implement. In this study, neuroevolution was used to discover an optimal strategy for the treatment. Based on a dataset of 30 days of treatment and measurements of a single patient, a random forest was first trained to predict future glucose levels. A neural network was then evolved to prescribe carbohydrates, basal pumping levels, and bolus injections. Evolution discovered a Pareto front that reduced deviation from the target and number of injections compared to the original data, thus improving patients' quality of life. To make the system easier to adopt, a language interface was developed with a large language model. Thus, these technologies not only improve patient care but also adoption in a broader population.
[ { "created": "Sat, 10 Feb 2024 00:49:46 GMT", "version": "v1" } ]
2024-02-14
[ [ "Khanna", "Ashok", "" ], [ "Francon", "Olivier", "" ], [ "Miikkulainen", "Risto", "" ] ]
Diabetes, a chronic condition that impairs how the body turns food into energy, i.e. blood glucose, affects 38 million people in the US alone. The standard treatment is to supplement carbohydrate intake with an artificial pancreas, i.e. a continuous insulin pump (basal shots), as well as occasional insulin injections (bolus shots). The goal of the treatment is to keep blood glucose at the center of an acceptable range, as measured through a continuous glucose meter. A secondary goal is to minimize injections, which are unpleasant and difficult for some patients to implement. In this study, neuroevolution was used to discover an optimal strategy for the treatment. Based on a dataset of 30 days of treatment and measurements of a single patient, a random forest was first trained to predict future glucose levels. A neural network was then evolved to prescribe carbohydrates, basal pumping levels, and bolus injections. Evolution discovered a Pareto front that reduced deviation from the target and number of injections compared to the original data, thus improving patients' quality of life. To make the system easier to adopt, a language interface was developed with a large language model. Thus, these technologies not only improve patient care but also adoption in a broader population.
0708.1781
Eduardo Candelario-Jalil
A. Gonzalez-Falcon, E. Candelario-Jalil, M. Garcia-Cabrera, O. S. Leon
Effects of pyruvate administration on infarct volume and neurological deficits following permanent focal cerebral ischemia in rats
null
Brain Research 990(1-2): 1-7 (2003)
null
null
q-bio.TO
null
Recent experimental evidence indicates that pyruvate, the final metabolite of glycolysis, has a remarkable protective effect against different types of brain injury. The purpose of this study was to assess the neuroprotective effect and the neurological outcome after pyruvate administration in a model of ischemic stroke induced by permanent middle cerebral artery occlusion (pMCAO) in rats. Three doses of pyruvate (250, 500 and 1000 mg/kg, i.p.) or vehicle were administered intraperitoneally 30 min after pMCAO. In another set of experiments, pyruvate was given either before ischemia, immediately after ischemia, or in a long-term administration paradigm. Functional outcome, mortality and infarct volume were determined 24 h after stroke. Although the lowest doses of pyruvate reduced mortality and neurological deficits, no concomitant reduction in infarct volume was observed. The highest dose of pyruvate increased cortical infarction by 27% when administered 30 min after pMCAO. In addition, when pyruvate was given before pMCAO, a significant increase in neurological deficits was noticed. Surprisingly, and contrary to what was found in the case of transient global ischemia, the present findings do not support a major neuroprotective role for pyruvate in permanent focal cerebral ischemia, suggesting two distinct mechanisms involved in the effects of this glycolytic metabolite in the ischemic brain.
[ { "created": "Mon, 13 Aug 2007 22:10:46 GMT", "version": "v1" } ]
2007-08-15
[ [ "Gonzalez-Falcon", "A.", "" ], [ "Candelario-Jalil", "E.", "" ], [ "Garcia-Cabrera", "M.", "" ], [ "Leon", "O. S.", "" ] ]
Recent experimental evidence indicates that pyruvate, the final metabolite of glycolysis, has a remarkable protective effect against different types of brain injury. The purpose of this study was to assess the neuroprotective effect and the neurological outcome after pyruvate administration in a model of ischemic stroke induced by permanent middle cerebral artery occlusion (pMCAO) in rats. Three doses of pyruvate (250, 500 and 1000 mg/kg, i.p.) or vehicle were administered intraperitoneally 30 min after pMCAO. In another set of experiments, pyruvate was given either before ischemia, immediately after ischemia, or in a long-term administration paradigm. Functional outcome, mortality and infarct volume were determined 24 h after stroke. Although the lowest doses of pyruvate reduced mortality and neurological deficits, no concomitant reduction in infarct volume was observed. The highest dose of pyruvate increased cortical infarction by 27% when administered 30 min after pMCAO. In addition, when pyruvate was given before pMCAO, a significant increase in neurological deficits was noticed. Surprisingly, and contrary to what was found in the case of transient global ischemia, the present findings do not support a major neuroprotective role for pyruvate in permanent focal cerebral ischemia, suggesting two distinct mechanisms involved in the effects of this glycolytic metabolite in the ischemic brain.
1501.04732
Aditya Hernowo
Aditya Tri Hernowo and R. Haryo Yudono
Strabismic syndromes and syndromic strabismus - a brief review
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strabismus can be found in association with congenital heart diseases, for example in velocardiofacial (DiGeorge) syndrome, Down syndrome, mild dysmorphic features, the CHARGE association, Turner syndrome, Ullrich-Turner syndrome, and cardiofaciocutaneous syndrome [1-4]. Some types of strabismus are heritable (e.g. the infantile esotropia syndrome), particularly the ones associated with multisystem disorders, e.g. Moebius syndrome, Prader-Willi syndrome, craniofacial dysostoses, and mitochondrial myopathies [5]. Given the complexities that a case of strabismus may involve, it is worthwhile to learn more about strabismic syndromes and syndromic strabismus. This brief review, as its name implies, does not attempt to cover every angle of these syndromic conditions, but offers a refresher on the more prevalent strabismus-related syndromes.
[ { "created": "Tue, 20 Jan 2015 08:27:20 GMT", "version": "v1" } ]
2015-01-21
[ [ "Hernowo", "Aditya Tri", "" ], [ "Yudono", "R. Haryo", "" ] ]
Strabismus can be found in association with congenital heart diseases, for example in velocardiofacial (DiGeorge) syndrome, Down syndrome, mild dysmorphic features, the CHARGE association, Turner syndrome, Ullrich-Turner syndrome, and cardiofaciocutaneous syndrome [1-4]. Some types of strabismus are heritable (e.g. the infantile esotropia syndrome), particularly the ones associated with multisystem disorders, e.g. Moebius syndrome, Prader-Willi syndrome, craniofacial dysostoses, and mitochondrial myopathies [5]. Given the complexities that a case of strabismus may involve, it is worthwhile to learn more about strabismic syndromes and syndromic strabismus. This brief review, as its name implies, does not attempt to cover every angle of these syndromic conditions, but offers a refresher on the more prevalent strabismus-related syndromes.
0711.3253
Hiroki Sayama
Jonathan P. Newman and Hiroki Sayama
The Effect of Sensory Blind Zones on Milling Behavior in a Dynamic Self-Propelled Particle Model
12 pages, 4 figures
Phys. Rev. E 78, 011913 (2008)
10.1103/PhysRevE.78.011913
null
q-bio.PE
null
Emergent pattern formation in self-propelled particle (SPP) systems is extensively studied because it addresses a range of swarming phenomena which occur without leadership. Here we present a dynamic SPP model in which a sensory blind zone is introduced into each particle's zone of interaction. Using numerical simulations we discovered that the degradation of milling patterns with increasing blind zone ranges undergoes two distinct transitions, including a new, spatially nonhomogeneous transition that involves cessation of particles' motion caused by broken symmetries in their interaction fields. Our results also show the necessity of nearly complete panoramic sensory ability for milling behavior to emerge in dynamic SPP models, suggesting a possible relationship between collective behavior and sensory systems of biological organisms.
[ { "created": "Wed, 21 Nov 2007 02:27:12 GMT", "version": "v1" }, { "created": "Sat, 22 Mar 2008 23:08:36 GMT", "version": "v2" } ]
2008-07-23
[ [ "Newman", "Jonathan P.", "" ], [ "Sayama", "Hiroki", "" ] ]
Emergent pattern formation in self-propelled particle (SPP) systems is extensively studied because it addresses a range of swarming phenomena which occur without leadership. Here we present a dynamic SPP model in which a sensory blind zone is introduced into each particle's zone of interaction. Using numerical simulations we discovered that the degradation of milling patterns with increasing blind zone ranges undergoes two distinct transitions, including a new, spatially nonhomogeneous transition that involves cessation of particles' motion caused by broken symmetries in their interaction fields. Our results also show the necessity of nearly complete panoramic sensory ability for milling behavior to emerge in dynamic SPP models, suggesting a possible relationship between collective behavior and sensory systems of biological organisms.
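The central device of this record, a sensory blind zone inside each particle's zone of interaction, is straightforward to sketch with a generic Vicsek-like alignment rule: neighbors count only if they lie within a field of view of half-angle `fov` around the particle's heading. The alignment-only update and all parameter values are illustrative stand-ins for the paper's dynamic SPP model.

```python
# Self-propelled particles with a rear blind zone (field of view < 180 deg).
import numpy as np

rng = np.random.default_rng(4)
N, L, speed, radius = 100, 20.0, 0.3, 3.0
fov = np.deg2rad(135)                    # half-angle; the rest is the blind zone
pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)

def step(pos, theta):
    new_theta = theta.copy()
    for i in range(N):
        rel = (pos - pos[i] + L / 2) % L - L / 2          # periodic displacement
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.arctan2(rel[:, 1], rel[:, 0]) - theta[i]
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        vis = (dist < radius) & (np.abs(bearing) < fov)    # exclude blind zone
        vis[i] = True                                      # always include self
        new_theta[i] = np.arctan2(np.sin(theta[vis]).mean(),
                                  np.cos(theta[vis]).mean())
    vel = speed * np.stack([np.cos(new_theta), np.sin(new_theta)], axis=1)
    return (pos + vel) % L, new_theta

for _ in range(200):
    pos, theta = step(pos, theta)
print(f"polarization: {np.abs(np.exp(1j * theta).mean()):.2f}")
```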
1308.3542
Ross Williamson
Ross S. Williamson, Maneesh Sahani, Jonathan W. Pillow
The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
[ { "created": "Fri, 16 Aug 2013 03:47:19 GMT", "version": "v1" }, { "created": "Tue, 24 Feb 2015 22:29:56 GMT", "version": "v2" } ]
2015-02-26
[ [ "Williamson", "Ross S.", "" ], [ "Sahani", "Maneesh", "" ], [ "Pillow", "Jonathan W.", "" ] ]
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
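To make the claimed equivalence concrete, the following display, in notation chosen here rather than quoted from the paper, shows the Poisson LNP log-likelihood and the empirical single-spike information whose maximization over the filter matrix K the record identifies with MID.

```latex
\begin{align*}
% LNP log-likelihood: binned counts y_t, projected stimuli x_t = K^\top s_t,
% point nonlinearity f, bin width \Delta.
\mathcal{L}(K, f) &= \sum_t \big[\, y_t \log f(x_t) - f(x_t)\,\Delta \,\big] + \text{const},\\
% With f replaced by its nonparametric maximum-likelihood estimate, the
% normalized log-likelihood becomes the empirical single-spike information:
\hat{I}_{\mathrm{ss}}(K) &= \frac{1}{n_{\mathrm{sp}}} \sum_{t:\, y_t \ge 1}
  y_t \log \frac{\hat{p}(x_t \mid \mathrm{spike})}{\hat{p}(x_t)}.
\end{align*}
% Maximizing \hat{I}_ss over the filter K is then the MID estimator, which is
% why MID inherits the Poisson assumption discussed in the abstract.
```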
2110.12221
Giovanni Carmantini
Giovanni Sirio Carmantini, Fabio Schittler Neves, Marc Timme, Serafim Rodrigues
Stochastic facilitation in heteroclinic communication channels
null
Chaos 31, 093130 (2021)
10.1063/5.0054485
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological neural systems encode and transmit information as patterns of activity tracing complex trajectories in high-dimensional state-spaces, inspiring alternative paradigms of information processing. Heteroclinic networks, naturally emerging in artificial neural systems, are networks of saddles in state-space that provide a transparent approach to generate complex trajectories via controlled switches among interconnected saddles. External signals induce specific switching sequences, thus dynamically encoding inputs as trajectories. Recent works have focused either on computational aspects of heteroclinic networks, i.e. Heteroclinic Computing, or their stochastic properties under noise. Yet, how well such systems may transmit information remains an open question. Here we investigate the information transmission properties of heteroclinic networks, studying them as communication channels. Choosing a tractable but representative system exhibiting a heteroclinic network, we investigate the mutual information rate (MIR) between input signals and the resulting sequences of states as the level of noise varies. Intriguingly, MIR does not decrease monotonically with increasing noise. Intermediate noise levels indeed maximize the information transmission capacity by promoting an increased yet controlled exploration of the underlying network of states. Complementing standard stochastic resonance, these results highlight the constructive effect of stochastic facilitation (i.e. noise-enhanced information transfer) on heteroclinic communication channels and possibly on more general dynamical systems exhibiting complex trajectories in state-space.
[ { "created": "Sat, 23 Oct 2021 13:50:16 GMT", "version": "v1" } ]
2021-10-26
[ [ "Carmantini", "Giovanni Sirio", "" ], [ "Neves", "Fabio Schittler", "" ], [ "Timme", "Marc", "" ], [ "Rodrigues", "Serafim", "" ] ]
Biological neural systems encode and transmit information as patterns of activity tracing complex trajectories in high-dimensional state-spaces, inspiring alternative paradigms of information processing. Heteroclinic networks, naturally emerging in artificial neural systems, are networks of saddles in state-space that provide a transparent approach to generate complex trajectories via controlled switches among interconnected saddles. External signals induce specific switching sequences, thus dynamically encoding inputs as trajectories. Recent works have focused either on computational aspects of heteroclinic networks, i.e. Heteroclinic Computing, or their stochastic properties under noise. Yet, how well such systems may transmit information remains an open question. Here we investigate the information transmission properties of heteroclinic networks, studying them as communication channels. Choosing a tractable but representative system exhibiting a heteroclinic network, we investigate the mutual information rate (MIR) between input signals and the resulting sequences of states as the level of noise varies. Intriguingly, MIR does not decrease monotonically with increasing noise. Intermediate noise levels indeed maximize the information transmission capacity by promoting an increased yet controlled exploration of the underlying network of states. Complementing standard stochastic resonance, these results highlight the constructive effect of stochastic facilitation (i.e. noise-enhanced information transfer) on heteroclinic communication channels and possibly on more general dynamical systems exhibiting complex trajectories in state-space.
1307.0757
Yi Ming Zou
Yi Ming Zou
Dynamics of Boolean Networks
null
Discrete and Continuous Dynamical Systems Series S, Vol 4 (6), 2011, 1629-1640
10.3934/dcdss.2011.4.1629
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Boolean networks are special types of finite state time-discrete dynamical systems. A Boolean network can be described by a function from an n-dimensional vector space over the field of two elements to itself. A fundamental problem in studying these dynamical systems is to link their long term behaviors to the structures of the functions that define them. In this paper, a method for deriving a Boolean network's dynamical information via its disjunctive normal form is explained. For a given Boolean network, a matrix with entries 0 and 1 is associated with the polynomial function that represents the network, then the information on the fixed points and the limit cycles is derived by analyzing the matrix. The described method provides an algorithm for the determination of the fixed points from the polynomial expression of a Boolean network. The method can also be used to construct Boolean networks with prescribed limit cycles and fixed points. Examples are provided to explain the algorithm.
[ { "created": "Tue, 2 Jul 2013 16:49:00 GMT", "version": "v1" } ]
2013-07-03
[ [ "Zou", "Yi Ming", "" ] ]
Boolean networks are special types of finite-state, time-discrete dynamical systems. A Boolean network can be described by a function from an n-dimensional vector space over the field of two elements to itself. A fundamental problem in studying these dynamical systems is to link their long-term behaviors to the structures of the functions that define them. In this paper, a method for deriving a Boolean network's dynamical information via its disjunctive normal form is explained. For a given Boolean network, a matrix with entries 0 and 1 is associated with the polynomial function that represents the network; the information on the fixed points and the limit cycles is then derived by analyzing this matrix. The described method provides an algorithm for determining the fixed points from the polynomial expression of a Boolean network. The method can also be used to construct Boolean networks with prescribed limit cycles and fixed points. Examples are provided to illustrate the algorithm.
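The matrix construction described above is not reproduced here; the sketch below (the three-node update rules are an illustrative assumption) recovers the same dynamical information, fixed points and limit cycles, by brute force over the 2^n state space of a small network.

```python
# Brute-force sketch (not the paper's matrix method): enumerate all 2^n
# states of a small Boolean network and read fixed points and limit cycles
# off the state-transition map.
from itertools import product

# Illustrative 3-node network (rules are an assumption, not from the paper):
# x1' = x2 AND x3,  x2' = x1 OR x3,  x3' = x1
def step(state):
    x1, x2, x3 = state
    return (x2 & x3, x1 | x3, x1)

states = list(product((0, 1), repeat=3))
nxt = {s: step(s) for s in states}

fixed_points = [s for s in states if nxt[s] == s]

# In a finite deterministic system every trajectory ends on a cycle:
# follow each state until a revisit, then record the cycle (fixed points
# reappear here as cycles of length 1, so we keep only longer ones).
cycles = set()
for s in states:
    seen = {}
    while s not in seen:
        seen[s] = len(seen)
        s = nxt[s]
    cycle = list(seen)[seen[s]:]
    i = cycle.index(min(cycle))                 # canonical rotation
    cycles.add(tuple(cycle[i:] + cycle[:i]))

print("fixed points:", fixed_points)
print("limit cycles:", [c for c in cycles if len(c) > 1])
```

For this rule set the network has two fixed points, (0,0,0) and (1,1,1), and one 2-cycle between (0,1,1) and (1,1,0); the matrix method of the paper recovers such information without enumerating all 2^n states.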
1908.08464
Niv DeMalach
Man Qi, Niv DeMalach, Tao Sun, Hailin Zhang
Coexistence under hierarchical resource exploitation: the role of R*-preemption tradeoff
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resource competition theory predicts coexistence and exclusion patterns based on species' R*s, the minimum resource values required for a species to persist. A central assumption of the theory is that all species have equal access to resources. However, many systems are characterized by preemptive exploitation, where some species deplete resources before their competitors can access them (e.g., asymmetric light competition, contest competition among animals). We hypothesize that coexistence under preemption requires an R*-preemption tradeoff, i.e., the species with priority access should have a higher R* (lower efficiency). Thus, we developed an extension of resource competition theory to investigate partial and total preemption (in the latter, the preemptor is unaffected by species of lower preemption rank). We found that an R*-preemption tradeoff is a necessary condition for coexistence in all models. Moreover, under total preemption, the tradeoff alone is sufficient for coexistence. In contrast, under partial preemption, more conditions are needed, which restricts the parameter space of coexistence. Finally, we discussed the implications of our findings for seemingly distinct tradeoffs, which we view as special cases of the R*-preemption tradeoff. These include the digger-grazer and competition-colonization tradeoffs, and tradeoffs related to light competition between trees and understories.
[ { "created": "Thu, 22 Aug 2019 15:49:06 GMT", "version": "v1" }, { "created": "Thu, 20 May 2021 13:01:42 GMT", "version": "v2" }, { "created": "Tue, 16 Nov 2021 14:46:05 GMT", "version": "v3" } ]
2021-11-17
[ [ "Qi", "Man", "" ], [ "DeMalach", "Niv", "" ], [ "Sun", "Tao", "" ], [ "Zhang", "Hailin", "" ] ]
Resource competition theory predicts coexistence and exclusion patterns based on species' R*s, the minimum resource values required for a species to persist. A central assumption of the theory is that all species have equal access to resources. However, many systems are characterized by preemptive exploitation, where some species deplete resources before their competitors can access them (e.g., asymmetric light competition, contest competition among animals). We hypothesize that coexistence under preemption requires an R*-preemption tradeoff, i.e., the species with priority access should have a higher R* (lower efficiency). Thus, we developed an extension of resource competition theory to investigate partial and total preemption (in the latter, the preemptor is unaffected by species of lower preemption rank). We found that an R*-preemption tradeoff is a necessary condition for coexistence in all models. Moreover, under total preemption, the tradeoff alone is sufficient for coexistence. In contrast, under partial preemption, more conditions are needed, which restricts the parameter space of coexistence. Finally, we discussed the implications of our findings for seemingly distinct tradeoffs, which we view as special cases of the R*-preemption tradeoff. These include the digger-grazer and competition-colonization tradeoffs, and tradeoffs related to light competition between trees and understories.
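A minimal numerical sketch of the total-preemption case (an illustrative cascade chemostat, not the paper's exact equations): the preemptor feeds on the supplied pool and is unaffected by the subordinate, which feeds only on the leftover. All parameter values are assumptions, chosen so the preemptor has the higher R*.

```python
# Cascade-chemostat sketch of TOTAL preemption (illustrative, not the
# paper's model). Species 1 feeds on pool R1 and ignores species 2; the
# supply point of species 2's pool R2 is whatever species 1 leaves behind.
# With Monod uptake u_i(R) = r_i R/(k_i + R) and mortality m_i, species i
# alone depletes its pool to R*_i = m_i k_i / (r_i - m_i). The parameters
# below give the preemptor the HIGHER R* (R*_1 ~ 1.71 > R*_2 ~ 0.43),
# the tradeoff identified as necessary for coexistence.
import numpy as np

a, S = 1.0, 10.0                       # resource turnover rate, external supply
r = np.array([1.0, 1.0])               # maximum uptake rates
k = np.array([4.0, 1.0])               # half-saturation constants
m = np.array([0.3, 0.3])               # mortalities

def monod(R, i):
    return r[i] * R / (k[i] + R)

R1, R2 = S, S
N = np.array([0.1, 0.1])
dt = 0.01
for _ in range(300_000):               # integrate to t = 3000 (well past transients)
    dR1 = a * (S - R1) - monod(R1, 0) * N[0]
    dR2 = a * (R1 - R2) - monod(R2, 1) * N[1]   # sp. 2's supply is sp. 1's leftover
    dN = N * np.array([monod(R1, 0) - m[0], monod(R2, 1) - m[1]])
    R1, R2 = R1 + dt * dR1, R2 + dt * dR2
    N = np.maximum(N + dt * dN, 0.0)

print(f"R1* = {R1:.2f}  R2* = {R2:.2f}  densities = {np.round(N, 2)}")
```

Both densities settle at positive values, i.e., coexistence. Swapping the half-saturation constants, so the preemptor also has the lower R*, drives the subordinate extinct: in this sketch the tradeoff is what permits coexistence.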
1202.2219
Liang Wu
Liang Wu, Chengcheng Ji, Sishuo Wang, Jianhao Lv
The advantages of the pentameral symmetry of the starfish
17 pages, 10 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Starfish typically show pentameral symmetry and are similar in shape to a pentagram. Although starfish can evolve and live with other numbers of arms, the dominant species always show pentameral symmetry. We used mathematical and physical methods to analyze the performance of starfish with five arms in comparison with those with different numbers of arms with respect to detection, turning over, autotomy, and adherence. In this study, we determined that starfish with five arms, although slightly inferior to others in one or two aspects, exhibit the best performance when the four aforementioned factors are considered together. In addition, five-armed starfish perform best on autotomy, which is crucially important for starfish survival. This superiority contributes to the dominance of five-armed starfish in evolution, consistent with what is observed in nature. Nevertheless, some flexibility remains in the number and conformation of arms. The analyses performed in our research should help unravel why particular shapes and structures become dominant.
[ { "created": "Fri, 10 Feb 2012 09:34:31 GMT", "version": "v1" }, { "created": "Tue, 21 Feb 2012 04:54:48 GMT", "version": "v2" } ]
2012-02-22
[ [ "Wu", "Liang", "" ], [ "Ji", "Chengcheng", "" ], [ "Wang", "Sishuo", "" ], [ "Lv", "Jianhao", "" ] ]
Starfish typically show pentameral symmetry and are similar in shape to a pentagram. Although starfish can evolve and live with other numbers of arms, the dominant species always show pentameral symmetry. We used mathematical and physical methods to analyze the performance of starfish with five arms in comparison with those with different numbers of arms with respect to detection, turning over, autotomy, and adherence. In this study, we determined that starfish with five arms, although slightly inferior to others in one or two aspects, exhibit the best performance when the four aforementioned factors are considered together. In addition, five-armed starfish perform best on autotomy, which is crucially important for starfish survival. This superiority contributes to the dominance of five-armed starfish in evolution, consistent with what is observed in nature. Nevertheless, some flexibility remains in the number and conformation of arms. The analyses performed in our research should help unravel why particular shapes and structures become dominant.
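The paper's physical models are not given in the abstract, so the sketch below is purely illustrative: placeholder scoring functions (assumptions, not the paper's derivations) for the four factors, combined into a single composite score per arm number to show the multi-criterion aggregation being described.

```python
# Purely illustrative sketch (placeholder scores, NOT the paper's physics):
# rank arm numbers by a composite of the four criteria weighed in the paper.
# Each score below is a made-up monotone or peaked stand-in; only the
# aggregation pattern -- normalize per criterion, then average -- is the point.
import numpy as np

arms = np.arange(3, 9)
scores = {
    "detection":    arms / arms.max(),              # assume more arms scan more
    "turning_over": 1.0 / arms,                     # assume fewer arms flip faster
    "autotomy":     np.exp(-((arms - 5) ** 2) / 2), # assumed peak near five arms
    "adherence":    arms / (arms + 2.0),            # assumed diminishing returns
}
composite = np.mean([v / v.max() for v in scores.values()], axis=0)
for n, c in zip(arms, composite):
    print(f"{n} arms: composite score {c:.2f}")
```

With these placeholder shapes, five arms tops the composite despite not winning every single criterion, which is the structure of the argument: best overall, not best everywhere.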
1810.12397
Thomas Shultz
Daniel R. Shultz, Marcel Montrey, Thomas R. Shultz
Comparing fitness and drift explanations of Neanderthal replacement
28 pages, 9 figures
Proceedings of the Royal Society B 286: 20190907 (2019) 1-8
10.1098/rspb.2019.0907
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a general consensus among archaeologists that the replacement of Neanderthals by anatomically modern humans in Europe occurred around 40K to 35K YBP. However, the causal mechanism for this replacement continues to be debated. Searching for specific fitness advantages in the archaeological record has proven difficult, as these may be obscured, absent, or subject to interpretation. Proposed models have therefore either featured fitness advantages in favor of anatomically modern humans or invoked neutral drift under various preconditions. To bridge this gap, we rigorously compare the system-level properties of fitness- and drift-based explanations of Neanderthal replacement. Our stochastic simulations and analytical predictions show that, although both fitness and drift can produce fixation, they present important differences in 1) required initial conditions, 2) reliability, 3) time to replacement, and 4) path to replacement (population histories). These results present useful opportunities for comparison with archaeological and genetic data. We find far greater agreement between the available empirical evidence and the system-level properties of replacement by differential fitness rather than by neutral drift.
[ { "created": "Mon, 29 Oct 2018 20:40:07 GMT", "version": "v1" } ]
2021-07-01
[ [ "Shultz", "Daniel R.", "" ], [ "Montrey", "Marcel", "" ], [ "Shultz", "Thomas R.", "" ] ]
There is a general consensus among archaeologists that the replacement of Neanderthals by anatomically modern humans in Europe occurred around 40K to 35K YBP. However, the causal mechanism for this replacement continues to be debated. Searching for specific fitness advantages in the archaeological record has proven difficult, as these may be obscured, absent, or subject to interpretation. Proposed models have therefore either featured fitness advantages in favor of anatomically modern humans or invoked neutral drift under various preconditions. To bridge this gap, we rigorously compare the system-level properties of fitness- and drift-based explanations of Neanderthal replacement. Our stochastic simulations and analytical predictions show that, although both fitness and drift can produce fixation, they present important differences in 1) required initial conditions, 2) reliability, 3) time to replacement, and 4) path to replacement (population histories). These results present useful opportunities for comparison with archaeological and genetic data. We find far greater agreement between the available empirical evidence and the system-level properties of replacement by differential fitness rather than by neutral drift.
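A minimal sketch of the kind of stochastic simulation described (a standard Wright-Fisher model, not necessarily the paper's exact implementation): the modern-human fraction p evolves under resampling drift with an optional selection coefficient s, and we record fixation frequency and fixation time, two of the system-level properties compared above.

```python
# Wright-Fisher sketch (illustrative, not the paper's model): compare
# neutral drift (s = 0) with a small fitness advantage (s > 0) for the
# modern-human fraction p in a mixed population of size N.
import numpy as np

rng = np.random.default_rng(1)

def run(N=1000, p0=0.1, s=0.0, max_gen=100_000):
    p = p0
    for gen in range(1, max_gen + 1):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
        p = rng.binomial(N, w) / N                  # drift step (resampling)
        if p in (0.0, 1.0):                         # absorption: loss or fixation
            return p == 1.0, gen
    return None, max_gen

for s in (0.0, 0.02):
    results = [run(s=s) for _ in range(200)]
    times = [g for ok, g in results if ok]          # fixation times only
    rate = len(times) / len(results)
    mean_t = np.mean(times) if times else float("nan")
    print(f"s={s:.2f}: fixation rate {rate:.2f}, mean fixation time {mean_t:.0f} gen")
```

Even a tiny advantage (s = 0.02 here) moves fixation from rare and slow, roughly p0 of the neutral runs, to near-certain and fast, which is the kind of system-level contrast the paper exploits for comparison with the empirical record.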
2111.15184
Durham Smith
Durham Smith and Grigory Tikhomirov
small: A Programmatic Nanostructure Design and Modelling Environment
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Structural DNA nanotechnology has advanced to the extent that extremely complex structures can be designed. Much of this advancement has been due to the development of automated DNA design and simulation tools. Typically, these tools (e.g., NUPACK, cadnano, oxDNA) are created for a specific task. Ideally, there would be an environment that can integrate all such DNA tools with one another and with non-DNA tools - for example, for modelling the electromagnetic field along a zero-mode waveguide made of gold nanoparticles organized on a DNA breadboard. Such an environment would streamline design in DNA nanotechnology and enable the application of DNA nanotechnology principles to the construction of high-performance materials and devices from non-DNA components. Here we present small, a programmatic tool that is a step towards building such an environment for designing arbitrary nanostructures. In particular, we showcase how small has been used to create an integrated computational materials engineering (ICME) framework for DNA nanotechnology, allowing the hierarchical design, simulation, and visualization of arbitrary DNA nanostructures. Furthermore, we demonstrate the design and modelling of the mode profiles and band structure of hybrid DNA-nanoparticle materials through the integration of small with Maxwell solvers.
[ { "created": "Tue, 30 Nov 2021 07:49:08 GMT", "version": "v1" } ]
2021-12-01
[ [ "Smith", "Durham", "" ], [ "Tikhomirov", "Grigory", "" ] ]
Structural DNA nanotechnology has advanced to the extent that extremely complex structures can be designed. Much of this advancement has been due to the development of automated DNA design and simulation tools. Typically, these tools (e.g., NUPACK, cadnano, oxDNA) are created for a specific task. Ideally, there would be an environment that can integrate all such DNA tools with one another and with non-DNA tools - for example, for modelling the electromagnetic field along a zero-mode waveguide made of gold nanoparticles organized on a DNA breadboard. Such an environment would streamline design in DNA nanotechnology and enable the application of DNA nanotechnology principles to the construction of high-performance materials and devices from non-DNA components. Here we present small, a programmatic tool that is a step towards building such an environment for designing arbitrary nanostructures. In particular, we showcase how small has been used to create an integrated computational materials engineering (ICME) framework for DNA nanotechnology, allowing the hierarchical design, simulation, and visualization of arbitrary DNA nanostructures. Furthermore, we demonstrate the design and modelling of the mode profiles and band structure of hybrid DNA-nanoparticle materials through the integration of small with Maxwell solvers.
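small's actual API is not reproduced here; the sketch below only illustrates the hierarchical-design idea in generic terms (all names and structures are invented for illustration): components carry local geometry, assemblies compose nodes via rigid transforms, and a design tree flattens to world coordinates for export or simulation.

```python
# Generic sketch of hierarchical, programmatic nanostructure design. This
# is NOT small's API -- names and structure here are invented for
# illustration. A design is a tree: leaves hold local geometry, internal
# nodes place children with rigid transforms (rotation R, translation t).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Component:
    name: str
    points: np.ndarray                               # (n, 3) local coordinates

@dataclass
class Assembly:
    name: str
    children: list = field(default_factory=list)     # (node, R, t) triples

    def add(self, node, R=np.eye(3), t=np.zeros(3)):
        self.children.append((node, R, t))

    def flatten(self, R=np.eye(3), t=np.zeros(3)):
        """Yield (name, world_points) for every leaf component in the tree."""
        for node, Rc, tc in self.children:
            Rw, tw = R @ Rc, R @ tc + t               # compose transforms
            if isinstance(node, Component):
                yield node.name, node.points @ Rw.T + tw
            else:
                yield from node.flatten(Rw, tw)

# A 2x2 grid built by reusing one tile, each copy placed by a translation:
tile = Component("tile", np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]))
grid = Assembly("grid")
for i in range(2):
    for j in range(2):
        grid.add(tile, t=np.array([20.0 * i, 20.0 * j, 0.0]))

for name, pts in grid.flatten():
    print(name, pts.tolist())
```

The flattened world coordinates are the natural hand-off point to downstream tools, e.g., a coarse-grained DNA simulator or, as in the paper's hybrid examples, a Maxwell solver.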